Linear algebra
Vectors
Vectors in two- and three-dimensional space are defined as members of the sets $\mathbb{R}^2$ and $\mathbb{R}^3$ respectively.
The vector $\overrightarrow{OP}$ is the directed line segment starting at the origin and ending at the point $P$. Vectors with the same length and direction are equivalent.
The scalar or dot product of two vectors is $\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos{\theta}$, which rearranges to $\cos{\theta} = \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}||\mathbf{b}|}$. Two nonzero vectors are perpendicular if and only if their dot product is 0.
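As a quick worked example (the vectors here are chosen for illustration), take $\mathbf{a} = (1,0)$ and $\mathbf{b} = (1,1)$:
$$\cos{\theta} = \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}||\mathbf{b}|} = \frac{1 \times 1 + 0 \times 1}{1 \times \sqrt{2}} = \frac{1}{\sqrt{2}}, \qquad \theta = \frac{\pi}{4}.$$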
Vectors in $n$ dimensions
An $n$-dimensional space $\mathbb{R}^n$ exists for any positive integer $n$. Addition, scalar multiplication, modulus and the dot product all work as in two and three dimensions, componentwise.
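Explicitly, for $u = (u_1, u_2, ..., u_n)$ and $v = (v_1, v_2, ..., v_n)$ in $\mathbb{R}^n$ and $\lambda \in \mathbb{R}$:
$$ \begin{aligned} u + v &= (u_1 + v_1, u_2 + v_2, ..., u_n + v_n)\\ \lambda u &= (\lambda u_1, \lambda u_2, ..., \lambda u_n)\\ |u| &= \sqrt{u_1^2 + u_2^2 + ... + u_n^2}\\ u \cdot v &= u_1 v_1 + u_2 v_2 + ... + u_n v_n \end{aligned} $$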
Linear combinations and subspaces
Linear combinations
Given two vectors $u$ and $v$ and two scalars $a$ and $b$, a vector of the form $au+bv$ is a linear combination of $u$ and $v$. Take $(5,3)$: this can be written as $5(1,0) + 3(0,1)$ or as $1(2,0) + 3(1,1)$, so $(5,3)$ is a linear combination of either pair of vectors.
To express (6,6) as a linear combination of (0,3) and (2,1), let $(6,6) = \alpha(0,3) + \beta(2,1)$. Expanding this gives $(6,6) = (2 \beta, 3 \alpha + \beta)$. Setting $2\beta =6$ and $3 \alpha + \beta = 6$ and solving gives $\alpha = 1$ and $\beta = 3$. Therefore $(6,6) = 1(0,3)+3(2,1)$.
Linear combinations also exist for $n$-dimensional vectors, and a vector may be expressible as a linear combination of a given set in more than one way. To express $(3,0)$ as a linear combination of $(1,1)$, $(1,0)$ and $(1,-1)$, let $(3,0) = \alpha(1,1) + \beta(1,0) + \gamma(1,-1)$, which gives $(3,0) = (\alpha + \beta + \gamma, \alpha - \gamma)$. Solving gives $\alpha = \gamma$ and $\beta = 3 - 2\gamma$. Any value can then be chosen for $\gamma$ to give a different linear combination; for example, $\gamma = 0$ gives $\alpha = 0$ and $\beta = 3$.
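Choosing $\gamma = 1$ instead gives $\alpha = 1$ and $\beta = 1$, so the same vector has a second expression:
$$(3,0) = 1(1,1) + 1(1,0) + 1(1,-1).$$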
Given non-parallel vectors $u$ and $v$, the vector $\alpha u + \beta v$ is the diagonal of the parallelogram with adjacent sides $\alpha u$ and $\beta v$.
Let $u = (1,0,3)$, $v = (0,2,0)$ and $w = (0,3,1)$.
The linear combination
$2u+3v+4w = 2(1,0,3) + 3(0,2,0) + 4(0,3,1) = (2,18,10)$.
It is not possible to express $(1,5,4)$ as a linear combination of $u$ and $v$ alone, because the resulting equations are inconsistent. (Including $w$ removes the obstruction: in fact $(1,5,4) = u + v + w$.)
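Writing the attempt out shows the inconsistency:
$$(1,5,4) = \alpha(1,0,3) + \beta(0,2,0) = (\alpha, 2\beta, 3\alpha),$$
which requires $\alpha = 1$ from the first component but $3\alpha = 4$ from the third, a contradiction.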
Span
If $U = \{u_1, u_2, ..., u_m\}$ is a finite set of vectors in $\mathbb{R}^n$ then the span of $U$ is the set of all linear combinations of $u_1, u_2, ..., u_m$.
If $U = \{u\}$ then span
$\{u\} = \{\alpha u\:|\:\alpha \in \mathbb{R}\}$.
If $U = \{(1,0),(0,1)\}$ then the span of $U$ is $\mathbb{R}^2$, since any vector $(x,y) = x(1,0) + y(0,1)$.
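Not every set spans the whole space. For example, the span of $\{(1,0), (2,0)\}$ is only the $x$-axis $\{(x,0)\:|\:x \in \mathbb{R}\}$, because every linear combination
$$\alpha(1,0) + \beta(2,0) = (\alpha + 2\beta, 0)$$
has second component 0.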
Subspace
A subspace of $\mathbb{R}^n$ is a nonempty subset $S$ of $\mathbb{R}^n$ with two properties. The first is closure under addition, which means $u,v \in S \rightarrow u+v \in S$ and the second is closure under scalar multiplication which means $u \in S, \lambda \in \mathbb{R} \rightarrow \lambda u \in S$.
If $S$ is a subspace and $u_1, u_2, ..., u_m \in S$ then any linear combination of $u_1, u_2, ..., u_m$ also belongs to $S$. Two simple subspaces of $\mathbb{R}^n$ are the set containing only the zero vector and $\mathbb{R}^n$ itself.
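As an illustration (the set here is chosen for the example), let $S = \{(x, 2x)\:|\:x \in \mathbb{R}\} \subseteq \mathbb{R}^2$. Then
$$(x, 2x) + (y, 2y) = (x + y, 2(x + y)) \in S, \qquad \lambda(x, 2x) = (\lambda x, 2(\lambda x)) \in S,$$
so $S$ is closed under both operations and is a subspace.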
Linear Independence
A set of vectors $\{u_1, u_2, ..., u_m\}$ in $\mathbb{R}^n$ is linearly dependent if there are numbers $\alpha_1, \alpha_2, ..., \alpha_m \in \mathbb{R}$, not all zero, such that $\alpha_1 u_1 + \alpha_2 u_2 + ... + \alpha_m u_m = 0$.
A set of vectors is linearly independent if it is not linearly dependent; that is, $\alpha_1 u_1 + \alpha_2 u_2 + ... + \alpha_m u_m = 0$ only when $\alpha_i = 0$ for all $i$.
$\{(1,2,3),(1,-1,-1),(5,1,3)\}$ is linearly dependent. Let $\alpha(1,2,3) + \beta(1,-1,-1) + \gamma(5,1,3) = 0$. This gives $\alpha + \beta + 5\gamma = 0$, $2\alpha - \beta + \gamma = 0$ and $3\alpha - \beta + 3\gamma = 0$. Solving these gives $\alpha = -2\gamma$ and $\beta = -3\gamma$. Choosing $\gamma \neq 0$ makes $\alpha$ and $\beta$ nonzero as well, so a nontrivial combination equals the zero vector and the set is linearly dependent.
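Checking with $\gamma = 1$, so $\alpha = -2$ and $\beta = -3$:
$$-2(1,2,3) - 3(1,-1,-1) + 1(5,1,3) = (-2 - 3 + 5,\, -4 + 3 + 1,\, -6 + 3 + 3) = (0,0,0).$$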
Basis
Given a subspace $S$ of $\mathbb{R}^n$, a set of vectors is called a basis of $S$ if it is a linearly independent set which spans $S$. The set $\{(1,0,0), (0,1,0),(0,0,1)\}$ is a basis for $\mathbb{R}^3$.
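Bases are not unique. For example, $\{(1,1), (1,-1)\}$ is also a basis for $\mathbb{R}^2$: it is linearly independent, since $a(1,1) + b(1,-1) = 0$ forces $a + b = 0$ and $a - b = 0$, hence $a = b = 0$, and it spans $\mathbb{R}^2$, since
$$(x,y) = \frac{x+y}{2}(1,1) + \frac{x-y}{2}(1,-1).$$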
Standard basis
In $\mathbb{R}^n$, the standard basis is the set
$\{e_1, e_2, ..., e_n\}$ where $e_r$ is the vector with $r$th component
1 and all other components 0.
The standard basis for $\mathbb{R}^5$ is $\{e_1, e_2, e_3, e_4, e_5\}$
where
$$ \begin{aligned} e_1 &= (1,0,0,0,0)\\ e_2 &= (0,1,0,0,0)\\ e_3 &= (0,0,1,0,0)\\ e_4 &= (0,0,0,1,0)\\ e_5 &= (0,0,0,0,1) \end{aligned} $$
If the set $\{v_1, v_2, ..., v_m\}$ spans $S$, a subspace of $\mathbb{R}^n$, then any linearly independent subset of $S$ contains at most $m$ vectors.
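For instance, since the standard basis $\{e_1, e_2, e_3\}$ spans $\mathbb{R}^3$ with $m = 3$ vectors, any set of four or more vectors in $\mathbb{R}^3$ must be linearly dependent.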
Dimension
The dimension of a subspace of $\mathbb{R}^n$ is the number of vectors in a basis for the subspace. Since the standard basis for $\mathbb{R}^n$ contains $n$ vectors it follows that $\mathbb{R}^n$ has dimension $n$.
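For example, the subspace $S = \text{span}\{(1,0,0), (0,1,0)\}$ of $\mathbb{R}^3$ (the $xy$-plane) has the two-element basis $\{(1,0,0), (0,1,0)\}$, so $S$ has dimension 2.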
Coordinates
Let $V = \{v_1, v_2, ..., v_n\}$ be a basis for $\mathbb{R}^n$. Every $x \in \mathbb{R}^n$ has a unique expansion as a linear combination $x = \alpha_1 v_1 + \alpha_2 v_2 + ... + \alpha_n v_n$ of these basis vectors. The coefficients $\alpha_1, \alpha_2, ..., \alpha_n$ are the coordinates of $x$ with respect to the basis $V$.
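As a worked example (the basis here is chosen for illustration), take $V = \{(1,1), (1,-1)\}$ and $x = (5,3)$. Setting
$$(5,3) = \alpha_1(1,1) + \alpha_2(1,-1)$$
gives $\alpha_1 + \alpha_2 = 5$ and $\alpha_1 - \alpha_2 = 3$, so $\alpha_1 = 4$ and $\alpha_2 = 1$: the coordinates of $(5,3)$ with respect to $V$ are $[4, 1]$.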
Image of basis vectors and transformation of coordinates
Let $T: \mathbb{R}^m \rightarrow \mathbb{R}^n$ be a linear transformation and let $M$ be the matrix of $T$ with respect to bases $V$ of $\mathbb{R}^m$ and $W$ of $\mathbb{R}^n$. Suppose $V = \{v_1, v_2, ..., v_m\}$. The coordinates of the image $T(v_i)$ with respect to $W$ form the $i$th column of $M$.
If $x \in \mathbb{R}^m$ has coordinates $[x_1, x_2, ..., x_m]$ with respect to $V$ it follows that $T(x) \in \mathbb{R}^n$ has coordinates $[y_1, y_2, ..., y_n]$ with respect to $W$ where $$ \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = M \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} $$ The column on the left holds the coordinates with respect to $W$ and the column on the right holds the coordinates with respect to $V$.
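A small example (the transformation here is chosen for illustration): let $T: \mathbb{R}^2 \rightarrow \mathbb{R}^3$ be $T(x,y) = (x + y, x - y, 2x)$, with the standard bases on both sides. Then $T(e_1) = (1,1,2)$ and $T(e_2) = (1,-1,0)$, so
$$ M = \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 2 & 0 \end{bmatrix}, \qquad M \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x + y \\ x - y \\ 2x \end{bmatrix}. $$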
Transition matrix
Given two bases $V$ and $W$ for $\mathbb{R}^n$, any given vector will have different coordinates with respect to each basis. The change in coordinates can be described by the transition matrix from $V$ to $W$.
Let $V = \{v_1, v_2, ..., v_n\}$ and $W = \{w_1, w_2, ..., w_n\}$ be two bases for $\mathbb{R}^n$. Every $v_j \in V$ can be expressed as a linear combination of the vectors in $W$, $v_j = \alpha_{1j}w_1 + \alpha_{2j}w_2+...+\alpha_{nj}w_n$.
The transition matrix from $V$ to $W$ is the matrix $M = (\alpha_{ij})$, whose $j$th column contains the coefficients $\alpha_{1j}, \alpha_{2j}, ..., \alpha_{nj}$ expressing $v_j$ in terms of $W$.
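Continuing the earlier example (the bases here are chosen for illustration), let $V = \{(1,1), (1,-1)\}$ and let $W = \{e_1, e_2\}$ be the standard basis of $\mathbb{R}^2$. Then $v_1 = 1e_1 + 1e_2$ and $v_2 = 1e_1 - 1e_2$, so the transition matrix from $V$ to $W$ is
$$ M = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}. $$
Applying $M$ to the $V$-coordinates $[4, 1]$ of $(5,3)$ recovers its standard coordinates: $M \begin{bmatrix} 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 3 \end{bmatrix}$.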