
Linear Transformations

Definition

Let \(V,W\) be vector spaces over \(F\). A map \(T:V\rightarrow W\) is linear if:

  1. \(\forall x,y\in V,\ T(x+y)=T(x)+T(y)\)
  2. \(\forall x\in V,\ \forall c\in F,\ T(cx)=cT(x)\)
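
As a quick numerical illustration (a sketch of mine, not from the notes; the matrix \(A\) is arbitrary), a matrix map \(x\mapsto Ax\) satisfies both axioms:

```python
import numpy as np

# A sample linear map T: R^3 -> R^2 given by a matrix A, so T(x) = A x.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])
T = lambda x: A @ x

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

assert np.allclose(T(x + y), T(x) + T(y))  # axiom 1: additivity
assert np.allclose(T(c * x), c * T(x))     # axiom 2: homogeneity
```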

Definition

Null space (aka kernel) of \(T\): \(N(T)=\{x\in V \mid T(x)=0\}\)

Range (aka image) of \(T\): \(R(T)=\{T(x)\mid x\in V\}\)

Check: \(N(T)\le V\) and \(R(T)\le W\), i.e., the null space is a subspace of \(V\) and the range is a subspace of \(W\)

\(\text{Nullity}(T)=\text{dim}(N(T))\)

\(\text{Rank}(T)=\text{dim}(R(T))\)

Theorem (Rank-Nullity Theorem)

If \(V\) is finite-dimensional, then \(\text{Nullity}(T)+\text{Rank}(T)=\text{dim}(V)\)

Proof (sketch).

Extend a basis \(\{u_1,\dots,u_k\}\) of \(N(T)\) to a basis \(\{u_1,\dots,u_k,v_1,\dots,v_m\}\) of \(V\). Then \(\{T(v_1),\dots,T(v_m)\}\) is a basis of \(R(T)\), so \(\text{Nullity}(T)+\text{Rank}(T)=k+m=\text{dim}(V)\)

Example of the Theorem
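
As a concrete instance (my own example, not from the notes), take \(T(x)=Ax\) on \(\mathbb{R}^3\) with a rank-deficient \(A\); rank plus nullity recovers \(\text{dim}(V)=3\):

```python
import numpy as np

# T: R^3 -> R^3 with a dependent column (third column = first + second).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

n = A.shape[1]                   # dim(V)
rank = np.linalg.matrix_rank(A)  # Rank(T) = dim(R(T)) = 2
nullity = n - rank               # Nullity(T) = dim(N(T)) = 1

assert rank + nullity == n       # Rank-Nullity: 2 + 1 = 3
```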

\(T\) is one-to-one \(\iff\) \(N(T)=\{0\}\)

Theorem (one-to-one iff onto when dimensions are equal)

Let \(\text{dim}(V)=\text{dim}(W)\) be finite and let \(T:V\rightarrow W\) be a linear map.

TFAE:

  1. \(T\) is one-to-one
  2. \(T\) is onto
  3. \(\text{rank}(T)=\text{dim}(W)\)

Proof.

\(T\) is one-to-one \(\iff N(T)=\{0\}\iff \text{Nullity}(T)=0\iff \text{rank}(T)=\text{dim}(V)=\text{dim}(W)\)

\(R(T)\le W\) with \(\text{dim}(R(T))=\text{dim}(W)\iff R(T)=W\), i.e., \(T\) is onto

(Recall: \(T\) is onto iff \(\forall w\in W, \exists v\in V, T(v)=w\))
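
A numerical sketch of the equivalence (the matrix below is an arbitrary full-rank choice of mine): for a square matrix, full rank means one-to-one and onto, i.e., the matrix is invertible:

```python
import numpy as np

# An arbitrary full-rank matrix map T: R^3 -> R^3.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]

# rank(T) = dim(W), so T is onto, and by rank-nullity also one-to-one.
assert np.linalg.matrix_rank(A) == n
# Equivalently, A is invertible:
assert np.allclose(A @ np.linalg.inv(A), np.eye(n))
```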

Example

\(T:P_2\rightarrow P_3\)

\(T(f(x))=2f'(x)+\int_3^x f(t)\,dt\)

Check if this is one-to-one:

\(T\) is one-to-one \(\impliedby N(T)=\{0\}\), since \(T\) is linear

\(N(T)=\{0\}\impliedby \text{Nullity}(T)=0\impliedby \text{Rank}(T)=\text{dim}(P_2)=3\), by rank-nullity

\(R(T)=\text{span}(\{T(1),T(x),T(x^2)\})\), since \(\{1,x,x^2\}\) is a basis of \(P_2\)

\(\text{Rank}(T)=3\impliedby \{T(1),T(x),T(x^2)\}\) is a linearly independent set

Indeed, \(T(1)=x-3\), \(T(x)=\frac{x^2}{2}-\frac{5}{2}\), \(T(x^2)=\frac{x^3}{3}+4x-9\) have distinct degrees, so they are linearly independent and \(T\) is one-to-one
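
The images can also be checked symbolically; here is a sketch with sympy (the setup is mine) confirming the computation and \(\text{Rank}(T)=3\):

```python
import sympy as sp

x, t = sp.symbols('x t')

def T(f):
    # T(f)(x) = 2 f'(x) + integral from 3 to x of f(t) dt
    return sp.expand(2 * sp.diff(f, x) + sp.integrate(f.subs(x, t), (t, 3, x)))

images = [T(sp.Integer(1)), T(x), T(x**2)]
print(images)  # [x - 3, x**2/2 - 5/2, x**3/3 + 4*x - 9]

# Coefficients of each image in the basis {1, x, x^2, x^3} of P_3:
M = sp.Matrix([[img.coeff(x, k) for k in range(4)] for img in images])
assert M.rank() == 3  # linearly independent, so Rank(T) = 3 and T is one-to-one
```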

Lemma

Let \(B=\{v_1,\dots,v_n\}\) be a basis for \(V\). Then a generating set for \(R(T)\) is \(\{T(v_1),\dots,T(v_n)\}\)

Proof.

\(y\in R(T)\implies y=T(x)\) for some \(x\in V\)

Thus, \(x=\sum a_iv_i\), \(T(x)=\sum a_i T(v_i)\)

\(\implies R(T)=\text{span}(\{T(v_1),\dots,T(v_n)\})\)
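
In coordinates (an illustrative sketch of mine): if \(T(x)=Ax\) and the basis vectors \(v_i\) are the columns of an invertible matrix \(B\), the images \(T(v_i)\) are the columns of \(AB\), whose span has the same dimension as \(R(T)\):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])   # T: R^3 -> R^2, T(x) = A x
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # columns form a (non-standard) basis of R^3

images = A @ B                    # column j is T(v_j)

# span{T(v_j)} = R(T): both have dimension rank(A).
assert np.linalg.matrix_rank(images) == np.linalg.matrix_rank(A)
```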

Theorem (Linear maps are determined by their values on a basis)

Let \(\{v_1,...,v_n\}\) be a basis of \(V\) and choose \(w_1,...,w_n\in W\). Then \(\exists\) a unique linear transformation \(T:V\rightarrow W\) s.t. \(T(v_i)=w_i\)

Proof.

Let \(x\in V\); since \(\{v_i\}\) is a basis, \(x=\sum_{i=1}^n a_iv_i\) uniquely. Define \(T(x)=\sum_{i=1}^n a_iw_i\)

  1. Check \(T\) is linear
  2. \(T(v_i)=w_i\)
  3. Uniqueness: if \(U:V\rightarrow W\) is linear s.t. \(U(v_i)=w_i\), then \(U(x)=U(\sum a_iv_i)=\sum a_i U(v_i)=\sum a_iw_i=T(x)\), so \(U=T\)
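
A coordinate sketch of the construction (my example, with \(v_j\) the standard basis of \(\mathbb{R}^3\), so the matrix of \(T\) simply has \(w_j\) as its \(j\)-th column):

```python
import numpy as np

# Chosen images w_1, w_2, w_3 in R^2 for the standard basis of R^3.
W = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 0.0]])  # column j is w_j

T = lambda x: W @ x              # the unique linear map with T(e_j) = w_j

for j in range(3):               # T(e_j) = w_j
    assert np.allclose(T(np.eye(3)[:, j]), W[:, j])

a = np.array([2.0, -1.0, 0.5])   # T(sum a_j e_j) = sum a_j w_j
assert np.allclose(T(a), sum(a[j] * W[:, j] for j in range(3)))
```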

Inverses

\(F:X\rightarrow Y\); an inverse of \(F\) is a function \(G:Y\rightarrow X\) s.t. \(F\circ G=id_Y\) and \(G\circ F=id_X\)

If \(F\) is invertible, then \(F\) is one-to-one and onto

Theorem (Inverse of linear map gives inverse of a matrix)

Let \(V, W\) be finite-dimensional vector spaces over \(F\) with bases \(\beta, \gamma\), and let \(T:V\rightarrow W\) be linear. \(T\) is invertible \(\iff\) \([T]^\gamma_\beta\) is invertible, in which case \([T^{-1}]^\beta_\gamma=([T]^\gamma_\beta)^{-1}\)

Lemma

If \(T:V\rightarrow W\) is linear and invertible, then \(\text{dim}(V)=\text{dim}(W)\):

  • \(T\) is one-to-one \(\implies \text{Nullity}(T)=0\)
  • \(T\) is onto \(\implies R(T)=W\implies \text{Rank}(T)=\text{dim}(W)\)
  • By the Rank-Nullity Theorem, \(\text{dim}(V)=\text{Rank}(T)+\text{Nullity}(T)=\text{dim}(W)\)

Proof.

Suppose \(T\) is invertible; by the lemma, \(\text{dim}(V)=\text{dim}(W)=n\)

\(TT^{-1}=id_W\)

\([TT^{-1}]_\gamma=[T]^\gamma_\beta[T^{-1}]^\beta_\gamma\) and \([TT^{-1}]_\gamma=[id_W]_\gamma=I_n\), so \([T]^\gamma_\beta[T^{-1}]^\beta_\gamma=I_n\)

\([T^{-1}T]_\beta=[T^{-1}]^\beta_\gamma[T]^\gamma_\beta=[id_V]_\beta=I_n\)

\(\implies\) Inverse of \([T]^\gamma_\beta\) is \([T^{-1}]^\beta_\gamma\)

Theorem (Inverse of Linear Map has Matrix Inverse)

\(T\) is an invertible linear map iff \([T]^\gamma_\beta\) is invertible.

Furthermore \([T^{-1}]^\beta_\gamma=([T]^\gamma_\beta)^{-1}\)

Proof.

Forward:

Suppose \(T\) is invertible, claim \(\text{dim}(V)=\text{dim}(W)\)

\(T\) is one-to-one iff \(N(T)=\{0\}\)

\(T\) is onto iff \(\text{R}(T)=W\)

By the rank-nullity theorem, \(\text{dim}(V)=\text{Rank}(T)+\text{Nullity}(T)=\text{dim}(W)+0=\text{dim}(W)\)

\(\exists T^{-1}: W\rightarrow V\) s.t. \(T^{-1}\circ T=Id_V,T\circ T^{-1}=Id_W\)

\([T^{-1}\circ T]_\beta=[Id_V]_\beta,[T\circ T^{-1}]_\gamma=[Id_W]_\gamma\)

\(\implies [T]^\gamma_\beta\) is invertible and has inverse \([T^{-1}]^\gamma_\beta\)

Reverse

Suppose \([T]^\gamma_\beta=A\) is invertible

We need \(T^{-1}:W\rightarrow V\) linear with \([T^{-1}]^\beta_\gamma=([T]^\gamma_\beta)^{-1}\)

i.e., there exists a matrix \(B\in M_{n\times n}(F)\) s.t. \(AB=BA=I_n\)

Let \(B=[b_{ij}]\)

Define \(U:W\rightarrow V\) by \(U(w_j)=\sum^n_{i=1}b_{ij}v_i, \forall j=1,...,n\)

Such a linear map \(U\) exists by the previous theorem, with \([U]^\beta_\gamma=B\), so \([TU]_\gamma=[T]_\beta^\gamma[U]^\beta_\gamma=AB=I_n=[Id_W]_\gamma\), i.e., \(TU=Id_W\)

Similarly \([UT]_\beta=BA=I_n\), so \(UT=Id_V\); hence \(U=T^{-1}\) and \([T^{-1}]^\beta_\gamma=B=A^{-1}\)
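
Numerically (an example of mine, with both bases standard so the matrices above are literal matrices): the matrix of \(T^{-1}\) is the inverse matrix, and both products give \(I_n\):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # [T], invertible (det = 1)
B = np.linalg.inv(A)        # [T^{-1}]

# AB = BA = I_n, i.e. [T T^{-1}] = [Id_W] and [T^{-1} T] = [Id_V].
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
```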

Definition

We say \(V\) is isomorphic to \(W\), written \(V\cong W\), if \(\exists\) an invertible linear map \(T:V\rightarrow W\)

Theorem (Isomorphic iff same dimension)

Let \(V,W\) be finite-dimensional. Then \(V\cong W\iff \text{dim}(V)=\text{dim}(W)\)

Forward direction omitted (it follows from the lemma above)

Reverse

Suppose \(\text{dim}(V)=\text{dim}(W)=n\)

Fix basis \(\beta=\{v_1,...,v_n\}\) and \(\gamma=\{w_1,...,w_n\}\) of \(V\) and \(W\)

Need \(T:V\rightarrow W\) linear and invertible

By the theorem above, there exists a linear \(T:V\rightarrow W\) s.t. \(T(v_j)=w_j\)

\(R(T)=\text{span}(\{T(v_1),\dots,T(v_n)\})=\text{span}(\{w_1,\dots,w_n\})=W\), so \(T\) is onto

Then \(\text{Rank}(T)=n\), so by rank-nullity \(\text{Nullity}(T)=0\), i.e., \(N(T)=\{0\}\) and \(T\) is one-to-one

So \(T\) is invertible

Example

With \(\alpha=\{(1,2),(3,4)\}\) and \(\beta=\{(1,0),(1,1)\}\), \(T(a_1,a_2)=(a_1+2a_2,a_2-a_1)\)

\([T]_\alpha^\beta=\big[\,[T(1,2)]_\beta\ \ [T(3,4)]_\beta\,\big]=\begin{bmatrix}4&10\\1&1\end{bmatrix}\)

\(T(1,2)=(5,1)=a(1,0)+b(1,1)\implies (a,b)=(4,1)\); similarly \(T(3,4)=(11,1)\implies (a,b)=(10,1)\)
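
The same matrix can be recomputed numerically (a sketch: column \(j\) of \([T]_\alpha^\beta\) solves \(Bc=T(\alpha_j)\), where \(B\) holds the \(\beta\) vectors as columns):

```python
import numpy as np

T = lambda v: np.array([v[0] + 2*v[1], v[1] - v[0]])  # T(a1, a2)

alpha = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # columns: the beta vectors (1,0), (1,1)

# Column j of [T]^beta_alpha is the beta-coordinate vector of T(alpha_j).
cols = [np.linalg.solve(B, T(a)) for a in alpha]
print(np.column_stack(cols))  # [[ 4. 10.]
                              #  [ 1.  1.]]
```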

Theorem (Identity Expressed in Different Bases Changes Coordinates)

Let \(Q=[I_V]_\alpha^\beta\). Then:

  • \(Q\) is invertible
  • For any \(v\in V\), \([v]_\beta=Q[v]_\alpha=[I]^\beta_\alpha[v]_\alpha\)

Proof.

  • \(Q\) is invertible because it represents the invertible linear map \(I_V\)
  • \([v]_\beta=[I(v)]_\beta=[I]^\beta_\alpha[v]_\alpha\)

Example

Let \(\alpha=\{(1,0),(1,1)\}\), \(\beta=\{(1,2),(0,1)\}\). Take \(v\in \mathbb{R}^2\) with \([v]_\beta=(3,4)\)

\([I]_\beta^\alpha=\begin{bmatrix}-1&-1\\2&1\end{bmatrix}\)

\([v]_\alpha=[I]_\beta^\alpha[v]_\beta=\begin{bmatrix}-1&-1\\2&1\end{bmatrix}\begin{pmatrix}3\\4\end{pmatrix}=\begin{pmatrix}-7\\10\end{pmatrix}\)
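
The same computation numerically (a sketch; here \(Q=[I]_\beta^\alpha=A^{-1}B\), with the \(\alpha\) and \(\beta\) vectors as the columns of \(A\) and \(B\)):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # columns: the alpha basis (1,0), (1,1)
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])      # columns: the beta basis (1,2), (0,1)

v_beta = np.array([3.0, 4.0])
v = B @ v_beta                  # the actual vector v = 3*(1,2) + 4*(0,1) = (3, 10)

Q = np.linalg.solve(A, B)       # [I]^alpha_beta = A^{-1} B
print(Q @ v_beta)               # [-7. 10.]
assert np.allclose(A @ (Q @ v_beta), v)  # same vector, alpha coordinates
```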

Theorem (2.14)

\([T(v)]_\beta=[T]^\beta_\alpha[v]_\alpha\)

Proof.

\(v=\sum a_i\alpha_i, [v]_\alpha=(a_i), T(v)=\sum a_iT(\alpha_i)\)

\([T(v)]_\beta=\sum a_i[T(\alpha_i)]_\beta=[T]^\beta_\alpha[v]_\alpha\)
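
A numerical sketch of the theorem (my example: bases stored as columns of \(A\) and \(B\), and \(T\) given by an arbitrary matrix \(M\) in standard coordinates):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns: an alpha basis of R^2
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])   # columns: a beta basis of R^2
M = np.array([[3.0, -1.0],
              [1.0,  3.0]])  # T in standard coordinates: T(v) = M v

T_beta_alpha = np.linalg.solve(B, M @ A)  # [T]^beta_alpha = B^{-1} M A

v_alpha = np.array([2.0, -1.0])           # [v]_alpha for some v
v = A @ v_alpha
# [T(v)]_beta equals [T]^beta_alpha [v]_alpha:
assert np.allclose(np.linalg.solve(B, M @ v), T_beta_alpha @ v_alpha)
```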

Theorem (2.23)

Let \(T:V\rightarrow V\) be linear and \(\alpha,\beta\) be bases of \(V\). Suppose \(Q=[I]^\alpha_\beta\); then \([T]_\beta=Q^{-1}[T]_\alpha Q=[I]^\beta_\alpha [T]_\alpha [I]^\alpha_\beta\)

Proof.

\(Q[T]_\beta=[I]_\beta^\alpha[T]_\beta=[IT]_\beta^\alpha=[TI]_\beta^\alpha=[T]_\alpha[I]_\beta^\alpha=[T]_\alpha Q\implies [T]_\beta=Q^{-1}[T]_\alpha Q\)

Example

Let \(\alpha=\{(1,1),(1,-1)\}\), \(\beta=\{(2,4),(3,1)\}\), and \(T:\mathbb{R}^2\rightarrow \mathbb{R}^2,\ T(a,b)=(3a-b,a+3b)\)

\([T]_\alpha=\begin{bmatrix}3&1\\-1&3\end{bmatrix}\)

\([I]_\beta^\alpha=\begin{bmatrix}3&2\\-1&1\end{bmatrix}\)

\([T]_\beta=\begin{bmatrix}3&2\\-1&1\end{bmatrix}^{-1}\begin{bmatrix}3&1\\-1&3\end{bmatrix}\begin{bmatrix}3&2\\-1&1\end{bmatrix}=\begin{bmatrix}4&1\\-2&2\end{bmatrix}\)
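
Verifying this numerically (a sketch using numpy):

```python
import numpy as np

T_alpha = np.array([[3.0, 1.0],
                    [-1.0, 3.0]])  # [T]_alpha
Q = np.array([[3.0, 2.0],
              [-1.0, 1.0]])        # Q = [I]^alpha_beta

T_beta = np.linalg.solve(Q, T_alpha @ Q)  # Q^{-1} [T]_alpha Q
print(T_beta)  # [[ 4.  1.]
               #  [-2.  2.]]
```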

Example - Rotation, Reflection

Counter-clockwise rotation:

\(R(\theta) = \begin{pmatrix}\cos \theta & -\sin \theta \\\sin \theta & \cos \theta\end{pmatrix}\)

Reflection about the \(x\)-axis:

\([T]=\begin{bmatrix}1&0\\0&-1\end{bmatrix}\)

Reflection about the axis at angle \(\theta\) counter-clockwise from the \(x\)-axis:

\([F]=R_\theta[T]R^{-1}_\theta=\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}\begin{bmatrix}1&0\\0&-1\end{bmatrix}\begin{pmatrix}\cos \theta & \sin \theta \\-\sin \theta & \cos \theta\end{pmatrix}=\begin{pmatrix}\cos(2\theta) & \sin(2\theta) \\\sin(2\theta) & -\cos(2\theta)\end{pmatrix}\)
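
A numerical check of the formula (the sample angle \(\theta=0.7\) is arbitrary):

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)

R = np.array([[c, -s], [s, c]])          # rotation by theta
T = np.array([[1.0, 0.0], [0.0, -1.0]])  # reflection about the x-axis

F = R @ T @ R.T                          # R_theta [T] R_theta^{-1}; R.T = R^{-1}
c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
assert np.allclose(F, np.array([[c2, s2], [s2, -c2]]))
```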