Singular Value Decomposition

Introduction.

Let \(f: V \ra W\) be a linear map between finite dimensional vector spaces, \(dim(V)=N, dim(W)=M\), endowed with scalar products. The Singular Value Decomposition theorem shows that we can find orthonormal bases \(v_n, w_m\) of \(V, W\) respectively, so that the matrix representation of \(f\) with respect to \(v_n, w_m\) has only diagonal entries: \(f(v_i) = \sigma_i w_i\), \(i=1 \dots min(N,M)\). Furthermore, the bases can be arranged so that \(\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_r > \sigma_{r+1} = \dots = 0.\)
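As a concrete illustration (a minimal sketch, not part of the abstract development below), `numpy.linalg.svd` computes such bases for any matrix representation of \(f\); the rows of `Vt` and the columns of `U` play the roles of the \(v_n\) and \(w_m\):

```python
import numpy as np

# A stand-in matrix for f: R^4 -> R^3, i.e. N = 4, M = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

U, s, Vt = np.linalg.svd(A)              # A = U @ Sigma @ Vt

# Singular values come sorted descending: sigma_1 >= sigma_2 >= ... >= 0
assert np.all(np.diff(s) <= 0) and np.all(s >= 0)

# f(v_i) = sigma_i * w_i for v_i = Vt[i], w_i = U[:, i], i = 1 .. min(N, M)
for i in range(min(3, 4)):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])
```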

In this note, we are going to explore an abstract "coordinate free" version of SVD that can be expressed without choices of bases. Instead, we are going to study the eigenvalues of the self-adjoint operators \(S=f^* f\) on \(V\) and \(T=f f^*\) on \(W\).

\[ \begin{array}{lcr} S = f^* f \circlearrowright V & \xrightarrow{\hspace{2em}\text{f}\hspace{2em}} & W \circlearrowleft f f^* = T \end{array} \]

By the Spectral Decomposition Theorem, the vector spaces \(V,W\) decompose as orthogonal direct sums of the Eigenspaces of \(S,T\) respectively: \(V = \Vsum_\lambda V_\lambda, W = \Vsum_\lambda W_\lambda\).
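Numerically, the two spectral decompositions can be sketched with `numpy.linalg.eigh` (the matrix below is an arbitrary stand-in for \(f\)); note that the nonzero spectra of \(S\) and \(T\) coincide, which the Proposition explains:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 4))          # stand-in matrix for f: V = R^4 -> W = R^3

S = F.T @ F                              # f* f, self-adjoint on V
T = F @ F.T                              # f f*, self-adjoint on W

eig_S, P = np.linalg.eigh(S)             # ascending eigenvalues, orthonormal eigenvectors
eig_T, Q = np.linalg.eigh(T)

# V and W decompose orthogonally into eigenspaces: P, Q are orthogonal matrices
assert np.allclose(P.T @ P, np.eye(4)) and np.allclose(Q.T @ Q, np.eye(3))

# The nonzero spectra agree; S has one extra eigenvalue 0, since dim V > dim W here
assert np.allclose(eig_S[1:], eig_T)
```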

What does \(f\) look like in a basis adapted to these decompositions?

Proposition

  • The map \(f\) induces linear maps \(f_\lambda: V_\lambda \ra W_\lambda\), i.e. \(f(V_\lambda) \subset W_\lambda\).
  • The map \(f^*\) induces linear maps \(f^*_\lambda: W_\lambda \ra V_\lambda\), i.e. \(f^*(W_\lambda) \subset V_\lambda\).
  • \(f_\lambda^* f_\lambda = \lambda I_{V_\lambda}\), and \(f_\lambda f^*_\lambda = \lambda I_{W_\lambda}.\)
  • For \(\lambda > 0\) the maps \(\frac{1}{\sqrt{\lambda}} f_\lambda\), \(\frac{1}{\sqrt{\lambda}} f^*_\lambda\) are inverse isometries between the Eigenspaces \(V_\lambda \isom W_\lambda\).
  • For \(\lambda = 0\) we find \(V_0 = Ker(f)\) and \(W_0 = Im(f)^\perp\), \(f_0 = 0\), \(f^*_0 = 0\).
Proof

1) For \(v \in V_\lambda\), we have \((T - \lambda)f(v) = ff^*fv-\lambda fv = f(f^*f v-\lambda v) = f(0)=0.\)
2) For \(w \in W_\lambda\), we have \((S - \lambda)f^*(w) = f^*ff^*w-\lambda f^*w = 0.\)
3) For \(v \in V_\lambda\) we have \(f_\lambda^* f_\lambda v = f^* f(v) = \lambda v\) by definition of \(V_\lambda\).
4) Clear from 3.
5) \(Ker(f^*f) = Ker(f)\) by properties of the adjoint. Similarly \(Ker(ff^*)=Im(ff^*)^\perp=Im(f)^\perp\).
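The claims of the Proposition can be checked numerically (the matrix is an arbitrary stand-in for \(f\)); the comments reference the numbered proof steps:

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 4))          # stand-in matrix for f
S, T = F.T @ F, F @ F.T

lam_S, P = np.linalg.eigh(S)             # orthonormal eigenvectors of S = f* f

for lam, v in zip(lam_S, P.T):
    if lam > 1e-10:
        w = F @ v
        # 1) f(V_lam) ⊂ W_lam: f(v) is an eigenvector of T for the same lam
        assert np.allclose(T @ w, lam * w)
        # 4) (1/sqrt(lam)) f_lam is an isometry: unit vectors map to unit vectors
        assert np.isclose(np.linalg.norm(w) / np.sqrt(lam), 1.0)
    else:
        # 5) V_0 = Ker(f)
        assert np.allclose(F @ v, 0, atol=1e-6)
```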

Corollary (SVD). There exist orthonormal bases of \(V,W\) with respect to which \(f\) has only diagonal entries \(\geq 0\).

Proof

We recover the SVD from the Proposition by choosing an orthonormal basis \(B_\lambda\) for each Eigenspace \(V_\lambda\). For \(\lambda > 0\), we get an ONB for \(W_\lambda\) as \(C_\lambda = \frac{1}{\sqrt{\lambda}} f_\lambda B_\lambda\). Choose \(B_0\) and \(C_0\) arbitrarily in \(V_0, W_0\). Now for \(\lambda > 0\) and a basis vector \(b \in B_\lambda\) we have \(f(b) = \sqrt{\lambda} c\) for \(c = \frac{1}{\sqrt{\lambda}} f(b) \in C_\lambda\). For \(\lambda = 0\) we have \(f(b)=0\). QED.

Corollary. Choosing bases \(B,C\) of \(V,W\) that are adapted to the Eigenspaces \(V_\lambda\), \(W_\lambda\) in descending order, the linear map \(f\) is represented by a block diagonal matrix \(A\) with blocks \(A_\lambda\):

\[ A = \begin{bmatrix} A_{\lambda_1} & & & & \\ & A_{\lambda_2} & & & \\ & & \ddots & & \\ & & & A_{\lambda_k} & \\ & & & & A_0 \end{bmatrix} \]

where \(A_\lambda^* A_\lambda = \lambda I\) and \(A_\lambda A_\lambda^* = \lambda I\). In particular, \(A_\lambda\) is an isomorphism for \(\lambda > 0\).
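In matrix terms this can be sketched numerically: conjugating a stand-in matrix by the adapted (singular-vector) bases produces the structure described above. For generic matrices the singular values are distinct, so each block \(A_\lambda\) is just the \(1 \times 1\) matrix \((\sqrt{\lambda})\):

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.standard_normal((3, 4))          # stand-in matrix for f

# Columns of Vt.T span the V_lambda's, columns of U span the W_lambda's,
# ordered by descending eigenvalue lambda = sigma^2.
U, s, Vt = np.linalg.svd(F)

# Matrix of f in the adapted bases: diagonal, with entries sigma_i = sqrt(lambda_i)
A = U.T @ F @ Vt.T
Sigma = np.zeros((3, 4))
Sigma[:, :3] = np.diag(s)
assert np.allclose(A, Sigma)

# Each 1x1 block satisfies A_lambda A_lambda^* = lambda I
assert np.allclose(np.diag(A @ A.T), s**2)
```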

Application - Least Squares via Duality

Least squares problems can commonly be written as minimization problems of the form:

\[ \text{Find $v\in V$ so that } \quad L(v) = \| \phi v - b \|^2 \qtext{is minimal.} \]

where \(\phi: V \ra W\) is an injective linear map and \(b \in W\).

We can translate this minimization problem on \(V\) to a minimization problem on \(W\) by considering:

\[ \text{Find $w\in W$ so that } \quad M(w) = \| \phi \phi^* w - b \|^2 \qtext{is minimal.} \]

Since \(\phi\) is injective, \(\phi^*\) is surjective. So a global minimum of \(M=L\circ \phi^*\) will map to a global minimum of \(L\) under \(\phi^*\).

Now \(M\) can be explicitly minimized by noting that:

\[ \| \phi \phi^* w - b \|^2 = \sum_{\lambda \in Spec(\phi \phi^*)} \| \lambda w_\lambda - b_\lambda \|^2 = \| b_0 \|^2 + \sum_{\lambda > 0} \| \lambda w_\lambda - b_\lambda \|^2 \]

where \(w_\lambda, b_\lambda\) are the components of \(w,b\) in \(W_\lambda\). This function is minimized by \(\hat{w}\) with \(\hat{w}_0=0 \in W_0\) and \(\hat{w}_\lambda = b_\lambda/\lambda \in W_\lambda\), i.e.

\[ \hat{w} = \sum_{\lambda > 0} \frac{1}{\lambda} b_\lambda. \]

We obtain the following formula for the solution \(\hat{v} := \phi^*(\hat{w}) = \sum_{\lambda > 0} \frac{1}{\lambda} \phi^* b_\lambda.\)

We claim that \(\hat{v}\) is given by the following explicit formula: \(\hat{v} = (\phi^* \phi)^{-1} \phi^* b\).

Indeed, \(\phi^* b_\lambda \in V_\lambda\), where \((\phi^* \phi)\) operates as multiplication by \(\lambda\). Hence \((\phi^* \phi) \sum_{\lambda>0} \phi^* b_\lambda / \lambda = \sum_{\lambda>0} \phi^* b_\lambda = \phi^* b\), since \(\phi^* b_0 = 0.\)
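The duality argument can be sketched numerically (the matrix \(\phi\) and vector \(b\) below are arbitrary stand-ins): the dual solution \(\phi^* \hat{w}\) agrees with the normal-equations formula \((\phi^* \phi)^{-1} \phi^* b\) and with a standard least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(4)
phi = rng.standard_normal((6, 3))        # injective phi: R^3 -> R^6 (generic => full rank)
b = rng.standard_normal(6)

# Explicit formula derived above: v_hat = (phi^* phi)^{-1} phi^* b
v_hat = np.linalg.solve(phi.T @ phi, phi.T @ b)

# Dual construction: w_hat = sum_{lambda > 0} b_lambda / lambda, then v = phi^* w_hat
lam, Q = np.linalg.eigh(phi @ phi.T)     # spectral decomposition of T = phi phi^*
b_comp = Q.T @ b                         # components b_lambda of b in the eigenbasis
coef = np.zeros_like(lam)
mask = lam > 1e-10
coef[mask] = b_comp[mask] / lam[mask]    # b_lambda / lambda on W_lambda, 0 on W_0
v_dual = phi.T @ (Q @ coef)

assert np.allclose(v_hat, v_dual)
# Both agree with numpy's least-squares solver
assert np.allclose(v_hat, np.linalg.lstsq(phi, b, rcond=None)[0])
```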