7. ERA from Initial Condition Response



The state-variable response of a discrete-time linear time-invariant system with zero input and an arbitrary set of initial conditions \(\boldsymbol{x}(0)\) is:

\begin{align} \label{Eq: Difference eq time invariant IC response1} \boldsymbol{x}(k) &= A^k\boldsymbol{x}(0),\\ \label{Eq: Difference eq time invariant IC response2} \boldsymbol{y}(k) &= CA^k\boldsymbol{x}(0). \end{align}
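For concreteness, the short sketch below simulates such a zero-input response for an assumed two-state toy system (the matrices `A`, `C` and the initial condition `x0` are illustrative placeholders, not taken from the text); the resulting samples serve as data for the identification steps that follow.

```python
import numpy as np

# Zero-input response of a discrete-time LTI system: y(k) = C A^k x(0).
# A, C and x0 below form an assumed two-state toy example, not from the text.
A = np.array([[0.9, 0.2],
              [-0.2, 0.9]])        # state matrix, n x n
C = np.array([[1.0, 0.0]])         # output matrix, m x n (here m = 1)
x0 = np.array([1.0, 0.5])          # arbitrary initial condition x(0)

N = 40                             # number of output samples to generate
Y = np.empty((N, C.shape[0]))      # row k holds y(k)
x = x0.copy()
for k in range(N):
    Y[k] = C @ x                   # y(k) = C x(k) = C A^k x(0)
    x = A @ x                      # x(k+1) = A x(k)
```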

In this situation, the previously defined Markov parameters lose their significance: since they were originally defined as the pulse response of the system, there is no worthwhile definition for them here. Similarly, controllability has no meaning in this case because the control input is set to zero. The concept of observability, however, remains relevant. Although observability and controllability of a linear system are mathematical duals, observability only measures how well the internal states of a system can be inferred from knowledge of its external outputs, and therefore does not involve the input. As before, the discrete time-invariant linear system of order \(n\), Eqs. \eqref{Eq: Difference eq time invariant IC response1} and \eqref{Eq: Difference eq time invariant IC response2}, is observable if and only if the \(pm \times n\) block observability matrix \(\boldsymbol{O}^{(p)}\) has rank \(n\), where

\begin{align} \boldsymbol{O}^{(p)} = \begin{bmatrix} C\\ CA\\ CA^2\\ \vdots\\ CA^{p-1} \end{bmatrix}. \end{align}
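As a sanity check of this rank condition, the blocks can be stacked numerically; the helper below (a hypothetical name, not from the text) builds \(\boldsymbol{O}^{(p)}\) for the toy system above and verifies that it has rank \(n\).

```python
import numpy as np

def block_observability(A, C, p):
    """Stack C, CA, ..., C A^(p-1) into the pm x n block observability matrix."""
    blocks, CAk = [], C.copy()
    for _ in range(p):
        blocks.append(CAk)
        CAk = CAk @ A
    return np.vstack(blocks)

A = np.array([[0.9, 0.2], [-0.2, 0.9]])
C = np.array([[1.0, 0.0]])
O_p = block_observability(A, C, p=4)      # shape (pm, n) = (4, 2)
print(np.linalg.matrix_rank(O_p))         # 2 = n, so the pair (A, C) is observable
```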

Even though controllability has no substantial meaning here, it is possible to define a controllability-like matrix, denoted \(\boldsymbol{Q}^{(q)}\), that collects the state vector at successive times:

\begin{align} \boldsymbol{Q}^{(q)} = \begin{bmatrix} \boldsymbol{x}(0) & A\boldsymbol{x}(0) & A^2\boldsymbol{x}(0) & \cdots & A^{q-1}\boldsymbol{x}(0) \end{bmatrix}. \end{align}

Since \(\boldsymbol{x}(0)\) is an \(n\)-dimensional vector and \(A\in \mathbb{R}^{n\times n}\), \(\boldsymbol{Q}^{(q)} \in \mathbb{R}^{n\times q}\) and has rank at most \(n\); it reaches rank \(n\) as soon as \(q \geq n\), provided the initial condition \(\boldsymbol{x}(0)\) excites every mode of the system.
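A parallel construction (again with a hypothetical helper name) builds \(\boldsymbol{Q}^{(q)}\) column by column; for the toy initial condition used above, its rank is indeed \(n\) once \(q \geq n\).

```python
import numpy as np

def state_history_matrix(A, x0, q):
    """Columns x(0), A x(0), ..., A^(q-1) x(0) of the controllability-like matrix Q."""
    cols, xk = [], np.asarray(x0, dtype=float)
    for _ in range(q):
        cols.append(xk)
        xk = A @ xk
    return np.column_stack(cols)

A = np.array([[0.9, 0.2], [-0.2, 0.9]])
x0 = np.array([1.0, 0.5])
Q_q = state_history_matrix(A, x0, q=6)    # shape (n, q) = (2, 6)
print(np.linalg.matrix_rank(Q_q))         # 2 = n: this x(0) excites both modes
```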

Let's now define a Hankel matrix as

\begin{align} \label{Eq: Hankel matrix k IC response} \boldsymbol{H}_k^{(p, q)} = \begin{bmatrix} \boldsymbol{y}(k) & \boldsymbol{y}(k+1) & \cdots & \boldsymbol{y}(k+q-1)\\ \boldsymbol{y}(k+1) & \boldsymbol{y}(k+2) & \cdots & \boldsymbol{y}(k+q)\\ \vdots & \vdots & \ddots & \vdots\\ \boldsymbol{y}(k+p-1) & \boldsymbol{y}(k+p) & \cdots & \boldsymbol{y}(k+p+q-2)\\ \end{bmatrix} = \boldsymbol{O}^{(p)}A^k\boldsymbol{Q}^{(q)}. \end{align}

For the case when \(k=0\),

\begin{align} \label{Eq: Hankel matrix 0 IC response} \boldsymbol{H}_0^{(p, q)} = \begin{bmatrix} \boldsymbol{y}(0) & \boldsymbol{y}(1) & \cdots & \boldsymbol{y}(q-1)\\ \boldsymbol{y}(1) & \boldsymbol{y}(2) & \cdots & \boldsymbol{y}(q)\\ \vdots & \vdots & \ddots & \vdots\\ \boldsymbol{y}(p-1) & \boldsymbol{y}(p) & \cdots & \boldsymbol{y}(p+q-2)\\ \end{bmatrix} = \boldsymbol{O}^{(p)}\boldsymbol{Q}^{(q)}. \end{align}

If \(pm\geq n\) and \(q\geq n\), the matrices \(\boldsymbol{O}^{(p)}\) and \(\boldsymbol{Q}^{(q)}\) have rank at most \(n\). If the system is observable and \(\boldsymbol{x}(0)\) excites every mode, both factors have rank exactly \(n\) and

\begin{align} \text{rank}\left[\boldsymbol{H}_k^{(p, q)}\right] = n. \end{align}
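In practice only the output samples are available, so \(\boldsymbol{H}_0^{(p, q)}\) and \(\boldsymbol{H}_1^{(p, q)}\) are assembled directly from the measured \(\boldsymbol{y}(k)\). The sketch below (helper name assumed) does this for the simulated toy response and confirms that the rank of \(\boldsymbol{H}_0^{(p, q)}\) recovers \(n\).

```python
import numpy as np

def hankel_from_ic_response(Y, p, q, k=0):
    """Assemble H_k^(p,q) from output samples; Y[j] is the m-vector y(j)."""
    Y = np.asarray(Y, dtype=float)
    if Y.ndim == 1:                                  # allow a 1-D sequence when m = 1
        Y = Y[:, None]
    m = Y.shape[1]
    H = np.empty((p * m, q))
    for i in range(p):
        for j in range(q):
            H[i * m:(i + 1) * m, j] = Y[k + i + j]   # block (i, j) is y(k + i + j)
    return H

# Toy zero-input response from the first sketch, regenerated for self-containment.
A = np.array([[0.9, 0.2], [-0.2, 0.9]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 0.5])
Y = np.array([C @ np.linalg.matrix_power(A, k) @ x0 for k in range(40)])

H0 = hankel_from_ic_response(Y, p=10, q=10, k=0)
H1 = hankel_from_ic_response(Y, p=10, q=10, k=1)
print(np.linalg.matrix_rank(H0))                     # 2 = n for this noise-free example
```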

Following the same steps as for the classical ERA leads to

\begin{align} \label{Eq: factorization OpQq IC repsonse} \boldsymbol{H}_0^{(p, q)} = \boldsymbol{U}^{(n)}\boldsymbol{\Sigma}^{(n)}{\boldsymbol{V}^{(n)}}^\intercal = \boldsymbol{O}^{(p)}\boldsymbol{Q}^{(q)} \Rightarrow \left\lbrace\begin{array}{ll} \boldsymbol{O}^{(p)} &\hspace{-0.7em}= \boldsymbol{U}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{1/2}\\ \boldsymbol{Q}^{(q)} &\hspace{-0.7em}= {\boldsymbol{\Sigma}^{(n)}}^{1/2}{\boldsymbol{V}^{(n)}}^\intercal \end{array}\right., \end{align}

and a minimal realization is given by

\begin{align} \hat{A} &= {\boldsymbol{O}^{(p)}}^\dagger\boldsymbol{H}_1^{(p, q)}{\boldsymbol{Q}^{(q)}}^\dagger = {\boldsymbol{\Sigma}^{(n)}}^{-1/2}{\boldsymbol{U}^{(n)}}^\intercal\boldsymbol{H}_1^{(p, q)}\boldsymbol{V}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{-1/2},\\ \hat{C} &= {\boldsymbol{E}^{(m)}}^\intercal\boldsymbol{O}^{(p)} = {\boldsymbol{E}^{(m)}}^\intercal\boldsymbol{U}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{1/2},\\ \hat{\boldsymbol{x}}_0 &= \boldsymbol{Q}^{(q)}\boldsymbol{E}^{(1)} = {\boldsymbol{\Sigma}^{(n)}}^{1/2}{\boldsymbol{V}^{(n)}}^\intercal\boldsymbol{E}^{(1)}. \end{align}
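Under the same assumptions (truncation order \(n\) known, noise-free data), the realization step can be sketched as follows; `era_from_ic` is an illustrative name, not a standard routine, and the toy quantities `H0`, `H1`, `Y` come from the previous sketch.

```python
import numpy as np

def era_from_ic(H0, H1, n, m):
    """Sketch of the realization step: (A_hat, C_hat, x0_hat) from H0 and H1."""
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Un, Vtn = U[:, :n], Vt[:n, :]                      # rank-n truncation
    S_sqrt = np.diag(s[:n] ** 0.5)
    S_isqrt = np.diag(s[:n] ** -0.5)
    A_hat = S_isqrt @ Un.T @ H1 @ Vtn.T @ S_isqrt      # Sigma^-1/2 U^T H1 V Sigma^-1/2
    C_hat = (Un @ S_sqrt)[:m, :]                       # E^(m)^T O^(p): first m rows
    x0_hat = (S_sqrt @ Vtn)[:, 0]                      # Q^(q) E^(1): first column
    return A_hat, C_hat, x0_hat

# With H0, H1 built from the toy data in the previous sketch (n = 2, m = 1):
# A_hat, C_hat, x0_hat = era_from_ic(H0, H1, n=2, m=1)
# y_hat = [C_hat @ np.linalg.matrix_power(A_hat, k) @ x0_hat for k in range(40)]
# The reconstructed y_hat should match the measured response to numerical precision.
```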

Note that this formulation closely parallels the classical ERA formulation; the main differences are summarized in the comparison below.

\begin{align}
\begin{array}{|c|c|}
\hline
\text{Classical ERA} & \text{ERA with Initial Condition Response}\\
\hline \hline &\\[-1em]
\boldsymbol{H}_k^{(p, q)} \in \mathbb{R}^{pm \times qr} & \boldsymbol{H}_k^{(p, q)} \in \mathbb{R}^{pm \times q}\\[+0.5em]
\hline &\\[-1em]
\boldsymbol{R}^{(q)} = \begin{bmatrix} B & AB & \cdots & A^{q-1}B \end{bmatrix} \in \mathbb{R}^{n \times qr} & \boldsymbol{Q}^{(q)} = \begin{bmatrix} \boldsymbol{x}(0) & A\boldsymbol{x}(0) & \cdots & A^{q-1}\boldsymbol{x}(0) \end{bmatrix} \in \mathbb{R}^{n \times q}\\[+0.5em]
\hline &\\[-1em]
{\boldsymbol{V}^{(n)}}^\intercal \in \mathbb{R}^{n\times qr} & {\boldsymbol{V}^{(n)}}^\intercal \in \mathbb{R}^{n\times q}\\[+0.5em]
\hline &\\[-1em]
\hat{B} = \boldsymbol{R}^{(q)}\boldsymbol{E}^{(r)} & \hat{\boldsymbol{x}}_0 = \boldsymbol{Q}^{(q)}\boldsymbol{E}^{(1)}\\[+0.5em]
\hline
\end{array}
\end{align}