• Levinson module
  • Original author: Thomas Cokelaer, 2011
spafe.utils.levinsondr.LEVINSON(r, order=None, allow_singularity=False)[source]

Levinson-Durbin recursion. Finds the coefficients of an autoregressive linear process of order len(r) - 1.

r (array) : autocorrelation sequence of length N + 1
(first element being the zero-lag autocorrelation)
order (int) : requested order of the autoregressive coefficients.
Default is N.
allow_singularity (bool) : whether to allow a singular autocorrelation matrix (other implementations, e.g. Octave, set this to True).
Default is False.
  • the N+1 autoregressive coefficients \(A = (1, a_1, \dots, a_N)\)
  • the prediction errors
  • the N reflection coefficients


The Levinson algorithm solves the following set of complex linear simultaneous equations:
\[\bold{T}_M \left( \begin{array}{c} 1 \\ \bold{a}_M \end{array} \right) = \left( \begin{array}{c} \rho_M \\ \bold{0}_M \end{array} \right)\]

where \(old{T}_M\) is a Hermitian Toeplitz matrix with elements \(T_0, T_1, \dots ,T_M\).

Solving these equations by Gaussian elimination would require \(M^3\) operations, whereas the Levinson algorithm requires only \(M^2+M\) additions and \(M^2+M\) multiplications.

This is equivalent to solving the following symmetric Toeplitz system of linear equations:

\[\left( \begin{array}{cccc}
r_1 & r_2^* & \dots & r_{n}^* \\
r_2 & r_1^* & \dots & r_{n-1}^* \\
\dots & \dots & \dots & \dots \\
r_n & \dots & r_2 & r_1
\end{array} \right)
\left( \begin{array}{c}
a_2 \\ a_3 \\ \vdots \\ a_{N+1}
\end{array} \right)
= \left( \begin{array}{c}
-r_2 \\ -r_3 \\ \vdots \\ -r_{N+1}
\end{array} \right)\]
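As a quick numerical check, the low-order version of this system can be solved directly with NumPy. The autocorrelation values below are a hypothetical real-valued example chosen for illustration, not data from spafe:

```python
import numpy as np

# Illustrative check with a hypothetical real autocorrelation r = (1, 0.5, 0.25):
# solve the 2x2 symmetric Toeplitz system  [r1 r2; r2 r1] [a2; a3] = [-r2; -r3].
r1, r2, r3 = 1.0, 0.5, 0.25
T = np.array([[r1, r2],
              [r2, r1]])
a = np.linalg.solve(T, np.array([-r2, -r3]))
# a holds the AR coefficients (a2, a3); here a = (-0.5, 0),
# consistent with an AR(1)-like process.
```

The same coefficients are what the Levinson recursion produces, in \(O(M^2)\) instead of \(O(M^3)\).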

where \(r = (r_1, \dots, r_{N+1})\) is the input autocorrelation vector, and \(r_i^*\) denotes the complex conjugate of \(r_i\). The input r is typically a vector of autocorrelation coefficients where lag 0 is the first element \(r_1\).

```python
import numpy
from spafe.utils.levinsondr import LEVINSON

T = numpy.array([3., -2+0.5j, .7-1j])
a, e, k = LEVINSON(T)
```
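For readers who want to see the recursion itself, here is a minimal sketch in plain NumPy. It is an illustrative reimplementation under simplifying assumptions (real-valued data, full order, no singularity handling), not spafe's actual code:

```python
import numpy as np

def levinson_durbin(r):
    """Minimal real-valued Levinson-Durbin recursion (illustrative only).

    r is the autocorrelation sequence, r[0] being the zero-lag term.
    Returns the AR coefficients (a_1 .. a_N), the final prediction
    error, and the reflection coefficients.  No singularity handling.
    """
    r = np.asarray(r, dtype=float)
    order = len(r) - 1
    a = np.zeros(order)          # AR coefficients a_1 .. a_N
    k = np.zeros(order)          # reflection coefficients
    e = r[0]                     # prediction error, starts at zero-lag power
    for i in range(order):
        # partial correlation between forward and backward predictors
        acc = r[i + 1] + np.dot(a[:i], r[i:0:-1])
        k[i] = -acc / e
        # order update: a_j <- a_j + k * a_{i+1-j}, then append k
        a[:i] = a[:i] + k[i] * a[:i][::-1]
        a[i] = k[i]
        e *= 1.0 - k[i] ** 2
    return a, e, k
```

For the hypothetical autocorrelation (1, 0.5, 0.25) this returns a = (-0.5, 0), e = 0.75, and k = (-0.5, 0), matching a direct solve of the Toeplitz system above.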
spafe.utils.levinsondr.rlevinson(a, efinal)[source]

Computes the autocorrelation coefficients R from the prediction polynomial A and the final prediction error Efinal, using the step-down algorithm. Works for real or complex data.

a (array) : prediction polynomial
efinal (float) : final prediction error

  • R, the autocorrelation coefficients
  • U, the matrix of prediction polynomial coefficients
  • kr, the reflection coefficients
  • e, the prediction errors

A should be a minimum phase polynomial and A(1) is assumed to be unity.
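To see what rlevinson inverts: for real data the autocorrelation can also be recovered by solving the Yule-Walker equations directly. The sketch below is an illustrative alternative to the step-down algorithm; the helper name and the direct-solve approach are this example's, not spafe's:

```python
import numpy as np

def ar_to_autocorr(a, efinal):
    """Recover the autocorrelation sequence from real AR coefficients
    a = (a_1 .. a_P) and the final prediction error, by solving the
    Yule-Walker equations directly (illustrative alternative to the
    step-down algorithm; helper name is this example's, not spafe's)."""
    p = len(a)
    A = np.concatenate(([1.0], np.asarray(a, dtype=float)))  # (1, a_1 .. a_P)
    M = np.zeros((p + 1, p + 1))
    # row m encodes  sum_j A[j] * r[|m - j|] = (efinal if m == 0 else 0)
    for m in range(p + 1):
        for j in range(p + 1):
            M[m, abs(m - j)] += A[j]
    rhs = np.zeros(p + 1)
    rhs[0] = efinal
    return np.linalg.solve(M, rhs)
```

With a = (-0.5, 0) and efinal = 0.75 this recovers R = (1, 0.5, 0.25), the autocorrelation used in the earlier examples.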

The (P+1)-by-(P+1) upper triangular matrix U holds the i'th order prediction polynomials Ai, i=1:P, where P is the order of the input polynomial A:

        [ 1  a1(1)*  a2(2)*  .....  aP(P)*   ]
        [ 0  1       a2(1)*  .....  aP(P-1)* ]
    U = [ ...................................]
        [ 0  0       0       .....  1        ]

from which the i'th order prediction polynomial can be extracted using Ai=U(i+1:-1:1,i+1)'. The first row of U contains the conjugates of the reflection coefficients, and the K's may be extracted using K=conj(U(1,2:end)).
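In 0-based NumPy indexing, the same extractions read as follows. The U below is a small hypothetical example consistent with the layout above (built from the earlier AR coefficients), not output from rlevinson:

```python
import numpy as np

# Hypothetical 3x3 U for P = 2, laid out as in the docstring above,
# using the earlier example where k = (-0.5, 0).
U = np.array([[1.0, -0.5, 0.0],
              [0.0,  1.0, -0.5],
              [0.0,  0.0,  1.0]])

i = 2
# MATLAB's Ai = U(i+1:-1:1, i+1)'  ->  column i, rows i..0, conjugated
Ai = U[i::-1, i].conj()            # order-2 polynomial (1, a_1, a_2)
# MATLAB's K = conj(U(1, 2:end))   ->  conjugated first row, from column 1
K = U[0, 1:].conj()                # reflection coefficients
```

Here Ai is (1, -0.5, 0) and K is (-0.5, 0), consistent with the earlier Levinson example.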

To do:
  • remove the conjugate when the data is real
  • clean up the code
  • add tests and documentation