Recursive Least Squares (RLS)
New in version 0.1.
Changed in version 1.2.0.
The Recursive Least Squares filter can be created as follows
>>> import padasip as pa
>>> pa.filters.FilterRLS(n)
where n is the number of filter inputs (the size of the input vector).
Algorithm Explanation
The RLS adaptive filter may be described as
\(y(k) = w_1 \cdot x_{1}(k) + ... + w_n \cdot x_{n}(k)\),
or in a vector form
\(y(k) = \textbf{x}^T(k) \textbf{w}(k)\),
where \(k\) is the discrete time index, \((.)^T\) denotes the transposition, \(y(k)\) is the filtered signal, \(\textbf{w}\) is the vector of the filter's adaptive parameters, and \(\textbf{x}\) is the input vector (for a filter of size \(n\)) as follows
\(\textbf{x}(k) = [x_1(k), ..., x_n(k)]\).
The update is done as follows
\(\textbf{w}(k+1) = \textbf{w}(k) + \Delta \textbf{w}(k)\)
where \(\Delta \textbf{w}(k)\) is obtained as follows
\(\Delta \textbf{w}(k) = \textbf{R}(k) \textbf{x}(k) e(k)\),
where \(e(k)\) is the error, estimated from the filter output and the desired value \(d(k)\) as follows
\(e(k) = d(k) - y(k)\)
The matrix \(\textbf{R}(k)\) is the inverse of the autocorrelation matrix and is calculated as follows
\(\textbf{R}(k) = \frac{1}{\mu}( \textbf{R}(k-1) - \frac{\textbf{R}(k-1)\textbf{x}(k) \textbf{x}(k)^{T} \textbf{R}(k-1)} {\mu + \textbf{x}(k)^{T}\textbf{R}(k-1)\textbf{x}(k)} )\).
The initial value of \(\textbf{R}\) should be set to
\(\textbf{R}(0) = \frac{1}{\delta} \textbf{I}\),
where \(\textbf{I}\) is the identity matrix and \(\delta\) is a small positive constant.
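The equations above translate directly into a few lines of NumPy. The following is a minimal sketch of a single update step; it only illustrates the math above, not the actual padasip implementation (the function name rls_update and the values of n and \(\delta\) are chosen just for this illustration).
import numpy as np

def rls_update(w, R, x, d, mu=0.99):
    # y(k) = x^T(k) w(k)
    y = np.dot(x, w)
    # e(k) = d(k) - y(k)
    e = d - y
    # R(k) = (1/mu) * (R(k-1) - R(k-1) x x^T R(k-1) / (mu + x^T R(k-1) x))
    Rx = np.dot(R, x)
    R = (R - np.outer(Rx, np.dot(x, R)) / (mu + np.dot(x, Rx))) / mu
    # w(k+1) = w(k) + R(k) x(k) e(k)
    w = w + np.dot(R, x) * e
    return w, R, y, e

# initialization: w(0) = 0, R(0) = (1/delta) * I
n, delta = 4, 0.1
w = np.zeros(n)
R = np.eye(n) / delta
Calling rls_update once per incoming sample reproduces the adaptation loop described above.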
Stability and Optimal Performance
Making the RLS filter work correctly with real data can be tricky. The forgetting factor \(\mu\) should be in the range from 0 to 1, but in many cases the filter works well only with values close to 1 (for example 0.99).
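A quick way to check the sensitivity to this choice is to sweep a few values of \(\mu\) and compare the resulting error. The sketch below reuses the FilterRLS constructor and the run method shown in the examples in the next section; the synthetic data and the chosen \(\mu\) values are arbitrary.
import numpy as np
import padasip as pa

# synthetic data (same structure as in the example below)
N = 500
x = np.random.normal(0, 1, (N, 4))
d = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3] + np.random.normal(0, 0.1, N)

# compare several forgetting factors
for mu in [0.5, 0.9, 0.99, 0.999]:
    f = pa.filters.FilterRLS(n=4, mu=mu)
    y, e, w = f.run(d, x)
    print("mu = {:.3f}  MSE = {:.6f}".format(mu, np.mean(e**2)))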
Minimal Working Examples
If you have measured data you may filter it as follows
import numpy as np
import matplotlib.pyplot as plt
import padasip as pa
# creation of data
N = 500
x = np.random.normal(0, 1, (N, 4)) # input matrix
v = np.random.normal(0, 0.1, N) # noise
d = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3] + v # target
# identification
f = pa.filters.FilterRLS(n=4, mu=0.1, w="random")
y, e, w = f.run(d, x)
# show results
plt.figure(figsize=(15,9))
plt.subplot(211);plt.title("Adaptation");plt.xlabel("samples - k")
plt.plot(d,"b", label="d - target")
plt.plot(y,"g", label="y - output");plt.legend()
plt.subplot(212);plt.title("Filter error");plt.xlabel("samples - k")
plt.plot(10*np.log10(e**2),"r", label="e - error [dB]");plt.legend()
plt.tight_layout()
plt.show()
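Because run also returns the history of the adaptive parameters (one row of w per sample in this example), you can compare the final weights with the true coefficients used to build the target
print("estimated weights:", w[-1])
print("true coefficients:", [2, 0.1, -4, 0.5])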
An example of how to filter data measured in real time:
import numpy as np
import matplotlib.pyplot as plt
import padasip as pa
# these two functions simulate your online measurement
def measure_x():
    # it produces an input vector of size 3
    x = np.random.random(3)
    return x

def measure_d(x):
    # measure system output
    d = 2*x[0] + 1*x[1] - 1.5*x[2]
    return d
N = 100
log_d = np.zeros(N)
log_y = np.zeros(N)
filt = pa.filters.FilterRLS(3, mu=0.5)
for k in range(N):
    # measure input
    x = measure_x()
    # predict new value
    y = filt.predict(x)
    # do the important stuff with prediction output
    pass
    # measure output
    d = measure_d(x)
    # update filter
    filt.adapt(d, x)
    # log values
    log_d[k] = d
    log_y[k] = y
# show results
plt.figure(figsize=(15,9))
plt.subplot(211);plt.title("Adaptation");plt.xlabel("samples - k")
plt.plot(log_d,"b", label="d - target")
plt.plot(log_y,"g", label="y - output");plt.legend()
plt.subplot(212);plt.title("Filter error");plt.xlabel("samples - k")
plt.plot(10*np.log10((log_d-log_y)**2),"r", label="e - error [dB]")
plt.legend(); plt.tight_layout(); plt.show()