Least-mean-fourth (LMF)

New in version 1.1.0.

The least-mean-fourth (LMF) adaptive filter implemented according to the paper [1].

The LMF filter can be created as follows

>>> import padasip as pa
>>> pa.filters.FilterLMF(n)

where n is the size (number of taps) of the filter.

See also

Adaptive Filters

Algorithm Explanation

The LMF adaptive filter could be described as

\(y(k) = w_1 \cdot x_{1}(k) + ... + w_n \cdot x_{n}(k)\),

or in a vector form

\(y(k) = \textbf{x}^T(k) \textbf{w}(k)\),

where \(k\) is the discrete time index, \((\cdot)^T\) denotes transposition, \(y(k)\) is the filtered signal, \(\textbf{w}\) is the vector of filter adaptive parameters, and \(\textbf{x}\) is the input vector (for a filter of size \(n\)) as follows

\(\textbf{x}(k) = [x_1(k), ..., x_n(k)]\).

The LMF weights adaptation could be described as follows

\(\textbf{w}(k+1) = \textbf{w}(k) + \Delta \textbf{w}(k)\),

where \(\Delta \textbf{w}(k)\) is

\(\Delta \textbf{w}(k) = \frac{1}{4} \mu \frac{\partial e^4(k)}{\partial \textbf{w}(k)} = \mu \cdot e^{3}(k) \cdot \textbf{x}(k)\),

where \(\mu\) is the learning rate (step size) and \(e(k)\) is the error defined as

\(e(k) = d(k) - y(k)\).
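
In code, this update amounts to a dot product, an error computation and a scaled correction. The following is a minimal NumPy sketch of a single adaptation step, written purely for illustration; the names n, mu, w, x and d belong to this example, not to the padasip API.

import numpy as np

# illustrative filter size and step size
n = 4
mu = 0.01

# current weights w(k), one input vector x(k) and its desired value d(k)
w = np.zeros(n)
x = np.random.normal(0, 1, n)
d = 1.0

# y(k) = x^T(k) w(k)
y = np.dot(x, w)
# e(k) = d(k) - y(k)
e = d - y
# w(k+1) = w(k) + mu * e^3(k) * x(k)
w = w + mu * e**3 * x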

Minimal Working Examples

If you have measured data, you may filter it as follows

import numpy as np
import matplotlib.pylab as plt
import padasip as pa

# creation of data
N = 500
x = np.random.normal(0, 1, (N, 4)) # input matrix
v = np.random.normal(0, 0.1, N) # noise
d = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3] + v # target

# identification
f = pa.filters.FilterLMF(n=4, mu=0.01, w="random")
y, e, w = f.run(d, x)

# show results
plt.figure(figsize=(15,9))
plt.subplot(211);plt.title("Adaptation");plt.xlabel("samples - k")
plt.plot(d,"b", label="d - target")
plt.plot(y,"g", label="y - output");plt.legend()
plt.subplot(212);plt.title("Filter error");plt.xlabel("samples - k")
plt.plot(10*np.log10(e**2),"r", label="e - error [dB]");plt.legend()
plt.tight_layout()
plt.show()

References

[1] Azzedine Zerguine. Convergence behavior of the normalized least mean fourth algorithm. In Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers, volume 1, 275–278. IEEE, 2000.

Code Explanation

class padasip.filters.lmf.FilterLMF(n, mu=0.01, w='random')

Bases: padasip.filters.base_filter.AdaptiveFilter

This class represents an adaptive LMF filter.

Args:

  • n : size of filter (integer) - the number of filter taps, i.e. the length of every input vector (row of the input matrix)

Kwargs:

  • mu : learning rate (float). Also known as step size. If it is too small, the filter may converge slowly or perform poorly. If it is too large, the filter will be unstable. The default value can be unstable for ill-conditioned input data.

  • w : initial weights of filter (see the short example after this list). Possible values are:

    • array with initial weights (1 dimensional array) of filter size
    • “random” : create random weights
    • “zeros” : create zero value weights
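
For instance, the three options could be used as sketched below; the explicit weight values are arbitrary and only illustrative.

import numpy as np
import padasip as pa

f_zeros = pa.filters.FilterLMF(n=4, w="zeros")    # all weights start at zero
f_rand = pa.filters.FilterLMF(n=4, w="random")    # random initial weights
f_user = pa.filters.FilterLMF(n=4, w=np.array([0.5, -1.0, 2.0, 0.0]))  # user-given weights
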
adapt(d, x)

Adapt weights according to one desired value and its input.

Args:

  • d : desired value (float)
  • x : input array (1-dimensional array)
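
A sample-by-sample sketch using adapt() could look as follows; the synthetic data and the system being identified are only illustrative.

import numpy as np
import padasip as pa

# identify an unknown system one sample at a time
f = pa.filters.FilterLMF(n=4, mu=0.01, w="zeros")
for k in range(500):
    x = np.random.normal(0, 1, 4)                 # input vector x(k)
    d = 2*x[0] + 0.1*x[1] - 4*x[2] + 0.5*x[3]     # desired value d(k)
    f.adapt(d, x)                                 # one LMF weight update
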
run(d, x)

This function filters multiple samples in a row.

Args:

  • d : desired value (1 dimensional array)
  • x : input matrix (2-dimensional array). Rows are samples, columns are input arrays.

Returns:

  • y : output value (1 dimensional array). The size corresponds with the desired value.
  • e : filter error for every sample (1 dimensional array). The size corresponds with the desired value.
  • w : history of all weights (2 dimensional array). Every row is set of the weights for given sample.
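
As a small sketch of how these return values could be used, the weight history w makes it easy to visualise the adaptation; the data below mirrors the minimal working example above.

import numpy as np
import matplotlib.pylab as plt
import padasip as pa

N = 500
x = np.random.normal(0, 1, (N, 4))
d = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3]

f = pa.filters.FilterLMF(n=4, mu=0.01, w="zeros")
y, e, w = f.run(d, x)

print(y.shape, e.shape, w.shape)   # (500,), (500,), (500, 4)

# evolution of every filter weight during adaptation
plt.plot(w)
plt.title("Evolution of filter weights")
plt.xlabel("samples - k")
plt.show()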