Normalized Least-mean-fourth (NLMF)

New in version 1.1.0.

The normalized least-mean-fourth (NLMF) adaptive filter is implemented according to the paper [1]. The NLMF is an extension of the LMF adaptive filter (Least-mean-fourth (LMF)).

The NLMF filter can be created as follows

>>> import padasip as pa
>>> pa.filters.FilterNLMF(n)

where n is the size (number of taps) of the filter.


See also: Adaptive Filters

Algorithm Explanation

The NLMF is an extension of the LMF filter. See Least-mean-fourth (LMF) for an explanation of the underlying algorithm.

The extension is based on normalization of the learning rate. The constant learning rate \(\mu\) is replaced by a learning rate \(\eta(k)\) that is normalized with every new sample according to the input power as follows

\(\eta (k) = \frac{\mu}{\epsilon + || \textbf{x}(k) ||^2}\),

where \(|| \textbf{x}(k) ||^2\) is the squared Euclidean norm of the input vector \(\textbf{x}(k)\) and \(\epsilon\) is a small positive constant (regularization term). This constant is introduced to preserve stability in cases where the input power is close to zero.
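The following standalone sketch (not the padasip source) shows how the normalized learning rate enters the weight update. It assumes the usual LMF update rule \(\textbf{w}(k+1) = \textbf{w}(k) + \eta(k) e(k)^3 \textbf{x}(k)\), where the cubed error comes from the fourth-power cost function; the helper name nlmf_step is hypothetical.

import numpy as np

def nlmf_step(w, x, d, mu=0.1, eps=1.0):
    """One NLMF update step (illustrative sketch, not the padasip internals)."""
    y = np.dot(w, x)                 # filter output for this sample
    e = d - y                        # error for this sample
    eta = mu / (eps + np.dot(x, x))  # normalized learning rate eta(k)
    w = w + eta * x * e**3           # LMF update with the normalized rate
    return w, y, e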

Minimal Working Examples

If you have measured data, you may filter it as follows:

import numpy as np
import matplotlib.pyplot as plt
import padasip as pa 

# creation of data
N = 500
x = np.random.normal(0, 1, (N, 4)) # input matrix
v = np.random.normal(0, 0.1, N) # noise
d = 2*x[:,0] + 0.1*x[:,1] - 0.3*x[:,2] + 0.5*x[:,3] + v # target

# identification
f = pa.filters.FilterNLMF(n=4, mu=0.005, w="random")
y, e, w = f.run(d, x)

# show results
plt.figure(figsize=(15,9))
plt.subplot(211);plt.title("Adaptation");plt.xlabel("samples - k")
plt.plot(d,"b", label="d - target")
plt.plot(y,"g", label="y - output");plt.legend()
plt.subplot(212);plt.title("Filter error");plt.xlabel("samples - k")
plt.plot(10*np.log10(e**2),"r", label="e - error [dB]");plt.legend()
plt.tight_layout()
plt.show()
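For sample-by-sample (online) processing you can use the adapt() method documented below instead of run(). The following sketch assumes the current weight vector is exposed as the attribute w; if your padasip version stores it differently, adjust accordingly.

import numpy as np
import padasip as pa

# synthetic data, same structure as above
N = 500
x = np.random.normal(0, 1, (N, 4))
v = np.random.normal(0, 0.1, N)
d = 2*x[:,0] + 0.1*x[:,1] - 0.3*x[:,2] + 0.5*x[:,3] + v

f = pa.filters.FilterNLMF(n=4, mu=0.005, w="zeros")
e = np.zeros(N)
for k in range(N):
    y_k = np.dot(f.w, x[k])  # output with current weights (assumes attribute f.w)
    e[k] = d[k] - y_k        # error before the update
    f.adapt(d[k], x[k])      # adapt weights with one desired value and its input

print("MSE over the last 100 samples:", np.mean(e[-100:]**2))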

References

[1] Azzedine Zerguine. Convergence behavior of the normalized least mean fourth algorithm. In Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers, 2000, volume 1, 275–278. IEEE, 2000.

Code Explanation

class padasip.filters.nlmf.FilterNLMF(n, mu=0.1, eps=1.0, w='random')[source]

Bases: padasip.filters.base_filter.AdaptiveFilter

Adaptive NLMF filter.

Args:

  • n : length of filter (integer) - how many input values are in one input vector (one row of the input matrix)

Kwargs:

  • mu : learning rate (float). Also known as step size. If it is too small, the filter may converge slowly or perform poorly. If it is too large, the filter will be unstable. The default value can be unstable for ill-conditioned input data.

  • eps : regularization term (float). It is introduced to preserve stability for close-to-zero input vectors

  • w : initial weights of filter (see the example after this list). Possible values are:

    • array with initial weights (1 dimensional array) of filter size
    • “random” : create random weights
    • “zeros” : create zero value weights
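For illustration, the documented w options can be passed as follows (a brief sketch; the numeric values are arbitrary):

import numpy as np
import padasip as pa

# random initial weights (the default)
f1 = pa.filters.FilterNLMF(n=4, mu=0.005, w="random")

# zero initial weights
f2 = pa.filters.FilterNLMF(n=4, mu=0.005, w="zeros")

# explicit initial weights, e.g. reused from a previous identification run
f3 = pa.filters.FilterNLMF(n=4, mu=0.005, w=np.array([2.0, 0.1, -0.3, 0.5]))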
adapt(d, x)[source]

Adapt weights according to one desired value and its input.

Args:

  • d : desired value (float)
  • x : input array (1-dimensional array)
run(d, x)[source]

This function filters multiple samples in a row.

Args:

  • d : desired value (1 dimensional array)
  • x : input matrix (2-dimensional array). Rows are samples, columns are input arrays.

Returns:

  • y : output value (1 dimensional array). The size corresponds with the desired value.
  • e : filter error for every sample (1 dimensional array). The size corresponds with the desired value.
  • w : history of all weights (2 dimensional array). Every row is a set of weights for a given sample.
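As a quick illustration of the return values described above, the following sketch with arbitrary synthetic data prints the shapes expected from the documentation:

import numpy as np
import padasip as pa

N, n = 200, 4
x = np.random.normal(0, 1, (N, n))       # N samples, n inputs per sample
d = x @ np.array([1.0, -0.5, 0.2, 0.3])  # arbitrary target

f = pa.filters.FilterNLMF(n=n, mu=0.005)
y, e, w = f.run(d, x)

print(y.shape)  # (200,)   - one output per sample
print(e.shape)  # (200,)   - one error per sample
print(w.shape)  # (200, 4) - one weight vector per sample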