## Tuesday, February 28, 2012

### Finite differences with Toeplitz matrix

A Toeplitz matrix is a matrix in which each descending diagonal from left to right is constant. In this post we will see how to approximate the derivative of a function f(x) as a matrix-vector product between a Toeplitz matrix and a vector of equally spaced values of f. Let's see how to generate the matrices we need using the function toeplitz(...) provided by scipy:
```from numpy import *
from scipy.linalg import toeplitz
import pylab

def forward(size):
    """ returns a toeplitz matrix
        for forward differences
    """
    c = zeros(size) # first column
    r = zeros(size) # first row
    c[0] = -1
    c[size-1] = 1 # wrap-around entry
    r[0] = -1
    r[1] = 1
    return toeplitz(c, r)

def backward(size):
    """ returns a toeplitz matrix
        for backward differences
    """
    c = zeros(size)
    r = zeros(size)
    c[0] = 1
    c[1] = -1
    r[0] = 1
    r[size-1] = -1 # wrap-around entry
    return toeplitz(c, r)

def central(size):
    """ returns a toeplitz matrix
        for central differences
    """
    c = zeros(size)
    r = zeros(size)
    c[1] = -.5
    c[size-1] = .5 # wrap-around entry
    r[1] = .5
    r[size-1] = -.5 # wrap-around entry
    return toeplitz(c, r)

# testing the functions printing some 4-by-4 matrices
print('Forward matrix')
print(forward(4))
print('Backward matrix')
print(backward(4))
print('Central matrix')
print(central(4))
```
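A note on the call signature (checked against the current scipy documentation): scipy.linalg.toeplitz(c, r) builds the matrix from its first column c and its first row r, taking the diagonal value from c[0] and ignoring r[0]. A minimal sketch:

```python
from numpy import array
from scipy.linalg import toeplitz

# c gives the first column, r the first row;
# the diagonal comes from c[0], r[0] is ignored
c = array([1., 2., 3.])  # first column
r = array([1., 4., 5.])  # first row
T = toeplitz(c, r)
print(T)
# [[1. 4. 5.]
#  [2. 1. 4.]
#  [3. 2. 1.]]
```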
The result of the test above is as follows:
```Forward matrix
[[-1.  1.  0.  0.]
[ 0. -1.  1.  0.]
[ 0.  0. -1.  1.]
[ 1.  0.  0. -1.]]
Backward matrix
[[ 1.  0.  0. -1.]
[-1.  1.  0.  0.]
[ 0. -1.  1.  0.]
[ 0.  0. -1.  1.]]
Central matrix
[[ 0.   0.5  0.  -0.5]
[-0.5  0.   0.5  0. ]
[ 0.  -0.5  0.   0.5]
[ 0.5  0.  -0.5  0. ]]
```
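As a quick sanity check, we can verify that multiplying the forward matrix by a vector y really computes the differences of adjacent samples (the forward function is repeated here so the snippet stands alone; note that the last row wraps around to the first sample):

```python
from numpy import zeros, arange, dot, allclose
from scipy.linalg import toeplitz

def forward(size):
    """ toeplitz matrix for forward differences (as above) """
    c = zeros(size)
    r = zeros(size)
    c[0] = -1
    c[size-1] = 1
    r[0] = -1
    r[1] = 1
    return toeplitz(c, r)

y = arange(4.0)**2          # samples [0, 1, 4, 9]
d = dot(forward(4), y)      # d = [ 1.  3.  5. -9.]
# the first rows compute y[i+1] - y[i],
# the last row wraps around: y[0] - y[-1]
assert allclose(d[:-1], y[1:] - y[:-1])
assert allclose(d[-1], y[0] - y[-1])
```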
We can observe that the matrix-vector product between those matrices and the vector of equally spaced values of f(x) implements, respectively, the following equations:

Forward difference: f'(x_i) ≈ (f(x_{i+1}) - f(x_i)) / h

Backward difference: f'(x_i) ≈ (f(x_i) - f(x_{i-1})) / h

And central difference: f'(x_i) ≈ (f(x_{i+1}) - f(x_{i-1})) / (2h)

where h is the step size between the samples. Those equations are called finite differences and they give us an approximate derivative of f. So, let's approximate some derivatives!
```x = linspace(0,10,15)
y = cos(x) # recall, the derivative of cos(x) is -sin(x)
# we need the step h to compute f'(x)
# because the matrix-vector product gives h*f'(x)
h = x[1]-x[0]
# generating the matrices
Tf = forward(15)/h
Tb = backward(15)/h
Tc = central(15)/h

pylab.subplot(211)
# approximation and plotting
pylab.plot(x,dot(Tf,y),'g',x,dot(Tb,y),'r',x,dot(Tc,y),'m')
pylab.plot(x,-sin(x),'b--',linewidth=3)
pylab.axis([0,10,-1,1])

# the same experiment with more samples (h is smaller)
x = linspace(0,10,50)
y = cos(x)
h = x[1]-x[0]
Tf = forward(50)/h
Tb = backward(50)/h
Tc = central(50)/h

pylab.subplot(212)
pylab.plot(x,dot(Tf,y),'g',x,dot(Tb,y),'r',x,dot(Tc,y),'m')
pylab.plot(x,-sin(x),'b--',linewidth=3)
pylab.axis([0,10,-1,1])
pylab.legend(['Forward', 'Backward', 'Central', 'True f prime'],loc=4)
pylab.show()
```
The resulting plot is as follows:

As the theory suggests, the approximation improves as h gets smaller, and the central differences are the most accurate (they have a higher order of accuracy, O(h²), than the forward and backward ones, which are O(h)).
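We can check those orders of accuracy numerically: since the central difference error shrinks like h², doubling the number of samples should divide its error by roughly four. A small sketch, repeating the central function from above and measuring the error only at interior points to avoid the wrap-around rows:

```python
from numpy import zeros, linspace, cos, sin, dot, abs
from scipy.linalg import toeplitz

def central(size):
    """ toeplitz matrix for central differences (as above) """
    c = zeros(size)
    r = zeros(size)
    c[1] = -.5
    c[size-1] = .5
    r[1] = .5
    r[size-1] = -.5
    return toeplitz(c, r)

def err(n):
    # max error of the central difference at interior points
    x = linspace(0, 10, n)
    h = x[1] - x[0]
    d = dot(central(n), cos(x)) / h
    return abs(d[1:-1] + sin(x[1:-1])).max()  # true derivative is -sin(x)

e1, e2 = err(100), err(200)
print(e1 / e2)  # close to 4 for a second order method
```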

#### 1 comment:

1. Glad to see you're back posting again! More good stuff to learn.