A quick example

Finite difference methods estimate derivatives of a function from point-evaluations of that function. The same is true for probabilistic finite differences.
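
For orientation, the classical (non-probabilistic) central difference approximates f'(x) by (f(x + h) - f(x - h)) / (2 h). A minimal standalone sketch of that rule, independent of probfindiff:

import jax.numpy as jnp

h = 0.01
dfx_classical = (jnp.sin(0.0 + h) - jnp.sin(0.0 - h)) / (2.0 * h)
print(dfx_classical, jnp.cos(0.0))  # both should be close to 1.0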

This set of notes covers the very basics of computing numerical derivatives with probfindiff. As a side quest, it explains some basic design choices.

[1]:
import jax.numpy as jnp

from probfindiff import central, differentiate, forward

First-order derivatives

At the heart of probfindiff are the function differentiate() and a set of finite difference schemes. For example, to differentiate a function with a central scheme, compute the following.

[2]:
scheme, xs = central(dx=0.01)
dfx, _ = differentiate(jnp.sin(xs), scheme=scheme)

print(dfx, jnp.cos(0.0))
1.0003637 1.0

The function differentiate() acts on point-evaluations of a function at some grid points. These points can be chosen by the user, but more often than not, they are tightly coupled to the scheme itself.

[3]:
print("xs =", xs)
print()
print("scheme =", scheme)
xs = [-0.01  0.    0.01]

scheme = FiniteDifferenceScheme(weights=DeviceArray([-5.0015533e+01, -6.9692903e-03,  5.0022503e+01], dtype=float32), covs_marginal=DeviceArray(-0.00038028, dtype=float32), order_derivative=DeviceArray(1, dtype=int32, weak_type=True))
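
The weights above are tied to the relative offsets in xs. Assuming they do not depend on an absolute location (which is how classical finite-difference weights behave), the same scheme can be reused at a different expansion point by shifting the grid before evaluating the function. A small sketch, with x0 as an illustrative expansion point:

x0 = 1.0
dfx_at_x0, _ = differentiate(jnp.sin(x0 + xs), scheme=scheme)
print(dfx_at_x0, jnp.cos(x0))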

The function differentiate() itself is so simple and lightweight that you could in fact implement it yourself.

[4]:
dfx = jnp.sin(xs) @ scheme.weights
print(dfx, jnp.cos(0.0))
1.0003637 1.0

The finite difference scheme expects that the array consists of function evaluations on a specific grid. This is important because, for instance, a smaller step-size implies different weights/coefficients and a different accuracy.
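
One way to see this coupling is to compare the weights of two central schemes with different step-sizes; a quick check, using only the constructor from above:

scheme_fine, _ = central(dx=0.01)
scheme_coarse, _ = central(dx=0.1)
print(scheme_fine.weights)
print(scheme_coarse.weights)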

The requirement of acting only on discretised functions is different from many existing finite difference implementations, which behave more like automatic differentiation (i.e., they take the function itself as input and evaluate it internally).

Why? This design choice is deliberate. In many applications, e.g. differential equations, the number of function evaluations matters. Depending on the implementation, some functions can be batched efficiently while others cannot. To make this transparent, probfindiff lets users evaluate their functions themselves. It is therefore closer to np.gradient than to automatic differentiation. (There are further advantages regarding types, compilation, and vectorisation, but these are left for a different tutorial.)
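
For comparison, a rough sketch of the np.gradient-style workflow on the same evaluations: the gradient routine returns derivative estimates at every grid point, whereas differentiate() targets the centre of the grid.

fx = jnp.sin(xs)                # user-evaluated function values
print(jnp.gradient(fx) / 0.01)  # unit-spacing gradient, rescaled by dx; one estimate per grid point
print(fx @ scheme.weights)      # probfindiff-style estimate at the centre of the grid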

Higher-order derivatives

It is easy to compute higher-order derivatives by changing the scheme accordingly.

[5]:
scheme, xs = central(dx=0.01, order_derivative=2)
d2fx, _ = differentiate(jnp.sin(xs), scheme=scheme)

print(d2fx, -jnp.sin(0.0))
-1.1569691e-06 -0.0

Higher-order methods

To increase the accuracy of the approximation, the order of the method can be increased freely.

[6]:
scheme, xs = central(dx=0.02, order_method=4)
dfx, _ = differentiate(jnp.sin(xs), scheme=scheme)

print(dfx, jnp.cos(0.0))
0.9998326 1.0
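
One way to inspect the effect is to print the absolute error for two method-orders at the same step-size. Note that in single precision, round-off can blur this comparison for very small dx; a small sketch, assuming order_method=2 is the plain second-order scheme:

scheme2, xs2 = central(dx=0.02, order_method=2)
scheme4, xs4 = central(dx=0.02, order_method=4)
dfx2, _ = differentiate(jnp.sin(xs2), scheme=scheme2)
dfx4, _ = differentiate(jnp.sin(xs4), scheme=scheme4)
print(jnp.abs(dfx2 - jnp.cos(0.0)), jnp.abs(dfx4 - jnp.cos(0.0)))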

Forward, central, and backward schemes

While central schemes tend to be more accurate than forward and backward schemes, all three are available. For example, we can replace the central scheme with a forward scheme:

[7]:
scheme, xs = forward(dx=0.02)
dfx, _ = differentiate(jnp.sin(xs), scheme=scheme)

print(dfx, jnp.cos(0.0))
1.0013572 1.0
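
A backward scheme should work analogously; a sketch, assuming a backward() constructor is exported next to central() and forward() (check the API reference for the exact name):

from probfindiff import backward

scheme, xs = backward(dx=0.02)
dfx, _ = differentiate(jnp.sin(xs), scheme=scheme)
print(dfx, jnp.cos(0.0))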

What has been left out?

In all the examples above, we have ignored the second output of differentiate(): the uncertainty associated with the estimate. Its meaning, and how to make the most of it, are the subject of a different tutorial.
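
That said, the second output is returned alongside every estimate and can already be inspected; what the number means is deferred to that tutorial. A minimal peek, reusing the scheme and grid from above:

dfx, uncertainty = differentiate(jnp.sin(xs), scheme=scheme)
print(uncertainty)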