Let $X_t$ be a Lévy process with Lévy–Khintchine exponent $\psi(\xi^2)$ for a complete Bernstein function $\psi$ (some examples are given below). Let $M_t = \sup_{s \le t} X_s$ be the supremum functional of $X_t$. Given some growth condition on $\psi$, a formula for $P(M_t < x)$ was given in my recent preprint written with Jacek Małecki and Michał Ryznar (the same one in which the conjecture is solved). In this blog post I would like to discuss a numerical scheme for calculating $P(M_t < x)$, based on this formula.

It is perhaps worth noting that if $\tau_x$ denotes the smallest $t$ such that $X_t \ge x$, then $P(\tau_x > t) = P(M_t < x)$, so this note could equally well bear the title *Numerical computation of the distribution of the first passage time*.

This post is not about the supremum functional or first passage times as such: it is about numerical approximation of a rather complicated object. No knowledge of Lévy processes, complete Bernstein functions or other theoretical objects is required to follow it.

**1.** Some properties of complete Bernstein functions are discussed in my previous post on the conjecture. Here it is enough to know that a complete Bernstein function $\psi$ extends to a holomorphic function on $\mathbb{C} \setminus (-\infty, 0]$ such that the imaginary part of $\psi$ is positive in the upper complex half-plane and negative in the lower one. The most important examples are $\psi(\xi) = \xi$ (corresponding Lévy process: Brownian motion), $\psi(\xi) = \xi^{\alpha/2}$ with $\alpha \in (0, 2)$ (symmetric $\alpha$-stable process), $\psi(\xi) = \log(1 + \xi)$ (variance gamma process) and $\psi(\xi) = \sqrt{1 + \xi} - 1$ (relativistic process).

**2.** The formula for the cumulative distribution function of the supremum functional $M_t$ is rather complicated (apart from a couple of well-known special cases, such as the Brownian motion). It involves not only the values of $\psi(\xi)$ for positive $\xi$, but also the jump of $\psi$ along the negative half-axis. In general, this jump should be understood in the distributional sense, and it can be a measure. However, for simplicity, we only consider the case when $\psi$ extends to a continuous function in the closed upper complex half-plane. We begin with the following auxiliary functions:

$$\theta(k) = \frac{1}{\pi} \int_0^\infty \log\left( \frac{\psi(k^2) - \psi(\zeta^2)}{\psi'(k^2)(k^2 - \zeta^2)} \right) \frac{k \, d\zeta}{k^2 - \zeta^2} \, ,$$

$$G(k, x) = \frac{\sqrt{\psi(k^2) \psi'(k^2)}}{\pi} \int_0^\infty \mathrm{Im} \left( \frac{1}{\psi(k^2) - \psi^+(-y^2)} \right) \exp\left( \frac{1}{\pi} \int_0^\infty \log\left( \frac{1 - \psi(\zeta^2)/\psi(k^2)}{1 - \zeta^2/k^2} \right) \frac{y \, d\zeta}{y^2 + \zeta^2} - x y \right) dy \, ,$$

$$F(k, x) = \sin(k x + \theta(k)) - G(k, x) \, .$$

Here $\psi^+(-y^2)$ denotes the boundary limit of $\psi$ at $-y^2$, approached from the upper half-plane. Suppose that $e^{-t \psi(k^2)} \sqrt{\psi'(k^2)/\psi(k^2)}$ is integrable in $k \in (0, \infty)$. Then

$$P(M_t < x) = \frac{2}{\pi} \int_0^\infty e^{-t \psi(k^2)} \sqrt{\frac{\psi'(k^2)}{\psi(k^2)}} \, F(k, x) \, dk \, . \qquad (1)$$

For some special cases this formula was proved in my older preprint, while the general theorem is a joint result with Jacek Małecki and Michał Ryznar. The role of the functions $F(k, \cdot)$ is explained in the articles and exceeds the scope of this note. Just a remark: for the Brownian motion, $\theta(k) = 0$ and $G(k, x) = 0$, so that $F(k, x) = \sin(k x)$.
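As a quick sanity check (a stdlib-Python sketch, independent of the Mathematica code discussed below): for $\psi(\xi) = \xi$ one has $\theta(k) = 0$ and $G(k, x) = 0$, so formula (1) reduces to $\tfrac{2}{\pi}\int_0^\infty e^{-tk^2} \tfrac{\sin kx}{k}\, dk$, which by the reflection principle should equal $\mathrm{erf}(x/(2\sqrt{t}))$:

```python
import math

def supremum_brownian(t, x, n=4000, upper=10.0):
    # Formula (1) for psi(xi) = xi: theta(k) = 0, G(k, x) = 0 and
    # sqrt(psi'(k^2)/psi(k^2)) = 1/k, so the integrand is exp(-t k^2) sin(k x)/k.
    def f(k):
        return math.exp(-t * k * k) * (math.sin(k * x) / k if k > 0 else x)
    # composite Simpson rule on [0, upper]; the Gaussian factor makes the
    # truncation error beyond upper = 10 negligible
    h = upper / n
    s = f(0) + f(upper)
    for i in range(1, n):
        s += f(i * h) * (4 if i % 2 else 2)
    return 2 / math.pi * s * h / 3

# Reflection principle: P(M_t < x) = erf(x / (2 sqrt(t))) for this normalization.
print(supremum_brownian(1.0, 1.0), math.erf(0.5))
```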

**3.** The numerical scheme based on formula (1) is not quite straightforward: there are four nested integrals involved, one of them in the exponent and another in the argument of the sine function. And if this were not enough, there are essential numerical stability issues due to the many cancellations in the formula.

One thing should be clarified here: I know next to nothing about numerical computations. There are two reasons why I am writing this post. One is that I believe it may be useful for someone working in applications of Lévy processes. The other is that I hope someone more experienced in numerical mathematics (perhaps the same person) will visit this page, read the code below and help me improve it. I welcome all comments.

The code below is written in the Mathematica language, and it was executed on a (rather old) HP laptop with a Pentium dual-core 1.66 GHz processor and 1 GB RAM, running Mathematica 8.0.1.0; the *MathematicaMark8* benchmark result is 0.27. Instead of just giving the final code, I prefer to start with a naive attempt, then describe the stability issues and how one can resolve them. The final code is given at the end of this note.

**4.** We start with $\theta(k)$. After the substitution $\zeta = k z$ and folding the integral over $z \in (1, \infty)$ back onto $(0, 1)$ via $z \mapsto 1/z$, we obtain a slightly better formula,

$$\theta(k) = \frac{1}{\pi} \int_0^1 \log\left( \frac{\psi(k^2) - \psi(k^2 z^2)}{(\psi(k^2/z^2) - \psi(k^2))\, z^2} \right) \frac{dz}{1 - z^2} \, .$$

Hence, we define:

```mathematica
theta[k_?NumericQ] := theta[k] =
  NIntegrate[
    Log[(psi[k^2] - psi[(k z)^2])/(psi[(k/z)^2] - psi[k^2])/z^2]/(1 - z^2),
    {z, 0, 1}
  ]/Pi;
```
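As a cross-check of this definition (a Python sketch, not part of the Mathematica code): for $\psi(\xi) = \xi^{\alpha/2}$ the argument of the logarithm simplifies to $z^{\alpha - 2}$, and the integral can be evaluated in closed form, $\theta(k) = (2 - \alpha)\pi/8$, using $\int_0^1 \log z/(1 - z^2)\, dz = -\pi^2/8$:

```python
import math

def theta_stable(alpha, n=200000):
    # theta = (1/pi) * Integral_0^1 log(z^(alpha - 2))/(1 - z^2) dz,
    # computed with the midpoint rule; the integrand has only a mild,
    # integrable log singularity at z = 0 and a removable one at z = 1
    total = 0.0
    for i in range(n):
        z = (i + 0.5) / n
        total += (alpha - 2) * math.log(z) / (1 - z * z)
    return total / (n * math.pi)

alpha = 1.0  # the Cauchy process
print(theta_stable(alpha), (2 - alpha) * math.pi / 8)
```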

Note that we memoize computed values by mixing `Set` and `SetDelayed` in the definition `theta[k_] := theta[k] = ...`. The first reference to, say, `theta[1]` makes the kernel execute the command `theta[1] = NIntegrate[...]`. After the integral is computed, a new definition `theta[1] = `*number* is added to the kernel, and this definition takes precedence over the general definition of `theta[k_]`. Hence, when `theta[1]` is referred to again, no integration is performed.
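The `SetDelayed`/`Set` trick is ordinary memoization. For readers more familiar with Python, here is the same pattern sketched with `functools.lru_cache` (the function body is a dummy stand-in for the expensive `NIntegrate` call):

```python
import functools

calls = 0

@functools.lru_cache(maxsize=None)
def theta(k):
    # stand-in for the expensive NIntegrate[...] call; the decorator stores
    # the result, just like the assignment theta[k] = ... does in Mathematica
    global calls
    calls += 1
    return 0.1 * k  # dummy value

theta(1.0)
theta(1.0)  # served from the cache; no "integration" is performed again
print(calls)  # 1
```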

The exponent in the definition of $G(k, x)$ is computed in a similar manner, but we take the factor $e^{-x y}$ outside (so that the memoized values of `exponent[k, y]` do not depend on `x`), and again use a simple substitution, this time $\zeta = y z$. That is, we use the formula

$$\int_0^\infty \log\left( \frac{1 - \psi(\zeta^2)/\psi(k^2)}{1 - \zeta^2/k^2} \right) \frac{y \, d\zeta}{y^2 + \zeta^2} = \int_0^\infty \log\left( \frac{1 - \psi(y^2 z^2)/\psi(k^2)}{1 - y^2 z^2 / k^2} \right) \frac{dz}{1 + z^2} \, .$$

We let:

```mathematica
exponent[k_?NumericQ, y_?NumericQ] := exponent[k, y] =
  NIntegrate[
    Log[(1 - psi[y^2 z^2]/psi[k^2])/(1 - y^2 z^2/k^2)]/(1 + z^2),
    {z, 0, Infinity}
  ];
```

and we define $G(k, x)$, which is essentially a Laplace transform:

```mathematica
G[k_?NumericQ, x_?NumericQ] := G[k, x] =
  Sqrt[psi[k^2] dpsi[k^2]]/Pi NIntegrate[
    Im[1/(psi[k^2] - psi[-y^2])] Exp[exponent[k, y]/Pi - x y],
    {y, 0, Infinity}
  ];
```

In the above definition we use a nice feature of Mathematica: functions with a branch cut along the negative half-axis, evaluated at negative arguments, return the boundary limit approached from above. This rule applies, for example, to power functions and logarithms. The function `dpsi` is simply the derivative of `psi`. The definition of $F(k, x)$ is now straightforward:

```mathematica
F[k_?NumericQ, x_?NumericQ] :=
  Sin[k x + theta[k]] - G[k, x];
```
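Incidentally, Python's `cmath` module follows the same branch-cut convention as Mathematica: for a negative real argument stored as a complex number with a `+0.0` imaginary part, functions with a branch cut along the negative half-axis return the limit from the upper half-plane:

```python
import cmath

# sqrt and log have a branch cut along the negative real axis;
# at z = -a + 0i they return the boundary limit from above
print(cmath.sqrt(complex(-4.0, 0.0)))  # 2j
print(cmath.log(complex(-1.0, 0.0)))   # pi*i, i.e. 3.141592653589793j
```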

It remains to define the main function. We use formula (1):

```mathematica
supremum[t_?NumericQ, x_?NumericQ] := supremum[t, x] =
  2/Pi NIntegrate[
    Exp[-t psi[k^2]] Sqrt[dpsi[k^2]/psi[k^2]] F[k, x],
    {k, 0, Infinity}
  ];
```

**5.** Let us check what happens if $\psi(\xi) = \xi^{\alpha/2}$, the symmetric $\alpha$-stable case. It takes less than 15 seconds to make a plot of $x \mapsto F(1, x)$:

Plotting $k \mapsto F(k, 1)$ (which gives the same picture, since in this special case $F(k, x)$ depends only on the product $k x$) consumes much more time, nearly 9 minutes. This is because for the latter plot the values of `exponent[k, y]` cannot be re-used: `k` is no longer a constant. For the same reason, the computation of `supremum[t, x]` takes a lot of time. But we can do little about it.

Consider now the relativistic case, $\psi(\xi) = \sqrt{1 + \xi} - 1$. In this case $\theta(k)$ increases from $0$ to $\pi/8$. Evaluation of `theta[1]`, `theta[2]` and `theta[10^10]` gives correct results, but after typing `theta[10^(-10)]`, we obtain an error: *The integrand (...) has evaluated to Overflow, Indeterminate, or Infinity for all sampling points in the region with boundaries {{0,1}}*. Numerical errors become visible already for moderately small $k$, as seen in the plot of $\theta(k)$.

This error is caused by numerical instability of the argument of `Log` in the definition of `theta` when `z` is close to $1$. Replacing this argument by:

`(Sqrt[(k/z)^2 + 1] + Sqrt[k^2 + 1])/(Sqrt[k^2 + 1] + Sqrt[(k z)^2 + 1])`
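To see why this rationalization helps, here is the same cancellation reproduced in double-precision Python for the relativistic $\psi(\xi) = \sqrt{1 + \xi} - 1$ (the function names are ad hoc): for very small `k` the naive ratio evaluates to $0/0$, while the rationalized form is exact:

```python
import math

def psi(y):                      # relativistic: psi(y) = sqrt(1 + y) - 1
    return math.sqrt(1.0 + y) - 1.0

def thetalog_naive(k, z):
    return (psi(k**2) - psi((k * z)**2)) / (psi((k / z)**2) - psi(k**2)) / z**2

def thetalog_stable(k, z):
    return (math.sqrt((k / z)**2 + 1) + math.sqrt(k**2 + 1)) / \
           (math.sqrt(k**2 + 1) + math.sqrt((k * z)**2 + 1))

k, z = 1e-10, 0.5
# psi(k^2) underflows to exactly 0.0 in double precision,
# so the naive formula evaluates 0.0/0.0:
try:
    thetalog_naive(k, z)
    naive_ok = True
except ZeroDivisionError:
    naive_ok = False
print(naive_ok, thetalog_stable(k, z))  # False 1.0
```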

solves the problem. When $k$ is small, Mathematica still gives a warning: *NIntegrate failed to converge to prescribed accuracy after 9 recursive bisections in z*, but this is simply because the value of the integral is close to zero. We can get rid of this message by setting `AccuracyGoal` to a finite value (instead of the default `Infinity`).

Similar stability problems may occur in the definition of `exponent` when `y z` is close to `k`, so again we replace the argument of the `Log` by:

`(Sqrt[1 + k^2] + 1)/(Sqrt[1 + k^2] + Sqrt[1 + (y z)^2])`

Finally, the definition of `psi` should be changed to the stable form `psi[y_] = y/(Sqrt[1 + y] + 1)`. With these corrections, the computation of `supremum[1, 1]` is stable (well, at least Mathematica does not raise any warnings) and takes less than 3 minutes.
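The reason the stable form of `psi` is needed is the same loss of significance: for arguments below machine epsilon, $\sqrt{1 + y} - 1$ evaluates to exactly zero in double precision, while $y/(\sqrt{1 + y} + 1)$ retains full relative accuracy. In Python terms:

```python
import math

y = 1e-20
naive = math.sqrt(1.0 + y) - 1.0        # catastrophic cancellation: exactly 0.0
stable = y / (math.sqrt(1.0 + y) + 1)   # ~ y/2, full relative accuracy
print(naive, stable)  # 0.0 5e-21
```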

Let us now come back to the first example, $\psi(\xi) = \xi^{\alpha/2}$. The execution of `supremum[1, 1]`, after ca. 15 minutes, results in an integrability error in the integral in `k` near zero. Why? Well, in the present case $F(k, x)$ decays, as $k \to 0^+$, like a constant times $k^{\alpha/2}$. Hence, the integrand in (1) behaves near zero as $k^{\alpha/2 - 1}$, which is integrable. However, a tiny numerical instability in the computation of the Laplace transform $G(k, x)$ results in a large *relative* error in $F(k, x)$ (the sine term and $G(k, x)$ nearly cancel), which gives rise to a large *absolute* error in the integrand. The simplest solution is to replace $F(k, x)$ for $k$ close to zero by an appropriate approximation, such as $C k^{\alpha/2}$. In the general case, we choose the approximation $F(k, x) \approx C \, k \sqrt{\psi_2'(k)}$, where $\psi_2(k) = \psi(k^2)$, and the correct choice for $C$ is $C = F(\varepsilon/x, x) / (\tfrac{\varepsilon}{x} \sqrt{\psi_2'(\varepsilon/x)})$ for some small (but not too small!) $\varepsilon$.
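The matching step is easiest to see in the Brownian case, where $F(k, x) = \sin(kx)$ and $\psi_2'(k) = 1$: the approximation then reads $\sin(kx) \approx (\sin\varepsilon/\varepsilon)\, kx$ for $k \le \varepsilon/x$. A Python sketch of this special case:

```python
import math

def F(k, x):
    # Brownian case: theta(k) = 0 and G(k, x) = 0, so F(k, x) = sin(k x)
    return math.sin(k * x)

eps, x = 1e-2, 1.0
# matching constant; for psi(xi) = xi one has psi2'(k) = 1
C = F(eps / x, x) / (eps / x)
k = 0.5 * eps / x  # a point below the matching threshold eps/x
print(C * k, F(k, x))  # nearly identical
```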

One more thing can be improved. We have just dealt with the unstable behavior of the integrand in (1) near zero. But what makes the computation of `supremum` so time-consuming is the oscillatory part. Note that only the sine term in $F(k, x)$ has oscillatory behavior. Hence, the calculation can be easily sped up (sometimes even more than 10 times!) by expanding the integral over, say, $(1/x, \infty)$ as the difference of two integrals: one containing the sine term, to which Mathematica can apply its algorithms for oscillatory integrands, and one containing $G(k, x)$.

**6.** The final code is as follows:

```mathematica
(* parameters *)
accuracy = 5;
thetaaccuracy = 10;
exponentaccuracy = 10;
gaccuracy = Infinity;
e = 10^(-2);

(* definitions *)
theta[k_?NumericQ] := theta[k] =
  NIntegrate[
    Log[thetalog[k, z]]/(1 - z^2),
    {z, 0, 1},
    AccuracyGoal -> thetaaccuracy
  ]/Pi;

exponent[k_?NumericQ, y_?NumericQ] := exponent[k, y] =
  NIntegrate[
    Log[exponentlog[k, y, z]]/(1 + z^2),
    {z, 0, Infinity},
    AccuracyGoal -> exponentaccuracy
  ];

G0[k_?NumericQ, x_?NumericQ] := G0[k, x] =
  NIntegrate[
    imag[k, y] Exp[exponent[k, y]/Pi - x y],
    {y, psimin, psimax},
    AccuracyGoal -> gaccuracy
  ]/Pi;

F[k_?NumericQ, x_?NumericQ] :=
  Sin[k x + theta[k]] - Sqrt[psi2[k] dpsi2[k]] G0[k, x];

supremum[t_?NumericQ, x_?NumericQ] := supremum[t, x] =
  2/Pi (
    F[e/x, x]/(e/x Sqrt[dpsi2[e/x]]) NIntegrate[
      Exp[-t psi2[k]] k dpsi2[k]/Sqrt[psi2[k]],
      {k, 0, e/x},
      AccuracyGoal -> accuracy
    ] +
    NIntegrate[
      Exp[-t psi2[k]] (Sqrt[dpsi2[k]/psi2[k]] Sin[k x + theta[k]] -
        dpsi2[k] G0[k, x]),
      {k, e/x, 1/x},
      AccuracyGoal -> accuracy
    ] +
    NIntegrate[
      Exp[-t psi2[k]] Sqrt[dpsi2[k]/psi2[k]] Sin[k x + theta[k]],
      {k, 1/x, Infinity},
      AccuracyGoal -> accuracy
    ] -
    NIntegrate[
      Exp[-t psi2[k]] dpsi2[k] G0[k, x],
      {k, 1/x, Infinity},
      AccuracyGoal -> accuracy
    ]
  );
```

Furthermore, the following definitions, specific to a given $\psi$, are needed:

```mathematica
psi[y_] = (* complete Bernstein function *);
psi2[y_] = psi[y^2];
dpsi[y_] = psi'[y];
dpsi2[y_] = dpsi[y^2];
thetalog[k_, z_] = FullSimplify[
  (psi2[k] - psi2[k z])/(psi2[k/z] - psi2[k])/z^2,
  k > 0 && z > 0
];
exponentlog[k_, y_, z_] = FullSimplify[
  (1 - psi2[y z]/psi2[k])/(1 - (y z/k)^2),
  k > 0 && y > 0 && z > 0
];
psimin = 0;
psimax = Infinity;
imag[k_, y_] = FullSimplify[
  Im[1/(psi2[k] - psi[-y^2])],
  k > 0 && psimin < y < psimax
];
```

If possible, the above definitions should be changed to numerically stable forms, and `psimin`, `psimax` should be changed to the minimal and the maximal $y$ for which the imaginary part of $\psi^+(-y^2)$ is nonzero. For example, for the symmetric $\alpha$-stable process:

```mathematica
alpha = (* number *);
psi[y_] = y^(alpha/2);
psi2[y_] = y^alpha;
dpsi[y_] = alpha/2 y^(alpha/2 - 1);
dpsi2[y_] = alpha/2 y^(alpha - 2);
thetalog[k_, z_] = z^(alpha - 2);
exponentlog[k_, y_, z_] = (1 - psi2[y z]/psi2[k])/(1 - y^2 z^2/k^2);
psimin = 0;
psimax = Infinity;
imag[k_, y_] = Sin[Pi alpha/2] y^alpha/
  (k^(2 alpha) - 2 (k y)^alpha Cos[Pi alpha/2] + y^(2 alpha));
```
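The closed form for `imag` follows from the boundary limit $\psi^+(-y^2) = y^\alpha e^{i\pi\alpha/2}$; it can be cross-checked directly with complex arithmetic (a Python sketch):

```python
import cmath
import math

alpha, k, y = 1.5, 2.0, 3.0
# boundary value of psi2 on the negative half-axis, approached from above
psi_plus = y**alpha * cmath.exp(1j * math.pi * alpha / 2)
direct = (1.0 / (k**alpha - psi_plus)).imag
closed = math.sin(math.pi * alpha / 2) * y**alpha / (
    k**(2 * alpha) - 2 * (k * y)**alpha * math.cos(math.pi * alpha / 2)
    + y**(2 * alpha))
print(direct, closed)
```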

When $\alpha$ is rational, it is a good idea to manually remove singularities in `exponentlog`. For the relativistic process:

```mathematica
psi[y_] = y/(Sqrt[1 + y] + 1);
psi2[y_] = psi[y^2];
dpsi[y_] = 1/(2 Sqrt[1 + y]);
dpsi2[y_] = dpsi[y^2];
thetalog[k_, z_] = (Sqrt[(k/z)^2 + 1] + Sqrt[k^2 + 1])/
  (Sqrt[k^2 + 1] + Sqrt[(k z)^2 + 1]);
exponentlog[k_, y_, z_] = (Sqrt[1 + k^2] + 1)/
  (Sqrt[1 + k^2] + Sqrt[1 + (y z)^2]);
psimin = 1;
psimax = Infinity;
imag[k_, y_] = Sqrt[y^2 - 1]/(k^2 + y^2);
```
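Here the closed form for `imag` comes from $\psi^+(-y^2) = i\sqrt{y^2 - 1} - 1$ for $y > 1$, so that $\psi(k^2) - \psi^+(-y^2) = \sqrt{1 + k^2} - i\sqrt{y^2 - 1}$. A quick cross-check in Python:

```python
import cmath
import math

k, y = 2.0, 3.0
# psi(z) = sqrt(1 + z) - 1; at z = -y^2, approached from above,
# cmath.sqrt returns the boundary limit i*sqrt(y^2 - 1)
psi_plus = cmath.sqrt(complex(1 - y * y, 0.0)) - 1
direct = (1.0 / ((math.sqrt(1 + k * k) - 1) - psi_plus)).imag
closed = math.sqrt(y * y - 1) / (k * k + y * y)
print(direct, closed)
```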

Finally, for the variance gamma process:

```mathematica
psi[y_] = Log[1 + y];
psi2[y_] = psi[y^2];
dpsi[y_] = psi'[y];
dpsi2[y_] = dpsi[y^2];
thetalog[k_, z_] =
  Log[(1 + k^2)/(1 + (k z)^2)]/Log[(1 + (k/z)^2)/(1 + k^2)]/z^2;
exponentlog[k_, y_, z_] =
  Log[(1 + k^2)/(1 + (y z)^2)]/Log[1 + k^2]/(1 - (y z)^2/k^2);
psimin = 1;
psimax = Infinity;
imag[k_, y_] = Pi/(Log[(1 + k^2)/(y^2 - 1)]^2 + Pi^2);
```
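For the variance gamma process the boundary limit is $\psi^+(-y^2) = \log(y^2 - 1) + i\pi$ for $y > 1$, so $\mathrm{Im}(1/(\psi_2(k) - \psi^+(-y^2))) = \pi/(\log^2\tfrac{1 + k^2}{y^2 - 1} + \pi^2)$; a quick cross-check in Python:

```python
import cmath
import math

k, y = 2.0, 3.0
# psi(z) = log(1 + z); boundary limit at z = -y^2 from the upper half-plane:
psi_plus = cmath.log(complex(1 - y * y, 0.0))   # log(y^2 - 1) + i*pi
direct = (1.0 / (math.log(1 + k * k) - psi_plus)).imag
closed = math.pi / (math.log((1 + k * k) / (y * y - 1))**2 + math.pi**2)
print(direct, closed)
```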

Here, however, the integral in (1) converges rather slowly when $t$ is small, so extra care is needed with the oscillatory integral. In such cases, when $\psi$ grows slowly at infinity, it is a good idea to replace `theta[k]` in the third `NIntegrate` in the definition of `supremum` by the constant $\theta(\infty) = \lim_{k \to \infty} \theta(k)$, so that Mathematica can apply its efficient algorithm for oscillatory integrals. For the variance gamma process, this constant is equal to $\pi/4$, and this is a typical limit for slowly growing $\psi$. Of course, one needs to update also the fourth `NIntegrate` in this case:

```mathematica
supremum[t_?NumericQ, x_?NumericQ] := supremum[t, x] =
  2/Pi (
    F[e/x, x]/(e/x Sqrt[dpsi2[e/x]]) NIntegrate[
      Exp[-t psi2[k]] k dpsi2[k]/Sqrt[psi2[k]],
      {k, 0, e/x},
      AccuracyGoal -> accuracy
    ] +
    NIntegrate[
      Exp[-t psi2[k]] (Sqrt[dpsi2[k]/psi2[k]] Sin[k x + theta[k]] -
        dpsi2[k] G0[k, x]),
      {k, e/x, 1/x},
      AccuracyGoal -> accuracy
    ] +
    NIntegrate[
      Exp[-t psi2[k]] Sqrt[dpsi2[k]/psi2[k]] Sin[k x + thetainfinity],
      {k, 1/x, Infinity},
      AccuracyGoal -> accuracy
    ] +
    NIntegrate[
      Exp[-t psi2[k]] (2 Sqrt[dpsi2[k]/psi2[k]]
        Cos[k x + (theta[k] + thetainfinity)/2]
        Sin[(theta[k] - thetainfinity)/2] - dpsi2[k] G0[k, x]),
      {k, 1/x, Infinity},
      AccuracyGoal -> accuracy
    ]
  );
```
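The integrand of the fourth `NIntegrate` uses the identity $\sin A - \sin B = 2 \cos\tfrac{A + B}{2} \sin\tfrac{A - B}{2}$ with $A = kx + \theta(k)$ and $B = kx + \theta(\infty)$, which separates the bounded, non-oscillatory remainder from the pure sine term. A quick numerical check of the identity:

```python
import math

k, x = 3.0, 1.5
theta_k, theta_inf = 0.6, math.pi / 4   # arbitrary sample values
lhs = math.sin(k * x + theta_k) - math.sin(k * x + theta_inf)
rhs = 2 * math.cos(k * x + (theta_k + theta_inf) / 2) * \
      math.sin((theta_k - theta_inf) / 2)
print(lhs, rhs)
```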

Here `thetainfinity` is the constant $\theta(\infty)$ (equal to $\pi/4$ for the variance gamma process). The above modification slows down the computation significantly, but on the other hand, it makes the algorithm numerically stable even for small `t`.

**7.** When `psimin` is nonzero, the Laplace transform `G0[k, x]` vanishes exponentially fast, and so the computation is relatively fast. For example, in the relativistic case, the first call to `supremum[1, 1]` takes nearly 40 seconds to compute, but thanks to memoization, each subsequent call to `supremum[t, 1]` with `t` close to $1$ requires only tenths of a second. However, any change of the `x` argument in `supremum[t, x]`, or a significant change of `t`, requires another 40 seconds. Similar times are obtained for the variance gamma process; in this case, however, the computation time increases significantly as `t` goes to zero.

In the $\alpha$-stable case, the function under the Laplace transform in `G0` has power-type behavior at infinity, and also the singularities in `G0` are of higher order. For these reasons, the computation of `supremum[1, 1]` takes significantly more time, approximately 5 minutes.

Obviously, in the special cases considered above we could do better. For example, for the symmetric $\alpha$-stable process, we can simply let `theta[_] = (2 - alpha)/8 Pi`. However, my goal was to study the general picture rather than to focus on one particular case. Also, I only wrote about efficiency; it would be very interesting to compare the above algorithm with, say, Monte Carlo methods. Accuracy of the method described above is a completely different topic. I am rather convinced that it is possible to *prove* that the error is small in an appropriate sense. However, I know far too little about numerical methods to do that myself.

. However, my goal was to study the general picture rather than focus on one particular case. Also, I only wrote about efficiency; it would be very interesting to compare the above algorithm with, say, Monte Carlo methods. Accuracy of the method described above is a completely different topic. I am rather convinced that it is possible to *prove* that the error is small in an appropriate sense. However, I know far too little about numerical methods to do that myself.