fadin.solver.FaDIn

class fadin.solver.FaDIn(n_dim, kernel, init='random', optim_mask=None, kernel_length=1, delta=0.01, optim='RMSprop', params_optim={}, max_iter=2000, ztzG_approx=True, grad_kernel=None, tol=0.0001, random_state=None)

Define the FaDIn framework for estimating Hawkes processes.

The framework is detailed in:

Guillaume Staerman, Cédric Allain, Alexandre Gramfort, Thomas Moreau. FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels. https://arxiv.org/abs/2210.04635

FaDIn minimizes the discretized L2 loss of Hawkes processes, whose intensity is defined as a convolution between the kernels \(\phi_{ij}\) and the sums of Dirac functions \(z_i := \sum_{t^i_n \in \mathscr{F}^i_T} \delta_{t^i_n}\) located at the event occurrences \(t^i_n\):

\[\forall i \in [1 \dots p], \quad \lambda_i(t) = \mu_i + \sum_{j=1}^p \phi_{ij} * z_j(t), \quad t \in [0, T]\]

where

  • \(p\) is the dimension of the process

  • \(\mu_i\) are the baseline intensities

  • \(\phi_{ij}\) are the kernels

  • \(z_j(t)\) is the activation vector on the discretized grid.
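
To make the discretization concrete, here is a minimal sketch (not FaDIn's internal code) of the discretized intensity of a one-dimensional process, with hypothetical parameter values and a truncated exponential kernel:

    import torch

    delta = 0.01                        # step size of the discretization grid
    T = 10.0                            # end time of the process
    L = int(T / delta)                  # number of grid points
    events = torch.tensor([1.2, 3.4, 3.5, 7.8])

    # z: sum of Dirac functions projected on the grid (event counts per bin)
    z = torch.zeros(L)
    idx = (events / delta).round().long()
    z.index_add_(0, idx, torch.ones(len(idx)))

    # phi: kernel discretized on its support [0, kernel_length]
    kernel_length = 1.0
    grid = torch.arange(0, kernel_length, delta)
    phi = 1.5 * torch.exp(-1.5 * grid)  # hypothetical truncated exponential kernel

    # lambda(t) = mu + (phi * z)(t): causal convolution on the grid
    mu = 0.5
    conv = torch.nn.functional.conv1d(
        z.view(1, 1, -1), phi.flip(0).view(1, 1, -1), padding=len(phi) - 1
    ).squeeze()[:L]
    intensity = mu + conv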

Parameters:
n_dim : int

Dimension of the underlying Hawkes process.

kernel : str or callable

Either one of {'raised_cosine' | 'truncated_gaussian' | 'truncated_exponential'}, or a custom kernel given as a callable.

init : str or dict, default='random'

Initialization strategy for the parameters of the Hawkes process. If set to 'random', the parameters are initialized randomly. If set to 'moment_matching_max', the parameters are initialized using the moment matching method in max mode. If set to 'moment_matching_mean', the parameters are initialized using the moment matching method in mean mode. Otherwise, the parameters are initialized using the given dictionary, which must contain the following keys (an illustrative dictionary is sketched below):

  • 'baseline': tensor, shape (n_dim,): initial baseline.

  • 'alpha': tensor, shape (n_dim, n_dim): initial alpha.

  • 'kernel': list of tensors of shape (n_dim, n_dim): initial kernel parameters.
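
For instance, an explicit initialization dictionary might look as follows (all values and the choice of kernel are illustrative):

    import torch
    from fadin.solver import FaDIn

    n_dim = 2
    # Illustrative initialization for a 2-dimensional process with a
    # 'truncated_exponential' kernel (one decay parameter per kernel).
    init = {
        'baseline': torch.tensor([0.5, 0.8]),
        'alpha': 0.1 * torch.ones(n_dim, n_dim),
        'kernel': [2.0 * torch.ones(n_dim, n_dim)],  # one tensor per kernel parameter
    }
    solver = FaDIn(n_dim=n_dim, kernel='truncated_exponential', init=init)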

optim_mask : dict of tensor or None, default=None

Dictionary containing the masks for the optimization of the parameters of the Hawkes process. If set to None, all parameters are optimized. The dictionary must contain the following keys (an illustrative mask is sketched below):

  • 'baseline': tensor of shape (n_dim,), or None.

    Tensor of the same shape as the baseline vector, with values in {0, 1}. Baseline coordinates where the tensor is equal to 0 will not be optimized. If set to None, all coordinates of baseline will be optimized.

  • 'alpha': tensor of shape (n_dim, n_dim), or None.

    Tensor of the same shape as the alpha tensor, with values in {0, 1}. Alpha coordinates and kernel parameters where alpha_mask = 0 will not be optimized. If set to None, all coordinates of alpha and all kernel parameters will be optimized.
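
For instance, a mask freezing the second baseline coordinate and the (0, 1) interaction might look as follows (values are illustrative):

    import torch
    from fadin.solver import FaDIn

    n_dim = 2
    optim_mask = {
        # Freeze the second baseline coordinate.
        'baseline': torch.tensor([1.0, 0.0]),
        # Freeze the (0, 1) interaction and its kernel parameters.
        'alpha': torch.tensor([[1.0, 0.0],
                               [1.0, 1.0]]),
    }
    solver = FaDIn(n_dim=n_dim, kernel='raised_cosine', optim_mask=optim_mask)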

kernel_length : float, default=1

Length of kernels in the Hawkes process.

delta : float, default=0.01

Step size of the discretization grid.

optim : str in {'RMSprop' | 'Adam' | 'GD'}, default='RMSprop'

The algorithm used to optimize the parameters of the Hawkes process.

params_optim : dict, {'lr', ...}, default=dict()

Learning rate and other parameters of the chosen optimization algorithm, passed as keyword arguments to the constructor of the torch.optim optimizer selected via the optim parameter. If 'lr' is not given, it is set to 1e-3.
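
For instance, with the default RMSprop optimizer, a configuration might look as follows (the extra 'alpha' key is RMSprop's smoothing constant; the values are illustrative):

    from fadin.solver import FaDIn

    # These keyword arguments are passed to torch.optim.RMSprop.
    params_optim = {'lr': 1e-3, 'alpha': 0.99}
    solver = FaDIn(n_dim=1, kernel='truncated_gaussian',
                   params_optim=params_optim)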

max_iter : int, default=2000

Maximum number of iterations during fit.

ztzG_approx : bool, default=True

If False, compute the exact ztzG precomputation constant, which is the computational bottleneck of FaDIn. If True, approximate ztzG with a Toeplitz matrix that does not take edge effects into account.

grad_kernel : None or callable, default=None

If kernel is in {'raised_cosine' | 'truncated_gaussian' | 'truncated_exponential'}, the gradient function is already implemented. If kernel is a custom callable, the corresponding gradient function must be given.

tol : float, default=1e-4

The tolerance of the solver: iterations stop when the stopping criterion falls below it. If the criterion is never reached, the solver runs for max_iter iterations.

random_state : int, RandomState instance or None, default=None

Set the torch seed to random_state. If set to None, the torch seed is set to 0.

Attributes:
baseline_ : tensor, shape (n_dim,)

Return the fitted baseline parameter of the Hawkes process.

alpha_ : tensor, shape (n_dim, n_dim)

Return the fitted alpha parameter of the Hawkes process.

kernel_ : list of tensor

Return the fitted kernel parameters of the Hawkes process.

param_baseline_ : tensor, shape (max_iter, n_dim)

Baseline parameter of the Hawkes process for each fit iteration.

param_baseline_noise_ : tensor, shape (max_iter, n_dim)

Baseline noise parameter of the Hawkes process for each fit iteration.

param_alpha_ : tensor, shape (max_iter, n_dim, n_dim)

Weight parameter of the Hawkes process for each fit iteration.

param_kernel_ : list of tensor

List of tensors containing the kernel parameters for each fit iteration. The length of the list depends on the number of kernel parameters. Each tensor has shape (n_dim, n_dim).

v_loss_ : tensor, shape (n_iter,)

Loss across iterations. If there is no early stopping, n_iter is equal to max_iter.
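
After fitting, the stored loss can be used to inspect convergence. A minimal sketch, assuming matplotlib is available and solver is a fitted FaDIn instance:

    import matplotlib.pyplot as plt

    # Plot the discretized L2 loss recorded at each iteration of fit.
    plt.plot(solver.v_loss_)
    plt.xlabel('iteration')
    plt.ylabel('discretized L2 loss')
    plt.show()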

__init__(n_dim, kernel, init='random', optim_mask=None, kernel_length=1, delta=0.01, optim='RMSprop', params_optim={}, max_iter=2000, ztzG_approx=True, grad_kernel=None, tol=0.0001, random_state=None)

Methods

__init__(n_dim, kernel[, init, optim_mask, ...])

compute_gradient(solver, events_grid, ...)

Update the gradients for one optimizer iteration of the FaDIn solver, using the l2 loss and precomputations.

fit(events, end_time)

Learn the parameters of the Hawkes process on a discrete grid.
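
A minimal end-to-end sketch for a one-dimensional process; the exact expected format of events (assumed here to be a list with one tensor of event times per dimension) should be checked against the fit signature:

    import torch
    from fadin.solver import FaDIn

    # One tensor of event times per dimension (assumed input format).
    events = [torch.tensor([0.3, 1.2, 1.25, 4.7, 8.9])]

    solver = FaDIn(n_dim=1, kernel='truncated_exponential',
                   kernel_length=1.0, delta=0.01, max_iter=2000)
    solver.fit(events, end_time=10.0)

    print(solver.baseline_)  # fitted baseline, shape (1,)
    print(solver.alpha_)     # fitted alpha, shape (1, 1)
    print(solver.kernel_)    # fitted kernel parameters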

Attributes

alpha_

Return the fitted alpha parameter of the Hawkes process.

baseline_

Return the fitted baseline parameter of the Hawkes process.

kernel_

Return the fitted kernel parameters of the Hawkes process.

precomputations