FourierOperatorBase#
- class mrinufft.operators.base.FourierOperatorBase[source]#
Bases: ABC
Base Fourier Operator class.
Every (Linear) Fourier operator inherits from this class, ensuring that all the required methods are implemented consistently.
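This interface is usually reached through a concrete backend rather than by subclassing directly. A minimal sketch, assuming the finufft backend is installed and using a 2D radial trajectory from mrinufft.trajectories:

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.operators.base import FourierOperatorBase
from mrinufft.trajectories import initialize_2D_radial

# Build a concrete NUFFT operator; it is a FourierOperatorBase subclass.
samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)
assert isinstance(nufft, FourierOperatorBase)
```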
Methods
- __init__
- adj_op – Compute adjoint operator transform.
- check_shape – Validate the shapes of the image or k-space data against operator shapes.
- compute_density – Compute the density compensation weights and set it.
- compute_smaps – Compute the sensitivity maps and set it.
- data_consistency – Compute the gradient data consistency.
- get_lipschitz_cst – Return the Lipschitz constant of the operator.
- make_autograd – Make a new Operator with autodiff support.
- make_deepinv_phy – Make a new DeepInv Physics with NUFFT operator.
- make_linops – Create a Scipy Linear Operator from the NUFFT operator.
- op – Compute operator transform.
- pinv_solver – Solves the linear system Ax = y.
- with_autograd – Return a Fourier operator with autograd capabilities.
- with_off_resonance_correction – Return a new operator with off-resonance correction.
Attributes
- autograd_available
- cpx_dtype – Return complex floating precision of the operator.
- density – Density compensation of the operator.
- dtype – Return floating precision of the operator.
- img_full_shape – Full image shape with batch and coil dimensions.
- interfaces
- ksp_full_shape – Full kspace shape with batch and coil dimensions.
- n_batchs – Number of batches for the operator.
- n_coils – Number of coils for the operator.
- n_samples – Return the number of samples used by the operator.
- ndim – Number of dimensions in image space of the operator.
- norm_factor – Normalization factor of the operator.
- samples – Return the samples used by the operator.
- shape – Shape of the image space of the operator.
- smaps – Sensitivity maps of the operator.
- uses_density – Return True if the operator uses density compensation.
- uses_sense – Return True if the operator uses sensitivity maps.
- backend
- available
- check_shape(*, image=None, ksp=None)[source]#
Validate the shapes of the image or k-space data against operator shapes.
- Parameters:
image (NDArray, optional) – If passed, the shape of image data will be checked.
ksp (NDArray or object, optional) – If passed, the shape of the k-space data will be checked.
- Raises:
ValueError – If the shape of the provided image does not match the expected operator shape, or if the number of k-space samples does not match the expected number of samples.
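A short usage sketch (illustrative finufft setup; the trajectory and shapes are only for demonstration):

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

image = np.zeros((256, 256), dtype=np.complex64)
nufft.check_shape(image=image)            # matches the operator shape: no error
try:
    nufft.check_shape(image=image[:128])  # wrong image shape
except ValueError as err:
    print(err)
```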
- abstract op(data: NDArray) → NDArray[source]#
Compute operator transform.
- Parameters:
data (NDArray) – input as array.
- Returns:
result – operator transform of the input.
- Return type:
NDArray
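For example, applying the forward model of a 2D operator (a sketch assuming the finufft backend; the exact k-space output shape depends on batch and coil settings):

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

# Forward model: image-domain array -> non-Cartesian k-space samples.
image = np.random.rand(256, 256).astype(np.complex64)
kspace = nufft.op(image)
print(kspace.shape)  # one value per trajectory sample (plus batch/coil axes if any)
```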
- abstract adj_op(coeffs: NDArray) → NDArray[source]#
Compute adjoint operator transform.
- Parameters:
coeffs (NDArray) – input data array.
- Returns:
results – adjoint operator transform.
- Return type:
NDArray
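The adjoint maps k-space coefficients back to the image domain; it is the adjoint of op, not its inverse. A sketch (same illustrative setup) that also checks the adjoint identity <op(x), y> = <x, adj_op(y)>:

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

x = np.random.rand(256, 256).astype(np.complex64)
kspace_shape = nufft.op(x).shape
y = np.random.rand(*kspace_shape).astype(np.complex64)

# <op(x), y> should match <x, adj_op(y)> up to NUFFT accuracy.
lhs = np.vdot(nufft.op(x), y)
rhs = np.vdot(x, nufft.adj_op(y))
print(abs(lhs - rhs) / abs(lhs))  # small relative error expected
```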
- data_consistency(image_data: NDArray, obs_data: NDArray) → NDArray[source]#
Compute the gradient data consistency.
This is the naive implementation using adj_op(op(x)-y). Specific backends can (and should!) implement a more efficient version.
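For example, with the same illustrative setup, the result agrees with the naive expression above:

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

image = np.random.rand(256, 256).astype(np.complex64)
kspace_obs = nufft.op(np.random.rand(256, 256).astype(np.complex64))

# Gradient of 0.5 * ||op(x) - y||^2 at x.
grad = nufft.data_consistency(image, kspace_obs)
naive = nufft.adj_op(nufft.op(image) - kspace_obs)
print(np.allclose(grad, naive))  # True for the naive base implementation
```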
- with_off_resonance_correction(b0_map: NDArray | None = None, readout_time: NDArray | None = None, r2star_map: NDArray | None = None, mask: NDArray | None = None, interpolator: str | dict | tuple[NDArray, NDArray] = 'svd')[source]#
Return a new operator with off-resonance correction.
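A usage sketch; the zero b0_map (assumed in Hz) and the per-sample readout_time (assumed in seconds) are placeholders:

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

b0_map = np.zeros((256, 256), dtype=np.float32)  # placeholder field map (Hz)
readout_time = np.linspace(0, 5e-3, samples.shape[0], dtype=np.float32)  # s, one per sample

orc_nufft = nufft.with_off_resonance_correction(b0_map=b0_map, readout_time=readout_time)
kspace = orc_nufft.op(np.random.rand(256, 256).astype(np.complex64))
```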
- compute_smaps(method: NDArray | Callable | str | dict | None = None)[source]#
Compute the sensitivity maps and set it.
- Parameters:
method (callable or dict or array) – The method to use to compute the sensitivity maps. If an array, it should be of shape (NCoils, XYZ) and will be used as is. If a dict, it should have a key ‘name’ to determine which method to use; other items will be used as kwargs. If a callable, it should take the samples and the shape as input. Note that this callable should also hold the k-space data (use functools.partial).
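For instance, setting precomputed maps (a sketch; the uniform placeholder maps stand in for maps obtained from a registered calibration method):

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=8)

# Placeholder sensitivity maps of shape (NCoils, *shape), used as-is.
smaps = (np.ones((8, 256, 256)) / np.sqrt(8)).astype(np.complex64)
nufft.compute_smaps(smaps)
print(nufft.smaps.shape)  # (8, 256, 256)
```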
- make_linops(*, cupy: bool = False)[source]#
Create a Scipy Linear Operator from the NUFFT operator.
We add a _nufft private attribute with the current operator.
- Parameters:
cupy (bool, default False) – If True, create a CuPy linear operator instead of a SciPy one.
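A sketch under the assumption that a single scipy.sparse.linalg.LinearOperator is returned, with matvec/rmatvec applying op/adj_op on flattened arrays:

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

A = nufft.make_linops()  # assumption: a SciPy LinearOperator wrapping the NUFFT
y = nufft.op(np.random.rand(256, 256).astype(np.complex64)).ravel()
x_flat = lsqr(A, y, iter_lim=20)[0]
image_rec = x_flat.reshape(256, 256)
```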
- make_deepinv_phy(*args, **kwargs) Any[source]#
Make a new DeepInv Physics with NUFFT operator.
- Parameters:
wrt_data (bool, optional) – If True, the gradient with respect to the data is computed; default is True.
wrt_traj (bool, optional) – If True, the gradient with respect to the trajectory is computed; default is False.
paired_batch (int, optional) – If provided, specifies the batch size for varying data/smaps pairs. Default is None, which means no batching.
- Returns:
A NUFFT operator with autodiff capabilities.
- Return type:
torch.nn.Module
- Raises:
ValueError – If autograd is not available.
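A sketch, assuming PyTorch and deepinv are installed and that the returned object follows the deepinv.physics.Physics interface (A and A_adjoint acting on torch tensors with leading batch and channel axes):

```python
import numpy as np
import torch
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

physics = nufft.make_deepinv_phy()
x = torch.randn(1, 1, 256, 256, dtype=torch.complex64)  # (batch, coil, *shape) assumed
y = physics.A(x)             # forward NUFFT
x_adj = physics.A_adjoint(y)
```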
- make_autograd(*, wrt_data: bool = True, wrt_traj: bool = False, paired_batch: bool = False) Any[source]#
Make a new Operator with autodiff support.
- Parameters:
wrt_data (bool, optional) – If True, the gradient with respect to the data is computed; default is True.
wrt_traj (bool, optional) – If True, the gradient with respect to the trajectory is computed; default is False.
paired_batch (int, optional) – If provided, specifies the batch size for varying data/smaps pairs. Default is None, which means no batching.
- Returns:
A NUFFT operator with autodiff capabilities.
- Return type:
torch.nn.Module
- Raises:
ValueError – If autograd is not available.
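A sketch, assuming PyTorch and an autograd-capable backend (finufft here) and that the returned wrapper exposes op/adj_op on torch tensors; explicit batch/coil axes may be required depending on the backend:

```python
import numpy as np
import torch
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

nufft_t = nufft.make_autograd(wrt_data=True)
image_t = torch.randn(256, 256, dtype=torch.complex64, requires_grad=True)
loss = nufft_t.op(image_t).abs().pow(2).sum()   # data-fidelity-like scalar
loss.backward()                                 # gradient w.r.t. the image data
print(image_t.grad.shape)
```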
- compute_density(method: Callable[..., NDArray] | None = None)[source]#
Compute the density compensation weights and set it.
- Parameters:
method (str or callable or array or dict or bool) –
The method to use to compute the density compensation.
If a string, the method should be registered in the density registry.
If a callable, it should take the samples and the shape as input.
If a dict, it should have a key ‘name’ to determine which method to use; other items will be used as kwargs.
If an array, it should be of shape (Nsamples,) and will be used as is.
If True, the “pipe” method is chosen as the default estimation method.
Notes
The “pipe” method is only available for the following backends: tensorflow, finufft, cufinufft, gpunufft, torchkbnufft-cpu and torchkbnufft-gpu.
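For example, with the illustrative setup below; "voronoi" is assumed to be registered in the density registry, while "pipe" is limited to the backends listed above:

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

nufft.compute_density("voronoi")                # geometric density estimation
print(nufft.uses_density, nufft.density.shape)  # True, one weight per sample
```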
- get_lipschitz_cst(max_iter=10) → floating | NDArray[source]#
Return the Lipschitz constant of the operator.
- Parameters:
max_iter (int) – Number of iterations used to compute the Lipschitz constant.
**kwargs – Extra keyword arguments.
- Returns:
Spectral radius of the operator.
- Return type:
floating or NDArray
Notes
This uses the iterative power method to compute the largest singular value of a minified version of the NUFFT operator. No coil or B0 compensation is used, but any computed density compensation is included.
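The constant is typically used to bound the step size of gradient-based reconstruction; a sketch with the same illustrative setup:

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)
kspace_obs = nufft.op(np.random.rand(256, 256).astype(np.complex64))

# Gradient descent on 0.5 * ||op(x) - y||^2 with step 1/L (< 2/L ensures convergence).
L = nufft.get_lipschitz_cst(max_iter=10)
x = np.zeros_like(nufft.adj_op(kspace_obs))
for _ in range(20):
    x = x - (1.0 / L) * nufft.data_consistency(x, kspace_obs)
```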
- pinv_solver(kspace_data, optim='lsqr', **kwargs)[source]#
Solves the linear system Ax = y.
It uses a least-squares optimization solver.
- Parameters:
kspace_data (NDArray) – The k-space data to reconstruct.
optim (str, default "lsqr") – Name of the least-squares optimizer to use.
**kwargs – Extra arguments to pass to the least-square optimizer.
- Returns:
Reconstructed image
- Return type:
NDArray
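For example, reconstructing directly from observed k-space with the default solver (illustrative setup):

```python
import numpy as np
from mrinufft import get_operator
from mrinufft.trajectories import initialize_2D_radial

samples = initialize_2D_radial(Nc=32, Ns=256).reshape(-1, 2).astype(np.float32)
nufft = get_operator("finufft")(samples, shape=(256, 256), n_coils=1)

kspace_obs = nufft.op(np.random.rand(256, 256).astype(np.complex64))
image_rec = nufft.pinv_solver(kspace_obs)  # least-squares solve with the default "lsqr"
print(image_rec.shape)
```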
- property ksp_full_shape: tuple[int, int, int][source]#
Full kspace shape with batch and coil dimensions.
- property density: NDArray | None[source]#
Density compensation of the operator.