{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n\n# Need GPU warning\n\nRunning this mri-nufft example requires a GPU, so it is currently NOT possible on Binder. Please run this notebook on Google Colab by clicking the link below. Additionally, make sure to set the Colab runtime to use a GPU and to install the libraries below before running.\n
\n
\n \n \"Open\n \n
\n " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Learn Sampling pattern for multi-coil MRI\n\nA small pytorch example to showcase learning k-space sampling patterns.\nThis example showcases the auto-diff capabilities of the NUFFT operator \nwrt to k-space trajectory in mri-nufft.\n\nBriefly, in this example we try to learn the k-space samples $\\mathbf{K}$ for the following cost function:\n\n\\begin{align}\\mathbf{\\hat{K}} = arg \\min_{\\mathbf{K}} || \\sum_{\\ell=1}^LS_\\ell^* \\mathcal{F}_\\mathbf{K}^* D_\\mathbf{K} \\mathcal{F}_\\mathbf{K} x_\\ell - \\mathbf{x}_{sos} ||_2^2\\end{align}\nwhere $S_\\ell$ is the sensitivity map for the $\\ell$-th coil, $\\mathcal{F}_\\mathbf{K}$ is the forward NUFFT operator and $D_\\mathbf{K}$ is the density compensators for trajectory $\\mathbf{K}$, $\\mathbf{x}_\\ell$ is the image for the $\\ell$-th coil, and $\\mathbf{x}_{sos} = \\sqrt{\\sum_{\\ell=1}^L x_\\ell^2}$ is the sum-of-squares image as target image to be reconstructed.\n\nIn this example, the forward NUFFT operator $\\mathcal{F}_\\mathbf{K}$ is implemented with `model.operator` while the SENSE operator $model.sense_op$ models the term $\\mathbf{A} = \\sum_{\\ell=1}^LS_\\ell^* \\mathcal{F}_\\mathbf{K}^* D_\\mathbf{K}$.\nFor our data, we use a 2D slice of a 3D MRI image from the BrainWeb dataset, and the sensitivity maps are simulated using the `birdcage_maps` function from `sigpy.mri`.\n\n

Note

To showcase the features of ``mri-nufft``, we use the ``\"cufinufft\"`` backend for ``model.operator`` without density compensation and the ``\"gpunufft\"`` backend for ``model.sense_op`` with density compensation.

\n
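As a minimal sketch of this dual-backend setup (mirroring the `Model` class defined below; the sample count, image size, and coil count are illustrative):

```python
import numpy as np
from mrinufft import get_operator

# Illustrative trajectory: 1000 random 2D samples in [-0.5, 0.5].
samples = np.random.uniform(-0.5, 0.5, (1000, 2)).astype(np.float32)

# Forward acquisition model, no density compensation
# (the cufinufft backend expects samples scaled by 2*pi).
nufft = get_operator("cufinufft")(
    samples * 2 * np.pi, shape=(256, 256), n_coils=6, squeeze_dims=False
)

# Density-compensated adjoint SENSE model (dummy unit sensitivity maps).
sense = get_operator("gpunufft")(
    samples, shape=(256, 256), density=True, n_coils=6,
    smaps=np.ones((6, 256, 256)), squeeze_dims=False,
)
```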

Warning

This example only showcases the autodiff capabilities; the learned sampling pattern is not scanner-compliant, as the scanner gradients required to implement it violate the hardware constraints. In practice, a projection $\\Pi_\\mathcal{Q}(\\mathbf{K})$ onto the scanner constraint set $\\mathcal{Q}$ is recommended (see [Proj]_). This is implemented in the proprietary SPARKLING package [Sparks]_. Users are encouraged to contact the authors if they want to use it.

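As a toy illustration only (this is neither the projection algorithm of [Proj]_ nor the SPARKLING method; `bound` and `max_step` are hypothetical limits), such a projection could combine a box constraint with a crude cap on the distance between consecutive samples:

```python
import torch

def project_box(K: torch.Tensor, bound: float = 0.5) -> torch.Tensor:
    # Keep every sample inside the normalized k-space box [-bound, bound].
    return K.clamp(-bound, bound)

def clip_steps(K: torch.Tensor, max_step: float) -> torch.Tensor:
    # Crude surrogate for a gradient-amplitude limit: cap the distance
    # between consecutive samples, then re-integrate the trajectory.
    steps = torch.diff(K, dim=0)
    norms = steps.norm(dim=-1, keepdim=True).clamp_min(1e-12)
    steps = steps * torch.clamp(norms, max=max_step) / norms
    return torch.cat([K[:1], K[:1] + torch.cumsum(steps, dim=0)], dim=0)
```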
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Install libraries\n!pip install mri-nufft[gpunufft] cufinufft sigpy scikit-image\npip install brainweb-dl # Required for data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import time\nimport joblib\n\nimport brainweb_dl as bwdl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom tqdm import tqdm\nfrom PIL import Image, ImageSequence\n\nfrom mrinufft import get_operator\nfrom mrinufft.extras import get_smaps\nfrom mrinufft.trajectories import initialize_2D_radial\nfrom sigpy.mri import birdcage_maps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup a simple class to learn trajectory\n

Note

While we are only learning the trajectory, we still need the data gradient (`wrt_data=True`) to have all the gradients computed correctly.\n See [Projector]_ for more details.

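A minimal sketch of how these differentiability flags come into play (mirroring the constructor and forward pass below; the trajectory size is illustrative):

```python
import numpy as np
import torch
from mrinufft import get_operator

init = np.random.uniform(-0.5, 0.5, (1000, 2)).astype(np.float32)
traj = torch.nn.Parameter(torch.from_numpy(init))

# wrt_traj=True enables gradients w.r.t. the sample locations,
# wrt_data=True enables gradients w.r.t. the input image data.
op = get_operator("cufinufft", wrt_data=True, wrt_traj=True)(
    init * 2 * np.pi, shape=(256, 256), n_coils=6, squeeze_dims=False
)

# Updating the samples from the learnable parameter keeps the k-space
# output differentiable w.r.t. the trajectory.
op.samples = traj.clone() * 2 * np.pi
```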
\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class Model(torch.nn.Module):\n def __init__(self, inital_trajectory, n_coils, img_size=(256, 256)):\n super(Model, self).__init__()\n self.trajectory = torch.nn.Parameter(\n data=torch.Tensor(inital_trajectory),\n requires_grad=True,\n )\n sample_points = inital_trajectory.reshape(-1, inital_trajectory.shape[-1])\n # A simple acquisition model simulated with a forward NUFFT operator. We dont need density compensation here.\n # The trajectory is scaled by 2*pi for cufinufft backend.\n self.operator = get_operator(\"cufinufft\", wrt_data=True, wrt_traj=True)(\n sample_points * 2 * np.pi,\n shape=img_size,\n n_coils=n_coils,\n squeeze_dims=False,\n )\n # A simple density compensated adjoint SENSE operator with sensitivity maps `smaps`.\n self.sense_op = get_operator(\"gpunufft\", wrt_data=True, wrt_traj=True)(\n sample_points,\n shape=img_size,\n density=True,\n n_coils=n_coils,\n smaps=np.ones(\n (n_coils, *img_size)\n ), # Dummy smaps, this is updated in forward pass\n squeeze_dims=False,\n )\n self.img_size = img_size\n\n def forward(self, x):\n \"\"\"Forward pass of the model.\"\"\"\n # Update the trajectory in the NUFFT operator.\n # The trajectory is scaled by 2*pi for cufinufft backend.\n # Note that the re-computation of density compensation happens internally.\n self.operator.samples = self.trajectory.clone() * 2 * np.pi\n self.sense_op.samples = self.trajectory.clone()\n\n # Simulate the acquisition process\n kspace = self.operator.op(x)\n\n # Recompute the sensitivity maps for the updated trajectory.\n self.sense_op.smaps, _ = get_smaps(\"low_frequency\")(\n self.trajectory.detach().numpy(),\n self.img_size,\n kspace.detach(),\n backend=\"gpunufft\",\n density=self.sense_op.density,\n blurr_factor=20,\n )\n # Reconstruction using the sense operator\n adjoint = self.sense_op.adj_op(kspace).abs()\n return adjoint / torch.mean(adjoint)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Util function to plot the state of the model\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def plot_state(axs, mri_2D, traj, recon, loss=None, save_name=None):\n axs = axs.flatten()\n axs[0].imshow(np.abs(mri_2D), cmap=\"gray\")\n axs[0].axis(\"off\")\n axs[0].set_title(\"MR Image\")\n axs[1].scatter(*traj.T, s=1)\n axs[1].set_title(\"Trajectory\")\n axs[2].imshow(np.abs(recon[0][0].detach().cpu().numpy()), cmap=\"gray\")\n axs[2].axis(\"off\")\n axs[2].set_title(\"Reconstruction\")\n if loss is not None:\n axs[3].plot(loss)\n axs[3].set_title(\"Loss\")\n axs[3].grid(\"on\")\n if save_name is not None:\n plt.savefig(save_name, bbox_inches=\"tight\")\n plt.close()\n else:\n plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup model and optimizer\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n_coils = 6\ninit_traj = initialize_2D_radial(32, 256).astype(np.float32).reshape(-1, 2)\nmodel = Model(init_traj, n_coils=n_coils, img_size=(256, 256))\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\nschedulder = torch.optim.lr_scheduler.LinearLR(\n optimizer,\n start_factor=1,\n end_factor=0.1,\n total_iters=100,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup data\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, 
"outputs": [], "source": [ "mri_2D = torch.from_numpy(np.flipud(bwdl.get_mri(4, \"T1\")[80, ...]).astype(np.float32))\nmri_2D = mri_2D / torch.mean(mri_2D)\nsmaps_simulated = torch.from_numpy(birdcage_maps((n_coils, *mri_2D.shape)))\nmcmri_2D = mri_2D[None].to(torch.complex64) * smaps_simulated\nmodel.eval()\nrecon = model(mcmri_2D)\nfig, axs = plt.subplots(1, 3, figsize=(15, 5))\nplot_state(axs, mri_2D, model.trajectory.detach().cpu().numpy(), recon)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start training loop\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "losses = []\nimage_files = []\nmodel.train()\n\nwith tqdm(range(100), unit=\"steps\") as tqdms:\n for i in tqdms:\n out = model(mcmri_2D)\n loss = torch.nn.functional.mse_loss(out, mri_2D[None, None])\n numpy_loss = loss.detach().cpu().numpy()\n tqdms.set_postfix({\"loss\": numpy_loss})\n losses.append(numpy_loss)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n schedulder.step()\n with torch.no_grad():\n # Clamp the value of trajectory between [-0.5, 0.5]\n for param in model.parameters():\n param.clamp_(-0.5, 0.5)\n # Generate images for gif\n hashed = joblib.hash((i, \"learn_traj\", time.time()))\n filename = \"/tmp/\" + f\"{hashed}.png\"\n plt.clf()\n fig, axs = plt.subplots(2, 2, figsize=(10, 10))\n plot_state(\n axs,\n mri_2D,\n model.trajectory.detach().cpu().numpy(),\n out,\n losses,\n save_name=filename,\n )\n image_files.append(filename)\n\n\n# Make a GIF of all images.\nimgs = [Image.open(img) for img in image_files]\nimgs[0].save(\n \"mrinufft_learn_traj_mc.gif\",\n save_all=True,\n append_images=imgs[1:],\n optimize=False,\n duration=2,\n loop=0,\n)\n\n# sphinx_gallery_thumbnail_path = 'generated/autoexamples/GPU/images/mrinufft_learn_traj_mc.gif'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. image-sg:: /generated/autoexamples/GPU/images/mrinufft_learn_traj_mc.gif\n :alt: example learn_samples\n :srcset: /generated/autoexamples/GPU/images/mrinufft_learn_traj_mc.gif\n :class: sphx-glr-single-img\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Trained trajectory\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model.eval()\nrecon = model(mcmri_2D)\nfig, axs = plt.subplots(2, 2, figsize=(10, 10))\nplot_state(axs, mri_2D, model.trajectory.detach().cpu().numpy(), recon, losses)\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References\n\n.. [Proj] N. Chauffert, P. Weiss, J. Kahn and P. Ciuciu, \"A Projection Algorithm for\n Gradient Waveforms Design in Magnetic Resonance Imaging,\" in\n IEEE Transactions on Medical Imaging, vol. 35, no. 9, pp. 2026-2039, Sept. 2016,\n doi: 10.1109/TMI.2016.2544251.\n.. [Sparks] G. R. Chaithya, P. Weiss, G. Daval-Frérot, A. Massire, A. Vignaud and P. Ciuciu,\n \"Optimizing Full 3D SPARKLING Trajectories for High-Resolution Magnetic\n Resonance Imaging,\" in IEEE Transactions on Medical Imaging, vol. 41, no. 8,\n pp. 2105-2117, Aug. 2022, doi: 10.1109/TMI.2022.3157269.\n.. [Projector] Chaithya GR, and Philippe Ciuciu. 2023. \"Jointly Learning Non-Cartesian\n k-Space Trajectories and Reconstruction Networks for 2D and 3D MR Imaging\n through Projection\" Bioengineering 10, no. 
2, p. 158, 2023, doi: 10.3390/bioengineering10020158.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.14" } }, "nbformat": 4, "nbformat_minor": 0 }