Learn Sampling pattern for multi-coil MRI#
A small PyTorch example showcasing how to learn k-space sampling patterns. This example demonstrates the auto-differentiation capabilities of the NUFFT operator with respect to the k-space trajectory in mri-nufft.
Briefly, in this example we try to learn the k-space samples \(\mathbf{K}\) by minimizing the following cost function:

\[\mathbf{\hat{K}} = \arg\min_{\mathbf{K}} \left\| \sum_{\ell=1}^L S_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K} \mathcal{F}_\mathbf{K} \mathbf{x}_\ell - \mathbf{x}_{sos} \right\|_2^2\]

where \(S_\ell\) is the sensitivity map for the \(\ell\)-th coil, \(\mathcal{F}_\mathbf{K}\) is the forward NUFFT operator, \(D_\mathbf{K}\) are the density compensation weights for trajectory \(\mathbf{K}\), \(\mathbf{x}_\ell\) is the image for the \(\ell\)-th coil, and \(\mathbf{x}_{sos} = \sqrt{\sum_{\ell=1}^L |\mathbf{x}_\ell|^2}\) is the sum-of-squares image used as the target to be reconstructed.
In this example, the forward NUFFT operator \(\mathcal{F}_\mathbf{K}\) is implemented with model.operator, while the SENSE operator model.sense_op models the term \(\mathbf{A} = \sum_{\ell=1}^L S_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K}\). For our data, we use a 2D slice of a 3D MRI volume from the BrainWeb dataset, and the sensitivity maps are simulated using the birdcage_maps function from sigpy.mri.
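For intuition, the density-compensated adjoint SENSE reconstruction performed by model.sense_op.adj_op corresponds, up to implementation details, to the per-coil sum below. This is only a schematic sketch: adjoint_sense, nufft_adj, density and smaps are hypothetical placeholders for the operator internals, not mri-nufft API.

# Schematic sketch of A^H y = sum_l S_l^* F_K^H (D_K y_l).
# nufft_adj, density and smaps are placeholders, not actual mri-nufft attributes.
def adjoint_sense(kspace, smaps, density, nufft_adj):
    recon = 0
    for y_coil, smap in zip(kspace, smaps):
        coil_img = nufft_adj(density * y_coil)  # F_K^H D_K y_l
        recon = recon + smap.conj() * coil_img  # weighted by S_l^*
    return recon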
Note
To showcase the features of mri-nufft, we use the "cufinufft" backend for model.operator without density compensation, and the "gpunufft" backend for model.sense_op with density compensation.
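Before running the example, it can be useful to verify that both GPU backends are actually importable. The sketch below assumes your mri-nufft version exposes mrinufft.check_backend; adapt it if the helper is named differently in your installation.

# Sanity check (assumes mrinufft.check_backend exists in your mri-nufft version).
from mrinufft import check_backend

for backend in ("cufinufft", "gpunufft"):
    print(backend, "available:", check_backend(backend))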
Warning
This example only showcases the autodiff capabilities; the learned sampling pattern is not scanner compliant, as the scanner gradients required to implement it would violate the hardware constraints. In practice, a projection \(\Pi_\mathcal{Q}(\mathbf{K})\) onto the scanner constraint set \(\mathcal{Q}\) is recommended (see [Proj]). This is implemented in the proprietary SPARKLING package [Sparks]. Users are encouraged to contact the authors if they want to use it.
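The training loop below only clamps the samples into \([-0.5, 0.5]^2\); a minimal sketch of such a box projection is given here as a toy stand-in for \(\Pi_\mathcal{Q}\). It is a much weaker surrogate than the gradient and slew-rate projection of [Proj] and does not make the trajectory scanner compliant.

# Toy box projection onto [-0.5, 0.5]^2, applied after each optimizer step.
# This does NOT enforce real hardware constraints (gradient amplitude, slew rate).
import torch


def project_box(trajectory: torch.Tensor, bound: float = 0.5) -> torch.Tensor:
    return trajectory.clamp(-bound, bound)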
Imports#
import time
import joblib
import brainweb_dl as bwdl
import matplotlib.pyplot as plt
import numpy as np
import torch
from tqdm import tqdm
from PIL import Image, ImageSequence
from mrinufft import get_operator
from mrinufft.extras import get_smaps
from mrinufft.trajectories import initialize_2D_radial
from sigpy.mri import birdcage_maps
Set up a simple class to learn the trajectory#
Note
While we are only learning the trajectory of the NUFFT operator, we still need wrt_data=True so that all the gradients are computed correctly. See [Projector] for more details.
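As a quick standalone check (a sketch, not part of the original script), one can verify that gradients reach the trajectory when both flags are enabled. It assumes the cufinufft backend is installed and reuses the imports above; the 2 * np.pi scaling mirrors the convention used in the model below.

# Minimal autodiff check: gradients w.r.t. the k-space samples.
traj = torch.nn.Parameter(
    torch.from_numpy(initialize_2D_radial(8, 64).astype(np.float32).reshape(-1, 2))
)
nufft = get_operator("cufinufft", wrt_data=True, wrt_traj=True)(
    traj.detach().numpy() * 2 * np.pi, shape=(64, 64), n_coils=1, squeeze_dims=False
)
nufft.samples = traj.clone() * 2 * np.pi  # attach the learnable samples
image = torch.randn(1, 64, 64, dtype=torch.complex64)
nufft.op(image).abs().pow(2).mean().backward()  # any scalar loss will do
assert traj.grad is not None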
class Model(torch.nn.Module):
    def __init__(self, initial_trajectory, n_coils, img_size=(256, 256)):
        super(Model, self).__init__()
        self.trajectory = torch.nn.Parameter(
            data=torch.Tensor(initial_trajectory),
            requires_grad=True,
        )
        sample_points = initial_trajectory.reshape(-1, initial_trajectory.shape[-1])
        # A simple acquisition model simulated with a forward NUFFT operator.
        # We don't need density compensation here.
        # The trajectory is scaled by 2*pi for the cufinufft backend.
self.operator = get_operator("cufinufft", wrt_data=True, wrt_traj=True)(
sample_points * 2 * np.pi,
shape=img_size,
n_coils=n_coils,
squeeze_dims=False,
)
# A simple density compensated adjoint SENSE operator with sensitivity maps `smaps`.
self.sense_op = get_operator("gpunufft", wrt_data=True, wrt_traj=True)(
sample_points,
shape=img_size,
density=True,
n_coils=n_coils,
smaps=np.ones(
(n_coils, *img_size)
), # Dummy smaps, this is updated in forward pass
squeeze_dims=False,
)
self.img_size = img_size
def forward(self, x):
"""Forward pass of the model."""
# Update the trajectory in the NUFFT operator.
# The trajectory is scaled by 2*pi for cufinufft backend.
# Note that the re-computation of density compensation happens internally.
self.operator.samples = self.trajectory.clone() * 2 * np.pi
self.sense_op.samples = self.trajectory.clone()
# Simulate the acquisition process
kspace = self.operator.op(x)
# Recompute the sensitivity maps for the updated trajectory.
self.sense_op.smaps, _ = get_smaps("low_frequency")(
self.trajectory.detach().numpy(),
self.img_size,
kspace.detach(),
backend="gpunufft",
density=self.sense_op.density,
blurr_factor=20,
)
# Reconstruction using the sense operator
adjoint = self.sense_op.adj_op(kspace).abs()
return adjoint / torch.mean(adjoint)
Util function to plot the state of the model#
def plot_state(axs, mri_2D, traj, recon, loss=None, save_name=None):
axs = axs.flatten()
axs[0].imshow(np.abs(mri_2D), cmap="gray")
axs[0].axis("off")
axs[0].set_title("MR Image")
axs[1].scatter(*traj.T, s=1)
axs[1].set_title("Trajectory")
axs[2].imshow(np.abs(recon[0][0].detach().cpu().numpy()), cmap="gray")
axs[2].axis("off")
axs[2].set_title("Reconstruction")
if loss is not None:
axs[3].plot(loss)
axs[3].set_title("Loss")
axs[3].grid("on")
if save_name is not None:
plt.savefig(save_name, bbox_inches="tight")
plt.close()
else:
plt.show()
Setup model and optimizer#
n_coils = 6
init_traj = initialize_2D_radial(32, 256).astype(np.float32).reshape(-1, 2)
model = Model(init_traj, n_coils=n_coils, img_size=(256, 256))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.LinearLR(
optimizer,
start_factor=1,
end_factor=0.1,
total_iters=100,
)
/volatile/github-ci-mind-inria/_work/_tool/Python/3.10.15/x64/lib/python3.10/site-packages/mrinufft/operators/interfaces/gpunufft.py:146: UserWarning: no pinning provided, pinning existing smaps now.
warnings.warn("no pinning provided, pinning existing smaps now.")
Setup data#
mri_2D = torch.from_numpy(np.flipud(bwdl.get_mri(4, "T1")[80, ...]).astype(np.float32))
mri_2D = mri_2D / torch.mean(mri_2D)
smaps_simulated = torch.from_numpy(birdcage_maps((n_coils, *mri_2D.shape)))
mcmri_2D = mri_2D[None].to(torch.complex64) * smaps_simulated
model.eval()
recon = model(mcmri_2D)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
plot_state(axs, mri_2D, model.trajectory.detach().cpu().numpy(), recon)
Start training loop#
losses = []
image_files = []
model.train()
with tqdm(range(100), unit="steps") as tqdms:
for i in tqdms:
out = model(mcmri_2D)
loss = torch.nn.functional.mse_loss(out, mri_2D[None, None])
numpy_loss = loss.detach().cpu().numpy()
tqdms.set_postfix({"loss": numpy_loss})
losses.append(numpy_loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
        scheduler.step()
with torch.no_grad():
# Clamp the value of trajectory between [-0.5, 0.5]
for param in model.parameters():
param.clamp_(-0.5, 0.5)
# Generate images for gif
hashed = joblib.hash((i, "learn_traj", time.time()))
filename = "/tmp/" + f"{hashed}.png"
plt.clf()
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
plot_state(
axs,
mri_2D,
model.trajectory.detach().cpu().numpy(),
out,
losses,
save_name=filename,
)
image_files.append(filename)
# Make a GIF of all images.
imgs = [Image.open(img) for img in image_files]
imgs[0].save(
"mrinufft_learn_traj_mc.gif",
save_all=True,
append_images=imgs[1:],
optimize=False,
duration=2,
loop=0,
)
# sphinx_gallery_thumbnail_path = 'generated/autoexamples/GPU/images/mrinufft_learn_traj_mc.gif'
  0%|          | 0/100 [00:00<?, ?steps/s]
/volatile/github-ci-mind-inria/_work/_tool/Python/3.10.15/x64/lib/python3.10/site-packages/mrinufft/operators/autodiff.py:98: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at ../aten/src/ATen/native/Copy.cpp:308.)
  grad_traj = torch.transpose(torch.sum(grad_traj, dim=1), 0, 1).to(
...
100%|██████████| 100/100 [01:31<00:00, 1.10steps/s, loss=0.024350762]
Trained trajectory#
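This section shows the final state of the model. A minimal sketch of how the figure can be reproduced with the helpers defined above (assuming the trained model, mcmri_2D, mri_2D, and losses from the previous cells are still in scope):

# Evaluate the trained model once more and plot the learned trajectory,
# the reconstruction, and the loss curve with the plot_state helper.
model.eval()
recon = model(mcmri_2D)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
plot_state(axs, mri_2D, model.trajectory.detach().cpu().numpy(), recon, losses)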
References#
[Proj] N. Chauffert, P. Weiss, J. Kahn and P. Ciuciu, "A Projection Algorithm for Gradient Waveforms Design in Magnetic Resonance Imaging," IEEE Transactions on Medical Imaging, vol. 35, no. 9, pp. 2026-2039, Sept. 2016, doi: 10.1109/TMI.2016.2544251.
[Sparks] G. R. Chaithya, P. Weiss, G. Daval-Frérot, A. Massire, A. Vignaud and P. Ciuciu, "Optimizing Full 3D SPARKLING Trajectories for High-Resolution Magnetic Resonance Imaging," IEEE Transactions on Medical Imaging, vol. 41, no. 8, pp. 2105-2117, Aug. 2022, doi: 10.1109/TMI.2022.3157269.
[Projector] Chaithya GR and Philippe Ciuciu, "Jointly Learning Non-Cartesian k-Space Trajectories and Reconstruction Networks for 2D and 3D MR Imaging through Projection," Bioengineering, vol. 10, no. 2, 158, 2023. https://doi.org/10.3390/bioengineering10020158
Total running time of the script: 1 minute 39.016 seconds