| dc.description.abstract |
The human brain contains a complex network of nerve fibers, particularly in the white matter regions. This nerve fiber network forms the basis of connectivity within brain regions and to other parts of the body. Traditional imaging techniques such as magnetic resonance imaging (MRI) provide an accurate anatomical picture of the brain but offer limited insight into neural connectivity. This limitation is critical, as some neurological disorders can only be diagnosed by examining alterations in nerve fiber bundles. To address this challenge, diffusion MRI (dMRI) has emerged as a powerful imaging modality capable of accurately characterizing the orientation of axonal fiber bundles. Diffusion MRI is acquired as a collection of MR volumes (referred to as q-samples). Among dMRI techniques, High Angular Resolution Diffusion Imaging (HARDI) is known to produce better fiber orientation representations than Diffusion Tensor Imaging (DTI) and has more practical computational requirements than Diffusion Spectrum Imaging (DSI). Despite its potential, HARDI faces challenges in acquisition and post-processing that call for innovative solutions. For instance, achieving the higher angular resolution that HARDI offers over DTI requires a large number of q-samples, which makes the scanning process slow and prone to motion artifacts. Long scanning times are also inconvenient for patients. On the other hand, reducing the number of q-samples can degrade reconstruction accuracy. To address this issue, compressive sensing of HARDI data in k-space and/or q-space presents a potential means of accelerating the scanning process, where the signal is later reconstructed by exploiting its inherent regularity. Two sampling schemes are prominent in k-space, Cartesian and radial, each with its own pros and cons: Cartesian sampling is more robust to hardware-based distortions, while radial sampling is more robust to motion-induced artifacts.
We propose two methods to reconstruct compressively acquired measurements from the scanner: MSR-HARDI and TL-HARDI. Both methods primarily focus on acquisition through Cartesian sampling schemes in k-space. The first method, MSR-HARDI, utilizes multiple sparsity regularizers in the joint (k-q)-space, allowing higher subsampling ratios than are feasible with k-space-only or q-space-only subsampling. Additionally, combining regularizers has been shown to yield better reconstructions than individual regularizers. Building upon this work with fixed sparsifying dictionaries, our second method, TL-HARDI, further explores adaptively learned transforms for accelerated HARDI reconstruction. The transform is learned from the compressively sensed measurements themselves, eliminating the overhead of selecting data-specific fixed sparsifying dictionaries. Further, since the transform is learned on overlapping patches, it captures local image structure effectively, providing an additional denoising effect within the framework. We also recognize that radial sampling patterns exhibit less pronounced aliasing under undersampling than Cartesian sampling. These advantages can be leveraged particularly in the acquisition of multidimensional signals such as HARDI, offering tremendous scope for acceleration. However, despite these benefits, hardware imperfections may cause k-space samples to be acquired along deviated radial trajectories, severely degrading image reconstruction quality. To address this problem, we propose a method called CSR-PERT. In CSR-PERT, we investigate a realistic model of gradient delays, under which measurements are acquired along unknown, miscentered radial trajectories. The method provides a joint framework in which these perturbed radial trajectories are estimated and then used to reconstruct images from the compressively sensed MRI and HARDI measurements.
After addressing the acquisition aspect, we also explore another important avenue: the estimation of fiber orientations from HARDI data. Accurate estimation of local white matter fiber orientations is required for reliable neural connectivity analysis. In the absence of ground truth, most previous methods relied on assumptions about physical models of the diffusion signal. This leads either to overly simplistic mathematical models, such as DTI, which fail in regions with multiple fiber crossings, or to excessively complex models that may fail to converge in real time using conventional optimization techniques. Additionally, previous deep learning (DL) methods attempted to estimate fiber orientations via orientation distribution functions (ODFs), with candidate orientations restricted to a predefined set of directions on the sphere, leading to unavoidable discretization errors. To address these issues, we propose two methods: FOREST and DL-MuTE. In FOREST, we estimate the peak fiber orientations directly from the diffusion signal using a branched multi-layer perceptron (MLP). In DL-MuTE, we utilize a multi-tensor model to analyze regions with complex tissue structure, such as crossing fibers. This method presents a DL pipeline that uses a branched architecture to estimate the individual tensors of a multi-tensor model, which are then used to infer the underlying fiber orientations within an imaging voxel. The models are trained on synthetic data and evaluated on phantoms with known ground truth, and the approach is validated on signals with varying noise levels. |
en_US |