Image Distortion and Its Correction in Diffusion MRI

Chapter: Image Distortion and Its Correction in Diffusion MRI
Author(s): Jesper L.R. Andersson and Stefan Skare
DOI: 10.1093/med/9780195369779.003.0017

PRINTED FROM OXFORD MEDICINE ONLINE (www.oxfordmedicine.com). © Oxford University Press, 2016. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Medicine Online for personal use (for details see Privacy Policy and Legal Notice).


Summary Box

  • Diffusion tensor imaging is usually performed by means of diffusion-weighted spin-echo echo-planar imaging. Hence the images will be spatially distorted along the phase-encoding direction by any off-resonance field, however small.

  • There are two distinct sources of off-resonance frequencies in diffusion images. One is the object (e.g., the head) itself, where the tissue and the air-filled cavities (e.g., sinuses) have a different susceptibility to becoming magnetized, leading to nontrivial inhomogeneous fields. The second source is lingering remnants of the diffusion gradients, leading to different combinations of linear ramps in the x-, y- and z-directions.

  • Methods to correct these distortions can be broadly divided into those that attempt to measure the off-resonance effects with additional acquisitions and those that try to estimate them from the distorted data using image processing and registration techniques.

  • All the methods that attempt to measure the effects do so by disentangling phase that has been accumulated from the encoding gradients and phase that has been accumulated over time as a consequence of the off-resonance field.

  • The main challenge for image registration techniques is to deal with the different image contrasts in the various images constituting a diffusion tensor data set.

A vast majority of diffusion imaging is performed using some variant of echo-planar imaging (EPI). The main advantage of EPI is that it allows one to collect a full diffusion-weighted slice in a single shot and, hence, in a very short time. However, this comes at the cost of a very low bandwidth in the phase-encode (PE) direction, and images are therefore spatially distorted along that axis. The total difference in the magnetic field experienced by two adjacent voxels (in the PE direction) is comparable to the "errors" in the field caused by the differences in magnetic susceptibility within the head. Those differences can therefore cause intensity to be misplaced by several voxels in the resulting images. These distortions are common to gradient-echo (GE) EPI (mostly used for fMRI) and spin-echo (SE) EPI, which is used for diffusion imaging. In addition, there is also an aftereffect of the gradients used to achieve diffusion weighting. The resulting distortions are known as eddy current–induced distortions and will differ between diffusion-weighted images, since they depend on the strength and direction of the diffusion-weighting gradients.

This chapter deals with how to correct the data once they have been collected and the distortions that are manifest. Other chapters (e.g., Chapters 12 and 13) discuss how to avoid the distortions by changing the way data are collected, such as minimizing the readout time using parallel imaging (which will help both susceptibility- and eddy current–induced distortions; see, e.g., Bammer et al., 2002) or bipolar diffusion gradients to minimize eddy currents (e.g., Alexander et al., 1997; Reese et al., 2003). There is no contradiction between the two approaches. One should always strive to collect data with the highest possible fidelity, and if there are still appreciable distortions in the data, one should use post-processing approaches to correct these.

Distortions

Why Are Echo-Planar Images Distorted?

Spatial encoding is achieved in MR images by using field gradients that cause the signal from different positions to be emitted at different frequencies. Typically (and this is true for EPI), the encoding is done so that the signal represents the Fourier transform of the object, and the images are reconstructed by applying the inverse Fourier transform to the emitted signal. Inherent in this practice is an assumption that there is a linear relationship between position and frequency, which is achieved by applying linear gradients. However, as will be described later, there are additional fields that superimpose on the linear gradients. Some of these fields (i.e., susceptibility-induced off-resonance fields) are nonlinear and will thus add a nonlinear component to the linear gradient. Other off-resonance fields are linear, like the gradients themselves (i.e., eddy current–induced fields), and will serve to make the slope or direction of the gradient different from what was intended. Figure 17.1 shows a nonlinear addition to a linear gradient and the effect that this will have on the location of two objects in the reconstructed image.

Figure 17.1 The left side of the figure demonstrates what happens in the absence of an off-resonance field. The linear gradient will cause a linear relationship between position and frequency of the emitted signal. An image is reconstructed from the signal by means of the inverse Fourier transform (FT). On the right side of the figure, a nonlinear off-resonance field has been added to the linear gradient, so the relationship between position and frequency is no longer linear. When reconstructing an image from the resulting signal, the position of the objects will be wrong, i.e., the image will be distorted. If one knows the off-resonance field (and hence the shape of the resulting gradient), one can correct for these distortions by either modifying the reconstruction itself to use a different transform (see the K-matrix method, below) or resampling the finished image using the off-resonance field as a voxel displacement map.

The distortions in an EPI image occur primarily along the PE direction. The easiest way of understanding this is to consider a nonblipped EPI sequence, i.e., one where the PE gradient is on throughout the acquisition of the EPI echo train. This type of sequence was used in the early days of EPI and leads to a zigzag sampling of k-space, rather than the trapezoidal sampling achieved with a blipped EPI sequence. Because the PE gradient is on throughout the entire acquisition (lasting ∼60 ms on a modern scanner), its strength must be such that during that time the phase in a voxel evolves no more than one full cycle relative to that of the neighboring voxel. This means that the difference in frequency between neighboring voxels can be no greater than ∼17 Hz—i.e., the "bandwidth per pixel" can be no greater than that. If it is greater, the position of the signal can no longer be uniquely assigned to the proper voxel. In the frequency-encoding direction, by contrast, the encoding is realized once per lobe of the frequency-encoding gradient and is hence performed in a time that is Ny times (where Ny is the number of pixels in the PE direction) smaller than the whole acquisition. Correspondingly, the bandwidth per pixel becomes Ny times greater, or ∼1600 Hz if we assume 96 pixels and a total acquisition time of 60 ms. We now understand why the gradient must be much weaker in the phase-encoding than in the frequency-encoding direction.

To illustrate this with numbers, a pixel size of 2 mm gives us gradient strengths of ∼8300 Hz/m and 800,000 Hz/m in the phase-encoding and frequency-encoding directions, respectively. We now need to relate these numbers to the strengths of typical gradients and offsets caused by off-resonance fields. In Figure 17.2 we see a typical phase-encoding gradient and a typical frequency-encoding gradient, and the result when a profile through an off-resonance field is superimposed on them. Because the frequency-encoding gradient is orders of magnitude stronger than the off-resonance effects, the result after superimposing them is visibly indistinguishable from the original gradient. For the phase-encoding gradient, by contrast, the resulting gradient is no longer linear, even on visual inspection, and there will be obvious distortions in the resulting image.
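The arithmetic above can be sketched in a few lines. The acquisition parameters follow the chapter's example (96 PE lines, a 60 ms echo train, 2 mm pixels); the 50 Hz off-resonance offset is an illustrative value of ours, typical of the field errors near air-filled cavities.

```python
# Bandwidth-per-pixel arithmetic for EPI, following the numbers above.
t_acq = 0.060   # total EPI readout duration (s)
n_y = 96        # number of pixels in the phase-encode (PE) direction
pixel = 0.002   # pixel size (m)

bw_pe = 1.0 / t_acq   # bandwidth per pixel along PE (~17 Hz)
bw_fe = n_y / t_acq   # bandwidth per pixel along FE (~1600 Hz)

g_pe = bw_pe / pixel  # effective PE gradient (~8300 Hz/m)
g_fe = bw_fe / pixel  # FE gradient (~800,000 Hz/m)

# A 50 Hz off-resonance offset (an illustrative susceptibility error)
# displaces the signal by a few voxels along PE, but its effect along
# the frequency-encoding direction is negligible.
shift_pe = 50.0 / bw_pe   # ~3 voxels
shift_fe = 50.0 / bw_fe   # ~0.03 voxels
```

The ratio of the two bandwidths is exactly Ny, which is why the same off-resonance field that is invisible along FE displaces intensity by several voxels along PE.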

Figure 17.2 Left: Typical phase-encoding (top) and frequency-encoding (bottom) gradients for a nonblipped EPI sequence. Note the vastly different scales on the y-axis. Middle: Profile through a field map (off-resonance field) at the approximate level of cerebellum-orbitofrontal cortex. The “total” gradient by which data are actually encoded will be the superposition of the applied gradient and the off-resonance field. Right: This has an appreciable effect on the encoding in the phase-encoding direction; the superposition (blue solid line) is significantly different from the ideal (black dashed line) gradient. In contrast, the resulting frequency encoding is virtually indistinguishable from the ideal case.

It should also be realized that even if the relationship between the frequency-encoding gradient and any off-resonance field were less favorable than it is, it would still not lead to geometric distortions in that direction. The reason for this is that we (typically) collect data during both the negative and positive lobes of the gradient. Hence, we collect every other echo left to right and those in between from right to left; then we left–right swap the even echoes before reconstruction. The off-resonance field, however, remains the same throughout and will thus have an opposing effect in the even compared with the odd echoes. This will lead to ghosting (in the PE direction) of those areas where there is a strong off-resonance field.
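A minimal one-dimensional simulation makes this concrete. Each k-space sample below stands for one echo (one ky line), and the sign of a constant off-resonance phase alternates between the even (left-to-right) and odd (right-to-left) echoes after time-reversal; the phase value and object are illustrative choices of ours. The alternating modulation splits the reconstruction into the object in place plus a ghost shifted by half the field of view.

```python
import numpy as np

# 1D toy model along the PE axis: a constant off-resonance phase whose
# sign flips between even and odd echoes (because every other echo is
# time-reversed) creates an N/2 ghost rather than a displacement.
N = 64
rho = np.zeros(N)
rho[24:40] = 1.0                            # simple box "object"

S = np.fft.fft(rho)                         # ideal k-space
k = np.arange(N)
phi = 0.3                                   # residual phase error (rad)
S_err = S * np.exp(1j * phi * (-1.0) ** k)  # sign alternates echo-to-echo

img = np.fft.ifft(S_err)

# exp(i*phi*(-1)^k) = cos(phi) + i*sin(phi)*(-1)^k, and a (-1)^k
# modulation in k-space corresponds to a shift by N/2 in image space:
# the result is the object scaled by cos(phi) plus a ghost, shifted by
# half the field of view, scaled by sin(phi).
main = np.cos(phi) * rho
ghost = 1j * np.sin(phi) * np.roll(rho, N // 2)
assert np.allclose(img, main + ghost)
```

The ghost amplitude grows as sin(phi), so only regions with a strong off-resonance field produce visible ghosting, as noted above.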

The argument for the nonblipped sequence holds in the same way for the blipped sequence, whereby if one took the integral of all the blips and distributed that integral evenly over the acquisition period, one would obtain the (weak) gradient derived above.

A different way of thinking about the distortions is useful when discussing multi-reference methods. In Figure 17.3 we see a blipped EPI sequence collecting a small image (14 pixels), and next to it is a time–ky plot where each echo is represented by one point describing when it was acquired (time) and the total phase encoding (the intended one) up to that point (ky). If there were no off-resonance effects at all, the time would be of no consequence. In the presence of an off-resonance field, however, any spin that is not exactly at the resonance frequency will accumulate phase as time goes by, and the accumulated phase will be linearly related to the time. Hence, we can see that the intended (assumed) phase accumulation (ky) is conflated with that caused by off-resonance effects over time, and there is no way to resolve the two from just a single EPI measurement.

Figure 17.3 Schematic description of the encoding part of an EPI sequence for a 14 × 14 image. On the right is a time–ky plot. Each point in the plot shows the value of ky and time (relative to the echo time) for the center of one echo. For an “ordinary” gradient-echo sequence, all the points would fall on a vertical line, indicating that all echoes were collected at the same time after excitation. The distortions in EP images are caused by the sequence tracing a diagonal line in the time–ky plot, allowing time for the phase to evolve due to off-resonance between consecutive echoes.

Susceptibility-Induced Distortions

The word susceptibility in this context refers to how susceptible a given material is to becoming magnetized, i.e., how easily magnetized it is. The materials we usually put in the scanner (flesh and bone) are diamagnetic, which means that they will "resist" magnetization. Therefore, the field inside a diamagnetic object will be slightly lower than the external field; for example, inside a water phantom (or head) in a 1.5T scanner the field will be more like 1.49999T. Whenever there are two materials present (which there always are, one of them being air) with different susceptibilities, the field will have an inhomogeneous distribution. The exact distribution is given by a set of partial differential equations (PDEs) known as Maxwell's equations, the details of which we will not describe here. Inside a homogeneous object, the field will be homogeneous, but the field outside the object will not be, as exemplified by the water phantom in Figure 17.4. However, a human head is not homogeneous. In particular, the ear canals and the sinuses constitute air-filled cavities that will distort the field. Inside the cavities the field will be close to its nominal value (1.5T); inside the tissue at some distance away from the cavities it will be 1.49999T; but in the vicinity of the cavities it will have a nonintuitive distribution, as is evident in the lower panel of Figure 17.4.

Figure 17.4 Top: Water-filled spherical phantom (left panel) and the resulting field (right panel; white: high field, black: low field). Bottom: CT of a human head (black: air; gray: water; white: bone) in the left panel. The resulting magnetic field is shown in the right panel.

As an example, if we scan the subject in the lower panel of Figure 17.4 using an EPI sequence with positive phase-encoding blips (higher field at the front), the bright areas (the ear canals and the nasal cavity) will have a higher field than expected. Thus their phase will evolve more rapidly over time, and in the reconstruction the intensity in those areas will be displaced toward the front. If we scan the same subject with negative blips (higher field toward the back), those same areas will be displaced toward the posterior.

One property of susceptibility-induced distortions is that, because they are a function of the object, or head, in the scanner, they are approximately constant for the duration of the experiment. This is an important property that distinguishes this type of distortion from the next type described.

Eddy Current–Induced Distortions

The term eddy current refers to a current induced in a conductor when the conductor is exposed to a changing magnetic field. In MR, we are rapidly changing magnetic fields all the time, but never more so than when collecting diffusion-weighted images. If we consider Stejskal-Tanner encoding, it consists of two identical gradients with a 180° pulse in between them. To achieve high diffusion weighting with the shortest possible echo time, these gradients should be as strong, and switched as fast, as possible. Switching off the final gradient constitutes just such a "changing magnetic field," and it will give rise to eddy currents in any conductors that are present. While it is not completely clear which conductors contribute most to the effects observed, we (the authors) believe that it is the gradient coils themselves. A schematic depiction of a gradient field caused by eddy currents can be found in Figure 17.5.

Figure 17.5 Schematic description of gradient field caused by eddy currents. When the diffusion gradient (applied in the x-direction in this example) is switched off, the current will first rapidly decay, but then there will be a slow component of the eddy current that we assume will remain approximately constant throughout the encoding part of the sequence. This leads to an “extra,” smaller, gradient that is left on for the duration of the encoding. For visualization, the magnitude of the extra gradient has been vastly exaggerated.

It should be noted that only if the eddy current remains constant over the encoding phase (i.e., an eddy current that is constant for a long time) will it lead to the “simple” geometric distortions described below. A component that changes over time will lead to other effects, such as intensity distortions and a widening of the point-spread function.

The eddy current–induced gradient will combine with the gradients for spatial encoding and thereby modify the traversal through k-space. For example, an eddy current gradient in the x-direction (with positive polarity) will elevate the positive lobes of the readout gradient and lower the magnitude of the negative lobes. This means that for each left-to-right traversal, the sampling will extend slightly further to the right than was intended, and, similarly, each right-to-left traversal will stop short of the intended path. This pattern will accumulate for each traversal, resulting in a sheared sampling of k-space, which will in turn result in an xy-shear in the reconstructed image (see Fig. 17.6). Figure 17.6 also shows how an eddy current–induced gradient in the PE direction leads to a stretch of the object in the PE direction. If the induced gradient has the same polarity as the PE blips it will lead to a zoom-out; with a reversed polarity it leads to a zoom-in.

Figure 17.6 Demonstration of k-space traversal and resulting images for eddy current–induced gradients in the x- and y-directions. The left panels show the intended path through k-space and a distortion-free image. The middle panel demonstrates the effect of an eddy current–induced gradient in the x-direction with higher field to the right. Each left-to-right traversal is slightly too long, and each right-to-left traversal stops short. This effect accumulates over the phase-encoding steps, resulting in a sheared traversal of k-space and a shear in the reconstructed image. The right panel shows the consequences of an eddy current–induced gradient (with the same polarity as the phase-encoding blips) in the y-direction. The traversal will be too rapid, resembling what would happen if the phase-encoding blips were too large. The resulting reconstructed image will be zoomed in the y-direction. On closer inspection one can also see that the intensity in the right image is higher than that in the other two images, a consequence of the signal having been compressed into a smaller area.

An eddy current–induced gradient in the z-direction (slice-select direction) will lead to a constant field-offset within any plane perpendicular to the z-axis. The magnitude of the offset will depend linearly on the distance between the plane and the isocenter of the scanner, but from the perspective of the plane it will be constant. Hence, it will cause the resonance frequency to be slightly different from what we think it is, leading to an accumulation of phase as k-space is traversed in the PE direction. This linear phase accumulation will result in the reconstructed images being translated in the PE direction. The effect is analogous to the "chemical shift" of the fat signal observed in an EPI image without fat saturation, where the frequency for the entire plane is different from that expected, but only for the fat signal. Even though the z-gradient will lead to an addition to the field that is constant within a given plane, it will be different for each plane. If we assume that the z-gradient implies a higher field cranially and that we are using positive PE blips, the superior slices will be translated forward, the slice coinciding with the scanner isocenter will not be translated at all, and the inferior slices will be translated backward. The effect will be a little like pushing a deck of cards in such a way that each card is situated a little ahead of the card below it.

As with susceptibility-induced distortions, the relationship between eddy current–induced gradients and the resulting distortions depends on the polarity of the PE blips. For example, if we have an eddy current–induced gradient in the z-direction that results in a positive addition to the field in the plane we are scanning, this will lead to a translation toward the front in a sequence with positive blips and toward the back in a sequence with negative blips.

Hence, assuming that eddy currents reside primarily in the gradient coils, that they are constant for a longer period than that of the readout, and that they add up without interactions, the expected distortions and transforms for a given diffusion-weighted volume consist of an xy-shear and a y-zoom, both of which should be constant over the volume, and a translation that varies linearly with slice position (i.e., a zy-shear) (Jezzard et al., 1998). This characterization has largely been verified experimentally by Bastin and Armitage (2000). Thus the distortions caused by eddy currents can be succinctly described by just three parameters per scan, yielding image processing–based approaches that are more feasible than for susceptibility distortions where a large number of parameters would be needed to describe the resulting field (i.e., the field in the lower right panel of Fig. 17.4). For a set of n unique diffusion-weighted images, one would expect to need to estimate a total of 3n parameters, but there is reason to believe that even that number can be reduced. Haselgrove and Moore (1996) suggested that the degree of distortion is a linear function of the strength of the preceding diffusion-weighting gradient. Others (e.g., Horsfield, 1999; Andersson and Skare, 2002; Zhuang et al., 2006) have extended this proposal to suggest that if one knows the shear, zoom, and translation caused by a diffusion gradient in the x-, y-, and z-direction, respectively, one can predict the distortions resulting from any arbitrary gradient from superposition. This would imply that only nine parameters are necessary to describe the distortions in any set of diffusion-weighted images. Additional experimental support for the assumption of linearity between the diffusion gradients and the eddy currents can be found in Papadakis et al. (2005).
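The three-parameter model per volume, and the nine-parameter superposition across volumes, can be sketched as follows. The function and parameter names are ours, not the chapter's; in practice the calibration matrix M would be measured or estimated from the data.

```python
import numpy as np

# Sketch of the eddy-current distortion model described above: an x-eddy
# gradient gives an xy-shear, a y-eddy gradient a zoom along y, and a
# z-eddy gradient a translation that grows linearly with slice position
# (a zy-shear). All distortion acts along the PE (y) axis.

def eddy_affine(shear_xy, zoom_y, trans_per_z, z):
    """2D in-plane affine (homogeneous 3x3) for the slice at position z
    (signed distance from the isocenter)."""
    return np.array([
        [1.0,      0.0,    0.0],
        [shear_xy, zoom_y, trans_per_z * z],  # only the y-row is affected
        [0.0,      0.0,    1.0],
    ])

def predict_params(M, g):
    """Superposition (linearity) assumption: the (shear, zoom deviation,
    translation) triple for an arbitrary diffusion gradient g = (gx, gy,
    gz) is a linear combination, M @ g, of the triples measured for unit
    gradients along x, y, and z. M holds the nine calibration numbers."""
    return np.asarray(M, dtype=float) @ np.asarray(g, dtype=float)
```

With no eddy currents, `eddy_affine(0.0, 1.0, 0.0, z)` is the identity for every slice; the zoom entry returned by `predict_params` is a deviation to be added to 1 before being passed in.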

Distortions Caused by Concomitant Fields

When a linear gradient is present (e.g., in the x-direction), we expect the field to be described by $B = B_0 + G_x x$. However, that field does not fulfill the Maxwell equations and is therefore physically impossible (Zhou et al., 1998). Instead, the field will be described by $B = B_0 + G_x x + \frac{G_x^2}{2B_0}z^2 + \dots$, where "$\dots$" indicates higher-order terms of less consequence. Hence, in addition to the linear gradient intended, there will be an additional quadratic "gradient" in the z-direction. From the perspective of a given plane, this will appear as a constant field offset and will cause a translation in the PE direction (cf. the discussion of the eddy current–induced gradient in the z-direction, above). The translation will be proportional to the field in the given plane and will thus have a quadratic dependence on the distance from the isocenter in the z-direction. If uncorrected, it will cause a cylinder with its long axis along z to appear as a banana shape when viewed in a zy-plane. This effect is less pronounced on high-field scanners (because of the $\frac{1}{2B_0}$ factor) and is usually corrected for on the scanner, which means that it is often not noticed or noticeable on a well-tuned system (although see Mattila et al., 2007, for an example of distortions caused by concomitant fields). If it is not adequately corrected for on the scanner, the residual effects will "automatically" be corrected for by some of the methods used to correct for susceptibility-induced distortions, but not by other methods. The effect may be invisible to (and hence not corrected by) methods that measure the field using two scans with different echo times (see below), since the concomitant field will be identical for the two acquisitions. We believe that cases in which EPI data have been distortion corrected using a field map but that still do not appear to match the structural scan in the cranial-most slices (personal observations and anecdotal accounts) can be explained by this phenomenon.

Correcting the Distortions

How Can We Correct Susceptibility Distortions When the ΔB0 Field Is Known?

In principle, there are two ways to correct the data once the ΔB0 field is known. One is the "image-processing" approach in which a scaled version of the field map is used as a voxel displacement map. This map is used to deduce the location in the distorted space (i.e., the EPI) of the pertinent intensity for a given voxel in the nondistorted space (see Fig. 17.7). Note that the field or displacement map needs to be in undistorted space in order to carry out this approach. If the displacements are measured in the distorted space (e.g., using an EPI sequence to measure the field; see below) one will instead have to go into the distorted EPI image and for each voxel ask where to put this intensity, which presents problems when anything other than a voxel center is indicated.

Figure 17.7 Top: Field map (left) and enlarged section of the field map (right) scaled to voxel displacements. Shown for each point in nondistorted space is how far away the intensity that should have been there has been displaced. On the right is a half-finished, distortion-corrected image. The correction is performed by starting with a blank sheet of voxels. For each voxel, one refers to the displacement map to see how far the intensity from that voxel was displaced and then refers to the original EPI image to find the intensity at that point and write that into the corrected image. One then proceeds to the next voxel, repeating this sequence. Typically, the deflected point does not fall onto a voxel center in the original image, in which case the intensity will have to be deduced by interpolation between neighboring points.

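The procedure in Figure 17.7 can be sketched for a single column along PE. `correct_column` and its arguments are hypothetical names of ours; a real implementation would loop over all columns and handle edge cases this sketch ignores.

```python
import numpy as np

# Minimal sketch of the voxel-displacement-map correction for one column
# along the PE direction. The field map (in Hz, defined in undistorted
# space) is scaled to voxel displacements by the bandwidth per pixel.

def correct_column(distorted, fieldmap_hz, bw_per_pixel_hz):
    """For each voxel in undistorted space, look up how far (in voxels)
    its intensity was displaced, then read it out of the distorted
    column by linear interpolation between voxel centers."""
    n = len(distorted)
    shift = np.asarray(fieldmap_hz) / bw_per_pixel_hz  # shift in voxels
    sample_at = np.arange(n) + shift                   # where it ended up
    return np.interp(sample_at, np.arange(n), distorted)
```

As the text notes, the field map must live in undistorted space for this to work; if it were measured with an EPI sequence, it would first have to be unwarped itself.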
The other option is to create a "resampling matrix" by modeling the imaging and reconstruction process (Munger et al., 2000). The relationship between an object ρ(m, n) (in image space) and the collected signal s(k, l) is given by s = Aρ, where s and ρ are vectorized versions of s(k, l) and ρ(m, n), respectively, and where A is an encoding matrix. For most sequences (including EPI, when excluding things like ramp-sampling), the elements of A are of the form [17.1]

$$A_{k+lK,\,m+nM} = e^{2\pi i\,(km/M + ln/N)}$$
As can be seen, this is simply a matrix formulation of the 2D Fourier transform. The image reconstruction is then performed using matrix F, which implements an inverse Fourier transform. If all goes well, FA = I, i.e., the unity matrix, so that the reconstructed image is identical to ρ‎(m, n) (though in practice one would use FFT rather than the matrix formulation). However, in the presence of an off-resonance field, the “true” encoding matrix A′ will have the following elements: [17.2]
$$A'_{k+lK,\,m+nM} = e^{2\pi i\,\left(km/M + ln/N + (\gamma/2\pi)\,\Delta B_0(m,n)\,t_{k,l}\right)}$$
The mapping given by the acquisition and the reconstruction process now becomes K = FA′, where K is no longer (necessarily) the unity matrix. Using some simplifying assumptions (see Munger et al., 2000, for details), the matrix K can be partitioned into a set of “column-wise” K_i matrices, where each matrix maps a single “column” (a line along the PE direction). The K_i matrices are small (N × N, where N is the number of voxels in the PE direction), hence it is reasonable to use [17.3]

\hat{\rho}_i = K_i^{-1} \rho_i

to obtain an estimate, \hat{\rho}_i, of the true intensity profile along the ith column, given the observed intensity ρ_i and the matrix K_i^{-1}, which can easily be calculated using Eq. 17.2 given knowledge of the field ΔB0(m, n). The relationship between the field and a matrix K_i is shown in Figure 17.8. Note that when defined as above, K will be a complex matrix and should be applied to complex image data. This will not work for diffusion-weighted images because of the additional phase introduced by the diffusion gradient, and the K-matrix will have to be calculated in a slightly different manner (see Andersson et al., 2003, for details).
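As a sketch (not the authors' implementation), a single column-wise K-matrix can be built directly from Eq. 17.2 for a given off-resonance profile. The dwell time between PE lines and the sign conventions here are assumptions for illustration.

```python
import numpy as np

def k_matrix(field_hz, dwell_time):
    """Column-wise K-matrix K_i = F A' along the PE direction.

    field_hz: off-resonance (in Hz, i.e. (gamma/2pi)*dB0) at each of the
    N voxels along the column; dwell_time: time between successive PE
    lines (s).  A' is the off-resonance-corrupted encoding matrix of
    Eq. 17.2 and F the inverse DFT used for reconstruction.
    """
    N = len(field_hz)
    n = np.arange(N)
    t = n * dwell_time                       # acquisition time of PE line l
    A = np.exp(2j * np.pi * (np.outer(n, n) / N + np.outer(t, field_hz)))
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / N
    return F @ A
```

With a zero field K is the identity, and correcting an observed profile amounts to solving Eq. 17.3, i.e. applying K⁻¹ (in practice with some regularization when K is poorly conditioned).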

Figure 17.8 The middle panel shows a profile (in Hz) along the dashed line in the field map in the left panel. The right panel shows the modulus of the resulting K-matrix, which demonstrates the mapping between the “true” and the observed intensity function. Under ideal conditions, the K-matrix is the identity matrix, i.e., will have unity values along the diagonal. The curved deviations from the diagonal in the K-matrix shown here indicate the presence of distortions.


How Do We Measure the Δ‎B0 Field?

As can be seen in Figure 17.3, there is a conflation between time and ky in the EPI sequence. This means that without additional information it is not possible to distinguish between phase that has accumulated from ky and the effects of off-resonance field Δ‎B0(x, y, z) over time. However, if we perform two acquisitions with different echo times, the phase accumulated from ky will be identical for the two cases, and any phase differences between the two acquisitions can be attributed to Δ‎B0(x, y, z). Given a field Δ‎B0(x, y, z), we would expect the observed difference in phase between the two acquisitions to be [17.4]

\Delta\Phi(x,y,z) = 2\pi\gamma\,\Delta B_0(x,y,z)\,\Delta t
and hence it is easy to calculate ΔB0(x, y, z) from a map ΔΦ(x, y, z) of observed phase differences and knowledge of the echo-time difference Δt. The two scans do not necessarily have to be EPI acquisitions but can be any two scans where the accumulated phase due to ΔB0 differs in a predictable manner. Hence, it can be two gradient-echo images, EPI or non-EPI, or two spin-echo images with a different asymmetry between the two. If the field map is not based on EPI acquisitions, it can be used directly to resample the EPI images (after the values in the map have been scaled to a pixel-displacement map), since the map will be in the undistorted space. With an EPI-based field map, the resulting pixel-displacement map must be inverted before it can be used to resample the data.
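A minimal sketch of this calculation, assuming (as in the text) that γΔB0 is expressed in Hz and that |ΔΦ| < π everywhere (the wrap problem is discussed in the next section); the scaling to a pixel-displacement map uses the standard relationship that a voxel f Hz off resonance is displaced by f times the total PE readout duration:

```python
import numpy as np

def fieldmap_hz(phase1, phase2, delta_te):
    """Off-resonance map in Hz from two phase images acquired with an
    echo-time difference delta_te (s), following Eq. 17.4.  Taking the
    angle of the complex ratio keeps the difference in (-pi, pi]."""
    dphi = np.angle(np.exp(1j * (phase2 - phase1)))
    return dphi / (2 * np.pi * delta_te)

def shifts_in_voxels(field_hz, echo_spacing, n_pe):
    """Scale a field map to a pixel-displacement map along PE: the total
    PE readout lasts n_pe lines times echo_spacing seconds."""
    return field_hz * echo_spacing * n_pe
```

For example, a 100 Hz offset with 64 PE lines and 0.5 ms echo spacing corresponds to a 3.2-voxel displacement.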

What is a Good Echo-Time Difference?

Let us assume we have two images acquired with echo times τ1 and τ2 and we observe a phase difference ΔΦ. It would seem straightforward to calculate ΔB0 = ΔΦ/2πγ(τ2 − τ1). However, the observed ΔΦ will have been calculated by an arctan operation and will have a range −π < ΔΦ < π. Thus, there is ambiguity in ΔΦ, and we cannot know if the true value really is ΔΦ or if it should be ΔΦ + n2π, where n can be any integer. This implies that we should keep the echo-time difference small enough so that the differential phase evolution is never greater than ±π, or perhaps even ±π/2 to be on the safe side, so that we know that the “true” value really is ΔΦ. At 1.5T, the susceptibility-induced ΔB0 field will typically be within a range −100 Hz < ΔB0 < 100 Hz (strictly speaking, these values refer to γΔB0); from that one would conclude that Δt should be below 5 ms.

However, it is also the case that the phase difference between the two images represents a noisy measure of ΔΦ‎, and by using a small value for Δ‎t we are effectively making the desired signal and, thus, the SNR smaller. There is a tradeoff between keeping Δ‎t small to avoid phase wraps and allowing it to be larger to obtain higher precision in the estimate of Δ‎B0.

It should also be noted that ΔΦ‎ will reflect many sources of off-resonance, including chemical shift, whereas frequently one is interested in isolating the component caused by susceptibility differences. One approach for achieving this would be to saturate the fat prior to collecting the images for the phase map. Another approach is to make sure that Δ‎t is such that fat and water have the same relative phase for τ‎1 and τ‎2 (see Fig. 17.9).
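The in-phase condition is simple to check numerically. This is a sketch; the 220 Hz fat–water shift at 1.5T is the value quoted in the text.

```python
import numpy as np

def fat_phase(delta_te, fat_shift_hz=220.0):
    """Relative fat-water phase (radians, wrapped to (-pi, pi]) accrued
    over an echo-time difference delta_te, for a given chemical-shift
    frequency difference (about 220 Hz at 1.5 T)."""
    return np.angle(np.exp(2j * np.pi * fat_shift_hz * delta_te))
```

With Δt = 2.25 ms the fat signal is roughly π radians out of phase between the two acquisitions; with Δt = 1/220 s ≈ 4.5 ms it is back in phase and contributes no spurious phase difference.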

Figure 17.9 Signal from water (blue arrow), signal from fat (red arrow), and the resulting total signal (black). Depicted here is the situation in a voxel where there is no off-resonance field and the phase of the water signal is hence identical for the three echo times (specified relative to the first echo time). It is further assumed that the field is 1.5T, so the fat–water difference is 220 Hz. With a Δ‎t of 2.25 ms, the signal from the fat would be π‎ radians out of phase for the two acquisitions, resulting in an apparent evolution of the phase even in the absence of an off-resonance field. If Δ‎t is 4.5 ms (or an integer multiple thereof), the fat signal will have the same phase at both echo times and no phase difference will be observed.


How do we resolve this compromise? The black art of phase unwrapping

One approach to resolving the tradeoff between SNR and the risk of phase wraps is to simply accept a poor SNR and use a small value for Δ‎t. An argument for this approach is that in areas where this leads to very poor SNR (areas with small Δ‎B0), the distortions are already small. Another option is to perform “phase unwrapping” on the observed ΔΦ‎(x, y, z) maps, i.e., detecting and correcting any n2π‎ jumps in the data (see Fig. 17.10).

Figure 17.10 Left: The field and “true” phase difference for two scans, with 10 ms echo-time difference, as a function of position along some axis. Middle: What the observed phase difference would actually look like. There is a portion where the phase has been “wrapped” into the −π to π range. This could be easily rectified by traversing the “phase-map” from left to right and, whenever the absolute difference between two consecutive points is greater than π, adding or subtracting 2π. Right: The situation when part of the “field-map” has poor SNR such that the phase is essentially a random value between −π and π. If one now traverses the map from left to right, one will hit the area with poor SNR. As one exits it on the other side, one will have performed a random number of additions or subtractions of 2π that will propagate through the remainder of the map.


The problem with this approach is that it means having to solve a highly nonlinear integer-programming problem, which can be difficult to do in a robust and reliable manner. The general strategy of phase unwrapping (see Fig. 17.10) is to start at some arbitrary point (assuming that there has been no wrapping there) and, as one moves to neighboring points, look for phase differences whose absolute value is larger than π. On detecting such a jump, one assumes that it has been caused by a wrap and adds or subtracts 2π depending on the sign of the difference. The difficulty with this strategy is that as soon as a single error (i.e., wrapping when there is no wrap, or vice versa) has been made, that error will be carried forward to every subsequent voxel. The areas where there is a risk of committing errors are those where the signal is very small, resulting in a phase that is essentially some random value between −π and π. There are two strategies for dealing with this issue. One set of methods (see, e.g., Cusack and Papadakis, 2002; Hutton et al., 2002) involves visiting the voxels with poor SNR in the image volume last, thus ensuring that any error will affect as few voxels as possible.
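In one dimension, the path-following scheme described above is essentially what NumPy's `unwrap` does; an explicit sketch:

```python
import numpy as np

def unwrap_1d(phi):
    """Unwrap a 1D phase profile: whenever the difference between two
    consecutive samples exceeds pi in magnitude, assume a wrap occurred
    and add or subtract 2*pi to everything that follows."""
    out = np.array(phi, dtype=float)
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        if d > np.pi:
            out[i:] -= 2 * np.pi
        elif d < -np.pi:
            out[i:] += 2 * np.pi
    return out
```

As the text stresses, a single wrong decision at a noisy sample corrupts every sample that follows it, which is why the order in which voxels are visited matters so much.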

Figure 17.11 demonstrates the general principle of unwrapping along different paths and how this can affect the risk of errors in the end result. More formally, these methods attempt to assess the SNR at each and every voxel, start the unwrapping at a voxel with high SNR and then let it proceed outward from that voxel along the directions where the SNR is highest.

Figure 17.11 Schematic explanation of how the order in which different voxels are visited will affect the risk of committing errors. The outlines indicate the edge of the brain, within which there is strong signal and hence good SNR. Outside the outline the signal is weaker, thus the SNR is poorer. Let us assume that one wants to unwrap the phase at point B, having started at point A. The path indicated by the red arrow passes through an area of poor SNR, and there is a risk that the phase is wrong by some multiple of 2π‎. By contrast, the path indicated by the green arrow does not pass any area of poor SNR when going from A to B.


Jenkinson (2003) has opted for a different strategy, outlined in Figure 17.12. It is based on merging regions rather than performing unwrapping along some path or front. First, a set of regions is created by partitioning the observed phase angles into ranges of, for example, π/3 and performing a connected-component labeling. These regions are then merged, and the decision to wrap one region relative to the other when merging is based on the average difference between voxels on the two sides of the surface that separates the regions. This averaging increases the SNR and decreases the risk that an error will occur. The author finds the region-merging strategy to be more robust than the path-based techniques.

Figure 17.12 Schematic explanation of phase-unwrapping algorithm suggested by Jenkinson (2003). The brain is divided into a set of regions within which there are no wraps. Two regions are then selected for merging. By selecting two large regions, one ensures that the regions represent areas with high SNR (low-SNR areas will result in many small regions). Let us say one wants to merge regions A and B. One will then compare the phases of adjacent voxels across the region border (the red area). If the absolute average difference is greater than π‎, 2π‎ will be added or subtracted to the smaller region (B) prior to merging them into a single region. In this way, the decision to wrap the phase is based on a group of voxels rather than on a single, possibly noisy, difference between two voxels. Once A and B have been merged, the new region (A∪B) is merged with region C, etc., until all regions have been merged into a single (unwrapped) region.


Another trick to make unwrapping easier was suggested by Reber et al. (1998), who collected a series of gradient-echo EP images with different echo times. The echo-time spacing is chosen in such a way that it is unlikely to find a wrap between two consecutive acquisitions, even in regions with the strongest off-resonance field. Phase unwrapping can then be performed along the temporal (echo-time) dimension, and an error in one voxel will not affect any other voxels. Once the “time series” has been unwrapped, the off-resonance frequency is calculated by linear regression of phase on echo time. The disadvantage of this method is that the phase information is collected in the distorted space, which, as we have already seen, is not what one wants.
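A sketch of this temporal scheme (echo times and array shapes here are illustrative): each voxel is unwrapped independently along the echo dimension, and the field is the slope of phase versus echo time divided by 2π.

```python
import numpy as np

def field_from_multiecho(phases, echo_times):
    """Per-voxel off-resonance (Hz) from phase images at several echo
    times.  phases has shape (n_echoes, ...); the echo spacing is assumed
    small enough that no wrap occurs between consecutive echoes, so
    np.unwrap along axis 0 fixes each voxel on its own.  The slope of
    phase versus echo time is 2*pi times the field."""
    phases = np.asarray(phases)
    unwrapped = np.unwrap(phases, axis=0)
    flat = unwrapped.reshape(len(echo_times), -1)
    slope = np.polyfit(np.asarray(echo_times), flat, 1)[0]
    return (slope / (2 * np.pi)).reshape(phases.shape[1:])
```

Because the unwrapping runs along echo time, an error at a noisy voxel stays confined to that voxel instead of propagating spatially.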

Chen et al. (2006) have suggested a scheme in which the multi-echo-time field maps are collected using an “EPI” sequence without PE blips, but each EPI train of n echoes is preceded by a “traditional” PE gradient. Thus the echoes collected in a given train can be seen as corresponding to the “same echo” in n acquisitions with different echo times. Re-sorting of the data yields n full k-spaces collected for n different echo times. Within each k-space, every echo is collected at the same time after the excitation, hence does not exhibit any distortions in the PE direction.

Calculating the field from a known susceptibility map

One way to measure the field would be to calculate it, provided one knows the spatial distribution of magnetic susceptibility in the object. It is feasible to segment the image in the lower left panel of Figure 17.4 into bone, water, and air, then assign known values for the susceptibility of the three materials and calculate the field using Maxwell's equations, assuming the field is identical to b0 on the surface of some sufficiently large cylinder that encompasses the object with a good margin (Jenkinson et al., 2004). The difficulty with this approach is that even though it is easy to segment a computed tomography (CT) image (Fig. 17.4) into bone, water, and air, it is considerably more difficult with an MR image (where bone and air will have similar intensities), and if the options are to collect additional field-map data or collect CT data, the former will typically be more practical.

Multi-reference methods for estimating and correcting susceptibility distortions

A large number of methods are available for estimating and correcting distortions using a set of reference scans to characterize off-resonance effects (Robson et al., 1997; Wan et al., 1997; Chen and Wyrwicz, 1999, 2001; Schmithorst et al., 2001; Zaitsev et al., 2004; Zeng et al., 2004). These methods are often used to apply the correction directly to the k-space data, which is difficult with DTI data, as the diffusion weighting will introduce large phase shifts. In principle, this could be resolved by acquiring a separate set of reference scans for each diffusion direction, but that would render the methods very impractical. It is nonetheless of interest to understand the nature of the information that the multiple reference scans provide, especially since one can use this information to perform the corrections on the magnitude image data where the phase errors are of no consequence. To understand the information present in the reference scans, we need to take another look at the time–ky plot in Figure 17.3 and recall that the origin of the distortions can be explained in terms of the conflation of ky (the history of the PE gradient) and time since the beginning of the acquisition. This means that one cannot distinguish between “true” phase encoding and accumulation of phase due to time (and off-resonance frequency). The idea behind the multi-reference methods is to acquire additional information so that time and ky can be properly disentangled. To better understand this process, we will first examine the time–ky plots for the gradient echo and the EPI-based field-map acquisitions discussed above (Fig. 17.13).

Figure 17.13 Upper row: Two gradient-echo sequences with different echo times (TE) (hence together constituting a field-map sequence) and the corresponding time–ky plot. Lower row: The same information for an EPI field-map sequence. Note that there is no conflation between time (and hence evolution of off-resonance over time) and ky for the gradient-echo sequence, resulting in an image with no distortions in the phase-encoding direction. The addition of another echo time means that a field map can be reconstructed directly in undistorted space. The EPI sequence, by contrast, traverses the time–ky plane along the diagonal, hence one cannot completely disambiguate off-resonance effects and ky, resulting in a distorted image. The additional echo time means that these can be disentangled, but at the cost of additional processing compared with the gradient-echo sequence, since straightforward reconstruction will result in the field map being in distorted space. Both of these scenarios suffer from a lack of ability to directly disambiguate between a φ and a φ ± 2nπ phase evolution, as described in the main text.


The various multi-reference methods (and some that are not conceived as such) all work by acquiring additional data to “fill out” the time–ky plane. Because there are only so many ways to do this, many of these methods use the same (or very similar) acquisition schemes (Fig. 17.14).

Figure 17.14 This figure demonstrates how the goal of the various multi-reference methods is to fill out the time–ky plane with sufficient information to correct for distortions (and often for other artifacts as well). Within each group, the referenced methods differ in how many reference scans are actually acquired and how they are used. For example, among the methods in category A, Robson et al. (1997) will typically use fewer phase-preparation steps than there are phase-encoding steps in the EPI sequence (acknowledging the limited magnitude of the distortions) whereas Chen and Wyrwicz (2001) require 2Ny scans with different phase preparations (where Ny is the number of phase-encoding steps). In category B, Chen et al. (2006) use only one additional scan. The various methods also differ in how they use the information.


In many of these methods, as originally described in the literature, the corrections are applied directly to the k-space data. This does not work directly for diffusion-weighted data, because typically the data are translated and out of focus in k-space. Thus, if one were to apply the information acquired with a non–diffusion-weighted reference scan, one would be comparing apples to oranges. However, the three basic acquisition schemes (A, B, and C) all contain the information needed for calculating the off-resonance field in undistorted space and can thus be used to correct for distortions using either of the two methods described above for performing a correction in image space. In this case, the main advantage of the multi-reference schemes is that phase unwrapping can be performed in 1D (along the echo-time dimension).

How to Assess Eddy Currents Using Additional Measurements

Several methods have been suggested for assessing the eddy currents on the basis of additional measurements. One suggestion (Jezzard et al., 1998) was to modify the sequence so that, along with the usual EPI acquisition, two additional acquisitions would be performed (each after its own excitation and diffusion weighting). One of the additional acquisitions consists of playing out the “usual” EPI frequency-encoding gradients without any phase-encoding blips, enabling estimation of both a linear gradient in the frequency-encoding direction and the b0-offset component by comparing consecutive echoes. The same procedure is then repeated for the phase-encoding direction (i.e., playing out the frequency-encoding gradients in the phase-encoding direction), yielding the linear gradient in the phase-encoding direction and a second estimate of the b0-offset. This method was originally suggested and implemented within a segmented EPI scheme, which means that for an eight-interleave acquisition the two additional measurements would cause only a 25% increase in total acquisition time. With the faster gradients currently available, most workers opt for single-shot EPI, in which case the suggested strategy would cause a threefold increase in time, making the method impractical. It could, however, be combined with the assumption of a linear relationship between diffusion gradients and eddy currents (Horsfield, 1999). The additional measurements would then only have to be performed for three different gradients (from those that constitute the full DTI data), and linear combinations of these values would be used for the remaining directions. To our knowledge, this has not been attempted. A similar method was suggested by Calamante et al. (1999), although it is effectively only half of the method suggested by Jezzard et al. (1998) and only corrects the b0-offset.

Another suggestion for assessing eddy currents is to estimate the effects of eddy currents directly from phantom images collected with the same sequence that will be used for the intended studies (Horsfield, 1999; Bastin and Armitage, 2000). Again, one can use the assumption of linearity to extend the results beyond the actual measurements (Horsfield, 1999). The question here is the frequency at which these phantom calibrations would have to be performed. The available data (Bastin and Armitage, 2000) suggest that the distortions change slowly over time (the time scale here being weeks to months), although it may be unsafe to exclude the possibility of sudden changes. Furthermore, it is not clear whether the results from the calibration scan generalize to a different FOV and/or slice plane orientation.

Chen et al. (2006) have also suggested collecting phantom data and measuring field maps for each diffusion gradient. The field maps are measured using a clever EPI-like scheme that results in multi-echo-time images in nondistorted space. The multiple echo times facilitate phase unwrapping in 1D (over the different echo times). The phantom-based eddy-current field maps are then combined with an individually acquired b0 field map (for susceptibility-induced inhomogeneity) to simultaneously correct for both sources of distortion.

Papadakis et al. (2000) have developed a sequence for measuring the b0 component and eddy current–induced linear gradients in the x-, y-, and z-directions following a diffusion gradient applied in some direction i (where i is x, y, or z). They suggest performing this calibration measurement in a phantom and then using it to either set the scanner gradient pre-emphasis parameters (Papadakis et al., 2000) or correct subsequently collected data (Papadakis et al., 2005).

Image Processing–Based Methods for Estimating the Δ‎B0 Field

An alternative to explicitly measuring the ΔB0 field is to try to estimate it directly from the distorted images. This is performed with techniques that have long been used for image registration. These techniques are based on a model for transformation of the image and a cost function that assesses the agreement between the transformed image and a “reference” image that is typically not subject to any transformation. A simple, and contrived, example would be a transform consisting of left–right and up–down translations and a sum-of-squared-differences cost function (Fig. 17.15).

Figure 17.15 The stationary image is shown in red, and the image being transformed is shown in blue. When there is a complete overlap (match) the resulting image will look purple. An incomplete overlap will be signaled by blue and/or red along the edges of the brain. The four panels demonstrate a complete match (red frame), a 10 mm mismatch in the y-direction (cyan frame), a 10 mm mismatch in the x-direction (green frame), and a 10 mm mismatch in the x- and y-directions (magenta frame). The resulting values for the negation of the sum-of-squared differences are also shown. The negation is used here to aid visualization. A “registration” for this particular model would consist of starting at some unknown point on the surface shown and then taking uphill steps until the summit was reached. Had we included a rotation in our transformation, the search space would have been three-dimensional. The commonly used rigid-body model for movement correction of the 3D case has a six-dimensional search space (three translations and three rotations).

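The two-parameter toy registration of Figure 17.15 can be sketched as an exhaustive search over integer shifts (illustrative only; real registration packages use gradient-based optimization rather than brute force, and the wrap-around behavior of np.roll is a simplification):

```python
import numpy as np

def best_translation(ref, mov, max_shift=5):
    """Search over integer x/y translations for the shift of mov that
    minimizes the sum-of-squared differences against ref, i.e. the
    summit of the (negated) cost surface in Fig. 17.15."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            cost = np.sum((ref - shifted) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

With a rotation added, the search space becomes three-dimensional, and for the rigid-body 3D case six-dimensional, as noted in the caption.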

The task of estimating the distortions in a diffusion-weighted image is generally more complicated than finding the rigid-body transformation between two images in an fMRI time series. First, it hinges on the existence of a “reference” image that is free of distortions. While such an image will often exist (e.g., a T1-weighted structural image), typically it will have a contrast that is substantially different from that of the distorted image to be corrected. Second, the transformation has many more degrees of freedom than that of a rigid-body transform. Essentially, one needs to estimate a complete field/displacement map, thus every voxel value in that field needs to be estimated. The degrees of freedom can be reduced somewhat by representing the field as a linear combination of some basis functions (see Fig. 17.16) but will still be in the thousands, which means that the search landscape of Figure 17.15 will be several thousand–dimensional.
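A sketch of the basis-function idea behind Figure 17.16, with isotropic Gaussian bumps standing in for the 2D B-splines (the spacing and width chosen here are arbitrary illustrative values):

```python
import numpy as np

def field_from_coeffs(coeffs, size=64):
    """Build a size x size field as a weighted sum of a coarse grid of
    tensor-product basis functions; coeffs (e.g. a 3 x 3 array) holds the
    weights, so the whole field is parameterized by just a few values."""
    ky, kx = coeffs.shape
    centres_y = np.linspace(0, size - 1, ky)
    centres_x = np.linspace(0, size - 1, kx)
    width = size / max(ky, kx)
    y = np.arange(size)[:, None]
    x = np.arange(size)[None, :]
    field = np.zeros((size, size))
    for i, cy in enumerate(centres_y):
        for j, cx in enumerate(centres_x):
            field += coeffs[i, j] * np.exp(
                -((y - cy) ** 2 + (x - cx) ** 2) / (2 * width ** 2))
    return field
```

Using more basis functions (and hence more coefficients) increases the spatial resolution of the representable field, at the price of a larger search space.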

Figure 17.16 Demonstration of how a 64 × 64 field map can be represented by just nine values. These values represent the weights of the basis functions, in this example consisting of equidistantly spaced 2D B-splines. The resolution of the field will depend on how many functions and coefficients are used to represent it.


The issue of the contrast in the reference image being different from that in the distorted image can be addressed by using a different cost function, such as mutual information (MI) (Maes et al., 1997). MI is a measure of statistical dependence between the two images and hence makes very few assumptions about the relationship between the intensities in the two images; MI has been used successfully for distortion correction (Studholme et al., 2000). Its downside is that the lack of assumptions means that the cost-function landscape (cf. Fig. 17.15) is relatively flat and difficult to search. By contrast, the sum of squared differences makes strong assumptions about the two images (that they are identical except for the distortions), and if these are not fulfilled, it is unlikely that one will find the “true” distortions. It has been suggested (Kybic et al., 2000; Ardekani and Sinha, 2005) that the images be preprocessed to make them more similar before using sum-of-squared differences. We (Andersson et al., 2008) have had some success with a cost function that explicitly models a bias field and a nonlinear relationship between the intensities (see Fig. 17.17).
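For intuition, mutual information can be computed from the joint intensity histogram of the two images; a minimal sketch (the binning choice is arbitrary, and practical implementations add interpolation and normalization refinements):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between the intensities of two images, from
    their joint histogram.  It is large whenever one image's intensity
    predicts the other's, with no assumption that the relationship
    between the intensities is linear or even monotone."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A deterministic but nonlinear intensity mapping between two images yields a high MI, whereas two unrelated images yield a value near zero, which is exactly why MI tolerates the contrast differences within a DTI data set.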

Figure 17.17 Left panel: T1-weighted structural scan, with points with the strongest edge information overlaid in red. Middle panel: FA image acquired in the same subject after rigid-body alignment to the structural scan. The mismatch in the area of the anterior cingulate cortex and the corpus callosum is obvious. This is a typical observation and is caused by the way the magnetic field is perturbed by the air–tissue interface in the sinus located inferior to the imaging plane. The right panel shows the same FA image after registration using the FMRIB nonlinear registration tool (FNIRT).


It would be very useful to have an image registration–based method that can estimate and correct for distortions only on the basis of data that are typically collected as part of a DTI experiment (i.e., no modifications to sequences or collection of extra data). As yet, no such method has been extensively used or validated.

Image Processing–Based Methods for Estimating Eddy Current–Induced Distortions

The considerations for image processing–based estimation and correction of eddy current–induced distortions are similar to those for Δ‎B0, although the transformation model is less complex and contains fewer parameters. As described earlier, the transforms should theoretically consist of an xy-shear, a y-zoom, and a zy-shear, all part of an affine (linear) transform. However, the problem of finding a reference image without distortions that has the same contrast as that of the distorted images remains. An important consideration here is that the reference image should be as affected by Δ‎B0 distortions as the images to be corrected for EC distortions, a criterion fulfilled, e.g., by the T2-weighted EP image.
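The resulting transform can be written as an affine matrix in which only the phase-encoding (y) row deviates from the identity. A minimal sketch in homogeneous coordinates (parameter names are ours):

```python
import numpy as np

def eddy_affine(shear_x, zoom_y, shear_z):
    """Affine matrix for the eddy-current distortion model: a residual
    x-gradient shears y with x, a residual y-gradient zooms y, and a
    residual z-gradient shears y with z. Only the phase-encoding (y)
    coordinate is affected."""
    A = np.eye(4)
    A[1, 0] = shear_x   # xy-shear
    A[1, 1] = zoom_y    # y-zoom
    A[1, 2] = shear_z   # zy-shear
    return A

# A voxel at (x, y, z) maps to y' = shear_x*x + zoom_y*y + shear_z*z,
# while x and z are unchanged.
p = eddy_affine(0.01, 1.02, -0.01) @ np.array([10.0, 20.0, 30.0, 1.0])
```

Estimating these three parameters per slice (plus a translation along y from any B0 eddy-current offset) is a far smaller problem than estimating a full voxel-wise field.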

It has been suggested that an additional set of images with low b-values be collected, which makes them more similar to the T2-weighted images, thereby enabling the use of simple image similarity criteria, such as cross-correlation (CC) (Haselgrove and Moore, 1996). The distortions estimated from the low b-value scans are then extrapolated and applied to the “normal” high b-value diffusion-weighted images.
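Cross-correlation is straightforward to compute; a minimal sketch (numpy; the function name is ours) of the normalized version, which is insensitive to a global scaling and offset of the intensities:

```python
import numpy as np

def normalized_cc(img1, img2):
    """Normalized cross-correlation; equals 1 for images that are
    identical up to an intensity scale and offset."""
    a = img1.ravel() - img1.mean()
    b = img2.ravel() - img2.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```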

Another approach is to use CSF suppression when collecting the T2-weighted EP images (Bastin, 2001), thereby rendering them more similar to the diffusion-weighted images. This approach can be implemented by either collecting an additional CSF-suppressed T2-weighted image purely for registration purposes or using CSF suppression for all scans. The latter approach will increase the length of the experiment.

Bodammer et al. (2004) have suggested collecting pairs of diffusion-weighted images with opposite diffusion-gradient polarity (e.g., g1 = [1 0 0] and g2 = [−1 0 0]). The resulting images will have the same information content pertaining to diffusion, but with distortions in antiparallel directions (under the assumption that the distortions are a linear function of the diffusion gradient). This information can then be used to estimate the distortion analogously to the blip-up–blip-down methods discussed below. This process entails collecting each diffusion direction twice, which will make the experiment longer. Both images are used in the estimation of the tensor, however, and thus also contribute to the precision of the estimate.

Finally, Andersson and Skare (2002) calculated an initial estimate of the tensor parameters from the distorted images and used this to “simulate” synthetic diffusion-weighted images for the different diffusion gradient directions used in the experiment. After this calculation, each actual diffusion-weighted image is registered to the corresponding simulated image, and the corrected images are used to recalculate the tensor parameters. This process is repeated until it has converged.
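The "simulation" step follows directly from the tensor model, S(g) = S0 exp(−b g^T D g). A minimal sketch (numpy; the function name is ours):

```python
import numpy as np

def predict_dwi(s0, D, bvecs, bval=1000.0):
    """Predict diffusion-weighted signals from a tensor:
    S(g) = S0 * exp(-b * g^T D g)."""
    g = np.atleast_2d(bvecs)
    adc = np.einsum('ni,ij,nj->n', g, D, g)   # apparent diffusivity along each g
    return s0 * np.exp(-bval * adc)

# For an isotropic tensor, every direction predicts the same signal.
D = 0.7e-3 * np.eye(3)                        # diffusivity in mm^2/s
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
sim = predict_dwi(100.0, D, g)
```

Each observed image is then registered to its simulated counterpart, sidestepping the contrast mismatch, and the tensor fit and registration are alternated until convergence.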

All of the methods described here depend on finding or creating pairs of images with similar contrast to aid in registering them. By contrast, Rohde et al. (2004) used mutual information, which is more feasible for eddy current–induced distortions than for Δ‎B0 since one needs to estimate considerably fewer parameters. MI is then used to register the diffusion-weighted images directly to the T2-weighted EP image that is also collected as part of the experiment.

The methods described above are summarized in Figure 17.18.

Figure 17.18 Summary of different image processing–based methods for correcting eddy current (EC)–induced distortions. With one exception, they all attempt to find a pair of images with as similar contrast as possible and differ only with respect to the EC-induced distortions.


Combining Acquisition and Processing Methods for Distortion Correction

One set of methods requires both the acquisition of additional data and subsequent signal and image processing. These methods are based on manipulating the EPI sequence in such a way that a given off-resonance field will affect two consecutive acquisitions in different ways. The most common manipulation is to reverse the polarity of the phase-encoding blips, causing k-space to be traversed from bottom to top in one of the acquisitions and from top to bottom in the other. This will cause the (image) intensity in a voxel with appreciable off-resonance field to be deflected upward in one of the resulting images and an identical distance downward in the other image. The "true image" can then be derived by warping the two images to a point halfway between them. This method was originally suggested (Chang and Fitzpatrick, 1992) for correcting spin-echo (i.e., non-EPI) images in the frequency-encoding direction and was subsequently adapted to EPI (Bowtell et al., 1994). Initial work attempted to estimate the mapping (field) independently for each set of two columns along the direction to be corrected. By not recognizing that the underlying field is continuous (and typically smooth), the correction was rendered more sensitive to noise and edge effects, which prompted new approaches for finding the warps (Kannengießer et al., 1999). Andersson et al. (2003) suggested modeling the field as a linear combination of smooth basis functions, which allowed the inclusion of subject movement into the correction. This is a potentially important addition, since any subject movement between the two acquisitions will otherwise be interpreted as part of the off-resonance field, something that is not always obvious from looking at the resulting images (see Fig. 17.19).
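The halfway warping can be sketched for a single phase-encode column as follows (a deliberately simplified numpy illustration that assumes the displacement field is already known and ignores the Jacobian intensity modulation that a full implementation must also undo):

```python
import numpy as np

def unwarp_pair(up, down, disp):
    """Halfway correction along one phase-encode column.

    A feature at true position y appears at y + disp in the blip-up
    profile and at y - disp in the blip-down profile, so sampling each
    profile a displacement away from y (in opposite directions) recovers
    the undistorted profile; averaging the two restores SNR."""
    y = np.arange(len(up), dtype=float)
    corr_up = np.interp(y + disp, y, up)      # undo the upward deflection
    corr_down = np.interp(y - disp, y, down)  # undo the downward deflection
    return 0.5 * (corr_up + corr_down)
```

In the methods discussed here the displacement field itself is the unknown; it is estimated precisely by searching for the field that makes the two oppositely warped images agree.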

Figure 17.19 Demonstration of the potential danger of not including the possibility of subject movement when estimating the field/displacement map from top-down and bottom-up data. The leftmost panel shows an EPI image acquired using top-down phase encoding (blips with negative polarity). This is combined with an image acquired with bottom-up phase encoding (second image, top row) to estimate a field/displacement map (third image, top row). For this case (when the subject has been absolutely still between the two acquisitions), subject movement need not be included as part of the model. The second image of the lower section shows a bottom-up image where the subject has clearly moved since the top-down acquisition. We can use this to estimate the field if we include subject movement in the model; it can be seen (middle row) that the estimated field is nearly identical to what was estimated from the data without movement. If we use the same data without including subject movement in the model, a very different field will be estimated (lower row), since all the differences (between the two images) caused by the movement will now have to be accounted for by the field. Surprisingly, if the “faulty” field is used to correct the EPI image, it will look reasonable, even though the field is obviously vastly wrong. This is a danger in the sense that the corrected image is “wrong” (consider, e.g., the ventricles), but not obviously so to the observer, who may well believe that the correction was successful.


It should be stressed that the use of two images with different phase-encoding blips is a special case of a more general principle. For a given EP acquisition, there is a fixed relationship between an (unknown) off-resonance field and the distortions in the image. By changing one or more parameters of the acquisition, the relationship between Δ‎B0(x, y, z) and the distortions will change, and the pair of acquisitions will carry information that allows one to estimate the field. One might acquire one scan with phase encoding that is posterior-to-anterior and one that is left-to-right, and estimate the distortions from that (Fig. 17.20). One might even acquire two scans with the same direction and polarity phase encoding but with different dwell times, thereby making the distortions larger in one of the acquisitions (Fig. 17.20). This strategy (using acquisitions whose phase encoding differs in a non-180° way) has, for example, been used to correct a set of blades from a short-axis readout PROPELLER (SAP-EPI; Skare et al., 2006) acquisition, thus combining an acquisition that yields inherently smaller distortions with the correction method described here (see Chapter 13 by Pipe and Chapter 12 by Skare and Bammer for more information on the PROPELLER and SAP-EPI methods, respectively).
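The fixed relationship referred to above is that the displacement along the phase-encoding direction, in voxels, equals the off-resonance frequency times the total time spent traversing k-space in that direction. A minimal sketch (assuming single-shot, full-Fourier EPI without parallel imaging; names ours):

```python
def pe_displacement_voxels(off_resonance_hz, echo_spacing_s, n_pe_lines):
    """Displacement along the phase-encoding direction, in voxels:
    shift = dB0 [Hz] * total phase-encode readout time [s].
    Assumes single-shot, full-Fourier EPI without parallel imaging."""
    return off_resonance_hz * echo_spacing_s * n_pe_lines

# The same 100 Hz off-resonance displaces twice as far when the
# echo spacing (dwell time per line) is doubled.
shift_short = pe_displacement_voxels(100.0, 0.5e-3, 64)  # 3.2 voxels
shift_long = pe_displacement_voxels(100.0, 1.0e-3, 64)   # 6.4 voxels
```

Two acquisitions with different echo spacings therefore respond differently to the same field, which is exactly what makes the field estimable from the pair, albeit less robustly than from a pair with reversed polarity, where the displacements have opposite signs.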

Figure 17.20 Top row: EPI images acquired top-down with long dwell time (A), top-down with short dwell time (B), bottom-up (C) and left–right (D). Bottom row: Field maps estimated from a separate GE field-map scan (E), from B and C (F), from B and D (G), and from A and B (H). Any of these combinations is sufficient to estimate a field that is reasonably close to the measured field. It can also be seen that the greater the difference between the observed images, the better one is able to estimate the field—i.e., the field estimated from combining the top-down and the bottom-up acquisitions looks best, and that from combining two top-down acquisitions with different dwell times looks worst.


These correction methods (the SAP-EPI strategy excluded) typically require the acquisition of at least one additional scan. However, unlike many other methods, the additional data carry the same information as that of the normal scans and will hence also improve the SNR of the total data set (Morgan et al., 2004).

Acknowledgements

The authors are grateful for discussions with Mark Jenkinson, Matt Robson and Karla Miller.

References

Alexander AL, Tsuruda JS, Parker DL (1997). Elimination of eddy current artifacts in diffusion weighted echo-planar images: the use of bipolar gradients. Magn Reson Med 38(2):1016–1021.

Andersson JLR, Skare S (2002). A model-based method for retrospective correction of geometric distortions in diffusion-weighted EPI. Neuroimage 16:177–199.

Andersson JLR, Skare S, Ashburner J (2003). How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage 20:870–888.

Andersson JLR, Smith S, Jenkinson M (2008). FNIRT—FMRIB's non-linear image registration tool. In 14th Annual Meeting of the Organization for Human Brain Mapping, p. 496.

Ardekani S, Sinha U (2005). Geometric distortion correction of high-resolution 3T diffusion tensor brain images. Magn Reson Med 54:1163–1171.

Bammer R, Auer M, Keeling SL, Augustin M, Stables LA, Prokesch RW, Stollberger R, Moseley ME, Fazekas F (2002). Diffusion tensor imaging using single-shot SENSE-EPI. Magn Reson Med 48:128–136.

Bastin ME (2001). On the use of the FLAIR technique to improve the correction of eddy current induced artefacts in MR diffusion tensor imaging. Magn Reson Imaging 19:937–950.

Bastin ME, Armitage PA (2000). On the use of water phantom images to calibrate and correct eddy current induced artefacts in MR diffusion tensor imaging. Magn Reson Imaging 18:681–687.

Bodammer N, Kaufmann J, Kanowski M, Tempelmann C (2004). Eddy current correction in diffusion-weighted imaging using pairs of images acquired with opposite diffusion gradient polarity. Magn Reson Med 51:188–193.

Bowtell R, McIntyre DJO, Commandre MJ, Glover PM, Suetens P (1994). Correction of geometric distortions in echo planar images. In Proceedings of the 2nd Meeting of the Society for Magnetic Resonance, p. 411.

Calamante F, Porter DA, Gadian DG, Connelly A (1999). Correction for eddy current induced b0 shifts in diffusion-weighted echo-planar imaging. Magn Reson Med 41:95–102.

Chang HJ, Fitzpatrick JM (1992). A technique for accurate magnetic resonance imaging in the presence of field inhomogeneities. IEEE Trans Med Imaging 11:319–329.

Chen B, Guo H, Song AW (2006). Correction for direction-dependent distortions in diffusion tensor imaging using matched magnetic field maps. Neuroimage 30:121–129.

Chen N-K, Wyrwicz AM (1999). Correction for EPI distortions using multi-echo gradient-echo imaging. Magn Reson Med 41:1206–1213.

Chen N-K, Wyrwicz AM (2001). Optimized distortion correction technique for echo planar imaging. Magn Reson Med 45:525–528.

Cusack R, Papadakis N (2002). New robust 3D phase unwrapping algorithm: application to magnetic field mapping and undistorting echo-planar images. Neuroimage 16:754–764.

Haselgrove JC, Moore JR (1996). Correction for distortion of echo-planar images used to calculate the apparent diffusion coefficient. Magn Reson Med 36:960–964.

Horsfield MA (1999). Mapping eddy current induced fields for the correction of diffusion-weighted echo planar images. Magn Reson Imaging 17:1335–1345.

Hutton C, Bork A, Josephs O, Deichmann R, Ashburner J, Turner R (2002). Image distortion correction in fMRI: a quantitative evaluation. Neuroimage 16:217–240.

Jenkinson M (2003). Fast, automated n-dimensional phase-unwrapping algorithm. Magn Reson Med 49:193–197.

Jenkinson M, Wilson J, Jezzard P (2004). A perturbation method for magnetic field calculations of non-conductive objects. Magn Reson Med 52:471–477.

Jezzard P, Barnett AS, Pierpaoli C (1998). Characterization of and correction for eddy current artifacts in echo planar diffusion imaging. Magn Reson Med 39:801–812.

Kannengießer SAR, Wang Y, Haacke ME (1999). Geometric distortion correction in gradient-echo imaging by use of dynamic time warping. Magn Reson Med 42:585–590.

Kybic J, Thévenaz P, Nirkko A, Unser M (2000). Unwarping of unidirectionally distorted EPI images. IEEE Trans Med Imaging 19:80–93.

Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997). Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 16:187–198.

Mattila S, Renvall V, Hiltunen J, Kirven D, Sepponen R, Hari R, Antti T (2007). Phantom-based evaluation of geometric distortions in functional magnetic resonance and diffusion tensor imaging. Magn Reson Med 57:754–763.

Morgan P, Bowtell R, McIntyre D, Worthington B (2004). Correction of spatial distortion in EPI due to inhomogeneous static magnetic fields using the reversed gradient method. J Magn Reson Imaging 19:499–507.

Munger P, Crelier GR, Peters TM, Pike GB (2000). An inverse problem approach to the correction of distortion in EPI images. IEEE Trans Med Imaging 19:681–689.

Papadakis NG, Martin KM, Pickard JD, Hall LD, Carpenter TA, Huang CL-H (2000). Gradient preemphasis calibration in diffusion-weighted echo-planar imaging. Magn Reson Med 44:616–624.

Papadakis NG, Smponias T, Berwick J, Mayhew JE (2005). k-space correction of eddy-current-induced distortions in diffusion-weighted echo-planar imaging. Magn Reson Med 53:1103–1111.

Reber PJ, Wong EC, Buxton RB, Frank LR (1998). Correction of off resonance-related distortion in echo-planar imaging using EPI-based field maps. Magn Reson Med 39:328–330.

Reese TG, Heid O, Weisskoff RM, Wedeen VJ (2003). Reduction of eddy-current-induced distortion in diffusion MRI using a twice-refocused spin echo. Magn Reson Med 49:177–182.

Robson MD, Gore JC, Constable RT (1997). Measurement of the point spread function in MRI using constant time imaging. Magn Reson Med 38:733–740.

Rohde GK, Barnett AS, Basser PJ, Marenco S, Pierpaoli C (2004). Comprehensive approach for correction of motion and distortion in diffusion-weighted MRI. Magn Reson Med 51:103–114.

Schmithorst VJ, Dardzinski BJ, Holland SK (2001). Simultaneous correction of ghost and geometric distortion artifacts in EPI using a multiecho reference scan. IEEE Trans Med Imaging 20:535–539.

Skare S, Newbould RD, Clayton DB, Bammer R (2006). Propeller EPI in the other direction. Magn Reson Med 55(6):1298–1307.

Studholme C, Constable RT, Duncan JS (2000). Accurate alignment of functional EPI data to anatomical MRI using a physics-based distortion model. IEEE Trans Med Imaging 19:1115–1127.

Wan X, Gullberg GT, Parker DL, Zeng GL (1997). Reduction of geometric and intensity distortions in echo-planar imaging using a multireference scan. Magn Reson Med 37:932–944.

Xiang Q-S, Ye FQ (2007). Correction for geometric distortion and N/2 ghosting in EPI by phase labeling for additional coordinate encoding (PLACE). Magn Reson Med 57(4):731–741.

Zaitsev M, Hennig J, Speck O (2004). Point spread function mapping with parallel imaging techniques and high acceleration factors: fast, robust and flexible method for echo-planar imaging distortion correction. Magn Reson Med 52:1156–1166.

Zeng H, Gatenby CJ, Zhao Y, Gore JC (2004). New approach for correcting distortions in echo planar imaging. Magn Reson Med 52:1373–1378.

Zhou XJ, Du YP, Bernstein MA, Reynolds HG, Maier JK, Polzin JA (1998). Concomitant magnetic-field-induced artifacts in axial echo planar imaging. Magn Reson Med 39:596–605.

Zhuang J, Hrabe J, Kangarlu A, Xu D, Bansal R, Branch CA, Peterson BS (2006). Correction of eddy-current distortions in diffusion tensor images using the known directions and strengths of diffusion gradients. J Magn Reson Imaging 24:1188–1193.