Interpolate/Resize 3D array
You can use TorchIO for that:
import torchio as tio
image = tio.ScalarImage(sFileName)  # sFileName: path to the image file
resample = tio.Resample(1)          # resample to isotropic 1 mm spacing
resampled = resample(image)
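If you need anisotropic spacing or the voxels as a NumPy array, here is a minimal sketch, assuming TorchIO's Resample accepts a per-axis spacing tuple and its images expose a numpy() method (the spacing values below are made-up examples):
resample = tio.Resample((1.0, 1.0, 2.5))  # assumed example: target spacing per axis, in mm
resampled = resample(image)
array = resampled.numpy()                 # voxel data, shape (channels, x, y, z)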
ndimage.zoom
This is probably the best approach; the zoom method is designed for precisely this kind of task.
from scipy.ndimage import zoom
new_array = zoom(array, (0.5, 0.5, 2))  # halve the first two axes, double the third
changes the size in each dimension by the specified factor. If the original shape of array was, say, (40, 50, 60), the new one will be (20, 25, 120).
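If you know the target shape rather than the factors, you can derive the factors from it. A small sketch (target_shape is a hypothetical example; note that zoom rounds the output shape, so check for off-by-one sizes when the ratios are not exact):
target_shape = (20, 25, 120)                                  # desired output shape (example)
factors = [t / s for t, s in zip(target_shape, array.shape)]  # per-axis zoom factors
new_array = zoom(array, factors)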
signal.resample_poly
SciPy has a large set of methods for signal processing. The most relevant here are decimate and resample_poly; I use the latter below:
from scipy.signal import resample_poly
factors = [(1, 2), (1, 2), (2, 1)]  # (up, down) pair for each axis
for k in range(3):
    array = resample_poly(array, factors[k][0], factors[k][1], axis=k)
The factors (which must be integers) are the up- and down-sampling rates. That is:
- (1, 2) means size divided by 2
- (2, 1) means size multiplied by 2
- (2, 3) would mean up by 2 then down by 3, so the size is multiplied by 2/3
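If the desired ratio is not an obvious integer pair, the standard library's Fraction can derive one. A sketch, assuming per-axis ratios given as new size divided by old size (the ratio values are examples):
from fractions import Fraction
from scipy.signal import resample_poly

ratios = [0.5, 0.5, 2.0]  # hypothetical example: new_size / old_size per axis
for k, r in enumerate(ratios):
    f = Fraction(r).limit_denominator(100)  # e.g. 0.5 -> Fraction(1, 2)
    array = resample_poly(array, f.numerator, f.denominator, axis=k)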
Possible downside: the process happens independently in each dimension, so the spatial structure may not be taken into account as well as by ndimage methods.
RegularGridInterpolator
This is more hands-on, but also more laborious and without the benefit of filtering: straightforward downsampling. We have to make a grid for the interpolator, using original step sizes in each direction. After the interpolator is created, it needs to be evaluated on a new grid; its call method takes a different kind of grid format, prepared with mgrid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
values = np.random.randint(0, 256, size=(40, 50, 60)).astype(np.uint8)  # example data
steps = [0.5, 0.5, 2.0] # original step sizes
x, y, z = [steps[k] * np.arange(values.shape[k]) for k in range(3)] # original grid
f = RegularGridInterpolator((x, y, z), values) # interpolator
dx, dy, dz = 1.0, 1.0, 1.0 # new step sizes
new_grid = np.mgrid[0:x[-1]:dx, 0:y[-1]:dy, 0:z[-1]:dz] # new grid
new_grid = np.moveaxis(new_grid, (0, 1, 2, 3), (3, 0, 1, 2)) # reorder axes for evaluation
new_values = f(new_grid)
Downside: when a dimension is reduced by 2, for example, this will in effect drop every other value, which is simple downsampling. Ideally, one would average neighboring values in this case. In signal-processing terms, decimation applies a low-pass filter before downsampling.
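To get that averaging effect for the halved axes, one could low-pass filter first and then drop samples. A sketch using scipy.ndimage's uniform_filter as the low-pass step (the neighborhood size matches the 2x reduction of the first two axes in the example above):
from scipy.ndimage import uniform_filter

smoothed = uniform_filter(values.astype(float), size=(2, 2, 1))  # average 2x2x1 neighborhoods
decimated = smoothed[::2, ::2, :]  # then keep every other sample along the halved axes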