Sandia National Laboratories researchers have created a method of processing 3D images for computer simulations that could have beneficial implications for several industries, including health care, manufacturing and electric vehicles.
The method could prove vital in certifying the credibility of high-performance computer simulations used to determine the effectiveness of various materials for weapons programs and other efforts.

“We can also use the new 3D-imaging workflow to test and optimise batteries used for large-scale energy storage and in vehicles.”
– Sandia’s principal investigator
The researchers shared the new workflow, which the team dubbed EQUIPS, for Efficient Quantification of Uncertainty in Image-based Physics Simulation. The lead author of the paper said the workflow leads to more reliable results by exploring the effect that ambiguous object boundaries in a scanned image have on simulations.
EQUIPS can use machine learning to quantify the uncertainty in how an image is drawn for 3D computer simulations. By giving a range of uncertainty, the workflow allows decision-makers to consider best- and worst-case outcomes.
In a medical example, using the EQUIPS workflow, which can apply machine learning to automate the drawing process, a 3D scan is rendered into many viable variations showing the size and location of a potential tumour. Those different renderings produce a range of different simulation outcomes. Instead of one answer, the doctor has a range of prognoses to consider, which can affect risk assessments and treatment decisions, be they chemotherapy or surgery.
There is no single-point solution when working with real-world data. To be confident in an answer, researchers need to understand that the value can lie anywhere between two points, and they need to make decisions knowing that the answer is somewhere in that range, not just assuming it sits at one point.
The first step of the image-based simulation is the image segmentation, or deciding which pixel (voxel in a 3D image) to assign to each object and therefore drawing the boundary between two objects. From there, scientists can begin to build models for computational simulation. But pixels and voxels will blend with gradual gradient changes, so it is not always clear where to draw the boundary line.
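The boundary ambiguity described above can be made concrete with a toy sketch (assuming numpy; the intensity profile and thresholds here are illustrative, not from the paper): two analysts segmenting the same gradual intensity gradient with slightly different, equally defensible thresholds will draw the material boundary in different places.

```python
import numpy as np

# Synthetic 1-D intensity profile: two "materials" joined by a gradual
# gradient, mimicking the blurred pixel/voxel values in a scanned image.
x = np.linspace(0.0, 1.0, 101)
intensity = 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.05))  # smooth ramp from 0 to 1

# Two equally plausible thresholds place the object boundary at
# different positions -- neither choice is "wrong".
for t in (0.4, 0.6):
    boundary = x[np.argmax(intensity > t)]  # first position above threshold
    print(f"threshold {t}: boundary drawn at x = {boundary:.2f}")
```

Because the gradient is gradual, even a modest change in threshold shifts the drawn boundary, which is exactly the segmentation uncertainty EQUIPS is designed to expose.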
The inherent problem with segmenting a scanned image is that, whether it is done by a person using the best software tools available or with the latest machine-learning capabilities, there are many plausible ways to assign the pixels to the objects. Two people performing segmentation on the same image are likely to choose different combinations of filtering and techniques, leading to different but still valid segmentations.
Sandia’s EQUIPS workflow does not eliminate such segmentation uncertainty, but it improves the credibility of the final simulations by making the previously unrecognised uncertainty visible to the decision-maker.
EQUIPS can employ two types of machine learning techniques, Monte Carlo dropout networks and Bayesian convolutional neural networks, to perform image segmentation, with both approaches creating a set of image segmentation samples.
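The Monte Carlo dropout idea can be sketched in a few lines (a minimal illustration assuming numpy; the tiny linear "network", its weights, and the image data here are synthetic stand-ins, not the networks used in the paper): dropout is kept active at inference time, so repeated forward passes produce a set of different, plausible segmentations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained segmentation network: a linear scoring of
# per-pixel features followed by a sigmoid. Dropout stays ACTIVE at
# inference (Monte Carlo dropout), so each forward pass is stochastic.
weights = rng.normal(size=16)        # hypothetical learned weights
image = rng.uniform(size=(32, 16))   # 32 pixels x 16 features (synthetic)

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout applied to the weights."""
    mask = rng.uniform(size=weights.shape) > drop_p
    w = weights * mask / (1.0 - drop_p)   # inverted-dropout scaling
    logits = x @ w
    return 1.0 / (1.0 + np.exp(-logits)) # per-pixel foreground probability

# Each thresholded pass is one plausible segmentation of the image.
samples = np.stack([forward(image) > 0.5 for _ in range(100)])
prob_map = samples.mean(axis=0)  # fraction of samples labelling each pixel
```

Averaging the samples per pixel yields the probability map described next.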
These samples are combined into a map of the probability that a given pixel or voxel lies in the segmented material. To explore the impact of segmentation uncertainty, EQUIPS draws segmentations from this probability map, which are then used to perform multiple simulations and calculate uncertainty distributions.
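The sample-simulate-summarise loop can be illustrated end to end (a hedged sketch assuming numpy; the probability map, the `simulate` stand-in, and the quantity of interest are all hypothetical placeholders for the physics simulations in the paper): plausible segmentations are drawn from the probability map, each is simulated, and the spread of outcomes is reported instead of a single value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-voxel probability map (e.g. produced by averaging
# segmentation samples): probability each of 50 voxels is "material".
prob_map = rng.uniform(size=50)

def simulate(segmentation):
    """Stand-in for a physics simulation: a scalar quantity of interest
    that depends on how much material the segmentation assigns."""
    return segmentation.sum() / segmentation.size

# Draw many plausible segmentations, simulate each, and summarise the
# spread of results as an uncertainty distribution.
outcomes = []
for _ in range(500):
    seg = rng.uniform(size=prob_map.shape) < prob_map  # one plausible drawing
    outcomes.append(simulate(seg))

lo, hi = np.percentile(outcomes, [5, 95])
print(f"quantity of interest: {np.mean(outcomes):.3f} "
      f"(90% interval {lo:.3f} to {hi:.3f})")
```

Reporting the interval rather than a single number is what lets decision-makers weigh best- and worst-case outcomes, as described above.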
To illustrate the diverse applications that can benefit from the EQUIPS workflow, the researchers demonstrated several uses for the new method in the Nature Communications paper: CT scans of graphite electrodes in lithium-ion batteries, most commonly found in electric vehicles, computers, medical equipment and aircraft; a scan of a woven composite being tested for thermal protection on atmospheric reentry vehicles, such as a rocket or a missile; and scans of both the human aorta and spine.