In April, the first image of a black hole landed on front pages and in news feeds around the world. Researchers used millions of gigabytes of data from eight separate observatories to reconstruct the black hole’s event horizon. Scientists at Memorial Sloan Kettering Cancer Center (MSK) are using troves of data to create similarly unprecedented images from inside our bodies.

In a paper published in the May 2019 issue of Medical Image Analysis, MSK investigators, led by medical physics researcher Ida Häggström, detail a new method of image reconstruction for positron emission tomography, or PET. The approach generates higher-quality images than conventional techniques, and it does so in less than one hundredth of the time.

Medical physics researcher Ida Häggström and her colleagues at Memorial Sloan Kettering are developing a new type of PET image reconstruction. Credit: Memorial Sloan Kettering Cancer Center

“Using deep learning, we trained our convolutional neural network to transform raw PET data into images,” Dr. Häggström says. “No one has done PET imaging in this way before.”

Deep learning is a subset of artificial intelligence that uses many-layered (or deep) neural networks to perform tasks. Convolutional neural networks are a class of deep learning network frequently employed in image recognition. Given enough data, these networks can be trained to recognize shapes and patterns, much as a person learns to see. Today, researchers are starting to apply these tools to tasks such as classifying cancerous lesions, predicting treatment outcomes, and interpreting medical charts.
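
As a rough illustration of what such a network looks like (not the model described in the paper), the sketch below defines a minimal convolutional classifier in PyTorch; the layer sizes, the 64-by-64 input, and the two-class “benign versus suspicious” output are arbitrary choices made for this example.

    # A minimal convolutional image classifier, illustrative only (not MSK's model).
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Stacked convolution + pooling layers learn increasingly abstract
            # shapes and patterns from raw pixel intensities.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # A small fully connected head turns the learned features into class
            # scores, e.g. "benign" vs. "suspicious" in a hypothetical lesion classifier.
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(start_dim=1))

    # One grayscale 64-by-64 image in, two class scores out.
    scores = TinyCNN()(torch.randn(1, 1, 64, 64))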

The group of MSK researchers, which also includes medical physicist C. Ross Schmidtlein, graduate student Gabriele Campanella, and data scientist Thomas Fuchs, the study’s senior author, named their new technique DeepPET.

Peering into the Body’s Inner Workings

PET is one of several imaging technologies that have changed the diagnosis and treatment of cancer, as well as other diseases, over the past few decades. Other imaging technologies, such as CT and MRI, mainly generate pictures of the body’s anatomical structures or physiological processes. PET allows doctors to see functional activity at the cellular level, making it particularly valuable for studying tumors, which tend to have heightened metabolic activity.

During a PET scan, a patient is injected with biologically active molecules known as tracers, which are tagged with radioactive atoms. These radiotracers decay by emitting positrons, each of which annihilates with a nearby electron to create a back-to-back pair of gamma rays that the PET scanner detects nearly simultaneously. Sophisticated mathematical algorithms then reconstruct the desired images from the pattern of detected gamma rays.
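
For reference (this is standard PET physics rather than a detail from the article), each annihilation converts the rest mass of the electron-positron pair into two photons of roughly 511 kiloelectronvolts each:

    e^{+} + e^{-} \;\rightarrow\; 2\gamma, \qquad E_{\gamma} = m_{e}c^{2} \approx 511\ \mathrm{keV}

Because the two photons travel in nearly opposite directions, each coincident detection defines a line through the body along which the annihilation occurred; these lines of response are the raw data from which the images are reconstructed.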

Depending on the type of tracer used, PET can image the uptake of glucose or the growth of cells in tissues, among other phenomena. This activity can help doctors distinguish a rapidly growing tumor from a benign mass of cells.

PET is often used along with CT or MRI. The combination provides comprehensive information about a tumor’s location as well as its metabolic activity. Dr. Häggström says that if DeepPET is to be developed for clinical use, additional research will be needed to optimize it for use with these other methods.

Improving on an Important Technique

As currently performed, PET has some drawbacks. Processing the data and creating images can take a long time, and the images are not always clear.

To develop a better approach, the team began by training a convolutional neural network on large amounts of PET data along with the associated images. “We wanted the computer to learn how to use data to construct an image,” Dr. Häggström says. The training used simulated scans: data that resembled measurements from a human body but was generated artificially.
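
In broad strokes, and only as an assumption about how such training might look (the network, data shapes, and loss below are placeholders, not the DeepPET model), the system sees pairs of raw scanner data and matching images and adjusts its weights to shrink the gap between its output and the true image:

    # Hypothetical training loop: learn to map simulated raw PET data to the
    # images it came from. The network, shapes, and loss are placeholders.
    import torch
    import torch.nn as nn

    network = nn.Sequential(                      # stand-in for a real encoder-decoder CNN
        nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, kernel_size=3, padding=1),
    )
    optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                        # penalize pixel-wise reconstruction error

    for step in range(100):
        # In practice these would be batches of simulated scanner data paired
        # with the ground-truth images used to generate them.
        raw_data = torch.randn(8, 1, 128, 128)
        true_image = torch.randn(8, 1, 128, 128)
        predicted_image = network(raw_data)
        loss = loss_fn(predicted_image, true_image)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()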

Conventional PET images are generated through an iterative process in which the current image estimate is repeatedly refined to better match the measured data. In DeepPET, because the network has learned the PET scanner’s physical and statistical characteristics, as well as what typical PET images look like, no iteration is required: the image is generated by a single, fast computation and is clearer.
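
The article does not name the conventional algorithm, but maximum-likelihood expectation maximization (ML-EM) is a standard example of such an iterative reconstruction; the sketch below shows its update loop in NumPy, with a toy system matrix standing in for the scanner model.

    # Illustrative ML-EM reconstruction loop (a standard iterative algorithm;
    # the article does not say which conventional method is being compared).
    import numpy as np

    def mlem(system_matrix: np.ndarray, measured: np.ndarray, n_iter: int = 20) -> np.ndarray:
        """system_matrix: (detector bins x image voxels); measured: counts per bin."""
        image = np.ones(system_matrix.shape[1])        # start from a flat image estimate
        sensitivity = system_matrix.sum(axis=0)        # how strongly each voxel is seen
        for _ in range(n_iter):                        # the "repeating process"
            expected = system_matrix @ image           # forward-project the current estimate
            ratio = measured / np.maximum(expected, 1e-12)
            image *= (system_matrix.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return image

DeepPET, by contrast, replaces this entire loop with a single forward pass through the trained network.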

Dr. Häggström’s team is currently working to adapt the system for use with clinical data and to carry out the subsequent testing and validation.

She notes that MSK is the ideal place to do this kind of research. “MSK has clinical data that we can use to test this system,” she says. “We also have expert radiologists who can look at these images and interpret what they mean for a diagnosis.”

“By combining that expertise with the state-of-the-art computational resources available here, we have a great opportunity to have a direct clinical impact,” she adds. “The gain we’ve seen in reconstruction speed and image quality should lead to more efficient image evaluation and more reliable diagnoses and treatment decisions, ultimately leading to improved care for our patients.”

“MSK provides us with the unique opportunity, academic freedom, and resources to pursue this kind of cutting-edge research,” says Dr. Schmidtlein. “Having the support of our Medical Physics leadership to pursue high-risk high-reward projects like this is part of what makes MSK special,” he adds.

This research was funded in part through a National Institutes of Health/National Cancer Institute grant (P30 CA008748). Dr. Fuchs is a founder, equity owner, and chief scientific officer of Paige.AI.

To learn more about the latest in cancer research and treatment, visit www.mskcc.org.