Refining current opacity models will be key to uncovering the details of exoplanet properties – and signs of life – in data from the powerful new telescope

NASA’s James Webb Space Telescope is revealing the universe with spectacular, unprecedented clarity. The observatory’s razor-sharp infrared vision has cut through cosmic dust to illuminate some of the universe’s earliest structures, along with previously hidden stellar nurseries and spinning galaxies hundreds of millions of light-years away.

As well as peering further into the universe than ever before, Webb will capture the most comprehensive picture yet of objects in our own galaxy, including some of the 5,000 planets discovered in the Milky Way. Astronomers are taking advantage of the telescope’s precise light analysis to decode the atmospheres of some of these nearby worlds. The properties of their atmospheres could provide clues about how a planet formed and whether it harbors signs of life.

But a new study from MIT suggests that the tools astronomers typically use to decode light-based signals may not be good enough to accurately interpret data from the new telescope. Specifically, opacity models — tools that model how light interacts with matter based on the properties of matter — may need significant fine-tuning to match the accuracy of Webb’s data, the researchers said.

If these models are not refined, the researchers predict that inferred properties of planetary atmospheres, such as their temperature, pressure, and elemental composition, could be off by an order of magnitude.

“There is a scientifically significant difference between a compound like water being present at 5% versus 25%, which current models cannot distinguish,” says study co-leader Julien de Wit, assistant professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

“Currently, the model we use to decipher spectral information is not on par with the precision and quality of the data we have from the James Webb telescope,” adds EAPS graduate student Prajwal Niraula. “We need to up our game and tackle the opacity problem together.”

De Wit, Niraula, and their colleagues published their study in Nature Astronomy. Co-authors include spectroscopy experts Iouli Gordon, Robert Hargreaves, Clara Sousa-Silva, and Roman Kochanov of the Harvard-Smithsonian Center for Astrophysics.


Opacity is a measure of how easily photons pass through a material. Photons of certain wavelengths can pass straight through a material, or be absorbed or reflected, depending on whether and how they interact with the molecules in that material. This interaction also depends on the material’s temperature and pressure.

An opacity model works from different assumptions about how light interacts with matter. Astronomers use opacity models to infer certain properties of a material, given the spectrum of light emitted by the material. In the context of exoplanets, an opacity model can decode the type and amount of chemicals in a planet’s atmosphere, based on the planet’s light captured by a telescope.
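The basic relationship behind such models can be illustrated with the Beer–Lambert law, which ties the fraction of light transmitted through a gas to its opacity. This is a toy sketch, not the researchers’ actual model; the cross-section and column-density values below are invented for illustration:

```python
import numpy as np

def transmitted_fraction(sigma, column_density):
    """Beer-Lambert law: fraction of light passing straight through
    a gas column, I/I0 = exp(-sigma * N).

    sigma          -- absorption cross-section (cm^2 per molecule)
    column_density -- molecules per cm^2 along the line of sight
    """
    return np.exp(-sigma * column_density)

# A forward model turns assumed abundances into a predicted spectrum;
# "decoding" (retrieval) runs this in reverse, searching for the
# composition whose predicted spectrum best matches the observed one.
sigma = 1e-22    # hypothetical cross-section at one wavelength
column = 1e21    # hypothetical column density
print(transmitted_fraction(sigma, column))  # ~0.905
```

In a real opacity model, `sigma` varies sharply with wavelength, temperature, and pressure, which is exactly where the assumptions the study scrutinizes come in.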

De Wit says the current state-of-the-art opacity model, which he likens to a classic language translation tool, has done a decent job of decoding spectral data taken by instruments like those on the Hubble Space Telescope.

“So far, this Rosetta Stone has done well,” says de Wit. “But now that we’re taking it to the next level with Webb’s precision, our translation process will prevent us from capturing important subtleties, such as those that make the difference between a planet being habitable or not.”

Lightly perturbed

In their study, de Wit and his colleagues put the most commonly used opacity model to the test. The team set out to see what atmospheric properties the model would infer if it were tweaked to assume certain limitations in our understanding of how light and matter interact. The researchers created eight such “perturbed” models. They then fed each model, including the real version, “synthetic spectra” — patterns of light simulated by the group that resemble in precision what the James Webb telescope would see.

They found that, fed the same light spectrum, each perturbed model produced widely varying predictions for the properties of a planet’s atmosphere. Based on their analysis, the team concludes that if existing opacity models are applied to light spectra taken by the Webb telescope, they will hit a “precision wall.” In other words, the models won’t be sensitive enough to tell whether a planet has an atmospheric temperature of 300 kelvins or 600 kelvins, or whether a particular gas makes up 5% or 25% of an atmospheric layer.
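This kind of ambiguity can be sketched with a toy retrieval: if a model’s assumed opacity is off by some factor, a compensating shift in abundance can reproduce the observed spectrum exactly, so the wrong answer still “fits.” All numbers below are invented for illustration and do not come from the study:

```python
import numpy as np

wavelengths = np.linspace(1.0, 5.0, 200)  # microns, illustrative grid

def toy_spectrum(opacity_scale, abundance):
    """Toy transit depth: a single Gaussian absorption feature whose
    strength is the product of opacity and abundance."""
    feature = np.exp(-((wavelengths - 3.0) ** 2) / 0.5)
    return 1.0 - opacity_scale * abundance * feature

# "True" atmosphere: correct opacity, 5% abundance
observed = toy_spectrum(opacity_scale=1.0, abundance=0.05)

# Perturbed model: opacity wrong by a factor of 5, but a 1% abundance
# compensates, because only the product opacity * abundance matters.
fitted = toy_spectrum(opacity_scale=5.0, abundance=0.01)

print(np.max(np.abs(observed - fitted)))  # 0.0 -- a perfect degeneracy
```

Real retrievals are far higher-dimensional, but the lesson is the same: a biased opacity model can be compensated by biased atmospheric parameters without any visible misfit.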

“This difference is important to allow us to constrain planetary formation mechanisms and reliably identify biosignatures,” says Niraula.

The team also found that every perturbed model nonetheless produced a “good fit” to the data: even when a model yielded a chemical composition the researchers knew to be wrong, it still generated a light spectrum from that composition close enough to the original spectrum to “fit” it.

“We found that there are enough parameters to tweak, even with a wrong model, to still get a good fit, meaning you wouldn’t know that your model is wrong and that what it’s telling you is wrong,” says de Wit.

He and his colleagues offer some ideas for how to improve existing opacity models, including the need for more laboratory measurements and theoretical calculations to refine the models’ assumptions about how light and different molecules interact, as well as more collaboration across disciplines, particularly between astronomy and spectroscopy.

“There is so much that could be done if we knew perfectly how light and matter interact,” says Niraula. “We know that well enough around Earth’s conditions, but as soon as we move to different types of atmospheres, things change, and that’s a lot of data of increasing quality that we risk misinterpreting.”
