For the first time, American astrophysicists have used artificial intelligence (AI) to create complex three-dimensional simulations of the Universe. The simulator is so fast and accurate that even its own creators do not fully understand how it works. Here is what we know.
Establishing the scenario that led to the birth of the Universe and brought it to its current state is a major challenge for astrophysicists, given its evolution over time and its sheer vastness.
- The Universe has been expanding since a primordial explosion, the Big Bang, which occurred 13.7 billion years ago;
- The diameter of the observable Universe is estimated at about 93 billion light-years, which corresponds to roughly 880 sextillion (8.8 × 10²³) kilometers;
- It is populated with billions of stars organized into billions of galaxies, along with other celestial objects such as planets, moons, comets and asteroids.
AI at the service of astrophysics
For several decades, scientists have been using computer modeling to try to understand the universe.
Some of the simulations these models produce correspond fairly well to what we know of our Universe, but others are far off the mark, so astrophysicists are still searching for THE scenario that would explain the Universe we actually live in.
However, the traditional models that scientists still use today demand great computing power and a great deal of time. To develop scenarios and compare them, researchers must run an astronomical number of simulations, varying every possible parameter, which represents thousands upon thousands of hours of work.
Simulating the impossible
To speed up the process, the scientist Shirley Ho and her colleagues at the Flatiron Institute’s Center for Computational Astrophysics have developed the D3M (Deep Density Displacement Model) simulator. This deep learning network can recognize common features in a dataset and “learn” to manipulate them.
To develop it, the astrophysicists used 8,000 simulations of the Universe produced by traditional computer models.
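As a rough sketch of this "emulator" idea, training a fast model on pairs produced by a slow, conventional simulator, the toy example below fits a tiny neural network to a hypothetical stand-in function (D3M's real architecture operates on full 3-D displacement fields; everything here, including the stand-in simulator and the parameter range, is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a slow traditional simulation code: maps one
# cosmological parameter (e.g. a matter-density value) to a summary statistic.
def slow_simulator(omega_m):
    return np.sin(3.0 * omega_m) + 0.5 * omega_m**2

# Training pairs precomputed with the "slow" code (D3M used 8,000 simulations;
# toy scale here)
X = rng.uniform(0.1, 0.5, size=(800, 1))
y = slow_simulator(X)

# Tiny one-hidden-layer network trained by full-batch gradient descent
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # surrogate prediction
    err = pred - y                      # residual against the slow code
    # backpropagation of the mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = err @ W2.T * (1.0 - h**2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Once trained, the surrogate answers a new query almost instantly
test = np.array([[0.3]])
approx = np.tanh(test @ W1 + b1) @ W2 + b2
print(abs(approx[0, 0] - slow_simulator(0.3)))  # small residual
```

The expensive work happens once, up front, when the training set is generated; afterwards each new parameter setting costs only a forward pass through the network, which is the source of the enormous speedups reported below.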
Once D3M had ingested these simulations, the researchers asked it to create a brand-new simulation of a cube-shaped virtual universe whose diagonal measures 600 million light-years.
D3M ran this simulation using the dataset it had absorbed during training. Instead of the 300 hours traditional systems need to create it, the model took only 30 milliseconds.
D3M also produced more accurate results, with an error rate of 2.8% compared with 9.3% for the fastest current models.
Its power, speed and accuracy are not its only assets: it has also been able to produce simulations of the universe even when its creators introduced parameters that did not appear in its training database.
For example, when the researchers changed the amount of dark matter in the virtual universe, D3M still produced the corresponding simulation, even though it had never been trained to handle variations in dark matter.
It’s like training image recognition software on lots of pictures of cats and dogs, only to find that it can also recognize elephants. Nobody understands how it does it. It’s a great mystery to solve.
Shirley Ho, Flatiron Institute
This ability to handle parameters absent from its reference data makes it a flexible and useful tool.
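A toy illustration of why such extrapolation can work at all (this is only an analogy, not D3M's actual mechanism): if a model captures the underlying rule rather than memorizing examples, queries far outside the training range can still come out right. Here a straight-line fit trained on a narrow interval is evaluated well beyond it; the relation `truth` and the ranges are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear relation between a parameter and an observable
def truth(x):
    return 2.0 * x + 1.0

# Train only on a narrow "cats and dogs" interval...
x_train = rng.uniform(0.0, 1.0, 200)
coeffs = np.polyfit(x_train, truth(x_train), 1)  # degree-1 fit

# ...then query far outside it ("elephants")
x_out = 5.0
pred = np.polyval(coeffs, x_out)
print(pred)  # close to truth(5.0) = 11.0, because the fit learned the rule
```

For D3M the puzzle is precisely that nothing guarantees the underlying physics is as simple as this line, which is why its generalization surprised its creators.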
Ms. Ho’s team now hopes to better understand how this simulator works, which should benefit advances in artificial intelligence and machine learning.
Our simulator could be an interesting laboratory. Why can our model extrapolate to elephants instead of just recognizing cats and dogs?
Shirley Ho, Flatiron Institute
The team hopes to find out in the coming months by varying other parameters in D3M, observing how factors such as hydrodynamics, the movement of fluids and gases, may have shaped the formation of the Universe.
The details of this work are described in Proceedings of the National Academy of Sciences.