Growing up in Madrid, Spain, Adrián Lozano-Durán was always interested in aerospace. He was fascinated by planes, rockets, and how things flew. Today, he is an associate professor of aerospace at Caltech, where he develops models that accurately simulate phenomena such as the entry of spacecraft into the Martian atmosphere and the production of sound by supersonic aircraft.
Lozano-Durán completed his undergraduate and graduate degrees at the Technical University of Madrid. As an undergraduate student, he conducted experiments on titanium and its alloys, which are frequently used in aerospace. However, Lozano-Durán felt materials research did not quite suit his interests, so he switched to the field of aerodynamics, which is the study of how air flows around objects. He was hooked.
"Designing the shape of an aircraft or a rocket and its proportions, it was all done through simulations using a computer, with no experiments. I really liked that," Lozano-Durán says. "And that's still what I do today."
In 2016, he came to the United States to complete a postdoctoral fellowship at Stanford University, where he worked on improving the accuracy of simulations related to aerodynamic applications. He then accepted an assistant professorship at MIT and, in 2024, joined the Caltech faculty.
We recently spoke with him to talk about aerospace, accelerating computational modeling with machine learning, and his ode to scientific failures.
What was it about aerospace that captured your interest?
I liked physics, engineering, and state-of-the-art technology. Aerospace brings all those things together. There is a lot of physics, and the technology is state of the art all the time. As an undergrad, I realized that I liked not only aerospace but also the research part. There, you are dealing with the very edge of technology, and I always wanted to be at the edge of the latest knowledge that we have.
What does your work primarily focus on these days?
I have been working on developing high-fidelity aerodynamic models, meaning they can capture many of the details of what is happening. Their accuracy is high, but they also need to be computationally efficient. If you want to use these models to design a rocket capsule, the simulations cannot take half a year because they are very expensive. I've been lucky that we have been able to use supercomputers that are in the top five in terms of computing power. Those computers are expensive. These simulations generate terabytes of data, so you want to make sure that you do not need to run them again. Ideally, we would like these simulations to be done in days, so we can iterate the design.
We would like to do these things entirely computationally because experiments take a long time, are tricky, and are very expensive. Computationally, everything is cheaper, and you can do things that would be very difficult to do in an experiment. For example, you can change the value of gravity very easily in a simulation, but on Earth you cannot just change the value of gravity.
NASA, the Department of Energy, the Department of Defense, and the Air Force are all trying to invest in this computational testing of designs. We are trying to make these models as reliable as possible.
Can you give us some examples of real problems you have simulated with your models?
Yes, for example, some of the simulations that we are doing right now are related to when you send something like the Mars Science Laboratory to Mars. This was the mission that delivered the Curiosity rover to the Red Planet. Many Mars missions are lost, and many of those losses happen during what is called entry. The spacecraft has to go through the atmosphere of Mars, decelerate, and land. The computational models that we work on are related to that critical part.
When the capsule enters the atmosphere, it moves very fast; things get very hot. It moves much faster than the speed of sound, and it generates a shock wave in front that produces a lot of heat—thousands of degrees. There is a heat shield protecting all the equipment that we are sending, and if it gets very hot, it could melt. That heat shield can chemically react with Mars's carbon dioxide atmosphere. That is impossible to test here on Earth. As a result, NASA is typically very conservative. The margins for designing such a shield are huge. NASA says, "Let's make it much thicker than we think we need," but that means it is more expensive. And even with that kind of extra protection, some of these probes are lost.
Essentially, we try to simulate different conditions at the most critical times, when things are very fast and very hot. We try to see, for example, how hot do various materials get? What will the heat flux be at the surface? What is the thickness that we need for the heat shield? And what conditions are we going to see?
Are you doing this work for a particular mission?
We are working on developing tools that will be used for future missions. Right now, NASA uses tools from the early 2000s. But ideally, these new models will be used for missions in the next decade.
Another computational model we have made deals with NASA's X-59 quiet supersonic aircraft, developed with Lockheed Martin as part of the Quesst mission. It is designed to be quiet at ground level. The aircraft is now in flight testing, but what is exciting is that we already have the computational results. We have simulations based on the design. The simulation took one week using a supercomputer, while it took five years and millions of dollars to build the aircraft. We have our predictions of how noisy it is going to be. So, as the aircraft is tested for real, we will get to see if things match. If our predictions match how noisy the aircraft turns out to be, our models are working. If things do not match, our models do not work. That is also good: then we need to understand why, adjust the design computationally, and fix the problem. If we need to make the aircraft quieter, for example, we can iterate the design computationally without having to rebuild and test. That's why everybody is interested in getting these computational models to work.
How do AI and machine learning come into play in your work?
During my postdoc at Stanford, I started to use new tools like machine learning that, at the time, were not used so much in aerodynamics. We were working out how to use these machine learning-enhanced models to make predictions. At that time, machine-learning models were not sufficiently reliable; however, we are now beginning to see models that can be deployed routinely in real-world simulations. In aerospace, predictions are very important because you often cannot do actual experiments. For example, you may have one shot at sending something to Mars. You cannot test what you are sending to Mars under the exact conditions on Earth. That's why it's very challenging. The problem is that you need to construct models that are very high fidelity, but there are no experiments to validate those models directly.
Previously, we have written about some of your work on cause-and-effect analysis, specifically on your technique for determining causality even in complex systems. Could you elaborate on that fundamental work?
I've been pushing in the field to use causality tools to understand problems in aerospace. This is something that has been used in economics, for example, where people are trying to predict movements of the stock market. What causes the market to go up or down, and what is the effect? People in biology and sociology also work on cause and effect. Maybe you have a clinical trial, and someone took a drug. Is that why they got better? Is that the cause, or was it another confounding factor?
Causality at this fundamental level is not used so much in physics and aerospace. But causal discovery is at the core of science. What I'm trying to do is bring these tools to the field in the context of what I care about. For example, the temperature of a capsule entering the Martian atmosphere will increase. Is that because the pressure in another part is increasing, or something else? How can I isolate cause and effect? In these systems with a thousand variables or more, you do not know what is the cause and what is the effect, and you want to isolate the most important variables. The tools we have been developing to measure causality apply not only to our field but to many problems in climate, ecology, mechanical systems, and many other areas.
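The idea of isolating cause from effect among many interacting variables can be illustrated with a minimal Granger-style test (a generic sketch, not Lozano-Durán's actual method): if adding the past of one variable improves our prediction of another, it is a candidate cause. Here, on synthetic data where `x` drives `y` and `z` is an independent distractor, the predictive gain flags `x` and rules out `z`:

```python
import numpy as np

# Synthetic system: x causally drives y with a one-step lag; z is unrelated.
rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
z = rng.standard_normal(n)  # independent distractor variable
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, *predictors):
    """Variance of the least-squares prediction error of target."""
    A = np.column_stack(predictors)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

base = residual_var(y[1:], y[:-1])             # predict y from its own past
with_x = residual_var(y[1:], y[:-1], x[:-1])   # also use x's past
with_z = residual_var(y[1:], y[:-1], z[:-1])   # also use z's past

# Large gain -> candidate cause; near-zero gain -> not a cause.
print(f"gain from x: {1 - with_x / base:.2f}")
print(f"gain from z: {1 - with_z / base:.2f}")
```

In a real multivariate system like an entry capsule, the same screening would run over many candidate variables at once, and more sophisticated measures (e.g., information-theoretic ones) replace this linear gain.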
Do you have a particular advisor or mentor who has been influential in your career?
Yes, my mentor is Javier Jiménez (PhD '73), who got his PhD here at Caltech and then went back to Spain. I learned a lot from him about how to approach a problem. I always say that what I learned from him is not physics, but how to think about physics, or even how to identify problems. Once you identify the right question, you just need to find the right tool to solve it.
I also learned to be very critical, even of myself. One time I was discussing some papers with him, and he was criticizing a paper and saying, "This makes no sense. This is not well done." And I said, "But this is your paper." And he said, "Well, yeah, I did it 10 years ago, and now with time and perspective, I can see that this is not the right way of doing it. It has to be done again." From that point, even if he criticizes my work, I will not be offended. His criticism of my work is always to make it better. He has been my mentor and has supported me throughout my career, even now.
Do you have any hobbies outside of work?
It's not really a hobby, but every time I generate a result that is wrong, I keep it. I keep a collection of these plots and ideas that were wrong. It's like an art gallery of all the failures. They look a bit like modern art related to science. Every few years, I print them, and I invite people to come over.
But why do you keep the ones that were wrong?
The beauty of it is that they are wrong, but they are striking. It's a failure and should not have happened, but it is still kind of beautiful.
Adrián Lozano-Durán, associate professor of aerospace at Caltech
Credit: Lance Hayashida/Caltech
