…Previously, a creative design engineer would develop a 3D model of a new car concept. This model would be sent to aerodynamics specialists, who would run physics simulations to determine the coefficient of drag of the proposed car—an important metric for energy efficiency of the vehicle. This simulation phase would take about two weeks, and the aerodynamics engineer would then report the drag coefficient back to the creative designer, possibly with suggested modifications.
Now, GM has trained an in-house large physics model on those simulation results. The AI takes in a 3D car model and outputs a coefficient of drag in a matter of minutes. “We have experts in the aerodynamics and the creative studio now who can sit together and iterate instantly to make decisions [about] our future products,” says Rene Strauss, director of virtual integration engineering at GM…
“What we’re seeing is that actually, these tools are empowering the engineers to be much more efficient,” Tschammer says. “Before, these engineers would spend a lot of time on low added value tasks, whereas now these manual tasks from the past can be automated using these AI models, and the engineers can focus on taking the design decisions at the end of the day. We still need engineers more than ever.”
accuracy is not a huge concern at the design stage
Then I doubt they were running the most accurate, two-week-long physics solvers at this stage either. You only do that when you need accuracy. A quick simulation doesn’t take long.
I’m failing to see why the creative writing machine is better than a simulation set to ‘rough’.
The problem is that you saw AI and thought LLM.
Machine learning is a big field; AI/neural networks are a subset of that field, and LLMs are only a single application of a specific type of neural network (the Transformer) to a specific task (next-token prediction).
The only reason that LLMs and image generation models are the most visible is that training a neural network requires a large amount of data, and the largest repository of public data, the Internet, is primarily text and images. So text and image models were the first large models to be trained.
The most exciting and potentially impactful uses of AI are not LLMs. Things like protein folding and robotics will have more of an impact on the world than chatbots.
In this case, generating fast approximations for physical modeling can save a ton of compute time for engineering work.
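The surrogate idea can be sketched in miniature: run the expensive solver offline to build a training set, fit a cheap model to it, then query the fit instantly. Everything below is an illustrative assumption — the fake two-feature solver, the linear model, and all numbers are made up; the article doesn’t describe the architecture of GM’s large physics model, which this toy merely stands in for.

```python
# Toy surrogate for a slow physics solver (illustrative assumptions only).
import random

def slow_cfd_solver(frontal_area, length_ratio):
    """Stand-in for the two-week simulation: a smooth function of two
    made-up geometry features, plus a pinch of numerical noise."""
    return 0.22 + 0.08 * frontal_area - 0.05 * length_ratio + random.gauss(0, 0.002)

# 1. Run the expensive solver offline to build a training set.
random.seed(0)
train = []
for _ in range(200):
    fa = random.uniform(0.8, 1.2)   # normalized frontal area
    lr = random.uniform(0.9, 1.1)   # normalized length/width ratio
    train.append((fa, lr, slow_cfd_solver(fa, lr)))

# 2. Fit a least-squares surrogate  Cd ~ w0 + w1*fa + w2*lr
#    via the normal equations (pure stdlib, no numpy).
def fit(rows):
    xtx = [[0.0] * 3 for _ in range(3)]   # X^T X
    xty = [0.0] * 3                       # X^T y
    for fa, lr, cd in rows:
        x = (1.0, fa, lr)
        for i in range(3):
            for j in range(3):
                xtx[i][j] += x[i] * x[j]
            xty[i] += x[i] * cd
    # Solve the 3x3 system by Gauss-Jordan elimination.
    for col in range(3):
        pivot = xtx[col][col]
        for j in range(3):
            xtx[col][j] /= pivot
        xty[col] /= pivot
        for row in range(3):
            if row != col:
                f = xtx[row][col]
                for j in range(3):
                    xtx[row][j] -= f * xtx[col][j]
                xty[row] -= f * xty[col]
    return xty  # [w0, w1, w2]

w = fit(train)

# 3. The surrogate now answers in microseconds instead of weeks.
def surrogate(fa, lr):
    return w[0] + w[1] * fa + w[2] * lr

print(round(surrogate(1.0, 1.0), 3))  # close to the solver's noiseless 0.25
```

A real version would replace the linear fit with a neural network over full 3D geometry, but the workflow — expensive runs once, cheap queries forever after — is the same.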
I’ll swag that the simulation process is complex, fraught with various pitfalls and idiosyncrasies that require specialized training / experience to get it set up and running properly, even in ‘rough’ mode. I’ll further swag that the reason it used to take 2 weeks was that there were about 50 engineering hours required to get it ready to do the number crunching — and those engineering hours can now be handled by the “creative writing machine,” which has been trained in the various things it needs to know to match the expected patterns.
I don’t know if this is the full explanation, but the article does touch on how the LPM can be tweaked to match physical tests:
The trick is to incorporate experimental measurements to fine-tune the model. If a physics simulation doesn’t agree exactly with experimental data, it is often difficult to figure out why and tweak the model until they agree. With AI, incorporating a few experimental examples into the training process is a lot more straightforward, and it’s not necessary to understand where exactly the model went wrong.
But, what about accuracy? For GM’s purposes, Strauss says accuracy is not a huge concern at the design stage because finer details are ironed out later in the process. “When it really starts to matter is when we’re getting close to launching a vehicle, and the coefficient of drag is going to be used for our energy calculation, which eventually goes to the certification of our miles per gallon on the sticker.” At that stage, Strauss says, a physical model of the car will be put into a wind tunnel for an exact number.
All in all, by drastically bringing down the time it takes to model the physics, large physics models enable engineers to explore a much greater range of possibilities before a final design is reached.
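The fine-tuning step quoted earlier — folding a few experimental measurements into the trained model instead of diagnosing why the simulation disagrees with the wind tunnel — can be sketched abstractly. The affine correction and every number here are illustrative assumptions; the article does not describe GM’s actual procedure.

```python
# Hedged sketch: correcting a simulation-trained surrogate with a few
# wind-tunnel measurements. All names and numbers are illustrative.

def surrogate_cd(design):
    """Pretend this is the model trained purely on simulation output.
    Assume it carries a systematic bias versus the real wind tunnel."""
    return 0.24 + 0.02 * design["spoiler_angle"]

# A handful of (design -> measured Cd) wind-tunnel data points.
measurements = [
    ({"spoiler_angle": 0.0}, 0.252),
    ({"spoiler_angle": 0.5}, 0.263),
    ({"spoiler_angle": 1.0}, 0.273),
]

# Fit an affine correction  measured ~ a * predicted + b  by least squares,
# without needing to understand *why* simulation and tunnel disagree.
preds = [surrogate_cd(d) for d, _ in measurements]
meas = [m for _, m in measurements]
n = len(preds)
mean_p = sum(preds) / n
mean_m = sum(meas) / n
a = sum((p - mean_p) * (m - mean_m) for p, m in zip(preds, meas)) / \
    sum((p - mean_p) ** 2 for p in preds)
b = mean_m - a * mean_p

def tuned_cd(design):
    """Surrogate prediction pulled toward the experimental data."""
    return a * surrogate_cd(design) + b

print(round(tuned_cd({"spoiler_angle": 0.5}), 3))  # -> 0.263
```

In a real pipeline the correction would more likely be a continued-training pass on the measured examples, but the point is the same: the experimental data adjusts the model directly, with no root-cause analysis required.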
So this is really just to help iterate on early designs without waiting 2 weeks each time for feedback on whether a design is problematic. This is actually a really great use of machine learning.
That’s not too bad if it’s only ever used as a rough guide in the early stages of design, with proper testing done later. But do we trust corporations not to get lazy and pressure their engineers to skip the accurate tests altogether, especially when they can then brag to their investors that AI is replacing expensive engineer time? What would Boeing’s management want to do with this tech?
“According to our latest model, if you remove the doors from the aircraft entirely instead of waiting for them to come off midair, you can save about $0.007 of fuel per flight”
The regular old FEM-based models can be quite misleading; when I had the chance to dig into them some years ago, it made me vaguely anxious. But nobody trusts the existing CAE solvers anyway: there’s always a process to verify that the structure actually does what you think it does.
Aerodynamics, at least the coefficient of drag, is actually really good for this because you can’t cheat the air and it’s mostly obvious when you screw it up. Which isn’t true for flutter or the more structural details.
So, yeah, there is that risk, that they’ll get high on their own supply. But thankfully the management already thinks that the current crop of CAE solvers are magical and so the credentialed professional engineers already know how to fight that battle for a lot of the structural details. (The long-suffering assembly line folk who are trying to assemble the airplane properly are, of course, a different matter and have had a lot less leverage)
Although, I’d also propose that there’s a second risk: the current validation process is oriented toward the ways in which the existing FEM models screw you up, and it’s likely that when the large physics model screws you up, it won’t be in the way FEM models do.
I was so happy reading this thread. I got the sense that people were actually reading the article. Thank you, all. I am glad to see this kind of user commentary.
P.S. This is my first post on Lemmy!
I mean, they’re using AI for “polling” now, so why not?