Webinar: How modelling and simulation can accelerate product development
Ross Jones and Barry Dobson share the modelling and simulation methods they use to help companies avoid potentially costly and time-consuming dead ends by understanding the physics that shape a potential device and its operation.
We would like to thank all who attended, and those who joined in by asking questions. Due to time restrictions, we were unable to answer all of them on the day, but we have published a selection of questions and answers here.
Webinar Q&A
A: Clearly, there’s no one-size-fits-all solution; we use a hierarchy of tools. For 1D models we often use the PDE solvers in Matlab or Mathcad, which enable us to work in a programming environment where it’s easy to run lots of different models and, for example, do design optimisation. For simple 2D models we sometimes use FEMM, which is freeware that also has a programming interface to Matlab. Our general tool is COMSOL, which is easy to use and has great flexibility. For technically demanding simulations we often outsource to experts using top-end solutions, e.g. Fluent for fluid dynamics, Ansys HFSS for electromagnetics, Abaqus or Ansys for structural mechanics. You usually need an expert who uses these packages day in, day out to get the best results.
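To illustrate why a scripted environment suits this kind of 1D work, here is a minimal sketch (hypothetical numbers, written in Python rather than Matlab for convenience): a crude 1D transient-conduction model wrapped in a function so that a design parameter can be swept in a loop.

```python
import numpy as np

def time_to_half_rise(conductivity, rho_c=2.0e6, thickness=5e-3, n=51, t_max=500.0):
    """Crude explicit finite-difference model of 1D transient conduction through
    a slab: one face is suddenly held at a normalised temperature of 1, the other
    face is insulated. Returns the time for the insulated face to reach 0.5.
    All numbers are hypothetical and purely illustrative."""
    dx = thickness / (n - 1)
    alpha = conductivity / rho_c          # thermal diffusivity, m^2/s
    dt = 0.4 * dx**2 / alpha              # keep the explicit scheme stable
    T = np.zeros(n)
    T[0] = 1.0                            # step change at the heated face
    t = 0.0
    while T[-1] < 0.5 and t < t_max:
        T_new = T.copy()
        T_new[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T_new[-1] += alpha * dt / dx**2 * 2.0 * (T[-2] - T[-1])   # insulated back face
        T = T_new
        t += dt
    return t

# Because the model lives in code, sweeping a design parameter is a one-line loop
for k in (0.5, 5.0, 50.0):
    print(f"k = {k:5.1f} W/m.K -> half-rise time {time_to_half_rise(k):8.3f} s")
```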
A: There are various ways in which a model can go wrong, so various validation methods are needed. A model might have a bug – an error in how it was set up – and to catch that we would test it on a benchmark problem where the answer is known. A model might not have the right material parameters – this is particularly tricky when the physics has loss mechanisms such as damping or attenuation, or when the material characteristics are complex, such as nonlinear, anisotropic or piezoelectric materials. In that case a simple experiment can be done that characterises the material, with the results compared against a simple model. Finally, the model is likely to omit some details of the real world, and so only be an approximation – that’s the nature of a model. A word of warning: a lot of time can be spent fine-tuning a model to get it to match experimental data, but that can be self-defeating. The most important thing to do is check that the model gives the right trends: if you vary a design parameter, does it predict the right change in the output parameters? When building a prototype, ensure that you’re able to vary these parameters.
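As a minimal sketch of the first point, testing a model on a benchmark where the answer is known, the hypothetical example below checks a numerical damped-oscillator model against its analytic underdamped solution. None of the numbers come from the webinar; they are chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import odeint

m, c, k = 0.5, 0.8, 200.0            # mass (kg), damping (N.s/m), stiffness (N/m)
x0, v0 = 1e-3, 0.0                   # initial displacement (m) and velocity (m/s)
t = np.linspace(0.0, 1.0, 1000)

# Numerical "model under test": m*x'' + c*x' + k*x = 0
def rhs(state, _t):
    x, v = state
    return [v, -(c * v + k * x) / m]

x_num = odeint(rhs, [x0, v0], t)[:, 0]

# Analytic benchmark for the underdamped case
wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(k * m))
wd = wn * np.sqrt(1.0 - zeta**2)
x_ref = np.exp(-zeta * wn * t) * (x0 * np.cos(wd * t)
        + (v0 + zeta * wn * x0) / wd * np.sin(wd * t))

max_error = np.max(np.abs(x_num - x_ref))
assert max_error < 1e-6, "model fails the benchmark"
print("benchmark passed: max error", max_error)
```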
A: There are several aspects to this. First, we stand by the assertion that simple models are great for providing insight: the first objective in modelling is to produce some design rules which the whole design team can understand and work to. Second, the world of AI is establishing infrastructure which is very useful for simulation: good data management and easy access to large computing resources. Third, AI is adding power to design optimisation tasks. Design optimisation involves navigating a large parameter space, and that’s what AI is good at. An extreme example is shape optimisation, such as finding how to achieve structural support with the least material. Finally, AI is demonstrating an ability to make inferences from a very large number of simulations – DeepMind’s work on protein folding is an astounding example of this. Find out more: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery.
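One common way this plays out in practice is the surrogate-model approach to design optimisation: fit a cheap statistical model to a handful of expensive simulation runs, then search the surrogate for promising designs. The sketch below is purely illustrative (the "simulation" is a stand-in function, and the scikit-learn Gaussian process is just one plausible choice of surrogate), not a description of any specific project.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for a slow physics simulation: returns a scalar figure of merit."""
    return np.sin(3.0 * x) + 0.5 * x**2

# A small budget of "real" simulation runs over the design parameter x
x_train = rng.uniform(-2.0, 2.0, size=(12, 1))
y_train = expensive_simulation(x_train).ravel()

# Train the surrogate, then evaluate it densely (cheap) to find a candidate optimum
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
surrogate.fit(x_train, y_train)

x_grid = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
y_pred = surrogate.predict(x_grid)
best = x_grid[np.argmin(y_pred), 0]
print(f"surrogate suggests running the next simulation near x = {best:.3f}")
```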
A: The ideal modeller does three things. They take a real-world problem and figure out how to turn it into a maths problem – that requires a good understanding of the physics and the application. Then they solve the maths problem – even if you have fancy simulation software, you need mathematical skill to use it well. Finally, they must turn the maths answer back into a real-world answer. Again, that requires understanding the physics and the application, but also an ability to communicate things intuitively. It’s difficult to get all three skills in one person. We usually start with mathematical physicists, because the maths is a “use it or lose it” skill, and look for those with some real-world experience.
A: I like equivalent circuit models. A lot of physics involves the flow of some quantity in response to a potential – that can cover various flavours of fluid dynamics, electromagnetism, thermal physics, and more. In a circuit model you lump the geometry into key objects and use simple calculations for the flow between these objects. There are lots of intuitive concepts from electrical circuits that make it easy to think about these other types of physics: resistances in series and parallel, transmission lines, reflections at impedance mismatches, and so on. This gives lots of tools for finding new solutions.
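A small sketch of the lumped equivalent-circuit idea, applied here to heat flow: conduction paths become thermal resistances, and the familiar series/parallel rules from electrical circuits give the overall behaviour. The geometry and material numbers are hypothetical.

```python
def series(*resistances):
    """Combine resistances in series."""
    return sum(resistances)

def parallel(*resistances):
    """Combine resistances in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

def conduction_resistance(length, conductivity, area):
    """Thermal resistance of a conduction path, K/W."""
    return length / (conductivity * area)

# A hot component bolted to a case: heat can leave through a metal bracket or
# through an air gap (two parallel paths), then through the case wall (in series).
r_bracket = conduction_resistance(length=0.02, conductivity=200.0, area=1e-4)   # aluminium bracket
r_air_gap = conduction_resistance(length=0.002, conductivity=0.026, area=5e-4)  # still air
r_case    = conduction_resistance(length=0.003, conductivity=15.0, area=2e-3)   # steel case wall

r_total = series(parallel(r_bracket, r_air_gap), r_case)
power = 5.0                                     # heat dissipated by the component, W
print(f"total resistance {r_total:.2f} K/W -> temperature rise {power * r_total:.1f} K")
```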
A: Yes, many. It’s very easy to get bogged down in complicated simulations that take time and don’t yield clear answers. I’m wary of systems with complicated materials – by the time you’ve experimentally characterised the material and put it into a simulation, you could probably have done the experiments that give you the answer you actually needed. And some physics is notoriously difficult, like turbulent fluid flow. If a device really hinges on the details of how a boundary layer separates during transient flow, greater benefit could be gained from well-instrumented experiments than from simulations – or, alternatively, the design requirements could be changed so that the device isn’t sensitive to such complex physics.
A: Yes, in the context of data mining we use algorithms to set up a pipeline for training classification models, e.g. for feature evaluation (correlation with outcomes), feature engineering and feature selection. Another area where software relies on models is model-based design, where, for example, the control of a system is based on a model and the embedded control code is automatically generated from a modelling package such as Matlab. We also pay due respect to the classic mathematical foundations of algorithms (“Numerical Recipes”, Knuth’s “The Art of Computer Programming”), which still have a role to play in modern AI.
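As a hedged sketch of that kind of training pipeline (the real pipelines and data are project-specific, and scikit-learn is used here only as a convenient stand-in): feature scaling, univariate feature selection and a classifier are chained together and cross-validated on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 20 candidate features, only a few informative
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                      # feature engineering step
    ("select", SelectKBest(f_classif, k=6)),          # keep features that relate to the outcome
    ("classify", LogisticRegression(max_iter=1000)),  # classification model
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```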