Sanity checking a striking claim
Posted by apgaylard on September 21, 2008
Interpreting technical data can be a tricky business, as demonstrated by an interesting article in last week’s New Scientist magazine. In a fascinating review of the recent progress made by manufacturers of electric cars, the journalist tripped up in his discussion of the aerodynamic drag figures claimed for the Aptera, a two-seat electric car (the word refers to wingless insects – appropriate enough, as the car does look like someone has pulled its wings off!).
“…the entire vehicle has a drag coefficient of just 0.15 – making its drag roughly the same as that caused by a single large wing mirror.”
This is a very striking claim; however, we shall see that some simple maths and a few estimates are all that is needed to show that the journalist has made a mistake here.
First, it seems that the journalist has confused the non-dimensionalised drag coefficient (CD) with the drag force (FD). The drag coefficient provides a convenient way of comparing the relative merits of the shape of bodies of different sizes that may be travelling at different speeds (or even through different fluids). It is defined as:
CD = FD / (½ρV²A)
Where FD is the drag force (resistance of the fluid to the motion of the body); ρ is the density of the fluid (air); V is the speed of the body through the fluid and A is the projected frontal area.
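Rearranging the definition gives the drag force directly. As a quick illustration (not a figure from the article: the air density, speed, and the 1 m² frontal area used later in this post are my own assumptions), here is what the claimed CD of 0.15 implies at motorway speed:

```python
# Drag force from the definition CD = FD / (0.5 * rho * V^2 * A),
# rearranged to FD = CD * 0.5 * rho * V^2 * A.
# Assumed values -- these are illustrative estimates, not published data:
rho = 1.2    # air density, kg/m^3 (typical sea-level value)
V = 30.0     # speed, m/s (roughly 108 km/h)
A = 1.0      # projected frontal area, m^2 (the generous estimate used below)
CD = 0.15    # the Aptera's claimed drag coefficient

FD = CD * 0.5 * rho * V**2 * A
print(round(FD, 1))  # drag force in newtons: 81.0
```

A drag force of about 81 N at that speed – small for a car, but as we shall see, still far more than any wing mirror produces.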
For the car to have the same drag force (at the same speed, in the same fluid) as a wing mirror then the product of the drag coefficient and projected frontal area, the so-called drag area (CDA), of each body must be equal.
CD-car × Acar = CD-mirror × Amirror
With this understanding it is immediately obvious that the statement is wrong: any driver knows that wing mirrors (more usually called door mirrors these days) are very much smaller than the smallest cars! So this equality is not going to work out, even accounting for the Aptera’s remarkably low CD (for comparison, the best standard saloon cars have drag coefficients of around 0.26).
It’s always good to get quantitative. There is a problem with this, as I don’t know the projected frontal area of the Aptera – so I’ll use a very generous value of 1 m² (it keeps the maths simple!). I also happen to know of a relatively large (in the European context) SUV mirror with a projected frontal area of around 0.04 m². This means that the Aptera’s frontal area is at least twenty-five times greater than a typical large European mirror! For our drag area equality (CDA) to work out, so that the car and the mirror have equal drag forces acting on them, the mirror would have to have a CD twenty-five times that of the Aptera: 3.75!
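The arithmetic behind that 3.75 is just the drag area equality rearranged for the mirror’s CD (using my assumed 1 m² frontal area for the car):

```python
# Drag area equality: CD_car * A_car = CD_mirror * A_mirror,
# solved for the CD the mirror would need to match the car's drag.
CD_car = 0.15     # Aptera's claimed drag coefficient
A_car = 1.0       # m^2 -- generous estimate (actual value unknown)
A_mirror = 0.04   # m^2 -- a large European SUV mirror

CD_mirror_required = CD_car * A_car / A_mirror
print(round(CD_mirror_required, 2))  # 3.75
```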
This is out of the question: a flat plate sitting perpendicular to an air flow will have a CD of around 1.2, and any automotive mirror is going to be much more efficient than that! A more reasonable approximation might be a hemisphere, which will probably have a CD of less than 0.4. So there’s no way that this will work out.
Given the huge difference in projected frontal areas, playing with wing mirror size will not produce a CDA that will save the journalist’s blushes: for instance, taking a CD of 0.4 the mirror would still require a frontal area around nine times the value that I’ve used. Again, this is just not plausible.
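Running the equality the other way round – fixing the mirror’s CD at the generous hemisphere-like 0.4 and asking how big the mirror would need to be – gives the factor of nine (again using my assumed values):

```python
# Drag area equality solved for the mirror area instead:
# A_mirror_required = CD_car * A_car / CD_mirror
CD_car = 0.15    # Aptera's claimed drag coefficient
A_car = 1.0      # m^2 -- generous estimate (actual value unknown)
CD_mirror = 0.4  # generous hemisphere-like upper bound for a mirror

A_mirror_required = CD_car * A_car / CD_mirror
ratio = A_mirror_required / 0.04  # compared with the 0.04 m^2 real mirror

print(round(A_mirror_required, 3))  # 0.375 m^2
print(round(ratio, 1))              # about 9.4 times too big
```

A 0.375 m² mirror would be roughly the size of a small coffee table – again, just not plausible.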
The moral of this story is that technical data can be tricky, particularly if it’s outside our usual experience – so it’s always best to do some simple sums and make some reasonable estimates, to ‘sanity check’ striking claims.
7 Responses to “Sanity checking a striking claim”
pleick said
A colleague and good friend of mine is fond of saying (and I’m translating from German with a heavy Swabian dialect here, a really difficult task) “that you can’t misjudge the way you can miscalculate”. This probably doesn’t make much sense in English, so I’ll try to put it differently: “Errors in judgement are usually much smaller than errors in calculation.”
He mostly says this to make fun of the Computational Fluid Dynamics (CFD) people.
Disregarding his playful mocking of CFD, there’s more than a grain of truth in there, and – by a strange twist – it nicely relates to your story. At least in my opinion.
I’ve come to believe that it’s really important to have a good feeling for the things you calculate – to have an idea of the size of different effects, to do order-of-magnitude calculations by hand.
Computers are great tools, and I can’t imagine doing science or engineering without them (I’m too young to have experienced anything else, anyway) – but all of our little helpers can be pretty buggy, and we users make little mistakes all the time. Sometimes the computer just calculates something other than what we believe it calculates.
This often explains results that seem “way off the mark”. Therefore, a good judgement about expected effect size is an important tool for the practical scientist or engineer. If the results of some sophisticated calculation don’t seem right – they probably aren’t. And if they are, double-checking them is still a good thing, because somebody else is likely to be sceptical.
apgaylard said
pleick: Thanks for the observation. I think one of the best lessons I was ever taught was to try and estimate the expected size of an effect first, or to come up with more than one way of making the assessment. As someone who uses CFD nearly every day it has been invaluable. Still, this lesson also applies to experiment. I’ve come across aerodynamicists who have written off odd results (time-averaged near wake asymmetry, or asymmetric fore-body flows, for instance) as a problem with the experiment (or the hapless post-doc who did them) when they may have been genuine!
pleick said
AP Gaylard: Actually, I’m one of those guys doing the measurements that sometimes don’t make sense!
I certainly didn’t intend to imply that this lesson applies specifically to CFD (although, looking again at my comment, I can see that it can be read that way). There are a lot of experiments that involve sophisticated calculations, and it applies to these calculations too.
Neither experiments nor simulation should be done blindly – having a good understanding of the complete system is very important. In my opinion, this also includes a kind of quantitative understanding, i.e. having a good idea about the orders of magnitudes and effect sizes involved.
The article you singled out is a good example. It’s probably a bit harsh to pick on a poor science journalist, but his mistake is kind of a bummer. It doesn’t really matter if it’s due to sloppy use of technical terms (sloppy language leads to sloppy thinking – and our alt.reality friends are not the only ones guilty of that capital sin, unfortunately), poor understanding of aerodynamics or just lack of attention – somebody should have noticed and should, if in doubt, have done some calculation akin to the one you’ve shown on this page. It wasn’t that hard.
PS: I’m not a CFD user myself, but I’ve often provided the experimental data to their numerics. Experimentalists and CFD users seem to have a hard time communicating with (and respecting) each other, at least in my (admittedly limited) experience.
apgaylard said
pleick:
“The article you singled out is a good example. It’s probably a bit harsh to pick on a poor science journalist, but his mistake is kind of a bummer” – I’m just grateful to have a chance to talk about something other than homeopathy. Though I was surprised at the mistake in a feature article in New Scientist.
“Experimentalists and CFD users seem to have a hard time communicating with (and respecting) each other” – I do both – it leads to some interesting conversations!
Many thanks for your continued interest.
auslaendisch said
The problem is that a large enough number of loud CFD users (usually from consultancies) go very quiet when the word “validation” is used in their presence, and a large enough number of experimentalists then believe that CFD lacks good validation and can’t be trusted. But in the end it is also possible to be a very bad experimentalist! The quality of both techniques relies on the expertise of the user and how well they understand what they are doing. I no longer have a problem with CFD so long as the limitations are clearly understood, and I’m getting a much clearer understanding of it now as an end user of the data than I ever did sitting in conferences listening to blahblahblah about stuff I couldn’t get a proper handle on. Speaking of which, you shall shortly receive email…
pleick said
@ Ausländisch: so very true…
apgaylard said
auslaendisch:
I agree. To get the best from both approaches you need to get into the detail. It’s particularly important with CFD to know the strengths and weaknesses. The thing that depresses me most about a lot of what I see pitched by consultancies, at the conferences I attend and the papers I referee, is the tripping up over very well-known limitations that have been discussed at length in the literature for the 20 years that I have been involved in the field. My impression is that people don’t read the literature.
Validation (actually, most usually calibration) is a tricky subject. It is, as you would expect, shape dependent: a CFD methodology may deliver very good results for one class of flow structures, and not another. One interesting example that I’m covering in a paper concerns a potential pitfall caused when a new design language generated a change in wake flow structure (from a classic “notchback” near-wake to a “fastback”).
One of my other concerns with correlation is that it is too often seen as a single-level activity. It’s multilayered: ranking designs, getting the directional trends, quantifying trends, absolute predictions. The level that is obtained then shows where the method can be integrated into the design process.
Anyway, I’m on holiday – so I’ll stop here. I could go on and on...
Thanks for your comments. BTW, I did send a letter to New Scientist – it doesn’t seem to have registered, though.