Fault detection to help with DV testing

The recent Volkswagen emissions scandal highlights two important problems the automotive industry faces, and both relate to how a complex engineered system such as a car is designed and developed. Both stem from a non-systems-thinking style of engineering, and both can be addressed by basic fault detection enabled by analytics.

The first problem relates to how many components or sub-systems are tested or verified as standalone units working under idealized conditions. Not only are automakers guilty of this; so are the testing agencies, as the VW case proved. The EPA and other global testing agencies, such as those in the EU, mandate tests run in controlled lab conditions which, as we have now found out, have been gamed by car companies. The BBC explains in this article how this happened in the emissions case. The tests are “very sedate and short. There is no resemblance to real-world driving. The gentle acceleration, cruising speed, and braking used in the tests would be unrecognisable to most drivers […] There is no simulation of prolonged motorway driving, and carmakers use the most optimal settings to improve performance, such as the bare minimum of fuel and switching off air conditioning“. This type of behavior is not limited to emissions testing, of course. A recent scandal relating to airbags was arguably also the effect of a similar practice: the problem affected only certain vehicle models in certain geographies (think weather), even though the same airbag modules were installed in dozens of different cars and models.

The second problem relates to how meeting a test requirement means simply hitting a single target number, also called a “bogey” in industry parlance. At a recent seminar, a veteran of the industry related a story of how a supplier responsible for a $1 plastic part (a radiator fluid overflow bottle) was hit with a recall cost of $2000+ for every car the part went into. Here the problem was pretty clear: the designers responsible for the bottle “over-optimized” the design to cut costs and ended up producing faulty bottles that were prone to leaking, resulting in severe engine damage due to overheating. Two things happened here: the bottle geometry met all of the OEM’s requirements, because the requirements only specified a single target number such as a minimum wall thickness; but manufacturing variability resulted in a distribution of bottle cross-section thicknesses. Not surprisingly, many of the bottles that fell well short of the bogey thickness sprang a leak. Other areas are no different. An example that comes to mind is crash testing: a small change in impact angle can produce a drastic swing in energy absorption performance, yet OEMs and testing agencies only test at fixed, idealized conditions such as a constant impact angle.
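A quick Monte Carlo sketch makes the bogey-versus-distribution point concrete. Every number below (the bogey, the leak threshold, the process mean, and the variability) is a hypothetical assumption chosen only to illustrate the failure mode, not data from the actual recall.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical numbers for illustration only.
BOGEY_MM = 2.0               # the single target number in the requirement
LEAK_THRESHOLD_MM = 1.8      # assumed thickness below which bottles leak
MEAN_MM, STD_MM = 2.1, 0.15  # assumed process mean and manufacturing variability

# Simulate wall thicknesses for a production run of one million bottles.
thickness = rng.normal(MEAN_MM, STD_MM, size=1_000_000)

print(f"Mean thickness: {thickness.mean():.3f} mm (clears the {BOGEY_MM} mm bogey)")
print(f"Bottles below the bogey:     {(thickness < BOGEY_MM).mean():.1%}")
print(f"Bottles below leak threshold: {(thickness < LEAK_THRESHOLD_MM).mean():.1%}")
```

Even though the average part comfortably clears the target, roughly 2% of this simulated run falls below the leak threshold: in a million units, tens of thousands of potential warranty claims that a single-number requirement never sees.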

Automakers are recognizing that the long-term reliability predicted by idealized design validation (DV) testing does not represent the real world. DV testing relied on two faulty assumptions: that isolated testing and analysis of a component’s or sub-system’s performance is sufficient, and that shooting for a single performance target will ensure similar on-road performance. One key factor that drove this type of thinking and development process was the cost of collecting and then analyzing test (or empirical) data. Today that constraint has largely disappeared: collecting data is virtually free, so half the cost which supported this deficient way of design and development is gone. The focus is now rightly shifting toward collecting data during normal operations. For example, prototype vehicles are driven for hundreds of thousands of miles under all sorts of weather and road conditions, yet much of the performance data stays hidden in the control units and is never considered a viable source of input for DV testing. Fault detection embedded in the control units can therefore be very valuable.
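As a minimal sketch of what fault detection embedded in a control unit could look like, here is a simple rolling z-score detector run on a synthetic coolant-temperature trace. The signal, units, window size, threshold, and injected fault are all assumptions for illustration; a production detector would be tuned to the actual sensor and failure modes.

```python
import numpy as np

def flag_anomalies(signal: np.ndarray, window: int = 50, z_max: float = 4.0) -> np.ndarray:
    """Flag samples whose rolling z-score against recent history exceeds z_max."""
    flags = np.zeros(signal.size, dtype=bool)
    for i in range(window, signal.size):
        past = signal[i - window:i]          # recent history only
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > z_max:
            flags[i] = True
    return flags

# Synthetic coolant-temperature trace (hypothetical values): steady running
# around 90 C with sensor noise, then a sudden 8 C jump late in the trace.
rng = np.random.default_rng(0)
temp = 90 + rng.normal(0, 0.5, size=2_000)
temp[1_500:] += 8.0  # injected fault, e.g., a sticking thermostat

flags = flag_anomalies(temp)
print(f"First flagged sample: {int(flags.argmax())} (fault injected at 1500)")
```

The point is not the specific detector but that the raw signal already sitting in the control unit is enough to catch departures from normal behavior as they happen, rather than months later in warranty data.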

This data can be harnessed for a variety of purposes, starting with simply predicting whether the control units will emit a fault code: basic fault detection and prediction. It is natural to extend this application to understand diagnostics and root causes at a much more granular level, which is what we are planning to do in subsequent phases. Collecting and analyzing this data may sound like an expensive investment, but weighed against warranty and recall costs it is a solid one.
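To sketch the fault-code prediction step, here is a minimal supervised-learning example using logistic regression. Everything in it is synthetic and hypothetical: the per-trip features, the rule generating the labels, and the fault rate are stand-ins for real control-unit logs and logged diagnostic trouble codes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical per-trip features aggregated from control-unit logs:
# mean coolant temperature (C), ambient temperature (C), trip duration (min).
X = np.column_stack([
    rng.normal(90, 5, n),
    rng.normal(15, 10, n),
    rng.exponential(30, n),
])

# Synthetic label: fault codes become more likely when the engine runs hot
# on long trips. Real labels would come from logged diagnostic trouble codes.
logit = 0.15 * (X[:, 0] - 90) + 0.02 * (X[:, 2] - 30) - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Even a simple baseline like this gives a starting point for ranking which vehicles or operating patterns deserve attention, before moving to the more granular diagnostics described above.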

Originally posted on Fri, Oct 02, 2015 @ 08:48 AM
