Internet of Things: Fault Detection and Predictive Maintenance

We are starting a new series of articles this month focusing on key challenges and opportunities with the “new” data that is beginning to emerge and will flood the industry over the next few years. What is this new data? Why is it a big deal?

In this first part we will present the background and set the stage for the benchmarking studies we have conducted over the last several months. The next few parts will cover our data analytics process, our investigation of the different algorithms and tools, their strengths and weaknesses for this application, and the next steps. We have explored several different analysis tools/platforms: RapidMiner, R, Python, Spark, and H2O. We have tested the application on a Hadoop environment as well as on a standalone platform. Below is the whole story.

Background

Historically, a lot of attention has been focused on the data emanating from manufacturing operations. But within a few years, the volume of data coming from the products of manufacturing is going to vastly overshadow any other manufacturing-related data. We are going to focus mostly on the automotive space here, which forms a major subset of all the data coming from connected products (and the wider IoT ocean of data). For example, a fully instrumented car can generate about 25 GB/hr, and for a self-driving car this volume increases by an order of magnitude, to roughly 250 GB/hr.

This is very soon going to be a big issue for manufacturers, because they will be the natural custodians of this data (data ownership, however, is still an open issue). For example, the data from the sensors in all the newly produced cars from a “small” manufacturer (with less than 3% market share) can amount to 60 exabytes/day. In comparison, the internet today generates only about 1 exabyte/day! Initially, the greatest value from this data will be derived by the manufacturers themselves, to address quality, warranty, and recall issues. For that, fault detection and predictive maintenance will be key enablers.

Where is this data coming from?

With sensors monitoring everything from tire pressure to engine RPM to oil temperature and speed, cars can quickly generate terabytes of data every hour.

The vast majority of this data is used in real time to control or report on the functions of the vehicle and has not been leveraged for its long-term value. At first sight, collecting such data for long-term analytics may seem redundant. For example, receiving a thousand “Tire Pressure Normal” messages from a sensor does not immediately seem to carry a lot of value, so automakers typically did not bother to store the data. However, that mindset is now changing.

Inside each of thousands of prototype and field-testing vehicles there exists a “black box” that captures second-by-second data from the dozens of sensors and control units which manage a modern automobile. The data from a black box can be accessed via the vehicle’s on-board diagnostic (OBD) port, which is typically located under the dashboard. These black boxes collect data on 500-750 different vehicle performance parameters, which can quickly add up to terabytes if stored. Also available are “fault codes” that the various control units trigger when something does not function to specification.
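To make this concrete, here is a minimal sketch of what second-by-second black-box records might look like once pulled off the OBD port. The parameter names, values, and record layout are illustrative assumptions, not actual OBD specifications; a real log would carry hundreds of parameters per record rather than the four shown here.

```python
import random
from datetime import datetime, timedelta

random.seed(42)

def generate_records(n_seconds=5, start=datetime(2015, 8, 5, 7, 0, 0)):
    """Generate a toy stream of second-by-second sensor records."""
    records = []
    for i in range(n_seconds):
        records.append({
            "timestamp": start + timedelta(seconds=i),
            "engine_rpm": random.gauss(2200, 150),       # illustrative parameters;
            "oil_temp_c": random.gauss(95, 3),           # a real black box logs
            "tire_pressure_psi": random.gauss(33, 0.5),  # 500-750 of them
            "speed_kph": random.gauss(80, 5),
            "fault_code": None,  # set by a control unit when something is out of spec
        })
    return records

records = generate_records()
print(len(records), sorted(records[0].keys()))
```

Even at this toy scale, the shape of the problem is visible: wide, dense, mostly "normal" records, with sparse fault codes that are the interesting targets.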

Fault detection and predictive maintenance: what is the right data?

The potential opportunities to utilize this data are many. Here are but a few initial “business cases” which carry value to the product development teams.

  • Statistical understanding of normal operating conditions for hundreds of different vehicle parameters
  • Detection/identification of root causes for fault codes and system failures
  • Shorten product development cycle by allowing faster iterations
  • Preventive maintenance applications
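The first business case above, characterizing normal operating conditions, can be sketched very simply: estimate a baseline distribution for a parameter from known-normal readings, then flag new readings that fall outside a band around it. The oil-temperature values and the 3-sigma threshold below are illustrative assumptions, not figures from our study.

```python
import statistics

# Baseline readings assumed to reflect normal operation (e.g. oil temp in C)
baseline = [94.8, 95.1, 95.3, 94.9, 95.0, 95.2, 94.7, 95.4, 94.6, 95.1]
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

# Flag any new reading more than 3 standard deviations from the baseline mean
new_readings = [95.2, 112.4, 94.9]
flags = [x for x in new_readings if abs(x - mean) > 3 * sd]
print(flags)  # only the 112.4 reading falls outside the 3-sigma band
```

A production pipeline would do this per parameter, per operating regime (idle vs. highway, cold vs. warm engine), and at fleet scale, but the underlying idea is the same.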

One of the main objectives of “doing data science” on this sensor data is to be able to predict the occurrence of automotive fault codes, in other words, fault detection. Based on these use cases, the predictions can be layered in several ways: we could simply predict the occurrence or non-occurrence of a fault, or we could try to be more specific about the type of fault code and its root causes. We can make these predictions “off-line” or address them in real time. Each layer of analysis is likely to yield new insights about the vehicle systems and their interactions. It therefore makes sense to approach this as if we were peeling an onion.

We can identify at least four different “layers” or levels of analysis with the data as follows:

  • Phase 1: Predict occurrence of fault or no-fault
  • Phase 2: Predict type of fault and identify factors influencing the occurrence
  • Phase 3: Correlate fault type and frequency to system-level failure logs (combining unstructured and structured data)
  • Phase 4: Use correlations extracted from phase 3 and time series analytics to predict real time or near real-time system-level occurrences
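Phase 1 is, at its core, a binary classification problem. The sketch below shows the shape of that setup on synthetic data: the features stand in for (standardized) vehicle parameters, the labels follow an assumed toy rule tying faults to high oil temperature, and the random-forest model is just one of the algorithm families we benchmarked, not a statement of our final choice.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Four synthetic features standing in for standardized vehicle parameters
# (e.g. rpm, oil temp, tire pressure, speed)
X = rng.normal(size=(n, 4))
# Assumed toy rule: faults occur when the "oil temp" feature runs high
y = (X[:, 1] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```

One caveat worth noting even in a toy example: faults are rare relative to normal operation, so raw accuracy flatters a model that always predicts "no fault"; metrics such as precision and recall on the fault class matter more in practice.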

Over the next several articles we will focus on our efforts in Phase 1. We will describe how the data was collected and how it was transformed before we applied several machine learning algorithms using several different analytics platforms. We will compare our experiences with each algorithm and tool and summarize our findings. At the end of the article series, we will publish a white paper on fault detection and predictive maintenance platforms that captures the details in full, and we will make it available to those who are interested.

Photo by Samuele Errico Piccarini on Unsplash

Originally published on Wed, Aug 05, 2015 @ 07:53 AM


simafore.ai