By Future Electronics
Read this article to find out about:
- The cost and productivity benefits of adopting predictive maintenance
- The scope for implementing predictive maintenance at the edge using components on the market today
- The software and enablement tools supplied by MCU, processor and FPGA manufacturers
The planned, scheduled maintenance of machines is an inherently inefficient engineering practice. Routine maintenance involves working on all the units in a population of machines, on the assumption that only one or some are likely to fail after a given number of hours of operation unless serviced.
Predictive maintenance promises to eliminate this inefficiency: if the technician knows which units in the population of machines are at risk of malfunctioning, the servicing effort can be targeted, as shown in Figure 1. The effort and cost that would otherwise have been wasted on servicing machines that are healthy can be saved. Just as important, a machine that might otherwise have failed, because scheduled maintenance would have occurred too late, can be kept running if intervention is triggered by warning signals given out by the machine.
The promised benefits of predictive maintenance are highly attractive: reduced maintenance costs combined with greater uptime and a reduced rate of machine failure. But to fulfil the promise, machine operators require a new type of intelligence about the machine, and specifically, Artificial Intelligence (AI). Predictive maintenance is an ideal application for AI, because it calls for the interpretation of complex patterns in millions of data points generated over a period of time.
Today, sensors such as accelerometers, thermocouples, microphones and barometers provide detailed and accurate measurements of the physical operation of a machine in terms of vibration, temperature, sound and pressure. Properly interpreted, these data can be used to determine the state of health of a machine, and even to pinpoint the location, cause and probability of a future fault.
Usage Model Favours Portability
Some applications of AI call for the vast computing power provided by cloud services such as the Microsoft Azure Machine Learning service or IBM Watson Studio. But the typical usage model of predictive maintenance for small or medium-sized industrial enterprises will favour the use of portable sensor equipment. The data from machines often only needs to be logged periodically, not continuously, and new intelligent data loggers are likely, at least in the short term, to be relatively expensive pieces of equipment. A portable data logger may be affixed temporarily to a machine and left to acquire data for a period of time before being moved to a different machine in the same facility. In this way, a single data logger can serve multiple machines.
If predictive maintenance is applied to a consumer item such as a washing machine, the device might have no internet connection, in which case the predictive maintenance system needs to be a stand-alone operation, and to flash an error code on the user interface display or control panel if it detects a potential problem. In an industrial setting, security and privacy concerns might also prevent operators from streaming machine data logs over the internet.
In these use cases, the predictive maintenance analytics (in AI terminology, an inference engine running a trained machine learning algorithm) need to be performed locally, at ‘the edge’, rather than in the cloud.
Now microcontroller, processor and FPGA manufacturers are beginning to demonstrate the capabilities of their devices by providing frameworks and ready-made system designs for predictive maintenance at the edge.
Fig. 1: Predictive maintenance allows machine servicing efforts to be targeted (Image courtesy of Siemens Pressebild)
Competing Approaches to Machine Analytics
Design engineers will naturally wonder whether the hardware platform with which they are already familiar, such as an MCU, processor or FPGA, is capable of meeting the computing requirements of a predictive maintenance operation. Surprisingly, the answer is that even a low-cost 32-bit MCU based on a mid-range Arm® Cortex®-M4 core can support some forms of AI application.
But the choice of hardware platform ultimately depends on the objective for predictive maintenance: which insights does the predictive maintenance system need to produce, and which resources are available to train the system?
AI systems are produced by a process of either ‘supervised learning’ or ‘unsupervised learning’ on a training data set. In supervised learning, the data set, such as the stream of vibrations, squeaks and sounds that an industrial motor produces, is curated and labelled. The labels tell the algorithm what it has to recognize.
In unsupervised learning, the machine learning system is presented with a mass of uncurated data, finds patterns in the data, and produces insights based on its recognition of the patterns, or of excursions from them.
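The contrast can be sketched in a few lines of Python. The feature values (vibration RMS and temperature), labels and fault names below are hypothetical illustrations, not taken from any vendor's data set. Supervised learning works from examples that carry labels; an unsupervised approach sees only the feature values:

```python
# Hypothetical training examples: (vibration RMS in g, temperature in deg C)
labelled = [
    ((0.9, 30.1), "healthy"),
    ((1.1, 29.8), "healthy"),
    ((4.2, 55.0), "worn bearing"),
    ((4.5, 57.3), "worn bearing"),
]

def classify(sample, training):
    """Supervised: a 1-nearest-neighbour classifier copies the label
    of the closest labelled training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: dist(sample, item[0]))[1]

# Unsupervised: the same data, stripped of its labels. An algorithm can
# still group these points or flag outliers, but it cannot name the
# fault, because there are no labels to learn names from.
unlabelled = [features for features, label in labelled]
```

Here `classify((4.0, 54.0), labelled)` returns "worn bearing" because the sample sits closest to the labelled fault examples; the unlabelled list could only tell us that the point is unusual.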
The algorithm produced by supervised learning of time-series data such as vibration and sound logs is smaller and simpler than that produced by unsupervised learning, and today, the hardware and software available from semiconductor manufacturers largely assumes that the user is implementing supervised learning.
Anomaly detection, a technique based on supervised learning, has been found to provide an excellent model for uncovering potential faults in machines such as industrial motors and home appliances.
Anomaly detection depends on a well-curated data set: sensor measurements are taken from a machine during normal operation, and the model is trained to recognize features such as peak-to-peak values, mean average and so on. The model can then be trained to detect anomalies, as values which fall outside a threshold that the user sets.
This form of predictive maintenance is favoured because the algorithm it produces can run on a simple hardware platform such as an MCU. It detects potential faults effectively, but the drawback is that it provides limited insight into the type of fault or its cause, because its output simply flags that a problem exists.
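A minimal sketch of this approach follows. The choice of features and the three-standard-deviation band are illustrative assumptions, not any vendor's method: training reduces each window of healthy-machine samples to a few statistics and learns an acceptable band per feature, and at run time any feature falling outside its band flags the window as anomalous.

```python
import statistics

def extract_features(window):
    """Summarise one window of vibration samples as a few statistics."""
    return {
        "peak_to_peak": max(window) - min(window),
        "mean": statistics.fmean(window),
        "stdev": statistics.stdev(window),
    }

def learn_thresholds(normal_windows, k=3.0):
    """Learn per-feature limits from healthy-machine data only:
    mean +/- k standard deviations across the training windows."""
    features = [extract_features(w) for w in normal_windows]
    limits = {}
    for name in features[0]:
        values = [f[name] for f in features]
        mu = statistics.fmean(values)
        sigma = statistics.stdev(values)
        limits[name] = (mu - k * sigma, mu + k * sigma)
    return limits

def is_anomalous(window, limits):
    """Flag the window if any feature falls outside its learned band."""
    f = extract_features(window)
    return any(not (lo <= f[n] <= hi) for n, (lo, hi) in limits.items())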
Fig. 2: NXP’s support vector machine algorithm detects excursions from normal patterns of vibration in a machine
Semiconductor suppliers are making considerable efforts to support their customers’ implementation of anomaly detection software. Perhaps the easiest way to implement it is through the use of a ready-made reference design: NXP Semiconductors, for instance, is to launch an MCU Anomaly Detection Solution by the end of 2019. Based on an i.MX RT1060 crossover processor, the reference design board also features an FXOS8300CQ accelerometer and NPS3000VV differential pressure sensor. Intended for use in smart appliances, industrial machines and smart home devices, it implements a type of anomaly detection algorithm called a support vector machine, as shown in Figure 2.
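The inference side of such a classifier is computationally modest, which is why it suits MCU-class hardware. For a linear support vector machine, the run-time work is just a dot product plus a bias; real deployments (including NXP's) may use kernel methods, and the weights below are placeholder values standing in for what offline training would produce:

```python
def svm_decision(features, weights, bias):
    """Linear SVM decision function: the sign of w.x + b places the
    feature vector on one side of the learned boundary or the other."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "anomalous" if score < 0 else "normal"
```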
It is a general principle of embedded system design for machine learning applications that it should implement the least complex algorithm that will achieve the desired accuracy. On this basis, it is worth evaluating the STMicroelectronics MEMS motion sensors that include its Machine Learning Core (MLC). The MLC is trained through supervised learning:
the developer defines the classes of motion to be analysed, and collects logs of relevant data. Offline data analysis of statistical parameters such as variance and peak-to-peak values in a machine learning tool such as Waikato Environment for Knowledge Analysis (WEKA) produces a decision tree algorithm. This algorithm then runs on the sensor’s MLC, without any involvement from a host microcontroller or processor.
For vibration monitoring, for instance, the MLC can support one decision tree with two nodes at an output data rate of 26Hz. The power overhead of this operation is an additional operating current of just 1μA. ST sensors that include an MLC are the LSM6DSOX, LSM6DSRX and ISM330DHCX six-axis motion sensors, and the IIS2ICLX two-axis inclinometer.
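A decision tree of the size the MLC executes is tiny: in Python it amounts to a couple of comparisons on pre-computed statistics. The feature choices, thresholds and class names below are hypothetical stand-ins for what WEKA would derive from the logged data:

```python
def mlc_style_tree(variance, peak_to_peak):
    """One decision tree, two nodes: classify a window of motion data
    from its pre-computed statistical features."""
    if variance < 0.5:        # node 1: little energy in the signal
        return "stationary"
    if peak_to_peak < 2.0:    # node 2: moving, amplitude within range
        return "normal vibration"
    return "abnormal vibration"
```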
Most semiconductor companies follow the same model as ST, providing tools for compiling a trained model to their hardware, but requiring developers to use a third-party training framework, such as WEKA, Caffe or ONNX, to train the model.
QuickLogic’s SensiML subsidiary is unique in providing a fully integrated, end-to-end design flow. For developers of predictive maintenance applications, this eliminates the need to master a third-party framework which is primarily aimed at computer scientists rather than at the embedded community. The advantage of SensiML’s Edge AI Software Toolkit is that it enables developers to build intelligent IoT sensing devices in days or weeks, without data science or embedded firmware expertise.
The software includes the SensiML Data Capture Lab, an integrated tool for the collection and curation of a training data set. For predictive maintenance, this data set will be time-series data derived from sensors such as accelerometers and gyroscopes. The Edge AI Software Toolkit analyses the labelled data to produce a classifier algorithm which implements anomaly detection. The same tool compiles the algorithm to run on the chosen hardware target, such as a microcontroller or QuickLogic’s own QuickAI™ system-on-chip platform.
Increased Hardware Capability for Greater Sophistication
The common feature of the anomaly detection method supported in the above examples from NXP, ST and QuickLogic is that it is the easiest technique for predictive maintenance to implement, and the algorithm can run on relatively low-power hardware such as the i.MX RT crossover processor based on an Arm Cortex-M7 core.
Support for more sophisticated forms of predictive maintenance based on unsupervised learning is also possible: these systems have the potential to provide greater insight into machine operation, to pinpoint the cause and location of faults, and to provide earlier and more detailed indications of potential faults to enable quicker intervention.
While there is less support currently for this kind of software approach from semiconductor manufacturers, the hardware capability is readily available: devices such as ST’s new STM32MP1 processor family, or the i.MX or Layerscape processor series from NXP, have more than enough raw computing power to run highly sophisticated types of machine learning algorithm. Low- and mid-density FPGAs such as the PolarFire® FPGA family from Microchip Technology or the iCE40 series from Lattice Semiconductor are also ideal for this type of AI application. In cases that require always-on sensor hub operation, low-power FPGAs can often consume less energy than an MCU or processor.
Any designer wishing to explore the scope for unsupervised machine learning aimed at this kind of hardware target should contact the machine learning specialist support engineers at Future Electronics, who will be pleased to help. Future Electronics can also supply any of the reference design boards or tools mentioned here from NXP, ST or QuickLogic.