TECH | Oct 11, 2018

Industry 4.0 as a revision of production models and company roles?

Predictive Maintenance in the fourth industrial revolution

Industry 4.0 is a widely used label for the so-called fourth industrial revolution, characterized not so much by the introduction of automation in itself as by the introduction of interconnected and intelligent automata.

It is a revolution moving at different speeds, channeling diverse redevelopment and modernization initiatives under the #industry40 label, in Italy also to take advantage of funding aimed at encouraging change through the Industry 4.0 plan, now renamed Enterprise 4.0.


The ingredients

However it is named, the 4.0 label indicates the introduction of intelligent, interconnected agents into production systems, and it arrives as an applied synthesis of several enabling technologies such as IoT, Big Data, Cloud and Advanced Analytics.

IoT is based on a data-exchange principle: more or less complex objects (things) collect local data through sensors and make them available on a network (the Internet), retrieving from that same network information about other objects, which they use to perform a task or provide a service. Modern objects now come equipped with a rich on-board sensor suite, but there is also a significant independent market of increasingly refined sensors, with their own connection and processing capabilities, which allows even older machines to be turned into “connected objects”. The sensor layer produces enormous amounts of data, which can simply be consumed by some object or collected for ex-post analysis, quickly reaching Big Data volumes.
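Purely as an illustration of this data-exchange principle, the sketch below shows what a single "thing" might produce: a hypothetical vibration sensor packaging its reading as JSON, ready to be published on the network or accumulated for ex-post analysis. The sensor names, topic and values are invented for the example.

    import json
    import random
    import time

    def read_vibration_mm_s():
        # Placeholder for a real sensor driver; here we simulate a reading.
        return round(random.gauss(2.0, 0.3), 3)

    def build_payload(machine_id, sensor_id):
        # Package the local measurement with identity and timestamp so that
        # other objects (or a collection pipeline) can consume it over the network.
        return json.dumps({
            "machine_id": machine_id,
            "sensor_id": sensor_id,
            "timestamp": time.time(),
            "vibration_mm_s": read_vibration_mm_s(),
        })

    if __name__ == "__main__":
        # In a real deployment this payload would be published, for example over
        # MQTT or HTTP, and stored in a Big Data platform for later analysis.
        print(build_payload("press-01", "vib-axis-x"))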

Big Data is above all a type of architecture able to accommodate and process large volumes of heterogeneous data, such as the critical mass produced by IoT sensors in a Predictive Maintenance (hereafter PdM) context.

Cloud is the most common delivery model, allowing offerings to be set up and made available quickly: whether solutions more focused on the IoT side (Siemens MindSphere, GE Predix, etc.) or those leaning more toward Analytics (SAP Leonardo, Azure ML, etc.), the work is essentially done in the cloud. In any case these are remote systems (hence the attention to the volume of data transported) that should not weigh on the plant infrastructure, so as not to affect the collection system and/or contend for the resources needed to manage the production process.

Also thanks to Big Data, so-called Advanced Analytics based on Machine Learning and AI have finally moved beyond the walls of academia and are entering the industrial context seriously and massively. Able to exploit the power of computing clusters, they enable many rich use scenarios of great interest for companies in all sectors.

PdM sits precisely on this vein of technologies and skills, promising lower maintenance costs and maximum efficiency of machinery (as in corrective maintenance) without jeopardizing the continuity of production (as in preventive maintenance). But, as almost always with analytics, these are complex issues that require significant commitment and a change of mentality, one that can lead to a rethinking of consolidated operating models and challenge both the habits and the very concept of maintenance as a business function. Unfortunately, the difficulty, cultural as well as technical, of managing a forecast horizon with its associated production risk, and of understanding what working from data means and what can be expected from ML/AI, risks generating false expectations and consequent disappointments that could squander a great opportunity.

The correct positioning

The natural tendency to simplify and take refuge in familiar territory leads to continuous slippage from the theme of PdM toward the better-practiced problems of Condition Based Monitoring (hereafter CBM). The name, however, already indicates a different level of action: operational monitoring, confused with predictive monitoring because it too aims to raise the alarm before failure (though by then an imminent one), through more or less complex systems of rules (rule-driven rather than data-driven) designed to detect the critical issue at its first signs (early warning).
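To make the rule-driven character concrete, here is a minimal, hypothetical sketch of such an early-warning rule: fixed thresholds on a single monitored quantity, of the kind a machine or sensor manufacturer might calibrate. All values are invented for the example.

    # Hypothetical thresholds for a monitored vibration channel (mm/s),
    # of the kind a machine or sensor manufacturer might supply.
    EARLY_WARNING = 3.2
    ALARM = 4.0

    def cbm_status(reading_mm_s):
        # Rule-driven monitoring: the alert fires only once the drift toward
        # failure is already visible in the monitored quantity itself.
        if reading_mm_s >= ALARM:
            return "alarm"
        if reading_mm_s >= EARLY_WARNING:
            return "early warning"
        return "ok"

    for reading in (2.1, 3.4, 4.3):
        print(reading, cbm_status(reading))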

Detecting a condition as soon as possible and tracing its evolution back to known critical issues does not mean making a forecast; it is rather a more or less sensitive form of anomaly detection. Although not yet at alarm level, a drift phenomenon has already begun, and this is what CBM detects. It is something very different from forecasting a future state of drift when there are not yet any direct signals. Basically, the difference is not so much the point of arrival as the starting point: in CBM we speak of something that has already begun and will end in a more or less short time; in PdM we speak of something that will begin within a certain time and become critical at a further, more distant time.

CBM and PdM: what are the differences?

Both are in any case important techniques for ensuring the operability of plants, but with substantially different scope and time horizons:

  • in CBM, and in general in the preventive maintenance of which it is perhaps the most advanced expression, we try to reason on the real state of the machinery rather than limiting ourselves to its theoretical nameplate data, monitoring its behavior closely in order to detect as early as possible the onset of dangerous drifts in single components, considered as elementary and isolated units
  • in PdM the horizon widens from the single component to the ecosystem of which it is part, including the behavior of the external elements that can affect its good functioning. It is based on the forecast of a future state of the ecosystem and of the single component within it, and it opens the way to very interesting extensions toward proactive maintenance, the optimal setting of the plants, and the correlation between production process and product quality.

Condition Based Monitoring is closely tied to the control of the physical quantities of the equipment, which is why it is often driven by the manufacturers of the machines or of the sensors, who can supply already well-qualified data and calibrated control-threshold rules, if not a complete, “off-the-shelf” monitoring solution. This leads to the belief that an equally off-the-shelf PdM is possible, to be verified quickly and perhaps free of charge, then opening up to a simple license and installation cost. The context, however, is significantly different, and the ready-to-use solution is an unrealistic promise that often masks a more or less advanced CBM, or generates expectations that will be systematically disappointed.

If Predictive Maintenance considers the production ecosystem in which the single machine operates and the many factors that can affect its functioning, it is clear that there cannot be a universally valid setting. Each production system is different from the others, and this is reflected in the data produced and collected: the data are always different, even for the same infrastructure, because the processes and production programs implemented differ, and so do the cases recorded in the data that will guide the (data-driven) analysis. This makes the idea of an off-the-shelf, ready-to-use product unrealistic; on the other hand, one cannot develop a fully customized model for each production reality either. While data-driven analysis remains a very time-consuming activity, it is certainly possible to create loose models and strategies that do not impose stringent requirements and that can be adapted to different contexts with a relatively contained implementation effort.

Input and output

The extreme specialization on the physical quantities of single machines and components, useful in CBM, becomes hard to sustain in PdM. Here we opt for more neutral learning systems, not dependent on the specific quantity being measured and more focused on general ecosystem concepts, normal behavior and deviant behavior, arranged in models that can be highly structured and instantiated on the single reality after a more or less long period of training. Unsupervised models are preferable, both for keeping the input requirements loose (a consistent failure history may not exist) and for making the system more adaptive, not tied to instruction on content that may have become outdated (because the production process, line or context has changed).
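As a purely indicative sketch of what such a neutral, unsupervised approach could look like (assuming scikit-learn and a generic matrix of sensor channels, with no reference to any specific physical quantity; the data here are synthetic), one might train an anomaly model on a period of normal operation and score new windows of data by their deviation from that learned normality:

    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical training data: rows are time windows, columns are generic
    # sensor channels; the model never needs to know what each channel measures.
    X_normal = rng.normal(size=(1000, 8))

    scaler = StandardScaler().fit(X_normal)
    model = IsolationForest(random_state=0).fit(scaler.transform(X_normal))

    # New observations: the first resembles normal behavior, the second deviates.
    X_new = np.vstack([rng.normal(size=8), rng.normal(loc=4.0, size=8)])
    scores = model.decision_function(scaler.transform(X_new))
    print(scores)  # lower (more negative) scores indicate more deviant behavior

The training window, the construction of features over time, and the scoring thresholds would all have to be instantiated on the single plant's data, which is precisely the relatively contained implementation activity mentioned above.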

Finally, the output of a PdM system also has to deal with the reality of the single plant (its management model, and the processes and tools with which interventions and planning are handled) in order to become truly effective. The forecast indications of a PdM system (state of life or residual life of components) introduce a risk factor that should be weighed against the potential benefits and managed together with all the other factors around which a maintenance plan revolves (warehouse stocks, availability of teams, production targets, etc.). A risk management model must therefore be engaged, a further symptom of the change in mentality that PdM requires.
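To give a flavor of what weighing the risk against the benefits might mean in practice, here is a deliberately simplified sketch, not drawn from the article, that compares the expected cost of acting on a residual-life forecast now versus waiting, given the failure probability estimated over the planning horizon. All figures are invented.

    def expected_cost(act_now, p_failure, planned_cost, failure_cost):
        # Acting now pays the planned intervention cost with certainty;
        # waiting trades that against the risk-weighted cost of an
        # unplanned failure (lost production, emergency repair, etc.).
        return planned_cost if act_now else p_failure * failure_cost

    # Hypothetical numbers: the PdM model estimates a 30% probability of
    # failure within the next planning window.
    p_fail, planned, failure = 0.30, 8_000, 50_000
    print(expected_cost(True, p_fail, planned, failure))   # 8000
    print(expected_cost(False, p_fail, planned, failure))  # 15000.0 -> intervene

A real decision would of course also weigh the other planning factors mentioned above, such as warehouse stocks, team availability and production targets.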

The challenge of PdM in Industry 4.0

One of the non-technological challenges raised by Industry 4.0 therefore concerns the ability to reposition the maintenance model in relation to the other business functions, thinking of it not as a management cost but as a resource for guaranteeing the CEO's goals, according to paradigms that may also differ (proactive maintenance, best setting, performance/availability guaranteed for production targets, etc.). Technology is the enabler, but the 4.0 revolution is accomplished only if it is accompanied by a profound revision of production models and company roles.

Grazia Cazzin