When we think about Information Technology, one assumption we treat as axiomatic is that everything develops at great speed. Sometimes, however, this is not entirely true. To be quite frank, really important solutions always need a little more time than expected to truly take hold. It happened with SMS (text messaging), which took years to establish itself as a means of interpersonal communication. With video telephony, which we had been talking about for over twenty years but which proved effective only in the last few. With e-books, which were discussed for decades before they spread significantly. And what about Blockchain, home automation, the thousands of technological applications which, from the moment of conception, needed a far longer gestation than expected? It even happened with the Internet, which needed the Web to truly take off. In short: it is not wrong to say that even in the age of the Net, good ideas need time to assert themselves. They need it partly because they induce a change in people’s behavior (and that kind of change cannot be compressed too much in time), and partly because they require contextual conditions, technological but not only, which are not always immediately available.
It is not surprising, therefore, that the concept of the Digital Twin, so widely discussed at the moment, is not the “offspring” of the Industry 4.0 phenomenon, but rather its ancestor. It was coined by Michael Grieves, a professor at the University of Michigan, in 2002, when the concept of Industry 4.0 was still a long way off (it was born in 2011).
What is a Digital Twin?
What a Digital Twin is can be stated quickly: the “digital twin” is the digital reproduction of a physical process. A reproduction based on real data, which makes it possible, thanks to such data, to simulate changes to the process and analyze the results without the process itself actually being affected. A Digital Twin is therefore a data-based model of a system, which allows simulations to be carried out on a virtual reproduction of that system. In short, a kind of second life of the process, where variations, behaviors, reactions and consequences can be tested, all without any impact on the physical world. The advantages are obvious: having the Digital Twin of a production plant allows you to optimize it by testing the effects of changes inside a protected environment, without risking negative repercussions on real production.
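The pattern described above, calibrating a virtual model on real observations and then testing a change on the model rather than on the plant, can be sketched in a few lines. Everything here is an illustrative assumption: the production line, the measurements, and in particular the linear speed-vs-defect relation are invented for the example, not taken from any real Digital Twin platform.

```python
# Hypothetical sketch of the Digital Twin idea: a virtual model of a
# production line is calibrated on observed data, and a process change is
# then tested on the model instead of on the real plant.
from dataclasses import dataclass


@dataclass
class LineTwin:
    """Digital twin of a fictional bottling line (illustrative only)."""
    units_per_hour: float   # throughput estimated from observed data
    defect_rate: float      # fraction of defective units observed

    @classmethod
    def from_observations(cls, hourly_counts, defect_counts):
        """Calibrate the twin from real measurements (here: plain lists)."""
        total = sum(hourly_counts)
        return cls(
            units_per_hour=total / len(hourly_counts),
            defect_rate=sum(defect_counts) / total,
        )

    def simulate_shift(self, hours, speed_factor=1.0):
        """Simulate one shift, optionally with a virtual speed change.

        Assumption (purely illustrative): running faster raises the defect
        rate linearly. In a real twin this relation would itself be learned
        from data. Returns the expected number of good units produced.
        """
        rate = self.units_per_hour * speed_factor
        defects = self.defect_rate * (1 + 0.5 * (speed_factor - 1))
        produced = rate * hours
        return round(produced * (1 - defects))


# Calibrate from made-up observed data, then test a change virtually:
twin = LineTwin.from_observations(
    hourly_counts=[980, 1005, 995, 1020],
    defect_counts=[10, 12, 9, 11],
)
baseline = twin.simulate_shift(hours=8)                   # current setup
faster = twin.simulate_shift(hours=8, speed_factor=1.2)   # +20% speed, on the twin
```

The point of the sketch is the workflow, not the model: the “what if we run 20% faster?” question is answered on the twin, and only a change that looks beneficial there would ever be tried on the physical line.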
But why is there so much talk about the Digital Twin now, more than fifteen years after it was first conceived?
The answer is simple: reproducing a physical process reliably enough to test variations on it requires data. A lot of data. And the ability to process it.
In 2002, just to give a reference, the concept of Web 2.0 had not yet been born and Wikipedia was one year old: from the point of view of computer science, a geological era ago. Since then, computational capacity has grown enormously; the pervasiveness of the network has become a fact; we have started to talk seriously about the Internet of Things, which has led to widespread sensors that continuously acquire data from operational technologies; and we have developed Artificial Intelligence systems able to simulate real phenomena with increasing accuracy. In short, an ecosystem has emerged which allows us to acquire enough information, in both quantity and quality, from the physical world to simulate reality accurately. Behind Digital Twin-based models, in fact, there is not one specific technology, but a set of enabling technologies that have now reached the critical mass needed for effective applications and use cases.
The evolution of Digital Twin-based models, whose characteristics are well illustrated in the White Paper recently published by Engineering (available for those who wish to learn more about the subject), is only at the beginning. The intersection between Information Technology and Operational Technology will open up integration perspectives which today we have difficulty even imagining. What comes after Industry 4.0 will be built on cross-feedback models in which the boundary between analog and digital will be so blurred as to become almost indistinguishable. In that context, the very concepts of “real” and “virtual” will lose their meaning, replaced by a dimension where analog and digital blend into models of increasingly complex realities: a reality where it will be up to us to grasp the best of both worlds.