Figure 1: Top 15 cyber threats (source: ENISA)
Automation and emergent cyber-security threats
Automation is a key factor that will be increasingly present in the factories of the future. By definition, automated systems consist of software and hardware components that allow computer systems, network devices or machines to function without manual intervention. Processes are fully automated with the help of control loops and dedicated logic: these systems take several events as input and perform operations based on conditional decision-making and specific control logic. Operations can therefore be performed without a human operator physically present at the site where the system is installed.
This type of system is used in a wide range of applications, such as control and monitoring systems, data security applications, factory automation, automated message response systems and autonomous vehicles.
The rise of automation brings significant economic and safety advantages. Among the benefits are the reduction of human error, improved productivity, the development of standardized operations and better operations management. In most cases, the accuracy and precision of the final output are also higher.
A crucial issue for automated systems is undoubtedly the high security standards that these devices must guarantee: they are exposed to spam, viruses and other security threats, and therefore need a certain degree of protection. Cyber threats against automation networks are often indiscriminate, but they can have potentially destructive results. Examples of attacks are network spoofing and denial of service, which can cause the loss of process alarms and, consequently, serious safety issues that may damage not only the system itself but also the external environment (other systems, workers, etc.).
An example of a cyber threat against operational technology that caused significant material, economic and reputational damage was the Stuxnet computer worm. It caused the destruction of uranium enrichment centrifuges in Iran through commands that were legitimate (syntactically and semantically correct) but harmful to the functioning of the system. The purpose of the software was the sabotage of Iran's Natanz nuclear facility: the virus had to disable the plant's centrifuges while preventing the detection both of the malfunctions and of the virus itself.
The Stuxnet architecture is extremely complex and comprises three main modules: a worm that damages PLCs and allows the software to replicate itself onto other machines, a link file that executes the copies created by the worm, and a rootkit that hides the virus, making it undetectable.
To be successful, a cyber-attack against an automated system must first learn about the targeted plant and its operating methods. The malware acquires this information through sniffing, i.e. the passive interception of data passing through the network, during a prolonged first phase in which it remains dormant. An example is BlackEnergy 3, which was found installed and dormant in the control systems of various American utilities for at least five years. Latency is, however, a weakness of this type of attack, as the malware may be identified during routine verification operations, sometimes even fortuitously: this recently occurred in Saudi Arabia, where an investigation into the causes of a fire led to the discovery of dormant malware (which had no implications for the fire itself).
Risk Assessment as a countermeasure
Risk Assessment is a methodology aimed at identifying hazards and risk factors that have the potential to cause harm (hazard identification), analyzing and evaluating the risk associated with each hazard (risk analysis and risk evaluation), and determining appropriate ways to eliminate the hazard, or to control the risk when the hazard cannot be eliminated (risk control). Risk Assessment can therefore be a useful weapon against cyber-security threats, serving as a main tool for prevention.
The aim of the risk assessment process is to evaluate hazards, then remove each hazard or minimize the level of its risk by adding control measures as necessary. By doing so, a safer and healthier workplace is obtained. The goal is to answer the following questions: what can happen and under what circumstances? What are the possible consequences? How likely are the possible consequences to occur? Is the risk controlled effectively, or is further action required?
Assessments should be carried out by a competent person or a team of individuals with good working knowledge of the situation being studied. The supervisors and workers involved in the process under review should be included, either on the team or as sources of information, as these individuals are the most familiar with the operation.
In general, a risk assessment is developed following five steps:
Step 1: Hazard identification
Hazards can be identified using a large number of techniques: among the most common are interviewing operators and performing safety audits and tests on the systems under examination. Moreover, databases can be useful for further analysis of external factors, e.g. identifying the most frequent cyber threats.
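The database-driven side of hazard identification, e.g. extracting the most frequent threat categories from historical incident records, can be sketched as follows. The record structure and the categories are hypothetical examples, not taken from any specific threat database.

```python
from collections import Counter

# Hypothetical historical incident records: (threat category, year).
incidents = [
    ("phishing", 2019), ("ransomware", 2019), ("phishing", 2020),
    ("denial_of_service", 2020), ("phishing", 2021), ("ransomware", 2021),
]

def most_frequent_threats(records, top_n=3):
    """Rank threat categories by frequency to prioritise hazard identification."""
    counts = Counter(category for category, _year in records)
    return counts.most_common(top_n)

print(most_frequent_threats(incidents))
# → [('phishing', 3), ('ransomware', 2), ('denial_of_service', 1)]
```

A real assessment would query an external incident database instead of an in-memory list, but the ranking step is the same.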
Step 2: Definition of who may be harmed and how
Once the hazards have been identified, the associated levels of risk must be evaluated. It is necessary to understand which components of the system can be damaged, which services can be affected and what level of business disruption a cyberattack could cause.
Step 3: Evaluate the risks and decide on precautions
This step consists of completely removing the associated risks where possible. When this is not possible, appropriate control measures should be put in place. A risk assessment matrix is one possible method to quantitatively measure the level of risk and the residual risk, and evaluation is becoming increasingly accurate thanks to state-of-the-art methodologies, data analysis, modelling and AI algorithms.
Step 4: Record findings and implement them
Significant findings should be recorded. These should include the hazards identified, the estimated risk levels, the likelihood and potential impact of each threat, the potential mitigating effect of countermeasures, etc.
Step 5: Review risk assessment and update if necessary
Risk assessment must be periodically reviewed and updated. Changes may include new equipment, substances or tasks introduced since the last assessment took place. If changes are significant with respect to past risk assessments, a new assessment is necessary to take into account new hazards, as well as any control measures that should be introduced.
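The risk-matrix evaluation mentioned in Step 3 can be sketched in code. This is a minimal illustration, not a prescribed implementation: the five-point likelihood and impact scales and the thresholds for the qualitative bands are assumptions chosen for the example.

```python
# Minimal risk-assessment-matrix sketch (illustrative scales and thresholds).

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Risk as likelihood x impact on a 1-25 scale."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: int) -> str:
    """Map the numeric score to a qualitative band (assumed thresholds)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Steps 1-3 for two hypothetical hazards on an automation network.
hazards = [
    ("network spoofing causing loss of process alarms", "possible", "severe"),
    ("spam e-mail reaching an operator workstation", "likely", "minor"),
]

for name, lik, imp in hazards:
    score = risk_score(lik, imp)
    print(f"{name}: score={score}, level={risk_level(score)}")
```

Hazards in the "high" band would be removed or mitigated first, and the recorded scores (Step 4) would be recomputed at every review (Step 5).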
New findings for Risk Assessment of Automated Systems
Threat mitigation and the improvement of security levels in the field of automated electronic systems is a widely discussed topic. As a consequence, several methodologies have been developed to implement valid risk analyses in the best possible way. A first aid in identifying and mitigating risks affecting information and automated systems can be provided by software applications. For instance, applications can be designed around a database containing historical data on errors and risks, together with an approach based on risk assessment and neutralization. This makes it possible to exclude the most serious risks at the earliest stages of the information system's life cycle, while focusing on those considered relevant to the system under examination.
Since the domain of automated systems is very broad, targeted methodologies have been developed to respond to the needs of specific sectors. Concerning Automated Driving (AD) systems, new concepts have recently been introduced, such as a state-of-the-art framework for cyber-security analysis known as Threat Analysis and Risk Assessment (TARA). It quantifies the likelihood and the impact of an attack and combines them to derive an attack risk value. One novelty that could be introduced is the bespoke integration of the impact calculation, which incorporates the notion of the controllability of an attack by the AD system and/or by the driver. In the robotics field, for instance, researchers have proposed an "attacker" model and confronted it with the minimal set of requirements that industrial robots should honour: precision in sensing the environment, correctness in executing the control logic, and safety for human operators. It has been demonstrated that the modelled attacker can subvert such requirements by exploiting software vulnerabilities, leading to severe consequences that are unique to the robotics domain.
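The TARA-style combination of likelihood and impact into an attack risk value, extended with a controllability factor, can be sketched as follows. The scales, the multiplicative combination and the linear controllability weighting are assumptions for illustration, not the exact formulation of the cited framework, and the two threats listed are hypothetical examples.

```python
# Sketch of a TARA-style attack risk value with a controllability factor.
# All scales and the combination rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    attack_likelihood: int  # 1 (very hard to mount) .. 5 (trivial)
    impact: int             # 1 (negligible) .. 5 (catastrophic)
    controllability: int    # 1 (easily controlled by AD system/driver) .. 4 (uncontrollable)

def attack_risk(t: Threat) -> float:
    """Combine likelihood and impact, weighting impact by controllability.

    An attack that the AD system or the driver can easily control
    contributes a lower effective impact (assumed linear weighting).
    """
    effective_impact = t.impact * t.controllability / 4
    return t.attack_likelihood * effective_impact

# Two hypothetical threats against an AD system.
threats = [
    Threat("CAN-bus message injection", attack_likelihood=3, impact=5, controllability=4),
    Threat("GPS spoofing", attack_likelihood=4, impact=4, controllability=2),
]

for t in sorted(threats, key=attack_risk, reverse=True):
    print(f"{t.name}: risk={attack_risk(t):.1f}")
```

Ranking by the resulting value lets the analysis prioritise uncontrollable, high-impact attacks even when more controllable ones are easier to mount.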
The combination of all studies relating to risk assessment for the various automated systems is useful and crucial to the design of any system; the result will be a safer overall environment for users to operate in, as well as resilient and protected equipment that safeguards the continuity of operations and the economic value of assets.
Josh Fruhlinger, "What is Stuxnet, who created it and how does it work?," CSO Online, 2017. https://www.csoonline.com/article/3218104/what-is-stuxnet-who-created-it-and-how-does-it-work.html
Health and Safety Executive, "Risk Assessment: A Brief Guide to Controlling Risks in the Workplace," 2014. [Online]. Available: www.hse.gov.uk/pubns/indg163.htm
A. S. Boranbayev, S. N. Boranbayev, A. M. Nurusheva, K. B. Yersakhanov, and Y. N. Seitkulov, "Development of Web Application for Detection and Mitigation of Risks of Information and Automated Systems," Eurasian J. Math. Comput. Appl., vol. 7, no. 1, pp. 4–22, 2019, doi: 10.32523/2306-6172-2019-7-1-4-22.
T. Stolte, G. Bagschik, A. Reschka, and M. Maurer, "Hazard Analysis and Risk Assessment for an Automated Unmanned Protective Vehicle," arXiv, 2017.
D. Quarta, M. Pogliani, M. Polino, F. Maggi, A. M. Zanchettin, and S. Zanero, "An Experimental Security Analysis of an Industrial Robot Controller," Proc. IEEE Symp. Secur. Priv., pp. 268–285, 2017, doi: 10.1109/SP.2017.20.