Dr Matt Luckcuck and Dr Marie Farrell discuss how robust Formal Methods can help to verify autonomous systems.

Autonomous and automated systems present a variety of challenges for robust verification and validation. Autonomous systems make choices without human intervention; they are often embodied in robotic systems, meaning that they can interact with the real world. Autonomous systems are being introduced into safety-critical scenarios, where their safety must be assured and, often, demonstrated to a regulator.

Formal Methods are a broad category of mathematically-based approaches for developing and verifying software that provide robust evidence that a system obeys its specification. (You can find a more detailed introduction to Formal Methods on the Formal Methods Europe association website.) Trying to introduce Formal Methods is often met with assumptions, both supportive and critical, about how useful they are; for example, two papers describing myths about Formal Methods, published in 1990 [1] and 1995 [2] respectively, contain points that are still used to argue against adopting Formal Methods.

Formal Methods have been shown to be effective in real-world situations; they have been successfully used in many industrial projects [3], and in academic work to overcome the challenges faced when specifying and verifying autonomous systems [4]. We argued in a previous position paper [5] that autonomous systems, particularly autonomous robotic systems, need the strong guarantees that Formal Methods can provide.

Verification Challenges
Autonomous systems present a variety of challenges to verification efforts, but the academic literature on formal verification for autonomous systems is already investigating many of them [4].

One major verification challenge that autonomous systems pose is the autonomy itself. The system is making decisions without human intervention, planning and reacting to its surroundings. How the autonomy is implemented has a large impact on whether the decisions can be verified against a specification of the system’s expected behaviour. Autonomy implemented using Symbolic Artificial Intelligence (such as agent-based systems) is often easier to verify, whereas Sub-Symbolic (connectionist) Artificial Intelligence can be more difficult. This should be taken into consideration when designing the system, because the opportunity to verify the decisions that the system is making should not be squandered.

Another challenge is presented by the autonomous system’s interactions with an often unpredictable environment. If the autonomous system is controlling a robot, then this interaction is physical; if the autonomous system is providing decision support or scheduling (for example) then it is interacting with other computer systems and humans, but there is often still some ultimate impact on the real world. This poses two problems:

  1. the potential (safety, security, or ethical) impacts that the system’s decisions can have on its environment; and,
  2. the reality of the system’s environment changing in ways that were not predicted during its design.

When introduced into safety-critical scenarios (where failure could result in human injury or death), autonomous systems must often pass regulatory scrutiny before being used. There are often standards or guidelines that are specific to a particular sector or to a component’s function; regulators themselves may also publish principles that should be obeyed during system development. The challenge here is twofold: ensuring that regulators understand what autonomy can (and cannot) do, and ensuring that they accept and understand the results produced by formal verification approaches.

The FMAS Workshop
In 2019, we founded the workshop on Formal Methods for Autonomous Systems (FMAS) to provide a venue for academic research that tackles these verification challenges, which are unique to autonomous systems, using Formal Methods. The third edition of FMAS (FMAS 2021) was held online on the 21st and 22nd of October 2021, run from Maynooth University in Ireland. FMAS 2021 showcased a broad range of Formal Methods, with papers covering many styles of autonomy, including work on Agent-Based (Symbolic) approaches and a variety of Machine Learning (Sub-Symbolic) approaches, applied to systems operating in a variety of sectors and scenarios.

[Image: The front of the Eolas building at Maynooth University, Ireland. It is a grey and red, flat-fronted modern building, with several rectangular windows, and a revolving glass entrance door.]

FMAS 2021 also had two invited speakers, who each presented a broader overview of work on applying Formal Methods to autonomous systems. Prof. Clare Dixon, of the University of Manchester in the UK, presented a broad collection of research working towards verifying autonomous robot systems. Dr Divya Gopinath, from the Robust Software Engineering (RSE) group at NASA Ames Research Center in the USA, presented a variety of formal approaches to quantifying and verifying Deep Neural Networks.

The proceedings for FMAS 2021 are available via EPTCS. The FMAS workshop will return in 2022.

We Can Formally Verify Autonomous Systems
Just like other verification techniques, formal verification requires time, skill, and a clear specification of what the system should do. Formal Methods are not a silver bullet, but they are a powerful technique to have in your repertoire when verifying autonomous systems. They can be used throughout the system’s development life cycle; they can provide both static and dynamic verification; and they can exhaustively and automatically check unambiguous specifications of the system’s behaviour [6].
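To give a flavour of what “exhaustively and automatically” means here, the sketch below explores every reachable state of a deliberately tiny, invented robot model and checks a safety property in each one, returning a counterexample trace when the property fails; real model checkers, such as those surveyed in [4], do this for far richer models and logics.

```python
from collections import deque

# Toy transition system: a robot at positions 0..4 may move left or right,
# and position 4 is a hazard it must never reach. (All invented for this sketch.)
INITIAL = 0
HAZARD = 4

def successors(pos):
    """All states reachable in one step (the robot may move either way)."""
    return [p for p in (pos - 1, pos + 1) if 0 <= p <= 4]

def check_safety(initial, safe):
    """Breadth-first exploration of every reachable state.
    Returns (True, None) if `safe` holds everywhere, otherwise
    (False, trace) where `trace` is a path to a violating state."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        trace = frontier.popleft()
        state = trace[-1]
        if not safe(state):
            return False, trace  # counterexample: a concrete path to a bad state
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(trace + [nxt])
    return True, None

ok, trace = check_safety(INITIAL, safe=lambda pos: pos != HAZARD)
print(ok, trace)  # the hazard is reachable, so a counterexample is returned
```

The counterexample trace is the practically useful output: it shows the engineer exactly how the system can reach an unsafe state, which is what guides the redesign.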

Checking the decisions an autonomous system makes is an important opportunity to ensure that it can never choose to do something unsafe or unethical; this is analogous to a human passing a test before being allowed to pilot a plane or drive a car, for example. Applying any verification to the decision-making components requires the system design to prioritise verifiability. Formal verification can be applied to agent-based autonomy (e.g. [7]), and formal techniques for machine learning are under development (e.g. [8]).
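As a minimal illustration of checking decision-making, the sketch below exhaustively tests an invented rule-based decision function against a safety requirement in every possible sensor situation; frameworks such as the one in [7] do this kind of check for much richer agent programs.

```python
from itertools import product

def decide(obstacle_ahead: bool, human_nearby: bool, battery_low: bool) -> str:
    """A toy rule-based policy standing in for an agent's decision-making.
    (The rules and sensor inputs are invented for this example.)"""
    if obstacle_ahead:
        return "stop"
    if human_nearby or battery_low:
        return "slow_down"
    return "proceed"

def is_safe(situation, action):
    """Safety requirement: never proceed at full speed while an
    obstacle is ahead or a human is nearby."""
    obstacle, human, _ = situation
    return not (action == "proceed" and (obstacle or human))

# Exhaustive check: every combination of sensor inputs is a "situation",
# so all 2**3 of them can be enumerated and checked automatically.
violations = [s for s in product([False, True], repeat=3)
              if not is_safe(s, decide(*s))]
assert not violations, f"unsafe decisions found: {violations}"
print("all", 2 ** 3, "situations verified safe")
```

Because the policy here is symbolic and finite, the check is exhaustive: there is no input the verification has not covered, which is precisely the guarantee that testing a sample of inputs cannot give.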

An autonomous system’s interactions with the real world must also be carefully checked. Formal verification of interactions is possible with current techniques (for example [9]), but a key issue is that the environment changes, and the autonomous system’s model of its environment must be kept up to date; work such as [10] is beginning to tackle autonomous systems safely interacting with their environment.
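When the environment cannot be fully modelled in advance, one complementary option is dynamic (runtime) verification: a monitor checks, while the system runs, that each observed interaction satisfies a formally stated invariant. The sketch below is a minimal, invented illustration of such a monitor, not any particular framework’s API.

```python
class SafetyMonitor:
    """A minimal runtime monitor: checks an interaction invariant on every
    observation, rather than proving it over all behaviours in advance."""

    def __init__(self, invariant, description):
        self.invariant = invariant
        self.description = description
        self.violations = []

    def observe(self, state):
        """Call on every new observation; record any invariant violation.
        Returns True while no violation has been seen."""
        if not self.invariant(state):
            self.violations.append(state)
        return not self.violations

# Invariant (invented for this example): the robot keeps at least
# 0.5 m clearance from any person whenever it is moving.
monitor = SafetyMonitor(
    invariant=lambda s: s["speed"] == 0 or s["min_person_distance"] >= 0.5,
    description="moving implies >= 0.5 m clearance from people",
)

# A stream of observations, e.g. from the robot's sensors at runtime.
trace = [
    {"speed": 1.0, "min_person_distance": 2.0},
    {"speed": 1.0, "min_person_distance": 0.4},  # violation: moving, too close
    {"speed": 0.0, "min_person_distance": 0.4},  # stopped: allowed
]
for state in trace:
    monitor.observe(state)
print(len(monitor.violations), "violation(s) detected")
```

The monitor only sees the behaviours that actually occur, so it is weaker than exhaustive static verification, but it keeps checking even when the environment departs from the design-time model.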

Formal Methods provide robust evidence that a system obeys its specification. Ensuring that regulators accept this evidence will require collaboration between Formal Methods practitioners and the regulators themselves. Some sectors have specific standards for using Formal Methods, such as the aerospace sector’s DO-333/ED-216 [11], which is a supplement to the DO-178C/ED-12C standard [12]. In other sectors, conversations with regulators are only just beginning. For example, a collaboration between academics and the UK’s civilian nuclear regulator produced a white paper [13] to start the discussion of how to verify autonomous systems used in hazardous environments.

If an autonomous system fails, it can have huge safety, security, and ethical impacts, so the most robust verification techniques available should be used. Formal Methods don’t solve all problems, but they are capable of providing robust verification of many aspects of an autonomous system’s behaviour. Formal Methods can be used to verify autonomous systems, and they should be included in the toolbox of autonomous systems verification techniques.


  • [1]. Hall A. Seven Myths of Formal Methods. IEEE Software 1990; 7(5): 11–19. DOI: 10.1109/52.57887.
  • [2]. Bowen J and Hinchey M. Seven More Myths of Formal Methods. IEEE Software 1995; 12(4): 34–41. URL http://ieeexplore.ieee.org/document/391826/.
  • [3]. Woodcock J, Larsen PG, Bicarregui J et al. Formal Methods: Practice and Experience. ACM Computing Surveys 2009; 41(4): 1–36. DOI: 10.1145/1592434.1592436.
  • [4]. Luckcuck M, Farrell M, Dennis LA et al. Formal Specification and Verification of Autonomous Robotic Systems: A Survey. ACM Computing Surveys 2019; 52(5): 1–41. DOI: 10.1145/3342355.
  • [5]. Farrell M, Luckcuck M and Fisher M. Robotics and Integrated Formal Methods: Necessity Meets Opportunity. In Furia C and Winter K (eds.) Integrated Formal Methods, LNCS, volume 11023. Springer. ISBN 978-3-319-98938-9, pp. 161–171. DOI: 10.1007/978-3-319-98938-9_10. arXiv:1805.11996.
  • [6]. Luckcuck M. Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, July 2021. DOI: 10.1177/1748006X211034970. URL https://arxiv.org/abs/2012.00856.
  • [7]. Dennis LA, Fisher M, Webster M et al. Model Checking Agent Programming Languages. Automated Software Engineering 2012; 19(1): 5–63. DOI: 10.1007/s10515-011-0088-x.
  • [8]. Usman M, Gopinath D and Păsăreanu C. QuantifyML: How Good is my Machine Learning Model? In Proceedings of the Third Workshop on Formal Methods for Autonomous Systems, October 2021. DOI: 10.4204/EPTCS.348.6.
  • [9]. Farrell M, Mavrakis N, Dixon C and Gao Y. Formal Verification of an Autonomous Grasping Algorithm. In International Symposium on Artificial Intelligence, Robotics and Automation in Space. European Space Agency, 2020. URL https://www.hou.usra.edu/meetings/isairas2020fullpapers/pdf/5040.pdf.
  • [10]. Lehmann S, Rogalla A, Neidhardt M, Schlaefer A and Schupp S. Online Strategy Synthesis for Safe and Optimized Control of Steerable Needles. In Proceedings of the Third Workshop on Formal Methods for Autonomous Systems, October 2021. DOI: 10.4204/EPTCS.348.9.
  • [11]. EUROCAE and RTCA. DO-333/ED-216 – Formal Methods Supplement to DO-178C and DO-278A, 2012. URL https://eshop.eurocae.net/eurocae-documents-and-reports/ed-216/.
  • [12]. EUROCAE and RTCA. DO-178C/ED-12C – Software Considerations in Airborne Systems and Equipment Certification, 2012. URL https://eshop.eurocae.net/eurocae-documents-and-reports/ed-12c-with-corrigendum-1/.
  • [13]. Luckcuck M, Fisher M, Dennis LA, Frost S, White A and Styles D. Principles for the Development and Assurance of Autonomous Systems for Safe Use in Hazardous Environments. Zenodo, 14 June 2021. DOI: 10.5281/zenodo.5012322.


Dr Matt Luckcuck is a post-doctoral researcher working on the VALU3S project in the Department of Computer Science at Maynooth University, Ireland. His research interests include formal behavioural specification, model-checking, runtime verification, and safety verification and assurance for autonomous, automated, and other safety-critical systems. His previous research includes formal verification of autonomous systems for use in the civilian nuclear industry, developing guidance for designing verifiable autonomous systems in collaboration with the UK nuclear regulator (the Office for Nuclear Regulation), and formally modelling the API of a safety-critical programming language.

Dr Marie Farrell is a post-doctoral researcher working on the VALU3S project in the Department of Computer Science at Maynooth University, Ireland. Her current area of research is in using and combining formal methods to reason about, and provide certification evidence for, aerospace and autonomous robotic systems that are to be deployed in hazardous environments. She is also Secretary of the working group developing the IEEE P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems.
