Statistical Model Checking (SMC) is a powerful and widely used approach that consists of estimating the probability that a system satisfies a temporal property. This is done by monitoring a finite number of executions of the system and then extrapolating the result using statistics. The answer is correct up to some confidence that can be parameterized by the user. It is known that SMC mitigates the state-space explosion problem and allows us to handle requirements that cannot be expressed in classical temporal logics. The approach has been implemented in several toolsets and successfully applied in a wide range of areas such as systems biology, robotics, and automotive. Unfortunately, SMC is not a panacea, and many important classes of systems and properties are still out of its scope. Moreover, SMC still indirectly suffers from an explosion linked to the number of simulations needed to converge when estimating small probabilities. Finally, the approach has not yet been lifted to a professional toolset directly usable by industry practitioners.
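The core estimation loop behind SMC can be sketched in a few lines: simulate N independent runs, count how many satisfy the property, and choose N from the desired precision and confidence via the Chernoff-Hoeffding bound. In this minimal sketch, the simulated "system" is a hypothetical stand-in (a biased coin), not a model from the tutorial:

```python
import math
import random

def smc_estimate(simulate, epsilon=0.01, delta=0.05, seed=0):
    """Estimate P(property holds) by Monte Carlo sampling.

    The Chernoff-Hoeffding bound gives the number of runs n needed so
    that the estimate is within +/- epsilon of the true probability
    with confidence at least 1 - delta.
    """
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    rng = random.Random(seed)
    successes = sum(simulate(rng) for _ in range(n))
    return successes / n, n

# Hypothetical system: one "simulation run" reports whether the
# monitored execution satisfied the property (true probability 0.3).
def demo_run(rng):
    return rng.random() < 0.3

p_hat, runs = smc_estimate(demo_run, epsilon=0.02, delta=0.05)
```

With epsilon = 0.02 and delta = 0.05 the bound requires 4612 runs, after which `p_hat` lies within 0.02 of the true probability with at least 95% confidence.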
In the first session of this tutorial, we will introduce the basics of SMC and its relation to model checking and runtime verification. We then present several contributions that increase the efficiency of SMC and widen its applicability to a larger class of systems. As an example, we will show how to extend SMC to estimate the probability of rare events. The probability of such events is so small that classical estimators such as Monte Carlo would almost always estimate it to be zero. We will also show how to apply SMC to systems that combine both non-deterministic and stochastic aspects. We then show that SMC can be extended to a new class of problems. More precisely, we consider the problem of detecting probability changes at runtime, which we solve by exploiting an algorithm from the signal-processing area.
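The abstract does not name the change-detection algorithm used. As one plausible illustration, a classic signal-processing detector such as CUSUM flags when an observed statistic drifts detectably away from a nominal value; the parameters and monitoring stream below are purely hypothetical:

```python
def cusum(samples, target_mean, drift=0.05, threshold=3.0):
    """One-sided CUSUM: return the index at which the observations
    rise detectably above target_mean, or None if they never do."""
    s = 0.0
    for i, x in enumerate(samples):
        # Accumulate positive deviations beyond an allowed drift.
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i  # change detected at this sample
    return None

# Hypothetical monitoring stream: a per-window satisfaction estimate
# whose underlying probability jumps from 0.2 to 0.8 after sample 100.
stream = [0.2] * 100 + [0.8] * 100
alarm = cusum(stream, target_mean=0.2)
```

On this stream the cumulative sum stays at zero before the change and accumulates 0.55 per sample afterwards, so the alarm fires at index 105, a few samples after the jump.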
In the second session of this tutorial, we will show how SMC can be extended to handle timed systems. Timed automata and games, priced timed automata, and energy automata have emerged as useful formalisms for modeling real-time and energy-aware systems as found in many embedded and cyber-physical systems. Over the last 20 years, the real-time model checker UPPAAL has been developed, allowing for efficient verification of hard timing constraints of timed automata. Moreover, a number of significant branches exist, e.g., UPPAAL CORA providing efficient support for optimization, UPPAAL TIGA allowing for automatic synthesis of strategies for given safety and liveness objectives, and UPPAAL SMC offering a highly scalable statistical model checking engine supporting performance analysis of stochastic hybrid automata. Most recently, the branch UPPAAL STRATEGO has been released, supporting synthesis (using machine learning) and evaluation of near-optimal strategies for stochastic priced timed games. The lecture will review the various branches of UPPAAL and indicate their concerted application to a range of real-time and cyber-physical examples.
In the third and last session, we will illustrate several applications of SMC in areas including the social sciences, robotics, and energy-centric systems.
Lecturers:
Axel Legay, INRIA
Axel Legay is a permanent researcher at Inria Rennes (France) and an Industry Professor at Aalborg University (Denmark). He obtained a PhD from the University of Liège (Belgium, 2007) and a habilitation from the University of Rennes (France, 2015). He has received several awards, including the IBM prize for the best thesis in computer science (2008) and the Velux Prize (2014). Legay is the co-author of more than 250 publications in top-class conferences and journals. He leads the Inria efforts on security and systems of systems. His main research interests are in developing formal specification and verification techniques for software engineering. His original contributions are in the verification of infinite-state systems, for which he offered one of the first generic approaches. He has also made significant contributions in the areas of modal transition systems and product-line engineering. Legay is best known for his contributions to Statistical Model Checking, for which he offered the first extension algorithms for rare events and non-determinism. His tool Plasma is used by both academics and engineers.
Kim Larsen, Aalborg University
Kim G. Larsen is a full professor in computer science and director of the Centre for Embedded Software Systems (CISS). He received his Ph.D. in Computer Science from Edinburgh University in 1986 and is an honorary doctor (honoris causa) at Uppsala University (1999) and at École Normale Supérieure de Cachan, Paris (2007). He has also been an Industrial Professor in the Formal Methods and Tools Group, Twente University, The Netherlands. His research interests include modelling, verification, and performance analysis of real-time and embedded systems, with applications and contributions to concurrency theory and model checking. In particular, since 1995 he has been the prime investigator of the tool UPPAAL and a co-founder of the company UP4ALL International. In 2013 he received the CAV Award for his work on UPPAAL, “the foremost model checker for real-time systems”. He has published more than 250 papers in international journals and conferences and has co-authored 6 software tools. He is a lifelong member of the Royal Danish Academy of Sciences and Letters, Copenhagen, and a member of the Danish Academy of Technical Sciences, where he serves as Digital Wiseman. He is also the Danish National Expert in the EU ICT Programme and a Knight of the Order of the Dannebrog (2007). In 2015 he won an ERC Advanced Grant with the project LASSO: Learning, Analysis, Synthesis and Optimization of Cyber-Physical Systems.
Electronically controlled systems are increasingly prevalent in modern society, particularly in the transportation, energy, and healthcare sectors. A unique feature of these systems is the mixed discrete-continuous behavior they exhibit, which manifests as a result of the interaction of embedded processors (discrete entities) with the physical systems (continuous entities) they control. In this tutorial, we focus on hybrid systems, that is, systems with both discrete and continuous dynamics. Examples of hybrid systems range from embedded control algorithms in vehicles for cruise control, to medical devices such as pacemakers and infusion pumps, to protocols for collision avoidance in air-traffic control and path planning in autonomous driving. These systems are extremely safety-critical, and hence it is essential to develop methods that provide strong guarantees of their correctness.
In this tutorial, we present a brief introduction to verification techniques for hybrid systems. We start by discussing models and semantics for hybrid systems. We then focus on two important properties, namely safety and stability, and present an overview of verification methods. For safety verification, we will describe sampling-based algorithms for flowpipe construction of the reachable sets of hybrid systems and their application to safety verification. For stability verification, we present computational methods based on Lyapunov functions as well as more recent methods based on abstraction refinement. We will illustrate these techniques in hybrid-systems verification tools.
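As a small illustration of the Lyapunov-based approach in the simplest (linear) setting, one can certify stability of x' = Ax by solving the Lyapunov equation A^T P + P A = -I and checking that P is positive definite; the system matrix below is a hypothetical damped oscillator, not an example from the tutorial:

```python
import numpy as np

def lyapunov_certificate(A, tol=1e-9):
    """Search for a quadratic Lyapunov function V(x) = x^T P x for the
    linear system x' = A x by solving A^T P + P A = -I.

    If the returned P is symmetric positive definite, it certifies
    global asymptotic stability of the origin.
    """
    n = A.shape[0]
    I = np.eye(n)
    # Vectorized Lyapunov equation:
    # (I kron A^T + A^T kron I) vec(P) = vec(-I), column-major vec.
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -I.flatten(order="F")).reshape(n, n, order="F")
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    stable = bool(np.all(np.linalg.eigvalsh(P) > tol))
    return P, stable

# Hypothetical stable system: a damped harmonic oscillator
# (eigenvalues of A are -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
P, stable = lyapunov_certificate(A)
```

For this A the solver returns P = [[1.25, 0.25], [0.25, 0.25]], whose eigenvalues are positive, so V(x) = x^T P x decreases along all trajectories and the origin is certified stable.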
Lecturer:
Pavithra Prabhakar, Kansas State University
Pavithra Prabhakar is an assistant professor of computer science at Kansas State University. She obtained her doctorate in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2011, where she also obtained a master's degree in Applied Mathematics. She was previously a CMI (Center for Mathematics of Information) fellow at Caltech and has held a faculty position at the IMDEA Software Institute. Her main research interest is in the formal analysis of cyber-physical systems, with emphasis on both theoretical and practical methods for the verification and synthesis of hybrid control systems. She is the recipient of a best-paper honorable mention from HSCC, the Sohaib and Sara Abbasi fellowship from UIUC, and the M.N.S. Swamy medal from the Indian Institute of Science. She has been awarded a Marie Curie Career Integration Grant from the European Union, an NSF CAREER Award, and a Summer Faculty Fellowship from the Air Force Research Lab.
Domain-specific tool support is the key to opening development responsibility to a wider public without dedicated programming knowledge. This is important, for example, in areas like business process modelling or scientific workflows. The intention behind the tools and methods presented in this lecture is to provide domain experts with appropriate (non-programming) tools so that they can solve problems within their domain in a non-technical way and even construct domain-specific software products without dedicated programming knowledge.
First, Cinco is presented, a meta-tooling suite for developing domain-specific graphical development tools in terms of fully operative Eclipse products. It can be regarded as a domain-specific instance of generic meta-modeling frameworks such as Eclipse EMF/Graphiti that radically eases their use through its domain-specific focus and overall generative approach: Cinco produces a ready-to-run modeling tool fully automatically from simple specifications. As this allows for early successes, the hands-on tutorial part of this session will guide the participants in using Cinco to create a full graphical DSL modeling tool of their choice (e.g., for state charts, Petri nets, BPMN, timed automata, dedicated security modeling (as elaborated on during the security day), or even architectural models or industrial plant layouts), consisting of an abstract specification describing its structure and appearance, accompanied by a generator for executable code or some form of configuration.
The second part of this lecture will demonstrate Cinco's potential along a sophisticated model-based development tool for the web domain. The “DyWA Integrated Modeling Environment” (DIME) is a Cinco-based modeling tool for the modeling and full generation of complex web applications. It allows one to model all of the application's aspects using various model types consistently interconnected in the sense of the one-thing approach (OTA). Models for processes, data schema, database search queries, access control, and user interfaces are generated into a fully functioning rich internet application with an AngularJS frontend that communicates via REST services with a DyWA backend. Without requiring any expertise in database design or web development with HTML, CSS, and JavaScript, the participants will use DIME to develop a multi-user web application.
Lecturers:
One of the limiting factors for integrated security is the lack of coordination between the different layers of security that individually cover the layers of the IT stack. Today, the hardware, operating system, middleware, virtualisation layer, and the many layers and components that constitute a user-facing application, including data and persistence, as well as the communication over networks, are still developed largely independently, with little inherent integration. Data at rest, data in motion, communication access and governance, and system evolution (e.g., through updates) are also managed individually, too often under the responsibility of different actors, different technologies, and different products and vendors. In this context, concerns about holistic security are growing and call for better integration and communication across the different layers.
The tutorial will illustrate how a partnership of different actors (SME vendors in security, leading edge research institutions, customers/users) gathers to create an open source platform based on the technologies presented during the previous day that integrates all these layers.
Lecturers:
Many IT professionals believe that the key to success is the development and application of excellent technical skills. While technical skills are vitally important, industry leaders have developed higher expectations of their technical staff. They need engineers who can communicate, manage teams, provide leadership, negotiate, and develop working relationships with diverse constituencies, all with a high sense of ethics. Articles about why departments such as Marketing and Research and Development do not “play well together” are published frequently in trade journals and highlight the need for better soft skills. Industry leaders from BMW give presentations titled “The Effective Engineer: Development of the ‘Soft Skills’” to international engineering management programs, highlighting industry's need for such individuals and discussing how these skills lead to personal as well as professional growth. ABET in the USA, ASIIN in Germany, and other accrediting organizations require soft skills in university engineering and scientific curricula in order for those programs to achieve accreditation. In a global, fast-paced environment, the engineer must take on more leadership tasks than ever before and communicate and manage across time zones and across cultures. The bottom line is that these ‘soft skills’ are no longer merely desired by industry; they are mandatory.
Lecturers: