Verification support for ARINC-653-based avionics software

SOFTWARE TESTING, VERIFICATION AND RELIABILITY Softw. Test. Verif. Reliab. 2011; 21:267–298 Published online 18 January 2010 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/stvr.422

Verification support for ARINC-653-based avionics software
Pedro de la Cámara, J. Raúl Castro, María del Mar Gallardo and Pedro Merino
University of Málaga, Campus de Teatinos s/n, 29071, Málaga, Spain

SUMMARY

Software model checking consists in applying the most powerful results of formal verification research to programming languages such as C. One general technique to implement this approach is to produce a reduced model of the software in order to employ existing and efficient tools, such as SPIN. This paper focuses on the application of this approach to avionics software constructed on top of the Application Executive Software (APEX) Interface, which is widely employed by manufacturers in the avionics industry. It presents a method to automatically extract PROMELA models from the C source code. In order to close the extracted model during verification, we built a reusable APEX-specific environment. This APEX environment models the execution engine (i.e. an APEX-compliant real-time operating system) that implements the APEX services. In particular, the paper explains how to deal with aspects such as real time and APEX scheduling. Time is modelled in such a way that we save time and memory by avoiding the analysis of irrelevant steps. This model of time and the construction of a deterministic scheduler guarantee the scalability of our approach. The paper also presents a tool that can verify realistic applications, and that has been used as a novel testing method to ensure the correctness of our APEX environment. This testing method uses SPIN to execute official APEX test cases. Copyright © 2010 John Wiley & Sons, Ltd.

Received 14 August 2008; Revised 7 October 2009; Accepted 25 October 2009

KEY WORDS:

model extraction; software model checking; avionics software; APEX; real time

1. INTRODUCTION

The software for avionics includes comfort, measurement and critical flight-control systems, which are increasingly complex in new aircraft. As in other kinds of embedded software, errors are likely to be present, due to problems in the underlying hardware platform or due to design or implementation flaws. Fault detection and protection on board can be complemented with rigorous modelling and reliability analysis of the software during its development. In particular, one extended approach in avionics software is to implement a distributed system (including health control and all the functions needed on board) on top of a shared network of processors following standard interfaces such as ARINC 653 (Avionics Application Software Standard Interface) [1]. The applications share processors, memory and devices (sensors and input/output devices). Each application requires specific scheduling methods with real-time features.

The critical nature of this software requires fault-protection mechanisms and certification in order to reduce the risk of failures. Certification prior to deployment is mandatory for on-board software. In particular, avionics software developers and integrators should consider official regulations such as RTCA/DO-178B [2], which defines the different levels of quality certification for the components of the whole architecture, including the operating system (OS) and applications.

Correspondence to: Pedro Merino, University of Málaga, Campus de Teatinos s/n, 29071, Málaga, Spain. E-mail: [email protected]

Contract/grant sponsor: Spanish Ministry of Science and Andalusian Department of Science; contract/grant numbers: TIN2008-05932, P07-TIC-03131


According to RTCA/DO-178B, the certification level of a given piece of software ranges from A (the highest) to E (the lowest), depending on the processes and documents considered in phases such as planning, development, verification, configuration, quality assurance and certification. This paper devotes most of its attention to software model checking, a recent and promising approach to be used in the verification phase.

Classic model checking [3, 4] is an efficient verification technique that checks all the potential execution paths of a concurrent program in order to locate well-studied errors (such as deadlocks, memory violations, etc.) or more specific errors defined with formalisms such as temporal logic or automata. The behaviour of the software is modelled in the language of the verification tool, and the analysis process is done automatically. The tool SPIN [5] is probably the best-known example of this type of tool. It uses PROMELA as the modelling language and temporal logic for properties, and it can check a number of errors without temporal logic specifications. Classic model-checking technology is currently recognized as a valuable method to produce reliable software (it is worth noting the ACM Software Systems Award 2001 to SPIN and the ACM Turing Award 2007 to Emerson, Clarke and Sifakis). However, it is well known that one major problem for engineers who are not experts in formal methods is that this technique requires a deep understanding of both the modelling language and the property languages supported by the tools. Furthermore, the manual construction of the model is susceptible to human errors due to misunderstandings or programming bugs. This is the main motivation of the current software model-checking projects, capable of generating suitable models with minimal human interaction (see FeaVer [6], JPF1 [7], Bandera [8], SocketMC [9] and the recent work by Zaks and Joshi [10]).

Modelling and verification of avionics software with both variants of model-checking techniques have been considered by Penix et al. [11] and Cofer and Rangarajan [12]. The proposal by Penix et al. [11] shows how to verify a microkernel used in the avionics industry by constructing a handmade PROMELA model of the Honeywell DEOS real-time OS [13]. In order to close the OS model, all the possible interactions between the DEOS model and the external world are modelled inside the environment. Basically, this environment includes threads requiring services from the OS and time-related interruption sources. Rangarajan and Cofer have also applied software model checking to the verification of time partitioning and other key properties of the DEOS OS. In [12], they extend the work done by Penix et al. [11] with SPIN by deriving the model directly from the actual flight code, in such a way that the results are closely connected to the original system. For this purpose, they use the C++-to-PROMELA translator from NASA Ames Research Center, following the software model-checking approach. Recently, they have worked on model checking DEOS with SPIN in order to compute the worst-case response time to an event [14].

Taking into account that these previous works focus on modelling and verification of the OS itself, and not of the applications, we follow the complementary approach. Even when the OS has been tested and/or certified by other means, application providers should also perform some kind of analysis in order to ensure reliability and performance.
In this paper, a software-model-checking-based method to verify C applications running on top of APEX, the ARINC 653 application programming interface, is defined and implemented. The paper provides an engineer-oriented method to perform a development-time verification of software built on top of ARINC 653. The main idea is to construct a tool that automatically checks the source code of the application using ARINC facilities in order to locate potential errors. Following the accepted approach of reusing existing mature tools, the functionality of the model checker SPIN is extended for this purpose. Model extraction techniques are applied to the applications (i.e. to their C source code) in order to generate PROMELA models (note that PROMELA also allows the inclusion of C code). Then the extracted models are closed with an environment, which is in fact a model of an APEX-compliant OS. Finally, model checking is applied to the closed model.

The construction of the PROMELA models follows the approach by the authors towards the verification of software with well-defined APIs [9, 15]. Specific models of the OS functions offered with the ARINC API are defined. These include functions such as process creation, priority management or time management.


One remarkable piece of modelling work is the time model, which offers interesting novelties compared with other approaches. In particular, the model of time keeps the size of the state space within limits suitable for verification with SPIN, and can also benefit from the abstract matching mechanism developed by Cámara et al. [16]. The automatic construction of the PROMELA models and their verification with SPIN have been implemented in the prototype tool ARINC TESTER, which can be used to verify safety and temporal properties of the application's original source code. This tool is freely available with a number of examples in [17]. In future work, it will be extended with new features, such as modelling inter-partition communication.

Since the verification of the application depends on the correctness of the environment (the OS model), a mechanism to automatically check its correctness is provided. For this purpose, the ARINC standard test cases [18] are taken and ARINC TESTER is used to check their behaviour. It is important to notice that these test cases are run only once: when the correctness of the environment has been tested, it can be reused with many application models.

A detailed comparison with related works is given in Section 9. Some remarkable contributions of this paper are listed below:
1. The orientation towards checking the behaviour of the applications and not the OS. This is interesting because the range of applications provided by suppliers is large compared with OS implementations.
2. The optimization of the models in order to perform verification with limited resources, thanks to the use of abstract matching.
3. Testing the APEX model as a first step towards using the tool for general applications.

The paper is organized as follows. Section 2 gives an overview of the APEX API and the partitioning scheme for avionics software. The background material in Section 3 introduces the features of the model checker SPIN, which is the core verification engine in the paper. In Section 4, the approach to verifying APEX-based applications is presented. The details of how to perform model extraction are given in Section 5. Section 6 shows the correctness of our approach through testing with the tool ARINC TESTER. Section 7 presents a case study of model extraction and verification of a multi-process application. Section 8 explains a method to optimize the representation of data in order to save memory during verification. Sections 9 and 10 provide a more detailed comparison with related works, and the conclusions, respectively.

2. THE ARINC API FOR AVIONICS SOFTWARE: APEX INTERFACE

On-board avionics computing systems used to be federated, specific computer systems, where most of the computers performed basically the same functions (input, processing and output). These complex and critical systems are now evolving towards modular and integrated computers, such as the Integrated Modular Avionics (IMA) [19]. This modular approach allows reducing resources, standardizing interfaces and encapsulating services. The next step is to share the hardware and software resources by integrating several functions on the same execution platform. To enable the execution of multiple applications on the same computation resource, while avoiding error propagation, a robust isolation mechanism is used. Isolation is achieved by means of spatial and temporal partitioning, i.e. segregation of the memory and time slots allocated to the various application parts (or partitions) by means of software and hardware mechanisms.

Figure 1 shows the main components of the IMA and the role of the APEX interface. The OS offers the basic common services for all applications, such as loading, scheduling and communication, through a well-defined API conforming to the ARINC 653 specification, called APEX. The main characteristics of APEX are briefly described below (see [1] for a more detailed description).

Figure 1. APEX Interface in IMA and application example.

Partitioning: One purpose of a core module in an IMA system is to support one or more avionics applications and to allow their independent execution. This can be correctly achieved if the system provides partitioning, i.e. a functional separation of the avionics applications, usually for fault
containment (to prevent any partitioned function from causing a failure in another partitioned function) and for ease of verification, validation and certification. A partition is basically the same as a program in a single application environment: it comprises data, its own context, configuration attributes, etc. For large applications, the concept of multiple partitions relating to a single application is recognized.

APEX Interface: As Figure 1 shows, the APEX interface is located between the application software and the OS. It defines a set of facilities provided by the system for application software to control scheduling, communication and status information about its internal processing elements. APEX also provides a common logical environment for the application software that enables independently produced applications to execute together on the same hardware. Figure 1 also contains an excerpt of an APEX-based application that uses some of the services explained in Table I.

Scheduling: The APEX specification differentiates between partition scheduling and process scheduling. Scheduling of partitions is strictly deterministic over time. The System Integrator assigns one or more time windows to each partition; this is done in the fixed configuration within the core module. The scheduling algorithm runs repetitively with a fixed periodicity. Partitions have no priority by themselves. The scheduling unit is an APEX process. Each process has a priority. The scheduling algorithm is priority pre-emptive. During any process rescheduling event, the OS always selects the highest-priority process in the ready state within the partition to receive processor resources. If several processes have the same current priority, the OS selects the oldest one.


Table I. Some APEX services (Service: Behaviour).

GET PROCESS ID: Provides a process identifier.
GET PROCESS STATUS: Returns the current status of the specified process. The current status of each process in a partition is available to all processes within that partition.
CREATE PROCESS: Creates a new process and returns its identifier. Partitions can create as many processes as the pre-allocated memory space supports. The consistency among process and partition parameters is checked. Assigning INFINITE TIME VALUE to PERIOD and TIME CAPACITY defines an aperiodic process and a process without DEADLINE, respectively.
SET PRIORITY: Changes the current process' priority. The process is placed as the newest process with that priority in the ready state. Process rescheduling is only performed after this service request when the process whose priority is changed is in the ready or running state.
SUSPEND SELF: Suspends the execution of the current process, if it is aperiodic, until the RESUME service request is issued or the specified time-out value expires.
RESUME: Resumes another previously suspended process. The resumed process will be ready if it is not waiting for a resource (delay, semaphore, period, event, message). A periodic process cannot be suspended and, hence, it cannot be resumed.
STOP: Makes a process ineligible for processor resources until another process issues START.
START: Initializes all attributes of a process to their default values, and resets the runtime stack of the process. If the partition is in NORMAL mode, the process' deadline expiration time and the next release point are calculated.
GET MY ID: Returns the identifier of the current process.
GET PARTITION STATUS: Provides the status of the current partition.
SET PARTITION MODE: Sets the operating mode of the current partition to normal after initialization of the partition is completed. The service is also used to set the partition back to idle (partition shutdown), and to cold start or warm start (partition restart), when a serious fault is detected and processed.
TIMED WAIT: Suspends execution of the requesting process for a minimum amount of elapsed time. Value zero allows the round-robin scheduling of processes with the same priority.
PERIODIC WAIT: Suspends the execution of the requesting process until the next release point in the processor time line that corresponds to the period of the process.

Time Management: Every process has an associated time capacity, which represents the response time given to the process for satisfying its processing requirements. When a process is started, its deadline is set to the value of the current time plus the time capacity. This deadline may be postponed by means of the REPLENISH service. This capacity is an absolute duration of time, and not an execution time. This means that a deadline overrun for a particular process can occur even if that process is not running inside its partition window, and can also occur in a different partition window; but it will only be acted upon inside the partition window for that process.
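As a purely illustrative calculation (the numbers are invented and not taken from the standard): a process started at t = 1000 ms with a time capacity of 5000 ms gets the deadline 1000 + 5000 = 6000 ms; if it later calls REPLENISH at t = 5500 ms with a BUDGET TIME of 2000 ms, its deadline becomes 5500 + 2000 = 7500 ms, provided that, for a periodic process, the new deadline does not go past the next release point.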


Table I. Continued (Service: Behaviour).

GET TIME: Returns the value of the system clock. The system clock is the value of a clock common to all processors in the module.
REPLENISH: Updates the deadline of the requesting process with a specified BUDGET TIME value. Postponing a periodic process deadline past its next release point is not allowed.

Interpartition and Intrapartition Communication: Interpartition communication is defined as the communication between two or more partitions executing either on the same module or on different modules. It may also mean communication between APEX partitions and external equipment. Interpartition communication is conducted via messages. Intrapartition communication mechanisms allow processes to communicate and synchronize with each other. All intrapartition message-passing mechanisms must ensure atomic message access.

It is clear that designing and programming APEX-based applications with multiple concurrent processes sharing resources is a task prone to the usual errors such as deadlocks or time-constraint violations. In the next sections, we present our approach to verifying their correct behaviour.

3. BACKGROUND ON MODEL CHECKING WITH SPIN

The proposal to perform software model checking of APEX-compliant applications can be implemented with different model-checking tools, and could even be addressed with a new tool built from scratch. However, one particularly interesting option is the use of the widely used tool SPIN [5, 20]. This model checker implements very efficient algorithms to check concurrent systems and it supports C code processing. In this section, the main features of SPIN are introduced.

The PROMELA language is inspired by Dijkstra's guarded command language, Hoare's CSP and the C programming language. A PROMELA model of a system is composed of a finite (but dynamically created) set of processes that execute concurrently. Processes may share global variables or channels for synchronous or asynchronous communication. Processes encode finite state machines, and may have local variables. Figure 2 shows part of a PROMELA model corresponding to a lift controller. A process is declared by means of a proctype definition (line 01); a number of initial instances may optionally be specified. The process behaviour is given by a sequence of possibly labelled sentences preceded by the declarative part. In the example, the process Sampler uses global variables such as f, nb floor and position, and it also communicates with channels Token 1, Token 2 and Token 3.

In PROMELA, local and global variables are updated with assignments and with the instructions for sending/receiving messages through channels. Boolean expressions behave as guards that must be satisfied before continuing the execution. However, any PROMELA instruction can act as a guard. For instance, the expression in line 04 acts as a guard (always true) and, when evaluated and executed, it updates variable f. The if and do instructions in PROMELA include guards selected in a non-deterministic manner. The code between lines 25 and 42 corresponds to the initial configuration or main program. The process init() initializes global variables and creates the first set of concurrent processes, such as the process Sampler (line 36).

A PROMELA model usually represents a concurrent system with non-deterministic behaviour. The if and do instructions manage the unpredictable behaviour of the environment. Statements such as atomic and d_step define a sequence of instructions whose execution cannot be interleaved with instructions in other processes. In general, a PROMELA model defines a set of possible executions called execution traces/paths.


Figure 2. Example of classic Promela.
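A minimal PROMELA sketch along these lines (with invented names and a single cabin; it is not the actual lift model of Figure 2) illustrates the constructs just described: guarded alternatives in a do loop, a synchronous channel and an init process that creates the others.

    #define FLOORS 4

    int  position = 2;               /* current floor of the cabin        */
    bool internal_request[FLOORS];   /* requests made inside the cabin    */
    chan move = [0] of { int };      /* synchronous request to the motor  */

    proctype Sampler() {
      int f = 0;
      do
      :: internal_request[f] ->      /* guard: a request pending on f     */
           move ! f;
           internal_request[f] = false
      :: else ->                     /* no request on f: check next floor */
           f = (f + 1) % FLOORS
      od
    }

    proctype Motor() {
      int target;
      do
      :: move ? target -> position = target
      od
    }

    init {
      atomic {
        internal_request[0] = true;  /* someone asks for the ground floor */
        run Sampler();
        run Motor()
      }
    }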

SPIN explores these execution paths and checks the absence of errors such as deadlocks. SPIN verification consists of an exhaustive exploration of the state space, given an initial configuration of the PROMELA model. Given the initial state, a graph of states can be generated by interleaving the concurrent processes. Exhaustive exploration is done with a depth-first mechanism, using a stack to store the current path and a heap to store the already visited states. The number of states in the heap can be more than 10^9 for realistic models. SPIN searches for traces that satisfy/violate a given set of properties. The properties include the usual ones in concurrent programming, such as deadlocks, assertions, code reachability or non-progress loops. However, the most interesting properties are more complex than these standard ones. SPIN supports the verification of properties described with linear-time propositional temporal logic (LTL) formulae. For instance, the formula

    [] ((internal_request[0] && position[0] > 0) -> <> (position[0] == 0))

specifies that the lift will eventually move to the lower level if a person pushes the internal request for that floor. For details on the expressive power and examples of temporal logic, see [21].


3.1. Optimized verification

SPIN state exploration works well with many realistic models of concurrent systems. However, in order to deal with large models, the tool has been extended with several optimization techniques. Partial-order reduction (POR) replaces several interleaved sequences of events (sentences) with only one that represents the whole set [22]. Hash-compact reduces the use of memory by compressing the representation of the states without losing information [23]. Bit-state hashing represents states as bits in a hash table; hence, in many cases the analysis is only partial [24]. Currently, work is being carried out to obtain parallel versions of SPIN that preserve most of these optimizations [25]. Finally, there are other strategies to deal with scalability, such as the automatic transformation of the models to implement abstraction methods [26]. The verification approach to C programs described in this paper preserves the optimization techniques in the standard distribution of SPIN and also considers the inclusion of the recent abstract matching method, which is especially suitable for models with large data structures [16].

4. THE APPROACH TO VERIFYING APEX-BASED APPLICATIONS

As explained in Section 1, one promising approach to verifying software is model extraction, which consists in the automatic generation of models suitable for a particular model checker (see FeaVer [6]). However, most of the works that follow this approach do not specifically address the problem of how to model the services provided by the OS. They are suitable for source code that only contains library functions that can be directly executed by the target model checker. When the target model checker cannot directly execute all the OS calls, it is necessary to add some extra harness to complete the extracted models.

Our previous work [9] considered how to verify concurrent C applications that make extensive use of operating system facilities through system calls. In that approach to model extraction, a SPIN-oriented model of the behaviour of some OS APIs was constructed. This model is used to automatically obtain a correct abstraction of the software that makes use of the API, for instance, the Berkeley-like socket API. Following the work by Holzmann and Smith [6], a mapping from the original C code to extended PROMELA was defined. The tool SOCKETMC automatically transforms each API call into PROMELA, and the new PROMELA model can be verified with the standard SPIN. From this experience, a decision was made to apply a similar approach to verify applications running on top of the well-defined APEX API.

One key point for success in applying model checking with SPIN is having a closed model. Figure 3 shows our approach to closing the PROMELA model obtained from a real IMA system. On the left, the OS (and the hardware) and the applications organized as partitions are shown. On the right, the PROMELA version is presented for each component, where the environment model represents the behaviour of the OS in such a way that it is useful to close the whole model. Our final PROMELA model is composed of:

• Application processes. Each application process is one PROMELA process extracted from the C source code of one APEX process.
• Environment. The environment closes the PROMELA model and simulates the behaviour of a real ARINC 653 execution environment (i.e. the ARINC 653 Real-Time OS, RTOS). Our environment is composed of global variables, ANSI-C functions and other PROMELA objects.

In a real execution, APEX application processes call APEX services from the RTOS. In the model, application processes call the environment to simulate these same APEX services. Consequently, the environment is the entity that provides the APEX services to the application processes, and it stores all the state information needed as global data. Therefore, during extraction, the API calls in the original source code are translated into calls to the environment.

Figure 4 details the whole process to obtain the verifiable models and to carry out the verification. The input provided by the software developer is the C application, including calls to the APEX interface.


Figure 3. Overview of formal modelling for verification.

Figure 4. Model extraction process.

Then, the application is parsed in order to check the syntax and to locate the API calls in the code. The next step is code generation, which produces the closed PROMELA model that can be verified with SPIN. Code generation uses our PROMELA-oriented implementation of each APEX call, instead of the usual one provided by a real OS. These PROMELA-oriented versions of the APEX interface have been exhaustively tested following the Conformity Test Specification by ARINC, as explained later in Section 6.1.

4.1. Embedding C APEX applications into PROMELA

The latest version of SPIN implements PROMELA extensions to work with embedded C code. In particular, the c_code construction allows the execution of any C code in an unreliable way: if the code fails, the model checker itself stops. The code within this construction can include any kind of C code and functions, even functions belonging to libraries. The main restriction is to avoid system calls.


Figure 5. Mapping from APEX application to PROMELA.

The c_expr construction makes it possible to express guard conditions to be evaluated in a side-effect-free manner. The constructions c_decl and c_state are used to declare C variables. With the first one, it is possible to include types and variable declarations that are hidden from SPIN. With the second, c_state, it is possible to declare variables and to decide on their kind of visibility for SPIN: either to register the variables during verification in the main SPIN structures (state-vector, stack and explored states), or to keep them hidden.

It is worth noting that the ability to process C code can be used in two complementary ways. The first is to include an island of imperative C code that updates PROMELA and C variables, including complex algorithms as initially encoded by the C programmers. The second consists in using the embedded C code to create and manipulate additional data structures that can be used to extend the functionality of the verifier. The work presented in this paper takes advantage of both uses.

As shown in Figure 5, the mapping from the original code to PROMELA is done by replacing every process (every main function) with a proctype definition. Then the body of every proctype is filled using the extensions for C code (c_decl, c_state, c_expr and c_code). This is done by breaking the C code at the points where a call to the API appears. The final PROMELA code preserves the sequential execution of every C code block between two system calls. Thus, when verifying the model, SPIN interleaves blocks and system calls as atomic sentences. Note that the constraints imposed on interleaving positively affect the behaviour of the resulting model. Code blocks execute instructions that only have local scope (i.e. they do not have an effect outside the process, nor are they affected by the exterior). On the other hand, API services affect/are affected by the environment and, potentially, by other processes. Note that in our model communication through means other than API calls is not allowed. In particular, if two processes need to share memory, access to shared variables as a special API, as modelled by Gallardo et al. [27], should be considered.

Once this division is done, from a model-checking point of view, code blocks perform non-visible local actions that are independent from other processes and from the environment. This characteristic has an important consequence when implementing the scheduling mechanism: when a process is executing a code block, it does not matter at which point it is pre-empted. Therefore, all the sentences of a code block can be merged into an atomic block, allowing processes to be pre-emptable only before or after the execution of a code block.
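As a rough illustration of these primitives (a hedged sketch with invented names and types, not code produced by the extractor), the following fragment declares a C type with c_decl, one tracked and one hidden C variable with c_state, and uses c_expr as a side-effect-free guard and c_code as an imperative island:

    c_decl {
      typedef struct Report { int height; int fuel; } Report;
    }

    c_state "Report rep"  "Global"   /* visible to SPIN: stored in the state vector */
    c_state "int scratch" "Hidden"   /* invisible to SPIN: not stored in states     */

    int step = 0;                    /* ordinary PROMELA variable                   */

    active proctype Worker() {
      c_expr { now.rep.fuel >= 0 } ->   /* side-effect-free guard                   */
      c_code {
        scratch        = now.rep.fuel;  /* hidden C variable                        */
        now.rep.height = 100;           /* tracked C variable, accessed via now.    */
        now.step       = now.step + 1;  /* PROMELA global, also accessed via now.   */
      }
    }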


This concept of merging local sentences has already been used by Cámara et al. [9]. By default, the PROMELA models produced by the model extractor contain all the C variables in the original code. The method to produce abstract matching functions, presented by Cámara et al. [16], makes it possible to automatically reduce the set of variables that actually have to be managed to produce the state space. This optimization is also considered for models extracted from APEX-based C applications and is explained in Section 8.

This approach can be used for any kind of API, provided that the API calls are modelled in a way that preserves their semantics, as detailed by Cámara et al. [9]. Hence, in order to verify C applications written on top of APEX, it suffices to correctly model the services provided by APEX and to develop the model extraction tool. The definition of such API models is the aim of Section 5.

5. MODELLING APEX CALLS

In order to check the real applicability of our approach, it is clear that the most critical aspect to be modelled in SPIN is the management of timing and its associated functionalities. Owing to the fact that several APEX features are still under development and some services are not yet implemented, the following constraints have been set: only one partition is allowed; processes cannot be restarted after being stopped; partitions cannot be restarted; the error handler and the Health Monitoring and recovery actions are not supported. Note that interpartition communication (e.g. ports) has not yet been implemented either. It will be part of the following phase, and its implementation will be based on the experience of modelling socket-based communication (see [9]).

5.1. Modelling process scheduling

During real execution, the OS schedules APEX processes according to the following rules: (1) the scheduling unit is an APEX process; (2) each process has a priority; (3) the scheduling algorithm is priority pre-emptive; (4) during any process rescheduling event, the OS always selects the highest-priority process in the ready state; and (5) if several processes have the same current priority, the OS selects the oldest one.

In the APEX model, the environment is in charge of managing the scheduling of PROMELA processes. In summary, it ensures that (a) in a given state only one PROMELA process is executable by SPIN, (b) process pre-emption behaves as in a real APEX OS and (c) the APEX scheduling rules are followed when selecting the process to be executed.

5.1.1. Controlling process execution. The APEX specification states that a process may be pre-empted whenever a re-scheduling event takes place (e.g. when a higher-priority process is resumed after a timeout expiration). Simulating this behaviour in a SPIN-based model greatly increases the complexity of the model, and potentially leads to an explosion in the number of states generated during verification. However, this risk has been mitigated with the atomic C code blocks, which execute instructions that only have local scope. Regarding API services, real OSs execute these services atomically. Therefore, the model does not allow pre-emption to happen inside an API call, except when required by the API service itself (e.g. the API call stops the calling process).

As explained before, the environment is in charge of stopping the execution of a process and resuming another one, according to the APEX scheduling rules. The main tool used by the environment to achieve this is the inclusion of a provided() clause in each PROMELA proctype declaration. The provided() clause adds an executability condition to every transition of the process it is attached to. As a result, execution of the process is disabled whenever the provided() condition is false. The environment takes advantage of this by declaring 'proctype p1() provided (curSchProc == _pid)', where _pid is a local variable storing the process identifier and curSchProc is a global variable managed by the environment. By setting curSchProc to different pids, the environment selects which process is executable.
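A small, self-contained sketch of this mechanism (simplified, and not the code generated by the tool) could look as follows; the environment is reduced here to the init process, which enables one application process by writing curSchProc:

    byte curSchProc = 255;                 /* 255: no process selected yet  */

    proctype App1() provided (curSchProc == _pid) {
      do
      :: skip                              /* application code block        */
      od
    }

    proctype App2() provided (curSchProc == _pid) {
      do
      :: skip
      od
    }

    init {
      byte p1, p2;
      p1 = run App1();
      p2 = run App2();
      curSchProc = p1                      /* environment enables App1 only */
    }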


Table II. Process attributes.

Static attributes
Name
Base priority: Initial priority.
Period
Time capacity: The amount of time in which the process must finish its work (it is used to calculate the process deadline).
Type of deadline: It can be HARD or SOFT.

Dynamic attributes
Current priority
Deadline: Absolute time when the time budget will expire.
State: APEX state of the process: WAITING, RUNNING, DORMANT, READY.
Waiting cause: Reason why a process is WAITING (e.g. for a resource).
Position in queue: If the process is WAITING or READY, position in the queue of processes waiting for the same cause. This attribute is used when there are several candidates to be awakened/run and the scheduler must choose the oldest one.
Resource: If the process is waiting for a resource, this is its identifier (e.g. a communication port).
Timeout: If the process was suspended with a timeout, absolute time when the process will be resumed.
Time wait: If the process called TIMED WAIT, absolute time when the process will continue its execution.
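A hypothetical sketch of this kind of attribute storage (all names are invented; it is not the environment's actual data layout) keeps static attributes tracked but out of state matching, while dynamic attributes stay in the state vector, as explained in Section 5.1.2 below:

    c_decl {
      typedef struct StaticAttr { int basePriority; int period;   } StaticAttr;
      typedef struct DynAttr    { int currentPriority; int state; } DynAttr;
      StaticAttr statAttr[2];    /* static attributes, kept outside the state vector */
    }

    /* tracked (restored on backtracking) but not matched (not compared) */
    c_track "(void *) statAttr" "sizeof(statAttr)" "UnMatched"

    /* dynamic attributes: matched, i.e. copied into every generated state */
    c_state "DynAttr dyn" "Global"

    active proctype Demo() {
      c_code {
        statAttr[0].basePriority = 30;
        now.dyn.currentPriority  = 30;
        now.dyn.state            = 0;    /* e.g. READY */
      }
    }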

5.1.2. Storing process attributes. The environment keeps process context information in the same way as real OSs do. The main part of the process context information consists of the so-called process attributes. As listed in Table II, process attributes can be divided into: (a) static, if their value does not change after the process is created, and (b) dynamic, if their value can change after the creation of the process.

Notice that storing process attributes in each generated state requires a considerable amount of memory. In order to mitigate this, storage is managed as follows. Process attributes are stored as ANSI C data structures in the environment by using the SPIN 4 embedded-C primitives. Only dynamic attributes are included in the SPIN state-vector (i.e. copied into each generated state). Static attributes are partially hidden by marking their ANSI C data structures as Tracked UnMatched (see details in Section 8 and in References [9, 16, 28]). Since static attributes are not modified during execution, the resulting behaviour of the model is the same as if all attributes were in the SPIN state-vector.

5.1.3. Scheduling functionality. Scheduling functionality, or scheduler, is the name given to the part of the environment that deals with process scheduling. The scheduler is called whenever re-scheduling may take place. Re-scheduling is triggered by many causes, for instance, API calls (e.g. SET PROCESS PRIORITY and SUSPEND SELF), time events (e.g. deadline or timeout expiration), the reception of messages in ports, and intrapartition communication (events up, signalling of semaphores, etc.). After the call, the scheduler may change the executability of any process by changing the curSchProc global variable, or the process attributes. Figure 6 shows part of the code to implement the API call SUSPEND SELF, which updates some of the process attributes in Table II.


Figure 6. Excerpt of the macro for SUSPEND SELF.
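A toy model in the spirit of that macro (hypothetical and heavily simplified: a naive lowest-pid-first choice stands in for the real priority and oldest-first rules, and deadlines and timeouts are omitted) shows the general shape of such a service: update the caller's dynamic attributes, then let the scheduler re-select curSchProc.

    #define NPROC   2
    #define READY   0
    #define WAITING 1

    byte state[NPROC];                /* dynamic attribute: process state     */
    byte curSchProc = 0;              /* pid currently allowed to execute     */

    inline suspend_self() {
      d_step {
        state[_pid] = WAITING;        /* the caller leaves the ready state    */
        if                            /* pick another process to run          */
        :: state[0] == READY -> curSchProc = 0
        :: else ->
           if
           :: state[1] == READY -> curSchProc = 1
           :: else -> skip            /* nobody is ready: idle                */
           fi
        fi
      }
    }

    active [NPROC] proctype App() provided (curSchProc == _pid) {
      /* ... application code block ... */
      suspend_self()
    }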

5.2. Modelling time

Applying SPIN model checking to problems where time plays a significant role always faces a challenge: the system time must be represented in the state-vector (e.g. using a variable). Therefore, each time the representation of the system time is updated (e.g. the variable is updated), a new state must be generated. In most cases, this eventually leads to a state explosion. In order to overcome this limitation, our solution is based on the following principle: the system time representation is only updated when it may impact the behaviour of the model. To implement this principle, a system clock is used as the representation, together with the concept of time events. Time events are global events, triggered by the environment, and associated with a point in the time line. The system clock is updated only when a time event is activated, therefore avoiding many unnecessary updates. One peculiarity of this solution is that, for an observer, the system clock seems to advance with irregular jumps. This is illustrated with the following example: our system clock starts at t = 0 ms and there are two time events, at t = 10 and t = 11 ms. Since the system clock is only updated at time events, it seems to first perform a long jump (from t = 0 to 10 ms), and then a short jump (from t = 10 to 11 ms).

5.2.1. Life cycle of time events. Time events may be armed statically, before starting verification, or dynamically, by the environment during execution of the model. The typical life cycle of a time


event is as follows:

• One application process calls an APEX service (e.g. TIMED WAIT(500 ms)).
• The environment takes control and suspends the calling process. Then it arms a Waiting Timeout time event that will be triggered 500 ms in the future.
• The environment resumes another application process in the ready state.
• Eventually (i.e. 500 ms in the future), the environment triggers the time event and awakens the suspended process.
• Finally, the environment advances the system clock by 500 ms, removes the time event and returns the execution to the appropriate application process.

5.2.2. Implementation and usage of time events. Time events are implemented as a part of the environment. The environment keeps all the armed time events as global state variables, but their storage is optimized for low memory consumption. Time events are used with two approaches to modelling time management in SPIN, called Time Management Types. Time Management Type 1 (TMT 1) is used when the execution times of the application code are unknown. Time Management Type 2 (TMT 2) is a refinement of TMT 1 that takes these execution times into account. As a result, each of these Time Management Types leads to a different implementation of the environment.

(1) Time Management Type 1: Non-usage of application execution times

Verification by model checking is especially useful in the first phases of the software development process, as a means to discard design errors. On the other hand, in these early stages the execution times of the application code are often not available. However, even if execution times are unknown, the piece of software under analysis may already include some timing values (e.g. parameters of an API service call) in the code. The environment based on TMT 1 is capable of interpreting these time values, but without using application execution times. To do that, TMT 1 makes the following simplification: the execution time of each application code block may take between 0 and infinite time units, but it can only begin/end at those time points where a time event is triggered. This simplification produces execution traces that will never happen in a real execution (an effect usually called over-approximation in the model-checking community).

The first element in the implementation of TMT 1 is the PROMELA process TimeEventTrigger (see the code below). This process is a part of the environment, triggers the time events and runs in parallel with the application processes.

    proctype TimeEventTrigger () {
      do
      :: c_code { ma_Tempus_Fugit(); }
      od;
    }

The TimeEventTrigger process contains an infinite loop with a call to the environment function ma_Tempus_Fugit. Inside this function, time events are triggered and the system clock is updated. Note that TimeEventTrigger may always be executed. This means that in every transition of the model a non-deterministic decision is made: either (a) to execute an application process (if at least one is available) or (b) to execute the Tempus_Fugit sentence. As a result, for a given model, SPIN will generate many execution sequences, potentially one sequence per non-deterministic decision.

To illustrate the behaviour of TMT 1, some possible execution sequences produced by SPIN when the application processes and the TimeEventTrigger process are interleaved are shown below. In the discussion, the application code blocks of the model under analysis are denoted A0, A1, ..., Ak, with et(Ai) (0 ≤ i ≤ k) being their respective execution times. Similarly, it is assumed that T0, T1, ... are the successive TimeEventTrigger transitions, and that tp(Ti) is the time point associated with the time event triggered in Ti.


The symbol → represents the order in which transitions are executed in a given sequence of transitions. It is important to remember that the system clock is only updated when a time event is triggered, which, in this case, means that it is only updated by Tempus_Fugit. Starting with one TimeEventTrigger transition T0, SPIN explores all of the following execution patterns:

• T0 → A0 → A1 → ..., that is, no further TimeEventTrigger transition takes place. Thus, the system clock remains unchanged and, therefore, et(A0) + et(A1) + ... = 0.
• T0 → A0 → ... → An → T1 → ..., that is, some application code blocks are executed between T0 and T1. After T1 is triggered, the system clock is updated and, therefore, et(A0) + ... + et(An) = tp(T1) - tp(T0).
• T0 → T1 → A0 → ..., that is, the first block A0 is executed after T1. This sequence models the behaviour where et(A0) > tp(T1) - tp(T0). It can also represent a behaviour where the execution of A0 is disabled until the time event T1 is triggered.
• T0 → T1 → T2 → ..., that is, no application code block is executed. This sequence models the behaviour where et(A0) is infinite. It can also represent a behaviour where the execution of A0 is indefinitely disabled.

In conclusion, TMT 1 can be used to verify properties related to the ordering of application code blocks (Ai) and TimeEventTrigger transitions (Ti). To accomplish this, SPIN generates many execution sequences for a given model. The properties covered include the non-real-time temporal properties usually checked in SPIN. On the other hand, TMT 1 can only deal with some of the properties involving real-time values. For instance, the existential property 'Process P1 is able to do its work before its deadline' can be checked: SPIN will try to find an execution in which P1 is able to finish successfully. However, the property 'Process P1 shall always do its work before its deadline' cannot be proven, because the execution time of one code block may be infinite.

(2) Time Management Type 2: Usage of application execution times

To remove the restrictions imposed by TMT 1, Time Management Type 2 adds information about the execution times of the application processes. This information is a fixed execution time attached to each code block and API service. As explained earlier, the application code is divided into atomic code blocks and API services (also atomic), and this division has no impact on the overall behaviour of the model. This entails a problem, bearing in mind that time events can only be triggered before or after an atomic block. To solve this problem, TMT 2 establishes the following strategy: an atomic block is only executed if no time event will be triggered during its execution. In order to apply this, the environment creates a variable to store the atomic block's remaining execution time. The environment compares this variable with the time to the next time event to decide whether an atomic block must be executed. In case a time event is executed instead of the block, this is interpreted as the block running 'partially', but not finishing before the time event was triggered. After this, the remaining execution time variable is updated and the block waits until the time event finishes to try running again. Eventually, the remaining execution time will be so short that the block can finish before a time event is triggered. A special time control logic, which is considered part of the environment, is added at the beginning of each code block.
Both the whole code block and the time control logic are included in an atomic sentence. The following example illustrates the way the time control logic works:

    CP1: Atomic {
      /* Init of Time Control Logic of Block 1 */
      if RemainExecTime(_pid) = 0 then
        /* Initial Execution Time */
        RemainExecTime(_pid) = CP1_EXEC_TIME;
      end if
      if ((T_NextTimeEvent - CurrentTime)


Note that this mechanism is not needed for API calls. In a real system, the RTOS executes the API calls atomically. Therefore, the assumption that no time event will happen during an API call can be accepted as correct. In conclusion, it is possible to state that TMT 2 preserves the ordering of time events and of the beginning/ending of application code blocks and API services. Therefore, it is able to verify many properties that were not covered by TMT 1. Note that, in comparison with TMT 1, TMT 2 generates only one execution sequence, since the environment does not make non-deterministic decisions.

5.2.3. Experimental evaluation of time modelling. This section presents an example used to evaluate the environment based on Time Management Type 1. The application code has been instrumented so that SPIN generates one execution trace for each possible execution sequence. The time values for each trace are taken from the system clock. The analysis of these traces makes it possible to understand the order in which sentences were executed during verification, and the time when they happened.

The example consists of a process P1 with the following behaviour. Process P1 obtains its own identifier (GET MY ID), modifies its own priority (SET PRIORITY) four times and enters the waiting state (TIMED WAIT) for 500 time units. After awaking, it reads the system clock (GET TIME) and stops itself (STOP SELF). The code of the process init is the following:

    init{
      atomic {
        c_code { MODEL_A653_Init(); };
        c_code{
          strcpy(a_att.NAME, "p1");
          a_att.STACK_SIZE    = 200;
          a_att.BASE_PRIORITY = 30;
          a_att.PERIOD        = INFINITE_TIME_VALUE;
          a_att.TIME_CAPACITY = 5000;
          a_att.DEADLINE      = SOFT;
        };
        A653_CREATE_PROCESS(&a_att, &a_proc, &a_return, p1);
        A653_START(a_proc, &a_return);
        A653_SET_PARTITION_MODE(NORMAL, &a_return);
        run clock();
      };
    };


Init creates a process P1 (CREATE PROCESS), starts it (START) and begins the normal scheduling process by setting the partition to NORMAL mode (SET PARTITION MODE). P1 has a time capacity of 5000 time units, therefore its deadline will be set at 5000 time units. Just for this example, a minimal Health Monitoring functionality that stops process P1 when its deadline is reached was modelled (see [29] for details on Health Monitoring). As described in Section 5.2.2, Time Management Type 1 makes use of the process TimeEventTrigger to trigger time events. In order to explore every possible sequence of time events and application code blocks, Time Management Type 1 takes advantage of the non-deterministic interleaving between this process and the application processes. In particular, the instrumentation code added to this example generates 10 traces, one for each possible execution sequence. They are discussed in detail below.

Trace 1

Time   Process Executed    Description
0      TimeEventTrigger    Next time event t = 5000
5000                       DEADLINE

In this trace, TimeEventTrigger runs before P1 and triggers the P1-DEADLINE time event. In consequence, the execution is stopped before P1 can run. In a real system, this behaviour takes place if the execution time of the first code block of P1 takes more than 5000 time units.

Traces 2-6

The second trace corresponds to the scenario where P1 is able to execute the sentence GET MY ID before its deadline is triggered. Note that after the execution of GET MY ID, the system time has not advanced.

Time   Process Executed    Description
0      P1                  CP11: GET MY ID
0      TimeEventTrigger    Next time event t = 5000
5000                       DEADLINE

The next four traces are similar to the previous one. In each of them, P1 is able to execute one additional SET PRIORITY sentence before the deadline is triggered.

Traces 7-10

Time   Process Executed    Description
0      P1                  CP11: GET MY ID
0      P1                  CP12: SET PRIORITY
0      P1                  CP13: SET PRIORITY
0      P1                  CP14: SET PRIORITY
0      P1                  CP15: SET PRIORITY
0      P1                  CP15: TIMED WAIT
0      TimeEventTrigger    Next time event t = 500
500                        Awake P1
500    TimeEventTrigger    Next time event t = 5000
5000                       DEADLINE


In the seventh trace, P1 enters a waiting state for 500 time units. A time event is armed at t = 500 and is triggered, since no other process is executing. Finally, the P1 deadline is triggered. The last three traces show how P1 continues its execution after TIMED WAIT. The last trace represents the behaviour where the execution of P1 is so fast that its deadline is not triggered. SPIN reached depth 30, with a state-vector of 228 bytes (107 bytes to store the environment state). The example confirms that Time Management Type 1 can be used as an over-approximation when there is no information about execution times.

An equivalent test was performed for the environment based on Time Management Type 2. The main difference is that it only generates one complete trace (equivalent to trace 10), since Time Management Type 2 is fully deterministic.

6. CONFORMANCE TESTING WITH THE TOOL ARINC TESTER

In a real system, application processes use the OS services described in the ARINC 653 Part 1 specification [1]. In the verification model, applications call services provided by the verification environment. To ensure the correctness of the verification, the behaviour of the environment services must match that of the real OS services. As a first step to establish the soundness of the model proposed here, an exhaustive testing campaign was carried out. A battery of Test Cases was written in PROMELA, calling every implemented service in every possible condition. Each Test Case provides a fail/pass verdict, depending on the behaviour of the services called. As a first step, every Test Case was written in a C file and debugged using the ARINC development environment by Wind River [30]. Then ARINC TESTER was used to build the PROMELA version of the Test Cases, using the approach presented in the previous sections, and to check that no Test Case returned a fail verdict.

Figure 7. Test Cases to certify the API calls.

ARINC TESTER is the front-end to perform the model extraction process and to use the SPIN model checker for verifying applications that use API calls compliant with the ARINC 653 specification (APEX). It integrates all the functionalities proposed in Figure 4 of Section 4. The tool, as well as the applications used to check it, is available in [17].

6.1. Using the official test cases to certify the API model

Test Cases are defined considering the ARINC 653 Part 3 'Conformity Test Specification' document [18]. This specification gives a description in natural language of an APEX conformity test battery. In other words, this battery checks whether the APEX services provided by an OS conform to the ARINC 653 Part 1 specification. In consequence, it can also be used to check the conformance of the services implemented in our environment. Test Cases are classified into functional and robustness tests. Functional tests check that the service works in normal use conditions. Robustness tests check how the service works in abnormal conditions (e.g. that it returns the appropriate error codes). The result is a complete conformance test suite.

The work first consisted in selecting the test definitions applicable to the services currently implemented in the environment. Figure 7 shows the names of the C files used, the main service checked and the other services necessary to satisfy the initial requirements. All test cases were successfully executed. The codes and the verification results are available in [17], including executions with both methods for time management, TMT 1 and TMT 2. The verification work corresponds to the exhaustive exploration of the state space generated by each application calling the models of the APIs. The number of states explored varies from a few to more than 100 000, depending on the kind of time model used. It is worth noting that for the most complex test cases, TMT 1 implies checking 10 times more states than TMT 2.
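The following toy fragment (invented names; the real Test Cases call the A653_ service models of the environment rather than a local stub) illustrates only the fail/pass pattern: each test drives a service and encodes its verdict as an assertion that SPIN checks during the exhaustive exploration.

    #define NO_ERROR      0
    #define INVALID_PARAM 1

    inline stub_set_priority(prio, ret) {   /* stand-in for a modelled service */
      if
      :: prio >= 1 && prio <= 63 -> ret = NO_ERROR
      :: else                    -> ret = INVALID_PARAM
      fi
    }

    active proctype FunctionalTest() {      /* normal use: expect success      */
      byte rc;
      stub_set_priority(30, rc);
      assert(rc == NO_ERROR)
    }

    active proctype RobustnessTest() {      /* abnormal input: expect an error */
      byte rc;
      stub_set_priority(200, rc);
      assert(rc == INVALID_PARAM)
    }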

7. A CASE STUDY: A MULTI-PROCESS APPLICATION

The previous sections present the whole approach to the verification of ARINC-based applications, including the test cases used to check the correctness of the models of the API. This section provides a more realistic case study to clarify the kind of applications that ARINC TESTER can analyse and to show how the approach scales. The case study is a C application encoded by the authors as an extension of the example presented by Gamatie et al. [31]. This application computes the current position and the fuel level in the aircraft. It consists of three processes and uses several communication and synchronization mechanisms, such as blackboards, buffers, semaphores and events. Following the same strategy as in the test cases presented in the previous section, the Wind River environment [30] is used to help debug the initial C application. Then the tool ARINC TESTER is used to build the PROMELA version of the application, and the verification suite is performed.

7.1. The C application

The application ON FLIGHT consists of three processes running in the same partition. Its mission is to compute the current position and the fuel level, taking the information from sensors.


Figure 8. Execution steps in the processes to monitor position and fuel level.

It periodically displays the information as: date of the report::height::latitude::longitude::fuel level. In addition to the three processes, the partition includes a blackboard, two buffers (buff1 and buff2), an event evt, a semaphore sema and a resource Global_params, which contains some parameters. The three processes are called PositionIndicator, ParameterRefresher and FuelIndicator. The periodic behaviour of these processes, which is represented step by step in Figure 8, is the following.

• Process PositionIndicator produces the empty report message, adding the current date. Then it sends a request to the process ParameterRefresher for a refresh of the global parameters, via buff2 (in order to be able to update the report message with position information). It waits for notification of the end of the refresh, using evt; reads the refreshed position values from the blackboard; updates the report message with height, latitude and longitude; and sends the report message to the process FuelIndicator, via buff1.


Figure 9. C code of the process FuelIndicator.

• The main task of process FuelIndicator is to update the report message (produced by PositionIndicator) with the current fuel level. In each cycle, if a message is present in buffer buff1, it retrieves and processes this message; it always updates the fuel level information from Global_params, via protected access (using sema); and it then resets evt.
• The process ParameterRefresher refreshes all the global parameters used by the other processes in the partition. It checks whether a refresh request has arrived in buffer buff2 and then updates all the global parameters in Global_params, using protected access; displays the refreshed position values on the blackboard; and notifies the end of the refresh, using evt.

Figure 9 contains part of the C code of this application, together with some of the relevant data structures used by the three processes. Note that the software architecture has been simplified and the reading of the sensors for position, height, fuel, etc. has been removed. This behaviour is included in the process ParameterRefresher as a simple update of these variables. The whole C code of the application (500 lines) and the PROMELA version are available in [17].

7.2. Verification

As expected, programming this application following the papers and the APEX documentation revealed several errors. Some of them were easily detected using the APEX simulator from Wind River.


However, errors regarding concurrency were difficult to find; these were successfully detected with ARINC TESTER. It was also possible to verify some interesting properties of the final application (after removing the errors).

7.2.1. Programming error with semaphore

The first problem was a deadlock due to a typical error in the use of semaphores. When editing the process ParameterRefresher, the programmer used the copy-and-paste function with the intention of writing the sequence WAIT_SEMAPHORE, SIGNAL_SEMAPHORE, and erroneously included two calls to WAIT_SEMAPHORE, as follows:

....................
RECEIVE_BUFFER (buffId, INFINITE_TIME_VALUE, msg, &len, &retCode);
CHECK_CODE(": RECEIVE_BUFFER by ParameterRefresher", retCode, sal_code);
GET_SEMAPHORE_ID(SEMAPHORE_NAME_0, &semId, &retCode);
CHECK_CODE(": GET_SEMAPHORE_ID_0 by ParameterRefresher", retCode, sal_code);
WAIT_SEMAPHORE (semId, INFINITE_TIME_VALUE, &retCode);
CHECK_CODE(": WAIT_SEMAPHORE by ParameterRefresher", retCode, sal_code);
msg[0] = 22;
msg[1] = height--;
msg[2] = latitude++;
msg[3] = longitude++;
msg[4] = fuel--;
WAIT_SEMAPHORE (semId, INFINITE_TIME_VALUE, &retCode);
CHECK_CODE(": SIGNAL_SEMAPHORE by ParameterRefresher", retCode, sal_code);
GET_BLACKBOARD_ID(BLACKBOARD_NAME_0, &bbId, &retCode);
....................

After producing the PROMELA model, ARINC TESTER explores the model for the default errors and detects a deadlock at depth 113.

7.2.2. Initialization error

Another example of how the tool ARINC TESTER helped to debug the code was the verification of the property 'The aircraft never runs out of fuel during flight', which can be encoded as

[] ((height > 0) -> (fuel > 0))

Surprisingly, this property was not satisfied, and SPIN found an error at depth 78. The reason was a bad initialization of variables: the initial values had not been checked, trusting instead the initialization performed by the three processes. After performing the initialization properly, the property was satisfied, exploring 1158 states in 12 s.

7.2.3. Other desired behaviours

When the model was free of the kinds of errors mentioned above, a number of desired behaviours were successfully verified, such as:

• The fuel is not exhausted when the aircraft is flying.
• When all ready processes have the same priority, the oldest one is selected for execution.
• A running process is never waiting for a resource.
• If a process is waiting for a resource then it is not running.
• No process is waiting forever.
• The system clock never goes over a process deadline.
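For illustration, two of these behaviours can be phrased directly as LTL formulas over the data kept in the model. The first is the fuel property already used above; in the second, waiting_P and running_P are hypothetical predicate names standing for the process-state information stored by the environment, not identifiers taken from the actual PROMELA model:

[] ((height > 0) -> (fuel > 0))     /* the fuel is not exhausted while flying */
[] (waiting_P -> <> running_P)      /* no process is waiting forever          */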

All these properties were successfully verified. The tool generated 1158 states for each verification, each state being 1232 bytes. The definitions of the formulas are available in [17].

7.2.4. Scalability of the verification

From the experience with this application, the number and the size of the processes are not a real problem. In order to test how the approach scales, the graphics in Figure 10 (left) were prepared with verification results depending on the initial height. Note that the application was implemented in such a way that the aircraft is forced to land from the initial position.


Figure 10. Scalability results.

The graphics in Figure 10 (right) show how the modelling of time prevents state explosion and demonstrate the scalability of the time management: increasing the process period (the time capacity) does not affect the number of states.

8. OPTIMIZING VERIFICATION WITH ABSTRACT MATCHING

The main drawback of model checking, and of software model checking in particular, is the state explosion problem. If the PROMELA model preserves all the C variables defined by the application programmer, variables with large domains can increase both the size and the number of states, with a negative impact on scalability. One promising approach to reduce this impact is to save only parts of the state representations in the global storage of visited states (the hash table). This section describes a new technique to implement sound abstract matching of states, based on the ideas of Bosnacki [32]. The major contribution is the characterization of several kinds of static analysis useful to implement the method for specific kinds of properties. Some experimental results with the APEX-based applications used in the paper are also provided.

Figure 11 shows how abstract matching works. The main idea is to avoid starting a new search from a given state if an essentially equal state has been visited before. Given the current global state now (see Figure 11), abstraction consists in replacing the usual operation h(now), which stores it as a visited state, with the new operation h(f(now)), where f() represents the abstraction function. Function f() generates the abstraction of now to be matched and stored. Function f() is only used to cut the search tree; the verification is actually performed with the concrete state, without losing information. Note that by using abstraction and model checking in this way, a subset of the original state space is explored. Thus, abstraction produces an under-approximation of the state space, in contrast to the most common use of abstraction, which produces an over-approximation. Hence, as in the case of over-approximation, verification results are only reliable when the abstraction method is sound (see [28, 32]). It is necessary to establish some correctness conditions in the matching scheme, defining a function f() that depends on the property to be verified. The correctness conditions are satisfied by construction, due to the use of static analysis guided by the property when encoding f().
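The effect on the search can be sketched as follows; this is a schematic rendering in C of the idea, not SPIN's actual implementation, and State, Digest, hash(), f() and the visited-set operations are placeholder names:

#include <stddef.h>

typedef struct State State;      /* opaque state type (placeholder)     */
typedef unsigned long Digest;    /* result of hashing a (partial) state */

extern Digest hash(const State *s);           /* state-vector hashing      */
extern const State *f(const State *s);        /* abstraction function      */
extern int  in_visited(Digest d);             /* lookup in the visited set */
extern void store_visited(Digest d);
extern size_t num_successors(const State *s);
extern const State *successor(const State *s, size_t i);

void dfs(const State *now)
{
    /* Match and store the abstracted state, but keep exploring from the
       concrete one, so no information is lost along the current path.   */
    Digest d = hash(f(now));
    if (in_visited(d))
        return;                  /* an essentially equal state was already seen */
    store_visited(d);

    for (size_t i = 0; i < num_successors(now); i++)
        dfs(successor(now, i));  /* successors are generated from 'now' itself  */
}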


Figure 11. SPIN data structures to implement abstract matching.

Figure 12. Transforming a PROMELA process to implement abstract matching.

8.1. The approach to abstraction

Cámara et al. [16] extended the proposal by Holzmann and Joshi [28], providing implementable methods to produce abstraction functions that are sound and oriented to the property to be checked. In the implementation scheme, abstraction functions are implemented in such a way that they can (automatically) identify the variables to be hidden from the state-vector in every global state, after the execution of every verification step. The implementation of function f() is based on the use of the PROMELA c_track primitive (see [28]). This construction allows the user to declare variables as UnMatched, so that they are stored only in the SPIN stack (without being registered in the state-vector) and can be hidden when necessary.

The approach is illustrated with the simple code in Figure 12, which can be obtained by the model extractor. By default, there is no static analysis, and the model is extracted assuming that all C variables influence the verification of properties (left). This is why variables x and y are visible in the state-vector. Consider now that one is interested in checking a particular property that needs the precise value of x after executing the code at L1. In this case, the extracted model must keep variable x visible after executing the instruction at L1, as shown on the right side of Figure 12. This second version calls f() at any point where the global state should be stored. This function uses its argument to check the current execution point in the model. Depending on the current label, the function marks variables as hidden (using Hide()) or visible (using Show()) before matching them against the current set of visited states. For instance, variable x can be hidden until it is updated at L1 (declared as UnMatched). However, it is made visible at L1 because it will be used to update y, and it is hidden again after updating y. Extra copies of x and y are used to store the values of the real (hidden) variables, or a representation of their values using the global hash table described in the previous section. This way of constructing f() can be extended to systems with multiple processes.
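A minimal sketch of what such an abstraction function may look like on the C side is given below. It assumes, as described above, that the tracked C variables have been declared UnMatched via c_track, and that primitives Hide() and Show() toggle whether a variable contributes to the matched part of the state; the label encoding and the helper signatures are illustrative, not the code actually generated by ARINC TESTER.

/* Illustrative abstraction function for the example of Figure 12. The
   instrumented model calls f() before matching the global state; depending
   on the current label, f() decides which tracked (UnMatched) C variables
   must be made visible so that the property of interest is preserved.     */

extern int x;                    /* tracked C variable, declared UnMatched via c_track   */
extern void Hide(void *var);     /* assumed primitive: exclude a variable from matching  */
extern void Show(void *var);     /* assumed primitive: include a variable in matching    */

enum { LABEL_L1 = 1 };           /* illustrative encoding of the label passed by the model */

void f(int current_label)
{
    if (current_label == LABEL_L1)
        Show(&x);   /* the precise value of x is needed right after L1            */
    else
        Hide(&x);   /* at any other point x can be hidden from the matched state  */

    /* other tracked variables, such as y, would be handled analogously */
}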


8.2. Characterization of static analysis to produce sound abstract matching

The key point in the previous description is how to decide when to hide a variable. The so-called influence analysis (IA) has been developed to annotate each program point with a set of significant variables that should be visible at that point during model checking to correctly verify a given property. Given the set of variables V of interest for a given property, the analysis records the variables that are alive with respect to that property. Thus, if a variable does not affect any variable in V at a given program point, it can be hidden, since its current value is not relevant for the verification. The following subsections describe how IA should work to allow the verification of families of properties like the ones considered in Section 7. This static analysis is a refinement of the live variables analysis given by Nielson et al. [33], adapted to the case of PROMELA. The adaptation takes into account the properties of interest to be verified, defining four variants of the analysis. In particular, it considers code reachability, invariants with local and global variables, and temporal logic formulas. This paper provides a rigorous description of these variants of the analysis; the proof of soundness is not included here and can be found in a previous paper by some of the authors [16].

8.2.1. Notation

Given a PROMELA program M, the goal of IA is to associate each program point in M with the least set of variables whose value is needed to analyse M. Let V be the set of program variables. Informally, given x, y ∈ V, variable y influences variable x at a given program point if there exists an execution path in M from this point to an assignment x = exp such that the current value of y is used to calculate exp, that is, if the current value of y is needed to construct the value of x in the future.

Let Inst be the set of all valid PROMELA instructions, including the Basic statements (boolean expressions, assignments, and input/output instructions over channels) and the If, Do, Atomic and Unless statements, etc. In the sequel, no distinction is made between C variables and pure PROMELA variables. Furthermore, blocks of C instructions inside c_code are considered as Basic instructions, and C boolean expressions are assumed to be managed as pure PROMELA boolean expressions. In order to simplify the analysis, it is assumed that Do instructions are implemented using If and goto statements. In addition, it is assumed that the branches of If instructions always begin with a boolean expression followed by at least one statement; true and skip are used to complete the instructions when necessary (see, for example, the code of Figure 13). Finally, when an else branch appears, it is assumed that it has been replaced by the corresponding boolean expression. In the sequel, BoolExp denotes the set of boolean expressions that can be constructed with the usual arithmetical and boolean operators and with the constants and variables of the model.

Let M = P1 ∥ · · · ∥ Pn be the PROMELA model to be analysed, where each Pi denotes a concurrent process declared in M. It is assumed that all instructions of the PROMELA model M are labelled, i.e. each one has the form l : ins, where l ∈ L is a unique label of the instruction ins. Labels may be defined by the user or automatically assigned. End denotes the set of user-defined labels starting with end. The code of each process finishes with a label l ∈ End.
Note that labels represent the program counters of processes. For the sake of simplicity, it is assumed that the labels in each PROMELA process are different. Function I : L → Inst returns the Basic/If instruction following a label. For instance, considering the code of process p2 of Figure 13, I(L6) = x1 = x1+1 and I(L1) = if :: true -> L2 : x2 = 0; :: true -> L3 : x2 = 1; fi;. Function next : L → L associates each label l with the label pointing to the Basic/If instruction following I(l). For example, in process p2 of Figure 13, next(L2) = L4 and next(L6) = L5. Function next is well defined because it is always applied to labels pointing to basic instructions, although it may return labels pointing to a Basic/If statement. Given any expression or instruction e, var(e) ⊆ V denotes the set of program variables appearing in e.

8.2.2. IA1: Checking code reachability

The first analysis, IA1, should produce the information needed by an abstraction function that preserves each possible execution path in the model M.


Figure 13. Two PROMELA processes.

Since the control flow depends on boolean expressions and If instructions, in order to simulate all the execution paths, all possible values of the variables appearing in the guards of the control statements have to be recorded. The set Init ⊆ V of all program variables appearing in some boolean expression of M is defined, and the influence analysis IA1 is performed by attaching to each program counter of M the set of program variables influencing some variable in the set Init. For instance, for processes p1 and p2 given in Figure 13, the set Init is respectively defined as {x} and {x3, x4}. For the case of p1, Figure 14 (left) shows the intended result of IA1. For this process, IA1 associates the set {x} with the labels L1, L2 and L3. The usefulness of the analysis is clear: if one is interested in knowing whether a particular label of process p1() is reachable, one only has to store variable x at labels L1, L2 and L3. In particular, variable y may be completely hidden because its value is not relevant for this analysis.

In order to give a rigorous description, the static analysis is first defined for a single process P and then extended to a whole program M composed of several concurrent processes. Furthermore, it is first assumed that M only contains local variables, and the analysis is later extended to the case of global variables.

The static analysis IA1 is formally constructed as the least fixed point of a transformation function $F_1 : \wp(V)^L \to \wp(V)^L$, which transforms vectors of $|L|$ components, where $|L|$ is the number of labels in the system and each component corresponding to a label is a set of variables. Given $\vec{s} = \{s_l \mid l \in L\} \in \wp(V)^L$, the $l$-component of $\vec{s}$ is denoted by $\vec{s}(l)$ and corresponds to the subset of variables attached to label $l$ at a given moment during the computation of IA1. Similarly, the $l$-component of $F_1(\vec{s})$ is denoted by $F_1(\vec{s})(l)$. $F_1$ is a backward analysis, that is, it extracts information following the reverse control flow of the program. Thus, to calculate the significant variables at a given label $l \in L$, all the variables that are needed by any execution path starting at this point have to be collected. Bear in mind that a variable is needed at $l$ if its value is needed to execute the next instruction I(l) or to execute any instruction following I(l). In consequence, given $\vec{s} \in \wp(V)^L$, $F_1(\vec{s})(l)$ is constructed, making use of function $F_1^*$ defined below, as follows:

\[
F_1(\vec{s})(l) = F_1^*(I(l), \vec{s}(next(l))) \quad \text{if } I(l) \in Basic
\]
\[
F_1(\vec{s})(l) = \bigcup_{i=1}^{n} F_1^*(b_i, \vec{s}(l_i)) \quad \text{if } I(l) = \texttt{if :: } b_1 \rightarrow l_1 : \ldots \texttt{ :: } b_n \rightarrow l_n : \ldots \texttt{; fi}
\]


Figure 14. Result of IA1 in process p1 (left) and of IA2 in process p2 (right).

where $F_1^* : Basic \times \wp(V) \to \wp(V)$ calculates the significant variables before executing a basic instruction as:

\[
F_1^*(x = exp, s) = \begin{cases} s & \text{if } x \notin s \\ (s \setminus \{x\}) \cup var(exp) & \text{if } x \in s \end{cases}
\]
\[
F_1^*(bool, s) = s \cup var(bool), \quad bool \text{ being a boolean expression}
\]

That is, the assignment x = exp modifies the set s only if it has been deduced that x influences some variable in Init. In that case, the effect of x = exp consists of introducing into s all variables appearing in exp, excluding x because its value is changed by the assignment. In addition, all variables appearing in a boolean expression influence variables in Init (in fact, they belong to Init).

Define $s_l = \emptyset$ for all $l \in L$ and consider the initial vector $\vec{s}_{init} = \{s_l\}_{l \in L}$. Then the static analysis $IA_1 \in \wp(V)^L$ is given by the least fixed point of the equation $\vec{s} = F_1(\vec{s})$, which can be calculated as the limit of the sequence $\vec{s}_{init}, F_1(\vec{s}_{init}), \ldots$. The following assertions regarding this sequence hold: (1) $\forall i \in \mathbb{N},\ F_1^i(\vec{s}_{init}) \subseteq F_1^{i+1}(\vec{s}_{init})$; (2) $\exists k \ge 0,\ F_1^k(\vec{s}_{init}) = F_1^{k+1}(\vec{s}_{init})$.

Now, consider a PROMELA program M = P1 ∥ · · · ∥ Pn involving the concurrent execution of several processes. Let $IA_1^i$ be the vector produced by the influence analysis for process Pi. If the set of labels appearing in process Pi is denoted by Li, then a program point of M may be represented by a tuple $(l_1, \ldots, l_n)$ with $l_i \in L_i$ being the current program counter of process Pi. Function $IA_1 : L_1 \times \cdots \times L_n \to \wp(V)$ is thus defined as $IA_1(l_1, \ldots, l_n) = \bigcup_{i=1}^{n} IA_1^i(l_i)$. That is, the information regarding analysis IA1 at program counter $(l_1, \ldots, l_n)$ is the union of the variables collected by $IA_1^i$ for each process Pi at label li.
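As a small worked example (on a hypothetical labelled process, not on the p1 or p2 of Figure 13), consider the code L1 : x = y + 1; L2 : if :: x > 0 -> L3 : skip :: x <= 0 -> L4 : skip fi. Here Init = {x} and, starting from the empty vector, the least fixed point is reached after two applications of F1:

\[
IA_1(L3) = IA_1(L4) = \emptyset, \qquad
IA_1(L2) = F_1^*(x > 0, \emptyset) \cup F_1^*(x \le 0, \emptyset) = \{x\}, \qquad
IA_1(L1) = F_1^*(x = y + 1, \{x\}) = \{y\}
\]

That is, x must be visible at L2 in order to decide which branch is taken, whereas at L1 only y is needed, because its value is used to construct x.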


The following variants of IA are based on the definition of IA1: they redefine the function $F_1^*$, which propagates the information about the needed variables in a bottom-up manner, and the initial vector $\vec{s}_{init}$ used to start the fixed-point computation.

8.2.3. IA2: Checking state properties

The variant IA2 preserves state properties, such as the ones specified with the assert statement. For instance, in order to verify the assertion x1 == 2 of process p2, it is easy to see that not only the variables influencing the boolean expressions in the code have to be stored in order to completely simulate the reachability tree, but also those that influence the variables in the assert statement (variable x1 in the example). Figure 14 (right) shows the intended result of IA2 for process p2. Observe that variable x1 is attached to some labels of the process, as its value is needed at label L12. Thus, the purpose is to extend analysis IA1 to take into account the variables appearing in the assertions to be validated during execution. It is worth noting that, at this point, the assumption is still that the model only contains local variables. To extend IA1, it suffices to redefine $F_1^*$ as the function $F_2^*$:

\[
F_2^*(x = exp, s) = \begin{cases} s & \text{if } x \notin s \\ (s \setminus \{x\}) \cup var(exp) & \text{if } x \in s \end{cases}
\]
\[
F_2^*(bool, s) = s \cup var(bool), \quad bool \text{ being a boolean expression}
\]
\[
F_2^*(assert(b), s) = s \cup var(b), \quad assert(b) \text{ being an assertion}
\]

Now, IA2 is constructed using $F_2^*$ as IA1 was defined from function $F_1^*$, considering the same initial vector $\vec{s}_{init}$. The resulting analysis preserves the assertions as desired.

8.2.4. IA3: Dealing with global variables

As mentioned above, the previous description is only applicable to models without global variables. The analysis of local variables is easier because their use is localized inside a single process, and the static analysis follows the control flow of isolated processes. In contrast, the code handling a global variable may be distributed over many different system processes. Thus, it is possible that some variables used to construct a given global variable in a process are erroneously hidden by the purely local static analysis. In order to solve this problem, consider the set $G_M \subseteq V$ of all global variables appearing in some boolean expression of some process. Then modify $\vec{s}_{init}$ and use $\vec{s}^{\,g}_{init}$ defined as $\{s_l \mid l \in L\}$ where $\forall l \in L.\ s_l = G_M$. With this definition of the initial vector, the static analysis is able to extract all variables influencing global variables, which are critical for the control flow of the model. In the following, this extension is called IA3. Observe that if assertions are considered as boolean expressions, this analysis also preserves the state properties described above.

8.2.5. IA4: Preserving trace properties

In order to preserve the evaluation of an LTL property f, all its (global) variables have to be always significant for the analysis. Thus, since the variables appearing in the formula are always saved, the formula can always be correctly checked. Function $F_4^*$ takes the variables of f into account as follows:

\[
F_4^*(x = exp, s) = \begin{cases} s & \text{if } x \notin s \\ (s \setminus \{x\}) \cup var(exp) & \text{if } x \in s,\ x \notin var(f) \\ s \cup var(exp) & \text{if } x \in s,\ x \in var(f) \end{cases}
\]
\[
F_4^*(bool, s) = s \cup var(bool), \quad bool \text{ being a boolean expression}
\]

Now, define $\vec{s}^{\,\prime}_{init} = \{s'_l \mid l \in L\}$ where $\forall l \in L,\ s'_l = G_M$. Analysis IA4 is then constructed as the previous ones, but using function $F_4^*$ and the initial vector $\vec{s}^{\,\prime}_{init}$. Additionally, following the ideas by Peled et al. [34], IA4 is once again extended by hiding variables of f when they are not necessary.


8.3. Some experimental results with APEX applications

The verification work with the test cases and with the application of the case study section has been revisited in order to confirm that the proposed optimizations are feasible. In the previous experiments, the C applications were analysed with the deterministic execution produced by the APEX scheduling rules, using the two methods for time management (TMT 1 and TMT 2). TMT 2 has now also been combined with a non-deterministic environment that expands the reachability graph in order to produce a larger state space.

The abstraction function produced by IA1 is applied in the verification of the test cases in order to check for deadlocks using both models of time management. Compared with the results obtained in Section 6.1, the following reductions are obtained:

• Using TMT 1, the method IA1 reduces the size of each state by 20%. In many test cases there is no reduction in the number of states; however, reducing the size of each state allows the verification of programs with large data structures. Note that, for instance, in App14.c, with 1 KB states, it is possible to save 32 MB of memory thanks to the static analysis. Furthermore, with some of them, such as App14.c, 20% of the states can be saved. Taking into account that verification only requires a few seconds, the reduction in the verification time is not significant in the current experiments.
• Using TMT 2, the same reduction in the size of each state is obtained as with TMT 1, but the number of states is the same as without abstract matching.
• Using a non-deterministic environment and TMT 2, both the size and the number of states can be reduced by 50% in the most complex applications.

When IA1 is applied to the application ON FLIGHT, similar reductions are obtained, saving 20% of the memory for each state. Again, using TMT 2 and no additional non-determinism, no states are saved. However, adding non-deterministic behaviour to some API calls, 20% of the states can be saved compared with the version without abstract matching. The same results are obtained by applying IA4 to verify the temporal logic property [] ((height > 0) -> (fuel > 0)) in the ON FLIGHT application. The table with all the data is available in [17].

9. COMPARISON WITH RELATED WORK

As mentioned in the Introduction, the works by Penix et al. [11] and by Cofer and Rangarajan [12] on constructing a PROMELA model of the DEOS OS follow the software model-checking approach, as this work does. However, the present work has the opposite goal: it aims at verifying avionics applications that access OS services through the APEX interface. In this case, PROMELA models are semi-automatically extracted from application source code (see [9]). In order to close the application model, an environment capable of providing APEX services to the applications has been used.

Some ideas on modelling time by Penix et al. can be compared with this work. In the DEOS environment, one message is periodically generated to indicate that a higher-priority thread may become schedulable. After receiving this message, the kernel checks whether the current running process must be pre-empted by the higher-priority thread. In this paper, since the environment is the OS and the threads are the applications to be verified, whenever a higher-priority process becomes schedulable the environment (that is, the OS) disables the execution of the lower-priority processes and enables the higher-priority one.

The DEOS environment has two time-related interruption sources. The first source periodically interrupts the kernel in order to check whether a higher-priority thread has become schedulable. The second source interrupts the kernel whenever the running process exhausts its time budget. The DEOS environment combines both interruptions in one process, in order to coordinate them and avoid 'impossible' behaviours. Similarly, the environment in this paper includes a TimeEventGenerator process that may be seen as the combination of every time-related interruption source applicable to APEX (waiting timeout expiration, time budget expiration, etc.). However, instead of ticks, the


TimeEventGenerator triggers time events. Each time event has an associated time point. The environment is able to know the current system time just by reading the time point value of the last time event triggered. This feature allows the environment to provide the system time to the applications. Since both the environment and the applications are aware of the system time, it is possible to use timing values in the properties to be verified (i.e. in LTL formulae).

Modelling time to verify systems with SPIN has been considered in other works. The work presented by Bosnacki [35] is a time extension of PROMELA. The principles explained in that paper are generic and may be applicable to different modelling problems. In this paper, the aim is to model time management as described in the APEX specification. The context of the approach is automatic model extraction of avionics applications and APEX environment modelling. This entails the ability to use modelling techniques specially tailored to this context. On the other hand, some of the techniques and assumptions made may not be applicable to other modelling problems. In the discrete time model of Bosnacki, time is divided into slices. Actions take place inside these slices, making it possible to obtain a measure of the time elapsed between events belonging to different slices. However, within a slice, only the relative ordering between events is known. The time elapsed between events is measured in ticks of a global digital clock that is increased by one with every such tick. In ARINC TESTER, a time slice may be considered as the time elapsed between two consecutive time events. The main difference is that the time slice size is always variable and depends on the model behaviour. Furthermore, time events may only happen between two atomic blocks of application code. Another significant difference is that two different use scenarios have been identified, depending on whether the execution time of application code blocks is known or not. If the execution time is not known, the environment assumes a worst-case over-approximation, that is, the execution time of each code block may be any value from 0 to infinity. In practice, this means that between two time events a process can execute a non-deterministic number of code blocks, unless it makes an API call that involves waiting for an event (TIMED_WAIT, etc.). In principle, as proposed by Bosnacki, only the relative ordering between events is known within a slice. However, if the execution time of code blocks is known, the environment makes it possible to know the absolute time at which a block of code was executed.

One important point of the work of Bosnacki is that it is compatible with SPIN's partial-order reduction (POR) algorithm. This is not the case for ARINC TESTER, since it uses the PROMELA provided clause, which is incompatible with POR. However, this is not a major performance issue for the ARINC TESTER model, since the main objective of POR is to reduce the state explosion caused by non-deterministic process interleaving. Owing to the way in which APEX processes are scheduled, this kind of state explosion rarely appears during verification.

Another related development to manage time is the RT-Spin package [36]. RT-Spin extends PROMELA in order to deal with dense real time. It offers clock variables to be used in the model and also in the temporal logic formulas. RT-Spin is oriented to specifying and verifying quantitative temporal properties, and it requires a different implementation of SPIN.
The aim of ARINC TESTER is to model discrete time in order to emulate the APIs, using standard SPIN.

Regarding model extraction itself, the closest related work is the FeaVer project. The main objective in FeaVer is to produce a 'fail-safe' abstraction when processing the C code in order to produce PROMELA. It is based on a mapping table with information for all statements in the original code. If a statement or function call is not involved in the verification of a given property, then it is replaced by skip. If it is involved, then a more elaborate mapping is defined. If the verification reports no errors, then one can be sure that the initial C code satisfies the property. The user does not have to focus on constructing the PROMELA model, but rather on defining the mapping table. Compared with the method described in this paper, there are several differences. FeaVer does not seem to have a general parsing mechanism to automatically produce a model such as the one defined in this paper, or to check the correctness of the mapping before doing the verification. Instead of adapting the mapping table to the property, ARINC TESTER preserves all the C statements, conveniently grouped as atomic blocks, and automatically replaces calls to external functionalities with tested models of their behaviours.


Given a property to be verified, FeaVer uses the refinement of the mapping table as its optimization mechanism, whereas ARINC TESTER uses a fixed model and abstract matching functions based on static analysis for this purpose.

10. CONCLUSIONS

This paper presents a method to verify C applications running on top of APEX, the application programming interface of ARINC 653. The technique can be used to verify both the OS and the applications; however, compared with related work, this paper focuses on verifying the behaviour of the applications and not the OS. The most important conclusion of this work is that verification of APEX-based avionics applications is feasible with SPIN. SPIN is able to model APEX-like real-time management in a correct and efficient manner, if the right methods and assumptions are used. However, verification methods and assumptions must be adapted to each use scenario. If application execution times are not known before the verification, then one must use methods and assumptions that cope with this uncertainty (i.e. Time Management Type 1). On the other hand, when the execution times become available, one must refine the verification using more accurate methods and assumptions (i.e. Time Management Type 2). It is also worth noting that the size of the code is not a real limit, due to the way blocks are formed between two API calls: large sequences of C lines can be grouped into a single atomic block. The only real limit of the approach is given by the number of API calls and the number of processes. It remains to be explored which kinds of realistic applications are beyond the limits of the tool.

With respect to future work, the authors plan several parallel lines of study. First, they plan to complete the set of modelled APEX services, such as interpartition and intrapartition communication services. They also want to improve the approach by using memory optimization methods based on data abstraction [16]. They have detected that in some use scenarios the execution times are known, but with some degree of uncertainty. For these scenarios they want to build a Hybrid Time Management, where the execution times are non-deterministically chosen among a limited set of values. All the new features will be integrated in the current prototype tool ARINC TESTER, available in [17].

REFERENCES

1. ARINC. ARINC Specification 653-2: Avionics Application Software Standard Interface Part 1—Required Services. Aeronautical Radio INC, MD, U.S.A., 2005.
2. RTCA. RTCA/DO-178B Software Considerations in Airborne Systems and Equipment Certification, 1992.
3. Clarke EM, Emerson EA, Sistla AP. Automatic verification of finite-state concurrent systems using temporal logic specifications. ACM Transactions on Programming Languages and Systems 1986; 8(2):244–263. DOI: http://doi.acm.org/10.1145/5397.5399.
4. Queille JP, Sifakis J. Specification and verification of concurrent systems in CESAR. Proceedings of the 5th Colloquium on International Symposium on Programming. Springer: London, U.K., 1982; 337–351.
5. Holzmann GJ. The model checker SPIN. IEEE Transactions on Software Engineering 1997; 23:279–295. DOI: http://doi.ieeecomputersociety.org/10.1109/32.588521.
6. Holzmann GJ, Smith MH. Software model checking: Extracting verification models from source code. Software Testing, Verification and Reliability 2001; 11(2):65–79.
7. Havelund K, Pressburger T. Model checking Java programs using Java PathFinder. International Journal on Software Tools for Technology Transfer 2000; 2(4):366–381.
8. Corbett JC, Dwyer MB, Hatcliff J, Laubach S, Pasareanu CS, Robby, Zheng H. Bandera: Extracting finite-state models from Java source code. ICSE '00: Proceedings of the 22nd International Conference on Software Engineering. ACM Press: New York, NY, U.S.A., 2000; 439–448. DOI: http://doi.acm.org/10.1145/337180.337234.
9. de la Cámara P, Gallardo MM, Merino P, Sanán D. Model checking software with well-defined APIs: The socket case. FMICS '05: Proceedings of the 10th International Workshop on Formal Methods for Industrial Critical Systems. ACM Press: New York, NY, U.S.A., 2005; 17–26. DOI: http://doi.acm.org/10.1145/1081180.1081184.


10. Zaks A, Joshi R. Verifying multi-threaded C programs with SPIN. SPIN '08: Proceedings of the 15th International Workshop on Model Checking Software. Springer: Berlin, Heidelberg, 2008; 325–342. DOI: http://dx.doi.org/10.1007/978-3-540-85114-1_22.
11. Penix J, Visser W, Engstrom E, Larson A, Weininger N. Verification of time partitioning in the DEOS scheduler kernel. ICSE '00: Proceedings of the 22nd International Conference on Software Engineering. ACM Press: New York, NY, U.S.A., 2000; 488–497. DOI: http://doi.acm.org/10.1145/337180.337364.
12. Cofer DD, Rangarajan M. Formal modeling and analysis of advanced scheduling features in an avionics RTOS. EMSOFT '02: Proceedings of the Second International Conference on Embedded Software. Springer: London, U.K., 2002; 138–152.
13. Binns P. A robust high-performance time partitioning algorithm: The digital engine operating system (DEOS) approach. Twentieth Digital Avionics Systems Conference Proceedings, Daytona Beach, FL, U.S.A., 2001.
14. Rangarajan M, Cofer DD. Computing worst-case response times in real-time avionics applications. Formal Methods for Industrial Critical Systems (Lecture Notes in Computer Science, vol. 4916). Springer: Berlin, 2008; 101–114.
15. de la Cámara P, Gallardo MM, Merino P. Model extraction for ARINC 653 based avionics software. SPIN, Berlin, Germany, 2007; 243–262.
16. de la Cámara P, Gallardo MM, Merino P. Abstract matching for software model checking. Thirteenth International Workshop on Model Checking of Software (SPIN06). Springer: London, U.K., 2006; 182–200. DOI: 10.1007/11691617_11.
17. de la Cámara P, Castro RJ, Gallardo MM, Merino P, Sanán D. ARINC TESTER. Available at: http://www.gisum.uma.es//tools/arinctester.
18. ARINC. ARINC Specification 653-2: Avionics Application Software Standard Interface Part 3—Conformity Test Specification. Aeronautical Radio INC, MD, U.S.A., 2006.
19. Watkins C, Walter R. Transitioning from federated avionics architectures to integrated modular avionics. Digital Avionics Systems Conference, 2007. DASC '07. IEEE, AIAA: New York, 26 October 2007; 2.A.1-1–2.A.1-10. DOI: 10.1109/DASC.2007.4391842.
20. Holzmann GJ. The SPIN Model Checker: Primer and Reference Manual. Addison-Wesley Professional: Reading, MA, 2003.
21. Manna Z, Pnueli A. The Temporal Logic of Reactive and Concurrent Systems. Springer New York, Inc.: New York, NY, U.S.A., 1992.
22. Holzmann GJ, Peled D. An improvement in formal verification. Proceedings of the 7th IFIP WG6.1 International Conference on Formal Description Techniques VII. Chapman & Hall, Ltd.: London, U.K., 1995; 197–211.
23. Wolper P, Leroy D. Reliable hashing without collision detection. Computer Aided Verification, Fifth International Conference. Springer: Berlin, 1993; 59–70.
24. Holzmann GJ. An analysis of bitstate hashing. Formal Methods in System Design. Chapman & Hall: London, 1995; 301–314.
25. Holzmann GJ, Bosnacki D. The design of a multicore extension of the SPIN model checker. IEEE Transactions on Software Engineering 2007; 33(10):659–674. DOI: http://dx.doi.org/10.1109/TSE.2007.70724.
26. Gallardo MM, Martínez J, Merino P, Pimentel E. αSPIN: A tool for abstract model checking. International Journal on Software Tools for Technology Transfer 2004; 5(2):165–184. DOI: http://dx.doi.org/10.1007/s10009-003-0122-9.
27. Gallardo MM, Merino P, Sanán D. Model checking dynamic memory allocation in operating systems. Journal of Automated Reasoning 2009; 42(2):229–264. DOI: 10.1007/s10817-009-9124-y. Available at: http://dx.doi.org/10.1007/s10817-009-9124-y.
28. Holzmann GJ, Joshi R. Model-driven software verification. SPIN, Barcelona, Spain, 2004; 76–91.
29. Nicholson M. Health monitoring for reconfigurable integrated control systems. System Safety Symposium, Southampton, U.K., 2005.
30. WindRiver. Wind River VxWorks 653 Platform.
31. Gamatie A, Brunette C, Delamare R, Gautier T, Talpin JP. A modeling paradigm for integrated modular avionics design. EUROMICRO '06: Proceedings of the 32nd EUROMICRO Conference on Software Engineering and Advanced Applications. IEEE Computer Society: Washington, DC, U.S.A., 2006; 134–143. DOI: http://dx.doi.org/10.1109/EUROMICRO.2006.11.
32. Bosnacki D. Enhancing state space reduction techniques for model checking. PhD Thesis, Technische Universiteit Eindhoven, 2001.
33. Nielson F, Nielson HR, Hankin C. Principles of Program Analysis. Springer New York, Inc.: Secaucus, NJ, U.S.A., 1999.
34. Peled D, Valmari A, Kokkarinen I. Relaxed visibility enhances partial order reduction. Formal Methods in System Design 2001; 19(3):275–289.
35. Bosnacki D, Dams D. Integrating real time into SPIN: A prototype implementation. FORTE XI/PSTV XVIII '98: Proceedings of the IFIP TC6 WG6.1 Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols (FORTE XI) and Protocol Specification, Testing and Verification (PSTV XVIII). Kluwer, B.V.: Deventer, The Netherlands, 1998; 423–438.
36. Tripakis S, Courcoubetis C. Extending PROMELA and SPIN for real time. TACAS '96: Proceedings of the Second International Workshop on Tools and Algorithms for Construction and Analysis of Systems. Springer: London, U.K., 1996; 329–348.
