In the following, we introduce our development environment, including the tools and libraries used in the different development stages as well as the test and verification possibilities during system development. We distinguish three development stages at different levels of abstraction, each targeting specific key aspects: simulation, prototyping and pre-production. Validation and verification activities are applied in each stage according to the given abstraction level. On the left in Figure 1, the overall process including the different stages is shown. In the following, we describe the validation and verification activities of each stage in terms of the libraries, methods and tools used. Furthermore, we show how to achieve an AUTOSAR-conformant system that realizes the complex behavior of the robot, incrementally developed, validated and verified across the different development stages.
Simulation Stage
Individual functions as well as composed behavior, resulting from multiple individual functionalities, are the subject of the simulation stage. Data flow models in the form of block diagrams (e.g., MATLAB/Simulink), usually in combination with control flow models like Statecharts (e.g., Stateflow), are used. Normally, function development is done independently of platform-specific limitations (memory capacity, floating-point calculation or effects resulting from discretization). Additionally, environment-specific signals and other real sensor values (e.g., produced by A/D or D/A converters or specific communication messages) are ignored for the sake of simplicity. The goal of the simulation stage is to prove that the functional behavior can work and thus to provide a first proof of concept for the control algorithms. As depicted in Fig. 1, we mainly use the MATLAB tool suite, including the Simulink and Stateflow extensions, in this development stage. Let us consider the MATLAB model shown in Fig. 2 as an example, modeling the functionality of an odometry. It reads data from motion sensors to calculate changes in the position over time according to the current orientation and movement speed of the robot. In the simulation stage, such a model is used to apply a so-called model test (MT), where individual functionalities can be simulated by sending static input values to the model (e.g., drive speed and turn rate of the robot as in Fig. 2) and plotting the computed output values as shown in Fig. 3. These one-shot/one-way simulations are typical for the MT step and do not consider the interaction with the environment or a plant model. More complex behavior is constructed and validated by composing individual functionalities and running model-in-the-loop (MiL) simulations that include preliminary environment models of the plant.
At this stage, feedback simulations validate the developed functionality while considering the dynamic behavior of the environment. Outputs are sent to the plant model, which in turn gives feedback used as input for the function blocks in the next iteration of the MiL simulation.
In the case of robotic systems, such a plant model can be represented at different levels, e.g., by models representing a single sensor, the behavior of a single robot using multiple sensors or, in the case of a complex simulation, the behavior of multiple robots as well as relevant parts of the logical and/or physical environment. Using such a plant model in the context of a MiL simulation, we must bridge the gap between our MATLAB models and the provided model of the plant. For this purpose, on the one hand, we use the RobotinoSim simulator. To this end, we implemented a block library for MATLAB in our development environment, which allows access to sensors (e.g., distance sensors, bumper, incremental encoders, electrical motors) and actuators. The sensors and actuators can be accessed individually inside a MiL simulation, supporting the validation of the models. The RobotinoSim simulator provides ideal sensor values, excluding effects such as sensor noise. Therefore, on the other hand, we can also access the HW of the robot directly via a wireless LAN connection. Because we use the concrete HW in this simulation setting, we can verify our functionalities and control algorithms with real sensor values, including measurement errors and sensor noise.
Additionally, on the right of Fig. 1, one can follow the toolchain used via the flow arrows (the described RP flow to the real robot is not shown in the figure). However, we are not limited to the RobotinoSim tool in our development approach. We use this tool to show the proof of concept, but in general it is possible to create block libraries in MATLAB or use existing ones (for example, the Robotics Toolbox) for other robots, simulation frameworks or individual sensors/actuators.
Prototyping Stage
The focus of this stage changes from design to implementation. While in the simulation stage models are the main artifacts, in this stage the source code plays a major role. In the following, we show how to support the prototyping stage at the level of more isolated functional parts as well as at the level of the system behavior by using the professional, commonly used tools of the automotive domain.
Function Level – TargetLink:
In the automotive domain, code generators are commonly used to derive an implementation for the specific target platform. Usually, the models from the simulation stage are directly used or refined until a code generation step is possible. In our development environment, the tool TargetLink from dSPACE is fully integrated into MATLAB and can automatically derive the implementation from behavior models in the form of C code. In this step, we use the same MATLAB blocks as discussed in the previous section. So, we are able to seamlessly migrate our functions and control algorithms from the model level, realizing continuous behavior, to the implementation level, realizing a discrete approximation of the original continuous behavior. Discretization is applied at different levels: e.g., fixed-point variables are used for the implementation at the data level, or time-continuous differential equations are mapped to discrete execution intervals at the timing level. We can configure several characteristics of the desired target platform/HW.
Software-in-the-loop (SiL) simulation is a first step from pure model execution to code-based testing. Certain assumptions can be validated by replacing more and more models with code. While the software is still executed on a host PC and not on the real HW, different effects can be analyzed that result from the configuration parameters chosen during code generation. Just as in the MiL case, a SiL simulation can be applied in MATLAB using the generated source code instead of the original model. The developer can switch between the MiL and SiL simulation modes in MATLAB and can thus easily compare the simulation results. Fig. 3, for example, shows the monitored results for the position as well as the orientation from the MiL and SiL simulation runs of the odometry. The simulations run against the RobotinoSim simulator. In the MiL run (dashed line), appropriate values for the actual position and orientation are calculated. Because of rounding (discretization) effects in the SiL run, the calculated values are much too low. So, the difference between pure model simulation and code generation becomes visible.
The problem in this particular example could be fixed by choosing different values for the discretization over time. Calculating the position every 0.02 time units (corresponding to a scheduling period of 20 ms, cf. the constant value in Fig. 2) leads to very small position offsets, which are often rounded to zero due to discretization. After we identified the problem, we could easily fix it in the model: instead of a 20 ms period, we doubled it to 0.04 time units for calculating the position. After generating code again, we could validate our assumption, which led to a new requirement to trigger the functionality of the odometry with a period of 40 ms. Using code generators to automatically derive the implementation realizing the behavior of the initially created models supports the seamless migration from the model level to the implementation level and allows analyzing effects arising from the implementation.
System Integration and AUTOSAR
For more complex system behavior resulting from the composition of multiple individual functionalities, we use the component-based architecture provided by the AUTOSAR framework.
Brief introduction to AUTOSAR
The AUTomotive Open System ARchitecture was invented to further support the development of complex and distributed systems. AUTOSAR is the de facto standard in the automotive domain. It defines a layered architecture, standardized communication mechanisms and a whole development methodology. Furthermore, it supports the interaction between different car manufacturers and suppliers. Figure 4 gives an overview of the layered AUTOSAR architecture. The layer at the bottom represents the real hardware, including microcontrollers and communication buses. An abstraction layer on top of the real hardware, included in the basic software layer, offers standardized interfaces for accessing the HW. Further functionality realizing the OS behavior as well as the communication is also included in the basic software layer. The AUTOSAR runtime environment (RTE) is responsible for realizing the communication from and to the software application layer on top. Software components (SWCs) realize the application functionality at this top layer, where the architecture style changes from a layered to a component-based approach. SWCs communicate over well-defined ports using AUTOSAR interfaces, which are realized by the RTE layer. Each SWC consists of an arbitrary number of so-called Runnables that specify the behavioral entities of the component (the functionality of a Runnable can be realized by a C/C++ function). Such Runnable entities are mapped onto OS tasks, which are scheduled and handled by the operating system included in the basic software layer.
System Level – SystemDesk:
Individual functionalities provided by the MATLAB models are mapped onto constituent parts of the AUTOSAR model, such as those depicted in Fig. 6. The generated source code from TargetLink is mapped into the AUTOSAR SWCs in the form of so-called Runnables.
So, the same C code as in the SiL simulation is used and thus a seamless integration of the individual functions into the overall system behavior is achieved. In our example, we split the MATLAB model into two Runnables, namely OdometryRunnable and OmnidriveRunnable. The SWC communicates with other SWCs over well-defined ports. Furthermore, the input and output values are mapped to AUTOSAR interfaces with data entries and types, respectively.
System Configuration: In addition to the architecture modeling and the separation of functions into different SWCs, SystemDesk supports a task specification for the underlying operating system. Runnables can be mapped to different tasks. Furthermore, several task activation events, including periodic and sporadic ones, are supported, and additional scheduling information like periods and priorities can be modeled. For a system simulation, one has to specify a concrete AUTOSAR-conformant system configuration, which includes
- a set of tasks, each consisting of one or more Runnables,
- one or more electronic control units, which are specialized processors, and
- communication capabilities (buses) with a concrete mapping of messages, which have to be exchanged. In the following, we describe the first point in more detail using our running example.
After adding more information to satisfy the second and third points, SystemDesk can realize a system simulation. It automatically generates the required simulation framework code according to the AUTOSAR standard. Furthermore, the existing source files generated by TargetLink (from the MATLAB models) are compiled and linked into OS tasks. The complete system runs in a special simulation environment inside the SystemDesk tool and considers the HW configuration as well as OS task specifics. This simulation is executed on a host PC and thus belongs to the prototyping stage. As depicted in Fig. 1, we can validate the overall system behavior in the following three scenarios: First, we can monitor different output values, messages and variables inside the simulation environment itself. Second, we can connect the Robotino simulation environment as a plant model, which interacts with the SystemDesk tool. Finally, we are able to replace the plant simulator with the real robot. To this end, we have to establish a W-LAN connection for the communication and to access the real sensors as well as actuators. Unfortunately, this unpredictable connection can disrupt the timing behavior of the simulation, although the simulator tries to keep all deadlines. If we find errors during our validation processes, we can change the configuration, architecture or communication possibilities in SystemDesk and run our simulations again. Furthermore, we are able to re-import SWCs into MATLAB and thus switch between the different development stages.
Hardware-in-the-loop (HiL) simulations can be applied in the prototyping stage too. In this kind of simulation, the "unlimited" execution and testing hardware is often replaced by special evaluation boards with additional debugging and calibration interfaces, which are similar to the final hardware configuration. Due to limitations of our robot laboratory and missing evaluation boards, we do not use such HiL simulations. However, the integration of such boards can easily be carried out in the SystemDesk tool by changing the HW specification during the system configuration step.
Pre-Production Stage
Within the pre-production stage, usually, a prototype of the real system is built. This prototype is tested against external environmental influences (such as temperature, vibration or other disturbances). The goal of this stage is to prove whether all requirements and constraints are still met on the real HW. During this last integration of all components and system parts, upcoming problems should be fixed as early as possible and before the final production of the product starts. In our setting, we did not build any HW prototypes. Instead, we integrate the overall functions and components as well as the generated RTE and tasks into a complete system, compile it and run it on the target processor of the robot. So, in this last step, we have no simulation semantics and no W-LAN connection to other tools. We can fully operate the behavior of the robot in hard real-time. For verification, we use the hard real-time logging mechanisms of the robot OS. Furthermore, we can change the hardware composition of the robot by adding or removing special sensors and actuators.
Integrated Development via Model Synchronization
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the automotive domain, system engineering models (e.g., SysML or UML) are employed quite early to capture the requirements and basic structuring of the entire system, while software engineering models (e.g., AUTOSAR) are used later on to describe the concrete software architecture. Each model helps in addressing a specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model (SysML) are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, the consistency currently has to be reestablished in a cumbersome manual step. We have shown how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent (cf. the literature at the official project website). We also introduced a general approach for model synchronization. Besides synchronization, the approach consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest. For synchronization, triple graph grammars are used. We exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, which has been developed for an industrial partner.
Synchronization of different modeling notations is only one relevant aspect when developing complex systems. Furthermore, support for traceability and the maintenance of traceability information is essential. On the one hand, classical traceability approaches in the context of model-driven engineering (MDE) address this need by supporting the creation of traceability information at the model element level, e.g., between elements of the same model. On the other hand, global model management approaches manually capture traceability information at the more global model level, e.g., between different models. We have shown how to support comprehensive traceability on both levels as well as the efficient maintenance of traceability information, which requires a high degree of automation and scalability. Additionally, we have presented a comprehensive traceability approach that combines classical traceability approaches for MDE and global model management in the form of dynamic hierarchical megamodels. We further integrate efficient maintenance of traceability information on top of dynamic hierarchical megamodels. We have motivated how to apply such a technique to support traceability for timing properties of an AUTOSAR-conformant architecture.
References
Our publications for this research area can be found at the official project website.