
Best Practices for Simulink: Analyzing Model Metrics & Refactoring

What you will learn in this article: Analyzing model metrics and performing targeted refactoring play a central role in developing high-quality Simulink models. A structured approach is indispensable for keeping complex systems maintainable, efficient, and comprehensible in the long term. The following guide covers the most important aspects, from the fundamentals of structural quality and key model metrics such as complexity to methodical approaches for structural and functional layers. You will also learn how to handle invalid interfaces and detect clone components. Finally, we present best practices for model metrics analysis and refactoring that enable a lasting improvement in model quality.

In model-based development (MBD), high-quality model architecture is a prerequisite for generating high-quality code. Static model analysis plays a crucial role in ensuring model quality, and modeling guidelines are widely applied and well established in the industry. However, modeling guidelines are not the only criteria for compliance with model design principles; many aspects still require improvement based on specific model properties. As an essential aspect reflecting modeling quality, model structural quality can be comprehensively analyzed through a series of model metrics. This article presents the concepts, principles, and methods of model metrics, along with their applications and best practices for model refactoring. Additionally, it shows how these approaches can effectively enhance model structural quality.

This figure describes the software development and software quality activity process.
Figure 1: Software Development and Software Quality Activity Process

What Is Structural Quality?

First, the concept of model quality is examined. In model-based development (MBD), the primary focus is on ensuring software quality within the model. The reference phases for software product development outlined in ISO 26262 provide corresponding design principles, software design and verification requirements. According to the V-model for software development in ISO 26262 (as shown in Figure 1), different phases involve various activities, ranging from safety requirements to software architecture design, software unit design and implementation, software unit verification, software integration and verification, and finally, embedded software verification.

This development process is divided into two phases:

  • On the left side of the V-model, the model is designed and built based on software requirements.
  • On the right side of the V-model, at each corresponding phase, verification is conducted to ensure that the model or software functions as expected according to the requirements and architectural design.

At the same time, the V-model integrates quality assurance activities throughout all phases. Structural quality and functional quality are two key aspects of model quality.

  • Structural quality: This refers to the suitability of model structures and design attributes. It involves assessing whether the model structure aligns with design suitability and whether the design is appropriate for meeting software requirements.
  • Functional quality: This focuses on verifying whether the software's functionality aligns with the model design and operates correctly according to the specified requirements.

What exactly is structural quality? To understand this, it is essential to explore the structural properties of a model. Structural properties reflect the extent to which software design meets requirements and the degree of compliance with design properties. According to ISO 26262’s description of typical design characteristics and quality assurance properties, the goals of design or implementation include: consistency, simplicity, comprehensibility and readability, modularity and encapsulation, adaptability to modifications, design robustness, verifiability, testability, and maintainability. From a software architecture perspective, a key design characteristic is ensuring consistency in software unit interfaces. Additionally, the design should be easy to understand and review. For instance:

  • How well is modularization implemented?
  • Is the design easy to modify?
  • Is the design sufficiently robust?
  • Does it follow industrial best practices or paradigms for modeling?
  • Is the design conducive to testing?
  • Will the design be convenient for future maintenance?

By addressing these questions, structural quality in software design can be effectively evaluated and improved.

To achieve these properties, specific reference can be found in ISO 26262-6, particularly in the recommended design principles for software design and implementation. Three tables in the standard focus on design recommendations and structural quality aspects: Table 1, Table 3, and Table 6. These tables cover the topics that modeling and coding guidelines should address, principles for software architectural design, and design principles for software unit design and implementation. In model-based development, these principles are primarily implemented through static analysis and the application of modeling conventions. At the same time, the principles of software architecture design provide concrete, actionable suggestions, such as keeping complexity low, limiting component size, ensuring strong cohesion within components, and maintaining loose coupling between components. To implement these suggestions effectively, it is first essential to have a detailed understanding of the relevant model properties, for which model metrics offer comprehensive insights.

This figure describes the local complexity of the subsystem.
Figure 2: Local Complexity of a Subsystem

Structural Quality-Related Model Metrics

Since model metrics can represent quality properties related to software models, it is important to first understand the specific metrics involved and how they reflect the structural quality of a model.

Typically, metrics measure the degree to which a software model shows certain properties. Model metrics follow the same principle. By measuring these properties, it becomes possible to map them to quantitative values, thereby enabling an objective understanding and assessment. Examples of such metrics include model complexity, component size, incoherence, the proportion of functional components, interface size, and the usage of clone groups.

The following sections provide a comprehensive overview of these key model metrics, including clear explanations of each metric, the factors influencing them, and the ways in which they reflect the structural quality of a model. Additionally, industry’s best practices for each metric are presented to guide effective implementation.

Model Complexity

The discussion begins with model complexity, with a primary focus on readability complexity, which is related to modeling style and comprehensibility at the subsystem level. Therefore, the analysis first considers the local complexity of a subsystem. To illustrate this, a typical subsystem is shown in Figure 2.

This figure describes complexity: low complexity vs. high complexity.
Figure 3: Complexity: Low Complexity vs. High Complexity

As shown here, the example subsystem consists of several input and output ports, along with two additional subsystems. At the current structural level, the focus is solely on these two subsystem blocks, without delving into the internal contents of each. In other words, the current subsystem includes input ports, output ports, and two internal blocks. It is essential to understand the concept of structural hierarchy at this point. At the structural level, the goal is to gain clarity on the origin and destination of signals, the direction of signal flow, and the way in which they are interconnected. This refers to the structural layout of the current modeling level. To assess local complexity from a structural perspective, it is not necessary to understand the internal computations or logic performed within the blocks. Instead, the focus is placed on the visible, representative structural components. At the current model level, the local complexity of this subsystem can be quantitatively evaluated by considering all visible elements. For example, a calculated value—such as 33 in this case—can be used to characterize the local complexity of the subsystem.
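To make this concrete, such a count can be approximated programmatically. The following MATLAB sketch simply tallies the blocks and signal lines visible at a single hierarchy level, without descending into child subsystems. It is a rough proxy for illustration only, as the exact metric weights element types differently, and the model and subsystem paths used here are hypothetical.

    load_system('myModel');                          % hypothetical model
    sys    = 'myModel/Controller';                   % hypothetical subsystem path
    % Count only what is visible at this level: the direct child blocks ...
    blocks = find_system(sys, 'SearchDepth', 1, 'Type', 'block');
    blocks = blocks(2:end);                          % find_system also returns sys itself
    % ... and the signal lines connecting them
    lines  = find_system(sys, 'FindAll', 'on', 'SearchDepth', 1, 'Type', 'line');
    localComplexity = numel(blocks) + numel(lines);
    fprintf('Local complexity proxy for %s: %d\n', sys, localComplexity);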

Why calculate local complexity? In fact, the goal is to focus on key features at a specific model level. By applying an appropriate hierarchical structure, the content represented by the model becomes easier to read and understand—thereby enhancing model readability. All other information related to the model's internal logic or details is intentionally omitted at the current level and deferred to other layers of the model’s structure. A well-balanced subsystem layout with reasonable complexity improves structural quality, enhances simplicity and readability, and reduces the effort required for review and maintenance. If this number alone does not yet convey more useful insights into the complexity, there is no need to worry. The following example in Figure 3 will help clarify this further.

This figure describes the layout and complexity of a complex system (local complexity ~ 600).
Figure 4: Layout and Complexity of Complex Systems (local complexity ~600)

As shown on the left side of Figure 3, the subsystem is relatively small in scale and consists primarily of a few simple computational steps. When evaluated using the same local complexity calculation method, its complexity value is determined to be 60. In contrast, the subsystem on the right side of Figure 3 is significantly larger in scale and evidently involves more computational logic. Intuitively, one would consider the right-hand subsystem to be more complex. The question is: how much more complex is it? Can a numerical value be used to quantify the difference in complexity compared to the smaller subsystem? By applying the same local complexity calculation, the complexity value of the right-hand subsystem is determined to be 600. This indicates that the larger subsystem is approximately ten times more complex than the smaller one.

Next, the analysis focuses on this large subsystem with relatively high complexity (refer to the right side of Figure 3 and Figure 4). A complexity value of 600 is not considered extremely high, but it falls within the medium-high range. For such subsystems, the question is how to improve readability. From a holistic perspective, the subsystem involves some parallel computation, and certain signal flows are not clearly defined. To improve readability, an attempt can be made to limit the local complexity to a certain threshold, as a complexity value of 600 is already somewhat large. Therefore, the subsystem will be restructured to address these issues.

This figure describes the layout and complexity of a restructured complex system (local complexity ~ 40).
Figure 5: Layout and Complexity of the Restructured System (local complexity ~40)

The model is essentially divided into two parts: the upper and lower sections. In the current structural layer, subsystems are created for each, with calculations placed in the lower model layer. After restructuring, the local complexity (see Figure 5) is significantly reduced to 40. More importantly, the redesigned subsystems provide a clearer and simpler layout, making the understanding of signal flow, data flow, and computations more straightforward. This improves both comprehensibility and maintainability.

This figure describes global complexity.
Figure 6: Global Complexity

Local complexity helps assess the size of model components or the complexity of subsystems. Building upon the earlier analysis at the subsystem level, the scope is now extended to examine the global complexity of the model system or its subsystems, to better understand the overall implementation scale. For the same model system or subsystem (as shown in Figure 6), the subsystem consists of input ports, output ports, and two subsystem blocks. All internal implementations—such as specific computations and components within deeper-level subsystems—are considered, as displayed in the model browser in Figure 6. The complexity of all elements within the model subsystem is calculated, and the complexities of each contained subsystem are summed. This results in a global complexity value of 352 for the system. Thus, the global complexity at this structural level includes not only the local complexity of 33, but also the local complexities contributed by all subsystems.
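Under the same assumptions as the local sketch above, a global-complexity proxy can be expressed as the sum of the local proxy over a subsystem and every subsystem nested below it, mirroring the summation described for Figure 6. Again, this is an illustrative approximation, not the tool-defined metric.

    function gc = globalComplexityProxy(sys)
        % find_system returns the subsystem sys itself plus all nested subsystems
        subs = find_system(sys, 'LookUnderMasks', 'all', 'BlockType', 'SubSystem');
        gc = sum(cellfun(@localComplexityProxy, subs));
    end

    function lc = localComplexityProxy(sys)
        % Same one-level counting idea as in the local sketch above
        blocks = find_system(sys, 'SearchDepth', 1, 'Type', 'block');
        lines  = find_system(sys, 'FindAll', 'on', 'SearchDepth', 1, 'Type', 'line');
        lc = numel(blocks) - 1 + numel(lines);       % -1 excludes sys itself
    end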

This figure describes the restructuring of a complex model system – library/model reference.
Figure 7: Refactoring of Complex Model Systems - Library/Model References

Global complexity metrics help ensure that a unit or component does not become excessively complex, thus maintaining its readability. Additionally, it is important that such units and components remain measurable, particularly for the purposes of review and testing. In this context, the complexity metric can be directly associated with the required effort—higher complexity indicates greater workload for both review and testing.

Therefore, the magnitude of global complexity directly corresponds to the amount of workload, highlighting its impact on structural quality. Keeping units small improves readability, modularity and testability, thereby enhancing the implementation efficiency. If a given functional requirement can be implemented in two different ways, both of which are functionally correct, a significant difference in their global complexity values indicates that the approach with lower complexity is the more efficient implementation. Thus, while global complexity may not affect functional quality, it may have an impact on structural quality, serving as a reflection of the effectiveness of the design and modeling process.

How does global complexity contribute to improving modeling efficiency in practice? This can be illustrated using a typical example model in MATLAB/Simulink. Firstly, global complexity metrics can be used to assess and constrain the overall size of model components, thereby reducing the structural complexity of the model. Secondly, they provide indicators of testability. For instance, by using libraries or model references (as shown in Figure 7), files can be partitioned into separate model components, enhancing both modularity and testability. Finally, global complexity serves as an indicator of modularity. Proper block partitioning enables block reuse and independent testing. A modular design aligns with the overall software architecture, linking directly to key requirements and test strategies. Modularization also enables flexible loading and compilation of components, allowing them to be reused across different projects without the need for repeated global-level review and testing of the same structural component types.
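As a sketch of this partitioning step, Simulink's documented conversion API can turn an existing subsystem into a referenced model in one call; the subsystem path and new model name below are hypothetical.

    load_system('myModel');
    Simulink.SubSystem.convertToModelReference( ...
        'myModel/Controller/AirflowCalc', ...        % hypothetical subsystem
        'AirflowCalcRef', ...                        % name of the new referenced model
        'ReplaceSubsystem', true);                   % swap the subsystem for a Model block

After the conversion, the referenced model can be reviewed, tested, and reused independently, which is exactly the modularity benefit described above.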

This figure describes an example of Block path count per Block in a subsystem.
Figure 8: Example of Block Path Count per Block in a Subsystem

Previously, the concept of complexity was discussed, along with how to evaluate the size and complexity of a component. The focus now shifts to understanding how well the internal parts of a component work together—that is, whether different computations or operations within the component are interrelated. To accomplish this, the degree of interaction among elements within a component is analyzed. In MATLAB/Simulink/Stateflow, this is represented using the concepts of cohesion and incoherence [1]. The core idea is that every block within a subsystem should directly or indirectly influence, and be influenced by, other blocks within the same subsystem. For each block b, the number of blocks that are affected by or affect b—including b itself—is calculated. This total, referred to as the number of blocks on paths through b, is abbreviated as bop. In a highly cohesive subsystem, the bop of each block tends to approach the total number of blocks in the subsystem. In contrast, a subsystem with low cohesion will have blocks whose bop values are significantly lower compared to the total block count.

Next, the bop values of all blocks within a subsystem are aggregated and normalized to obtain the cohesion value of the subsystem. Accordingly, the cohesion of a subsystem S consisting of a set of blocks Bs is calculated using the following formula:

cohesion(S) = ( Σ_{b ∈ B_S} bop(b) ) / |B_S|²,  incoherence(S) = 1 / cohesion(S)
Formula 1: Cohesion and Incoherence
This figure describes an example of incoherence = 1.
Figure 9: Incoherence Example 1 (Incoherence = 1.0)

When designing the structure or performing modular computations within a subsystem, the goal is to group related functionalities together within the same subsystem. Conversely, functionalities that are unrelated should be separated to enhance comprehensibility. If all blocks are focused on a single function or computation, the subsystem becomes significantly easier to comprehend. Therefore, it is important to identify parallel or independent components within the model to support model refactoring. This facilitates modularization and encapsulation, ultimately improving key quality properties such as testability and maintainability.

How does incoherence assist in refactoring operations? Incoherence can be interpreted as a rough estimate of the number of disconnected or parallel components within a subsystem. For example, in Figure 9, all blocks involved in the computation lie on a single path; this represents the simplest case, with a calculated incoherence value of 1. In Figure 10, which resembles the earlier example but contains a partial split in the computational path, the incoherence value rises to 1.3. Now consider what happens in Figure 11, where parallel computation is introduced: an additional set of independent, parallel components increases the number of disconnected elements in the subsystem, leading to an incoherence value of 2. From this, the integer part of the incoherence value provides an approximate indication of how many parallel or separated component groups exist within the subsystem.
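A small worked example can make Formula 1 concrete. The following MATLAB sketch builds a toy connectivity graph with two independent signal paths, comparable to the parallel case in Figure 11, and computes bop, cohesion, and incoherence exactly as defined above; the block names are placeholders.

    % Two independent paths: In1 -> Gain -> Out1 and In2 -> Out2
    G = digraph({'In1','Gain','In2'}, {'Gain','Out1','Out2'});
    reach = isfinite(distances(G));      % reach(i,j) is true if a path i -> j exists
    n = numnodes(G);
    bop = zeros(n, 1);
    for b = 1:n
        % Blocks on paths through b: those reaching b, reached from b, and b itself
        bop(b) = nnz(reach(:, b) | reach(b, :)');
    end
    cohesion    = sum(bop) / n^2;        % Formula 1
    incoherence = 1 / cohesion;

Here the bop values sum to 13, so cohesion = 13/25 and incoherence ≈ 1.9, correctly hinting at roughly two parallel groups.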

This figure describes the second example of incoherence.
Figure 10: Incoherence Example 2 (Incoherence = 1.3)
This figure describes the third example of incoherence.
Figure 11: Incoherence Example 3 (Incoherence = 2.0)
This figure describes an example of a highly complex and highly incoherent subsystem.
Figure 12: Example of a Subsystem with High Complexity and High Incoherence (Incoherence ≈ 5, Local Complexity = 1104)

A more complex example can further illustrate this concept. Incoherence can be utilized to identify subsystems that exhibit both high complexity and a high degree of disconnection. As shown in Figure 12, the local complexity of the subsystem reaches 1104, which introduces significant difficulties in terms of readability and testability. The corresponding incoherence value is approximately 5, suggesting that the subsystem can be divided into four to five independent components. Based on this analysis, the model may be refactored into four larger, separate components. Alternatively, a partition into five components is also viable, depending on specific structural or functional requirements.

This figure describes the model restructuring based on the incoherence Simulink model metric.
Figure 13: Model Refactoring Based on the Simulink Incoherence Metric (Incoherence ≈ 4, Local Complexity = 194)

After refactoring, a new structure like that shown in Figure 13 is obtained, forming a new structural layer that makes the overall model easier to understand. Firstly, it clearly shows which input signals correspond to specific functionalities or computations within the subsystem unit, which is crucial for the evaluation of functional signals. Secondly, it can be observed that while the incoherence value remains approximately the same, the local complexity value is significantly reduced, thereby improving the comprehensibility of this model layer. At this stage, the model introduces a structural layer, rather than a mixture of functional and structural layers.

This figure describes an example of a subsystem with implicit data flow and high incoherence.
Figure 14: Example of a Subsystem with Implicit Data Flow and High Incoherence (Incoherence ≈ 3, Local Complexity = 1673)

In addition to providing a rough estimate of separated components, the model metric of incoherence also helps prevent designs with implicit data flows, as illustrated in the example shown in Figure 14. The subsystem consists of many individual charts, where major signal data flows are hidden using Goto/From blocks, making it difficult to understand the properties and directions of the signals. Statistical analysis indicates that the subsystem exhibits extremely high local complexity and poor readability. However, the incoherence value is only 3, suggesting that although the use of Goto/From blocks visually separates functional components, the subsystem can be simplified into three main functional groups.

This figure describes the model restructuring based on the incoherence Stateflow model metric.
Figure 15: Model Refactoring Based on the Stateflow Incoherence Metric (Incoherence ≈ 3, Local Complexity = 111)

The refactored model shown in Figure 15 demonstrates clearer subsystem structures and data signal flows, while also significantly reducing the local complexity.

This figure describes an example of a structural subsystem in the Simulink demo model fuelsys.
Figure 16: Example of a Structural Subsystem in the Simulink Demo Model fuelsys

Structural and Functional Layers

How can the separation of structure and functionality be interpreted? Taking the Simulink subsystem shown in Figure 16 as an example, the input and output ports are referred to as neutral blocks, as they are typically necessary in most cases. Within the subsystem, no actual computation takes place; instead, it contains only two internal subsystems. A subsystem that consists solely of such structural blocks is referred to as a structural subsystem.

This figure describes an example of a functional subsystem in the Simulink demo model fuelsys.
Figure 17: Example of a Functional Subsystem in the Simulink Demonstration Model fuelsys

Similarly, in the subsystem shown in Figure 17, besides the input and output ports, all other blocks perform functional operations such as mathematical calculations, logic operations, bitwise operations, or function calls. A subsystem that contains only functional operation blocks is referred to as a functional subsystem.

This table describes the model metric: ratio of functional blocks.
Table 1: Model Metric - Ratio of Functional Blocks

How can the functional and structural levels of a model be identified? To address this, a model metric called the ratio of functional blocks at the subsystem level is introduced. This metric represents the percentage of functional blocks relative to the total number of functional and structural blocks within a subsystem. A metric value of 0% indicates that, apart from neutral blocks, the subsystem contains only structural blocks, representing 0% functionality and 100% structure. Conversely, a value of 100% indicates a purely functional subsystem. Intermediate values correspond to hybrid subsystems with varying proportions of functional and structural elements.

As shown in Table 1, based on the counts of functional and structural blocks and the resulting ratio of functional blocks, a value of 100% indicates a purely functional subsystem, while "–" represents a purely structural subsystem. Intermediate values may occur, representing hybrid subsystems. In modeling practice, however, it is strongly recommended to avoid such hybrids. A hybrid subsystem, containing both structural blocks and functional computation blocks at the same hierarchical level, may negatively impact readability, comprehensibility, and especially testability. Therefore, it is advisable to drive the ratio of functional blocks toward either 100% or 0%, rather than allowing it to remain at intermediate values.
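The classification behind this metric can be sketched as follows, under the assumption (made here purely for illustration) that Inport/Outport blocks are neutral, SubSystem and ModelReference blocks are structural, and every other block is functional; the subsystem path is hypothetical.

    sys    = 'myModel/Controller';                   % hypothetical subsystem path
    blocks = find_system(sys, 'SearchDepth', 1, 'Type', 'block');
    blocks = blocks(2:end);                          % exclude the subsystem itself
    types  = get_param(blocks, 'BlockType');
    neutral    = ismember(types, {'Inport', 'Outport'});
    structural = ismember(types, {'SubSystem', 'ModelReference'});
    functional = ~neutral & ~structural;
    % 0% = purely structural level, 100% = purely functional level
    ratio = 100 * nnz(functional) / (nnz(functional) + nnz(structural));
    fprintf('Ratio of functional blocks: %.0f%%\n', ratio);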

This figure describes the best practices for hierarchical model design.
Figure 18: Best Practices for Hierarchical Model Design

In accordance with the architectural design principles recommended by ISO 26262, it is advisable to establish an appropriate hierarchical structure for subsystems. But what constitutes an "appropriate" hierarchy? In this context, appropriateness can be defined as a dedicated differentiation between structural and functional layers. By separating structural and functional elements, a consistent hierarchy for signal processing, structural organization, and functional implementation can be achieved, thereby improving structural quality in terms of readability, modularity, testability, and adaptability to modifications.

A set of recommended best practices for model architecture design is presented here. In the software development process, the software architecture is typically outlined at a high level during the architectural design phase. Based on this initial structure, the model system should be designed and refactored using the previously discussed model metrics to ensure that the system or its subsystems align with industry’s best practices and the intended software architecture. During the detailed design phase, care must still be taken to avoid mixing structural and functional block elements within the same subsystem. The goal is to achieve a well-organized, visually clean, and logically layered model structure, such as the appropriately layered architecture shown in Figure 18. In this example, the top-level Simulink root layer represents the overall system structure. Beneath it, Model Layer 1 includes subsystems responsible for signal distribution and software architecture layers. Further below, Model Layer K consists of subsystems that implement structural aspects of software architecture. From a certain level—such as Model Layer m—onward, the subsystems transition into either structural layers of the architecture or the actual implementation of functional computations, extending down to the lowest layer of functional logic. In this layered design, structural subsystems are maintained until the final computation layer. Throughout the modeling process, proper use of library links and model references helps reduce system complexity and contributes to a final architecture that is simple, readable, testable, and maintainable.

This figure describes an example of an unused base signal.
Figure 19: Example of Unused Basic Input Signals

Invalid Interfaces and Interface Size

When discussing subsystem layers, structural organization, and hierarchical model architecture, it is essential to consider the signal flow between subsystems and the interfaces entering each subsystem.

According to the software design principles recommended in ISO 26262, interface sizes should be kept within a reasonable range, since oversized interfaces impair system comprehensibility. In limiting interface size, two aspects must be considered: the total number of subsystem interfaces and the consolidation of signals through bus structures. More importantly, emphasis should be placed on the effectiveness of the interface signals themselves. This means that all input signals to a subsystem must be functionally relevant, that is, tied to the functional requirements implemented within the subsystem. Ineffective or unused signals should be avoided, as they add unnecessary complexity and reduce model clarity. To illustrate this, consider the example shown in Figure 19. Five basic input signals are passed into a subsystem that contains multiple hierarchical levels and a model reference. Notably, only two of the five input signals (signal a and signal b) are used in the dot product computation. The remaining inputs (signal c, signal d, and signal 1) are ineffective, as they are not used in any downstream functionality or computation.
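The simplest variant of this problem, an input that is not connected to anything at its own level, can be flagged with basic Simulink APIs, as in the sketch below (paths are hypothetical). Signals that are connected but never used further downstream, as in Figure 19, require the deeper data-flow analysis that dedicated metric tools perform.

    sys     = 'myModel/Controller';                  % hypothetical subsystem path
    inports = find_system(sys, 'SearchDepth', 1, 'BlockType', 'Inport');
    for k = 1:numel(inports)
        ph = get_param(inports{k}, 'PortHandles');
        if get_param(ph.Outport, 'Line') == -1       % -1 means no line is attached
            fprintf('Unconnected input: %s\n', inports{k});
        end
    end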

This figure describes an example of an unused bypass signal.
Figure 20: Example of Unused Bypassed Signals

However, in certain cases, it may be unclear whether signals are truly used—especially when the internal structure of a subsystem or the details of referenced models are not fully visible or understood. A typical use case is illustrated in Figure 20, where an entire signal bus is fed into a subsystem. However, only a subset of the signals is actually required for the functional implementation within the subsystem, while the remaining signals are effectively bypassed and unused. To address this, it is recommended to limit interface size to ensure that units and components remain reusable, while avoiding the introduction of virtual coupling. To achieve this, the use of explicit signal flow is often enforced, which improves the model’s readability, modularity, testability, maintainability, and adaptability to changes.

This figure describes the best practices for interface operations.
Figure 21: Best Practices for Interface Handling

What are the best practices for managing interface size and signal handling? First, it is important to note that simply imposing a fixed numerical limit on the number of input and output ports is not a practical approach. Instead, as illustrated in Figure 21, a recommended best practice is to group required signals into buses based on functional needs. Ideally, these buses should consist only of the valid and necessary signals for the subsystem, with signal flows made explicit through the signal grouping and extraction process shown there. Required bus signals should therefore be extracted or modified outside the subsystem, using signal selectors to explicitly choose only the signals that are actually needed. This approach ensures that the essential signal flow is clearly represented at the structural level, enhancing the subsystem's clarity, modularity, and maintainability.
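As a minimal sketch of this practice, a Bus Selector can be placed in front of the subsystem so that only the needed signals are passed on explicitly; the block path and signal names below are hypothetical.

    load_system('myModel');                          % hypothetical model
    sel = 'myModel/SelectNeeded';                    % hypothetical selector path
    add_block('simulink/Signal Routing/Bus Selector', sel);
    set_param(sel, 'OutputSignals', 'a,b');          % select only the signals in use

The subsystem then receives two explicit signals instead of the whole bus, which keeps the interface small and the signal flow visible at the structural level.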

This figure describes the subsystem clones.
Figure 22: Subsystem Clones

Clone Components

Now suppose a well-designed model is already in place, with all structural quality metrics fully optimized. When there's a need to reuse a specific function or algorithm in other parts of the model, a common operation is to manually duplicate a block, component, or subsystem. When similar structures—such as subsystems with identical or highly similar properties—appear in multiple locations, they are referred to as a clone group. Regarding subsystem cloning and clone group operations, consider a scenario where a small subsystem contains a set of computational blocks and constants arranged in a specific layout. As a reusable component, it is copied and pasted elsewhere in the model, followed by some layout and parameter adjustments (as shown in Figure 22). Predictably, repeating such operations across the model leads to a rapid increase in system complexity, along with a corresponding rise in review and testing workload. To simplify the design and reduce overall system complexity, models containing clone groups can be restructured as separate reference models. If certain subsystems need to be reused often, they should be converted into library components. By converting subsystems into model references or Simulink libraries, you can significantly reduce model complexity, lower testing and maintenance effort, and ultimately decrease code size.
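A minimal sketch of this clone-to-library refactoring, assuming two duplicated subsystems CloneA and CloneB in a hypothetical model, could look as follows:

    new_system('reuseLib', 'Library');                        % create a new library
    add_block('myModel/CloneA', 'reuseLib/SharedComp');       % promote one clone into it
    save_system('reuseLib');
    pos = get_param('myModel/CloneB', 'Position');
    delete_block('myModel/CloneB');                           % swap the other clone
    add_block('reuseLib/SharedComp', 'myModel/CloneB', 'Position', pos);

In practice, the signal lines must be reconnected after the swap and the first clone replaced by a library link as well; dedicated refactoring tools automate exactly these steps.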

This figure describes the statistics of clone group detection in the control system.
Figure 23: Clone Group Detection Results in the Control System

Therefore, identifying clones and clone groups and replacing them with library references is considered the best practice for handling cloned components. This approach can significantly reduce global complexity, which directly correlates with model size and development effort. In complex model-based systems composed of a wide variety of block types, individual model components are often developed in parallel by distributed teams. As a result, copy-paste patterns and clone groups frequently emerge, contributing to relatively high global complexity. For example, based on clone group and complexity analysis conducted in a specific project [2] (see Figure 23), the system had a substantial number of cloned components and repetitive subsystems. By improving these clone groups and duplicated subsystems, a complexity reduction of approximately 10% was achieved. In general, for large-scale engineering projects, a 10% reduction in complexity translates into a similar reduction in review and testing effort, leading to significant savings in labor and costs.

When analyzing the relationship between a typical project development cycle and the model's global complexity, it is seen that complexity increases rapidly at the beginning—from project initiation to feature completion—and then gradually saturates, especially during the bug-fixing phase. However, if model metrics analysis is applied and regular model refactoring is performed throughout the development cycle, both the overall size and complexity of the model can be significantly reduced. In other words, by continuously applying metric-based analysis, conducting clone group detection, and executing periodic refactoring during the entire model development lifecycle, it is possible to effectively reduce global complexity and enhance the structural quality of the system.

This figure describes the model metric view in MES Model Examiner®.
Figure 24: Model Metrics View in MES Model Examiner® (MXAM)

Best Practices for Model Metrics Analysis and Model Refactoring

As discussed above, a variety of structural metrics are associated with model quality. To obtain precise values for these metrics and support informed model design and refactoring decisions, model analysis tools can be used. For example, the in-house developed model checking tool MES Model Examiner® (MXAM) serves not only as a model compliance verification tool but also provides detailed structural quality metrics. MXAM can calculate and report key structural metrics such as global/local complexity, model hierarchy depth, interface size, clone groups, incoherence, and more. These metrics are integrated as part of the checking process and are clearly visualized in a dedicated view, as illustrated in Figure 24. In the figure, local complexity values for different subsystems are shown. For example, a subsystem with a local complexity value of 806 is marked in red, showing that it exceeds the default upper threshold of 750. As a result, the model fails the compliance check for the local complexity rule.

For the refactoring of complex model systems, specialized tools can also be used to simplify modeling and refactoring operations. For instance, the dedicated model refactoring tool MES Model Refactor® (MoRe) developed by MES significantly streamlines the modeling process. By automating a series of continuous modeling steps, MoRe enables fast and purpose-driven model restructuring, allowing model components to be efficiently and rapidly refactored.

In summary, analyzing structural quality properties—such as model subsystem complexity (local/global), component size, incoherence, the proportion of functional components, interface size, and clone groups—offers valuable insights into the model structure. Applying model metrics and performing regular model refactoring as a best practice throughout the development process helps improve model readability, comprehensibility, maintainability, and testability. It effectively reduces software complexity, supports the implementation of component and functional modeling design and verification principles, and enhances the overall structural quality of the model. As a result, the quality of the model-based software system is significantly improved.

References

  1. Mäurer et al. (2014), On Bringing Object-Oriented Software Metrics into the Model-Based World – Verifying ISO 26262 Compliance in Simulink, 8th International Conference, SAM 2014, Valencia, Spain
  2. Salecker et al. (2016), JUST SIMPLIFY: Clone Detection for Simulink Controller Models, SAE World Congress 2016, Detroit, MI, USA
