Image: Article cover showing a comprehensive overview of Findings in MES Quality Commander® (MQC)

Everything Begins with Findings - Understanding the Foundation of Software Quality Monitoring

This article explains why findings are the foundation of all software quality metrics. Aggregated quality metrics provide an overall picture of software quality, but they often obscure critical details. To truly understand software quality and improve it in a targeted way, you need to analyze the underlying findings from the development and quality assurance process. These findings reveal whether requirements are met, where risks emerge, and why quality metrics change. As projects grow, however, findings scattered across different tools and sources become difficult to keep track of. With a central view of all findings, teams can recognize patterns, understand root causes, and address quality issues continuously and effectively.

Image: Example of a software quality heatmap in MES Quality Commander® (MQC)

When you look at your project quality, the first thing you hopefully see is an overview. Aggregated metrics derived from quality assurance data provide insight into where things stand. These metrics summarize findings into percentages, trends, and status indicators. This overview is necessary and useful. It helps you orient yourself.

At the same time, however, it hides important information – by design.

What you see in an overview is the result of aggregation. What you do not see are the details that these metrics are built on. To understand where quality comes from, where risks form, and why numbers change, you must look at the findings behind the overview.

Image: Example of a Findings heatmap in MES Quality Commander® (MQC)

Software Quality Is Not the Overview Alone

A project may appear solid at first glance. Aggregated metrics, such as guideline and test compliance or code coverage, provide a general overview of quality. These values can improve over time, but they cannot show everything the software is actually made of.
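To make this concrete, here is a minimal sketch (hypothetical data and a made-up `compliance` helper, not MQC's implementation): two projects can report the identical compliance percentage while the findings underneath tell very different stories.

```python
# Hypothetical findings: (check type, status) pairs -- illustrative only.
project_a = [("guideline_check", "passed")] * 90 + [("guideline_check", "failed")] * 10
project_b = [("requirement_coverage", "passed")] * 90 + [("requirement_coverage", "failed")] * 10

def compliance(findings):
    """Aggregation step: collapse individual findings into one number."""
    passed = sum(1 for _, status in findings if status == "passed")
    return 100.0 * passed / len(findings)

# Both overviews read "90.0" -- the aggregated metric alone cannot tell you that
# project_b is missing tests for ten requirements rather than breaking style rules.
print(compliance(project_a), compliance(project_b))
```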

To understand how to improve quality, you need to move from the overview to the underlying details. These details are the findings generated throughout the development and testing process. In model-based quality assurance, these findings demonstrate:

  • where requirements are met,
  • where guidelines are followed,
  • where complexity or inconsistencies arise,
  • which elements are covered.

You can only see the true, detailed state of your project by examining this model-based quality assurance data in detail. While the overview may appear reassuring, detailed insight comes from understanding the underlying findings.

Where Findings Come From

The findings result directly from the activities carried out during development and quality assurance. They appear because you continuously ask questions about your software system and evaluate the responses as part of your model-based quality assurance process.

Typical questions include:

  • Is each requirement covered by at least one test? If coverage is complete, you receive a passed finding. If coverage is missing, you get a failed finding.
  • Is the complexity of a subsystem acceptable? Values within an expected range are considered acceptable (passed finding), while increased complexity may trigger a warning or even a failed finding.
  • Are modeling or implementation guidelines being followed? Each deviation from a guideline creates its own warning or failed finding.
  • Does a test case behave according to the requirements (a check often implemented in assessments)?

Each answer adds detail to the overall picture. Passed findings show you what works as intended, while warning and failed findings point to areas that need attention.
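One way to picture a finding is as a small record that ties a question to its answer. The following is a sketch only; the field names and example IDs are illustrative assumptions, not MQC's data model.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PASSED = "passed"    # works as intended
    WARNING = "warning"  # needs attention
    FAILED = "failed"    # requirement or guideline not met

@dataclass
class Finding:
    question: str  # the quality question that was asked
    subject: str   # the model, subsystem, or requirement it refers to
    status: Status
    source: str    # activity or tool that produced the answer

# Each evaluated question yields one finding (IDs are hypothetical):
findings = [
    Finding("Is REQ-042 covered by at least one test?", "REQ-042", Status.FAILED, "test coverage"),
    Finding("Is the complexity of SubsystemA acceptable?", "SubsystemA", Status.WARNING, "static analysis"),
    Finding("Is naming guideline G-7 followed?", "ModelX", Status.PASSED, "guideline check"),
]
```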

Taken together, findings form the basis of your overview. They provide the level of detail that aggregated metrics alone cannot show. Findings reveal what is unclear, what may be missing, and what already works well.

The Challenges That Come with Findings

As long as you are working with only a few models, sifting through findings may feel manageable. However, this quickly changes as projects grow.

Your findings come from many different sources. Static and dynamic testing and review comments all generate model-based quality assurance data. The challenge lies not in the data itself but in the way it is distributed across tools and reports.

You check one model, then the next, and so on. Static testing results live in one tool, dynamic testing findings live in another, and review comments live somewhere else. Keeping track of everything is time-consuming and distracting. Important patterns remain hidden because the data is scattered across many individual views.

At this point, there is nothing wrong with the findings. What gets lost is visibility.

A Central View of Findings

Imagine having a central overview of everything without losing access to the details. All findings from multiple tools and models are brought together in one place as a consistent set of quality assurance data.

Rather than checking the results of models individually, you can see which areas produce issues, how severe they are, and where problems recur across models. This reduces review effort, reveals patterns, and helps you focus on what really matters.
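As a small sketch of the idea (illustrative data and grouping only, not how MQC works internally): once findings from separate tools are merged into one consistent set, even a simple grouping surfaces where problems recur.

```python
from collections import Counter

# Hypothetical findings from three separate sources: (model, status) pairs.
static_tool  = [("ModelA", "failed"), ("ModelB", "passed"), ("ModelC", "failed")]
dynamic_tool = [("ModelA", "failed"), ("ModelB", "warning")]
reviews      = [("ModelA", "warning"), ("ModelC", "failed")]

# Central view: bring everything together as one set of quality assurance data ...
all_findings = static_tool + dynamic_tool + reviews

# ... in which patterns become visible that the single-tool views hide.
issues_per_model = Counter(model for model, status in all_findings if status != "passed")
print(issues_per_model.most_common())  # [('ModelA', 3), ('ModelC', 2), ('ModelB', 1)]
```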

The MES Quality Commander® (MQC) provides this type of centralized visibility and easily scales across many models. To see how it works, have a look at the video below.

What You Gain When Nothing Gets Lost in the Overview

The problem is not having an overview. The problem is losing the connection to the underlying findings.

When you work with model-based quality assurance data at the level of findings, you gain clarity. You understand why numbers change, what influences quality, and where risks begin to emerge. Rather than reacting to trends, you can identify their causes.

Quality does not improve just because an overview looks great. Quality improves when we fix the underlying issues. We need a quality overview that stays connected to the details of the findings it is built on.

Would you like to see how a centralized view of findings works for yourself?

Contact us

Image: Portrait of Dr. Hartmut Pohlheim
Dr. Hartmut Pohlheim
Managing Director
