Empirical study planning

To evaluate our solution, an SPL was developed from three legacy applications, namely Healthwatcher, Medwatch, and DPH-LA, so that three products could be derived with the same functionality and quality attributes as the corresponding legacy applications. We use the Goal-Question-Metric (GQM) approach [3]: first, the goals of the assessment are established (see previous section); then, questions are formulated to define how the goals will be evaluated; lastly, metrics support answering the questions in a quantitative manner.

To assess the evolvability of the product architectures, we collected metrics related to evolution attributes known in the literature [4]. These metrics provide multiple viewpoints for assessing the architectures. However, these attributes alone do not ensure that an architecture is easy to evolve, so we compared the measures of these evolution attributes for one product derived from the SPL against the measures of the corresponding legacy application, namely Healthwatcher. The designers of Healthwatcher prioritized evolvability requirements in the realization of both crosscutting and base-level features. Thus, the comparison between the two architectures provides a basis for assessing to what extent our solution succeeds. The question related to the first goal of the evaluation is presented below:

Question 1: Does our solution support the design of evolvable product architectures?

According to Brcina et al. [4], support for evolution is positively influenced by separation of concerns and negatively influenced by coupling between modules (i.e., components and connectors). Accordingly, we collected separation of concerns metrics and coupling metrics. The separation of concerns metrics employed in this empirical study were feature scattering and feature tangling, proposed by Riebisch and Brcina [5]. Coupling between modules was measured by a traditional coupling metric, which counts the number of modules that each module depends on [6], and by a metric specific to aspects, which counts the number of modules crosscut by the aspects of a particular module [7]. The architecture evolvability of a product was compared to that of the corresponding legacy application; in this case, the metrics of the legacy application were used as a reference for understanding to what extent our solution supports modeling evolvable SPL architectures.
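
For concreteness, the sketch below (in Python) computes these four measures from a hypothetical feature-to-module mapping, a module dependency graph, and an aspect crosscutting map. The module and feature names are illustrative only, and the exact counting rules of [5], [6], and [7] may differ in detail from this simplified reading.

    from collections import defaultdict

    # Hypothetical inputs (illustrative names, not the actual Healthwatcher design):
    realizes = {                 # feature -> modules that realize it
        "Complaint":   {"ComplaintMgr", "ComplaintGUI"},
        "Persistence": {"ComplaintMgr", "EmployeeMgr", "Repository"},
    }
    depends_on = {               # module -> modules it depends on
        "ComplaintMgr": {"Repository"},
        "ComplaintGUI": {"ComplaintMgr"},
        "EmployeeMgr":  {"Repository"},
        "Repository":   set(),
    }
    crosscuts = {                # aspect -> modules it crosscuts
        "PersistenceAspect": {"ComplaintMgr", "EmployeeMgr"},
    }

    # Feature scattering: how many modules realize each feature.
    scattering = {f: len(mods) for f, mods in realizes.items()}

    # Feature tangling: how many features each module takes part in realizing.
    tangling = defaultdict(int)
    for mods in realizes.values():
        for m in mods:
            tangling[m] += 1

    # Coupling: how many modules each module depends on.
    coupling = {m: len(deps) for m, deps in depends_on.items()}

    # Aspect coupling: how many modules are crosscut by a module's aspects.
    aspect_coupling = {a: len(mods) for a, mods in crosscuts.items()}

    print(scattering)        # {'Complaint': 2, 'Persistence': 3}
    print(dict(tangling))    # e.g. {'ComplaintMgr': 2, 'ComplaintGUI': 1, ...}
    print(coupling)          # {'ComplaintMgr': 1, 'ComplaintGUI': 1, ...}
    print(aspect_coupling)   # {'PersistenceAspect': 2}

Under this reading, lower scattering, tangling, and coupling values indicate an architecture that is easier to evolve, which is the direction of comparison used against the legacy application.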

Regarding Goal 2, the next question evaluates the soundness of the complete SPL architecture. This evaluation plays a complementary role in this study because the previous question involves the evaluation of product architectures, but not of the complete SPL architecture. The second question of this study is presented below:

Question 2: Does our solution support the design of sound SPL architectures?

Van der Hoek et al. [2] proposed metrics to evaluate the soundness of SPL architectures. These rely on service utilization metrics that are appropriate for the context of SPLs, in which architectural elements can be mandatory, optional, or alternative.
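
As a minimal sketch of the idea (not the authors' exact formulation in [2]), provided service utilization can be read as the fraction of an element's provided services that are actually used elsewhere in a configuration, and required service utilization as the fraction of its required services that are actually satisfied; low values flag unused or unsatisfiable services, both symptoms of unsoundness. All names below are illustrative.

    def provided_service_utilization(provided, used):
        """Fraction of an element's provided services that some other
        element in the configuration actually uses."""
        return len(provided & used) / len(provided) if provided else 1.0

    def required_service_utilization(required, satisfied):
        """Fraction of an element's required services that the rest of
        the configuration actually satisfies."""
        return len(required & satisfied) / len(required) if required else 1.0

    # Hypothetical optional element: 3 of its 4 provided services are used,
    # and both of its required services are satisfied.
    print(provided_service_utilization({"s1", "s2", "s3", "s4"},
                                       {"s1", "s2", "s3"}))   # 0.75
    print(required_service_utilization({"q1", "q2"},
                                       {"q1", "q2"}))         # 1.0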

Figure 1 shows how goals, questions, and metrics are related to each other in this empirical study, as suggested by the GQM approach [3].

Figure 1: Relationship between the goals, questions and metrics in this empirical study
