Current Projects

This project includes:

1) Modeling AADL constructs in Clafer and translating models written in AADL into models in the Clafer language

2) Computation of quality attributes such as latency, cost, and maintainability

3) Optimization of the model with respect to given objectives, such as minimizing total latency

4) Visualization of the trade-offs and the Pareto front produced by the optimization

Clafer is a lightweight structural modeling language.

This is the old project website. For more up-to-date information, visit the new official website:

Traditional algebraic frameworks for bidirectional transformations are state-based: the input and output are states of data. Actual implementations, however, are delta-based: the synchronizer tries to determine the delta resulting from an update and then propagates that delta.

We show that the state-based algebraic framework has several drawbacks, and we build delta-based algebraic frameworks for both the asymmetric and the symmetric case.
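The contrast between state-based and delta-based propagation can be illustrated with a toy asymmetric synchronizer. This is a minimal sketch with made-up names and structures, not the formal framework from the paper:

```python
# Source: a list of records; view: the list of their names.
source = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

def get(src):
    """Forward direction: compute the view from the source."""
    return [r["name"] for r in src]

# State-based put: given only the new view *state*, the synchronizer must
# guess how new view elements align with old ones, so element identity
# (here, the ids) can be lost.
def put_state_based(src, new_view):
    return [{"id": i + 1, "name": n} for i, n in enumerate(new_view)]

# Delta-based put: the update arrives as an explicit delta, so the
# identity of untouched elements is preserved by construction.
def put_delta_based(src, delta):
    # delta: list of ("rename", view_index, new_name) operations
    out = [dict(r) for r in src]  # copy, leaving the source unchanged
    for op, idx, name in delta:
        if op == "rename":
            out[idx]["name"] = name
    return out

# Renaming "b" to "c" as a delta edits the matching record in place.
new = put_delta_based(source, [("rename", 1, "c")])
```

With the delta-based put, `new` carries the rename while keeping the original `id` of the untouched record, which a purely state-based put cannot guarantee once the view is also reordered.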

Multi-Objective Combinatorial Optimization (MOCO) explores a finite search space of feasible solutions and finds the optimal ones that balance multiple (often conflicting) objectives simultaneously. MOCO is a fundamental challenge in many problems in software engineering (e.g., architecture design, test data generation, and project planning) and other domains (e.g., hybrid vehicle powertrain design, electric vehicle battery design, and civil infrastructure repair planning).

Most MOCO problems are NP-hard. To address them, approximate approaches relying mainly on meta-heuristics have been advocated for years. In most cases, they solve MOCO problems in acceptable time, but they find only near-optimal solutions and often suffer from parameter sensitivity (i.e., the accuracy of the found solutions varies widely with the parameter settings). In contrast, exact methods that scan all candidate solutions one by one often take too long for large-scale problems, but they find all exact optimal solutions, which is desirable for stakeholders who never want to miss an optimal opportunity.
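The "scan all candidates" flavor of exact MOCO can be sketched in a few lines: enumerate a finite search space and keep exactly the Pareto-optimal points. This toy bi-objective knapsack-style example (with made-up numbers) illustrates the idea, not the solver-based algorithms of the project:

```python
from itertools import product

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Exact Pareto front: keep every point no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy problem: choose a subset of 3 items, minimizing total cost while
# maximizing total value (encoded as minimizing negative value).
cost  = [3, 5, 8]
value = [2, 4, 7]
candidates = []
for sel in product([0, 1], repeat=3):  # exhaustive: all 2^3 subsets
    candidates.append((sum(c for c, s in zip(cost, sel) if s),
                       -sum(v for v, s in zip(value, sel) if s)))

front = pareto_front(candidates)
```

Here only the subset {item1, item2} (cost 8, value 6) is dominated, by {item3} (cost 8, value 7); the other seven candidates are all exact Pareto-optimal trade-offs. The quadratic pairwise check is what makes naive enumeration intractable at scale, and what solver-based and parallel approaches aim to avoid.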

We aim at exact, parallel approaches that solve MOCO problems accurately and efficiently. We propose five novel parallel MOCO algorithms that search for exact optimal solutions using off-the-shelf solvers and that parallelize the search via collaborative communication, divide-and-conquer, or both. A key finding is that one algorithm, which we call FS-GIA, achieves substantial (even super-linear) speedups and scales well up to 64 cores. Our work opens a new direction in scaling exact MOCO methods. We hope it encourages other researchers to reconsider the feasibility of exact MOCO methods and to explore different ways of scaling them. Appropriate parallelization, especially given the increasing availability of multi-core systems, is a promising approach.

More details, implementation code, and experimental data are available on an open-source project website:

The use of examples is critical for a more widespread adoption of modeling, as it makes modeling more accessible to non-experts. We propose Example-Driven Modeling (EDM), an approach that systematically uses explicit examples for eliciting, modeling, verifying, and validating complex business knowledge. It emphasizes the use of explicit examples together with abstractions, both for presenting information and when exchanging models.

The project is in its initial stage. You can check out the Clafer Wiki, which contains some models created and validated via examples. There is also Dina's CS846 course project report, which compares modeling in UML and Clafer.

A set of tools for feature modeling, configuration, feature-based model templates, template instantiation, and verification.

This project investigates large-scale, real-world feature models, such as the Linux kernel, with more than 6,000 features, and eCos, with over 1,000 features.

We have studied their structural characteristics (size, depth, width, number of constraints), the evolution of the models, and the languages used for expressing these models and their semantics.
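The structural metrics mentioned above can be computed straightforwardly on a feature tree. This is an illustrative sketch over a tiny made-up tree, not one of the studied models (which are orders of magnitude larger):

```python
# Toy feature tree: each feature maps to its list of child features.
feature_tree = {
    "kernel": ["net", "fs"],
    "net": ["ipv4", "ipv6"],
    "fs": ["ext4"],
    "ipv4": [], "ipv6": [], "ext4": [],
}

def size(tree):
    """Total number of features in the model."""
    return len(tree)

def depth(tree, root):
    """Length of the longest root-to-leaf path, counting features."""
    children = tree[root]
    return 1 + (max(depth(tree, c) for c in children) if children else 0)

def width(tree, root):
    """Maximum number of features on any single level of the tree."""
    level, widest = [root], 1
    while level:
        widest = max(widest, len(level))
        level = [c for f in level for c in tree[f]]  # next level down
    return widest
```

For this toy tree, size is 6, depth is 3, and width is 3 (the leaf level `ipv4`, `ipv6`, `ext4`).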

A highly automated approach based on dynamic analysis for understanding how a concept of interest (e.g., context menu) is implemented in example applications of an object-oriented application framework (e.g., Eclipse JFace).

A framework-specific modeling language (FSML) is a language designed for a particular framework; it is used for expressing how applications use that framework. We built four exemplar FSMLs for Java Applet, Apache Struts 1.x, Eclipse Workbench, and EJB 3.0. FSMLs support five use cases: framework API understanding, completion code understanding and analysis, creation, migration, and evolution.

A metamodel of an FSML defines framework API concepts in some scope, together with their features and a mapping between the features and the code patterns that implement them. Such a language definition is interpreted by our generic FSML infrastructure, which supports reverse-, forward-, and round-trip engineering, framework-specific code completion, and framework-specific (code) quick fixes. We also created an FSML engineering method for building new FSMLs.

A tool demonstration is available.

The aim of this project is to improve the empirical understanding of variability-modeling practices in industry. We conduct case studies and surveys with industry to obtain an overview of variability-modeling solutions (notations, tools, models) and to understand successful and failed practices in industrial companies engineering software product lines.

Current object-oriented applications depend heavily on third-party Application Programming Interfaces (APIs). Developers often need to migrate their applications across competing APIs for the same domain, usually seeking better designs, functionality, or performance. Independently developed APIs may agree on the overall semantic model at some level of abstraction, but they often differ in many details. The term API mismatch refers to the challenge of migrating across two such APIs. The main goals of this project are to devise a method for migrating applications across APIs and to develop techniques to automate and guide the execution of the migration.

Empirical Assessment of Product-Line Migration Strategies in Industry

Project Overview

Software product lines (SPLs) are portfolios of products that address a variety of requirements for different customers or market segments. When such a product line comprises many products, dedicated techniques and processes have to be applied, which is known as Software Product Line Engineering (SPLE). SPLE promises many advantages, such as shorter time-to-market and lower redundancy among products, by establishing an integrated platform from which the individual portfolio products can be derived efficiently. SPLE is increasingly adopted in industry. However, many companies still face challenges with SPLE and are hesitant to migrate to an integrated SPL platform, given the high investments necessary as well as the lack of detailed migration processes and of information about the required technical effort.

Project Goal 

In this project, our goal is to collect and compare the experiences of companies that have successfully migrated to an SPL or that are currently in the migration process. This will be done through interviews with architects and engineers from various companies. Our focus is on the technical details of the migration, such as the identification of variability in existing products, including details of source-code diffing strategies; the modeling of variability and identification of features; and the kind of refactoring needed to migrate products to an integrated platform. Examples of other details we strive to analyze include version-control strategies and product-generation techniques.

Project Outcome and Benefits

The outcome of this project would be a set of strategies that can be applied at different phases of the migration. For example, we would identify which diff tools can be used to compare existing products and how the identified differences can be mapped to features. Additionally, we would report the challenges and problems faced in practice and discuss possible solutions based on our participants' experiences. Such information would provide guidelines for new companies migrating to SPLs and allow companies that have already migrated to further improve their processes and SPL implementations. More specifically, by participating in this study, companies can:

  • Compare their practices with those of other companies.
  • Compare their migration and implementation techniques to the state of the art.
  • Receive a report with the results of this study.

Scope of Interview Questions

The following are examples of questions we would ask during an interview:

  • What programming language(s) are used?
  • What would a product delivered to the customer consist of (code, binary, other artifacts, etc.)?
  • Before the migration, were multiple product variants developed? If so, how were product differences documented?
  • Is the SPL based on existing products?
  • How were the implementations of these products analyzed? Were specific tools used?
  • Were all product differences considered as features?
  • How were the existing implementation(s) refactored to a product line (through annotations, creating a new architecture, etc.)?
  • How is a product “generated” using the new SPL?

Team Members


Related Former Publications

This work follows up on our previous studies of variability modeling, particularly in the systems software domain. Specifically:


This project aims to automatically extract configuration constraints from C code, to facilitate reverse-engineering and consistency checking of variability models in highly configurable systems.
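One simple source of such constraints is preprocessor nesting: code guarded by an inner `#ifdef` is only compiled when the outer macro is enabled, suggesting a dependency between the two features. The sketch below implements only this one rule for straight-line `#ifdef`/`#endif` nesting; it is a hypothetical illustration, and the actual project analyzes far richer C patterns (build files, `#if` expressions, code structure):

```python
def extract_nesting_constraints(c_source):
    """Return (child, parent) pairs: child's code is only compiled
    when parent is defined, i.e., 'child depends on parent'."""
    stack, constraints = [], set()
    for line in c_source.splitlines():
        line = line.strip()
        if line.startswith("#ifdef"):
            macro = line.split()[1]
            if stack:  # nested: inner macro only matters under the outer one
                constraints.add((macro, stack[-1]))
            stack.append(macro)
        elif line.startswith("#endif") and stack:
            stack.pop()
    return constraints

code = """
#ifdef CONFIG_NET
#ifdef CONFIG_WIRELESS
void wifi_init(void) {}
#endif
#endif
"""
constraints = extract_nesting_constraints(code)
# yields {("CONFIG_WIRELESS", "CONFIG_NET")}
```

The extracted pairs can then be checked against the variability model: if the model does not state that `CONFIG_WIRELESS` depends on `CONFIG_NET`, that is a candidate inconsistency.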

This project explores optimization problems taken from existing literature or inspired by industrial problems. These include:

1) Server allocation problems: given a set of services (e.g., Mail, Calendar, Search), each with given requirements (e.g., CPU and memory), and a set of machines (servers) with resource limits (provided CPU and memory), the task is to distribute the services among the machines so that all requirements and resource limits are satisfied and the number of machines used is minimized.
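A brute-force version of this problem fits in a few lines; the services, requirements, and capacities below are made-up illustrative data, and real instances would use a solver rather than exhaustive search:

```python
from itertools import product

# (cpu, memory) requirement per service, and capacity per machine.
services = {"mail": (2, 4), "calendar": (1, 2), "search": (3, 6)}
machines = [(4, 8), (4, 8)]

def best_allocation():
    """Try every assignment of services to machines; among the feasible
    ones, return (machines_used, assignment) minimizing machines used."""
    best = None
    names = list(services)
    for assign in product(range(len(machines)), repeat=len(names)):
        load = [[0, 0] for _ in machines]
        for svc, m in zip(names, assign):
            load[m][0] += services[svc][0]
            load[m][1] += services[svc][1]
        feasible = all(l[0] <= c[0] and l[1] <= c[1]
                       for l, c in zip(load, machines))
        if feasible:
            used = len(set(assign))
            if best is None or used < best[0]:
                best = (used, dict(zip(names, assign)))
    return best

best = best_allocation()
```

For this instance the total CPU demand (6) exceeds any single machine's capacity (4), so the optimum uses two machines, e.g., Search alone on one machine and Mail plus Calendar on the other.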

The Network for the Engineering of Complex Software-Intensive Systems for Automotive Systems (NECSIS) is a research network that tackles the obstacles to model-driven engineering (MDE) and develops capabilities leading to the next generation of MDE methods and tools. This project, Feature-Oriented Modeling and Analysis, groups the activities of GSD Lab members within NECSIS Theme 3: Uncertainty, Adaptability, and Variability.

Development of solutions for maintaining traceability among BPMN models, from high-level business specifications to executable implementations, enabling impact analysis and the generation of fixing actions during concurrent editing.

This project aims at facilitating language composition and notational diversity using projectional language workbenches.

We completed a consulting engagement with a company, X, entitled The Requirements Engineering Practices and Tool Support at X, in which we identified the top 10 challenges with the requirements engineering practices faced by X and presented a prioritized list of tool features desired by analysts, developers, and quality-assurance staff. During the study, we identified over 700 statements (codes) made by over 40 participants across five focus groups and 18 interviews.

Our paper is available here. An implementation of our feature model synthesis is available on BitBucket.

The Linux Variability Model used in our evaluation is available as a CNF formula in DIMACS format.
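For reference, DIMACS CNF is a plain-text encoding of a propositional formula: a header line gives the variable and clause counts, each subsequent line is one clause of signed variable numbers terminated by `0`, and `c` lines are comments. A toy example (illustrative only; the Linux model has thousands of variables):

```text
c toy feature model as CNF:
c clause 1 encodes "feature 2 requires feature 1" (NOT x2 OR x1),
c clause 2 encodes "feature 1 or feature 3 must be selected"
p cnf 3 2
-2 1 0
1 3 0
```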

Support for software configuration is gaining importance. Large modern reusable software such as platforms or product lines often have a vast number of configuration settings that need to be specified in order to derive a running system. Examples of highly configurable systems include the Linux kernel, Eclipse, and eCos. They are supported by configuration tools: Linux Kconfig, Eclipse Yoxos, and the eCos configuration tool, respectively. We have been studying the configuration models of these systems in a related project on "Feature Models in the Wild".

The aim of this project is to understand what problems users face during configuration and to provide corresponding support in the configuration tool. Our current focus is to understand and support conflict resolution in the Linux Kconfig and eCos configurators.

In our latest progress, we have introduced priorities to guide the error-fixing process. The corresponding paper has been submitted for review, and a technical report has been published. If you are a reviewer of our paper, please see our technical report page for the pseudocode and the experimental results.

As part of our effort to further understand and improve industrial software product line practices, we are conducting a study on the culture of artifact cloning for product lines.

Many software systems provide configuration options for users to tailor their functional behavior as well as non-functional properties (e.g., performance, cost, and energy consumption). Configuration options relevant to users are often called features. Each variant derived from a configurable software system can be represented as a selection of features, called a configuration.

Performance (e.g., response time or throughput) is one of the most important non-functional properties, because it directly affects user perception and cost. To find an optimal configuration to meet a specific performance goal, it is crucial for developers and IT administrators to understand the correlation between feature selections and performance.

We investigate a practical approach that mines such a correlation from a sample of measured configurations, specifies the correlation as an explicit performance prediction model, and then uses the model to predict the performance of other unmeasured configurations.
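In the simplest case, such a prediction model is linear in the features: a base performance plus one learned influence per enabled feature. The sketch below uses synthetic numbers and a deliberately trivial fitting scheme (each sample toggles at most one feature); real performance-influence models are learned from broader samples and also capture feature interactions:

```python
# (feature vector [compression, encryption], measured response time in ms)
measurements = [
    ([0, 0], 100.0),
    ([1, 0], 130.0),
    ([0, 1], 150.0),
]

def fit_linear(samples):
    """Fit perf = base + sum(w_i * f_i) from samples in which each
    configuration enables at most one feature."""
    base = next(p for f, p in samples if not any(f))
    weights = [0.0] * len(samples[0][0])
    for f, p in samples:
        if sum(f) == 1:  # exactly one feature on: its influence is p - base
            weights[f.index(1)] = p - base
    return base, weights

def predict(base, weights, config):
    """Predict performance of an unmeasured configuration."""
    return base + sum(w * f for w, f in zip(weights, config))

base, w = fit_linear(measurements)
# Predict the unmeasured configuration with both features enabled:
est = predict(base, w, [1, 1])  # 100 + 30 + 50 = 180.0
```

The value of the model lies exactly in this last step: configurations that were never measured (here, both features enabled) get an estimate from the mined correlation.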

More details, implementation code, and experimental data are available on an open-source project website: