This lab is quite diverse. It comprises a group of really smart people from all over the world, each with a different background, working in a friendly and welcoming environment. Being part of it has been a truly enriching experience.
Clafer Evaluation
Evaluation of a language is an important part of research: it helps to verify the correctness of a solution, assess its efficiency, and investigate its expressiveness. Recently, we evaluated Clafer on over 60 realistic examples of feature models, meta-models, and feature-based templates. For each model, we performed instance-finding analysis, since many other analyses can be reduced to this problem. For slices of meta-models and feature-based templates, we additionally verified element liveness, another example of a non-trivial analysis. We consider our results promising: we obtained acceptable timings (within seconds) without fully exploiting the potential of Alloy.
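To give a concrete feel for these analyses, here is a toy Clafer feature model (a sketch of my own, not one of the models from the evaluation set). Instance finding asks whether valid configurations of the model exist and enumerates them; element liveness asks, for each element, whether at least one configuration includes it.

    // A toy feature model of a car (illustrative only)
    Car
        xor Transmission    // exactly one of the children must be selected
            Manual
            Automatic
        ABS ?               // optional feature

    // [ no ABS ]           // enabling this constraint would make ABS a dead
                            // feature, which a liveness check would report

An instance generator, working on the Alloy translation of the model, returns configurations such as {Car, Transmission, Manual} or {Car, Transmission, Automatic, ABS}; with the extra constraint enabled, no configuration can include ABS, so a liveness analysis would flag it as dead.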
The evaluation was also valuable for the language design: it gave us extra insight into useful syntactic constructs, and we learned more about encoding various classes of models in Clafer. More importantly, we now have a better understanding of how to perform automatic analyses of large models and how to support them in the Clafer infrastructure. In short, this evaluation will have a big impact on Clafer and our future work.