In our lab, people rarely work alone: we collaborate extensively with each other, with external researchers, and with our industry partners.
Evaluating a language is an important part of research. It helps to verify the correctness of a solution, assess its efficiency, and investigate its expressiveness. Recently, we evaluated Clafer on over 60 realistic examples of feature models, meta-models, and feature-based templates. For each model, we performed instance finding, since many other analyses can be reduced to this problem. For slices of the meta-models and feature-based templates, we also verified element liveness, another example of a non-trivial analysis. We consider our results promising: we obtained acceptable analysis times (within seconds) without fully exploiting the potential of Alloy.
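To illustrate why instance finding is such a useful core analysis, here is a minimal sketch in Python of how element liveness reduces to it. The toy feature model, its constraints, and all names below are hypothetical illustrations, not one of the models from our evaluation; a real toolchain (such as Clafer translated to Alloy) would hand this job to a SAT-based solver instead of brute-force enumeration.

```python
from itertools import product

# Hypothetical toy feature model (illustration only).
FEATURES = ["car", "engine", "gas", "electric", "sunroof", "diesel"]

def valid(cfg):
    """Constraints of the toy model: 'car' is the mandatory root,
    'engine' is a mandatory child, 'gas'/'electric' form an xor-group,
    'sunroof' requires 'electric', and 'diesel' (deliberately dead)
    requires both 'gas' and 'electric' at once."""
    return (
        cfg["car"]
        and cfg["engine"] == cfg["car"]
        and (cfg["gas"] != cfg["electric"])
        and (not cfg["sunroof"] or cfg["electric"])
        and (not cfg["diesel"] or (cfg["gas"] and cfg["electric"]))
    )

def find_instance(extra=lambda cfg: True):
    """Instance finding: return any valid configuration satisfying an
    extra constraint, or None if no such configuration exists."""
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if valid(cfg) and extra(cfg):
            return cfg
    return None

def is_live(feature):
    """Liveness reduces to instance finding: an element is live iff
    some valid configuration selects it."""
    return find_instance(lambda cfg: cfg[feature]) is not None
```

With this reduction, `is_live("sunroof")` succeeds (some valid configuration includes it), while `is_live("diesel")` fails, exposing a dead element, and both queries are just instance-finding calls with one extra constraint.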
The evaluation also benefited the language design: it gave us additional insight into useful syntactic constructs, and we learned more about encoding various classes of models in Clafer. More importantly, we now have a better understanding of how to perform automatic analyses of large models and how to support them in the Clafer infrastructure. In short, this evaluation will have a significant impact on Clafer and our future work.