Monday, April 26, 2010

EMF Facet, a new project for model customization and extensibility

How can existing Ecore definitions be dynamically extended and combined without having to modify them?

EMF Facet, a new project we have proposed to create under EMFT, will provide dynamic extension mechanisms for all EMF-based tools exposing a view on a model:
  • Navigation tools (browsers, navigators, etc.);
  • Graphical modeling tools;
  • Code or documentation generation (M2T) tools;
  • Model-to-model transformation (M2M) tools.

Our proposal provides a way to introduce new viewpoints (or "facets") on existing models:

  • Extending an existing metamodel (Ecore model) in a non-intrusive way by adding new types, attributes, operations and relations. New relations can be used to compose several models by linking their elements.
  • Computing an extension by executing queries against an existing model; queries will be implemented using existing query mechanisms (e.g. Java, ATL, EMF Query, XPath).
[Figure: the query "subClassifiers" exposed as a "virtual" relation on Class]
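To give an intuition of what such a "virtual" relation looks like, here is a minimal sketch in plain Java. It does not use the actual EMF Facet API (the `ModelClass` and `SubClassifiersQuery` names are made up for the example); it only illustrates the idea of a relation that is computed by a query on demand, rather than stored in the model:

```java
import java.util.ArrayList;
import java.util.List;

// A tiny in-memory stand-in for a metamodel: classes with a superclass link.
class ModelClass {
    final String name;
    final ModelClass superClass; // null for root classes

    ModelClass(String name, ModelClass superClass) {
        this.name = name;
        this.superClass = superClass;
    }
}

class SubClassifiersQuery {
    // The "virtual" subClassifiers relation: all classes whose superclass
    // chain contains the given class. Nothing is stored on ModelClass;
    // the relation is derived by running this query against the model.
    static List<ModelClass> subClassifiers(ModelClass target, List<ModelClass> allClasses) {
        List<ModelClass> result = new ArrayList<>();
        for (ModelClass c : allClasses) {
            for (ModelClass s = c.superClass; s != null; s = s.superClass) {
                if (s == target) {
                    result.add(c);
                    break;
                }
            }
        }
        return result;
    }
}
```

In EMF Facet, the same principle applies to real Ecore models: the query result is presented to tools as if it were an ordinary relation of the metamodel, without the metamodel ever being modified.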

Some of these mechanisms have already been developed in the MoDisco project. But, since they could be reused by tools not related to software modernization (the scope of the MoDisco project), we have decided to contribute the corresponding components to EMF Facet.

In a previous post, I have presented these components (Facet Manager and Query Manager).

In another post, I have also illustrated how Facets can be used to extend the Java metamodel to highlight JUnit concepts.


Thursday, April 22, 2010

Architecture-Driven Modernization Case-Studies


William Ulrich and Philip Newcomb have recently published "Information Systems Transformation, Architecture-Driven Modernization Case-Studies", a reference book on Software Modernization.

This book was written by two of the most active members of the Architecture-Driven Modernization task force, the OMG initiative which aims at defining standard specifications for the modernization of existing software systems. They first introduce Architecture-Driven Modernization technologies, standards and approaches. They then compile ten detailed case studies on real modernization projects in various business domains (banking, administration, tourism, air-traffic management, combat systems) and technologies (COBOL, VB6, PowerBuilder, ...).

With Gabriel Barbier (Mia-Software), Yves Lennon (Sodifrance/SoftMaint), Hugo Brunelière and Frédéric Jouault (INRIA/AtlanMod), we wrote one of these case studies.

In this chapter we first describe the modernization process and tools used by Sodifrance to migrate software systems. The approach was conceived and prototyped thanks to a collaboration established in 1993 between Sodifrance and Jean Bezivin, from the University of Nantes, who was working on the representation of existing software systems with sNets, a first-generation model engineering platform.

Sodifrance immediately used the sNets technology to develop a semantic discovery tool, named Semantor, to analyse any COBOL program and provide fine-grained information about its internal structure and data. This tool is still evolving and has been renamed Mia-Mining.

In parallel, at the end of 1998, based on the experience gained in the rebuilding of an insurance company's contract management system, where modelling and tailored code generators were successfully used, Sodifrance started developing its own model transformation technology. This work, in association with Jean Bezivin, who brought his knowledge of the early OMG work on MOF, gave birth to Mia-Studio, a model transformation tool for developing and running model-to-model transformation rules and model-to-text generation templates.

With these tools, Sodifrance has progressively built a toolchain which can be used on architecture migration projects to transform existing applications from client-server to n-tier and SOA architectures.




This chain is composed of three main steps:
  1. Extraction of a comprehensive model (the initial model) of the existing application from its assets (source code, configuration files, development repositories, etc.).
  2. Transformation of the model of the existing application into a comprehensive model of the target application (the target model).
  3. Generation of the source code of the target application from the model of the target application.
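The three steps above can be sketched, in a deliberately reduced form, as plain Java. This is not Mia-Studio or MoDisco code; every name and the toy "VB-like" input are invented for illustration, and real tools work on full abstract models rather than strings and maps:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy three-step migration chain: extract a model from "legacy" source,
// transform it into a target model, then generate target source code.
class MigrationChain {
    // Step 1: model discovery (here: property name -> VB-like type).
    static Map<String, String> extract(String legacySource) {
        Map<String, String> model = new LinkedHashMap<>();
        for (String line : legacySource.split("\n")) {
            String[] parts = line.trim().split(" As "); // e.g. "Dim total As Long"
            if (parts.length == 2) {
                model.put(parts[0].replace("Dim ", ""), parts[1]);
            }
        }
        return model;
    }

    // Step 2: model-to-model transformation (map source types to target types).
    static Map<String, String> transform(Map<String, String> initialModel) {
        Map<String, String> targetModel = new LinkedHashMap<>();
        initialModel.forEach((name, type) ->
            targetModel.put(name, type.equals("Long") ? "long" : "String"));
        return targetModel;
    }

    // Step 3: model-to-text generation (emit Java field declarations).
    static String generate(Map<String, String> targetModel) {
        StringBuilder out = new StringBuilder();
        targetModel.forEach((name, type) ->
            out.append("private ").append(type).append(" ").append(name).append(";\n"));
        return out.toString();
    }
}
```

The point of the decomposition is that each step works on a model, so the extraction, the transformation rules and the generation templates can evolve independently.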
To illustrate how this chain can be used, the chapter describes a project conducted by Sodifrance to migrate an application from VB6 to JEE for Amadeus Hospitality, a leading provider of IT solutions for the tourism and travel industry.

The initial application, named RMS (Revenue Management System), was developed in VB6 and performed queries on an Oracle database. It was composed of 300 screens, 200 of them displaying charts (pie charts, bar charts or line graphs). The application comprised 306,000 source lines of VB6 code in 216 classes and 261 modules.


The migration project was completed by Sodifrance in 1,600 man-days with ten engineers over a year. The transformation of all the VB6 code (data access, business rules and interface) was 80% automated, while the definition of the screens (Forms) was only 50% automated, due to the need to redesign them for the web. The new version of RMS is now composed of about 300,000 lines of code in 1,000 Java classes and 310 JSPs (JavaServer Pages).


In the last part of the chapter, we present MoDisco, the Eclipse project dedicated to software modernization. This project has been created by AtlanMod during Modelplex, a research project funded by the European Community.

Because legacy systems are widely heterogeneous in nature and technology, there are many different ways to extract models from them. MoDisco proposes a generic and extensible metamodel-driven approach to model discovery. A basic framework, including implementations of OMG standards such as KDM and SMM, and a set of guidelines are provided so that Eclipse contributors can bring their own solutions for discovering models in a variety of legacy systems.
One of the first industrial use cases of MoDisco has been the understanding of a large-scale, data-intensive geological system for WesternGeco, a geophysical services company.

Saturday, April 10, 2010

JEE, Flex and MDSD in Tunis



Last week I was in Tunis.

The weather was not as sunny as expected, but that was not the reason for my trip. I was there to set up an MDSD (Model-Driven Software Development) process for a Tunisian bank.

This bank plans to redevelop its Core Banking System with new technologies (JEE and Flex). In order to facilitate the development and bring both flexibility and quality to the future system, they have decided to adopt a Model-Driven approach.

My mission is to help the IT team put this approach in place.


Developing a generator for a customer is an activity which I usually decompose into four steps:


Development of a reference application

In every industrialization process, the first step consists of identifying the scope of what can be industrialized. In an MDSD process, the best way to define this scope is to manually develop a reference application: a subset of the future application which contains an example of each coding pattern.

During a first stay, a few weeks ago, we specified and started to develop a reference application based on a cash withdrawal scenario. Behind this scenario, we defined several services to invoke, the corresponding business objects (BO) and data transfer objects (DTO), and the existing Oracle tables and stored procedures which have to be reused. Based on the languages and frameworks selected by the customer (Flex with the Cairngorm framework, JEE with the Spring and Hibernate frameworks), we designed the reference application and defined the coding patterns to use.

This week, when I began my second stay, the customer had finished the development of the reference application, and it was running.

Identification of the Generation Scope


Once a reference application exists, the second step consists of analysing its source code to identify the variability factor of each line of code:
  • What is the minimal information required to be able to produce this line of code?
  • Is this information specific to the reference application, or is it generic?
  • Can we produce other lines of code with the same information?
  • What is the ratio between the effort to declare this information and the effort to manually produce all the corresponding code?
The answers to these questions, coupled with a discussion with the customer, help define the generation scope:
  • which code can be produced automatically?
  • which code needs to be developed manually?

Definition of Modeling Rules


Once the generation scope is identified, we need to define how the information required to generate the code can be expressed in a model. There are three possibilities:
  1. Defining a Profile in a UML Modeler
  2. Developing a Domain-Specific Modeler
  3. Developing a Domain-Specific Concrete Syntax
For my customer in Tunis, I proposed the first option and defined a UML profile containing an initial set of about 20 stereotypes (application, service, bo, vo, dao, table, ...). With this profile, I used MagicDraw (which provides very powerful extensibility and customization mechanisms), integrated in Eclipse, to create a model of the reference application.

Development of Generation Templates

The fourth step is the easiest: templates can be developed from the reference application by copy-pasting fragments of code. The generic parts of the fragments remain in the templates, while the variable parts are replaced by calls to the model (using the EMF APIs).
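To make the idea concrete, here is a minimal template sketch in plain Java. It is not Mia-Studio template syntax (real template engines are far richer, and the `${...}` placeholders, class and property names below are invented for the example); it only shows how the generic text stays literal while the variable parts come from the model:

```java
import java.util.Map;

// A toy generation template: generic code is literal text, variable parts
// are placeholders filled in from the properties of a model element.
class TemplateSketch {
    static final String TEMPLATE =
        "public class ${name}Service {\n" +
        "    public ${returnType} ${operation}() {\n" +
        "        // TODO: manual business code\n" +
        "        return null;\n" +
        "    }\n" +
        "}\n";

    // Replace each ${key} placeholder with the corresponding model value.
    static String apply(String template, Map<String, String> modelElement) {
        String result = template;
        for (Map.Entry<String, String> e : modelElement.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }
}
```

In a real MDSD chain, the map would be replaced by navigation over the EMF model of the application, and each stereotyped element would drive one or more templates.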

Last week, it took me one day to develop the Mia-Studio templates for the presentation layer. From the EMF model of the reference application, the templates regenerated 6 MXML files and 19 ActionScript files (Commands, Events, Service Delegates, Front Controller and Value Objects). The MXML files contain the graphical definition of the GUI: they will be generated only once, just to provide a first application which can be executed. Then they will be edited and maintained with a WYSIWYG designer.


The templates for the two other layers (Business and Data) will be developed with a colleague during a third stay in Tunis.

Then our role in the project will be to assist the team in modeling and developing the first application, and to adapt the MDSD process to integrate unforeseen cases.

Thursday, April 1, 2010

Eclipse & OMG Symposium


After Ottawa in 2008, the 2nd biennial symposium on Eclipse & OMG will be held in June in Minneapolis. The symposium will be introduced by Kenn Hussey, the leader of the Eclipse MDT project, which provides implementations and tools for standard metamodels. It is an opportunity to discuss how OMG standards can be implemented on Eclipse, and how Eclipse can influence the definition of new standards.


With Hugo Bruneliere and Jordi Cabot of the INRIA/AtlanMod team, I will present the Eclipse implementation of SMM (Structured Metrics Metamodel), one of the OMG standards supported by the MoDisco project. We will explain how this metamodel can be used to represent metrics computed from Java source code, and how it can help when developing Eclipse components, by measuring their compliance with Eclipse development best practices.

You can read three of my posts on this topic.
Talks on other standards will be given by people from the University of Madrid, NASA (JPL), IBM, CEA, Model Driven Solutions, Ericsson, Intalio, RedHat, Obeo and Thales.