Benoit Verhaeghe

7 posts by Benoit Verhaeghe

First look at GitProjectHealth

When it comes to understanding a software system, we often focus on the software artifact itself. What are the classes? How are they connected with each other?

In addition to this analysis of the system itself, it can be interesting to explore how the system evolves through time. To do so, we can exploit its git history. In Moose, we developed the GitProjectHealth project, which enables the analysis of the git history of projects hosted on GitHub, GitLab, or BitBucket. The project also comes with a set of metrics one can use directly.

GitProjectHealth is available in the latest version of Moose. It can be easily installed using a Metacello script in a playground.

Metacello new
    repository: 'github://moosetechnology/GitProjectHealth:main/src';
    baseline: 'GitLabHealth';
    onConflict: [ :ex | ex useIncoming ];
    onUpgrade: [ :ex | ex useIncoming ];
    onDowngrade: [ :ex | ex useLoaded ];
    load

For this first blog post, we will experiment with GitProjectHealth on the Famix project. Since this project is hosted on GitHub, we first create a GitHub token that gives GitProjectHealth the necessary authorization.

Then, we import the moosetechnology group (that hosts the Famix project).

glhModel := GLHModel new.

githubImporter := GithubModelImporter new
    glhModel: glhModel;
    privateToken: '<private token>';
    yourself.

githubImporter withCommitsSince: (Date today - 100 days).
group := githubImporter importGroup: 'moosetechnology'.

This first step gives us initial information about the projects. For instance, by inspecting the group, we can select the “Group quality” view and see the group’s projects and the last status of their pipelines.

Group Quality view for moosetechnology

Then, by navigating to the Famix project and its repository, you can view the Commits History.

Commits History of the Famix repository

It is also possible to explore the recent commit distribution by date and author.

commit distribution.

In this visualization, we discover that the most recent contributors are “Clotilde Toullec” and “CyrilFerlicot”. The “nil” refers to commit authors who did not fill in their email on GitHub. It is anquetil (probably the same person as “Nicolas Anquetil”). The square without a name is probably someone who did not correctly set the username in their local git config.

A popular metric when looking at git history is code churn. Code churn refers to the rework of recently introduced code: it corresponds to the percentage of code introduced in a commit and then modified by other commits during a given time period (e.g., the following week). Note, however, that many definitions of code churn exist.
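As a rough, hand-computed illustration of that definition (a sketch only; this is not the formula implemented in GitProjectHealth):

```smalltalk
"Toy numbers: a commit introduces 120 lines; within the chosen period,
30 of those lines are modified again by later commits."
| introduced rewritten churn |
introduced := 120.
rewritten := 30.
churn := rewritten / introduced * 100.
"churn = 25, i.e. 25% of the introduced code was reworked during the period"
```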

The first step is thus to discover which commits modified my code. To do so, we added diff information to commits in GitProjectHealth.

To extract this information, we first ask GitProjectHealth to extract more information for the commits of the Famix project.

famix := group projects detect: [ :project | project name = 'Famix' ].

"I want to go deeper in the analysis of the Famix repository, so I complete the commit import of this project"
githubImporter withCommitDiffs: true.
famix repository commits do: [ :commit | githubImporter completeImportedCommit: commit ].

Then, when inspecting a commit, it is possible to switch to the “Commits tree” view.

Commit Tree

Here is how to read the above example:

  • The orange square “Remove TClassWithVisibility…” is the inspected commit.
  • The gray square is the parent commit of the inspected one.
  • The red squares are subsequent commits that modify at least one file in common with the inspected commit.
  • The green squares are commits that modify other parts of the code.

Based on this example, we see that Clotilde Toullec modified code introduced in the inspected commit in three subsequent commits. Two of them are merged pull requests. This can represent linked work, or at least actions on the same module of the application.

Can we go deeper in the analysis?

It is possible to go even deeper by connecting GitProjectHealth with other analyses. This is done by connecting meta-models. For instance, it is possible to link GitProjectHealth with a Jira system, or with Famix models. You can look at the first general documentation, or stay tuned for the next blog post about GitProjectHealth!

Test your Moose code using CIs

You have to test your code!

I mean, really.

But sometimes, testing is hard, because you do not know how to start (often because it was hard to start with TDD or better XtremTDD 😄).

One challenging situation is the creation of mocks to represent real cases and use them as test resources. This situation is common when dealing with code modeling and meta-modeling.

Writing a model manually to test features on it is hard. Today, I’ll show you how to use GitHub Actions as well as GitLab CI to create tests for the Moose platform based on real resources.


First of all, let’s describe a simple process when working on modeling and meta-modeling.

Source Code → Parse → Model File → Import → Model in Memory → Use

When analyzing a software system using MDE, everything starts with parsing the source code of the application to produce a model. This model can then be stored in a file. Then, we import the file into our analysis environment, and we use the concrete model.

All these steps are performed before using the model. However, when we create tests for the Use step, we do not perform all the steps before. We likely just create a mock model. Even if this situation is acceptable, it is troublesome because it disconnects the test from the tools (which can have bugs) that create the model.

One solution is thus not to create a mock model, but to create mock source code files.

Using mock source code files, we can reproduce the process for each test (or better, for a group of tests 😉).

Mock Source Code → Parse with Docker → Model File → Import with script → Model in Memory → Test

In the following, I describe the implementation and set-up of the approach for analyzing Java code, using Pharo with Moose. It consists of the following steps:

  • Create mock resources
  • Create a bridge from your Pharo image to your resources using PharoBridge
  • Create a GitLab CI or a GitHub Action
  • Test ❤️

The first step is to create mock resources. To do so, the easiest way is to include them in your git repository.

You should have the following:

> ci // Code executed by the CI
> src // Source code files
> tests // Test resources

Inside the tests folder, it is possible to add several subfolders for different test resources.
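For instance, with one subfolder per test scenario (the names below are only an example):

```
> tests
>   > firstScenario // resources for a first group of tests
>   > secondScenario // resources for another group of tests
```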

To easily use the folder of the test resource repository from Pharo, we will use the GitBridge project.

The project can be added to your Pharo Baseline with the following code fragment:

spec
    baseline: 'GitBridge'
    with: [ spec repository: 'github://jecisc/GitBridge:v1.x.x/src' ].

Then, to connect our Pharo project to the test resources, we create a class in one of our packages as a subclass of `GitBridge`.

A full example would be as follows:

Class {
    #name : #MyBridge,
    #superclass : #GitBridge,
    #category : #'MyPackage-Bridge'
}

{ #category : #initialization }
MyBridge class >> initialize [
    SessionManager default registerSystemClassNamed: self name
]

{ #category : #'accessing' }
MyBridge class >> testsResources [
    ^ self root / 'tests'
]

The method testsResources can then be used to access the local folder with the test resources.
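For instance, a test could read one of the mock files through the bridge (a sketch; `MyMock.java` is a hypothetical resource placed under the tests folder):

```smalltalk
"Read the content of a mock source file from the test resources folder."
source := (MyBridge testsResources / 'MyMock.java')
    readStreamDo: [ :stream | stream upToEnd ].
```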

Warning: this setup only works locally. To use it with GitHub and GitLab, we first have to set up our CI files.

To set up our CI files, we first create in the ci folder of our repository a pretesting.st file that will execute Pharo code.

(IceRepositoryCreator new
    location: '.' asFileReference;
    subdirectory: 'src';
    createRepository) register

This code will be run by the CI and register the Pharo project inside the Iceberg tool of Pharo. This registration is then used by GitBridge to retrieve the location of the test resources folder.

Then, we have to update the .smalltalk.ston file (used by every Smalltalk CI process) and add a reference to our pretesting.st file.

SmalltalkCISpec {
    #preTesting : SCICustomScript {
        #path : 'ci/pretesting.st'
    }
    ...
}

The last step for GitLab is the creation of the .gitlab-ci.yml file.

This CI can include several steps. We now present the steps dedicated to testing the Java model, but the same steps apply to other programming languages.

First, we have to parse the test resources using the Docker version of VerveineJ.

stages:
  - parse
  - tests

parse:
  stage: parse
  image:
    name: badetitou/verveinej:v3.0.0
    entrypoint: [""]
  needs:
    - job: install
      artifacts: true
  script:
    - /VerveineJ-3.0.0/verveinej.sh -Xmx8g -Xms8g -- -format json -o output.json -alllocals -anchor assoc -autocp ./tests/lib ./tests/src
  artifacts:
    paths:
      - output.json

The parse stage uses v3 of VerveineJ, parses the code, and produces an output.json file containing the produced model.

Then, we add the usual tests stage of SmalltalkCI.

tests:
  stage: tests
  image: hpiswa/smalltalkci
  needs:
    - job: parse
      artifacts: true
  script:
    - smalltalkci -s "Moose64-10"

This stage creates a new Moose64-10 image and performs the CI based on the .smalltalk.ston configuration file.

For GitHub, the last step is the creation of the .github/workflows/test.yml file.

In addition to a common smalltalk-ci workflow, we have to configure the checkout step differently and add a step that parses the code.

For the checkout step, GitBridge (and more specifically Iceberg) needs the history of commits. Thus, we need to configure the checkout action to fetch the whole history.

- uses: actions/checkout@v3
  with:
    fetch-depth: '0'

Then, we can add a step that runs VerveineJ using its docker version.

- uses: addnab/docker-run-action@v3
  with:
    registry: hub.docker.io
    image: badetitou/verveinej:v3.0.0
    options: -v ${{ github.workspace }}:/src
    run: |
      cd tests
      /VerveineJ-3.0.0/verveinej.sh -format json -o output.json -alllocals -anchor assoc .
      cd ..

Note that before running VerveineJ, we change the working directory to the tests folder to better deal with source anchors of Moose.

You can find a full example in the FamixJavaModelUpdater repository.

The last step is to adapt your tests to use the model produced from the mock source. To do so, it is possible to replace the creation of the mock model with the loading of the generated model.

Here’s an example:

externalFamixClass := FamixJavaClass new
    name: 'ExternalFamixJavaClass';
    yourself.
externalFamixMethod := FamixJavaMethod new
    name: 'externalFamixJavaMethod';
    yourself.
externalFamixClass addMethod: externalFamixMethod.

myClass := FamixJavaClass new
    name: 'MyClass';
    yourself.
externalFamixMethod declaredType: myClass.

famixModel addAll: {
    externalFamixClass.
    externalFamixMethod.
    myClass }.

The above can be converted into the following:

FJMUBridge testsResources / 'output.json' readStreamDo: [ :stream |
famixModel importFromJSONStream: stream ].
famixModel rootFolder: FJMUBridge testsResources pathString.
externalFamixClass := famixModel allModelClasses detect: [ :c | c name = 'ExternalFamixJavaClass' ].
myClass := famixModel allModelClasses detect: [ :c | c name = 'MyClass' ].
externalFamixMethod := famixModel allModelMethods detect: [ :c | c name = 'externalFamixJavaMethod' ].

You can now test your code on a model generated the same way as a real-world model!

This solution admittedly slows down test performance. But it ensures that your mock model is well created, because it is created by the actual parser tool (importer).

A good testing practice is thus a mix of both solutions: classic tests in the analysis code, and full-scenario tests based on real resources.

Have fun testing your code now!

Thanks to C. Fuhrman for the typo fixes. 🍌

Migrating internationalization files

During my Ph.D. migration project, I considered the migration of several GUI aspects:

  • visual
  • behavioral
  • business

These elements are the main ones. When perfectly considered, you can migrate the front-end of any application. But we are still missing some other stuff 😄 For example, how do you migrate I18N files?

In this post, I’ll present how to build a simple migration tool to migrate I18N files from .properties (used by Java) to .json format (used by Angular).

First, let’s see our source and target.

As a source, I have several .properties files containing the I18N of a Java project. Each file has a set of key/value pairs and comments. For example, EditerMessages_fr.properties is as follows:

##########
# Page : Edit
##########
pageTitle=Editer
classerDemande=Demande
classerDiffusion=Diffusion
classerPar=Classer Par

And its Arabic version, EditerMessages_ar.properties:

#########
# Page : Editer
#########
pageTitle=تحرير
classerDemande=طلب
classerDiffusion=بث
classerPar=تصنيف حسب

As a target, I need only one JSON file per language. Thus, the file for the French translation looks like this:

{
  "EditerMessages" : {
    "classerDemande" : "Demande",
    "classerDiffusion" : "Diffusion",
    "classerPar" : "Classer Par",
    "pageTitle" : "Editer"
  }
}

And the Arabic version:

{
  "EditerMessages" : {
    "classerDemande" : "طلب",
    "classerDiffusion" : "بث",
    "classerPar" : "تصنيف حسب",
    "pageTitle" : "تحرير"
  }
}

To perform the transformation from .properties files to JSON, we will use MDE. The approach is divided into three main steps:

  1. Designing a meta-model representing internationalization
  2. Creating an importer of properties files
  3. Creating a JSON exporter

I18N files are simple. They consist of a set of key/value pairs. Each value is associated with a language, and each file can be associated with a namespace.

For example, in the introduction example, the namespace of all entries is “EditerMessages”.

I designed a meta-model to represent all those concepts:

meta-model

Once the meta-model is designed, we must create an importer that takes .properties files as input and produces a model.

To produce a model, I first looked for an existing .properties parser, without much success. Thus, I decided to create my own. Given a correctly formatted file, the parser provides me with the I18N entries. Then, by iterating over this collection, I build an I18N model.

To implement the parser, I used the PetitParser2 project. This project aims to ease the creation of new parsers.

First, I downloaded the latest version of Moose, and I installed PetitParser2 using the command provided in the repository Readme:

Metacello new
    baseline: 'PetitParser2';
    repository: 'github://kursjan/petitparser2';
    load.

In my Moose image, I created a new parser. To do so, I extended the PP2CompositeNode class.

PP2CompositeNode << #CS18NPropertiesParser
    slots: { };
    package: 'Casino-18N-Model-PropertyImporter'

Then, I defined the parsing rules. Using PetitParser2, each rule corresponds to a method.

First, start is the entry point.

start
    ^ pairs end

pairs parses the entries of the .properties files.

pairs
    ^ comment optional starLazy , pair , ((newline / comment) star , pair ==> [ :token | token second ]) star , (newline / comment) star ==> [ :token |
        ((OrderedCollection with: token second)
            addAll: token third;
            yourself) asArray ]

The first part of this method (before ==>) corresponds to the rule parsed. The second part (after ==>), to the production.

The first part tries to parse one or several comments. Then, it parses one pair followed by a list of comments, newlines, and pairs.

This parser is clearly not perfect and would require some improvement. Nevertheless, it does work for our context.

The second part produces a collection (i.e., a list) of the pairs.

Now that we can parse one file, we can build an I18N model. To do so, we first parse every .properties file. For each file, we extract the language and the namespace from the file name. Thus, EditerMessages_fr.properties is the file for the fr language and the EditerMessages namespace. Then, for each file entry, we instantiate an entry in our model, inside the namespace and with the correct language attached.
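The extraction of the namespace and language from a file name can be sketched as follows (a simplified stand-in, not the actual importer code):

```smalltalk
"Split 'EditerMessages_fr.properties' into its namespace and language parts."
| base parts |
base := 'EditerMessages_fr.properties' copyUpTo: $..   "'EditerMessages_fr'"
parts := base splitOn: $_.
parts first.    "'EditerMessages', the namespace"
parts second.   "'fr', the language short name"
```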

importString: aString
    (parser parse: aString) do: [ :keyValue |
        (self model allWithType: CS18NEntry) asOrderedCollection
            detect: [ :entry |
                "search for an existing key in the file"
                entry key name = keyValue key ]
            ifOne: [ :entry |
                "an entry already exists (in another language for instance)"
                entry addValue: ((self createInModel: CS18NValue)
                    name: keyValue value;
                    language: currentLanguage;
                    yourself) ]
            ifNone: [
                "no entry exists yet"
                (self createInModel: CS18NEntry)
                    namespace: currentNamespace;
                    key: ((self createInModel: CS18NKey)
                        name: keyValue key;
                        yourself);
                    addValue: ((self createInModel: CS18NValue)
                        name: keyValue value;
                        language: currentLanguage;
                        yourself);
                    yourself ] ]

After performing the import, we get a model with several entries for each namespace. Each entry has a key and several values, and each value is attached to a language.

To perform the JSON export, I used the NeoJSON project. NeoJSON allows one to create a custom encoder.

For the export, we first select a language. Then, we build a dictionary with all the namespaces:

rootDic := Dictionary new.
(model allWithType: CS18NNamespace)
    select: [ :namespace | namespace namespace isNil ]
    thenDo: [ :namespace | rootDic at: namespace name put: namespace ].

To export a namespace (i.e., a CS18NNamespace), I define a custom encoder:

writer for: CS18NNamespace customDo: [ :mapper |
    mapper encoder: [ :namespace |
        (self constructNamespace: namespace) asDictionary ] ].

constructNamespace: aNamespace
    | dic |
    dic := Dictionary new.
    aNamespace containables do: [ :containable |
        (containable isKindOf: CS18NNamespace)
            ifTrue: [ dic at: containable name put: (self constructNamespace: containable) ]
            ifFalse: [ "should be a CS18NEntry"
                dic
                    at: containable key name
                    put: (containable values
                        detect: [ :value | value language = language ]
                        ifOne: [ :value | value name ]
                        ifNone: [ '' ]) ] ].
    ^ dic

The custom encoder converts a Namespace into a dictionary mapping the entry keys to their values in the selected language.

Once my importer and exporter are designed, I can perform the migration. To do so, I use a little script. It creates an I18N model, imports the entries of several .properties files into the model, and exports the Arabic entries into a JSON file.

"Create a model"
i18nModel := CS18NModel new.

"Create an importer"
importer := CS18NPropertiesImporter new.
importer model: i18nModel.

"Import all entries from the <myProject> folder"
('D:\dev\myProject\' asFileReference allChildrenMatching: '*.properties') do: [ :fileRef |
    self record: fileRef absolutePath basename.
    importer importFile: fileRef ].

"Export the Arabic JSON I18N file"
'D:/myFile-ar.json' asFileReference writeStreamDo: [ :stream |
    CS18NPropertiesExporter new
        model: importer model;
        stream: stream;
        language: ((importer model allWithType: CS18NLanguage) detect: [ :lang | lang shortName = 'ar' ]);
        export ]

The meta-model, importer, and exporter are freely available on GitHub.

Automatic meta-model documentation generation

When you develop with Moose every day, you know how to create an excellent visualization of your meta-model. But you have to open a Pharo image, which is hard to share during a presentation. Often, we make one UML diagram of the meta-model, and then… we forget to update it. So, when sharing it with others, you have to explain that the UML is not quite correct, but that it is okay…

In my opinion, this is super bad. Thus, I decided to have a look at GitHub Actions to update my UML automatically.

In the following, I present how to configure GitHub Actions to add UML auto-generation. I use the Coaster project as an example. Please consider reading the blog post about using GitHub Actions with Pharo first.

The first step is to configure SmalltalkCI for your project. To use it, we need to create two files: .smalltalk.ston, and the GitHub actions: .github/workflows/ci.yml.

The .smalltalk.ston file is used to configure the CI. It is written in the STON file format and configures how to load and test the Pharo project. In our case, the Coaster project does not have tests 😱, so we specify that the CI does not fail even if no tests are run.

The final file can be found in the Coaster project.

SmalltalkCISpec {
    #loading : [
        SCIMetacelloLoadSpec {
            #baseline : 'Coaster',
            #directory : 'src',
            #load : [ 'default' ],
            #platforms : [ #pharo ],
            #onConflict : #useIncoming,
            #onUpgrade : #useIncoming
        }
    ],
    #testing : {
        #failOnZeroTests : false
    }
}

The second file, .github/workflows/ci.yml, is used by GitHub when running the CI. We describe in comments the main steps:

# Name of the project in the GitHub action panel
name: CI

# Execute the CI on push on the master branch
on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Use Moose 9 that includes our visualization tool
        smalltalk: [Moose64-9.0]
    name: ${{ matrix.smalltalk }}
    steps:
      # checkout the project
      - uses: actions/checkout@v2
      # Prepare the CI - download the correct VM :-)
      - uses: hpi-swa/setup-smalltalkCI@v1
        with:
          smalltalk-image: ${{ matrix.smalltalk }}
      # Use the CI - always better to run tests
      - run: smalltalkci -s ${{ matrix.smalltalk }}
        shell: bash
        timeout-minutes: 15

Once the main files are created, we can configure the CI to also create the UML file. To do so, we will use the PlantUML visualization tool.

We add a new step in the .github/workflows/ci.yml file. It consists of executing the FamixMMUMLDocumentor on the meta-model we want to document.

- name: Build meta-model plantuml image
  run: |
    $SMALLTALK_CI_VM $SMALLTALK_CI_IMAGE eval "'coaster.puml' asFileReference writeStreamDo: [ :stream | stream nextPutAll: (FamixMMUMLDocumentor new model: CCModel; beWithStub; generatePlantUMLModel) ]."

This new step creates the coaster.puml file in the $HOME folder of the GitHub action. Then, we use a new action that creates the coaster.png file.

- name: Generate Coaster PNG Diagrams
  uses: cloudbees/plantuml-github-action@master
  with:
    args: -v -tpng coaster.puml

Nice 😄, we have the png file generated by the GitHub action.

Finally, you can upload the UML png as an artifact of the GitHub action, or upload it somewhere else. Here, I present how to publish it to a new branch of your repository. Then, we will see how to show it in the Readme of the main branch.

The goal of this step is to automatically update the documentation for end-users.

First, we create a new directory where we put the UML png file.

- name: Move artifact
  run: |
    mkdir doc-uml
    mv *.png doc-uml

Then, we configure this directory as a new git repository.

- name: Init new repo in doc-uml folder and commit generated files
  run: |
    cd doc-uml/
    git init
    git add -A
    git config --local user.email "action@github.com"
    git config --local user.name "GitHub Action"
    git commit -m 'update doc'

This new repository includes only the documentation we generated. The final step is to push this into a new branch of our project.

Because we do not care about the history of our meta-model UML files here, we will force push the repository. But it is possible to create smarter scripts.

To do so, we use the ad-m/github-push-action GitHub action.

# Careful, this can kill your project
- name: Force push to destination branch
uses: ad-m/github-push-action@v0.5.0
with:
# Token for the repo. Can be passed in using $\{{ secrets.GITHUB_TOKEN }}
github_token: ${{ secrets.GITHUB_TOKEN }}
force: true
# Destination branch to push changes
branch: v1/doc
# We need to push from the folder where files were generated.
# Same as where the new repo was initialized in the previous step
directory: ./doc-uml

BE CAREFUL: if you incorrectly set the branch argument, you might delete your project.

When used, this action pushes the UML files to the v1/doc branch. The v1/doc branch of the Coaster project is created here.

Finally, we add the image of the UML files in the Readme of the main project. For the Coaster project, we modified the Readme and added:

![Coaster meta-model png](https://raw.githubusercontent.com/badetitou/CoastersCollector/v1/doc/coaster.png)

The URL follows the following pattern: https://raw.githubusercontent.com/:owner:/:repo:/:branch:/:file:. The final .github/workflows/ci.yml file is here.

That’s it 😄 Now, at every commit, the CI will update the png files used in the Readme of the project, and thus, the documentation is always up-to-date.

Connecting/Extending meta-models

Sometimes, a meta-model does not have all the information you want. Or, you want to connect it with another one. A classic example is linking a meta-model at an abstract level to a more concrete meta-model.

In this blog post, I will show you how to extend and connect a meta-model with another (or reuse a pre-existing meta-model into your own meta-model). We will use the Coaster example.

The Coaster meta-model is super great (I know… it is mine 😄). Using it, one can manage a collection of coasters.

However, did you notice that there is only one kind of Creator possible: the brewery? This is not great, because some coasters are created not by breweries but for events. My model cannot represent this situation. So, there are two possibilities: I can fix my meta-model, or I can extend it with a new concept. Here, we will see how to extend it.

Extended Coaster meta-model

As presented in the above figure, we add the Events concept as a kind of Creator.

As a first step, we need the original Coaster meta-model generator loaded in our image. We can download it from my Coaster GitHub repository.

You should have a named CoasterCollectorMetamodelGenerator in your image. This is the generator of the original meta-model. We will now create another generator reusing the original one.

First, we create a new generator for our extended meta-model.

FamixMetamodelGenerator subclass: #CoasterExtendedMetamodelGenerator
    instanceVariableNames: ''
    classVariableNames: ''
    package: 'CoasterCollector-ExtentedModel-Generator'

Then, we link our generator with the original one. To do so, we use the submetamodels feature of the generator. We only have to implement the #submetamodels method on the class side of our new generator. This method should return an array containing the generators of the submetamodels we want to reuse.

CoasterExtendedMetamodelGenerator class >> #submetamodels
    ^ { CoasterCollectorMetamodelGenerator }

Finally, as for a classic meta-model generator, we define a package name and a prefix.

CoasterExtendedMetamodelGenerator class >> #packageName
    ^ #'CoasterExtended-Model'

CoasterExtendedMetamodelGenerator class >> #prefix
    ^ #'CCE'

Creating new concepts in the extended meta-model follows the same approach as for a classic meta-model generator. In our example, we add the Event class. Thus, we create the method #defineClasses with the new entity.

CoasterExtendedMetamodelGenerator >> #defineClasses
    super defineClasses.
    event := builder newClassNamed: #Event.

To extend the original meta-model, we first need to identify the entities of the original meta-model we will extend. In our case, we only extend the Creator entity. Thus, we declare it in the #defineClasses method using the method #remoteEntity:withPrefix:. The prefix allows us to distinguish entities coming from different submetamodels but having the same name.

CoasterExtendedMetamodelGenerator >> #defineClasses
    super defineClasses.
    event := builder newClassNamed: #Event.

    "Remote entities"
    creator := self remoteEntity: #Creator withPrefix: #CC

Note that we refer to a remote entity by sending #remoteEntity:withPrefix: to self, not to the builder. Indeed, the entity is already created.

Once the declaration is done, one can use the remote entities like classic entities in the new generator. In our example, we create the hierarchy between Creator and Event.

CoasterExtendedMetamodelGenerator >> #defineHierarchy
    super defineHierarchy.
    event --|> creator

Once everything is defined, as for a classic generator, we generate the meta-model by executing the following in a playground:

CoasterExtendedMetamodelGenerator generate

The generation creates a new package with the Event entity. It also generates a class named CCEModel used to create an instance of our extended meta-model.

It is now possible to use the new meta-model with the Event concept. For instance, one can perform the following script in a playground to create a little model.

myExtendedModel := CCEModel new.
myExtendedModel add: (CCBrewery new name: 'Badetitou'; yourself).
myExtendedModel add: (CCEEvent new name: 'Beer party'; yourself)

We saw that one can extend a meta-model by creating a new one based on the pre-existing entities. It is also possible to connect two existing meta-models together.

To do so, let’s assume we have two existing meta-models to connect. As an example, we will connect our coaster meta-model with the world meta-model. The world meta-model aims to represent the world, with its continents, countries, regions, and cities.

We will not detail how to implement the world meta-model. But the generator is available in my GitHub repository. The figure below illustrates the meta-model.

World meta-model

Connecting world meta-model with Coaster meta-model


Our goal is to connect the coaster meta-model with the world meta-model. To do so, we will connect the country concepts of each meta-model.

Connected meta-model

As a first step, you should install both the coaster meta-model and the world meta-model. Again, both are available in my GitHub repository.

Then, we create a new meta-model generator that will perform the connection.

FamixMetamodelGenerator subclass: #ConnectMetamodelGenerator
    instanceVariableNames: ''
    classVariableNames: ''
    package: 'Connect-Model-Generator'

To connect the two meta-models, we must first declare them in our connector meta-model. To do so, we define the #submetamodels method.

ConnectMetamodelGenerator class >> #submetamodels
    ^ { WorldMetamodelGenerator . CoasterCollectorMetamodelGenerator }

And, as for every meta-model generator, we define a prefix and a package name.

ConnectMetamodelGenerator class >> #packageName
    ^ #'Connect-Model'

ConnectMetamodelGenerator class >> #prefix
    ^ #'CM'

Before creating the connection, we must declare, in the new meta-model, the entities that will be connected. To do so, we declare them as remote entities.

ConnectMetamodelGenerator >> #defineClasses
    super defineClasses.
    coasterCountry := self remoteEntity: #Country withPrefix: #CC.
    worldCountry := self remoteEntity: #Country withPrefix: #W

Then, it is possible to connect the two entities as classic ones.

ConnectMetamodelGenerator >> #defineRelations
    super defineRelations.
    coasterCountry - worldCountry

Build a model with two connected submetamodels


Once the generator is created, we can generate the connection by generating the new meta-model. To do so, execute in a playground:

ConnectMetamodelGenerator generate

Then, it is possible to create a model with all the entities and to link the two meta-models. In the following, we present a script that creates such a model.

"create the entities"
coaster1 := CCCoaster new.
coaster2 := CCCoaster new.
coaster3 := CCCoaster new.
coasterFranceCountry := CCCountry new name: #'France'; yourself.
coasterFranceCountry addCoaster: coaster1.
coasterFranceCountry addCoaster: coaster2.
coasterGermanyCountry := CCCountry new name: #'Germany'; yourself.
coasterGermanyCountry addCoaster: coaster3.
wFranceCountry := WCountry new name: #'France'; yourself.
wGermanyCountry := WCountry new name: #'Germany'; yourself.
continent := WContinent new name: #Europe; yourself.
continent addCountry: wFranceCountry.
continent addCountry: wGermanyCountry.
"connect CCountries to WCountries"
coasterFranceCountry country: wFranceCountry.
coasterGermanyCountry country: wGermanyCountry.
"put all entities into the same model"
connectedModel := CMModel new.
connectedModel addAll:
{ coaster1. coaster2 . coaster3 .
coasterFranceCountry . coasterGermanyCountry .
wFranceCountry . wGermanyCountry . continent }.

Based on the preceding model, it is possible to write queries that span both the coaster and the world meta-models. For instance, the following snippet counts the number of coasters per country on the Europe continent:

europe := (connectedModel allWithType: WContinent)
detect: [ :continent | continent name = #Europe ].
(europe countries collect: [ :eCountry |
eCountry name -> eCountry country coasters size ]) asDictionary

The country: and country methods are accessors that set and retrieve the CCCountry (resp. WCountry) of a WCountry (resp. CCCountry). The accessor names are the same in both classes and were generated automatically from the declaration of the relationship in defineRelations (this is the normal behaviour of the generator, not specific to the use of sub-models).
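For instance (a sketch reusing the entities from the script above; Famix keeps both sides of a relation in sync, so setting one side should make the opposite accessor answer the expected entity):

```smalltalk
"navigate from the coaster meta-model to the world meta-model"
coasterFranceCountry country. "the WCountry named 'France'"

"and navigate back"
wFranceCountry country. "the CCCountry named 'France'"
```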

In this post, we saw how one can extend and connect meta-models using the Famix Generator. This feature is very helpful when you need to enrich a meta-model without modifying it directly. If you need more control over the generated entities (e.g., the names of the relations), please have a look at the create meta-model wiki page.

Coasters collection

I’m a coaster collector. I’m not a huge collector, but I want to inventory my coasters in one place. Sure, I could create a PostgreSQL database. But it turns out I can also design my collection using Moose.

So, you’re going to use a complete system analysis software to manage your coasters collection?

Exactly! And why? Because I think it’s simpler.

As for every software system, the first step is to design the model. In my case, I want to represent a collection of coasters. Let’s say a coaster is an entity. It can belong to a brewery or not (for example, event coasters). A coaster also has a shape: it can be round, square, oval, or something else. A coaster can also be specific to a country. Because it is a collection, I can register both coasters I own and coasters I do not. Finally, each coaster can have an associated image.

From this description of the problem, I designed my UML schema:

"coasters UML"

The most complicated part is done. We just need to implement the meta-model in Moose now 😄.

First of all, we’ll need a Moose 8 image. You can find everything you need to install Moose in the moose-wiki.

Ok! Let’s create a generator that will generate the meta-model for us. We only need to describe the meta-model in the generator. We will name this generator CoasterCollectorMetamodelGenerator.

FamixMetamodelGenerator subclass: #CoasterCollectorMetamodelGenerator
slots: { }
classVariables: { }
package: 'CoasterCollector-Model-Generator'

The generator needs two class-side methods for its configuration:

  • #packageName defines where the meta-model will be generated.
  • #prefix defines the prefix prepended to the name of each generated class.

We used for #packageName:

CoasterCollectorMetamodelGenerator class >> #packageName
^ #'CoasterCollector-Model'

We used for #prefix:

CoasterCollectorMetamodelGenerator class >> #prefix
^ #'CC'

Now, we have to define the entities, their properties, and their relations.

A meta-model is composed of entities. In our case, it corresponds to the entities identified in the UML. We use the method #defineClasses to define the entities of our meta-model.

CoasterCollectorMetamodelGenerator >> #defineClasses
super defineClasses.
coaster := builder newClassNamed: #Coaster.
country := builder newClassNamed: #Country.
shape := builder newClassNamed: #Shape.
round := builder newClassNamed: #Round.
square := builder newClassNamed: #Square.
oval := builder newClassNamed: #Oval.
creator := builder newClassNamed: #Creator.
brewery := builder newClassNamed: #Brewery

We also need to define the hierarchy of those entities:

CoasterCollectorMetamodelGenerator >> #defineHierarchy
super defineHierarchy.
brewery --|> creator.
oval --|> shape.
square --|> shape.
round --|> shape

Having defined the classes, we define the properties of the entities using the #defineProperties method:

CoasterCollectorMetamodelGenerator >> #defineProperties
super defineProperties.
creator property: #name type: #String.
country property: #name type: #String.
coaster property: #image type: #String.
coaster property: #owned type: #Boolean

In this example, we did not use the traits already available in Moose. However, instead of declaring #name properties, it is possible to use the trait TNamedEntity to state that countries and creators have a name.
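As a sketch (assuming that, as is usual in Famix generators, an existing Famix trait can be referenced by its symbol in the hierarchy definition), one could pull in the trait in #defineHierarchy and drop the two #name properties:

```smalltalk
CoasterCollectorMetamodelGenerator >> #defineHierarchy
	super defineHierarchy.
	"use the existing Famix trait instead of declaring #name properties"
	country --|> #TNamedEntity.
	creator --|> #TNamedEntity.
	brewery --|> creator.
	oval --|> shape.
	square --|> shape.
	round --|> shape
```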

Finally, we defined the relations between our entities:

CoasterCollectorMetamodelGenerator >> #defineRelations
super defineRelations.
(coaster property: #shape) *- (shape property: #coasters).
(coaster property: #country) *- (country property: #coasters).
(coaster property: #creator) *- (creator property: #coasters)

Once everything is defined, we only need to use the generator to build our meta-model.

CoasterCollectorMetamodelGenerator generate

The generation creates a new package with our entities. It also generates a model class (here, CCModel) used to create instances of our meta-model.

I have created my meta-model. Now I need to fill my collection. First of all, I will create a collection of coasters. To do so, I instantiate a model with: model := CCModel new. And now I can add the entities of my real collection in my model and I can explore it in Moose.

For example, to add a new brewery I execute: model add: (CCBrewery new name: 'Badetitou'; yourself).

The code is available on GitHub.

Once I have created the collection, I can save it using the Moose export formats (currently JSON and MSE). To do so, I execute the following snippet:

'/my/collection/model.json' asFileReference ensureCreateFile
writeStreamDo: [ :stream | model exportToJSONStream: stream ]

Then I can select where I want to export my model.

To import it back into an image, I use the following code:

'/my/collection/model.json' asFileReference
readStreamDo: [ :stream | model := CCModel importFromJSONStream: stream ]
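To check that the round trip worked, the imported model can be queried like any Moose model. For example (a sketch using the entities from the collection built earlier):

```smalltalk
(model allWithType: CCCoaster) size. "number of coasters in the collection"
(model allWithType: CCBrewery) anyOne name. "e.g. 'Badetitou'"
```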