
Migrating internationalization files

During my Ph.D. migration project, I considered the migration of several GUI aspects:

  • visual
  • behavioral
  • business

These elements are the main ones: when they are properly handled, you can migrate the front end of any application. But we are still missing some other stuff 😄 For example, how do you migrate i18n files?

In this post, I’ll present how to build a simple migration tool to migrate i18n files from the .properties format (used by Java) to the .json format (used by Angular).

First, let’s see our source and target.

As a source, I have several .properties files containing the i18n of a Java project. Each file holds a set of key/value pairs and comments. For example, EditerMessages_fr.properties is as follows:

##########
# Page : Edit
##########
pageTitle=Editer
classerDemande=Demande
classerDiffusion=Diffusion
classerPar=Classer Par

And its Arabic version, EditerMessages_ar.properties:

#########
# Page : Editer
#########
pageTitle=تحرير
classerDemande=طلب
classerDiffusion=بث
classerPar=تصنيف حسب

As a target, I need only one JSON file per language. Thus, the file for the French translation looks like this:

{
  "EditerMessages" : {
    "classerDemande" : "Demande",
    "classerDiffusion" : "Diffusion",
    "classerPar" : "Classer Par",
    "pageTitle" : "Editer"
  }
}

And the Arabic version:

{
  "EditerMessages" : {
    "classerDemande" : "طلب",
    "classerDiffusion" : "بث",
    "classerPar" : "تصنيف حسب",
    "pageTitle" : "تحرير"
  }
}

To perform the transformation from .properties files to JSON, we will use MDE (Model-Driven Engineering). The approach is divided into three main steps:

  1. Designing a meta-model representing internationalization
  2. Creating an importer of properties files
  3. Creating a JSON exporter

i18n files are simple. They consist of a set of key/value pairs. Each value is associated with a language, and each file can be associated with a namespace.

In the introductory example, the namespace of all entries is “EditerMessages”.
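Before looking at the meta-model itself, these concepts can be sketched in plain Python. This is a hypothetical rendering for illustration only, not the Famix meta-model (where the concepts become entities such as CS18NNamespace, CS18NEntry, CS18NKey, and CS18NValue):

```python
from dataclasses import dataclass, field

# Illustrative sketch: a namespace contains entries; each entry has a key
# and one value per language.
@dataclass
class Entry:
    key: str
    values: dict = field(default_factory=dict)  # language -> translated text

@dataclass
class Namespace:
    name: str
    entries: list = field(default_factory=list)

ns = Namespace("EditerMessages")
ns.entries.append(Entry("pageTitle", {"fr": "Editer", "ar": "تحرير"}))
```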

I designed a meta-model to represent all those concepts:

meta-model

Once the meta-model is designed, we must create an importer that takes .properties files as input and produces a model.

To produce a model, I first looked for an existing .properties parser, without much success. Thus, I decided to create my own. Given a correctly formatted file, the parser provides the i18n entries. Then, by iterating over this collection, I build an i18n model.

To implement the parser, I used the PetitParser2 project. This project aims to ease the creation of new parsers.

First, I downloaded the latest version of Moose, and I installed PetitParser using the command provided in the repository’s Readme:

Metacello new
baseline: 'PetitParser2';
repository: 'github://kursjan/petitparser2';
load.

In my Moose Image, I created a new parser. To do so, I extended the PP2CompositeNode class.

PP2CompositeNode << #CS18NPropertiesParser
slots: { };
package: 'Casino-18N-Model-PropertyImporter'

Then, I defined the parsing rules. Using PetitParser2, each rule corresponds to a method.

First, start is the entry point.

start
^ pairs end

pairs parses the entries of the .properties files.

pairs
^ comment optional starLazy, pair , ((newline / comment) star , pair ==> [ :token | token second ]) star , (newline/comment) star ==> [ :token |
((OrderedCollection with: token second)
addAll: token third;
yourself) asArray ]

The first part of this method (before ==>) corresponds to the rule parsed. The second part (after ==>), to the production.

The first part tries to parse one or several comments. Then, it parses one pair followed by a list of comments, newlines, and pairs.

This parser is clearly not perfect and would require some improvement. Nevertheless, it does work for our context.

The second part produces a collection (i.e., a list) of the pairs.
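For readers who do not use PetitParser2, the same parsing logic can be sketched as a simple line-based parser in Python. This is an illustration of what the grammar recognizes, not the actual importer:

```python
def parse_properties(text):
    """Parse a .properties string into (key, value) pairs, skipping comments."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are ignored
        key, _, value = line.partition("=")
        pairs.append((key.strip(), value.strip()))
    return pairs

sample = """##########
# Page : Edit
##########
pageTitle=Editer
classerDemande=Demande"""

parse_properties(sample)
# -> [('pageTitle', 'Editer'), ('classerDemande', 'Demande')]
```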

Now that we can parse one file, we can build an i18n model. To do so, we first parse every .properties file. For each file, we extract the language and the namespace from the file name. Thus, EditerMessages_fr.properties is the file for the fr language and the EditerMessages namespace. Then, for each file entry, we instantiate an entry in our model, inside the namespace and with the correct language attached.

importString: aString
    (parser parse: aString) do: [ :keyValue |
        (self model allWithType: CS18NEntry) asOrderedCollection
            detect: [ :entry |
                "search for an existing key in the model"
                entry key name = keyValue key ]
            ifOne: [ :entry |
                "an entry already exists (in another language, for instance)"
                entry addValue: ((self createInModel: CS18NValue)
                    name: keyValue value;
                    language: currentLanguage;
                    yourself) ]
            ifNone: [
                "no entry exists yet"
                (self createInModel: CS18NEntry)
                    namespace: currentNamespace;
                    key: ((self createInModel: CS18NKey)
                        name: keyValue key;
                        yourself);
                    addValue: ((self createInModel: CS18NValue)
                        name: keyValue value;
                        language: currentLanguage;
                        yourself);
                    yourself ] ]

After performing the import, we get a model with several entries for each namespace. Each entry has a key and several values, and each value is attached to a language.
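The merging behavior of importString: can be sketched in Python; the names and the dictionary-based model below are illustrative, not the blog’s Smalltalk code:

```python
# Merge parsed pairs from several files into one model: each (namespace, key)
# maps to a dictionary holding one value per language.
def import_pairs(model, pairs, language, namespace):
    for key, value in pairs:
        entry = model.setdefault((namespace, key), {})
        entry[language] = value
    return model

model = {}
import_pairs(model, [("pageTitle", "Editer")], "fr", "EditerMessages")
import_pairs(model, [("pageTitle", "تحرير")], "ar", "EditerMessages")
# the French and Arabic files contribute to the same entry
```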

To perform the JSON export, I used the NeoJSON project. NeoJSON allows one to create a custom encoder.

For the export, we first select a language. Then, we build a dictionary with all the namespaces:

rootDic := Dictionary new.
(model allWithType: CS18NNamespace)
    select: [ :namespace | namespace namespace isNil ]
    thenDo: [ :namespace | rootDic at: namespace name put: namespace ].

To export a namespace (i.e., a CS18NNamespace), I define a custom encoder:

writer for: CS18NNamespace customDo: [ :mapper |
    mapper encoder: [ :namespace |
        (self constructNamespace: namespace) asDictionary ] ].

constructNamespace: aNamespace
    | dic |
    dic := Dictionary new.
    aNamespace containables do: [ :containable |
        (containable isKindOf: CS18NNamespace)
            ifTrue: [ dic at: containable name put: (self constructNamespace: containable) ]
            ifFalse: [ "should be a CS18NEntry"
                dic
                    at: containable key name
                    put: (containable values
                        detect: [ :value | value language = language ]
                        ifOne: [ :value | value name ]
                        ifNone: [ '' ]) ] ].
    ^ dic

The custom encoder converts a namespace into a dictionary mapping each entry’s key to its value in the selected language.
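To illustrate, here is a hypothetical Python sketch of the same export step, with namespaces and entries represented as plain dictionaries (the structure and names are mine, not NeoJSON’s):

```python
import json

# Namespaces become nested dictionaries; entries become key -> value pairs
# for the selected language (empty string when no translation exists).
def construct_namespace(namespace, language):
    dic = {}
    for containable in namespace["containables"]:
        if "containables" in containable:      # a nested namespace
            dic[containable["name"]] = construct_namespace(containable, language)
        else:                                  # an entry
            dic[containable["key"]] = containable["values"].get(language, "")
    return dic

root = {
    "name": "EditerMessages",
    "containables": [
        {"key": "pageTitle", "values": {"fr": "Editer", "ar": "تحرير"}},
        {"key": "classerPar", "values": {"fr": "Classer Par"}},
    ],
}
print(json.dumps({root["name"]: construct_namespace(root, "fr")}, ensure_ascii=False))
```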

Once my importer and exporter are designed, I can perform the migration using a little script. It creates an i18n model, imports the entries of several .properties files into the model, and exports the Arabic entries as a JSON file.

"Create a model"
i18nModel := CS18NModel new.

"Create an importer"
importer := CS18NPropertiesImporter new.
importer model: i18nModel.

"Import all entries from the <myProject> folder"
('D:\dev\myProject\' asFileReference allChildrenMatching: '*.properties') do: [ :fileRef |
    self record: fileRef absolutePath basename.
    importer importFile: fileRef ].

"Export the Arabic JSON i18n file"
'D:/myFile-ar.json' asFileReference writeStreamDo: [ :stream |
    CS18NPropertiesExporter new
        model: importer model;
        stream: stream;
        language: ((importer model allWithType: CS18NLanguage)
            detect: [ :lang | lang shortName = 'ar' ]);
        export ]

The meta-model, importer, and exporter are freely available on GitHub.

Label Contractor for shortening labels

When a visualization contains long labels, the displayed elements can overlap, making the visualization very difficult to read; or the elements must be spread far apart (to avoid overlapping), and then the visualization no longer fits on a normal screen or sheet of paper.

The Label Contractor project solves this problem by offering several ways to reduce the length of labels (hence its name).

For example:

LbCContractor new
removeVowels;
reduce: 'MergedSuperClasses'.

will return ‘MrgdSprClsss’ by suppressing all vowels from the label.

In this blog post, I will explain how you can apply a reduction following different strategies and how you can combine them.

To install this project on a Pharo 9.0/Moose Suite 9.0 image, execute the following script in the Playground:

Metacello new
baseline: 'LabelContractor';
repository: 'github://moosetechnology/LabelContractor/src';
load

The full project including examples of the application of LabelContractor on visualizations and Spec2 can be obtained with:

Metacello new
baseline: 'LabelContractor';
repository: 'github://moosetechnology/LabelContractor/src';
load: 'full'.

The idea is to build a tool that reduces labels without losing too much information, by providing the user with a set of strategies that can be applied separately or in combination.

There are strategies for removing some arbitrary substring from labels, removing all vowels, removing fully qualified path names, etc.

The contraction of labels is based on two decisions:

  • First, filenames are treated by default to remove the full pathname, therefore ‘/home/idtaleb/Label Contractor/images/src/LbCContractor.st’ will be truncated as ‘LbCContractor.st’. If a label is not a filename, this has no effect on it;
  • Second, some strategies working on words assume the labels follow the CamelCase convention.

Currently these decisions are hardcoded in the contractor, but they will be implemented as normal strategies in the future.
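For illustration, the two default behaviors could be sketched in Python as follows (the function names are mine, not LabelContractor’s):

```python
import os
import re

def strip_pathname(label):
    # keep only the file name when the label looks like a path;
    # a label without a path is returned unchanged
    return os.path.basename(label)

def camel_case_words(label):
    # the word splitting assumed by the word-based strategies
    return re.findall(r"[A-Z][a-z0-9]*|[a-z0-9]+", label)

strip_pathname("/home/idtaleb/LbCContractor.st")  # -> 'LbCContractor.st'
camel_case_words("MergedSuperClasses")            # -> ['Merged', 'Super', 'Classes']
```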

There are 13 strategies that we are going to review now.

The removeFilenameExtension strategy removes the extension of filenames. The extension is the part of the label after the last dot (‘.’).

LbCContractor new
removeFilenameExtension ;
reduce: 'LbCContractor.st'

will return ‘LbCContractor’.

The abbreviateNames strategy abbreviates the words of the label to their first capital letter. As explained before, the label is assumed to follow the CamelCase convention. Only the first three words can be abbreviated (if there are more than three words), and the last word is never abbreviated.

LbCContractor new
abbreviateNames;
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘CMSAndInheritedTraitsHierarchyTest’ (only the first three words, Cly, Merged, and Superclasses, were abbreviated).
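A possible re-implementation of this rule in Python, for illustration only (it assumes CamelCase input, as the strategy does):

```python
import re

# At most the first three words are reduced to their first capital letter;
# the last word is never abbreviated.
def abbreviate_names(label):
    words = re.findall(r"[A-Z][a-z0-9]*", label)
    limit = min(3, len(words) - 1)
    return "".join(w[0] if i < limit else w for i, w in enumerate(words))

abbreviate_names("ClyMergedSuperclassesAndInheritedTraitsHierarchyTest")
# -> 'CMSAndInheritedTraitsHierarchyTest'
```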

The removeVowels strategy removes all vowels from the label. Notice that the first letter of a word is always kept, whether it is a vowel or a consonant.

Note: In English, the letter ‘y’ is sometimes considered a vowel and sometimes a consonant. This strategy assumes that ‘y’ is a consonant when it is followed by a vowel, as in ‘layer’.

LbCContractor new
removeVowels;
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘ClMrgdSprclsssAndInhrtdTrtsHrrchTst’.

LbCContractor new
removeVowels;
reduce: 'layer'

will return ‘lyr’.
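For illustration, the vowel-removal rule could be sketched in Python as follows (an illustrative re-implementation, not LabelContractor’s code):

```python
import re

VOWELS = set("aeiou")

# Vowels are dropped, the first letter of each word is always kept, and 'y'
# counts as a consonant (and is kept) only when followed by a vowel.
def remove_vowels(label):
    words = re.findall(r"[A-Z][a-z]*|[a-z]+", label)
    out = []
    for w in words:
        kept = [w[0]]
        for i in range(1, len(w)):
            ch = w[i].lower()
            if ch in VOWELS:
                continue
            if ch == "y":
                nxt = w[i + 1].lower() if i + 1 < len(w) else ""
                if nxt not in VOWELS:
                    continue  # 'y' acting as a vowel is dropped too
            kept.append(w[i])
        out.append("".join(kept))
    return "".join(out)

remove_vowels("MergedSuperClasses")  # -> 'MrgdSprClsss'
remove_vowels("layer")               # -> 'lyr'
```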

The substitute:by: strategy replaces a word with another one. If the word appears more than once, all its occurrences are replaced.

Example:

LbCContractor new
substitute: 'Superclasses' by: 'Sc';
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘ClyMergedScAndInheritedTraitsHierarchyTest’.

There are three strategies based on fixing a maximal size for the contracted label.

The removeFrequentLettersUpTo: strategy removes frequent letters until the label fits the maximal size. The frequency of letters is hard-coded from the known frequency of letters in English texts. Letters are removed, one at a time, from the most frequent (in English) to the least frequent, until the label reaches the maximum size. The strategy is not case sensitive, meaning that a ‘T’ is counted as a ‘t’.

LbCContractor new
removeFrequentLettersUpTo: 20;
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'.

will return ‘ClyMgdpcldIhidiHichy’.

removing the letters ‘e’, ‘r’, ‘s’, ‘u’, ‘a’, ‘n’, and ‘t’.

The ellipsis strategy keeps the beginning and the end of the label and replaces the middle with a tilde (‘~’). The default size is eight, so it keeps the first four characters and the last four characters of the label and separates them with the tilde. The default size can be changed.

LbCContractor new
ellipsis;
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘ClyM~Test’.
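A Python sketch of this default behavior (illustrative, not LabelContractor’s code):

```python
# Keep the first size//2 and last size//2 characters, separated by a tilde;
# labels already within the size are returned unchanged.
def ellipsis(label, size=8):
    if len(label) <= size:
        return label
    half = size // 2
    return label[:half] + "~" + label[-half:]

ellipsis("ClyMergedSuperclassesAndInheritedTraitsHierarchyTest")
# -> 'ClyM~Test'
```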

The pickFirstCharacters strategy takes the first eight characters of the label. Again, the default size can be changed.

LbCContractor new
pickFirstCharacters;
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'.

will return ‘ClyMerge’ (the first eight letters are kept).

This is another group of three strategies that remove some given substring from a label.

Notice that by default these strategies are not case sensitive.

The removeSubstring: strategy accepts a substring (or a collection of substrings, with removeSubstrings:) to be removed, and it removes all the occurrences of these substrings in the label.

An example with only one substring to remove:

LbCContractor new
removeSubstring: 'Merged';
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘ClySuperclassesAndInheritedTraitsHierarchyTest’.

Another example, with a collection of substrings:

LbCContractor new
removeSubstrings: #('cly' 'merged' 'and' 'test');
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘SuperclassesInheritedTraitsHierarchy’.
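For illustration, a Python equivalent of this case-insensitive removal:

```python
import re

# Remove every occurrence of each substring, ignoring case.
def remove_substrings(label, substrings):
    for s in substrings:
        label = re.sub(re.escape(s), "", label, flags=re.IGNORECASE)
    return label

remove_substrings("ClyMergedSuperclassesAndInheritedTraitsHierarchyTest",
                  ["cly", "merged", "and", "test"])
# -> 'SuperclassesInheritedTraitsHierarchy'
```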

Following the same idea, the removePrefix: strategy removes the prefix of the label if it matches the given prefix. A collection of prefixes can be given if the same contractor is applied to several labels (with different prefixes).

LbCContractor new
removePrefix: 'ClyMerge';
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘dSuperclassesAndInheritedTraitsHierarchyTest’.

The suffix-removal strategy is similar to the previous one, except that it removes suffix substrings.

This is a group of three strategies very similar to the substring-removal group, except that they remove words from the label (assuming the CamelCase convention). The words to remove are specified by their indexes.

The removeWordAt: strategy removes the words of the label specified by their indexes. As with substring removal, you can give one index or a collection of indexes of the words to remove.

LbCContractor new
removeWordAt: 2;
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'

will return ‘ClySuperclassesAndInheritedTraitsHierarchyTest’ (the second word, ‘Merged’ was removed).

Another strategy automatically removes the first word of the label, whatever it is.

A third strategy automatically removes the last word of the label, whatever it is.

Finally, there are two ways to combine strategies; in both cases, the user provides the strategies:

  • The user provides the strategies in the order to apply them:
LbCContractor new
ellipsisUpTo: 20;
removeVowels;
removeSubstrings: #('Merged' 'Test');
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'.

will return ‘ClMrgdS~rrchTst’ by applying ‘ellipsisUpTo:’ first, then ‘removeVowels’, and then ‘removeSubstrings:’. Note that the last one actually had no effect because the other two had already changed the label, and the result is shorter than the expected 20 characters because ‘removeVowels’ was applied after the ellipsis.

  • Combining following predefined priorities:

To avoid unreasonable results (as in the previous example), the strategies have built-in priorities that can be enabled with ‘usingPriorities’.

The same example but with priorities:

LbCContractor new
usingPriorities;
ellipsisUpTo: 20;
removeVowels;
removeSubstrings: #('Merged' 'Test');
reduce: 'ClyMergedSuperclassesAndInheritedTraitsHierarchyTest'.

will return ‘ClSprclsss~dTrtsHrrch’

The result is different because the substrings were removed before applying the removeVowels strategy, which was itself applied before ‘ellipsisUpTo:’.

The priority system is defined as follows (the color green means that the strategy has the highest priority):

Contractor’s strategies

In this post, we have seen how to compact labels in a visualization using the LabelContractor. The goal is to improve the readability of a visualization while retaining as much information as possible.

Note that LabelContractor is not just for visualizations: you can use it wherever you want.

Automatic meta-model documentation generation

When you are developing with Moose every day, you know how to create an excellent visualization of your meta-model. But to show it, we have to open a Pharo image, which is hard to share during a presentation. Often, we draw one UML diagram of the meta-model, and then… we forget to update it. When sharing it with others, we have to admit that the UML is out of date.

In my opinion, this is super bad. Thus, I decided to have a look at GitHub Actions to update my UML automatically.

In the following, I present how to set up GitHub Actions to auto-generate the UML. I use the Coaster project as an example. Please consider reading the blog post about using GitHub Actions with Pharo first.

The first step is to configure SmalltalkCI for your project. To use it, we need to create two files: .smalltalk.ston, and the GitHub actions: .github/workflows/ci.yml.

The .smalltalk.ston file configures the CI. It is written in the STON format and describes how to load and test the Pharo project. In our case, the Coaster project does not have tests 😱, so we configure the CI not to fail even if no tests are run.

The final file can be found in the Coaster project.

SmalltalkCISpec {
  #loading : [
    SCIMetacelloLoadSpec {
      #baseline : 'Coaster',
      #directory : 'src',
      #load : [ 'default' ],
      #platforms : [ #pharo ],
      #onConflict : #useIncoming,
      #onUpgrade : #useIncoming
    }
  ],
  #testing : {
    #failOnZeroTests : false
  }
}

The second file, .github/workflows/ci.yml, is used by GitHub when running the CI. We describe in comments the main steps:

# Name of the project in the GitHub action panel
name: CI
# Execute the CI on push on the master branch
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Use Moose 9 that includes our visualization tool
        smalltalk: [Moose64-9.0]
    name: ${{ matrix.smalltalk }}
    steps:
      # checkout the project
      - uses: actions/checkout@v2
      # Prepare the CI - download the correct VM :-)
      - uses: hpi-swa/setup-smalltalkCI@v1
        with:
          smalltalk-image: ${{ matrix.smalltalk }}
      # Use the CI - always better to run tests
      - run: smalltalkci -s ${{ matrix.smalltalk }}
        shell: bash
        timeout-minutes: 15

Once the main files are created, we can configure the CI also to create the UML file. To do so, we will use the plantUML visualization tool.

We add a new step to the .github/workflows/ci.yml file. It consists of executing the FamixMMUMLDocumentor on the meta-model we want to document.

- name: Build meta-model plantuml image
  run: |
    $SMALLTALK_CI_VM $SMALLTALK_CI_IMAGE eval "'coaster.puml' asFileReference writeStreamDo: [ :stream | stream nextPutAll: (FamixMMUMLDocumentor new model: CCModel; beWithStub; generatePlantUMLModel) ]."

This new step creates the coaster.puml file in the $HOME folder of the GitHub action. Then, we use a new action that creates the coaster.png file.

- name: Generate Coaster PNG Diagrams
  uses: cloudbees/plantuml-github-action@master
  with:
    args: -v -tpng coaster.puml

Nice 😄, we have the png file generated by the GitHub action.

Finally, you can upload the UML png as an artifact of the Github action or upload it somewhere else. Here, I present how to publish it to a new branch of your repository. Then, we will see how to show it in the Readme of the main branch.

The goal of this step is to automatically update the documentation for end-users.

First, we create a new directory where we put the UML png file.

- name: Move artifact
  run: |
    mkdir doc-uml
    mv *.png doc-uml

Then, we configure this directory as a new git repository.

- name: Init new repo in doc-uml folder and commit generated files
  run: |
    cd doc-uml/
    git init
    git add -A
    git config --local user.email "action@github.com"
    git config --local user.name "GitHub Action"
    git commit -m 'update doc'

This new repository includes only the documentation we generated. The final step is to push this into a new branch of our project.

Because we do not care about the history of our meta-model UML files here, we will force push the repository. But creating more intelligent scripts is possible.

To do so, we use the ad-m/github-push-action GitHub action.

# Careful, this can kill your project
- name: Force push to destination branch
  uses: ad-m/github-push-action@v0.5.0
  with:
    # Token for the repo. Can be passed in using $\{{ secrets.GITHUB_TOKEN }}
    github_token: ${{ secrets.GITHUB_TOKEN }}
    force: true
    # Destination branch to push changes
    branch: v1/doc
    # We need to push from the folder where files were generated.
    # Same as where the new repo was initialized in the previous step
    directory: ./doc-uml

BE CAREFUL: if you incorrectly set the branch argument, you might delete your project.

When used, this action pushes the UML files to the v1/doc branch. The v1/doc branch of the Coaster project is created here.

Finally, we add the image of the UML files in the Readme of the main project. For the Coaster project, we modified the Readme and added:

![Coaster meta-model png](https://raw.githubusercontent.com/badetitou/CoastersCollector/v1/doc/coaster.png)

The URL follows the following pattern: https://raw.githubusercontent.com/:owner:/:repo:/:branch:/:file:. The final .github/workflows/ci.yml file is here.

That’s it 😄 Now, at every commit, the CI will update the png files used in the Readme of the project, and thus, the documentation is always up-to-date.

Towards analyzing TypeScript with Moose

TypeScript is an increasingly popular programming language, so it would be great if we could analyze TypeScript projects using Moose. At the time of writing, no meta-model (or importer) exists for the TypeScript language in Moose. So, what are the pieces of the puzzle needed to analyze TypeScript with Moose? Before we consider TypeScript, let’s look at how things work with Java:

Elements of analyzing a Java project

VerveineJ is the importer that can generate models of Java files, allowing us to do analyses in Pharo/Moose.

If we want to do the same thing for TypeScript, we would need:

  • an equivalent of VerveineJ (importer) for TypeScript files,
  • a Famix model of TypeScript.

Creating a parser and importer for TypeScript is no small task, but TypeScript is a popular environment and we can use ts-morph to facilitate the navigation of the TypeScript AST. There’s also a very cool visualization of TypeScript ASTs, which will be useful for understanding and debugging.

Designing a new meta-model for TypeScript is definitely not trivial, because it requires a deep understanding of the language. On the other hand, once a meta-model is designed, it is easy to generate using the FamixNG domain-specific language.

Pragmatically speaking, do we need a perfect model of TypeScript to analyze it?

“All models are wrong, but some are useful.” —maybe not George Box

By searching the web for TypeScript and Moose, I discovered a GitHub project called pascalerni/abap2famix. It is an ABAP importer (written in TypeScript) that models ABAP projects using FAMIX 3.0 (compatibility meta-model for Java). Java and ABAP are indeed different languages, but perhaps the differences are not so important if we want to do some static analysis? Seems like a pragmatic approach!

Looking at the node packages used by abap2famix I discovered famix, a TypeScript implementation of Famix, which facilitates creating FAMIX 3.0 entities from TypeScript. Its source is at pascalerni/famix, and I could see that much of it was generated, e.g., in class.ts there’s proof it was not written by hand:

// automatically generated code, please do not change
import {FamixMseExporter} from "../../famix_mse_exporter";
import {Type} from "./../famix/type";
export class Class extends Type {
...
}

How was this code generated? The answer lies in the fork of FameJava at pascalerni/FameJava, namely the Famix30Codegen.java file. The original FameJava was used to generate the Java API for the FAMIX 3.0 meta-model; this fork generates (via Java) a TypeScript API instead. Clever and useful!

So, what if we try to create an importer using ts-morph and the famix packages that will model TypeScript programs in a Java metamodel? As a first try, model only the object-oriented elements of TypeScript, such as classes, methods, attributes, etc.

This is actually the project I proposed to students in an advanced-topics software design course at my university during the winter of 2021.

Several teams set out to achieve this goal, although none of the students had ever done parsing or Pharo before. Many were familiar with node and TypeScript.

I’m happy to say that they all were successful (in varying degrees) in writing an importer for TypeScript that allowed analyses to be done in Moose and Pharo, and their results are all on GitHub:

Team 1 | Team 2 | Team 3 | Team 4

Here are some visualizations produced by Team 4 using Roassal on models loaded into Moose.

The first shows the Weighted Method Count (sum) for classes in several TypeScript projects. The cyclomatic complexity values were calculated using another npm package (ts-complex) in the TypeScript importer:

WMC for various typescript projects

The following chart shows distributions of Cyclomatic Complexities of methods for various classes in the prisma project:

CC for prisma project in typescript

Limits of modeling TypeScript in a Java metamodel


Here are some of the obvious things in TypeScript (JavaScript) that don’t quite fit into a Java model:

  • Functions can exist in the global namespace. A workaround proposed by one team was to create a “Global” class in the Java model, and just put functions there as static methods.
  • Functions can exist in methods, but maybe this is possible to model in a newer meta-model for Java that supports lambdas. The API from pascalerni/famix supports an older meta-model for Java.
  • string, number, any are types in TypeScript, but they do not really map to primitive types or classes in Java.
  • TypeScript doesn’t have packages like Java, although it does have ways to specify namespaces and avoid naming conflicts.

Even though a formal model in TypeScript doesn’t (yet) exist in Famix, it’s possible to perform useful analyses of TypeScript using the FAMIX 3.0 (Java) metamodel, thanks to packages, tools and APIs developed and reused in the npm and Moose communities.

Photo credit: “patchwork beads” (CC BY-SA 2.0) by various brennemans

Generate a plantUML visualization for a meta-model

This post describes a tool that has been replaced by a new FamixUMLDocumentor. The new tool is described in another post.

When you are interested in a meta-model you did not create, it is sometimes difficult to understand it using only its declaration in the code. It would be best if you could actually visualize it differently. What better way than to go back to a very efficient meta-model visualization: UML.

In this blog, I will show you how to generate plantUML code from a generated meta-model. For that, I will take the example of the evolution of the meta-model on coasters:

There is no need to have read those posts to understand this one. I would even say that this is precisely the subject: studying an unknown meta-model.

First of all, if it has not already been done, do not forget to download the meta-model and generate it using its generator. For example, for the basic Coaster collection, the code is available in the Coaster GitHub repository, and it can be generated with:

CoasterCollectorMetamodelGenerator generate

FamixMMUMLDocumentor, the tool I am going to present, is based on the generated meta-model. Therefore, it is particularly suitable for models with subMetamodels (cf. the beWithStub option). Note that another tool, based on the meta-model builder, exists: FmxMBPlantTextVisitor. It can be interesting if you need to display the compositions.

One last remark: most of the information in this post can be found in the comment of the FamixMMUMLDocumentor class. Finally, the plantUML server lets you run your plantUML code directly on the web. So let’s continue and generate our visualizations! 😄

Let’s say we know that there is a meta-model on coasters whose builder is CoasterCollectorMetamodelGenerator. Since we need the generated model and not the builder, we will look at the prefix defined in CoasterCollectorMetamodelGenerator class >> #prefix and deduce the model name, which consists of the model prefix followed by the word Model.

In this case, for CoasterCollectorMetamodelGenerator, the model is called CCModel. From here, we have all the elements to generate the plantUML code associated with the model via the following code:

FamixMMUMLDocumentor new
model: CCModel ;
generatePlantUMLModel.

The generation is done by instantiating a FamixMMUMLDocumentor for which we provide the model (model:) and ask for the complete generation for this last one (generatePlantUMLModel).

UML representation of Coaster meta-model

We can now compare the generated UML representation to the basic one that helped create the generator or that has been used to generate the generator 😄 (Cf. Model your Fame/Famix meta-model using Graphical Editors).

Coasters UML

We can observe a UML diagram that is almost identical. Only CCModel is additional. However, generation options allow solving this problem (and many others).

Indeed, it is possible to generate the plantUML code while excluding a given collection of entities, for example if you do not want the CCModel to appear:

FamixMMUMLDocumentor new
model: CCModel ;
generatePlantUMLModelWithout: { CCModel }.

UML representation (option Without) of Coaster meta-model

It is important to note that you must give the entities themselves and not their names, that is to say, with their prefix. For example, the entity associated with the name Coaster is CCCoaster.

It is also possible to do the opposite, that is, to select only the entities to generate.

FamixMMUMLDocumentor new
model: CCModel ;
generatePlantUMLWith: { CCCoaster . CCCreator . CCBrewery }.

UML representation (option With) of Coaster meta-model

This can be useful if you are interested in certain entities.

Finally, there is one last exciting possibility. Let’s take the case of the coasters meta-model extended with new kinds of creators (see Connecting/Extending meta-models).

Extended Coaster meta-model

Let’s generate the plantUML code on the meta-model and observe.

FamixMMUMLDocumentor new
model: CCEModel ;
generatePlantUMLModelWithout: { CCEModel }.

UML representation of Extended Coaster meta-model

One could say that the representation is misleading, but it only shows what is declared in the meta-model itself. The meta-model has a subMetamodel, so we have to look for these dependencies in it. For this, there is the beWithStub option.

FamixMMUMLDocumentor new
beWithStub;
model: CCEModel ;
generatePlantUMLModelWithout: { CCEModel . MooseModel }.

UML representation with stub of Extended Coaster meta-model

We can see that Event inherits from an external class Creator, coming from the subMetamodel CoasterCollectorMetamodelGenerator.

It would indeed be interesting to also generate the subMetamodel view, in order to have a better overall picture. Maybe an avenue for improvement?

Each option is available in text or file output via the following methods:

  • generatePlantUMLModel / generatePlantUMLModelFile:
  • generatePlantUMLModelWithout: / generatePlantUMLModelFile:without:
  • generatePlantUMLWith: / generatePlantUMLFile:with:

To finish this post, we will generate a larger meta-model that uses all the notations available in the tool. To do this, I chose FASTModel, a meta-model for method syntax analysis, available in this moosetechnology GitHub repository.

FamixMMUMLDocumentor new
beWithStub;
model: FASTModel;
generatePlantUMLModelWithout: { FASTModel . MooseModel }.

UML representation with stub of FASTCore

In summary, we have 5 specific notations:

  • Internal entity notations:
    • Class: Black C on white background
    • Trait: Black T on grey background
  • External entity notations:
    • Class: Black C on yellow background with External label
    • Trait: Black T on yellow background with External label
  • Use of traits: Dashed arrow

The rest of the notations follows the UML standard.

In this post, we have seen how to visualize a meta-model using FamixMMUMLDocumentor. This feature is handy for understanding complex meta-models and allows (almost) automatic documentation.