
Generate a class diagram visualization for a meta-model

This post is the first in a series dedicated to Famix Tools.

When creating or studying a meta-model, it is often convenient to be able to “see” it as a whole.

UML looks like a natural solution for this.

So, in the past, we had a tool to create UML diagrams of meta-models through PlantUML (a small language and tool to generate UML diagrams). The post Generate a plantUML visualization for a meta-model explained how to use this tool.

But the tool had some limitations, one of which was that it was not easy to add a different backend than PlantUML.

Therefore, inspired by the previous tool, we designed a new one, FamixUMLDocumentor, with a simpler API and the possibility of adding new backends.

We illustrate the use with the same Coaster example already used previously. You can also experiment with FDModel, a small meta-model used for testing.

You can create a PlantUML script for a UML class diagram of your meta-model with:

FamixUMLDocumentor new
    model: CCModel;
    generate;
    exportWith: (FamixUMLPlantUMLBackend new).

The result will be a PlantUML script that you can paste into https://plantuml.org/ to get this UML class diagram:

Generated UML class diagram of the Coaster meta-model

The API for the documenter is as follows (a combined usage sketch appears after the list):

  • model: — adds a meta-model to export. Several meta-models can be exported jointly by adding them one after the other. By default, each meta-model is automatically assigned a color in which its entities will be drawn.
  • model:color: — same as the previous, but manually assigns a Color to the meta-model.
  • onlyClasses: — specifies a list of classes to export. It can replace the use of model:.
  • excludeClasses: — specifies a list of classes to exclude from the export. Typically used with model: to remove some of the meta-model’s classes from the UML. Can also be used to exclude “stub” classes (see beWithStubs).
  • beWithStubs — also exports the super-classes and used traits of the exported classes, even if these super-classes/traits are not part of the meta-models. These stubs are given an automatically selected color different from those of the displayed meta-models.
  • beWithoutStubs — opposite of the preceding. This is the default option.
  • generate — creates an internal representation of a UML class diagram according to the configuration built with the preceding messages.
  • exportWith: — exports the internal representation with the given backend (for example, FamixUMLPlantUMLBackend in the example above).
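
For instance, here is a minimal sketch combining several of these messages (the chosen color, and the use of FDModel as a second meta-model, are purely illustrative):

FamixUMLDocumentor new
    model: CCModel color: Color blue;
    model: FDModel;
    beWithStubs;
    generate;
    exportWith: (FamixUMLPlantUMLBackend new).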

The backend is normally called by the FamixUMLDocumentor but can be called manually. For example, the image above can be exported in a PlantUML script with:

documentor := FamixUMLDocumentor new.
documentor
    model: CCModel;
    generate.
FamixUMLPlantUMLBackend new export: documentor umlEntities.

(Compare with the example given above)

Backends have only one mandatory method:

  • export: — Exports the collection of umlEntities (internal representation) in the format specific to the backend.

New backends can be created by subclassing FamixUMLAbstractBackend.
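
As a purely illustrative sketch (the class name, the package, and the assumption that UML entities respond to name are all hypothetical), a minimal custom backend listing entity names could look like this:

Class {
    #name : #FamixUMLNamesBackend,
    #superclass : #FamixUMLAbstractBackend,
    #category : #'MyPackage-Backends'
}

{ #category : #exporting }
FamixUMLNamesBackend >> export: umlEntities [
    "List one UML entity per line; assumes the entities respond to #name."
    ^ String streamContents: [ :stream |
        umlEntities do: [ :entity | stream nextPutAll: entity name asString; cr ] ]
]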

There is a FamixUMLRoassalBackend to export the UML diagram in Roassal (visible inside Pharo itself), and a FamixUMLMermaidBackend to export in Mermaid format (similar to PlantUML).

There is a FamixUMLTextBackend that outputs the UML class diagram in a textual form. By default it returns a string but this can be changed:

  • toFile: — instead of putting the result in a string, writes it to the file whose name is given as argument.
  • outputStream: — specifies a stream on which to write the result of the backend.

FamixUMLPlantUMLBackend and FamixUMLMermaidBackend are subclasses of this FamixUMLTextBackend (therefore they can also export to a file).
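
For example, a minimal sketch writing the Mermaid output directly to a file (the file name is illustrative) could be:

documentor := FamixUMLDocumentor new.
documentor
    model: CCModel;
    generate.
documentor exportWith: (FamixUMLMermaidBackend new
    toFile: 'coaster-uml.mmd';
    yourself).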

Enhancing software analysis with Moose's aggregation

As software systems grow more complex, importing large models into Moose using the conventional process can cause issues with speed, excessive memory usage, and overall performance due to the vast amount of data. To ensure a smoother analysis process, managing the importation of extensive models efficiently is crucial. To overcome these challenges, strategic filtering and aggregation have emerged as powerful techniques.

One feature of Moose is its model import filtering, which provides a practical approach to effectively handle large models. It allows us to selectively choose relevant entities for analysis instead of importing the entire model.

However, filtering has its limitations. By excluding certain entities during importation, we may lose some fine-grained details that could potentially be relevant for certain analyses. Moreover, if our filtering criteria are too aggressive, we might overlook important dependencies that could impact the overall understanding of the software system. To address these limitations, we have adopted a specific approach in this context: not importing methods.

Simplifying the model by not importing methods


Let’s take a look at a real-life example: a massive software model with over 130,000 methods!

"Massive Model"

While method-related information can be crucial for certain analysis tasks, focusing on high-level relationships between classes is often more important than diving into individual method implementations. By avoiding the importation of individual methods, we strike a balance between capturing essential dependency information and simplifying the model.

But how do we preserve crucial dependency information when we’re not importing methods? This is where aggregation comes into play.

Aggregation: an approach to capture dependencies


Aggregation involves creating an aggregated method within each class, serving as a central repository for consolidating dependencies. This approach reduces the need for complex connections between individual methods, leading to improved performance and overall efficiency. The abstraction layer introduced by aggregated methods not only simplifies the model but also enhances its modularity. By adopting this approach, we promote cleaner code design, making the software more maintainable and adaptable.

Now, let’s explore the process of importing a software model into Moose using the aggregator approach.

Importing a model in Moose with the aggregator


To import an aggregated model into Moose:

  1. Open Moose’s model browser.
  2. Locate the model file on your computer.
  3. Click “Aggregate Methods.”
  4. Click “Import.”

"Importing Model"

Now, the aggregated model is available for analysis in Moose.

"My Java Model"

Benchmarking aggregation’s impact on model size and analysis


To assess the effectiveness of aggregation in reducing model complexity, we conducted a benchmark using a real-life example. The original software model had a staggering 10,267 methods.

"Source Model Number Of Methods"

After importing the model into Moose using the aggregation approach, the corresponding aggregated model had only 448 methods. This showcases a substantial reduction in complexity achieved through aggregation.

"Aggregated Model Number Of Methods"

In proportion, the aggregated model represents just 4.4% of the original model’s size (448 / 10,267 * 100). This remarkable decrease in the number of methods demonstrates the powerful impact of aggregation in simplifying the model.

Our benchmark confirms that aggregation is an invaluable technique for managing large models in Moose. It significantly streamlines the analysis process while preserving essential dependency information. Aggregation empowers software engineers to work with large-scale systems more efficiently and promotes cleaner code design, making the software more maintainable and adaptable.

In summary, aggregation proved to be a highly effective approach for managing large models in Moose. By adopting aggregation, software engineers can work more efficiently with complex systems.

Representation of parametrics

In Java, generic types allow you to write a general class (or method) that works with different types, enabling code reuse.

But how generics are modeled, and how that modeling works, can be difficult to understand. Let’s take an example.

public class ClassA<T>

Here, ClassA is a generic class because it has one generic type, T. One cannot use ClassA without specifying the generic type.

ClassA<Integer> class1 = new ClassA<Integer>();
ClassA<String> class2 = new ClassA<String>();

class1 and class2 are variables of type ClassA, but this time ClassA is parameterized not with a generic type but with Integer or String. So, how do we represent all that?

Modelisation_generic

We have five new traits in our meta-model:

  • TParametricEntity is used by all parametric entities: a ParametricClass, a ParametricMethod, or a ParametricInterface.
  • TConcretisation links two TParametricEntity entities. A TParametricEntity can have one or more concretisations with other TParametricEntity entities. Each TParametricEntity that is a concretisation of another TParametricEntity has a genericEntity.
  • TConcreteParameterType is for concrete parameters.
  • TGenericParameterType is for generic parameters.
  • TParameterConcretisation is the same as TConcretisation, but instead of two TParametricEntity entities it links a TConcreteParameterType and a TGenericParameterType. A TGenericParameterType can have one or more concretisations, and a TConcreteParameterType has generics.

A TParametricEntity knows its concrete and generic parameters.

ParameterType uses the TWithInheritance trait because in Java we can write <T extends Object> and <? super Number>. The first means that T can be any subclass of Object; the second means Number and all its superclasses or implemented interfaces (Number, Object, Serializable). ParameterType also uses the TThrowable trait because a type parameter can be thrown as a generic exception, so it must be treated as a throwable:

public interface GenericThrower<T extends Throwable> {
    public void doThrow() throws T;
}

example

If we take the first class, we have a ParametricClass with one ParameterType named T.

classA<T>

For the second class, we have a class that extends a parametric class whose parameter is String. String here is a class; it is not a ParameterType anymore.

classB extends classA<String>

So, what is the link between the two parametric classes and the parameters T and String?

concretization

We have here a concretisation. ClassA with the parameter T has one concretisation (classA<String>), and the parameter T has one parameter concretisation, which is String.

If we take back our first example:

public class ClassA<T>
ClassA<Integer> class1 = new ClassA<Integer>();
ClassA<String> class2 = new ClassA<String>();

We have three ParametricClasses, one ParameterType, and two types (String and Integer). T is our ParameterType and has two parameter concretisations: String and Integer. We can say that T is generic, and that String and Integer are concrete, because we know what they are: classes. ClassA with the ParameterType T (ClassA<T>) also has two concretisations: ClassA<Integer> and ClassA<String>. The three different ClassA entities know their parameters: T is in genericParameters; String and Integer are in concreteParameters.

A class is generic if it has at least one ParameterType. A concretisation of a parametric class can itself be generic. See the example below:

public class ParametricClass<T, V, K, Z>
public class ParametricClass2<Z> extends ParametricClass<String, Integer, Integer, Z>

The second ParametricClass has one ParameterType (Z), so the class is generic. The four parameters (T, V, K, Z) each have a concretisation (String, Integer, Integer, Z), even though Z is itself still a ParameterType.

The superclass of ParametricClass2 is the concretised ParametricClass<String, Integer, Integer, Z>, whose generic entity is ParametricClass with its four ParameterTypes.

methodParametric

Let’s see what we have here. First of all, we recognize a ParametricClass with one ParameterType. This class has two methods: a regular method and a parametric method. The first one is not generic: when the class is concretised, the ParameterType T becomes String, Integer, Animals..., and the parameter of the first method changes with the class. This is not the case for the second method, whose type parameter is independent of the class. That is why the second method is generic and the first one is not.

public class ClassA<T>
public class ClassB extends ClassA<String>

This is how we can represent this in Pharo.

"classA<T>: the generic entity with its ParameterType T"
classAgen := FamixJavaParametricClass named: 'ClassA'.
t := FamixJavaParameterType named: 'T'.
classAgen addGenericParameter: t.

"classA<String>: the concretised entity with the concrete parameter String"
classAcon := FamixJavaParametricClass named: 'ClassA'.
string := FamixJavaClass named: 'String'.
classAcon addConcreteParameter: string.

"link the concretised class to its generic entity, and String to T"
FamixJavaConcretisation new concreteEntity: classAcon; genericEntity: classAgen.
FamixJavaParameterConcretisation new concreteParameter: string; genericParameter: t.

"classB extends classA<String>"
classB := FamixJavaClass named: 'ClassB'.
FamixJavaInheritance new subclass: classB; superclass: classAcon.
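
In the same spirit, a sketch of the ParametricClass/ParametricClass2 example above could be written with the same messages (abbreviated to the T and Z parameters; whether the still-generic Z is attached to the concretised class with addGenericParameter: is an assumption):

"ParametricClass<T, ..., Z>: the generic entity (V and K omitted for brevity)"
pcGen := FamixJavaParametricClass named: 'ParametricClass'.
t := FamixJavaParameterType named: 'T'.
z := FamixJavaParameterType named: 'Z'.
pcGen addGenericParameter: t.
pcGen addGenericParameter: z.

"ParametricClass<String, ..., Z>: the concretised superclass of ParametricClass2"
pcCon := FamixJavaParametricClass named: 'ParametricClass'.
string := FamixJavaClass named: 'String'.
pcCon addConcreteParameter: string.
pcCon addGenericParameter: z. "assumption: Z, still a ParameterType, stays a generic parameter here"
FamixJavaConcretisation new concreteEntity: pcCon; genericEntity: pcGen.
FamixJavaParameterConcretisation new concreteParameter: string; genericParameter: t.

"ParametricClass2<Z> extends the concretised ParametricClass"
pc2 := FamixJavaParametricClass named: 'ParametricClass2'.
pc2 addGenericParameter: z.
FamixJavaInheritance new subclass: pc2; superclass: pcCon.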

In this post, we have seen how generic types are modeled with VerveineJ and Moose for code analysis.

Test your Moose code using CIs

You have to test your code!

I mean, really.

But sometimes, testing is hard, because you do not know how to start (often because it is hard to start with TDD or, better, XtremTDD 😄).

One challenging situation is the creation of mocks to represent real cases and use them as test resources. This situation is common when dealing with code modeling and meta-modeling.

Writing a model manually to test features on it is hard. Today, I’ll show you how to use GitHub Actions as well as GitLab CI to create tests for the Moose platform based on real resources.


First of all, let’s describe a simple process when working on modeling and meta-modeling.

Source Code → Parse → Model File → Import → Model in Memory → Use

When analyzing a software system using MDE, everything starts with parsing the source code of the application to produce a model. This model can then be stored in a file. Then, we import the file into our analysis environment, and we use the concrete model.

All these steps are performed before using the model. However, when we create tests for the Use step, we usually do not perform all the preceding steps: we just create a mock model. Even if this situation is acceptable, it is troublesome because it disconnects the test from the tools (which can have bugs) that create the model.

One solution is thus not to create a mock model, but to create mock source code files.

Using mock source code files, we can reproduce the process for each test (or better, a group of tests 😉)

Mock Source Code → Parse with Docker → Model File → Import with script → Model in Memory → Test

In the following, I describe the implementation and set-up of the approach for analyzing Java code, using Pharo with Moose. It consists of the following steps:

  • Create mock resources
  • Create a bridge from your Pharo image to your resources using PharoBridge
  • Create a GitLab CI or a GitHub Action
  • Test ❤️

The first step is to create mock resources. To do so, the easiest way is to include them in your git repository.

You should have the following:

> ci // Code executed by the CI
> src // Source code files
> tests // Test resources

Inside the tests folder, it is possible to add several subfolders for different test resources.

To easily use the folder of the test resource repository from Pharo, we will use the GitBridge project.

The project can be added to your Pharo Baseline with the following code fragment:

spec
    baseline: 'GitBridge'
    with: [ spec repository: 'github://jecisc/GitBridge:v1.x.x/src' ].
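
For context, a minimal sketch of a complete baseline method containing this fragment (the package name MyPackage is a placeholder) could be:

baseline: spec
    <baseline>
    spec for: #common do: [
        spec
            baseline: 'GitBridge'
            with: [ spec repository: 'github://jecisc/GitBridge:v1.x.x/src' ].
        spec package: 'MyPackage' with: [ spec requires: #('GitBridge') ] ]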

Then, to connect our Pharo project to the test resources, we create a class in one of our packages, a subclass of GitBridge.

A full example would be as follows:

Class {
    #name : #MyBridge,
    #superclass : #GitBridge,
    #category : #'MyPackage-Bridge'
}

{ #category : #initialization }
MyBridge class >> initialize [
    SessionManager default registerSystemClassNamed: self name
]

{ #category : #'accessing' }
MyBridge class >> testsResources [
    ^ self root / 'tests'
]

The method testsResources can then be used to access the local folder with the test resources.
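
For example, a test could then read a resource file like this (the subfolder and file names are purely illustrative):

(MyBridge testsResources / 'mockProject' / 'output.json')
    readStreamDo: [ :stream | "use the stream, e.g. to import a model" ].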

Warning: this setup only works locally. To use it with GitHub and GitLab, we first have to set up our CI files.

To set up our CI files, we first create in the ci folder of our repository a pretesting.st file that will execute Pharo code.

(IceRepositoryCreator new
    location: '.' asFileReference;
    subdirectory: 'src';
    createRepository) register

This code will be run by the CI and register the Pharo project inside the Iceberg tool of Pharo. This registration is then used by GitBridge to retrieve the location of the test resources folder.

Then, we have to update the .smalltalk.ston file (used by every Smalltalk CI process) and add a reference to our pretesting.st file.

SmalltalkCISpec {
    #preTesting : SCICustomScript {
        #path : 'ci/pretesting.st'
    }
    ...
}

The last step for GitLab is the creation of the .gitlab-ci.yml file.

This CI can include several steps. We now present the steps dedicated to testing the Java model, but the same steps apply to other programming languages.

First, we have to parse the test resources using the Docker version of VerveineJ:

stages:
  - parse
  - tests

parse:
  stage: parse
  image:
    name: badetitou/verveinej:v3.0.0
    entrypoint: [""]
  needs:
    - job: install
      artifacts: true
  script:
    - /VerveineJ-3.0.0/verveinej.sh -Xmx8g -Xms8g -- -format json -o output.json -alllocals -anchor assoc -autocp ./tests/lib ./tests/src
  artifacts:
    paths:
      - output.json

The parse stage uses v3 of VerveineJ, parses the code, and produces an output.json file containing the produced model.

Then, we add the common tests stage of SmalltalkCI.

tests:
  stage: tests
  image: hpiswa/smalltalkci
  needs:
    - job: parse
      artifacts: true
  script:
    - smalltalkci -s "Moose64-10"

This stage creates a new Moose64-10 image and performs the CI based on the .smalltalk.ston configuration file.

For GitHub, the last step is the creation of the .github/workflows/test.yml file.

In addition to a common smalltalk-ci workflow, we have to configure the checkout step differently and add a step that parses the code.

For the checkout step, GitBridge (and more specifically Iceberg) needs the history of commits. Thus, we need to configure the checkout action to fetch the whole history.

- uses: actions/checkout@v3
  with:
    fetch-depth: '0'

Then, we can add a step that runs VerveineJ using its docker version.

- uses: addnab/docker-run-action@v3
  with:
    registry: hub.docker.io
    image: badetitou/verveinej:v3.0.0
    options: -v ${{ github.workspace }}:/src
    run: |
      cd tests
      /VerveineJ-3.0.0/verveinej.sh -format json -o output.json -alllocals -anchor assoc .
      cd ..

Note that before running VerveineJ, we change the working directory to the tests folder to better deal with source anchors of Moose.

You can find a full example in the FamixJavaModelUpdater repository.

The last step is to adapt your tests to use the model produced from the mock source. To do so, we can replace the manual creation of the mock model with loading the generated model.

Here’s an example:

externalFamixClass := FamixJavaClass new
    name: 'ExternalFamixJavaClass';
    yourself.
externalFamixMethod := FamixJavaMethod new
    name: 'externalFamixJavaMethod';
    yourself.
externalFamixClass addMethod: externalFamixMethod.

myClass := FamixJavaClass new
    name: 'MyClass';
    yourself.
externalFamixMethod declaredType: myClass.

famixModel addAll: {
    externalFamixClass.
    externalFamixMethod.
    myClass }.

The above can be converted into the following:

FJMUBridge testsResources / 'output.json' readStreamDo: [ :stream |
    famixModel importFromJSONStream: stream ].
famixModel rootFolder: FJMUBridge testsResources pathString.

externalFamixClass := famixModel allModelClasses detect: [ :c | c name = 'ExternalFamixJavaClass' ].
myClass := famixModel allModelClasses detect: [ :c | c name = 'MyClass' ].
externalFamixMethod := famixModel allModelMethods detect: [ :c | c name = 'externalFamixJavaMethod' ].

You can now test your code on a model generated the same way as a real-world model!

This solution clearly slows down test performance. However, it ensures that your mock model is correctly built, because it is created by the actual parser tool (importer).

A good testing practice is thus a mix of both solutions: classic tests in the analysis code, and full-scenario tests based on real resources.

Have fun testing your code now!

Thanks to C. Fuhrman for the typo fixes. 🍌