Connecting/Extending meta-models

Sometimes, a meta-model does not have all the information you want. Or, you want to connect it with another one. A classic example is linking a meta-model at an abstract level to a more concrete meta-model.

In this blog post, I will show you how to extend and connect a meta-model with another one (or reuse a pre-existing meta-model in your own). We will use the Coaster example.

The Coaster meta-model is super great (I know… it is mine 😄 ). Using it, one can manage their collection of coasters.

However, did you notice that there is only one possible kind of Creator: the brewery? That is not great, because some coasters are not created by breweries but for events. My model is not able to represent this situation. So, there are two possibilities: I can fix my meta-model, or I can extend it with a new concept. Here, we will see how I can extend it.

Extended Coaster meta-model

As presented in the above figure, we add the Event concept as a kind of Creator.

As a first step, we need the original Coaster meta-model generator loaded in our image. We can download it from my Coaster GitHub repository.

You should now have a class named CoasterCollectorMetamodelGenerator in your image. This is the generator of the original meta-model. We will now create another generator reusing the original one.

First, we create a new generator for our extended meta-model.

FamixMetamodelGenerator subclass: #CoasterExtendedMetamodelGenerator
instanceVariableNames: ''
classVariableNames: ''
package: 'CoasterCollector-ExtentedModel-Generator'

Then, we link our generator with the original one. To do so, we use the submetamodels feature of the generator. We only have to implement the #submetamodels method on the class side of our new generator. This method should return an array containing the generators of the submetamodels we want to reuse.

CoasterExtendedMetamodelGenerator class >> #submetamodels
^ { CoasterCollectorMetamodelGenerator }

Finally, as for a classic meta-model generator, we define a package name and a prefix.

CoasterExtendedMetamodelGenerator class >> #packageName
^ #'CoasterExtended-Model'
CoasterExtendedMetamodelGenerator class >> #prefix
^ #'CCE'

Creating new concepts in the new meta-model follows the same approach as for a classic meta-model generator. In our example, we add the Event class. Thus, we create the method #defineClasses with the new entity.

CoasterExtendedMetamodelGenerator >> #defineClasses
super defineClasses.
event := builder newClassNamed: #Event.

To extend the original meta-model, we first need to identify the entities of the original meta-model we want to extend. In our case, we only extend the Creator entity. Thus, we declare it in the #defineClasses method, using the method #remoteEntity:withPrefix:. The prefix is used to disambiguate entities that come from different submetamodels but share the same name.

CoasterExtendedMetamodelGenerator >> #defineClasses
super defineClasses.
event := builder newClassNamed: #Event.
"Remote entities"
creator := self remoteEntity: #Creator withPrefix: #CC

Note that we refer to a remote entity by sending #remoteEntity:withPrefix: to self, not to the builder. Indeed, the entity has already been created.

Once the declaration is done, one can use the remote entities like classic entities in the new generator. In our example, we create the hierarchy between Creator and Event.

CoasterExtendedMetamodelGenerator >> #defineHierarchy
super defineHierarchy.
event --|> creator

Once everything is defined, as for a classic generator, we generate the meta-model. To do so, execute in a playground:

CoasterExtendedMetamodelGenerator generate

The generation creates a new package with the Event entity. It also generates a class named CCEModel used to create an instance of our extended meta-model.

It is now possible to use the new meta-model with the Event concept. For instance, one can perform the following script in a playground to create a little model.

myExtendedModel := CCEModel new.
myExtendedModel add: (CCBrewery new name: 'Badetitou'; yourself).
myExtendedModel add: (CCEEvent new name: 'Beer party'; yourself)

We saw that one can extend a meta-model by creating a new one based on the pre-existing entities. It is also possible to connect two existing meta-models together.

To do so, let's assume we have two existing meta-models to be connected. As an example, we will connect our coaster meta-model with the world meta-model. The world meta-model aims to represent the world, with its continents, countries, regions and cities.

We will not detail how to implement the world meta-model. But the generator is available in my GitHub repository. The figure below illustrates the meta-model.

World meta-model

Connecting world meta-model with Coaster meta-model


Our goal is to connect the coaster meta-model with the world meta-model. To do so, we will connect the country concepts of each meta-model.

Connected meta-model

As a first step, you should install both the coaster meta-model and the world meta-model. Again, both are available in my GitHub repository.

Then, we create a new meta-model generator that will perform the connection.

FamixMetamodelGenerator subclass: #ConnectMetamodelGenerator
instanceVariableNames: ''
classVariableNames: ''
package: 'Connect-Model-Generator'

To connect the two meta-models together, we must first declare them in our connector meta-model. To do so, we define the #submetamodels method.

ConnectMetamodelGenerator class >> #submetamodels
^ { WorldMetamodelGenerator . CoasterCollectorMetamodelGenerator }

And, as for every meta-model generator, we define a prefix and a package name.

ConnectMetamodelGenerator class >> #packageName
^ #'Connect-Model'
ConnectMetamodelGenerator class >> #prefix
^ #'CM'

Before creating the connection, we must declare, in the new meta-model, the entities that will be connected. To do so, we declare them as remote entities.

ConnectMetamodelGenerator >> #defineClasses
super defineClasses.
coasterCountry := self remoteEntity: #Country withPrefix: #CC.
worldCountry := self remoteEntity: #Country withPrefix: #W

Then, it is possible to connect the two entities as if they were classic ones.

ConnectMetamodelGenerator >> #defineRelations
super defineRelations.
coasterCountry - worldCountry

Build a model with two connected submetamodels


Once the generator is created, we can generate the connection by generating the new meta-model. To do so, execute in a playground:

ConnectMetamodelGenerator generate

Then, it is possible to create a model with all the entities and to link the two meta-models. In the following, we present a script that creates a model.

"create the entities"
coaster1 := CCCoaster new.
coaster2 := CCCoaster new.
coaster3 := CCCoaster new.
coasterFranceCountry := CCCountry new name: #'France'; yourself.
coasterFranceCountry addCoaster: coaster1.
coasterFranceCountry addCoaster: coaster2.
coasterGermanyCountry := CCCountry new name: #'Germany'; yourself.
coasterGermanyCountry addCoaster: coaster3.
wFranceCountry := WCountry new name: #'France'; yourself.
wGermanyCountry := WCountry new name: #'Germany'; yourself.
continent := WContinent new name: #Europe; yourself.
continent addCountry: wFranceCountry.
continent addCountry: wGermanyCountry.
"connect CCountries to WCountries"
coasterFranceCountry country: wFranceCountry.
coasterGermanyCountry country: wGermanyCountry.
"put all entities into the same model"
connectedModel := CMModel new.
connectedModel addAll:
{ coaster1. coaster2 . coaster3 .
coasterFranceCountry . coasterGermanyCountry .
wFranceCountry . wGermanyCountry . continent }.

Based on the preceding model, it is possible to create queries that span both the coaster and the world meta-models. For instance, the following snippet counts the number of coasters per country on the European continent:

europe := (connectedModel allWithType: WContinent)
detect: [ :continent | continent name = #Europe ].
(europe countries collect: [ :eCountry |
eCountry name -> eCountry country coasters size ]) asDictionary

The country: and country methods are accessors that allow setting and retrieving the CCCountry (resp. WCountry) associated with a WCountry (resp. CCCountry). The accessor names are the same in both classes and were generated automatically from the declaration of the relationship in defineRelations (this is normal behaviour of the generator, not specific to using submetamodels).
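For instance, with the little model built above, the link can be navigated in both directions (a small sketch reusing the variables from the previous script):

coasterFranceCountry country. "the connected WCountry, here wFranceCountry"
wFranceCountry country. "back to the CCCountry, here coasterFranceCountry"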

In this post, we saw how one can extend and connect meta-models using the Famix generator. This feature is very helpful when you need to improve a meta-model without modifying it directly. If you need more control over the generated entities (e.g., the names of the relations, etc.), please have a look at the create meta-model wiki page.

How to build a new Moose tool: The MooseInspector

To create a new Moose tool, you must create a subclass of MiAbstractBrowser. This abstract class contains the basic infrastructure common to all Moose browsers. It provides a toolbar with: buttons to inspect and propagate the current selection; radio buttons to choose a reception mode; and a help button that shows the class comment of each browser.

"MiAbstactBrowser toolbar"

Also, it provides the logic to connect the browser to the Moose bus.

So, let us get started. We will create a “Moose Inspector”. It will be like Pharo's inspector, but as a Moose browser. First, we create the subclass as follows:

MiAbstractBrowser subclass: #MiInspectorBrowser
instanceVariableNames: 'stInspector'
classVariableNames: ''
package: 'Moose-Core-Inspector'

As one can see, it has one instance variable which will hold an instance of Pharo’s inspector: StInspector.

Now, we must implement some basic methods. First, let us implement the initializePresenters method:

initializePresenters
super initializePresenters.
stInspector := self instantiate: StInspector.
stInspector model: self model

We initialize the stInspector variable with an instance of Pharo's inspector. Then we set the inspector's model to be the same as the browser's model.

Now we are going to implement the canReceiveEntity: method. This method returns a Boolean telling whether the entities received on the bus are usable in this browser. As we are building an inspector, all entities can be accepted. So, we always return true.

canReceiveEntity: anEntity
^ true

Then, we must implement the followEntity: method. This method is part of the bus mechanism of MiAbstractBrowser: it defines the behaviour of the browser when new entities arrive from the bus, and it is only called if canReceiveEntity: returns true. In our case, we only need to update the inspector model with the new entity.

followEntity: anEntity
self model: anEntity.
stInspector model: self model

Next, the miSelectedItem method tells the bus what to propagate (when the user hits the “Propagate” button). In this case we want to propagate the object selected in the last inspector page.

miSelectedItem
| lastInspectorPage |
lastInspectorPage := stInspector millerList pages last.
^ lastInspectorPage model inspectedObject

Now that we have all the logic, we can define the layout of this new browser. Spec, the framework used to build GUIs in Pharo, lets us implement dynamic layouts. So, we create an initializeLayout method. In that method, we take the layout of the superclass, which is the toolbar, and we add the other presenter below it.

initializeLayout
self layout: (SpBoxLayout newTopToBottom
add: self class defaultSpec expand: false;
add: stInspector;
yourself)

And do not forget to call this at the end of initializePresenters.

initializePresenters
super initializePresenters.
stInspector := self instantiate: StInspector.
stInspector model: self model.
self initializeLayout

Finally, we can define the default model on which the browser will open, in case the bus does not carry any entities yet. We want the Moose models, so we create on the class side:

newModel
^ MooseModel root entities

We are ready to go: all we must do now is run MiInspectorBrowser open in a Playground, and our new browser will open. Optionally, we can also override the class-side methods title and windowSize.
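As a sketch, such overrides could look like this (the window title and size below are only illustrative values, not those of the actual browser):

MiInspectorBrowser class >> #title
^ 'Moose Inspector'

MiInspectorBrowser class >> #windowSize
^ 850 @ 600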

"Moose Inspector"

How to add new tabs in the Moose Inspector Browser

The new browser is not very useful yet. We want to add some custom tabs to inspect the entities in a more visual way. To do so, we can add custom inspector tabs. When it displays an object, the inspector looks for methods of that object that have the <inspectorPresentationOrder:title:> pragma. Such a method should return a Spec presenter that will be included as a tab in the inspector.

We will migrate the old “Properties” tab found in MooseFinder. The code is in MooseObject>>#mooseFinderPropertiesIn:. We only have to rewrite it using the Spec framework and the <inspectorPresentationOrder:title:> pragma. We will create a method in MooseObject called inspectorPropertiesIn.

inspectorPropertiesIn
<inspectorPresentationOrder: 2 title: 'Properties'>
^ SpTablePresenter new
items: self mooseInterestingEntity mooseDescription allPrimitiveProperties;
sortingBlock: [ :x :y | x name < y name ];
addColumn: (SpStringTableColumn title: 'Properties' evaluated: #name);
addColumn: (SpStringTableColumn title: 'Value' evaluated: [ :each | (self mooseInterestingEntity mmGetProperty: each) asString ]);
yourself

Because the method is defined in MooseObject, any subclass (MooseModel, MooseEntity, …) will have this tab when displayed in the inspector.

That is it! Now we run MiInspectorBrowser open again and we will see that the new tab appears.

"Moose Inspector"

Dependency Structure Matrix for a Java project using Moose

As an extension to Analyzing Java With Moose, in this post I will show how one can create a Design Structure Matrix (DSM) in Moose, in particular from a model of a Java project.

First, you need to generate and load an MSE file into Moose for a Java project. Refer to this post for those steps; it uses the Java code from Head First Design Patterns.

Roassal (a visualization platform that is part of Moose) has a DSM visualization called RTDSM. It is explained here with Pharo classes; how can we use it with Moose on a Java model?

The key is in the dependency: block, which we define using a Moose Query with allClients. Open a Moose Playground and paste the following Pharo code:

| dsm classes |
dsm := RTDSM new.
classes := (MooseModel root first allModelClasses
select: [ :c |
c mooseName
beginsWith:
'headfirst::designpatterns::combining::decorator' ])
reject: #isAnonymousClass.
"Avoid arbitrary ordering by sorting"
dsm objects: (classes asSortedCollection: [ :a :b | a name < b name]).
"Change the default label from asString which is very long"
dsm labelShapeX label text: #name.
dsm labelShapeY label text: #name.
"Moose Query equivalent to #dependentClasses for a Pharo class"
dsm dependency: #allClients.
dsm rotation: 270.
^dsm

Roassal DSM visualization

This visualization may not be as powerful as IDEA’s DSM analysis or Lattix’s, but it’s open source and can be manipulated in Pharo.

In Moose Query, the opposite to allClients is allProviders.
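So, if you want to read the matrix from the providers' point of view instead, the only change to the script above would be the dependency block (a tiny variation, everything else stays the same):

dsm dependency: #allProviders.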

Analysis of Sindarin API: a modular Moose use case

How is our API used? This is the question the Pharo debugging team came to me with. They needed to know whether it is used as they intended or whether there are improvements to make. To answer that question, we used some of the new Moose browsers.

The Pharo debugging team works on Sindarin, an API that eases the expression and automation of debugging tasks. Sindarin is made of two layers:

  • The “technical” API is the minimal API needed to interact with a debugging kernel (the Pharo kernel here).
  • The “scripting” API is here to ease the writing of debugging scripts without having to manipulate low-level concepts.

"Sindarin API"

Now, that’s the theory. In reality, the debugging team realized that these two layers are not really respected: users bypass the scripting API to use technical methods and the debugging kernel methods directly. We used Moose to analyze Sindarin API usage, to help understand if, and how, it should be improved.

Moose has recently been upgraded and now offers more modularity. Users can analyze their model through new specialized browsers that let them navigate and visualize their model entities. Entities are propagated from one browser to the others, dynamically updating their contents.

Helping the debugging team was an opportunity to test some of these browsers under operating conditions.

Analysis of the interactions of the users scripts with Sindarin and with the debugging kernel


To analyze Sindarin, we built a model consisting of 3 key elements:

  • Users' debugging scripts: script methods gathered in 4 classes (green in the figure above)
  • Sindarin: methods from the SindarinDebugger class (both APIs, yellow and blue in the figure)
  • Pharo debugging kernel (pink in the figure)

Our goal was to understand how the users’ scripts use Sindarin and if they interact directly with the debugging kernel.

We imported the model directly from Pharo, using the Model Root browser. This browser is the entry point to modular Moose. It shows the list of installed models and allows importing new ones from MSE files or from Pharo.

"Model Root Browser"

Once the model was imported, it was automatically propagated in modular Moose, to other browsers.

Querying the model to select key elements from the model


To explore the debugging scripts, Sindarin and the debugger kernel - and the interactions between them - we used the Query browser.

The purpose of this browser is to query a model or a group of entities. It can filter entities according to their type or the value of their properties. It can also use MooseQuery to navigate associations (inheritances, invocations, accesses, and references) and scopes (containment).

"Queriest Browser"

The left pane of the browser is a visualization of the queries that were made. In the image above, no queries have been made yet, so the left pane only shows the model on which queries will apply. The right pane of the browser shows the result of the selected query and the code used to build it.

We obtained the 3 elements relevant to our analysis (users' scripts, Sindarin, and the debugger kernel) by combining simple queries:

  • A Type query (Classes) gave us all the classes in the model, then
  • a property query (name beginsWith: 'SindarinScript') gave us the 4 script classes.

The two images below show how this last query was constructed:

"Query Scripts - Step 1"

"Query Scripts - Step 2"

Queries to get Sindarin API, i.e. methods of the class SindarinDebugger:
  • A type query (Classes), gave us all classes in the model, then
  • a property query (name = 'SindarinDebugger'), gave us the Sindarin API class.
  • finally, a scope query (scope down to methods) gave us all Sindarin methods, i.e. the API.
Queries to get methods from the debugging kernel:

The image below shows the sequence of queries created to obtain methods from the Context class and the Debugger-Model package:

  • Two Type queries (Classes and Packages, green on right), gave us all classes and packages in the model, then
  • two property queries (name = 'Context' and name = 'Debugger-Model', blue), gave us the Context class and the Debugger-Model package, then
  • we made the union (pink) of the class and package, and finally
  • a scope query (scope down to methods, turquoise, bottom right query) gave us the methods they contain.

"Kernel methods"

Saving queries combinations as custom queries


We saved these combinations of simple queries as custom queries, as shown in the image below:

  • SindarinScript classes are the 4 script classes.
  • SindarinDebugger methods are the Sindarin API, i.e. methods from SindarinDebugger class
  • Debugger Kernel methods are methods from the debugging kernel, that should not be called directly from the scripts.

"Saved queries"

How do scripts interact with Sindarin and the debugging kernel?


We added a navigation query to obtain the methods that are called in the scripts. We then narrowed this result down by keeping only the methods that come from Sindarin or from the debugging kernel.

To find which methods are called in the users' scripts, we queried the outgoing invocations of the script classes. We obtained a group of 331 candidate methods whose signatures match messages sent in the scripts:

"Script outgoing"

The next step was to compare this group of candidates with the methods from Sindarin scripting API. We established with the debugging team that scripting methods should have "scripting" as comment. This convention allowed us to get them with a property query (blue query in the figure below).

We queried for the intersection between the scripting API and the group of candidates invoked from the users scripts (salmon query at the bottom left in the figure below). This gave us 26 methods that are called from the scripts, out of 45 in the Sindarin scripting API:

These 26 methods, the result of the last query performed, are listed in the top right pane of the Query browser.

"API called in scripts"

Sindarin methods that are not used in the scripts


It was important for the debugging team to know the other 19 scripting API methods that are not called from the scripts. We obtained them by querying the difference between Sindarin methods and the group of candidates (purple query at the bottom left in the figure below):

"API not called in scripts"

Again the 19 resulting methods are listed in the “Result” pane.

The debugging team now knows which methods from their scripting API have been useful for users and, most importantly, which methods have not.

Debugging kernel methods called from the scripts


We also compared the group of candidates invoked from the users' scripts with the methods of the debugging kernel and obtained 15 methods. These methods are used directly in the users' scripts without going through Sindarin:

"Kernel in scripts"

Identifying the scripts that use the debugging kernel:


Let's get back to the fact that outgoing invocations gave us a group of candidate methods. When several methods share the same signature, we cannot know for certain which method is actually called.

For example: Sindarin defines #arguments, which is also defined in Context.

Sindarin defines:

arguments
^ self context arguments

that should be used as follows:

sindarin := SindarinDebugger new.
"some steps"
(sindarin arguments at: 1) doSomething.

We wanted to detect cases where the users did not use Sindarin, like in the following:

sindarin := SindarinDebugger new.
"some steps"
(sindarin context arguments at: 1) doSomething.

This is a simple example of bad practice. In more complex cases, interacting directly with the kernel would force the user to write verbose scripts and increase the risk of bugs. Why not benefit from a tested API when you can?

To detect these cases, we used a convention of the users' scripts: each time a call to the Sindarin API is made, the receiver is a variable named sindarin. This means that invocations whose receiver is not sindarin are probably cases where the user bypassed the Sindarin API and called the debugging kernel directly.

We inspected the 15 debugging kernel methods that may be used directly in the scripts and selected the ones that are sent from the scripts to a receiver other than sindarin. This was done in Pharo (not the Query browser), using MooseQuery:

self select: [ :method |
method incomingInvocations anySatisfy: [ :invocation |
(invocation sender parentType name beginsWith: 'SindarinScript')
and: [ invocation receiver isNil
or: [ invocation receiver name ~= 'sindarin' ] ] ] ]

We narrowed it down to 11 methods of the debugging kernel that are called from the scripts:

"Kernel called in scripts"

We then collected the scripts where these methods are called:

self flatCollectAsSet: [ :method |
method incomingInvocations
select: [ :invocation |
(invocation sender parentType name beginsWith: 'SindarinScript')
and: [ invocation receiver isNil
or: [ invocation receiver name ~= 'sindarin' ] ] ]
thenCollect: #sender ]

We obtained 15 user script methods that need to be investigated manually. Moose cannot determine which object will receive a message when the script is executed. The debugging team will analyze the statements in the scripts where the probable kernel methods are invoked. They will run the debugging scripts to identify at runtime which objects are the receivers.

Exploring these 15 scripts will help the debugging team to understand whether:

  • Users did not use Sindarin correctly: they interacted directly with the debugging Kernel, even though Sindarin methods are available,
  • Sindarin API is incomplete: users had no choice but to interact with the debugging kernel because Sindarin does not provide the necessary methods,
  • or whether there are other reasons why the API is not used as intended.

"Scripts calling kernel"

With this experiment, we were able to provide valuable information to the debugging team. In return, we got suggestions of improvements to be implemented in modular Moose tools.

To know how many times a method is called, we need the number of invocations of that method; that means querying the size of a query result. The Query browser does not allow this yet, and it should.
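For a single method, this boils down to something like the following one-liner (a sketch using the same incomingInvocations query as above):

aMethod incomingInvocations size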

To get the 15 users' scripts that need to be explored, we had to inspect a group of methods and run a selection manually (in Pharo code). This should be possible directly in the Query browser: users should be able to build a query that executes a query script.

This experiment will help the Pharo debugging team improve Sindarin. They will be able to make it more complete and better documented, so that users continue to develop debugging scripts.

In the meantime, we will improve modular Moose to widen the querying possibilities. Then, we will challenge these new features: we will use them to analyze the upgraded Sindarin API to document its evolution.

Don't like Windows? Use WSL!

Recently, I've been using a Moose utility (MooseEasyUtility) to ease the creation and loading of a Moose model. In turn, this utility uses LibC to send various commands to the operating system, for example to call the VerveineJ parser. However, when using this library, I encountered a problem on my Windows machine. The problem was caused by an encoding issue between the Java parser (VerveineJ) and the file reader in Pharo. This is Windows-specific, as there was no issue when running my program on a Linux-based machine.

Sometimes you have to use the Windows OS, like me, or you want to build an interoperable application that runs on multiple operating systems. Your Pharo program may eventually run on a Windows or Linux-based OS; however, you may not be familiar with Windows. Luckily, Microsoft has finally opened up its system by introducing the Windows Subsystem for Linux (WSL).

WSL is a compatibility layer for running Linux binary executables (in ELF format) natively on Windows 10. Furthermore, in 2019, Microsoft introduced WSL 2, which included important changes such as a real Linux kernel, through a subset of Hyper-V features. The important thing to remember is that it allows us to use bash commands on Windows 10.

There are two ways to install WSL on your machine. The simplified installation requires that you join the Windows Insider Program and run the command wsl --install. The manual installation is a bit more complicated but the steps are detailed on docs-microsoft.

Once installed you may access WSL by either launching WSL through the search bar or by running your favorite terminal and using the command wsl.

"search"

You can also type wsl in your terminal to access the WSL directly.

"search"

The next step after installing WSL on your machine is to launch your favorite image of Pharo. From there you can use the class LibC to run some commands:

LibC resultOfCommand: 'ls'.
LibC resultOfCommand: 'wsl ls'.

As you will see, when the first command is sent, the terminal returns an error as Windows does not recognize the ls command. However, when you execute ls with the wsl command, it can now successfully display the list of files in the current directory.

Let's come back to our problem with Moose-Easy. Moose-Easy is a library to facilitate access to Moose, e.g. to generate MSE files from source code. Originally, I wanted to use Moose-Easy to parse and generate a model of the Java project I was working on. In my project, I passed the path to the source code, the parser, and any project dependencies I wanted to analyze.

generate
"Generates this model importer's model from its origin into its configured model path, using its parser"
MooseEasyUtility
createJavaMSEOn: self originSrcPath asFileReference asPath
using: self parser path asFileReference pathString
named: self modelPath asFileReference pathString
withDependencies: self dependenciesFolder

This method uses MooseEasyUtility to generate the model as an MSE file which is later imported. Then, the import method from my project imports the MSE file as a Moose model.

import
"Imports this model importer's model from its configured model path"
self modelPath asFileReference readStreamDo: [ :stream |
model := MooseModel importFromMSEStream: stream.
model
rootFolder: self originPath;
install.
^ model
]

An important feature of the Moose model is the ability to access/visualize the source code of an entity. To enable this, during the generation of the MSE file, the parser (here VerveineJ) creates a SourceAnchor which records the start and end positions of each entity within a specific file. However, when manipulating the model, I noticed that the source anchors' start/end positions were shifted. This resulted in accessing the wrong portions of the source code for each entity. After investigating this shift, it became clear it was due to an encoding issue with the carriage returns: the parser read the carriage returns differently than the Moose library did when accessing the file.

To fix this issue without modifying the parser, I used WSL to run the parser. More precisely, I replaced the direct call to the parser executable with a call to wsl running the parser. The following instruction of MooseEasyUtility was changed from:

command := 'cd "' , revPath asFileReference pathString , '" && "'
, pathToJavaToFamixCommand , '" ' , verveineJOptions , ' > "'
, javaToFamixOutput fullName , '"' , ' 2> "'
, javaToFamixErrors fullName , '"'.
result := LibC runCommand: command.

to:

command := 'cd "' , revPath asFileReference pathString , '" && wsl"'
, pathToJavaToFamixCommand , '" ' , verveineJOptions , ' > "'
, javaToFamixOutput fullName , '"' , ' 2> "'
, javaToFamixErrors fullName , '"'.
result := LibC runCommand: command.

Notice that instead of calling the JavaToFamixCommand (i.e. VerveineJ) directly, it is now called through wsl (end of the first line). This allowed me to generate an MSE file that contained the correct start/end positions.

Additionally, it is important to remember that if an absolute path is passed to wsl, it must be formatted according to Linux and WSL conventions. On Windows, the path to a file may look like this: C:\Users\Pascal.ZARAGOZA\eclipse-workspace, while the WSL absolute path will look like this: /mnt/c/Users/Pascal.ZARAGOZA/eclipse-workspace. This is why I recommend using relative paths whenever possible when mixing WSL and Windows commands.
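If you do have to pass an absolute Windows path, WSL ships with the wslpath utility, which can be called like any other command (a small sketch reusing the path above):

"wslpath converts a Windows path into its WSL equivalent; note the doubled single quotes inside the Pharo string"
LibC resultOfCommand: 'wsl wslpath ''C:\Users\Pascal.ZARAGOZA\eclipse-workspace'''.
"-> /mnt/c/Users/Pascal.ZARAGOZA/eclipse-workspace"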

If you are interested in analyzing your Java project with Moose 8, there is a detailed blog post by Christopher Fuhrman on the subject. It covers the whole process of using MooseEasyUtility to clone, parse, and load the corresponding model of a Java project. It also demonstrates how to both visualize and analyze the Moose model.