2018/11/16

Introduction to the Meta’16 Workshop Special Issue

Filed under: Announcement,Issue TOC,Special Section Editorial — Alfonso Pierantonio @ 18:25

Guest Editors: Elisa Gonzalez Boix (Vrije Universiteit Brussel), Stefan Marr (University of Kent)

***

This special issue presents a selection of the best papers of the Workshop on Meta-Programming Techniques and Reflection 2016 (Meta’16). Meta’16 was held in Amsterdam, The Netherlands, in October 2016, co-located with SPLASH’16.

Meta is an ACM SIGPLAN workshop for discussing research on metaprogramming and reflection, as well as for users building applications, language extensions, or software tools with them.
The changing hardware and software landscape and the increased heterogeneity of systems make metaprogramming once more an important research topic for handling the associated complexity. The scope of the workshop includes a wide range of topics related to the design, implementation, and application of metaprogramming techniques, as well as empirical studies on, and typing for, such systems and languages.

The workshop welcomes mature contributions as well as work in progress. A formal refereeing process selects a high-quality set of papers from those submitted to the workshop. Mature contributions are formally published in the workshop proceedings, which appear electronically in the ACM Digital Library; the remaining papers are published informally on the workshop website. The JOT versions of the papers offered the authors an opportunity to take a longer-term view of their research and to present new results obtained since the original presentation at Meta’16.

Meta’16 received 14 submissions, of which 7 full papers and 3 short papers were accepted and presented at the workshop. The guest editors of this special issue selected seven papers from the workshop and invited their authors to submit an extended version including at least 30% new material. All submissions were reviewed by at least three reviewers and followed the JOT reviewing process until a final decision was reached for each paper.

After the journal’s rigorous reviewing process, the editors of this special issue finally selected the following two papers for publication:

  • Yutaro Tsunekawa, Taichi Tomioka, and Kazunori Ueda. “Implementation of LMNtal Model Checkers: a Metaprogramming Approach.” This paper discusses a metaprogramming approach, based on a meta-circular interpreter, for prototyping model checkers.
  • Pablo Tesone, Guillermo Polito, Noury Bouraqadi, Stéphane Ducasse, and Luc Fabresse. “Dynamic Software Update from Development to Production.” This paper discusses a dynamic software update solution suitable for live programming environments.

As editors of this special issue, we hope you will enjoy the selection of papers. We would like to sincerely thank the Meta’16 program committee and the anonymous referees who provided extensive feedback on the submitted papers; their reviews helped both the authors and us, the guest editors, to improve the quality of the submissions.


2016/06/20

CIBSE Special Section

Filed under: Special Section Editorial — Tijs van der Storm @ 14:03

This special section contains three extended and peer-reviewed papers from the 18th edition of the Ibero-American Conference on Software Engineering (CIBSE), held in Lima, Peru, from April 22 to 24, 2015. CIBSE was conceived as a space dedicated to the dissemination of research results and activities, encouraging dialogue between scientists, educators, professionals and students of Software Engineering.

CIBSE consists of three tracks. Issues related to Requirements Engineering were covered in the WER track, Experimental Software Engineering topics were handled by the ESELAW track, and all issues related to the software production process and contemporary approaches to automation and quality improvement were discussed in the SET track. For this special section, we selected the best paper from each track; the papers were extended and reviewed in two rounds, each refereed by three well-known experts in the field. The selected papers are described as follows:

  • Sergio Miranda, Elder Rodrigues, Marco Tulio Valente, and Ricardo Terra in their paper entitled “Architecture Conformance Checking in Dynamically Typed Languages” present an architectural conformance and visualization approach based on static code analysis techniques and a lightweight type propagation heuristic. The main idea of the paper is to provide the developer community with means to control the architectural erosion process by reporting architectural violations and visualizing them in high-level architectural models, such as reflexion models and DSMs. The approach is supported by the ArchRuby tool.
  • Christian Quesada-Lopez and Marcelo Jenkins in their paper entitled “Function Point Structure and Applicability: A Replicated Study” report on a family of replications carried out on a subset of the ISBSG R12 dataset to evaluate the structure and applicability of function points. The goal of this replication is to aggregate evidence about internal issues of function point analysis (FPA) as a metric, and to confirm previous results using a different dataset. The results aggregated evidence and confirmed that some base functional components (BFCs) of the FPA method are correlated. A prediction model based on transactions or external inputs appears to be as good as a model based on UFP. Simplifying the FPA measurement procedure to count only a subset of BFCs could improve measurement efficiency and simplify prediction models, saving measurement effort while preserving the accuracy of effort estimates.
  • Leandro Antonelli, Gustavo Rossi, and Alejandro Oliveiros in their paper entitled “A Collaborative Approach to Describe the Domain Language through the Language Extended Lexicon” propose an approach to specifying a domain language for capturing requirements collaboratively using the Language Extended Lexicon. Defining a domain-specific language for specifying requirements is a way to reduce incompleteness and to deal with the conflicts that arise in requirements contexts. The authors rely on collaboration to foster the involvement and cooperation of the stakeholders, so that the stakeholders can explore their differences constructively and build a common understanding of the domain language beyond their own limited views.

We hope the readers enjoy these three papers and find them relevant and useful. Finally, we would like to thank the CIBSE organizers, the authors, the reviewers and the JOT editorial board for making this special section possible.

PC Chairs

  • João Araújo, NOVA LINCS, Universidade NOVA de Lisboa, Portugal
  • Nelly Condori Fernandez, VU Univ. Amsterdam, The Netherlands

SET Track Chairs 

  • Nelly Bencomo,  Aston University, UK
  • Toacy Oliveira, COPPE, Universidade Federal do Rio de Janeiro, Brazil

WER Track Chairs

  • Jose Luis de La Vara, Carlos III University of Madrid, Spain
  • Isabel Brito, Instituto Politécnico de Beja, Portugal

ESELAW Track Chairs 

  • Miguel Goulao, NOVA LINCS, Universidade NOVA de Lisboa, Portugal
  • Santiago Matalonga, Universidad ORT Uruguay,  Uruguay

2016/04/03

Handing Over the Reins

Filed under: Uncategorized — Laurence Tratt @ 16:45

Editing JOT has been a huge pleasure. I’ve interacted with many great people and read many great papers (and, I admit, giggled at one or two spam submissions). Looking back, I’m particularly pleased that I was able to help make JOT fully open-access, using CC licenses that allow authors to retain copyright on their articles. Open-access journals are becoming more common now, but they are far from the majority. Journals like JOT — which, lest we forget, has been open-access since its start many years ago — thus remain trailblazers.

It’s now time for me to hand over the reins. I’m pleased to say that the next JOT EIC is Tijs van der Storm. I have been fortunate enough to know Tijs for a number of years. His energy, good taste, and good humour will help ensure that JOT continues to provide a valuable service for both authors and readers. I wish him all the best, and encourage everyone reading this editorial to submit their best quality work to JOT. After all, we provide the platform, but you provide the content!

Laurence Tratt, March 2016

2015/08/11

VOLT 2012 / 2013 Special Edition

Filed under: Editorial — admin @ 13:17

This JOT special section contains three extended and peer-reviewed papers from the first and second editions of the International Workshop on Verification Of modeL Transformation (VOLT). The first edition of VOLT was held on April 21st, 2012 in Montreal, Canada as a satellite event of the 5th International Conference on Software Testing, Verification and Validation (ICST 2012). The second edition was held on June 17th, 2013 in Budapest, Hungary as a satellite event of the Federated Conferences on Software Technologies: Applications and Foundations (STAF 2013).

Model transformations are everywhere in software development, implicitly or explicitly. They became first-class citizens with the advent of Model-Driven Engineering (MDE). Despite some recent activity in the field, work on the verification of model transformations remains scattered, and a clear perspective on the subject is still not in sight. Moreover, current model transformation tools often lack verification techniques to support such activities. The goal of VOLT is to offer researchers a dedicated forum to classify, discuss, propose, and advance verification techniques dedicated to model transformations. VOLT promotes discussions between theoreticians and practitioners from academia and industry. A significant part of each workshop edition is a forum for discussing practical applications of model transformations and their verification, including interesting properties to verify and efficient techniques to actually compute those properties.

For this special section, we selected three papers by means of at least two rounds of reviews. All papers were refereed by four well-known experts in the field. The selected papers are the following:

  • Moussa Amrani, Benoit Combemale, Levi Lucio, Gehan Selim, Juergen Dingel, Yves Le Traon, Hans Vangheluwe and James Cordy in their paper entitled “Formal Verification Techniques for Model Transformations: A Tridimensional Classification” discuss the evolution, trends, and current practices in model transformation verification found in the literature from three viewpoints: the transformations, their properties, and the verification techniques.
  • David Lindecker, Gabor Simko, Tihamer Levendovszky, István Madari and Janos Sztipanovits in their paper entitled “Validating Transformations for Semantic Anchoring” present a technique to validate that a domain-specific language satisfies the intentions that the designer had in mind when engineering the language. The approach consists of validating the consistency between a formalization of intention of a language designer and the semantic mapping of the language, the latter being expressed as a formal model transformation.
  • Rick Salay, Marsha Chechik, Michalis Famelis and Jan Gorzny in their paper entitled “A Methodology for Verifying Refinements of Partial Models” present a technique to verify how uncertainty present in models and transformations is reduced after refining models and model transformations.

We would like to thank everyone who has made this special section possible. In particular, we are obliged to all past VOLT organizers, to the reviewers for giving of their time to thoroughly and thoughtfully review papers multiple times, to the authors for contributing to VOLT and JOT with high-quality papers, and to the JOT editorial board for making this special section possible.

Eugene Syriani, University of Montreal (Canada)
Manuel Wimmer, Vienna University of Technology (Austria)

2015/04/09

Volume 14 issue 1 now live

Filed under: Announcement — admin @ 10:37

The first issue of volume 14 is now online at the JOT website.

Colin Atkinson, Philipp Bostan, Dirk Draheim, Foundational MDA Patterns for Service-Oriented Computing, pp. 1:1-30
Stefan Mutke, Christoph Augenstein, Martin Roth, André Ludwig, Bogdan Franczyk, Real-time information acquisition in a model-based integrated planning environment for logistics contracts, pp. 2:1-25
David Naranjo, Mario Sánchez, Jorge Villalobos, Evaluating the capabilities of Enterprise Architecture modeling tools for Visual Analysis, pp. 3:1-32

2015/03/18

Popularity will NOT bring more contributions to your OSS project

Filed under: Column — admin @ 17:34

The vitality and success of Open Source Software (OSS) projects depend on their ability to attract, absorb and retain new developers [1] who decide to commit some of their time to the project. In recent years, new code hosting platforms like GitHub have appeared with the goal of helping to promote OSS projects and support collaboration around them, thanks to their integration of social following, team management and issue-tracking features around a pull-based development model.

Roughly speaking, GitHub enables a distributed development model based on Git (though with some extensions). In GitHub there are two main development strategies, aimed at (1) project team members and (2) external developers. Team members have direct access to the source code, which they modify by means of pushes. External developers follow a pull-based model, where any developer can work in isolation on clones (facilitated by means of forks in GitHub) of the original source code. Developers can later send back their changes and request that those changes be integrated into the project codebase; this is called sending a pull request. Finally, pull requests are evaluated by project team members, who can either approve the pull request and incorporate the changes, or reject it and propose improvements to be addressed by the proponent. Beyond the project creator, other developers can be promoted to the status of official project collaborators and get most of the same rights project owners have, so that they can help not only with development (by means of pushes, as noted above) but also with management tasks (e.g., answering issues or providing support to other developers). Issue-tracking support helps both external developers and team members to request new features and report bugs, and therefore fosters participation in the development process. People interested in the project can also become watchers to follow the project’s evolution.

What makes some projects more successful than others?

Since there is still very limited understanding of why some projects advance faster than others, we asked ourselves whether using all these new collaboration features available in code hosting platforms like GitHub actually has a positive influence on the advancement of a project. Are popular projects (i.e., projects with more watchers, more issues added, more people trying to become collaborators…) really more successful?

This blog post reports our answer to that question, based on our findings from a quantitative analysis of all the GitHub projects created in the last two and a half years. As a metric for project success we chose the number of commits (not necessarily adding code; commits may also remove it). We believe this reflects, better than other metrics, whether the project is alive and improving. Several works have performed qualitative analyses of GitHub samples ([2, 3, 4] among others), but none has tried to determine criteria for project success.

Methodology for our quantitative analysis

To perform our study we took all GitHub projects created since the beginning of 2012 and collected a few relevant attributes for each of them.

GitHub Project Attributes

For each project we were interested in getting insights regarding the following characteristics (summarized in the data-structure sketch after the list):

  1. General information. We consider basic project information such as whether the project is a fork of another and the programming language used in its development.
  2. Development. We measure the development status of GitHub projects in terms of commits (totalCommits attribute) since their creation. As GitHub projects can receive commits from pushes (i.e., source code contributions coming from team members) and pull requests (i.e., source code contributions coming from accepted pull requests), we distinguish the commitsPush and commitsPR attributes, respectively.
  3. Interest. Being a social coding site, GitHub projects can also be monitored, tracked and forked by users. We therefore focus on two main facilities provided by GitHub: watchers (watchers attribute) and forks (forks attribute). The former is the number of people interested in following the evolution of the project; they are notified when the project status changes (e.g., new releases, new issues, etc.). The latter is the number of people that made a fork. Both attributes can provide good insights on the project popularity [5].
  4. Collaborators. We consider the number of collaborators (collabs attribute) who have joined a project to help in its development.
  5. Contributions. We focus on contributions coming from (1) pull requests (PRs attribute) and (2) issues (issues attribute). In particular, we are interested in collecting the number of pull requests and issues that have been proposed (i.e., opened) for each project.
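To make the attribute set concrete, here is a minimal sketch of a per-project record in Python. The class and field names are our own illustration, not the study’s actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectRecord:
    """One row of the dataset: the attributes collected per GitHub project."""
    name: str                       # "owner/repo"
    is_fork: bool = False           # general information
    language: Optional[str] = None  # main programming language, if declared
    commitsPush: int = 0            # commits arriving via pushes
    commitsPR: int = 0              # commits arriving via accepted pull requests
    watchers: int = 0               # interest: users watching the project
    forks: int = 0                  # interest: users who forked the project
    collabs: int = 0                # collaborators added to the project
    PRs: int = 0                    # pull requests opened
    issues: int = 0                 # issues opened

    @property
    def totalCommits(self) -> int:
        return self.commitsPush + self.commitsPR
```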

Mining GitHub

The mining process is illustrated in Figure 1 and is composed of three phases: (1) extracting the data, (2) aggregating the data to calculate the attribute values for each project and import them into a database, and (3) filtering the database to build the subset of projects used for the analysis (see Filter below). Next, we describe each phase of the process.

Figure 1: Mining process.

Extractor. GitHub data was obtained from GitHub Archive, which has tracked every public event triggered on GitHub since February 2011. GitHub events describe individual actions performed on GitHub projects, for instance the creation of a pull request or a push, and are represented in JSON format. There are 22 types of events; we focus on those from which we can derive the project attributes described above. The considered event types are presented in Table 1.

Table 1: Events considered in the GitHub Archive extractor.

Event type | Triggering condition | Attributes involved
MemberEvent | A user is added as a collaborator to a repository | collabs
PushEvent | A user performs a push | commitsPush
WatchEvent | A user stars a repository | watchers
PullRequestEvent | A pull request is created, closed, reopened or synchronized | PRs, commitsPR
ForkEvent | A user forks (i.e., clones) a repository | forks
IssuesEvent | An issue is created, closed or reopened | issues

Events are stored in GitHub Archive hourly. Our process collected all the events triggered each day since January 1st, 2012 (the starting date of our analysis period).
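As an illustration, a minimal extractor fits in a few lines of Python. This sketch assumes the archive’s current hosting at data.gharchive.org and its one-gzipped-file-per-hour, one-JSON-event-per-line layout; it is not the tooling used in the study.

```python
import gzip
import json
import urllib.request

# Event types that feed the project attributes (see Table 1).
RELEVANT = {"MemberEvent", "PushEvent", "WatchEvent",
            "PullRequestEvent", "ForkEvent", "IssuesEvent"}

def fetch_hour(date, hour):
    """Yield the relevant events from one hourly GitHub Archive dump.

    `date` is a 'YYYY-MM-DD' string; each archived file holds one
    JSON-encoded event per line, gzip-compressed.
    """
    url = f"https://data.gharchive.org/{date}-{hour}.json.gz"
    with urllib.request.urlopen(url) as response:
        with gzip.open(response, mode="rt", encoding="utf-8") as lines:
            for line in lines:
                event = json.loads(line)
                if event["type"] in RELEVANT:
                    yield event
```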

Aggregator. This component aggregates the events extracted in the previous step and calculates the attributes for each project.
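A sketch of this aggregation step, consuming the extractor above, follows. The payload fields used here (the commits list, pull-request actions, and merge flags) reflect the GitHub event layout as we understand it and have varied across archive eras, so treat them as assumptions rather than a faithful reproduction of the study’s pipeline.

```python
from collections import defaultdict

def aggregate(events, projects=None):
    """Fold a stream of archive events into per-project attribute counters."""
    if projects is None:
        projects = defaultdict(lambda: defaultdict(int))
    for e in events:
        attrs = projects[e["repo"]["name"]]  # assumes the 'repo.name' layout
        kind = e["type"]
        if kind == "MemberEvent":
            attrs["collabs"] += 1
        elif kind == "PushEvent":
            # A single push can carry several commits.
            attrs["commitsPush"] += len(e["payload"].get("commits", []))
        elif kind == "WatchEvent":
            attrs["watchers"] += 1
        elif kind == "ForkEvent":
            attrs["forks"] += 1
        elif kind == "IssuesEvent" and e["payload"]["action"] == "opened":
            attrs["issues"] += 1
        elif kind == "PullRequestEvent":
            payload = e["payload"]
            if payload["action"] == "opened":
                attrs["PRs"] += 1
            elif payload["action"] == "closed":
                pr = payload.get("pull_request", {})
                if pr.get("merged"):  # credit commits from accepted PRs
                    attrs["commitsPR"] += pr.get("commits", 0)
    return projects
```

For example, `aggregate(fetch_hour("2012-01-01", 15))` folds one hour of events into the running counters.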

The resulting dataset contains 7,760,221 projects. This dataset was curated to eliminate projects with missing information or that were former private projects (which would prevent us from getting the full picture of the project). The curated dataset contained 7,365,622 projects.

Filter. This component builds subsets of the previous dataset in order to perform more focused analyses. The filter takes the dataset from the previous step as input and creates a new, filtered dataset containing only those elements that fulfill a particular condition.

In the context of our study, we built a new filtered dataset including only those projects that are not forks of other projects and that explicitly declare a programming language. GitHub is used for many tasks beyond software development (e.g., writing books), and we wanted to focus only on original software development projects. The resulting filtered dataset contained 2,126,093 projects and is the one used in all the other analyses presented in this blog post.
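The filtering condition itself is a one-liner once the per-project attributes sit in a pandas DataFrame. The frame below is hypothetical: it assumes an `is_fork` flag and a `language` column were collected during extraction alongside the event counters.

```python
import pandas as pd

# Hypothetical frame: one row per project, one column per attribute,
# built from the `projects` mapping of the aggregator sketch above.
df = pd.DataFrame.from_dict(projects, orient="index").fillna(0)

# Keep original software projects only: not a fork, language declared.
filtered = df[~df["is_fork"].astype(bool) & df["language"].notna()]
print(len(df), "projects before filtering,", len(filtered), "after")
```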

First of all, are projects in GitHub really using collaboration features?

Before trying to answer the question of whether using those features helps a project advance, we should check whether these features are used at all. To do so, we characterize GitHub projects according to the attributes presented above and specifically study the use of the collaboration facilities. Table 2 reveals that, in fact, they are not widely used.

Table 2: Project attributes results of the GitHub dataset.

Development attributes
Attribute | Min. | Q1 | Median | Mean | Q3 | Max.
totalCommits | 0.00 | 2.00 | 7.00 | 43.00 | 19.00 | 5545441.00
commitsPush | 0.00 | 2.00 | 7.00 | 41.00 | 19.00 | 5545441.00
commitsPR | 0.00 | 0.00 | 0.00 | 1.31 | 0.00 | 38242.00

Interest attributes
Attribute | Min. | Q1 | Median | Mean | Q3 | Max.
watchers | 0.00 | 0.00 | 0.00 | 2.26 | 1.00 | 14607.00
forks | 0.00 | 0.00 | 0.00 | 0.68 | 0.00 | 2913.00

Collaborators and contribution attributes
Attribute | Min. | Q1 | Median | Mean | Q3 | Max.
collabs | 0.00 | 0.00 | 0.00 | 0.05 | 0.00 | 7.00
PRs | 0.00 | 0.00 | 0.00 | 0.96 | 0.00 | 8337.00
issues | 0.00 | 0.00 | 0.00 | 0.29 | 0.00 | 1540.00
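As a side note, a Table 2-style summary can be reproduced directly from the hypothetical `filtered` frame of the earlier sketch:

```python
# Derive totalCommits, then print the five-number summary plus the mean
# for each attribute, mirroring the layout of Table 2.
filtered = filtered.assign(
    totalCommits=filtered["commitsPush"] + filtered["commitsPR"])
cols = ["totalCommits", "commitsPush", "commitsPR",
        "watchers", "forks", "collabs", "PRs", "issues"]
summary = filtered[cols].describe(percentiles=[0.25, 0.50, 0.75])
print(summary.loc[["min", "25%", "50%", "mean", "75%", "max"]].round(2))
```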

The results for development attributes such as totalCommits are strongly influenced by the fact that a considerable number of projects have very few commits: 1,259,822 projects (59.26% of the total) have between 0 and 10 commits from pushes (commitsPush), and 2,092,685 projects (98.47% of the total) have between 0 and 10 commits from pull requests (commitsPR). Figure 2 illustrates this situation by showing the number of projects (vertical axis) per group of commits (horizontal axis).

Figure 2: Comparison between the number of projects and the number of commits coming from pull requests (commitsPR) and pushes (commitsPush).

Regarding the interest attributes, 1,433,042 projects (67.40% of the total) have 0 watchers and 1,614,556 projects (75.94% of the total) have never been forked. These results suggest that the use of GitHub is far from what would be expected of a social coding site.

The results for the collaborator and contribution attributes also reveal very low usage: 2,017,911 projects (94.91% of the total) do not use the collaborator role; 1,953,977 projects (91.90%) have never received a pull request; and 1,949,644 projects (91.70%) have never received an issue.

We can therefore conclude that most projects do not make use of GitHub’s collaboration features and employ the platform purely as a kind of backup mechanism. The great majority of projects show low activity (i.e., totalCommits, commitsPush and commitsPR) and attract little interest (i.e., forks and watchers).

But those that do: do they get any benefits?

If so, this would be a good reason for the other projects to follow suit. Let’s see, then, whether popular projects that attract a lot of interest (plenty of forks and watchers) and manage to involve a large community (one that opens issues, becomes collaborators, and submits pull requests) end up having more commits in their repositories than others.

To answer this question, we performed a correlation analysis among the involved attributes. More specifically, we resort to Spearman’s rho (ρ) correlation coefficient to test for the existence of a correlation. This coefficient is used in statistics as a non-parametric measure of statistical dependence between two variables. The values of ρ lie in the range [-1, +1], where a perfect correlation is represented by either -1 or +1, meaning that the variables are perfectly monotonically related (with a decreasing or increasing relationship, respectively). Thus, the closer ρ is to 0, the more independent the variables are.
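Computing these coefficients is straightforward with SciPy; the loop below reproduces the shape of Table 3 on the hypothetical `filtered` frame used throughout these sketches.

```python
from scipy.stats import spearmanr

# Spearman's rho between each community attribute and each success attribute.
for attr in ["collabs", "PRs", "issues", "watchers", "forks"]:
    for target in ["totalCommits", "commitsPush", "commitsPR"]:
        rho, p = spearmanr(filtered[attr], filtered[target])
        print(f"{attr:>8} vs {target:<12} rho = {rho:.2f} (p = {p:.3g})")
```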

Table 3 shows the ρ values for each pair of attributes we wanted to evaluate. The first three rows focus on the correlation between the number of collaborators, pull requests and issues on one side and the number of commits of the project on the other. As you can see, there is no correlation among them, except for the somewhat obvious correlation between the number of pull requests and the commits derived from accepting them (which, in any case, has basically no impact on the global number of commits). The last rows show that there is no correlation either between the number of people following the project and the commits.

Table 3: Correlation analysis between the considered attributes.

Success attributes
Attribute | totalCommits | commitsPush | commitsPR
collabs | 0.09 | 0.09 | 0.06
PRs | 0.27 | 0.25 | 0.88
issues | 0.25 | 0.25 | 0.34
watchers | 0.11 | 0.10 | 0.24
forks | 0.08 | 0.07 | 0.36

It is important to note that during our study we also calculated the correlation values among all these attributes when grouping the projects along several dimensions, especially their size and the language used. None of those groupings revealed results different from those shown above.

Threats to Validity

In this section we describe the threats to validity we have identified in our study.

External Validity. Our study considers a large dataset of GitHub projects; however, it may not represent the universe of all real-world projects. In particular, as GitHub allows users to create open source repositories at no cost, our dataset might include mock or personal projects that are not focused on attracting contributions and that have been open-sourced only to avoid paying the membership fees required to keep them private.

Internal Validity. Our study only considers GitHub data and therefore does not take into account external tools used by some GitHub projects (e.g., to manage the team and issues; for instance, people attaching patches to an external Bugzilla bug tracker, later manually merged into the project by the project owner), which can bias our study (in the previous example, that patch would not count as a pull request). Finally, using the language attribute to filter out non-software projects may eliminate relevant projects, since some software projects do not set the programming language used.

If popularity is not a good indicator, what determines the success of a project?

Honestly, we think by now it’s clear that we have no idea. We have learnt about quite a few things that do not correlate with success, but we have still to find one that does. This is probably because there is no single reason, or at least not one that is simple enough to be easily measured. Still, being able to shed some light on this issue, even if only partially, would be very beneficial for the OSS community, and thus it’s worth continuing to try.

To get more insights, we complemented this quantitative analysis with a more qualitative one in which we manually inspected the 50 most successful GitHub projects in our dataset (success measured in terms of the number of commits coming from pull requests, i.e., from external contributors). We noticed that 92% of them (i.e., 46 projects) included a description file (i.e., a readme), often with a link to complementary information in wikis (46%) and/or external websites (50%). A further manual inspection of these three kinds of project information sources revealed that they were not purely “decorative”: they included precise information on the process to follow for those willing to contribute to the project (e.g., how to submit a pull request, the decision process followed to accept a pull request or an issue, etc.). We compared these numbers with random samples of projects to confirm that they are not just average values for the GitHub population.
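As an aside, the readme-presence part of such an inspection is easy to automate through GitHub’s REST API, which exposes a readme endpoint per repository (GET /repos/{owner}/{repo}/readme). A minimal sketch, checking presence only, not whether the readme actually documents a contribution process; the optional token merely raises the API rate limit:

```python
import urllib.error
import urllib.request

def has_readme(repo, token=None):
    """Return True if `repo` ('owner/name') exposes a README via the API."""
    req = urllib.request.Request(f"https://api.github.com/repos/{repo}/readme")
    if token:
        req.add_header("Authorization", f"token {token}")
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no README published for this repository
            return False
        raise
```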

This hints at the reasonable possibility that having a clear description of the contribution process is a significant factor in attracting new contributions. Unfortunately, existing GitHub APIs and services do not provide direct support for automatically checking our hypothesis on the whole population of GitHub projects, so further research requires other kinds of empirical analysis, such as interviews with contributors and project managers. If the hypothesis is confirmed, it would open plenty of other interesting questions, such as whether some kinds of contribution processes (also known as governance rules) attract more contributors than others (e.g., a dictatorship approach versus a more open process for accepting pull requests). This could help project owners decide whether to adopt a more transparent governance process in order to advance faster in the project’s development. See [6] for a deeper discussion.

About the Authors

Javier Luis Cánovas Izquierdo is a postdoctoral fellow at IN3, UOC, Barcelona, Spain.

Valerio Cosentino is a postdoctoral fellow at EMN, Nantes, France.

Jordi Cabot is an ICREA Research Professor at IN3, UOC, Barcelona Spain.

References

[1] C. Bird, A. Gourley, P. Devanbu, A. Swaminathan, and G. Hsu. Open Borders? Immigration in Open Source Projects. In MSR conf., 2007.

[2] J. Choi, J. Moon, J. Hahn, and J. Kim. Herding in open source software development: an exploratory study. In CSCW conf., pages 129–133, 2013.

[3] G. Gousios, M. Pinzger, and A. van Deursen. An Exploratory Study of the Pull-based Software Development Model. In ICSE conf., pages 345–355, 2014.

[4] F. Thung, T. F. Bissyande, D. Lo, and L. Jiang. Network Structure of Social Coding in GitHub. In CSMR conf., pages 323–326, 2013.

[5] T. F. Bissyande, D. Lo, L. Jiang, L. Reveillere, J. Klein, and Y. Le Traon. Got issues? Who cares about it? A large scale investigation of issue trackers from GitHub. In ISSRE symp., pages 188–197, 2013.

[6] J. Cánovas Izquierdo and J. Cabot. Enabling the Definition and Enforcement of Governance Rules in Open Source Systems. In ICSE – Software Engineering in Society (ICSE-SEIS), to appear.

2014/09/01

The common good

Filed under: Editorial — Laurence Tratt @ 13:37

We asked. You said. We listened.

From this issue onwards, all JOT articles will be licensed under a Creative Commons licence. Currently, authors can choose either Attribution 4.0 International (CC BY 4.0) or Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0) as their paper’s licence (depending on feedback, we may extend these options over time). The author instructions have been updated accordingly.

In doing this, we’re giving back rights to authors and stating explicitly: JOT is on your side. Practically speaking, this move will make authors’ lives easier, and ultimately that of readers. We hope you enjoy the result!

2014/07/01

Extreme Modeling 2012 Special Edition

Filed under: Editorial — Laurence Tratt @ 14:27

This JOT special section contains four extended and peer reviewed papers from the first edition of the Extreme Modeling Workshop (XM2012) held on October 1st, 2012 in Innsbruck, Austria as satellite event of the 15th International Conference on Model Driven Engineering Languages & Systems (MODELS2012).

The goal of XM 2012 was to bring together researchers in the areas of modeling and model management to discuss more disciplined techniques and engineering tools that support flexibility, in its several forms, in a wide range of modeling activities, including metamodel, model, and model transformation definition processes. The workshop aimed at a) better identifying the difficulties in current MDE practices related to the lack of flexibility, and b) soliciting ideas, concepts, and techniques from other areas of software engineering, such as specific language communities (e.g., the Smalltalk, Haskell, and dynamic languages communities). These contributions could be useful for revising certain fundamental concepts of Model-Driven Engineering (MDE), such as the conformance relation.

From 8 initial submissions we selected 4 papers by means of at least two rounds of reviews. All papers were refereed by three well-known experts in the field. The selected papers are the following:

  • Vadim Zaytsev in his paper entitled “Negotiated Grammar Evolution” presents a study of the adaptability of grammar transformations. Some grammar transformation paradigms, like unidirectional programmable grammar transformation, are rather rigid: transformations are written to work with one input grammar and are not easily adapted if the grammar changes. The author proposes a solution that isolates the applicability assertions in a component separate from the rest of the transformation engine, and that enhances the simple accept-and-proceed vs. reject-and-halt scheme into one that proposes a list of valid alternative arguments and allows the other transformation participant to choose from it, negotiating the intended level of adaptability and robustness.
  • Paola Gómez, Mario Sánchez, Héctor Florez and Jorge Villalobos in their paper entitled “An approach to the co-creation of models and metamodels in Enterprise Architecture Projects” discuss the problems arising from the lack of dynamism in model editors and the impossibility of loading new metamodels at runtime. They present an approach that addresses these problems by separating the ontological and linguistic aspects of metamodels. The GraCoT tool, an implementation of the approach based on GMF, is also discussed in the paper.
  • Konstantinos Barmpis and Dimitrios S. Kolovos in their paper entitled “Evaluation of Contemporary Graph Databases for Efficient Persistence of Large-Scale Models” compare the persistence mechanisms commonly used in MDE with novel approaches such as graph-based NoSQL databases. Prototype integrations of Neo4J and OrientDB with EMF are compared with relational database, XMI, and document-based NoSQL persistence mechanisms. The paper also benchmarks two approaches for querying models persisted in graph databases, measuring and comparing their relative performance in terms of memory usage and execution time.
  • Zoe Zarwin, Marija Bjekovic, Jean-Marie Favre, Jean-Sébastien Sottet, and Henderik A. Proper in their paper entitled “Natural Modelling” motivate the need for instruments that enable a wider adoption of modeling technologies. To this end, such technologies must be perceived as being as natural as possible. After defining the concept of natural modeling, the authors discuss how the human aspects of modeling could be better instrumented in the future using modern technologies.

We would like to thank everyone who has made this special section possible. In particular, we are obliged to the referees for giving of their time to thoroughly and thoughtfully review and re-review papers, to the authors for their hard work on several revisions of their papers, from workshop submission to journal acceptance, and to the JOT editorial board for organising this special issue.

Davide Di Ruscio, University of L’Aquila (Italy)
Alfonso Pierantonio, University of L’Aquila (Italy)
Juan de Lara, Universidad Autónoma de Madrid (Spain)

2014/06/04

The Song Remains (Almost) The Same

Filed under: Editorial — Laurence Tratt @ 11:47

For me, taking over as Editor-in-Chief of JOT is no small matter. The most recent editors — Oscar Nierstrasz and Jan Vitek — have done sterling work in establishing JOT as a well-read reference for substantial computing research, a job that Bertrand Meyer and Richard Wiener began before them. JOT continues to fill an important role in computing: an open-access journal with rigorous standards. In most senses, my job is to strive to continue Oscar and Jan’s sterling work. After all, when the JOT formula isn’t broken, why break it?

Of course, no such formula can be perfect, because the world around us changes: habits change, needs change, and attitudes change. It is the last of these which I wish to address in this, my first editorial. Research, at its best, is intended to benefit mankind: when, instead, it is hidden behind paywalls, its purpose is obstructed. JOT is therefore an open-access journal: whoever you are, whatever your status, wherever you are in the world, you can read the research we publish in JOT without hindrance.

But JOT has one vestige shared with traditional journals: when authors publish their research in JOT, we ask them to transfer the copyright of their paper over to us. This means that JOT is then the legal guardian of the paper: anyone who wishes to distribute or alter it — even the original authors — has to ask JOT for permission to do so. This was done with the aim of ensuring that JOT remained the definitive home of the research and had the legal right to prevent people from duplicating (or, worse, plagiarising) the research we publish.

Attitudes in recent years have shifted. Authors want to publish copies of their papers on their homepages, in university paper repositories, and in other online paper repositories. It is reasonable for them to ask why, if they put in the effort to perform and write up the research, they should lose the legal right to post copies of their paper where they wish.

In consultation with the JOT Steering Committee, I therefore believe that JOT should move to a world where we no longer require authors to transfer copyright to us. There are several possible models for how we might go about this, and we are opening up this discussion to the JOT community, seeding it with an initial proposal. With luck, we will put the new process into place later in the (northern hemisphere) summer.

Our initial proposal is as follows, based in part on the approach taken by similar journals such as PLOSOne and LMCS. Instead of requiring authors to transfer copyright to us, we propose that authors whose papers have passed JOT’s peer-review process be required to place their papers under a Creative Commons license before publication. Doing so will give everyone — including JOT — the right to host copies of their paper. We intend to give authors the freedom to choose between the Attribution (CC BY) and Attribution-NoDerivs (CC BY-ND) licenses. Broadly speaking, the former would allow anyone to distribute (possibly altered versions of) the paper; the latter would allow anyone to distribute, but not alter, a paper. In both cases, the right to distribute the specific version of the paper accepted by JOT is irrevocable: it will be publicly available for all time. We would request that all copies the authors place on other sites use the JOT template, so that JOT is properly credited as the publication that put the effort into reviewing and publishing the paper, but this will rely on authors’ goodwill rather than any legal mechanism.

Please feel free to leave your suggestions in the comments below or by contacting me directly. I would like whatever process we come up with to be as good as it can be, and that is most likely to happen when the JOT community puts its collective brain to the task!

2013/08/14

TOOLS Europe 2012 Special Section

Filed under: Editorial — Jan Vitek @ 10:40

Carlo A. Furia and Sebastian Nanz

The 50th International Conference on Objects, Models, Components, Patterns (TOOLS Europe 2012) was the closing event in a series of symposia devoted to object technology and its applications. The conference program included 24 paper presentations covering a broad range of topics, from programming languages to models and development practices. This variety, typical of the TOOLS conferences, is a sign of the vast success of object technology and of its theoretical underpinnings.

This Special Section of the Journal of Object Technology (JOT) consists of extended versions of two contributions selected from among those presented at TOOLS Europe 2012. We picked these two pieces of work because they received some of the most positive reviews before the conference, raised substantial interest at the conference, and passed an additional round of thorough refereeing for this Special Section after the conference. Besides being mature and high-quality research work in their own right, the two papers target topics that are indicative of the vitality of object technology even now that it has become commonplace.

Lilis and Savidis’s paper “An Integrated Approach to Source Level Debugging and Compile Error Reporting in Metaprograms” discusses techniques and tools to improve the readability and understandability of error reporting with metaprograms — that is, programs that generate other programs, such as the template programming constructs available in C++. Their solution is capable of tracing errors along the complete sequence of compilation stages and also targets aspects of IDE integration. It is also fully implemented and available for download: note the demonstration video linked to at the end of the article.

Wernli, Lungu, and Nierstrasz’s paper “Incremental Dynamic Updates with First-class Contexts” tackles a difficult problem frequently present in complex software systems that must be highly available: how to reduce the downtime required to perform system updates. Their solution hinges on turning contexts into first-class entities. Their Theseus system is thus capable of performing updates incrementally, with different threads running in parallel on different versions of the same class. The conference version of this paper also won the TOOLS 2012 Best Paper Award sponsored by the European Association for Programming Languages and Systems (EAPLS).

We are glad to be able to offer such an interesting Special Section to readers of JOT. We thank Antonio Vallecillo for suggesting this Special Section. We thank the anonymous referees for their punctual and dedicated work, instrumental in guaranteeing high quality presentations; and we thank the authors for choosing TOOLS Europe and JOT to present some of their most interesting research work.
