2012/01/04

European Research Project Symposium at ECOOP 2011

Filed under: Conference Report — Tags: — szschaler @ 16:17

The European Conference on Object-Oriented Programming (ECOOP) was held in Lancaster, 25-29 July. This year, the conference featured the novelty of a Research Project Symposium, providing an opportunity for the dissemination of integrated project visions as well as discussions aiming to seed new collaborations and future research projects. With half-day sessions from three current European research projects, the symposium provided an interesting overview of European ICT research, ranging from applied work on large-scale machine-translation and marine information systems to foundational research in the specification and verification of adaptable systems.

These projects should be of particular interest to JOT readers as they show the application of concepts from object-orientation and beyond (e.g., ontologies and components) in a variety of contexts, which is why a brief overview of the projects is given below. Full materials, including slides and video recordings of the presentations, are available from the ECOOP website at 2011.ecoop.org. The text below is based on contributions from each of the projects.

HATS – Highly Adaptable Trustworthy Software

Many software projects make strong demands on the adaptability and trustworthiness of the developed software, while the contradictory goal of ever faster time-to-market is a constant drumbeat. Current development practices do not make it possible to produce highly adaptable and trustworthy software in a large-scale and cost-efficient manner. Adaptability and trustworthiness are not easily reconciled: unanticipated change, in particular, requires the freedom to add and replace components, subsystems, communication media, and functionality with as few constraints regarding behavioural preservation as possible. Trustworthiness, on the other hand, requires that behaviour is carefully constrained, preferably through rigorous models and property specifications, since informal or semi-formal notations lack the means to describe precisely the behavioural aspects of software systems: concurrency, modularity, integrity, security, resource consumption, etc.

The HATS project develops a formal method for the design, analysis, and implementation of highly adaptable software systems that must at the same time meet high demands on trustworthiness. The core of the method is an object-oriented, executable modelling language for adaptable, concurrent software components: the Abstract Behavioural Specification (ABS) language. Its design goal is to permit formal specification of concurrent, component-based systems at a level that abstracts away from implementation details but retains essential behavioural properties. HATS is an Integrated Project supported by the 7th Framework Programme of the EC within the FET (Future and Emerging Technologies) scheme.

PRESEMT – Pattern Recognition-Based Statistically Enhanced Machine Translation

The objective of the PRESEMT project is to develop a flexible and adaptable Machine Translation (MT) system from source to target language, based on a method which is easily portable to new language pairs. This method attempts to overcome well-known problems of other MT approaches, e.g. the need to compile bilingual corpora or create new rules per language pair. PRESEMT is intended to result in a language-independent machine-learning-based methodology. To that end, a cross-disciplinary approach is adopted, combining linguistic information with pattern recognition techniques towards the development of a language-independent analysis.

PRESEMT is intended to be easily customisable to new language pairs. Consequently, relatively inexpensive, readily available language tools and internet-sourced resources are used (a large monolingual corpus in the source language and a small parallel corpus in the source and target languages), while the platform can handle the input of different linguistic tools, in order to support extensibility to new language pairs and user requirements. The translation context is modelled on phrases that are produced via an automatic and language-independent process, removing the need for specific, compatible NLP tools per language pair. Furthermore, system optimisation and personalisation are implemented via meta-heuristics (such as genetic algorithms or swarm intelligence).

PRESEMT is funded by the European Union under its Framework 7 Programme.

NETMAR – Open Service Network for Marine Environmental Data

NETMAR aims to develop a pilot European Marine Information System (EUMIS) for searching, downloading and integrating satellite, in situ, and model data from ocean and coastal areas. It will be a user-configurable system offering standards-based, flexible service discovery, access and chaining facilities. It will use a semantic framework coupled with ontologies for identifying and accessing distributed data, such as near-real-time, forecast and historical data. EUMIS will also enable further processing of such data to generate composite products and statistics suitable for decision-making in diverse marine application domains.

The project aims to support operational “Smart Discovery” based on ontologies. This is the process by which users of the system are able to locate and therefore utilise datasets using search terms that are different but semantically linked to dataset labels such as keywords.  A frequently quoted example is the location of datasets labelled ‘rainfall’ using the search term ‘precipitation’. In a pan-European context the issue of language arises, meaning that operational “Smart Discovery” also needs to be able to link datasets labelled in one European language to search terms supplied in another. The creation of “Smart Discovery” demonstrators is straightforward using a small ontology containing carefully selected dataset labels linked to a few search terms that are well known to the demonstrator. However, taking this to the operational scale is much more difficult because there is no foreknowledge of the search terms that will be used.  Consequently, domain coverage of the underlying ontologies needs to be as close to complete as possible.
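
To make the mechanism concrete, here is a minimal sketch of ontology-backed discovery in Python. This is my own illustration with made-up dataset names, not the EUMIS implementation: search terms are expanded through semantic links before being matched against dataset labels.

    ontology = {
        # toy semantic links: synonyms and cross-language equivalents
        "precipitation": {"rainfall", "pluie"},
        "rainfall": {"precipitation", "pluie"},
        "pluie": {"precipitation", "rainfall"},
    }

    datasets = {
        "uk-daily-rain-gauges": ["rainfall", "in situ"],
        "meteo-france-radar": ["pluie", "radar"],
        "global-sst-model": ["sea surface temperature", "model"],
    }

    def smart_discover(term):
        # expand the search term with every term it is semantically linked to
        expanded = {term} | ontology.get(term, set())
        return [name for name, labels in datasets.items()
                if expanded & set(labels)]

    print(smart_discover("precipitation"))
    # -> ['uk-daily-rain-gauges', 'meteo-france-radar']

The operational difficulty described above is precisely the size of the ontology table: a demonstrator gets away with a handful of hand-picked links, whereas an operational system needs near-complete domain coverage.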

One possible approach to the development of operational ontologies might be to bring together groups of domain experts and knowledge engineers in a series of knowledge-capture semantic workshops. However, this approach fails to take into account existing computerised knowledge resources, making it extremely inefficient. A much more productive approach is to identify pre-existing controlled vocabularies, thesauri and ontologies that may be brought together into a single semantic resource to underpin the EUMIS.

NETMAR is funded by the European Union under its Framework 7 Programme.

2011/12/07

TOOLS Europe 2011 – Day 2

Filed under: Uncategorized — Tags: — alexbergel @ 16:30

My name is Alexandre Bergel and this blog entry is about the second day of the TOOLS EUROPE 2011 conference.

The day is opened by Judith Bishop, the program co-chair. Before introducing the day’s keynote speaker, Frank Tip, Judith briefly presented the new Microsoft Academic Search facility, which allows you to search for papers, co-authors, and impact factors (http://academic.research.microsoft.com). I was surprised at how intuitive it is. Besides the classical g- and h-indexes and a graph of coauthors, it has some exotic features, for example, who cites you the most. It also produces an interactive graph of citations, which is at the very least entertaining.

The second keynote of TOOLS is given by Frank Tip, about security in web applications. I met Frank for the first time in 2003, at the Joint Modular Languages Conference (JMLC), a conference on modularity in programming languages. Back then, he gave a great talk about security; I am confident his new talk will be excellent.

Making web applications secure is pretty difficult. One reason is that everything happens with low-level string manipulations; another is the pervasive use of weakly typed languages. I was at first a bit surprised by this statement, since the cores of the Pharo, Smalltalk and Ruby communities are web developers. You won’t make many friends in these communities if you say that dynamic typing is a bad thing.

As Frank goes on, I understand his argument better: web applications often use dynamic features that are evil when you need to deal with security. A web application that modifies itself while executing usually leads to a complex mixture of different languages (PHP and HTML, at a minimum). Making all web browsers happy is challenging due to their subtle incompatibilities. Last but not least, web applications are difficult to test because of the event-driven execution model. My personal experience shows that it is easy to write unit tests for this model, but simulating mouse clicks in the browser is not easy to automate. I am not an expert in web development, but I feel that expressing control flow in web applications is one source of problems. Some say the Seaside framework addresses it in a sensible way, but I’m not sufficiently familiar with it.

Another typical class of problems in web applications is SQL injection and cross-site request forgery (CSRF). Wikipedia tells me that a cross-site request forgery is an attack that works by including a link or a script in a page that accesses a site to which the user is known to have been authenticated. Imagine a link <img src="bank.example.com/withdraw?account=bob&amount=10000000&for=hacker" alt="" /> available on a page once Bob has logged in.

Frank’s talk continues, and he addresses general quality problems in web applications; these are usually tackled by, for example, test generation, the idea being to identify inputs that expose failures in PHP applications.

“Apollo” is a tool developed at IBM Research for this purpose, and Frank introduces it. Apollo identifies inputs that expose failures using dynamic symbolic execution, which can be described in three steps: (i) give an empty input to your program, then execute it; (ii) deduce path constraints on the input parameters from the control-flow predicates encountered (i.e., the conditions in if-statements); (iii) from these path constraints, create a new input, then jump back to (ii). Frank empirically demonstrated that this is a viable technique: many failures were found in numerous applications.
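
To make the loop concrete, here is a toy sketch of dynamic symbolic execution in Python. This is my own illustration, not Apollo; for brevity the path constraints are enumerated by hand, whereas a real tool discovers them at runtime by negating observed branch predicates one at a time, and uses a proper constraint solver rather than brute force.

    def program(x):
        if x > 10:            # predicate p1
            if x % 2 == 0:    # predicate p2
                raise RuntimeError("failure: x > 10 and x even")

    p1 = lambda x: x > 10
    p2 = lambda x: x % 2 == 0

    def solve(constraints):
        # stand-in for a real constraint solver: brute-force a small integer
        return next((v for v in range(-50, 51)
                     if all(c(v) for c in constraints)), None)

    # step (i) corresponds to the trivial first path; steps (ii)/(iii)
    # produce the others by negating each observed predicate in turn
    paths = [
        [lambda x: not p1(x)],        # outer branch not taken
        [p1, lambda x: not p2(x)],    # outer taken, inner not taken
        [p1, p2],                     # both taken: exposes the failure
    ]
    for constraints in paths:
        x = solve(constraints)
        try:
            program(x)
            print(f"input {x}: passed")
        except RuntimeError as e:
            print(f"input {x}: {e}")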

Another approach to addressing quality problems in web applications is fault localization, which is all about identifying where faults are located in the source code. Using ideas from Apollo, it is possible to generate an input that leads to a crash; but where in the source code is the fault located? This is a highly active area of research. The idea is to use information from failing and passing executions to predict the location of faults. For each statement, a suspiciousness rating is computed using some very complicated (yet impressive) formulas.
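
For the curious, one well-known rating of this kind (my example for concreteness, not necessarily a formula Frank showed) is the Tarantula suspiciousness score, where failed(s) and passed(s) count the failing and passing runs that execute statement s:

    \mathrm{susp}(s) = \frac{\mathit{failed}(s)/\mathit{totalfailed}}{\mathit{passed}(s)/\mathit{totalpassed} + \mathit{failed}(s)/\mathit{totalfailed}}

Statements executed mostly by failing runs score close to 1 and are inspected first.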

Frank then moved on to directed test generation: generating a test suite that enables better fault localization (I liked how the different contributions in Frank’s talk layered on each other). When a test fails, we usually want to know where the fault in our program is. What if no test suite is available? How can we generate a test suite with good fault-localization characteristics? Tests that fail for the same fault can be exploited to localize that fault. Dynamic symbolic execution, with its complicated formulas, is effective for generating such tests, guided by similarity metrics (path-constraint similarity, input similarity).

Finally, Frank talked about automatic program repair. Usually it is not difficult to determine how to correct malformed HTML, since documents have a regular structure and most errors are rather simple (e.g., a missing end tag). The easy cases can be solved by analyzing string literals in the PHP source code and checking for proper nesting and other simple problems (incorrect attributes). But how do we come up with a fix for the PHP program that generated the HTML code? The most difficult cases seem to involve solving a system of string constraints. Frank suggests generating one constraint for each test case.
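
As a rough illustration of the easy case (my own sketch in Python, not Frank’s tool), checking that the tags in a generated HTML fragment nest properly needs little more than a stack:

    import re

    def check_nesting(html):
        """Return None if tags nest properly, else a problem description."""
        stack = []
        for match in re.finditer(r"<(/?)(\w+)[^>]*?(/?)>", html):
            closing, tag, self_closing = match.groups()
            if self_closing or tag in ("br", "hr", "img"):  # void tags
                continue
            if closing:
                if not stack or stack[-1] != tag:
                    return f"unexpected </{tag}>; open tags: {stack}"
                stack.pop()
            else:
                stack.append(tag)
        return f"unclosed tags: {stack}" if stack else None

    # flags the </ul> because the second <li> was never closed
    print(check_nesting("<ul><li>one</li><li>two</ul>"))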

Frank summarised some of the key lessons he’d learned, in particular that we need better languages for developing web applications, and that we need to get them adopted. Until such languages are widely adopted, we are faced with a fragile mix of weakly typed languages and their problems: structured output constructed through low-level string manipulation, dynamic features that are difficult to analyze statically, and interactive user input. The analysis techniques Frank presented seem promising aids for this.

Coq Tutorial by Catherine Dubois

A major novelty of TOOLS Europe 2011 is the tutorial by Catherine Dubois on Coq. Coq is a proof assistant whose name derives from the Calculus of Constructions; it allows users to write programs together with their specifications. I am not sure what this really means, but apparently “Coq is based on a typed lambda-calculus with dependent types and inductive types (the richest in Barendregt’s cube)”. People who dedicate their life to code generation will be happy to know that Coq can produce ML programs.
Coq has various applications:

  • Formalization of mathematics (constructive mathematics, four color theorem, geometry),
  • program verification (data structure and algorithms),
  • security (electronic banking protocols, mobile devices),
  • programming language semantics and tools (ML type inference, the POPLmark challenge, formalization of Featherweight Java, the JVM, compiler verification).

The tutorial began well, but after 25 minutes I was more lost than on the first day I left my hometown. What I learnt is nonetheless useful: Coq allows you to write functional programs, pretty much as you would in a typed version of Scheme. You can then write specifications, and then find and apply a number of tactics to drive a proof.

Coq, as you may have guessed, is based on defining functions (anonymous or named), applying them (partially or fully), and composing them. Functions must be total (defined over the whole domain; e.g., the head function must be defined for both empty and non-empty lists) and must always terminate. Functions are first-class citizens. Side effects and exceptions are not supported. To give you a flavor of the Coq syntax, here are a few examples:

  • Assuming the type of natural numbers and the + function: Definition add3 (x: nat) := x + 3.
  • Assuming the type of lists and the concatenation ++ function: Definition at_end (l: list nat)(x: nat) := l++(x::nil).
  • Higher order: Definition twice_nat (f: nat -> nat)(x: nat) := f (f x).

Catherine told me that LaTeX can be produced from a proof. Handy for writing papers, isn’t it? For people who want to know more about Coq, “Coq in a hurry” is apparently an excellent tutorial (http://cel.archives-ouvertes.fr/docs/00/45/91/39/PDF/coq-hurry.pdf). I read it and understood everything.

Session 14:30 chaired by Juan de Lara

The afternoon technical session ran in parallel with Catherine’s tutorial, so unfortunately I missed the first talk. The second talk, “Systems evolution and software reuse in OOP and AOP”, was presented by Adam Przybylek. The research question pursued in this work is: “Is software built with AOP more maintainable than software written in OOP?”. To answer this question, five maintenance scenarios were solved twice, once with AOP and once with OOP, with the number of changed lines of code used as the metric. The result: the OOP solutions required fewer code changes than the AOP ones. Stefan Hanenberg has worked on a similar topic, and his work sparked an incredible divergence of opinions. I immediately knew that Przybylek would come under some fire.
Christoph Bockisch opened the dance by asking “how are you sure that the aspects you have programmed are done in the right way?”. Other similar questions were raised; apparently, people from the AOP world felt a bit attacked. Dirk concluded: “There is a lot more than AspectJ in AOP”. I wish the questions had been less defensive.

Session 16:00 chaired by Manuel Oriol

The final session of the day consisted of four papers, and was chaired by a previous TOOLS Europe programme chair! The first paper, titled “A Generic Solution for Syntax-driven Model Co-evolution”, was presented by Protic, van den Brand and Verhoeff. The problem addressed in this work is keeping models and their metamodel consistent when one of them evolves. They introduce a special metamodel for metamodels and a bijective transformation for transforming metamodels into models. It is not immediately clear to me why this is not called “instantiation”.

The second paper is “From UML Profiles to EMF Profiles and Beyond”. A UML profile is a lightweight language-extension mechanism for the UML. A profile consists of stereotypes; a stereotype extends base classes and may introduce tagged values. Profiles are applied to UML models. This paper explores how to use EMF profiles (in Eclipse) to support UML profiling concepts and technologies.

The third paper is “Domain-specific profiling”, presented by Jorge Ressia. I coauthored this work. Profiling is the activity of monitoring and analyzing program execution. The motivation is that traditional profilers reason about stack frames, by regularly sampling the method call stack, which is ill-suited as soon as you have a non-trivial model. Our solution is to offer profilers that compute metrics over the objects that make up the domain.
Nathaniel Nystrom asked how the profiler maps to the domain. Mark van den Brand asked about “domain-specific debuggers”. Oh yes! Profilers and debuggers have many commonalities, notably the domain on which they reason. The model on which traditional code profilers and debuggers operate is the stack frame: what a poor abstraction for applications made of objects and components! This topic will probably entertain me for years to come.
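
As a toy sketch of the idea (my illustration in Python; the authors work in Smalltalk and their infrastructure is far richer): instead of sampling stack frames, charge execution time directly to the domain objects involved.

    import time
    from collections import defaultdict

    class DomainProfiler:
        def __init__(self):
            self.totals = defaultdict(float)  # seconds charged per domain object

        def instrument(self, obj, method_name):
            # wrap one method so its execution time is charged to obj
            original = getattr(obj, method_name)
            def wrapped(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return original(*args, **kwargs)
                finally:
                    self.totals[repr(obj)] += time.perf_counter() - start
            setattr(obj, method_name, wrapped)

    class Shape:
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return f"Shape({self.name})"
        def render(self):
            time.sleep(0.01 if self.name == "circle" else 0.05)

    profiler = DomainProfiler()
    for s in [Shape("circle"), Shape("square")]:
        profiler.instrument(s, "render")
        s.render()
    print(dict(profiler.totals))  # time per domain object, not per stack frame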

Summary of Days 3 and 4

On Day 3, Oscar advertised the Journal of Object Technology (JOT). Oscar has been the Editor-in-Chief of JOT for a little more than one year. JOT offers a transparent reviewing process, accepted submissions are quickly put online, and it has a presence on various social networks. I personally hope JOT will become an ISI journal, as this would encourage South American researchers to submit to JOT. Unfortunately, ISI publications still remain important for emerging countries. Things are changing, but slowly. Our TOOLS paper “Domain-Specific Profiling” has been selected for a special issue of JOT.

The Dyla workshop was held on Day 4. Dyla is the workshop on dynamic languages and applications, and I am happy to have co-organized it. This was the fifth edition, and a sizeable crowd attended. Dyla focusses on demos instead of technical and academic presentations. We had a number of exciting demos, including type systems for dynamic languages, visualizations of program execution, context-oriented programming and meta-object protocols. Dyla will happen again in 2012.

I hope these two blog posts provide you with something of a flavour of TOOLS Europe in 2011, and I hope to blog from future events!

On JOT Special Sections

Filed under: Announcement — avallecillo @ 09:30

It has been almost six months since the JOT Editor-in-Chief, Oscar Nierstrasz, asked me to take care of the JOT Special Sections. It was, I remember, in Zurich, while attending yet another interesting TOOLS conference, and one of the first questions that came to my mind was: Why should anybody want to organize a special section for JOT?

Trying to answer this question motivated me to look for the real raisons d’être for JOT sections, to find out their distinguishing features, and to identify the major advantages they could bring to the community working on Object Technologies. As I soon realized, two of the major advantages of JOT, when compared to other publishing venues, are its timeliness and quality. And they are both essential properties of Special Sections, as we shall see below.

Normally, the main sources for JOT Special Sections are, of course, the conferences and workshops devoted to topics within the scope of the journal (which, by the way, is much broader than most people think, because it encompasses all aspects of Object Technology; see the JOT mission at http://www.jot.fm).

Nowadays there are a myriad of high-quality events with very good papers, which provide excellent snapshots of the research being carried out in most fields. However, it is impossible for the average researcher to read all these workshop and conference papers to keep up to date. This is precisely where special sections can be so useful: they offer a small selection of the best papers from these events, in particular those papers which are mature enough to present valuable, persistent and distinctive contributions. In addition, an editorial paper from the Guest Editors introduces each topic to distill the main ideas and results discussed during the meeting.

However, this alone is not enough for Special Sections to be useful: the papers also have to be timely! There is no point in reading a paper from a conference or a workshop two years after it was held (and almost three years after it was written). In this respect, JOT can be of great help. In 2010, the renewed JOT journal moved to a new publication model in which there is no pipeline of accepted papers waiting to be published — instead, accepted papers appear as soon as the final camera-ready copy is provided. In addition, we have established a review process for special sections that is able to have papers ready for publication no more than nine months after the conference (including 5 months for preparing the extended versions of the papers), which is much more reasonable.

Another important requirement for the success of Special Sections is to have a thorough and careful review process. This is not only essential for ensuring the quality of the published papers, but it also provides a valuable service to authors. If a paper has been selected, it is because the Guest Editors think that it contains valuable ideas, and that it can be evolved into a mature journal article within a reasonable timeframe. In this respect, it is the job of the Guest Editors and the reviewers of a Special Section paper to help it improve smoothly until it reaches the required level of quality — preferably working more as shepherds and advisors for the paper than as critical judges of it.

Finally, publishing a Special Section with selected papers from a conference or workshop can also be very valuable and rewarding for Guest Editors. Not only do they have the opportunity to organize the event and select the papers that will be presented, but a Special Section also gives them the chance to prepare a summary for the community: one that introduces the key ideas and concepts of the topic, distills the most valuable discussions held during the event, and gives prominence to the most significant works presented during the meeting, now developed into mature, high-quality journal papers. This is a really valuable service to the community.

In the case of JOT, we have been quite successful so far at attracting Special Sections. Most of them come from events held this year (2011), and are expected to see the light of day during the first half of 2012. Others are already planned from 2012 conferences.

  • New Mechanisms for Object Oriented Languages (Best papers of ICOOOLPS and MASPEGHI 2010). Guest editors: Markku Sakkinen and Olivier Zendra.
  • Refactoring and Testing (Best papers of REFTEST 2011). Guest editors: Steve Counsell and Stephen Swift.
  • Object Technology (Best papers of TOOLS Europe 2011). Guest editors: Judith Bishop and Antonio Vallecillo.
  • Model Comparison in Practice (Best papers of IWMCP 2011). Guest editors: Dimitris Kolovos, Davide Di Ruscio and Alfonso Pierantonio.
  • Model Transformations (Best papers of ICMT 2011). Guest editors: Jordi Cabot and Eelco Visser.
  • Object Technology (Best papers of TOOLS Europe 2012). Guest editors: Sebastian Nanz and Carlo Furia.

We are currently preparing a new set of Special Sections. If you recently organized a conference or a workshop, and are thinking of organizing a Special Section, please do not hesitate to contact us. We are also working on making the job easier for Guest Editors, and have prepared a set of resources that aim to help them: FAQ documents, guidelines, template invitation letters, review forms, etc.

In summary, these past months have been very busy, preparing the supporting infrastructure and organizing the forthcoming special sections for JOT, which we hope you will find useful and interesting. But publishing valuable Special Sections is a collective effort, whose success or failure depends on us all, so please keep your proposals coming! I am sure the readers of this journal will appreciate it very much.

Antonio Vallecillo.

2011/08/04

TOOLS Europe 2011 – Day 1

Filed under: Conference Report — Tags: — alexbergel @ 17:55

My name is Alexandre Bergel and I’m an assistant professor at the University of Chile; for JOT, I’ll be blogging about TOOLS EUROPE 2011. This is the fourth TOOLS EUROPE I’ve attended. This 49th edition took place in the lovely city of Zurich, in the German-speaking part of Switzerland. This blog entry relates the conference as I lived it.

Opening Session

The conference is opened by Judith Bishop, Antonio Vallecillo (the program co-chairs) and Bertrand Meyer. Observation (and evidence!) suggests that this year’s event has more participants than last year’s. The amphitheater is full, and not many attendees have their laptops out, which is a good sign! Judith and Antonio give us some stats, which you can find elsewhere, but what I found noteworthy was that the acceptance rate was 28%; by comparison, ECOOP’11 has an acceptance rate of 28% and OOPSLA’11 36% (!).

This year’s event features three novelties: Catherine Dubois will give a tutorial about Coq on Wednesday; a best-paper prize will be awarded; and a selection of the papers will appear in a special issue of JOT. I like surprises, especially when they are this good. A tutorial by an expert, at no additional cost, is indeed something one should not miss. The award is also an inexpensive mechanism to advertise the conference: winners are always happy to mention it. The special issue is also a good thing. Unfortunately, JOT is (still) not an ISI journal, which means that it is not considered by national science agencies in South America. Oscar, the editor-in-chief, is aware of the importance of being in ISI; I cannot resist raising this with him each time we meet. Apparently JOT has a nice backlog, meaning that the camera-ready versions of the journal papers are due in December.

Bertrand then tells us something about next year. TOOLS EUROPE 2012 will be the 50th event and will be held in Prague at the end of May, just before ICSE, which will be held in Zurich. This is earlier than usual, but Prague is terrific and I hope that, as a result, the conference attracts lots of great papers.

As much as we like surprises, it’s now time to move on to the even better stuff: the talks!

Keynote – Oscar Nierstrasz

Oscar’s keynote is called “Synchronizing Models and Code”. He considered naming it “Mind the Gap”. However, Oscar is Canadian and lives in Bern, where there is no underground 🙂

Oscar divides his keynote into four parts.

Part 1: “programming is modeling”. This first part is about trying to understand what modeling means in the object-oriented paradigm. Oscar asks: what is the “OO paradigm”? What people traditionally associate with OOP is “reuse”, “design”, “encapsulation”, “objects+classes+inheritance”, “programs = objects+messages”, or “everything is an object”. Oscar argues that none of these buzzwords gives an adequate definition, and that the essence of object-orientation lies in the possibility of defining one’s own domain-specific language. “Design your own paradigm” seems to reflect the truth behind OOP. This reflects a slight change from his previous blog post on the topic (Ten Things I Hate About Object-Oriented Programming): Oscar now agrees that object-orientation is indeed a paradigm. I buy Oscar’s claim that the essence of OOP cannot be easily captured; however, I wonder whether the ability to easily express new paradigms and domain-specific abstractions is really what crystallizes OOP. Scheme programmers have a powerful macro mechanism that seems to provide equal capability.

Part 2: The second part of the talk is subtitled “mind the gap”. Here, Oscar argues that there is a gap between model and code. This can easily be seen from the activity of the reverse-engineering community, which spends most of its time making sense of code by providing adequate representations. The modeling community is actively working in the other direction (as can be seen from some of the papers at ICMT’11, which is co-located with TOOLS EUROPE 2011). Oscar refines his argument by clarifying that there are actually many gaps. For example, there is a gap between static and dynamic views: what you see in the source code is classes and inheritance; what you see in the executing software is objects sending messages. I like Oscar’s punch. It is astonishing to see that code debuggers, code profilers and test-coverage tools still base their analysis on classes, methods and stack frames. We profile and debug Java applications the same way as COBOL and C. How can one effectively identify the culprit of a bottleneck or a failure by focussing solely on classes and methods? There is indeed something odd here, and Oscar puts his finger on it.

Part 3 of the talk is called “bridging the gaps”. After so many open questions, the natural one to dwell on is: how can we bridge the gaps between model and code? Oscar’s group has extensive experience in reverse engineering and component-based software engineering. Moose, a platform for software and data analysis (moosetechnology.org), is a solution for bridging these gaps. Oscar spends some time talking about Moose’s features, including an extensible metamodel, metrics and visualisations. Polymetric views are particularly interesting: they have had a strong impact on my personal work, since most of my research results are visually presented using a polymetric view. To demonstrate some of these ideas, Oscar talks about Mondrian for a while; it is an agile visualization engine that visualizes arbitrary data via short scripts written in Smalltalk/Pharo. I am myself the maintainer of Mondrian. I encourage the reader to try it out; people find it mind-blowing 🙂

Oscar finishes this part of the talk by summarising a number of language and runtime extensions, including Bifrost (for live feature analysis), Pinocchio (an extensible VM for dynamic languages), and Classboxes (which I defined in my PhD), the precursor to Changeboxes, which enable forking and merging application versions at runtime.

We now arrive at Part 4: “lessons learned”. What Oscar’s group has learned is that “less is more”, i.e., “keep it simple”. All the experience gained by Oscar’s group suggests that simple programming languages can often be more successful than complex ones. “Keep it simple” is apparently a favorite lesson of many keynotes these days (e.g., Craig Chambers at ECOOP’11, David Ungar at ECOOP’09).

What else has Oscar’s group learned? Visualizations often use simple metrics defined on simple structures. Oscar also argues that there are currently too many disconnected tools supporting software engineering. I agree with Oscar: I have often found opportunities in connecting different mechanisms and tools (e.g., profiling and visualization).

Oscar’s conclusion is that much of what we talk about in software engineering is about managing change. We spend too much time recreating links between artifacts; instead, we should maintain the links between them. To do this, we should manipulate models, not code. How can I disagree? Oscar’s talk is a success and leads to more than 30 minutes of questions, most of which, I had the impression, were about software visualization.

Session 1

After coffee (much needed), we moved to the first technical session, which was actually a shared session with ICMT. As such, the presentations were chosen to be of interest to both TOOLS EUROPE and ICMT attendees. This is always a nice part of TOOLS, the cross-pollination of presentations from different co-located events.

Jochen Kuester presents the first paper, on “Test Suite Quality for Model Transformation Chains”, focusing on issues in testing model transformation chains (and not just single transformations, which are easier), based on specifying the models used as input and output. The authors have lots of experience in dealing with workflow models in particular. What is interesting here is the emphasis the authors put on the quality of the test suite, including element coverage. This seems to be a good move towards a proper testing discipline for model transformations.

The second talk of the session is given by Marco Trudel, and is titled “Automated Translation of Java Source Code to Eiffel”. The work is driven by (i) the need to use Java libraries within Eiffel and (ii) the need to compile Java applications to native code. Translating Java code into Eiffel poses a number of challenges: Java’s “continue” and “return” instructions, exceptions, and the Java Native Interface (JNI). The performance overhead is only 2.2x (based on running Mauve, the java.io unit tests). There are lots of questions after the talk, many of which seem to be variations of “why translate Java into Eiffel in the first place?”, even though the presenter addressed this at the start.

The third talk is by Markus Lepper; it is an ICMT paper, but from a group outside the typical ICMT community (closer to the compiler-construction community). The paper is “Optimization of Visitor Performance by Reflection-Based Analysis”. This is a bit outside my personal research interests, but I took away the following: “Everything is a model. Every computation is a model transformation”.

The fourth and final talk of the session is by Fredrik Seehusen, entitled “An evaluation of the Graphical Modeling Framework (GMF) based on the Development of the CORAS tool”. GMF is an Eclipse-based framework for developing graphical language editors. The paper evaluates GMF in the light of building a tool called CORAS, and some interesting empirical observations are made, even though the presenter noted that there were potentially simpler approaches to producing the GMF editor (e.g., using a tool like EuGENia) that would lead to different observations. Nevertheless, this is an interesting presentation showing clear benefits, though I admit it conveys an impression of deja-vu for the Smalltalker that I am!

Session 2

The second session of the day focuses purely on TOOLS EUROPE presentations. The first talk is called “KlaperSuite: an Integrated Model-Driven Environment for Non-Functional Requirements Analysis of Component-Based Systems”. The problem on which the authors focus is how to control the cost of enhancing QoS. Their approach is to use automatic QoS prediction at design time, based on an automatic model transformation of several input models. This is an interesting presentation of a complicated transformation with real applications.

The second talk is titled “Unifying Subjectivity”, by Langone, Ressia and Nierstrasz. “Subjective” means “based on or influenced by personal feelings”. Subjectivity is related to ContextL, perspectives (from Us), and roles (from gBeta). The problem addressed by Langone et al. is that all these related works are based on a common point of view. One of their messages is that “context is subjective”.

Session 3

The first day ends with three presentations. First up is “Lifted Java: A minimal Calculus for Translation Polymorphism”. Compared to Featherweight Java, Lifted Java has base classes, role classes, and a playedBy relation. It also has lifting and lowering, to shrink or widen a class during execution. This is an excellent talk on an excellent paper; it turns out that it was awarded the best-paper prize. I observe that today featured a number of papers about context and subjectivity: a clear indication that this is a hot topic, and not only for TOOLS!

There are two more talks in the session, “Location Types for Safe Distributed Object-Oriented Programming” and “Static Dominance Inference”, but I am off discussing ideas with some of the previous presenters. Apologies to the last two speakers; there’s so much good research I can’t keep up!

GPCE 2009 Special Section

Filed under: Special Section Editorial — Tags: — fisch @ 15:58

This special section presents two extended papers from the Eighth International Conference on Generative Programming and Component Engineering (GPCE’09), which was held October 4-5, 2009, in Denver, Colorado. GPCE has become the premier venue for research in generative and component-oriented programming. GPCE’09 attracted 62 submissions, of which 19 were accepted, a 31% acceptance rate.

For this special section, we invited the authors of the highest ranked papers to submit revised and extended versions of their conference contributions to this JOT special section, and received three submissions. After an additional round of reviewing, two of them were accepted, and we are proud to publish them here.

We would like to thank the authors and reviewers, of both the conference and this special section, for their efforts, and hope you enjoy the results.

Bernd Fischer (Guest Editor)
Oscar Nierstrasz (Editor-in-Chief)

2011/07/01

ICMT 2010 Special Section

Filed under: Editorial — Tags: — Laurence Tratt @ 13:49

This JOT special section presents 4 carefully selected and extended papers from the third edition of the International Conference on Model Transformations (ICMT), held in Málaga in June and July 2010. The conference itself was extremely well attended, and its lively discussions triggered new work which we are sure will have an important impact in the coming years. The papers in this special section are no small part of that.

At the time of writing, ICMT has just finished its fourth edition in Zürich; proof — if any is needed — of the importance of model transformations to software modelling. Model transformations were originally thought unimportant; then begrudgingly hacked together from whatever tools were at hand; and gradually, as their importance was realised, dedicated theories and languages were developed. The papers in this special section cover a wide part of the model-transformation spectrum, showing how much progress this community has made in a relatively short space of time.

ICMT prides itself on transparency. The 2010 edition received 63 abstracts, which yielded 58 full submissions, of which 17 were eventually accepted — a 29% acceptance rate. Every paper was reviewed by at least three Programme Committee members. The resulting paper discussion and selection process was lively and detailed, reflecting the strength of the submissions. Of those 17 papers, the 4 most highly rated were then invited to submit to this special section. Each paper was reviewed by at least 2 of its original referees to ensure that it represented a substantial advance over the conference version; new referees were also used to ensure that the extended papers were of high quality in their own right.

We thank the people who made this special section possible. Most importantly, we thank the referees for giving their time to thoroughly and thoughtfully review and re-review papers, and the authors who put such hard work into the several revisions from conference submission to journal acceptance. Oscar Nierstrasz and the JOT staff made the process simple and streamlined. Finally, we thank the ICMT Steering Committee, who have always been on hand to offer advice when needed.

This special section marks the end of our work on ICMT 2010. Twelve months after the conference itself, we look back with many fond memories: from wonderful social events and lively discussions to stimulating papers. We hope that you enjoy this special section as much as we enjoyed preparing it!

Martin Gogolla and Laurence Tratt

2011/04/10

Dahl-Nygaard Prize awarded to Chambers and Igarashi

Filed under: Announcement,Uncategorized — Tags: — Jan Vitek @ 11:08

AITO is proud to announce the Dahl-Nygaard Prizes for 2011. The Senior Prize will be given to Craig Chambers, Google, for the design of the Cecil object-oriented programming language and his work on compiler techniques used to implement object-oriented languages efficiently on modern architectures. The Junior Prize will be given to Atsushi Igarashi, Graduate School of Informatics, Kyoto University, for his investigations into the foundations of object-oriented programming languages and their type systems. More information is available from AITO and the ECOOP web page.

2011/03/07

Special Section on Formal Techniques for Java-like Programs

Filed under: Editorial — piessens @ 16:56

Dear readers,

Formal Techniques for Java-like Programs (FTfJP), with 12 successful events completed and a 13th on the horizon, is one of the most prestigious and long-standing ECOOP workshop series.

The 12th edition of the workshop was organized on June 22, 2010 in Maribor, Slovenia. Twelve papers were submitted in response to the Call For Papers, and nine of them were accepted for presentation at the workshop. With an average attendance of about 30 participants — and a peak attendance of over 50 during the invited talk (thanks to Shriram Krishnamurthi) — FTfJP was one of the most popular workshops at ECOOP 2010.

We invited the authors of five papers to submit revised and extended versions to this JOT special section, and received three submissions. After an additional round of reviewing, two of them were accepted, and we are proud to publish them in this special section.

We would like to thank the researchers that submitted their work to the workshop, the speakers and participants at the event, and in particular the FTfJP 2010 program committee members and reviewers.

We look forward to seeing you (again) at a future FTfJP workshop.

Frank Piessens, Bart Jacobs and Gary T. Leavens

2011/01/31

JOT needs you!

Filed under: Editorial — Oscar Nierstrasz @ 22:28

Some of you may be wondering what has happened to JOT.

It seems that for the first time since its inception, JOT has missed a “deadline”, and no new issue appeared in January 2011.

Actually, JOT has moved to a new publication model in which there is no pipeline of accepted papers waiting to be published. Instead, accepted papers appear as soon as possible after the final camera-ready copy is provided, whereas in the past, accepted papers sometimes had to wait up to a year to appear online. The old pipeline of accepted papers and regularly scheduled issues of JOT expired with the November 2010 issue. Although the new system promises faster turnaround, the downside is that issues will no longer appear as regularly as clockwork.

The current volume of JOT is now open, and articles will appear as they become available. Although the new JOT promises faster publication for accepted papers, the evaluation process is tougher, while also being more transparent (see the information for authors page).

Since the new procedures were put in place, 43 papers were submitted to JOT in 2010. Of these, 38 have been rejected for various reasons, two are in the process of being revised, and a further three are still under review. The average response time, excluding papers still in review, was 24 days; the longest response time was 187 days (due to exceptional circumstances). By offering rapid feedback and fast, open-access publication, we believe that JOT offers a highly credible and necessary service for the object-oriented community.

As of 2010, all current and prior JOT articles have Digital Object Identifiers, thus guaranteeing their archival nature. As of 2011, JOT is published by AITO, the steering association behind ECOOP, thus also guaranteeing JOT’s future existence.

JOT welcomes special sections on selected topics, including extended “best” papers of conferences, workshops and symposia. Several such special sections are planned for the coming weeks and months.

JOT is changing, but we need your help to keep it going. We encourage our readers to submit mature research papers, state-of-the-art surveys, empirical studies and best-practice reports as regular technical articles, and reviews, conference reports, and technical notes as blog columns. We promise rapid feedback, quick publication turnaround and archival, open-access distribution.

2011/01/10

OOPSLA day 3 (finally)

Filed under: Conference Report — Tags: — nick @ 09:43

Final day of the conference (is this the latest blog post ever? Probably. Consider it an unexpected Christmas gift):

Homogeneous Family Sharing – Xin Qi

Xin talked about extending sharing from classes to class families in the J& family of languages. Sharing is a kind of bidirectional inheritance, and is a language-level alternative to the adapter design pattern. The work includes a formalism, a soundness proof, and an implementation using Polyglot. Dispatch is controlled by the view of an object; the view can be changed by a cast-like operation.

I didn’t quite get shadow classes, but I think they are like further bound classes in Tribe.

Finally, their families are open, as in open classes, so the programmer can add classes to families post hoc.

Mostly Modular Compilation of Crosscutting Concerns by Contextual Predicate Dispatch – Shigeru Chiba

Chiba presented GluonJ, a language halfway between OOP and AOP. The idea is that it should be a more modular version of aspects (I think); however, it was found to be not as modular to check and compile as standard OOP. The language supports cross-cutting concerns with predicate dispatch and an enhanced overriding mechanism.

Ownership and Immutability in Generic Java – Yoav Zibin

Yoav talked about work that combined ownership and immutability in a single system using Java’s generics. It is nice work, but I’m afraid I was too busy being nervous about being next up to write any notes.

Tribal Ownership – Nick Cameron (me!)

I talked about work with James Noble and Tobias Wrigstad on using a language with virtual classes (Tribe) to support object ownership (i.e., ownership types without the extra type annotations) for free (that is, no additional programmer syntax overhead). I really like this work, it all seems to come together so neatly, which I find pretty satisfying. I really do think virtual classes are extraordinarily powerful and yet easy enough for programmers to understand. Hopefully, they’ll make it into a mainstream language before too long…

A Time-Aware Type System for Data-Race Protection and Guaranteed Initialization – Nicholas Matsakis

Nicholas introduced a language (Harmony) in which intervals of ‘time’ are represented in the type system, making the language time-aware. This can be used to prevent race conditions in concurrent programs, and also has other uses (including some non-concurrent ones), such as allowing new objects time to establish their invariants. Intervals are scoped, and an ordering may be specified by the programmer; the runtime or compiler may reorder execution subject to this ordering. Checking is modular and flow-insensitive.
