2012/01/04

European Research Project Symposium at ECOOP 2011

Filed under: Conference Report — Tags: — szschaler @ 16:17

The European Conference on Object-Oriented Programming (ECOOP) was held in Lancaster, 25–29 July. This year, the conference featured a novelty: a Research Project Symposium, providing an opportunity for the dissemination of integrated project visions as well as discussions aiming to seed new collaborations and future research projects. With half-day sessions from three current European research projects, the symposium provided an interesting overview of European ICT research, ranging from applied work on large-scale machine-translation systems and marine information systems to foundational research on the specification and verification of adaptable systems.

These projects should be of particular interest to JOT readers as they show the application of concepts from object-orientation and beyond (e.g., ontologies and components) in a variety of contexts, which is why a brief overview of the projects is given below. Full materials as well as slides and video capture of the presentations are available from the ECOOP website at 2011.ecoop.org. The text below is based on contributions from each of the projects.

HATS – Highly Adaptable Trustworthy Software

Many software projects make strong demands on the adaptability and trustworthiness of the developed software, while the competing goal of ever-faster time-to-market is a constant drumbeat. Current development practices do not make it possible to produce highly adaptable and trustworthy software in a large-scale, cost-efficient manner. Adaptability and trustworthiness are not easily reconciled: unanticipated change, in particular, requires the freedom to add and replace components, subsystems, communication media, and functionality with as few constraints on behavioural preservation as possible. Trustworthiness, on the other hand, requires that behaviour is carefully constrained, preferably through rigorous models and property specifications, since informal or semi-formal notations lack the means to describe precisely the behavioural aspects of software systems: concurrency, modularity, integrity, security, resource consumption, etc.

The HATS project develops a formal method for the design, analysis, and implementation of highly adaptable software systems that must, at the same time, meet high demands on trustworthiness. The core of the method is an object-oriented, executable modelling language for adaptable, concurrent software components: the Abstract Behavioural Specification (ABS) language. Its design goal is to permit formal specification of concurrent, component-based systems at a level that abstracts away from implementation details but retains essential behavioural properties. HATS is an Integrated Project supported by the 7th Framework Programme of the EC within the FET (Future and Emerging Technologies) scheme.

PRESEMT – Pattern Recognition-Based Statistically Enhanced Machine Translation

The objective of the PRESEMT project is to develop a flexible and adaptable Machine Translation (MT) system from source to target language, based on a method which is easily portable to new language pairs. This method attempts to overcome well-known problems of other MT approaches, e.g. the need to compile bilingual corpora or create new rules per language pair. PRESEMT is intended to result in a language-independent machine-learning-based methodology. To that end, a cross-disciplinary approach is adopted, combining linguistic information with pattern recognition techniques towards the development of a language-independent analysis.

PRESEMT is intended to be easily customisable to new language pairs. Consequently, relatively inexpensive, readily available language tools and internet-sourced resources are used (a large monolingual corpus in the source language and a small parallel corpus in the source and target languages), while the platform can handle the input of different linguistic tools, in order to support extensibility to new language pairs and user requirements. The translation context is modelled on phrases that are produced via an automatic and language-independent process, removing the need for specific, compatible NLP tools per language pair. Furthermore, system optimisation and personalisation are implemented via meta-heuristics (such as genetic algorithms or swarm intelligence).

PRESEMT is funded by the European Union under its Framework 7 Programme.

NETMAR – Open Service Network for Marine Environmental Data

NETMAR aims to develop a pilot European Marine Information System (EUMIS) for searching, downloading and integrating satellite, in situ, and model data from ocean and coastal areas. It will be a user-configurable system offering standards-based, flexible service discovery, access and chaining facilities. It will use a semantic framework coupled with ontologies for identifying and accessing distributed data, such as near-real time, forecast and historical data. EUMIS will also enable further processing of such data to generate composite products and statistics suitable for decision-making in diverse marine application domains.

The project aims to support operational “Smart Discovery” based on ontologies. This is the process by which users of the system are able to locate and therefore utilise datasets using search terms that are different but semantically linked to dataset labels such as keywords.  A frequently quoted example is the location of datasets labelled ‘rainfall’ using the search term ‘precipitation’. In a pan-European context the issue of language arises, meaning that operational “Smart Discovery” also needs to be able to link datasets labelled in one European language to search terms supplied in another. The creation of “Smart Discovery” demonstrators is straightforward using a small ontology containing carefully selected dataset labels linked to a few search terms that are well known to the demonstrator. However, taking this to the operational scale is much more difficult because there is no foreknowledge of the search terms that will be used.  Consequently, domain coverage of the underlying ontologies needs to be as close to complete as possible.
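As an aside from me (not part of the NETMAR materials), the mechanics of synonym-based discovery are easy to sketch: expand the user's search term through a small thesaurus before matching it against dataset labels. Everything here, the class name and the tiny thesaurus alike, is hypothetical.

```java
import java.util.*;

// Hypothetical sketch of "Smart Discovery": a search term is expanded
// through a small thesaurus (a stand-in for a real ontology) before it
// is matched against dataset labels.
public class SmartDiscovery {
    // term -> semantically linked terms; note the cross-language link
    static final Map<String, Set<String>> THESAURUS = Map.of(
        "precipitation", Set.of("rainfall", "snowfall"),
        "niederschlag",  Set.of("precipitation", "rainfall"));

    static Set<String> expand(String term) {
        Set<String> result = new HashSet<>();
        result.add(term);
        result.addAll(THESAURUS.getOrDefault(term, Set.of()));
        return result;
    }

    static List<String> discover(String searchTerm, List<String> datasetLabels) {
        Set<String> terms = expand(searchTerm.toLowerCase());
        List<String> hits = new ArrayList<>();
        for (String label : datasetLabels)
            if (terms.contains(label.toLowerCase())) hits.add(label);
        return hits;
    }

    public static void main(String[] args) {
        List<String> labels = List.of("rainfall", "sea surface temperature");
        System.out.println(discover("precipitation", labels)); // prints [rainfall]
    }
}
```

The operational difficulty described above is precisely that a two-entry map like this must grow to near-complete domain coverage before the approach works at scale.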

One possible approach to the development of operational ontologies might be to bring together groups of domain experts and knowledge engineers in a series of knowledge-capture semantic workshops. However, this approach fails to take into account existing computerised knowledge resources, making it extremely inefficient. A much more productive approach is to identify pre-existing controlled vocabularies, thesauri and ontologies that may be brought together into a single semantic resource to underpin the EUMIS.

NETMAR is funded by the European Union under its Framework 7 Programme.

2011/08/04

TOOLS Europe 2011 – Day 1

Filed under: Conference Report — Tags: — alexbergel @ 17:55

My name is Alexandre Bergel and I’m an assistant professor at the University of Chile; for JOT, I’ll be blogging about TOOLS EUROPE 2011. This is the fourth TOOLS EUROPE I’ve attended. This 49th edition took place in the lovely city of Zurich, in the German-speaking part of Switzerland. This blog entry relates the conference as I lived it.

Opening Session

The conference is opened by Judith Bishop and Antonio Vallecillo (the program co-chairs) and Bertrand Meyer. Observation (and evidence!) suggests that this year’s event has more participants than last year’s. The amphitheater is full, and not many attendees have their laptops out, which is a good sign! Judith and Antonio give us some stats, which you can find elsewhere, but what I found noteworthy was that the acceptance rate was 28%; by comparison, ECOOP’11 has an acceptance rate of 28% and OOPSLA’11 36% (!).

This year’s event contains three novelties: Catherine Dubois will give a tutorial about Coq on Wednesday; a best paper prize will be awarded; and a selection of the papers will appear in a special issue of JOT. I like surprises, especially when they are this good. A tutorial by an expert, at no additional cost, is indeed something one should not miss. The award is also an inexpensive mechanism to advertise the conference: winners are always happy to mention it. The special issue is also a good thing. Unfortunately, JOT is (still) not an ISI journal, which means it is not considered by national science agencies in South America. Oscar, the editor-in-chief, is aware of the importance of being in ISI; I cannot resist raising this with him each time we meet. Apparently JOT has a nice backlog, which means that the camera-ready versions of the journal papers are due in December.

Bertrand then tells us something about next year. TOOLS EUROPE 2012 will be the 50th event and will be held in Prague at the end of May. The conference will take place just before ICSE, which will be held in Zurich. This is earlier than usual, but Prague is terrific and I hope the conference as a result attracts lots of great papers.

As much as we like surprises, it’s now time to move on to the even better stuff: the talks!

Keynote – Oscar Nierstrasz

Oscar’s keynote is called “Synchronizing Models and Code”. He considered naming it “Mind the Gap”. However, Oscar is Canadian and lives in Bern, where there is no underground 🙂

Oscar divides his keynote into four parts.

Part 1: “programming is modeling”. This first part tries to understand what modeling means in the object-oriented paradigm. Oscar asks: what is the “OO paradigm”? What people traditionally associate with OOP is “reuse”, “design”, “encapsulation”, “objects+classes+inheritance”, “programs = objects+messages”, or “everything is an object”. Oscar argues that none of these buzzwords gives an adequate definition, and that the essence of object-orientation lies in the ability to define one’s own domain-specific language. “Design your own paradigm” seems to reflect the truth behind OOP. This reflects a slight change from his previous blog on the topic (Ten Things I Hate About Object-Oriented Programming): Oscar now agrees that object-orientation is indeed a paradigm. I buy Oscar’s claim that the essence of OOP cannot be easily captured; however, I wonder whether the ability to easily express new paradigms and domain-specific abstractions is really what crystallizes OOP. Scheme programmers have a powerful macro mechanism that seems to provide equal capability.
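To illustrate the “design your own language” claim (my own sketch, nothing from the keynote), a fluent interface is about the simplest way objects and messages grow a domain vocabulary inside the host language:

```java
// A tiny embedded DSL for durations, sketched as a fluent interface:
// method chaining makes client code read almost like the domain.
public class Dsl {
    static final class Duration {
        final long seconds;
        Duration(long s) { seconds = s; }
        // "and" combines two durations, enabling chained phrases
        Duration and(Duration other) { return new Duration(seconds + other.seconds); }
    }
    static Duration minutes(long n) { return new Duration(n * 60); }
    static Duration hours(long n)   { return new Duration(n * 3600); }

    public static void main(String[] args) {
        // Reads like the domain: "two hours and thirty minutes"
        Duration d = hours(2).and(minutes(30));
        System.out.println(d.seconds); // prints 9000
    }
}
```

Whether this kind of embedded vocabulary is the essence of OOP or merely one of its pleasant side effects is exactly the question I raise above.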

Part 2: The second part of the talk is subtitled “mind the gap”. Here, Oscar is arguing that there is a gap between model and code. This can easily be seen from the activity of the reverse engineering community, which spends most of its time making sense of code by providing adequate representations. The modeling community is actively working in the other direction (as can be seen from some of the papers at ICMT’11, which is co-located with TOOLS EUROPE 2011). Oscar presents his argument by clarifying that there are actually many gaps. For example, there is a gap between static and dynamic views: what you see in the source code is classes/inheritance; what you see in the executing software is objects sending messages. I like Oscar’s punch. It’s astonishing to see that code debuggers, code profilers and test coverage tools still base their analysis on classes, methods and stack frames. We profile and debug Java applications the same way as COBOL and C. How can one effectively identify the culprit of a bottleneck or a failure by focusing solely on classes and methods? There is indeed something odd here and Oscar puts his finger on it.

Part 3 of the talk is called “bridging the gaps”. After so many open questions, the natural one to dwell on is: how can we bridge the gaps between model and code? Oscar’s group has extensive experience in reverse engineering and component-based software engineering. Moose, a platform for software and data analysis (moosetechnology.org), is a solution for bridging these gaps. Oscar spends some time talking about Moose’s features, including an extensible metamodel, metrics and visualisations. Polymetric views are particularly interesting. They’ve had a strong impact on my personal work since most of my research results are visually presented using a polymetric view. To demonstrate some of these ideas, Oscar talks about Mondrian for a while; it’s an agile visualization engine that visualizes arbitrary data from short scripts written in Smalltalk/Pharo. I am myself the maintainer of Mondrian. I encourage the reader to try it out; people find it mind blowing 🙂

Oscar finishes this part of the talk by summarising a number of language and runtime extensions, including Bifrost (for live feature analysis), Pinocchio (an extensible VM for dynamic languages), Classboxes (which I defined in my PhD), which are the precursor to Changeboxes, which enable forking and merging application versions at runtime.

We now arrive at Part 4: “lessons learned”. What Oscar’s group has learned is that “less is more”, i.e., “keep it simple”. Their experience is that simple programming languages can often be more successful than complex ones. “Keep it simple” is apparently a favourite lesson of many keynotes these days (e.g., Craig Chambers at ECOOP’11, David Ungar at ECOOP’09).

What else has Oscar’s group learned? Visualizations often use simple metrics defined on simple structures. Oscar also argues that there are currently too many disconnected tools supporting software engineering. I agree with Oscar here; I have often found opportunities in connecting different mechanisms and tools (e.g., profiling and visualization).

Oscar’s conclusion is that much of what we talk about in software engineering is about managing change. We spend too much time recreating links between artifacts; instead, we should maintain the links between them. To do this, we should manipulate models, not code. How can I disagree? Oscar’s talk is a success and leads to more than 30 minutes of questions. I had the impression that most of them were about software visualization.

Session 1

After coffee (much needed), we moved to the first technical session, which was actually a shared session with ICMT. As such, the presentations were chosen to be of interest to both TOOLS EUROPE and ICMT attendees. This is always a nice part of TOOLS, the cross-pollination of presentations from different co-located events.

Jochen Kuester presents the first paper on “Test Suite Quality for Model Transformation Chains”, focusing on issues to do with testing model transformation chains (and not just single transformations, which are easier) based on specifying models used as input and output. The authors have lots of experience in dealing with workflow models in particular. What is interesting here is how the authors put an emphasis on the quality of the test suite, including element coverage. This seems to be a good move towards a proper testing discipline for model transformations.

The second talk of the session is given by Marco Trudel and is titled “Automated Translation of Java Source Code to Eiffel”. The work is driven by (i) the need to use Java libraries within Eiffel and (ii) the wish to compile Java applications to native code. Translating Java code into Eiffel poses a number of challenges: the Java “continue” and “return” instructions, exceptions, and the Java Native Interface (JNI). The translated code shows only a 2.2x performance overhead (based on running Mauve, the java.io unit tests). There are lots of questions after the talk, many of which seem to be variations of the question “why translate Java into Eiffel in the first place?”, even though the presenter addressed this at the start.
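Eiffel has no `continue`, so such jumps must be compiled away into structured control flow. Purely as my own illustration (written in Java for readability, not taken from the paper), the rewrite for a simple case looks like this:

```java
// Illustration of why `continue` is a translation challenge: the jump
// must be turned into purely structured control flow.
public class ContinueRewrite {
    // Original style: skips odd numbers using `continue`.
    static int sumEvens(int[] xs) {
        int sum = 0;
        for (int x : xs) {
            if (x % 2 != 0) continue;
            sum += x;
        }
        return sum;
    }

    // Rewritten without `continue`: the jump becomes a guarded block,
    // the kind of transformation a Java-to-Eiffel translator must apply.
    static int sumEvensNoContinue(int[] xs) {
        int sum = 0;
        for (int x : xs) {
            if (x % 2 == 0) {
                sum += x;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] xs = {1, 2, 3, 4};
        System.out.println(sumEvens(xs) == sumEvensNoContinue(xs)); // prints true
    }
}
```

Real code with nested loops, early `return`s and exceptions makes the same rewrite considerably messier, which is presumably where the paper earns its keep.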

The third talk is by Markus Lepper, and it is an ICMT paper but from a group outside of the typical ICMT community (more from the compiler construction community). The paper is “Optimization of Visitor Performance by Reflection-Based Analysis”. This is a bit outside of my personal research interests, but I understood the following: “Everything is a model. Every computation is a model transformation”.

The fourth and final talk of the session is by Fredrik Seehusen, entitled “An Evaluation of the Graphical Modeling Framework (GMF) Based on the Development of the CORAS Tool”. GMF is an Eclipse-based framework for the development of graphical language editors. The paper evaluates GMF in light of building a tool called CORAS, and some interesting empirical observations are made, even though the presenter noted that there were potentially simpler approaches to producing the GMF editor (e.g., using a tool like EuGENia) that would lead to different observations. Nevertheless, this is an interesting presentation showing clear benefits, but I admit it conveys an impression of déjà vu for the Smalltalker that I am!

Session 2

The second session of the day focuses purely on TOOLS EUROPE presentations. The first talk is called “KlaperSuite: an Integrated Model-Driven Environment for Non-Functional Requirements Analysis of Component-Based Systems”. The problem the authors focus on is how to control the cost of enhancing QoS. Their approach is automatic QoS prediction at design time, based on an automatic model transformation of several input models. This is an interesting presentation of a complicated transformation with real applications.

The second talk is titled “Unifying Subjectivity”, by Langone, Ressia and Nierstrasz. “Subjective” means “based on or influenced by personal feelings”. Subjectivity is related to ContextL, perspectives (from Us), and roles (from gBeta). The problem addressed by Langone et al. is that all of this related work assumes a common point of view. One of their messages is that “context is subjective”.

Session 3

The first day ends with three presentations. First up is “Lifted Java: A Minimal Calculus for Translation Polymorphism”. Compared to Featherweight Java, Lifted Java has base classes, role classes, and a playedBy relation. It also has lifting and lowering to shrink or widen a class during execution. This is an excellent talk on an excellent paper; it turns out that it has been awarded the best paper prize. I observe that today there are a number of papers about context and subjectivity. This is a clear indication that this is a hot topic, and not only for TOOLS!

There are two more talks in the session, “Location Types for Safe Distributed Object-Oriented Programming” and “Static Dominance Inference”, but I am off discussing ideas with some of the previous presenters. Apologies to the last two speakers; there’s so much good research I can’t keep up!

2011/01/10

OOPSLA day 3 (finally)

Filed under: Conference Report — Tags: — nick @ 09:43

Final day of the conference (is this the latest blog post ever? Probably. Consider it an unexpected Christmas gift):

Homogeneous Family Sharing – Xin Qi

Xin talked about extending sharing from classes to class families in the J& family of languages. Sharing is a kind of bidirectional inheritance, and is a language-level alternative to the adapter design pattern. The work includes formalism, soundness proof, and implementation using Polyglot. Dispatch is controlled by the view of an object, the view can be changed by a cast-like operation.

I didn’t quite get shadow classes, but I think they are like further bound classes in Tribe.

Finally, their families are open, as in open classes, so the programmer can add classes to families post hoc.

Mostly Modular Compilation of Crosscutting Concerns by Contextual Predicate Dispatch – Shigeru Chiba

Chiba presented a halfway language between OOP and AOP called GluonJ. The idea is that it should be a more modular version of aspects (I think). However, it was found to be not as modular to check and compile as standard OOP. The language supported cross-cutting concerns with predicate dispatch and an enhanced overriding mechanism.
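Setting GluonJ's actual syntax aside, the general idea of predicate dispatch, choosing a method body by an arbitrary guard rather than by the receiver's class alone, can be sketched in plain Java. All the names below are my own, hypothetical ones:

```java
import java.util.*;
import java.util.function.*;

// Minimal predicate-dispatch sketch: the first clause whose guard
// accepts the argument supplies the behaviour.
public class PredicateDispatch {
    record Clause<T, R>(Predicate<T> guard, Function<T, R> body) {}

    static <T, R> Function<T, R> dispatcher(List<Clause<T, R>> clauses) {
        return arg -> clauses.stream()
                .filter(c -> c.guard().test(arg))      // evaluate guards in order
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("no applicable clause"))
                .body().apply(arg);
    }

    // A "method" whose branches are selected by predicates, not classes.
    static final Function<Integer, String> DESCRIBE = dispatcher(List.of(
        new Clause<Integer, String>(n -> n < 0,  n -> "negative"),
        new Clause<Integer, String>(n -> n == 0, n -> "zero"),
        new Clause<Integer, String>(n -> true,   n -> "positive")));

    public static void main(String[] args) {
        System.out.println(DESCRIBE.apply(-5)); // prints negative
    }
}
```

The modularity question Chiba raises is visible even here: because any guard may mention anything, checking clauses independently of one another is hard.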

Ownership and Immutability in Generic Java – Yoav Zibin

Yoav talked about work that combined ownership and immutability in a single system using Java’s generics. It is nice work, but I’m afraid I was too busy being nervous about being next up to write any notes.

Tribal Ownership – Nick Cameron (me!)

I talked about work with James Noble and Tobias Wrigstad on using a language with virtual classes (Tribe) to support object ownership (i.e., ownership types without the extra type annotations) for free (that is, no additional programmer syntax overhead). I really like this work, it all seems to come together so neatly, which I find pretty satisfying. I really do think virtual classes are extraordinarily powerful and yet easy enough for programmers to understand. Hopefully, they’ll make it into a mainstream language before too long…

A Time-Aware Type System for Data-Race Protection and Guaranteed Initialization – Nicholas Matsakis

Nicholas introduced a language (Harmony) where intervals of ‘time’ are represented in the type system to make the language time-aware. This can be used to prevent race conditions in concurrent programs and for other uses (including some non-concurrent ones), such as allowing new objects time to establish their invariants. Intervals are scoped and an ordering may be specified by the programmer; the runtime or compiler may reorder execution subject to this ordering. Checking is modular and is flow insensitive.

OOPSLA day 2 (belated)

Filed under: Conference Report — Tags: — nick @ 09:41

NOTE: I’ve come back to my notes about the last two days of OOPSLA; it’s two weeks since the conference ended, and my memory is already kind of hazy, so the quality of these last two posts might be less than ideal… and another week and a half passed before I finished even the first of them. Still, better late than never, eh?

Creativity: Sensitivity and Surprise – Benjamin Pearce

Benjamin gave the oddest invited talk I’ve ever seen. He talked about various aspects of creativity over a large set of photographs, including some of his own. The photos were beautiful and made a great show. Not entirely sure what it has to do with programming, languages, systems, or applications, except at the most abstract level. Still an interesting talk, and it seemed to go down very well with the audience too.

Specifying and Implementing Refactorings – Max Schäfer

Automatic refactoring is popular, and correct in the common cases, but specifications are imperfect. The current ‘best practice’ (e.g., in Eclipse) is to use a bunch of preconditions, but this is not ideal for automatic tools because it is difficult to identify all necessary preconditions; so refactorings sometimes go wrong even when all the preconditions are satisfied.

The authors previously suggested specifications based on dependencies and breaking refactorings down into smaller pieces. In this work, they show that this idea actually works for proper refactorings. The dependencies are static and semantic, e.g., constraints on synchronisation and name binding. The authors specified and implemented 17 refactorings.


What can the GC compute efficiently? – Christoph Reichenbach

Christoph presented a system which checks assertions when the garbage collector is run. These assertions are about objects and relations between objects in the heap. This is a pretty efficient way to check heap assertions because the heap must be traversed anyway to do GC. There is a single-touch property – i.e., each assertion can only touch each object once – so checking the assertions is very fast. Their assertion language can describe reachability, dominance, and disjointness, and assertions can be combined with the usual logical operators. Interestingly, garbage collection must be re-ordered to check for reachability and dominance.
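To make the reachability part concrete (this is only my illustration, not the paper's assertion language), checking whether one object is reachable from another is the same traversal a tracing collector performs anyway, which is why piggy-backing assertions on collection is cheap:

```java
import java.util.*;

// Illustration of a heap reachability assertion: is `target` reachable
// from `root` by following references? Each object is touched at most
// once, mirroring the single-touch property mentioned in the talk.
public class HeapAssert {
    static final class Node {
        final List<Node> refs = new ArrayList<>();
    }

    static boolean reachable(Node root, Node target) {
        Set<Node> visited = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Node> work = new ArrayDeque<>();
        work.push(root);
        while (!work.isEmpty()) {
            Node n = work.pop();
            if (n == target) return true;
            if (visited.add(n)) work.addAll(n.refs); // expand each node only once
        }
        return false;
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node(), c = new Node();
        a.refs.add(b);
        b.refs.add(c);
        System.out.println(reachable(a, c)); // prints true
    }
}
```

A standalone check like this costs a full traversal; the point of the paper is that the collector is doing that traversal anyway.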

Type Classes as Objects and Implicits – Bruno Oliveira

This work ‘encodes’ Haskell type classes in Scala using generics and implicits (the latter being a Scala feature that lets the programmer omit some parameters). My understanding of the work was that type classes can be encoded using only generics, but implicits are required to make the encoding usable by a programmer. There is a whole lot of other complex Scala type system stuff – I have notes about type members and dependent method types, but I can’t remember why…

The interesting thing is that you end up with a really, really powerful language feature: as well as type classes, you can encode session types, which I find incredible (although according to the paper, you can do this with Haskell type classes).
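The generics-only half of the encoding is visible even in Java, where the type-class “dictionary” must be passed by hand; Scala's implicits remove exactly that burden. A minimal sketch of my own:

```java
// A type class as an object: the "dictionary" is an explicit parameter.
public class TypeClasses {
    interface Ord<T> { int compare(T a, T b); }            // the type class

    static final Ord<Integer> INT_ORD = Integer::compare;  // an instance

    // A function constrained by the type class: in Haskell this would be
    // max :: Ord a => a -> a -> a; here the dictionary is passed by hand.
    static <T> T max(T a, T b, Ord<T> ord) {
        return ord.compare(a, b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        // In Scala, an implicit parameter lets you write max(3, 7)
        // and have the compiler supply INT_ORD.
        System.out.println(max(3, 7, INT_ORD)); // prints 7
    }
}
```
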

Supporting Dynamic, Third-Party Code Customizations in JavaScript using Aspects

The authors are motivated by the popularity of JavaScript, both on the web and for customising browsers. Such scripts typically rely heavily on code injection, that is, inserting new code into existing scripts. This is a pretty ugly process all round – it’s as non-modular as you can imagine and implemented in totally unchecked and unsafe ways (mostly monkey patching). The authors propose doing it with aspect-style weaving instead, but claim it’s not really aspects, apparently. Weaving is done by the JIT. Their empirical results show that their approach is sufficiently expressive for most uses.
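As a loose analogy in Java (my sketch; the paper itself targets JavaScript and JIT-level weaving), around-advice can wrap an existing object via a dynamic proxy instead of patching its code:

```java
import java.lang.reflect.*;

// Around-advice via java.lang.reflect.Proxy: behaviour is added around
// an existing object without editing (monkey patching) the callee.
public class AdviceDemo {
    interface Greeter { String greet(String name); }

    static Greeter withLogging(Greeter target, StringBuilder log) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            (proxy, method, callArgs) -> {
                log.append("before ").append(method.getName()).append('\n'); // advice
                Object result = method.invoke(target, callArgs);             // proceed
                log.append("after\n");
                return result;
            });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Greeter base = name -> "hello " + name;
        Greeter advised = withLogging(base, log);
        System.out.println(advised.greet("world")); // prints hello world
        System.out.print(log);
    }
}
```

The contrast with injection is that the original object is untouched and the interception point is explicit, which is roughly the modularity argument the authors make.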

2010/10/27

OOPSLA day 1

Filed under: Conference Report — Tags: — nick @ 09:50

OOPSLA proper starts today. William Cook gave a mercifully short introduction and then we were straight into the first ever SPLASH invited talk, on evolving software by Stephanie Forrest. After the break, I attended the Onward! research paper stream, then after lunch an OOPSLA session, and the panel in the last session.

Registration-Based Language Abstractions – Samuel Davis

Samuel presented a method for adding language constructs to a language. These constructs are outside of the language, but also outside of the source code, so each programmer can have their own personal version of a programming language and the tool will present code using the right constructs. It seems like a very sophisticated macro system to me, but with better tool support (I don’t mean this in a derogatory way, the system is obviously more powerful and useful than macros, I just mean it as a simile).

I attended and enjoyed two interesting talks – Pinocchio: Bringing Reflection to Life with First-class Interpreters, presented by Toon Verwaest, and Lime: A Java-Compatible and Synthesizable Language for Heterogeneous Architectures, presented by Joshua Auerbach. I’m afraid I can’t say much about either of them, but they were good talks and I’ll try to read both papers.

From OO to FPGA: Fitting Round Objects into Square Hardware? – Jens Palsberg

A talk on compiling high-level languages to FPGAs; the challenge is to compile a standard OO program to an FPGA. Currently, code written in a small subset of C can be compiled to FPGAs, but hand-coded FPGA code is better (faster, less area, lower energy consumption). The general technique presented is to compile from Virgil to C and then to FPGAs. Unfortunately, the C subset is so small (no pointers, etc.) that objects cannot be compiled in the usual way.

The authors used a mix of existing compilation techniques with some new ideas of their own. Essentially they compile objects to sparse integer arrays, but must then expend a lot of effort in compressing these tables.

They have experimental results which show slightly better performance for their tool chain than for the hand-tuned version (in the non-OO case). In the OO case, it is harder to compare (no one else has done it), but by interpreting performance results from CPU execution, they reason that their tool gives good results here too.

An interesting challenge which emerged in the questions is producing an intermediate language for compilation to FPGAs that preserves parallelism, as opposed to C, which ‘flattens’ any parallel code into sequential code.

Panel – Manifesto: a New Educational Programming Language

For the last session of the day, I attended the panel session on a proposed new programming language, aimed at first-year university students. The language is called Grace (http://gracelang.org); it is proposed to be a community effort with a semi-open development process, and this panel was an effort to get the community behind it. Grace will be a general-purpose (as opposed to domain-specific) language, designed for novices (so no fancy type system), and designed for programming in the small (so no fancy module system). It will not be industrial strength; therefore it will not need to be backward compatible, and it should have low overhead for small programs (no “public static void main”).

The proposers argued that the time is right: Java will be good for the next few years, but is getting big and a bit long in the tooth. Alex Buckley (Java “theologist”, also on the panel, but not associated with Grace) did not disagree, but did say that Java would have a lot of the features discussed in a few years time (which means it might not look so old but will be even bigger).

The proposers (James Noble, Andrew Black, and Kim Bruce) have ambitious goals: Grace should be multi-platform, multi-paradigm (it should support teaching with or without objects, with or without types, in a functional or procedural style), and it should be useful for teaching first and second years how to program, and for data structures courses. With Smalltalk never far below the surface, it was declared that everything would be an object, although it was not stated what was meant by “everything”. The proposers proposed that Grace have a powerful extension/library system for adding in things like concurrency, basically because we don’t know the best way to do concurrency right now. This seems a big ask: one thing the concurrency community mostly agrees on is that concurrency cannot be added on afterwards; it must be holistically baked in.

It sounds to me like a great idea – an academic, community-based teaching language should be much better suited to the purpose than a professional programming language. But, to be honest, the session did not have very much buzz. The panel itself was obviously excited about the project, the audience less so. There were no great questions from the floor, nor any really exciting debate. The lengthiest discussion was about the relative merits of the PLT group’s book/language/curriculum. On the other hand, no one really disagreed that there is a gap in the market for such a language. I’m interested to find out whether the proposers got encouraging words after the session. (Disclaimer: I skipped the last half hour to attend a research talk, so the session might have lit up then and I would have missed it.)

2010/10/25

Dynamic Languages Symposium (DLS)

Filed under: Conference Report — Tags: , — nick @ 16:46

Today (now yesterday) I will mainly be attending the DLS, with intermissions at PLATEAU.

Almost unbelievably, the wifi works really well. It’s been a while since I’ve been at a conference with really good wifi, but the Nugget seems to have cracked it, fingers crossed that it’ll last.

Invited talk – Smalltalk Virtual Machines to JavaScript Engines: Perspectives on Mainstreaming Dynamic Languages – Allen Wirfs-Brock

The theme of this talk is how to get dynamic languages into the mainstream. The talk started well with some interesting general points on implementing dynamic languages, and ended well with some observations on the current generations of Javascript interpreters, but most of the talk was a retrospective of Smalltalk.

An early point was that performance is important for dynamic language uptake. As much as language designers and programming guides state that design (of the language or program) must come first, if a language’s runtime performance is not good enough for a given task, it will not be used. Another early point was that virtual machine implementors got blinded by the metaphor – a VM is not a machine, it is a language implementation, and must be coded like one.

Allen gave an impressive Smalltalk demo, running Smalltalk from 1984 on a machine of the day, which practically ran on steam. It was an interactive graphical demo and the performance was very impressive (in fact, the computer was pretty high powered for the time, but it was impressive nonetheless).

More of Allen’s observations: holistic design gives high performance; tweaks to an existing VM will not get you very far (therefore, he is not a fan of trying to extend the JVM to dynamic languages); optimising fast paths is fine, but don’t let the exceptional paths get too slow – it is probably these that make the language special; methodologies are required to enable language adoption.

Most of the Smalltalk stuff was fairly typical, but the analysis of its death was more interesting: Smalltalk was going great guns in ’95, but was effectively dead by ’97. His analysis was that Smalltalk was a fad, never a mainstream language (which sounds right to me – not that it was a bad language, mind, but its influence in academic language research seems much higher than its influence in real life). One reason for this demise is that the ‘Smalltalk people’ expended way too much energy on GUI systems that nobody actually used, and not enough energy on real problems.

Another interesting analysis was on why Java succeeded, reasons given included: familiar syntax, conventional tools, the network effect, etc. It seems to me that people always try to find excuses for Java’s success (although those points are obviously true); maybe Java was actually a good language that fit the needs of the time better than other languages?

A slight tangent was that Java is essentially a manually dynamically typed language; that is, casts are manual dynamic typing.

We then got back into the good stuff. Javascript was assumed to be inherently slow, then V8 (Google) showed that Javascript could be fast. Fast Javascript is important for the web, which means computing in general nowadays. You only need to look at the competition between browsers to see that Javascript performance is important. This reminded me that I think that Javascript engines are possibly the coolest and most interesting language engineering happening at the moment, and sadly it is not happening in academia (Allen suggested we need a research browser, which would be nice, but seems unlikely to come about).

Some of Allen’s observations on Javascript VMs: most teams are still on their first or second tries (earlier in the talk, Allen stated that it takes three goes to get good at VMs) – things are going to get much better; performance is still low compared to Smalltalk in ’95(!); SunSpider is not a great benchmark and is holding back proper evaluation and development; Javascript is harder to make fast than Smalltalk (because it is more dynamic), so new ideas are needed to get more speed; Allen wondered why all the Javascript VMs use so much memory; the Javascript engine needs to be part of a holistic browser design to get good performance; Javascript seems to be the mainstream; Javascript performance is at the beginning, not the end.

The talk ended by reminding us that ambient/ubiquitous computing is here, and suggested that dynamic languages were going to be part of that era. He didn’t explain why, however.

Meanwhile, at PLATEAU – GoHotDraw: Evaluating the Go Programming Language with Design Patterns – Frank Schmager

Frank Schmager presented work he has done with James Noble and me on evaluating the Go programming language using design patterns. This is a novel idea for evaluating programming languages, and hopefully a new tool in the language evaluation toolbox. Apparently he gave a very good talk, go Frank! (sorry for that pun)

PLATEAU invited talk – The Fitness Function for Programming Languages: A Matter of Taste? – Gilad Bracha

For the second session, I attended Gilad Bracha’s invited talk at PLATEAU. Gilad always gives interesting and entertaining talks, and this was no exception.

There are two kinds of languages – those that everyone complains about and those that aren’t used
— Bjarne Stroustrup

Gilad’s talk was about how to judge a language. He argued that there is more to a language’s success than popularity, and that in fifty years’ time we will look back and certain languages will be admired, and others won’t. Furthermore, success is really mostly down to the network effect (or the sheep effect, as Gilad called it); and the most successful languages follow the Swiss Army Knife approach (aka the kitchen sink approach, aka the postmodern approach) to language design, which, it is generally agreed, is not elegant or ‘great’. So is it possible to define what makes a language great? Or is it just a matter of taste?

There were a couple of tangents on parser combinators and first-class pattern matching in Newspeak (Gilad’s language project).

Some criteria for greatness (or lack of it) were suggested: how much syntactic overhead there is in the language (such as unnecessary brackets and semicolons), whether the language forces attention on the irrelevant (e.g., on the low level in a non-systems language), and how compositional the language is. Gilad asked if languages can be judged as theories (where programs are models of the theory); criteria here were consistency, comprehensiveness, and predictive value (which for languages means how hard it is to write a given program).

An interesting observation was that the most common actions in a language have no syntax (or actually no syntactic overhead), e.g., function application in functional languages, method call in Smalltalk.

Another observation on evaluating languages is that we often try to measure how hard a language is to learn. Gilad argued that ease of learning is not an indicator of greatness. He used Newton’s calculus as an analogy – it is widely hated for being difficult to learn, but is truly a great contribution to science.

Finally, Gilad stated that good aesthetics makes good software, that strict criteria for evaluating a language are not ideal, and that quality is more important than market share.

There was a big debate in the question session afterwards, covering how to judge a language, how to review programming language papers and the review process in general, cognitive theories, and even maths vs. art.

Proxies: Design Principles for Robust Object-oriented Intercession APIs – Tom Van Cutsem

Back at the DLS, Tom talked about implementing intercession (that is, intercepting method calls) in Javascript. Unlike some scripting languages, this is surprisingly difficult in Javascript. He described a technique to do it using proxy objects.
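Tom’s proxy design later fed into the standardised ECMAScript Proxy API, so the flavour of the technique can be sketched in today’s Javascript. The target object and the logging handler below are my own illustration, not from the talk:

```javascript
// Sketch: intercepting method calls ('intercession') with the ECMAScript
// Proxy API. The 'get' trap fires on every property access, including the
// lookup step of a method call, so we can log it and then forward.
const target = {
  greet(name) { return `hello, ${name}`; }
};

const log = [];
const traced = new Proxy(target, {
  get(obj, prop, receiver) {
    log.push(`get:${String(prop)}`);          // record the intercepted access
    return Reflect.get(obj, prop, receiver);  // then behave like the target
  }
});

console.log(traced.greet("DLS"));  // → hello, DLS
console.log(log);                  // → [ 'get:greet' ]
```

The proxy is indistinguishable from the target to its clients, which is exactly the robustness property the talk was concerned with.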

Contracts for First-Class Classes – T. Stephen Strickland

The last talk of the day was on contracts in Racket, specifically contracts on, and implemented using, first-class classes. (By the way, Racket is the new name for PLT Scheme; I assumed I was the only one who didn’t realise this, but a few other people admitted their confusion later. Why did they change the name?) Not being a Lisp fan, I found the examples very hard to read – too many brackets! Anyway, Stephen (or T?) described Eiffel-style contracts (pre- and post-conditions). These can be added using first-class classes (in the same way that methods can be added to classes). He showed that object contracts are still required in some circumstances (as well as class contracts), and showed how these were implemented using class contracts on new classes, with proxies to make the old and new classes work together.
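Racket syntax aside, the essence of an Eiffel-style contract is easy to sketch in Javascript: wrap a method with a pre- and post-condition check. The `withContract` helper and `Account` class below are invented for illustration; Racket’s actual contract system is far richer than this.

```javascript
// Sketch of Eiffel-style contracts: a wrapper that checks a precondition
// before the call and a postcondition after it.
function withContract(fn, pre, post) {
  return function (...args) {
    if (!pre.apply(this, args)) throw new Error("precondition violated");
    const result = fn.apply(this, args);
    if (!post.call(this, result, ...args)) throw new Error("postcondition violated");
    return result;
  };
}

class Account {
  constructor() { this.balance = 0; }
  deposit(amount) { this.balance += amount; return this.balance; }
}

// Attach the contract: deposits must be positive, and the resulting
// balance must be at least the amount deposited.
Account.prototype.deposit = withContract(
  Account.prototype.deposit,
  function (amount) { return amount > 0; },
  function (result, amount) { return result >= amount; }
);

const a = new Account();
console.log(a.deposit(10));  // → 10
// a.deposit(-5) would throw "precondition violated"
```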

OOPSLA FOOL Workshop

Filed under: Conference Report — Tags: , — nick @ 10:59

First day of the conference I attended the FOOL workshop. This used to be one of my favourite workshops, it always seemed to have high quality, interesting papers. It used to be held at POPL, but was cancelled last year. It has now been resurrected at OOPSLA and looks to be just as good as it’s ever been. The following talks were the highlights for me.

DeepFJig – Modular composition of nested classes – Marco Servetto

FJig is a formalisation of the Jigsaw language, which focuses on mixins and ‘class’ composition. This work extends FJig with nested classes. I believe that virtual nested classes are the most significant change to the OO paradigm currently being proposed. Being able to abstract and inherit at the level of class families, rather than just individual classes, solves an awful lot of problems with the usual class model. Thus, I’m happy to see another take on the idea.

From what I gathered, DeepFJig only supports ‘static’ nesting of classes; this makes the type system simple (no path-dependent types or exact types required), but at the expense of a lot of the benefits of virtual classes. The interesting thing here is that by combining nested classes with the mixin composition operators found in FJig, you get a great deal of expressivity – Marco showed how to encode generics (in a virtual-types style) and aspects (yes, aspects, as in AOP). The latter probably means that the language is too expressive for some software engineering purposes, but it wasn’t clear from the talk how much you can sabotage classes, as you usually can when using aspects.
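FJig’s composition operators don’t map directly onto a mainstream language, but the general flavour of mixin-style class composition can be sketched in Javascript using subclass factories. This illustrates mixins in general, not FJig’s actual operators, and the classes are invented:

```javascript
// A mixin here is a function from a base class to an extended class;
// composing mixins is then just function composition.
const Serializable = (Base) => class extends Base {
  serialize() { return JSON.stringify(this); }
};
const Comparable = (Base) => class extends Base {
  equals(other) { return this.serialize() === other.serialize(); }
};

// Compose the two mixins onto a base class.
class Point { constructor(x, y) { this.x = x; this.y = y; } }
class ComparablePoint extends Comparable(Serializable(Point)) {}

const p = new ComparablePoint(1, 2);
console.log(p.serialize());                       // → {"x":1,"y":2}
console.log(p.equals(new ComparablePoint(1, 2))); // → true
```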

Lightweight Nested Inheritance in Layer Decomposition – Tetsuo Kamina

Another nested classes paper, this time extending the ‘Lightweight Nested Inheritance’ (J&) concept. LNI uses paths of class names and exact class names to identify types. This work extends that with generics so that types can be more precisely specified. However, it seems that you only claw back some of the expressivity lost compared with systems such as VC and Tribe (which have path-dependent types). So, it is a slightly more heavyweight lightweight version of nested class types. The interesting aspect is that type variables can be used as type constructors for path types (but not generic types, i.e., X.C is OK, but X<C> is not). I guess that if you assume that you are going to have generics in such a language anyway, then getting this extra expressivity is nice. However, I am not entirely convinced that the result is much more lightweight than the path-dependent approach.

Mojojojo – More Ownership for Multiple Owners – Paley Li

Paley presented work he has done with James Noble and me on Multiple Ownership type systems. Multiple Ownership was proposed at OOPSLA ’07, but the formalisation was unwieldy. This work presents a simpler, more elegant, and more powerful formalisation of the Multiple Ownership concept.

Traditional ownership type systems give each object a single owner; this organises the heap into a tree, which is great for reasoning about programs. Unfortunately, real programs rarely fall nicely into a runtime tree structure, so more flexible ways to organise the heap are required. Multiple Ownership allows each object to be owned by multiple owners, thus organising the heap into a DAG.

Mojojojo (if you don’t get the name, Google for the Powerpuff Girls) adds a powerful system of constraints over the structure of the heap, generics, existential quantification, and a host of small improvements to the formal system, resulting in something a lot nicer than MOJO. Paley gave a great talk, and I recommend you all read the paper (totally unbiased opinion, of course 🙂 ).

Interoperability in a Scripted World: Putting Inheritance & Prototypes Together – Kathryn E. Gray

More work on integrating typed and untyped languages, which seems to be very fashionable right now. This work focuses on making Java and Javascript work together, rather than focusing on type checking. The most interesting bit is making prototyping and inheritance work together in the same world.

I’m sorry I cannot write more about this paper, because it sounds really interesting, but I was a bit distracted at the beginning of the talk, and never quite got back into the flow. I’ll be reading the paper later though…

Adding Pattern Matching to Existing Object-Oriented Languages – Changhee Park

Changhee talked about adding pattern matching to Fortress (which reminds me to check on what is happening with Fortress nowadays). In fact one of the more interesting bits of the talk was the generalisation – the requirements on a language such that it can support pattern matching in the way described.

The general idea of the work is to support ADT-style decomposition of types by subtype, using a typecase expression and function parameters, and decomposition of objects into their fields, similarly to how tuples are decomposed in Python etc. What I thought was missing was a discussion of whether or not you would actually want to do this: you are breaking object-based encapsulation, which most languages avoid.
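As a rough illustration of the idea (in Javascript rather than Fortress, with invented Shape classes), a typecase amounts to dispatching on the runtime subtype and then pulling an object apart into its fields:

```javascript
// A small class hierarchy to decompose over.
class Shape {}
class Circle extends Shape { constructor(r) { super(); this.r = r; } }
class Rect extends Shape { constructor(w, h) { super(); this.w = w; this.h = h; } }

// A 'typecase': branch on the runtime subtype, then destructure the
// object into its fields – the encapsulation-breaking step questioned
// at the end of the talk.
function area(s) {
  if (s instanceof Circle) {
    const { r } = s;                // field decomposition
    return Math.PI * r * r;
  } else if (s instanceof Rect) {
    const { w, h } = s;
    return w * h;
  }
  throw new Error("unmatched shape");
}

console.log(area(new Rect(3, 4)));  // → 12
```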

First Impressions of Reno and OOPSLA/SPLASH

Filed under: Conference Report — Tags: — nick @ 08:43

Hi, I’m Nick Cameron, a post-doc at Victoria University of Wellington. I’m going to be covering the SPLASH/OOPSLA conference for the JOT blog.

It should be an interesting year for OOPSLA: it has undergone re-branding from OOPSLA to SPLASH (a re-arrangement of the OOPSLA letters, minus OO (because who programs with objects any more?), and appended with “for Humanity” (cringe)). The research paper selection process has changed too: they introduced ‘white-ball’ papers (each member of the PC can select one paper to be accepted without argument), and there were slightly more papers accepted than in previous years (including mine, so I can’t really complain; Thursday afternoon, if you’re interested). The payment structure has changed too: you have to register and pay for individual workshops. I can’t comprehend why – the best thing about workshops is wandering between them.

Anyway, after twenty-odd hours on a plane from NZ, we started our descent into Reno and got a bird’s-eye view of the Nugget (the conference venue and hotel) as we came in – sandwiched between the expressway and a railway yard, it did not look good. Reno airport was like a gateway into hell, slot machines everywhere and a backdrop of billboards for “gentleman’s clubs”.

The conference venue is almost comically grim. The main floor is a sea of slot machines and haggard looking people. There are a lot of cowboy hats around, and not in an ironic way. No-one looks happy to be here, mostly people look desperate, or just plain chewed up. People smoke a lot, indoors, which seems a bit odd in 2010. There is a patched motorcycle gang drinking in the lobby (seriously, this is not an exaggeration).

If I had to describe Sparks, and the Nugget, in a word, it would be “grim”. I don’t think I have ever been so disappointed in the location of a conference. I hope (and anticipate) the conference itself will be excellent, although it will have to be to justify enduring this place for a week. On the bright side lots of interesting people are arriving, and the free wifi at Starbucks has become a natural hub…

OOPSLA Registration

Things are looking up at registration: registration was very quick and efficient, the conference pack was pretty streamlined – no conference bag and not too much spam, which is great – I dislike the amount of waste most conferences generate. There is wifi in the lobby and conference rooms (yay!), and the gift was the cutest set of rainbow mini highlighters, which is a nice change from a USB stick, although not as practical.

Looking through the program is pretty exciting, there seems to be a lot of good-sounding papers and invited talks. The organisers also seem to have managed the scheduling well – despite three concurrent sessions at most times, there is not a single clash between talks I’d like to attend; Thursday’s invited talk does seem to clash with lunch, however, not sure how well that is going to work out.

2010/09/16

Conference Report: TOOLS’10

Filed under: Conference Report — Tags: — Jan Vitek @ 18:05
The 2010 TOOLS Federated Conference Series took place between the 28th of June and the 2nd of July in Málaga, Spain. TOOLS has been around the block a few times, 48 times to be exact. The conference was founded in 1989 by Bertrand Meyer as a place where practical results in object-oriented technology could be published. After a hiatus from 2002 to 2006, TOOLS reinvented itself as a Federated Conference consisting of the International Conference on Model Transformation (ICMT), Tests and Proofs (TAP), Software Composition (SC) and TOOLS Europe. For the first three years of the new formula, TOOLS was held in Zürich and financially supported by the Chair of Software Engineering at ETHZ and Eiffel Software. This year was the federation’s first time away from home, and it has been a great success. The flawless hospitality of Antonio Vallecillo and his team from the University of Málaga and wonderful weather conspired to make TOOLS a success. The program included nine workshops: Business System Management and Engineering, Dynamic Languages and Applications, Model Comparison in Practice, Model-Driven Service Engineering, Model Transformations with ATL, Quantitative Approaches on Object-Oriented Software Engineering and Related Paradigms, Transformation Tool Contest, Transforming and Weaving Ontologies and Model Driven Engineering, and Component and Service Interoperability. The invited speakers included Stefano Ceri (Milano), Oege de Moor (Oxford), Bertrand Meyer (ETHZ), Ivar Jacobson (IJI), Michael Ernst (Washington), Nachi Nagappan (Microsoft) and Valérie Issarny (INRIA). Videos of their talks are available here.
The technical program of TOOLS Europe itself consisted of 16 papers selected with care from 60 strong submissions. Dynamic languages were well represented in the program. Arnaud et al.’s Read-only Execution for Dynamic Languages proposes a dynamic view mechanism on object references to prevent updates in Smalltalk (using the new Pharo VM). JavaScript was the topic of Phillip Heidegger’s talk on Contract-Driven Testing, which presents an automated tool (download) for generating test cases from behavioral contracts for JavaScript. In this work the specifications are mostly type-like properties and value ranges, but even such simple properties are quite challenging to check in JavaScript. Stephen Nelson presented a more focused study of contracts in Understanding the Impact of Collection Contracts on Design. The work focuses on the equality contract between collection classes and the objects they contain. The contract is that equality should not change while an object is within a collection, even if the object’s state changes. The contribution is an extensive analysis of the behavior of a corpus of Java programs taken from the Qualitas Corpus. Statically typed languages were also represented with, amongst others, the presentation by Ostlund of Welterweight Java, a new formalism for researchers looking for a formal calculus that approximates Java and, unlike Featherweight Java, supports threads and state. Ernst presented Revisiting Parametric Types and Virtual Classes, which sheds light on the power of virtual constraints, a mechanism available in gbeta and Scala, for type refinement. Lastly, Renggli bridged static and dynamic languages with Domain-Specific Program Checking, which argues for static checking of DSLs embedded in Smalltalk.
Many photos are on the Facebook page of the conference (here) where you can find the results of the TOOLS photo contest.
