PADL 2013 Accepted Papers with Abstracts

Paul Klint and Atze van der Ploeg. A Library for Declarative Resolution-Independent 2D Graphics
Abstract: The design of most 2D graphics frameworks has been guided by what the computer can draw efficiently, instead of by how graphics can best be expressed and composed. As a result, such frameworks restrict expressivity by providing a limited set of shape primitives, a limited set of textures and only affine transformations. For example, non-affine transformations can only be added by invasive modification or complex tricks rather than by simple composition. More general frameworks exist, but they make it harder to describe and analyze shapes. We present a new declarative approach to resolution-independent 2D graphics that generalizes and simplifies the functionality of traditional frameworks, while preserving their efficiency. As a real-world example, we show the implementation of a form of focus+context lenses that gives better image quality and better performance than the state-of-the-art solution at a fraction of the code. Our approach can serve as a versatile foundation for the creation of advanced graphics and of higher level frameworks.
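The following Haskell sketch is not the paper's library; it only illustrates the general functional-images style of resolution-independent graphics that such declarative approaches build on, where an image is a function from continuous coordinates to a value and non-affine transformations are ordinary function composition (all names below are illustrative).

    -- Illustrative sketch only: images as functions over continuous coordinates.
    type Point = (Double, Double)
    type Image a = Point -> a

    disk :: Double -> Image Bool
    disk r (x, y) = x * x + y * y <= r * r

    -- Transform an image by warping its input coordinates.
    warp :: (Point -> Point) -> Image a -> Image a
    warp f img = img . f

    -- A non-affine "fisheye" warp, obtained by composition rather than by
    -- extending the framework.
    fisheye :: Double -> Point -> Point
    fisheye k (x, y) = let d = 1 + k * (x * x + y * y) in (x * d, y * d)

    -- Rasterisation is a separate, final step: sample the image on a pixel grid.
    render :: Int -> Int -> Image Bool -> [[Char]]
    render w h img =
      [ [ if img (toX i, toY j) then '#' else '.' | i <- [0 .. w - 1] ]
      | j <- [0 .. h - 1] ]
      where
        toX i = fromIntegral i / fromIntegral w * 2 - 1
        toY j = fromIntegral j / fromIntegral h * 2 - 1

For example, render 40 20 (warp (fisheye 0.5) (disk 0.8)) produces a coarse ASCII rendering of the warped disk, with the warp expressed purely by composition.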
George Giorgidze, Torsten Grust, Iassen Halatchliyski and Michael Kummer. Analysing the Entire Wikipedia History with Database Supported Haskell
Abstract: In this paper we report on our experience of using Database Supported Haskell (DSH) for analysing the entire Wikipedia history. DSH is a novel high-level database query facility allowing for the formulation and efficient execution of queries on nested and ordered collections of data. DSH grew out of a research project on the integration of database querying capabilities into high-level, general-purpose programming languages. It is an emerging trend that querying facilities embedded in general-purpose programming languages are gradually replacing lower-level database languages such as SQL as the preferred means of querying large-scale database-resident data. We relate this new approach to current practice, in which database queries are integrated into (non-CS) analysts' workflows in a rather ad-hoc fashion. This paper should be of interest to early adopters of new database query languages and to practitioners working on large-scale data analysis.
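As a rough flavour of such queries, here is a plain-Haskell analogue using list comprehensions over in-memory data; the actual DSH combinators, table references and types differ, so everything below is an illustrative assumption rather than the DSH API.

    -- Illustrative only: the shape of a nested, ordered query, written with
    -- ordinary lists instead of database-resident tables.
    import Data.List (sortOn)

    data Edit = Edit { page :: String, editor :: String, ts :: Int }

    -- For every page, the editors of its edits in chronological order.
    editorsPerPage :: [String] -> [Edit] -> [(String, [String])]
    editorsPerPage pages edits =
      [ (p, [ editor e | e <- sortOn ts edits, page e == p ])
      | p <- pages ]

A system like DSH aims to compile queries of this nested, ordered shape into efficient relational plans rather than evaluating them in memory.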
Sergio Castro, Kim Mens and Paulo Moura. LogicObjects: Enabling Logic Programming in Java Through Linguistic Symbiosis
Abstract: While object-oriented programming languages are good at modelling real-world concepts and benefit from rich libraries and developer tools, logic programming languages are well suited for declaratively solving computational problems that require knowledge reasoning. Non-trivial declarative applications could take advantage of the modelling features of object-oriented programming and of the rich software ecosystems surrounding it. Linguistic symbiosis is a common approach to enabling the complementary use of languages of different paradigms. However, the problem of concepts leaking from one paradigm to another often hinders the applicability of such approaches. This issue has mainly been reported for object-oriented languages participating in a symbiotic relation with a logic language. To address this issue, we present LogicObjects, a linguistic symbiosis framework for transparently and (semi-)automatically enabling logic programming in Java, which aims to solve most of the paradigm-leaking problems reported in other works.
Vítor Rodrigues, Benny Akesson, Mário Florido and Simão Melo de Sousa. A Declarative Compositional Timing Analysis for Multicores Using the Latency-Rate Abstraction
Abstract: This paper presents a functional model for timing analysis by abstract interpretation, based on a denotational semantics and used to estimate worst-case execution times (WCET) on multicore architectures. The objective is to overcome the intrinsic computational complexity of timing analysis for multiple processing units sharing common resources. For this purpose, we propose a novel application of latency-rate (LR) servers, phrased in terms of abstract interpretation, to achieve timing compositionality on requests to shared resources. The soundness of the approach is proved with respect to a calculational fixpoint semantics for multicores that is able to express all possible ways in which a shared resource can be accessed. Experimental results show that the loss in precision introduced by the LR server model is about 8.5% and is fairly compensated by the gain in computational efficiency, which is above 99%. The system is implemented in Haskell, taking advantage of the declarative features of the language for a simpler and more robust specification of the underlying concepts.
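For orientation, the standard latency-rate guarantee states that a server with latency Θ and rate ρ provides, in any busy period starting at time s, at least

    \[
      w(s, t) \;\ge\; \max\bigl(0,\ \rho \, (t - s - \Theta)\bigr), \qquad t \ge s,
    \]

where w(s, t) is the amount of service delivered in [s, t]; this is the textbook form of the abstraction, not necessarily the exact formulation used in the paper.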
Pablo Chico De Guzmán, Manuel Carro and Manuel Hermenegildo. Supporting Pruning in Tabled LP
Abstract: This paper analyzes the semantic issues involved in supporting pruning operators in tabled LP. A novel version of once/1 and its implementation are presented. This operator, together with answer-on-demand strategies, makes it possible to avoid computing unneeded solutions for a class of problems that can benefit from tabled LP but in which only a single solution is needed. Model checking and planning are examples of that broad class of applications in which all-solution evaluation strategies, such as local evaluation, perform unnecessary work. The proposed version of once/1 is directly applicable to the efficient implementation of other optimizations, such as early completion, cut-fail loops (to, e.g., prune at the top level), if-then-else statements, and constraint-based branch-and-bound. Although once/1 still presents open issues, such as dependencies of tabled solutions on program history, our experimental evaluation confirms that once/1 provides an arbitrarily large efficiency improvement in several application areas.
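As a loose illustration of why answer-on-demand matters (this is not the proposed once/1, just a Haskell analogy), lazily produced answers let a single-solution query prune the rest of the search, whereas an all-solutions strategy would force every answer:

    import Data.List (find)

    -- Lazily enumerate the nodes reachable from a start node.
    reachable :: Eq a => (a -> [a]) -> a -> [a]
    reachable next start = go [start] []
      where
        go []       _    = []
        go (x : xs) seen
          | x `elem` seen = go xs seen
          | otherwise     = x : go (next x ++ xs) (x : seen)

    -- 'find goal (reachable next s)' explores only until the first answer,
    -- much like combining a single-solution operator with answer-on-demand.
    firstAnswer :: Eq a => (a -> Bool) -> (a -> [a]) -> a -> Maybe a
    firstAnswer goal next s = find goal (reachable next s)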
Paulo Moura. A Portable and Efficient Implementation of Coinductive Logic Programming
Abstract: We describe the portable and efficient implementation of coinductive logic programming found in Logtalk, discussing its features and limitations. As Logtalk uses a compatible Prolog system as its back-end compiler, we also discuss the status of key Prolog features needed for an efficient and usable implementation of coinduction.
Gabriel Aranda, Susana Nieva, Fernando Sáenz-Pérez and Jaime Sánchez-Hernández. Formalizing a Broader Recursion Coverage in SQL
Abstract: SQL is the de facto standard language for relational databases and has evolved by introducing new resources and expressive capabilities, such as recursive definitions in queries and views. Recursion was included in the SQL-99 standard, but this approach is limited: only linear recursion is allowed, mutual recursion is not supported, and negation cannot be combined with recursion. In this work, we propose a new approach, called R-SQL, aimed at overcoming these and other limitations, allowing in particular cycles in recursive definitions of graphs and mutually recursive relation definitions. In order to combine recursion and negation, we import ideas from the deductive database field, such as stratified negation, based on the definition of a dependency graph between the relations involved in the database. We develop a formal framework using a stratified fixpoint semantics and introduce a proof-of-concept implementation.
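To make the stratified fixpoint semantics concrete, here is a small illustrative Haskell model (not the R-SQL language itself): a lower stratum computed as a least fixpoint, and a higher stratum that may negate it.

    import qualified Data.Set as Set

    type Rel a = Set.Set (a, a)

    -- Least fixpoint of a monotone transformer on finite relations.
    lfp :: Ord a => (Rel a -> Rel a) -> Rel a
    lfp f = go Set.empty
      where go r = let r' = f r in if r' == r then r else go r'

    -- Stratum 1: transitive closure of an edge relation (non-linear recursion;
    -- cycles in the graph are unproblematic).
    closure :: Ord a => Rel a -> Rel a
    closure edges = lfp step
      where
        step r = edges `Set.union`
                 Set.fromList [ (x, z) | (x, y) <- Set.toList r
                                       , (y', z) <- Set.toList r, y == y' ]

    -- Stratum 2: negation is applied only to the already-computed lower stratum.
    unreachable :: Ord a => [a] -> Rel a -> Rel a
    unreachable nodes edges =
      Set.fromList [ (x, y) | x <- nodes, y <- nodes
                            , not ((x, y) `Set.member` closure edges) ]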
Benjamin Canou, Emmanuel Chailloux and Vincent Balat. A Declarative-friendly API for Web Document Manipulation
Abstract: The Document Object Model (DOM) is the document manipulation API provided to the JavaScript developer by the browser. It allows the programmer to update the currently displayed Web page from a client-side script. For this, DOM primitives can create, remove or modify element nodes in the internal tree representation of the document. Interactive documents can be created by attaching event handlers and other auxiliary data to these nodes. The principle is interesting and powerful, and no modern Web development would be possible without it. But the implementation is not satisfactory when seeking the predictability and reliability expected from declarative languages or static type systems. Primitives are too generic, and when called in abnormal conditions they can either throw exceptions or perform implicit imperative actions. In particular, DOM primitives can conditionally and implicitly move nodes in the document, in a way that is very difficult to prevent or even detect statically. In this article, we introduce cDOM, an alternative document model that performs implicit deep copies instead of moves. By never moving nodes implicitly, it preserves the structure of nodes after their creation and between explicit mutations. Side data embedded in the document are also duplicated in a sensible way, so that the copies are structurally identical to the originals. It thus provides a more conventional semantics, over which existing declarative abstractions and type systems can be used in a sound way.
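As a small point of reference (not the cDOM API, which is imperative and JavaScript-facing), a purely functional tree already has the semantics that cDOM recovers by copying: inserting a node can never implicitly detach it from elsewhere, and the inserted value keeps its structure.

    -- Illustrative only: in a persistent tree, 'appendChild' shares or copies
    -- the child; it never mutates or moves the original node.
    data Node a = Node a [Node a]

    appendChild :: Node a -> Node a -> Node a
    appendChild (Node x cs) child = Node x (cs ++ [child])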
Bernd Brassel, Michael Hanus, Björn Peemöller and Fabian Reck. Implementing Equational Constraints in a Functional Language
Abstract: KiCS2 is a new system that compiles functional logic programs of the source language Curry into purely functional Haskell programs. The implementation is based on the idea of representing the search space as a data structure and logic variables as operations that generate their values. This has the advantage that various search strategies, in particular complete ones, or even user-defined strategies can be applied to compute solutions. However, the generation of all values for logic variables might be inefficient for applications that exploit constraints on partially known values. To overcome this drawback, we propose new techniques to implement equational constraints in this framework. In particular, we show how unification modulo function evaluation and functional patterns can be added without sacrificing the efficiency of the kernel implementation.
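A much-simplified rendition of the two ideas mentioned above (not KiCS2's actual representation): non-deterministic results become a value of a search-tree data type, a free Boolean variable is the generator of both Booleans, and different complete strategies are ordinary tree traversals.

    data SearchTree a = Val a | Choice (SearchTree a) (SearchTree a) | Fail

    -- A "free Boolean variable" as a generator of its values.
    freeBool :: SearchTree Bool
    freeBool = Choice (Val False) (Val True)

    -- Depth-first and (complete) breadth-first enumeration of solutions.
    dfs :: SearchTree a -> [a]
    dfs (Val x)      = [x]
    dfs (Choice l r) = dfs l ++ dfs r
    dfs Fail         = []

    bfs :: SearchTree a -> [a]
    bfs t = go [t]
      where
        go [] = []
        go ts = [ x | Val x <- ts ] ++ go (concatMap expand ts)
        expand (Choice l r) = [l, r]
        expand _            = []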
João Santos and Ricardo Rocha. On the Efficient Implementation of Mode-Directed Tabling
Abstract: In a traditional tabling system, all the arguments of a tabled subgoal call are considered when storing answers into the table space. Traditional tabling systems are thus very good for problems that require storing all answers. Mode-directed tabling is an extension to the tabling technique that supports the definition of selective criteria for specifying how answers are inserted into the table space. In this paper, we focus our discussion on the efficient implementation of mode-directed tabling in the YapTab tabling system, which uses tries to implement the table space. We propose seven different mode operators and discuss how we have extended and optimized YapTab's table space organization to provide engine support for them. Experimental results, in the context of benchmarks taking advantage of mode-directed tabling, show that our implementation compares favorably with the state-of-the-art B-Prolog and XSB Prolog tabling systems.
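Purely to illustrate the effect of such a mode operator (this is not YapTab's trie-based table space), a "min" mode can be pictured as a table that keeps only the best answer per subgoal instead of all of them:

    import qualified Data.Map.Strict as Map

    type Table subgoal answer = Map.Map subgoal answer

    -- Store an answer under a 'min' mode: a new answer for the same subgoal
    -- only survives if it improves on the stored one.
    insertMin :: (Ord k, Ord v) => k -> v -> Table k v -> Table k v
    insertMin = Map.insertWith min

    -- Map.lookup "path(a,b)" (insertMin "path(a,b)" 7 (insertMin "path(a,b)" 3 Map.empty))
    --   == Just 3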
Georgios Fourtounis, Nikolaos Papaspyrou and Panos Rondogiannis. The Generalized Intensional Transformation for Implementing Lazy Functional Languages
Abstract: The intensional transformation is a promising technique for implementing lazy functional languages based on a demand-driven execution model. Despite its theoretical elegance and its simple and efficient execution model, the intensional transformation suffered, until now, from two main drawbacks: it could only be applied to programs that manipulate primitive data-types and it could only compile a simple (and rather restricted) class of higher-order functions. In this paper we remedy the above two deficiencies, obtaining a transformation algorithm that is applicable to mainstream lazy functional languages. The proposed transformation initially uses defunctionalization in order to eliminate higher-order functions from the source program. The original intensional transformation is then extended in order to apply to the target first-order language with user-defined data types that resulted from the defunctionalization. It is demonstrated that the proposed technique can be used to compile a relatively large subset of Haskell into portable C code whose performance is comparable to existing mainstream implementations.
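For readers unfamiliar with defunctionalization, the first step can be pictured by this textbook-style Haskell sketch (not the paper's algorithm): every function value is replaced by a constructor of a first-order data type, and calls go through an explicit apply function, after which a first-order transformation such as the intensional one can take over.

    -- One constructor per function value that occurs in the program.
    data Fun = Add Int | Double

    apply :: Fun -> Int -> Int
    apply (Add n) x = n + x
    apply Double  x = 2 * x

    -- First-order version of 'map f xs': the function argument is now data.
    mapFO :: Fun -> [Int] -> [Int]
    mapFO _ []       = []
    mapFO f (x : xs) = apply f x : mapFO f xs

    -- mapFO (Add 1) [1, 2, 3] == [2, 3, 4]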
Senlin Liang and Michael Kifer. Terminyzer: An Automatic Non-Termination Analyzer for Large Logic Programs
Abstract: There have been many studies on termination analysis of logic programs, but little has been done on analyzing non-termination, which is even more important in our opinion. Non-termination analysis examines program execution history when non-termination is suspected and attempts to inform the programmer about possible ways to fix the problem. This paper attempts to fill this void. We study the problem of non-termination in tabled logic engines with subgoal abstraction, such as XSB, and propose a suite of algorithms, collectively called Terminyzer (non-Termination analyzer), for automatically detecting non-termination and explaining it to the user. Terminyzer includes several non-termination analysis approaches of different computational complexity. These approaches are all based on analyzing forest-logging traces, and they supply sequences of tabled calls that are likely to be involved in non-terminating cycles. Terminyzer also reports the sequences of functors that are applied repeatedly to generate infinitely many answers, thus helping programmers debug large programs by focusing on much smaller subsets of rules.

Terminyzer is included in both XSB and FLORA, and all examples used in this paper are available online.
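As a toy illustration of the trace-analysis flavour of such tools (this is not Terminyzer's algorithm), one can scan a forest-logging-style trace for call patterns that recur suspiciously often:

    import qualified Data.Map.Strict as Map
    import Data.List (sortOn)
    import Data.Ord (Down (..))

    type CallPattern = String   -- e.g. the functor/arity of a tabled call

    -- Rank call patterns by how often they recur in the trace.
    suspects :: [CallPattern] -> [(CallPattern, Int)]
    suspects trace =
      sortOn (Down . snd)
             (Map.toList (Map.fromListWith (+) [ (c, 1 :: Int) | c <- trace ]))

    -- suspects ["p/2", "q/1", "p/2", "p/2"] == [("p/2", 3), ("q/1", 1)]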
Nicos Angelopoulos, Vítor Santos Costa, João Azevedo, Jan Wielemaker, Rui Camacho and Lodewyk Wessels. Integrative Functional Statistics in Logic Programming
Abstract: We present a logic programming library that facilitates low-level integration of the R statistical package into YAP Prolog. Due to R's affinity with functional programming, the interface has a minimalistic feel. Programs utilising the library are elegant and succinct, with intuitive semantics and clear integration. In effect, the library enhances logic programming with the ability to tap into the vast wealth of statistical and probabilistic reasoning available in R. The software is a useful addition to the efforts towards the integration of statistical reasoning and knowledge representation. Furthermore, it can be used to open up new application areas for logic programming, such as bioinformatics, computational biology, text mining, psychology and the neurosciences, where R has a particularly strong presence.
Zoé Drey, Jose F. Morales, Manuel Hermenegildo and Manuel Carro. Reversible Language Extensions and their Application in Debugging
Abstract: A range of methodologies and techniques are available to guide the design and implementation of language extensions and domain-specific languages on top of a base language. A simple yet powerful technique to this end is to formulate the extension via source-to-source transformation rules that are interleaved across the different compilation passes of the base language. Despite being a very successful approach, it has the main drawback that the input source code is lost in the process. As a result, during the whole workflow of program development (warning and error reporting, source-level debugging, or even program analysis) the tools involved report to the user in terms of the base language. In this paper, we propose an approach to language extensions for Prolog that augments translation rules so that source-related symbolic annotations are included in the target program. Such annotations allow customizing the base language tools in a straightforward way, making it possible for them to selectively reverse the translated code and thus reflect their information back to the user in the source, unexpanded language. We illustrate the approach by showing that coupling it with minimal extensions to a generic Prolog debugger allows us to provide users with a familiar, source-level view during the debugging of programs which use a variety of language extensions, such as functional notation, DCGs, or CLP{Q,R}.
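Schematically (this is only an illustrative Haskell sketch, not the paper's Prolog expansion machinery), the idea is for the translation to carry a symbolic annotation pointing back at the unexpanded source, so that tools can report in source terms:

    -- A translated fragment together with the source text it came from.
    data Annotated c = Annotated { core :: c, source :: String }

    -- Lift a plain source-to-source translation into one that records provenance;
    -- a debugger can then print 'source' instead of 'core'.
    withSource :: (String -> c) -> String -> Annotated c
    withSource translate src = Annotated (translate src) src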
Sander Canisius, Nicos Angelopoulos, and Lodewyk Wessels. ProSQLite: Prolog file based databases via an SQLite interface
Abstract: We present a succinct yet powerful interface library to the SQLite database system. The single-file, serverless approach of SQLite, along with the natural integration of relational data within Prolog, renders the library a useful addition to the existing database libraries in modern open-source engines. We detail the architecture and predicates of the library and provide example deployment scenarios. A simple bioinformatics example is presented throughout to illustrate prosqlite's main functionalities. Finally, this paper discusses the strengths of the system and highlights possible extensions.
Alan Jeffrey. Dependently Typed Web Client Applications – FRP in Agda in HTML5
Abstract: In this paper, we describe a compiler back end and library for web client application development in Agda, a dependently typed functional programming language. The compiler back end targets ECMAScript (also known as JavaScript), and so is executable in a browser. The library is an implementation of Functional Reactive Programming (FRP) using a constructive variant of Linear-time Temporal Logic (LTL) as its type system.
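For orientation, classical FRP types can be sketched in Haskell as below; in the paper's Agda setting such types are additionally indexed by time, so that signal types read as LTL formulas ("always", "eventually").

    type Time = Double
    type Behavior a = Time -> a       -- a value available at every moment
    type Event a    = [(Time, a)]     -- occurrences at particular moments

    -- Pointwise application, the basic combinator of applicative-style FRP.
    applyB :: Behavior (a -> b) -> Behavior a -> Behavior b
    applyB f x = \t -> f t (x t)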
Rui Machado, Salvador Abreu and Daniel Diaz. Parallel Performance of Declarative Programming using a PGAS Model
Abstract: Constraint Programming is one approach to declarative programming in which a problem is modelled as a set of variables with a domain and a set of relations (constraints) between them. Constraint-based Local Search builds on the idea of using constraints to describe and control local search: problems are modelled using constraints and heuristics, and solutions are searched for using Local Search. With the ongoing move toward multi- and many-core systems, parallelism has become mainstream as the number of cores continues to increase. Declarative programming approaches such as those based on constraints need to be better understood and experimented with in order to understand their parallel behaviour.
In this paper, we discuss some experiments we have been doing with Adaptive Search and present a new parallel version of it based on GPI, a recent API and programming model for the development of scalable parallel applications. Our experiments on different problems show interesting speed-ups and, more importantly, provide a deeper insight into the parallel behaviour of this declarative programming approach.