John Alan Robinson
Professor Emeritus at Syracuse University
Unification and Resolution in Retrospect
In retrospect, unification and resolution seem rather obvious ideas, which arise inevitably when one asks what must be syntactically true of a set of clauses which possesses the semantic property of having no Herbrand models.
Professor at the University of Genoa
Constraint Programming for Concurrent and Distributed Computing
The interest in extending a language for constraint handling with mechanisms for concurrency is driven mainly by two aims: an increase in efficiency, due to the possibility of solving parts of a problem in parallel, and the development of a language suitable for distributed computing which retains the advantages of declarative programming.
So far, the most popular proposal for combining constraints and concurrency is CCP (Concurrent Constraint Programming). This paradigm comes from the tradition of Concurrent Logic Programming and incorporates features from "classical" process algebras such as CCS and CSP. It is a very general paradigm which subsumes not only Concurrent Logic Programming, but also other declarative concurrent formalisms, such as languages for communicating agents and dataflow languages.
Thanks to its declarative core, CCP allows for a simpler style of programming, and furthermore has a simple denotational semantics, much simpler than that of concurrent languages in the imperative style. This is important not only for the understandability of the language, but also for the development of software tools based on compositional semantics (structural verification systems, abstract interpreters, etc.). As an example of this declarative core, the notion of hiding (or locality), which is rather problematic in imperative programming, is modeled in CCP in an elegant logical/algebraic way by reducing it to a cylindrification operator on the underlying constraint system.
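To make the cylindrification operator concrete, here is a minimal sketch over a toy finite constraint system (the extensional representation, the names `solutions` and `cylindrify`, and the two-variable domain are our illustrative assumptions, not part of the talk):

```python
from itertools import product

# Toy finite constraint system: a constraint is represented extensionally,
# as the set of satisfying assignments (tuples of values for VARS).
DOMAIN = (0, 1, 2)
VARS = ("x", "y")

def solutions(pred):
    """The constraint defined by predicate `pred`, given extensionally."""
    return {vals for vals in product(DOMAIN, repeat=len(VARS))
            if pred(dict(zip(VARS, vals)))}

def cylindrify(constraint, var):
    """Existential quantification of `var`: an assignment is kept iff
    SOME value of `var` turns it into a solution. This is the operator
    that models hiding ('local var in P') algebraically."""
    i = VARS.index(var)
    rest = {vals[:i] + vals[i + 1:] for vals in constraint}
    return {vals for vals in product(DOMAIN, repeat=len(VARS))
            if vals[:i] + vals[i + 1:] in rest}

c = solutions(lambda a: a["x"] == a["y"])   # the constraint x = y
hidden = cylindrify(c, "y")                 # (exists y. x = y): always true
```

Hiding `y` in `x = y` yields the trivially true constraint, exactly as `local y in P` should: no information about the hidden variable escapes.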
The various versions of CCP that have been proposed, however, are not suitable for a distributed architecture. On one hand there are the languages with "atomic tell", based on the principle that the addition of a new constraint to the store and the test for consistency of the resulting store must be performed "atomically". The resulting mechanism is very powerful, but it is hard to implement in a truly distributed setting: this kind of test-and-set action would be a bottleneck, dramatically reducing the degree of parallelism.
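The atomicity requirement can be sketched on a deliberately simple store (conjunctions of interval bounds per variable; the class `AtomicStore` and this constraint system are our illustrative assumptions): every tell must serialize through one global critical section, which is precisely the bottleneck described above.

```python
import threading

class AtomicStore:
    """Toy constraint store: the store is a conjunction of constraints
    of the form  lo <= var <= hi,  one interval per variable."""
    def __init__(self):
        self._bounds = {}               # var -> (lo, hi)
        self._lock = threading.Lock()   # the global bottleneck

    def atomic_tell(self, var, lo, hi):
        """Add  lo <= var <= hi  only if the whole store stays consistent.
        Consistency check and update happen as one indivisible step."""
        with self._lock:                # every tell serializes here
            cur_lo, cur_hi = self._bounds.get(var, (float("-inf"), float("inf")))
            new_lo, new_hi = max(cur_lo, lo), min(cur_hi, hi)
            if new_lo > new_hi:         # inconsistent: reject, store unchanged
                return False
            self._bounds[var] = (new_lo, new_hi)
            return True

store = AtomicStore()
store.atomic_tell("x", 0, 10)   # accepted
store.atomic_tell("x", 5, 20)   # accepted, narrows x to [5, 10]
store.atomic_tell("x", 15, 30)  # rejected: would make the store inconsistent
```

In a centralized implementation this lock is cheap; distributed over many sites, the same test-and-set forces global coordination on every tell.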
On the other hand there are the languages with "eventual tell", where synchronization between processes is based only on the entailment check (ask). The problem with this approach is that it does not provide powerful mechanisms for process cooperation. For instance, in most cases where certain processes must reach some agreement, an additional process acting as a monitor is also needed. Another drawback is that there is no mechanism to prevent entering an "inconsistent state of the world" and, worse yet, if such an inconsistency is generated, there is no way to backtrack from it.
In this talk we propose a concurrent constraint paradigm based on a new mechanism for process synchronization, suitable for cooperation in a distributed setting. The core of the proposal is a test-and-set primitive which acts only on parts of the state, so as to avoid global dependencies, yet is still powerful enough to express distributed algorithms in a natural way. More specifically, this primitive, inspired by the language Linda, is an ask-and-remove action which performs an entailment check and consumes the entailing constraint. This mechanism also offers a solution to the problem of recovering from inconsistency.
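The contrast between a plain ask and the proposed ask-and-remove can be sketched as follows. This is only a caricature, in which entailment is crudely reduced to membership in a multiset of primitive constraints (the class `Store` and that simplification are ours); it shows why consumption makes the primitive behave like Linda's `in`:

```python
class Store:
    """Store as a multiset of primitive constraints; entailment is
    simplified here to plain membership (a deliberately crude assumption)."""
    def __init__(self):
        self.constraints = []

    def tell(self, c):
        self.constraints.append(c)

    def ask(self, c):
        """Plain ask: does the store entail c? Non-destructive, so any
        number of processes can observe the same constraint."""
        return c in self.constraints

    def ask_and_remove(self, c):
        """Entailment check that also consumes the entailing constraint,
        in the spirit of Linda's `in`: at most one process can win it."""
        if c in self.constraints:
            self.constraints.remove(c)   # consumed: later askers fail
            return True
        return False

s = Store()
s.tell(("token", 1))
s.ask_and_remove(("token", 1))   # first consumer succeeds
s.ask_and_remove(("token", 1))   # second fails: the token was consumed
```

Because the winning process removes exactly the constraint it tested, mutual exclusion falls out of the primitive itself, with no monitor process; and removing a constraint is also the hook for retracting information that would otherwise leave the store inconsistent.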
Professor at the Université de Technologie de Compiègne
In collaboration with Eric Pinson, Université Catholique de l'Ouest, Angers
Bounds and adjustments
associated with the Pm/ri,qi/Cmax scheduling problem
The aim of this talk is to present some bounds and adjustments associated with the m-machine problem. In the m-machine problem, one has to schedule n tasks on m identical machines in order to minimize the makespan. Each task has a head (release date), a processing time, and a tail (delivery duration). m-machine problems make it possible to model cumulative constraints. For m = 1 we obtain the one-machine sequencing problem, which plays a central role in solving NP-hard disjunctive scheduling problems such as the job-shop. This is due to the properties of Jackson's Preemptive Schedule (JPS), which yields good lower bounds and efficient adjustments of heads and tails. We have generalized JPS by introducing Jackson's Pseudo-Preemptive Schedule (JPPS). The makespan of JPPS can be computed in O(n log n + nm log m) time. We can also obtain adjustments of heads and tails for the m-machine problem. JPPS could therefore also play a central role in solving the Resource-Constrained Project Scheduling Problem.
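For the one-machine case, JPS can be sketched as a short event-driven routine: at every instant, run the available task with the largest tail, preempting when a task with a larger tail is released. The function name `jps_makespan` and the `(r, p, q)` job encoding are our illustrative choices; the makespan it returns is the classical lower bound on the optimal non-preemptive Cmax.

```python
import heapq

def jps_makespan(jobs):
    """Makespan of Jackson's Preemptive Schedule for 1 | r_i, q_i | Cmax.
    jobs: list of (r, p, q) = (head/release date, processing time, tail).
    Runs, at every instant, the released unfinished task with the largest
    tail; returns a lower bound on the optimal non-preemptive makespan."""
    pending = sorted(jobs)              # tasks ordered by release date
    i, t, cmax = 0, 0, 0
    avail = []                          # max-heap on tail: (-q, remaining p)
    while i < len(pending) or avail:
        if not avail:                   # machine idle: jump to next release
            t = max(t, pending[i][0])
        while i < len(pending) and pending[i][0] <= t:
            r, p, q = pending[i]        # release every task with head <= t
            heapq.heappush(avail, (-q, p))
            i += 1
        neg_q, rem = heapq.heappop(avail)
        horizon = pending[i][0] if i < len(pending) else float("inf")
        run = min(rem, horizon - t)     # run until done or next release
        t += run
        if run < rem:                   # preempted: re-queue the remainder
            heapq.heappush(avail, (neg_q, rem - run))
        else:                           # finished: completion time + tail
            cmax = max(cmax, t - neg_q)
    return cmax

jps_makespan([(0, 3, 2), (1, 2, 5)])    # → 8
```

With a binary heap this runs in O(n log n), consistent with JPS's role as a cheap bounding and adjustment tool inside branch-and-bound for the job-shop.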