LICS 2011 Accepted Papers with Abstracts
Abstract:
We study the two-variable fragments D^2 and IF^2 of dependence logic
and independence-friendly logic. We consider the satisfiability and
finite satisfiability problems of these logics and show that for D^2,
both problems are NEXPTIME-complete, whereas for IF^2, the problems are
Pi^0_1-complete and Sigma^0_1-complete, respectively. We also show that D^2 is
strictly less expressive than IF^2, and that infinity and equicardinality
of two unary predicates can already be expressed in D^2.
Abstract:
For a fixed infinite structure Gamma with finite signature tau, we
study the following computational problem: the input consists of quantifier-free
first-order tau-formulas phi_0, phi_1, ..., phi_n that define relations
R_0,R_1,...,R_n over Gamma. The question is whether the relation R_0 is
primitive positive definable from R_1,...,R_n, i.e., definable by a
first-order formula that uses only relation symbols for R_1,..., R_n,
equality, conjunctions, and existential quantification (disjunction,
negation, and universal quantification are forbidden).
We show
decidability of this problem for all structures Gamma that have a
first-order definition in an ordered homogeneous structure Delta with a
finite language whose age is a Ramsey class and is determined by finitely
many forbidden substructures. Examples of structures Gamma with this
property are the order of the rationals, the random graph, the
homogeneous universal poset, the random tournament, all homogeneous
universal C-relations, and many more. We also obtain decidability of the
problem when we replace primitive positive definability by existential
positive or existential definability. Our proof makes use of
universal-algebraic and model-theoretic concepts, Ramsey theory, and a recent
characterization of Ramsey classes in topological dynamics.
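The notion of primitive positive definability above can be illustrated with a small hypothetical example (ours, not taken from the paper): over the random graph with edge relation E, the relation "x and y have a common neighbour" has the primitive positive definition

```latex
% pp-definition: only conjunction and existential quantification appear
R_0(x,y) \;:=\; \exists z\, \bigl( E(x,z) \wedge E(y,z) \bigr)
```

No disjunction, negation, or universal quantification is used, as the definition of the problem requires.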
Abstract:
We study probabilistic coherent spaces --- a denotational semantics
interpreting programs as entire functions whose scalars range over the
non-negative real numbers. We prove that this semantics is adequate for a
probabilistic extension of pure lambda-calculus: the probability that a
term reduces to a head normal form is equal to the sum of the scalars of
its denotation on a suitable set of values. The result gives, in a
probabilistic setting, a quantitative refinement to the adequacy of
Scott's model for untyped lambda-calculus.
Richard Mayr and Parosh Abdulla.
Computing Optimal Coverability Costs in Priced Timed Petri Nets
Abstract:
We consider timed Petri nets, i.e., unbounded Petri nets where each token carries a
real-valued clock. Transition arcs are labeled with time intervals,
which specify constraints on the ages of tokens.
Our cost model assigns token storage costs per time unit to places,
and firing costs to transitions.
We study the cost to reach a given control-state.
In general, a cost-optimal run may not exist. However,
we show that the infimum of the costs is computable.
Abstract:
There are standard logics DTC, TC, and LFP capturing the complexity
classes L, NL, and P on ordered structures, respectively. In [Chen and
Flum, 2010] we have shown that LFP_inv, the ``order-invariant least
fixed-point logic LFP,'' captures P (on all finite structures) if and
only if there is a listing of the P-subsets of the set TAUT of
propositional tautologies. We are able to extend the result to listings
of the L-subsets (NL-subsets) of TAUT and the logic DTC_inv (TC_inv). As
a byproduct we get that LFP_inv captures P if DTC_inv captures L.
Furthermore,
we show that the existence of a listing of the L-subsets of TAUT is
equivalent to the existence of an almost space optimal algorithm for
TAUT. To obtain this result we have to derive a space version of a
theorem of Levin on optimal inverters.
Abstract:
The use of probability to reason about uncertainty is fundamental to
modern science and AI, and the computation of conditional probabilities,
in order to perform evidential reasoning in probabilistic models, is
perhaps its single most important computational problem. The
desire to produce increasingly complex probabilistic models has led to
renewed interest in probabilistic programming languages, in which
practitioners can define intricate, even infinite-dimensional, models by
implementing a generative process that produces an exact sample from
the joint distribution. Implementations of these languages
have endeavored to provide general algorithmic support for conditional
distributions with respect to continuous random variables, but these
efforts have remained ad-hoc and incomplete. Here we
demonstrate why this support is necessarily incomplete, by exhibiting a
computable joint distribution with a noncomputable conditional
distribution. Specifically, we construct a pair of computable
random variables (X,Y) in the unit interval whose conditional
distribution P[Y|X] encodes the halting problem.
Nevertheless,
probabilistic inference has proven remarkably successful in practice,
even in infinite-dimensional continuous settings. We prove
several results giving general conditions under which conditional
distributions are computable. In the discrete or dominated
setting, under suitable computability hypotheses, conditional
distributions are computable. Likewise, conditioning in
abstract topological settings is a computable operation, in the presence
of certain additional structure, such as independent absolutely
continuous noise.
Abstract:
Ground tree rewrite systems (GTRS) are an extension of pushdown systems
with the ability to spawn new subthreads that
are hierarchically structured.
In this paper, we study the following problems over GTRS:
(1) model checking EF-logic, (2)
weak bisimilarity checking against finite systems, and (3) strong bisimilarity
checking against finite systems. Although they are all known to be decidable,
we show that problems (1) and (2) have nonelementary complexity, whereas
problem (3) is in $\coNEXP$; we prove this by finding a syntactic fragment of EF
whose model checking complexity is complete for $\P^\NEXP$.
The same problems are studied
over a more general but decidable extension of GTRS
called regular GTRS (RGTRS), where regular rewriting is allowed. Over RGTRS
we show that all three problems have nonelementary complexity.
We also apply our techniques to problems over PA-processes, a well-known class
of infinite systems in Mayr's PRS (Process Rewrite Systems) hierarchy. For
example,
strong bisimilarity checking of PA-processes against finite systems is shown
to be in $\coNEXP$, yielding the first elementary upper bound for this problem.
Abstract:
We study finite tree automata running over infinite binary
trees. As usual, a run of such an automaton over an input tree is a tree
labeled by control states of the automaton: the labelling is built in a
top-down fashion and should be consistent with the transitions of the
automaton. A branch in a run is accepting if the ω-word obtained by
reading the states along the branch satisfies some acceptance condition
(typically an ω-regular condition such as a Büchi or a parity
condition). Finally, a tree is accepted by the automaton if there exists
a run over this tree in which every branch is accepting. In this paper,
we consider two relaxations of this definition that introduce a
qualitative notion of acceptance.
First, we relax the notion of accepting run
by allowing a negligible set (in the sense of measure theory) of
non-accepting branches. In this qualitative setting, a tree is accepted
by the automaton if there exists a run over this tree in which almost
every branch is accepting. This leads to a new class of tree languages,
qualitative tree languages. This class enjoys many good properties:
closure under union and intersection (but not under complement),
emptiness decidable in polynomial time, and a strong relation with
Markov decision processes.
The second relaxation replaces the existential
quantification — a tree is accepted if there exists some accepting run
over the input tree — by a probabilistic quantification — a tree is
accepted if almost every run over the input tree is accepting. For the
run, we may use either classical acceptance or qualitative acceptance.
In particular, for the latter, when equipped with a Büchi condition, we
show that it leads to a class of probabilistic automata on infinite
trees enjoying a decidable emptiness problem. To our knowledge, this is
the first positive result for a class of probabilistic automata over
infinite trees.
Abstract:
We classify completely the complexity of evaluating positive equality-free sentences
of first-order logic over a fixed, finite structure D. This problem may be seen as a
natural generalisation of the quantified constraint satisfaction problem QCSP(D). We obtain
a tetrachotomy for arbitrary finite domains: each problem is either in L, is NP-complete,
is co-NP-complete or is Pspace-complete. Moreover, its complexity is characterised algebraically
in terms of the presence or absence of specific surjective hyper-endomorphisms; and, logically,
in terms of relativisation properties with respect to positive equality-free sentences.
We prove that the meta-problem of establishing, for a specific D, into which of the four
classes the related problem falls is NP-hard.
Abstract:
Using Jeřábek's framework for probabilistic reasoning, we formalize the
correctness of two fundamental RNC2 algorithms for bipartite perfect
matching within the theory VPV for polytime reasoning. The first
algorithm is for testing if a bipartite graph has a perfect matching,
and is based on the Schwartz-Zippel Lemma for polynomial identity
testing applied to the Edmonds polynomial of the graph. The second
algorithm, due to Mulmuley, Vazirani and Vazirani, is for finding a
perfect matching, where the key ingredient of this algorithm is the
Isolating Lemma.
Abstract:
There has recently been a significant effort to add quantitative objectives to formal verification and synthesis.
We
introduce and investigate the extension of temporal logics with
quantitative atomic assertions, aiming for a general and flexible
framework for quantitative-oriented specifications.
At the heart
of quantitative objectives lies the accumulation of values along a
computation. It is either the accumulated summation, as with
the
energy objectives, or the accumulated average, as with the mean-payoff
objectives. We investigate the extension of temporal logics with the
{\em
prefix-accumulation assertions\/} $\Sum(v) \geq c$ and $\Avg(v) \geq
c$, where $v$ is a numeric variable of the system, $c$ is a constant
rational
number, and $\Sum(v)$ and $\Avg(v)$ denote the accumulated sum and
average of the values of $v$ from the beginning of the computation
up
to the current point of time. We also allow the {\em path-accumulation
assertions\/} $\LimInfAvg(v)\geq c$ and $\LimSupAvg(v)\geq c$,
referring to the average value along an entire computation.
We
study the border of decidability for extensions of various temporal
logics. In particular, we show that extending the fragment of CTL that
has only the EX, EF, AX, and AG temporal modalities by
prefix-accumulation assertions and extending LTL with path-accumulation
assertions, result in temporal logics whose model-checking problem is
decidable. The extended logics make it possible to significantly extend the
currently known energy and mean-payoff objectives. Moreover, the
prefix-accumulation assertions may be refined with
``controlled-accumulation'', allowing one, for example, to specify
constraints on the average waiting time between a request and a grant.
On
the negative side, we show that the fragment we point to is, in a
sense, the maximal logic whose extension with prefix-accumulation
assertions permits a decidable model-checking procedure. Extending a
temporal logic that has the EG or EU modalities, and in particular CTL
and LTL, makes the problem undecidable.
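The prefix-accumulation assertions above have a direct pointwise reading. As a minimal illustration (ours, with hypothetical names), the following evaluates Sum(v) >= c and Avg(v) >= c at every point of a finite computation prefix:

```python
from fractions import Fraction

def eval_prefix_assertions(values, c):
    """At each time point t, evaluate the assertions Sum(v) >= c and
    Avg(v) >= c over the values of v from the beginning up to t."""
    results, running_sum = [], 0
    for t, v in enumerate(values):
        running_sum += v
        avg = Fraction(running_sum, t + 1)   # exact average, no rounding
        results.append((running_sum >= c, avg >= c))
    return results
```

For the trace 1, -2, 4 with threshold c = 1, both assertions hold at the first and third points but fail at the second, where the accumulated sum is -1 and the average is -1/2.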
Abstract:
We introduce the domain of continuous random variables (CRV) over a
domain, as an alternative to Jones and Plotkin's probabilistic
powerdomain. While no known Cartesian-closed category is stable
under the latter, we show that the so-called thin CRVs define a strong
monad on the Cartesian-closed category of bc-domains. The Jones and
Plotkin semantics of higher-order probabilistic programs occurs as a
form of extensional collapse of our new semantics in terms of thin
CRVs, preserving observational equivalence. We apply this to solve
a recent problem posed by M. Escard\'o [note: proof not complete as
of Tuesday, Jan 04, 13h; claim may be removed from final
submission]. Thin CRVs arose from the study of the second author's
(layered) Hoare indexed valuations, and we also make the connection
apparent.
Sergey Goncharov and Lutz Schröder.
Powermonads and Tensoring Unranked Effects
Abstract:
Both in semantic theory and in programming practice, algebraic
concepts such as monads or, essentially equivalently, (large)
Lawvere theories are a well-established tool for modelling generic
side-effects. An important issue in this context is that of combination
mechanisms for such algebraic effects, which allow for the modular
design of programming languages and verification logics. The most
basic combination operators are sum and tensor: while the sum of
effects is just their non-interacting union, the tensor imposes
commutation of effects. However, for effects with unbounded arity,
such as continuations or unbounded non-determinism, it is not a
priori clear whether these combinations actually exist in all
cases. Here, we introduce the class of uniform effects, which
includes unbounded non-determinism and continuations, and prove that
the tensor does always exist if one of the component effects is
uniform, thus in particular improving on previous results on
tensoring with continuations. We then treat the case of (unbounded)
non-determinism in more detail, and give an order-theoretic
characterization of effects for which tensoring with non-determinism
is conservative, thus enabling non-deterministic arguments for
deterministic programs such as a generic version of the
Fischer-Ladner encoding of control operators.
Abstract:
A bicategory of very general nondeterministic concurrent games and
strategies is presented. The intention is to formalize distributed games
in which both Player (or a team of players) and Opponent (or a team of
opponents) can interact in highly distributed fashion, without, for
instance, enforcing that their moves alternate. Games and
strategies are represented as polarized event structures, in
which
polarities distinguish the moves of Player and
Opponent. A total map of polarized event structures into $A$
can be understood as a pre-strategy in a game $A$---the map ensures that
Player and Opponent respect the constraints of the
game. Following Joyal, a pre-strategy from a game $A$ to a
game $B$ is understood as a pre-strategy in a composite game obtained by
setting the dual game of $A$, which reverses the roles of Player and
Opponent, in parallel with $B$. From this general scheme
concurrent strategies---pre-strategies for which copy-cat strategies
behave as identities w.r.t.~composition of pre-strategies---are
characterized as those pre-strategies which satisfy the two conditions
of receptivity and innocence. It is shown how (bi)categories of stable
spans, certain profunctors, Berry's stable functions, and certain
sequential algorithms arise as sub(bi)categories of concurrent
games. It is indicated how the results can be adapted to
other models such as asynchronous transition systems and Mazurkiewicz
trace languages, which leads to a brief comparison with related work.
Abstract:
We extend languages of dot-depth one to infinite words. As for
finite words, this class of languages corresponds to the Boolean
closure of Sigma1, the first level of the quantifier alternation
hierarchy of first-order logic FO[<,+1,min,max]. We give a decidable
characterization of the Boolean closure of Sigma1[<,+1,min]
over infinite words; here, the max-predicate is void.
Our characterization is a combination of Knast's algebraic property
for dot-depth one and a topological condition. This yields decidability
of the problem whether a given regular omega-language has
dot-depth one.
Our approach considers finite and infinite words simultaneously. We
prove decidable characterizations for the Boolean closure of
Sigma1[<,+1,min] as well as for the Boolean closure of
Sigma1[<,+1,min,max] over finite and infinite words. In this
setting, the max-predicate can be used for distinguishing between
finite and infinite words. Hence, our result on infinite words can be
obtained as a corollary.
This paper is self-contained and we give full proofs. In particular,
a new combinatorial proof of Knast's Theorem concerning languages of
dot-depth one over finite words is included.
Abstract:
Dickson's Lemma is a simple yet powerful tool widely used in
decidability proofs, especially when dealing with counters or related
data structures in algorithmics, verification and model-checking,
constraint solving, logic, etc. While Dickson's Lemma is
well-known, most computer scientists are not aware of the complexity
upper bounds that are entailed by its use. This is mainly
because, on this issue, the existing literature is not very accessible.
We
propose a new analysis of the length of bad sequences over (N^k,<=),
improving on earlier results and providing upper bounds that are
essentially tight. This analysis is complemented by a "user
guide" explaining through practical examples how to easily derive
complexity upper bounds from Dickson's Lemma.
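In the spirit of such a user guide, here is a small companion sketch (ours, not from the paper), assuming a sequence over N^k is represented as a list of tuples: a sequence is bad when no earlier element is componentwise below a later one, and Dickson's Lemma guarantees that every bad sequence is finite.

```python
from itertools import combinations

def componentwise_leq(a, b):
    """a <= b in the product order on N^k."""
    return all(x <= y for x, y in zip(a, b))

def is_bad(seq):
    """A sequence over (N^k, <=) is bad if for no i < j we have
    seq[i] <= seq[j] componentwise."""
    return all(not componentwise_leq(seq[i], seq[j])
               for i, j in combinations(range(len(seq)), 2))
```

Over N^2, the sequence (3,0), (2,1), (1,2), (0,3) is bad, while appending (4,4) destroys badness since (3,0) <= (4,4).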
Abstract:
The Halpern-Shoham logic is a modal logic of time intervals. Some
effort has been put in the last ten years into classifying fragments of this
beautiful logic with respect to the decidability of their satisfiability
problem. We close this classification by showing -- what we believe is
quite an unexpected result -- that the logic of subintervals,
the fragment of the Halpern-Shoham logic where only the operator
"during", or D, is allowed, is undecidable over discrete structures.
This
is surprising as this, apparently very simple, logic is decidable over
dense orders and its reflexive variant is known to
be decidable over discrete structures.
Our result
subsumes many previous negative results for the discrete case, such as
the undecidability for ABE, BD, ADB, AAbarD, and so on.
Nikos Tzevelekos and Andrzej Murawski.
Game semantics for good general references
Abstract:
We present a new fully abstract and effectively presentable
denotational model for RefML, a paradigmatic higher-order programming
language combining call-by-value evaluation and general references in
the style of ML. Our model is built using game semantics. In contrast
to the previous model by Abramsky, Honda and McCusker, it provides a
faithful account of reference types, and the full abstraction result
does not rely on the availability of spurious constructs of reference
type (bad variables). This is the first proper denotational model of
this kind, preceded only by the trace model recently proposed by Laird.
Abstract:
We propose a type system for an imperative programming language, which
certifies program time bounds. The type system is based on the
information-flow security approach: each program variable has a level,
and we prevent information from flowing from low-level variables to
high-level variables. Moreover, we introduce a downgrading mechanism in
order to delineate a broad class of programs. Thus, we establish the
first relation between security-typed languages and implicit
computational complexity, and we obtain a characterization of the class
of polynomial time functions.
To our knowledge, this is the first
type system for an imperative language, which provides a complete
characterization of polynomial time functions.
Christian Herrmann and Martin Ziegler.
Computational Complexity of Quantum Satisfiability
Abstract:
Quantum logic generalizes, and in dimension one coincides with, Boolean propositional logic.
We
introduce the weak and strong satisfiability problems for quantum logic
formulas, and show that both are NP-complete in dimension two as well.
For
higher-dimensional spaces R^d and C^d with d>2 fixed, on the other
hand, we show the problem to be complete for the nondeterministic
Blum-Shub-Smale model of real computation.
This provides a unified view on both Turing and real BSS complexity theory;
and
adds a (perhaps more natural and combinatorially flavoured) one to the
still sparse list of NP_R-complete problems, mostly pertaining to real
algebraic geometry.
Our proofs rely on (a careful examination of)
works by John von Neumann as well as contributions by Hagge et al.
(2005, 2007, 2009).
We finally investigate the problem over indefinite finite dimensions and relate it to noncommutative semialgebraic geometry.
Abstract:
We investigate the problem of type isomorphisms in a programming
language with higher-order references. We first recall the
game-theoretic model of higher-order references by Abramsky, Honda and
McCusker. Solving an open problem by Laurent, we show that two finitely
branching arenas are isomorphic if and only if they are geometrically
the same, up to renaming of moves (Laurent's forest isomorphism). We
deduce from this an equational theory characterising isomorphisms of
types in a finitary programming language L2 with higher order
references. We show however that Laurent's conjecture does not hold on
infinitely branching arenas, yielding a non-trivial type isomorphism in
the extension of L2 with natural numbers.
Abstract:
We present the topos S of trees as a model of guarded recursion. We
study the internal dependently-typed higher-order logic of S and show
that S models two modal operators, on predicates and types, which
serve as guards in recursive definitions of terms, predicates, and
types. In particular, we show how to solve recursive type equations
involving dependent types. We propose that the internal logic of S
provides the right setting for the synthetic construction of abstract
versions of step-indexed models of programming languages and program
logics. As an example, we show how to construct a model of a
programming language with higher-order store and recursive types
entirely inside the internal logic of S.
Abstract:
We study the satisfiability problem for a logic on data words. A data
word is a finite word where every position carries a label from a finite
alphabet and a data value from an infinite domain. The logic we
consider is two-way: it contains "future" and "past" modalities,
interpreted as reflexive and transitive relations, together with
data equality and inequality tests. This logic corresponds to the
fragment of XPath with the 'following-sibling-or-self' and
'preceding-sibling-or-self' axes over a data word. We show that this
problem is decidable, in fact ExpSpace-complete. This is surprising
considering that with the strict navigation relations (non-reflexive)
the satisfiability problem is undecidable. To show this we first reduce
the problem to a derivation problem for an infinite transition system,
and then we show how to abstract this problem into a reachability
problem of a finite transition system.
Martin Churchill, James Laird and Guy McCusker.
Imperative Programs as Proofs via Game Semantics
Abstract:
Game semantics extends the Curry-Howard isomorphism to a three-way
correspondence: proofs, programs, strategies. But the universe of
strategies goes beyond intuitionistic logics and lambda calculus, to
capture stateful programs. In this paper we describe a logical
counterpart to this extension, in which proofs denote such strategies.
We can embed intuitionistic first-order linear logic into this system,
as well as an imperative total programming language. The logic makes
explicit use of the fact that in the game semantics the exponential can
be expressed as a final coalgebra. We establish a full completeness
theorem for our logic, showing that every bounded strategy is the
denotation of a proof.
Abstract:
The introduction of Halpern and Shoham's modal logic of intervals
(later on called HS) dates back to 1986. Despite its natural semantics,
this logic is undecidable over all interesting classes of temporal
structures. This discouraged research in this area until recently, when a
number of non-trivial decidable fragments were found.
This
paper is a contribution towards the complete classification of HS
fragments. Different combinations of Allen's interval relations begins
(B), meets (A), and later (L), and their inverses Abar, Bbar, and Lbar,
have been considered in the literature.
We know from previous work
that the combination ABBbarAbar is decidable over finite linear orders
and undecidable everywhere else. We extend these results by showing that
ABBbarLbar is decidable over the class of all (resp., dense,
discrete) linear orders, and that it is maximal w.r.t. decidability over
these classes: adding any other interval modality immediately leads to
undecidability.
Ichiro Hasuo and Naohiko Hoshino.
Semantics of Higher-Order Quantum Computation via Geometry of Interaction
Abstract:
Quantum computation attracts growing attention as a new computing
paradigm. While much of its current study employs low-level formalisms
such as quantum circuits, several high-level languages/calculi have
been recently proposed aiming at structured quantum programming. The
current work contributes to the semantical study of such high-level
quantum languages, by providing interaction-based semantics of a
functional quantum programming language---it is based on linear lambda
calculus and equipped with features like the $!$ modality and
recursion. We prove soundness and adequacy of our semantics. The
construction of our model is by a series of standard techniques taken
from the semantics of classical computation as well as from process
theory, namely: 1) monads with an order structure as models for
branching, used in the coalgebraic study of state-based systems; 2)
Girard's Geometry of Interaction, categorically formulated by
Abramsky, Haghverdi and Scott, providing interaction-based, game-like
semantics for linear logic and computation; and 3) the realizability
technique that turns an (untyped) combinatorial algebra into a
categorical model of a typed calculus. The mathematical genericity of
these techniques---mostly due to their categorical formulation---is
exploited for our move from classical to quantum.
Abstract:
We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions.
We consider two different objectives, namely, expectation and satisfaction
objectives.
Given
an MDP with k reward functions, in the expectation objective the goal
is to maximize the expected value, and in the satisfaction objective the
goal is to maximize the probability of runs such that the limit-average
value stays above a given vector.
We show that under the expectation
objective, in contrast to the single-objective case, both randomization
and memory are necessary for strategies, and we show that finite-memory
randomized strategies are sufficient. We show that under the
satisfaction objective, in contrast to the single-objective case,
randomization is necessary for strategies, and we show that randomized
memoryless strategies are sufficient for epsilon-approximation, for all
epsilon>0. We show that the decision problems for both expectation
and satisfaction objectives can be solved in polynomial time and the
trade-off curve (Pareto curve) can be epsilon-approximated in time
polynomial in the size of the MDP and 1/epsilon, and exponential in the
number of reward functions, for all epsilon>0.
Our results also
reveal flaws in an earlier paper ("Markov Decision Processes with
multiple Long-Run Average Objectives", FSTTCS 2007) for MDPs with
multiple mean-payoff functions under the expectation objective, correct
the flaws and obtain improved results.
Ugo Dal Lago and Marco Gaboardi.
Linear Dependent Types and Relative Completeness
Abstract:
A system of linear dependent types for the lambda calculus with full
higher-order recursion, called dlPCF, is introduced and proved sound and
relatively complete. Completeness holds in a strong sense: dlPCF is not
only able to precisely capture the functional behaviour of PCF programs
(i.e. how the output relates to the input) but also some of their
intensional properties, namely the complexity of evaluating them with
the Krivine machine. dlPCF is designed around dependent types and linear
logic and is parametrized on the underlying language of index terms,
which can be tuned so as to sacrifice completeness for decidability.
Abstract:
Separation logic has proven to be a highly effective tool for the
verification of heap-manipulating programs. However, it has been
applied almost exclusively in language settings where either memory is
managed manually or the issue of memory management is ignored
altogether. In this paper, we present a variant of separation logic,
GCSL, for reasoning about low-level programs that interface to a
garbage collector. In contrast to prior work by Calcagno et al., our
model of GCSL (1) permits reasoning about programs that use internal
pointers and address arithmetic, (2) supports logical variables that
range over pointers, and (3) straightforwardly validates the ``frame''
rule, as well as a standard interpretation of separation-logic
assertions. Essential to our approach is the technique (due
originally to McCreight et al.) of distinguishing between ``logical''
and ``physical'' states, which enables us to insulate the logic from
the physical reality that pointer ``values'' may be moved and/or
deallocated by the garbage collector.
Abstract:
This paper is a study of the forcing translation through the
proofs-as-programs correspondence in classical logic, following the methodology
introduced by Krivine in [Kri08,Kri10]. For that, we introduce an
extension of (classical) higher-order arithmetic suited to express the
forcing translation, called PAw+. We present the proof system of PAw+ -
based on Curry-style proof terms with call/cc - as well as the
corresponding classical realizability semantics à la Krivine. Then,
given a poset of conditions (represented in PAw+ as an upwards closed
subset of a fixed meet semi-lattice), we define the forcing translation A
-> (p forces A) (where A ranges over propositions) and show that the
corresponding transformation of proofs is induced by a simple program
transformation t -> t^* defined on raw proof-terms (i.e.
independently from the derivation). From an analysis of the
computational behavior of transformed programs, we show how to avoid the
cost of the transformation by introducing an extension of Krivine's
abstract machine devoted to the execution of proofs constructed by
forcing. We finally show that this machine induces new classical
realizability models and present the corresponding adequacy results.
Abstract:
We describe a denotational model of higher-order functional reactive
programming using ultrametric spaces and nonexpansive maps, which
provides a natural Cartesian closed generalization of causal stream
functions and guarded recursive definitions. We define a type theory
corresponding to this semantics and show that it satisfies
normalization. Finally, we show how reactive programs written in this
language may be implemented efficiently using an imperatively updated
dataflow graph and give a higher-order separation logic proof that
this low-level implementation is correct with respect to the
high-level semantics.
Mikołaj Bojańczyk, Bartek Klin and Sławomir Lasota.
Automata with group actions
Abstract:
Our motivating question is a Myhill-Nerode theorem for infinite
alphabets. We consider several kinds of infinite
alphabets: alphabets whose letters can be compared only for
equality, but also alphabets with more structure, such as a total order
or a partial order. We develop a framework for studying such alphabets,
where the key role is played by the automorphism group of the
alphabet. This framework builds on the idea of nominal sets of Gabbay
and Pitts; nominal sets are the special case of our framework
where letters can only be compared for equality. We use
the framework to uniformly generalize to infinite alphabets
parts of automata theory, including algorithms. In the case of letters
compared for equality, we obtain automata equivalent in
expressive power to finite memory automata, as defined by
Francez and Kaminski.
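The equality-only case can be illustrated with a small simulator for finite-memory automata in the style of Francez and Kaminski; the encoding below is our own sketch, not the paper's formalism:

```python
def run_fma(word, transitions, start, accept, k=1):
    """Nondeterministic finite-memory automaton: finitely many control
    states plus k registers. A letter from the infinite alphabet can only
    be compared for equality with a register. transitions[state] is a list
    of (test, store, next_state) triples: `test` is a register index the
    letter must equal, or '*' for no test; `store` is a register index to
    overwrite with the letter, or None."""
    configs = {(start, (None,) * k)}
    for a in word:
        new_configs = set()
        for state, regs in configs:
            for test, store, nxt in transitions.get(state, []):
                if test != '*' and regs[test] != a:
                    continue
                r = list(regs)
                if store is not None:
                    r[store] = a
                new_configs.add((nxt, tuple(r)))
        configs = new_configs
    return any(state in accept for state, _ in configs)

# Words over an infinite alphabet in which some letter occurs twice:
# nondeterministically guess the letter, store it, and wait for a repeat.
T = {
    "guess":  [("*", None, "guess"), ("*", 0, "armed")],
    "armed":  [("*", None, "armed"), (0, None, "accept")],
    "accept": [("*", None, "accept")],
}
```

With one register this automaton accepts "abcb" and rejects "abc"; no finite automaton over the raw (infinite) alphabet can do this.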
Libor Barto.
The Dichotomy for Conservative Constraint Satisfaction Problems Revisited
Abstract:
A central open question in the study of non-uniform constraint
satisfaction problems (CSPs) is the dichotomy conjecture of Feder and
Vardi stating that the CSP over a fixed constraint language is either
NP-complete, or tractable.
One of the main achievements in this
direction is a result of Bulatov (LICS'03) confirming the dichotomy
conjecture for conservative CSPs, that is, CSPs over constraint
languages containing all unary relations.
Unfortunately, the proof is very long and complicated, and therefore hard to understand even for a specialist.
This paper provides a short and transparent proof.
Abstract:
Additive linear logic, the fragment of linear logic concerning linear
implication between strictly additive formulae, coincides with
sum-product logic, the internal language of categories with free finite
products and coproducts. Deciding equality of its proof terms, as
imposed by the categorical laws, is complicated by the presence of the
units (the initial and terminal objects of the category) and the fact
that in a free setting products and coproducts do not distribute. The
best known decision algorithm, due to Cockett and Santocanale (CSL
2009), is highly involved, requiring an intricate case analysis on the
syntax of terms.
This paper provides canonical, graphical
representations of the categorical morphisms, yielding a novel solution
to this decision problem. Starting with (a modification of) existing
proof nets, due to Hughes and Van Glabbeek, for additive linear logic
without units, canonical forms are obtained by graph rewriting. The
rewriting algorithm is remarkably simple. As a decision procedure for
term equality it matches the known complexity of the problem. A main
technical contribution of the paper is the substantial correctness proof
of the algorithm.
Bruno Barras, Jean-Pierre Jouannaud, Pierre-Yves Strub and Qian Wang.
CoqMTU: a higher-order type theory with a predicative hierarchy of universes parameterized by a decidable first-order theory
Abstract:
We study a complex type theory, a Calculus of Inductive Constructions
with a predicative hierarchy of Universes and a first-order theory T
built in its conversion relation. The built-in theory T is specified
abstractly, by a set of constructors, a set of defined symbols, axioms
expressing that constructors are free and that defined symbols are
completely defined, and a generic elimination principle relying on crucial
properties of first-order structures satisfying these axioms. We first
show that CoqMTU enjoys the usual meta-theoretical properties of such
calculi, including confluence and subject reduction. We then show that
predicative beta-reductions enjoy normal forms. This result is then used
to build an induction principle in order to show the strong
normalization of CoqMTU when restricted to weak-elimination, implying
the decidability of type-checking in this case as well as consistency.
Abstract:
What do you do if a computational object (e.g. program trace) fails a
specification? An obvious approach is to perform \emph{repair}: modify
the object minimally to get something that satisfies the constraints.
This approach has been extensively investigated in the database
community, for relational integrity constraints, and in the AI community
for classical (e.g., propositional) logics. In this paper we study it for
temporal constraints, given as regular languages or temporal logic
formulas. We focus on determining the maximal number of repairs that
must be applied to a word satisfying a given input constraint in order
to ensure that it satisfies a given target constraint. This maximum may
well be infinite; one of our main contributions is to determine when it
is finite -- isolating the complexity of the ``bounded repair problem'',
based on a characterization of the pairs of regular languages that
admit such a repair. We consider this in the setting where the repair
strategy is unconstrained and also when the strategy is restricted to
use finite memory. For the latter case, we analyze several models of
``streaming repair strategies'' (different cost models and lookahead).
Although the streaming setting is quite different from the general
setting, we find that there are surprising connections between streaming
and non-streaming, as well as within variants of the streaming problem.
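The quantity being maximized can be illustrated by brute force; the sketch below (our own toy, with languages given as Python predicates and edits counted as Levenshtein distance, assuming the nearest target word is at most `slack` symbols longer) computes the worst-case repair cost over source words up to a given length:

```python
from itertools import product

def lev(u, v):
    """Levenshtein edit distance (insertions, deletions, substitutions)."""
    d = list(range(len(v) + 1))
    for i, a in enumerate(u, 1):
        prev, d[0] = d[0], i
        for j, b in enumerate(v, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (a != b))
    return d[-1]

def repair_cost(w, target, alphabet, slack=2):
    """Fewest edits taking w into the target language (brute force over
    all candidate words up to length len(w) + slack)."""
    best = len(w) + slack
    for n in range(len(w) + slack + 1):
        for v in product(alphabet, repeat=n):
            if target("".join(v)):
                best = min(best, lev(w, "".join(v)))
    return best

def worst_case(source, target, alphabet, max_len):
    """Max repair cost over source-language words up to max_len; if this
    keeps growing with max_len, the pair admits no bounded repair."""
    return max(
        repair_cost("".join(w), target, alphabet)
        for n in range(max_len + 1)
        for w in product(alphabet, repeat=n)
        if source("".join(w))
    )
```

For instance, repairing a* into "words ending in b" always costs one edit, so the bound is finite, whereas repairing a* into b* costs as many edits as the word is long, so no bound exists.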
Abstract:
A {\em nondeterministic weighted finite automaton\/} (WFA) maps an
input word to a numerical value. Applications of weighted
automata include formal verification of quantitative properties, as
well as text, speech, and image processing. Many of these applications
require the WFAs to be deterministic, or work substantially better
when the WFAs are deterministic. Unlike NFAs, which can always be
determinized, not all WFAs have an equivalent deterministic weighted
automaton (DWFA). In \cite{Moh97}, Mohri describes a determinization
construction for a subclass of WFA. He also describes a property of
WFAs (the {\em twins property}), such that all WFAs that satisfy the
twins property are determinizable and the algorithm terminates on them.
Unfortunately, many natural WFAs do not satisfy the twins property and
cannot be determinized using Mohri's algorithm.
In this paper we study {\em approximated determinization\/} of WFAs.
We describe an algorithm that, given a WFA $\A$ and an
approximation factor $t > 1$, constructs a DWFA $\A'$ that
{\em $t$-determinizes\/} $\A$. Formally, for all words $w \in \Sigma^*$, the
value of $w$ in $\A'$ is at least its value in $\A$ and at most $t$
times its value in $\A$. Our construction involves two new ideas:
attributing states in the subset construction by both upper and lower
residues, and collapsing attributed subsets whose residues can be
tightened.
The larger the approximation factor is, the more attributed subsets we
can collapse. Thus, $t$-determinization is helpful not only for WFAs that
cannot be determinized, but also in cases where determinization is
possible but results in automata that are too large to handle. In addition,
$t$-determinization is useful for reasoning about the competitive
ratio of online algorithms. We also describe a property (the {\em $t$-twins
property}) and use it in order to characterize $t$-determinizable WFAs.
Finally, we describe a polynomial algorithm for deciding whether a
given WFA has the $t$-twins property.
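The defining inequality of $t$-determinization can be checked concretely on a toy (min,+)-weighted automaton; the encoding and the sample automata below are our own illustration, not the paper's construction:

```python
def wfa_value(word, init, trans, final):
    """Value of `word` in a (min,+)-weighted automaton: the minimum, over
    accepting paths, of the sum of the weights along the path.
    init/final map states to weights; trans maps (state, letter) to a
    list of (next_state, weight) pairs."""
    costs = dict(init)
    for a in word:
        nxt = {}
        for q, c in costs.items():
            for q2, w in trans.get((q, a), []):
                nxt[q2] = min(nxt.get(q2, float("inf")), c + w)
        costs = nxt
    return min((c + final[q] for q, c in costs.items() if q in final),
               default=float("inf"))

def t_approximates(det, wfa, t, words):
    """Check the defining inequality of t-determinization on sample words:
    value(w, A) <= value(w, A') <= t * value(w, A)."""
    return all(
        wfa_value(w, *wfa) <= wfa_value(w, *det) <= t * wfa_value(w, *wfa)
        for w in words)

# Nondeterministic A over {a}: value(a^n) = min(2n - 1, n + 1)
A = ({"q": 0},
     {("q", "a"): [("p", 1), ("r", 2)],
      ("p", "a"): [("p", 2)],
      ("r", "a"): [("r", 1)]},
     {"p": 0, "r": 0})
# A deterministic automaton with value(a^n) = 2n, which 2-approximates
# A on the sampled words
D = ({"s": 0}, {("s", "a"): [("s", 2)]}, {"s": 0})
```

Here $\A$ chooses between a cheap-entry/expensive-loop path and an expensive-entry/cheap-loop path, while the deterministic $\A'$ commits to a single loop whose value stays within a factor of 2 of the optimum.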