

08:30-10:30 Session 2: Invited tutorial
Handwriting Recognition (a Tutorial)
Handwritten input is increasingly important in modern computing. Tablets, electronic whiteboards and telephones today accept handwritten input of one sort or another. Document analysis systems strive to handle handwritten annotations or entire documents in multiple languages and scripts, and large-scale business applications, such as mail sorting and cheque cashing, rely critically on computer-based handwriting recognition.
This tutorial focuses primarily on mathematical handwriting recognition. Accurate computer recognition of handwritten mathematics promises a natural interface for mathematical computing, document creation and collaboration. Mathematical handwriting, however, presents a number of challenges beyond those of recognizing handwritten natural languages. For example, mathematical writing draws symbols from a range of different alphabets, and many symbols look alike. Many writers are unfamiliar with the symbols they must use and therefore write them incorrectly. Mathematical notation is two-dimensional, and size and placement information is significant. Additionally, there is no fixed vocabulary of mathematical "words" that can be used to disambiguate symbol sequences. On the other hand, there are some simplifications: symbols do tend to be well segmented. Given these characteristics, new methods of character recognition are needed for accurate handwritten mathematics input.
We explore the issues in handwriting recognition in general and in mathematical handwriting in particular. Special attention is given to methods based on functional approximation of digital ink traces and to general classification techniques. A section of the tutorial is devoted to issues that arise in the recognition of mathematical handwriting in Arabic documents.
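To make the "functional approximation of digital ink traces" idea concrete, here is a minimal sketch (not the tutorial's implementation): one coordinate channel of a stroke, re-parameterised to t in [-1, 1], is summarised by the first few coefficients of a truncated Legendre series, computed with a trapezoid-rule inner product. The degree and the sampling scheme are illustrative assumptions.

```python
def legendre(k, t):
    # Evaluate the Legendre polynomial P_k(t) via the three-term
    # recurrence (k+1) P_{k+1} = (2k+1) t P_k - k P_{k-1}.
    p0, p1 = 1.0, t
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * t * p1 - n * p0) / (n + 1)
    return p1

def trace_coefficients(xs, degree=4):
    """Approximate one coordinate channel xs of an ink trace,
    re-parameterised to t in [-1, 1], by the coefficients c_0..c_degree
    of a truncated Legendre series."""
    n = len(xs)
    ts = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    dt = 2.0 / (n - 1)
    coeffs = []
    for k in range(degree + 1):
        # c_k = (2k+1)/2 * integral_{-1}^{1} x(t) P_k(t) dt (trapezoid rule)
        vals = [x * legendre(k, t) for x, t in zip(xs, ts)]
        integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        coeffs.append((2 * k + 1) / 2.0 * integral)
    return coeffs

# A straight-line "stroke": x(t) = t sampled at 101 points,
# whose coefficient vector is approximately [0, 1, 0, 0].
xs = [-1 + 2 * i / 100 for i in range(101)]
coeffs = trace_coefficients(xs, degree=3)
```

A few such coefficients per channel give a fixed-size, resolution-independent feature vector for the classification techniques discussed in the tutorial.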
10:30-11:00 Coffee Break
11:00-12:00 Session 3: Contributed talks
PDT Logic for Stream Reasoning in Multi-agent Systems

ABSTRACT. We present Probabilistic Doxastic Temporal (PDT) Logic for streams, a formalism to reason about probabilistic beliefs and their infinite temporal evolution in multi-agent systems. It enables the quantification of beliefs through probability intervals and incorporates the concepts of frequency functions and epistemic actions. We show how agents can update their beliefs with respect to their observations, provide a model for infinite streams of possible worlds and show how we can map clippings of these streams to finite time windows. Based on these time windows, we provide an appropriate semantics for PDT Logic, and show that PDT Logic for streams is decidable.

A refutation procedure for proving satisfiability of constraint specifications on XML documents

ABSTRACT. In this paper we first present three sorts of constraints (positive, negative and conditional) to specify that certain patterns must be satisfied in an XML document. These constraints are built on Boolean XPath patterns. We define a specification as a set of clauses whose literals are constraints. Then, to reason about these specifications, we study some sound rules that allow clauses to be inferred, subsumed or simplified. The main goal is to design a refutation procedure, based on these rules, to test whether a given specification is satisfiable. We have formally proven the soundness of our procedure, and we study the completeness and termination of the proposed method.

12:00-14:00 Lunch Break
14:00-15:30 Session 4: Invited and a contributed talk
νZ - Maximal Satisfaction with Z3

ABSTRACT. Satisfiability Modulo Theories (SMT) solvers are used in many applications. These applications benefit from the power of tuned and scalable theorem-proving technologies for supported logics and specialized theory solvers. SMT solvers are primarily used to determine whether formulas are satisfiable, and when formulas are satisfiable, many applications need models that assign values to free variables. Yet in many cases arbitrary assignments are insufficient: what is really needed is an optimal assignment with respect to objective functions. So far, users of Z3, an SMT solver from Microsoft Research, have built custom loops to achieve objective values. This is no longer necessary with νZ (new-Z, or max-Z), an extension within Z3 that lets users formulate objective functions directly. Under the hood there is a portfolio of approaches for solving linear optimization problems over SMT formulas, MaxSMT problems, and their combinations. Objective functions are combined as Pareto fronts, lexicographically, or each objective is optimized independently.
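The difference between the objective-combination modes mentioned in the abstract can be illustrated with a toy brute-force optimizer over a small finite domain (this is only an illustration of the semantics, not νZ's solving technology; the example constraint and objectives are made up).

```python
from itertools import product

def brute_force_optimize(domain, constraint, objectives, mode="lex"):
    """Toy illustration of objective combination: enumerate a small
    finite domain, keep the assignments satisfying `constraint`, then
    combine the `objectives` either lexicographically or independently."""
    feasible = [p for p in product(*domain) if constraint(*p)]
    if mode == "lex":
        # Lexicographic: earlier objectives strictly dominate later ones.
        return max(feasible, key=lambda p: tuple(f(*p) for f in objectives))
    if mode == "independent":
        # Each objective optimized on its own; the optima may be
        # attained at different assignments.
        return [max(f(*p) for p in feasible) for f in objectives]

# Example: maximize x+y, then x, subject to x + 2y <= 6 with x, y in 0..5.
dom = [range(6), range(6)]
best = brute_force_optimize(dom, lambda x, y: x + 2 * y <= 6,
                            [lambda x, y: x + y, lambda x, y: x])
```

Here the first objective has a tie between (5, 0) and (4, 1), and the second objective breaks it lexicographically in favour of (5, 0); in "independent" mode each objective simply reports its own optimum.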

Parametric Strategy Iteration
SPEAKER: Helmut Seidl

ABSTRACT. Program behavior may depend on parameters, which are either configured before compilation time, or provided at runtime, e.g., by sensors or other input devices. Parametric program analysis explores how different parameter settings may affect the program behavior.

In order to infer invariants depending on parameters, we introduce parametric strategy iteration. This algorithm determines the precise least solution of systems of integer equations depending on additional parameters. Conceptually, our algorithm performs ordinary strategy iteration on the given integer system for all possible parameter settings in parallel. This is made possible by means of region trees, which represent the piecewise affine functions that occur. We indicate that each required operation on these trees takes polynomial time if only constantly many parameters are involved.

Parametric strategy iteration for systems of integer equations allows us to construct parametric integer interval analysis as well as parametric analysis of differences of integer variables. It thus provides a general technique for precise parametric program analysis whenever numerical properties of integer variables are of concern.
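The notion of the "least solution of a system of integer equations" can be made concrete with naive Kleene iteration (this is deliberately not strategy iteration, whose whole point is to converge faster than the value-by-value climb below; the equation system is a made-up example for a single fixed parameter setting).

```python
NEG_INF = float("-inf")

def kleene_least_solution(equations, max_rounds=1000):
    """Naive Kleene iteration: start every variable at -infinity and
    re-apply the right-hand sides until nothing changes, yielding the
    least solution of the equation system.  `equations` maps each
    variable name to a function of the current assignment."""
    env = {v: NEG_INF for v in equations}
    for _ in range(max_rounds):
        new_env = {v: rhs(env) for v, rhs in equations.items()}
        if new_env == env:
            return env
        env = new_env
    raise RuntimeError("no fixpoint within the round budget")

# Example system:  x = max(1, min(x + 1, 10));  y = max(x, 3)
eqs = {
    "x": lambda e: max(1, min(e["x"] + 1, 10)),
    "y": lambda e: max(e["x"], 3),
}
sol = kleene_least_solution(eqs)
```

Kleene iteration climbs one unit per round (here x takes ten rounds to reach 10) and need not terminate at all over the integers; strategy iteration instead jumps between max-strategies, and the paper's parametric version additionally tracks how the result varies with the parameters.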

15:30-16:00 Coffee Break
16:00-17:15 Session 5: Contributed talks
A Self-Disciplined Privacy Oriented Access Control Framework for Public Clouds

ABSTRACT. While the transformative power of cloud computing in terms of cost, scalability and agility is widely known, one of the first questions that arises is how the data hosted in the cloud is accessed, stored and used. In this context, privacy preservation is a key user concern when judging the adoption of clouds in domains where sensitive personal information is involved. Indeed, as far as cloud providers are concerned, they should ensure the privacy protection of data hosted in clouds on behalf of their customers. Equally, they should satisfy the needs of law enforcement. Access control is one of the essential and traditional security mechanisms for data protection. However, in open and dynamic environments such as clouds, access control becomes more complicated, because security policies, models and related mechanisms have to be defined across heterogeneous security domains. Thus, improving the current access control paradigms is crucial to ensuring privacy compliance in distributed environments. In this paper, we aim to produce a self-disciplined access control framework for public clouds. We believe that a formal knowledge representation of access control policies, in particular one able to integrate data protection requirements, can enforce privacy compliance at runtime. We mainly focus on ontologies as formalized conceptual models for expressing access rules and supporting private and secure reasoning. This can resolve issues such as ambiguity in the expression of privacy legislation. The issue of interoperability between the various jurisdictions covering the geographical locations of the different cloud actors can also be addressed in a formal manner.

Automated detection and resolution of firewall misconfigurations

ABSTRACT. Firewalls provide efficient security services if they are correctly configured. Unfortunately, configuring a firewall is known to be highly error-prone and work-intensive when performed manually, and becomes infeasible with a large number of rules in a configuration. There is therefore a need for automated methods to analyze, detect and correct misconfigurations. Prior solutions have been proposed, but their drawbacks are threefold. First, common approaches deal only with pairwise filtering rules, so other classes of configuration anomalies may go undetected. Second, they do not distinguish intended firewall conflicts from effective misconfigurations. Third, although anomaly resolution is a tedious task, it is generally left to the network administrator. We present, in this paper, a formal approach whose contributions are the following: detecting new classes of anomalies, bringing out real misconfigurations, and proposing an automatic resolution method that takes the security policy into account. We prove the soundness of our method. The first results we obtained are very promising.
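The "pairwise" analysis that the abstract identifies as a limitation of prior work can be sketched as follows for the classic shadowing anomaly (a textbook illustration, not the paper's method; the two-field rule model and the host sets are toy assumptions).

```python
from dataclasses import dataclass

@dataclass
class Rule:
    # Toy firewall rule: each field is matched by a set of allowed
    # values (a wildcard is simply the full value set).
    src: frozenset
    dst: frozenset
    action: str  # "accept" or "deny"

def matches_subset(inner, outer):
    # Every packet matched by `inner` is also matched by `outer`.
    return inner.src <= outer.src and inner.dst <= outer.dst

def find_shadowing(rules):
    """Pairwise check: rule j is shadowed when an earlier rule i
    matches all of j's packets with a different action, so j can
    never fire.  Anomalies involving three or more rules jointly
    escape this kind of pairwise analysis."""
    anomalies = []
    for j, rj in enumerate(rules):
        for i in range(j):
            if matches_subset(rj, rules[i]) and rules[i].action != rj.action:
                anomalies.append((i, j, "shadowing"))
    return anomalies

hosts = frozenset(range(4))
rules = [
    Rule(src=hosts, dst=frozenset({1}), action="deny"),
    Rule(src=frozenset({2}), dst=frozenset({1}), action="accept"),  # never fires
]
anomalies = find_shadowing(rules)
```

Whether such a conflict is a misconfiguration or an intended exception cannot be decided from the rule list alone, which is exactly the distinction the paper draws by consulting the security policy.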

A Library of Anti-Unification Algorithms
SPEAKER: Temur Kutsia

ABSTRACT. Generalization problems arise in many areas of software science: code clone detection, program reuse, partial evaluation, program synthesis, invariant generation, etc. Anti-unification is a technique often used to solve generalization problems. In this paper we describe an open-source library of some newly developed anti-unification algorithms in various theories: for first- and second-order unranked terms, higher-order patterns, and nominal terms.
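For readers unfamiliar with anti-unification, the first-order ranked case can be sketched in a few lines in the style of Plotkin's classic least-general-generalization algorithm (a textbook version, not the library's code; the term encoding is an assumption).

```python
def anti_unify(s, t, store=None, counter=None):
    """Compute a least general generalization of two ground first-order
    terms, Plotkin-style.  Terms are atoms (strings) or tuples
    (f, arg1, ..., argn)."""
    if store is None:
        store, counter = {}, [0]
    if s == t:
        return s
    same_head = (isinstance(s, tuple) and isinstance(t, tuple)
                 and s[0] == t[0] and len(s) == len(t))
    if same_head:
        # Same function symbol and arity: generalize argument-wise.
        return (s[0],) + tuple(anti_unify(a, b, store, counter)
                               for a, b in zip(s[1:], t[1:]))
    # Disagreement: map the pair (s, t) to a fresh variable, reusing
    # the same variable for repeated disagreements so that shared
    # structure is preserved in the generalization.
    if (s, t) not in store:
        store[(s, t)] = "X%d" % counter[0]
        counter[0] += 1
    return store[(s, t)]

# lgg(f(a, a), f(b, b)) = f(X0, X0), whereas lgg(f(a, b), f(b, a)) = f(X0, X1)
g = anti_unify(("f", "a", "a"), ("f", "b", "b"))
```

The library described in the paper goes well beyond this case, covering unranked first- and second-order terms, higher-order patterns, and nominal terms.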

Modelling and Simulation for the Analysis of Securities Markets
Many financial markets today are dominated by automated high-frequency trading. According to some reports, this has accounted for more than half the volume of US equity markets in recent years. High-frequency trading strategies typically rely on powerful computers, communications infrastructure and a variety of algorithms to process a large number of orders at high speed, attempting to profit by sometimes only a fraction of a cent on every trade. While this has led to significant theoretical and applied research, the area still presents many important challenges. These arise both in the strategy modelling phase, where accurate and efficient prediction of securities' price movements is required, and in the evaluation phase, where strategies must be examined under a variety of market conditions before being launched in real markets. We investigate these challenges and make contributions to both. Our modelling approach is based on clustering in a space of technical indicators, using a weighted Euclidean distance in a manner similar to certain handwriting recognition algorithms. Our evaluation environment is a market simulator that uses historical or live data and agents to reproduce the fine-grained dynamics of financial markets. This paper outlines the modelling and simulation and how they work together.
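The clustering step with a weighted Euclidean distance can be sketched as follows (a minimal illustration only; the indicator values, centroids and weights below are made-up assumptions, and the talk's actual indicator set and weighting scheme are not specified here).

```python
import math

def weighted_distance(p, q, weights):
    # Weighted Euclidean distance in the technical-indicator space.
    return math.sqrt(sum(w * (a - b) ** 2
                         for a, b, w in zip(p, q, weights)))

def assign_clusters(points, centroids, weights):
    """Assign each vector of technical indicators to its nearest
    centroid under the weighted distance, so that more informative
    indicators contribute more to the clustering."""
    return [min(range(len(centroids)),
                key=lambda k: weighted_distance(p, centroids[k], weights))
            for p in points]

points = [(0.1, 5.0), (0.9, 5.2), (0.2, 4.8)]
centroids = [(0.0, 5.0), (1.0, 5.0)]
weights = (10.0, 0.1)  # weight the first indicator far more heavily
labels = assign_clusters(points, centroids, weights)
```

With the heavy weight on the first indicator, the assignment is driven almost entirely by that coordinate, which is the effect of weighting the distance: the second indicator barely influences cluster membership.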