VSL 2014: VIENNA SUMMER OF LOGIC 2014
PRUV ON WEDNESDAY, JULY 23RD, 2014

09:00-09:15 Session 161C: Opening
Location: FH, Seminarraum 107
09:15-10:15 Session 163: Invited Talk
Location: FH, Seminarraum 107
10:15-10:45 Coffee Break
10:45-12:15 Session 166M: Vagueness to some degree
Location: FH, Seminarraum 107
10:45
In Which Sense Is Fuzzy Logic a Logic for Vagueness?

ABSTRACT. The problem of artificial precision demonstrates the inadequacy of naive fuzzy semantics for vagueness. This problem is, nevertheless, satisfactorily remedied by fuzzy plurivaluationism; i.e., by taking a class of fuzzy models (a fuzzy plurivaluation), instead of a single fuzzy model, for the semantics of a vague concept. Such a fuzzy plurivaluation in turn represents the class of models of a formal theory, preferably formulated in first- or higher-order fuzzy logic, which formalizes the meaning postulates of the vague concepts involved. The consequence relation of formal fuzzy logic then corresponds to the (super)truth of propositions involving these vague concepts. An adequate formal treatment of vague propositions by means of fuzzy logic thus consists in derivations in the formal calculus of a suitable fuzzy logic, while the particular truth degrees found in engineering applications actually pertain to artificially precisified (so no longer vague) gradual notions.
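As a rough illustration of how the pieces fit together (notation introduced here, not taken from the abstract): if T collects the meaning postulates of the vague concepts, the fuzzy plurivaluation is the class of fuzzy models of T, and supertruth is truth to degree 1 in all of them.

```latex
% Illustrative sketch only: supertruth over a fuzzy plurivaluation.
\mathcal{P} \;=\; \{\, M \mid M \models T \,\},
\qquad
\varphi \text{ is supertrue} \;\iff\; \|\varphi\|_{M} = 1 \text{ for every } M \in \mathcal{P},
\qquad
T \vdash \varphi \;\Longrightarrow\; \varphi \text{ is supertrue}.
```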

11:15
Stable Models of Fuzzy Propositional Formulas
SPEAKER: Joohyung Lee

ABSTRACT. We introduce the stable model semantics for fuzzy propositional formulas, which generalizes both fuzzy propositional logic and the stable model semantics of classical propositional formulas. Combining the advantages of both formalisms, the introduced language allows highly configurable default reasoning involving fuzzy truth values. We show that several properties of Boolean stable models are naturally extended to this formalism, and discuss how it is related to other approaches to combining fuzzy logic and the stable model semantics. 
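The paper's stable model condition itself is not reproduced here; the sketch below only illustrates the kind of fuzzy propositional evaluation the formalism builds on, with Łukasiewicz connectives as one common choice.

```python
# Minimal sketch (not the paper's definition): evaluating fuzzy propositional
# formulas over [0,1] with Lukasiewicz connectives, the kind of valuation over
# which the fuzzy stable model semantics generalizes the Boolean case.

def f_and(a, b):      # strong conjunction: max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

def f_or(a, b):       # strong disjunction: min(1, a + b)
    return min(1.0, a + b)

def f_imp(a, b):      # implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

def f_not(a):         # negation: 1 - a
    return 1.0 - a

# Example: degree of (p -> q) under a fuzzy interpretation.
v = {"p": 0.7, "q": 0.4}
print(f_imp(v["p"], v["q"]))   # 0.7 -> 0.4 evaluates to 0.7
```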

11:45
Towards a Logic of Dilation
SPEAKER: unknown

ABSTRACT. We investigate the notion of dilation of a propositional theory based on neighbourhoods in a generalized approximation space. We take both a semantic and a syntactic approach in order to define a suitable notion of theory dilation in the context of approximate reasoning on the one hand, and a generalized notion of forgetting in propositional logic on the other hand. We place our work in the context of existing theories of approximation spaces and forgetting, and show that neighbourhoods obtained by combining collective and selective dilation provide a suitable semantic framework within which to reason computationally with uncertainty in a classical setting.
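For orientation, the classical notion of forgetting in propositional logic that the abstract alludes to is standardly defined by substitution; the paper generalizes this picture via dilation, which is not reproduced here.

```latex
% Standard propositional forgetting of an atom p from a formula \varphi
% (recalled only for context; not the paper's generalized notion):
\mathrm{forget}(\varphi, p) \;\equiv\; \varphi[p/\top] \,\vee\, \varphi[p/\bot]
```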

13:00-14:30 Lunch Break
14:30-16:00 Session 172L: Reasoning for Vagueness perhaps
Location: FH, Seminarraum 107
14:30
Similarity-based Relaxed Instance Queries in EL++
SPEAKER: Andreas Ecke

ABSTRACT. Description Logic (DL) knowledge bases (KBs) allow one to express knowledge about concepts and individuals in a formal way. This knowledge is typically crisp, i.e., an individual either is an instance of a given concept or it is not. In practice this is often too restrictive: when querying for instances, one may also want to find suitable alternatives, i.e., individuals that are not instances of the query concept but could still be considered 'good enough'. Relaxed instance queries have been introduced to gradually relax this inference in a controlled way via the use of similarity measures. So far, the existing algorithms only work for the DL EL, which has limited expressive power.

In this paper, we introduce a suitable similarity measure for EL++-concepts. EL++ adds nominals, role inclusion axioms, and concrete domains to EL and thus, among other things, allows the representation and comparison of concrete values and specific individuals. We extend the algorithm to compute relaxed instance queries w.r.t. this new concept similarity measure (CSM), and thus make it work for general EL++ KBs.
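Roughly, and with notation introduced here rather than taken from the abstract, a relaxed instance query has the following shape:

```latex
% Illustrative only: a is a relaxed instance of the query concept Q
% w.r.t. a concept similarity measure "sim" and a threshold t iff some
% concept D is similar enough to Q and a is a (crisp) instance of D:
\mathcal{K} \models_{\mathrm{sim}}^{t} Q(a)
\;\iff\;
\exists D :\; \mathrm{sim}(Q, D) \ge t \;\text{ and }\; \mathcal{K} \models D(a)
```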

15:00
Resolution and Clause Learning for Multi-Valued CNF Formulas

ABSTRACT. Conflict-driven clause learning (CDCL) is the basis of SAT solvers with impressive performance on many problems. This performance has led to many reasoning tasks being carried out either by reduction to propositional CNF, or by adaptations of the CDCL algorithm to other logics. CDCL with restarts (CDCL-R) has been shown to have essentially the same reasoning power as unrestricted resolution (formally, they p-simulate each other). Here, we examine the generalization of resolution and CDCL-R to a family of multi-valued CNF formulas, which are possible reduction targets for a variety of multi-valued or fuzzy logics. In particular, we study the formulas called Signed (or Multi-Valued) CNF formulas, and the variant called Regular Formulas. The main purpose of the paper is to show that an analogous result holds for these cases: a natural generalization of CDCL-R to these logics has essentially the same reasoning power as natural versions of resolution which appear in the literature.
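The standard signed-CNF notions the abstract refers to are recalled below for context; this is textbook material on signed logic, not the paper's own contribution.

```latex
% N is a finite set of truth values; an assignment v maps atoms into N.
% A signed literal S:p (S a subset of N) is satisfied iff the value of p lies in S:
v \models S\!:\!p \;\iff\; v(p) \in S.
% Regular literals restrict S to up-sets or down-sets of a total order on N:
{\uparrow} i = \{\, j \in N \mid j \ge i \,\}, \qquad {\downarrow} i = \{\, j \in N \mid j \le i \,\}.
% Signed binary resolution:
\frac{\; S_1\!:\!p \vee C_1 \qquad S_2\!:\!p \vee C_2 \;}{\; (S_1 \cap S_2)\!:\!p \vee C_1 \vee C_2 \;}
```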

15:30
Many-valued Horn Logic is Hard

ABSTRACT. In this short paper we prove that deciding satisfiability of fuzzy Horn theories with n-valued Lukasiewicz semantics is NP-hard for any n greater than or equal to 4.
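For reference, n-valued Lukasiewicz semantics evaluates formulas over the following finite truth-value set and connectives (standard definitions, recalled here for context):

```latex
L_n = \Bigl\{\, 0, \tfrac{1}{n-1}, \tfrac{2}{n-1}, \dots, 1 \,\Bigr\},
\qquad
x \rightarrow y = \min(1,\, 1 - x + y),
\qquad
x \mathbin{\&} y = \max(0,\, x + y - 1).
```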

16:00-16:30 Coffee Break
16:30-18:00 Session 175M: Favorite reasoning procedures for preferences
Location: FH, Seminarraum 107
16:30
Learning Preferences for Collaboration
SPEAKER: Eva Armengol

ABSTRACT. In this paper we propose acquiring a set of collaboration preferences between classifiers by means of decision trees. A classifier uses k-NN with leave-one-out on its own knowledge base to generate a set of tuples with information about the object to be classified, the number of similar precedents, the maximum similarity, and whether or not it is a situation of collaboration. We consider that a classifier does not need to collaborate when it can reach the correct classification for an object by itself; otherwise it has to collaborate. These tuples are given as input to generate a decision tree from which a set of collaboration preferences is obtained.
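A minimal sketch of the pipeline described above; the names, the distance-to-similarity conversion, and the exact feature set are assumptions introduced here for illustration, not taken from the paper.

```python
# Leave-one-out k-NN over a classifier's own case base yields tuples
# (max similarity, number of similar precedents, collaborate-or-not),
# which then train a decision tree encoding collaboration preferences.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def build_collaboration_examples(X, y, k=3, sim_threshold=0.8):
    """For each case, decide via leave-one-out k-NN whether the classifier
    would have needed to collaborate, and record simple meta-features."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)       # +1: the case itself
    dist, idx = nn.kneighbors(X)
    features, needs_collab = [], []
    for i in range(len(X)):
        neigh = idx[i][1:]                                # drop the case itself
        sims = 1.0 / (1.0 + dist[i][1:])                  # distance -> similarity
        predicted = np.bincount(y[neigh]).argmax()        # majority vote of neighbours
        features.append([sims.max(), int((sims >= sim_threshold).sum())])
        needs_collab.append(int(predicted != y[i]))       # wrong alone => collaborate
    return np.array(features), np.array(needs_collab)

# Hypothetical case base: feature vectors X and class labels y.
rng = np.random.default_rng(0)
X = rng.random((40, 4))
y = rng.integers(0, 2, size=40)

F, c = build_collaboration_examples(X, y)
prefs = DecisionTreeClassifier(max_depth=3).fit(F, c)     # collaboration preferences
print(prefs.predict([[0.9, 2]]))                          # query a new situation
```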

17:00
Computing k-Rank Answers with Ontological CP-Nets

ABSTRACT. The tastes of a user can be represented in a natural way by using qualitative preferences. In this paper, we describe how to combine ontological knowledge with CP-nets to represent preferences qualitatively, enriched with domain knowledge. Specifically, we focus on conjunctive query (CQ) answering under CP-net-based preferences. We define k-rank answers to CQs based on the user’s preferences encoded in an ontological CP-net, and we provide an algorithm for k-rank answering CQs.
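A toy CP-net is shown below purely for orientation (it is the usual textbook example, not taken from the paper): each statement expresses a ceteris-paribus preference, possibly conditional on other variables, and a k-rank answer then returns the k best query answers under the order such statements induce together with the ontology.

```latex
% Two variables: main course (unconditional) and wine (conditional on the main):
\text{fish} \succ \text{meat}, \qquad
\text{fish} : \text{white} \succ \text{red}, \qquad
\text{meat} : \text{red} \succ \text{white}
```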

17:30
Multi-Attribute Decision Making using Weighted Description Logics
SPEAKER: Erman Acar

ABSTRACT. We introduce a framework based on Description Logics (DLs) that can be used to encode and solve decision problems by combining DL inference services with utility theory to represent the preferences of the agent. The novelty of the approach is that we consider ABoxes as alternatives, and weighted concept and role assertions as preferences over possible outcomes. We discuss a relevant use case to show the benefits of the approach from a decision-theoretic point of view.
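One plausible reading of the setup, with notation assumed here rather than quoted from the paper: given weighted assertions (alpha_i, w_i) encoding preferences and a TBox T, the utility of an alternative ABox sums the weights of the preferred assertions it entails, and the chosen alternative maximizes that utility.

```latex
U(\mathcal{A}) \;=\; \sum_{i \,:\, \mathcal{T} \cup \mathcal{A} \,\models\, \alpha_i} w_i,
\qquad
\mathcal{A}^{*} \;=\; \arg\max_{\mathcal{A} \in \mathit{Alt}} U(\mathcal{A})
```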