WWV 2010: Papers with Abstracts

Papers
Abstract. Answer Set Programming (ASP) has emerged in recent years as a powerful paradigm for declarative problem solving, with roots in knowledge representation and non-monotonic logic programming. Similar to SAT solving, the basic idea is to encode the solutions to a problem as the models of a non-monotonic logic program, which can then be computed by off-the-shelf reasoning engines. ASP is particularly well suited for modeling and solving problems that involve common-sense reasoning or transitive closure, and it has been fruitfully applied to a growing range of applications. Among the latter are problems in testing and verification, for which efficient core fragments of ASP that embrace Datalog have been exploited. This talk gives a brief introduction to ASP, covering the basic concepts, some of its properties and features, and solvers. It further addresses some applications in the context of verification, as well as recent developments in ASP that bring its evaluation closer to other formalisms and logics.
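As a minimal illustration of the encoding idea mentioned above (an invented example, not taken from the talk), the Datalog-style ASP program below computes the transitive closure of a graph; the sketch assumes the clingo Python API is available.

```python
# Minimal sketch (not from the talk): transitive closure in the
# Datalog fragment of ASP, solved via the clingo Python API.
# Assumes the 'clingo' package is installed (pip install clingo).
import clingo

PROGRAM = """
edge(a, b). edge(b, c). edge(c, d).

% reach/2 is the transitive closure of edge/2.
reach(X, Y) :- edge(X, Y).
reach(X, Y) :- reach(X, Z), edge(Z, Y).

#show reach/2.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])

# Each stable model encodes one solution; here there is exactly one.
ctl.solve(on_model=lambda model: print(model))
```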
Abstract. EasyChair is the most commonly used conference management system. Currently about twelve conferences or workshops per day register to use EasyChair, and about 1,200 papers are submitted to EasyChair every day. The number of users of EasyChair at the time of writing this abstract is over 300,000, which is greater than the population of Linz.

In this talk we give an overview of EasyChair and describe its philosophy, design, implementation, evolution, and future. We also discuss issues related to the formal analysis and verification of Web services.
Abstract. In this paper we present a novel, user-friendly, high-level approach to the specification of temporal properties of web documents which can be used for verification purposes. The method is based on specification patterns that support the incremental construction of commonly used consistency criteria. We show that our approach bridges the gap between a temporal logic such as CTL, a powerful tool for specifying consistency criteria for web documents, and the users who maintain documents but have little or no knowledge of the specification formalism. An empirical assessment of the usability of specification patterns for web documents confirms that a pattern-based specification yields significantly better results than direct specification in CTL.
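As a rough illustration of the pattern idea (the pattern names and mappings below are hypothetical, not the catalogue from the paper), a few high-level consistency patterns can be rendered as CTL formulas along these lines.

```python
# Hypothetical patterns rendered as CTL formulas over atomic
# propositions of a web document; not the paper's catalogue.

def always(p: str) -> str:
    """'p holds on every page along every path' -> AG p."""
    return f"AG ({p})"

def response(p: str, q: str) -> str:
    """'whenever p holds, q is eventually reachable' -> AG (p -> EF q)."""
    return f"AG ({p} -> EF ({q}))"

def exists_until(p: str, q: str) -> str:
    """'some path keeps p until q is reached' -> E [p U q]."""
    return f"E [({p}) U ({q})]"

# Example: every page that introduces a term must lead to its definition.
print(response("introduces_term", "defines_term"))
# -> AG (introduces_term -> EF (defines_term))
```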
Abstract. Algebraic specification methods, well known in the area of programming languages, are adapted to present a tailored framework for hyperdocuments and hyperdocument systems. In this framework, a hyperdocument is defined via its abstract syntax, which is a variable-free term of a suitable constructor-based signature. The representation in a markup language, the graphical presentation on the screen, and further representations are all elements of particular algebraic interpretations of the same signature. This technique allows the application of well-known methods from the field of compiler construction to the development of hyperdocument systems. Ideas for an implementation in the functional language Haskell are roughly drafted. It is shown how XML-based markup languages with schemas and stylesheets can be defined in terms of this framework, and how the framework can be extended to deal with partially specified documents, called semi-documents. Such semi-documents can be adapted automatically to users' needs, which is helpful, for example, to ensure accessibility.
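The paper drafts its implementation in Haskell; purely as an illustrative sketch of the underlying idea (a variable-free term over a constructor-based signature, interpreted by different algebras), the following Python fragment gives one invented two-constructor signature and two interpretations of the same term, one producing markup and one plain text.

```python
# Illustrative sketch only (constructor and function names invented):
# a hyperdocument as a variable-free term of a small constructor-based
# signature, with two algebraic interpretations of the same terms.
from dataclasses import dataclass

@dataclass
class Text:                 # constructor: plain character data
    content: str

@dataclass
class Section:              # constructor: titled sequence of parts
    title: str
    parts: list

def to_markup(doc) -> str:
    """Interpretation 1: render the term as XML-like markup."""
    if isinstance(doc, Text):
        return doc.content
    body = "".join(to_markup(p) for p in doc.parts)
    return f"<section><title>{doc.title}</title>{body}</section>"

def to_plain(doc) -> str:
    """Interpretation 2: render the same term as plain text."""
    if isinstance(doc, Text):
        return doc.content
    return doc.title + "\n" + "\n".join(to_plain(p) for p in doc.parts)

doc = Section("Introduction", [Text("Hyperdocuments as terms.")])
print(to_markup(doc))
print(to_plain(doc))
```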
Abstract. A new algorithm for incrementally generating counterexamples for the temporal description logic ALCCTL is presented. ALCCTL is a decidable combination of the description logic ALC and the computation tree logic CTL that is sufficiently expressive for content- and structure-related properties of web documents verified by model checking. In the case of a specification violation, existing model checkers provide a single counterexample, which may be large and complex. We extend existing algorithms for generating counterexamples in two ways. First, a coarse counterexample is generated initially, which can subsequently be refined to the desired level of detail in an incremental manner. Second, the user can choose where and in which way a counterexample is refined. This enables the interactive, step-by-step analysis of error scenarios according to the user's interest.

We demonstrate in a case study on a web-based training document that the proposed approach reveals more errors and explains the cause of errors more precisely than the counterexamples of existing model checkers. In addition, we demonstrate that the proposed algorithm is sufficiently fast to enable smooth interaction even in the case of large documents.
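Purely as a sketch of the interaction style described above (not the paper's algorithm), a coarse counterexample can be thought of as a node whose finer-grained sub-steps are only computed when the user asks for refinement.

```python
# Sketch only (not the algorithm from the paper): a counterexample node
# that starts coarse and expands on demand, so refinement is driven by
# the user's choice of where to look more closely.
class CounterexampleNode:
    def __init__(self, summary: str, expand=None):
        self.summary = summary          # coarse explanation shown first
        self._expand = expand           # callback producing finer steps
        self.children = None            # filled only on refinement

    def refine(self):
        """Compute the more detailed sub-steps on first request only."""
        if self.children is None:
            self.children = self._expand() if self._expand else []
        return self.children

root = CounterexampleNode(
    "Property violated in chapter 2",
    expand=lambda: [CounterexampleNode("Exercise page lacks a linked solution")],
)
print(root.summary)
for child in root.refine():             # user chooses to drill down here
    print(" ->", child.summary)
```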
Abstract. Geography Markup Language (GML) has been established as the standard language for the transport, storage, and modelling of geographic information. In this paper we study how to adapt the XPath query language to GML documents. To this end, we have defined a semantics-based XPath language that is based not on the (tree-based) syntactic structure of GML documents but on their “semantic structure”; in other words, the proposed XPath language is based on the GML schema instead of the syntactic structure. We have developed a system called UALGIS to implement the approach. The system stores GML documents by means of the PostGIS RDBMS. To execute semantics-based XPath queries, we have defined a translation of the queries into SQL that takes the GML schema into account. Finally, the system allows the results to be visualized; to this end, query results are exported to the Keyhole Markup Language (KML) format.
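As a purely hypothetical illustration of such a translation step (the paper defines the actual schema-aware translation; the table and column names here are invented), a semantic path query over a GML feature type might be mapped to SQL along these lines.

```python
# Hypothetical illustration only: mapping a simple semantics-based XPath
# step over a GML feature type to SQL. Table and column names are
# invented; the paper's schema-aware translation is more general.
def path_to_sql(feature_type: str, prop: str, value: str) -> str:
    table = feature_type.lower()        # assume one table per feature type
    return f"SELECT * FROM {table} WHERE {prop} = '{value}';"

# /Road[name = 'A-7']  (a semantic step over the GML schema, not the XML tree)
print(path_to_sql("Road", "name", "A-7"))
# -> SELECT * FROM road WHERE name = 'A-7';
```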
Abstract. This work presents a model for information retrieval from multiple webpages. The model does not need to pre-process, parse, or label the webpages, and thus it can work online and in real time. It introduces two new visualization techniques that allow us to automatically reconstruct a new webpage with the retrieved information; this page is structured by grouping semantically related information. The technique has been implemented, and the implementation is discussed with a focus on the main problems that arise when the proposed algorithms are integrated into a commercial web browser.