PSI 2015: 10TH INTERNATIONAL ANDREI ERSHOV MEMORIAL CONFERENCE
PROGRAM FOR THURSDAY, AUGUST 27TH

09:00-10:00 Session 15: Keynote Speech
09:00
Automated Verification of Fault-Tolerant Distributed Algorithms
SPEAKER: Helmut Veith

ABSTRACT. Distributed algorithms have numerous mission-critical applications in embedded avionic and automotive systems, cloud computing, computer networks, hardware design, and the internet of things. Although distributed algorithms exhibit complex interactions with their computing environment and are difficult to understand for human engineers, computer science has developed only very limited tool support to catch logical errors in distributed algorithms at design time.
Recent work by our research group has demonstrated that the progress of the last two decades in abstract model checking, SMT solving, and partial order reduction gives leverage for parameterized model checking techniques that are able to verify, for the first time, non-trivial fault tolerant distributed algorithms. In this talk, we will survey our results and argue that model checking has acquired sufficient critical mass to build the theory and the practical tools for the formal verification of large classes of distributed algorithms.


 

10:00-10:30 Coffee Break
10:30-13:00 Session 16: Applications
10:30
Unifying Requirements and Code: an Example
SPEAKER: unknown

ABSTRACT. Requirements and code, in conventional software engineering wisdom, belong to entirely different worlds. Is it possible to unify these two worlds? A unified framework could help make software easier to change and reuse. To explore the feasibility of such an approach, the case study reported here takes a classic example from the requirements engineering literature and describes it using a programming language framework to express both domain and machine properties. The paper describes the solution, discusses its benefits and limitations, and assesses its scalability.

11:00
Clone Detection in Reuse of Software Technical Documentation
SPEAKER: unknown

ABSTRACT. As software documentation becomes more and more complex, the efficiency of the maintenance process can be increased through documentation reuse. In this paper, we apply a software clone detection technique to automate the search for repeated fragments in software technical documentation that can be reused. Our approach supports adaptive reuse, which means extracting "near duplicate" text fragments (repetitions with variations) and producing customizable reusable elements. We present a process and a tool that can work with DocBook documentation (a widely used XML markup language), with DRL (a DocBook extension with adaptive reuse features), as well as with plain text. Our tool is based on the Clone Miner software clone detection tool and is integrated into the DocLine environment (an adaptive reuse documentation framework), providing visualization and navigation facilities for the clone groups found, and also supporting refactoring to extract clones into reusable elements.
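As a rough illustration of token-based clone search (not the actual Clone Miner algorithm; the window size and whitespace tokenization here are invented for the example), repeated fragments can be found by grouping identical token n-grams:

```python
from collections import defaultdict

def clone_groups(text, n=5):
    """Return positions of token windows of length n that occur more
    than once -- a toy stand-in for token-based clone detection."""
    tokens = text.split()
    groups = defaultdict(list)
    for i in range(len(tokens) - n + 1):
        groups[tuple(tokens[i:i + n])].append(i)
    # Keep only fragments that repeat.
    return {frag: pos for frag, pos in groups.items() if len(pos) > 1}

doc = "to install the tool run setup first and to install the tool run setup again"
groups = clone_groups(doc)   # the phrase "to install the tool run" repeats
```

A real clone detector additionally merges overlapping windows and tolerates variations ("near duplicates"), which this sketch does not.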

11:30
Analysis of DOM Structures for Site-Level Template Extraction
SPEAKER: unknown

ABSTRACT. Web templates are one of the main development resources for website engineers. Templates allow them to increase productivity by plugging content into already formatted and prepared pagelets. Templates are also useful for the final user, because they provide uniformity and a common look and feel across all webpages. From the point of view of crawlers and indexers, however, templates are an important problem, because they usually contain irrelevant information such as advertisements, menus, and banners. Processing and storing this information leads to a waste of resources (storage space, bandwidth, etc.). It has been measured that templates represent between 40% and 50% of the data on the Web. Therefore, identifying templates is essential for indexing tasks. In this work we propose a novel method for automatic template extraction based on similarity analysis between the DOM trees of a collection of webpages that is detected using a hyperlink analysis. Our implementation and experiments demonstrate the usefulness of the technique.
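The underlying idea of comparing DOM trees can be sketched as a top-down intersection: nodes that occur with the same tag in the same position across pages of a site are likely template nodes. The toy sketch below (DOM trees as (tag, children) tuples; the paper's similarity analysis and hyperlink-based page selection are considerably more involved) illustrates this:

```python
def skeleton(a, b):
    """Top-down intersection of two DOM trees given as (tag, children):
    keep a node only if both trees have the same tag at that position,
    then recurse on the aligned children."""
    tag_a, kids_a = a
    tag_b, kids_b = b
    if tag_a != tag_b:
        return None
    kids = []
    for ka, kb in zip(kids_a, kids_b):
        common = skeleton(ka, kb)
        if common is not None:
            kids.append(common)
    return (tag_a, kids)

# Two pages of a site: same header/menu (the template), different content.
page1 = ('html', [('head', []), ('body', [('nav', []), ('p', [])])])
page2 = ('html', [('head', []), ('body', [('nav', []), ('table', [])])])
template = skeleton(page1, page2)   # keeps head and nav, drops the content
```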

12:00
RSSA: A Reversible SSA Form

ABSTRACT. The SSA form (Static Single Assignment form) is used in compilers as an intermediate language as an alternative to traditional three-address code because code in SSA form is easier to analyse and optimize using data-flow analyses such as common-subexpression elimination, value numbering, register allocation and so on. We introduce RSSA, a reversible variant of the SSA form suitable as an intermediate language for reversible programming languages that are compiled to reversible machine language. The main issues in making SSA reversible are the unsuitability for SSA of the reversible updates and exchanges that are traditional in reversible languages, and the need for φ-nodes at both joins and splits of control flow. The first issue is handled by making certain uses of a variable destroy the variable, and the latter by adding parameters to labels. We show how programs in the reversible intermediate language RIL can be translated into RSSA and discuss copy propagation, constant propagation and register allocation in the context of RSSA.
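A rough way to see why destructive uses keep single assignment reversible (a toy interpreter invented for illustration, not the paper's actual RSSA syntax): if an addition consumes its first operand, every step can be undone by a subtraction.

```python
def run_forward(prog, env):
    """Execute ('add', dst, src1, src2): dst gets src1 + src2, and the
    use of src1 is destructive, i.e. src1 disappears from the
    environment. Each destination name is assigned exactly once."""
    env = dict(env)
    for op, dst, src1, src2 in prog:
        assert op == 'add' and dst not in env
        env[dst] = env.pop(src1) + env[src2]
    return env

def run_backward(prog, env):
    """Invert each step in reverse order: recover src1 as dst - src2
    and remove dst from the environment."""
    env = dict(env)
    for op, dst, src1, src2 in reversed(prog):
        env[src1] = env.pop(dst) - env[src2]
    return env

prog = [('add', 'x2', 'x1', 'y1')]               # x2 := x1 + y1, destroying x1
after = run_forward(prog, {'x1': 3, 'y1': 4})    # {'x2': 7, 'y1': 4}
before = run_backward(prog, after)               # {'x1': 3, 'y1': 4}
```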

12:30
Estimating Development Effort for Software Architectural Tactics
SPEAKER: unknown

ABSTRACT. The increased awareness of quality requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, existing approaches to defining size-based effort relationships still pay insufficient attention to this need. Furthermore, existing functional size measurement methods remain unpopular in industry. In this paper, we propose using the Analytic Hierarchy Process (AHP) technique to estimate the effort for architectural tactics derived to satisfy the quality requirements. The paper demonstrates the applicability of the approach through a case study.
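For readers unfamiliar with AHP, its core step derives priority weights from a pairwise comparison matrix. The sketch below uses the common geometric-mean approximation on an invented 2x2 comparison of two hypothetical tactics (the paper's actual hierarchy and judgments are its own):

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority vector: geometric mean of each row of
    the pairwise comparison matrix, normalized to sum to 1."""
    means = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(means)
    return [m / total for m in means]

# Invented judgment: tactic A is deemed 3x as effort-intensive as tactic B,
# so matrix[A][B] = 3 and matrix[B][A] = 1/3.
weights = ahp_weights([[1, 3], [1/3, 1]])   # -> [0.75, 0.25]
```

Note that `math.prod` requires Python 3.8 or later.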

13:00-14:00 Lunch Break
14:00-17:00 Session 17A: Tutorial
14:00
Introduction to algorithms on biosequence data

ABSTRACT. High-throughput DNA sequencing technologies generate gigabytes of sequence data in a single run of a sequencing machine. In this tutorial, we will present some modern algorithmic techniques used for processing these data. After giving a quick introduction to DNA sequencing and main computational tasks behind it, we will first focus on efficient data structures for storing these data, and especially on the so-called FM-index that recently became a very popular tool applied to biosequence analysis. We will further present several algorithms illustrating how this data structure is used in real-life applications.
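As a minimal sketch of the FM-index idea mentioned above: the index counts pattern occurrences by backward search over the Burrows-Wheeler transform of the text. The rank queries below are computed naively for clarity; real implementations use sampled rank structures to stay fast and compact.

```python
from collections import Counter

def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' terminator)."""
    text += '$'
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return ''.join(r[-1] for r in rotations)

def make_counter(text):
    """Return count(pattern): occurrences of pattern in text, found by
    FM-index backward search over the BWT."""
    last = bwt(text)
    counts = Counter(last)
    first_pos, total = {}, 0
    for c in sorted(counts):               # C[c]: chars lexicographically < c
        first_pos[c] = total
        total += counts[c]
    occ = lambda c, i: last[:i].count(c)   # naive rank query

    def count(pattern):
        lo, hi = 0, len(last)              # interval of matching suffixes
        for c in reversed(pattern):        # extend the match one char at a time
            if c not in first_pos:
                return 0
            lo = first_pos[c] + occ(c, lo)
            hi = first_pos[c] + occ(c, hi)
            if lo >= hi:
                return 0
        return hi - lo
    return count

count = make_counter('banana')
# count('ana') -> 2, count('nab') -> 0
```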

14:00-17:00 Session 17B: Tutorial
14:00
How to teach IT? Human side of IT-education

ABSTRACT. I am not too young to know everything (Oscar Wilde)

The main goal of this (semi-ironically titled) session is to draw colleagues' attention to the problem of our own imperfection. As a first approximation, let us call it the "nobody is perfect" problem. We cannot be perfect; we can be better. But who will teach us how?

In my opinion, the problem is global, which here means not only eternal, universal and fundamental, but also strictly personal and intimate for everyone. Nevertheless, in the framework of our discussion let us put it a bit more locally: namely, as a core problem of education in general and of IT education in particular. Behind the tower of Babel of uncertainties we face here, one thing is clear: the better we understand our restrictions, the better decisions we can make in the near future.

Having stated that, I plan to play the role of a sceptic towards modern attempts to ignore this problem in IT learning. On the other hand, I will argue for "marathon against sprint", suggesting a fresh look at the classical concept of education as the cultural evolution of "homo sapiens" (i.e. our own formation as relatively sane human beings).

Further, I will try to offer some arguments in favor of rolling back to such an approach in teaching the theory and practice of computer programming. These are specific types of intellectual labor, deeply rooted in the past. Hence, to understand more clearly the similarities and distinctions between past and present, we will need to pay greater attention to the earlier "evolution of pure reasoning", i.e. the formation of the modern way of thinking that hides offstage of the formation of behavior.

During the session, I will invite colleagues to consider the following issues:

  1. Introduction to the "action problem". The flight of an arrow: preconditions and postconditions, a priori and a posteriori. My goal, possibilities and restrictions. Them and us. On inertia of thought, collision of stereotypes and professional fanaticism. Your problems and mine.
  2. Specific features of education. Teaching as practical semantics. The information blow-up: the right time to gather stones together. Specifying the imperfection problem. The "three stakes of education": attitudes towards labor (spending/economizing effort), attitudes towards others (cooperation/confrontation) and attitudes towards yourself (self-rating: dignity and egomania).
  3. What is "progress"? Individuals, things and abstract notions (and the main question of philosophy, again). The "human factor": imperfect humans, perfect machines and the crisis of software development. What is wrong with the historicism of education? The crisis of external control of behavior: "Big Brother", "Sweet Mum" and other "walking dead" notions.
  4. Problems as an engine of progress. On the origins of reasoning: "Ape and banana". Safe and unsafe thinking. Funny mind games and not-so-funny crises: "Kennel and skyscraper". Safe trials and errors: modelling and motivation by problem.
  5. The "humble programmer" and "Under the spell of the Leibniz dream". Music of the spheres, the Newton machine and Turing automata. On mathematical Platonism, descriptive information theories and the evolution of software development.
  6. Help on request in IT teaching. Incremental semantics: motivating theories through practice. Example: "search for a correct map coloring". Analysis and synthesis: on the evolution of the data type notion.
  7. The enterprise with a human face. "Easy for me" and "easier for us": cautious thoughts on the evolution of labor organization. From esthetics to ethics: Agile!
  8. Conclusions. Laws of regression and the stone of Sisyphus. Global problems and "we, here and now".
14:00-17:00 Session 17C: Tutorial
14:00
Software Development Lifecycle: Anti-Crisis Optimization
SPEAKER: Sergey Zykov

ABSTRACT. The tutorial discusses tuning the lifecycle in order to address the crisis. This is a non-trivial issue: in what way can software engineering principles and practices help with a software development crisis? The tutorial discusses the differences between software project and software product lifecycles. Based on a structural analysis of the lifecycle, we outline low-cost strategies that are still affordable in terms of product quality. Further on, we identify the key software project growth factors, such as transparent communication, development discipline, standards to set and follow, and CASE applicability, to name a few. Afterwards, we search for a reliable framework for lifecycle optimization, which depends on the lifecycle model. Therewith, we keep in mind that certain models can converge and blend. Finally, the tutorial examines the strengths and weaknesses of the object-oriented model based on maintainability and reusability principles.

14:00-17:00 Session 17D: Poster Talks
14:00
Development of high performance visualization module for hydrodynamic web simulator
SPEAKER: Turar Olzhas

ABSTRACT. The paper considers the creation of a specialized visualization module for a web simulator of oil and gas fields. Both the process of creating the module and tests of the software on data taken from open sources are described. Several specific tests have also been generated to evaluate the performance and capabilities of the module. Visual comparisons with similar simulators from leading manufacturers, as well as large-data tests used for system checking, are presented.

14:15
An Initial Study on the Prediction of the Successful Completion of Requirements in Software Development

ABSTRACT. Many requirements are discarded throughout the product development process, yet resources are invested in them regardless of their fate. If there existed a model that predicted reliably and early enough whether a requirement will be deployed or not, the overall process would be more cost-effective and the software system itself of higher quality, since effort would be channeled efficiently. In this work we try to build such a predictive model by modelling the lifecycle of each requirement based on its history and capturing the underlying dynamics of its evolution. We employ a simple classification model, using the logistic regression algorithm, with features coming from an engineering understanding of the problem and patterns observed in the data. We verify the model on more than 80,000 logs from a development process of over 10 years in an Italian aeronautical company. The results are encouraging, so we plan to extend our study by, on one side, collecting more experimental data and, on the other, employing more refined modeling techniques, such as those coming from data mining and fuzzy logic.
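As a minimal illustration of the classification step (the single feature and the training data below are invented for the example; the study's actual features come from requirement histories), logistic regression can be fit by plain gradient descent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    """Fit weights [bias, w1, ...] by per-sample gradient descent on
    the logistic loss."""
    w = [0.0] * (len(samples[0][0]) + 1)
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            g = p - y                      # gradient of the logistic loss
            w[0] -= lr * g
            for i, xi in enumerate(x):
                w[i + 1] -= lr * g * xi
    return w

def predict(w, x):
    """Probability that the requirement is eventually deployed."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

# Invented toy feature: fraction of review milestones a requirement has
# passed; label 1 = eventually deployed, 0 = discarded.
samples = [([0.1], 0), ([0.2], 0), ([0.3], 0), ([0.7], 1), ([0.8], 1), ([0.9], 1)]
w = train(samples)
```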