Events 2016

Proving Performance Properties of Higher-order Functions with Memoization

Ravi Madhavan EPFL
20 Dec 2016, 11:30 am - 12:30 pm
Kaiserslautern building G26, room 112
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Static verification of performance properties of programs is an important problem that has attracted a great deal of research. However, most existing tools infer best-effort upper bounds and hope that they match users' expectations. In this talk, I will present a system for specifying and verifying bounds on resources such as the number of evaluation steps and heap-allocated objects, for functional Scala programs that may rely on lazy evaluation and memoization. In our system, users can specify the desired resource bound as a template with numerical holes in the contracts of functions, e.g. as "steps <= ? * size(list) + ?", along with other functional properties necessary for establishing the bounds. The system automatically infers values for the holes that will make the templates hold for all executions of the functions. For example, the property that a function converting a propositional formula f into negation-normal form (NNF) takes time linear in the size of f can be expressed in the post-condition of the function using the predicate "steps <= ? * size(f) + ?", where size is a user-defined function counting the number of nodes in the syntax tree of the formula. Using our tool, we have verified asymptotically precise bounds of several algorithms and data structures that rely on complex sharing of higher-order functions and memoization. Our benchmarks include balanced search trees like red-black trees and AVL trees, Okasaki’s constant-time queues, deques, lazy data structures based on numerical representations such as lazy binomial heaps, cyclic streams, and dynamic programming algorithms. Some of the benchmarks have posed serious challenges to automatic as well as manual reasoning. The system is a part of the Leon verifier and can be tried online at "http://leondev.epfl.ch" ("Resource bounds" section).

References:
(a) Symbolic Resource Bound Inference for Functional Programs. Ravichandhran Madhavan and Viktor Kuncak. Computer Aided Verification (CAV), 2014.
(b) Contract-based Resource Verification for Higher-order Functions with Memoization. Ravichandhran Madhavan, Sumith Kulal and Viktor Kuncak. To appear in POPL 2017.
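
To give a flavor of such templates, the sketch below is a plain-Scala illustration (not from the talk, and not Leon's actual API): the step count is instrumented by hand and one concrete instance of the template, steps <= 2 * size(f) + 0, is checked at runtime via ensuring, whereas Leon infers such constants statically and proves the bound for all inputs.

    // Illustration only: hand-instrumented step counting for NNF conversion.
    object ResourceTemplateSketch {
      sealed trait F
      case class Lit(name: String, pos: Boolean) extends F
      case class Not(f: F) extends F
      case class And(l: F, r: F) extends F
      case class Or(l: F, r: F) extends F

      // size(f): number of nodes in the syntax tree, as in the abstract.
      def size(f: F): Int = f match {
        case Lit(_, _) => 1
        case Not(g)    => 1 + size(g)
        case And(l, r) => 1 + size(l) + size(r)
        case Or(l, r)  => 1 + size(l) + size(r)
      }

      // Returns the NNF of f together with the number of evaluation steps.
      def nnf(f: F): (F, Int) = (f match {
        case Lit(_, _)      => (f, 1)
        case And(l, r)      => val (a, s) = nnf(l); val (b, t) = nnf(r); (And(a, b), s + t + 1)
        case Or(l, r)       => val (a, s) = nnf(l); val (b, t) = nnf(r); (Or(a, b), s + t + 1)
        case Not(Lit(n, p)) => (Lit(n, !p), 1)
        case Not(Not(g))    => val (a, s) = nnf(g); (a, s + 1)
        case Not(And(l, r)) => val (a, s) = nnf(Not(l)); val (b, t) = nnf(Not(r)); (Or(a, b), s + t + 1)
        case Not(Or(l, r))  => val (a, s) = nnf(Not(l)); val (b, t) = nnf(Not(r)); (And(a, b), s + t + 1)
      }) ensuring (res => res._2 <= 2 * size(f)) // a concrete instance of the template

      def main(args: Array[String]): Unit = {
        val f = Not(And(Lit("x", true), Or(Lit("y", true), Not(Lit("z", true)))))
        println(nnf(f)) // the NNF and its step count; the bound is checked on return
      }
    }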

Network Inference: Graph Reconstruction and Verification

Hang Zhou MPI-INF - D1
07 Dec 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
How efficiently can we find an unknown graph using shortest path queries between its vertices? This is a natural theoretical question from the standpoint of recovery of hidden information. This question is related to discovering the topology of Internet networks, which is a crucial step for building accurate network models and designing efficient algorithms for Internet applications.

In this talk, I will introduce the problems of graph reconstruction and verification via oracles. I will investigate randomized algorithms based on a Voronoi cell decomposition. I will also analyze greedy algorithms, and prove that they are near-optimal.

The talk is based on joint work with Claire Mathieu and Sampath Kannan.
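
As a point of reference for this query model, here is a minimal sketch (not from the talk; it assumes an unweighted, connected hidden graph) of the trivial reconstruction baseline: query the distance of every vertex pair and keep exactly the pairs at distance one. It needs a quadratic number of queries, which is what the Voronoi-cell and greedy algorithms discussed in the talk improve upon.

    import scala.collection.mutable

    object GraphReconstructionBaseline {
      type Graph = Map[Int, Set[Int]] // hidden graph, known only to the oracle

      // Shortest-path oracle: answers distance queries via BFS and counts them.
      class Oracle(g: Graph) {
        var queries = 0
        def dist(s: Int, t: Int): Int = {
          queries += 1
          val d = mutable.Map(s -> 0)
          val q = mutable.Queue(s)
          while (q.nonEmpty) {
            val u = q.dequeue()
            if (u == t) return d(u)
            for (v <- g(u) if !d.contains(v)) { d(v) = d(u) + 1; q.enqueue(v) }
          }
          Int.MaxValue // t unreachable from s
        }
      }

      // Baseline: in an unweighted graph, {u,v} is an edge iff dist(u,v) == 1.
      def reconstruct(vs: Seq[Int], o: Oracle): Set[(Int, Int)] =
        (for {
          i <- vs.indices
          j <- i + 1 until vs.size
          if o.dist(vs(i), vs(j)) == 1
        } yield (vs(i), vs(j))).toSet

      def main(args: Array[String]): Unit = {
        val hidden: Graph = Map(1 -> Set(2, 3), 2 -> Set(1), 3 -> Set(1, 4), 4 -> Set(3))
        val o = new Oracle(hidden)
        println(reconstruct(Seq(1, 2, 3, 4), o)) // Set((1,2), (1,3), (3,4))
        println(s"queries used: ${o.queries}")   // 6, i.e. one per vertex pair
      }
    }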

Sustaining the Energy Transition: A Role for Computer Science and Complex Networks

Marco Aiello Rijksuniversiteit Groningen
03 Nov 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
The energy sector is in the midst of an exciting transition. Driven by new generation technologies and by infrastructure digitalization, the traditional way of transmitting, distributing and using energy is transforming from a centralized, hierarchical system into a multi-directional, open infrastructure. While the vision of Intelligent Energy Networks is appealing and desirable---especially from a sustainability perspective---a number of major challenges remain to be tackled. The loss of centralized control, the intermittent nature of renewable energy sources and the scale of the future digital energy systems are novel situations for power systems infrastructures and consumers that pose reliability and availability threats.

In this talk, I show examples of how Computer Science techniques are having and will have an important role in future energy systems. I focus on electricity as an energy vector, and on techniques from Service-Oriented Computing and AI Planning. I also present Complex Network theory as a design tool for energy distribution systems. To make things concrete, I will review almost ten years of personal research that includes making office buildings energy-efficient and homes smarter, as well as futuristic models for the evolution of power distribution grids to accommodate multi-directional energy flows with distributed generation and local control.

A Promising Semantics for Relaxed-Memory Concurrency

Viktor Vafeiadis Max Planck Institute for Software Systems
02 Nov 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Despite decades of research in shared-memory concurrency and in the semantics of programming languages, there is still no adequate answer to the following fundamental question: What is the right semantics for concurrent shared-memory programs written in higher-level languages? The difficulty in answering this question is that any solution will have to balance the conflicting desiderata of programmers, compilers, and hardware. Previous attempts either allow spurious "out-of-thin-air" program behaviours or incur a non-negligible implementation cost. Nevertheless, we show that the question can be answered elegantly with the novel notion of promises: a thread may promise to execute a write in the future, thus enabling other threads to read from that write out of order. This idea alone suffices to explain most of the relaxed consistency behaviours observed as a result of compiler and hardware optimisations, while ruling out "out-of-thin-air" behaviours.
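
To make the problem concrete, here is the classic "load buffering" litmus test as a runnable Scala sketch (an illustration, not from the talk). Under an interleaving, sequentially consistent semantics the outcome a = b = 1 is impossible, yet compiler and hardware optimisations make it permissible in relaxed models; the promising semantics explains it by letting a thread promise its write of 1 in advance. Whether the weak outcome is actually observable depends on the JVM and hardware at hand.

    object LoadBufferingLitmus {
      // Plain (non-volatile) shared variables.
      var x = 0; var y = 0
      var a = 0; var b = 0

      def main(args: Array[String]): Unit = {
        val t1 = new Thread(() => { a = x; y = 1 })
        val t2 = new Thread(() => { b = y; x = 1 })
        t1.start(); t2.start(); t1.join(); t2.join()
        // a = b = 1 is allowed by relaxed models (explained via promises);
        // the out-of-thin-air variant, with `y = 1` replaced by `y = a`,
        // must remain forbidden, which is exactly what promises achieve.
        println(s"a=$a b=$b")
      }
    }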

Heap-based reasoning about asynchronous concurrency

Johannes Kloos Max Planck Institute for Software Systems
27 Oct 2016, 6:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 105
SWS Student Defense Talks - Thesis Proposal
Asynchronous concurrency is a widespread way of writing programs that deal with many short tasks. It is the programming model behind event-driven concurrency, as exemplified by GUI applications, where the tasks correspond to event handlers, web applications based around JavaScript, the implementation of web browsers, but also server-side software and operating systems. It provides a middle ground between sequential programming and multi-threading, giving the benefits of concurrency while being easier to program than multi-threading. While there are languages and libraries that make the construction of asynchronous programs relatively simple, there is little support for asynchronous program analysis. Existing work is mostly focused on model checking or performing specific targeted analyses. The model checking techniques in particular have turned out to be inefficient and completely ignore the heap. In this thesis proposal, I will address the question of how we can reason about asynchronous programs, and how I want to use this to optimize asynchronous programs. I will address three main questions:

1. How can we reason about asynchronous programs, without ignoring the heap?
2. How can we use such reasoning techniques to optimize programs involving asynchronous behavior?
3. How can we scale reasoning and optimization to apply to real-world software?

In the proposal, I will describe answers to all three questions. The unifying idea behind all of these results is the use of an appropriate model of global state (usually, the heap) and a promise-based model of asynchronous concurrency.

To address the first question, I will describe some prior work (ALST, ECOOP 2015), where we extended the OCaml type system to reason about asynchronous concurrency. To address the second and third questions, I will describe an ongoing project (JSDefer) on optimizing web pages by loading JavaScript asynchronously, and a planned project on an automated optimization step that safely changes synchronous I/O operations in OCaml programs to asynchronous ones.
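
The promise-based model of concurrency mentioned above is easy to sketch. The illustration below (not from the proposal; it uses Scala's standard Future/Promise library rather than the OCaml libraries the thesis targets) shows a promise as a write-once cell that lets one task continue once another task has produced a value, without blocking a thread in between.

    import scala.concurrent.{Await, Future, Promise}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object PromiseSketch {
      def main(args: Array[String]): Unit = {
        val p = Promise[Int]() // write-once cell; p.future is its read side

        // Task 1: eventually fulfils the promise (think: an I/O completion).
        Future { Thread.sleep(10); p.success(42) }

        // Task 2: registered as a continuation on the promised value; it
        // runs when the value arrives, without a thread blocking on it.
        val doubled: Future[Int] = p.future.map(_ * 2)

        println(Await.result(doubled, 1.second)) // 84 (Await only for the demo)
      }
    }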

Polynomial Identity Testing

Markus Bläser Fachrichtung Informatik - Saarbrücken
05 Oct 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Does randomness help to speed up computation? This is a fundamental question in computer science. Inspired by early successes like randomized primality tests, a significant number of researchers believed for a long time that randomness indeed helps speed up computations. In 1997, Impagliazzo and Wigderson proved that the existence of certain hard-to-compute functions implies that randomness can only give a speedup by a polynomial factor. Shortly after this, Agrawal, Kayal, and Saxena were able to give a deterministic polynomial time primality test.

We still do not know whether for every problem having a randomized polynomial time algorithm there is also a deterministic polynomial time one. In this talk, we will look at the flagship problem that has a randomized polynomial time algorithm, but for which we do not know whether a deterministic one exists: the so-called polynomial identity testing problem. We are given an arithmetic circuit computing some polynomial and we want to decide whether this polynomial is identically zero. While there is an easy randomized algorithm, we do not know whether a deterministic polynomial time algorithm exists, and it can be shown that the existence of such an algorithm implies lower bounds for circuit size, a partial converse to the result by Impagliazzo and Wigderson. In this talk, we will introduce the polynomial identity testing problem and its variants and give an overview of deterministic algorithms for some special cases.
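
The easy randomized algorithm referred to above rests on the Schwartz-Zippel lemma: a nonzero polynomial of total degree d vanishes at a uniformly random point of F_p^n with probability at most d/p, so evaluating the circuit at a few random points detects nonzero polynomials with high probability. A minimal sketch (an illustration, not from the talk), over a fixed prime field:

    import scala.util.Random

    object PolynomialIdentityTesting {
      sealed trait Circuit
      case class Const(c: Long) extends Circuit
      case class Var(i: Int) extends Circuit
      case class Add(l: Circuit, r: Circuit) extends Circuit
      case class Mul(l: Circuit, r: Circuit) extends Circuit

      val p = 1000003L // prime modulus; the degree must stay far below p

      def eval(c: Circuit, point: Array[Long]): Long = c match {
        case Const(k)  => ((k % p) + p) % p
        case Var(i)    => point(i)
        case Add(l, r) => (eval(l, point) + eval(r, point)) % p
        case Mul(l, r) => (eval(l, point) * eval(r, point)) % p
      }

      // Schwartz-Zippel: each trial errs with probability at most degree / p.
      def probablyZero(c: Circuit, n: Int, trials: Int = 20): Boolean = {
        val rnd = new Random()
        (1 to trials).forall { _ =>
          val point = Array.fill(n)(Math.floorMod(rnd.nextLong(), p))
          eval(c, point) == 0L
        }
      }

      def main(args: Array[String]): Unit = {
        // (x + y)^2 - (x^2 + 2xy + y^2) is identically zero.
        val (x, y) = (Var(0), Var(1))
        val square   = Mul(Add(x, y), Add(x, y))
        val expanded = Add(Mul(x, x), Add(Mul(Const(2), Mul(x, y)), Mul(y, y)))
        println(probablyZero(Add(square, Mul(Const(-1), expanded)), n = 2)) // true
        println(probablyZero(Add(x, y), n = 2)) // false, with high probability
      }
    }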

Multi-Authority ABE: Constructions and Applications

Beverly Li Hunan University
23 Sep 2016, 11:00 am - 12:00 pm
Saarbrücken building E1 5, room 029
SWS Colloquium
Attribute-based Encryption (ABE) is a form of asymmetric cryptography that allows encryption over labels named "attributes". In an ABE scheme, an "authority" generates public parameters and secrets and assigns attributes (and associated secrets) to users. Data can be encrypted using formulas over attributes; users can decrypt if they have attribute secrets that satisfy the encryption formula.

In this talk, I will discuss an extension to ABE that allows encryption over attributes provided by multiple authorities. Such a scheme enables secure data sharing between otherwise distrusting organizations. I will discuss example scenarios where multi-authority ABE is useful, and describe a new construction of a multi-authority ABE scheme named DMA.

In DMA, a data owner is a first-class principal: users in the system get attributes in cooperation with the data owner and various authorities. Compared to previous work, DMA does not require a global identity for users, or require the multiple authorities to trust a single central authority. DMA is also immune to collusion attacks mounted by users and authorities.
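
Leaving the cryptography aside, the access-structure half of ABE is simple to illustrate. The toy sketch below (not from the talk; the authority-prefixed attribute names are made up) shows only the satisfaction check that decryption hinges on; a real scheme enforces this check cryptographically rather than with an if-test.

    object AbePolicySketch {
      // Access policies: monotone formulas over attribute names.
      sealed trait Policy
      case class Attr(name: String) extends Policy
      case class And(l: Policy, r: Policy) extends Policy
      case class Or(l: Policy, r: Policy) extends Policy

      // Decryption succeeds iff the user's attribute secrets satisfy the
      // formula the data was encrypted under.
      def satisfies(attrs: Set[String], p: Policy): Boolean = p match {
        case Attr(a)   => attrs.contains(a)
        case And(l, r) => satisfies(attrs, l) && satisfies(attrs, r)
        case Or(l, r)  => satisfies(attrs, l) || satisfies(attrs, r)
      }

      def main(args: Array[String]): Unit = {
        // In a multi-authority scheme, different authorities issue the
        // attributes, e.g. a hospital and a university.
        val policy = And(Attr("hospital:doctor"),
                         Or(Attr("university:phd"), Attr("hospital:senior")))
        println(satisfies(Set("hospital:doctor", "university:phd"), policy)) // true
        println(satisfies(Set("hospital:doctor"), policy))                   // false
      }
    }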

Useful but ugly games

Ruediger Ehlers Univ. of Bremen
13 Sep 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 112
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
The controller synthesis problem for CPS is often reduced to solving omega-regular games with an optional optimization criterion. The criteria commonly used in the literature on omega-regular games are, however, frequently unsuitable for obtaining high-quality CPS controllers, as they are unable to capture many, if not most, real-world optimization objectives. We survey a few such cases and show that such problems can be overcome with more sophisticated optimization criteria. The synthesis problem for them gives rise to ugly games, i.e. games that have complicated definitions but relatively simple solutions.

Towards Privacy-Compliant Mobile Computing

Paarijaat Aditya Max Planck Institute for Software Systems
06 Sep 2016, 5:30 pm
Saarbrücken building E1 5, room 005
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Student Defense Talks - Thesis Proposal
Sophisticated mobile computing, sensing and recording devices like smartphones, life-logging cameras and Google Glass are carried by their users virtually around the clock. While these devices enable transformative new applications and services, they also introduce entirely new threats to users' privacy, because they can capture a complete record of the user's location, online and offline activities, and social encounters, including an audiovisual record. Such a record of users' personal information is highly sensitive and is subject to numerous privacy risks. In this thesis we address two specific contexts in which this problem arises: 1) risks to users' personal information introduced by a popular class of apps called mobile social apps, and 2) privacy risks due to ubiquitous digital capture, where bystanders may inadvertently (and/or against their wishes) be captured in photos and video recorded by other nearby users. Both solutions aim at putting the user back in control of what personal information is being collected and shared, while enabling innovative new applications.

Dynamics of Crowdlearning and Value of Knowledge

Utkarsh Upadhyay Max Planck Institute for Software Systems
02 Sep 2016, 11:00 am
Kaiserslautern building G26, room 517
SWS Student Defense Talks - Qualifying Exam
Learning from the crowd has become increasingly popular in the Web and social media. There is a wide variety of crowdlearning sites in which, on the one hand, users learn from the knowledge that other users contribute to the site, and, on the other hand, knowledge is reviewed and curated by the same users using assessment measures such as upvotes or likes.

In this talk, I present a probabilistic modeling framework of crowdlearning, which uncovers the evolution of a user's expertise over time by leveraging other users' assessments of her own contributions. The model allows for both off-site and on-site learning and captures forgetting of knowledge. We then develop a scalable estimation method to fit the model parameters from millions of recorded learning and contributing events.

We show the effectiveness of our model by tracing the activity of ~25 thousand users in Stack Overflow over a 4.5 year period. We find that answers with high knowledge value are rare. Newbies and experts tend to acquire less knowledge than users in the middle range. Prolific learners tend to also be proficient contributors who post answers with high knowledge value.

Domain Specific Languages for Verified Software

Damien Zufferey MIT
15 Aug 2016, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
SWS Colloquium
In this talk, I will show how we can harness the synergy between programming languages and verification methods to help programmers build reliable software, prove complex properties about it, and scale verification to industrial projects. First, I will describe P, a domain-specific language for writing asynchronous event-driven code. P isolates the control structure, or protocol, from the data processing. This allows us not only to generate efficient code, but also to test it using model checking techniques. P was used to implement and verify the core of the USB device driver stack that ships with Microsoft Windows 8 and later versions. The language abstractions and verification helped build a driver that is both reliable and fast. Then, I will introduce PSync, a domain-specific language for fault-tolerant distributed algorithms that simplifies the implementation of these algorithms, enables automated formal verification, and can be executed efficiently. Fault-tolerant algorithms are notoriously difficult to implement correctly, due to asynchronous communication and faults. PSync provides a high-level abstraction by viewing an asynchronous faulty system as a synchronous one with an adversarial environment that simulates faults. We have implemented several important fault-tolerant distributed algorithms in PSync, and we compare the implementation of consensus algorithms in PSync against implementations in other languages.
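
The separation of control structure from data processing that P enforces can be sketched with an ordinary state machine. The toy below (an illustration, not P's actual syntax; the plug/unplug protocol is made up, loosely inspired by the device-driver setting) keeps the transition relation a pure function, which is what makes it amenable to model checking, while payload handling lives elsewhere.

    object ProtocolSketch {
      // The events the machine reacts to (the protocol's alphabet).
      sealed trait Event
      case object Plug extends Event
      case object Unplug extends Event
      case class Data(bytes: Int) extends Event

      sealed trait State
      case object Detached extends State
      case object Attached extends State

      // The protocol: a pure transition function, isolated from data
      // processing, so it can be explored exhaustively by a model checker.
      def step(s: State, e: Event): State = (s, e) match {
        case (Detached, Plug)    => Attached
        case (Attached, Unplug)  => Detached
        case (Attached, Data(_)) => Attached // payload is processed elsewhere
        case (other, _)          => other    // ignore unexpected events
      }

      def main(args: Array[String]): Unit = {
        val trace = List(Plug, Data(512), Data(128), Unplug)
        println(trace.foldLeft[State](Detached)(step)) // Detached
      }
    }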

Cache-Persistence-Aware Response-Time Analysis for Fixed-Priority Preemptive Systems

Geoffrey Nelissen CISTER, Porto
10 Aug 2016, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
The existing gap between the processor and main memory operating speeds motivated the introduction of intermediate cache memories to accelerate the average access time to instructions and data accessed by programs running on the processor. The introduction of cache memories in modern computing platforms is the cause of important variations in the execution time of each task, depending on whether the instructions and data it requires are already loaded in the cache or not. Many works have focused on analyzing the impact of preemptions on the worst-case execution time (WCET) and worst-case response time (WCRT) of tasks in preemptive systems. Indeed, a preempted task may suffer additional cache misses if its memory blocks are evicted from the cache during the execution of preempting tasks. These evictions cause extra accesses to the main memory, which result in additional delays in the task's execution. This extra cost is usually referred to as cache-related preemption delay (CRPD).

Several approaches use information about the tasks' memory access patterns to bound and incorporate preemption costs into the WCRT analysis. These approaches all result in pessimistic WCRT bounds due to the fact that they do not consider the variation in memory demand for successive instances of the same task. They usually assume that the useful cache content for the task is completely erased between two of its executions. However, in actual systems, successive instances of a task may re-use most of the data and instructions that were already loaded in the cache during previous executions. During this talk, we will discuss the concept of persistent cache blocks from a task WCRT perspective, and will present how it can be used to reduce the pessimism of the WCRT analysis for fixed-priority preemptive systems. Then, we will introduce techniques exploiting this notion of cache persistence to pre-configure systems so as to improve their runtime behavior.
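
For context, the classic CRPD-aware response-time analysis that this line of work refines is a small fixed-point computation. The sketch below (an illustration, not from the talk) charges a constant CRPD for every preempting job, which is precisely the kind of pessimism that accounting for persistent cache blocks reduces.

    object ResponseTimeAnalysis {
      // A task: worst-case execution time c, period t, and the CRPD charged
      // per preemption by this task.
      case class Task(c: Long, t: Long, crpd: Long)

      // Classic WCRT recurrence for fixed-priority preemptive scheduling:
      //   R = C + sum over higher-priority j of ceil(R / T_j) * (C_j + crpd_j)
      // Iterated to a fixed point; None if R exceeds the period (deemed
      // unschedulable in this simplified setting).
      def wcrt(task: Task, higherPrio: Seq[Task]): Option[Long] = {
        var r = task.c
        while (true) {
          val next = task.c +
            higherPrio.map(j => ((r + j.t - 1) / j.t) * (j.c + j.crpd)).sum
          if (next == r) return Some(r)
          if (next > task.t) return None
          r = next
        }
        None // unreachable
      }

      def main(args: Array[String]): Unit = {
        val hp = Seq(Task(c = 1, t = 4, crpd = 0), Task(c = 2, t = 10, crpd = 1))
        println(wcrt(Task(c = 3, t = 20, crpd = 0), hp)) // Some(8)
      }
    }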

An overview of MSR-I

Chandu Thekkath Microsoft Research India
05 Aug 2016, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Distinguished Lecture Series
This talk will briefly cover the overall research agenda of the MSR Lab in Bangalore. We work in many broad areas of CS, including Algorithms, Crypto, Systems, ML, and ICT4D, among others. The talk will cover three ongoing projects to give you a sense of the breadth of our work: The Trusted Cloud, Green Spaces, and 99DOTS. The goal of the Trusted Cloud project is to explore the challenges of keeping client data stored in the Cloud secure without trusting the Cloud operator, and involves research in the disciplines of computer security, programming languages and verification, and hardware.

The Green Spaces project attempts to understand the implications of using TV spectrum to provide ubiquitous internet access in countries like India or Brazil where, unlike the US, there is plenty of unused spectrum that can be tapped. This project involves both questions in CS research as well as policy issues at the national level on spectrum allocation.

The 99DOTS project addresses the problem that arises when patients do not adhere to medications as prescribed by their doctors. Such non-adherence has severe health consequences for large populations of patients in all parts of the world. 99DOTS proposes a novel solution to ensure medication adherence in a very cost-effective way, and is used by the Indian Government for tuberculosis treatment in all its treatment centers in the country.

Learning-Based Synthesis

Daniel Neider UCLA
25 Jul 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Synthesis, the automatic construction of objects related to hardware and software, is one of the great challenges of computer science. Although synthesis problems are impossible to solve in general, learning-based approaches, in which the synthesis of an object is based on learning from examples, have recently been used to build elegant and extremely effective solutions for a large number of difficult problems. Such examples include automatically fixing bugs, translating programs from one language into another, program verification, as well as the generation of high-level code from given specifications.

This talk gives an introduction to learning-based synthesis. First, we develop a generic view on learning-based synthesis, called abstract learning frameworks for synthesis, which introduces a common terminology to compare and contrast learning-based synthesis techniques found in the literature. Then, we present a learning-based program verifier, which can prove the correctness of numeric programs (nearly) automatically, and show how this technique can be modeled as an abstract learning framework for synthesis. During the talk, we present various examples that highlight the power of the learning-based approach to synthesis.
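
The learner/verifier loop underlying such techniques fits in a few lines. Here is a deliberately tiny sketch (an illustration, not from the talk) with linear functions as a made-up hypothesis class: the learner proposes the first candidate consistent with the examples collected so far, a verifier searches for a counterexample, and each counterexample becomes a new example.

    object LearningBasedSynthesis {
      // Hypothesis class: f(x) = a * x + b for small integer a and b.
      case class Cand(a: Int, b: Int) { def apply(x: Int): Int = a * x + b }

      def synthesize(spec: Int => Int, dom: Range): Option[Cand] = {
        val space = for { a <- -5 to 5; b <- -5 to 5 } yield Cand(a, b)
        var examples = List(0 -> spec(0))
        var result: Option[Cand] = None
        var done = false
        while (!done) {
          // Learner: first candidate consistent with all examples so far.
          space.find(c => examples.forall { case (x, y) => c(x) == y }) match {
            case None => done = true // hypothesis class exhausted: give up
            case Some(cand) =>
              // Verifier: bounded check of the candidate against the spec.
              dom.find(x => cand(x) != spec(x)) match {
                case None     => result = Some(cand); done = true // verified
                case Some(cx) => examples ::= (cx -> spec(cx))    // learn from it
              }
          }
        }
        result
      }

      def main(args: Array[String]): Unit =
        println(synthesize(x => 3 * x - 2, -10 to 10)) // Some(Cand(3,-2))
    }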

Algorithmic fairness: a mathematical perspective

Suresh Venkatasubramanian University of Utah
22 Jul 2016, 2:00 pm - 3:30 pm
Saarbrücken building E1 5, room 029
SWS Colloquium
Machine learning has taken over our world, in more ways than we realize. You might get book recommendations, or an efficient route to your destination, or even a winning strategy for a game of Go. But you might also be admitted to college, granted a loan, or hired for a job based on algorithmically enhanced decision-making. We believe machines are neutral arbiters: cold, calculating entities that always make the right decision, that can see patterns that our human minds can't or won't. But are they? Or is decision-making-by-algorithm a way to amplify, extend and make inscrutable the biases and discrimination that is prevalent in society? To answer these questions, we need to go back — all the way to the original ideas of justice and fairness in society. We also need to go forward — towards a mathematical framework for talking about justice and fairness in machine learning.

Framing Dependencies Introduced by Underground Commoditization

Damon McCoy NYU
21 Jul 2016, 1:30 pm - 3:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Colloquium
Internet crime has become increasingly dependent on the underground economy: a loose federation of specialists selling capabilities, services, and resources explicitly tailored to the abuse ecosystem. Through these emerging markets, modern criminal entrepreneurs piece together dozens of à la carte components into entirely new criminal endeavors. In this talk, I'll discuss parts of this ecosystem and show that criminal reliance on this black market introduces fragile dependencies that, if disrupted, undermine entire operations that as a composite appear intractable to protect against.

When Eye Talk to You: Exploiting Eye-gaze in Spoken Communication

Maria Staudte Cluster of Excellence - Multimodal Computing and Interaction - MMCI
06 Jul 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
As many of you might know from experience (e.g. from talking to Siri, Cortana, etc.), talking to systems can still be frustrating, while humans talk seemingly effortlessly to each other. One reason why we find it easy to process language is that we exploit a variety of information available from the scene, the interaction partner, our experience and world knowledge. Specifically, we use our eyes to inspect those sources of information and try to link them in order to accomplish smooth communication. Such eye-movements are, in turn, used by the partner as additional information. It is non-trivial, however, to infer from an individual gaze cue what its exact purpose was and how it should be interpreted.

In this talk, I will explain how we identify specific gaze patterns from people as they understand and produce language, and how we successfully exploit them to inform an (artificial) interaction partner. In particular, I will present ongoing work on natural language generation in the real world, which exploits the user's gaze to monitor her understanding, as well as first steps towards developing an avatar that knows when to follow (and exploit) a user's gaze in face-to-face interaction.

Algorithms for the Quantitative Analysis of Infinite-State Systems

Christoph Haase Laboratoire Spécification et Vérification, Cachan, France
20 Jun 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Counter automata are an appealing formal model of systems and programs with an unbounded number of states that find a plethora of applications, for instance in the verification of concurrent shared-memory programs. A counter automaton comprises a finite-state controller with a finite number of counters ranging over the natural numbers that can be incremented, decremented and tested for zero when a transition is taken. Despite having been studied since the early days of computer science, many important problems about decidable subclasses of counter automata have remained unsolved. This talk will give an overview over the history and some of the progress on both theoretical and practical aspects of counter automata that has been made over the last two years, focusing on new results concerning reachability, stochastic extensions, practical decision procedures, and the challenges that lie ahead.

This talk is based on joint work with M. Blondin (Montreal), A. Finkel (Cachan), S. Haddad (Cachan), P. Hofman (Cachan), S. Kiefer (Oxford) and M. Lohrey (Siegen).
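
Counter automata are small enough to interpret directly, which makes the model easy to appreciate. A minimal sketch (an illustration, not from the talk) with increments, guarded decrements and zero-tests:

    object CounterAutomatonSketch {
      sealed trait Op
      case class Inc(counter: Int) extends Op
      case class Dec(counter: Int) extends Op    // blocked when the counter is 0
      case class IfZero(counter: Int) extends Op // fires only when it is 0

      case class Trans(from: Int, op: Op, to: Int)

      type Config = (Int, Vector[Long]) // control state plus counter values

      // One-step successors of a configuration.
      def step(trans: Seq[Trans], cfg: Config): Seq[Config] = {
        val (q, cs) = cfg
        trans.filter(_.from == q).flatMap {
          case Trans(_, Inc(i), to)                  => Some((to, cs.updated(i, cs(i) + 1)))
          case Trans(_, Dec(i), to) if cs(i) > 0     => Some((to, cs.updated(i, cs(i) - 1)))
          case Trans(_, IfZero(i), to) if cs(i) == 0 => Some((to, cs))
          case _                                     => None
        }
      }

      def main(args: Array[String]): Unit = {
        // Move the value of counter 0 into counter 1, then accept in state 2.
        val t = Seq(Trans(0, Dec(0), 1), Trans(1, Inc(1), 0), Trans(0, IfZero(0), 2))
        var cur: Seq[Config] = Seq((0, Vector(3L, 0L)))
        while (cur.nonEmpty && cur.head._1 != 2)
          cur = step(t, cur.head) // this particular automaton is deterministic
        println(cur.headOption)   // Some((2, Vector(0, 3)))
      }
    }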

Decision making at scale: Algorithms, Mechanisms, and Platforms

Dr. Ashish Goel Stanford University
09 Jun 2016, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Distinguished Lecture Series
YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism.  Twitter has a similar relationship to news media, and Coursera to Universities. But there are no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe two algorithmic approaches towards large scale decision making that we are exploring.

a) Knapsack voting and participatory budgeting: All budget problems are knapsack problems at their heart, since the goal is to pack the largest amount of societal value into a budget. This naturally leads to "knapsack voting" where each voter solves a knapsack problem, or comparison-based voting where each voter compares pairs of projects in terms of benefit-per-dollar. We analyze natural aggregation algorithms for these mechanisms, and show that knapsack voting is strategy-proof. We will also describe our experience with helping implement participatory budgeting in close to two dozen cities and municipalities, and briefly comment on issues of fairness.

b) Triadic consensus: Here, we divide individuals into small groups (say groups of three) and ask them to come to consensus; the results of the triadic deliberations in each round form the input to the next round. We show that this method is efficient and strategy-proof in fairly general settings, whereas no pair-wise deliberation process can have the same properties.

This is joint work with Tanja Aitamurto, Brandon Fain, Anilesh Krishnaswamy, David Lee, Kamesh Munagala, and Sukolsak Sakshuwong.
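
The aggregation step behind knapsack voting (item a above) can be sketched compactly. The illustration below is a simplification, not the talk's exact mechanism: each ballot is the set of projects a voter packs into the budget, projects are scored by approval counts, and the aggregate selection is assembled greedily by votes per dollar.

    object KnapsackVotingSketch {
      case class Project(name: String, cost: Int)

      def aggregate(projects: Seq[Project], ballots: Seq[Set[String]],
                    budget: Int): Seq[Project] = {
        // Score each project by how many voters packed it into their knapsack.
        val votes = projects.map(p => p -> ballots.count(_.contains(p.name))).toMap
        // Greedy knapsack over the scores: best votes-per-dollar first.
        val ranked = projects.sortBy(p => -votes(p).toDouble / p.cost)
        ranked.foldLeft((Seq.empty[Project], 0)) { case ((chosen, spent), p) =>
          if (spent + p.cost <= budget) (chosen :+ p, spent + p.cost)
          else (chosen, spent)
        }._1
      }

      def main(args: Array[String]): Unit = {
        val ps = Seq(Project("park", 60), Project("library", 50), Project("bikeLane", 40))
        val ballots = Seq(Set("park", "bikeLane"), Set("library", "bikeLane"), Set("park"))
        println(aggregate(ps, ballots, budget = 100).map(_.name)) // List(bikeLane, park)
      }
    }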

Dynamic Scheduling for High-Performance Procedural Generation on the GPU

Markus Steinberger MPI-INF - D4
01 Jun 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
In this talk, we present the concept of operator graph scheduling for high performance procedural generation on the graphics processing unit (GPU). After a brief introduction to GPU scheduling, we will present the operator graph, which forms an intermediate representation that describes all possible operations and objects that can arise during a specific procedural generation. While previous methods have focused on parallelizing a specific procedural approach, the operator graph is applicable to all procedural generation methods that can be described by a graph, such as L-systems, shape grammars, or stack-based generation methods. Using the operator graph, we show that all partitions of the graph correspond to possible ways of scheduling a procedural generation on the GPU, including the scheduling strategies of previous work. As the space of possible partitions is very large, we describe three search heuristics, aiding an optimizer in finding the fastest valid schedule for any given operator graph. The best partitions found by our optimizer improve performance by 8 to 30x over the previous state of the art in GPU shape grammar and L-system generation.

Building Fast and Consistent (Geo-)Replicated Systems: From Principles to Practice

Cheng Li Max Planck Institute for Software Systems
30 May 2016, 4:00 pm - 5:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
Distributing data across replicas within a data center or across multiple data centers plays an important role in building Internet-scale services that provide a good user experience, namely low latency access and high throughput. This approach often compromises on strong consistency semantics, which help maintain application-specific desired properties, namely, state convergence and invariant preservation. To relieve this inherent tension, in the past few years, many proposals have been designed to allow programmers to selectively weaken consistency levels of certain operations to avoid costly immediate coordination for concurrent user requests. However, these fail to provide principles to guide programmers in correctly assigning consistency levels to various operations, so that good performance is extracted while the system behavior still complies with its specification.

The primary goal of this thesis work is to provide programmers with principles and tools for building fast and consistent (geo-) replicated systems by allowing programmers to think about various consistency levels in the same framework. The first step we took was to propose RedBlue consistency, which presents sufficient conditions that allow programmers to safely separate weakly consistent operations from strongly consistent ones in a coarse-grained manner. Second, to improve the practicality of RedBlue consistency, we built SIEVE - a tool that explores both Commutative Replicated Data Types and program analysis techniques to assign proper consistency levels to different operations and to maximize the weakly consistent operation space. Finally, we generalized the tradeoff between consistency and performance and proposed Partial Order-Restrictions consistency (PoR consistency, for short) - a generic consistency definition that captures various consistency levels in terms of visibility restrictions among pairs of operations and allows programmers to tune the restrictions to obtain a fine-grained control of their targeted consistency semantics.
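
The red/blue split is easiest to see on the canonical bank-account example: deposits commute with each other and cannot violate the non-negative-balance invariant, so they may be weakly consistent (blue), while withdrawals can, so they must be strongly consistent (red). The sketch below is a deliberate simplification (not from the thesis): the "replicas" share one address space and red coordination is just a lock.

    object RedBlueSketch {
      class Replica(var balance: Long)

      val replicas = Seq(new Replica(100), new Replica(100))
      val redLock = new Object // stands in for cross-replica coordination

      // Blue: commutative and invariant-safe, so no global coordination;
      // concurrent deposits may apply in different orders at each replica.
      def deposit(amount: Long): Unit =
        replicas.foreach(r => r.synchronized { r.balance += amount })

      // Red: globally serialized so that balance >= 0 holds everywhere.
      def withdraw(amount: Long): Boolean = redLock.synchronized {
        if (replicas.head.balance < amount) false
        else { replicas.foreach(r => r.synchronized { r.balance -= amount }); true }
      }

      def main(args: Array[String]): Unit = {
        deposit(50)
        println(withdraw(120))            // true
        println(replicas.map(_.balance))  // List(30, 30)
        println(withdraw(100))            // false: the invariant is preserved
      }
    }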

Truly Continuous Mobile Sensing for Behaviour Modelling

Cecilia Mascolo University of Cambridge
27 May 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 002
SWS Distinguished Lecture Series
In this talk I will first introduce my general research interests, which span from mobility modelling and geo-social network analysis to mobile sensing and mobile systems. I will then describe our work in the area of understanding patterns of mobility through sensing, primarily using the microphone, accelerometer and gyroscope. I will then describe techniques which help make this energy-efficient, to achieve continuous sensing on smartphones and wearables. Examples will be drawn from our studies in mental health monitoring, vehicular mobility monitoring and organization analytics.

RustBelt: Logical Foundations for the Future of Safe Systems Programming

Derek Dreyer Max Planck Institute for Software Systems
04 May 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Rust is a new language developed at Mozilla Research that marries together the low-level flexibility of modern C with a strong "ownership-based" type system guaranteeing type safety, memory safety, and data race freedom. As such, Rust has the potential to revolutionize systems programming, making it possible to build software systems that are safe by construction, without having to give up low-level control over performance.

Unfortunately, none of Rust's safety claims have been formally investigated, and it is not at all clear that they hold. To rule out data races and other common programming errors, Rust's core type system prohibits the aliasing of mutable state.  This is too restrictive, however, for a number of essential data structures, and thus Rust's standard libraries make widespread internal use of "unsafe" features of the language.  The Rust developers claim that such "unsafe" code has been properly encapsulated, so that Rust's language-level safety guarantees are preserved, but this claim is called into question by various safety bugs uncovered in Rust in the past year.

In the RustBelt project, funded by an ERC Consolidator Grant, we aim to develop the first formal tools for verifying the safety of Rust. To tackle this challenge, we will build on recent breakthrough developments in the area of Concurrent Separation Logic, in particular our work on the Iris and GPS program logics.  In the talk, I will explain what Concurrent Separation Logic is, why it is relevant to the Rust verification effort, what problems remain in developing a program logic suitable for Rust, and how Iris and GPS make progress toward that goal.

Towards Trustworthy Social Computing Systems

Bimal Viswanath Max Planck Institute for Software Systems
28 Apr 2016, 9:00 am - 10:30 am
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
The rising popularity of social computing systems has managed to attract rampant forms of service abuse that negatively affect the sustainability of these systems and degrade the quality of service experienced by their users. The main factor that enables service abuse is the weak identity infrastructure used by most sites, where identities are easy to create with no verification by a trusted authority. Attackers are exploiting this infrastructure to launch Sybil attacks, where they create multiple fake (Sybil) identities to take advantage of the combined privileges associated with the identities to abuse the system. In this thesis, we present techniques to mitigate service abuse by designing and building defense schemes that are robust and practical. We use two broad defense strategies: (1) Leveraging the social network: We first analyze existing social network-based Sybil detection schemes and present their practical limitations when applied to real-world social networks. Next, we present an approach called Sybil Tolerance that bounds the impact an attacker can gain from using multiple identities; (2) Leveraging activity history of identities: We present two approaches, one that applies anomaly detection to user social behavior to detect individual misbehaving identities, and a second approach called Stamper that focuses on detecting groups of Sybil identities. We show that both approaches in this category raise the bar for defense against adaptive attackers.

Data-driven Software security: Motivation and Methods

Ulfar Erlingsson Google
20 Apr 2016, 1:00 pm - 2:30 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Distinguished Lecture Series
For computer software, our security models, policies, mechanisms, and means of assurance were primarily conceived and developed before the end of the 1970's. However, since that time, software has changed radically: it is thousands of times larger, comprises countless libraries, layers, and services, and is used for more purposes, in far more complex ways. This suggests that we should revisit some of our core computer security concepts. For example, what does the Principle of Least Privilege mean when all software contains libraries that can express arbitrary functionality? And, what security policy should be enforced when software is too complex for either its developers or its users to explain its intended behavior in detail? One possibility is to take an empirical, data-driven approach to modern software, and determine its exact, concrete behavior via comprehensive, online monitoring. Such an approach can be a practical, effective basis for security, as demonstrated by its success in spam and abuse fighting, but its use to constrain software behavior raises many questions. In particular, two questions seem critical. First, is it possible to learn the details of how software *is* behaving, without intruding on the privacy of its users? Second, are those details a good foundation for deriving security policies that constrain how software *should* behave? This talk answers both these questions in the affirmative, as part of an overall approach to data-driven security. Specifically, the talk describes techniques for learning detailed software statistics while providing differential privacy for its users, and how deep learning can help derive useful security policies that match users' expectations with intended software behavior. Those techniques are both practical and easy to adopt, and have already been used at scale for billions of users.
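
A classic building block for collecting such statistics under (local) differential privacy is randomized response, the idea at the core of mechanisms like RAPPOR: each client lies with some probability, and the server de-biases the aggregate. A minimal sketch (an illustration, not the deployed mechanism):

    import scala.util.Random

    object RandomizedResponse {
      val p = 0.5 // probability that a client reports its true bit

      // Each client reports the truth with probability p, otherwise a
      // uniformly random bit, so any single report is deniable.
      def report(truth: Boolean, rnd: Random): Boolean =
        if (rnd.nextDouble() < p) truth else rnd.nextBoolean()

      // Observed frequency f of "true" relates to the real rate t by
      //   f = p * t + (1 - p) / 2, hence t = (f - (1 - p) / 2) / p.
      def estimate(reports: Seq[Boolean]): Double = {
        val f = reports.count(identity).toDouble / reports.size
        (f - (1 - p) / 2) / p
      }

      def main(args: Array[String]): Unit = {
        val rnd = new Random(7)
        val truths = Seq.fill(100000)(rnd.nextDouble() < 0.3) // true rate: 0.3
        println(estimate(truths.map(report(_, rnd))))         // close to 0.3
      }
    }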

Telco Innovation at Home

Dina Papagiannaki Telefonica, Barcelona
14 Apr 2016, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Distinguished Lecture Series
Connectivity services have been the focus of tremendous innovation in the recent past. The majority of such innovation, however, has primarily targeted mobile devices, despite the ever-growing interest around home services. In this talk I am going to describe different types of innovation that I consider interesting for residential users and why they have or have not succeeded. The fundamental question is "What does it actually take to create interesting, novel user experiences for residential users?". In the talk I am going to focus on constraints, but also on opportunities that could create meaningful value-added services for the place that we love to call our home.

Formalizing object capabilities

David Swasey Max Planck Institute for Software Systems
13 Apr 2016, 2:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Qualifying Exam
Object capabilities let mutually distrustful principals share private state without giving up integrity, thus ostensibly making secure programming easier in the presence of buggy or malicious code. In practice, however, programmers may use object capabilities incorrectly, creating subtle security flaws. I aim to enable verification of programs that use object capabilities. In this work, I model Firefox's use of object capabilities—which is quite rich, going beyond "standard" patterns described in the literature—and develop a program logic to prove integrity properties.
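To make the object-capability idea concrete, here is a minimal sketch of a revocable, read-only capability. Python does not actually enforce encapsulation, so this illustrates the programming discipline rather than a sound enforcement mechanism; all names are invented for the example.

    class Counter:
        """Private state whose integrity the owner wants to preserve."""
        def __init__(self):
            self._value = 0
        def increment(self):
            self._value += 1
        def read(self):
            return self._value

    def make_read_capability(counter):
        """Hand out only the authority to read; writes stay with the owner."""
        revoked = False
        def read():
            if revoked:
                raise PermissionError("capability revoked")
            return counter.read()
        def revoke():
            nonlocal revoked
            revoked = True
        return read, revoke

    owner = Counter()
    read_cap, revoke = make_read_capability(owner)
    owner.increment()
    print(read_cap())  # 1 -- the distrusted party can observe the state...
    revoke()           # ...until the owner revokes the capability.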

Sustainable Reliability for Distributed Systems

Manos Kapritsos Microsoft Research, Redmond
11 Apr 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Reliability is a first-order concern in modern distributed systems. Even large, well-provisioned systems such as Gmail and Amazon Web Services can be brought down by failures, incurring millions of dollars in cost and hurting company reputation. Such service outages are typically caused by either hardware failures or software bugs. The systems community has developed various techniques for dealing with both kinds of failures (e.g., replication, software testing), but those techniques come at a significant cost. For example, our replication techniques for handling hardware failures are incompatible with multithreaded execution, forcing a stark choice between reliability and performance. As for guarding against software failures, our only real option today is to test our system as best we can and hope we have not missed any subtle bugs. In principle there exists another option, formal verification, which fully addresses this problem, but its overhead in both raw performance and programming effort has been considered far too high for real development efforts.

In this talk, I make the case for Sustainable Reliability, i.e., reliability techniques that provide strong guarantees without imposing unnecessary overhead that limits their practicality. My talk covers the challenges posed by both hardware and software failures and proposes novel techniques in each area. In particular, I will describe how we can reconcile replication and multithreaded execution by rethinking the architecture of replicated systems. The resulting system, Eve, offers an unprecedented combination of strong guarantees and high performance. I will also describe IronFleet, a new methodology that brings formal verification of distributed systems within the realm of practicality. Despite its strong guarantees, IronFleet incurs a very reasonable overhead in both performance and programming effort.
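The sketch below captures the execute-verify idea behind Eve, stated very roughly: replicas execute a batch optimistically (in Eve, with multithreading), compare state hashes, and fall back to deterministic re-execution only when they diverge. The Replica class and its methods are illustrative stand-ins, not Eve's actual interfaces.

    import hashlib, json

    class Replica:
        """Toy replica: state is a dict; execution applies a batch of writes."""
        def __init__(self):
            self.state, self._undo = {}, {}
        def snapshot(self):
            self._undo = dict(self.state)
        def rollback(self):
            self.state = dict(self._undo)
        def execute(self, batch):
            for key, val in batch:
                self.state[key] = val

    def state_hash(state):
        return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

    def execute_verify(replicas, batch):
        for r in replicas:
            r.snapshot()            # so divergence can be rolled back
            r.execute(batch)        # in Eve: optimistic multithreaded execution
        if len({state_hash(r.state) for r in replicas}) == 1:
            return "commit"         # replicas agree despite nondeterminism
        for r in replicas:          # rare path: repair deterministically
            r.rollback()
            r.execute(sorted(batch))  # fixed order stands in for sequential re-run
        return "commit-after-repair"

    replicas = [Replica() for _ in range(3)]
    print(execute_verify(replicas, [("x", 1), ("y", 2)]))  # commit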

Low Distortion Geometry Modelling

Renjie Chen MPI-INF - D4
06 Apr 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Computer graphics endeavors to deliver natural-looking and convincing graphical content, including images and geometric models, for various applications. In many cases, "natural-looking" can be interpreted as low distortion with respect to some reference. As distortions strongly affect human perception of the content, we want to produce images and geometric shapes with no distortion, or with a controlled amount of distortion, while satisfying user-defined constraints. In this talk, I will introduce the typical distortion measures used in geometry processing, and show how to reduce and control these distortions in order to generate natural-looking results in a few applications, including (a) high-level image editing: by embedding an image into a polygonal mesh, we can transform the image by deforming the mesh. Equipped with a proper definition of distortion, different image editing goals can be achieved, such as content-aware image resizing, which changes the image size while keeping its overall appearance undistorted, and rephotography, which alters a photo as if it had been shot from a different viewpoint or camera. And (b) key-frame animation, one of the most important applications in computer graphics, where modern geometry processing tools can greatly reduce the effort artists spend creating visually appealing animation sequences.
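As one concrete example of such a measure, the sketch below computes the conformal and isometric distortion of the linear map between two triangles from the singular values of its Jacobian; this is a standard textbook measure and not necessarily the exact formulation used in the talk.

    import numpy as np

    def triangle_distortion(src, dst):
        """Distortion of the linear map taking 2D triangle src to dst,
        measured via the singular values of its Jacobian."""
        S = np.column_stack([src[1] - src[0], src[2] - src[0]])
        D = np.column_stack([dst[1] - dst[0], dst[2] - dst[0]])
        J = D @ np.linalg.inv(S)               # linear part of the map
        s1, s2 = np.linalg.svd(J, compute_uv=False)
        return {"conformal": s1 / s2,          # 1.0 == angles preserved
                "isometric": max(s1, 1 / s2)}  # 1.0 == lengths preserved

    src = [np.array(p, float) for p in [(0, 0), (1, 0), (0, 1)]]
    dst = [np.array(p, float) for p in [(0, 0), (2, 0), (0, 2)]]
    # Uniform scaling: conformal distortion 1.0, isometric distortion 2.0.
    print(triangle_distortion(src, dst))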

Accountability for Distributed Systems

Andreas Haeberlen University of Pennsylvania
31 Mar 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Many of our everyday activities are now performed online - whether it is banking, shopping, or chatting with friends. Behind the scenes, these activities are implemented by large distributed systems that often contain machines from several different organizations. Usually, these machines do what we expect them to, but occasionally they 'misbehave' - sometimes by mistake, sometimes to gain an advantage, and sometimes because of a deliberate attack.

In society, accountability is widely used to counter such threats. Accountability incentivizes good performance, exposes problems, and builds trust between competing individuals and organizations. In this talk, I will argue that accountability is also a powerful tool for designing distributed systems. An accountable distributed system ensures that 'misbehavior' can be detected, and that it can be linked to a specific machine via some form of digital evidence. The evidence can then be used just like in the 'offline' world, e.g., to correct the problem and/or to take action against the responsible organizations.

I will give an overview of our progress towards accountable distributed systems, ranging from theoretical foundations and efficient algorithms to practical applications. I will also present one result in detail: a technique that can detect information leaks through covert timing channels.
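A core building block for such digital evidence is a tamper-evident log, as used in accountability systems in this line of work (e.g., PeerReview). The minimal sketch below hash-chains log entries so that any later rewriting of history is detectable; a real system would additionally sign each authenticator.

    import hashlib

    def entry_hash(prev_hash: str, message: str) -> str:
        return hashlib.sha256((prev_hash + message).encode()).hexdigest()

    class TamperEvidentLog:
        """Append-only log in which every entry commits to the whole history."""
        def __init__(self):
            self.entries, self.hashes = [], ["0" * 64]   # genesis hash
        def append(self, message: str) -> str:
            h = entry_hash(self.hashes[-1], message)
            self.entries.append(message)
            self.hashes.append(h)
            return h   # a real system signs this as an authenticator
        def verify(self) -> bool:
            h = "0" * 64
            for msg, expected in zip(self.entries, self.hashes[1:]):
                h = entry_hash(h, msg)
                if h != expected:
                    return False
            return True

    log = TamperEvidentLog()
    log.append("send m1 to B")
    log.append("recv m2 from C")
    log.entries[0] = "send m1' to B"   # a machine tries to rewrite history...
    print(log.verify())                # False -- the evidence exposes it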

Why applications are still draining our batteries, and how we can help

Aaron Schulman Stanford University
29 Mar 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Application developers lack tools to profile and compare the energy consumption of different software designs. This energy-optimization task is challenging because of unpredictable interactions between the application and increasingly complex power management logic. Yet, having accurate power information would allow application developers to both avoid inefficient designs and discover opportunities for new optimizations.

In this talk, I will show that it is possible to accurately measure system-level power and attribute it to application activities. I will present BattOr, a portable, easy-to-use power monitor that provides developers with a profile of the energy consumption of their designs, without modifications to hardware or software. I will show how Google developers are using BattOr to improve Chrome's energy efficiency. I will also show how fine-grained understanding of cellular power at different signal strengths enables novel energy optimizations. Finally, I will describe my future plans to attribute system-level power to individual hardware components and to investigate opportunities presented by instrumenting every server in a data center with fine-grained power monitoring.
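The attribution arithmetic itself is simple: given timestamped power samples like those a BattOr-style monitor produces, and the intervals during which application activities ran, the energy per activity is the integral of power over each interval. The sketch below shows only this bookkeeping; the numbers and activity names are made up, and BattOr's real contribution is obtaining accurate samples without modifying hardware or software.

    # (time in seconds, power in watts) samples, and activity intervals.
    samples = [(0.00, 1.2), (0.05, 2.8), (0.10, 2.9), (0.15, 1.1), (0.20, 1.0)]
    activities = {"radio wakeup": (0.03, 0.13), "render": (0.13, 0.20)}

    def energy(samples, start, end):
        """Trapezoidal integration of power over [start, end]."""
        total = 0.0
        for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
            lo, hi = max(t0, start), min(t1, end)
            if lo >= hi:
                continue
            power_at = lambda t: p0 + (p1 - p0) * (t - t0) / (t1 - t0)
            total += 0.5 * (power_at(lo) + power_at(hi)) * (hi - lo)
        return total

    for name, (start, end) in activities.items():
        print(f"{name}: {energy(samples, start, end) * 1000:.1f} mJ")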

What's in a Game? An intelligent and adaptive approach to security

Arunesh Sinha University of Southern California
23 Mar 2016, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Understanding the complex defender-adversary interaction in any adversarial setting allows for the design of intelligent and adaptive defenses. Game theory is a natural model for such multi-agent interaction. However, significant challenges need to be overcome in order to apply game theory in practice. In this talk, I will present my work on addressing two such challenges: scalability and learning adversary behavior. First, I will present a game model of passenger screening at airports and a novel optimization approach, based on randomized allocation and disjunctive programming techniques, that solves large instances of the problem. Next, I will present an approach that learns adversary behavior and then plans optimal defensive actions, thereby bypassing standard game-theoretic assumptions such as rationality. However, a formal Probably Approximately Correct (PAC) analysis of the learning module in such an approach reveals conditions under which learning followed by optimization can produce sub-optimal results. This emphasizes the need for formal compositional reasoning when using learning in large systems.
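For a flavor of the optimization side, the sketch below computes a defender's maximin randomized coverage in a toy zero-sum security game via linear programming. The deployed screening model is far richer (a Stackelberg formulation with randomized allocation and disjunctive programming over screening configurations), so this is only the textbook baseline; the payoff matrix is invented.

    import numpy as np
    from scipy.optimize import linprog

    # U[d, a]: defender's payoff when target d is guarded and target a attacked.
    U = np.array([[ 5, -3, -4],
                  [-2,  4, -4],
                  [-2, -3,  6]], dtype=float)
    n = U.shape[0]

    # Variables: x_0..x_{n-1} (coverage distribution) and v (game value).
    c = np.zeros(n + 1); c[-1] = -1.0              # maximize v == minimize -v
    A_ub = np.hstack([-U.T, np.ones((n, 1))])      # v - sum_d x_d U[d,a] <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, n + 1)); A_eq[0, -1] = 0.0  # sum_d x_d == 1
    b_eq = [1.0]
    bounds = [(0, 1)] * n + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("coverage:", res.x[:n].round(3), "game value:", -res.fun)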

The airport screening work was done in collaboration with the Transportation Security Administration (TSA) in the USA. The approach of learning adversary behavior was applied to predictive policing in collaboration with the University of Southern California (USC) police, and is being tested on the USC campus.

Towards a Non-Tracking Web

Istemi Ekin Akkus Max Planck Institute for Software Systems
21 Mar 2016, 2:00 pm - 3:30 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Defense
Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.

In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it requires publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first by enabling publishers to discover previously unknown string values to be used as potential answers, in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today's tracking services.

Performance-aware Repair for Concurrent Programs

Arjun Radhakrishna University of Pennsylvania
21 Mar 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
We present a recent line of work on automated synthesis of synchronization constructs for concurrent programs. Our techniques are inspired by a study of the most common types of concurrency bugs and bug fixes in Linux device drivers. As opposed to classical techniques, which tend to use expensive synchronization constructs, our technique attempts to use inexpensive program transformations, such as reordering independent statements, to improve the performance of generated fixes.

Our techniques are based on the observation that a large fraction of concurrency bugs are data-independent. This observation allows us to characterize and fix concurrency bugs based only on the order of execution of the statements involved. We evaluated our techniques on several real concurrency bugs that occurred in Linux device drivers, and showed that our synthesis procedure is able to produce more efficient and "programmer-like" bug fixes.
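An illustrative instance of such an inexpensive fix, not taken from the tool itself: a ready flag published before the data it guards creates an ordering bug, and reordering the two independent writes repairs it without adding a lock. (In C, the flag would additionally need to be an atomic with suitable memory ordering; this Python sketch shows only the ordering logic.)

    import threading

    shared = {"data": None, "ready": False}

    def producer_buggy():
        shared["ready"] = True   # bug: consumer may see ready before data
        shared["data"] = 42

    def producer_fixed():
        shared["data"] = 42      # reordering the two independent writes
        shared["ready"] = True   # fixes the bug without any lock

    def consumer(out):
        while not shared["ready"]:
            pass
        out.append(shared["data"])  # with producer_buggy this can be None

    out = []
    t1 = threading.Thread(target=producer_fixed)
    t2 = threading.Thread(target=consumer, args=(out,))
    t2.start(); t1.start(); t1.join(); t2.join()
    print(out)  # [42]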

We finish the talk with a brief note on the general theme of soft specifications, such as performance and energy consumption, in program synthesis. Specifically, we will discuss the use of quantitative specifications and their applications to resource management in embedded and cyber-physical systems.

Online social interactions: a lens on humans and a world for humans

Chenhao Tan Cornell University
17 Mar 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Online social interactions have become an integral part of people's lives, e.g., presidential candidates use Facebook and Twitter to engage with the public, programmers rely on Stackoverflow to write code, and various communities have been forming online. This unprecedented amount of social interaction offers tremendous opportunities to understand human behavior. Such an understanding can induce significant social impact, ranging from influencing election outcomes to better communication for everyone. 

My research leverages newly available massive datasets of social interactions to understand human behavior and predict human decisions. These results can be used to build or improve socio-technical systems. In this talk, I will explain my research at both micro and macro levels. At the micro level, I investigate the effect of wording in message sharing via natural experiments. I develop a classifier that outperforms humans in predicting which tweet will be retweeted more. At the macro level, I examine how users engage with multiple communities and find that, surprisingly, users continually explore new communities on Reddit. Moreover, their exploration patterns in their early "life" can be used to predict whether they will eventually abandon Reddit. I will finish with some discussion of future research directions in understanding human behavior.
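As a toy illustration of the micro-level task, the sketch below trains a classifier on pairs of tweets using a few surface features. The real study controls for author and topic by comparing tweets posted by the same user about the same URL, and uses much richer features; everything below, data included, is invented.

    from sklearn.linear_model import LogisticRegression

    def features(tweet):
        words = tweet.lower().split()
        return [len(words),                               # length
                sum(w.startswith("#") for w in words),    # hashtags
                int("please" in words or "rt" in words)]  # explicit ask

    # Each example: (tweet_a, tweet_b, 1 if a was retweeted more than b).
    pairs = [("Please RT: big news from the lab #science", "we have news", 1),
             ("lunch was ok", "Huge result today, please share! #ai", 0),
             ("Breaking: new paper out #nlp please RT", "new paper", 1),
             ("meh", "Check this out! #data please RT", 0)]

    X = [[fa - fb for fa, fb in zip(features(a), features(b))]
         for a, b, _ in pairs]
    y = [label for _, _, label in pairs]
    clf = LogisticRegression().fit(X, y)

    a, b = "Please RT #breaking", "some words"
    diff = [[fa - fb for fa, fb in zip(features(a), features(b))]]
    print("tweet A predicted to win:", clf.predict(diff)[0] == 1)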

Securing the Internet by Proving the Impossible

Dave Levin University of Maryland
14 Mar 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
The state of Internet security today is largely reactive, continually raising the defensive bar in response to increasingly sophisticated attackers. In this talk, I will present an alternate approach to building Internet systems that underlies much of my work: instead of reactively working around some attacks, what if we were to make them impossible in the first place?

I will discuss two primitives my collaborators and I have created that provide small proofs of impossibility, and I will demonstrate how they can be applied to solve large-scale problems, including censorship resistance, digital currency, and online voting. First, I will present TrInc, a small piece of trusted hardware that provides proof that an attacker could not have sent conflicting messages to others. Second, I will present Alibi Routing, a peer-to-peer system that provides proof that a user's packets could not have traversed a region of the world that the user asked to avoid. Finally, I will describe some of my ongoing and future efforts, including securing the Web's public key infrastructure.
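The sketch below shows the essence of a TrInc-style trusted counter: every attestation consumes a fresh counter value, so a node provably cannot bind two conflicting messages to the same position. Real TrInc uses asymmetric signatures inside trusted hardware; the HMAC and the field layout here are illustrative simplifications.

    import hashlib, hmac

    class TrincLikeCounter:
        """Toy trusted counter: binds a message hash to a unique counter
        interval, making equivocation detectable."""
        def __init__(self, secret: bytes):
            self._secret, self._counter = secret, 0
        def attest(self, msg_hash: bytes):
            prev, self._counter = self._counter, self._counter + 1
            payload = (msg_hash + prev.to_bytes(8, "big")
                       + self._counter.to_bytes(8, "big"))
            tag = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
            return prev, self._counter, tag

    trinc = TrincLikeCounter(b"device-internal-key")
    h = hashlib.sha256(b"block 7: pay Alice 5").digest()
    print(trinc.attest(h))  # (0, 1, <tag>): counter value 1 is now spent;
                            # no conflicting message can ever claim it.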

Algorithms and Tools for Verification and Testing of Asynchronous Programs

Zilong Wang Max Planck Institute for Software Systems
10 Mar 2016, 3:30 pm - 4:30 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Defense
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming, in which processes communicate by posting messages/tasks to others' message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using JavaScript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.

Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques that guarantee their correctness.

In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result that proves that a counter-abstraction-based algorithm called expand-enlarge-check is an asymptotically optimal algorithm for the coverability problem of branching vector addition systems, a model that captures many asynchronous programs. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
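To make counter abstraction concrete: forget task identities and track only how many tasks of each type are pending, then explore the resulting counter states. The sketch below does a bounded exploration in that style; it is a toy that, in particular, omits the partial order reduction the verifiers combine with it, and the handler table is invented.

    from collections import deque

    # Handling one pending task of a type posts new tasks of other types.
    HANDLERS = {
        "req":  [("work", 1), ("log", 1)],
        "work": [],
        "log":  [],
    }

    def successors(state):
        """state maps task type -> pending count (identities abstracted away)."""
        for task, count in state.items():
            if count == 0:
                continue
            nxt = dict(state)
            nxt[task] -= 1
            for posted, k in HANDLERS[task]:
                nxt[posted] = nxt.get(posted, 0) + k
            yield nxt

    def reachable(initial, bound=3):
        """BFS over counter states, capping each counter at `bound`."""
        freeze = lambda s: tuple(sorted((t, min(c, bound)) for t, c in s.items()))
        seen, queue = {freeze(initial)}, deque([initial])
        while queue:
            for nxt in successors(queue.popleft()):
                if freeze(nxt) not in seen:
                    seen.add(freeze(nxt))
                    queue.append(nxt)
        return seen

    print(len(reachable({"req": 2, "work": 0, "log": 0})))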

Stamping Out Concurrency Bugs

Baris Kasikci EPFL
10 Mar 2016, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
The shift to multi-core architectures in the past ten years pushed developers to write concurrent software to leverage hardware parallelism. The transition to multi-core hardware happened at a more rapid pace than the evolution of associated programming techniques and tools, which made it difficult to write concurrent programs that are both efficient and correct. Failures due to concurrency bugs are often hard to reproduce and fix, and can cause significant losses.

In this talk, I will first give an overview of the techniques we developed for the detection, root cause diagnosis, and classification of concurrency bugs. Then, I will discuss how the techniques we developed have been adopted at Microsoft and Intel. I will then discuss in detail Gist, a technique for the root cause diagnosis of failures. Gist uses hybrid static-dynamic program analysis and gathers information from real user executions to isolate root causes of failures. Gist is highly accurate and efficient, even for failures that rarely occur in production. Finally, I will close by describing future work I plan to do toward solving the challenges posed to software systems by emerging technology trends.

Human Behavior in Networks

Robert West Stanford University
07 Mar 2016, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
SWS Colloquium
Humans as well as information are organized in networks. Interacting with these networks is part of our daily lives: we talk to friends in our social network; we find information by navigating the Web; and we form opinions by listening to others and to the media. Thus, understanding, predicting, and enhancing human behavior in networks poses important research problems for computer and data science with practical applications of high impact. In this talk I will present some of my work in this area, focusing on (1) human navigation of information networks and (2) person-to-person opinions in social networks.

Network navigation constitutes a fundamental human behavior: in order to make use of the information and resources around us, we constantly explore, disentangle, and navigate networks such as the Web. Studying navigation patterns lets us understand better how humans reason about complex networks and lets us build more human-friendly information systems. As an example, I will present an algorithm for improving website hyperlink structure by mining raw web server logs. The resulting system is being deployed on Wikipedia's full server logs at terabyte scale, producing links that are clicked 10 times as frequently as the average link added by human Wikipedia editors.
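A toy version of the log-mining idea: sessions in which users repeatedly reach a target page only through intermediate clicks signal demand for a direct link. The deployed system instead estimates, from terabyte-scale logs, how often each candidate link would actually be clicked; the sessions and link set below are fabricated.

    from collections import Counter

    # Simplified server-log sessions: the sequence of pages each user visited.
    sessions = [
        ["Lion", "Carnivore", "Felidae", "Cheetah"],
        ["Lion", "Felidae", "Cheetah"],
        ["Lion", "Carnivore", "Cheetah"],
        ["Berlin", "Germany"],
    ]
    existing_links = {("Lion", "Carnivore"), ("Carnivore", "Felidae"),
                      ("Felidae", "Cheetah"), ("Lion", "Felidae"),
                      ("Carnivore", "Cheetah"), ("Berlin", "Germany")}

    demand = Counter()
    for path in sessions:
        src, dst = path[0], path[-1]
        if len(path) > 2 and (src, dst) not in existing_links:
            demand[(src, dst)] += 1   # reached only via detours

    # Pages frequently reached indirectly are candidates for a direct link.
    print(demand.most_common(1))      # [(('Lion', 'Cheetah'), 3)]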

Communication and coordination through natural language is another prominent human network behavior. Studying the interplay of social network structure and language has the potential to benefit both sociolinguistics and natural language processing. Intriguing opportunities and challenges have arisen recently with the advent of online social media, which produce large amounts of both network and natural language data. As an example, I will discuss my work on person-to-person sentiment analysis in social networks, which combines the sociological theory of structural balance with techniques from natural language processing, resulting in a machine learning model for sentiment prediction that clearly outperforms both text-only and network-only versions.

I will conclude the talk by sketching interesting future directions for computational approaches to studying and enhancing human behavior in networks.

Efficient Formally Secure Compilers to a Tagged Architecture

Catalin Hritcu Inria, Paris
22 Feb 2016, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Severe low-level vulnerabilities abound in today's computer systems, allowing cyber-attackers to remotely gain full control. This happens in big part because our programming languages, compilers, and architectures were designed in an era of scarce hardware resources and too often trade off security for efficiency. The semantics of mainstream low-level languages like C is inherently insecure, and even for safer languages, establishing security with respect to a high-level semantics does not guarantee the absence of low-level attacks. Secure compilation using the coarse-grained protection mechanisms provided by mainstream hardware architectures would be too inefficient for most practical scenarios.

In this talk I will present a new project that is aimed at leveraging emerging hardware capabilities for fine-grained protection to build the first efficient secure compilers for realistic programming languages, both low-level (the C language) and high-level (ML and F*, a dependently-typed variant). These compilers will provide a secure semantics for all programs and will ensure that high-level abstractions cannot be violated even when interacting with untrusted low-level code. To achieve this level of security without sacrificing efficiency, our secure compilers target a novel tagged architecture, which associates a metadata tag with each word and efficiently propagates and checks tags according to software-defined rules. Formally, our goal is full abstraction with respect to a secure high-level semantics. This property is much stronger than compiler correctness alone and ensures that no machine-code attacker can do more harm to securely compiled components than a component in the secure source language already could.
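The mechanism can be modeled in a few lines: every word carries a tag, and a software-defined rule table decides whether each operation is allowed and how its result is tagged. The sketch below runs a simple taint-tracking policy in that style; the rule format and the policy are illustrative, not the project's actual micro-policy machine.

    # Each machine word is a (value, tag) pair; rules are software-defined.
    RULES = {
        ("add", "clean", "clean"): "clean",
        ("add", "clean", "taint"): "taint",
        ("add", "taint", "clean"): "taint",
        ("add", "taint", "taint"): "taint",
    }

    def step(op, a, b):
        (va, ta), (vb, tb) = a, b
        out_tag = RULES.get((op, ta, tb))
        if out_tag is None:
            raise RuntimeError(f"policy violation: {op} on tags {ta}, {tb}")
        return (va + vb, out_tag)

    def store_to_code_region(word):
        value, tag = word
        if tag == "taint":
            raise RuntimeError("policy violation: tainted write to code region")
        return value

    x = (5, "clean")
    y = (7, "taint")           # e.g., derived from untrusted input
    z = step("add", x, y)      # allowed; the taint propagates to the result
    print(z)                   # (12, 'taint')
    try:
        store_to_code_region(z)  # blocked: the tags stop the injection
    except RuntimeError as e:
        print(e)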

Timing Guarantees for Cyber-Physical Systems

Linh Thi Xuan Phan University of Pennsylvania
15 Feb 2016, 10:30 am - 10:45 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Cyber-physical systems -- such as cars, pacemakers, and power plants -- need to interact with the physical world in a timely manner to ensure safety. It is important to have a way to analyze these systems and to prove that they can meet their timing requirements. However, modern cyber-physical systems are increasingly complex: they can involve thousands of tasks running on dozens of processors, many of which can have multiple cores or shared caches. Existing techniques for ensuring timing guarantees cannot handle this level of complexity. In this talk, I will present some of my recent work that can help to bridge this gap, such as overhead-aware compositional scheduling and analysis. I will also discuss some potential applications, such as real-time cloud platforms and intrusion-resistant cyber-physical systems.
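For context, the sketch below implements the classic uniprocessor response-time analysis for fixed-priority periodic tasks, the kind of basic test that compositional, overhead-aware analyses build on and generalize. Scheduling and cache overheads can be approximated by inflating the execution-time terms; the task set is invented.

    import math

    def response_times(tasks):
        """tasks: list of (C, T) = (worst-case execution time, period),
        sorted highest priority first; deadline D = T. Returns the
        response time of each task, or None if some task is unschedulable."""
        results = []
        for i, (C, T) in enumerate(tasks):
            R = C
            while True:
                interference = sum(math.ceil(R / Tj) * Cj
                                   for Cj, Tj in tasks[:i])
                R_next = C + interference
                if R_next == R:
                    break                # fixed point reached
                if R_next > T:
                    return None          # deadline miss
                R = R_next
            results.append(R)
        return results

    # Rate-monotonic priorities: shorter period = higher priority.
    print(response_times([(1, 4), (2, 6), (3, 12)]))  # [1, 3, 10]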

ANON: Practical Privacy Preserving Data Analytics

Reinhard Munz Max Planck Institute for Software Systems
09 Feb 2016, 3:30 pm - 4:30 pm
Kaiserslautern building G26, room 111
SWS Student Defense Talks - Qualifying Exam
Privacy preserving analytics on statistical databases is an old problem. Analysts want to query high-quality statistical information across the user population of a database. At the same time, obtaining isolated information about a single user needs to be prevented. User privacy can be successfully protected by a number of techniques, such as adding noise to the content or results of a database or limiting the number of possible queries. Unfortunately, all of them reduce the quality of statistics and usability to a point that none has been widely adopted by industry. In this report we explore the extent to which one can lift the limitations on statistical quality without giving up much in terms of privacy protection. We present ANON, a new point in the design space that, contrary to many previous techniques, also takes queries into account. Through limiting and active monitoring of queries and results, ANON is able to reduce the amount of noise needed and increase the number of queries allowed. We describe and analyze the design of ANON as well as our findings from experimental attacks on real-world data.
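As a baseline for what ANON improves on, the sketch below shows the standard recipe the abstract alludes to: Laplace noise on counting queries plus a hard cap on the number of queries. ANON's actual design monitors queries and results so that less noise and more queries suffice; none of the code below is taken from it.

    import random

    class NoisyCountingDB:
        """Toy statistical database: noised counting queries plus a crude
        privacy budget enforced as a maximum number of queries."""
        def __init__(self, rows, epsilon=0.1, max_queries=100):
            self.rows, self.eps, self.remaining = rows, epsilon, max_queries
        def count(self, predicate):
            if self.remaining == 0:
                raise RuntimeError("query budget exhausted")
            self.remaining -= 1
            true_count = sum(1 for r in self.rows if predicate(r))
            # A counting query has sensitivity 1, so Laplace(1/eps) noise
            # (here: a difference of two exponentials) gives eps-DP per query.
            noise = random.expovariate(self.eps) - random.expovariate(self.eps)
            return true_count + noise

    db = NoisyCountingDB([{"age": a} for a in (23, 35, 41, 29, 62, 18)])
    print(db.count(lambda r: r["age"] >= 30))  # about 3, plus noise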

Computational Modelling of Human Language Comprehension

Vera Demberg Cluster of Excellence - Multimodal Computing and Interaction - MMCI
03 Feb 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Predicting the prototypical "who", "what" and "where" of eating

Being able to learn information about prototypical event representations, i.e., learning probability distributions over the typical "who"/"whom"/"how"/"where" for specific verbs, has many applications in natural language understanding (e.g., inferring a location even if it is not mentioned) as well as cognitive modelling. Consider, for example, the verb "to serve". A model also needs to encode which event participants typically occur together: waiters serve food in a restaurant, mum serves food at home, priests serve God in a church, and prisoners serve sentences in jail.

In this talk, I will give an overview of my group's current research in psycholinguistics, cognitive modelling and NLP and will then delve into more depth on recent models we have developed to address the above question. Specifically, I will discuss the use of large vector space models compared to neural networks on the task of predicting typical thematic role fillers based on large amounts of text.

Parameterized Algorithms and the Structure of Networks

Erik Jan van Leeuwen MPI-INF - D1
13 Jan 2016, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
How many friends do you have? Are you and I friends, or friends of common friends, or friends of friends of friends? What is the largest group of your friends that are also friends of each other? Well-known algorithms exist to efficiently compute the answer to the first two questions. Stunningly, no algorithm is likely to exist that efficiently answers the third question. A major caveat, however, is that this last observation assumes that we have no specific knowledge about the network.

In this talk, I will investigate how knowledge of the structure of a network can be exploited in the quest for efficient algorithms. I will also employ a novel paradigm in algorithm design and analysis called parameterized algorithms. In recent work, I combined this paradigm with structural properties of networks to achieve fine-grained dichotomies for the algorithmic complexity of many problems. In particular, I will take you on a journey from the Clique problem in social networks to the Dominating Set problem in an intriguing class of networks called t-claw-free graphs.
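The flavor of parameterized algorithms is easiest to see on the textbook example of Vertex Cover, a relative of the problems in the talk: a simple branching rule gives a running time exponential only in the parameter k, not in the size of the network. This sketch is the standard O(2^k * |E|) branching, not an algorithm from the talk itself.

    def vertex_cover(edges, k):
        """Return a set of at most k vertices touching every edge, or None.
        Branching: for any uncovered edge (u, v), some endpoint must be in
        the cover, so try both -- at most 2^k leaves."""
        if not edges:
            return set()
        if k == 0:
            return None
        u, v = edges[0]
        for choice in (u, v):
            rest = [e for e in edges if choice not in e]
            sub = vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {choice}
        return None

    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
    print(vertex_cover(edges, 2))  # e.g. {'a', 'c'}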

Verasco, a formally verified C static analyzer

Jacques-Henri Jourdan INRIA
06 Jan 2016, 1:00 pm - 2:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 005
SWS Colloquium
This talk will present the design and soundness proof of Verasco, a formally verified static analyzer for most of the ISO C99 language (excluding recursion and dynamic allocation), developed using the Coq proof assistant. Verasco aims at establishing the absence of run-time errors in the analyzed programs. It enjoys a modular architecture that supports the extensible combination of multiple abstract domains, both relational and non-relational. It includes a memory abstract domain, an abstract domain of arithmetical symbolic equalities, an abstract domain of intervals, an abstract domain of arithmetical congruences, and an octagonal abstract domain.

Verasco integrates with the CompCert formally-verified C compiler, so that not only are the analysis results sound with mathematical certainty, but these guarantees also carry over to the compiled code.
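A minimal model of one of these domains, the interval domain, is sketched below: abstract values over-approximate the sets of possible run-time values, so a bounds check that succeeds abstractly can never fail concretely. The real analyzer combines several domains and uses widening to handle loops; this toy omits all of that.

    class Interval:
        """Non-relational abstract domain of integer intervals [lo, hi]."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def join(self, other):
            """Least upper bound, used when control-flow paths merge."""
            return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    def check_index(idx, array_len):
        """Sound check: 'safe' only if no run-time error is possible."""
        return "safe" if 0 <= idx.lo and idx.hi < array_len else "alarm"

    i = Interval(0, 3)
    j = i + Interval(1, 1)                      # abstractly run j = i + 1
    print(j, check_index(j, 10))                # [1, 4] safe
    print(check_index(j + Interval(6, 6), 10))  # [7, 10] alarm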

Embedded Control Systems: From Theory to Implementation

Amir Aminifar Linköping University
05 Jan 2016, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Today, many embedded and cyber-physical systems, e.g., in the automotive domain, comprise several control applications. Guaranteeing the stability of these control applications is perhaps the most fundamental requirement when implementing them. Often, however, such systems are designed without considering the impact of the implementation. In this case, the guarantees provided at design time might not be preserved in the final implementation. In this talk, we discuss the implementation-aware design of embedded control systems.