
Invited Speakers

Jetty Kleijn | Leiden University, The Netherlands

Concurrency, Histories, and Nets

We discuss an approach to lift the partial order model of concurrency, as exemplified by Mazurkiewicz traces and Elementary Net systems, to the case where events may happen simultaneously. Behavioural observations are then no longer given in terms of sequences (words), but are based on steps, i.e., sets of simultaneously occurring actions. To identify step sequences that represent observations of the same concurrent computation, additional information on the relationships between actions has to be provided. The resulting equivalence classes are referred to as step traces. Partial orders alone are not expressive enough to describe the invariant, common relationships between the symbol occurrences in the step sequences forming a step trace. Instead we use general invariant structures, which can express 'not later than' and 'mutual exclusion' relations. A history is then a set of saturated labelled invariant structures that can be used to describe the elements of a step trace. Elementary Net systems extended with inhibitor arcs and mutex arcs provide a system model that fits the concurrency paradigm for such histories.
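To make the notion of a step sequence concrete, the following minimal sketch represents steps as sets of actions and shows how one observed step can be split into equivalent sequential observations when its actions are serialisable. The representation and the `serialisable` relation are illustrative assumptions for this sketch, not the formalism of the talk.

```python
from itertools import combinations

# Illustrative sketch (not the talk's formalism): a step is a set of
# simultaneously occurring actions; a step sequence is a list of steps.

def split_step(step, serialisable):
    """Yield two-step splittings (A, B) of a step, allowed only when
    every action in A is serialisable before every action in B."""
    step = frozenset(step)
    actions = sorted(step)
    for i in range(1, len(actions)):
        for first in combinations(actions, i):
            a, b = frozenset(first), step - frozenset(first)
            if all(serialisable(x, y) for x in a for y in b):
                yield a, b

def one_step_refinements(seq, serialisable):
    """All step sequences obtained by splitting one step of seq."""
    out = []
    for k, step in enumerate(seq):
        for a, b in split_step(step, serialisable):
            out.append(seq[:k] + [a, b] + seq[k + 1:])
    return out

# Example: a and b are concurrent (serialisable either way), so the
# step {a, b} can also be observed as (a)(b) or as (b)(a).
conc = {("a", "b"), ("b", "a")}
ser = lambda x, y: (x, y) in conc
refs = one_step_refinements([frozenset({"a", "b"})], ser)
```

Repeatedly applying such refinements (and the converse combinations) generates the equivalence class of observations, i.e., the step trace of the computation under the assumed relations.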



Linqiang Pan | Huazhong University of Science and Technology, China

Spiking Neural P Systems: A Class of Parallel Computing Models Inspired by Neurons

Biological systems, from a single cell to complexes of cells, are a rich source of computing ideas (data structures, operations on data, ways to control operations, computing models, etc.). In this talk we will describe a class of parallel computing models inspired by neurons, called spiking neural P systems. We begin with an overview of spiking neural P systems, covering their motivation, biological background (at a level accessible to a mathematician or computer scientist), the basic ingredients and functioning of a spiking neural P system, and their relationship to, and differences from, traditional spiking neural networks. We then describe some classical and recent results on spiking neural P systems. Finally, an outlook will be given, with a discussion of applications.
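As a rough intuition for the "ingredients and functioning" mentioned above, the sketch below simulates one synchronous step of a much-simplified spiking neural P system: neurons hold spike counts, a rule fires when its applicability predicate holds (standing in for the usual regular-expression condition over spikes), consumes some spikes, and sends spikes along every outgoing synapse. All names and the rule encoding are assumptions of this sketch, not a standard API.

```python
# Simplified sketch of one synchronous step of a spiking neural
# P system. Rules are (applicable, consume, produce) triples; the
# 'applicable' predicate on the spike count stands in for the
# regular-expression firing condition of the real model.

def step(spikes, rules, synapses):
    """One synchronous step: each neuron applies at most one rule."""
    delta = {n: 0 for n in spikes}
    for neuron, count in spikes.items():
        for applicable, consume, produce in rules.get(neuron, []):
            if applicable(count):
                delta[neuron] -= consume          # spikes consumed
                for target in synapses.get(neuron, []):
                    delta[target] = delta.get(target, 0) + produce
                break                             # one rule per neuron
    return {n: spikes[n] + delta.get(n, 0) for n in spikes}

# Two neurons: n1 forwards a spike to n2 whenever it holds exactly one.
spikes = {"n1": 1, "n2": 0}
rules = {"n1": [(lambda k: k == 1, 1, 1)]}
synapses = {"n1": ["n2"]}
after = step(spikes, rules, synapses)  # {"n1": 0, "n2": 1}
```

The real model adds features this sketch omits, such as delays on rules, forgetting rules, and nondeterministic rule choice within a neuron.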



Anne Siegel | IRISA/CNRS, Dyliss, France

Decidability problems for self-induced systems

In this talk we will survey several decidability and undecidability results on topological properties of self-affine or self-similar fractal tiles. Such tiles are obtained as fixed points of set equations governed by a graph. The study of their topological properties is known to be complex in general: we will illustrate this with undecidability results on tiles generated by multitape automata. In contrast, the class of self-affine tiles called Rauzy fractals is particularly interesting. Such fractals provide geometrical representations of self-induced mathematical processes. They are associated with one-dimensional combinatorial substitutions (or iterated morphisms). They are somehow ubiquitous, as self-replication processes appear naturally in several fields of mathematics. We will survey the main decidable topological properties of these specific Rauzy fractals and detail how the arithmetic properties yielded by the combinatorial substitution underlying the fractal construction make these properties decidable. We will end this talk by discussing new questions arising in relation with continued fraction algorithms and fractal tiles generated by S-adic expansion systems.
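As a concrete instance of the "one-dimensional combinatorial substitutions" mentioned above, the classic example behind the Rauzy fractal is the Tribonacci substitution 1 → 12, 2 → 13, 3 → 1. The sketch below simply iterates it, producing longer and longer prefixes of its infinite fixed point; this illustrates the combinatorial input to the construction, not the decision procedures of the talk.

```python
# Tribonacci substitution, the standard example generating the
# Rauzy fractal: 1 -> 12, 2 -> 13, 3 -> 1.
SIGMA = {"1": "12", "2": "13", "3": "1"}

def iterate(word, n, sigma=SIGMA):
    """Apply the substitution n times to a word over {1, 2, 3}."""
    for _ in range(n):
        word = "".join(sigma[c] for c in word)
    return word

# Word lengths follow the Tribonacci recurrence
# L(n) = L(n-1) + L(n-2) + L(n-3): 1, 2, 4, 7, 13, 24, ...
print(iterate("1", 4))  # 1213121121312
```

Projecting the letter frequencies of these iterates along suitable eigendirections of the substitution's incidence matrix is what yields the geometric (fractal) representation.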



Mike Stannett | University of Sheffield, UK

Towards Formal Verification of Computations and Hypercomputations in Relativistic Physics

It is now more than 15 years since Copeland and Proudfoot introduced the term hypercomputation. Although no hypercomputer has yet been built (and perhaps never will be), it is instructive to consider what properties any such device should possess, and whether these requirements could ever be met. Aside from the potential benefits that would accrue from a positive outcome, the issues raised are sufficiently disruptive that they force us to re-evaluate existing computability theory. From a foundational viewpoint, the questions driving hypercomputation theory remain the same as those addressed since the earliest days of computer science, viz. "what is computation?" and "what can be computed?" Early theoreticians developed models of computation that are independent of both their implementation and their physical location, but it has become clear in recent decades that these aspects of computation cannot always be neglected. In particular, the computational power of a distributed system can be expected to vary according to the spacetime geometry in which the machines on which it runs are located. The power of a computing system therefore depends on its physical environment and cannot be specified in absolute terms. Even Turing machines are capable of super-Turing behaviour, given the right environment.

