Palma de Mallorca | October 19-20, 2022
Information Processing in Complex Systems


Part of the CCS 2022 conference.

Paseo Marítimo
Paseo Marítimo 18, 07014 Palma de Mallorca, Spain

Important dates

 July 4 Call for abstracts
 July 25 CCS Early Bird Registration deadline
 Aug. 28 Extended deadline for abstract submission
 Sept. 5 Notification of acceptance
 Sept. 16 CCS Standard Registration deadline
 Oct. 19 Satellite event

Submit an abstract

Only contributions submitted through the EasyChair system will be considered.

Authors of accepted contributions must also register for the main conference.

Submit Register

Invited Speakers

Leonardo Banchi
University of Florence

Felix Binder
Trinity College Dublin

Jayne Thompson
Horizon Quantum Computing

Daniele Marinazzo
Ghent University

Pedro Mediano
University of Cambridge

Program of IPCS2022

Invited talks

Generalization in Quantum Machine Learning: A Quantum Information Standpoint

Leonardo Banchi

Quantum classification and hypothesis testing (state and channel discrimination) are two tightly related subjects, the main difference being that the former is data driven: how to assign to quantum states ρ(x) the corresponding class c (or hypothesis) is learnt from examples during training, where x can be either tunable experimental parameters or classical data “embedded” into quantum states. Does the model generalize? This is the main question in any data-driven strategy, namely the ability to predict the correct class even of previously unseen states. Here we establish a link between quantum classification and quantum information theory, by showing that the accuracy and generalization capability of quantum classifiers depend on the (Rényi) mutual information I(C:Q) and I2(X:Q) between the quantum state space Q and the classical parameter space X or class space C. Based on the above characterization, we then show how different properties of Q affect classification accuracy and generalization, such as the dimension of the Hilbert space, the amount of noise, and the amount of neglected information from X via, e.g., pooling layers. Moreover, we introduce a quantum version of the information bottleneck principle that allows us to explore the various trade-offs between accuracy and generalization. Finally, in order to check our theoretical predictions, we study the classification of the quantum phases of an Ising spin chain, and we propose the variational quantum information bottleneck method to optimize quantum embeddings of classical data to favor generalization.

Parameter estimation for complex processes

Felix Binder

Many real-world tasks include some kind of parameter estimation, i.e., determination of a parameter encoded in a probability distribution. Often, such probability distributions arise from stochastic processes for which complexity manifests in the form of temporal correlations. For a stationary stochastic process this means that the random variables that constitute it are identically distributed but not independent. The memory complexity underlying these correlations may appear as an advantage or as a disadvantage for parameter estimation compared to the memoryless case. Here, we illustrate this effect with suitable examples and present a fundamental bound, which is asymptotically linear in the number of outcomes. We then apply our results to the case of thermometry on a spin chain.
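As a purely classical point of comparison with the memoryless case (the Bernoulli model, function names, and parameter values below are illustrative, not taken from the talk), the Fisher information of n independent, identically distributed outcomes grows linearly in n, so the Cramér-Rao bound on the estimator variance shrinks as 1/n; for a Bernoulli parameter the sample mean attains it:

```python
import random
import statistics

# Classical i.i.d. baseline sketch: for n independent Bernoulli(p)
# outcomes, the Fisher information is n * I(p) with I(p) = 1/(p*(1-p)),
# i.e. linear in the number of outcomes, so the Cramer-Rao bound on the
# estimator variance is p*(1-p)/n.

def fisher_information(p, n):
    """Fisher information of n i.i.d. Bernoulli(p) outcomes."""
    return n / (p * (1 - p))

def mle_variance(p, n, trials=20000, seed=1):
    """Empirical variance of the MLE (the sample mean) over many trials."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.random() < p for _ in range(n)) / n for _ in range(trials)
    ]
    return statistics.pvariance(estimates)

p, n = 0.3, 200
bound = 1 / fisher_information(p, n)   # Cramer-Rao bound: p*(1-p)/n
empirical = mle_variance(p, n)
print(f"Cramer-Rao bound   : {bound:.6f}")
print(f"empirical MLE var  : {empirical:.6f}")  # close to the bound
```

Temporal correlations in a non-i.i.d. process can push the achievable variance above or below this memoryless baseline, which is the effect the talk examines.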

Higher order informational interactions in systems close to transition

Daniele Marinazzo

Information transfer is crucial to understanding the dynamics of complex systems. Most approaches have so far considered pairwise interactions, overlooking the fact that two or more variables can share information about the rest of the system. Considering higher-order interactions allows for a less biased estimate of pairwise and conditioned interactions, and makes it possible to disclose new properties of the system. This view is particularly relevant when the system undergoes a transition.

Information decomposition as a link between biological and artificial brains

Pedro Mediano

One of the key principles of information processing is that it is substrate-independent, i.e. the same computation can be implemented by multiple systems obeying different physical laws. In this talk, I will illustrate how the principles of information decomposition (in particular, metrics of synergy and redundancy) can provide such a substrate-independent description of computation in complex systems by linking biological and artificial brains. First, I will show results obtained from fMRI data showing that regions of the brain responsible for high-level cognitive processes are synergy-rich, while areas responsible for sensory input and motor output are redundancy-rich. Then, I will show results from a study of artificial neural networks in which synergy increases as the networks learn novel tasks, possibly aiding the generalization of learned representations, while redundancy helps the networks withstand random perturbations. Together, these results suggest different functional roles for synergy and redundancy in computation in complex systems.

Contributed talks

Lead/Lag directionality is not generally equivalent to causality: Precautionary tale for PSI compared to CMI

Andreu Arinyo i Prats

The application of causal techniques to neural imaging of the brain has expanded over the years into a wide and diverse family of methods that can be applied to EEG data. Moreover, growing interest has developed in the analysis of cross-frequency, phase, and amplitude correlations and directionality. In some cases these analyses yield contradictory results on directionality from high-frequency to low-frequency data. What is lacking, however, is a comparison of two widely available methods that estimate directionality in cross-frequency analysis of EEG data: Conditional Mutual Information (CMI) and Phase Slope Index (PSI). We show that these methods can yield differing interpretations of the data depending on the phase lag.

Two orders for decomposing multivariate information

Conor Finn

The partial information decomposition (PID) of Williams and Beer provides a general framework for decomposing the information provided by a set of source variables about a target variable. For instance, when we have two source variables, the total information provided by these sources about the target can be decomposed into four components: (i) the unique information provided by the first source, (ii) the unique information provided by the second source, (iii) the shared information which is redundantly provided by both sources, and (iv) the synergistic information which is only attainable from simultaneous knowledge of both sources together. The PID framework is based upon three axioms regarding the behaviour of a measure of redundant or intersection information. From these axioms, Williams and Beer derived a general structure for shared multivariate information called the redundant information lattice. These three axioms, however, do not uniquely determine a particular measure of redundant information. Consequently, over the past decade, there has been much debate as to the exact measure of redundant information that should be used. Many proposals are based on new axioms or differing operational interpretations, and several of these new measures have subsequently been determined to be incompatible with the original formulation. At present, there is no clear consensus on how exactly one should proceed when evaluating such a decomposition. In this talk, I will discuss a novel method for decomposing the information provided by a set of source variables about a target variable. This method is based upon three axioms for a measure of the total marginal or union information which are analogous to the original Williams and Beer axioms. From these axioms I derive a second structure that can also be used to decompose multivariate information, which I call the total marginal information lattice.
Finally, I will explore the implications of demanding consistency between these two decompositions, and in particular how requiring this consistency constrains the possible measures of redundant information, or equivalently, the possible measures of total marginal information. This approach is similar to that of Kolchinsky, who also considered two orders but did not demand that they produce the same decomposition.
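The two extreme cases of the two-source decomposition can be made concrete with a small sketch (plain Python, no PID library; the helper below is illustrative): ordinary mutual information already separates a XOR target, whose 1 bit is purely synergistic, from a duplicated target, whose 1 bit is purely redundant.

```python
from itertools import product
from math import log2

# Illustrative sketch (not a full PID implementation): for two binary
# sources X1, X2 and a target Y, compare a XOR target (purely
# synergistic) with a duplicated target (purely redundant).

def mutual_information(pairs):
    """I(A;B) in bits from a list of equally likely (a, b) pairs."""
    n = len(pairs)
    p_ab, p_a, p_b = {}, {}, {}
    for a, b in pairs:
        p_ab[(a, b)] = p_ab.get((a, b), 0) + 1 / n
        p_a[a] = p_a.get(a, 0) + 1 / n
        p_b[b] = p_b.get(b, 0) + 1 / n
    return sum(p * log2(p / (p_a[a] * p_b[b])) for (a, b), p in p_ab.items())

# XOR: neither source alone says anything about Y, together they
# determine it completely -- the whole bit is synergy.
xor = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
print(mutual_information([(x1, y) for x1, _, y in xor]))         # 0.0
print(mutual_information([((x1, x2), y) for x1, x2, y in xor]))  # 1.0

# Duplicated sources: X2 = X1 = Y, so each source alone already
# provides the whole bit -- it is shared (redundant) information.
copy = [(x, x, x) for x in (0, 1)]
print(mutual_information([(x1, y) for x1, _, y in copy]))        # 1.0
print(mutual_information([((x1, x2), y) for x1, x2, y in copy])) # 1.0
```

The intermediate cases, where unique, shared, and synergistic components mix, are exactly where the choice of redundancy (or union-information) measure matters.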

Information Dynamics and Stability in Tangled Nature Model of Evolutionary Ecology

Hardik Rajpal, Clemens Von Stengel, Pedro A.M. Mediano, Fernando Rosas and Henrik Jeldtoft Jensen

The tangled nature model has been a rich resource for studying metastability, species abundance, adaptability, and mass extinctions in closed ecosystems. In this study, we analyse the model from an information-theoretic perspective to study species-environment relationships and the emergence of higher-order cliques of species. To this end we rely on two novel developments in the field of information theory: Phi-Information Decomposition (ϕ-ID) and Information Individuality. The ϕ-ID framework enables us to decompose the information shared between species and their environment into components such as self-prediction, information transfer, and redundant and synergistic information propagation, letting us track how these aspects of information propagation vary as we make the system incrementally more unstable. By focusing on the self-information dimension of ϕ-ID, we can look for optimally self-predictive groups of species known as information individuals. This notion places the boundary between an individual and its environment around the maximally self-informative sub-system. We leverage this definition to identify the emergence of higher-level structures in which groups of species act as a single individual in the fitness landscape. These stable groups emerge only for an intermediate range of instability in the system, which enables them to remain adaptable under stochastic fluctuations.

An information theory perspective on tipping points in dynamical networks

Casper van Elteren, Rick Quax and Peter Sloot

Abrupt, system-wide transitions can be endogenously generated by seemingly stable networks of interacting dynamical units, such as mode switching in neuronal networks or public opinion changes in social systems. However, it remains poorly understood how such `noise-induced transitions' are generated by the interplay of network structure and dynamics on the network. We identify two key roles that nodes play in the emergence of tipping points in dynamical networks governed by the Boltzmann-Gibbs distribution. In the initial phase, initiator nodes absorb and transmit short-lived fluctuations to neighboring nodes, causing a domino effect that makes neighboring nodes more dynamic. Conversely, towards the tipping point we identify stabilizer nodes, whose state information becomes part of the long-term memory of the system. We validate these roles through targeted interventions that make tipping points more (or less) likely to begin or to lead to systemic change. This opens up possibilities for understanding and controlling endogenously generated metastable behavior.
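How metastable modes can arise under a Boltzmann-Gibbs distribution can be sketched with a toy system (a 10-spin Ising ring, not the authors' model; the function name and parameter values are illustrative): at strong coupling the exact distribution over the total magnetization becomes concentrated in two opposite modes, so a system-wide change of state can only occur as a noise-induced transition between them.

```python
from itertools import product
from math import exp

# Toy sketch (not the authors' model): exact Boltzmann-Gibbs distribution
# of a small Ising ring. At strong coupling the magnetization distribution
# concentrates in two opposite modes (metastability); at weak coupling it
# is unimodal around zero.

def magnetization_distribution(n, beta_j):
    """P(total magnetization) for an n-spin Ising ring at coupling beta*J."""
    weights = {}
    for spins in product((-1, 1), repeat=n):
        # Nearest-neighbour energy on a ring: -J * sum of s_i * s_{i+1}
        energy = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        m = sum(spins)
        weights[m] = weights.get(m, 0.0) + exp(-beta_j * energy)
    z = sum(weights.values())                  # partition function
    return {m: w / z for m, w in weights.items()}

for beta_j in (0.1, 1.0):
    dist = magnetization_distribution(10, beta_j)
    peak = max(dist, key=dist.get)
    print(f"beta*J = {beta_j}: most likely |magnetization| = {abs(peak)}")
```

Enumerating all 2^10 states keeps the example exact; for larger networks one would sample (e.g. with Glauber dynamics) instead.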

About IPCS

The Information Processing in Complex Systems (IPCS) satellite meeting has been organized during the Conference on Complex Systems since 2012.

Our goal is to provide a forum for researchers who follow an information-theoretic approach for the analysis of complex systems. Here they can present recent achievements and discuss promising hypotheses and further research directions, combining both classical and quantum information approaches.


All systems in nature can be considered from the perspective that they process information.

Information is registered in the state of a system and its elements, implicitly and invisibly. As elements interact, information is transferred: bits of information about the state of one element travel, imperfectly, to the state of another element, shaping its new state. This storage and transfer of information, possibly between levels of a multi-level system, is imperfect due to randomness or noise. From this viewpoint, a system can be formalized as a collection of bits organized according to its rules of dynamics and its topology of interactions. Mapping out exactly how these bits of information percolate through the system reveals fundamental insights into how the parts orchestrate to produce the properties of the whole.
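The idea of imperfect transfer can be made quantitative with a minimal sketch, assuming a toy binary symmetric channel between two elements (the function names are illustrative): the information one element's state carries about another is I(A;B) = 1 - H(eps), which shrinks from one bit to zero as the noise level eps grows.

```python
from math import log2

# Toy model of "imperfect transfer": element B copies element A's bit,
# but noise flips it with probability eps. The information that reaches
# B about A is I(A;B) = 1 - H(eps), where H is the binary entropy.

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def transferred_bits(eps):
    """I(A;B) for a uniform bit sent through a binary symmetric channel."""
    return 1.0 - binary_entropy(eps)

for eps in (0.0, 0.1, 0.5):
    print(f"flip probability {eps:.1f} -> {transferred_bits(eps):.3f} bits")
```

At eps = 0 the full bit arrives; at eps = 0.5 the output is independent of the input and nothing is transferred.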

A theory of information processing would be capable of defining a set of universal properties of dynamical multi-level complex systems, which describe and compare the dynamics of systems as diverse as social interactions, brain networks, financial markets, and biomedicine. Each possible combination of rules of dynamics and topology of interactions, however disparate its semantics, could be translated into the language of information processing, which in turn would provide a lingua franca for complex systems.


Past Editions

 IPCS12 Brussels, Belgium (2012)
 IPCS13 Barcelona, Spain (2013)
 IPCS14 Lucca, Italy (2014)
 IPCS15 Tempe, Arizona (2015)
 IPCS16 Amsterdam, Netherlands (2016)
 IPCS17 Cancun, Mexico (2017)
 IPCS18 Thessaloniki, Greece (2018)
 IPCS19 Singapore (2019)