Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, quantifying this information in the case of complex networks has remained largely elusive. Networked units are ubiquitous, from interacting proteins in cells to neural connections in the human brain or social relationships on virtual platforms such as Twitter or Facebook. Networks provide a useful tool to model complex systems whose interacting constituents give rise to emergent behavior that cannot be produced by the single units in isolation.
The concept of entropy, a measure of uncertainty that is crucial for understanding the physical properties of a system and its evolution, is a pillar of classical and quantum information theory; for networks, however, it has remained elusive. We argue that the way information diffuses from one unit to another can be used to measure the entropy of a networked system.
Inspired by quantum thermodynamics and quantum computing, the new measure of entropy allows unprecedented approaches to statistical inference, establishing a fundamental step towards a network information theory.
Potential applications include machine learning, systems biology and inference of quantum complex network models based on qubit entanglement.
In the realm of complex networks, a satisfactory definition of network entropy has been hindered by the lack of a tractable probability distribution describing an interconnected system as a whole. A natural starting point is classical information theory, which is built on the quantification of information through entropy and has found vast interdisciplinary applications. This approach, however, is limited to applying entropy to a known network descriptor, and thus amounts only to an alternative means of analyzing a probability distribution. Its quantum-mechanical counterpart, introduced by von Neumann, provided the foundation of modern quantum information theory. Yet applying this entropy directly to a normalized network Laplacian or adjacency matrix yields a quantity with undesirable additivity properties. We address this issue by introducing a probability distribution, representing a complex network, that satisfies the same properties as a density matrix in quantum mechanics and from which we recover the desired additivity properties. From this we build an information-theoretic framework that allows us to quantify the information content of complex networks and to define a likelihood based on the spectral properties of observed networks and their models, rather than relying on a subset of network descriptors such as mesoscale structure or the degree distribution.
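As a minimal sketch of this construction (function names, the combinatorial Laplacian, and the diffusion time β = 1 are our illustrative choices, not prescriptions from the text), the density matrix of a network and its von Neumann entropy can be computed as:

```python
import numpy as np
from scipy.linalg import expm

def density_matrix(A, beta=1.0):
    """rho = exp(-beta L) / Tr[exp(-beta L)], with L the
    combinatorial Laplacian of the adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A
    P = expm(-beta * L)
    return P / np.trace(P)

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # drop numerically zero eigenvalues
    return -np.sum(w * np.log(w))

# Toy example: a triangle with a pendant node
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rho = density_matrix(A)
S = von_neumann_entropy(rho)  # bounded by ln N
```

By construction rho is symmetric, positive semidefinite, and has unit trace, which is what allows the quantum-information toolbox to be carried over to networks.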
To study complex networks systematically from an information-theoretic perspective, it is necessary to develop precise mathematical formulations of classical tools. We have begun to develop such tools, including information entropy, Rényi q-entropy, and generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks.
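A minimal sketch of such a distance, assuming the quantum Jensen-Shannon divergence between density matrices built from the combinatorial Laplacian (function names and β = 1 are illustrative choices of ours):

```python
import numpy as np
from scipy.linalg import expm

def rho_of(A, beta=1.0):
    # Density matrix from the combinatorial Laplacian of A
    L = np.diag(A.sum(axis=1)) - A
    P = expm(-beta * L)
    return P / np.trace(P)

def S(rho):
    # von Neumann entropy from the eigenvalues of rho
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

def js_distance(A1, A2, beta=1.0):
    """Square root of the quantum Jensen-Shannon divergence,
    used as a distance between two networks of equal size."""
    r1, r2 = rho_of(A1, beta), rho_of(A2, beta)
    jsd = S(0.5 * (r1 + r2)) - 0.5 * (S(r1) + S(r2))
    return np.sqrt(max(jsd, 0.0))  # clip tiny negative round-off

# A path and a star on four nodes are structurally distinct
A_path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
A_star = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)
d = js_distance(A_path, A_star)  # > 0, while js_distance(A, A) ~ 0
```

Because the divergence is symmetric and its square root satisfies the triangle inequality, it behaves as a metric for comparing networks or layers.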
One of our achievements concerns the identification of network likelihood and the definition of model selection criteria in this new framework, which deserves further investigation.
We exploit this metric as a distance to compare the units of interconnected systems such as multilayer networks, e.g., the human microbiome. Spectral methods have proven fundamental for analyzing and understanding complex networks, and our framework introduces a set of spectral tools that will boost network statistical inference.
More recently, this framework has been applied for multiple purposes, such as:
The constituents of a complex system exchange information to function properly. Their signaling dynamics often leads to the appearance of emergent phenomena, such as phase transitions and collective behaviors. While information exchange has been widely modeled by means of distinct spreading processes — such as continuous-time diffusion, random walks, synchronization and consensus — on top of complex networks, a unified and physically grounded framework to study information dynamics and to gain insight into the macroscopic effects of microscopic interactions is still missing.
We present this framework in terms of a statistical field theory of information dynamics, unifying a range of dynamical processes governing the evolution of information on top of static or time-varying structures. We show that information operators form a meaningful statistical ensemble and that their superposition defines a density matrix that can be used for the analysis of complex dynamics. This density matrix is exactly the one introduced above.
As a direct application, we show that the von Neumann entropy of the ensemble can serve as a measure of the functional diversity of complex systems, defined in terms of the functional differentiation of higher-order interactions among their components. Our results suggest that modularity and hierarchy, two key features of empirical complex systems — from the human brain to social and urban networks — play a key role in guaranteeing functional diversity and, consequently, are favored.
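The decay of the rescaled entropy with propagation time, used below to compare topologies, can be sketched as follows (the toy topology and function name are ours; since exp(−τL) shares eigenvectors with L, the entropy depends only on the Laplacian spectrum):

```python
import numpy as np

def rescaled_entropy(A, tau):
    """S(rho_tau) / ln N for rho_tau = exp(-tau L) / Tr[exp(-tau L)]."""
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)   # Laplacian spectrum suffices
    p = np.exp(-tau * lam)
    p /= p.sum()                  # eigenvalues of rho_tau
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p)) / np.log(N)

# Toy modular topology: two triangles joined by a bridge.
# The entropy decays from 1 (undifferentiated nodes) toward 0
# (frozen field) as the propagation time grows.
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)
curve = [rescaled_entropy(A, t) for t in (0.0, 1.0, 10.0, 100.0)]
```

Plotting such curves for hierarchical modular, modular, and randomized topologies reproduces the comparison described in the figure below: slower decay means functional diversity is preserved over a wider range of timescales.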
Ensemble of information streams. A simple system of three fully connected constituents. The dynamical process is chosen to be random walk dynamics. The figure illustrates the information streams and their corresponding activation probabilities changing over time.
Modularity and hierarchy. (a) The entropy, rescaled by its upper bound ln N, is compared for a hierarchical modular topology (HM), a modular topology (M), and their configuration model (C), which randomizes the links while preserving the degree distribution. The adjacency matrix of one realization of each topology is shown in the top panels. For each topology, 100 realizations have been considered; their mean entropy is plotted as solid lines, with shades representing the fluctuations around the mean. Our results show that the random topology exhibits slightly higher entropy at small timescales, while at larger timescales the modular and hierarchical modular topologies exceed it. Interestingly, the hierarchical modular topology, with its significantly slower decay, preserves its functional diversity across a considerable range of timescales. (b) Node-node communications are plotted as heat maps for one realization of each topology at four propagation times, t = 1, 3, 10, 20. Each value, represented as a pixel of a heat map, corresponds to the information exchange between two nodes. In hierarchical modular networks, long-range communications, characterized by large propagation times, remain meaningful because nodes keep their functional diversity. In contrast, when connections are randomly distributed, the field quickly reaches its final frozen state and the nodes can no longer be differentiated as senders.
Entanglement as a multi-scale measure of node centrality. A lattice (a), an Erdős–Rényi network (b) and a Barabási–Albert network (c) are shown, from left to right. The centrality of each node is color-coded from lighter to darker according to distinct measures: entanglement, betweenness, closeness and PageRank. For entanglement centrality, the specific timescale beta_c has been used, obtained by minimizing the collective measure shown in the top panels. It is evident that network entanglement is not trivially related to existing centrality measures.
Virus-host interactome as an interdependent network. BIOSTR Human PPI, obtained from data fusion of two comprehensive public repositories, namely STRING and BIOGRID (see the text for details). The network consists of N = 19,945 proteins linked by |E| = 737,668 edges; its largest connected component (99.8% of nodes, 99.6% of edges) is shown. Proteins targeted by viruses are highlighted in two ways. On the one hand, markers of distinct size identify targeted proteins: the bigger the marker, the larger the number of times a protein is targeted by viruses in our data set. On the other hand, distinct colored markers of constant size encode distinct viruses (45 in total, including SARS-CoV-2); on the right-hand side, the same color scheme is used to show the contribution of each virus to the most frequently targeted proteins.
To cope with the additional complexity of some systems, such as multilayer networks, we have proposed an algorithm, grounded in information theory and inspired by quantum computing, to reduce the structure of such networks by appropriately aggregating their layers. The irreducibility of a system is used as an argument in favor of its multilayer representation.
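One greedy aggregation step of such a reduction can be sketched as follows. This is a simplification of the full algorithm, assuming layers are compared via the Jensen-Shannon divergence between their Laplacian density matrices and merged by summing adjacency matrices (function names are ours):

```python
import numpy as np
from scipy.linalg import expm

def rho_of(A, beta=1.0):
    # Density matrix from the combinatorial Laplacian of A
    L = np.diag(A.sum(axis=1)) - A
    P = expm(-beta * L)
    return P / np.trace(P)

def S(rho):
    # von Neumann entropy from the eigenvalues of rho
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

def jsd(A1, A2):
    # Quantum Jensen-Shannon divergence between two layers
    r1, r2 = rho_of(A1), rho_of(A2)
    return S(0.5 * (r1 + r2)) - 0.5 * (S(r1) + S(r2))

def reduce_once(layers):
    """Greedy step: aggregate (sum) the two most similar layers,
    i.e., the pair with the smallest Jensen-Shannon divergence."""
    pairs = [(jsd(layers[i], layers[j]), i, j)
             for i in range(len(layers)) for j in range(i + 1, len(layers))]
    _, i, j = min(pairs)
    merged = layers[i] + layers[j]
    return [A for k, A in enumerate(layers) if k not in (i, j)] + [merged]
```

Iterating this step and tracking a quality function of the reduced multiplex (e.g., its distinguishability from the fully aggregated network) identifies the optimal number of layers; redundant layers are merged first, while irreducible ones survive.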
Figure: Structural vs functional reducibility of multilayer systems. In structural reduction (left), one alters the structure by iteratively aggregating layers, e.g., by summing the corresponding adjacency matrices. However, this type of intervention usually comes with costs (e.g., temporal, infrastructural, economic). Here, we propose functional reduction (right), an alternative approach in which the structure of the system is left unaltered while the dynamics on top of the layers are coupled. While any dynamics is allowed in principle, diffusive processes are particularly suitable because they allow for theoretical and computational treatment. In this work, random walk dynamics is considered, and layers are functionally coupled if their network states are indistinguishable from the point of view of a walker (the same color is assigned to functionally coupled layers). From an application perspective, this approach corresponds to shared tickets for a multimodal public transport infrastructure, or to shared promotions for users who belong to multiple social networks at the same time.
Functional reducibility of empirical multilayer systems. The same analysis shown in Fig. 2 is performed on different multilayer systems, namely (a) the protein-protein interactions of Xenopus laevis (the African clawed frog), (b) the co-authorship network of the Pierre Auger experiment, (c) the public transportation network of London, and (d) the European airport network. See the text for further details about the data sets. Note that the entries d_ij of the distance matrices have been transformed by −log(d_ij) to enhance the visualization of the corresponding heat maps (the average return probability is rescaled by N).