Research

Currently I have the privilege of working on topics that are not only exciting and useful but also beautiful. In particular, I am delighted to be working on applications of algebraic signal processing to algebraic neural networks (AlgNNs), where I am interested in their stability properties. Additionally, I recently introduced the notion of quiver signal processing (QSP) as an alternative framework for handling heterogeneous information on networks. Finally, I am currently working on applications of graph limit theory to problems in graph signal processing and graph neural networks.

Algebraic Neural Networks

Figure 1. (Left): Diagram indicating the algebraic signal model (\mathcal{A},\mathcal{M},\rho). (Center): Algebraic Neural Network (AlgNN) with three layers, each associated with a different algebraic signal model. (Right): Notion of perturbation of an algebraic signal model.

We (my mentor at UPenn, Alejandro Ribeiro, and I) introduced algebraic neural networks (AlgNNs) as a tool to study stability properties of convolutional neural architectures. AlgNNs arise from using algebraic signal processing to generalize the signal processing models in the layers of convolutional architectures. A vast number of new points of view on the processing of signals have emerged in recent years, exploiting different types of convolutions, and while notions of stability had been obtained independently for traditional convolutional neural networks (CNNs) and graph neural networks (GNNs), there were substantial similarities, not only in the results but also in the analysis performed, that were not well understood or explained. Our recent paper explains these similarities and provides a general notion of stability at the algebraic level.

An algebraic signal model (see Fig. 1 (left)) can be defined as a triple (\mathcal{A},\mathcal{M},\rho), where \mathcal{A} is an algebra, \mathcal{M} is a vector space, and \rho:\mathcal{A}\rightarrow\text{End}(\mathcal{M}) is a homomorphism from the algebra \mathcal{A} to \text{End}(\mathcal{M}), the set of endomorphisms of \mathcal{M}, i.e. the linear maps from \mathcal{M} to itself. The elements of the algebra \mathcal{A} are called algebraic filters and the elements of \mathcal{M} are the signals to be processed. The elements of the model (\mathcal{A},\mathcal{M},\rho) are tied together by the notions of representation theory. In particular, the pair (\mathcal{M},\rho) is a representation of the algebra \mathcal{A}.
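To make the triple concrete, the sketch below instantiates the algebraic signal model underlying graph signal processing: \mathcal{A} is the algebra of polynomials in one variable, \mathcal{M}=\mathbb{R}^{N} is the space of graph signals, and \rho evaluates a polynomial on a graph shift operator S. The graph and the filter coefficients are illustrative choices of mine, not taken from the paper.

```python
import numpy as np

# Algebraic signal model for graph signal processing:
#   A:   polynomials a(t) = sum_k h_k t^k (the abstract filters),
#   M:   R^N, one signal value per node of the graph,
#   rho: the homomorphism a(t) -> a(S), with S a graph shift operator.

# Illustrative 5-node graph; S is its symmetric adjacency matrix.
S = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

def rho(h, S):
    """Realize the abstract filter a(t) = sum_k h[k] t^k as the
    endomorphism a(S) of M = R^N."""
    out, Sk = np.zeros_like(S), np.eye(S.shape[0])
    for hk in h:
        out += hk * Sk
        Sk = Sk @ S
    return out

h = [1.0, 0.5, 0.25]      # an element of the algebra A
x = np.random.randn(5)    # a signal in M
y = rho(h, S) @ x         # filtering: y = rho(a) x
```

Note that the abstract filter h lives in \mathcal{A} and is independent of any particular graph; only its realization \rho(a)=a(S) depends on S.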

While the structural and algebraic properties of \mathcal{A} define the rules under which the signals in \mathcal{M} are processed, the elements \rho(a) represent the physical realizations of the filters. In this context, a perturbation of (\mathcal{A},\mathcal{M},\rho) is associated with changes in \rho, i.e. it affects the physical realization of the filters and not the rules that define the processing of the signals (see Fig. 1 (right)). AlgNNs are stacked layered structures (see Fig. 1 (center)) where each layer is associated with an algebraic signal model. In our work, we analyze the stability of AlgNNs to deformations of the images of \rho.
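Continuing the sketch above (again an illustrative toy, not the analysis of the paper), a perturbation of the model leaves the abstract filter untouched and changes only its realization; here the deformation is modeled, purely for illustration, as a small additive perturbation of the shift operator.

```python
import numpy as np

def apply_filter(h, S, x):
    """Evaluate y = a(S) x for the polynomial filter with coefficients h."""
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx
    return y

rng = np.random.default_rng(0)
S = rng.random((5, 5)); S = (S + S.T) / 2         # illustrative shift
E = 0.01 * rng.standard_normal((5, 5)); E = (E + E.T) / 2
h = [1.0, 0.5, 0.25]                              # the SAME abstract filter
x = rng.standard_normal(5)

y       = apply_filter(h, S, x)                   # nominal realization of rho
y_tilde = apply_filter(h, S + E, x)               # perturbed realization

# Stability asks that this deviation be controlled by the size of E.
print(np.linalg.norm(y - y_tilde) / np.linalg.norm(y))
```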

References:

  1. A. Parada-Mayorga and A. Ribeiro, “Algebraic Neural Networks: Stability to Deformations” (submitted to IEEE Transactions on Signal Processing). [link][pdf]
  2. Online lectures about Algebraic Neural Networks. [video]
  3. M. Püschel and J. M. F. Moura, “Algebraic Signal Processing Theory”, 2006. [link]
  4. M. Lorenz, “A Tour of Representation Theory”, AMS. [link]

Signal Processing on Quiver Representations

Left: Quiver consisting of a set of nodes denoted by i\in\{1,\ldots,5\} and arrows indicated by a_{i,j} with i,j\in\{1,\ldots,5\}. Right: A representation of the quiver on the left, where a vector space \mathcal{V}_{i} is assigned to each node i and a linear map \phi_{i,j} is assigned to each arrow a_{i,j}.

One of the exciting subareas of representation theory is that of quiver representations. A quiver is nothing but a directed graph whose nodes and arrows carry no attributes or numerical values, and a quiver representation is an assignment of a vector space to each node and a linear map to each arrow. The assignment of a vector space to each node allows us to handle information on networks in unconventional ways, and exploiting the tools of algebraic signal processing we can define notions of filtering and Fourier decomposition even when the data associated with the nodes have different dimensions. The applications of this approach for processing information over networks are numerous; we highlight its potential in robotics, where heterogeneous agents must be coordinated to perform a set of tasks. To learn more about the details of this part of my work, you can check my recent paper.
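As a rough illustration of the objects involved (the dimensions, maps, and data below are all hypothetical), a quiver representation can be stored as a dimension per node and a matrix per arrow, and node data of different sizes can be pushed along the arrows:

```python
import numpy as np

# A quiver: nodes and directed arrows, with no attributes attached.
nodes = [1, 2, 3]
arrows = [(1, 2), (2, 3), (1, 3)]          # a_{i,j}: arrow from i to j

# A representation: a vector space V_i = R^{d_i} per node and a linear
# map phi_{i,j}: V_i -> V_j per arrow. The dimensions d_i may differ,
# which is what lets each node carry heterogeneous information.
dims = {1: 3, 2: 5, 3: 2}
rng = np.random.default_rng(1)
phi = {(i, j): rng.standard_normal((dims[j], dims[i])) for (i, j) in arrows}

# A signal on the quiver: one vector per node, with node-dependent size.
x = {i: rng.standard_normal(dims[i]) for i in nodes}

# Moving node 1's data along the arrow a_{1,2}:
y2 = phi[(1, 2)] @ x[1]                    # an element of V_2 = R^5
```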

References:

  1. A. Parada-Mayorga, H. Riess, A. Ribeiro and R. Ghrist, “Quiver Signal Processing (QSP)” (submitted to ICASSP 2020). [link][pdf]
  2. H. Derksen and J. Weyman, “An Introduction to Quiver Representations”. [link]

Graph Limit Theory in Graph Neural Networks

Figure 1. (Left): A Graph Neural Network whose layers are obtained from a graphon W(x,y) using one of the discretization strategies illustrated on the right. (Right): Three discretization strategies to obtain graphs and sequences of graphs from a graphon W(x,y).

Recent developments in graph limit theory have provided tools for a better understanding of the structures associated with large graphs. In particular, graphons now play a central role in several applications. A graphon is, roughly speaking, a bounded symmetric function on the unit square that can be considered the limit of a convergent sequence of dense graphs. However, graphons can also be used to represent finite graphs, establishing a natural bridge between graph signal processing and harmonic analysis on the unit interval, where ideas and concepts from functional analysis come into play. In my recent work I used graphons and the consistency of spectral clustering on graphons to introduce pooling strategies for graph neural networks. We use graphons to associate nodes between graphs of different sizes with zero computational cost. The hypothesis assumed in our analysis is that the graphs in the GNN are large enough to be well approximated by a graphon, so that the signals between layers lie on graphs whose spectral properties are inherited from, and converge to, those of the graphon.
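The sketch below illustrates one such discretization under a template graphon chosen purely for illustration (it is not the graphon or the pooling operator from the paper): evaluating W on a regular grid yields weighted graphs of any size, and since every node corresponds to an interval of [0,1], nodes of graphs of different sizes can be associated with each other at essentially no computational cost.

```python
import numpy as np

# A graphon: a bounded symmetric measurable function W on [0,1]^2.
# Illustrative choice: W(x, y) = exp(-|x - y|).
def W(x, y):
    return np.exp(-np.abs(x - y))

def graph_from_graphon(W, n):
    """One discretization strategy: evaluate W at the midpoints of a
    regular partition of [0,1] to obtain the adjacency matrix of an
    n-node weighted graph."""
    u = (np.arange(n) + 0.5) / n          # midpoint of each interval
    A = W(u[:, None], u[None, :])
    np.fill_diagonal(A, 0.0)              # no self-loops
    return A

# Two graphs induced by the same graphon. Node i of the 16-node graph
# covers the same region of [0,1] as nodes 4i, ..., 4i+3 of the 64-node
# graph, which gives the node association used for pooling.
A_large = graph_from_graphon(W, 64)
A_small = graph_from_graphon(W, 16)
```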

References:

  1. A. Parada-Mayorga, L. Ruiz and A. Ribeiro, “Graphon Pooling in Graph Neural Networks” (accepted), EUSIPCO 2020. [link]
  2. L. Lovász, “Large Networks and Graph Limits”, American Mathematical Society, 2012. [link]

Sampling of Signals on Graphs

Fig. 1. (Left): Blue-noise sampling pattern on a graph G whose local isoperimetric dimension is the same everywhere. (Right): Random sampling pattern on the same graph G.

Given a signal \mathbf{x} on a graph G, one is interested in determining which nodes contain the essential information of \mathbf{x}, i.e. the nodes from which \mathbf{x} can be uniquely determined. My research in this particular subarea of graph signal processing focuses on sampling strategies for particular subclasses of graphs. Instead of considering the sampling problem on a general graph, we exploit the properties of particular subclasses of graphs to find sampling strategies that are not only simple but that also highlight the inefficiency of standard greedy approaches. My paper about blue-noise sampling on graphs studies the notion of equidistant sampling; we show, providing formal theoretical guarantees and numerical evidence, that on those graphs where the local isoperimetric dimension is the same everywhere, equidistant sampling provides an almost optimal sampling strategy.
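To convey the flavor of equidistant sampling, here is a simple farthest-point heuristic on hop distances. This is an illustrative stand-in of my own, not the blue-noise algorithm analyzed in the paper, and the path graph at the end is a toy example.

```python
from collections import deque

def hop_distances(adj, source):
    """BFS hop distances from `source` on an unweighted graph given as
    an adjacency list (dict: node -> list of neighbors)."""
    dist, q = {source: 0}, deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def farthest_point_sampling(adj, m, start=0):
    """Greedily add the node farthest (in hops) from the current
    sampling set, spreading the samples out over the graph."""
    samples, dist = [start], hop_distances(adj, start)
    for _ in range(m - 1):
        nxt = max(dist, key=dist.get)        # farthest from the set
        samples.append(nxt)
        d_new = hop_distances(adj, nxt)
        dist = {v: min(dist[v], d_new[v]) for v in dist}
    return samples

# Toy example: on a 9-node path the samples land at positions 0, 8, 4.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 9] for i in range(9)}
print(farthest_point_sampling(adj, 3))       # [0, 8, 4]
```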

Additionally, in my conference paper (journal paper to appear) we study the notion of sampling for signals defined on cographs. Cographs are a special class of graphs that are built recursively from the operations of join and union, and whose construction is synthesized by a cotree. We show that, by exploiting the information in the cotree, sampling sets can be built by means of simple operations, without requiring spectral decompositions or the calculation of geodesic distances.
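A small sketch of the recursive construction (illustrative code, not the sampling algorithm of the paper): every cograph is obtained from single vertices by repeated unions and joins, and the nesting of these operations is precisely the cotree that the sampling-set construction exploits.

```python
import numpy as np

def union(A, B):
    """Disjoint union of two graphs: block-diagonal adjacency matrix."""
    n, m = A.shape[0], B.shape[0]
    C = np.zeros((n + m, n + m))
    C[:n, :n], C[n:, n:] = A, B
    return C

def join(A, B):
    """Join: the disjoint union plus every edge between the two parts."""
    n = A.shape[0]
    C = union(A, B)
    C[:n, n:] = 1.0
    C[n:, :n] = 1.0
    return C

K1 = np.zeros((1, 1))                       # a single vertex
P3 = join(K1, union(K1, K1))                # 3-node path (the star K_{1,2})
C4 = join(union(K1, K1), union(K1, K1))     # 4-cycle, i.e. K_{2,2}
# The nested expressions above are exactly the cotrees of P3 and C4.
```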

References:

  1. A. Parada-Mayorga, D. L. Lau, J. H. Giraldo and G. R. Arce, “Blue-Noise Sampling on Graphs”, IEEE Transactions on Signal and Information Processing over Networks. [link][code][pdf]
  2. D. L. Lau, G. R. Arce, A. Parada-Mayorga, D. Dapena and K. Pena-Pena, “Blue-Noise Sampling of Graph and Multigraph Signals”, IEEE Signal Processing Magazine, Special Issue on Graph Signal Processing: Foundations and Emerging Directions, 2020. [link][pdf]
  3. D. Guillot, A. Parada-Mayorga, S. Cioaba and G. Arce, “Optimal Sampling Sets in Cographs”, IEEE Data Science Workshop, Minneapolis, Minnesota, 2019. [link][pdf]