My research program develops mathematical foundations for information processing systems that work across diverse domains—from discrete networks to continuous manifolds. The unifying theme is the discovery and exploitation of algebraic structure to build principled, explainable, and transferable learning architectures with provable guarantees.
This page organizes my research into four interconnected themes. Each builds on fundamental mathematical insights to address both theoretical questions and practical applications in machine learning, signal processing, and networked systems.
I. Graphon Signal Processing & Scalable Network Analysis
Core Question: How can we design algorithms that work reliably as networks grow from hundreds to millions of nodes?
Networks pervade modern systems—power grids, social media, transportation infrastructure, recommendation platforms. A fundamental challenge is that algorithms designed for small networks often fail or require complete redesign when applied at scale. Graphon signal processing addresses this challenge by working with the continuum limit of large graphs, enabling the analysis and design of truly scalable algorithms.
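To make the continuum-limit idea concrete, the following minimal sketch (function names and the example kernel are my own illustration, not taken from the papers) samples graphs of any size from a fixed graphon, so that small and large instances share the same limit object:

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng=None):
    """Sample an n-node random graph whose edge probabilities
    come from a graphon W: [0,1]^2 -> [0,1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=n)            # latent node positions
    P = W(x[:, None], x[None, :])                # edge-probability matrix
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)                            # keep upper triangle only
    return A + A.T                               # symmetric, no self-loops

# Example graphon: a stochastic-block-model-like kernel with two communities
W = lambda x, y: 0.8 * (np.floor(2 * x) == np.floor(2 * y)) + 0.1

A_small = sample_graph_from_graphon(W, 50)
A_large = sample_graph_from_graphon(W, 5000)
# Both graphs are instances of the same limit object W, which is what
# lets algorithms designed in the graphon domain transfer across sizes.
```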
Key Contributions
Graphon Pooling (IEEE TSP 2023): I developed the first mathematically rigorous pooling operators for graph neural networks that preserve signal structure as network size changes. This provides theoretical guarantees that GNN architectures trained on small graphs transfer to large-scale deployments—critical for applications from drug discovery to social network analysis.
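A minimal illustration of the pooling idea, in a simplified form of my own (not the operators from the paper): nodes whose latent positions fall in the same interval of [0, 1] are averaged, yielding a coarser signal over the same underlying continuum:

```python
import numpy as np

def graphon_pool(x, groups):
    """Average the node signal x over each group of node indices,
    producing one pooled value per group (a lower-resolution signal)."""
    return np.array([x[g].mean() for g in groups])

x = np.arange(8, dtype=float)                    # signal on 8 nodes
groups = [np.arange(i, i + 2) for i in range(0, 8, 2)]
pooled = graphon_pool(x, groups)                 # 4-node pooled signal
```

Because the groups correspond to intervals of [0, 1], the pooled signal approximates the same continuous limit signal at lower resolution, which is what the transferability guarantees formalize.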
Sampling and Uniqueness Theory (IEEE TSP 2024): Established fundamental limits on sampling graphon signals, analogous to the Nyquist-Shannon theorem for classical signals. These results determine the minimum information needed to reconstruct signals on large networks and guide optimal sensor placement in networked systems.
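The flavor of these results can be seen in a toy reconstruction (my own illustration, not the paper's algorithm): a signal spanned by the first k Laplacian eigenvectors is recovered exactly from its values on a sufficiently rich vertex subset:

```python
import numpy as np

n, k = 16, 4
A = np.zeros((n, n))
for i in range(n - 1):                           # path graph on n nodes
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A                        # combinatorial Laplacian
_, U = np.linalg.eigh(L)
Uk = U[:, :k]                                    # low-frequency basis

rng = np.random.default_rng(0)
x = Uk @ rng.standard_normal(k)                  # k-bandlimited signal
S = np.arange(2 * k)                             # sampled vertex subset

# Least-squares reconstruction from the sampled values alone;
# recovery is exact because Uk[S] has full column rank here.
coeffs, *_ = np.linalg.lstsq(Uk[S], x[S], rcond=None)
x_hat = Uk @ coeffs
assert np.allclose(x, x_hat)
```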
Stability of Aggregation GNNs (IEEE TSIPN 2023): Proved that graph neural networks based on aggregation are stable to graph perturbations when operating in the graphon regime. This theoretical guarantee explains why GNNs generalize across different network instances in practice.
Representative Publications:
- A. Parada-Mayorga and A. Ribeiro, “Sampling and Uniqueness Sets in Graphon Signal Processing,” IEEE TSP, 2024
- A. Parada-Mayorga, Z. Wang, and A. Ribeiro, “Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs,” IEEE TSP, 2023
- A. Parada-Mayorga, Z. Wang, F. Gama and A. Ribeiro, “Stability of Aggregation Graph Neural Networks,” IEEE TSIPN, 2023
- A. Parada-Mayorga, L. Ruiz and A. Ribeiro, “Graphon Pooling in Graph Neural Networks,” EUSIPCO, 2020
II. Algebraic Foundations of Neural Network Architectures
Core Question: What fundamental properties make convolutional neural networks successful, and how can we extend these principles beyond traditional grid-structured data?
Convolutional neural networks dominate modern machine learning, yet their success has primarily been empirical. My work provides rigorous mathematical foundations by treating CNNs as representations of algebraic structures, revealing universal principles that apply across domains and enabling principled design of architectures for non-Euclidean data.
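The algebraic viewpoint can be sketched in a few lines (a simplified illustration with a hypothetical helper name): a convolutional filter is a polynomial in a shift operator S, and different instantiations of S yield classical or graph convolution from the same code:

```python
import numpy as np

def poly_filter(S, h, x):
    """Apply the polynomial filter sum_k h[k] * S^k to the signal x."""
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y = y + hk * Skx
        Skx = S @ Skx                            # next power of the shift
    return y

n = 6
C = np.roll(np.eye(n), 1, axis=0)                # cyclic shift operator
x = np.zeros(n); x[0] = 1.0                      # unit impulse
y = poly_filter(C, [0.5, 0.3, 0.2], x)
# With the cyclic shift, the impulse response equals the filter taps,
# recovering classical convolution; replacing C by a graph adjacency
# gives a graph convolution with no change to the filtering code.
```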
Key Contributions
Stability Theory for Algebraic Neural Networks (IEEE TSP 2021): Proved that neural networks built from algebraic convolutions are stable to small deformations of the input domain. This explains why CNNs trained on clean images still work on slightly distorted versions and provides design principles for robust architectures. This work has been presented at venues including CERN, Yale, and multiple university seminars.
RKHS Convolutional Filtering (IEEE TSP 2024): Extended convolutional filtering to reproducing kernel Hilbert space (RKHS) algebras, enabling continuous-domain signal processing. This framework unifies discrete and continuous convolution while providing approximation guarantees—critical for applications like point cloud analysis and robotic perception.
Lie Group Algebra Filters (IEEE TSP 2024): Developed convolutional architectures on Lie groups, the mathematical structures describing continuous symmetries. This enables neural networks that respect physical symmetries in applications from molecular dynamics to robotics, improving both sample efficiency and generalization.
Non-Commutative Algebraic Convolutions (IEEE TSP 2023): Classical signal processing assumes operations commute (order doesn’t matter). I removed this restriction, enabling signal processing on directed networks, temporal graphs, and other structures where order is meaningful. This opens new applications in analyzing information flow and causal relationships in networks.
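A small numerical sketch (the matrices are my own toy examples, not from the paper) shows why the commutative assumption fails on directed structures:

```python
import numpy as np

# Two "shift" operators on a 3-node directed multigraph with two edge sets.
S1 = np.array([[0., 1., 0.],
               [0., 0., 1.],
               [1., 0., 0.]])                    # directed 3-cycle
S2 = np.array([[0., 1., 1.],
               [0., 0., 0.],
               [0., 0., 0.]])                    # edges out of node 0 only

print(np.allclose(S1 @ S2, S2 @ S1))             # False: order matters

# A non-commutative filter must therefore fix the ordering of its terms;
# h1 * S1 S2 and h2 * S2 S1 are genuinely different operations.
x = np.array([1., 2., 3.])
y = 0.5 * (S1 @ (S2 @ x)) + 0.2 * (S2 @ (S1 @ x))
```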
Evolution of Ideas
This line of work represents a systematic progression through increasingly general algebraic structures:
- Commutative algebras (traditional graphs) → non-commutative algebras (directed/temporal networks)
- Discrete algebras → continuous RKHS algebras
- Specific symmetry groups → general Lie group algebras
Each step reveals deeper universality in how convolution and filtering operate across domains.
Impact & Applications
These theoretical insights drive practical improvements in diverse applications: hate speech detection in social networks, wireless resource allocation, recommendation systems, and point cloud classification for autonomous systems. The stability guarantees are particularly valuable for safety-critical applications.
Representative Publications:
- A. Parada-Mayorga, L. Agorio, A. Ribeiro, and J. Bazerque, “Convolutional Filtering with RKHS Algebras,” IEEE TSP, 2024
- H. Kumar*, A. Parada-Mayorga*, and A. Ribeiro, “Lie Group Algebra Convolutional Filters,” IEEE TSP, 2024
- A. Parada-Mayorga, L. Butler and A. Ribeiro, “Convolutional Filtering and Neural Networks with Non Commutative Algebras,” IEEE TSP, 2023
- L. Butler*, A. Parada-Mayorga*, and A. Ribeiro, “Convolutional Learning on Multigraphs,” IEEE TSP, 2023
- A. Parada-Mayorga and A. Ribeiro, “Algebraic Neural Networks: Stability to Deformations,” IEEE TSP, 2021
III. Optimal Sampling & Reconstruction on Graphs
Core Question: Where should we place sensors on a network to capture the maximum information with minimum resources?
Sampling theory is foundational in signal processing, but traditional results assume regular grids. My doctoral research developed sampling theory and methods for signals on arbitrary graphs.
Key Contributions
Blue-Noise Graph Sampling (IEEE TSIPN 2019, PhD Thesis): Adapted blue-noise sampling—a technique from image processing that distributes samples uniformly while maintaining randomness—to graph domains. This provides sampling patterns that are both optimal for reconstruction and robust to graph irregularities.
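As a sketch of the spread-out spirit of blue noise (a greedy max-min heuristic of my own, not the void-and-cluster adaptation developed in the paper), each new sample is placed as far as possible, in hop distance, from the samples chosen so far:

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src on an unweighted graph (adjacency lists)."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def farthest_point_samples(adj, m, start=0):
    """Greedy max-min sampling: each new sample maximizes its hop
    distance to the current set, spreading samples apart."""
    n = len(adj)
    samples = [start]
    mind = bfs_distances(adj, start)             # distance to sample set
    for _ in range(m - 1):
        nxt = max(range(n),
                  key=lambda v: -1 if v in samples else mind.get(v, 0))
        samples.append(nxt)
        d = bfs_distances(adj, nxt)
        for v in range(n):
            mind[v] = min(mind.get(v, n), d.get(v, n))
    return samples

# On an 8-node cycle, 4 samples land evenly spaced around the ring.
cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(farthest_point_samples(cycle, 4))          # [0, 4, 2, 6]
```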
Optimal Sampling Sets in Structured Graphs (IEEE DSW 2019): Characterized optimal sampling sets for special graph families (cographs), providing closed-form solutions that guide sensor placement in hierarchical and modular networks.
Spectral Resolution in Compressive Imaging (IEEE TCI 2019): Extended sampling theory to spectral imaging systems, establishing fundamental limits on resolution and developing practical reconstruction algorithms.
Applications
This work has direct applications in wireless sensor networks, environmental monitoring, medical imaging, and infrastructure monitoring. The blue-noise approach is particularly valuable when sensors must be distributed across irregular network topologies while maintaining statistical properties needed for reconstruction guarantees.
Representative Publications:
- A. Parada-Mayorga, D. Lau, J.H. Giraldo and G.R. Arce, “Blue-Noise Sampling on Graphs,” IEEE TSIPN, 2019
- D.L. Lau, G.R. Arce, A. Parada-Mayorga, et al., “Blue-Noise Sampling of Graph and Multigraph Signals,” IEEE Signal Processing Magazine, 2020
- E. Salazar, A. Parada-Mayorga and G.R. Arce, “Spectral Zooming and Resolution Limits,” IEEE TCI, 2019
IV. Early Work: Compressive Spectral Imaging
Core Question: How can we design optical systems that efficiently capture high-dimensional spectral information using compressed measurements?
During my MS and early PhD work, I developed optimization methods for coded aperture design in compressive spectral imaging systems. This involved designing physical masks that modulate light to enable reconstruction of 3D spectral data cubes from 2D measurements.
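A simplified forward model (a sketch under my own assumptions, not the papers' exact optical model) shows how a coded aperture and a dispersive shear compress a 3D spectral cube into a single 2D measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 8, 8, 4                                # spatial size, spectral bands
cube = rng.uniform(size=(H, W, K))               # 3D spectral data cube
T = (rng.uniform(size=(H, W)) < 0.5).astype(float)  # binary coded aperture

y = np.zeros((H, W + K - 1))                     # detector, widened by shear
for k in range(K):
    y[:, k:k + W] += T * cube[:, :, k]           # code, shift band k, integrate
# Reconstruction then inverts this linear map using sparsity priors;
# the aperture design determines how well-conditioned the inversion is.
```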
Key Contributions
Coherence-Based Coded Aperture Design (IEEE TCI 2017): Formulated coded aperture design as a coherence minimization problem, enabling more accurate reconstruction from fewer measurements.
Spectral Super-Resolution (IEEE TCI 2016): Developed methods to recover spectral details beyond the nominal resolution of imaging systems through careful aperture design and reconstruction algorithms.
This work represents the foundation of my interest in sampling, reconstruction, and the interplay between physical system design and mathematical signal processing—themes that continue in my current research on graph sampling and network analysis.
Representative Publications:
- A. Parada-Mayorga and G.R. Arce, “Colored Coded Aperture Design in Compressive Spectral Imaging via Minimum Coherence,” IEEE TCI, 2017
- A. Parada-Mayorga and G.R. Arce, “Spectral Super-Resolution in Colored Coded Aperture Spectral Imaging,” IEEE TCI, 2016
Ongoing Directions & Future Work
My current research explores deeper connections between algebra, geometry, and learning, continuing the core philosophy of my work: uncover universal mathematical structure, prove rigorous guarantees, and apply these insights to solve real-world problems in robust, explainable ways.
Research Philosophy
My work is guided by three principles:
- Structure reveals insight: Similar problems across different domains often share fundamental algebraic structure. Identifying this structure leads to general solutions rather than domain-specific heuristics.
- Rigor enables reliability: Provable guarantees for stability, convergence, and optimality are essential for deploying learning systems in safety-critical applications.
- Theory drives practice: Deep mathematical understanding doesn’t just explain existing methods—it reveals entirely new approaches that pure empiricism would never discover.
For full publication list, see Publications page.
For talks and presentations, see Talks page.