I am a Tenure-Track Assistant Professor in the Electrical Engineering Department at the University of Colorado Denver, and the founder and director of the Abstract Signal Processing (ASP) Lab. Previously, I was a postdoctoral researcher and research scientist at the University of Pennsylvania, working on Algebraic Signal Processing and the Mathematical Foundations of Deep Learning. I received my Ph.D. in Electrical Engineering from the University of Delaware.

I study a fundamental question in modern information science: when and why do signal processing systems transfer across domains — and what algebraic structure guarantees it? A filter designed for a power grid, a neural network trained on social data, a sampling strategy derived for a manifold — under what conditions do these generalize, and what guarantees can we give? This is not a question about fine-tuning models or adapting datasets. It is a structural question — and part of the answer turns out to be algebraic. Transferability is not an empirical accident but a structural property, determined by the algebraic relationships between domains. By making this precise, I build theory that is predictive, not just descriptive — and that leads to systems that are more reliable, explainable, and efficient by design.

The central observation driving my work is this: despite radical geometric and topological differences between domains — graphs, graphons, manifolds, quivers, or discrete structures — many share deep algebraic structure. That shared structure is precisely what makes transfer possible. Using tools from operator algebras, representation theory, graphon theory, and category theory, I develop rigorous frameworks for sampling, filtering, pooling, and stability analysis that are transferable across domains by construction — not by hope.
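The canonical instance of this idea is the convolutional filter written as a polynomial in a shift operator, h(S) = Σ_k h_k S^k: the coefficients h_k are fixed once, and the same filter can be instantiated on any domain by swapping in that domain's shift operator. A minimal NumPy sketch (the specific graphs, coefficients, and helper name are illustrative choices, not anything from the lab's actual codebase):

```python
import numpy as np

def poly_filter(S, h):
    """Build the polynomial filter h(S) = sum_k h[k] * S^k
    for an arbitrary shift operator S (illustrative helper)."""
    n = S.shape[0]
    H = np.zeros_like(S, dtype=float)
    P = np.eye(n)                       # S^0
    for hk in h:
        H += hk * P
        P = P @ S                       # advance to the next power of S
    return H

# One set of filter coefficients, fixed once.
h = [1.0, 0.5, 0.25]

# Two very different domains, each described by its shift operator
# (here, the graph adjacency matrix): a 4-cycle and a 5-node star.
cycle = np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1.0

# The *same* coefficients define a valid filter on both domains.
y_cycle = poly_filter(cycle, h) @ np.ones(4)
y_star = poly_filter(star, h) @ np.ones(5)
```

The point of the sketch is that nothing about `h` refers to the 4-cycle or the star: the filter is an element of the algebra generated by the shift operator, so it is transferable by construction.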

This has direct consequences for hard problems. In large networked systems, my graphon-based frameworks yield optimal solutions to computationally expensive problems — like optimal power flow — that transfer provably across network scales. In machine learning, my algebraic approach to convolutional architectures provides guarantees for stability and generalization that purely empirical methods cannot. In signal processing, my methods produce optimal sampling strategies for complex network data, grounded in a theory of uniqueness sets that extends classical sampling to arbitrary domains.
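The scale-transfer claim can be made concrete with a standard graphon construction (this sketch is a generic textbook illustration, not the lab's power-flow framework; the graphon W(x, y) = xy, the sample sizes, and the 1/n normalization are assumptions made for the example). A graphon induces a weighted graph at any size n, and spectral quantities of the scaled adjacency converge to those of the graphon operator, which is why a design validated at one scale carries guarantees at another:

```python
import numpy as np

def graphon_graph(n, W):
    """Sample an n-node weighted graph from a graphon W: [0,1]^2 -> [0,1]
    on a regular grid, with the graphon-consistent 1/n scaling."""
    u = (np.arange(n) + 0.5) / n        # regular sample points in [0, 1]
    A = W(u[:, None], u[None, :])       # induced weighted adjacency
    return A / n

# A simple separable graphon; its operator has top eigenvalue
# \int_0^1 x^2 dx = 1/3.
W = lambda x, y: x * y

# The top adjacency eigenvalue at two scales approaches the same
# graphon eigenvalue as n grows.
lam_small = np.linalg.eigvalsh(graphon_graph(50, W))[-1]
lam_large = np.linalg.eigvalsh(graphon_graph(400, W))[-1]
```

Because both finite graphs converge to the same limit object, a filter designed against the graphon inherits predictable behavior on networks of either size; this is the mechanism behind provable transfer across network scales.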

At the ASP Lab, we pursue these questions at the intersection of algebra, geometry, and learning theory — building the mathematical foundations that make transferable intelligence principled and provable.