H.B. Keller Colloquium
Symmetry provides a powerful lens for building machine learning models for scientific data. Euclidean neural networks (E(3)NNs) make this concrete: by encoding the transformation laws of 3D rotations, translations, and reflections through group representations, these architectures operate on geometric and tensorial data while respecting the structure of physical systems.
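To make "encoding transformation laws" concrete, here is a minimal sketch of an equivariance check, written against the open-source e3nn library (an illustrative choice on my part; the talk does not name a specific implementation, and the irreps and single linear layer below are assumptions, not the models discussed):

    import torch
    from e3nn import o3

    # Vector features transform as the "1o" irrep of O(3).
    irreps = o3.Irreps("1x1o")

    # An equivariant linear layer built from group representations:
    # it may only mix channels that carry the same irrep.
    layer = o3.Linear(irreps, irreps)

    x = irreps.randn(1, -1)        # a random vector feature
    R = o3.rand_matrix()           # a random 3x3 rotation
    D = irreps.D_from_matrix(R)    # the representation of R on this feature

    # Equivariance: rotate-then-apply equals apply-then-rotate.
    out_of_rotated = layer(x @ D.T)
    rotated_output = layer(x) @ D.T
    assert torch.allclose(out_of_rotated, rotated_output, atol=1e-5)

Because composing equivariant maps yields an equivariant map, a property that holds layer by layer holds for the whole network, which is what lets such models respect physical transformation laws by construction.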
In this talk, I'll share lessons from building and applying these models in practice. Incorporating symmetry shapes how data is represented, how models learn, and how they are optimized, while introducing new trade-offs between expressivity and computational cost.
I'll also explore how symmetry can emerge implicitly in data and learning systems, even when it is not built directly into the model. Together, these perspectives point toward a broader view: building structure into models changes what they can represent and how we draw insight from them.
