When does equivariance help?
Equivariant neural networks are theoretically appealing, but there is ongoing debate and little quantitative understanding of the performance trade-offs between enforcing equivariance architecturally and allowing networks to learn symmetries from data.
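To make the equivariance condition f(g·x) = g·f(x) concrete, here is a minimal NumPy sketch (illustrative only, not part of the project description) that checks it for a circular cross-correlation layer under cyclic translations, the symmetry that ordinary convolutions build in:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # 1-D input signal
w = rng.normal(size=5)    # filter weights

def circ_corr(x, w):
    """Circular cross-correlation: translation-equivariant by construction."""
    n = len(x)
    return np.array([sum(wj * x[(i + j) % n] for j, wj in enumerate(w))
                     for i in range(n)])

shift = 3
lhs = circ_corr(np.roll(x, shift), w)  # transform the input, then apply the layer
rhs = np.roll(circ_corr(x, w), shift)  # apply the layer, then transform the output
assert np.allclose(lhs, rhs)           # f(g.x) == g.f(x): equivariance holds
```

A standard fully connected layer would fail this assertion; the project asks when baking such a constraint in, rather than learning it, actually pays off.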
This project will develop a mathematical and empirical framework to quantify the sample-complexity and performance benefits of equivariant models. The student will investigate how factors such as dataset size, the complexity of the symmetry group, and the use of data augmentation affect the performance gap between equivariant and standard architectures. The goal is a clear characterisation of when explicit equivariance provides a significant advantage and when standard models can effectively learn the necessary symmetries from data alone.
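The kind of experiment grid the project envisions can be prototyped on a toy problem. The sketch below is a hypothetical setup (the task, feature map, and sizes are all assumptions, not the project's method): it fits random-feature ridge regression to a target that is invariant under the sign-flip group C2, and compares, across training-set sizes, a group-averaged (built-in invariance) model, a standard model, and the standard model trained with flip augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 5, 200                        # input dimension, number of random features
W = rng.normal(size=(D, F))
b = rng.normal(size=F)

def target(X):
    # Ground-truth function, invariant under the C2 action x -> -x
    return np.sin(np.abs(X).sum(axis=1))

def feats(X, invariant=False):
    if invariant:
        # Group average (Reynolds operator) over {x, -x}: exactly C2-invariant
        return 0.5 * (np.maximum(X @ W + b, 0) + np.maximum(-X @ W + b, 0))
    return np.maximum(X @ W + b, 0)  # standard random-feature model

def ridge_fit_predict(Ftr, ytr, Fte, lam=1e-3):
    A = Ftr.T @ Ftr + lam * np.eye(Ftr.shape[1])
    beta = np.linalg.solve(A, Ftr.T @ ytr)
    return Fte @ beta

Xte = rng.normal(size=(2000, D))
yte = target(Xte)

for n in [25, 50, 100, 200, 400]:
    Xtr = rng.normal(size=(n, D))
    ytr = target(Xtr)
    # Data augmentation: add the flipped copies; targets are unchanged
    # because the true function is C2-invariant
    Xaug = np.vstack([Xtr, -Xtr])
    yaug = np.concatenate([ytr, ytr])

    mse = lambda yhat: np.mean((yhat - yte) ** 2)
    print(n,
          mse(ridge_fit_predict(feats(Xtr, True), ytr, feats(Xte, True))),  # invariant
          mse(ridge_fit_predict(feats(Xtr), ytr, feats(Xte))),              # standard
          mse(ridge_fit_predict(feats(Xaug), yaug, feats(Xte))))            # augmented
```

Sweeping the training size n, the group (here the trivially small C2), and the presence of augmentation is exactly the axis structure the project would scale up to realistic symmetry groups and architectures.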
Group theory, statistical learning theory, information theory, deep learning fundamentals
Equivariant neural networks, empirical benchmarking