Neural Networks (NNs) have undergone a remarkable evolution, transitioning from academic labs to key technologies across many domains. This proliferation underscores their capability and versatility. As these models become integral to critical decision-making processes, the demand for methods to understand their inner workings grows accordingly. This thesis addresses the challenge of understanding NNs through the lens of their most fundamental component: the weights, which encapsulate the learned information and determine model behavior.

The NN weight space contains complex local and global structures, which make it a challenging domain. To address these challenges, this thesis develops novel representation learning methods for weight spaces. The proposed methods embed and disentangle model weights in a representation space, which enables not only the analysis of existing models but also the generation of new models with specified characteristics. Such analysis builds on populations of models to develop a nuanced understanding of the structure of NN weights.

At the core of this thesis is a fundamental question: can we learn general, task-agnostic representations from populations of Neural Network models? The key contribution of this thesis toward answering that question is hyper-representations, a self-supervised method for learning representations of NN weights. Work in this thesis finds that trained NN models indeed occupy meaningful structures in the weight space that can be learned and exploited. Through extensive experiments, this thesis demonstrates that hyper-representations uncover model properties such as performance, training state, and hyperparameters.
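To make the idea concrete, here is a minimal, hypothetical sketch of embedding a population of flattened model weights into a low-dimensional representation space. The thesis trains a self-supervised encoder on real model zoos; this stand-in uses a linear autoencoder (PCA via SVD) on synthetic data purely to illustrate the mapping from weight vectors to embeddings and back. All names and dimensions here are illustrative assumptions, not the actual method.

```python
import numpy as np

# Hypothetical stand-in for hyper-representation learning: a linear
# autoencoder (PCA) over flattened weight vectors. Illustrative only.
rng = np.random.default_rng(0)

# Toy "model zoo": 200 models, each with 64 flattened weights that lie
# near a 3-dimensional subspace plus small noise.
latent = rng.normal(size=(200, 3))
basis = rng.normal(size=(3, 64))
weights = latent @ basis + 0.01 * rng.normal(size=(200, 64))

# Center the population and compute principal directions (the "encoder").
mean = weights.mean(axis=0)
centered = weights - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
encoder = vt[:3].T                      # maps 64-dim weights -> 3-dim embedding
embeddings = centered @ encoder

# Decode: reconstruct weight vectors from the representation space.
reconstructed = embeddings @ encoder.T + mean
err = np.linalg.norm(weights - reconstructed) / np.linalg.norm(weights)
print(f"relative reconstruction error: {err:.4f}")
```

If trained models indeed concentrate on low-dimensional structures, a small embedding can reconstruct their weights with low error, which is what the toy data above demonstrates.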
Moreover, identifying regions with specific properties in hyper-representation space makes it possible to sample and generate model weights with targeted properties. This thesis demonstrates successful applications in fine-tuning and transfer learning. Lastly, it presents methods that allow hyper-representations to generalize across model sizes, architectures, and tasks. The practical implications are profound, as this opens the door to foundation models of Neural Networks, which aggregate and instantiate knowledge across models and architectures.
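The sampling idea can be sketched as follows. Assuming a decoder from representation space back to weights already exists, and that embeddings of well-performing models have been identified, one simple approach is to fit a Gaussian to that region and decode samples from it. Everything below (the random decoder, the "good" embeddings, the Gaussian choice) is a hypothetical stand-in for illustration, not the thesis's actual sampling procedure.

```python
import numpy as np

# Hypothetical sketch: sample new model weights from a region of the
# representation space associated with well-performing models.
rng = np.random.default_rng(1)

dim_w, dim_z = 64, 3
decoder = rng.normal(size=(dim_z, dim_w))   # stand-in for a learned decoder

# Embeddings of a hypothetical subset of well-performing models.
good_embeddings = rng.normal(loc=0.5, scale=0.2, size=(50, dim_z))

# Fit a Gaussian to that region and draw new embeddings from it.
mu = good_embeddings.mean(axis=0)
cov = np.cov(good_embeddings, rowvar=False)
new_z = rng.multivariate_normal(mu, cov, size=10)

# Decode the sampled embeddings into new flattened weight vectors.
new_weights = new_z @ decoder
print(new_weights.shape)
```

The decoded vectors would then be reshaped into the target architecture's layers and used as initializations, e.g. for fine-tuning or transfer.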

Ultimately, this thesis contributes to a deeper understanding of Neural Networks by investigating structures in their weights, leading to more interpretable, efficient, and adaptable models. By laying the groundwork for representation learning on NN weights, this research demonstrates the potential to change the way Neural Networks are developed, analyzed, and used.

[My tweet]

Congratulations to Dr. Konstantin Schürholt (@k_schuerholt) for the successful defense of his visionary PhD interpreting sets of NN weights as points in a hyperspace. Proudly co-advised with Damian Borth (@damianborth) from @HSGStGallen, and Michael Mahoney from @UCBerkeley