
Any Deep ReLU Network is Shallow

Given an AI model, can we build an equivalent one that's massively simpler and easier to understand? For good old feed-forward networks with ReLU activations, we can! This week Brandon Wilson will discuss research that not only provides a rare bit of mathematical rigour but also offers direct demonstrations of practicality. What does this say about the interpretability of these models? How much of it can be leveraged in modern architectures? These are questions we hope to engage with in this discussion.

We constructively prove that every deep ReLU network can be rewritten as a functionally identical three-layer network with weights valued in the extended reals. Based on this proof, we provide an algorithm that, given a deep ReLU network, finds the explicit weights of the corresponding shallow network. The resulting shallow network is transparent and used to generate explanations of the model’s behaviour.

https://arxiv.org/abs/2306.11827
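
To give a flavour of why such a rewrite is possible (this is not the paper's construction, and all names below are illustrative): a deep ReLU network is piecewise affine, so on each activation region it coincides with a single affine map. The sketch below, in Python with NumPy, freezes the ReLU activation pattern at a point, composes the masked layers into one affine map, and checks that it reproduces the deep network's output on that region.

import numpy as np

rng = np.random.default_rng(0)

# A small deep ReLU network: three hidden layers with random weights.
dims = [4, 16, 16, 16, 2]
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]

def forward(x):
    """Run the deep network; return the output and the ReLU activation pattern."""
    pattern = []
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = W @ h + b
        mask = (z > 0).astype(float)
        pattern.append(mask)
        h = mask * z
    return weights[-1] @ h + biases[-1], pattern

def local_affine_map(pattern):
    """Compose the layers with ReLUs frozen to the given pattern,
    yielding the affine map x -> A x + c valid on that activation region."""
    A = np.eye(dims[0])
    c = np.zeros(dims[0])
    for W, b, mask in zip(weights[:-1], biases[:-1], pattern):
        A = np.diag(mask) @ W @ A
        c = np.diag(mask) @ (W @ c + b)
    return weights[-1] @ A, weights[-1] @ c + biases[-1]

x = rng.standard_normal(dims[0])
y, pattern = forward(x)
A, c = local_affine_map(pattern)

# On this activation region the deep network and the affine map agree exactly.
assert np.allclose(y, A @ x + c)
print("deep network matches its local affine map:", np.allclose(y, A @ x + c))

The paper goes further: rather than one affine map per point, it packs all regions into a single three-layer network with extended-real weights, so the equivalence holds globally rather than region by region.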
