Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations

Neural networks deliver exceptional performance but can be impractical for applications with limited hardware or energy resources due to their high memory and computational demands. This work introduces a novel algorithm to identify efficient low-rank subnetworks during the training phase, significantly reducing both training and evaluation costs.

The approach restricts the network's weight matrices to a low-rank manifold and updates only the low-rank factors during training. Building on techniques from dynamical model order reduction, the method comes with approximation, stability, and descent guarantees. It also adapts the ranks dynamically throughout training to maintain the desired accuracy. Numerical experiments on fully-connected and convolutional networks demonstrate the efficiency of this technique.
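To make the parameterization concrete, below is a minimal PyTorch sketch of a layer whose weight is kept in factored form W ≈ U S Vᵀ, together with a simple SVD-based rank truncation. The names LowRankLinear, adapt_rank, and tol are hypothetical and chosen for illustration; the factors here are trained by ordinary gradient descent, so this shows the low-rank structure and the rank-adaptation idea, not the paper's dynamical low-rank integrator.

```python
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Linear layer with weight kept as U @ S @ V.T (rank r).

    Only the small factors are trained, so the parameter count drops
    from n*m to roughly r*(n + m + r). Illustrative sketch only.
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) / rank**0.5)
        self.S = nn.Parameter(torch.eye(rank))
        self.V = nn.Parameter(torch.randn(in_features, rank) / rank**0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x @ (U S V^T)^T, computed without forming the full matrix.
        return (x @ self.V) @ self.S.T @ self.U.T


@torch.no_grad()
def adapt_rank(layer: LowRankLinear, tol: float = 0.1) -> None:
    """Hypothetical helper mimicking rank adaptation: truncate singular
    values of the small core S that fall below tol * sigma_max."""
    P, sigma, Qh = torch.linalg.svd(layer.S)
    r_new = max(1, int((sigma > tol * sigma[0]).sum()))
    # Absorb the rotations into U and V; keep only the leading modes.
    layer.U = nn.Parameter(layer.U @ P[:, :r_new])
    layer.S = nn.Parameter(torch.diag(sigma[:r_new]))
    layer.V = nn.Parameter(layer.V @ Qh.T[:, :r_new])


# Usage: a rank-32 layer in place of a dense 784x256 one
# (~33k trainable parameters instead of ~200k).
layer = LowRankLinear(784, 256, rank=32)
opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
x = torch.randn(64, 784)
loss = layer(x).pow(2).mean()
loss.backward()
opt.step()
adapt_rank(layer)  # note: the optimizer must be rebuilt after this call
```

Because adapt_rank replaces the parameter tensors, any optimizer state tied to the old factors is invalidated; a practical implementation would rebuild or remap the optimizer after each rank change.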