Based on "Neural Networks Fail to Learn Periodic Functions and How to Fix It" by Liu Ziyin, Tilman Hartwig, Masahito Ueda
This is a PyTorch implementation of the snake
activation function from the paper - or at least I think it is, no affiliation with the authors, use at your own risk, etc., etc. Huge thanks to contributors klae01 and fedebotu who made big improvments to the code.
A few variations of the function are discussed in the paper; this package implements the learnable-frequency variant:

`snake_a(x) = x + (1/a) * sin²(a * x)`

Snake has a periodic component, but it is also monotonic: its derivative, `1 + sin(2ax)`, is never negative. To see how snake behaves over a range of x for various choices of a, watch this animation:
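If you'd rather reproduce the picture yourself, here's a minimal sketch (plain NumPy and Matplotlib, not part of this package) that traces the formula above for a few values of `a`:

```python
import numpy as np
import matplotlib.pyplot as plt

def snake(x, a):
    # snake_a(x) = x + (1/a) * sin^2(a * x)
    return x + np.sin(a * x) ** 2 / a

x = np.linspace(-5, 5, 500)
for a in (0.2, 1.0, 5.0, 50.0):
    plt.plot(x, snake(x, a), label=f"a = {a}")
plt.legend()
plt.xlabel("x")
plt.title("snake(x) for several choices of a")
plt.show()
```

Larger `a` makes the oscillation faster and shallower, while the `x` term keeps the function climbing.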
Two installation methods:
- Using pip: `pip install torch-snake`
- To install from source, first clone this repository, then run `python setup.py install` from the main repo folder
Usage is fairly easy: `from snake.activations import Snake`. The `Snake` constructor (code here) has an optional learnable parameter `alpha`, which defaults to 1. The authors of the paper find that values between 5 and 50 work quite well for "known-periodic" data, while for better results with non-periodic data you should choose a small value like 0.2. The constructor also takes an `alpha_learnable` parameter, which defaults to `True`, so that you can disable "learnability" if your experiments so require.
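Here's a hedged sketch of typical usage. The `alpha` and `alpha_learnable` arguments come from the description above; the leading feature-count argument is my assumption, so check the constructor for the actual signature:

```python
import torch
from torch import nn
from snake.activations import Snake

# Assumed signature: Snake(features, alpha=..., alpha_learnable=...);
# verify against the constructor in this repo before relying on it.
model = nn.Sequential(
    nn.Linear(1, 64),
    Snake(64, alpha=0.2),  # small alpha suggested for non-periodic data
    nn.Linear(64, 1),
)

x = torch.linspace(-3, 3, 100).unsqueeze(-1)
print(model(x).shape)  # the layer drops into a forward pass like any nn.Module
```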
There's an example notebook, still quite rough: example.ipynb. Early indications are that good hyperparameter choices are quite important for best results (though snake's own `alpha` parameter trains quite readily).
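To watch `alpha` train, a toy fit like the one below should work; it reuses the assumed constructor signature from the sketch above, and the parameter-name filter is likewise a guess, so adjust it to whatever the layer actually registers:

```python
import torch
from torch import nn
from snake.activations import Snake

# Toy periodic regression (constructor arguments assumed as above).
model = nn.Sequential(nn.Linear(1, 32), Snake(32, alpha=5.0), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.linspace(-3, 3, 256).unsqueeze(-1)
y = torch.sin(5 * x)  # a "known-periodic" target

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Print any parameter whose name mentions "alpha" (the name is an assumption).
for name, p in model.named_parameters():
    if "alpha" in name:
        print(name, p.detach().flatten()[:4])
```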
This code probably wouldn't have gotten written if it hadn't been for Alexandra Deis and her excellent article.