Heroku: deploying a Deep Learning model

Heroku is a very good cloud platform for deploying your apps, but if you have a Deep Learning model, i.e. an app that needs to run predictions through a large CNN or other Deep Learning model, then this platform is not well suited. You can try other cloud platforms instead, such as AWS, Amazon SageMaker, Microsoft Azure, or IBM Watson.

I was facing the same issue, and after spending several days on it I found that the tensorflow library was causing the slug-size overhead.

I solved it by changing one line in the requirements.txt file:

tensorflow-cpu==2.5.0

Instead of

tensorflow==2.5.0

You can use any more recent tensorflow version as well. Read more about tensorflow-cpu on its PyPI page.


You can reduce the model size and use tensorflow-cpu, which has a much smaller footprint (about 144 MB with Python 3.8):

pip install tensorflow-cpu

https://pypi.org/project/tensorflow-cpu/#files


The first thing I would check, as others have suggested, is why your repo is so big, given that the model itself is only 83 MB.
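To find out where the space is going, a small stdlib-only script like the following can help. It walks the repo and reports the largest files (the script and its defaults are just a sketch; adjust the root path and count to taste):

```python
import os

def largest_files(root=".", top=5):
    """Walk a directory tree and return the top-N (size_in_bytes, path) pairs."""
    sizes = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip .git -- its contents are not part of the Heroku slug
        dirnames[:] = [d for d in dirnames if d != ".git"]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # broken symlinks and the like
    return sorted(sizes, reverse=True)[:top]

# Example usage from the repo root:
# for size, path in largest_files("."):
#     print(f"{size / 1024 / 1024:8.2f} MB  {path}")
```

Anything unexpectedly large that shows up here (datasets, checkpoints, virtualenvs) is a candidate for .slugignore or offloading.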

If you cannot reduce the size, there is the option of offloading parts of the repo, but to do this you will still need an idea of which files are taking up the space. Offloading is suggested in the Heroku docs. Slug size is limited to 500 MB, as stated here: https://devcenter.heroku.com/articles/slug-compiler#slug-size, and I believe this limit has to do with the time it takes to spin up a new instance when a change in resources is needed. If you have particularly large files, you can offload them to S3. More info on offloading here: https://devcenter.heroku.com/articles/s3