How to build a custom model
This tutorial shows how to implement a custom neural network architecture or make changes to an existing one.
Prerequisites
Before you start, make sure you already have an account in Supervisely and at least one agent deployed on a machine with GPU support. You will also need nvidia-docker. Please check the Cluster section to learn more.
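A quick way to verify that the GPU runtime works (assuming nvidia-docker is already installed) is to run `nvidia-smi` inside a CUDA container:

```
# Sanity check: this should print your GPU table.
# Assumes the nvidia-docker wrapper; on newer setups the equivalent is
# `docker run --runtime=nvidia ...`.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```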
GitHub repo
We make our sources for neural networks (and more!) publicly available on GitHub.
Run this command to clone the sources to your computer:
```
git clone https://github.com/supervisely/supervisely
```
Docker images
To deal with software requirements and deployment, we pack the source code of a neural network into a Docker image. Before you start, you need to build one.
Switch to the `supervisely/nn` folder. Here you will see every network we provide. In this tutorial we will use the `unet_v2` network.
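For example:

```
cd supervisely/nn   # every network lives in its own subfolder
ls unet_v2          # the network we will use in this tutorial
```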
Let's take a look at what's inside:
```
.
├── Dockerfile
└── src
    ├── common.py
    ├── dataset.py
    ├── debug_saver.py
    ├── fast_inference.py
    ├── inference.py
    ├── __init__.py
    ├── legacy_inference.py
    ├── metrics.py
    ├── plot_from_logs.py
    ├── schemas.json
    ├── servicer.py
    ├── train.py
    └── unet.py
```
The `src` folder contains the source code. Put Python code or other files here.
The `Dockerfile` contains the commands necessary to install the requirements and the instructions to bundle the source code.
Here are the contents of the `Dockerfile`:
```
FROM supervisely/nn-base

# pytorch
RUN conda install -y -c soumith \
        magma-cuda90=2.3.0 \
    && conda install -y -c pytorch \
        pytorch=0.3.1 \
        torchvision=0.2.0 \
        cuda90=1.0 \
    && conda clean --all --yes

# sources
ENV PYTHONPATH /workdir:$PYTHONPATH
WORKDIR /workdir/src

ARG SOURCE_PATH
ARG RUN_SCRIPT
ENV RUN_SCRIPT=$RUN_SCRIPT

COPY supervisely_lib /workdir/supervisely_lib
COPY $SOURCE_PATH /workdir/src

ENTRYPOINT ["sh", "-c", "python -u ${RUN_SCRIPT}"]
```
So, what's happening here? First, we start from an already prepared base image with CUDA, TensorFlow and other libraries. We usually start from this particular image, but some architectures may require another version of CUDA or a different Linux distribution - in that case you can start from scratch.
Next, this particular model needs some additional libraries - in this case, PyTorch.
Finally, we copy the source code and set up some build-time arguments. As you may note, there are several arguments:
- `SOURCE_PATH` - the source code path. Usually we pass something like "nn/unet_v2/src" here
- `RUN_SCRIPT` - which Python file to run
The last one requires some additional explanation. We support three modes for each architecture:
- Train - to train a new model from scratch or with the help of transfer learning
- Inference - to run inference with an existing model on images
- Deploy - to use as a deployed API
Basically, it's the same Docker image built with a different `RUN_SCRIPT`. For example, to build the "inference" version we may set `RUN_SCRIPT` to `inference.py`, the file with the corresponding code.
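As a rough sketch, building the "inference" variant by hand from the repo root might look like this (the exact flags are wrapped by the helper script described below; the image tag here is just an example):

```
# Hypothetical manual build of the "inference" variant, run from the repo root
# so that the build context includes supervisely_lib (copied by the Dockerfile).
docker build \
    -f nn/unet_v2/Dockerfile \
    --build-arg SOURCE_PATH=nn/unet_v2/src \
    --build-arg RUN_SCRIPT=inference.py \
    -t unet_v2-inference .
```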
Now, let's build a fresh Docker image with UNet. We have a little helper script, `build-docker-image.sh`, which just runs a `docker build` command (like the one sketched above) with respect to the repo folder structure. For `unet_v2`, it will look like the following:
```
./build-docker-image.sh unet_v2 inference
```
The first argument is the folder with the corresponding network, the second is the entrypoint. The script will build an image and tag it as `${NETWORK}-${ENTRYPOINT}` - in this case, `unet_v2-inference`.
Connect the image to the Supervisely platform
Push the image to a Docker registry. You can use Docker Hub or your own private registry. Log in using the `docker login` command, then tag the image and push it, like this:
```
docker login ...
docker tag unet_v2-inference myname/unet_v2-inference
docker push myname/unet_v2-inference
```
Now open the Neural networks -> Architectures page and click the "Create" button. Enter a title and put the Docker image into the "Inference docker image" field. Please do not forget to include the tag (for example, `:latest`).
The new architecture has been created. Next, let's add a new model. Go to Neural networks -> Import and choose your newly created architecture. You will also need to attach the weights as a .tar archive. Those files will be provided at runtime in a local folder.
In this case, I suggest deploying our UNet model from the Model Zoo and downloading its weights by clicking the "three dots" on the Models page and selecting the "Download" option.
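If the downloaded weights live in a local folder, a minimal sketch of packing them for the Import page could look like this (`my_weights` is just a placeholder name):

```
# Pack a weights folder into a .tar archive for import.
# "my_weights" is a placeholder for wherever your weights actually live.
tar -cvf unet_v2_weights.tar my_weights
```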
One last thing: if you have pushed your images to a private Docker registry, do not forget to re-deploy your agent and provide the credentials under "Advanced settings".
That's it! Now, if you click the "Test" button, the agent on your computer will pull your image and execute the `RUN_SCRIPT`.
Local development
Of course, it's impractical to build and push a new image every time you change a single line. In that case you don't even need the agent - you can run the Docker image locally!
We created another helper, `run-as-developer.sh`. It will build and run your image and also mount the folder with the source code, so every file change is immediately applied inside the container. We also share some specific environment variables and files, like `$DISPLAY` - so you can even run your favorite IDE for debugging.
Also, the input images and model weights (see above) must be provided in `/sly_task_data`. For that purpose, the `nn/unet_v2/data` folder is shared when you run `run-as-developer.sh`. For an example input, you can look in the tasks folder of your agent directory: `~/.supervisely-agent/:token/tasks/:any-id`.
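To give a sense of what `run-as-developer.sh` does under the hood, here is a simplified sketch (the real script's flags may differ):

```
# Simplified approximation of run-as-developer.sh; the real script may differ.
# Mount the sources so host edits apply instantly inside the container,
# mount the data folder as /sly_task_data, and forward $DISPLAY for GUI tools.
nvidia-docker run --rm -it \
    -v "$(pwd)/nn/unet_v2/src:/workdir/src" \
    -v "$(pwd)/nn/unet_v2/data:/sly_task_data" \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --entrypoint bash \
    unet_v2-inference
```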