yolov5rt - YOLOv5 Runtime Stack

What it is. Yet another implementation of Ultralytics's yolov5, with its modules refactored so that it can be deployed to backends such as LibTorch, onnxruntime and so on.

About the code. It follows the design principle of detr:

object detection should not be more difficult than classification, and should not require complex libraries for training and inference.

yolov5rt is very simple to implement and experiment with. You like the implementation of torchvision's faster-rcnn, retinanet or detr? You like yolov5? You'll love yolov5rt!

What's New

  • Support exporting to TorchScript model, Oct. 8, 2020.
  • Support inferring with the LibTorch cpp interface, Oct. 10, 2020.
  • Add TorchScript cpp inference example, Nov. 4, 2020.
  • Refactor YOLO modules and support dynamic batching inference, Nov. 16, 2020.
  • Support exporting to onnx and inferring with the onnxruntime interface, Nov. 17, 2020 (see the sketch after this list).
  • Add graph visualization tools, Nov. 21, 2020.
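
The onnxruntime path mentioned above can be exercised with a short, generic sketch like the one below. It only uses standard onnxruntime calls; the model path 'yolov5s.onnx', the NCHW layout and the 640x640 input size are placeholder assumptions, so check the actual exported graph (and the repo's own export utilities) before wiring up real data.

import numpy as np
import onnxruntime as ort

# Open the exported model with the CPU provider; the file name is a placeholder.
session = ort.InferenceSession('yolov5s.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name

# Dummy image batch in NCHW layout, scaled to [0, 1]; the 640x640 size is an assumption.
dummy_image = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Run the graph and inspect the output shapes.
outputs = session.run(None, {input_name: dummy_image})
print([output.shape for output in outputs])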

Usage

There are no extra compiled components in yolov5rt and package dependencies are minimal, so the code is very simple to use.

Expand to see the instructions on how to install dependencies via conda.

  • First, clone the repository locally:

    git clone https://github.com/zhiqwang/yolov5-rt-stack.git
    
  • Then, install PyTorch 1.7.0+ and torchvision 0.8.1+:

    conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
    
  • Install pycocotools (for evaluation on COCO) and scipy (for training):

    conda install cython scipy
    pip install -U 'pycocotools>=2.0.2'  # corresponds to https://github.com/ppwwyyxx/cocoapi
    
  • That's it; you should be good to train and evaluate detection models.

Loading via torch.hub

The models are also available via torch.hub. To load yolov5s with pretrained weights, simply do:

model = torch.hub.load('zhiqwang/yolov5-rt-stack', 'yolov5s', pretrained=True)
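
Below is a slightly fuller, self-contained sketch of using the hub model. The forward call assumes the model follows torchvision's detection interface (a list of CHW image tensors scaled to [0, 1] in, a list of dicts with boxes, labels and scores out), which is the design this README describes; adjust the call if the actual signature differs.

import torch

# Load yolov5s from torch.hub with pretrained weights and switch to eval mode.
model = torch.hub.load('zhiqwang/yolov5-rt-stack', 'yolov5s', pretrained=True)
model.eval()

# Dummy 640x640 RGB image in [0, 1]; replace with a real image tensor.
image = torch.rand(3, 640, 640)

with torch.no_grad():
    # Assumed torchvision-style interface: list of tensors in, list of dicts out.
    predictions = model([image])

print(predictions[0])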

Updating checkpoint from ultralytics/yolov5

The module state of yolov5rt differs in some respects from ultralytics/yolov5. We can load ultralytics's trained model checkpoint with minor changes, and we have converted ultralytics's latest release v3.1 checkpoint here.

Expand to see more information on how to update ultralytics's trained (or your own) model checkpoint.

  • If you train your model using ultralytics's repo, you should update the model checkpoint first. ultralytics's trained checkpoints have the limitation that they must be loaded from the root path of the ultralytics repo, so an important step is to remove this path dependence as follows:

    import torch

    # Note that the current working directory is the root of ultralytics/yolov5, so that the
    # pickled model classes inside the checkpoint can be resolved.
    ultralytics_weights = 'https://github.com/ultralytics/yolov5/releases/download/v3.1/yolov5s.pt'
    # torch.load cannot read a URL directly, so download and load the checkpoint via torch.hub
    checkpoint = torch.hub.load_state_dict_from_url(ultralytics_weights, map_location='cpu')['model']
    # desensitize_ultralytics_weights is the destination path for the path-independent weights
    torch.save(checkpoint.state_dict(), desensitize_ultralytics_weights)
    
  • Load the yolov5rt model as follows:

    from hubconf import yolov5s
    
    model = yolov5s()
    model.eval()
    
  • Now let's update the ultralytics/yolov5 trained checkpoint; see the conversion script for more information (a small verification sketch follows this list):

    from utils.updated_checkpoint import update_ultralytics_checkpoints
    
    model = update_ultralytics_checkpoints(model, desensitize_ultralytics_weights)
    # the updated checkpoint is saved to checkpoint_path_rt_stack
    torch.save(model.state_dict(), checkpoint_path_rt_stack)
    
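After the conversion, you can sanity-check the result by loading the saved state dict back into a freshly constructed yolov5rt model; mismatched keys will raise an error. This is a minimal sketch reusing the checkpoint_path_rt_stack placeholder from the step above.

import torch

from hubconf import yolov5s

# Rebuild the model and load the converted state dict saved in the previous step;
# checkpoint_path_rt_stack is the same placeholder path used above.
model = yolov5s()
state_dict = torch.load(checkpoint_path_rt_stack, map_location='cpu')
model.load_state_dict(state_dict)
model.eval()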

Inference on PyTorch backend

To read a source image and detect its objects, run:

python -m detect [--input_source YOUR_IMAGE_SOURCE_DIR]
                 [--labelmap ./notebooks/assets/coco.names]
                 [--output_dir ./data-bin/output]
                 [--min_size 640]
                 [--max_size 640]
                 [--save_img]
                 [--gpu]  # GPU switch, False by default

You can also see the inference-pytorch-export-libtorch notebook for more information.

Inference on LibTorch backend

We provide an example of getting LibTorch inference to work. For details, see the GitHub Actions workflow.
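
Before the C++ side can run, the model has to be serialized to TorchScript from Python. The snippet below is a minimal sketch that assumes the hub model scripts cleanly with torch.jit.script; the repo's own export utilities and the GitHub Actions workflow remain the authoritative reference.

import torch

# Load the Python model and serialize it to TorchScript for the LibTorch (C++) side.
model = torch.hub.load('zhiqwang/yolov5-rt-stack', 'yolov5s', pretrained=True)
model.eval()

# Assumes the model is scriptable as-is; the output file name is a placeholder.
scripted_model = torch.jit.script(model)
scripted_model.save('yolov5s.torchscript.pt')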

Model Graph Visualization

Now yolov5rt can draw the model graph directly; check out our visualize-jit-models notebook to see how to use it and visualize the model graph.
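
If you only want a quick textual view of the exported graph outside the notebook, plain TorchScript can print it. This is a generic PyTorch feature rather than the repo's visualization tool, and it assumes a TorchScript file like the one exported in the LibTorch section above.

import torch

# Load a previously exported TorchScript model (file name is a placeholder)
# and print its graph as a quick textual inspection.
scripted_model = torch.jit.load('yolov5s.torchscript.pt')
print(scripted_model.graph)  # the low-level TorchScript IR of the forward pass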

[Model graph visualization: yolov5.detail]

Acknowledgement

  • The implementation of yolov5 borrows code from ultralytics.
  • This repo borrows the architecture design and part of the code from torchvision.
