
This repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

SAM - Local APP

cv2 DEMO

NEWS!!!

Installation

The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions at https://pytorch.org/get-started/locally/ to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.

We have tested the setup below on an RTX 4090, an RTX 3060 Ti, and a GTX 1060 (6 GB):
Python 3.8, PyTorch 2.0.0 (py3.8_cuda11.7_cudnn8.5.0_0), torchvision 0.15.0

Install Segment Anything:

git clone https://github.com/derekray311511/segment-anything.git
cd segment-anything; pip install -e .

The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. jupyter is also required to run the example notebooks.

pip install opencv-python pycocotools matplotlib onnxruntime onnx

Model Checkpoints

You can download the model checkpoints from the official Segment Anything repository: https://github.com/facebookresearch/segment-anything#model-checkpoints (sam_vit_h_4b8939.pth, sam_vit_l_0b3195.pth, or sam_vit_b_01ec64.pth for the vit_h, vit_l, and vit_b model types, respectively).

Run

python scripts/select_obj.py --img /PATH/TO/YOUR/IMG.file_type --output /OUTPUT/FILE/NAME --model_type MODEL_TYPE --checkpoint /PATH/TO/MODEL

MODEL_TYPE is one of vit_h, vit_l, or vit_b.
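
For example, to run the app with the ViT-H model (the image, output, and checkpoint paths below are illustrative placeholders, not files shipped with the repository):

python scripts/select_obj.py --img images/demo.jpg --output output/demo --model_type vit_h --checkpoint models/sam_vit_h_4b8939.pth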

Functions

Mode

  • Auto: Segment all objects in the image
  • Custom: Select object(s) with points or boxes using mouse clicks
  • View: View the masks you just created; manipulation is disabled

Auto mode

  • Press SPACE to run inference and segment all objects in the image (see the sketch below)
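
Auto mode corresponds roughly to SAM's automatic mask generation. The sketch below uses the upstream segment_anything API rather than this app's internals, and the checkpoint and image paths are placeholders:

import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM model (checkpoint path is a placeholder)
sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth")
sam.to("cuda")  # or "cpu" if no GPU is available

# OpenCV reads BGR; SAM expects RGB
image = cv2.cvtColor(cv2.imread("images/demo.jpg"), cv2.COLOR_BGR2RGB)

# Generate masks for every object found in the image
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)  # list of dicts with "segmentation", "bbox", "area", ...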

Custom mode

  • Point select: Press p to switch to the point select function
    • a: Positive prompt
    • d: Negative prompt
  • Box select: Press b to switch to the box select function (see the API sketch after this list)
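
Point and box selection map onto SAM's prompt-based predictor. The sketch below again uses the upstream segment_anything API (the coordinates, box values, and paths are placeholders chosen only for illustration):

import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="models/sam_vit_b_01ec64.pth")  # placeholder path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("images/demo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Point prompts in (x, y) pixel coordinates: label 1 = positive, label 0 = negative
point_coords = np.array([[500, 375], [400, 300]])
point_labels = np.array([1, 0])
masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)

# A box prompt in XYXY pixel coordinates
box = np.array([100, 100, 400, 400])
masks, scores, logits = predictor.predict(box=box, multimask_output=False)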

View mode

  • Press v to toggle between view mode and the previous mode

Shortcut Table

Function               Key
---------------------  -----
Switch to auto mode    enter
Switch to view mode    v
Point select mode      p
Box select mode        b
Positive prompt        a
Negative prompt        d
Save image             s
Inference              SPACE
Exit                   ESC
