Autodistill Google Cloud Vision module for use in training a custom, fine-tuned model.

Autodistill GCP Vision Module

This repository contains the code supporting the Google Cloud Object Localization API base model for use with Autodistill.

With this repository, you can label images using the Google Cloud Object Localization API and train a fine-tuned model using the generated labels.

This is ideal if you want to train a model that you own on a custom dataset.

You can then use your trained model on your computer using Autodistill, or at the edge or in the cloud by deploying with Roboflow Inference.

See our Autodistill modules for AWS Rekognition and Azure Custom Vision if you are interested in using those services instead.

Read the full Autodistill documentation.

Read the GCP Vision Autodistill documentation.

Installation

Note

Using this project will incur billing charges for calls to the Google Cloud Object Localization API; this package makes one API call per image you label. Refer to the Google Cloud Vision pricing page for current rates.

To use the Google Cloud Object Localization API with autodistill, you need to install the following dependency:

pip install autodistill-gcp-vision

You will then need to authenticate with the gcloud CLI.

Learn how to install gcloud.

Learn how to set up and authenticate with gcloud.
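For a typical local setup, authentication looks like the following (the project ID is a placeholder; consult the gcloud guides linked above for your environment):

```shell
# Log in and create Application Default Credentials,
# which the Google Cloud client libraries pick up automatically
gcloud auth application-default login

# Point gcloud at the project that has the Vision API enabled
gcloud config set project YOUR_PROJECT_ID
```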

Quickstart

from autodistill_gcp_vision import GCPVision
from autodistill.detection import CaptionOntology
import supervision as sv
import cv2

# define an ontology to map class names to our Google Cloud Object Localization API prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = GCPVision(
    ontology=CaptionOntology(
        {
            "Person": "Person",
            "a forklift": "forklift"
        }
    )
)

detections = base_model.predict("image.jpeg")
print(detections)

# annotate predictions on an image
classes = base_model.ontology.classes()

box_annotator = sv.BoxAnnotator()

labels = [
    f"{classes[class_id]} {confidence:0.2f}"
    for _, _, confidence, class_id, _
    in detections
]

image = cv2.imread("image.jpeg")

annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections,
    labels=labels
)

sv.plot_image(image=annotated_frame, size=(16, 16))
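To go from single-image predictions to a model you own, label a whole folder of images and train on the generated dataset. A sketch of the standard Autodistill workflow (the folder names are placeholders, and the YOLOv8 target model assumes you have run `pip install autodistill-yolov8`; running this makes one billed API call per image):

```python
from autodistill_gcp_vision import GCPVision
from autodistill.detection import CaptionOntology

base_model = GCPVision(
    ontology=CaptionOntology({"Person": "Person", "a forklift": "forklift"})
)

# label every image in ./images and write the dataset to ./dataset
base_model.label(input_folder="./images", output_folder="./dataset")

# train a target model on the generated labels
from autodistill_yolov8 import YOLOv8

target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=200)
```

The trained weights can then be deployed on your own hardware or with Roboflow Inference, as described above.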

License

This project is licensed under an MIT license.

🏆 Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!
