🔮 SuperDuperDB: Bring AI to your favourite database! Integrate, train and manage any AI models and APIs directly with your database and your data.


Bring AI to your favorite database!


🔮 SuperDuperDB is open-source: leave a star ⭐️ to support the project!


📢 Important Announcement!

On November 21st, we will officially launch SuperDuperDB with the release of v0.1.

The release will include:

  • Full integration of major SQL databases, including PostgreSQL, MySQL, SQLite, DuckDB, BigQuery, Snowflake, and many more
  • A massive overhaul of the docs
  • A revamped and modularized testing suite

โญ๏ธ Leave a star to be informed of more exciting updates!


SuperDuperDB is not another database. It is a framework that transforms your favorite database into an AI powerhouse:

  • A single scalable AI deployment of all your models and AI APIs, including output computation (inference), kept up-to-date automatically and immediately as your data changes.
  • A model trainer that allows easy training and fine-tuning of models simply by querying the database.
  • A feature store in which the model outputs are stored alongside the inputs in any data format.
  • A fully functional vector database that allows easy generation of vector embeddings and vector indexes of the data with preferred models and APIs.

⚡ Integrations (more coming soon):

Build AI applications easily without moving your data into complex pipelines and specialized vector databases. Integrate AI and vector search directly with your database, including real-time inference and model training, all through a simple Python interface!

Datastores

Unlock the power of SuperDuperDB to connect and manage various types of data sources effortlessly!

Three datastores are fully supported; five more have experimental support.

AI Frameworks

Leverage SuperDuperDB to discover insights from your data using a variety of AI models!

Three AI frameworks are fully supported.

AI APIs

Let SuperDuperDB make your applications smarter using a suite of ready-to-use AI models!

Three AI APIs are fully supported.

🔥 Featured Examples

Try our ready-to-use notebooks live in your browser.

  • Generative AI & chatbots
  • Vector search
  • Standard use cases (classification, regression, clustering, recommendation, etc.)
  • Highly custom AI use cases and workflows with specialized models

Featured notebooks:

  • Text-To-Image Search
  • Text-To-Video Search
  • Question the Docs
  • Semantic Search Engine
  • Classical Machine Learning
  • Cross-Framework Transfer Learning

🚀 Installation

1. Install SuperDuperDB via pip (~1 minute)

pip install superduperdb

2. Try SuperDuperDB via Docker (~2 minutes):

  • Need to install Docker? See the docs here.
docker run -p 8888:8888 superduperdb/demo:latest

📚 Tutorial

In this tutorial, you will learn how to integrate, train, and manage AI models and APIs directly with your database and your data. Visit the docs to learn more.

- Deploy ML/AI models to your database:

Automatically compute outputs (inference) with your database in a single environment.

import pymongo
from sklearn.svm import SVC

from superduperdb import superduper
from superduperdb.backends.mongodb import Collection  # import path may differ between versions

# Make your db superduper!
db = superduper(pymongo.MongoClient().my_db)

# Models and clients can be converted to SuperDuperDB objects with a simple wrapper.
model = superduper(SVC())

# Add the model to the database.
db.add(model)

# Predict on the selected data.
model.predict(X='input_col', db=db, select=Collection(name='test_documents').find({'_fold': 'valid'}))

- Train models directly from your database.

Simply by querying your database, without additional ingestion and pre-processing:

import pymongo
from sklearn.svm import SVC

from superduperdb import superduper
from superduperdb.backends.mongodb import Collection  # import path may differ between versions

# Make your db superduper!
db = superduper(pymongo.MongoClient().my_db)

# Models and clients can be converted to SuperDuperDB objects with a simple wrapper.
model = superduper(SVC())

# Train on the selected data.
model.train(X='input_col', y='target_col', db=db, select=Collection(name='test_documents').find({'_fold': 'valid'}))

- Vector-Search your data:

Use your existing favorite database as a vector search database, including model management and serving.

# Note: import paths may differ between SuperDuperDB versions.
from superduperdb import Listener, VectorIndex
from superduperdb.ext.openai import OpenAIEmbedding

# `collection` is assumed to be a Collection, e.g. Collection(name='documents').
# First a "Listener" makes sure vectors stay up-to-date.
indexing_listener = Listener(model=OpenAIEmbedding(), key='text', select=collection.find())

# This "Listener" is linked with a "VectorIndex".
db.add(VectorIndex('my-index', indexing_listener=indexing_listener))

# The "VectorIndex" may be used to search data. Items to be searched against are passed
# to the registered model and vectorized. No additional app layer is required.
db.execute(collection.like({'text': 'clothing item'}, 'my-index').find({'brand': 'Nike'}))
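To make the mechanics concrete, here is a minimal, self-contained sketch of what a vector index does conceptually: embeddings are stored alongside records, and a query is embedded and ranked by cosine similarity. Everything in this snippet (the toy vectors, `cosine`, `search`) is an illustrative stand-in, not SuperDuperDB API.

```python
import math

# Toy "embeddings": in practice a model such as OpenAIEmbedding produces these
# and a Listener keeps them in sync with the underlying records.
store = {
    'shirt':   [0.9, 0.1, 0.0],
    'sneaker': [0.1, 0.9, 0.0],
    'laptop':  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vector, n=1):
    # Rank stored items by similarity to the query vector, most similar first.
    ranked = sorted(store, key=lambda k: cosine(store[k], query_vector), reverse=True)
    return ranked[:n]

# A query vector close to 'shirt' retrieves 'shirt' first.
print(search([0.8, 0.2, 0.1]))  # ['shirt']
```

In SuperDuperDB the embedding, storage, and ranking steps above are handled by the registered model, the Listener, and the VectorIndex respectively.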

- Integrate AI APIs to work together with other models.

Use an OpenAI, PyTorch, or Hugging Face model as an embedding model for vector search.

# Create a ``VectorIndex`` with an ``OpenAIEmbedding`` indexing listener and add it to the database.
db.add(
    VectorIndex(
        identifier='my-index',
        indexing_listener=Listener(
            model=OpenAIEmbedding(identifier='text-embedding-ada-002'),
            key='abstract',
            select=Collection(name='wikipedia').find(),
        ),
    )
)
# The above also executes the embedding model (openai) with the select query on the key.

# Now we can use the vector index to search the Wikipedia abstracts by meaning.
cur = db.execute(
    Collection(name='wikipedia')
        .like({'abstract': 'philosophers'}, n=10, vector_index='my-index')
)

- Add a Llama 2 model directly to your database:

# Note: import paths may differ between SuperDuperDB versions.
import torch
import transformers
from transformers import AutoTokenizer

from superduperdb.ext.transformers import Pipeline

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

model = Pipeline(
    identifier='my-llama-2',
    task='text-generation',
    preprocess=tokenizer,
    object=pipeline,
    torch_dtype=torch.float16,
    device_map="auto",
)

# You can easily predict on your collection documents.
model.predict(
    X=Collection(name='test_documents').find(),
    db=db,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200
)

- Use model outputs as inputs to downstream models:

model.predict(
    X='input_col',
    db=db,
    select=coll.find().featurize({'X': '<upstream-model-id>'}),  # already registered upstream model-id
    listen=True,
)
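Conceptually, featurize chains models: the stored output of an already-registered upstream model becomes the input of the downstream model, and with listen=True this happens continuously as new data arrives. The following stand-alone snippet illustrates that pattern; the function names and the `_outputs` field here are invented for illustration and are not SuperDuperDB API.

```python
# Upstream "model": turns raw text into a feature (here, a word count).
def upstream(text):
    return len(text.split())

# Downstream "model": consumes the upstream output, not the raw input.
def downstream(word_count):
    return 'long' if word_count > 3 else 'short'

documents = [
    {'input_col': 'a quick brown fox jumps'},
    {'input_col': 'hi there'},
]

# What registering the upstream model and featurizing arrange automatically:
# compute and store the upstream output for each document, then feed that
# stored output to the downstream model instead of the raw column.
for doc in documents:
    doc['_outputs'] = {'upstream': upstream(doc['input_col'])}

predictions = [downstream(doc['_outputs']['upstream']) for doc in documents]
print(predictions)  # ['long', 'short']
```

The point of the real framework is that this wiring, including recomputation when documents change, is managed for you once both models are registered.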

๐Ÿค Community & Getting Help

If you have any problems, questions, comments, or ideas, join our Slack community or open an issue on GitHub.

🌱 Contributing

There are many ways to contribute, and they are not limited to writing code. All contributions are welcome.

Please see our Contributing Guide for details.

โค๏ธ Contributors

Thanks goes to these wonderful people:

License

SuperDuperDB is open-source and intended to be a community effort, and it wouldn't be possible without your support and enthusiasm. It is distributed under the terms of the Apache 2.0 license. Any contribution made to this project will be subject to the same provisions.

Join Us

We are looking for nice people who are invested in the problem we are trying to solve to join us full-time. Find roles that we are trying to fill here!
