Would you like to use an on-prem server or cluster for training and deploying AI models? Simply list hostnames and SSH creds, and dstack will let you run dev environments, tasks, and services—no Slurm or Kubernetes required. 🤯 https://lnkd.in/d58KnpQA
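For a sense of what "listing hostnames and SSH creds" looks like in practice, here is a minimal sketch of an SSH fleet configuration in dstack's YAML style; the user, key path, and host addresses are placeholders, and field names may differ slightly between dstack versions:

    type: fleet
    name: on-prem-cluster
    # group interconnected hosts into a cluster (optional, assumption)
    placement: cluster
    ssh_config:
      user: ubuntu                   # placeholder SSH user
      identity_file: ~/.ssh/id_rsa   # placeholder private key
      hosts:
        - 192.168.1.10               # placeholder host addresses
        - 192.168.1.11

Applying a file like this (for example with dstack apply -f fleet.dstack.yml) registers the machines as a fleet that dev environments, tasks, and services can then target.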
dstack
IT Services and IT Consulting
Open-source tools for managing AI infrastructure and containers
About
dstack is an open-source orchestration engine for managing clusters and running containers in the cloud and on-prem, built for AI engineers. dstack Sky is a global GPU marketplace offering the most affordable GPUs from multiple providers.
- Website: https://dstack.ai
- Industry: IT Services and IT Consulting
- Company size: 2–10 employees
- Headquarters: Munich
- Type: Privately held
- Founded: 2022
Locations
- Primary: Munich, DE
Employees of dstack
- Igor Artemev, VP of Tech @ EPNS
- Santhosh Basavarajappa, Growth-Oriented Sales & Marketing Professional
- Victor Skvortsov, Software Developer
- Arndt Hüsges, Serial Entrepreneur & Founder & CEO at Hüsges Group - The Expert - We make car workflows efficient, simple, and digital for everyone.
Updates
-
The 0.18.11 release, announced yesterday, is now GA. 🎉 In addition to the AMD support, it includes:
- Encryption of cloud credentials and user tokens for enhanced security
- Persisting run logs in an external service (AWS CloudWatch)
- Improved project access control
Dive into the details here: https://lnkd.in/dDCnuv4H
We’re excited to share that dstack now supports AMD GPUs, with RunPod being the first cloud provider to integrate them through our platform! Check out the details here: https://lnkd.in/dWbBDshX This is a significant step forward, and we’ll soon be expanding support to more cloud providers and on-prem servers. 🚀 #AI #AMD #LLM #dstack #RunPod
dstack adds support for AMD accelerators on RunPod
dstack.ai
-
dstack reposted this
We're proud to be the first platform supported by dstack for their AMD MI300X integration. Start using these powerful GPUs with ease on RunPod and check out their Hugging Face TGI example to see how you can deploy models like Meta Llama 3.1 70B in FP16 with just a few lines of code.
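For a rough idea of what such a deployment looks like, here is a hedged sketch of a dstack service configuration for TGI on an MI300X; the image tag, model ID, and GPU spec are illustrative assumptions rather than the exact example referenced above:

    type: service
    name: llama31-70b-tgi
    # assumed ROCm build of Hugging Face TGI; pin a concrete tag in practice
    image: ghcr.io/huggingface/text-generation-inference:latest-rocm
    env:
      - HF_TOKEN                     # passed through from your environment
      - MODEL_ID=meta-llama/Meta-Llama-3.1-70B-Instruct
    commands:
      - text-generation-launcher --port 8000
    port: 8000
    resources:
      gpu: MI300X:1                  # assumed GPU spec for RunPod's MI300X

In FP16, a 70B model needs roughly 140 GB for weights alone, which is why a single 192 GB MI300X can serve it without tensor parallelism.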
-
dstack reposted this
dstack is an awesome solution for anyone who wants to fine-tune and deploy LLMs in a cloud-infrastructure-agnostic way. Here we show how it can be integrated with OCI.
Next gen open-source AI for enterprise?? Well, we got you covered!! dstack and Oracle Cloud collaborated to enhance the developer experience for fine-tuning and deploying large language models (LLMs) using an open-source approach. You want to know more? Yeah, I thought so 😎 Scroll down to the comments section where you can find the details about the collaboration and how your AI/ML team can leverage dstack on OCI. Let me know your thoughts 👨🏻‍💻👩🏻‍💻 Feel free to share and repost ✔️ #oci #oracle #gcp #google #azure #microsoft #orcl #googl #msft #openai #ai #gpu #dstack #llm #genai #llmops #huggingface #llama
-
dstack reposted this
With dstack, you can manage any resources (in the cloud or on-premises) much more easily and in a uniform way. My latest blog post on Hugging Face introduces how to use dstack for fine-tuning and deployment of Google DeepMind's Gemma 7B model on Google Cloud Platform. This blog post would be useful if you are curious about leveraging multi-node cluster setups. It assumes that you fine-tune Gemma 7B on 3 nodes, each with 2 x A10 GPUs. https://lnkd.in/gqm7e9VC
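As a rough illustration of that setup, here is a minimal sketch of a multi-node dstack task; it assumes dstack's distributed-task environment variables (DSTACK_NODE_RANK, DSTACK_MASTER_NODE_IP) and a hypothetical train.py, and is not the exact configuration from the blog post:

    type: task
    name: gemma-7b-finetune
    nodes: 3                          # one run spanning 3 nodes
    python: "3.11"
    env:
      - HF_TOKEN                      # needed to pull the gated Gemma weights
    commands:
      - pip install torch transformers trl peft
      # hypothetical training script; node rank and master address are assumed
      # to be injected by dstack at runtime
      - torchrun --nnodes 3 --nproc_per_node 2 --node_rank $DSTACK_NODE_RANK --master_addr $DSTACK_MASTER_NODE_IP --master_port 29500 train.py
    resources:
      gpu: A10:2                      # 2 x A10 per node, as in the post

With 3 nodes and 2 GPUs per node, torchrun launches 6 workers in total, matching the cluster described in the post.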
-
dstack reposted this
Wow, dstack is on Hacker News! Give us a shout-out!
-
dstack reposted this
Using open source LLMs on OCI with dstack
blogs.oracle.com
-
Recently, dstack introduced support for Volumes, allowing network storage use across cloud providers for your workloads. We’ve put together a detailed blog post on how to leverage this feature with RunPod to optimize your model inference and significantly reduce cold start times. https://lnkd.in/dT3Pwqn2 P.S.: Using volumes can not only optimize inference cold start times but also enhance the efficiency of data and model checkpoint loading during training and fine-tuning. Read the blog post to learn more.
Optimizing inference cold starts on RunPod with volumes - dstack
dstack.ai
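To make the volume workflow above concrete, here is a hedged sketch of what it could look like: a standalone volume configuration plus an attachment in a run configuration. The backend, region, size, and mount path are placeholder assumptions, not the exact setup from the blog post.

    type: volume
    name: model-cache
    backend: runpod                  # provider the volume is created in (assumption)
    region: EU-RO-1                  # placeholder region
    size: 100GB

    # ...and in the service or task that serves the model, mount the volume so
    # weights persist across runs instead of being re-downloaded on every cold start:
    volumes:
      - name: model-cache
        path: /data                  # placeholder mount path

Keeping model weights on a network volume is what cuts the cold start down to attaching the volume and loading from disk, rather than pulling tens of gigabytes over the network on every scale-up.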