- Shanghai
- blog.nuullll.com
Stars
Development repository for the Triton language and compiler
OpenAI Triton backend for Intel® GPUs
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
A multilingual large voice generation model, providing full-stack inference, training, and deployment capabilities.
GEMM performance kernels for Intel GPUs, Nvidia GPUs, and Intel CPUs, written using the SYCL joint matrix extension
AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU.
Standardized Serverless ML Inference Platform on Kubernetes
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Stable Diffusion and Flux in pure C/C++
Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains
The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
MiniCPM-V 2.6: A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone
Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
Intel® NPU Acceleration Library
An implementation of the NTQQ protocol in pure C#, derived from Konata.Core
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
A Gradio web UI for Large Language Models.
Stable Diffusion web UI
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU su…
App showcasing multiple real-time diffusion models pipelines with Diffusers
Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference