Pinned Repositories
- flux-fp8-api (Python · 2), forked from aredden/flux-fp8-api
  Flux diffusion model implementation using quantized fp8 matmul; the remaining layers use faster half-precision accumulation, making it roughly 2x faster on consumer devices.
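The fp8 idea in that repo can be illustrated with a small simulation. This is a minimal sketch, not code from flux-fp8-api: it fakes fp8 e4m3 quantization in NumPy (clamp to the format's ±448 range, round the mantissa to a few bits) and then does the matmul in higher precision, standing in for the hardware's fast accumulate. All function names here are hypothetical.

```python
import numpy as np

def quantize_fp8_e4m3(x):
    # Simulated fp8 e4m3: clamp to the format's dynamic range (+-448)
    # and round the mantissa to ~4 significant bits. This is only an
    # approximation of real fp8 rounding behavior.
    x = np.clip(x, -448.0, 448.0)
    m, e = np.frexp(x)              # mantissa in [0.5, 1), exponent
    m = np.round(m * 16.0) / 16.0   # keep ~4 bits of mantissa precision
    return np.ldexp(m, e).astype(np.float32)

def fp8_matmul(a, b):
    # Quantize both operands to simulated fp8, then accumulate in
    # float32 (standing in for the faster half-precision accumulate
    # the repo description mentions).
    return quantize_fp8_e4m3(a) @ quantize_fp8_e4m3(b)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

exact = a @ b
approx = fp8_matmul(a, b)
rel_err = np.abs(approx - exact).mean() / np.abs(exact).mean()
print(f"mean relative error: {rel_err:.3f}")
```

The point of the sketch is the trade-off: the low-bit inputs introduce a small relative error, while the weight memory and matmul bandwidth drop sharply, which is where the speedup on consumer GPUs comes from.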
- mlx-lm-notebooks (Jupyter Notebook · 5)
  Apple MLX language model (mlx-lm) notebooks: exploration and tinkering.
- cogvlm-image-caption
  Using CogVLM and CogAgent for image captioning.
- mlx-funbox
  mlx and mlx-lm CLI toolbox for my own personal use, and maybe yours too.