JAX Toolbox

License: Apache 2.0

JAX Toolbox provides a public CI, Docker images for popular JAX libraries, and optimized JAX examples to simplify and enhance your JAX development experience on NVIDIA GPUs. It supports JAX libraries such as MaxText, Paxml, and Pallas.

Frameworks and Supported Models

We support and test the following JAX frameworks and model architectures. More details about each model and available containers can be found in their respective READMEs.

| Framework | Models | Use cases | Container |
|---|---|---|---|
| maxtext | GPT, LLaMA, Gemma, Mistral, Mixtral | pre-training | ghcr.io/nvidia/jax:maxtext |
| paxml | GPT, LLaMA, MoE | pre-training, fine-tuning, LoRA | ghcr.io/nvidia/jax:pax |
| t5x | T5, ViT | pre-training, fine-tuning | ghcr.io/nvidia/jax:t5x |
| t5x | Imagen | pre-training | ghcr.io/nvidia/t5x:imagen-2023-10-02.v3 |
| big vision | PaliGemma | fine-tuning, evaluation | ghcr.io/nvidia/jax:gemma |
| levanter | GPT, LLaMA, MPT, Backpacks | pre-training, fine-tuning | ghcr.io/nvidia/jax:levanter |
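As a sketch of how one of these containers might be launched (the image tag comes from the table above; the GPU selection and shared-memory size are illustrative choices, not requirements):

```shell
# Pull the nightly MaxText container and start an interactive session.
# --gpus all exposes every NVIDIA GPU to the container; --shm-size is
# raised because the default 64 MB of /dev/shm can cause bus errors
# (see the FAQ section of this README).
docker pull ghcr.io/nvidia/jax:maxtext
docker run --gpus all -it --shm-size=1g ghcr.io/nvidia/jax:maxtext bash
```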

Build Pipeline Status

| Container | Notes |
|---|---|
| ghcr.io/nvidia/jax:base | no tests |
| ghcr.io/nvidia/jax:jax | |
| ghcr.io/nvidia/jax:levanter | |
| ghcr.io/nvidia/jax:equinox | tests disabled |
| ghcr.io/nvidia/jax:triton | |
| ghcr.io/nvidia/jax:upstream-t5x | |
| ghcr.io/nvidia/jax:t5x | |
| ghcr.io/nvidia/jax:upstream-pax | |
| ghcr.io/nvidia/jax:pax | |
| ghcr.io/nvidia/jax:maxtext | |
| ghcr.io/nvidia/jax:gemma | |

In all cases, ghcr.io/nvidia/jax:XXX points to the latest nightly build of the container for XXX. For a stable reference, use a dated tag of the form ghcr.io/nvidia/jax:XXX-YYYY-MM-DD.
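For reproducible environments, the dated form can be pinned explicitly (the date placeholder below must be replaced with an actual nightly build date):

```shell
# Pin a specific nightly instead of the moving nightly tag.
# YYYY-MM-DD is a placeholder; substitute a real build date.
docker pull ghcr.io/nvidia/jax:pax-YYYY-MM-DD
```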

In addition to the public CI, we also run internal CI tests on H100 SXM 80GB and A100 SXM 80GB GPUs.

Environment Variables

The JAX image ships with the following flags and environment variables set for performance tuning of XLA and NCCL:

| XLA Flag | Value | Explanation |
|---|---|---|
| --xla_gpu_enable_latency_hiding_scheduler | true | allows XLA to move communication collectives to increase overlap with compute kernels |
| --xla_gpu_enable_triton_gemm | false | use cuBLAS instead of Triton GEMM kernels |

| Environment Variable | Value | Explanation |
|---|---|---|
| CUDA_DEVICE_MAX_CONNECTIONS | 1 | use a single queue for GPU work to lower latency of stream operations; OK since XLA already orders launches |
| NCCL_NVLS_ENABLE | 0 | disables NVLink SHARP; future releases will re-enable this feature |

There are various other XLA flags users can set to improve performance. For a detailed explanation of these flags, please refer to the GPU performance doc. XLA flags can be tuned per workflow. For example, each script in contrib/gpu/scripts_gpu sets its own XLA flags.
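As a minimal sketch of per-workflow tuning, the container defaults above can be reproduced (or overridden) by setting `XLA_FLAGS` and the CUDA/NCCL variables before JAX is imported; the values below simply mirror the tables in this section:

```python
import os

# XLA reads XLA_FLAGS once at backend initialization, so set it
# before the first `import jax`.
os.environ["XLA_FLAGS"] = " ".join([
    "--xla_gpu_enable_latency_hiding_scheduler=true",
    "--xla_gpu_enable_triton_gemm=false",
])

# Container-default environment variables from the table above.
os.environ["CUDA_DEVICE_MAX_CONNECTIONS"] = "1"
os.environ["NCCL_NVLS_ENABLE"] = "0"

# import jax  # import only after the environment is configured
print(os.environ["XLA_FLAGS"])
```

A workflow-specific script can swap in its own flag list at the top of this snippet without touching the container image itself.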

For a list of previously used XLA flags that are no longer needed, please also refer to the GPU performance page.

Profiling

See this page for more information about how to profile JAX programs on GPU.

Frequently asked questions (FAQ)

`bus error` when running JAX in a docker container

Solution:

docker run -it --shm-size=1g ...

Explanation: The bus error might occur due to the size limitation of /dev/shm. You can address this by increasing the shared memory size using the --shm-size option when launching your container.
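To confirm the larger allocation took effect, the mount can be inspected from inside the container (a quick sanity check, not part of the official workflow):

```shell
# /dev/shm should report the size passed via --shm-size (1 GB here)
df -h /dev/shm
```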

enroot/pyxis reports error code 404 when importing multi-arch images

Problem description:

slurmstepd: error: pyxis:     [INFO] Authentication succeeded
slurmstepd: error: pyxis:     [INFO] Fetching image manifest list
slurmstepd: error: pyxis:     [INFO] Fetching image manifest
slurmstepd: error: pyxis:     [ERROR] URL https://ghcr.io/v2/nvidia/jax/manifests/<TAG> returned error code: 404 Not Found

Solution: Upgrade enroot or apply a single-file patch as mentioned in the enroot v3.4.0 release note.

Explanation: Docker traditionally used Docker Schema V2.2 for multi-arch manifest lists but switched to the Open Container Initiative (OCI) format in version 20.10. Enroot added support for the OCI format in version 3.4.0.

JAX on Public Clouds

Resources

Videos