Stars
A concise but complete full-attention transformer with a set of promising experimental features from various papers
Differentiable Optimizers with Perturbations in PyTorch
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
Implementation of TabTransformer, attention network for tabular data, in PyTorch
Open source code for AlphaFold.
CLI for running the LEAN engine locally and in the cloud
Offline Reinforcement Learning (aka Batch Reinforcement Learning) on Atari 2600 games
Extensible Combinatorial Optimization Learning Environments
Lean Algorithmic Trading Engine by QuantConnect (Python, C#)
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
☕ A tool to generate requirements.txt for a Python project, and more than that. (IT IS NOT A PACKAGE MANAGEMENT TOOL)
🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
A collection of reference environments for offline reinforcement learning
A vectorized implementation of py_vollib that supports NumPy arrays and pandas Series and DataFrames.
PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models (CIKM 2021)
Author's PyTorch implementation of TD3 for OpenAI gym tasks
PyTorch library for fast transformer implementations
A repository for explaining feature attributions and feature interactions in deep neural networks.
A Vim-like interface for Firefox, inspired by Vimperator/Pentadactyl.
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Natural Gradient Boosting for Probabilistic Prediction
A Python-embedded modeling language for convex optimization problems.
Differentiable convex optimization layers
Understanding the Difficulty of Training Transformers