Program analysis tools built on tree-sitter (https://github.com/tree-sitter/tree-sitter).
UBGen can generate programs with undefined behaviors (e.g., buffer overflow, use-after-free).
A set of tools to assess and improve LLM security.
A llama3 implementation, one matrix multiplication at a time
Run evaluation on LLMs using the HumanEval benchmark
A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI.
Parsing, analyzing, and comparing source code across many languages
A Dataset of Python Challenges for AI Research
An unofficial guide to contributing to GCC, aimed at newbies
APPS: Automated Programming Progress Standard (NeurIPS 2021)
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024
"Dive into Deep Learning" (《动手学深度学习》): aimed at Chinese readers, with runnable code and open discussion. The Chinese and English editions are used for teaching at over 500 universities in more than 70 countries.
Code for the paper "Evaluating Large Language Models Trained on Code"
Bear is a tool that generates a compilation database for clang tooling.
Measuring Massive Multitask Language Understanding | ICLR 2021
Beyond the Imitation Game: a collaborative benchmark for measuring and extrapolating the capabilities of language models
"Effective Modern C++" - translation completed
Boost LaTeX typesetting efficiency with preview, compile, autocomplete, colorize, and more.
View a Git Graph of your repository in Visual Studio Code, and easily perform Git actions from the graph.
Slides and assignments for Hung-yi Lee's (李宏毅) Spring 2021/2022/2023 machine learning courses