Update README
laugh12321 committed Jul 15, 2024
1 parent 47ecc6e commit 4941654
Showing 2 changed files with 41 additions and 40 deletions.
40 changes: 20 additions & 20 deletions docs/cn/build_and_install.md
@@ -2,26 +2,6 @@

# Quick Compilation and Installation

## Installing `tensorrt_yolo`

To install the `tensorrt_yolo` module from PyPI, simply run the following command:

```bash
pip install -U tensorrt_yolo
```

If you want the latest development version or wish to contribute to the project, you can clone the repository from GitHub and install it as follows:

```bash
git clone https://github.com/laugh12321/TensorRT-YOLO  # Clone the repository
cd TensorRT-YOLO
pip install --upgrade build
python -m build
pip install dist/tensorrt_yolo/tensorrt_yolo-3.*-py3-none-any.whl
```

In these steps, you first clone the repository and build it locally, then install the generated Wheel package with `pip`, ensuring you get the latest version with the newest features and improvements.

## `Deploy` Compilation

### Requirements
@@ -42,4 +22,24 @@ xmake f -k shared --tensorrt="C:/Program Files/NVIDIA GPU Computing Toolkit/Tens
xmake -P . -r
```

## Installing `tensorrt_yolo`

To install the `tensorrt_yolo` module from PyPI, simply run the following command:

> Alternatively, download a pre-built Wheel package from the [Release](https://github.com/laugh12321/TensorRT-YOLO/releases) page and install it.

```bash
pip install -U tensorrt_yolo
```

If you need to build `tensorrt_yolo` yourself against a specific CUDA and TensorRT version, first complete the `Deploy` compilation, then build as follows:

```bash
pip install --upgrade build
python -m build
pip install dist/tensorrt_yolo/tensorrt_yolo-4.*-py3-none-any.whl
```
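As a side note, the `4.*` glob in the install step above matches whichever 4.x wheel `python -m build` just produced under `dist/`. A minimal sketch of how that glob resolves — the directory and wheel file below are stand-ins created purely for illustration, not the real build output:

```bash
# Stand-in layout mimicking what `python -m build` produces under dist/.
demo=$(mktemp -d)
mkdir -p "$demo/dist/tensorrt_yolo"
touch "$demo/dist/tensorrt_yolo/tensorrt_yolo-4.0.0-py3-none-any.whl"

# The 4.* pattern resolves to the concrete wheel filename.
ls "$demo"/dist/tensorrt_yolo/tensorrt_yolo-4.*-py3-none-any.whl
```

In the real workflow the wheel is produced by `python -m build` rather than `touch`; the glob simply spares you from typing the exact version.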

In these steps, you first clone the repository and build it locally, then install the generated Wheel package with `pip`, ensuring you get the latest version with the newest features and improvements.

During this process, you can use the xmake tool to choose between dynamic and static library builds according to your deployment needs, and specify the TensorRT installation path so that the TensorRT libraries are linked correctly during compilation. Xmake detects the CUDA installation path automatically; if you have multiple CUDA versions installed, you can select one with `--cuda`. The compiled files will be placed in the `lib` folder.
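As a sketch, a static-library configuration that pins both paths explicitly might look like the following on Linux — both installation paths are assumptions, so substitute your own:

```bash
# Hypothetical Linux install paths -- adjust to your system.
xmake f -k static --tensorrt="/usr/local/TensorRT-8.6" --cuda="/usr/local/cuda-11.8"
xmake -P . -r
```

`-k static` selects a static-library build; dropping `--cuda` lets xmake fall back to auto-detection when only one CUDA toolkit is installed.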
41 changes: 21 additions & 20 deletions docs/en/build_and_install.md
@@ -2,26 +2,6 @@ English | [中文](../cn/build_and_install.md)

# Quick Compilation and Installation

## Installing `tensorrt_yolo`

To install the `tensorrt_yolo` module from PyPI, simply execute the following command:

```bash
pip install -U tensorrt_yolo
```

If you wish to get the latest development version or contribute to the project, you can follow these steps to clone the code repository from GitHub and install:

```bash
git clone https://github.com/laugh12321/TensorRT-YOLO # Clone the code repository
cd TensorRT-YOLO
pip install --upgrade build
python -m build
pip install dist/tensorrt_yolo/tensorrt_yolo-3.*-py3-none-any.whl
```

In these steps, you can clone the code repository first, perform local builds, and then install the generated Wheel package using `pip`. This ensures that you install the latest version with the newest features and improvements.

## `Deploy` Compilation

### Requirements
@@ -42,4 +22,25 @@ xmake f -k shared --tensorrt="C:/Program Files/NVIDIA GPU Computing Toolkit/Tens
xmake -P . -r
```

## Installing `tensorrt_yolo`

To install the `tensorrt_yolo` module from PyPI, simply execute the following command:

> Or download the pre-built Wheel package from the [Release](https://github.com/laugh12321/TensorRT-YOLO/releases) page for installation.
```bash
pip install -U tensorrt_yolo
```

If you need to build `tensorrt_yolo` yourself against a specific CUDA and TensorRT version, first complete the `Deploy` compilation, and then build as follows:

```bash
pip install --upgrade build
python -m build
pip install dist/tensorrt_yolo/tensorrt_yolo-4.*-py3-none-any.whl
```
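Incidentally, the `4.*` glob in the install command resolves against whatever wheel `python -m build` left under `dist/`. Here is a minimal sketch of that resolution, using a throwaway directory and a stand-in wheel file created only for this demonstration:

```bash
# Fake dist/ layout standing in for the output of `python -m build`.
demo=$(mktemp -d)
mkdir -p "$demo/dist/tensorrt_yolo"
touch "$demo/dist/tensorrt_yolo/tensorrt_yolo-4.0.0-py3-none-any.whl"

# The glob expands to the concrete versioned wheel name.
ls "$demo"/dist/tensorrt_yolo/tensorrt_yolo-4.*-py3-none-any.whl
```

In practice the wheel comes from `python -m build`, and the glob saves you from hard-coding the exact version number.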

In these steps, you can clone the code repository first, perform local builds, and then install the generated Wheel package using `pip`. This ensures that you install the latest version with the newest features and improvements.

During this process, you can use the xmake tool to choose between dynamic and static library compilation based on your deployment needs. You can also specify the TensorRT installation path to ensure correct linking of TensorRT libraries during compilation. Xmake automatically detects the CUDA installation path, but if you have multiple CUDA versions, you can specify them using `--cuda`. The compiled files will be located in the `lib` folder.
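To make this concrete, a static-library configuration with both paths pinned explicitly might look like the following on Linux — the two installation paths here are assumptions and should be replaced with your own:

```bash
# Hypothetical Linux install paths -- adjust to your system.
xmake f -k static --tensorrt="/usr/local/TensorRT-8.6" --cuda="/usr/local/cuda-11.8"
xmake -P . -r
```

Use `-k shared` instead for a dynamic library, and omit `--cuda` to rely on auto-detection when only a single CUDA toolkit is present.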
