Update README.md
README.md
CHANGED
@@ -41,9 +41,30 @@ For model conversion and deployment guidance:
## How to Use
-BEVFormer requires multi-view camera inputs (typically 6 views: front, front-left, front-right, back, back-left, back-right).
-
-
+BEVFormer requires multi-view camera inputs (typically 6 views: front, front-left, front-right, back, back-left, back-right).
+
+### Prerequisites
+
+1. **Environment:** Ensure the required Python environment is activated (e.g., using Conda or a virtual environment) with the following core packages installed (a setup sketch follows this list):
+   * **NPU Runtime:** `axengine` (PyAXEngine)
+   * **Core Libraries:** `numpy` (>= 1.22.0), `opencv-python` (`cv2`), `tqdm`, and `cffi`
+
+   *(Recommended: use a dedicated Conda environment to manage these dependencies.)*
+
+2. **Model/Data:** Ensure the compiled `.axmodel`, `inference_config.json`, and the input data directory (`inference_data/`) are available on the host.
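+
+A minimal setup sketch, assuming a pip-managed environment; `axengine` (PyAXEngine) is usually installed from the wheel provided for the board rather than from PyPI, so that step appears only as a comment:
+
+```bash
+# Install the core Python dependencies listed above (pip-based install assumed).
+pip install "numpy>=1.22.0" opencv-python tqdm cffi
+
+# axengine (PyAXEngine): install the prebuilt wheel provided for your device, e.g.
+#   pip install <pyaxengine-wheel>.whl
+```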
+
+### Inference Command
+
+Run the inference script by providing the compiled model, the configuration file, and the data directory.
+
+```bash
+# 1. Activate the Conda environment (if you created an isolated environment earlier)
+# conda activate ax_env
+
+# 2. Run inference
+python inference_axmodel.py compiled.axmodel inference_config.json inference_data/ --output-dir inference_results
+```
+
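+A quick sanity check around the run, using the same file names as the command above (the result file names themselves are produced by the script and are not listed here):
+
+```bash
+# Inputs expected in the working directory before the run:
+ls compiled.axmodel inference_config.json inference_data/
+
+# After the run, the visualizations are written under the --output-dir path:
+ls inference_results/
+```
+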
### Inference with AX650 Host
```
(base) root@ax650:~/data# python inference_axmodel.py compiled.axmodel inference_config.json inference_data/ --output-dir ./inference_results
@@ -67,3 +88,7 @@ Save scene results: 100%|██████████████████
### Results
The model generates a 3D detection map projected onto the Bird's-Eye-View plane. Results are saved as images and videos which visualize the ego-vehicle and surrounding detected objects.
+**Example Visualization:**
+
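+
+If you are working on the board over SSH, one way to view the saved images/videos is to copy them to your workstation; a minimal sketch, with the device address as a placeholder and paths matching the console session above:
+
+```bash
+# Pull the visualizations off the AX650 host for local viewing.
+# Replace <ax650-ip> with the board's address; adjust paths to your setup.
+scp -r root@<ax650-ip>:~/data/inference_results ./inference_results
+```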