| repo_id (string, 19–138 chars) | file_path (string, 32–200 chars) | content (string, 1–12.9M chars) | __index_level_0__ (int64) |
|---|---|---|---|
apollo_public_repos/apollo-model-centerpoint/deploy/pointpillars/cpp | apollo_public_repos/apollo-model-centerpoint/deploy/pointpillars/cpp/custom_ops/iou3d_nms_api.cpp | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/pointpillars/cpp/cmake | apollo_public_repos/apollo-model-centerpoint/deploy/pointpillars/cpp/cmake/external/boost.cmake | include(ExternalProject)
set(BOOST_PROJECT "extern_boost")
# To release PaddlePaddle as a pip package, we have to follow the
# manylinux1 standard, which features as old Linux kernels and
# compilers as possible and recommends CentOS 5. Indeed, the earliest
# CentOS version that works with NVIDIA CUDA is CentOS ... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/paconv | apollo_public_repos/apollo-model-centerpoint/deploy/paconv/python/infer.py | # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by appli... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/python/infer.py | # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applic... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/main.cc | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/CMakeLists.txt | cmake_minimum_required(VERSION 3.0)
project(cpp_inference_demo CXX C)
option(WITH_MKL "Compile demo with MKL/OpenBlas support, default use MKL." ON)
option(WITH_GPU "Compile demo with GPU/CPU, default use CPU." ON)
option(USE_TENSORRT "Compile demo with TensorRT." ON)
option(CUS... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/compile.sh | # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by appli... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/pointnet2/group_points_gpu.cu | /*
Stacked-batch-data version of point grouping, modified from the original
implementation of the official PointNet++ code. Written by Shaoshuai Shi. All Rights
Reserved 2019-2020.
*/
#include "paddle/include/experimental/ext_all.h"
#define THREADS_PER_BLOCK 256
#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0))
__global... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/pointnet2/sampling.cc | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/pointnet2/sampling_gpu.cu | #include <cmath>
#include "paddle/include/experimental/ext_all.h"
#define TOTAL_THREADS 1024
#define THREADS_PER_BLOCK 256
#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0))
inline int opt_n_threads(int work_size) {
const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);
return max(min(1 << po... | 0 |
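The `DIVUP` macro that recurs in these CUDA sources computes a ceiling division — the number of thread blocks needed to cover `m` work items at `n` threads per block. A minimal Python sketch of the same arithmetic, for illustration only (the name mirrors the macro, not any Paddle3D API):

```python
def divup(m: int, n: int) -> int:
    """Ceiling division for positive ints: blocks needed to cover m items."""
    return m // n + (1 if m % n > 0 else 0)

# With THREADS_PER_BLOCK = 256, covering 1000 points needs 4 blocks.
print(divup(1000, 256))  # 4
```

This avoids the float round-trip of `ceil(m / n)` and is the standard idiom for CUDA launch-grid sizing.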
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/pointnet2/voxel_query.cc | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/pointnet2/voxel_query_gpu.cu | #include <curand_kernel.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include "paddle/include/experimental/ext_all.h"
#define THREADS_PER_BLOCK 256
#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0))
__global__ void voxel_query_kernel_stack(int M, int R1, int R2, int R3,
... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/pointnet2/group_points.cc | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/voxel/voxelize_op.cc | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/voxel/voxelize_op.cu | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/iou3d_nms/iou3d_cpu.cpp | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/iou3d_nms/iou3d_nms.cpp | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/iou3d_nms/iou3d_nms_kernel.cu | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/iou3d_nms/iou3d_nms.h | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/iou3d_nms/iou3d_cpu.h | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/custom_ops/iou3d_nms/iou3d_nms_api.cpp | // Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required... | 0 |
apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/cmake | apollo_public_repos/apollo-model-centerpoint/deploy/voxel_rcnn/cpp/cmake/external/boost.cmake | include(ExternalProject)
set(BOOST_PROJECT "extern_boost")
# To release PaddlePaddle as a pip package, we have to follow the
# manylinux1 standard, which features as old Linux kernels and
# compilers as possible and recommends CentOS 5. Indeed, the earliest
# CentOS version that works with NVIDIA CUDA is CentOS ... | 0 |
apollo_public_repos/apollo-model-centerpoint/tests | apollo_public_repos/apollo-model-centerpoint/tests/apis/test_scheduler.py | import unittest
import paddle3d
class SchedulerTestCase(unittest.TestCase):
"""
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.scheduler = paddle3d.apis.Scheduler(
save_interval=10, log_interval=5, do_eval=True)
def test_status(self):
... | 0 |
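The test above constructs a `Scheduler(save_interval=10, log_interval=5, do_eval=True)`. A hedged sketch of how such interval-based scheduling typically works — this is an assumption for illustration, not Paddle3D's actual implementation:

```python
class SchedulerSketch:
    """Illustrative only: decide per-step whether to log, save, and evaluate."""

    def __init__(self, save_interval: int, log_interval: int, do_eval: bool):
        self.save_interval = save_interval
        self.log_interval = log_interval
        self.do_eval = do_eval
        self.step_count = 0

    def step(self) -> dict:
        self.step_count += 1
        do_save = self.step_count % self.save_interval == 0
        return {
            "do_log": self.step_count % self.log_interval == 0,
            "do_save": do_save,
            # evaluation assumed to piggyback on checkpointing
            "do_eval": self.do_eval and do_save,
        }

s = SchedulerSketch(save_interval=10, log_interval=5, do_eval=True)
status = [s.step() for _ in range(10)]
print(status[4]["do_log"], status[9]["do_save"])  # True True
```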
apollo_public_repos/apollo-model-centerpoint/tests | apollo_public_repos/apollo-model-centerpoint/tests/datasets/test_kitti_dataset.py | import unittest
import numpy as np
import paddle
import paddle3d
class KittiMonoDatasetTestCase(unittest.TestCase):
"""
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
#prepare dataset to temp dir
self.kitti_train = paddle3d.datasets.KittiMonoDataset(
... | 0 |
apollo_public_repos/apollo-model-centerpoint/tests | apollo_public_repos/apollo-model-centerpoint/tests/datasets/test_nuscenes_dataset.py | import unittest
import numpy as np
import paddle
import paddle3d
class NuscenesPCDatasetTestCase(unittest.TestCase):
"""
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
#prepare dataset to temp dir
self.nuscenes_minitrain = paddle3d.datasets.NuscenesPC... | 0 |
apollo_public_repos/apollo-model-centerpoint | apollo_public_repos/apollo-model-centerpoint/docs/release_note.md | # Release Notes
## v1.0
2022.12.27
### New Features
* The new version 1.0 of Paddle3D is released, providing the following features
* We support multiple types of 3D perception models, including monocular 3D models SMOKE/CaDDN/DD3D, pointcloud detection models PointPillars/CenterPoint/IA-SSD/PV-RCNN/Voxel... | 0 |
apollo_public_repos/apollo-model-centerpoint | apollo_public_repos/apollo-model-centerpoint/docs/api.md | # 训练
* [paddle3d.apis.Checkpoint](apis/checkpoint.md)
* [paddle3d.apis.Config](apis/config.md)
* [paddle3d.apis.Scheduler](apis/scheduler.md)
* [paddle3d.apis.Trainer](apis/trainer.md)
# 模型
* [paddle3d.models.SMOKE](apis/models/smoke.md)
# 数据集
* [paddle3d.datasets.KittiMonoDataset](apis/datasets/kitti_mono_data... | 0 |
apollo_public_repos/apollo-model-centerpoint | apollo_public_repos/apollo-model-centerpoint/docs/quickstart.md | # 快速开始
本文以SMOKE模型和KITTI数据集为例,介绍如何基于Paddle3D进行模型训练、评估、可视化的全流程操作。其他模型的全流程操作与此一致,各模型详细的使用教程和benchmark可参考[模型文档](./models)。
## 准备工作
在开始本教程之前,请确保已经按照 [安装文档](./installation.md) 完成了相关的准备工作
<br>
## 模型训练
**单卡训练**
使用如下命令启动单卡训练,由于一次完整的训练流程耗时较久,我们只训练100个iter进行快速体验,下面的命令在Tesla V100上大约耗时2分钟
```shell
python tools/train.py --co... | 0 |
apollo_public_repos/apollo-model-centerpoint | apollo_public_repos/apollo-model-centerpoint/docs/configuration.md | # 配置文件详解
Paddle3D支持通过配置文件来描述相关的任务,从而实现配置化驱动的训练、评估、模型导出等流程,Paddle3D的配置化文件具备以下特点:
* 以yaml格式进行编写
* 支持用户配置模型、数据集、训练超参等配置项
* 通过特定的关键字 `type` 指定组件类型,并将其他参数作为实参来初始化组件
* 支持加载PaddleSeg和PaddleDetection中的组件:
* 在指定类型 `type` 时,加上 `$paddledet.` 前缀即可加载PaddleDetection的组件。
* 在指定类型 `type` 时,加上 `$paddleseg.` 前缀即可加载PaddleSeg的组件... | 0 |
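The `configuration.md` excerpt describes the `type`-keyed component syntax and the `$paddledet.` / `$paddleseg.` prefixes for borrowing components from PaddleDetection and PaddleSeg. A hypothetical YAML fragment illustrating the convention — the backbone name here is a placeholder, not a verified PaddleDetection class:

```yaml
train_dataset:
  type: KittiPCDataset          # resolved to a Paddle3D component
  dataset_root: datasets/KITTI
model:
  backbone:
    type: $paddledet.ResNet     # "$paddledet." prefix loads a PaddleDetection component
```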
apollo_public_repos/apollo-model-centerpoint | apollo_public_repos/apollo-model-centerpoint/docs/installation.md | # 1. 安装教程
- [1. 安装教程](#1-安装教程)
- [1.1. 环境要求](#11-环境要求)
- [1.2. 安装说明](#12-安装说明)
- [1.2.1. 安装MiniConda](#121-安装miniconda)
- [1.2.2. 安装PaddlePaddle](#122-安装paddlepaddle)
- [1.2.2.1. 创建虚拟环境](#1221-创建虚拟环境)
- [1.2.2.2. 进入 conda 虚拟环境](#1222-进入-conda-虚拟环境)
- [1.2.2.3. 添加清华源(可选)](#1223-添加清华源可选)
... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs | apollo_public_repos/apollo-model-centerpoint/docs/apis/trainer.md | # paddle3d.apis.Trainer
训练器对象,支持在指定的数据集上训练和评估模型
## \_\_init\_\_
* **参数**
* model: 待训练或者评估的模型
* iters: 更新的训练步数,可以不指定,与epochs互斥,当指定iters时,epochs不生效
* epochs: 更新的训练轮次,可以不指定
* optimizer: 训练所用的优化器
* train_dataset: 训练数据集
* val_dataset: 评估数据集,可以不指定
* resume: 是否从检查点中恢复到上一... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs | apollo_public_repos/apollo-model-centerpoint/docs/apis/scheduler.md | # paddle3d.apis.SchedulerABC
调度器抽象基类,定义调度器应该实现的方法
## step
通知调度器对象步进一次,并返回当前步的调度状态 `SchedulerStatus`
<br>
# paddle3d.apis.Scheduler
调度器类,继承自SchedulerABC,用于决定Trainer训练过程中的调度行为,包括:
* 是否打印日志
* 是否保存检查点
* 是否执行评估操作
## \_\_init\_\_
* **参数**
* save_interval: 保存检查点的间隔步数
* log_interval: 打印日志... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs | apollo_public_repos/apollo-model-centerpoint/docs/apis/checkpoint.md | # paddle3d.apis.CheckpointABC
检查点抽象基类,定义检查点应该实现的方法
## have
检查点中是否保存了指定tag的信息
* **参数**
* tag: 数据tag
## get
获取检查点中的指定信息
* **参数**
* tag: 数据tag
## push
保存一组模型参数和优化器参数到检查点中
* **参数**
* params_dict: 待保存的模型参数
* opt_dict: 待保存的优化器参数
* kwargs: 其余参数,和各个继承类实现有关
## pop
删除检查点队列中最先保... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs | apollo_public_repos/apollo-model-centerpoint/docs/apis/config.md | # paddle3d.apis.Config
配置类方法,用于解析配置文件(yaml格式),提取文件中指定的组件并实例化成对应的Paddle3D对象
## \_\_init\_\_
* **参数**
* path: 配置文件路径
* learning_rate: 更新的学习率参数,可以不指定
* batch_size: 更新的batch_size,可以不指定
* iters: 更新的训练步数,可以不指定
* epochs: 更新的训练轮次,可以不指定
*注意:使用一个 batch 数据对模型进行一次参数更新的过程称之为一步,iters 即为训练过程中的训练步数... | 0 |
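The note distinguishes `iters` (parameter-update steps, one per batch) from `epochs` (full passes over the dataset). Under that definition, with `N` samples and batch size `B`, one epoch is `ceil(N / B)` steps. A quick illustrative check — this helper is not Paddle3D code:

```python
import math

def iters_for(num_samples: int, batch_size: int, epochs: int) -> int:
    """Total update steps: one step = one batch = one parameter update."""
    steps_per_epoch = math.ceil(num_samples / batch_size)
    return steps_per_epoch * epochs

# e.g. KITTI's 3712 training samples at batch_size 8 for 2 epochs
print(iters_for(3712, 8, 2))  # 928
```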
apollo_public_repos/apollo-model-centerpoint/docs/apis | apollo_public_repos/apollo-model-centerpoint/docs/apis/datasets/kitti_pointcloud_dataset.md | # paddle3d.datasets.KittiPCDataset
KITTI点云检测数据集,数据集信息请参考[KITTI官网](http://www.cvlibs.net/datasets/kitti/)
*注意:KITTI官网只区分了训练集和测试集,我们遵循业界的普遍做法,将7481个训练集样本,进一步划分为3712个训练集样本和3769个验证集样本*
## \_\_init\_\_
* **参数**
* dataset_root: 数据集的根目录
* mode: 数据集模式,支持 `train` / `val` / `trainval` / `test` 等格式
* tran... | 0 |
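Both KITTI dataset docs note that the 7481 official training samples are split into 3712 train and 3769 val samples; the split is exhaustive and disjoint, which a one-line check confirms:

```python
train, val = 3712, 3769
total = train + val
assert total == 7481  # matches KITTI's official training-set size
print(f"train fraction: {train / total:.3f}")  # ~0.496
```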
apollo_public_repos/apollo-model-centerpoint/docs/apis | apollo_public_repos/apollo-model-centerpoint/docs/apis/datasets/kitti_mono_dataset.md | # paddle3d.datasets.KittiMonoDataset
KITTI单目3D检测数据集,数据集信息请参考[KITTI官网](http://www.cvlibs.net/datasets/kitti/)
*注意:KITTI官网只区分了训练集和测试集,我们遵循业界的普遍做法,将7481个训练集样本,进一步划分为3712个训练集样本和3769个验证集样本*
## \_\_init\_\_
* **参数**
* dataset_root: 数据集的根目录
* mode: 数据集模式,支持 `train` / `val` / `trainval` / `test` 等格式
* ... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/apis | apollo_public_repos/apollo-model-centerpoint/docs/apis/datasets/nuscenes_pointcloud_dataset.md | # paddle3d.datasets.NuscenesPCDataset
Nuscenes点云检测数据集,数据集信息请参考[NuScenes官网](https://www.nuscenes.org/)
## \_\_init\_\_
* **参数**
* dataset_root: 数据集的根目录
* mode: 数据集模式,支持 `train` / `val` / `trainval` / `test` / `mini_train` / `mini_val` 等格式
*注意:当使用NuScenes官方提供的mini数据集时,请指定mode为 mini_train 或者 mini... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/apis | apollo_public_repos/apollo-model-centerpoint/docs/apis/datasets/semantickitti_seg_dataset.md | # paddle3d.datasets.SemanticKITTIDataset
SemanticKITTI点云分割数据集,数据集信息请参考[SemanticKITTI官网](http://www.semantic-kitti.org/)
## \_\_init\_\_
* **参数**
* dataset_root: 数据集的根目录
* mode: 数据集模式,支持 `train` / `val` / `trainval` / `test` 等格式
* sequences: 数据划分序列,可以不指定,默认使用官网推荐的划分方式
* transforms: 数据增强方法
| 0 |
apollo_public_repos/apollo-model-centerpoint/docs/apis | apollo_public_repos/apollo-model-centerpoint/docs/apis/models/smoke.md | # paddle3d.models.SMOKE
单目3D检测模型 《Single-Stage Monocular 3D Object Detection via Keypoint Estimation》
## \_\_init\_\_
* **参数**
* backbone: 所用的骨干网络
* head: 预测头,目前只支持 `SMOKEPredictor`
* depth_ref: 深度参考值
* dim_ref: 每个类别的维度参考值
* max_detection: 最大检测目标数量,默认为50
* pred_2d: 是否同时预... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs | apollo_public_repos/apollo-model-centerpoint/docs/datasets/custom.md | # 自定义数据集格式说明
Paddle3D支持按照[KITTI数据集](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d)格式构建自己的数据集,目录结构示意如下:
```
custom_dataset
|—— training
| |—— image_2
| | |—— 000001.png
| | |—— ...
| |—— label_2
| | |—— 000001.txt
| | |—— ...
| |—— calib
| | |—— 000001.txt
| | |——... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/smoke/README.md | # SMOKE:Single-Stage Monocular 3D Object Detection via Keypoint Estimation
## 目录
* [引用](#引用)
* [简介](#简介)
* [训练配置](#训练配置)
* [使用教程](#使用教程)
* [数据准备](#数据准备)
* [训练](#训练)
* [评估](#评估)
* [导出部署](#导出部署)
* [自定义数据集](#自定义数据集)
<br>
## 引用
> Liu, Zechen, Zizhang Wu, and Roland Tóth. "Smoke: Single-stage monocular 3d object detecti... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/petr/README.md | # PETR
## 目录
* [引用](#1)
* [简介](#2)
* [训练配置](#3)
* [使用教程](#4)
* [数据准备](#5)
* [训练](#6)
* [评估](#7)
* [导出 & 部署](#8)
## <h2 id="1">引用</h2>
> Liu, Yingfei and Wang, Tiancai and Zhang, Xiangyu and Sun, Jian. "Petr: Position embedding transformation for multi-view 3d object detection." arXiv preprint arXiv:2203.05625, 2022... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/centerpoint/README.md | # CenterPoint:Center-based 3D Object Detection and Tracking
## 目录
* [引用](#1)
* [简介](#2)
* [模型库](#3)
* [训练 & 评估](#4)
* [nuScenes数据集](#41)
* [KITTI数据集](#42)
* [导出 & 部署](#8)
* [Apollo模型](#9)
* [训练自定义数据集](#10)
## <h2 id="1">引用</h2>
> Yin, Tianwei and Zhou, Xingyi and Krahenbuhl, Philipp. "Center-Based 3D Object Detec... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/iassd/README.md | # Not All Points Are Equal: Learning Highly Efficient Point-based Detectors for 3D LiDAR Point Clouds

## 目录
* [引用](#1)
* [简介](#2)
* [模型库](#3)
* [训练 & 评估](#4)
* [KITTI数据集](#41)
* [Waymo数据集](#42)
* [导出 & 部... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/caddn/README.md | # CADDN:Categorical Depth DistributionNetwork for Monocular 3D Object Detection
## 目录
* [引用](#1)
* [简介](#2)
* [训练配置](#3)
* [使用教程](#4)
* [数据准备](#5)
* [训练](#6)
* [评估](#7)
* [导出 & 部署](#8)
* [自定义数据集](#9)
* [Apollo使用教程](#10)
## <h2 id="1">引用</h2>
> Cody Reading, Ali Harakeh, Julia Chae, Steven L. Waslander. "Categorical ... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/squeezesegv3/README.md | # SqueezeSegV3: Spatially-Adaptive Convolution for Efficient Point-Cloud Segmentation
## 目录
* [引用](#h2-id1h2)
* [简介](#h2-id2h2)
* [模型库](#h2-id3h2)
* [训练配置](#h2-id4h2)
* [使用教程](#h2-id5h2)
* [数据准备](#h3-id51h3)
* [训练](#h3-id52h3)
* [评估](#h3-id53h3)
* [模型导出](#h3-id54h3)
* [模型部署](#h3-id55h3)
## <h2 id="1">引用</h2... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/pv_rcnn/README.md | # PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection
## 目录
* [引用](#1)
* [简介](#2)
* [模型库](#3)
* [训练 & 评估](#4)
* [KITTI数据集](#41)
* [导出 & 部署](#5)
* [自定义数据集](#6)
## <h2 id="1">引用</h2>
> Shi, Shaoshuai, et al. "Pv-rcnn: Point-voxel feature set abstraction for 3d object detection." Proceedings of the I... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/pointpillars/README.md | # PointPillars: Fast Encoders for Object Detection from Point Clouds
## 目录
* [引用](#h2-id1h2)
* [简介](#h2-id2h2)
* [模型库](#h2-id3h2)
* [训练配置](#h2-id4h2)
* [使用教程](#h2-id5h2)
* [数据准备](#h3-id51h3)
* [训练](#h3-id52h3)
* [评估](#h3-id53h3)
* [模型导出](#h3-id54h3)
* [模型部署](#h3-id55h3)
## <h2 id="1">引用</h2>
> Lang, Alex H... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/bevformer/README.md | # BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers
## 目录
* [引用](#1)
* [简介](#2)
* [模型库](#3)
* [训练 & 评估](#4)
* [nuScenes数据集](#41)
* [导出 & 部署](#8)
## <h2 id="1">引用</h2>
```
@article{li2022bevformer,
title={BEVFormer: Learning Bird’s-Eye-View Representation ... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/paconv/README.md | # PAConv:Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds
## 目录
* [引用](#1)
* [简介](#2)
* [模型库](#3)
* [使用教程](#4)
* [数据准备](#41)
* [训练](#42)
* [评估](#43)
* [导出部署](#5)
* [执行预测](#51)
* [python部署](#52)
* [自定义数据集](#6)
<br>
## <h2 id="1">引用</h2>
> Xu, Mutian and Ding, Runyu and Zhao, Hen... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/voxel_rcnn/README.md | # Voxel r-cnn: Towards high performance voxel-based 3d object detection
## 目录
* [引用](#1)
* [简介](#2)
* [模型库](#3)
* [训练 & 评估](#4)
* [KITTI数据集](#41)
* [导出 & 部署](#5)
* [自定义数据集](#6)
## <h2 id="1">引用</h2>
> Deng, Jiajun, et al. "Voxel r-cnn: Towards high performance voxel-based 3d object detection." Proceedings of the A... | 0 |
apollo_public_repos/apollo-model-centerpoint/docs/models | apollo_public_repos/apollo-model-centerpoint/docs/models/dd3d/README.md | # DD3D: Is Pseudo-Lidar needed for Monocular 3D Object detection?
## 目录
* [引用](#1)
* [简介](#2)
* [训练配置](#3)
* [使用教程](#4)
* [数据准备](#5)
* [训练](#6)
* [评估](#7)
## <h2 id="1">引用</h2>
> Dennis Park and Rares Ambrus and Vitor Guizilini and Jie Li and Adrien Gaidon. "Is Pseudo-Lidar needed for Monocular 3D Object detection?"... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/_base_/kitti_mono.yml | train_dataset:
type: KittiMonoDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadImage
- type: Normalize
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
mode: train
val_dataset:
type: KittiMonoDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadImage
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/_base_/semantickitti.yml | train_dataset:
type: SemanticKITTISegDataset
dataset_root: datasets/SemanticKITTI
sequences: [ 0, 1, 2, 3, 4, 5, 6, 7, 9, 10 ]
transforms:
- type: LoadSemanticKITTIRange
project_label: true
- type: NormalizeRangeImage
mean: [ 12.12, 10.88, 0.23, -1.04, 0.21 ] # range, x, y, z, remission
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/smoke/smoke_dla34_no_dcn_kitti.yml | _base_: '../_base_/kitti_mono.yml'
batch_size: 8
iters: 70000
train_dataset:
transforms:
- type: LoadImage
reader: pillow
to_chw: False
- type: Gt2SmokeTarget
mode: train
num_classes: 3
input_size: [1280, 384]
- type: Normalize
mean: [0.485, 0.456, 0.406]
std: [... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/smoke/smoke_hrnet18_no_dcn_kitti_mini.yml | # This is a training configuration for a simplified version of KITTI. It is just for a quick start,
# all the hyperparameters are not strictly tuned, so the training result is not optimal
_base_: '../_base_/kitti_mono.yml'
batch_size: 8
iters: 10000
train_dataset:
transforms:
- type: LoadImage
reader: pil... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/smoke/README.md | # SMOKE:Single-Stage Monocular 3D Object Detection via Keypoint Estimation
## 目录
* [引用](#引用)
* [简介](#简介)
* [训练配置](#训练配置)
* [使用教程](#使用教程)
* [数据准备](#数据准备)
* [训练](#训练)
* [评估](#评估)
* [导出部署](#导出部署)
* [自定义数据集](#自定义数据集)
<br>
## 引用
> Liu, Zechen, Zizhang Wu, and Roland Tóth. "Smoke: Single-stage monocular 3d object detecti... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/smoke/smoke_hrnet18_no_dcn_kitti.yml | _base_: '../_base_/kitti_mono.yml'
batch_size: 8
iters: 70000
train_dataset:
transforms:
- type: LoadImage
reader: pillow
to_chw: False
- type: Gt2SmokeTarget
mode: train
num_classes: 3
input_size: [1280, 384]
- type: Normalize
mean: [0.485, 0.456, 0.406]
std: [... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/smoke/smoke_dla34_no_dcn_kitti_amp.yml | _base_: '../_base_/kitti_mono.yml'
batch_size: 8
iters: 70000
amp_cfg:
enable: True
level: O1
scaler:
init_loss_scaling: 1024.0
custom_black_list: ['matmul_v2', 'elementwise_mul']
train_dataset:
transforms:
- type: LoadImage
reader: pillow
to_chw: False
- type: Gt2SmokeTarget
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/quant/centerpoint_kitti.yml | slim_type: QAT
quant_config:
weight_quantize_type: channel_wise_abs_max
activation_quantize_type: moving_average_abs_max
weight_bits: 8
activation_bits: 8
dtype: int8
window_size: 10000
moving_rate: 0.9
quantizable_layer_type: ['Conv2D', 'Linear']
finetune_config:
epochs: 80
lr_scheduler:
typ... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/quant/smoke_kitti.yml | slim_type: QAT
quant_config:
weight_quantize_type: channel_wise_abs_max
activation_quantize_type: moving_average_abs_max
weight_bits: 8
activation_bits: 8
dtype: int8
window_size: 10000
moving_rate: 0.9
quantizable_layer_type: ['Conv2D', 'Linear']
finetune_config:
iters: 40000
lr_scheduler:
t... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petrv2_vovnet_gridmask_p4_800x320_cos_epoch.yml | batch_size: 1
epochs: 24
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
class_names: [
'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
'barrier', 'motorcycle', 'bicycle', 'pedestrian'... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petrv2_vovnet_gridmask_p4_800x320_dn_amp.yml | batch_size: 1
epochs: 24
amp_cfg:
# only enable backbone and fpn
enable: False
level: O1
scaler:
init_loss_scaling: 512.0
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
use_valid_flag: True
class_names:... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petr_vovnet_gridmask_p4_800x320_amp.yml | batch_size: 1
epochs: 24
amp_cfg:
# only enable backbone and fpn
enable: False
level: O1
scaler:
init_loss_scaling: 512.0
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
class_names: [
'car', '... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petrv2_vovnet_gridmask_p4_800x320.yml | batch_size: 1
epochs: 24
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
use_valid_flag: True
class_names: [
'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
'barrier', 'motorcycle', ... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petrv2_vovnet_gridmask_p4_1600x640_dn_multiscale_amp.yml | batch_size: 1
epochs: 24
amp_cfg:
# only enable backbone and fpn
enable: False
level: O1
scaler:
init_loss_scaling: 512.0
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
use_valid_flag: True
class_names:... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petrv2_vovnet_gridmask_p4_800x320_dn_centerview_amp.yml | batch_size: 1
epochs: 24
amp_cfg:
# only enable backbone and fpn
enable: False
level: O1
scaler:
init_loss_scaling: 512.0
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
use_valid_flag: True
class_names:... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petr_vovnet_gridmask_p4_800x320.yml | batch_size: 1
epochs: 24
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
class_names: [
'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
'barrier', 'motorcycle', 'bicycle', 'pedestrian'... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/petr/petrv2_vovnet_gridmask_p4_800x320_amp.yml | batch_size: 1
epochs: 24
amp_cfg:
# only enable backbone and fpn
enable: False
level: O1
scaler:
init_loss_scaling: 512.0
train_dataset:
type: NuscenesMVDataset
dataset_root: data/nuscenes/
ann_file: data/nuscenes/petr_nuscenes_annotation_train.pkl
mode: train
use_valid_flag: True
class_names:... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_pillars_016voxel_kitti_mini.yml | # This is a training configuration for a simplified version of KITTI. It is just for a quick start;
# the hyperparameters are not carefully tuned, so the training result is not optimal.
batch_size: 4
epochs: 20
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadPointClo... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_pillars_02voxel_nuscenes_10sweep.yml | batch_size: 4
epochs: 20
train_dataset:
type: NuscenesPCDataset
dataset_root: datasets/nuscenes/
transforms:
- type: LoadPointCloud
dim: 5
use_dim: 4
use_time_lag: True
sweep_remove_radius: 1
- type: SamplingDatabase
min_num_points_in_box_per_class:
car: 5
tru... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_voxels_008voxel_kitti.yml | batch_size: 4
epochs: 160
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTI
- type: SamplingDatabase
min_num_points_in_box_per_class:
Car: 5
Cyclist: 5
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/README.md | # CenterPoint: Center-based 3D Object Detection and Tracking
## Table of Contents
* [Citation](#1)
* [Introduction](#2)
* [Model Zoo](#3)
* [Training & Evaluation](#4)
    * [nuScenes Dataset](#41)
    * [KITTI Dataset](#42)
* [Export & Deployment](#8)
* [Apollo Models](#9)
* [Training on Custom Datasets](#10)
## <h2 id="1">Citation</h2>
> Yin, Tianwei and Zhou, Xingyi and Krahenbuhl, Philipp. "Center-Based 3D Object Detec... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_voxels_010voxel_apolloscape.yml | batch_size: 8
epochs: 32
amp_cfg:
use_amp: False
enable: False
level: O1
scaler:
init_loss_scaling: 32.0
train_dataset:
type: ApolloPCDataset
dataset_root: datasets
dataset_list: ['apolloscape']
transforms:
- type: LoadPointCloud
dim: 5
use_dim: 4
sep: ''
- type: SamplingD... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_pillars_02voxel_apolloscape.yml | batch_size: 8
epochs: 5
amp_cfg:
use_amp: False
enable: False
level: O1
scaler:
init_loss_scaling: 32.0
train_dataset:
type: ApolloPCDataset
dataset_root: datasets/
dataset_list: ['apolloscape']
transforms:
- type: LoadPointCloud
dim: 5
use_dim: 4
sep: ''
- type: Sampling... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_voxels_0075voxel_nuscenes_10sweep.yml | batch_size: 4
epochs: 20
train_dataset:
type: NuscenesPCDataset
dataset_root: datasets/nuscenes/
transforms:
- type: LoadPointCloud
dim: 5
use_dim: 4
use_time_lag: True
sweep_remove_radius: 1
- type: SamplingDatabase
min_num_points_in_box_per_class:
car: 5
tru... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/centerpoint/centerpoint_pillars_016voxel_kitti.yml | batch_size: 4
epochs: 160
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTI
- type: SamplingDatabase
min_num_points_in_box_per_class:
Car: 5
Cyclist: 5
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/iassd/iassd_waymo.yaml | batch_size: 4 #on 4 gpus, total bs = 16
epochs: 30
train_dataset:
type: WaymoPCDataset
dataset_root: datasets/waymo
class_names: [ "Vehicle", "Pedestrian", "Cyclist" ]
sampled_interval: 5
transforms:
- type: SamplingDatabase
min_num_points_in_box_per_class:
Vehicle: 5
Pedestrian: 5... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/iassd/iassd_kitti.yaml | batch_size: 8 #on 4 gpus, total bs = 32
epochs: 80
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
class_names: [ "Car", "Pedestrian", "Cyclist"]
use_road_plane: True
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTIV2
- ty... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/caddn/caddn_ocrnet_hrnet_w18_kitti.yml | batch_size: 4
iters: 74240 # 928*80
sync_bn: true
train_dataset:
type: KittiDepthDataset
dataset_root: data/kitti
point_cloud_range: [2, -30.08, -3.0, 46.8, 30.08, 1.0]
depth_downsample_factor: 4
voxel_size: [0.16, 0.16, 0.16]
class_names: ['Car', 'Pedestrian', 'Cyclist']
mode: train
val_dataset:
typ... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/caddn/README.md | # CADDN: Categorical Depth Distribution Network for Monocular 3D Object Detection
## Table of Contents
* [Citation](#1)
* [Introduction](#2)
* [Training Configuration](#3)
* [Tutorial](#4)
    * [Data Preparation](#5)
    * [Training](#6)
    * [Evaluation](#7)
* [Export & Deployment](#8)
* [Custom Datasets](#9)
* [Apollo Tutorial](#10)
## <h2 id="1">Citation</h2>
> Cody Reading, Ali Harakeh, Julia Chae, Steven L. Waslander. "Categorical ... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/caddn/caddn_deeplabv3p_resnet101_os8_kitti.yml | batch_size: 4
iters: 74240 # 928*80
sync_bn: true
train_dataset:
type: KittiDepthDataset
dataset_root: data/kitti
point_cloud_range: [2, -30.08, -3.0, 46.8, 30.08, 1.0]
depth_downsample_factor: 4
voxel_size: [0.16, 0.16, 0.16]
class_names: ['Car', 'Pedestrian', 'Cyclist']
mode: train
val_dataset:
typ... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/squeezesegv3/squeezesegv3_rangenet21_semantickitti.yml | _base_: '../_base_/semantickitti.yml'
batch_size: 2
iters: 179250 # 150 epochs
optimizer:
type: Momentum
momentum: 0.9
weight_decay: 0.0008
lr_scheduler:
type: LinearWarmup
learning_rate:
type: ExponentialDecay
learning_rate: 0.008
gamma: 0.999995805413129 # .995 ** (1 / steps_per_epoch)
... | 0 |
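The `gamma` comment in the SqueezeSegV3 configs above (`.995 ** (1 / steps_per_epoch)`) encodes a per-step exponential decay that compounds to a factor of 0.995 per epoch. A quick sanity check, assuming `steps_per_epoch = iters / epochs = 179250 / 150` as implied by the config's own comments:

```python
# Sanity check for the per-step ExponentialDecay gamma in the SqueezeSegV3 configs:
# a per-step factor of 0.995 ** (1 / steps_per_epoch) compounds to 0.995 per epoch.
iters, epochs = 179250, 150
steps_per_epoch = iters // epochs            # 1195 steps per epoch
gamma = 0.995 ** (1 / steps_per_epoch)       # per-step decay factor
per_epoch_decay = gamma ** steps_per_epoch   # compounded over one full epoch
print(gamma)                                 # ~0.999995805413129, matching the config comment
print(round(per_epoch_decay, 6))             # ~0.995
```

This explains why both rangenet21 and rangenet53 configs carry the same `gamma` literal: it depends only on the steps-per-epoch count, not the base learning rate.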
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/squeezesegv3/README.md | # SqueezeSegV3: Spatially-Adaptive Convolution for Efficient Point-Cloud Segmentation
## Table of Contents
* [Citation](#h2-id1h2)
* [Introduction](#h2-id2h2)
* [Model Zoo](#h2-id3h2)
* [Training Configuration](#h2-id4h2)
* [Tutorial](#h2-id5h2)
    * [Data Preparation](#h3-id51h3)
    * [Training](#h3-id52h3)
    * [Evaluation](#h3-id53h3)
    * [Model Export](#h3-id54h3)
    * [Model Deployment](#h3-id55h3)
## <h2 id="1">Citation</h2... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/squeezesegv3/squeezesegv3_rangenet53_semantickitti.yml | _base_: '../_base_/semantickitti.yml'
batch_size: 1
iters: 179250 # 150 epochs
optimizer:
type: Momentum
momentum: 0.9
weight_decay: 0.0008
lr_scheduler:
type: LinearWarmup
learning_rate:
type: ExponentialDecay
learning_rate: 0.004
gamma: 0.999995805413129 # .995 ** (1 / steps_per_epoch)
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/pv_rcnn/pv_rcnn_005voxel_kitti.yml | batch_size: 2
epochs: 80
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTIV2
- type: SamplingDatabase
min_num_points_in_box_per_class:
Car: 5
Cyclist: 5
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/pv_rcnn/README.md | # PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection
## Table of Contents
* [Citation](#1)
* [Introduction](#2)
* [Model Zoo](#3)
* [Training & Evaluation](#4)
    * [KITTI Dataset](#41)
* [Export & Deployment](#5)
* [Custom Datasets](#6)
## <h2 id="1">Citation</h2>
> Shi, Shaoshuai, et al. "Pv-rcnn: Point-voxel feature set abstraction for 3d object detection." Proceedings of the I... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/pointpillars/pointpillars_xyres16_kitti_cyclist_pedestrian.yml | batch_size: 2
iters: 296960 # 160 epochs
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
class_names: [ "Cyclist", "Pedestrian" ]
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTI
- type: SamplingDatabase
min_num_point... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/pointpillars/README.md | # PointPillars: Fast Encoders for Object Detection from Point Clouds
## Table of Contents
* [Citation](#h2-id1h2)
* [Introduction](#h2-id2h2)
* [Model Zoo](#h2-id3h2)
* [Training Configuration](#h2-id4h2)
* [Tutorial](#h2-id5h2)
    * [Data Preparation](#h3-id51h3)
    * [Training](#h3-id52h3)
    * [Evaluation](#h3-id53h3)
    * [Model Export](#h3-id54h3)
    * [Model Deployment](#h3-id55h3)
## <h2 id="1">Citation</h2>
> Lang, Alex H... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/pointpillars/pointpillars_xyres16_kitti_car.yml | batch_size: 2
iters: 296960 # 160 epochs
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
class_names: [ "Car" ]
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTI
- type: SamplingDatabase
min_num_points_in_box_per_class... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/bevformer/bevformer_tiny_r50_fpn_fp16_nuscenes.yml | batch_size: 2
epochs: 24
amp_cfg:
enable: False
level: O1
scaler:
init_loss_scaling: 512.0
train_dataset:
type: NuscenesMVDataset
dataset_root: ./datasets/nuscenes
ann_file: ./datasets/nuscenes/bevformer_nuscenes_annotation_train.pkl
queue_length: 3
use_valid_flag: True
mode: train
class_names... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/bevformer/README.md | # BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers
## Table of Contents
* [Citation](#1)
* [Introduction](#2)
* [Model Zoo](#3)
* [Training & Evaluation](#4)
    * [nuScenes Dataset](#41)
* [Export & Deployment](#8)
## <h2 id="1">Citation</h2>
```
@article{li2022bevformer,
title={BEVFormer: Learning Bird’s-Eye-View Representation ... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/bevformer/bevformer_tiny_r50_fpn_nuscenes.yml | batch_size: 1
epochs: 24
train_dataset:
type: NuscenesMVDataset
dataset_root: ./datasets/nuscenes
ann_file: ./datasets/nuscenes/bevformer_nuscenes_annotation_train.pkl
queue_length: 3
use_valid_flag: True
mode: train
class_names: [
'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/paconv/paconv_modelnet40.yml | batch_size: 32
epochs: 350
train_dataset:
type: ModelNet40
dataset_root: datasets/modelnet40_ply_hdf5_2048
num_points: 1024
transforms:
- type: GlobalScale
min_scale: 0.667
max_scale: 1.5
size: 3
- type: GlobalTranslate
translation_std: 0.2
distribution: uniform
- type... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/paconv/README.md | # PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds
## Table of Contents
* [Citation](#1)
* [Introduction](#2)
* [Model Zoo](#3)
* [Tutorial](#4)
    * [Data Preparation](#41)
    * [Training](#42)
    * [Evaluation](#43)
* [Export & Deployment](#5)
    * [Run Inference](#51)
    * [Python Deployment](#52)
* [Custom Datasets](#6)
<br>
## <h2 id="1">Citation</h2>
> Xu, Mutian and Ding, Runyu and Zhao, Hen... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/voxel_rcnn/README.md | # Voxel r-cnn: Towards high performance voxel-based 3d object detection
## Table of Contents
* [Citation](#1)
* [Introduction](#2)
* [Model Zoo](#3)
* [Training & Evaluation](#4)
    * [KITTI Dataset](#41)
* [Export & Deployment](#5)
* [Custom Datasets](#6)
## <h2 id="1">Citation</h2>
> Deng, Jiajun, et al. "Voxel r-cnn: Towards high performance voxel-based 3d object detection." Proceedings of the A... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/voxel_rcnn/voxel_rcnn_005voxel_kitti_car.yml | batch_size: 2
epochs: 80
train_dataset:
type: KittiPCDataset
dataset_root: datasets/KITTI
transforms:
- type: LoadPointCloud
dim: 4
use_dim: 4
- type: RemoveCameraInvisiblePointsKITTIV2
- type: SamplingDatabase
min_num_points_in_box_per_class:
Car: 5
max_num_samples_per... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/dd3d/dd3d_dla_34_kitti.yml | _base_: '../_base_/kitti_mono.yml'
batch_size: 8 #total bs 32
iters: 50000
train_dataset:
transforms:
- type: LoadImage
reader: pillow
to_chw: False
to_rgb: False
- type: ResizeShortestEdge
short_edge_length: [288, 304, 320, 336, 352, 368, 384, 400, 416, 448, 480, 512, 544, 576]
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/dd3d/dd3d_dla_34_kitti_warmup.yml | _base_: '../_base_/kitti_mono.yml'
batch_size: 8 #total bs 32
iters: 4000
train_dataset:
transforms:
- type: LoadImage
reader: pillow
to_chw: False
to_rgb: False
- type: ResizeShortestEdge
short_edge_length: [288, 304, 320, 336, 352, 368, 384, 400, 416, 448, 480, 512, 544, 576]
... | 0 |
apollo_public_repos/apollo-model-centerpoint/configs | apollo_public_repos/apollo-model-centerpoint/configs/dd3d/dd3d_v2_99_kitti_warmup.yml | _base_: '../_base_/kitti_mono.yml'
batch_size: 4 #total bs 16
iters: 8000
train_dataset:
transforms:
- type: LoadImage
reader: pillow
to_chw: False
to_rgb: False
- type: ResizeShortestEdge
short_edge_length: [288, 304, 320, 336, 352, 368, 384, 400, 416, 448, 480, 512, 544, 576]
... | 0 |