apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/车辆集成/传感器安装 sensor installation/IPC/Nuvo-6108GC_Installation_Guide_cn.md
## Nuvo-6108GC配置和安装指南 ``` Nuvo-6018GC is world's first industrial-grade GPU computer supporting high-end graphics cards. It's designed to fuel emerging GPU-accelerated applications, such as artificial intelligence, VR, autonomous driving and CUDA computing, by accommodating nVidia GPU with up to 250W TDP. Leveraging Intel® C236 chipset, Nuvo-6018GC supports Xeon® E3 V5 or 6th-Gen Core™ i7/i5 CPU with up to 32 GB ECC/ non-ECC DDR4 memory. It incorporates general computer I/Os such as Gigabit Ethernet, USB 3.0 and serial ports. In addition to the x16 PCIe port for GPU installation, Nuvo-6108GC further provides two x8 PCIe slots so you can have additional devices for information collection and communication.Nuvo-6108GC comes with sophisticated power design to handle heavy power consumption and power transient of a 250W GPU. Furthermore, to have reliable GPU performance for industrial environments, Nuvo-6018GC inherits Neousys' patented design, a tuned cold air intake to effectively dissipate the heat generated by GPU. This unique design guarantees operation at 60°C with 100% GPU loading and makes Nuvo-6018GC extremely reliable for demanding field usage. ----NEOUSYS MARKETING TEAM ``` ### IPC配置 参考下述IPC配置: - 华硕 GTX1080 GPU-A8G-Gaming 显卡 - 32GB DDR4 内存 - PO-280W-OW 280W AC/DC 电源适配器 - 2.5" SATA磁盘 1TB 7200转/秒 ![IPC-6108GC-front-side](images/IPC-6108GC-front-side.jpg) ### 准备IPC 参考下述步骤: 1. 准备好CAN卡并进行安装:在Neousys Nuvo-6108GC中,华硕GTX1080 GPU-A8G-Gaming显卡被预先安装在一个PCI插槽中,我们需要将CAN卡安装在另外一个PCI插槽中。 a. 找到并拧下机器边上的8个螺丝(显示在棕色方框内或棕色箭头指向的区域) ![Positions_of_Screws](images/IPC-6108GC-Screw-Positions_labeled.png) b. 移除机器盖板 ![removing the cover](images/Removing_the_cover.jpg) 在机箱底部将能看到固定着的3个PCI插槽(其中一个已经被显卡占据) ![Before installing the CAN card](images/Before_installing_the_can_card.png) c. 【可选】设置CAN卡的终端跳线:将红色的跳线帽从原位置移除(下图所示)并放置在终端位置 ![prepare_can_card2](images/prepare_can_card2.png) **![warning_icon](images/warning_icon.png)WARNING**:如果终端跳线没有被正确设置,CAN卡将不能正确工作。 d. 【可选】将CAN卡插入到一个PCI插槽中 ![installed CAN](images/After_installing_the_CAN_Card.png) e. 安装IPC的盖板 ![IPC-6108GC-Screw-Positions.png](images/IPC-6108GC-Screw-Positions.png) 2. 配置IPC加电组件: a. 将电源线接入到为IPC配置的电源连接器(接线板) ![warning_icon](images/warning_icon.png)**WARNING**:确保电源线的正极(标记为 **R** 表示红色)和负极(标记为 **B** 表示黑色)接入到了IPC接线板的正确接口,如下图所示: ![ipc_power_RB](images/ipc_power_RB.png) b. 将显示器、以太网线、键盘和鼠标接入IPC ![IPC-6108GC-CableConnected-overexposed.png](images/IPC-6108GC-CableConnected-overexposed.png) 3. 启动计算机 ![warning](images/tip_icon.png)如果系统接入了一个或多个外部插入卡,建议通过BIOS设置风扇的转速 ``` - 计算机启动时按F2进入BIOS设置菜单 - 进入 [Advanced] => [Smart Fan Setting] - 设置 [Fan Max. Trip Temp] 为 50 - 设置 [Fan Start Trip Temp] 为 20 ``` ![tip_icon](images/tip_icon.png)建议使用者使用数字视频接口(DVI)连接器连接显卡和显示器。设置投影到主板的DVI接口,参考下述的设置步骤: ``` - 计算机启动时按F2进入BIOS设置菜单 - 进入 [Advanced]=>[System Agent (SA) Configuration]=>[Graphics Configuration]=>[Primary Display]=> 设置为 "PEG" ``` ![tip_icon](images/tip_icon.png)建议设置IPC的运行状态为一直以最佳性能状态运行: ``` - 计算机启动时按F2进入BIOS设置菜单 - 进入 [Power] => [SKU POWER CONFIG] => 设置为 "MAX. TDP" ``` 4. 连接电源: ![IPC-6108GC-PowerCable.jpg](images/IPC-6108GC-PowerCable.jpg) ### 参考资料 1. Neousys Nuvo-6108GC [产品页面](http://www.neousys-tech.com/en/product/application/rugged-embedded/nuvo-6108gc-gpu-computing) ## 免责声明 This device is `Apollo Platform Supported`
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/车辆集成/传感器安装 sensor installation/IPC/Nuvo-6108GC_Installation_Guide.md
## Guide on Nuvo-6108GC Installation ``` Nuvo-6018GC is world's first industrial-grade GPU computer supporting high-end graphics cards. It's designed to fuel emerging GPU-accelerated applications, such as artificial intelligence, VR, autonomous driving and CUDA computing, by accommodating nVidia GPU with up to 250W TDP. Leveraging Intel® C236 chipset, Nuvo-6018GC supports Xeon® E3 V5 or 6th-Gen Core™ i7/i5 CPU with up to 32 GB ECC/ non-ECC DDR4 memory. It incorporates general computer I/Os such as Gigabit Ethernet, USB 3.0 and serial ports. In addition to the x16 PCIe port for GPU installation, Nuvo-6108GC further provides two x8 PCIe slots so you can have additional devices for information collection and communication.Nuvo-6108GC comes with sophisticated power design to handle heavy power consumption and power transient of a 250W GPU. Furthermore, to have reliable GPU performance for industrial environments, Nuvo-6018GC inherits Neousys' patented design, a tuned cold air intake to effectively dissipate the heat generated by GPU. This unique design guarantees operation at 60°C with 100% GPU loading and makes Nuvo-6018GC extremely reliable for demanding field usage. ----NEOUSYS MARKETING TEAM ``` ### IPC Configuration Configure the IPC as follows: - ASUS GTX1080 GPU-A8G-Gaming GPU Card - 32GB DDR4 RAM - PO-280W-OW 280W AC/DC power adapter - 2.5" SATA Hard Disk 1TB 7200rpm ![IPC-6108GC-front-side](images/IPC-6108GC-front-side.jpg) ### Preparing the IPC Follow these steps: 1. Prepare and install the Controller Area Network (CAN) card: In the Neousys Nuvo-6108GC, ASUS® GTX-1080GPU-A8G-GAMING GPU card is pre-installed into one of the three PCI slots. We still need to install a CAN card into a PCI slot. a. Locate and unscrew the eight screws (shown in the brown squares or pointed by brown arrows) on the side of computer: ![Positions_of_Screws](images/IPC-6108GC-Screw-Positions_labeled.png) b. Remove the cover from the IPC. ![removing the cover](images/Removing_the_cover.jpg) You will find 3 PCI slots (one occupied by the graphic card) located on the base: ![Before installing the CAN card](images/Before_installing_the_can_card.png) c. [Optional] Set the CAN card termination jumper by removing the red jumper cap (shown in the diagram below) from its default location and placing it at its termination position: ![prepare_can_card2](images/prepare_can_card2.png) **![warning_icon](images/warning_icon.png)WARNING**: The CAN card will not work if the termination jumper is not set correctly. d. [Optional] Insert the CAN card into the slot in the IPC: ![installed CAN](images/After_installing_the_CAN_Card.png) e. Reinstall the cover for the IPC ![IPC-6108GC-Screw-Positions.png](images/IPC-6108GC-Screw-Positions.png) 2. Power up the IPC: a. Attach the power cable to the power connector (terminal block) that comes with the IPC: ![warning_icon](images/warning_icon.png)**WARNING**: Make sure that the positive(labeled **R** for red) and the negative(labeled **B** for black) wires of the power cable are inserted into the correct holes on the power terminal block as seen in the image below. ![ipc_power_RB](images/ipc_power_RB.png) b. Connect the monitor, Ethernet cable, keyboard, and mouse to the IPC: ![IPC-6108GC-CableConnected-overexposed.png](images/IPC-6108GC-CableConnected-overexposed.png) 3. 
Start the computer: ![warning](images/tip_icon.png)It is recommended to configure the fan speed through BIOS settings, if one or more plugin card is added to the system ``` - While starting up the computer, press F2 to enter BIOS setup menu. - Go to [Advanced] => [Smart Fan Setting] - Set [Fan Max. Trip Temp] to 50 - Set [Fan Start Trip Temp] to 20 ``` ![tip_icon](images/tip_icon.png)It is recommended that you use a Digital Visual Interface (DVI) connector on the graphic card for the monitor. To set the display to the DVI port on the motherboard, following is the setting procedure: ``` - While starting up the computer, press F2 to enter BIOS setup menu. - Go to [Advanced]=>[System Agent (SA) Configuration]=>[Graphics Configuration]=>[Primary Display]=> Set to "PEG" ``` ![tip_icon](images/tip_icon.png)It is recommended to configure the IPC to run at maximum performance mode at all time: ``` - While starting up the computer, press F2 to enter BIOS setup menu. - Go to [Power] => [SKU POWER CONFIG] => set to "MAX. TDP" ``` 4. Connect the power: ![IPC-6108GC-PowerCable.jpg](images/IPC-6108GC-PowerCable.jpg) ### References 1. Neousys Nuvo-6108GC [Product Page](http://www.neousys-tech.com/en/product/application/rugged-embedded/nuvo-6108gc-gpu-computing) ## Disclaimer This device is `Apollo Platform Supported`
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/车辆集成/传感器安装 sensor installation/Radar/Racobit_B01HC_Radar_Installation_Guide.md
## Installation Guide of Racobit B01HC Radar Racobit developed one Radar product with **60 degree FOV** and **150 m** detection range for autonomous driving needs. ![radar_image](images/b01hc.png) ### Installation 1. A mechanical mount needs to be designed to mount the Radar to the desired position. The mount should be able to hold the Radar in a way that the scanning plane is parallel to the bottom of the car, so that the scanning radar wave would not be blocked by the road surface causing it to create ghost objects. If a stationary mount cannot satisfy such requirement, please consider adding adjustment in vertical and horizontal directions to the mount 2. When you receive the Radar package, a set of connection cables should be included. Connect the water-proof connector to the Radar, and guide the cable through/under the car into the trunk. Secure the cable to the body of the car if necessary 3. Connect the power cable to **12VDC** power supply. 4. Connect the CAN output to the CAN interface of the IPC. 5. You should be able to receive the CAN messages through the CAN port once the Radar is powered. 6. Please discuss with the vendor for additional support if needed while integrating it with your vehicle. ## Disclaimer This device is `Apollo Hardware Development Platform Supported`
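The guide above notes that CAN messages should appear on the IPC's CAN interface as soon as the radar is powered. As a quick sanity check, the following minimal sketch reads raw frames from a Linux SocketCAN interface. It assumes a SocketCAN device named `can0`; the actual interface name and driver depend on the CAN card installed in the IPC, and Apollo's own CAN drivers read the same frames through the vendor SDK instead.

```cpp
// Minimal sanity check: print raw CAN frames from a SocketCAN interface.
// Assumes a Linux SocketCAN device named "can0"; adjust to your CAN card setup.
#include <cstdio>
#include <cstring>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main() {
  int sock = socket(PF_CAN, SOCK_RAW, CAN_RAW);
  if (sock < 0) { perror("socket"); return 1; }

  struct ifreq ifr;
  std::strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
  ifr.ifr_name[IFNAMSIZ - 1] = '\0';
  if (ioctl(sock, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

  struct sockaddr_can addr;
  std::memset(&addr, 0, sizeof(addr));
  addr.can_family = AF_CAN;
  addr.can_ifindex = ifr.ifr_ifindex;
  if (bind(sock, reinterpret_cast<struct sockaddr*>(&addr), sizeof(addr)) < 0) {
    perror("bind");
    return 1;
  }

  // Print the first few frames; with the radar powered, frames should arrive
  // continuously at the sensor's cycle time.
  for (int i = 0; i < 10; ++i) {
    struct can_frame frame;
    if (read(sock, &frame, sizeof(frame)) < 0) { perror("read"); return 1; }
    std::printf("id=0x%03X dlc=%d\n", frame.can_id, frame.can_dlc);
  }
  close(sock);
  return 0;
}
```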
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/车辆集成/传感器安装 sensor installation/Radar/README.md
# Apollo Radar

You can integrate the following radars with Apollo. Refer to their individual installation guides for more information.

1. [Continental ARS408-21 Radar](Continental_ARS408-21_Radar_Installation_Guide.md)
2. [Racobit B01HC Radar](Racobit_B01HC_Radar_Installation_Guide.md)
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/车辆集成/传感器安装 sensor installation/Radar/Continental_ARS408-21_Radar_Installation_Guide.md
## Installation Guide of Continental ARS-408-21 Radar ``` The ARS408 realized a broad field of view by two independent scans in conjunction with the high range functions like Adaptive Cruise Control, Forward Collision Warning and Emergency Brake Assist can be easily implemented. Its capability to detect stationary objects without the help of a camera system emphasizes its performance. The ARS408 is a best in class radar, especially for the stationary target detection and separation. ----Continental official website ``` ![radar_image](images/ARS-408-21.jpg) The following diagram contains the range of the ARS-408-21 Radar: ![radar_range](images/ars-404-21-range.jpg) ### Installation 1. A mechanical mount needs to be designed to mount the Radar to the desired position. The mount should be able to hold the Radar in a way that the scanning plane is parallel to the bottom of the car, so that the scanning radar wave would not be blocked by the road surface causing it to create ghost objects. If a stationary mount cannot satisfy such requirement, please consider adding adjustment in vertical and horizontal directions to the mount 2. When you receive the Radar package, a set of connection cables should be included. Connect the water-proof connector to the Radar, and guide the cable through/under the car into the trunk. Secure the cable to the body of the car if necessary ![radar_cable](images/ars-408-21-cable.png) 3. Connect the power cable to **12VDC** power supply 4. Connect the CAN output to the CAN interface of the IPC 5. You should be able to receive the CAN messages through the CAN port once the Radar is powered. ### References 1. Additional information can be found on the [product page](https://www.continental-automotive.com/Landing-Pages/Industrial-Sensors/Products/ARS-408-21) 2. For information on troubleshooting or for the user manual, contact Continental directly ## Disclaimer This device is `Apollo Platform Supported`
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/传感器标定/apollo_2_0_sensor_calibration_guide_cn.md
# Apollo 2.0 传感器标定方法使用指南 欢迎使用Apollo传感器标定服务。本文档提供在Apollo 2.0中新增的3项传感器标定程序的使用流程说明,分别为:相机到相机的标定,相机到多线激光雷达的标定,以及毫米波雷达到相机的标定。 ## 文档概览 * 概述 * 准备工作 * 标定流程 * 标定结果获取 * 标定结果验证 ## 概述 在Apollo 2.0中,我们新增了3项标定功能:相机到相机的标定,相机到多线激光雷达的标定,以及毫米波雷达到相机的标定。对于多线激光雷达到组合惯导的标定,请参考多线激光雷达-组合惯导标定说明。Velodyne HDL64用户还可以使用Apollo 1.5提供的标定服务平台。标定工具均以车载可执行程序的方式提供。用户仅需要启动相应的标定程序,即可实时完成标定工作并进行结果验证。标定结果以 `.yaml` 文件形式返回。 ## 准备工作 1. 下载[标定工具](https://github.com/ApolloAuto/apollo/releases/download/v2.0.0/calibration.tar.gz),并解压缩到`$APOLLO_HOME/modules/calibration`目录下。(APOLLO_HOME是apollo代码的根目录) 2. 相机内参文件 内参包含相机的焦距、主点和畸变系数等信息,可以通过一些成熟的相机标定工具来获得,例如 [ROS Camera Calibration Tools](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration) 和 [Camera Calibration Toolbox for Matlab](http://www.vision.caltech.edu/bouguetj/calib_doc/)。内参标定完成后,需将结果转换为 `.yaml` 格式的文件。下面是一个正确的内参文件样例: ```bash header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: short_camera height: 1080 width: 1920 distortion_model: plumb_bob D: [-0.535253, 0.259291, 0.004276, -0.000503, 0.0] K: [1959.678185, 0.0, 1003.592207, 0.0, 1953.786100, 507.820634, 0.0, 0.0, 1.0] R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0] P: [1665.387817, 0.0, 1018.703332, 0.0, 0.0, 1867.912842, 506.628623, 0.0, 0.0, 0.0, 1.0, 0.0] binning_x: 0 binning_y: 0 roi: x_offset: 0 y_offset: 0 height: 0 width: 0 do_rectify: False ``` 我们建议每一只相机都需要单独进行内参标定,而不是使用统一的内参结果。这样可以提高外参标定的准确性。 3. 初始外参文件 本工具需要用户提供初始的外参值作为参考。一个良好的初始值可以帮助算法得到更精确的结果。下面是一个正确的相机到激光雷达的初始外参文件样例,其中translation为相机相对激光雷达的平移距离关系,rotation为旋转矩阵的四元数表达形式: ```bash header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: velodyne64 child_frame_id: short_camera transform: rotation: y: 0.5 x: -0.5 w: 0.5 z: -0.5 translation: x: 0.0 y: 1.5 z: 2.0 ``` 注意:相机到激光雷达的标定方法比较依赖于初始外参值的选取,一个偏差较大的外参,有可能导致标定失败。所以,请在条件允许的情况下,尽可能提供更加精准的初始外参值。 4. 标定场地 我们的标定方法是基于自然场景的,所以一个理想的标定场地可以显著地提高标定结果的准确度。我们建议选取一个纹理丰富的场地,如有树木,电线杆,路灯,交通标志牌,静止的物体和清晰车道线。图1是一个较好的标定环境示例: ![](images/calibration/sensor_calibration/calibration_place.png) <p align="center"> 图1 一个良好的标定场地 </p> 5. 所需Topics 确认程序所需传感器数据的topics均有输出。如何查看传感器有数据输出? 各个程序所需的topics如下表1-表3所示: 表1. 相机到相机标定所需topics | 传感器 | Topic名称 |Topic发送频率(Hz)| | ------------ | ----------------------------------------- | ----------------- | | Short_Camera | /apollo/sensor/camera/traffic/image_short | 9 | | Long_Camera | /apollo/sensor/camera/traffic/image_long | 9 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | 表2. 相机到64线激光雷达标定所需topics | 传感器 | Topic名称 |Topic发送频率(Hz)| | ------------ | ----------------------------------------- | ----------------- | | Short_Camera | /apollo/sensor/camera/traffic/image_short | 9 | | LiDAR | /apollo/sensor/velodyne64/compensator/PointCloud2 | 10 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | 表3. 毫米波雷达到相机标定所需topics | 传感器 | Topic名称 |Topic发送频率(Hz)| | ------------ | ----------------------------------------- | ----------------- | | Short_Camera | /apollo/sensor/camera/traffic/image_short | 9 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | ## 标定流程 所有标定程序需要用到车辆的定位结果。请确认车辆定位状态为56,否则标定程序不会开始采集数据。输入以下命令可查询车辆定位状态: ```bash rostopic echo /apollo/sensor/gnss/ins_stat ``` ### 相机到相机 1. 运行方法 使用以下命令来启动标定工具: ```bash cd /apollo/scripts bash sensor_calibration.sh camera_camera ``` 2. 采集标定数据 * 由于两个相机的成像时间无法完全同步,所以在录制数据的时候,尽量将车辆进行慢速行驶,可以有效地缓解因时间差异所引起的图像不匹配问题。 * 两个相机需有尽量大的图像重叠区域,否则该工具将无法进行外参标定运算。 3. 配置参数 配置文件保存在以下路径,详细说明请参照表4。 ```bash /apollo/modules/calibration/camera_camera_calibrator/conf/camera_camera_calibrtor.conf ``` 表4. 
相机到相机标定程序配置项说明 |配置项 | 说明 | |----------------- | ---------------- | |long_image_topic | 长焦相机的图像topic | |short_image_topic | 广角相机的图像topic | |odometry_topic | 车辆定位topic | |ins_stat_topic | 车辆定位状态topic | |long_camera_intrinsics_filename | 长焦相机的内参文件路径 | |short_camera_intrinsics_filename | 广角相机的内参文件路径 | |init_extrinsics_filename | 初始外参文件路径 | |output_path | 标定结果输出路径 | |max_speed_kmh | 最大车速限制,单位km/h | 4. 输出内容 * 外参文件: 长焦相机到广角相机的外参文件。 * 验证参考图片:包括一张长焦相机图像、一张广角相机图像及一张长焦相机依据标定后的外参投影到广角相机的去畸变融合图像。 ### 相机到多线激光雷达 1. 运行方法 使用以下命令来启动标定工具: ```bash cd /apollo/scripts bash sensor_calibration.sh lidar_camera ``` 2. 采集标定数据 * 为避免时间戳不同步,在录制数据的时候,尽量将车辆进行慢速行驶,可以有效地缓解因时间差异所引起的标定问题。 * 相机中需看到一定数量的投影点云,否则该工具将无法进行外参标定运算。因此,我们建议使用短焦距相机来进行相机-激光雷达的标定。 3. 配置参数 配置文件保存在以下路径,详细说明请参照表5。 ```bash /apollo/modules/calibration/lidar_camera_calibrator/conf/lidar_camera_calibrtor.conf ``` 表5. 相机到多线激光雷达标定程序配置项说明 配置项 | 说明 --- | --- image_topic | 相机的图像topic lidar_topic | LiDAR的点云topic odometry_topic | 车辆定位topic ins_stat_topic | 车辆定位状态topic camera_intrinsics_filename | 相机的内参文件路径 init_extrinsics_filename | 初始外参文件路径 output_path | 标定结果输出路径 calib_stop_count | 标定所需截取的数据站数 max_speed_kmh | 最大车速限制,单位km/h 4. 输出内容 * 外参文件:相机到多线激光雷达的外参文件。 * 验证参考图片:两张激光雷达点云利用标定结果外参投影到相机图像上的融合图像,分别是依据点云深度渲染的融合图像,和依据点云反射值渲染的融合图像。 ### 毫米波雷达到相机 1. 运行方法 使用以下命令来启动标定工具: ```bash cd /apollo/scripts bash sensor_calibration.sh radar_camera ``` 2. 采集标定数据 * 请将车辆进行低速直线行驶,标定程序仅会在该条件下开始采集数据。 3. 配置参数 配置文件保存在以下路径,详细说明请参照表6。 ```bash /apollo/modules/calibration/radar_camera_calibrator/conf/radar_camera_calibrtor.conf ``` 表6. 相机到毫米波雷达标定程序配置项说明 配置项 | 说明 --- | --- image_topic | 相机的图像topic radar_topic | Radar的数据topic odometry_topic | 车辆定位topic ins_stat_topic | 车辆定位状态topic camera_intrinsics_filename | 相机的内参文件路径 init_extrinsics_filename | 初始外参文件路径 output_path | 标定结果输出路径 max_speed_kmh | 最大车速限制,单位km/h 4. 输出内容 * 外参文件:毫米波雷达到短焦相机的外参文件。 * 验证参考图片:将毫米波雷达投影到激光雷达坐标系的结果,需运行 `radar_lidar_visualizer` 工具。具体方法可参阅 `标定结果验证` 章节。 ## 标定结果获取 所有标定结果均保存在配置文件中所设定的 `output` 路径下,标定后的外参以 `yaml` 格式的文件提供。此外,根据传感器的不同,标定结果会保存在 `output` 目录下的不同文件夹中,具体如表7所示: 表7. 
标定结果保存路径 | 传感器 | 外参保存路径 | | ------------ | -----------------------| | Short_Camera | [output]/camera_params | | Long_Camera | [output]/camera_params | | Radar | [output]/radar_params | ## 标定结果验证 当标定完成后,会在 `[output]/validation` 目录下生成相应的标定结果验证图片。下面会详细介绍每一类验证图片的基本原理和查看方法。 ### 相机到相机标定 * 基本方法:根据长焦相机投影到短焦相机的融合图像进行判断,绿色通道为短焦相机图像,红色和蓝色通道是长焦投影后的图像,目视判断检验对齐情况。在融合图像中的融合区域,选择场景中距离较远处(50米以外)的景物进行对齐判断,能够重合则精度高,出现粉色或绿色重影(错位),则存在误差,当误差大于一定范围时(范围依据实际使用情况而定),标定失败,需重新标定(正常情况下,近处物体因受视差影响,在水平方向存在错位,且距离越近错位量越大,此为正常现象。垂直方向不受视差影响)。 * 结果示例:如下图所示,图2为满足精度要求外参效果,图3为不满足精度要求的现象,请重新进行标定过程。 ![](images/calibration/sensor_calibration/cam_cam_good.png) <p align="center"> 图2 良好的相机到相机标定结果 </p> ![](images/calibration/sensor_calibration/cam_cam_error.png) <p align="center"> 图3 错误的相机到相机标定结果 </p> ### 相机到多线激光雷达标定 * 基本方法:在产生的点云投影图像内,可寻找其中具有明显边缘的物体和标志物,查看其边缘轮廓对齐情况。如果50米以内的目标,点云边缘和图像边缘能够重合,则可以证明标定结果的精度很高。反之,若出现错位现象,则说明标定结果存在误差。当误差大于一定范围时(范围依据实际使用情况而定),该外参不可用。 * 结果示例:如下图所示,图4为准确外参的点云投影效果,图5为有偏差外参的点云投影效果 ![](images/calibration/sensor_calibration/cam_lidar_good.png) <p align="center"> 图4 良好的相机到多线激光雷达标定结果 </p> ![](images/calibration/sensor_calibration/cam_lidar_error.png) <p align="center"> 图5 错误的相机到多线激光雷达标定结果 </p> ### 毫米波雷达到相机 * 基本方法:为了更好地验证毫米波雷达与相机间外参的标定结果,引入激光雷达作为桥梁,通过同一系统中毫米波雷达与相机的外参和相机与激光雷达的外参,计算得到毫米波雷达与激光雷达的外参,将毫米波雷达数据投影到激光雷达坐标系中与激光点云进行融合,并画出相应的鸟瞰图进行辅助验证。在融合图像中,白色点为激光雷达点云,绿色实心圆为毫米波雷达目标,通过图中毫米波雷达目标是否与激光雷达检测目标是否重合匹配进行判断,如果大部分目标均能对应匹配,则满足精度要求,否则不满足,需重新标定。 * 结果示例:如下图所示,图6为满足精度要求外参效果,图7为不满足精度要求外参效果。 ![](images/calibration/sensor_calibration/radar_cam_good.png) <p align="center"> 图6 良好的毫米波雷达到激光雷达投影结果 </p> ![](images/calibration/sensor_calibration/radar_cam_error.png) <p align="center"> 图7 错误的毫米波雷达到激光雷达投影结果 </p> * 注意事项: * 为了得到毫米波雷达目标和激光雷达点云融合的验证图像,系统会自动或手动调用毫米波雷达到激光雷达的投影工具(`radar_lidar_visualizer`)进行图像绘制和生成过程。该投影工具在启动时会自动载入毫米波雷达与相机的外参文件及相机与激光雷达的外参文件,因此在启动之前,需要先进行相应的标定工具或将两文件以特定的文件名放在相应路径中,以备工具调用。 * 使用以下命令来启动 `radar_lidar_visualizer` 工具: ```bash cd /apollo/scripts bash sensor_calibration.sh visualizer ``` * `radar_lidar_visualizer` 工具的配置文件在以下路径,详细说明请参照表8。 ```bash /apollo/modules/calibration/radar_lidar_visualizer/conf/radar_lidar_visualizer.conf ``` 表8. 毫米波雷达到激光雷达投影工具配置项说明 配置项 | 说明 --- | --- radar_topic | Radar的数据topic lidar_topic | LiDAR的点云topic radar_camera_extrinsics_filename | 毫米波雷达到相机的外参文件 camera_lidar_extrinsics_filename | 相机到激光雷达的外参文件 output_path | 标定结果输出路径 * 验证图片同样保存在 `[output]/validation` 目录下。
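The intrinsic file shown earlier in this guide stores the camera matrix `K` and the plumb_bob distortion coefficients `D`. As a sanity check on such a file, the self-contained sketch below projects one 3D point (in the camera frame) into pixel coordinates using the standard plumb_bob (Brown–Conrady) model; the numeric values are copied from the sample intrinsic YAML above, and the test point is an arbitrary example.

```cpp
// Project a 3D point (camera frame, meters) into pixel coordinates using the
// intrinsic matrix K and plumb_bob distortion D = [k1, k2, p1, p2, k3].
// The numbers below are copied from the sample intrinsic YAML in this guide.
#include <cstdio>

int main() {
  const double fx = 1959.678185, fy = 1953.786100;   // K[0], K[4]
  const double cx = 1003.592207, cy = 507.820634;    // K[2], K[5]
  const double k1 = -0.535253, k2 = 0.259291;        // radial distortion
  const double p1 = 0.004276, p2 = -0.000503;        // tangential distortion
  const double k3 = 0.0;

  // Example point: 20 m in front of the camera, 1 m to the right, 0.5 m up.
  const double X = 1.0, Y = -0.5, Z = 20.0;

  const double x = X / Z, y = Y / Z;                  // normalized image coordinates
  const double r2 = x * x + y * y;
  const double radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
  const double xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
  const double yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;

  const double u = fx * xd + cx;                      // pixel column
  const double v = fy * yd + cy;                      // pixel row
  std::printf("u = %.2f, v = %.2f\n", u, v);
  return 0;
}
```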
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/传感器标定/apollo_1_5_lidar_calibration_guide_cn.md
欢迎使用Apollo传感器标定服务。本文档提供64线激光雷达与组合惯导之间的外参标定服务使用流程。 ### 文档概览 1. 服务概述 2. 准备工作 3. 标定数据录制 4. 标定数据上传以及任务创建 5. 标定结果获取 6. 错误说明 ### 服务概述 本服务作为Apollo整车传感器标定功能中的一部分,提供Velodyne 64线激光雷达HDL-64ES3与IMU之间的外参标定功能。标定结果可用于将激光雷达检测的障碍物转换至IMU坐标系,进而转到世界坐标系下。标定结果以 `.yaml` 文件形式返回。 ### 准备工作 为了更好地使用本服务,请按以下顺序进行准备工作: 1.安装Apollo所支持的64线激光雷达和组合惯性导航系统,下载镜像安装docker环境。 2.开机并启动64线激光雷达以及组合惯导系统。Novatel组合惯导初次上电时需要校准。此时应将车在开阔地带进行直行、左右转弯等操作,直至惯导初始化完成。 3.确认本服务所需传感器数据的topic均有输出。[如何查看传感器有数据输出?](../../15_FAQS/Calibration_FAQs_cn.md) 本服务所需的topics如下表1所示: 表1. 传感器topic名称 | 传感器 | Topic名称 | Topic发送频率(Hz) | | --------- | ---------------------------------------- | ------------- | | HDL-64ES3 | /apollo/sensor/velodyne64/VelodyneScanUnified | 10 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | 4.确认车辆采集标定数据时的定位状态为56。[如何查看车辆定位状态?](../../15_FAQS/Calibration_FAQs_cn.md) 5.选择合适的标定场地。 标定的地点需要选择无高楼遮挡、地面平坦、四周有平整的建筑物并且可以进行如图1所示8字轨迹行驶的地方。一个合适的标定场地如图2所示。 ![](images/calibration/lidar_calibration/trajectory.png) <p align="center">图1 标定所需车辆行驶的轨迹。</p> ![](images/calibration/lidar_calibration/field.png) <p align="center">图2 标定场地。</p> ### 标定数据录制 准备工作完成后,将车辆驶入标定场地进行标定数据的录制。 1.录制脚本工具为 `apollo/script/lidar_calibration.sh`。 2.运行以下命令,开始数据录制工作: ```bash bash lidar_calibration.sh start_record ``` 所录制的bag在 `apollo/data/bag` 目录下。 3.以8字形轨迹驾驶汽车,将车速控制在20-40km/h,并使转弯半径尽量小。行驶的时长3分钟即可,但要保证标定数据至少包含一个完整的8字。 4.录制完成后,输入以下命令结束数据录制: ```bash bash lidar_calibration.sh stop_record ``` 5.随后,程序会检测所录制的bag中是否含有所需的所有topics。检测通过后,会将bag打包成 `lidar_calib_data.tar.gz` 文件,内容包括录制的rosbag以及对应的MD5校验和文件。 ### 标定数据上传以及任务创建 录制好标定数据后,登录至[标定服务页面](https://console.bce.baidu.com/apollo/calibrator/index/list)以完成标定。 1.进入标定服务页面,在**任务管理**列表下点击**新建任务**按钮以新建一个标定任务。 2.进入新建任务页面后,需先填写简单的任务描述,然后点击**上传数据并创建任务**按钮,选择上传标定文件,则可以开始进行数据上传。 3.开始上传数据后,页面将跳转至任务流程视图。流程视图图示为上传进度页面,待其到达100%后则可以开始进行标定。上传期间请保持网络畅通。 4.数据上传完毕后,将开始数据校验流程,如图3所示。校验流程可以保证数据完整以及适合标定,校验项目有: * 数据包解压校验 * MD5校验 * 数据格式校验 * 8字路径与GPS质量校验 * 初始外参评估合格 若数据校验失败,则会提示相应错误。错误的原因请参照错误说明。 ![](images/calibration/lidar_calibration/calib_valid_cn.png) <p align="center">图3 标定数据校验流程。</p> 6.校验通过后将开始标定流程,一个标定进度页面会展示给用户,如图4所示。视数据大小和质量的影响,整体标定时间大约持续10-30分钟,用户可以随时进入该页面查看当前任务的标定进度。 ![](images/calibration/lidar_calibration/calib_progress_cn.png) <p align="center">图4 标定进度页面。</p> 7.标定完成后,进入人工质检环节。点击[查看]按钮会弹出用于质检的拼接点云,此时可以开始人工质检。若质检通过,则可以点击**确认入库**按钮以保存标定结果。最后,点击**下载数据**按钮来下载标定结果,至此标定流程完成。[如何进行质检?](../../15_FAQS/Calibration_FAQs_cn.md) ### 标定结果获取 1.获取标定结果前,本服务需要用户根据可视化效果确认标定结果的质量。 2.确认该标定结果质量合格后,用户可点击**确认入库**按钮将标定结果入库。之后可以在任务页面进行下载,未通过质检并入库的标定结果在任务页面不会出现下载地址。 3.外参格式解析。外参以yaml文件形式返回给用户,下面是一个外参结果文件的样例。 表1中说明了几个字段的含义。 ```bash header: seq: 0 stamp: secs: 1504765807 nsecs: 0 frame_id: novatel child_frame_id: velodyne64 transform: rotation: x: 0.02883904659307384 y: -0.03212457531272153 z: 0.697030811535172 w: 0.7157404339725393 translation: x: 0.000908140840832566 y: 1.596564931858745 z: 1 ``` 表2. 外参YAML文件字段含义 | 字段 | 含义 | | -------------- | ----------------------- | | header | 头信息,主要包含标定时间 | | child_frame_id | 所标定的源传感器ID,此时为HDL-64ES3 | | frame_id | 所标定的目标传感器ID,此时为Novatel | | rotation | 以四元数表示的外参旋转部分 | | translation | 外参的平移部分 | 4.外参使用方式 首先在`/apollo`目录下输入以下命令创建标定文件目录: ```bash mkdir -p modules/calibration/data/[CAR_ID]/ ``` 其中,**CAR\_ID**为标定车辆的车辆ID。然后将下载的外参yaml文件拷贝至对应的**CAR\_ID** 文件夹内。最后,在启动hmi后,选择需正确的**CAR\_ID**即可载入对应的标定yaml文件。 ### 错误说明 1. 数据解包错误:上传的数据不是一个合法的 `tar.gz` 文件。 2. MD5校验和错误:上传数据的MD5校验和与服务器端计算的MD5校验和不同,通常由网络传输问题引发。 3. 数据格式错误:上传的数据不是一个rosbag,或者bag里缺少指定的topic或包含其他非指定的topic,服务器端标定程序读取失败。 4. 无8字路径错误:在上传的数据中没有发现8字路径。需要确认录制的数据中是否包含至少一个8字形路径。 5. 
组合惯导定位精度不足:在上传的数据中发现定位状态不符合要求。需要确认在录制过程中的定位状态为56。
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/传感器标定/apollo_2_0_sensor_calibration_guide.md
# Apollo 2.0 Sensor Calibration Guide This guide introduces the Apollo Sensor Calibration Service and describes the three new calibration tools in Apollo 2.0: - Camera-to-Camera Calibration - Camera-to-LiDAR Calibration - Radar-to-Camera Calibration - IMU-to-Vehicle Calibration ## About This Guide This guide provides the following information: - Overview - Preparation - Using the Calibration Tools - Obtaining Calibration Results - Validation Methods and Results ## Overview The new calibration tools in Apollo 2.0 (Camera-to-Camera Calibration, Camera-to-LiDAR Calibration, and Radar-to-Camera Calibration) are provided by an onboard executable program.For LiDAR-GNSS calibration, please refer to the [LiDAR-IMU calibration guide](../../10Hardware%20Integration%20and%20Calibration/%E4%BC%A0%E6%84%9F%E5%99%A8%E6%A0%87%E5%AE%9A/apollo_lidar_imu_calibration_guide.md). Velodyne HDL-64 users can also use the calibration service in Apollo 1.5. The benefit in using these tools is that they reduce the amount of work that the user must do. The user only has to start the corresponding calibration program, and the calibration work is performed and completes in real time. The user can then verify the calibration results, which are provided as `.yaml` files. ## Preparation Download [calibration tools](https://github.com/ApolloAuto/apollo/releases/download/v2.0.0/calibration.tar.gz), and extract files to `$APOLLO_HOME/modules/calibration`. APOLLO_HOME is the root directory of apollo repository. ### Well Calibrated Intrinsics of Camera Camera intrinsics contain focus length, principal points, distortion coefficients, and other information. Users can obtain the intrinsics from other camera calibration tools such as the [ROS Camera Calibration Tools](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration) and the [Camera Calibration Toolbox for Matlab](http://www.vision.caltech.edu/bouguetj/calib_doc/). After the calibration is completed, ***users should convert the result to a specific `yaml` format file manually***. Users must ensure that the `K` and `D` data is correct: - `K` refers to the camera matrix - `D` refers to the distortion parameters The following is an example of a camera intrinsic file: ```bash header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: short_camera height: 1080 width: 1920 distortion_model: plumb_bob D: [-0.535253, 0.259291, 0.004276, -0.000503, 0.0] K: [1959.678185, 0.0, 1003.592207, 0.0, 1953.786100, 507.820634, 0.0, 0.0, 1.0] R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0] P: [1665.387817, 0.0, 1018.703332, 0.0, 0.0, 1867.912842, 506.628623, 0.0, 0.0, 0.0, 1.0, 0.0] binning_x: 0 binning_y: 0 roi: x_offset: 0 y_offset: 0 height: 0 width: 0 do_rectify: False ``` It is recommended that you perform the intrinsic calibration for every single camera instead of using unified intrinsic parameters for every camera. If you follow this practice, you can improve the accuracy of the extrinsic calibration results. ### Initial Extrinsic File The tools require the user to provide an initial extrinsic value as a reference. The following is an example of an initial extrinsic file of Camera-to-LiDAR, where `translation` is the shift distance between the camera and LiDAR. The `rotation` is the quaternion expression form of the rotation matrix. 
```bash header: seq: 0 stamp: secs: 0 nsecs: 0 frame_id: velodyne64 child_frame_id: short_camera transform: rotation: y: 0.5 x: -0.5 w: 0.5 z: -0.5 translation: x: 0.0 y: 1.5 z: 2.0 ``` **NOTE:** The Camera-to-LiDAR Calibration is more dependent on initial extrinsic values. ***A large deviation can lead to calibration failure.*** Therefore, it is essential that you provide the most accurate, initial extrinsic value as conditions allow. ### Calibration Site Because the Camera-to-LiDAR Calibration method is used in natual environment, a good location can significantly improve the accuracy of the calibration. It is recommended that you select a calibration site that includes objects such as trees, poles, street lights, traffic signs, stationary objects, and clear traffic lines. Figure 1 is an example of a good choice for a calibration site: ![](images/calibration/sensor_calibration/calibration_place.png) <p align="center"> Figure 1. Good Choice for a Calibration Site </p> ### Required Topics Users must confirm that all sensor topics required by the program have output messages. For more information, see: [How to Check the Sensor Output?](../../15_FAQS/Calibration_FAQs.md) The sensor topics that the on-board program requires are listed in Tables 1, 2, and 3. **Table 1. The Required Topics of Camera-to-Camera Calibration** | Sensor | Topic Name | Topic Feq. (Hz) | | ------------ | ---------------------------------------- | --------------- | | Short_Camera | /apollo/sensor/camera/traffic/image_short | 9 | | Long_Camera | /apollo/sensor/camera/traffic/image_long | 9 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | **Table 2. The Required Topics of Camera-to-LiDAR Calibration** | Sensor | Topic Name | Topic Feq. (Hz) | | -------------- | ---------------------------------------- | --------------- | | Short_Camera | /apollo/sensor/camera/traffic/image_short | 9 | | LiDAR | /apollo/sensor/velodyne64/compensator/PointCloud2 | 10 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | **Table 3. The Required Topics of Radar-to-Camera Calibration** | Sensor | Topic Name | Topic Feq. (Hz) | | ------------ | ---------------------------------------- | --------------- | | Short_Camera | /apollo/sensor/camera/traffic/image_short | 9 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | ## Using the Calibration Tools This section provides the following information to use the three calibration tools: - Commands to run each tool - Data collection guidelines - Location of the configuration file - Types of data output Before you begin to use the tools, you must verify that the localization status is ***56*** or the calibration tools (programs) will ***not*** collect data. Type the following command to check localization status: ```bash rostopic echo /apollo/sensor/gnss/ins_stat ``` ### Camera-to-Camera Calibration Tool 1. Run the Camera-to-Camera Calibration Tool using these commands: ```bash cd /apollo/scripts bash sensor_calibration.sh camera_camera ``` 2. Follow these guidelines to collect data: * Because the two cameras have different timestamps, they cannot be completely synchronized, so it is important to drive the vehicle very slowly when recording the data. The slow speed of the vehicle can effectively alleviate the image mismatch that is caused by the different timestamps. 
* Make sure to enable a large enough overlap of the regions of the two camera images or the tool will ***not*** be able to perform the extrinsic calibration operation. 3. Note the location of the configuration file: ```bash /apollo/modules/calibration/camera_camera_calibrator/camera_camera_calibrtor.conf ``` Table 4 identifies and describes each element in the configuration file. **Table 4. Camera-to-Camera Calibration Configuration Description** | Configuration | Description | | -------------------------------- | ---------------------------------------- | | long_image_topic | telephoto camera image topic | | short_image_topic | wide-angle camera image topic | | odometry_topic | vehicle vodometry topic | | ins_stat_topic | vehicle locolization status topic | | long_camera_intrinsics_filename | intrinsic file of telephoto camera | | short_camera_intrinsics_filename | intrinsic file of wide-angle camera | | init_extrinsics_filename | initial extrinsic file | | output_path | calibration results output path | | max_speed_kmh | limitation of max vehicle speed, unit: km/h | 4. The types of output from the Camera-to-Camera Calibration Tool are: * The calibrated extrinsic file, provided as a `.yaml` file. * Validation images that include: * An image captured by the telephoto camera. * An image captured by the wide-angle camera. * A warp image blended with an undistorted wide-angle camera image and an undistorted telephoto camera image. ### Camera-to-LiDAR Calibration 1. Run the Camera-to-LiDAR Calibration Tool using these commands: ```bash cd /apollo/scripts bash sensor_calibration.sh lidar_camera ``` 2. Follow these guidelines to collect data: * Because the two cameras have different timestamps, they cannot be completely synchronized, so it is important to drive the vehicle very slowly when recording the data. The slow speed of the vehicle can effectively alleviate the image mismatch that is caused by the different timestamps. * Make sure that there are a certain number of (over 500) projection points in the camera image, or the tool ***cannot*** perform the extrinsic calibration operation. For this reason, this tool is only for wide angle cameras. 3. Note the location of the saved configuration file: ```bash /apollo/modules/calibration/lidar_camera_calibrator/camera_camera_calibrtor.conf ``` Table 5 identifies and describes each element in the configuration file. **Table 5. Camera-to-LiDAR Calibration Configuration Description** | Configuration | Description | | -------------------------- | ---------------------------------------- | | camera_topic | wide-angle camera image topic | | lidar_topic | LiDAR point cloud topic | | odometry_topic | vehicle odometry topic | | ins_stat_topic | vehicle localization status topic | | camera_intrinsics_filename | intrinsic file of camera | | init_extrinsics_filename | initial extrinsic file | | output_path | calibration results output path | | calib_stop_count | required stops of capturing data | | max_speed_kmh | limitation of max vehicle speed, unit: km/h | 4. The types of output from the Camera-to-LiDAR Calibration Tool are: * The calibrated extrinsic file, provided as a `.yaml` file * Two validation images that project the LiDAR point cloud onto a camera image: * One image is colored with depth * One image is colored with intensity ### Radar-to-Camera Calibration 1. Run the Radar-to-Camera Calibration Tool using these commands: ```bash cd /apollo/scripts bash sensor_calibration.sh radar_camera ``` 2. 
Follow this guideline to collect data: Drive the vehicle at a low speed and in a straight line to enable the calibration tool to capture data only under this set of conditions. 3. Note the location of the saved configuration file: ```bash /apollo/modules/calibration/radar_camera_calibrator/conf/radar_camera_calibrtor.conf ``` Table 6 identifies and describes each element in the configuration file. **Table 6. Radar-to-Camera Calibration Configuration Description** | Configuration | Description | | -------------------------- | ---------------------------------------- | | camera_topic | wide angle camera image topic | | odometry_topic | vehicle odometry topic | | ins_stat_topic | vehicle locolization status topic | | camera_intrinsics_filename | intrinsic file of camera | | init_extrinsics_filename | initial extrinsic file | | output_path | calibration results output path | | max_speed_kmh | limitation of max vehicle speed, unit: km/h | 4. The types of output from the Radar-to-Camera Calibration tool are: * The calibrated extrinsic file, provided as a `.yaml` file * A validation image that includes the projection result from Radar-to-LiDAR. You need to run the `radar_lidar_visualizer` tool to generate the image. See [Radar LiDAR Visualizer Projection Tool](####Radar LiDAR Visualizer Projection Tool) for more information. ## IMU-to-Vehicle Calibration 1. Download the [calibration tool](https://apollocache.blob.core.windows.net/apollo-cache/imu_car_calibrator.zip). 2. Start the vehicle to move before calibration. The vehicle should keep going straight at speed of 3m/s for 10s at least. There is no need to provide the intrinsic and initial extrinsic. 3. Required topic: INS /apollo/sensors/gnss/odemetry 100Hz 4. Run the IMU-to-Vehicle Calibration using these commands: ```bash cd /apollo bash scripts/sensor_calibration.sh imu_vehicle ``` 4. The result is saved as vehicle_imu_extrinsics.yaml in current path. Here is an example: ```bash header seq: 0 stamp: secs: 1522137131 nsecs: 319999933 frame_id: imu transform: translation: x: 0.0 y: 0.0 z: 0.0 rotation: x: -0.008324888458427 y: -0.000229845441991 z: 0.027597957866274 w: 0.999584411705604 child_frame_id: vehicle #pitch install error: -0.954337 #roll install error: 0.000000 #yaw install error: 3.163004 ``` ### (Optional) Run All Calibration Tools If necessary, users can run all calibration tools using these commands: ```bash cd /apollo/scripts bash sensor_calibration.sh all ``` ## Obtaining Calibration Results All calibration results are saved under the `output` path in the configuration files, and they are provided in `yaml` format. In addition, depending on the sensor, the calibration results are stored in different folders in the `output` directory as shown in Table 7: **Table 7. Path of Saved Calibration Results for Each Sensor** | Sensor | Path for Saved Results | | ------------ | ---------------------- | | Short_Camera | [output]/camera_params | | Long_Camera | [output]/camera_params | | Radar | [output]/radar_params | ## Validation Methods and Results When the calibration is complete, the corresponding calibration result verification image is generated in the `[output]/validation` directory. This section provides the background information and the corresponding validation method to use to evaluate verification images for each calibration tool. 
### Camera-to-Camera Calibration * **Background Information:** In the warp image, the green channel is produced from the wide-angle camera image, and the red and blue channels are produced from the telephoto camera image. Users can compare the alignment result of the warp image to validate the precision of the calibrated extrinsic parameter. * **Validation Method:** In the fusion area of the warp image, judge the alignment of the scene 50 meters away from the vehicle. If the images coincide completely, the extrinsic parameter is satisfactory. However, if a pink or green ghost (displacement) appears, the extrinsic parameter is in error. When the error is greater than a certain range (for example, 20 pixels, determined by the actual usage), you need to re-calibrate the extrinsic parameter. Under general circumstances, due to the parallax, some dislocations may occur in the horizontal with close objects, but the vertical direction is not affected. This is a normal phenomenon. * **Examples: **As shown in the following examples, Figure 2 meets the precision requirements of the extrinsic parameter, and Figure 3 does not. ![](images/calibration/sensor_calibration/cam_cam_good.png) <a align="center"> Figure 2. Good Calibration Result for Camera-to-Camera Calibration </p> ![](images/calibration/sensor_calibration/cam_cam_error.png) <a align="center"> Figure 3. Bad Calibration Result for Camera-to-Camera Calibration</p> ### Camera-to-LiDAR Calibration * **Background Information:** In the point cloud projection images, users can see objects and signs with obvious edges and compare the alignment. * **Validation Method:** If the target is within 50 meters, its edge of point cloud can coincide with the edge of the image, and the accuracy of the calibration results can be proved to be very high. However, if there is a misplacement, the calibration results are in error. The extrinsic parameter is ***not*** available when the error is greater than a certain range (for example, 5 pixels, depending on the actual usage). * **Examples:** As shown in the following examples, Figure 4 meets the precision requirements of the extrinsic parameter, and Figure 5 does not. ![](images/calibration/sensor_calibration/cam_lidar_good.png) <p align="center"> Figure 4. Good Camera-to-LiDAR Calibration Validation Result </p> ![](images/calibration/sensor_calibration/cam_lidar_error.png) <p align="center"> Figure 5. Bad Camera-to-LiDAR Calibration Validation Result </p> ### Radar-to-Camera Calibration * **Background Information:** To verify the extrinsic output, use the LiDAR in the system as a medium. This approach enables you to obtain: * The extrinsic parameter of the radar relative to the LiDAR through the extrinsic value of the radar relative to the camera * The extrinsic value of the camera relative to the LiDAR You can then draw a bird's-eye-view fusion image, which fuses the radar data and the LiDAR data in the LiDAR coordinate system. You can use the alignment of the radar data and the LiDAR data in the bird's-eye-view fusion image to judge the accuracy of the extrinsic parameter. In the fusion image, all of the small white points indicate the LiDAR point cloud, while the large green solid circles indicate radar objects. * **Validation Method:** The alignment of the radar object and the LiDAR data in the bird's-eye-view fusion image shows the accuracy of the extrinsic parameter. If most of the targets coincide, it is satisfactory. 
However, if over 40% targets (especially vehicles) ***do not*** align, it is ***not*** satisfactory and you need to re-calibrate. * **Examples:** As shown in the following examples, Figure 6 meets the precision requirements of the extrinsic parameter, and Figure 7 does not. ![](images/calibration/sensor_calibration/radar_cam_good.png) <p align="center"> Figure 6. Good Camera-to-Radar Calibration Validation Result </p> ![](images/calibration/sensor_calibration/radar_cam_error.png) <p align="center"> Figure 7. Bad Camera-to-Radar Calibration Validation Result </p> #### **Radar LiDAR Visualizer Projection Tool** To obtain the fusion image of the radar data and the LiDAR point cloud, the calibration process automatically (if using `bash sensor_calibration.sh all`) or manually (if using `bash sensor_calibration.sh visualizer`) calls another projection tool, the `radar_lidar_visualizer`. The projection tool loads the extrinsic files of the Radar-to-Camera and the Camera-to-LiDAR. **IMPORTANT:** Before the projection tool starts, make sure that the two extrinsic parameters are well calibrated and exist in the specific path set in the configuration file (`radar_camera_extrinsics_filename` and `camera_lidar_extrinsics_filename`). 1. Run the `radar_lidar_visualizer` program using these commands: ``` cd /apollo/scripts bash sensor_calibration.sh visualizer ``` 2. Note the saved location of the configuration file of `radar_lidar_visualizer`: ```bash /apollo/modules/calibration/radar_lidar_visualizer/conf/radar_lidar_visualizer.conf ``` Table 8 identifies and describes each element in the projection tool configuration file. **Table 8. Projection Tool Radar-to-LiDAR Configuration Description** | Configuration File | Description | | -------------------------------- | --------------------------------------- | | radar_topic | Radar data topic | | lidar_topic | LiDAR point cloud topic | | radar_camera_extrinsics_filename | Calibrated extrinsic of Radar-to-Camera | | camera_lidar_extrinsics_filename | Calibrated extrinsic of Camera-to-LiDAR | | output_path | Validation results output path | 3. Note the location of the saved validation image: `[output]/validation`
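As described above, `radar_lidar_visualizer` chains the Radar-to-Camera extrinsic with the Camera-to-LiDAR extrinsic to express radar targets in the LiDAR frame. The sketch below illustrates that composition with Eigen. It is a minimal example, not the tool's actual code: the Camera-to-LiDAR values are taken from the sample initial extrinsic earlier in this guide, while the Radar-to-Camera values are placeholders in the same quaternion-plus-translation format as the calibration output.

```cpp
// Compose T_lidar_camera * T_camera_radar to map a radar target into the LiDAR frame.
// Quaternion/translation values follow the same layout as the extrinsic YAML files.
#include <iostream>
#include <Eigen/Dense>
#include <Eigen/Geometry>

Eigen::Isometry3d MakeTransform(double qx, double qy, double qz, double qw,
                                double tx, double ty, double tz) {
  Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
  T.linear() = Eigen::Quaterniond(qw, qx, qy, qz).normalized().toRotationMatrix();
  T.translation() = Eigen::Vector3d(tx, ty, tz);
  return T;
}

int main() {
  // Radar -> camera (placeholder values for illustration only).
  const Eigen::Isometry3d T_camera_radar =
      MakeTransform(0.0, 0.0, 0.0, 1.0, 0.0, -0.3, 0.5);
  // Camera -> LiDAR, using the sample initial extrinsic shown in this guide.
  const Eigen::Isometry3d T_lidar_camera =
      MakeTransform(-0.5, 0.5, -0.5, 0.5, 0.0, 1.5, 2.0);

  // Chain the two calibrated extrinsics: radar -> camera -> LiDAR.
  const Eigen::Isometry3d T_lidar_radar = T_lidar_camera * T_camera_radar;

  // A radar target 30 m ahead in the radar frame, expressed in the LiDAR frame.
  const Eigen::Vector3d target_radar(30.0, 0.0, 0.0);
  const Eigen::Vector3d target_lidar = T_lidar_radar * target_radar;
  std::cout << "target in lidar frame: " << target_lidar.transpose() << std::endl;
  return 0;
}
```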
apollo_public_repos/apollo/docs/11_Hardware Integration and Calibration/传感器标定/apollo_lidar_imu_calibration_guide.md
## Apollo LiDAR - IMU Calibration Service ``` Note: This guide cannot be used for version Apollo 3.5 ``` Welcome to the Apollo sensor calibration service. This document describes the process of the extrinsic calibration service between 64-beam Light Detection And Ranging (LiDAR) and Inertial Navigation System (INS). ## Apollo Sensor Calibration Catalog - [Overview](#overview) - [Preparing the Sensors](#preparing-the-sensors) - [Recording calibration data](#recording-calibration-data) - [Uploading Calibration Data and Creating a Calibration Service Task](#uploading-calibration-data-and-creating-a-calibration-service-task) - [Obtaining calibration results](#obtaining-calibration-results) - [Types of Errors encountered](#types-of-errors) ### Overview The Apollo vehicle sensor calibration function provides the extrinsic calibration between Velodyne HDL-64ES3 and IMU. The calibration results can be used to transfer the obstacle location detected by LiDAR to the IMU coordinate system, and then to the world coordinate system. The results are provided in `.yaml` format files. ### Preparing the Sensors In order to calibrate the sensors, it is important to prepare them first, using the following steps: 1. Install 64-beams LiDAR and INS supported by Apollo, and then deploy the Docker environment. 2. Start up the 64-beams LiDAR and INS. The INS must be aligned when it is powered on. At this point, the car should be driven straight, then turned left and turned right in an open area, until the initialization is completed. 3. Confirm that all sensor topics required by this service have the following output : [How to Check the Sensor Output?](../15_FAQS/Calibration_FAQs.md) The topics required by the calibration service are shown in the following Table 1: Table 1. Sensor topics. | Sensor | Topic Name | Topic Feq. (Hz) | | --------- | ---------------------------------------- | --------------- | | HDL-64ES3 | /apollo/sensor/velodyne64/VelodyneScanUnified | 10 | | INS | /apollo/sensor/gnss/odometry | 100 | | INS | /apollo/sensor/gnss/ins_stat | 2 | 4. Confirm that the INS status is 56 when recording data. To learn how please go to : [How to Check INS Status?](../15_FAQS/Calibration_FAQs.md) 5. Choose an appropriate calibration field. ```An ideal calibration field requires no tall buildings around the calibration area. If buildings are near, low-rising building facades are preferred. Finally, the ground should be smooth, not rough, and it should be easy to drive the car following the trajectory that looks like the ∞ symbol as illustrated in Figure 1. An example of a good calibration field is shown in Figure 2. ``` ![](images/lidar_calibration/trajectory.png) <p align="center">Figure 1. The trajectory for calibration.</p> ![](images/lidar_calibration/field.png) <p align="center">Figure 2. Calibration field.</p> ### Recording Calibration Data After the preparation steps are completed, drive the vehicle to the calibration field to record the calibration data. 1. The recording script is `apollo/script/lidar_calibration.sh`. 2. Run the following command to record data: ```bash bash lidar_calibration.sh start_record ``` The recorded bag is under the directory `apollo/data/bag`. 3. Drive the car following a ∞ symbol path, using a controlled speed of 20-40km/h, and make the turning radius as small as possible. The total time length should within 3 minutes, but please make sure that your calibration drive contains at least one full ∞ symbol path. 4. 
After recording, run the following command to stop the data recording: ```bash bash lidar_calibration.sh stop_record ``` 5. Then, the program will detect whether or not the recorded bag contains all the required topics. After passing the test, the bag will be packaged into file `lidar_calib_data.tar.gz`, including the recorded rosbag and the corresponding MD5 checksum file. ### Uploading Calibration Data and Creating a Calibration Service Task After recording the calibration data, please login to the [calibration service page](https://console.bce.baidu.com/apollo/calibrator/index/list) to complete the calibration. 1. Enter the calibration service page and click the **New Task** button under the **Task Management** list to create a new calibration task. 2. After entering the New Task page, you need to fill in a simple description of this task. Then click the **Upload and create a task** button and select the upload calibration file to start uploading the calibration data. 3. After you start uploading the data, the page will display the Task Process View. The process figure is the upload progress page. The task will start to calibrate when the upload progress reaches 100%. Please keep the network unblocked during uploading. 4. When the data is uploaded, the Data Verification Process will begin, as shown in Figure 3. The validation process ensures data integrity and suitability. The validation list include: * Decompress test * MD5 checksum * Data format validation * ∞ symbol path validation * INS status validation If validation fails, the corresponding error message is prompted as seen in the third Result of the Data Validation screen below in Figure 3. ![](images/lidar_calibration/calib_valid_en.png) <p align="center">Figure 3. Calibration data verification.</p> 6. After data validation, the calibration process begins, as shown in Figure 4. A detailed calibration progress page is displayed to users. Depending on the size and quality of the data, the overall calibration time lasts about 10 - 30 minutes. You can check the progress at any time by opening the given page. ![](images/lidar_calibration/calib_progress_en.png) <p align="center">Figure 4. Calibration progress page.</p> 7. When calibration succeeds, click the **View detail** button to display a stitched point cloud. You can confirm the quality verification by checking the sharpness of the point cloud. If you are satisfied with the calibration quality, you can click **Confirm** to keep the result and download the calibration results by clicking **Download**. This fulfills the completion of the calibration process. For additional information, see: [How to Check Point Cloud Quality?](../15_FAQS/Calibration_FAQs.md) ### Obtaining Calibration Results 1. Before obtaining the calibration results, the service requires that you confirm the quality of the calibration results based on visualized point cloud. 2. After confirming the quality of the calibration result, you can click on the **Confirm** button to store the calibration result. After that, you can download the result on the Task page. The **Download** button will *NOT* appear on the task page if the result failed to pass quality verification. 3. Extrinsic file format instruction — The extrinsic is returned to you in a `.yaml` format file. We can look an at example of the format of the extrinsic file below: The field meanings shown in this example are defined in Table 2. 
```bash header: seq: 0 stamp: secs: 1504765807 nsecs: 0 frame_id: novatel child_frame_id: velodyne64 transform: rotation: x: 0.02883904659307384 y: -0.03212457531272153 z: 0.697030811535172 w: 0.7157404339725393 translation: x: 0.000908140840832566 y: 1.596564931858745 z: 1 ``` Table 2. Definition of the keys in the yaml file | Field | Meaning | | ---------------- | ---------------------------------------- | | `header` | Header information, including timestamps. | | `child_frame_id` | Source sensor ID in calibration. Will be HDL-64ES3 here. | | `frame_id` | Target sensor ID in calibration. Will be Novatel here. | | `rotation` | Rotation part of the extrinsic parameters. Represented by a quaternion. | | `translation` | Translation part of the extrinsic parameters. | 4. How to use extrinsic parameters? Enter the following command to create the calibration file directory in the apollo directory: ```bash mkdir -p modules/calibration/data/[CAR_ID]/ ``` Here, **CAR\_ID** is the vehicle ID for calibrating vehicles. Then, copy the downloaded extrinsic yaml file to the corresponding **CAR\_ID** folder. Finally, after you start HMI, select the correct **CAR\_ID** to load the corresponding calibration yaml file. ### Types of Errors encountered 1. **Data unpacking error**: The uploaded data is not a valid `tar.gz` file 2. **MD5 checksum error**: If the MD5 checksum of the uploaded data differs from the MD5 checksum computed by the server side, it could be caused by network transmission problems. 3. **Data format error**: The uploaded data is not a rosbag, or necessary topics are missing or unexpected topics exist. The server-side calibration program failed to read it. 4. **No ∞ symbol path error**: No ∞ symbol path was found in the uploaded data. Verify that the recorded data contains at least one ∞ symbol path. 5. **INS status error**: In the uploaded data, the location does not meet the requirement. Ensure that the INS status is 56 during the data recording.
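The extrinsic YAML above (frame_id `novatel`, child_frame_id `velodyne64`) describes the transform that maps a point from the LiDAR frame into the IMU frame. The following is a minimal Eigen sketch of how such a result can be applied to a single LiDAR point; the quaternion and translation are copied from the example file and should be replaced with your own calibration output.

```cpp
// Apply the LiDAR -> IMU extrinsic from the example YAML to one LiDAR point.
// Values are copied from the sample result; substitute your own calibration output.
#include <iostream>
#include <Eigen/Dense>
#include <Eigen/Geometry>

int main() {
  // rotation (w, x, y, z) and translation from the example extrinsic file.
  const Eigen::Quaterniond q(0.7157404339725393, 0.02883904659307384,
                             -0.03212457531272153, 0.697030811535172);
  const Eigen::Vector3d t(0.000908140840832566, 1.596564931858745, 1.0);

  Eigen::Isometry3d T_imu_lidar = Eigen::Isometry3d::Identity();
  T_imu_lidar.linear() = q.normalized().toRotationMatrix();
  T_imu_lidar.translation() = t;

  // An obstacle point detected 10 m in front of the LiDAR.
  const Eigen::Vector3d p_lidar(10.0, 0.0, 0.0);
  const Eigen::Vector3d p_imu = T_imu_lidar * p_lidar;

  std::cout << "point in IMU (novatel) frame: " << p_imu.transpose() << std::endl;
  return 0;
}
```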
apollo_public_repos/apollo/docs/07_Prediction/how_to_add_a_new_evaluator_in_prediction_module.md
# How to add a new evaluator in Prediction module

## Introduction

The evaluator extracts features from the raw information of obstacles and the ego vehicle, and applies a pre-trained deep learning model to produce the model output.

## Steps to add a new evaluator

Please follow the steps below to add a new evaluator named `NewEvaluator`.

1. Define a class that inherits `Evaluator`
2. Implement the class `NewEvaluator`
3. Add a new evaluator type in `prediction_conf.proto`
4. Update the prediction conf
5. Update the evaluator manager

### Define a class that inherits `Evaluator`

Create a new file named `new_evaluator.h` in the folder `modules/prediction/evaluator/vehicle`, and define it like this:

```cpp
#include "modules/prediction/evaluator/evaluator.h"

namespace apollo {
namespace prediction {

class NewEvaluator : public Evaluator {
 public:
  NewEvaluator();
  virtual ~NewEvaluator();
  void Evaluate(Obstacle* obstacle_ptr) override;
  // Other useful functions and fields.
};

}  // namespace prediction
}  // namespace apollo
```

### Implement the class `NewEvaluator`

Create a new file named `new_evaluator.cc` in the same folder as `new_evaluator.h`, and implement it like this:

```cpp
#include "modules/prediction/evaluator/vehicle/new_evaluator.h"

namespace apollo {
namespace prediction {

NewEvaluator::NewEvaluator() {
  // Implement
}

NewEvaluator::~NewEvaluator() {
  // Implement
}

void NewEvaluator::Evaluate(Obstacle* obstacle_ptr) {
  // Extract features
  // Compute new_output by applying the pre-trained model
}

// Other functions

}  // namespace prediction
}  // namespace apollo
```

### Add a new evaluator in proto

Add a new type of evaluator in `prediction_conf.proto`:

```cpp
enum EvaluatorType {
  MLP_EVALUATOR = 0;
  NEW_EVALUATOR = 1;
}
```

### Update the prediction_conf file

In the file `modules/prediction/conf/prediction_conf.pb.txt`, update the field `evaluator_type` like this:

```
obstacle_conf {
  obstacle_type: VEHICLE
  obstacle_status: ON_LANE
  evaluator_type: NEW_EVALUATOR
  predictor_type: NEW_PREDICTOR
}
```

### Update the evaluator manager

Update `CreateEvaluator( ... )` like this:

```cpp
case ObstacleConf::NEW_EVALUATOR: {
  evaluator_ptr.reset(new NewEvaluator());
  break;
}
```

Update `RegisterEvaluators()` like this:

```cpp
RegisterEvaluator(ObstacleConf::NEW_EVALUATOR);
```

After following the steps above, the new evaluator is created.

## Adding new features

If you would like to add new features, follow these instructions:

### Add a field in proto

Assume the new evaluation result is named `new_output` and its type is `int32`. If the output is related directly to the obstacles, add it to `modules/prediction/proto/feature.proto` like this:

```cpp
message Feature {
  // Other existing features
  optional int32 new_output = 1000;
}
```

If the output is related to the lane sequences, add it to `modules/prediction/proto/lane_graph.proto` like this:

```cpp
message LaneSequence {
  // Other existing features
  optional int32 new_output = 1000;
}
```
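The `Evaluate` skeleton above only notes that features are extracted and a pre-trained model is applied. As a purely illustrative, self-contained sketch of that second step (not Apollo's actual model-loading code, which reads the trained network from a model file), the computation amounts to a forward pass over the extracted feature vector; all weights and feature values below are placeholders.

```cpp
// Illustrative only: a single hidden-layer MLP forward pass over an extracted
// feature vector. Real evaluators load layer weights from a trained model file;
// the weights, sizes, and feature values here are placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

double Sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// output = sigmoid(W2 * relu(W1 * x + b1) + b2), with a single output neuron.
double Forward(const std::vector<double>& x,
               const std::vector<std::vector<double>>& W1, const std::vector<double>& b1,
               const std::vector<double>& W2, double b2) {
  std::vector<double> h(W1.size(), 0.0);
  for (size_t i = 0; i < W1.size(); ++i) {
    double s = b1[i];
    for (size_t j = 0; j < x.size(); ++j) s += W1[i][j] * x[j];
    h[i] = s > 0.0 ? s : 0.0;  // ReLU
  }
  double out = b2;
  for (size_t i = 0; i < h.size(); ++i) out += W2[i] * h[i];
  return Sigmoid(out);
}

int main() {
  // Placeholder "extracted features" (e.g. obstacle speed, heading, lane offset).
  const std::vector<double> features = {1.2, -0.3, 0.8};
  const std::vector<std::vector<double>> W1 = {{0.5, -0.2, 0.1}, {0.3, 0.4, -0.6}};
  const std::vector<double> b1 = {0.0, 0.1};
  const std::vector<double> W2 = {0.7, -0.5};
  const double b2 = 0.05;

  std::printf("model output: %.4f\n", Forward(features, W1, b1, W2, b2));
  return 0;
}
```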
apollo_public_repos/apollo/docs/07_Prediction/Class_Architecture_Planning.md
# Class Architecture and Overview -- Planning Module

## Data Output and Input

### Output Data

The planning output data is defined in `planning.proto`, as shown below.

![img](images/class_architecture_planning/image001.png)

#### *planning.proto*

Inside the proto data definition, the planning output includes the total planned time and length, as well as the actual trajectory for control to execute, which is defined in `repeated apollo.common.TrajectoryPoint trajectory_point`. A trajectory point is derived from a `path_point`, to which speed, acceleration and timing attributes are added. Each `trajectory_point` is defined in `pnc_point.proto` and contains the detailed attributes of the trajectory.

![img](images/class_architecture_planning/image002.png)

![img](images/class_architecture_planning/image003.png)

In addition to the trajectory, the planning module also outputs rich annotation information. The important data fields are:

- Estop
- DecisionResult
- Debug information

`Estop` is a command that indicates errors and exceptions. For example, when the autonomous vehicle collides with an obstacle or cannot obey traffic rules, estop signals are sent. The `DecisionResult` data is used mainly for simulation display so that developers have a better understanding of the planning results. More detailed numerical intermediate results are stored in the debug information and sent out for debugging purposes.

### Input Data

To compute the final published trajectory, the planning module leverages various input data sources. The planning input data sources are:

- Routing
- Perception and Prediction
- Vehicle Status and Localization
- HD-Map

Routing defines the query concept “where I want to go” for the autonomous vehicle, and the message is defined in `routing.proto`. The `RoutingResponse` contains the `RoadSegment`, which identifies the road to follow or the lanes to use to reach the destination, as shown below.

![img](images/class_architecture_planning/image004.png)

The messages regarding the query concept “what is surrounding me” are defined mainly in `perception_obstacles.proto` and `traffic_light_detection.proto`. The `perception_obstacles.proto` defines the obstacles perceived by the perception module around the autonomous vehicle, while `traffic_light_detection` defines the perceived traffic light statuses (if any). In addition to the perceived obstacles, what is important for the planning module are the predicted trajectories for each perceived dynamic obstacle. Therefore, the `prediction.proto` wraps the `perception_obstacle` message with a predicted trajectory, as shown below.

![img](images/class_architecture_planning/image005.png)

Each predicted trajectory has a probability associated with it, and one obstacle might have multiple predicted trajectories.

In addition to the query concepts “where I want to go” and “what is surrounding me”, another important query concept is “where am I”. Such information is obtained from the HD-Map and Localization modules. Both localization and vehicle chassis information are incorporated in the `VehicleState` message, which is defined in `vehicle_state.proto`, as shown below.

![img](images/class_architecture_planning/image009.png)

## Code Structure and Class Hierarchy

The code is organized as follows: the planning code entrance is `planning.cc`. Inside the planner, the important class members are shown in the illustration below.

![img](images/class_architecture_planning/image006.png)

The `ReferenceLineInfo` is a wrapper of the `ReferenceLine` class, which represents a smoothed guideline for planning. **Frame** contains all the data dependencies, including the perceived obstacles with their predicted trajectories and the current status of the autonomous vehicle. **HD-Map** is leveraged as a library inside the planning module for ad-hoc map queries. **EM Planner** does the actual planning and derives from the **Planner** class. Both the EM Planner that is used in the Apollo 2.0 release and the previously released **RTK Planner** derive from the Planner class.

![img](images/class_architecture_planning/image007.png)

For example, inside a planning cycle the EM Planner takes an iterative approach in which three categories of tasks interweave. The relationships of these “**decider/optimizer**” classes are illustrated below.

![img](images/class_architecture_planning/image008.png)

- **Deciders** include the traffic decider, path decider and speed decider.
- **Path Optimizers** are the DP/QP path optimizers.
- **Speed Optimizers** are the DP/QP speed optimizers.

| **NOTE:** |
| ---------------------------------------- |
| DP means dynamic programming while QP means quadratic programming. After the computation, the final spatio-temporal trajectory is then discretized and published so that the downstream control module is able to execute it. |
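As a quick illustration of the trajectory output described above, the following sketch walks a list of trajectory points whose fields mirror the description of `trajectory_point` (path attributes plus speed, acceleration and timing). The struct and the sample values are simplified stand-ins, not the generated protobuf classes.

```cpp
#include <iostream>
#include <vector>

// Simplified stand-in for a planning trajectory point: path attributes plus
// the speed/acceleration/timing attributes added on top of a path_point.
struct TrajectoryPoint {
  double x, y, theta;          // path attributes
  double v, a, relative_time;  // speed, acceleration, timing
};

int main() {
  // Illustrative published trajectory (three points, 0.1 s apart).
  std::vector<TrajectoryPoint> trajectory = {
      {0.0, 0.0, 0.0, 5.0, 0.2, 0.0},
      {0.5, 0.0, 0.0, 5.1, 0.2, 0.1},
      {1.0, 0.0, 0.0, 5.2, 0.2, 0.2},
  };
  // A downstream consumer (e.g. control) walks the points in time order.
  for (const auto& p : trajectory) {
    std::cout << "t=" << p.relative_time << "s  x=" << p.x << "  v=" << p.v
              << std::endl;
  }
  return 0;
}
```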
apollo_public_repos/apollo/docs/07_Prediction/how_to_add_a_new_predictor_in_prediction_module_cn.md
# 如何在预测模块中添加一个预测器

## 简介

预测器为每个障碍物生成预测轨迹。在这里,假设我们想给我们的车辆增加一个新的预测器,用于其他类型的障碍物,步骤如下:

1. 定义一个继承自基类 `Predictor` 的类
2. 实现新类 `NewPredictor`
3. 在 `prediction_conf.proto` 中添加一个新的预测器类型
4. 更新 prediction_conf
5. 更新预测器管理器(Predictor manager)

## 添加新预测器的步骤

如下步骤将会指导您在预测模块中添加一个 `NewPredictor`。

### 定义一个继承自基类 `Predictor` 的类

在文件夹 `modules/prediction/predictor/vehicle` 中创建一个名为 `new_predictor.h` 的文件,文件内容如下:

```cpp
#include "modules/prediction/predictor/predictor.h"

namespace apollo {
namespace prediction {

class NewPredictor : public Predictor {
 public:
  void Predict(Obstacle* obstacle) override;
  // Other useful functions and fields.
};

}  // namespace prediction
}  // namespace apollo
```

### 实现新类 `NewPredictor`

在创建了 `new_predictor.h` 的文件夹中创建文件 `new_predictor.cc`,文件内容如下:

```cpp
#include "modules/prediction/predictor/vehicle/new_predictor.h"

namespace apollo {
namespace prediction {

void NewPredictor::Predict(Obstacle* obstacle) {
  // Get the results from evaluator
  // Generate the predicted trajectory
}

// Other functions

}  // namespace prediction
}  // namespace apollo
```

### 在 `prediction_conf.proto` 中添加一个新的预测器类型

```
enum PredictorType {
  LANE_SEQUENCE_PREDICTOR = 0;
  FREE_MOVE_PREDICTOR = 1;
  REGIONAL_PREDICTOR = 2;
  MOVE_SEQUENCE_PREDICTOR = 3;
  NEW_PREDICTOR = 4;
}
```

### 更新 prediction_conf

在 `modules/prediction/conf/prediction_conf.pb.txt` 中,更新 `predictor_type` 部分如下:

```
obstacle_conf {
  obstacle_type: VEHICLE
  obstacle_status: ON_LANE
  evaluator_type: NEW_EVALUATOR
  predictor_type: NEW_PREDICTOR
}
```

### 更新预测器管理器(Predictor manager)

更新 `CreatePredictor( ... )` 如下:

```cpp
  case ObstacleConf::NEW_PREDICTOR: {
    predictor_ptr.reset(new NewPredictor());
    break;
  }
```

更新 `RegisterPredictors()` 如下:

```cpp
  RegisterPredictor(ObstacleConf::NEW_PREDICTOR);
```

完成以上步骤后,一个新的预测器就创建好了。
apollo_public_repos/apollo/docs/07_Prediction/prediction_predictor.md
# PREDICTION PREDICTOR

# Introduction

The prediction module comprises 4 main functionalities: Container, Scenario, Evaluator and Predictor. The Predictor generates predicted trajectories for obstacles. Currently, the supported predictors include:

- Empty: obstacles have no predicted trajectories
- Single lane: obstacles move along a single lane in highway navigation mode. Obstacles not on a lane will be ignored.
- Lane sequence: the obstacle moves along the lanes
- Move sequence: the obstacle moves along the lanes by following its kinetic pattern
- Free movement: the obstacle moves freely
- Regional movement: the obstacle moves in a possible region
- Junction: obstacles move toward junction exits with high probabilities
- Interaction predictor: computes the likelihood to create posterior prediction results after all evaluators have run. This predictor was created for caution-level obstacles
- Extrapolation predictor: extends the Semantic LSTM evaluator's results to create an 8 sec trajectory

Here we mainly introduce three typical predictors: the extrapolation predictor, the move sequence predictor and the interaction predictor. The other predictors are similar to them.

# Where is the code

Please refer to [prediction predictor](https://github.com/ApolloAuto/apollo/modules/prediction/predictor).

# Code Reading

## Extrapolation predictor

1. This predictor is used to extend the Semantic LSTM evaluator's results to create a long-term trajectory (8 sec).
2. There are two main kinds of extrapolation: extrapolate by lane and extrapolate by free move.
   1. Based on a search radius and an angle threshold, both of which can be changed in the prediction config, we find the lane that best matches the short-term predicted trajectory obtained from the Semantic LSTM evaluator.
      ```cpp
      LaneSearchResult SearchExtrapolationLane(const Trajectory& trajectory,
                                               const int num_tail_point);
      ```
   2. If a matching lane is found, we extend the short-term predicted trajectory by lane.
      1. First, we remove the points that are not in the matching lane.
      2. Then, we project the modified short-term predicted trajectory onto the matching lane to get its SL info.
         ```cpp
         static bool GetProjection(
             const Eigen::Vector2d& position,
             const std::shared_ptr<const hdmap::LaneInfo> lane_info,
             double* s, double* l);
         ```
      3. According to the prediction horizon, we extend the modified short-term predicted trajectory along the matching lane with a constant-velocity model and get smooth points from the lane.
         ```cpp
         static bool SmoothPointFromLane(const std::string& id, const double s,
                                         const double l, Eigen::Vector2d* point,
                                         double* heading);
         ```
      4. Note that the extrapolation speed used in the constant-velocity model is calculated by calling `ComputeExtraplationSpeed`.
         ```cpp
         void ExtrapolateByLane(const LaneSearchResult& lane_search_result,
                                const double extrapolation_speed,
                                Trajectory* trajectory_ptr,
                                ObstacleClusters* clusters_ptr);
         ```
         ```cpp
         double ComputeExtraplationSpeed(const int num_tail_point,
                                         const Trajectory& trajectory);
         ```
   3. Otherwise, we use the free-move model to extend.
      ```cpp
      void ExtrapolateByFreeMove(const int num_tail_point,
                                 const double extrapolation_speed,
                                 Trajectory* trajectory_ptr);
      ```

## Move sequence predictor

1. The obstacle moves along the lanes by its kinetic pattern.
2. Ignore the lane sequences with low probability.
   ```cpp
   void FilterLaneSequences(
       const Feature& feature, const std::string& lane_id,
       const Obstacle* ego_vehicle_ptr,
       const ADCTrajectoryContainer* adc_trajectory_container,
       std::vector<bool>* enable_lane_sequence);
   ```
3. If there is a stop sign on the lane, we check whether the ADC is supposed to stop.
   ```cpp
   bool SupposedToStop(const Feature& feature, const double stop_distance,
                       double* acceleration);
   ```
   1. If the ADC is about to stop, we produce the trajectory with a constant-acceleration model.
      ```cpp
      void DrawConstantAccelerationTrajectory(
          const Obstacle& obstacle, const LaneSequence& lane_sequence,
          const double total_time, const double period,
          const double acceleration,
          std::vector<apollo::common::TrajectoryPoint>* points);
      ```
   2. Otherwise, we produce the trajectory by the obstacle's kinetic pattern.
      ```cpp
      bool DrawMoveSequenceTrajectoryPoints(
          const Obstacle& obstacle, const LaneSequence& lane_sequence,
          const double total_time, const double period,
          std::vector<apollo::common::TrajectoryPoint>* points);
      ```

## Interaction predictor

1. Compute the likelihood to create posterior prediction results after all evaluators have run. This predictor was created for caution-level obstacles.
2. Sample the ADC trajectory at a fixed interval (which can be changed in the prediction gflag file).
   ```cpp
   void BuildADCTrajectory(
       const ADCTrajectoryContainer* adc_trajectory_container,
       const double time_resolution);
   ```
3. Compute the trajectory cost for each short-term predicted trajectory. The trajectory cost is a weighted cost over different trajectory evaluation metrics, such as acceleration, centripetal acceleration and collision cost, which can be written in the following form:
   ```
   total_cost = w_acc * cost_acc + w_centri * cost_centri + w_collision * cost_collision
   ```
   Note that the collision cost is calculated from the distance between the ADC and obstacles.
   ```cpp
   double ComputeTrajectoryCost(
       const Obstacle& obstacle, const LaneSequence& lane_sequence,
       const double acceleration,
       const ADCTrajectoryContainer* adc_trajectory_container);
   ```
4. We use the following equation to compute the likelihood for each short-term predicted trajectory, where `alpha` can be changed in the prediction gflag file:
   ```
   likelihood = exp(-alpha * total_cost)
   ```
   ```cpp
   double ComputeLikelihood(const double cost);
   ```
5. Based on the likelihood, we get the posterior prediction results.
   ```cpp
   double ComputePosterior(const double prior, const double likelihood);
   ```
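The constant-velocity extension used by the extrapolation predictor can be illustrated with a short, self-contained sketch. The start state, heading, speed, 0.1 s step and 5 s extension below are made-up numbers; the real implementation works on the prediction protos and the matched lane geometry rather than these plain structs.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y, t; };

// Extend a trajectory from its last point at constant speed along a fixed
// heading, producing one point every `step` seconds up to `horizon` seconds.
std::vector<Point> ExtrapolateConstantVelocity(const Point& last, double heading,
                                               double speed, double horizon,
                                               double step) {
  std::vector<Point> points;
  for (double t = step; t <= horizon; t += step) {
    points.push_back({last.x + speed * t * std::cos(heading),
                      last.y + speed * t * std::sin(heading),
                      last.t + t});
  }
  return points;
}

int main() {
  Point last{5.0, 1.0, 3.0};  // end of the short-term predicted trajectory
  std::vector<Point> tail = ExtrapolateConstantVelocity(last, 0.2, 4.0, 5.0, 0.1);
  std::cout << "extended by " << tail.size() << " points, last x = "
            << tail.back().x << std::endl;
  return 0;
}
```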
apollo_public_repos/apollo/docs/07_Prediction/prediction_evaluator.md
# PREDICTION EVALUATOR

# Introduction

The prediction module comprises 4 main functionalities: Container, Scenario, Evaluator and Predictor. The Evaluator predicts path and speed separately for any given obstacle. An evaluator evaluates a path by outputting a probability for it (the lane sequence) using the given model stored in `prediction/data/`.

The list of available evaluators includes:

- Cost evaluator: probability is calculated by a set of cost functions
- MLP evaluator: probability is calculated using an MLP model
- RNN evaluator: probability is calculated using an RNN model
- Cruise MLP + CNN-1d evaluator: probability is calculated using a mix of MLP and CNN-1d models for the cruise scenario
- Junction MLP evaluator: probability is calculated using an MLP model for the junction scenario
- Junction Map evaluator: probability is calculated using a semantic map-based CNN model for the junction scenario. This evaluator was created for caution-level obstacles
- Social Interaction evaluator: this model is used for pedestrians, for short-term trajectory prediction. It uses social LSTM. This evaluator was created for caution-level obstacles
- Semantic LSTM evaluator: this evaluator is used in the new Caution Obstacle model to generate short-term trajectory points, which are calculated using CNN and LSTM. Vehicles and pedestrians use the same model, but with different parameters

# Where is the code

Please refer to [prediction evaluator](https://github.com/ApolloAuto/apollo/modules/prediction/evaluator).

# Code Reading

## Social interaction evaluator

1. The evaluator uses social LSTM to predict short-term trajectories for caution-level pedestrians. In the code, the evaluator is named pedestrian interaction evaluator.
2. Extract features from obstacles.
   ```cpp
   bool ExtractFeatures(const Obstacle* obstacle_ptr,
                        std::vector<double>* feature_values);
   ```
3. Use the social LSTM module to predict a short-term trajectory with the following steps:
   - Get the social embedding
   - Get the position embedding
   - Run a single LSTM step and update the hidden states
   - Get a predicted trajectory

## Semantic LSTM evaluator

1. Get and process the feature map by obstacle ID.
2. Build input features for torch.
3. Get the predicted trajectory, with different parameters for different types of obstacles.

## Junction map evaluator

1. This evaluator only cares about obstacles at intersections. Obstacles that are not close to any junction exit cannot be evaluated by this evaluator.
2. Taking the obstacle as the center and its orientation as the reference direction, the surrounding space is divided into 12 fan-shaped areas.
   ![Diagram](images/prediction_evaluator_fig_1.png)
3. Since each junction exit is associated with one of these 12 fan-shaped areas, the probability can be calculated by solving a fan-shaped-area classification problem.
4. Assign probabilities to all lane sequences.

## Junction MLP evaluator

1. Use an MLP model to solve the classification problem mentioned in the junction map evaluator.
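To make the 12 fan-shaped areas of the junction map evaluator more concrete, here is a minimal sketch that bins the direction from an obstacle to a junction exit into one of 12 equal 30° sectors measured from the obstacle's heading. The sector convention (counter-clockwise from the heading) and the sample coordinates are assumptions for illustration only, not the exact convention used in the Apollo code.

```cpp
#include <cmath>
#include <iostream>

// Return which of the 12 fan-shaped areas (each 2*pi/12 wide) the junction
// exit falls into, measured counter-clockwise from the obstacle heading.
int SectorIndex(double obstacle_heading, double exit_x, double exit_y,
                double obstacle_x, double obstacle_y) {
  const double kTwoPi = 2.0 * M_PI;
  double angle = std::atan2(exit_y - obstacle_y, exit_x - obstacle_x);
  double relative = angle - obstacle_heading;       // angle relative to heading
  relative = std::fmod(relative + kTwoPi, kTwoPi);  // normalize to [0, 2*pi)
  return static_cast<int>(relative / (kTwoPi / 12.0));  // 0 .. 11
}

int main() {
  // Exit located 10 m ahead and 3 m to the left of an obstacle heading along +x.
  std::cout << "sector: " << SectorIndex(0.0, 10.0, 3.0, 0.0, 0.0) << std::endl;
  return 0;
}
```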
apollo_public_repos/apollo/docs/07_Prediction/how_to_train_prediction_mlp_model_cn.md
## 训练预测MLP深度学习模型 ### 前提条件 训练MLP深度学习模式有2个前提条件: #### 下载并安装Anaconda * 请从官网下载并安装Anaconda [website](https://www.anaconda.com/download) #### 安装依赖库 * **安装 numpy**: `conda install numpy` * **安装 tensorflow**: `conda install tensorflow` * **安装 keras**: `conda install -c conda-forge keras` * **安装 h5py**: `conda install h5py` * **安装 protobuf**: `conda install -c conda-forge protobuf` * **安装 PyTorch**: `conda install -c pytorch pytorch` ### 训练模型 请按照以下步骤使用演示cyber record来训练MLP模型。为了方便起见,我们把`Apollo`作为本地Apollo的路径,例如,`/home/username/apollo`。 1. 创建存储数据文件夹 `mkdir APOLLO/data/prediction`, 如果它不存在的话。 1. 在APOLLO文件夹下,启动docker `bash docker/scripts/dev_start.sh`。 1. 在APOLLO文件夹下,进入docker `bash docker/scripts/dev_into.sh`。 1. 在docker中,`/apollo/`路径下, 运行 `bash apollo.sh build` 编译代码。 1. 在docker中,`/apollo/`路径下, 从`/apollo/data/prediction` 拷贝演示cyber record到数据文件夹下: `cp /apollo/docs/demo_guide/demo_3.5.record /apollo/data/prediction/`。 1. 在docker中,`/apollo/`路径下, 运行bash脚本进行特征抽取: `bash modules/tools/prediction/mlp_train/feature_extraction.sh /apollo/data/prediction/ apollo/data/prediction/`, 运行完了以后,特征数据文件会出现在 `/apollo/data/prediction/`路径下. 1. 退出docker, 根据 `APOLLO/modules/tools/prediction/mlp_train/cruiseMLP_train.py` 和 `APOLLO/modules/tools/prediction/mlp_train/junctionMLP_train.py` 训练模型。
apollo_public_repos/apollo/docs/07_Prediction/apollo1.5_prediction_module_study_notes_cn.md
# Prediction模块分析 ## 作用: 预测障碍物的运动轨迹,每条轨迹都有一个概率值。 ## 输入: * 车辆位置信息:/apollo/localization/pose([pb_msgs/LocalizationEstimate])。 * 障碍物信息:/apollo/perception/obstacles([pb_msgs/PerceptionObstacles])。 ## 输出: 障碍物的运动轨迹。/apollo/prediction[pb_msgs/PredictionObstacle]。 ## 节点图: ![prediction data flow](images/prediction_node_arch.bmp) ## 子模块: * 容器: 存储订阅话题中的数据。 * 评估器:对于任意一个障碍物,评估器预测路径和速度。一个评估器通过使用_prediction/data/_ 下的评估模型对每条路径给出一个概率值,实现评估。 * 预测器:预测器为障碍物预测通过,当前通道包含以下几种: * 车道序列:障碍物只能依据车道移动。 * 自由移动:障碍物自由移动。 * 区域引动:障碍物只能在一定的区域移动。 ## 源码架构: * main.cc: 启动/prediction节点。 * prediction.cc和prediction.h: * Name()函数:返回节点名字prediction。 * Init()函数: * 使用配置文件prediction_conf.pb.txt设置prediction_conf_,主要设置preditor的产生的通道类型。 * 使用配置文件adapter.conf设置adapter_conf_,设置节点话题类型。 * 初始化AdapterManager,定义nodehandle和话题。 * 初始化ContainerManager,每一个接受话题创建一个Container,用于接受话题数据。 * 初始化EvaluatorManager:注册一个Evaluator:MLP_EVALUATOR;根据prediction_conf.pb.txt配置MLP_EVALUATOR,似乎关注路上的车辆,设置vehicle_on_lane_evaluator_。 * 初始化PredictorManager:注册四个Predictor,并设置了vehicle_on_lane_predictor_,vehicle_off_lane_predictor_和pedestrian_predictor_。 * 检测localization和perception节点是否准备好。 * 设置localization和perception数据的回调函数OnLocalization和OnPerception。 * OnLocalization函数: * 获取障碍物容器obstacles_container。 * 获取位置pose_container。 * 将新到的位置消息存入pose_container。 * 更新障碍物信息。 * OnPerception函数: * 获取障碍物容器obstacles_container。 * 将新到的障碍物信息存入obstacles_container。 * 运行Evaluator。 * 运行Predictor。 * 跟新待发布数据prediction_obstacles header结构,发布消息。 ## 评估器Evaluator: * 创建一个新的NewEvaluator: * data/mlp_vehicle_model.bin:利用深度学习实现的评估器核心部分。 * feature.proto或lane_graph.proto文件的配置输出,不清楚作用。 * 在evaluator/vehicle/目录下,以Evaluator为基类实现一个新的评估器类NewEvaluator。并参考mlp_evaluator实现类。 * prediction_conf.pb.txt文件中指定所实现的新评估器类。 * evaluator_manager.h中修改默认使用的评估器类。 * evaluator_manager.cc中的Run()函数: * 获取障碍物容器container。 * 遍历所有障碍物,利用障碍物id和障碍物容器获取障碍物信息obstacle。 * 针对lane上的障碍物,调用Evaluate进行评估。 * mlp_evaluator.cc中的Evaluate()函数: * 以单个障碍物为参数。计算单个障碍物的feature,并计算其概率。 * 每个obstacle_ptr中包含多个lane_graph_ptr。 * 利用obstacle_ptr和lane_graph_ptr可计算出feature_values * 由feature_values计算概率值probability。 * 将概率值probability设置到lane_sequence_ptr。 * 进一步的分析需要理解feature和lane_sequence概念,需要进入算法,暂时评估器到这里。 ## 预测器Predictor: * 功能:预测障碍物的未来轨迹。 * 创建一个新的预测器NewPredictor: * 在predictor/下新建目录vehicle。 * 在vehicle目录下创建new_predictor.h和new_predictor.cc,主要是继承与Predictor类定义并实现子类NewPredictor。具体实现可以参考vehicle同级目录free_move等。 * 更新配置文件prediction_conf.pb.txt,添加新预测器类型。 * 更新manager,在文件predictor_manager.h中修改默认预测器类型。 * prediction_manager.cc中的run()函数: * 获取障碍物容器container。 * 设置预测障碍物的时间戳。 * 根据预测障碍物中的id和容器获取障碍物信息obstacle。 * 根据预测障碍物中的类型设置预测器的类型predictor。 * 传入障碍物信息,执行预测器predictor->Predict(obstacle)。 * 将障碍物的所有轨道配置到预测到的障碍物中。并更新时间戳。 * 预测器函数Predict()函数: * 根据障碍物信息获取feature。 * 由feature获取num_lane_sequence。 * 遍历num_lane_sequence,通过feature获取sequence。 * 由sequence获取curr_lane_id。 * 由curr_lane_id通过DrawLaneSequenceTrajectoryPoints()函数获取TrajectoryPoint。 * 由TrajectoryPoint通过GenerateTrajectory()函数获取trajectory。 * 设置该trajectory的probability,存入trajectories_向量。 * 和Evaluator类似,进一步分析需了解具体算法,暂时到这里。
apollo_public_repos/apollo/docs/07_Prediction/jointly_prediction_planning_evaluator_cn.md
# 预测规划交互评估器 # 简介 预测模块主要由四个子模块组成,分别是: Container(信息容器), Scenario(场景选择器), Evaluator(评估器) and Predictor(预测器)。 评估器用于预测障碍物的速度和路径信息,通过给出路径置信度或短预测时域的轨迹信息,供后续的预测器进一步处理。评估器使用的模型文件存放在prediction/data/路径下。 预测规划交互评估器是针对交互障碍物设计的评估器,主要解决交互场景下(会车,狭窄道路,路口)障碍物预测问题,使用Vectornet和LSTM,考虑主车轨迹信息进行障碍物轨迹预测。 ![Diagram](images/interaction_model_fig_1.png) # Where is the code 可以参考 [jointly prediction planning evaluator代码](https://github.com/ApolloAuto/apollo/tree/master/modules/prediction/evaluator/vehicle). # Code Reading ## Interaction filter Please refer [interaction filter代码](https://github.com/ApolloAuto/apollo/tree/master/modules/prediction/scenario/interaction_filter). 1. 交互障碍物的获取是通过交互障碍物过滤器得到的,交互障碍物过滤器基于一定的规则筛选出与主车可能发生交互关系的障碍物。 2. 这些障碍物将被添加上交互标签,用于触发交互预测模型。 ```cpp void AssignInteractiveTag(); ``` ## Model inference 1. 模型使用的编码器是Vectornet,因此在调用模型进行轨迹预测前,需要将障碍物和地图信息组合成编码器需求的格式。
apollo_public_repos/apollo/docs/07_Prediction/jointly_prediction_planning_evaluator.md
# Inter-TNT (Jointly VectorNet-TNT-Interaction) Evaluator The prediction module comprises 4 main functionalities: Container, Scenario, Evaluator and Predictor. An Evaluator predicts trajectories and speeds for surrounding obstacles of autonomous vehicle. An evaluator evaluates a path(lane sequence) with a probability by the given model stored in the directory `modules/prediction/data/`. In Apollo 7.0, a new model named Inter-TNT is introduced to generate short-term trajectories. This model applies VectorNet as encoder and TNT as decoder, and latest planning trajectory of autonomous vehicle is used to interact with surrounding obstacles. Compared with the prediction model based on semantic map released in Apollo 6.0, the performance is increased by more than 20% in terms of minADE and minFDE, and the inference time is reduced from 15 ms to 10 ms. ![Diagram](images/interaction_model_fig_1.png) # Where is the code Please refer [jointly prediction planning evaluator](https://github.com/ApolloAuto/apollo/tree/master/modules/prediction/evaluator/vehicle). # Code Reading ## Interaction filter Please refer [interaction filter](https://github.com/ApolloAuto/apollo/tree/master/modules/prediction/scenario/interaction_filter). 1. The interaction filter is a rule-based filter for selecting interactive obstacles. 2. Such interactive obstacles will be labeled. ```cpp void AssignInteractiveTag(); ``` # Network Architecture The network architecture of the proposed "Inter-TNT" is illustrated as follows. The entire network is composed of three modules: an vectorized encoder, a target-driven decoder, and an interaction module. The vectorized trajectories of obstacles and autonomous vehicle (AV), along with HD maps, are first fed into the vectorized encoder to extract features. The target-driven decoder takes the extracted features as input and generates multi-modal trajectories for each obstacle. The main contribution of the proposed network is introducing an interaction mechanism, which could measure the interaction between obstacles and autonomous vehicle by re-weighting confidences of multi-modal trajectories. ![Diagram](images/VectorNet-TNT-Interaction.png) ## Encoder Basically, the encoder is mainly using an [VectorNet](https://arxiv.org/abs/2005.04259). ### Representation The trajectories of AV and all obstacles are represented as polylines in the form of sequential coordinate points. For each polyline, it contains start point, end point, obstacle length and some other attributes of vector. All points are transformed to the AV coordinate with y-axis as the heading direction and (0, 0) as the position for AV at time 0. After that, map elements are extracted from HDMap files. As elements of lane/road/junction/crosswalk are depicted in points in HD map, they are conveniently processed as polylines. ### VectorNet The polyline features are first extracted from a subgraph network and further fed into a globalgraph network (GCN) to encode contextual information. ## Decoder Our decoder implementation mainly follows the [TNT](https://arxiv.org/abs/2008.08294) paper. There are three steps in TNT. For more details, please refer to the original paper. ### Target Prediction For each obstacle, N points around the AV are uniformly sampled and M points are selected as target points. These target points are considered to be the potential final points of the predicted trajectories. 
### Motion Estimation After selecting the potential target points, M trajectories are generated for each obstacle with its corresponding feature from encoder as input. ### Scoring and Selection Finally, a scoring and selection module is performed to generate likelihood scores of the M trajectories for each obstacle, and select a final set of trajectory predictions by likelihood scores. ## Interaction with Planning Trajectory After TNT decoder, K predicted trajectories for each obstacle are generated. In order to measure the interaction between AV and obstacles, we calculate the position and velocity differences between the latest planning trajectory and predicted obstacle trajectories. Note that we can also calculate a cost between the ground truth obstacle trajectory and AV planning trajectory, thus producing the true costs. That's how the loss is calculated in this step. # References 1. Gao, Jiyang, et al. "Vectornet: Encoding hd maps and agent dynamics from vectorized representation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. 2. Zhao, Hang, et al. "Tnt: Target-driven trajectory prediction." arXiv preprint arXiv:2008.08294 (2020). 3. Xu, Kecheng, et al. "Data driven prediction architecture for autonomous driving and its application on apollo platform." 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020.
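The re-weighting of multi-modal trajectory confidences by an interaction cost can be sketched as a simple posterior update, posterior ∝ prior · exp(−alpha · cost), which is the same form used by the interaction predictor described elsewhere in these docs. The prior confidences, costs and alpha below are illustrative numbers, not values from the released model.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Re-weight decoder confidences with an interaction cost against the AV plan,
// then renormalize so the confidences sum to 1.
std::vector<double> Reweight(const std::vector<double>& priors,
                             const std::vector<double>& costs, double alpha) {
  std::vector<double> posterior(priors.size());
  double sum = 0.0;
  for (size_t i = 0; i < priors.size(); ++i) {
    posterior[i] = priors[i] * std::exp(-alpha * costs[i]);
    sum += posterior[i];
  }
  for (double& p : posterior) p /= sum;
  return posterior;
}

int main() {
  std::vector<double> priors = {0.5, 0.3, 0.2};  // decoder confidences
  std::vector<double> costs = {2.0, 0.5, 1.0};   // interaction costs with the AV plan
  for (double p : Reweight(priors, costs, 1.0)) std::cout << p << " ";
  std::cout << std::endl;
  return 0;
}
```

A trajectory that conflicts strongly with the latest planning trajectory receives a high cost and therefore a reduced confidence after renormalization.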
apollo_public_repos/apollo/docs/07_Prediction/how_to_train_prediction_mlp_model.md
## How to train the MLP Deep Learning Model ### Prerequisites There are 2 prerequisites to training the MLP Deep Learning Model: #### Download and Install Anaconda * Please download and install Anaconda from its [website](https://www.anaconda.com/download) #### Install Dependencies Run the following commands to install the necessary dependencies: * **Install numpy**: `conda install numpy` * **Install tensorflow**: `conda install tensorflow` * **Install keras**: `conda install -c conda-forge keras` * **Install h5py**: `conda install h5py` * **Install protobuf**: `conda install -c conda-forge protobuf` * **Install PyTorch**: `conda install -c pytorch pytorch` ### Train the Model The following steps are to be followed in order to train the MLP model using the released demo data. For convenience, we denote `APOLLO` as the path of the local apollo repository, for example, `/home/username/apollo` 1. Create a folder to store offline prediction data using the command `mkdir APOLLO/data/prediction` if it does not exist 1. Start dev docker using `bash docker/scripts/dev_start.sh` under the apollo folder 1. Enter dev docker using `bash docker/scripts/dev_into.sh` under apollo folder 1. In docker, under `/apollo/`, run `bash apollo.sh build` to compile 1. In docker, under `/apollo/`, copy the demo record into `/apollo/data/prediction` by the command: `cp /apollo/docs/demo_guide/demo_3.5.record /apollo/data/prediction/` 1. In docker, under `/apollo/`, run the bash script for feature extraction: `bash modules/tools/prediction/mlp_train/feature_extraction.sh /apollo/data/prediction/ apollo/data/prediction/`, then the feature files will be generated in the folder `/apollo/data/prediction/`. 1. Exit docker, train the cruise model and junction model according to `APOLLO/modules/tools/prediction/mlp_train/cruiseMLP_train.py` and `APOLLO/modules/tools/prediction/mlp_train/junctionMLP_train.py`
apollo_public_repos/apollo/docs/07_Prediction/how_to_add_a_new_evaluator_in_prediction_module_cn.md
# 如何在预测模块中添加新评估器 ## 简介 评估器通过应用预训练的深度学习模型生成特征(来自障碍物和当前车辆的原始信息)以获得模型输出。 ## 添加评估器的步骤 按照下面的步骤添加名称为`NewEvaluator`的评估器。 1. 在proto中添加一个字段 2. 声明一个从`Evaluator`类继承的类`NewEvaluator` 3. 实现类`NewEvaluator` 4. 更新预测配置 5. 更新评估器管理 ### 声明一个从`Evaluator`类继承的类`NewEvaluator` `modules/prediction/evaluator/vehicle`目录下新建文件`new_evaluator.h`。声明如下: ```cpp #include "modules/prediction/evaluator/evaluator.h" namespace apollo { namespace prediction { class NewEvaluator : public Evaluator { public: NewEvaluator(); virtual ~NewEvaluator(); void Evaluate(Obstacle* obstacle_ptr) override; // Other useful functions and fields. }; } // namespace prediction } // namespace apollo ``` ### 实现类 `NewEvaluator` 在`new_evaluator.h`所在目录下新建文件`new_evaluator.cc`。实现如下: ```cpp #include "modules/prediction/evaluator/vehicle/new_evaluator.h" namespace apollo { namespace prediction { NewEvaluator::NewEvaluator() { // Implement } NewEvaluator::~NewEvaluator() { // Implement } NewEvaluator::Evaluate(Obstacle* obstacle_ptr)() { // Extract features // Compute new_output by applying pre-trained model } // Other functions } // namespace prediction } // namespace apollo ``` ### 在proto中添加新评估器 在`prediction_conf.proto`中添加新评估器类型: ```cpp enum EvaluatorType { MLP_EVALUATOR = 0; NEW_EVALUATOR = 1; } ``` ### 更新prediction_conf文件 在 `modules/prediction/conf/prediction_conf.pb.txt`中,按照如下方式更新字段`evaluator_type`: ``` obstacle_conf { obstacle_type: VEHICLE obstacle_status: ON_LANE evaluator_type: NEW_EVALUATOR predictor_type: NEW_PREDICTOR } ``` ### 更新评估器管理 按照如下方式更新`CreateEvluator( ... )` : ```cpp case ObstacleConf::NEW_EVALUATOR: { evaluator_ptr.reset(new NewEvaluator()); break; } ``` 按照如下方式更新`RegisterEvaluators()` : ```cpp RegisterEvaluator(ObstacleConf::NEW_EVALUATOR); ``` 完成上述步骤后,新评估器便创建成功了。 ## 添加新特性 如果你想添加新特性,请按照如下的步骤进行操作: ### 在proto中添加一个字段 假设新的评估结果名称是`new_output`且类型是`int32`。如果输出直接与障碍物相关,可以将它添加到`modules/prediction/proto/feature.proto`中,如下所示: ```cpp message Feature { // Other existing features optional int32 new_output = 1000; } ``` 如果输出与车道相关,请将其添加到`modules/prediction/proto/lane_graph.proto`中,如下所示: ```cpp message LaneSequence { // Other existing features optional int32 new_output = 1000; } ```
apollo_public_repos/apollo/docs/07_Prediction/Class_Architecture_Planning_cn.md
# Planning模块架构和概述 ## 数据输入和输出 ### 输出数据 Planning模块的输出数据类型定义在`planning.proto`,如下图所示: ![img](images/class_architecture_planning/image001.png) #### *planning.proto* 在proto数据的定义中,输出数据包括总时间、总长度和确切的路径信息,输出数据由控制单元解析执行,输出数据结构定义在`repeated apollo.common.TrajectoryPointtrajectory_point`。 `trajectory point`类继承自`path_point`类,并新增了speed、acceleration和timing属性。 定义在`pnc_point.proto`中的`trajectory_point`包含了路径的详细属性。 ![img](images/class_architecture_planning/image002.png) ![img](images/class_architecture_planning/image003.png) 除了路径信息,Planning模块输出了多种注释信息。主要的注释数据包括: - Estop - DecisionResult - 调试信息 `Estop`是标示了错误和异常的指令。例如,当自动驾驶的车辆碰到了障碍物或者无法遵守交通规则时将发送estop信号。`DecisionResult`主要用于展示模拟的输出结果以方便开发者更好的了解Planning模块的计算结果。更多详细的中间值结果会被保存并输出作为调试信息供后续的调试使用。 ## 输入数据 为了计算最终的输出路径,Planning模块需要统一的规划多个输入数据。Planning模块的输入数据包括: - Routing - 感知和预测 - 车辆状态和定位 - 高清地图 Routing定义了概念性问题“我想去哪儿”,消息定义在`routing.proto`文件中。`RoutingResponse`包含了`RoadSegment`,`RoadSegment`指明了车辆到达目的地应该遵循的路线。 ![img](images/class_architecture_planning/image004.png) 关于概念性问题“我周围有什么”的消息定义在`perception_obstacles.proto`和`traffic_light_detection.proto`中。`perception_obstacles.proto`定义了表示车辆周围的障碍物的数据,车辆周围障碍物的数据由感知模块提供。`traffic_light_detection`定义了信号灯状态的数据。除了已被感知的障碍物外,动态障碍物的路径预测对Planning模块也是非常重要的数据,因此`prediction.proto`封装了`perception_obstacle`消息来表示预测路径。请参考下述图片: ![img](images/class_architecture_planning/image005.png) 每个预测的路径都有其单独的可能性,而且每个动态障碍物可能有多个预测路径。 除了概念性问题“我想去哪儿”和“我周围有什么”,另外一个重要的概念性问题是“我在哪”。关于该问题的数据通过高清地图和定位模块获得。定位信息和车辆车架信息被封装在`VehicleState`消息中,该消息定义在`vehicle_state.proto`,参考下述图片: ![img](images/class_architecture_planning/image009.png) ## 代码结构和类层次 代码组织方式如下图所示:Planning模块的入口是`planning.cc`。在Planning模块内部,重要的类在下图中展示。 ![img](images/class_architecture_planning/image006.png) `ReferenceLineInfo`对`ReferenceLine`类进行了封装,为Planning模块提供了平滑的指令执行序列。 **Frame**包含了所有的数据依赖关系,例如包含了预测路径信息的障碍物,自动驾驶车辆的状态等。 **HD-Ma**p在Planning模块内作为封装了多个数据的库使用,提供不同特点的地图数据查询需求。 **EM Planne**r执行具体的Planning任务,继承自**Planner**类。Apollo2.0中的**EM Planner**类和之前发布的**RTK Planner**类都继承自Planner类。 ![img](images/class_architecture_planning/image007.png) 例如,在EM Planner执行的一次planning循环的内部,采用迭代执行的方法,tasks的三个类别交替执行。“**决策/优化**”类的关系在下述图片中展示: ![img](images/class_architecture_planning/image008.png) - **Deciders** 包括 traffic decider, path decider and speed decider. - **Path Optimizers** 为DP/QP path optimizers. - **Speed Optimizers** 为DP/QP speed optimizers. | **附注:** | | ---------------------------------------- | | DP表示动态规划(dynamic programming),QP表示二次规划(quadratic programming)。经过计算步骤后,最终的路径数据经过处理后传递到下一个节点模块进行路径的执行。 |
apollo_public_repos/apollo/docs/07_Prediction/how_to_add_a_new_predictor_in_prediction_module.md
# How to add a new Predictor in Prediction module

## Introduction

The Predictor generates the predicted trajectory for each obstacle. Here, let's assume we want to add a new predictor for our vehicle; for other types of obstacles the procedure is very similar. The steps are as follows:

1. Define a class that inherits `Predictor`
2. Implement the class `NewPredictor`
3. Add a new predictor type in proto `prediction_conf.proto`
4. Update prediction_conf
5. Update the Predictor manager

## Steps to add a new predictor

The following steps will add a Predictor `NewPredictor`.

### Define a class that inherits `Predictor`

Create a new file named `new_predictor.h` in the folder `modules/prediction/predictor/vehicle` and define it as follows:

```cpp
#include "modules/prediction/predictor/predictor.h"

namespace apollo {
namespace prediction {

class NewPredictor : public Predictor {
 public:
  void Predict(Obstacle* obstacle) override;
  // Other useful functions and fields.
};

}  // namespace prediction
}  // namespace apollo
```

### Implement the class `NewPredictor`

Create a new file named `new_predictor.cc` in the same folder as `new_predictor.h`. Implement it like this:

```cpp
#include "modules/prediction/predictor/vehicle/new_predictor.h"

namespace apollo {
namespace prediction {

void NewPredictor::Predict(Obstacle* obstacle) {
  // Get the results from evaluator
  // Generate the predicted trajectory
}

// Other functions

}  // namespace prediction
}  // namespace apollo
```

### Add a new predictor type in proto `prediction_conf.proto`

```
enum PredictorType {
  LANE_SEQUENCE_PREDICTOR = 0;
  FREE_MOVE_PREDICTOR = 1;
  REGIONAL_PREDICTOR = 2;
  MOVE_SEQUENCE_PREDICTOR = 3;
  NEW_PREDICTOR = 4;
}
```

### Update prediction_conf

In the file `modules/prediction/conf/prediction_conf.pb.txt`, update the field `predictor_type` like this:

```
obstacle_conf {
  obstacle_type: VEHICLE
  obstacle_status: ON_LANE
  evaluator_type: NEW_EVALUATOR
  predictor_type: NEW_PREDICTOR
}
```

### Update the Predictor manager

Update `CreatePredictor( ... )` like this:

```cpp
  case ObstacleConf::NEW_PREDICTOR: {
    predictor_ptr.reset(new NewPredictor());
    break;
  }
```

Update `RegisterPredictors()` like this:

```cpp
  RegisterPredictor(ObstacleConf::NEW_PREDICTOR);
```

After completing the steps above, you will have created a new Predictor.
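For orientation, here is a self-contained sketch of the kind of result a `Predict()` implementation assembles: a set of timestamped trajectory points plus a probability taken from the evaluator output. The structs are simplified stand-ins for the Apollo protos, and the constant-velocity motion model is only an illustrative choice.

```cpp
#include <iostream>
#include <vector>

struct TrajectoryPoint { double x, y, relative_time; };

struct PredictedTrajectory {
  double probability;                   // e.g. the lane-sequence probability from the evaluator
  std::vector<TrajectoryPoint> points;  // the generated trajectory
};

PredictedTrajectory PredictConstantVelocity(double x, double y, double vx,
                                            double vy, double probability,
                                            double horizon, double step) {
  PredictedTrajectory trajectory;
  trajectory.probability = probability;
  for (double t = 0.0; t <= horizon; t += step) {
    trajectory.points.push_back({x + vx * t, y + vy * t, t});
  }
  return trajectory;
}

int main() {
  PredictedTrajectory traj =
      PredictConstantVelocity(0.0, 0.0, 3.0, 0.5, 0.8, 5.0, 0.5);
  std::cout << traj.points.size() << " points, probability "
            << traj.probability << std::endl;
  return 0;
}
```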
apollo_public_repos/apollo/docs/14_Others/how_to_do_performance_profiling.md
# How to do performance profiling

The purpose of profiling a module is to use tools (here we use google-perftools) to examine the performance problems of a module. The Apollo development docker has all the profiling tools you need configured, so you can do all the following steps inside the Apollo development docker.

## Build Apollo in profiling mode

First, build Apollo in profiling mode:

```
bash apollo.sh clean
bash apollo.sh build_prof
```

## Play a rosbag

To profile a module, you need to provide its input data to make sure the majority of its code is exercised. You can start playing an information-rich rosbag with

```
rosbag play -l your_rosbag.bag
```

or, after Apollo 3.5, run

```
cyber_recorder play -f your_record.record
```

## Start module in profiling mode

Start your module with the following command, where `MODULE` is the name of the module you want to test:

```
CPUPROFILE=/tmp/${MODULE}.prof /path/to/module/bin/${MODULE} --flagfile=modules/${MODULE}/conf/${MODULE}.conf \
    --${MODULE}_test_mode \
    --${MODULE}_test_duration=60.0 \
    --log_dir=/apollo/data/log
```

or, after Apollo 3.5, use

```
CPUPROFILE=/tmp/${MODULE}.prof mainboard -d /apollo/modules/${MODULE}/dag/${MODULE}.dag --flagfile=modules/${MODULE}/conf/${MODULE}.conf \
    --${MODULE}_test_mode \
    --${MODULE}_test_duration=60.0 \
    --log_dir=/apollo/data/log
```

## The profiling mode gflags

Each module should have a pre-defined `${MODULE}_test_mode` and `${MODULE}_test_duration` gflag. These two flags tell the module to run for `${MODULE}_test_duration` amount of time when `${MODULE}_test_mode` is true. Most Apollo modules already have these two gflags. If they do not exist in the module you are interested in, you can define them yourself. You can refer to the gflags `planning_test_mode` and `planning_test_duration` to see how they are used.

## Create a PDF report

Finally, you can create a PDF report to view the profiling result:

```
google-pprof --pdf --lines /path/to/module/bin/${MODULE} /tmp/${MODULE}.prof > ${MODULE}_profiling.pdf
```

or, after Apollo 3.5, run

```
google-pprof --pdf --lines /path/to/module/component_lib/lib${MODULE}_component_lib.so /tmp/${MODULE}.prof > ${MODULE}_profiling.pdf
```

## Example

Here is an example of starting and profiling the planning module:

```
CPUPROFILE=/tmp/planning.prof /apollo/bazel-bin/modules/planning/planning \
    --flagfile=modules/planning/conf/planning.conf \
    --log_dir=/apollo/data/log \
    --planning_test_mode \
    --test_duration=65.0

google-pprof --pdf --lines /apollo/bazel-bin/modules/planning/planning /tmp/planning.prof > planning_prof.pdf
```

or, after Apollo 3.5, run

```
CPUPROFILE=/tmp/planning.prof mainboard -d /apollo/modules/planning/dag/planning.dag \
    --flagfile=modules/planning/conf/planning.conf \
    --log_dir=/apollo/data/log \
    --planning_test_mode \
    --test_duration=65.0

google-pprof --pdf --lines /apollo/bazel-bin/modules/planning/libplanning_component_lib.so /tmp/planning.prof > planning_prof.pdf
```
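If the module you are profiling does not yet define the test-mode gflags mentioned above, they can be declared with the standard gflags macros. The sketch below assumes a hypothetical module name `mymodule`; the flag names simply mirror the `planning_test_mode` / `planning_test_duration` pattern and are not existing Apollo flags.

```cpp
#include <iostream>
#include "gflags/gflags.h"

// Hypothetical flags for a module named "mymodule".
DEFINE_bool(mymodule_test_mode, false,
            "Run mymodule in test mode so it exits after a fixed duration.");
DEFINE_double(mymodule_test_duration, 60.0,
              "Seconds to run before exiting when test mode is enabled.");

int main(int argc, char* argv[]) {
  google::ParseCommandLineFlags(&argc, &argv, true);
  if (FLAGS_mymodule_test_mode) {
    // A real module would check elapsed time against this value in its main
    // loop and shut itself down once the duration is reached.
    std::cout << "test mode: run for " << FLAGS_mymodule_test_duration
              << " seconds" << std::endl;
  }
  return 0;
}
```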
apollo_public_repos/apollo/docs/14_Others/how_to_build_and_run_python_app.md
# How to Build, Test and Run your Python Application

Starting from Apollo 6.0, building and testing Python applications in Apollo is done using [Bazel](https://docs.bazel.build/versions/master/be/python.html) exclusively. We use Bazel Python rules to build, run, and test Python programs. This not only frees us from hand-crafting Protobuf dependencies and managing Python-related environment variables manually, but also helps with managing third-party Python module dependencies.

## Create the BUILD file

Generally you need a BUILD target for each Python file, which could be one of

- `py_library(name="lib_target", ...)`
- `py_binary(name="bin_target", ...)`
- `py_test(name="test_target", ...)`

### Example

```python
load("@rules_python//python:defs.bzl", "py_binary", "py_library", "py_test")

package(default_visibility = ["//visibility:public"])

py_binary(
    name = "foo",
    srcs = ["foo.py"],
    data = ["//path/to/a/data/dependency"],  # which we invoke at run time, maybe a cc_binary etc.
    deps = [
        ":foolib",
        "//path/to/a/py/library",
        ...
    ],
)

py_library(
    name = "foolib",
    srcs = ["foolib.py"],
    deps = [
        "//path/to/a/py/library",
        ...
    ],
)

py_test(
    name = "foo_test",
    srcs = ["foo_test.py"],
    deps = [
        ":foolib",
        "//path/to/a/py/library",
        ...
    ],
)
```

Above is a BUILD file template; you can also use this [BUILD](../../cyber/python/BUILD) file and this [BUILD](../../cyber/python/cyber_py3/examples/BUILD) file as examples.

## Build, Test and Run commands

1. To build any target:

   ```bash
   bazel build //path/to:target
   ```

1. To run a binary target:

   ```bash
   bazel run //path/to:target
   ```

1. To run a unit test target:

   ```bash
   bazel test //path/to:target_test
   ```
apollo_public_repos/apollo/docs/14_Others/how_to_understand_architecture_and_workflow.md
HOW TO UNDERSTAND ARCHITECTURE AND WORKFLOW =========================================== ## Fundamentals to understand AplloAuto - core Autonomous vehicles \(AV\) dynamics are controlled by the planning engine through the Controller Area Network bus \(CAN bus\). The software reads data from hardware registers and writes them back just like we would in Assembly language. For high-precision computation, the Localization, Perception and Planning modules function as independent input sources, while output sources work together though the Peer2Peer (P2P) protocol. P2P is supported by the RPC network application. ApolloAuto uses ROS1 as the underlying network which means that ApolloAuto borrows the Master-Nodes framework from ROS1. Since xmlRPC from ROS1 is really old \(compared to the recent brpc and [grpc](https://yiakwy.github.io/blog/2017/10/01/gRPC-C-CORE)\), Baidu has developed its own protobuf version of RPC. In Baidu ApolloAuto, three stages of development have already been described 1. Dreamviewer Offline Simulation Engine & ApolloAuto core software module - Get a first taste on how the algorithms work for a car - We don't need to touch a real car or hardware and start development immediately 2. Core modules Integration: - Localization - Perception \(support third parties' solution like Mobileye ES4 chip based camera for L2 development\) process point cloud data from `Lidar` and return segmented objects info on request - Planning: compute the fine-tuned path, car dynamic controlling info for path segments from route service - Routine: local implementation of finding path segments through `Navigator` interface; Using A\*star algorithm. 3. HD Maps. One of the key differences from L2 level AV development. L4 AV machine needs Hdmap. Since a robot \(an autonomous vehicle \) needs to rebuild 3d world \(please check OpenCV [SLAM]() chapter\) in its microcomputer, reference object coordinates play a great role in relocating AV both in the map and the real world. 4. Cloud-based Online Simulation Drive Scenario Engine and Data Center. - As a partner of Baidu, you will be granted docker credentials to commit new images and replay the algorithm you developed on the cloud. - Create and manage complex scenarios to simulate real-world driving experiences ## ROS underlying Subscription and Publication mechanism and ApolloAuto modules structure #### ROS underlying Subscription and Publication mechanism So how does ROS1 based system communicate with each other and how does ApolloAuto make use of it? ROS has [tutorials](http://wiki.ros.org/ROS/Tutorials), and I will explain it quickly before we analyze ApolloAuto modules structure. ROS is a software, currently exclusively well supported by Ubuntu series. It has master roscore. > printenv | grep ROS default ros master uri is "http://localhost:11311. One can create an independent binary by performing ros::init and start it by performing ros::spin \(some kind of Linux event loop\) using c++ or python. The binary behind the freshly created package is called ***ROS node***. The node will register its name and IP address in Master in case of other nodes querying. Nodes communicate with each by directly constructing a TCP connection. If a node wants to read data from others, we call it subscriber. The typical format is ``` ... bla bla bla ros::NodeHandle h; ros::Subscriber sub = h.subscribe("topic_name", q_size, cb) .. bla bla bla ``` If a node wants to provide data for subscribers to read, we call it a publisher. The typical format is ``` ... 
bla bla bla ros::NodeHandle h; ros::Publisher pub = h.advertise<generated_msg_format_cls>("topic_name", q_size) ... bla bla bla ``` cb here is a callback executed when Linux kernel IO is ready. With these signatures bearing in mind, we can quickly analyze ApolloAuto module structures before diving deep into core modules implementation. #### apolloauto modules structure I have conducted full research about it but I cannot show you all of them. ApolloAuto modules/common/ provide basic micros to control ros::spin for each module and /modules/common/adaptor contains the most information on how a topic is registered. Every module will be registered from the [point](https://github.com/yiakwy/apollo/blob/master/modules/common/adapters/adapter_manager.cc#L50) . By reading configuration file defined ${MODULE_NAME}/conf, we can get basic information about topics a module subscribe and publish. Each module starts by firing "Init" interface and register callbacks. If you want to step by step debug ApolloAuto in gdb, make sure you have added breakpoints in those back. This also demonstrate that if you don't like what implemented by Baidu, just override the callback. ## Data preprocessing and Extended Kalman Filter Kalman Filter is mathematical interactive methods to converge to real estimation without knowing the whole real\-time input sequence. No matter what kind of data you need to process, you can rely on Kalman Filter. Extended Kalman Filter is used for 3d rigid movements in a matrix format. It is not hard. I recommend you a series tutorial from United States F15 director [Michel van Biezen](https://www.youtube.com/watch?v=CaCcOwJPytQ). Since it is used in input data preprocessing, you might see it in HD Maps, perception, planning and so on so forth. ## Selected modules analysis #### HMI & Dreamviewer There is not too much about hmi interface and dreamviewer but it is a good place to visualize the topics parameters. HMI is a simply simple python application based on Flask. Instead of using HTTP, it uses web socket to query ROS modules application. If you have experience on asynchronous HTTP downloaders, it is easy to understand, that an HTTP connection is just a socket connection file descriptor which we have already write HTTP headers, methods into that buffer. Once hmi flask backend receives a command, it will execute a subprocess to execute the corresponding binary. Dreamviewer, in contrast, works a little bit like frontend app written in React, Webpack, and Threejs \( WebGL, see /dreamview/backend/simulation_world, /dreamview/frontend/src/render \), techniques. It subscribes to messages from ROS nodes and draws it a frame after a frame. #### Perception Initially, this module implemented logics exclusively for Lidar and Radar processes. It is registered by AdapterManager as a ros node functioning as an info fusion system to output observed Obstacles info. In the latest version of the codes, different hardware input handlers of ROS nodes are specified in /perception/obstacles/onboard and implemented in different parallel locations, which consists of *Lidar, Radar, Traffic lights and GPS*. 1. Lidar: - HD Maps: get transformation matrix convert point world coordinates to local coordinates and build map polygons - ROI filter: get ROI and perform Kalman Filter on input data - Segmentation: A U-Net based \(a lot of variants\) Caffe model will be loaded and perform forward computation based on data from HD Maps and ROI filtering results - Object Building: Lidar return points \(x, y, z\). 
Hence you need to group them into "Obstacles" \(vector or set\) - Obstacles Tracker: Baidu is using HM solver from Google. For a large bipartite graph, KM algorithms in Lagrange format is usually deployed since SGD is extremely simple for that. 2. Radar: - Similar to Lidar with raw\_obstacles info from sensors. - ROI filter: get ROI objects and perform Kalman Filter on input data - Objects Tracker 3. Probability Fusion\(New in Apollo 1.5!\): - As far as I can understand, fusion system in ApolloAuto - It is typically one of most important parts: collects all the info and makes a final combination of information from sensors on the motherboard for track lists and rule-based cognitive engine - The major process is the association, hence HM algorithms here is used again as a bipartite graph. - Tracklists are maintained along timestamps and each list will be updated based on a probabilistic rules engine
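The subscribe/advertise snippets quoted earlier can be combined into one minimal ROS1 node, shown below. The node name, topic names and message type are illustrative choices, not topics used by Apollo.

```cpp
#include "ros/ros.h"
#include "std_msgs/String.h"

// Callback fired by ros::spin() when a message arrives on the subscribed topic.
void OnMessage(const std_msgs::String::ConstPtr& msg) {
  ROS_INFO("received: %s", msg->data.c_str());
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "demo_node");  // register this node with the ROS master
  ros::NodeHandle h;

  // Subscriber: read data published by other nodes on "chatter".
  ros::Subscriber sub = h.subscribe("chatter", 10, OnMessage);

  // Publisher: provide data for other nodes to read on "chatter_echo".
  ros::Publisher pub = h.advertise<std_msgs::String>("chatter_echo", 10);
  std_msgs::String msg;
  msg.data = "hello from demo_node";
  pub.publish(msg);  // one publication; real nodes publish inside a loop or callback

  ros::spin();  // event loop: blocks and dispatches callbacks
  return 0;
}
```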
apollo_public_repos/apollo/docs/14_Others/apollo1.5_HDmap_study_notes_cn.md
# HD map 使用 ## 生成HD map: 由于apollo的hd map制作没有开放,所以目前hd map的生成是需要向百度提需求的。 如果想自己制作的话,apollo有提供建议如下: * 原始数据采集(视觉、激光雷达、GPS等)以及处理。 * 地图数据生成。从步骤一生成的数据通过算法或者人工的方式获取地图数据。 * 地图格式组织。将地图数据转换为Apollo的高精度地图格式(可以参照base_map.xml格式,其他的地图都可以从base_map.xml生成)。 * 注意:这三个步骤的工具均需要自己开发,如果只是小规模的简单测试,也可以参照base_map.xml格式手工组织数据。 ## 将HD map加入apollo1.5: 有两个方法,一个是通过添加一个新的目录,使用apollo系统;一个是替换原有目录下的地图文件。 ### 新加一个hd map: * 在/apollo/modules/map/data目录下,创建一个目录new_map。 * 将生成的hd map放入new_map中,如有配置文件,可以参考sunnyvale_office目录下的配置文件。 * 编译,执行bash apollo.sh build。 * 然后执行bash scripts/hmi.sh。 * 打开ip:8887,在选择地图的下拉框中就可以看到新加入的hd map了。 * 直接copy new_garage 重命名为new_garage_2测试的,测试通过。 * 注1:编译的时候,应该相当于将/apollo/modules/map/data/new_map注册到系统中去,以便启动hmi时,前端网页可以定位到/apollo/modules/map/data/new_map目录,进而加载其中的文件。也因此,可以有第二个方法加入hd map。 ### 利用现有的地图目录,加入地图: * 假设apollo1.5中,已经添加了new_map,此时只需要替换目录下的hd map所有的文件,这样不需要编译,即可使用新的hd map。
apollo_public_repos/apollo/docs/14_Others/how_to_add_an_external_dependency.md
# How to Add a New External Dependency The bazel files about third-party dependencies are all in the folder [third_party](../../third_party) which has a structure as following. ```shell third_party ├── absl │   ├── BUILD │   └── workspace.bzl ├── ACKNOWLEDGEMENT.txt ├── adolc │   ├── adolc.BUILD │   ├── BUILD │   └── workspace.bzl ├── ad_rss_lib │   ├── ad_rss_lib.BUILD │   ├── BUILD │   └── workspace.bzl ├── benchmark │   ├── benchmark.BUILD │   ├── BUILD │   └── workspace.bzl ├── boost │   ├── boost.BUILD │   ├── BUILD │   └── workspace.bzl ├── BUILD ...... ``` For each external denpendency library, there is a seperate folder that contains a `BUILD` file with the contents of: ```python package( default_visibility = ["//visibility:public"], ) ``` Further, libraries can be divided into several categories. ## 1. Library that supports bazel For example, - [Abseil](https://github.com/abseil/abseil-cpp) - [Google Test](https://github.com/google/googletest) Import it into workspace.bzl: ```python load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") def repo(): http_archive( name = "com_google_absl", sha256 = "f41868f7a938605c92936230081175d1eae87f6ea2c248f41077c8f88316f111", strip_prefix = "abseil-cpp-20200225.2", urls = [ "https://github.com/abseil/abseil-cpp/archive/20200225.2.tar.gz", ], ) ``` ## 2. Library with handcrafted BUILD file It's pretty common to do so. But it needs very solid knowledge with bazel. [workspace.bzl](../../third_party/yaml_cpp/workspace.bzl): ```python load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") def clean_dep(dep): return str(Label(dep)) def repo(): http_archive( name = "com_github_jbeder_yaml_cpp", build_file = clean_dep("//third_party/yaml_cpp:yaml_cpp.BUILD"), sha256 = "77ea1b90b3718aa0c324207cb29418f5bced2354c2e483a9523d98c3460af1ed", strip_prefix = "yaml-cpp-yaml-cpp-0.6.3", urls = [ "https://github.com/jbeder/yaml-cpp/archive/yaml-cpp-0.6.3.tar.gz", ], ) ``` [yaml.BUILD](../../third_party/yaml_cpp/yaml.BUILD): ```python load("@rules_cc//cc:defs.bzl", "cc_library") cc_library( name = "yaml-cpp", srcs = glob([ "src/*.cpp", ]), hdrs = glob([ "include/yaml-cpp/*.h", "src/*.h", ]), includes = [ "include", "src", ], visibility = ["//visibility:public"], ) ``` ## 3. Library which is pre-installed into the operating system It's NOT recommended, as it breaks the rule of a self-contained bazel WORKSPACE. However, some libraries are very complicated to build with bazel, while the operating system, such as Ubuntu, provides easy installation. Please do raise a discussion before doing so. Then we can add it to the docker image. For example, - [Poco](https://github.com/pocoproject/poco) [workspace.bzl](../../third_party/poco/workspace.bzl): ```python def clean_dep(dep): return str(Label(dep)) def repo(): native.new_local_repository( name = "poco", build_file = clean_dep("//third_party/poco:poco.BUILD"), path = "/opt/apollo/sysroot/include", ) ``` [poco.BUILD](../../third_party/poco/poco.BUILD): ```python load("@rules_cc//cc:defs.bzl", "cc_library") package(default_visibility = ["//visibility:public"]) licenses(["notice"]) cc_library( name = "PocoFoundation", includes = ["."], linkopts = [ "-L/opt/apollo/sysroot/lib", "-lPocoFoundation", ], ) ``` Note that in such case, you should include the headers like `#include <Poco/SharedLibrary.h>` instead of `#include "Poco/SharedLibrary.h"` as they are in the system path. 
## Note For all of the above types of external dependencies, we also need to add them into [tools/workspace.bzl](../../tools/workspace.bzl) ## References For a detailed description on adding a dependency with Bazel, refer to the following: - [Workspace Rules](https://bazel.build/versions/master/docs/be/workspace.html) - [Working with external dependencies](https://docs.bazel.build/versions/master/external.html).
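To illustrate the note about system-path headers, here is a small consumer of the pre-installed Poco library using the angle-bracket include style. It assumes the consuming `cc_binary` target lists the `@poco//:PocoFoundation` target defined above in its `deps` (the label is an assumption based on the workspace name used in this example), and `Poco::SharedLibrary::suffix()` is used only as a convenient call to show that the header resolves from the system path.

```cpp
#include <iostream>

// Angle brackets: the header lives under /opt/apollo/sysroot/include, which
// the ":PocoFoundation" target exposes via its include/link options.
#include <Poco/SharedLibrary.h>

int main() {
  // Print the platform shared-library suffix, e.g. ".so" on Linux.
  std::cout << "shared library suffix: " << Poco::SharedLibrary::suffix()
            << std::endl;
  return 0;
}
```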
apollo_public_repos/apollo/docs/14_Others/how_to_use_apollo_2.5_navigation_mode.md
# How to use Apollo 2.5 navigation mode ### This article is translated by the Apollo community team. Apollo is well received and highly commended by developers in the field of autonomous driving for its well-designed architecture, full functionality, robust open-source ecosystem, and standardized coding style. However, previous versions of Apollo had their perception, prediction, routing, and planning modules all heavily reliant on HDMaps, for which development was cumbersome and opaque. For many developers, this posed an insurmountable barrier. Given the inaccessibility of HDMaps, developers could only play demo data bag on Apollo's simulation tools, rather than deploy the Apollo system on vehicles for road testing. This greatly undermined the usage scenarios of Apollo and hampered the development and growth of the Apollo community. Obviously, the Apollo team had been aware of this problem. With months of hard work, they released a new navigation mode in Apollo 2.5 based on a relative map. Leveraging navigation mode, developers could easily deploy Apollo for road testing on a real vehicle. Relative map is the newest feature to be introduced in Apollo 2.5. From the architectural level, the relative map module is the middle layer linking the HDMap to the Perception module and the Planning module as seen in the image below. The relative map module generates real-time maps based on the vehicle’s coordinate system (the format is in the same format as HDMaps). The module also outputs reference lines for the Planning module to use. From the angle of developers, a navigation mode based on relative maps enables developers to implement real-vehicle road tests. As a result, barriers to development have been significantly reduced. ![Software OverView](../demo_guide/images/Software_Overview.png) The basic idea behind the navigation mode is: * Record the driving path of a manually driven vehicle on a desired path * Use Apollo tools to process the original path and obtain a smoothed out path (navigation line). This path is then used to * Replace the global route given by the routing module * Serve as the reference line for the planning modulefor generating the relative map. * In addition, the path can also be used in combination with the HDMap to replace the lane reference line in the HDMap (by default, the HDMap uses the lane centerline as the reference. However, this method may not suit certain circumstances where using the vehicle's actual navigation line instead could be a more effective solution). * A driver drives the vehicle to the starting point of a desired path, then selects the navigation mode and enables relevant modules in Dreamview. After the above configuration, the vehicle needs to be switched to autonomous driving status and run in this status. * While travelling in the autonomous mode, the perception module’s camera will dynamically detect obstacles and road boundaries, while the map module’s relative map sub-module generates a relative map in real time (using a relative coordinate system with the current position of the vehicle as the origin), based on the recorded path (navigation line) and the road boundaries. With the relative map created by the map module and obstacle information created by the perception module, the planning module will dynamically output a local driving path to the control module for execution. * At present, the navigation mode only supports single-lane driving. 
It can perform tasks such as acceleration and deceleration, car following, slowing down and stopping before obstacles, or nudge obstacles within the lane width. Subsequent versions will see further improvements to support multi-lane driving and traffic lights/signs detection. This article fully explains the build of Apollo 2.5, navigation line data collection and production, front-end compilation and configuration of Dreamview, and navigation mode usage, etc. Hopefully this will bring convenience to developers when properly using Apollo 2.5. ## 1. Building the Apollo 2.5 environment First, download the Apollo 2.5 source code from GitHub website. This can be done by either using git command or getting the compressed package directly from the web page. There are two options to build Apollo after downloading the source code to an appropriate directory: 1. in Visual Studio Code (recommended); 2. by using the command line. Of course, the common prerequisite is that Docker has already been successfully installed on your computer. You can use the script file [`install_docker.sh`](../../docker/scripts/install_docker.sh) to install Docker firstly. ### 1.1 Build with the Visual Studio Code Open Visual Studio Code and execute menu command `File -> Open Folder`. In the pop-up dialog, select a Apollo project source folder and click `OK`, as shown in the following figure: ![img](images/navigation_mode/open_directory_en.png) ![img](images/navigation_mode/choose_apollo_directory_en.png) Next, execute menu command `Tasks -> Run Build Task` or directly press `Ctrl + Shift + B` (shortcut keys which are the same as in Visual Studio and QT) to build a project. Docker will be launched when compiling if it has not yet been started. A superuser password needs to be entered in the terminal window at the bottom. After the command is executed, a display of `Terminal will be reused by tasks, press any key to close it.` in the terminal window at the bottom indicates that the build is successful. Keep good internet connection during the whole process, otherwise the dependencies cannot be downloaded. You may encounter some problems during the build. Solutions can be found in [this blog](https://blog.csdn.net/davidhopper/article/details/79349927) post and the [Help Doc](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md) on GitHub. ![img](images/navigation_mode/build_successfully_en.png) ### 1.2 Build in a terminal Press `Ctrl + Alt + T` to open a terminal and enter the following command to launch Docker: ``` bash cd your_apollo_project_root_dir # if you access from mainland China, it’s better to add “-C” option, visit mirror servers in mainland China will enable the highest download speed bash docker/scripts/dev_start.sh -C ``` Enter the following command to enter Docker: ``` bash bash docker/scripts/dev_into.sh ``` In Docker, execute the following command to build the Apollo project: ``` bash bash apollo.sh build ``` The steps are shown in the following figure: ![img](images/navigation_mode/build_with_command.png) ### 1.3 Change the UTM area ID in `Localization` module The default ID of the localization module in Apollo is the UTM coordinates of the US west coast. If you are in China, this ID must be changed. 
Outside of Docker, using vi or another text editor, open `[your_apollo_root_dir]/modules/localization/conf/localization.conf`, and change:

``` bash
--local_utm_zone_id=10
```

to the following (we use the UTM zone ID of the Changsha area; for UTM sub-zones in China, please refer to [this page](http://www.360doc.com/content/14/0729/10/3046928_397828751.shtml)):

``` bash
--local_utm_zone_id=49
```

`Note:` If the zone ID was not changed before recording data, it must not be changed when playing back that data during offline testing. Changing the ID after recording will make the navigation line locate incorrectly!

### 1.4 Configuring the UTM zone ID for Dreamview

Open `[your_apollo_root_dir]/modules/common/data/global_flagfile.txt` and add this line at the bottom (again the UTM zone ID of the Changsha area; for UTM sub-zones in China, please refer to [this page](http://www.360doc.com/content/14/0729/10/3046928_397828751.shtml)):

```
--local_utm_zone_id=49
```

## 2. Collect navigation line raw data

Copy the built Apollo project to the in-car IPC, enter Docker (follow the steps in 1.2), and execute the following command to launch Dreamview:

``` bash
bash scripts/bootstrap.sh
```

Open [http://localhost:8888](http://localhost:8888) in a Chrome or Firefox browser (do not use a proxy) to enter the Dreamview interface:

![img](images/navigation_mode/dreamview_interface.png)

* The driver drives the vehicle to the starting location of the road test and parks there.
* The operator clicks the `Module Controller` button in the toolbar on the left side of the Dreamview interface. In the `Module Controller` page, select `GPS`, `Localization`, and `Record Bag`. Note: if the recorded data bag will also be used for offline testing, select `CAN Bus` as well.
* The driver starts the vehicle and drives to the end location along the planned route.
* The operator unselects the `Record Bag` option in the Dreamview interface. A directory such as `2018-04-01-09-58-00` will be generated in the `/apollo/data/bag` directory (this is a directory inside Docker; on the host it corresponds to `[your_apollo_root_dir]/data/bag`). The data bag (e.g. `2018-04-01-09-58-00.bag`) is kept there. Take note of its path and filename, as it will be needed later.

`Note:` the default recording time per bag is 1 minute and the default size per bag is 2048 MB; both can be changed in `/apollo/scripts/record_bag.sh`.

For convenience, the following steps in this article assume that `2018-04-01-09-58-00.bag` is stored directly in the `/apollo/data/bag` directory.

## 3. Generation of navigation lines

The navigation line can be created either on the in-car IPC or on another computer. In both situations, we assume that we are already inside Docker (step 1.2), that the data bag has been placed under the `/apollo/data/bag` directory, and that the file name is `2018-04-01-09-58-00.bag` (which is probably not the name of the file on your computer; we use it only as an example).
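Before extracting the navigation line in section 3.1, it can be helpful to confirm that the recorded bag actually contains the channels needed for offline work (GPS, localization and, for offline playback, CAN bus). The minimal sketch below uses the standard `rosbag` Python API available inside the Apollo Docker; the topic names in `expected` are assumptions that may need adjusting to your recording configuration.

```python
# Sketch: list the topics in a recorded bag and flag missing ones.
import rosbag

BAG_FILE = "/apollo/data/bag/2018-04-01-09-58-00.bag"
# Topic names assumed for illustration; adjust to your configuration.
expected = [
    "/apollo/sensor/gnss/odometry",
    "/apollo/localization/pose",
    "/apollo/canbus/chassis",
]

with rosbag.Bag(BAG_FILE) as bag:
    info = bag.get_type_and_topic_info()
    for topic, meta in sorted(info.topics.items()):
        print("%-40s %6d msgs  (%s)" % (topic, meta.message_count, meta.msg_type))
    missing = [t for t in expected if t not in info.topics]
    if missing:
        print("Warning: bag is missing expected topics: %s" % ", ".join(missing))
```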
### 3.1 Extract raw data from the data bag

In Docker, enter the following commands to extract the raw data from the data bag:

``` bash
cd /apollo/modules/tools/navigator
python extractor.py /apollo/data/bag/2018-04-01-09-58-00.bag
```

A raw data file `path_2018-04-01-09-58-00.bag.txt` will be generated in the current directory (assuming we are in `/apollo/modules/tools/navigator`).

To check the data, enter the following command:

``` bash
python viewer_raw.py ./path_2018-04-01-09-58-00.bag.txt
```

A figure like the image below will be displayed:

![img](images/navigation_mode/view_raw_data.png)

### 3.2 Smooth the raw data

If the recording drive was bumpy, the raw path data will not be smooth enough and must be smoothed. In Docker, run:

``` bash
bash smooth.sh ./path_2018-04-01-09-58-00.bag.txt 200
```

Note: `200` is the smoothing length, which is usually `150-200`. If this process fails, try adjusting this argument and smoothing the data again.

To verify the smoothed result, use the following command:

``` bash
python viewer_smooth.py ./path_2018-04-01-09-58-00.bag.txt ./path_2018-04-01-09-58-00.bag.txt.smoothed
```

The first argument `./path_2018-04-01-09-58-00.bag.txt` is the raw data and the second argument `./path_2018-04-01-09-58-00.bag.txt.smoothed` is the smoothed result. A figure like the one below will be displayed:

![img](images/navigation_mode/view_smoothing_results.png)

## 4. Dreamview front-end compilation and configuration

The Dreamview front end uses Baidu Map by default. It can be changed to Google Maps by re-compiling the front end, as described in the sub-sections below (Note: if you wish to keep the default map, please skip the sub-sections below):

### 4.1 Change the navigation map setting

Open the file `[your_apollo_root_dir]/modules/dreamview/frontend/src/store/config/parameters.yml` and change the map settings to meet your needs:

``` bash
navigation:
  # possible options: BaiduMap or GoogleMap
  map: "BaiduMap"
  # Google Map API: "https://maps.google.com/maps/api/js"
  # Baidu Map API: "https://api.map.baidu.com/api?v=3.0&ak=0kKZnWWhXEPfzIkklmzAa3dZ&callback=initMap"
  mapAPiUrl: "https://api.map.baidu.com/api?v=3.0&ak=0kKZnWWhXEPfzIkklmzAa3dZ&callback=initMap"
```

### 4.2 Re-compile the Dreamview front end

As in step 1.2, enter Docker and execute the following commands to compile the Dreamview front end:

``` bash
# Install the Dreamview front-end dependency packages. Note: this only needs to be executed once, not every time.
cd /apollo/modules/dreamview/frontend/
yarn install
# Compile the Dreamview front end
cd /apollo
bash apollo.sh build_fe
```

You might encounter an error like the one below during the process:

```
ERROR in ../~/css-loader!../~/sass-loader/lib/loader.js?{"includePaths":["./node_modules"]}!./styles/main.scss
Module build failed: Error: ENOENT: no such file or directory, scandir '/apollo/modules/dreamview/frontend/node_modules/node-sass/vendor'
... (the error continues, but the lines above are enough to identify it)
```

This is caused by inconsistent built-in dependency packages and can be resolved by executing the following commands in Docker (`Note:` keep your internet connection stable, otherwise the dependency packages cannot be downloaded again):

``` bash
cd /apollo/modules/dreamview/frontend/
rm -rf node_modules
yarn install
cd /apollo
bash apollo.sh build_fe
```
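Before moving on to section 5, where the smoothed navigation line from section 3.2 is sent to the vehicle, a quick sanity check of the path geometry can catch extraction or smoothing problems early (for example, a path that is far too short or that contains large jumps). The sketch below is a hypothetical helper, not part of the Apollo tool set: it assumes the first two numeric fields of each line are x and y coordinates, so adapt the parsing if your file layout differs.

```python
# Hypothetical helper (not an Apollo tool): basic sanity checks on a navigation
# line file. Assumes the first two numeric fields per line are x and y.
import math
import sys

def load_xy(path):
    points = []
    with open(path) as f:
        for line in f:
            fields = line.replace(",", " ").split()
            if len(fields) >= 2:
                try:
                    points.append((float(fields[0]), float(fields[1])))
                except ValueError:
                    continue  # skip headers or non-numeric lines
    return points

def summarize(points):
    length = 0.0
    max_step = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        length += step
        max_step = max(max_step, step)
    print("points: %d, total length: %.1f m, largest gap between points: %.2f m"
          % (len(points), length, max_step))

if __name__ == "__main__":
    summarize(load_xy(sys.argv[1]))
```

Running it as `python check_path.py ./path_2018-04-01-09-58-00.bag.txt.smoothed` should report a plausible total length; a very large gap between consecutive points usually indicates a recording or smoothing problem.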
## 5. Usage of the navigation mode

### 5.1 Open Dreamview and switch to the navigation mode

Enter Docker and start Dreamview with the following commands:

``` bash
cd your_apollo_project_root_dir
# If Docker has not been started yet, start it first; otherwise skip this step
bash docker/scripts/dev_start.sh -C
# Enter Docker
bash docker/scripts/dev_into.sh
# Start Dreamview and the monitoring process
bash scripts/bootstrap.sh
```

For offline mock tests, loop-play the data bag recorded in step 2, `/apollo/data/bag/2018-04-01-09-58-00.bag` (the data recorded on my device). Ignore this step for real vehicle commissioning.

``` bash
# For offline mock tests, loop the data bag recorded in step 2. Ignore this step for real vehicle commissioning.
rosbag play -l /apollo/data/bag/2018-04-01-09-58-00.bag
```

Open [http://localhost:8888](http://localhost:8888) in a browser (do NOT use a proxy) to enter the Dreamview interface, click the dropdown box in the upper right corner, and select the `Navigation` mode, as shown in the screenshot below:

![img](images/navigation_mode/enable_navigation_mode.png)

### 5.2 Enable the relevant modules in the navigation mode

Click the `Module Controller` button in the toolbar on the left side of the Dreamview interface to enter the module controller page.

For offline mock tests, select `Relative Map`, `Navi Planning`, and other modules as needed, as shown in the screenshot below (the module shown with blank text is the Mobileye module, which is visible only after the related hardware has been installed and configured):

![img](images/navigation_mode/test_in_navigation_mode.png)

For real vehicle commissioning, all modules except `Record Bag`, `Mobileye` (shown as blank text if the Mobileye hardware has not been installed) and `Third Party Perception` should be activated, as displayed in the next screenshot:

![img](images/navigation_mode/drive_car_in_navigation_mode.png)

### 5.3 Send the navigation line data

In Docker, execute the following commands to send the navigation line data produced in step 3:

``` bash
cd /apollo/modules/tools/navigator
python navigator.py ./path_2018-04-01-09-58-00.bag.txt.smoothed
```

The following screenshot shows the interface after Dreamview receives the navigation line data during offline mock testing. The Baidu Map view appears in the upper left corner; the navigation line is shown as a red line in Baidu Map and as white lane lines in the main view.

![img](images/navigation_mode/navigation_mode_with_reference_line_test.png)

The next screenshot shows the interface after Dreamview receives the navigation line data during real vehicle commissioning. Again the Baidu Map view appears in the upper left corner; the navigation line is shown as a red line in Baidu Map and as yellow lane lines in the main view.
![img](images/navigation_mode/navigation_mode_with_reference_line_car.png)

A few tips to keep in mind:

* If the navigation line is not displayed correctly in the Dreamview interface, the possible reasons are:
  * The navigation line data was not sent correctly. Execute the sending command again.
  * The browser cache is inconsistent. Press `Ctrl + R` or `F5` to reload the page, or clear the browser cache.
  * The Dreamview back-end service is not running correctly. Restart it in Docker with:

``` bash
# Stop Dreamview and the monitoring process
bash scripts/bootstrap.sh stop
# Restart Dreamview and the monitoring process
bash scripts/bootstrap.sh
```

* Every time the vehicle returns to the starting point, the navigation line data must be sent again, for both offline mock tests and real vehicle commissioning.
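Since the navigation line has to be re-sent every time the vehicle returns to the start point, a small wrapper around the command from section 5.3 can reduce operator error. The sketch below simply re-runs `navigator.py` on demand; the directory and file name are the example values used throughout this article and should be adjusted to your setup.

```python
# Sketch: re-send the navigation line on demand by wrapping the command from section 5.3.
import subprocess

NAVIGATOR_DIR = "/apollo/modules/tools/navigator"
SMOOTHED_PATH = "./path_2018-04-01-09-58-00.bag.txt.smoothed"  # example file name

try:
    read_line = raw_input   # Python 2, matching the Apollo 2.5 tools
except NameError:
    read_line = input       # Python 3

while True:
    answer = read_line("Press Enter to (re)send the navigation line, or 'q' to quit: ")
    if answer.strip().lower() == "q":
        break
    subprocess.call(["python", "navigator.py", SMOOTHED_PATH], cwd=NAVIGATOR_DIR)
```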
apollo_public_repos/apollo/docs/14_Others/apollo1.5_adapter_arch_study_notes_cn.md
# Analysis of the adapter design architecture

## Code analysis

Main files: adapter_manager.cc, adapter_manager.h.

### Analysis of the AdapterManager class

* `Init(AdapterManagerConfig &configs)`: configures the ROS node according to the given `AdapterManagerConfig`, including creating the node handle and creating the subscribed or published topics.
* `Init(const std::string &adapter_config_filename)`: configures the ROS node from the given configuration file path by calling `Init(AdapterManagerConfig &configs)`.
* `Initialized()`: returns whether the ROS node handle has been initialized; the flag is set in the two functions above.
* `CreateTimer()`: creates a periodic timer with a class member function as the timeout callback; it is used in the RTK localization code.
* `node_handle_`: a pointer to the node handle; all subsequent topic publishing is based on it.
* `Enable##name()`: creates the node handle and topic; these are the functions invoked by `Init`.
* `Add##name##Callback`: adds a callback function, where `name` is replaced by the corresponding module name.
* `observers_` variable and `Observe()` function: // TODO

### Using the AdapterManager class

* The adapter can be used to create a node handle and its topics, configurable through a configuration file.
* To use it, simply create the corresponding configuration file (see `/apollo/modules/perception/conf/adapter.conf` and the corresponding BUILD file),
* and call the interface (see the `Init()` function in `/apollo/modules/perception/perception.cc`).
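The C++ `AdapterManager` described above is essentially a config-driven registry of publishers, subscribers and callbacks. The plain-Python sketch below (illustrative only, not Apollo code) mimics that idea with dictionaries, so the roles of `Init`, `Enable...`, `Add...Callback` and `Observe` are easier to see.

```python
# Illustrative sketch of the AdapterManager idea (not Apollo code):
# a config-driven registry of topics and callbacks.
class AdapterManagerSketch(object):
    def __init__(self):
        self._callbacks = {}   # adapter name -> list of callbacks
        self._latest = {}      # adapter name -> most recent message

    def init(self, config):
        """config: list of (name, mode) tuples, e.g. ("Chassis", "RECEIVE_ONLY")."""
        for name, mode in config:
            self.enable(name, mode)

    def enable(self, name, mode):
        # In Apollo this is where the ROS publisher/subscriber would be created.
        self._callbacks.setdefault(name, [])
        print("enabled adapter %s in mode %s" % (name, mode))

    def add_callback(self, name, callback):
        self._callbacks[name].append(callback)

    def on_message(self, name, message):
        # Called when a message arrives on the adapter's topic.
        self._latest[name] = message
        for callback in self._callbacks.get(name, []):
            callback(message)

    def observe(self, name):
        """Return the most recently received message, in the spirit of Observe()."""
        return self._latest.get(name)

# Example usage
def on_chassis(msg):
    print("chassis speed: %s" % msg)

mgr = AdapterManagerSketch()
mgr.init([("Chassis", "RECEIVE_ONLY"), ("Localization", "RECEIVE_ONLY")])
mgr.add_callback("Chassis", on_chassis)
mgr.on_message("Chassis", 3.2)
```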
apollo_public_repos/apollo/docs/14_Others/apollo1.5_routing_module_study_notes_cn.md
# Analysis of the routing module

## Function

Given a request, output the navigation information between two points.

## Input

* The user's start and end positions: /apollo/routing_request [pb_msgs/RoutingRequest].
* The HD map.
* /apollo/monitor [pb_msgs/MonitorMessage].

## Output

* The response to the request: /apollo/routing_response [pb_msgs/RoutingResponse], which contains the final routing information.
* /apollo/monitor [pb_msgs/MonitorMessage].

## Node I/O

![routing data flow](images/routing_node_arch.bmp)

## Routing source code

### routing.cc and routing.h

* `Name()`: returns the node name.
* `Routing()`: constructor, initializes `monitor_`.
* `Init()`: node initialization function.
  * Obtains the routing map file `routing_map_file`.
  * Creates the navigator `navigator_ptr_`.
  * Reads the routing configuration file `modules/routing/conf/routing.pb.txt` into `routing_conf_`.
  * Creates the node handle and the corresponding topics according to the adapter configuration file `modules/routing/conf/adapter.conf`.
  * Registers the callback `OnRouting_Request` for the routing request topic. Why not simply use a ROS service here? // TODO
* `OnRouting_Request()`: handles a request.
  * Takes the routing request as input.
  * Calls `SearchRoute()`, which searches for a route according to the routing_request and fills in the routing_response.
  * Publishes the routing_response.

### navigation.cc and navigation.h

* `SearchRoute()`:
  * Prints the information of the request command.
  * Checks whether the navigator is ready (it is opened in the constructor).
  * Through the initialization function, converts the points in the request into the graph format and stores them in `way_nodes` and `way_s`.
  * Computes the route nodes `result_nodes` from `way_nodes` and `way_s` using the A* algorithm.
  * Merges the route nodes `result_nodes` into the response structure.
* `SearchRouteByStrategy()`: mainly uses the A* algorithm to find a global path, as illustrated by the sketch after this list.
* `GetWayNodes()`: converts the request into nodes of the topological graph `graph_`.
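`SearchRouteByStrategy()` boils down to an A* search over the topological graph. The sketch below shows a generic A* over an adjacency-list graph; it is illustrative only, since the real implementation works on lane-level TopoNode/TopoEdge structures with road-length-based costs.

```python
# Generic A* search sketch (illustrative; not the Apollo routing implementation).
import heapq
import itertools

def a_star(graph, start, goal, heuristic):
    """graph: dict node -> list of (neighbor, edge_cost); heuristic(node, goal) -> float."""
    counter = itertools.count()          # tie-breaker so the heap never compares nodes
    open_set = [(heuristic(start, goal), next(counter), 0.0, start, None)]
    came_from = {}
    best_cost = {start: 0.0}
    while open_set:
        _, _, cost, node, parent = heapq.heappop(open_set)
        if node in came_from:            # already expanded with an equal or better cost
            continue
        came_from[node] = parent
        if node == goal:                 # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return list(reversed(path))
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(open_set, (new_cost + heuristic(neighbor, goal),
                                          next(counter), new_cost, neighbor, node))
    return None

# Tiny example: lane nodes A->B->D and A->C->D
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 2.0)], "C": [("D", 1.0)], "D": []}
print(a_star(graph, "A", "D", lambda n, g: 0.0))  # ['A', 'B', 'D']
```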
apollo_public_repos/apollo/docs/14_Others/how_to_use_apollo_2.5_navigation_mode_cn.md
# Apollo 2.5版导航模式的使用方法 `Apollo`项目以其优异的系统架构、完整的模块功能、良好的开源生态及规范的代码风格,受到众多开发者的喜爱和好评。不过在`Apollo`之前的版本中,感知、预测、导航、规划模块均依赖于高精地图,而高精地图的制作方法繁琐且不透明,对于很多开发者而言,这是一个难以逾越的障碍。因为没有高精地图,很多人只能使用`Apollo`提供的模拟数据包进行走马观花式的观赏,而无法在测试道路上完成真枪实弹式的实车调试,这极大降低了`Apollo`项目带来的便利,也不利于自动驾驶开源社区的发展和壮大。显然,`Apollo`项目组已注意到该问题,经过他们几个月的艰苦努力,终于在2.5版开发了一种新的基于相对地图(`relative map`)的导航模式(`navigation mode`),利用该模式可顺利实施测试道路上的实车调试。 相对地图是Apollo2.5引入的新特性。从架构层面,相对地图模块是连接高精地图(`HD Map`)、感知(`Perception`)模块和规划(`Planning`)模块的中间层。相对地图模块会实时生成基于车身坐标系的地图(格式与高精地图一致),并且输出供规划模块使用的参考线。更多信息,可以参考[相对地图的说明文档](../../modules/map/relative_map/README.md)。从开发者友好性角度看,基于相对地图的导航模式,让开发者可以不依赖高精地图便可实施测试道路的实车调试,极大降低了开发者的使用门槛。 导航模式的基本思路是: 1. 通过人工驾驶方式录制测试道路上的行驶轨迹; 2. 利用`Apollo`工具对原始轨迹进行处理得到平滑轨迹,该轨迹既用于替代路由(`routing`)模块输出的导航路径,也是规划(`planning`)模块用到的参考线(或称指引线、中心线,`reference line`),还是生成相对地图(`relative map`)的基准线。此外,平滑轨迹还可用于替换高精地图内某些车道的参考线(默认情况下,高精地图将车道中心线作为参考线,在道路临时施工等特殊情形下该方式很不合适,需使用人工录制并平滑处理的轨迹替换特殊路段的车道参考线,当然本文不讨论该项内容); 3. 驾驶员将车辆行驶到测试道路起点,在`Dreamview`中打开导航(`Navigation`)选项及相关功能模块,切换到自动驾驶模式并启动车辆; 4. 自动驾驶过程中,感知(`perception`)模块的相机(`camera`)动态检测道路边界及障碍物,地图(`map`)模块下的相对地图(`relative map`)子模块基于参考线及道路边界实时地生成相对地图(使用以车辆当前位置为原点的相对坐标系),规划(`planning`)模块依据地图模块输出的相对地图和感知模块输出的障碍物信息,动态输出局部行驶路径给控制(`control`)模块执行。 5. 目前,导航模式仅支持单车道行驶,可完成加减速、跟车、遇障碍物减速停车或在车道宽度允许的情形下对障碍物绕行等功能,后续版本的导航模式将会进一步完善以支持多车道行驶、交通标志和红绿灯检测等。 本文对`Apollo2.5`版的构建、参考线数据采集与制作、`Dreamview`前端编译配置、导航模式使用等内容进行全面阐述,希望能给各位开发者正常使用`Apollo 2.5`版带来一定的便利。 ## 一、Apollo 2.5版的构建 首先从[GitHub网站](https://github.com/ApolloAuto/apollo)下载`Apollo2.5`版源代码,可以使用`git`命令下载,也可以直接通过网页下载压缩包。源代码下载完成并放置到合适的目录后,可以使用两种方法构建:1.在`Visual Studio Code`中构建(推荐);2.使用命令行构建。当然,两种方法都有一个前提,就是在你的机器上已经顺利安装了`Docker`。你可以使用`Apollo`提供的脚本文件[`install_docker.sh`](../../docker/scripts/install_docker.sh)安装`Docker`。 ### 1.1 在Visual Studio Code中构建 打开`Visual Studio Code`,执行菜单命令`文件->打开文件夹`,在弹出的对话框中,选择`Apollo项目`源文件夹,点击“确定”,如下图所示: ![img](images/navigation_mode/open_directory.png) ![img](images/navigation_mode/choose_apollo_directory.png) 之后,执行菜单命令`任务->运行生成任务`或直接按快捷键`Ctrl+Shift+B`(与`Visual Studio`和`QT`的快捷键一致)构建工程,若之前没有启动过`Docker`,则编译时会启动`Docker`,需在底部终端窗口输入超级用户密码。命令执行完毕,若在底部终端窗口出现`终端将被任务重用,按任意键关闭。`信息(如下图所示),则表示构建成功。整个过程**一定要保持网络畅通**,否则无法下载依赖包。构建过程可能会遇到一些问题,解决方法可参见我写的一篇[博客](https://blog.csdn.net/davidhopper/article/details/79349927) ,也可直接查看`GitHub`网站的[帮助文档](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md)。 ![img](images/navigation_mode/build_successfully.png) ### 1.2 在命令行中构建 按快捷键`Ctrl + Alt + T`打开命令行终端,输入如下命令启动`Docker`: ``` bash cd your_apollo_project_root_dir # 从中国大陆访问,最好加上“-C”选项,直接访问中国大陆镜像服务器以获取更快的下载速度 bash docker/scripts/dev_start.sh -C ``` 输入如下命令进入`Docker`: ``` bash bash docker/scripts/dev_into.sh ``` 在`Docker`内部,执行如下命令构建`Apollo`项目: ``` bash bash apollo.sh build ``` 整个操作如下图所示: ![img](images/navigation_mode/build_with_command.png) ### 1.3 修改定位模块UTM区域ID `Apollo`项目定位(`localization`)模块默认使用美国西部UTM坐标,在国内需要修改该值。在`Docker`外部,使用`vi`或其他文本编辑器,打开文件`[apollo项目根目录]/modules/localization/conf/localization.conf`,将下述内容: ``` bash --local_utm_zone_id=10 ``` 修改为下述内容(这是长沙地区的UTM区域ID,中国UTM分区可参考[该网页](http://www.360doc.com/content/14/0729/10/3046928_397828751.shtml)): ``` bash --local_utm_zone_id=49 ``` **注意:**如果录制数据时未修改上述内容,则线下模拟测试回放数据包时只能将错就错,千万不能再修改该值,否则地图上的参考线定位会出错!有一次我采集数据时,忘了修改该值,回放数据时又进行修改,结果导致参考线定位到了美国西海岸!我取消修改,按`F5`键刷新浏览器后显示就恢复正常了。 ### 1.4 配置Dreamview使用的UTM区域ID 打开文件`[apollo项目根目录]/modules/common/data/global_flagfile.txt`,在最后一行添加如下语句(这是长沙地区的UTM区域ID,中国UTM分区可参考[该网页](http://www.360doc.com/content/14/0729/10/3046928_397828751.shtml)): ``` --local_utm_zone_id=49 ``` 
## 二、参考线原始数据的采集 将构建好的`Apollo`项目文件导入车内工控机,并按照**步骤1.2**的方法进入`Docker`,再执行如下命令,启动`Dreamview`服务端程序: ``` bash bash scripts/bootstrap.sh ``` 在浏览器中打开网页[http://localhost:8888](http://localhost:8888)(注意不要使用代理),进入`Dreamview`界面,如下图所示: ![img](images/navigation_mode/dreamview_interface.png) **1** 驾驶员将车辆驶入待测试路段起点; **2** 操作员点击`Dreamview`界面左侧工具栏中的`Module Controller`按钮,进入模块控制页面,选中`GPS`、`Localization`、`Record Bag`选项,**注意:如果采集的数据包需用于线下模拟测试,还需加上`CAN Bus`选项。** ![img](images/navigation_mode/options_for_data_recording.png) **3** 驾驶员从起点启动车辆并按预定路线行驶至终点; **4** 操作员关闭`Dreamview`界面中的`Record Bag`选项,此时会在`/apollo/data/bag`目录(这是`Docker`中的目录,宿主机上对应的目录为`[你的apollo根目录]/data/bag`)中生成一个类似于`2018-04-01-09-58-00`的目录,该目录中保存着类似于`2018-04-01-09-58-00.bag`的数据包。这就是我们所需的数据包,请记住它的路径及名称。**注意:**单个数据包文件的默认录制时长为1分钟,默认文件大小为2048MB,可通过修改文件`/apollo/scripts/record_bag.sh`来改变默认值。 为后文阐述方便起见,我假设数据包`2018-04-01-09-58-00.bag`直接存放于`/apollo/data/bag`目录。 ## 三、参考线的制作 参考线的制作既可在车内工控机内完成,也可在其他计算机上实施。无论在哪台计算机上制作,我们首先假定已按**步骤1.2**的方法进入`Docker`,并按照**步骤二**中录制的数据包放置在`/apollo/data/bag`目录中,且假定该文件名为`2018-04-01-09-58-00.bag`(在你的机器上并非如此,这样做只是为了后文阐述方便而已)。 ### 3.1 从原始数据包提取裸数据 在`Docker`内部,使用如下命令从原始数据包提取裸数据: ``` bash cd /apollo/modules/tools/navigator python extractor.py /apollo/data/bag/2018-04-01-09-58-00.bag ``` 上述命令会在当前目录(易知我们在`/apollo/modules/tools/navigator`目录中)生成一个提取后的裸数据文件:`path_2018-04-01-09-58-00.bag.txt`。 为了验证裸数据的正确性,可以使用如下命令查看: ``` bash python viewer_raw.py ./path_2018-04-01-09-58-00.bag.txt ``` 会显示类似下图的路径图: ![img](images/navigation_mode/view_raw_data.png) ### 3.2 对裸数据进行平滑处理 如果录制数据时,车辆行驶不够平顺,提取的裸轨迹数据可能会不光滑,有必要对其进行平滑处理。继续在`Docker`内部使用如下命令完成平滑处理: ``` bash bash smooth.sh ./path_2018-04-01-09-58-00.bag.txt 200 ``` 注意:上述命令中`200`是平滑处理的长度,该值一般为`150-200`,如果执行失败,可尝试调整该参数,再次进行平滑。 为了验证平滑结果的正确性,可以使用如下命令查看: ``` bash python viewer_smooth.py ./path_2018-04-01-09-58-00.bag.txt ./path_2018-04-01-09-58-00.bag.txt.smoothed ``` 其中,第一个参数`./path_2018-04-01-09-58-00.bag.txt`是裸数据,第二个参数`./path_2018-04-01-09-58-00.bag.txt.smoothed`是平滑结果,显示效果类似下图: ![img](images/navigation_mode/view_smoothing_results.png) ## 四、Dreamview前端的编译 `Dreamview`前端默认使用`Baidu`地图,也可修改为`Google`地图,但需重新编译`Dreamview`前端,具体方法如下(**注意**:如不需修改地图设置,可忽略该节内容): ### 4.1 更改导航地图 打开文件`[apollo项目根目录]/modules/dreamview/frontend/src/store/config/ parameters.yml`,根据需要将下述内容替换为`Google`地图或`Baidu`地图: ``` bash navigation: # possible options: BaiduMap or GoogleMap map: "BaiduMap" # Google Map API: "https://maps.google.com/maps/api/js" # Baidu Map API: "https://api.map.baidu.com/api?v=3.0&ak=0kKZnWWhXEPfzIkklmzAa3dZ&callback=initMap" mapAPiUrl: "https://api.map.baidu.com/api?v=3.0&ak=0kKZnWWhXEPfzIkklmzAa3dZ&callback=initMap" ``` ### 4.2 重新编译Dreamview前端 按照**步骤1.2**的方法进入`Docker`,运行如下命令编译`Dreamview`前端: ``` bash # 安装Dreamview前端依赖包,注意:该步骤只需执行一次,不必每次执行 cd /apollo/modules/dreamview/frontend/ yarn install # 编译Dreamview前端 cd /apollo bash apollo.sh build_fe ``` 编译过程可能会出现如下错误: ``` ERROR in ../~/css-loader!../~/sass-loader/lib/loader.js?{"includePaths":["./node_modules"]}!./styles/main.scss* *Module build failed: Error: ENOENT: no such file or directory, scandir '/apollo/modules/dreamview/frontend/node_modules/node-sass/vendor'* ... (后面还有一长串,不再一一列出) ``` 这是内部依赖包不一致造成的,**解决方法如下:** 在`Docker`内部,运行如下命令(注意:**一定要保持网络畅通**,否则无法重新下载依赖包): ``` bash cd /apollo/modules/dreamview/frontend/ rm -rf node_modules yarn install cd /apollo bash apollo.sh build_fe ``` ## 五、导航模式的使用 ### 5.1. 
打开Dreamview并开启导航模式 进入`Docker`,启动`Dreamview`,命令如下: ``` bash cd your_apollo_project_root_dir # 如果没有启动Docker,首先启动,否则忽略该步 bash docker/scripts/dev_start.sh -C # 进入Docker bash docker/scripts/dev_into.sh # 启动Dreamview后台服务 bash scripts/bootstrap.sh ``` 若是线下模拟测试,则将**步骤二**中录制好的数据包`/apollo/data/bag/2018-04-01-09-58-00.bag`(这是我机器上的录制数据)循环播放;**若是实车调试,则忽略该步骤**。 ``` bash # 模拟测试情形下,循环播放录制数据;实车调试情形忽略该步骤 rosbag play -l /apollo/data/bag/2018-04-01-09-58-00.bag ``` 在浏览器中打开网页[http://localhost:8888](http://localhost:8888)(注意不要使用代理),进入`Dreamview`界面,点击右上方下拉框,将模式设置为`Navigation`(导航模式),如下图所示: ![img](images/navigation_mode/enable_navigation_mode.png) ### 5.2 启用导航模式下的相关功能模块 点击`Dreamview`界面左侧工具栏中的`Module Controller`按钮,进入模块控制页面。**若是线下模拟测试**,选中`Relative Map`、`Navi Planning`选项,其他模块根据需要开启,如下图所示(图中显示空白文本的模块是`Mobileye`模块,需安装配置好相关硬件后才可见)): ![img](images/navigation_mode/test_in_navigation_mode.png) **若是实车调试**,建议除`Record Bag`、`Mobileye`(若`Mobileye`硬件未安装,则会显示为空白文本)和`Third Party Perception`模块外,其余模块全部开启,如下图所示: ![img](images/navigation_mode/drive_car_in_navigation_mode.png) ### 5.3 发送参考线数据 在`Docker`内部,使用如下命令发送**步骤三**中制作的参考线数据: ``` bash cd /apollo/modules/tools/navigator python navigator.py ./path_2018-04-01-09-58-00.bag.txt.smoothed ``` 下图是**线下模拟测试情形下**`Dreamview`接收到参考线后的界面,注意界面左上角已出现了百度地图界面,我们发送的参考线在百度地图中以红线方式、在主界面中以白色车道线的方式展现。 ![img](images/navigation_mode/navigation_mode_with_reference_line_test.png) 下图是**实车调试情形下的**`Dreamview`接收到参考线后的界面,注意界面左上角已出现了百度地图界面,我们发送的参考线在百度地图中以红线方式、在主界面中以黄色车道线的方式展现。 ![img](images/navigation_mode/navigation_mode_with_reference_line_car.png) 需注意以下几点: (1) 如果发送参考线数据后,`Dreamview`界面不能正确显示参考线,可能有以下方面的原因:一是参考线数据未正确发送,解决办法是再次执行发送命令;二是浏览器缓存不一致,解决办法是按`Ctrl + R`或`F5`键刷新显示,或者清理浏览器缓存;三是`Dreamview`后台服务程序运行异常,解决办法是在`Docker`内部重启`Dreamview`后台服务,命令如下: ``` bash # 停止Dreamview后台服务 bash scripts/bootstrap.sh stop # 重新启动Dreamview后台服务 bash scripts/bootstrap.sh ``` (2) 每次车辆重新回到起点后,无论是线下模拟测试还是实车调试情形,**均需再次发送参考线数据**。
apollo_public_repos/apollo/docs/14_Others/Apollo_3.0_Software_Architecture_cn.md
# Apollo 3.0 软件架构 自动驾驶Apollo3.0核心软件模块包括: - **感知** — 感知模块识别自动驾驶车辆周围的世界。感知中有两个重要的子模块:障碍物检测和交通灯检测。 - **预测** — 预测模块预测感知障碍物的未来运动轨迹。 - **路由** — 路由模块告诉自动驾驶车辆如何通过一系列车道或道路到达其目的地。 - **规划** — 规划模块规划自动驾驶车辆的时间和空间轨迹。 - **控制** — 控制模块通过产生诸如油门,制动和转向的控制命令来执行规划模块产生的轨迹。 - **CanBus** — CanBus是将控制命令传递给车辆硬件的接口。它还将底盘信息传递给软件系统。 - **高精地图** — 该模块类似于库。它不是发布和订阅消息,而是经常用作查询引擎支持,以提供关于道路的特定结构化信息。 - **定位** — 定位模块利用GPS,LiDAR和IMU的各种信息源来定位自动驾驶车辆的位置。 - **HMI** — Apollo中的HMI和DreamView是一个用于查看车辆状态,测试其他模块以及实时控制车辆功能的模块. - **监控** — 车辆中所有模块的监控系统包括硬件。 - **Guardian** — 新的安全模块,用于干预监控检测到的失败和action center相应的功能。 执行操作中心功能并进行干预的新安全模块应监控检测故障。 ``` 注意:下面列出了每个模块的详细信息。 ``` 这些模块的交互如下图所示。 ![img](images/Apollo_3.0_SW.png) 每个模块都作为单独的基于CarOS的ROS节点运行。每个模块节点都发布和订阅特定topic。订阅的topic用作数据输入,而发布的topic用作数据输出。以下各节详细介绍了各模块情况。 ## 感知 感知依赖LiDAR点云数据和相机原始数据。除了这些传感器数据输入之外,交通灯检测依赖定位以及HD-Map。由于实时ad-hoc交通灯检测在计算上是不可行的,因此交通灯检测需要依赖定位确定何时何地开始通过相机捕获的图像检测交通灯。 对Apollo 3.0的更改: - CIPV检测/尾随 - 在单个车道内移动。 - 全线支持 - 粗线支持,可实现远程精确度。相机安装有高低两种不同的安装方式。 - 异步传感器融合 – 因为不同传感器的帧速率差异——雷达为10ms,相机为33ms,LiDAR为100ms,所以异步融合LiDAR,雷达和相机数据,并获取所有信息并得到数据点的功能非常重要。 - 在线姿态估计 - 在出现颠簸或斜坡时确定与估算角度变化,以确保传感器随汽车移动且角度/姿态相应地变化。 - 视觉定位 – 基于相机的视觉定位方案正在测试中。 - 超声波传感器 – 作为安全保障传感器,与Guardian一起用于自动紧急制动和停车。 ## 预测 预测模块负责预测所有感知障碍物的未来运动轨迹。输出预测消息封装了感知信息。预测订阅定位和感知障碍物消息,如下所示。 ![img](images/prediction.png) 当接收到定位更新时,预测模块更新其内部状态。当感知发出其发布感知障碍物消息时,触发预测实际执行。 ## 定位 定位模块聚合各种数据以定位自动驾驶车辆。有两种类型的定位模式:OnTimer和多传感器融合。 第一种基于RTK的定位方法,通过计时器的回调函数“OnTimer”实现,如下所示。 ![img](images/localization.png) 另一种定位方法是多传感器融合(MSF)方法,其中注册了一些事件触发的回调函数,如下所示。 ![img](images/localization_2.png) ## 路由 为了计算可通行车道和道路,路由模块需要知道起点和终点。通常,路由起点是自动驾驶车辆位置。重要的数据接口是一个名为`OnRoutingRequest`的事件触发函数,其中`RoutingResponse`的计算和发布如下所示。 ![img](images/routing.png) ## 规划 Apollo 2.0需要使用多个信息源来规划安全无碰撞的行驶轨迹,因此规划模块几乎与其他所有模块进行交互。 首先,规划模块获得预测模块的输出。预测输出封装了原始感知障碍物,规划模块订阅交通灯检测输出而不是感知障碍物输出。 然后,规划模块获取路由输出。在某些情况下,如果当前路由结果不可执行,则规划模块还可以通过发送路由请求来触发新的路由计算。 最后,规划模块需要知道定位信息(定位:我在哪里)以及当前的自动驾驶车辆信息(底盘:我的状态是什么)。规划模块由固定频率触发,主数据接口是调用`RunOnce`函数的`OnTimer`回调函数。 ![img](images/planning_1.png) 底盘,定位,交通灯和预测等数据依赖关系通过`AdapterManager`类进行管理。核心软件模块同样也由`AdapterManager`类管理。例如,定位通过`AdapterManager :: GetLocalization()`管理,如下所示。 ![img](images/planning_2.png) ## 控制 如规划模块中所述,控制将规划轨迹作为输入,并生成控制命令传递给CanBus。它有三个主要的数据接口:OnPad,OnMonitor和OnTimer。 ![img](images/control_1.png) `OnPad`和`OnMonitor`是仿真和HMI的交互接口。 主要数据接口是`OnTimer`,它定期产生实际的控制命令,如下所示。 ![img](images/control_2.png) ## CanBus CanBus有两个数据接口。 ![img](images/canbus_1.png) 第一个数据接口是基于计时器的发布者,回调函数为“OnTimer”。如果启用,此数据接口会定期发布底盘信息。 ![img](images/canbus_2.png) 第二个数据接口是一个基于事件的发布者,回调函数为“OnControlCommand”,当CanBus模块接收到控制命令时会触发该函数。 ## HMI Apollo中的HMI或DreamView是一个Web应用程序: - 可视化自动驾驶模块的输出,例如,规划轨迹,汽车定位,底盘状态等。 - 为用户提供人机交互界面,以查看硬件状态,打开/关闭模块,以及启动自动驾驶汽车。 - 提供调试工具,如PnC Monitor,以有效跟踪模块问题。 ## 监控 包括硬件在内的,车辆中所有模块的监控系统。监控模块从其他模块接收数据并传递给HMI,以便司机查看并确保所有模块都正常工作。如果模块或硬件发生故障,监控会向Guardian(新的操作中心模块)发送警报,然后决定需要采取哪些操作来防止系统崩溃。 ## Guardian 这个新模块根据Monitor发送的数据做出相应决定。Guardian有两个主要功能: - 所有模块都正常工作 - Guardian允许控制模块正常工作。控制信号被发送到CANBus,就像Guardian不存在一样。 - 监控检测到模块崩溃 - 如果监控检测到故障,Guardian将阻止控制信号到达CANBus并使汽车停止。 Guardian有三种方式决定如何停车并会依赖最终的Gatekeeper——超声波传感器, - 如果超声波传感器运行正常而未检测到障碍物,Guardian将使汽车缓慢停止 - 如果传感器没有响应,Guardian会硬制动,使车马上停止。 - 这是一种特殊情况,如果HMI通知驾驶员即将发生碰撞并且驾驶员在10秒内没有干预,Guardian会使用硬制动使汽车立即停止。 ``` 注意: 1.在上述任何一种情况下,如果Monitor检测到任何模块或硬件出现故障,Guardian将始终停止该车。 2.监控器和Guardian解耦以确保没有单点故障,并且可以为Guardian模块添加其他行为且不影响监控系统,监控还与HMI通信。 ```
apollo_public_repos/apollo/docs/14_Others/Apollo_2.0_Software_Architecture.md
# Apollo 2.0 Software Architecture Core software modules running on the Apollo 2.0 powered autonomous vehicle include: - **Perception** — The perception module identifies the world surrounding the autonomous vehicle. There are two important submodules inside perception: obstacle detection and traffic light detection. - **Prediction** — The prediction module anticipates the future motion trajectories of the perceived obstacles. - **Routing** — The routing module tells the autonomous vehicle how to reach its destination via a series of lanes or roads. - **Planning** — The planning module plans the spatio-temporal trajectory for the autonomous vehicle to take. - **Control** — The control module executes the planned spatio-temporal trajectory by generating control commands such as throttle, brake, and steering. - **CanBus** — The CanBus is the interface that passes control commands to the vehicle hardware. It also passes chassis information to the software system. - **HD-Map** — This module is similar to a library. Instead of publishing and subscribing messages, it frequently functions as query engine support to provide ad-hoc structured information regarding the roads. - **Localization** — The localization module leverages various information sources such as GPS, LiDAR and IMU to estimate where the autonomous vehicle is located. The interactions of these modules are illustrated in the picture below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/Apollo_2_0_Software_Arch.png) Every module is running as a separate CarOS-based ROS node. Each module node publishes and subscribes certain topics. The subscribed topics serve as data input while the published topics serve as data output. The detailed interactions are described in the following sections. ## Perception Perception depends on the raw sensor data such as LiDAR point cloud data and camera data. In addition to these raw sensor data inputs, traffic light detection also depends on the localization data as well as the HD-Map. Because real-time ad-hoc traffic light detection is computationally infeasible, traffic light detection needs localization to determine when and where to start detecting traffic lights through the camera captured pictures. ## Prediction The prediction module estimates the future motion trajectories for all the perceived obstacles. The output prediction message wraps the perception information. Prediction subscribes to both localization and perception obstacle messages as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/prediction.png) When a localization update is received, the prediction module updates its internal status. The actual prediction is triggered when perception sends out its published perception obstacle message. ## Localization The routing module aggregates various data to locate the autonomous vehicle. There are two types of localization modes: OnTimer and Multiple SensorFusion. The first localization method is RTK-based, with a timer-based callback function `OnTimer`, as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/localization.png) The other localization method is the Multiple Sensor Fusion (MSF) method, where a bunch of event-triggered callback functions are registered, as shown below. 
![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/localization_2.png) ## Routing The routing module needs to know the routing start point and routing end point, to compute the passage lanes and roads. Usually the routing start point is the autonomous vehicle location. The important data interface is an event triggered function called `OnRoutingRequest`, in which `RoutingResponse` is computed and published as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/routing.png) ## Planning Apollo 2.0 uses several information sources to plan a safe and collision free trajectory, so the planning module interacts with almost every other module. Initially, the planning module takes the prediction output. Because the prediction output wraps the original perceived obstacle, the planning module subscribes to the traffic light detection output rather than the perception obstacles output. Then, the planning module takes the routing output. Under certain scenarios, the planning module might also trigger a new routing computation by sending a routing request if the current route cannot be faithfully followed. Finally, the planning module needs to know the location (Localization: where I am) as well as the current autonomous vehicle information (Chassis: what is my status). The planning module is also triggered by a fixed frequency, and the main data interface is the `OnTimer` callback function that invokes the `RunOnce` function. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/planning_1.png) The data dependencies such as chassis, localization, traffic light, and prediction are managed through the `AdapterManager` class. The core software modules are similarly managed. For example, localization is managed through `AdapterManager::GetLocalization()` as shown below.![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/planning_2.png) ## Control As described in the planning module, control takes the planned trajectory as input, and generates the control command to pass to CanBus. It has three main data interfaces: OnPad, OnMonitor, and OnTimer. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/control_1.png) The `OnPad` and `OnMonitor` are routine interactions with the PAD-based human interface and simulations. The main data interface is the `OnTimer` interface, which periodically produces the actual control commands as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/control_2.png) ## CanBus The CanBus has two data interfaces as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/canbus_1.png) The first data interface is a timer-based publisher with the callback function `OnTimer`. This data interface periodically publishes the chassis information as well as chassis details, if enabled. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/canbus_2.png) The second data interface is an event-based publisher with a callback function `OnControlCommand`, which is triggered when the CanBus module receives control commands.
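The two CanBus data interfaces described above follow standard ROS patterns: a timer-driven publisher and an event-driven (subscription) callback. The sketch below illustrates those two patterns with `rospy`, using `std_msgs/String` as a stand-in message type and placeholder topic names; it is not the actual Apollo CanBus code.

```python
# Illustrative rospy sketch of the two CanBus-style data interfaces:
# (1) a timer-based publisher and (2) an event-based callback.
# Placeholder topics and message types; not the Apollo implementation.
import rospy
from std_msgs.msg import String

def on_timer(event):
    # Periodically publish chassis information (stand-in payload here).
    chassis_pub.publish(String(data="chassis status placeholder"))

def on_control_command(msg):
    # Triggered whenever a control command message arrives.
    rospy.loginfo("received control command: %s", msg.data)

rospy.init_node("canbus_sketch")
chassis_pub = rospy.Publisher("/demo/chassis", String, queue_size=10)
rospy.Subscriber("/demo/control_command", String, on_control_command)
rospy.Timer(rospy.Duration(0.01), on_timer)  # 100 Hz chassis publishing
rospy.spin()
```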
apollo_public_repos/apollo/docs/14_Others/Apollo_3.5_Software_Architecture.md
# Apollo 3.5 Software Architecture Core software modules running on the Apollo 3.5 powered autonomous vehicle include: - **Perception** — The perception module identifies the world surrounding the autonomous vehicle. There are two important submodules inside perception: obstacle detection and traffic light detection. - **Prediction** — The prediction module anticipates the future motion trajectories of the perceived obstacles. - **Routing** — The routing module tells the autonomous vehicle how to reach its destination via a series of lanes or roads. - **Planning** — The planning module plans the spatio-temporal trajectory for the autonomous vehicle to take. - **Control** — The control module executes the planned spatio-temporal trajectory by generating control commands such as throttle, brake, and steering. - **CanBus** — The CanBus is the interface that passes control commands to the vehicle hardware. It also passes chassis information to the software system. - **HD-Map** — This module is similar to a library. Instead of publishing and subscribing messages, it frequently functions as query engine support to provide ad-hoc structured information regarding the roads. - **Localization** — The localization module leverages various information sources such as GPS, LiDAR and IMU to estimate where the autonomous vehicle is located. - **HMI** - Human Machine Interface or DreamView in Apollo is a module for viewing the status of the vehicle, testing other modules and controlling the functioning of the vehicle in real-time. - **Monitor** - The surveillance system of all the modules in the vehicle including hardware. - **Guardian** - A new safety module that performs the function of an Action Center and intervenes should Monitor detect a failure. ``` Note: Detailed information on each of these modules is included below. ``` The interactions of these modules are illustrated in the picture below. ![img](images/Apollo_3_5_software_architecture.png) Every module is running as a separate CarOS-based ROS node. Each module node publishes and subscribes certain topics. The subscribed topics serve as data input while the published topics serve as data output. The detailed interactions are described in the following sections. ## Perception Apollo Perception 3.5 has following new features: * **Support for VLS-128 Line LiDAR** * **Obstacle detection through multiple cameras** * **Advanced traffic light detection** * **Configurable sensor fusion** The perception module incorporates the capability of using 5 cameras (2 front, 2 on either side and 1 rear) and 2 radars (front and rear) along with 3 16-line LiDARs (2 rear and 1 front) and 1 128-line LiDAR to recognize obstacles and fuse their individual tracks to obtain a final track list. The obstacle sub-module detects, classifies and tracks obstacles. This sub-module also predicts obstacle motion and position information (e.g., heading and velocity). For lane line, we construct lane instances by postprocessing lane parsing pixels and calculate the lane relative location to the ego-vehicle (L0, L1, R0, R1, etc.). ## Prediction The prediction module estimates the future motion trajectories for all the perceived obstacles. The output prediction message wraps the perception information. Prediction subscribes to localization, planning and perception obstacle messages as shown below. ![img](images/pred.png) When a localization update is received, the prediction module updates its internal status. 
The actual prediction is triggered when perception sends out its perception obstacle message. ## Localization The localization module aggregates various data to locate the autonomous vehicle. There are two types of localization modes: OnTimer and Multiple SensorFusion. The first localization method is RTK-based, with a timer-based callback function `OnTimer`, as shown below. ![img](images/localization1.png) The other localization method is the Multiple Sensor Fusion (MSF) method, where a bunch of event-triggered callback functions are registered, as shown below. ![img](images/localization2.png) ## Routing The routing module needs to know the routing start point and routing end point, to compute the passage lanes and roads. Usually the routing start point is the autonomous vehicle location. The `RoutingResponse` is computed and published as shown below. ![img](images/routing1.png) ## Planning Apollo 3.5 uses several information sources to plan a safe and collision free trajectory, so the planning module interacts with almost every other module. As Apollo matures and takes on different road conditions and driving use cases, planning has evolved to a more modular, scenario specific and wholistic approach. In this approach, each driving use case is treated as a different driving scenario. This is useful because an issue now reported in a particular scenario can be fixed without affecting the working of other scenarios as opposed to the previous versions, wherein an issue fix affected other driving use cases as they were all treated as a single driving scenario. Initially, the planning module takes the prediction output. Because the prediction output wraps the original perceived obstacle, the planning module subscribes to the traffic light detection output rather than the perception obstacles output. Then, the planning module takes the routing output. Under certain scenarios, the planning module might also trigger a new routing computation by sending a routing request if the current route cannot be faithfully followed. Finally, the planning module needs to know the location (Localization: where I am) as well as the current autonomous vehicle information (Chassis: what is my status). ![img](images/planning1.png) ## Control The Control takes the planned trajectory as input, and generates the control command to pass to CanBus. It has five main data interfaces: OnPad, OnMonitor, OnChassis, OnPlanning and OnLocalization. ![img](images/control1.png) The `OnPad` and `OnMonitor` are routine interactions with the PAD-based human interface and simulations. ## CanBus The CanBus has two data interfaces as shown below. ![img](images/canbus1.png) The first one is the `OnControlCommand` which is an event-based publisher with a callback function, which is triggered when the CanBus module receives control commands and the second one is `OnGuardianCommand`. ## HMI Human Machine Interface or DreamView in Apollo is a web application that: - visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc. - provides human-machine interface for user to view hardware status, turn on/off of modules, and start the autonomous driving car. - provides debugging tools, such as PnC Monitor to efficiently track module issues. ## Monitor The surveillance system of all the modules in the vehicle including hardware. 
Monitor receives Data from different modules and passes them on to HMI for the driver to view and ensure that all the modules are working without any issue. In the event of a module or hardware failure, monitor sends an alert to Guardian (new Action Center Module) which then decides on which action needs to be taken to prevent a crash. ## Guardian This new module is basically an action center that takes a decision based on the data that is sent by Monitor. There are 2 main functions of Guardian: - All modules working fine - Guardian allows the flow of control to work normally. Control signals are sent to CANBus as if Guardian were not present. - Module crash is detected by Monitor - if there is a failure detected by Monitor, Guardian will prevent Control signals from reaching CANBus and bring the car to a stop. There are 3 ways in which Guardian decides how to stop the car, and to do so, Guardian turns to the final Gatekeeper, Ultrasonic sensors, - If the Ultrasonic sensor is running fine without detecting an obstacle, Guardian will bring the car to a slow stop - If the sensor is not responding, Guardian applies a hard brake to bring the car to an immediate stop. - This is a special case, If the HMI informs the driver of an impending crash and the driver does not intervene for 10 seconds, Guardian applies a hard brake to bring the car to an immediate stop. ``` Note: 1. In either case above, Guardian will always stop the car should Monitor detect a failure in any module or hardware. 2. Monitor and Guardian are decoupled to ensure that there is not a single point of failure and also that with a module approach, the action center can be modified to include additional actions without affecting the functioning of the surveillance system as Monitor also communicates with HMI. ```
apollo_public_repos/apollo/docs/14_Others/navigation_mode_tutorial_cn.md
### Apollo导航模式教程 **警告** > 本文档仅适用于 Apollo 2.5 及 3.0; Apollo 3.5及以后的版本不支持。 #### 1. 教程简介 无人驾驶系统利用实时感知信息和静态地图信息构建出完整驾驶环境,并在构建的环境中,依据routing数据,规划出行车轨迹,并由控制模块执行完成。Apollo导航模式在上述的框架下,针对高速、乡村道路等简单道路场景,进行了以下的提升: 1. Apollo导航模式以提高安全性和稳定性为目的,在驾驶环境中加入了静态引导信息,引导在驾驶环境中的轨迹规划,使其更安全更舒适。同时,引导信息也降低了对驾驶环境的完整性的要求 -- 即降低了对地图信息的要求。 2. Apollo导航模式使用了相对/车身坐标系。减少了sensor数据的转化。同时也支持各种驾驶模式之间的转化,以应对不同的驾驶场景和条件。 3. Apollo导航模式引入了百度地图的Routing功能,考虑实时路况信息,使得Routing的结果更实用,更精确,更稳定。也使得Apollo系统更易于落地和商用。 ##### 在本教程中,你将完成 学习完本教程后,你将能够在导航模式下进行规划模块(planning)的线下调试和开发。 ##### 在本教教中,你将掌握 - 如何设置Apollo导航模式 - 如何利用云端指引者发送指引线 - 如何利用录制的ros bag产生指引线并用线下指引者发送 - 如何进行规划模块的调试 ##### 在本教程中,你需要如下准备 - 下载并编译Apollo最新源码([Howto](https://github.com/ApolloAuto/apollo/tree/master/docs/demo_guide)) - 下载 [Apollo2.5 demo bag](https://github.com/ApolloAuto/apollo/releases/download/v2.5.0/demo_2.5.bag) #### 2. 配置导航模式 在导航模式下,有以下几个参数需要进行配置: - 感知方案:目前支持摄像头方案(CAMERA)和基于Mobileye的方案(MOBILEYE) - Apollo UTM Zone - 规划模块的Planner:目前支持EM, LATTICE, 和NAVI三种 - 系统限速:单位为米/秒 ##### 在Docker下修改配置文件 配置文件位于: ```bash /apollo/modules/tools/navigation/config/default.ini ``` 默认配置为: ```bash [PerceptionConf] # three perception solutions: MOBILEYE, CAMERA, and VELODYNE64 perception = CAMERA [LocalizationConf] utm_zone = 10 [PlanningConf] # three planners are available: EM, LATTICE, NAVI planner_type = EM # highest speed for planning algorithms, unit is meter per second speed_limit = 5 ``` 该默认配置为Apollo 2.5 Demo bag录制时的配置,在此教程中,我们直接使用。 ##### 生效配置信息 为了使配置生效,在Docker内的Apollo根目录下,运行如下命令 ```bash in_dev_docker:/apollo$ cd /apollo/modules/tools/navigation/config/ in_dev_docker:/apollo/modules/tools/navigation/config$ python navi_config.py default.ini ``` #### 3. 云端指引者的使用 ##### 回放demo bag 在进入Docker,启动Apollo之前,我们把[Apollo2.5 demo bag](https://github.com/ApolloAuto/apollo/releases/download/v2.5.0/demo_2.5.bag) 拷贝到Apollo代码根目录下的data目录中。 在Docker内编译成功后,我们用如下命令启动Dreamview: ```bash in_dev_docker:/apollo$ ./scripts/bootstrap.sh start ``` 并在本地浏览器中打开 ```bash http://localhost:8888 ``` 如下图所示,在模式框中选择“Navigation”。 ![img](images/navigation_mode_tutorial/navigation_mode_1_init.png) 然后在Docker内的apollo根目录下运行如下命令进行bag播放 ```bash in_dev_docker:/apollo$cd data in_dev_docker:/apollo/data$rosbag play demo_2.5.bag ``` 播放开始后,可以看到Dreamview界面如下 ![img](images/navigation_mode_tutorial/navigation_mode_2_play.png) ##### 请求云端指引线 在地图中选择一个目的地(沿canada路),点击地图视图中的红色Route按钮,云端指引者会接收到这个请求,并返回指引线,该指引线会被显示在地图视图中。如下图所示。 ![img](images/navigation_mode_tutorial/navigation_mode_3_cloud.png) 以上就是云端指引者的调用过程。 #### 4. 离线指引者工具的使用 目前云端指引者只覆盖了有限的区域。除了云端的服务之外,我们还提供了离线指引者工具来制作和发送线下指引线。在本教程中,我们以[Apollo2.5 demo bag](https://github.com/ApolloAuto/apollo/releases/download/v2.5.0/demo_2.5.bag)为例来生成指引线。 ##### 指引线的制作 生成指引线的步骤为 - 从bag中提取路径数据 ```bash in_dev_docker:/apollo$cd modules/tools/navigator in_dev_docker:/apollo/modules/tools/navigator$python extractor.py /apollo/data/demo_2.5.bag ``` 提取出来的路径数据在路径 ```bash in_dev_docker:/apollo/modules/tools/navigator$ ``` 中的 ```bash path_demo_2.5.bag.txt ``` - 平滑路径数据 ```bash in_dev_docker:/apollo/modules/tools/navigator$bash smooth.sh path_demo_2.5.bag.txt 200 ``` 平滑后的的数据在 ```bash in_dev_docker:/apollo/modules/tools/navigator$path_demo_2.5.bag.txt.smoothed ``` ##### 指引线的发送 得到平滑后的数据就可以发送到Apollo系统中,作为指引线,步骤为: ```bash in_dev_docker:/apollo/modules/tools/navigator$python navigator.py path_demo_2.5.bag.txt.smoothed ``` 发送完成后,Dreamview的地图视图中的红色指引线会更新为如下图所示: ![img](images/navigation_mode_tutorial/navigation_mode_4_offline.png) #### 5. 
规划模块的调试 ##### 调试数据准备 利用bag来进行模块调试,首先要把bag中的相应ros message过滤掉。假设我们想要调试规划模块,我们需要把消息 ``` /apollo/planning ``` 过滤,使用以下命令 ```bash in_dev_docker:/apollo$cd data in_dev_docker:/apollo/data$rosbag filter demo_2.5.bag demo_2.5_no_planning.bag "topic != '/apollo/planning'" ``` 过滤后的bag位于 ```bash in_dev_docker:/apollo/data$demo_2.5_no_planning.bag ``` ##### 规划轨迹的产生 我们播放没有规划的bag,用下面的命令 ```bash in_dev_docker:/apollo/data$rosbag play demo_2.5_no_planning.bag ``` 在Dreamview中我们会看到车辆的规划轨迹没有输出,如下图 ![img](images/navigation_mode_tutorial/navigation_mode_5_no_planning.png) 我们在Dreamview中打开Navi Planning模块,如下图 ![img](images/navigation_mode_tutorial/navigation_mode_6_live_planning.png) 我们看到实时计算的车辆的规划轨迹显示在Dreamview中。这时你可以试着更改一些规划模块的配置 ``` in_dev_docker:/apollo/modules/planning/conf$planning_config_navi.pb.txt ``` 去了解,这些参数会对规划结果有什么影响。或者修改规划算法的代码,进行调试。 #### 6.结束 恭喜你完成了本教程。现在你应该了解 - 如何设置Apollo导航模式 - 如何利用云端指引者发送指引线 - 如何利用录制的ros bag产生指引线并用线下指引者发送 - 如何进行规划模块的调试 你也可以试着利用demo bag对其他一些模块进行调试。
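The `rosbag filter` command used in the debugging section above removes the planning channel so that the live planner can be exercised against replayed data. The same filtering can also be done programmatically with the `rosbag` Python API, which is convenient when several topics need to be stripped at once; the sketch below is a generic helper, not part of the Apollo tools.

```python
# Sketch: copy a bag while dropping selected topics (generic helper, not an Apollo tool).
import rosbag

DROP_TOPICS = {"/apollo/planning"}   # topics to remove
SRC = "/apollo/data/demo_2.5.bag"
DST = "/apollo/data/demo_2.5_no_planning.bag"

with rosbag.Bag(SRC) as src, rosbag.Bag(DST, "w") as dst:
    for topic, msg, t in src.read_messages():
        if topic not in DROP_TOPICS:
            dst.write(topic, msg, t)
```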
apollo_public_repos/apollo/docs/14_Others/Apollo_3.0_Software_Architecture.md
# Apollo 3.0 Software Architecture Core software modules running on the Apollo 3.0 powered autonomous vehicle include: - **Perception** — The perception module identifies the world surrounding the autonomous vehicle. There are two important submodules inside perception: obstacle detection and traffic light detection. - **Prediction** — The prediction module anticipates the future motion trajectories of the perceived obstacles. - **Routing** — The routing module tells the autonomous vehicle how to reach its destination via a series of lanes or roads. - **Planning** — The planning module plans the spatio-temporal trajectory for the autonomous vehicle to take. - **Control** — The control module executes the planned spatio-temporal trajectory by generating control commands such as throttle, brake, and steering. - **CanBus** — The CanBus is the interface that passes control commands to the vehicle hardware. It also passes chassis information to the software system. - **HD-Map** — This module is similar to a library. Instead of publishing and subscribing messages, it frequently functions as query engine support to provide ad-hoc structured information regarding the roads. - **Localization** — The localization module leverages various information sources such as GPS, LiDAR and IMU to estimate where the autonomous vehicle is located. - **HMI** - Human Machine Interface or DreamView in Apollo is a module for viewing the status of the vehicle, testing other modules and controlling the functioning of the vehicle in real-time. - **Monitor** - The surveillance system of all the modules in the vehicle including hardware. - **Guardian** - A new safety module that performs the function of an Action Center and intervenes should Monitor detect a failure. ``` Note: Detailed information on each of these modules is included below. ``` The interactions of these modules are illustrated in the picture below. ![img](images/Apollo_3.0_SW.png) Every module is running as a separate CarOS-based ROS node. Each module node publishes and subscribes certain topics. The subscribed topics serve as data input while the published topics serve as data output. The detailed interactions are described in the following sections. ## Perception Perception depends on the raw sensor data such as LiDAR point cloud data and camera data. In addition to these raw sensor data inputs, traffic light detection also depends on the localization data as well as the HD-Map. Because real-time ad-hoc traffic light detection is computationally infeasible, traffic light detection needs localization to determine when and where to start detecting traffic lights through the camera captured pictures. Changes to Apollo 3.0: - CIPV detection/ Tailgating – moving within a single lane - Whole lane line support - bold line support for long range accuracy. There are 2 different types on installations for Camera, low and high installation. - Asynchronous sensor fusion – get all the information and get data points by asynchronously fusing LiDAR, Radar and Camera data. This is specifically important because of the frame rate differences in the different sensors – Radar is 10ms, Camera is 33ms and LiDAR is 100ms - Online pose estimation – determines angle change and estimates it when there are bumps or slopes to ensure that the sensors move with the car and the angle/pose changes accordingly - Visual localization – we now use camera for localization. This functionality is currently being tested. 
- Ultrasonic Sensor – Currently being tested as the final gatekeeper to be used in conjunction with Guardian for Automated Emergency brake and vertical/perpendicular parking. ## Prediction The prediction module estimates the future motion trajectories for all the perceived obstacles. The output prediction message wraps the perception information. Prediction subscribes to both localization and perception obstacle messages as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/prediction.png) When a localization update is received, the prediction module updates its internal status. The actual prediction is triggered when perception sends out its published perception obstacle message. ## Localization The localization module aggregates various data to locate the autonomous vehicle. There are two types of localization modes: OnTimer and Multiple SensorFusion. The first localization method is RTK-based, with a timer-based callback function `OnTimer`, as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/localization.png) The other localization method is the Multiple Sensor Fusion (MSF) method, where a bunch of event-triggered callback functions are registered, as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/localization_2.png) ## Routing The routing module needs to know the routing start point and routing end point, to compute the passage lanes and roads. Usually the routing start point is the autonomous vehicle location. The important data interface is an event triggered function called `OnRoutingRequest`, in which `RoutingResponse` is computed and published as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/routing.png) ## Planning Apollo 2.0 uses several information sources to plan a safe and collision free trajectory, so the planning module interacts with almost every other module. Initially, the planning module takes the prediction output. Because the prediction output wraps the original perceived obstacle, the planning module subscribes to the traffic light detection output rather than the perception obstacles output. Then, the planning module takes the routing output. Under certain scenarios, the planning module might also trigger a new routing computation by sending a routing request if the current route cannot be faithfully followed. Finally, the planning module needs to know the location (Localization: where I am) as well as the current autonomous vehicle information (Chassis: what is my status). The planning module is also triggered by a fixed frequency, and the main data interface is the `OnTimer` callback function that invokes the `RunOnce` function. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/planning_1.png) The data dependencies such as chassis, localization, traffic light, and prediction are managed through the `AdapterManager` class. The core software modules are similarly managed. For example, localization is managed through `AdapterManager::GetLocalization()` as shown below.![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/planning_2.png) ## Control As described in the planning module, control takes the planned trajectory as input, and generates the control command to pass to CanBus. It has three main data interfaces: OnPad, OnMonitor, and OnTimer. 
![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/control_1.png) The `OnPad` and `OnMonitor` are routine interactions with the PAD-based human interface and simulations. The main data interface is the `OnTimer` interface, which periodically produces the actual control commands as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/control_2.png) ## CanBus The CanBus has two data interfaces as shown below. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/canbus_1.png) The first data interface is a timer-based publisher with the callback function `OnTimer`. This data interface periodically publishes the chassis information as well as chassis details, if enabled. ![img](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/canbus_2.png) The second data interface is an event-based publisher with a callback function `OnControlCommand`, which is triggered when the CanBus module receives control commands. ## HMI Human Machine Interface or DreamView in Apollo is a web application that: - visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc. - provides human-machine interface for user to view hardware status, turn on/off of modules, and start the autonomous driving car. - provides debugging tools, such as PnC Monitor to efficiently track module issues. ## Monitor The surveillance system of all the modules in the vehicle including hardware. Monitor receives Data from different modules and passes them on to HMI for the driver to view and ensure that all the modules are working without any issue. In the event of a module or hardware failure, monitor sends an alert to Guardian (new Action Center Module) which then decides on which action needs to be taken to prevent a crash. ## Guardian This new module is basically an action center that takes a decision based on the data that is sent by Monitor. There are 2 main functions of Guardian: - All modules working fine - Guardian allows the flow of control to work normally. Control signals are sent to CANBus as if Guardian were not present. - Module crash is detected by Monitor - if there is a failure detected by Monitor, Guardian will prevent Control signals from reaching CANBus and bring the car to a stop. There are 3 ways in which Guardian decides how to stop the car, and to do so, Guardian turns to the final Gatekeeper, Ultrasonic sensors, - If the Ultrasonic sensor is running fine without detecting an obstacle, Guardian will bring the car to a slow stop - If the sensor is not responding, Guardian applies a hard brake to bring the car to an immediate stop. - This is a special case, If the HMI informs the driver of an impending crash and the driver does not intervene for 10 seconds, Guardian applies a hard brake to bring the car to an immediate stop. ``` Note: 1. In either case above, Guardian will always stop the car should Monitor detect a failure in any module or hardware. 2. Monitor and Guardian are decoupled to ensure that there is not a single point of failure and also that with a module approach, the action center can be modified to include additional actions without affecting the functioning of the surveillance system as Monitor also communicates with HMI. ```
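The fixed-frequency trigger described in the Planning section above is the familiar timer-callback pattern: a periodic `OnTimer` callback gathers the latest inputs and invokes `RunOnce`. The `rospy` sketch below shows the shape of that loop with placeholder topics and data; it is illustrative only and not the Apollo planning code.

```python
# Illustrative rospy sketch of a fixed-frequency planning loop (not Apollo code).
import rospy
from std_msgs.msg import String

latest = {"localization": None, "chassis": None}  # stand-ins for AdapterManager data

def on_localization(msg):
    latest["localization"] = msg.data

def on_chassis(msg):
    latest["chassis"] = msg.data

def run_once():
    # Placeholder for the real planning cycle that consumes the latest inputs.
    rospy.loginfo("planning cycle with localization=%s chassis=%s",
                  latest["localization"], latest["chassis"])

def on_timer(event):
    run_once()

rospy.init_node("planning_sketch")
rospy.Subscriber("/demo/localization", String, on_localization)
rospy.Subscriber("/demo/chassis", String, on_chassis)
rospy.Timer(rospy.Duration(0.1), on_timer)  # 10 Hz planning cycle
rospy.spin()
```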
apollo_public_repos/apollo/docs/14_Others/Apollo_5.5_Software_Architecture.md
# Apollo 5.5 Software Architecture Core software modules running on the Apollo 3.5 powered autonomous vehicle include: - **Perception** — The perception module identifies the world surrounding the autonomous vehicle. There are two important submodules inside perception: obstacle detection and traffic light detection. - **Prediction** — The prediction module anticipates the future motion trajectories of the perceived obstacles. - **Routing** — The routing module tells the autonomous vehicle how to reach its destination via a series of lanes or roads. - **Planning** — The planning module plans the spatio-temporal trajectory for the autonomous vehicle to take. - **Control** — The control module executes the planned spatio-temporal trajectory by generating control commands such as throttle, brake, and steering. - **CanBus** — The CanBus is the interface that passes control commands to the vehicle hardware. It also passes chassis information to the software system. - **HD-Map** — This module is similar to a library. Instead of publishing and subscribing messages, it frequently functions as query engine support to provide ad-hoc structured information regarding the roads. - **Localization** — The localization module leverages various information sources such as GPS, LiDAR and IMU to estimate where the autonomous vehicle is located. - **HMI** - Human Machine Interface or DreamView in Apollo is a module for viewing the status of the vehicle, testing other modules and controlling the functioning of the vehicle in real-time. - **Monitor** - The surveillance system of all the modules in the vehicle including hardware. - **Guardian** - A new safety module that performs the function of an Action Center and intervenes should Monitor detect a failure. - **Storytelling** - A new module that isolates and manages complex scenarios, creating stories that would trigger multiple modules' actions. All other modules can subscribe to this particular module. ``` Note: Detailed information on each of these modules is included below. ``` The interactions of these modules are illustrated in the picture below. ![img](images/Apollo_3_5_software_architecture.png) Every module is running as a separate CarOS-based ROS node. Each module node publishes and subscribes certain topics. The subscribed topics serve as data input while the published topics serve as data output. The detailed interactions are described in the following sections. ## Perception Apollo Perception 3.5 has following new features: * **Support for VLS-128 Line LiDAR** * **Obstacle detection through multiple cameras** * **Advanced traffic light detection** * **Configurable sensor fusion** The perception module incorporates the capability of using 5 cameras (2 front, 2 on either side and 1 rear) and 2 radars (front and rear) along with 3 16-line LiDARs (2 rear and 1 front) and 1 128-line LiDAR to recognize obstacles and fuse their individual tracks to obtain a final track list. The obstacle sub-module detects, classifies and tracks obstacles. This sub-module also predicts obstacle motion and position information (e.g., heading and velocity). For lane line, we construct lane instances by postprocessing lane parsing pixels and calculate the lane relative location to the ego-vehicle (L0, L1, R0, R1, etc.). ## Prediction The prediction module estimates the future motion trajectories for all the perceived obstacles. The output prediction message wraps the perception information. 
Prediction subscribes to localization, planning and perception obstacle messages as shown below. ![img](images/pred.png) When a localization update is received, the prediction module updates its internal status. The actual prediction is triggered when perception sends out its perception obstacle message. ## Localization The localization module aggregates various data to locate the autonomous vehicle. There are two types of localization modes: OnTimer and Multiple SensorFusion. The first localization method is RTK-based, with a timer-based callback function `OnTimer`, as shown below. ![img](images/localization1.png) The other localization method is the Multiple Sensor Fusion (MSF) method, where a bunch of event-triggered callback functions are registered, as shown below. ![img](images/localization2.png) ## Routing The routing module needs to know the routing start point and routing end point, to compute the passage lanes and roads. Usually the routing start point is the autonomous vehicle location. The `RoutingResponse` is computed and published as shown below. ![img](images/routing1.png) ## Planning Apollo 3.5 uses several information sources to plan a safe and collision free trajectory, so the planning module interacts with almost every other module. As Apollo matures and takes on different road conditions and driving use cases, planning has evolved to a more modular, scenario specific and wholistic approach. In this approach, each driving use case is treated as a different driving scenario. This is useful because an issue now reported in a particular scenario can be fixed without affecting the working of other scenarios as opposed to the previous versions, wherein an issue fix affected other driving use cases as they were all treated as a single driving scenario. Initially, the planning module takes the prediction output. Because the prediction output wraps the original perceived obstacle, the planning module subscribes to the traffic light detection output rather than the perception obstacles output. Then, the planning module takes the routing output. Under certain scenarios, the planning module might also trigger a new routing computation by sending a routing request if the current route cannot be faithfully followed. Finally, the planning module needs to know the location (Localization: where I am) as well as the current autonomous vehicle information (Chassis: what is my status). ![img](images/planning1.png) ## Control The Control takes the planned trajectory as input, and generates the control command to pass to CanBus. It has five main data interfaces: OnPad, OnMonitor, OnChassis, OnPlanning and OnLocalization. ![img](images/control1.png) The `OnPad` and `OnMonitor` are routine interactions with the PAD-based human interface and simulations. ## CanBus The CanBus has two data interfaces as shown below. ![img](images/canbus1.png) The first one is the `OnControlCommand` which is an event-based publisher with a callback function, which is triggered when the CanBus module receives control commands and the second one is `OnGuardianCommand`. ## HMI Human Machine Interface or DreamView in Apollo is a web application that: - visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc. - provides human-machine interface for user to view hardware status, turn on/off of modules, and start the autonomous driving car. - provides debugging tools, such as PnC Monitor to efficiently track module issues. 
## Monitor

The surveillance system of all the modules in the vehicle, including hardware. Monitor receives data from the different modules and passes it on to HMI for the driver to view, to ensure that all the modules are working without any issue. In the event of a module or hardware failure, Monitor sends an alert to Guardian (the new Action Center module), which then decides which action needs to be taken to prevent a crash.

## Guardian

This new module is basically an action center that takes a decision based on the data sent by Monitor. There are 2 main functions of Guardian:

- All modules working fine - Guardian allows the flow of control to work normally. Control signals are sent to CANBus as if Guardian were not present.
- Module crash is detected by Monitor - if a failure is detected by Monitor, Guardian will prevent Control signals from reaching CANBus and bring the car to a stop. There are 3 ways in which Guardian decides how to stop the car, and to do so, Guardian turns to the final gatekeeper, the ultrasonic sensors:
  - If the ultrasonic sensor is running fine without detecting an obstacle, Guardian will bring the car to a slow stop.
  - If the sensor is not responding, Guardian applies a hard brake to bring the car to an immediate stop.
  - In a special case, if the HMI informs the driver of an impending crash and the driver does not intervene for 10 seconds, Guardian applies a hard brake to bring the car to an immediate stop.

```
Note:
1. In either case above, Guardian will always stop the car should Monitor detect a failure in any module or hardware.
2. Monitor and Guardian are decoupled to ensure that there is no single point of failure, and also so that, with a modular approach, the action center can be modified to include additional actions without affecting the functioning of the surveillance system, as Monitor also communicates with HMI.
```

## Storytelling

Storytelling is a global and high-level scenario manager that helps coordinate cross-module actions. In order to safely operate the autonomous vehicle on urban roads, complex planning scenarios are needed to ensure safe driving. These complex scenarios may involve different modules to ensure proper maneuvering. To avoid a sequential approach to such scenarios, a new isolated scenario manager, the "Storytelling" module, was created. This module creates stories, which are complex scenarios that trigger multiple modules' actions. Per some predefined rules, this module creates one or multiple stories and publishes them to the `/apollo/storytelling` channel. The main advantage of this module is to fine-tune the driving experience and to isolate complex scenarios, packaging them into stories that can be subscribed to by other modules such as Planning and Control.
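As a rough illustration of how another module could consume these stories, below is a minimal Cyber RT listener sketch. It is not taken from the Apollo codebase: the message type `apollo::storytelling::Stories` and its header path are assumptions and may differ from the actual proto definition.

```cpp
// Minimal sketch (not Apollo source code) of subscribing to the storytelling
// channel with Cyber RT. Message type and header path are assumed for illustration.
#include "cyber/cyber.h"
#include "modules/storytelling/proto/story.pb.h"  // assumed proto location

int main(int argc, char* argv[]) {
  // Initialize the Cyber RT runtime for this process.
  apollo::cyber::Init(argv[0]);

  // Create a node and a reader on the channel published by the Storytelling module.
  auto node = apollo::cyber::CreateNode("storytelling_listener");
  auto reader = node->CreateReader<apollo::storytelling::Stories>(
      "/apollo/storytelling",
      [](const std::shared_ptr<apollo::storytelling::Stories>& stories) {
        // React to the current set of stories, e.g. adjust planning behavior.
        AINFO << "Received storytelling update: " << stories->ShortDebugString();
      });

  // Block until the process is asked to shut down.
  apollo::cyber::WaitForShutdown();
  return 0;
}
```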
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/14_Others/apollo1.5_top_study_notes_cn.md
# Apollo 系统架构代码分析

以 localization 模块为例,其余模块节点类似。

## 代码分析

主要文件:main.cc、apollo_app.cc、apollo_app.h。

### main.cc 文件

文件中只有一行代码:`APOLLO_MAIN(apollo::localization::Localization)`,使用宏 APOLLO_MAIN 开启了 Localization 节点,localization 节点由此开始运行。这里的节点与 ROS 中的 node 概念一致,相当于一个进程。

### APOLLO_MAIN 宏解析

* APOLLO_MAIN 宏定义位于 "modules/common/apollo_app.h" 文件。
* 设置 log 和 SIGINT 信号处理程序,收到信号时关闭本节点。
* 创建模块类对象,设置节点名字,调用基类(ApolloApp)的 Spin() 函数。

### ApolloApp 类

* Spin() 函数是 ApolloApp 类的 public 成员函数,ApolloApp 类是所有模块类的基类。
* Public 成员有 name() 函数,用于获取模块名字;Spin() 函数用于初始化、启动模块节点,并在 ROS 关闭时关闭模块节点;还有一个析构函数。
* Protected 成员都是 virtual 接口,子类都会重写,并在 Spin() 函数中被调用,其实现在各个具体模块内部。Init() 函数完成加载模块的配置文件、创建订阅话题;Start() 函数注册回调函数,回调函数负责节点核心任务,通常由上游话题或者 timer 触发;Stop() 函数结束节点,正常运行时不会执行到;ReportModuleStatus() 返回模块状态;apollo_app_sigint_handler() 是信号处理函数。
* Spin() 函数使用 Init()、Start() 和 Stop() 函数完成模块节点的实现。此函数一般不会被重写,也就是使用 ApolloApp 的实现,大致流程可参考下面的示意代码。
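下面给出一份简化的示意代码(并非 Apollo 源码原文,宏体与函数签名均为示意性假设),用于说明 APOLLO_MAIN 宏与 ApolloApp 基类之间的大致协作关系:

```cpp
// 示意代码:演示 APOLLO_MAIN 宏与 ApolloApp 基类的协作方式(非 Apollo 源码)。
#include <csignal>
#include <cstdlib>
#include <iostream>
#include <string>

// 简化版 ApolloApp:真实实现中 Init()/Start()/Stop() 返回 Status,并与 ROS/Cyber 交互。
class ApolloApp {
 public:
  virtual ~ApolloApp() = default;
  virtual std::string Name() const = 0;

  // Spin():依次调用 Init()、Start(),阻塞等待节点关闭,最后调用 Stop()。
  void Spin() {
    Init();
    Start();
    // 真实实现中此处阻塞在事件循环上,直到节点被关闭。
    Stop();
  }

 protected:
  virtual bool Init() { return true; }   // 加载配置文件、创建订阅话题
  virtual bool Start() { return true; }  // 注册回调,由上游话题或 timer 触发
  virtual void Stop() {}                 // 节点退出时的清理
};

// 信号处理函数:收到 SIGINT 后关闭本节点(示意)。
void ApolloAppSigintHandler(int) { std::exit(0); }

// APOLLO_MAIN 宏的示意:注册信号处理、创建模块对象并调用 Spin()。
#define APOLLO_MAIN(APP)                          \
  int main(int argc, char** argv) {               \
    std::signal(SIGINT, ApolloAppSigintHandler);  \
    APP apollo_app;                               \
    std::cout << apollo_app.Name() << std::endl;  \
    apollo_app.Spin();                            \
    return 0;                                     \
  }

// 各模块只需继承 ApolloApp、重写 protected 接口,再用宏生成 main() 即可。
class Localization : public ApolloApp {
 public:
  std::string Name() const override { return "localization"; }
};

APOLLO_MAIN(Localization)
```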
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/版本介绍/apollo_3.0_technical_tutorial_cn.md
# Apollo 3.0 技术指南 ## 概况 > 了解Apollo3.0基础概念和Apollo3.0快速入门指南 * [Apollo 3.0快速入门指南](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_3_0_quick_start_cn.md) ## 硬件和系统安装 > 了解Apollo3.0硬件和系统安装过程 * [Apollo 3.0硬件和系统安装指南](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_3_0_hardware_system_installation_guide_cn.md) ## 校准 > 了解校准的过程 * [Apollo激光雷达校准指南](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_1_5_lidar_calibration_guide_cn.md) * [Apollo 2.0传感器校准指南](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_2_0_sensor_calibration_guide_cn.md) * [多激光雷达全球导航卫星系统(Multiple-LiDAR GNSS)校准指南](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/multiple_lidar_gnss_calibration_guide_cn.md) * [Apollo坐标系统](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/coordination_cn.md) ## 软件安装 > 了解Apollo3.0的软件安装过程 * [Apollo软件安装指南](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_software_installation_guide_cn.md) * [如何调试Dreamview启动问题](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_debug_dreamview_start_problem_cn.md) * [运行线下演示](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/demo_guide/README_cn.md) ## Apollo系统架构和原理 > 了解核心模块的架构和原理 * [Apollo 3.0 软件架构](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/Apollo_3.0_Software_Architecture_cn.md "Apollo software architecture") * [3D 障碍物感知](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/3d_obstacle_perception_cn.md) * [Apollo 3.0感知](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/perception/README.md) * [二次规划(QP)样条路径优化](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/qp_spline_path_optimizer_cn.md) * [二次规划(QP)样条ST速度优化](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/qp_spline_st_speed_optimizer_cn.md) * [参考线平滑设定](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/reference_line_smoother_cn.md) * [交通信号灯感知](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/traffic_light_cn.md) ## 功能模块和相关扩展知识 > 了解Apollo功能模块和相关扩展知识 * [控制总线模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/canbus/README.md) * [通用模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/common/README.md) * [控制模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/control/README.md) * [数据模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/data/README.md) * [定位模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/localization/README.md) * [感知模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/perception/README.md) * [Planning模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/planning/README.md) * [预测模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/prediction/README.md) * [寻路模块](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/routing/README.md) * [如何添加新的GPS接收器](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_gps_receiver_cn.md) * [如何添加新的CAN卡](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_can_card_cn.md ) * [如何添加新的控制算法](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_control_algorithm_cn.md) * [如何在预测模块中添加新评估器](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_evaluator_in_prediction_module_cn.md) * [如何在预测模块中添加一个预测器](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_predictor_in_prediction_module_cn.md) * 
[如何在Apollo中添加新的车辆](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_vehicle_cn.md) * [如何添加新的外部依赖项](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_an_external_dependency_cn.md) ## 开发者工具 > 了解开发者工具 * [使用VSCode构建、调试Apollo项目](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md "How to build and debug Apollo in VSCode") * [DreamView用法介绍](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/dreamview_usage_table_cn.md)
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/版本介绍/apollo_3.5_technical_tutorial.md
# Apollo 3.5 Technical Tutorial ## Overview > Learn how to setup Apollo 3.5 * [Apollo 3.5 quick start](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/quickstart/apollo_3_5_quick_start.md) ## Hardware system installation > Learn the procedure of Apollo 3.5 hardware system installation * [Apollo 3.5 Hardware System Installation guide](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/quickstart/apollo_3_5_hardware_system_installation_guide.md) ## Software installation > Apollo 3.5 Software and Dreamview installation * [Apollo software installation guide](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/quickstart/apollo_software_installation_guide.md) * [How to Debug a Dreamview Start Problem](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_debug_dreamview_start_problem.md) ## Apollo Cyber > All you need to know about Apollo Cyber RT * [Apollo 3.5 all you need to know about Cyber RT](https://github.com/ApolloAuto/apollo/blob/r3.5.0/cyber/README.md) ## Calibration > Apollo Calibration service * [Calibration guide between LiDAR and INS](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/apollo_lidar_imu_calibration_guide.md) * [Guide for Camera-to-Camera calibration, Camera-to-LiDAR calibration, Radar-to-Camera calibration, IMU-to-Vehicle calibration](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/quickstart/apollo_2_0_sensor_calibration_guide.md) * [Multiple-LiDAR GNSS calibration guide](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/quickstart/multiple_lidar_gnss_calibration_guide.md) * [Apollo Coordinate System](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/coordination.pdf) ## Software architecture and algorithms > Deep dive into Apollo's modules and algorithms * [Apollo 3.5 software architecture](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/Apollo_3.5_Software_Architecture.md "Apollo software architecture") * [3D Obstacle Perception](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/3d_obstacle_perception.md) * [Apollo 3.5 Perception](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/perception/README.md) * [QP-Spline-Path Optimizer](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/qp_spline_path_optimizer.md) * [QP-Spline-ST-Speed Optimizer](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/qp_spline_st_speed_optimizer.md) * [Reference Line Smoother](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/reference_line_smoother.md) * [Traffic Light Perception](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/traffic_light.md) ## Software modules > Learn Apollo software modules and extension * [Canbus module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/canbus/README.md) * [Common module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/common/README.md) * [Control module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/control/README.md) * [Data module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/data/README.md) * [Localization module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/localization/README.md) * [Perception module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/perception/README.md) * [Planning module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/planning/README.md) * [Prediction module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/prediction/README.md) * [Routing module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/modules/routing/README.md) ## 
Version relevant How to guides > Find quick answers to commonly asked How to questions for Apollo 3.5 * [How to Add a New GPS Receiver](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_a_gps_receiver.md) * [How to Add a New CAN Card](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_a_new_can_card.md ) * [How to Add a New Control Algorithm](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_a_new_control_algorithm.md) * [How to Add a New Evaluator in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_a_new_evaluator_in_prediction_module.md) * [How to Add a New Predictor in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_a_new_predictor_in_prediction_module.md) * [How to Add a New Vehicle](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_a_new_vehicle.md) * [How to Add a New External Dependency](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_add_an_external_dependency.md) ## Developer Tools > Learn Apollo developer tools * [How to build and debug Apollo in VSCode](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md "How to build and debug Apollo in VSCode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/r3.5.0/docs/specs/dreamview_usage_table.md)
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/版本介绍/apollo_5.0_technical_tutorial.md
# Apollo 5.0 Technical Tutorial ## Overview > Learn how to setup Apollo 5.0 * [Apollo 5.0 quick start](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/quickstart/apollo_5_0_quick_start.md) ## Hardware system installation > Learn the procedure of Apollo 5.0 hardware system installation * [Apollo 5.0 Hardware System Installation guide](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/quickstart/apollo_3_5_hardware_system_installation_guide.md) ## Software installation > Apollo 5.0 Software and Dreamview installation * [Apollo software installation guide](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/quickstart/apollo_software_installation_guide.md) * [How to Build and Release](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_build_and_release.md) * [How to Debug a Dreamview Start Problem](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_debug_dreamview_start_problem.md) ## Apollo Cyber > All you need to know about Apollo Cyber RT * [Apollo 5.0 all you need to know about Cyber RT](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/cyber/README.md) ## Software architecture and algorithms > Deep dive into Apollo's modules and algorithms * [Apollo 5.0 software architecture](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/Apollo_3.5_Software_Architecture.md "Apollo software architecture") - The core software architecture for Apollo 5.5 remains the same as Apollo 3.5 * [3D Obstacle Perception](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/3d_obstacle_perception.md) * [Apollo 5.0 Perception](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/perception/README.md) * [Open Space Planner](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/Open_Space_Planner.md) * [QP-Spline-Path Optimizer](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/qp_spline_path_optimizer.md) * [QP-Spline-ST-Speed Optimizer](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/qp_spline_st_speed_optimizer.md) * [Reference Line Smoother](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/reference_line_smoother.md) * [Traffic Light Perception](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/traffic_light.md) ## Software modules > Learn Apollo software modules and extension * [Canbus module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/canbus/README.md) * [Common module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/common/README.md) * [Control module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/control/README.md) * [Localization module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/localization/README.md) * [Perception module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/perception/README.md) * [Planning module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/planning/README.md) * [Prediction module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/prediction/README.md) * [Routing module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/modules/routing/README.md) ## Version relevant How to guides * [How to run the new Map Verification tool](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_run_map_verification_tool.md) * [How to Add a New GPS Receiver](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_a_gps_receiver.md) * [How to Add a New CAN Card](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_a_new_can_card.md ) * [How to Add a New Control 
Algorithm](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_a_new_control_algorithm.md) * [How to Add a New Evaluator in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_a_new_evaluator_in_prediction_module.md) * [How to Add a New Predictor in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_a_new_predictor_in_prediction_module.md) * [How to Add a New Vehicle](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_a_new_vehicle.md) * [How to Add a New External Dependency](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_add_an_external_dependency.md) ## Developer Tools > Learn Apollo developer tools * [How to build and debug Apollo in VSCode](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md "How to build and debug Apollo in VSCode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/dreamview_usage_table.md) * [Introduction to Dreamland](https://github.com/ApolloAuto/apollo/blob/r5.0.0/docs/specs/Dreamland_introduction.md)
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/版本介绍/apollo_2.5_technical_tutorial.md
# Apollo 2.5 Technical Tutorial ## Overview > Learn Apollo basic concepts and Apollo 2.5 quick start * [Apollo 2.5 quick start](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/quickstart/apollo_2_5_quick_start.md "Apollo 2.5 quick start") ## Hardware system installation > Learn the procedure of Apollo 2.5 hardware system installation * [Apollo 2.5 hardware system installation guide](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/quickstart/apollo_2_5_hardware_system_installation_guide_v1.md "Apollo 2.5 hardware system installation guide") ## Calibration > Learn the process of the calibration service * [Calibration guide between LiDAR and INS](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/apollo_lidar_imu_calibration_guide.md) * [Guide for Camera-to-Camera calibration, Camera-to-LiDAR calibration, Radar-to-Camera calibration, IMU-to-Vehicle calibration](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/quickstart/apollo_2_0_sensor_calibration_guide.md "Guide for Camera-to-Camera Calibration, Camera-to-LiDAR Calibration, Radar-to-Camera Calibration, IMU-to-Vehicle Calibration") * [Multiple-LiDAR GNSS calibration guide](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/quickstart/multiple_lidar_gnss_calibration_guide.md "Multiple-LiDAR GNSS calibration guide") * [Apollo coordination](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/coordination.pdf "Apollo coordination") ## Software installation > Learn the procedure of Apollo 2.5 software system installation * [Apollo software installation guide](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/quickstart/apollo_software_installation_guide.md "Apollo software installation guide") * [How to Debug a Dreamview Start Problem](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_debug_dreamview_start_problem.md) * [Run offline demo](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/demo_guide/README.md "Run offline demo") ## Software architecture and principles > Learn Apollo software architecture and principles of core modules * [Apollo software architecture](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/Apollo_2.0_Software_Architecture.md "Apollo software architecture") * [3D Obstacle Perception](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/3d_obstacle_perception.md) * [Apollo 2.5 Perception](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/perception_apollo_2.5.md) * [QP-Spline-Path Optimizer](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/qp_spline_path_optimizer.md) * [QP-Spline-ST-Speed Optimizer](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/qp_spline_st_speed_optimizer.md) * [Reference Line Smoother](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/reference_line_smoother.md) * [Traffic Light Perception](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/traffic_light.md) ## Software modules and extension > Learn Apollo software modules and extension * [Canbus module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/canbus/README.md) * [Common module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/common/README.md) * [Control module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/control/README.md) * [Data module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/data/README.md) * [Localization module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/localization/README.md) * [Perception 
module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/perception/README.md) * [Planning module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/planning/README.md) * [Prediction module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/prediction/README.md) * [Routing module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/modules/routing/README.md) * [How to Add a New GPS Receiver](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_a_gps_receiver.md "How to add a new GPS Receiver") * [How to Add a New CAN Card](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_a_new_can_card.md "How to Add a New CAN Card") * [How to Add a New Control Algorithm](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_a_new_control_algorithm.md "How to Add a New Control Algorithm") * [How to Add a New Evaluator in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_a_new_evaluator_in_prediction_module.md) * [How to Add a New Predictor in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_a_new_predictor_in_prediction_module.md) * [How to Add a New Vehicle](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_a_new_vehicle.md) * [How to Add a New External Dependency](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_add_an_external_dependency.md) ## Developer-Friendliness > Learn Apollo developer tools * [Apollo 2.5 map collection guide](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/quickstart/apollo_2_5_map_collection_guide.md "Apollo 2.5 map collection guide") * [How to build and debug Apollo in VSCode](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md "How to build and debug Apollo in VSCode") * [How to use Apollo 2.5 navigation mode](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md "[How to use Apollo 2.5 navigation mode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/r2.5.0/docs/specs/dreamview_usage_table.md "Introduction of Dreamview")
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/版本介绍/apollo_3.0_technical_tutorial.md
# Apollo 3.0 Technical Tutorial ## Overview > Learn Apollo basic concepts and Apollo 3.0 quick start * [Apollo 3.0 quick start](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_3_0_quick_start.md) ## Hardware system installation > Learn the procedure of Apollo 3.0 hardware system installation * [Apollo 3.0 Hardware System Installation guide](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_3_0_hardware_system_installation_guide.md) ## Calibration > Learn the process of the calibration service * [Calibration guide between LiDAR and INS](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/apollo_lidar_imu_calibration_guide.md) * [Guide for Camera-to-Camera calibration, Camera-to-LiDAR calibration, Radar-to-Camera calibration, IMU-to-Vehicle calibration](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_2_0_sensor_calibration_guide.md) * [Multiple-LiDAR GNSS calibration guide](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/multiple_lidar_gnss_calibration_guide.md) * [Apollo Coordinate System](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/coordination.pdf) ## Software installation > Learn the procedure of Apollo 3.0 software system installation * [Apollo software installation guide](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/quickstart/apollo_software_installation_guide.md) * [How to Debug a Dreamview Start Problem](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_debug_dreamview_start_problem.md) * [Run offline demo](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/demo_guide/README.md) ## Software architecture and principles > Learn Apollo software architecture and principles of core modules * [Apollo software architecture](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/Apollo_3.0_Software_Architecture.md "Apollo software architecture") * [3D Obstacle Perception](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/3d_obstacle_perception.md) * [Apollo 3.0 Perception](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/perception/README.md) * [QP-Spline-Path Optimizer](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/qp_spline_path_optimizer.md) * [QP-Spline-ST-Speed Optimizer](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/qp_spline_st_speed_optimizer.md) * [Reference Line Smoother](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/reference_line_smoother.md) * [Traffic Light Perception](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/traffic_light.md) ## Software modules and extension > Learn Apollo software modules and extension * [Canbus module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/canbus/README.md) * [Common module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/common/README.md) * [Control module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/control/README.md) * [Data module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/data/README.md) * [Localization module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/localization/README.md) * [Perception module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/perception/README.md) * [Planning module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/planning/README.md) * [Prediction module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/prediction/README.md) * [Routing module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/modules/routing/README.md) 
* [How to Add a New GPS Receiver](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_gps_receiver.md) * [How to Add a New CAN Card](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_can_card.md ) * [How to Add a New Control Algorithm](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_control_algorithm.md) * [How to Add a New Evaluator in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_evaluator_in_prediction_module.md) * [How to Add a New Predictor in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_predictor_in_prediction_module.md) * [How to Add a New Vehicle](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_a_new_vehicle.md) * [How to Add a New External Dependency](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_add_an_external_dependency.md) ## Developer-Friendliness > Learn Apollo developer tools * [How to build and debug Apollo in VSCode](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md "How to build and debug Apollo in VSCode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/r3.0.0/docs/specs/dreamview_usage_table.md)
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/版本介绍/apollo_5.5_technical_tutorial.md
# Apollo 5.5 Technical Tutorial ## Overview > Learn how to setup Apollo 5.5 * [Apollo 5.5 quick start](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/quickstart/apollo_5_5_quick_start.md) ## Hardware system installation > Learn the procedure of Apollo 5.5 hardware system installation * [Apollo 5.5 Hardware System Installation guide](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/quickstart/apollo_3_5_hardware_system_installation_guide.md) - The Hardware setup for Apollo 5.5 remains the same as Apollo 3.5 ## Software installation > Apollo 5.5 Software and Dreamview installation * [Apollo software installation guide](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/quickstart/apollo_software_installation_guide.md) * [How to Build and Release](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_build_and_release.md) * [How to Debug a Dreamview Start Problem](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_debug_dreamview_start_problem.md) ## Apollo Cyber > All you need to know about Apollo Cyber RT * [Apollo 5.5 all you need to know about Cyber RT](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/cyber/README.md) ## Software architecture and algorithms > Deep dive into Apollo's modules and algorithms * [Apollo 5.5 software architecture](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/Apollo_3.5_Software_Architecture.md "Apollo software architecture") - The core software architecture for Apollo 5.5 remains the same as Apollo 3.5 * [3D Obstacle Perception](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/3d_obstacle_perception.md) * [Apollo 5.5 Perception](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/perception/README.md) * [Open Space Planner](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/Open_Space_Planner.md) * [QP-Spline-Path Optimizer](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/qp_spline_path_optimizer.md) * [QP-Spline-ST-Speed Optimizer](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/qp_spline_st_speed_optimizer.md) * [Reference Line Smoother](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/reference_line_smoother.md) * [Traffic Light Perception](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/traffic_light.md) ## Software modules > Learn Apollo software modules and extension * [Canbus module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/canbus/README.md) * [Common module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/common/README.md) * [Control module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/control/README.md) * [Localization module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/localization/README.md) * [Perception module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/perception/README.md) * [Planning module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/planning/README.md) * [Prediction module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/prediction/README.md) * [Routing module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/modules/routing/README.md) ## Version relevant How to guides * [How to run the new Map Verification tool](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_run_map_verification_tool.md) * [How to Add a New GPS Receiver](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_a_gps_receiver.md) * [How to Add a New CAN 
Card](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_a_new_can_card.md ) * [How to Add a New Control Algorithm](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_a_new_control_algorithm.md) * [How to Add a New Evaluator in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_a_new_evaluator_in_prediction_module.md) * [How to Add a New Predictor in Prediction Module](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_a_new_predictor_in_prediction_module.md) * [How to Add a New Vehicle](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_a_new_vehicle.md) * [How to Add a New External Dependency](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_add_an_external_dependency.md) ## Developer Tools > Learn Apollo developer tools * [How to build and debug Apollo in VSCode](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md "How to build and debug Apollo in VSCode") * [Introduction of Dreamview](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/dreamview_usage_table.md) * [Introduction to Dreamland](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/specs/Dreamland_introduction.md)
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/代码实践/bazel_in_apollo_an_overview.md
# Bazel in Apollo

## Overview

Apollo uses [Bazel](https://bazel.build) as its underlying build system. Bazel is an open-source build and test tool with a human-readable, high-level build language suitable for medium and large projects.

You may notice the `WORKSPACE` file under the root directory, and many `BUILD`, `*.BUILD`, `workspace.bzl` files in sub-directories (e.g. `third_party`). They are all Bazel files.

## Bazel Settings in Apollo

### bazelrc

The content of `.bazelrc` (Apollo's overall Bazel configuration) under the root directory is as follows as of this writing:

```
try-import %workspace%/tools/bazel.rc
try-import %workspace%/.apollo.bazelrc
```

`tools/bazel.rc` is for general settings, while `.apollo.bazelrc` is generated with the `config` sub-command of `apollo.sh`. You can run

```bash
./apollo.sh config --interactive
```

to configure it interactively, or

```bash
./apollo.sh config --noninteractive
```

to configure it non-interactively.

### .bazelignore

Besides `.bazelrc`, the `.bazelignore` file under the Apollo workspace also governs Bazel's behavior. Similar to `.gitignore`, `.bazelignore` is used to specify directories for Bazel to ignore. Currently, the `scripts`, `docker`, and `docs` directories are specified.

### Bazel Distribution Files Directory

Within `.apollo.bazelrc`, two directories are specified for Bazel:

```
startup --output_user_root="/apollo/.cache/bazel"
common --distdir="/apollo/.cache/distdir"
```

The startup option `--output_user_root` is used to specify Bazel output directories (Ref: [Bazel Docs: Output Directory Layout](https://docs.bazel.build/versions/master/output_directories.html#output-directory-layout)). We specify it within the Apollo root directory so that it can be mounted into the Docker container by `docker/scripts/dev_start.sh` together with the Apollo root directory on the host.

According to [Bazel Docs: Distribution Files Directories](https://docs.bazel.build/versions/master/guide.html#distribution-files-directories),

> Using the `--distdir=/path/to/directory` option, you can specify additional
> read-only directories to look for files instead of fetching them. A file is
> taken from such a directory if the file name is equal to the base name of the
> URL and additionally the hash of the file is equal to the one specified in the
> download request.

Since this option is especially useful for users with a not-stable-enough network connection, Apollo enabled a specific environment variable for that: `APOLLO_BAZEL_DIST_DIR`. You can configure it in `cyber/setup.bash`, and then run the following command for it to take effect.

```
source cyber/setup.bash
./apollo.sh config --noninteractive
```

## Bazel Build, Test and Coverage

Please refer to [Apollo Build and Test Explained](../../01_Installation%20Instructions/apollo_build_and_test_explained.md).

### Apollo's Special

### Proto Files

In Apollo 6.0, we recommend that proto files for each module be placed under a separate directory, say, `modules/a/proto`. Then you can run the following command to generate C++ and Python targets for all the proto files under that directory:

```
scripts/proto_build_generator.py modules/a/proto/BUILD
```

### [Breaking Change] Python Files

Starting from Apollo 6.0, Python libraries/binaries are also managed by Bazel. What you did in previous Apollo releases, say,

```
python3 modules/tools/mapshow/mapshow.py
```

should now be done in either of the following approaches:

```
./bazel-bin/modules/tools/mapshow/mapshow   # Approach #1
bazel run modules/tools/mapshow:mapshow     # Approach #2
```

## Further Reading

Please refer to [Bazel Docs](https://docs.bazel.build) for more on Bazel.

Thanks for reading!
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/代码实践/how_to_use_ci_result_cn.md
# 在Apollo中如何使用CI结果 在Apollo项目中,一个PR(pull request)能否被合入取决于是否签署了CLA协议以及CI的结果。 ## CI将会检查哪些内容 Apollo的CI会按照以下步骤运行: 1. 将你的PR签入master的基础代码并进行构建 2. 对你的代码进行风格检查(包括 .cc、.h、.py、BUILD等) 3. 运行所有单元测试 所以,推荐在提交代码前执行以下命令: ``` ./apollo.sh lint ./apollo.sh build ./apollo.sh test ``` 当你的PR出现CI错误时,你可以点击下图中的`Details` ![build_failed](images/build_failed.png) 现在你就进入到了我们的CI系统,进入`Build Log`可以查看更详细的日志。 ![detail_log](images/build_log.png) ## 可能遇到的错误和解决办法 ### Error: "FAIL: //modules/perception/base:blob_cpplint" ![lint](images/lint.png) 这是由于代码风格检查失败。Apollo的代码使用Google代码风格,所以头文件应该按照建议的顺序排列。如果你没有找到建议内容,可以点击展开日志信息。 ### Error: "FAIL: //modules/perception/base:blob_test" ![test_failed](images/unit_test_failed.png) ![test_failed_log](images/unit_failed_log.png) 这是由于单元测试失败。你可以根据日志信息来修正单元测试。特别是发生超时问题时,你可以尝试将BUILD文件中的`size`设置从`small`修改为`medium`或`large`,这可能会有效。 如果遇到了更加复杂的情况,欢迎在你提交的PR下进行留言。
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/代码实践/apollo_best_coding_practice.md
# Apollo Best Coding Practice

1. Always build, test, and lint all.

   ```bash
   ./apollo.sh check
   ```

1. Always write unit tests and put them along with the source files.

   ```text
   foobar.h
   foobar.cc
   foobar_test.cc
   ```

1. A Bazel target should contain at most one header and one source file.

   ```python
   cc_library(
       name = "foobar",
       hdrs = ["foobar.h"],
       srcs = ["foobar.cc"],
       deps = [
           ...
       ],
   )

   cc_test(
       name = "foobar_test",
       srcs = ["foobar_test.cc"],
       deps = [
           ":foobar",
           ...
       ]
   )
   ```

   You can use `./apollo.sh format <path/to/BUILD>` to fix BUILD file style issues.

1. In general, Apollo follows the [Google C++ coding style](https://google.github.io/styleguide/cppguide.html). You should run `scripts/clang_format.sh <path/to/cpp/dirs/or/files>` or `./apollo.sh format -c <path/to/cpp/dirs/or/files>` to fix C++ style issues.

1. Simple and unified function signature.

   ```C++
   // 1. For input objects, a const reference guarantees that it is valid, while
   //    pointers might be NULL or wild. Don't give others the chance to break
   //    you.
   // 2. For input scalars, just pass by value, which gives better locality and
   //    thus performance.
   // 3. For output, it's the caller's responsibility to make sure the pointer
   //    is valid. No need to do a sanity check or mark it as "OutputType* const",
   //    as pointer redirection is never allowed.
   void FooBar(const InputObjectType& input1, const InputScalarType input2, ...,
               OutputType* output1, ...);

   // The RVO mechanism will help you avoid unnecessary object copies.
   // See https://en.wikipedia.org/wiki/Copy_elision#Return_value_optimization
   OutputType FooBar(const InputType& input);
   ```

1. Use const whenever possible.

   ```C++
   // Variables that don't change.
   const size_t current_size = vec.size();
   // Functions that have no side effect.
   const std::string& name() const;
   ```

1. Prefer C++ headers over C headers. We prefer using `#include <ctime>` over `#include <time.h>`, `<cmath>` over `<math.h>`, `<cstdio>` over `<stdio.h>`, `<cstring>` over `<string.h>`, etc.

1. Include necessary headers **only**. No more, no less. Please also pay attention to header order. Again, you can use `apollo.sh format -c` or `scripts/clang_format.sh` to fix header order issues.

1. List only direct dependencies in the `deps` section of a Bazel target. Generally, only the targets to which the included headers belong should be listed as dependencies. For example, suppose `sandwich.h` includes `bread.h` which in turn includes `flour.h`. Since `sandwich.h` doesn't include `flour.h` directly (who wants flour in their sandwich?), the BUILD file would look like this:

   ```python
   cc_library(
       name = "sandwich",
       srcs = ["sandwich.cc"],
       hdrs = ["sandwich.h"],
       deps = [
           ":bread",
           # BAD practice to uncomment the line below
           # ":flour",
       ],
   )

   cc_library(
       name = "bread",
       srcs = ["bread.cc"],
       hdrs = ["bread.h"],
       deps = [":flour"],
   )

   cc_library(
       name = "flour",
       srcs = ["flour.cc"],
       hdrs = ["flour.h"],
   )
   ```

1. Conform to the DRY principle. Don't repeat yourself, in any way. Avoid duplicate classes, functions, const variables, or even a simple piece of code. Some examples:

   - It's fine to refer to a name by its full path once, like `apollo::common::util::Type`, but better to make a short alias if you need to use it twice or more: `using apollo::common::util::Type;`.
   - It's fine to access a sub-field of a proto once in cascade style, like `a_proto.field_1().field_2().field_3()`, but better to save a reference to the common part first if you need to access it twice or more: `const auto& field_2 = a_proto.field_1().field_2();`.
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/代码实践/linters_and_formatters.md
# Linters and Formatters used in Apollo 6.0

```
Programs should be written for people to read, and only incidentally for
machines to execute.
                                                    -- Harold Abelson
```

A great project is made out of consistent code. In an ideal world, you should not be able to tell who wrote a certain line of code for the project. Modern linters and formatters help to close this gap by specifying a simple set of rules to be enforced on all developers working on the project. Such tools also stimulate developers to write better code by pointing out common mistakes and introducing good programming practices. Generally, linters are used for catching errors, whereas formatters are used to fix coding style problems.

In this article, we will briefly describe the various linters and formatters used in Apollo.

## Apollo Coding Style

As you may already know, Apollo adopted the Google coding style for C/C++ and Python programs. You can refer to the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html) and [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html) for the full text of their specifications.

## Linters in Apollo

To enforce that everyone conforms to the Apollo coding style, the following linters are provided for developers to check style issues.

**Note:**

> At the time of this writing, the Apollo CI system enforces style checks on C++
> files only. We hope that linters for other languages will be online in the
> near future.

| Linters    | Source Extensions                     | Usage                     |
| :--------: | :-----------------------------------: | :------------------------ |
| cpplint    | .h/.c/.hpp/.cpp/.hh/.cc/.hxx/.cxx/.cu | bash apollo.sh lint --cpp |
| flake8     | .py                                   | bash apollo.sh lint --py  |
| shellcheck | .sh/.bash/.bashrc                     | bash apollo.sh lint --sh  |

To make sure your code conforms to the Apollo coding style, you can use `./apollo.sh lint` to find any possible style problems and fix them manually.

## Formatters in Apollo

To help ease your life with the Apollo coding style, various formatters are pre-installed into the Apollo Docker image to help you auto-format your code and avoid common mistakes when writing code. The following table lists the formatters currently integrated into Apollo, covering C/C++, Python, Bash, Bazel, Markdown, JSON and YAML files.

| Formatters   | Source Extensions                            | Usage                                            | Formatter Config |
| :----------: | :------------------------------------------: | :----------------------------------------------: | :--------------: |
| clang-format | .h/.c/.hpp/.cpp/.hh/.cc/.hxx/.cxx/.cu/.proto | ./apollo.sh format -c <path/to/src/dir/or/files> | .clang-format    |
| autopep8     | .py                                          | ./apollo.sh format -p <path/to/src/dir/or/files> | tox.ini          |
| buildifier   | .BUILD/.bzl/.bazel/WORKSPACE/BUILD           | ./apollo.sh format -b <path/to/src/dir/or/files> | N/A              |
| shfmt        | .sh/.bash/.bashrc                            | ./apollo.sh format -s <path/to/src/dir/or/files> | .editorconfig    |
| prettier     | .md/.json/.yml                               | ./apollo.sh format -m <path/to/src/dir/or/files> | .prettier.json   |

For easy use, you can format all files with the types listed above with:

```
./apollo.sh format <path/to/src/dir/or/files>
```

For example,

```
./apollo.sh format WORKSPACE third_party/BUILD ./scripts/
```

which will auto-format the Bazel `WORKSPACE` file under `$APOLLO_ROOT_DIR`, the `third_party/BUILD` file, and all the files under the `./scripts` directory.

Note:

> `./apollo.sh format` can also work outside Docker if the relevant tools are
> installed properly.

## Conclusion

To summarize,

- Use `./apollo.sh lint` to check coding style errors.
- Use `./apollo.sh format` to auto-format your code.
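To get a feel for what the C++ formatter does, here is a small, hypothetical before/after illustration; the exact output depends on the `.clang-format` configuration shipped with the repository.

```cpp
// Hypothetical illustration of clang-format's effect; actual line breaking and
// spacing depend on the project's .clang-format settings.

// Before: inconsistent spacing, indentation and brace placement, e.g.
//   #include<vector>
//   int sum(const std::vector<int>&v){int s=0;for(auto x:v){s+=x;}return s;}

// After running ./apollo.sh format -c on the file:
#include <vector>

int sum(const std::vector<int>& v) {
  int s = 0;
  for (auto x : v) {
    s += x;
  }
  return s;
}
```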
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/代码实践/intro_to_apollo_release_build.md
# Introduction to Apollo Release Build ## Background For quite some time, binary distribution was unavailable for CyberRT-based Apollo. Users have to build the whole project themselves inside Apollo Dev container before they can run various modules and tools. This incapability of deployment causes trouble in certain situations. For example, people from SVL Simulator find that the total size of all Docker images, volumes and Bazel caches and build artifacts sums up to 40+ GB, which was rather inconvenient for their use case. ## Principle of Release Build Implementation The root cause of this incapability was Bazel's lack of out-of-box installation support similar to those provided in other build systems (e.g., `make install`). To resolve this issue, we borrowed ideas from the [Drake](https://github.com/RobotLocomotion/drake) project, and implemented the `install` extension in the Starlark language, which can be used to install binaries, shared libraries, resource files (config, data, launch files, dags, etc) and documents for the Apollo project. Installation for standalone binaries was rather straightforward. As you may already know, the core concept of the CyberRT framework was to load each module (e.g, Perception, Prediction, Planning, etc) dynamically as a component in the form of `libX_component.so`. Both the `mainboard` binary program and `libX_component.so` links thousands of other shared libraries themselves. For example, running `ldd` on the Planning module with the following command ```bash ldd bazel-bin/modules/planning/libplanning_component.so ``` will show the following output: ```text linux-vdso.so.1 (0x00007ffc8a77c000) libmodules_Splanning_Slibplanning_Ucomponent_Ulib.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibplanning_Ucomponent_Ulib.so (0x00007fe8a7f9f000) libmodules_Splanning_Slibnavi_Uplanning.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibnavi_Uplanning.so (0x00007fe8a7d81000) libmodules_Splanning_Slibon_Ulane_Uplanning.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibon_Ulane_Uplanning.so (0x00007fe8a7b53000) libmodules_Splanning_Slibplanning_Ubase.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibplanning_Ubase.so (0x00007fe8a7945000) libmodules_Splanning_Scommon_Ssmoothers_Slibsmoother.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Scommon_Ssmoothers_Slibsmoother.so (0x00007fe8a7739000) libmodules_Splanning_Splanner_Slibplanner_Udispatcher.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Splanner_Slibplanner_Udispatcher.so (0x00007fe8a752e000) ... ``` How to _install_ `libplanning_component.so` with all the shared objects (i.e., "\*.so" files) it links becomes the hardest issue to solve to implement the `install` rule. `patchelf` comes to rescue. All the shared objects can be retrieved from `runfiles_data`, while `patchelf --force-rpath --set-rpath` can be used to change RPATH settings. Refer to [tools/install/install.bzl](../../tools/install/install.bzl) for a thorough understanding. ## How to Run Release Build for Apollo To generate binary outputs for Apollo, simply run: ```bash ./apollo.sh release -c ``` where `-c` is an optional argument used for pre-cleaning. The output was located at `/apollo/output` inside Apollo Dev container. 
The command above is roughly equivalent to the following: ```bash bazel run --config=opt --config=gpu //:install \ -- --pre_clean /apollo/output ``` Please type `./apollo.sh release -h` for more usage of `apollo.sh release`. ## How to Run Apollo with Release Build Output In order to run Apollo Runtime Docker with release build output, type the following command from the root of release build output directory outside Apollo Dev Docker: ```bash bash docker/scripts/runtime_start.sh ``` For users in China, use `-g cn` to speed up pulling of Docker images/volumes. ```bash bash docker/scripts/runtime_start.sh -g cn ``` Log into Apollo Runtime Docker by running: ```bash bash docker/scripts/runtime_into.sh ``` Start Dreamview by running: ```bash ./scripts/bootstrap.sh ``` from inside Apollo Runtime container. ## How to Use `install` Rule to Install a Custom Module To _install_ a custom module, you can follow examples for installing other modules from the Apollo repository. Take the Planning module as an example. This is part of the topmost [BUILD](../../BUILD) file: ```python install( name = "install", deps = [ "//cyber:install", # ... "//modules/planning:install", # ... ], ) ``` This is a snippet from [modules/planning/BUILD](../../modules/planning/BUILD): ```python filegroup( name = "planning_conf", srcs = glob([ "conf/**", ]), ) filegroup( name = "runtime_data", srcs = glob([ "dag/*.dag", "launch/*.launch", ]) + [":planning_conf"], ) install( name = "install", data = [ ":runtime_data", ], targets = [ ":libplanning_component.so", ], deps = [ "//cyber:install", ], ) ``` ## Arguments of the `install` Bazel Rule From [tools/install/install.bzl](../../tools/install/install.bzl): ```python install = rule( attrs = { "deps": attr.label_list(providers = [InstallInfo]), "data": attr.label_list(allow_files = True), "data_dest": attr.string(default = "@PACKAGE@"), "data_strip_prefix": attr.string_list(), "targets": attr.label_list(), "library_dest": attr.string(default = "@PACKAGE@"), "library_strip_prefix": attr.string_list(), "mangled_library_dest": attr.string(default = "lib"), "mangled_library_strip_prefix": attr.string_list(), "runtime_dest": attr.string(default = "bin"), "runtime_strip_prefix": attr.string_list(), "rename": attr.string_dict(), "install_script_template": attr.label( allow_files = True, executable = True, cfg = "target", default = Label("//tools/install:install.py.in"), ), }, executable = True, implementation = _install_impl, ) ``` The detailed attributes of the `install` rule was listed below: | Argument | Notes | | -------------------- | ---------------------------------------------------------------------------------------- | | deps | List of other install rules that this rule should include. | | data | List of (platform-independent) resource files to install | | data_dest | Destination for resource files (default = "@PACKAGE@") | | data_strip_prefix | List of prefixes to remove from resource paths | | targets | List of targets to install | | runtime_dest | Destination for executable targets (default = "bin") | | runtime_strip_prefix | List of prefixes to remove from executable paths | | rename | Mapping of install paths to alternate file names, used to rename files upon installation | ## Limitations of Current Release Build Implementation - C++ support only. Installation for Python was not ready. - x86_64 support only. ARM64 support was not ready at the moment.
0
apollo_public_repos/apollo/docs/14_Others
apollo_public_repos/apollo/docs/14_Others/代码实践/bridge_header_protocol.md
# Bridge Header Protocol

This document describes the details of the bridge header protocol.

## Introduction

Bridge is responsible for forwarding proto messages specified in Apollo to an external module in UDP mode, and for receiving proto messages from the external module. Due to the limitation of a UDP packet (the maximum packet size is 65535 bytes), it is possible that a proto message exceeds this limit after serialization, so the serialized packet needs to be fragmented; each slice is called a frame. The receiver receives all the frames and then combines and deserializes them. In order to conveniently distinguish each frame of data and to facilitate the subsequent combination and deserialization, it is necessary to add frame header information at the head of each frame of data, as shown in the following figure:

```
+-----+------------+------------+-------+-----------+
|Flag |Header Size |Header Item |... ...|Header Item|
+-----+------------+------------+-------+-----------+
```

As shown in the figure above, the header consists of three segments: Flag, Size, and Items, where the segments are separated by '\n'.

Flag: the identifier of this protocol, currently the string "ApolloBridgeHeader";

Size: the length of the entire Items section (unit: byte)

Items: the specific content of the header, where each item has three components, each of which is divided by ':'. The specific schematic is shown in the figure below.

```
+----------+----------+-------------+
|Item Type |Item Size | Item Content|
+----------+----------+-------------+
```

The Item Type is defined as follows:

```c++
enum HType {
  Header_Ver,   // header version number, type: uint32_t
  Msg_Name,     // proto name, such as Chassis, type: std::string
  Msg_ID,       // sequence num of the proto, guaranteed unique within the same proto, type: uint32_t
  Msg_Size,     // the size of the serialized proto, type: size_t
  Msg_Frames,   // how many frames the serialized proto is split into, type: uint32_t
  Frame_Size,   // the size of the current frame, type: size_t
  Frame_Pos,    // the offset of the current frame in the serialized buf, type: size_t
  Frame_Index,  // the sequence number of the current frame among all frames, starting from 0, type: uint32_t
  Time_Stamp,   // timestamp, type: double
  Header_Tail,  // indicates the maximum value of the currently supported Type, always at the end of the enum
};
```
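To make the layout above more concrete, here is a rough, self-contained sketch that assembles such a header as text. It is hypothetical: the actual Apollo bridge implementation defines its own item encoding and exact delimiters, so treat this only as an illustration of the Flag / Size / Items structure described above.

```cpp
// Hypothetical sketch only: assembles a frame header following the textual
// layout described above (Flag '\n' Size '\n' Items, items as type:size:content).
// The real Apollo bridge encodes header items in its own format.
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

// Append one "Item Type : Item Size : Item Content" entry to the items buffer.
template <typename T>
void AppendItem(std::ostringstream* items, int item_type, const T& content) {
  std::ostringstream value;
  value << content;
  const std::string content_str = value.str();
  *items << item_type << ':' << content_str.size() << ':' << content_str;
}

std::string BuildFrameHeader(const std::string& msg_name, uint32_t msg_id,
                             size_t msg_size, uint32_t msg_frames,
                             size_t frame_size, size_t frame_pos,
                             uint32_t frame_index, double time_stamp) {
  std::ostringstream items;
  AppendItem(&items, 0 /* Header_Ver */, 1u);
  AppendItem(&items, 1 /* Msg_Name */, msg_name);
  AppendItem(&items, 2 /* Msg_ID */, msg_id);
  AppendItem(&items, 3 /* Msg_Size */, msg_size);
  AppendItem(&items, 4 /* Msg_Frames */, msg_frames);
  AppendItem(&items, 5 /* Frame_Size */, frame_size);
  AppendItem(&items, 6 /* Frame_Pos */, frame_pos);
  AppendItem(&items, 7 /* Frame_Index */, frame_index);
  AppendItem(&items, 8 /* Time_Stamp */, time_stamp);

  std::ostringstream header;
  header << "ApolloBridgeHeader" << '\n'  // Flag
         << items.str().size() << '\n'    // Header Size: length of the Items section
         << items.str();                  // Header Items
  return header.str();
}

int main() {
  // Example: a 120000-byte serialized Chassis proto split into two 60000-byte frames.
  const std::string header =
      BuildFrameHeader("Chassis", 7, 120000, 2, 60000, 0, 0, 1234.5);
  std::cout << header << std::endl;
  return 0;
}
```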
apollo_public_repos/apollo/docs/14_Others/代码实践/intro_to_apollo_release_build_cn.md
# Apollo Release Build 简介 ## 背景 基于 CyberRT 的 Apollo 在相当长的时间内一直没有二进制发布版本。用户需要先在 Apollo 开发容器内自行完成对整个项目的编译构建才能运行 Apollo 中的模块和工具。这 种部署上的不足,在若干情形下对用户相当不便。SVL 模拟器的开发者发现,将 Apollo 中 Docker 镜像、Docker 卷及 Bazel 缓存和构建查出加起来,足有 40 多 GB! ## Release Build 的实现原理 这种部署上的不足的根本原因,在于 Bazel 缺少其他构建系统通常具备的开箱即用的「安 装」支持,如`make install`. 为解决这一问题,我们借鉴了[Drake](https://github.com/RobotLocomotion/drake) 项目 中的「安装」实现,,利用 Starlark 语言,实现了 适用于 Apollo 的 Bazel「安装」扩 展,支持 Apollo 中二进制程序、共享库、资源文件(配置、数据、DAG 文件等)以及文档 的安装。 单独完备的二进制程序的安装是简单的。然而,CyberRT 框架的核心概念即为将每个模块( 如感知、预测、规划)作为组件,以共享库的形式(`libX_component.so`)动态加载。在 目前的 Bazel 构建下,`mainboard`二进制程序和`libX_component.so`链接了成千上百各 其他共享库对象。如,对规划模块运行如下`ldd`命令: ```bash ldd bazel-bin/modules/planning/libplanning_component.so ``` 会输出如下消息: ```text linux-vdso.so.1 (0x00007ffc8a77c000) libmodules_Splanning_Slibplanning_Ucomponent_Ulib.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibplanning_Ucomponent_Ulib.so (0x00007fe8a7f9f000) libmodules_Splanning_Slibnavi_Uplanning.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibnavi_Uplanning.so (0x00007fe8a7d81000) libmodules_Splanning_Slibon_Ulane_Uplanning.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibon_Ulane_Uplanning.so (0x00007fe8a7b53000) libmodules_Splanning_Slibplanning_Ubase.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Slibplanning_Ubase.so (0x00007fe8a7945000) libmodules_Splanning_Scommon_Ssmoothers_Slibsmoother.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Scommon_Ssmoothers_Slibsmoother.so (0x00007fe8a7739000) libmodules_Splanning_Splanner_Slibplanner_Udispatcher.so => /apollo/bazel-bin/modules/planning/../../_solib_local/libmodules_Splanning_Splanner_Slibplanner_Udispatcher.so (0x00007fe8a752e000) ... ``` 如何实现对`libplanning_component.so` 及其链接的所有共享库(后缀为".so")文件的「 安装」,成为实现`install`规则中最难的部分。 幸好有`patchelf`。利用 Bazel 中`runfiles_data`的概念来确定出链接的所有共享库文件 ,再利用`patchelf --force-rpath --set-rpath` 来修改其 RPATH 设置。 欲要更深入了解,请参考: [tools/install/install.bzl](../../tools/install/install.bzl)。 ## 如何执行 Release Build 构建 可运行如下命令以生成二进制发布构建产物: ```bash ./apollo.sh release -c ``` 其中,`-c`为可选参数,用于清理先前构建的残留。产物位于`/apollo/output`目录。 上述命令略等价于如下 Bazel 命令: ```bash bazel run --config=opt --config=gpu //:install \ -- --pre_clean /apollo/output ``` 可输入`./apollo.sh release -h` 查看`apollo.sh release`子命令的更多用法。 ## 通过二进制发布构建产物运行 Apollo 在二进制发布产物根目录下,运行如下命令以启动 Apollo Runtime Docker 镜像: ```bash bash docker/scripts/runtime_start.sh ``` 国内用户可使用`-g cn`选项来加速 Docker 镜像的拉取。 ```bash bash docker/scripts/runtime_start.sh -g cn ``` 运行如下命令以进入 Apollo Runtime Docker 环境: ```bash bash docker/scripts/runtime_into.sh ``` 启动 Dreaview: ```bash ./scripts/bootstrap.sh ``` ## 如何将`install`规则应用到任一自定义模块 欲实现自定义模块的*安装*,可参考 Apollo 代码中其他模块的示例,还是以规划模块为例 : 这是最上层的[BUILD](../../BUILD) 文件的一部分: ```python install( name = "install", deps = [ "//cyber:install", # ... "//modules/planning:install", # ... 
], ) ``` 这是规划模块自身的 BUILD 文件 [modules/planning/BUILD](../../modules/planning/BUILD): ```python filegroup( name = "planning_conf", srcs = glob([ "conf/**", ]), ) filegroup( name = "runtime_data", srcs = glob([ "dag/*.dag", "launch/*.launch", ]) + [":planning_conf"], ) install( name = "install", data = [ ":runtime_data", ], targets = [ ":libplanning_component.so", ], deps = [ "//cyber:install", ], ) ``` ## `install`规则的参数列表 `install`规则定义在 [tools/install/install.bzl](../../tools/install/install.bzl): ```python install = rule( attrs = { "deps": attr.label_list(providers = [InstallInfo]), "data": attr.label_list(allow_files = True), "data_dest": attr.string(default = "@PACKAGE@"), "data_strip_prefix": attr.string_list(), "targets": attr.label_list(), "library_dest": attr.string(default = "@PACKAGE@"), "library_strip_prefix": attr.string_list(), "mangled_library_dest": attr.string(default = "lib"), "mangled_library_strip_prefix": attr.string_list(), "runtime_dest": attr.string(default = "bin"), "runtime_strip_prefix": attr.string_list(), "rename": attr.string_dict(), "install_script_template": attr.label( allow_files = True, executable = True, cfg = "target", default = Label("//tools/install:install.py.in"), ), }, executable = True, implementation = _install_impl, ) ``` 其具体参数列举如下 | 参数 | 含义 | | -------------------- | ----------------------------------------- | | deps | 本规则依赖的其它安装规则 | | data | 待安装的资源文件(平台无关)列表 | | data_dest | 资源文件目标安装地址 | | data_strip_prefix | 需去掉的资源文件路径前缀列表 | | targets | 待安装目标 | | runtime_dest | 可执行目标的目标安装地址,默认为 bin 目录 | | runtime_strip_prefix | 需去掉可执行目标路径的前缀 | | rename | 安装时的文件重命名 | ## 局限性 当前的 Release Build 实现 - 只支持 C++,不支持 Python。 - 只支持 x86_64 架构,Aarch64 支持尚待完善。
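上文「Release Build 的实现原理」一节提到,install 扩展借助 `patchelf --force-rpath --set-rpath` 改写共享库的 RPATH。下面是一个极简的 Python 草图,仅用于说明这一思路;目标目录、RPATH 取值均为假设,并非 tools/install/install.bzl 的真实实现:

```python
# Illustrative sketch only: shows the idea of rewriting RPATH with patchelf,
# not the actual implementation in tools/install/install.bzl.
import shutil
import subprocess
from pathlib import Path

def collect_local_solibs(library):
    """Parse `ldd` output and keep only the workspace-local shared objects."""
    out = subprocess.run(["ldd", library], capture_output=True,
                         text=True, check=True).stdout
    libs = []
    for line in out.splitlines():
        parts = line.split("=>")
        if len(parts) == 2 and "_solib_local" in parts[1]:
            libs.append(parts[1].strip().split()[0])
    return libs

def install_with_rpath(library, dest_dir, rpath="$ORIGIN/../lib"):
    """Copy a component .so plus its local deps, then rewrite their RPATH."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for so in [library] + collect_local_solibs(library):
        target = dest / Path(so).name
        shutil.copy2(so, target)
        subprocess.run(
            ["patchelf", "--force-rpath", "--set-rpath", rpath, str(target)],
            check=True)

# 用法示例(路径为假设):
# install_with_rpath("bazel-bin/modules/planning/libplanning_component.so",
#                    "/apollo/output/lib")
```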
apollo_public_repos/apollo/docs/14_Others/代码实践/how_to_use_ci_result.md
# How to Use CI Result in Apollo

In Apollo, whether a PR can be merged depends on the CI result and the CLA.

## What will CI check?

Apollo CI will run the following steps:

1. Check out your PR into the Apollo codebase and build
1. Lint your code, including .cc, .h, .py, BUILD, etc.
1. Run all unit tests

So it's recommended to run the following commands before committing your code.

```
./apollo.sh lint
./apollo.sh build
./apollo.sh test
```

When your PR is blocked by CI, you can click `Details`

![build_failed](images/build_failed.png)

Now you are in our CI system; enter `Build Log` to see the detailed failure log.

![detail_log](images/build_log.png)

## Possible Errors and Solutions

### Error: "FAIL: //modules/perception/base:blob_cpplint"

![lint](images/lint.png)

This is due to a lint error. Apollo adopted the Google coding style, so the header files should be in the suggested order. If you can't find the suggestion, please scroll up through the log and look carefully.

### Error: "FAIL: //modules/perception/base:blob_test"

![test_failed](images/unit_test_failed.png)

![test_failed_log](images/unit_failed_log.png)

This is due to a unit test failure. You can correct the unit test according to the log. In particular, when a timeout happens, you can try changing the `size` field in the BUILD file from `small` to `medium` or `large`.

If a more complicated situation happens, please feel free to comment in your PR.
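For example, a hypothetical BUILD snippet with the `size` attribute bumped to avoid a timeout might look like this (the dependency labels are illustrative and may differ in the actual BUILD file):

```python
# BUILD snippet: "size" controls the test timeout/resource class in Bazel.
cc_test(
    name = "blob_test",
    size = "medium",  # was "small"; try "medium" or "large" if the test times out
    srcs = ["blob_test.cc"],
    deps = [
        ":blob",
        "@com_google_googletest//:gtest_main",
    ],
)
```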
apollo_public_repos/apollo/docs/14_Others/代码实践/how_to_create_pull_request.md
# How to Create a Pull Request This document is a brief step-by-step guide on creating pull requests for Apollo. Your can also refer to [GitHub: Using Pull Requests](https://help.github.com/articles/using-pull-requests/) for a thorough understanding. ## Step 1: Fork your own copy of ApolloAuto/apollo to your GitHub account This is done by clicking the "Fork" button on the top-right of [Apollo's Github Page](https://github.com/ApolloAuto/apollo) and following the guide there. ## Step 2: Clone your fork of the repo Note: > Please replace "YOUR_USERNAME" with your GitHub account in the descriptions > below. Open a terminal, type either of the following commands: ``` # Using SSH git clone git@github.com:YOUR_USERNAME/apollo.git # Using HTTPS git clone https://github.com/YOUR_USERNAME/apollo.git ``` ## Step 3: Set up your username and email for this repo ``` git config user.name "My Name" git config user.email "myname@example.com" ``` ## Step 4: Set official Apollo repo as upstream Configuring an upstream remote allows you to sync changes made in the upstream with your own fork. This is done with the following command: ``` # Using SSH git remote add upstream git@github.com:ApolloAuto/apollo.git # Using HTTPS git remote add upstream https://github.com/ApolloAuto/apollo.git ``` You can confirm that the upstream repo has been added by running: ``` git remote -v ``` If successful, it will show the list of remotes similar to the following: ``` origin git@github.com:YOUR_USERNAME/apollo.git (fetch) origin git@github.com:YOUR_USERNAME/apollo.git (push) upstream git@github.com:ApolloAuto/apollo.git (fetch) upstream git@github.com:ApolloAuto/apollo.git (push) ``` ## Step 5: Create a new branch; Make and commit changes ``` git checkout -b my_dev origin/master # Make your own changes on branch "my_dev" # blah blah ... # Commit to your own branch with commit msg: git commit -m "[module] brief description of the changes" ``` ## Step 6: Sync up with upstream ApolloAuto/apollo ``` git pull --rebase upstream master ``` ## Step 7: Push your local changes to your fork. ``` git push -f -u origin my_dev ``` ## Step 8: Generate a pull request Create a new pull request between "Apolloauto/apollo:master" and "YOUR_USERNAME/apollo:my_dev" by clicking the "Pull Request" button on [your forked Apollo repo page](https://github.com/YOUR_USERNAME/apollo) on GitHub. You can then follow the steps described by [GitHub: Creating a Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) on what to do next. Note: > Please don't forget to **add the description of your PR**. It can help > reviewers better understand the changes you have made and the intention for > those changes. Collaborators from our team will be glad to review and merge your commit! (This may take some time, please be patient.) ## Step 9: Done! Thanks a lot for your PR!
apollo_public_repos/apollo/docs/14_Others/代码实践/how_to_debug.md
# How to Debug Apollo ## Debugging Apollo The Apollo project runs in Docker and cannot be used directly on the host machine. It must be created in Docker with GDBServer. Debug the service process, and then use GDB to connect to the debug service process in Docker on the host machine. The specific operation methods are as follows: ### Prerequisites The main prerequisites contain collecting debugging information and installing the GDBServer if it is not already present in Docker #### Collecting debugging information When compiling Apollo projects, you will need to use debugging information options **build_dbg**. Optimization options such as **build_opt** or **build_opt_gpu** cannot be used. #### Install GDBServer inside Docker After entering Docker, you can use the following command to view if the GDBServer is present: ```bash gdbserver --version ``` If the prompt is similar to the following information: ```bash GNU gdbserver (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git Copyright (C) 2018 Free Software Foundation, Inc. gdbserver is free software, covered by the GNU General Public License. This gdbserver was configured as "x86_64-linux-gnu" ``` It means that GDBServer has been installed inside Docker. You should be able to view the prompt below. But if the GDBServer is not present and if you are prompted with the following information: ```bash bash: gdbserver: command not found ``` Then you would need to install the GDBServer using ```bash sudo apt-get -y update sudo apt-get install gdbserver ``` #### Start the Dreamview daemon Go to Docker and start Dreamview. The command is as follows: ```bash cd ${APOLLO_ROOT_DIR} # If Docker is not started, start it first, otherwise ignore this step bash docker/scripts/dev_start.sh # Enter Docker bash docker/scripts/dev_into.sh # Start Dreamview background service bash scripts/bootstrap.sh ``` #### Start the module that needs to be debugged Start the module to be debugged, either by using the command line or by using the Dreamview interface. The following is an example of debugging the **Planning** module from the Dreamview interface. - Open URL: <http://localhost:8888/> in Chrome - On Dreamview, click on the **SimControl** slider, as shown below: ![enable simcontrol](images/build_debug/enable_simcontrol.png) - Click on the `Module Controler` tab on the left toolbar and select the `Routing` and `Planning` options as shown below: ![start routing and planning](images/build_debug/start_routing_and_planning.png) - Click the `Default Routing` tab on the left toolbar, select `Route: Reverse Early Change Lane` or any of these options, send a `Routing Request` request, and generate a global navigation path, as shown below: ![check route reverse early change lane](images/build_debug/check_route_reverse_early_change_lane.png) #### Viewing the "Planning" Process ID Use the following command to view the "Planning" process ID: ```bash ps aux | grep mainboard | grep planning ``` The result in the following figure is similar to the previous figure, you can see that the `Planning` process ID is 4147. ![plannning id ps](images/build_debug/planning_id_ps.png) #### Debugging Planning module using GDBServer Next we need to carry out our key operations, using GDBServer to additionally debug the `Planning` process, the command is as follows: ```bash sudo gdbserver :1111 --attach 4147 ``` In the command above, ":1111" indicates that the debugging service process with the port "1111" is enabled, and "4147" indicates the "Planning" process ID. 
If the result is as shown below, the operation is successful. ![gdbserver attach debug](images/build_debug/gdbserver_attach_debug.png) After restarting a terminal and entering Docker, use the following command to see if the "gdbserver" process is running properly: ```bash ps aux | grep gdbserver ``` ![view gdbserver process](images/build_debug/view_gdbserver_process.png) #### Starting GDBServer with a Script File `docker/scripts/dev_start_gdb_server.sh` can start GDBServer directly on the host (outside Docker). Assuming that while debugging the planning module, the port number is 1111, the usage of `docker/scripts/dev_start_gdb_server.sh` is: ```bash # Start gdbserver directly on the host machine (outside Docker) bash docker/scripts/dev_start_gdb_server.sh planning 1111 ``` ### Possible Errors and their Solutions During the debugging process, you may encounter the following problems: #### the network connection is not smooth, can not be debugged #### Solution The solution is to ensure the network is smooth, and disable the agent tool ### Remote debugging During the R&D process, we also need to debug the Apollo project remotely on the industrial computer inside the vehicle, that is, connect the in-vehicle industrial computer with the SSH service on the debugging computer, start the relevant process in the industrial computer, and then perform remote debugging on the debugging computer. The following is an example of debugging the planning module: #### View the IP address of the industrial computer in the car On the industrial computer in the car, check the IP of the machine by the following command: ```bash ifconfig ``` #### Open Dreamview in the browser of the debugging computer and start the module to be debugged Assuming that the IP address of the industrial computer LAN is: `192.168.3.137`, open URL: <http://192.168.3.137:8888/> on your machine and start the module (`Planning`) to debug as shown in [Start the module that needs debugging](#Start-the-module-that-needs-to-be-debugged) section. ![remote show dreamview](images/build_debug/remote_show_dreamview.png) #### Use the SSH Command to Remotely Log In to the Industrial PC and Start the Gdbserver Service of the Industrial PC Assume that the user name of the industrial computer in the car is `xxxxx`, and the IP address of the LAN is `192.168.3.137`. Use the following command to remotely log in to the industrial computer: ```bash ssh xxxxx@192.168.3.137 ``` After successfully entering the IPC, assume that the Planning module needs to be debugged, and the port number is 1111, use the following command to start the gdbserver service of the in-vehicle IPC: ```bash # Switch to the Apollo project root directory on the industrial computer cd ~/code/apollo # Start the gdbserver service outside of Docker bash docker/scripts/dev_start_gdb_server.sh planning 1111 ``` As shown in the figure below, if you see a prompt similar to Listening on port 1111, the gdbserver service starts successfully. ![remote start gdbserver](images/build_debug/remote_start_gdbserver.png)
apollo_public_repos/apollo/docs/14_Others/代码实践/apollo_best_coding_practice_cn.md
# Apollo 编码最佳实践 1. 提交 PR 前记得先在本地通过编译、单元测试和代码检查。 ```bash ./apollo.sh check ``` 1. 请写单元测试,并随源文件一起提交。 ```text foobar.h foobar.cc foobar_test.cc ``` 1. 一个 Bazel 目标(Target)最多包含一个头文件和一个(`.cc`)源文件。 ```python cc_library( name = "foobar", hdrs = ["foobar.h"], srcs = ["foobar.cc"], deps = [ ... ], ) cc_test( name = "foobar_test", srcs = ["foobar_test.cc"], deps = [ ":foobar", ... ] ) ``` 可运行 `./apollo.sh format <path/to/BUILD>` 来修复 BUILD 文件的格式问题。 1. 总体上,Apollo 遵循 [Google C++风格指南](https://google.github.io/styleguide/cppguide.html). 通过运行`scripts/clang_format.sh <path/to/cpp/dirs/or/files>` 或 `./apollo.sh format -c <path/to/cpp/dirs/or/files>` 命令可修复 C++代码风格问题。 1. 确保简单且一致的函数签名。注释中请不要出现中文。 ```C++ // 1. For input objects, const reference guarantes that it is valid, while // pointers might be NULL or wild. Don't give others the chance to break // you. // 2. For input scalars, just pass by value, which gives better locality and // thus performance. // 3. For output, it's the caller's responsibility to make sure the pointer // is valid. No need to do sanity check or mark it as "OutputType* const", // as pointer redirection is never allowed. void FooBar(const InputObjectType& input1, const InputScalaType input2, ..., OutputType* output1, ...); // RVO machanism will help you avoid unnecessary object copy. // See https://en.wikipedia.org/wiki/Copy_elision#Return_value_optimization OutputType FooBar(const InputType& input); ``` 1. 尽可能使用`const` 修饰变量,函数。 ```C++ // Variables that don't change. const size_t current_size = vec.size(); // Functions that have no side effect. const std::string& name() const; ``` 1. 尽可能使用 C++对应头文件而非 C 语言的头文件。 如,鼓励使用 `#include <ctime>`, `#include <cmath>`, `#include <cstdio>`, `#include <cstring>` 的写法。请尽量杜绝使用`#include <time.h>`, `#include <math.h>`, `#include <stdio.h>`, `#include <string.h>` 的写法。 1. 只包含必需的头文件。不多,也不少。 另外,请注意头文件包含顺序。可运行 `apollo.sh format -c` 或 `scripts/clang_format.sh` 来修复头文件顺序问题。 1. 在 Bazel 目标的`deps`部分,只列出该目标的直接依赖。一般来说,只需要列举出该目 标包含的头文件所在的 Bazel 目标作为依赖项即可。 举例,假设`sandwich.h`包含`bread.h`,而`bread.h`又包含`flour.h`。由 于`sandwich.h`并不直接包含`flour.h` (毕竟,谁会想在三明治中加面粉呢?) ,BUILD 文件应写作: ```python cc_library( name = "sandwich", srcs = ["sandwich.cc"], hdrs = ["sandwich.h"], deps = [ ":bread", # BAD practice to uncomment the line below # ":flour", ], ) cc_library( name = "bread", srcs = ["bread.cc"], hdrs = ["bread.h"], deps = [":flour"], ) cc_library( name = "flour", srcs = ["flour.cc"], hdrs = ["flour.h"], ) ``` 1. 遵循 DRY(不要重复)的原则。 避免重复的类,函数,常量定义,尽量避免重复的代码块。举例: - 用完整路径引用某个名字,如 `apollo::common::util::Type`,一次是 OK 的。但如 果要使用两次或者更多次,建议设置一个短别名: `using apollo::common::util::Type;`. - 用级联的方式访问 Protobuf 中的子字段是 OK 的,如, `a_proto.field_1().field_2().field_3()`, 但如果要访问多次,最好将共同前缀部 分保存为引用: `const auto& field_2 = a_proto.field_1().field_2();`.
apollo_public_repos/apollo/docs/14_Others/代码实践/how_to_create_pull_request_cn.md
# 如何创建合入请求(Pull Request) 本文档以 Apollo 项目为例,引导新手逐步熟悉创建代码合入请求(Pull Request)的步骤 。另,可参考[GitHub 页](https://help.github.com/articles/using-pull-requests/) 了解更多。 ## 第一步:创建您个人的 Apollo 代码分支 这可以通过点击[Apollo 的 GitHub 页](https://github.com/ApolloAuto/apollo) 右上角 的【Fork】按钮并按照其指引操作来完成。 ## 第二步:克隆您的 Apollo 分支仓库 注: > 请用您的 GitHub 用户名替换以下描述中的"YOUR_USERNAME"。 打开终端,输入以下任一命令: ``` # 使用 SSH 方式 git clone git@github.com:YOUR_USERNAME/apollo.git # 使用 HTTPS 方式 git clone https://github.com/YOUR_USERNAME/apollo.git ``` ## 第三步:设置您的用户名和电子邮件地址 ``` git config user.name "My Name" git config user.email "myname@example.com" ``` ## 第四步: 将 Apollo 官方仓库设为 upstream 上游分支 设置上游分支以便后续同步远端 upstream 分支的代码变更,这可以通过如下命令完成: ``` # 使用SSH git remote add upstream git@github.com:ApolloAuto/apollo.git # 使用 HTTPS git remote add upstream https://github.com/ApolloAuto/apollo.git ``` 通过如下命令查看 upstream 是否设置成功: ``` git remote -v ``` 如成功,则会显示如下的 remotes 列表: ``` origin git@github.com:YOUR_USERNAME/apollo.git (fetch) origin git@github.com:YOUR_USERNAME/apollo.git (push) upstream git@github.com:ApolloAuto/apollo.git (fetch) upstream git@github.com:ApolloAuto/apollo.git (push) ``` ## 第五步:创建分支,做出修改,提交变更 ``` git checkout -b my_dev origin/master # 在您的my_dev分支修复问题,添加新功能,等等 # ... # 将代码变动提交到您的本地分支,注意提交消息格式 git commit -m "[module] brief description of the changes" ``` ## 第六步:同步上游仓库变更 ``` git pull --rebase upstream master ``` ## 第七步:将您的本地修改推送到您个人的 Apollo 分支仓库 ``` git push -f -u origin my_dev ``` ## 第八步,生成代码合并请求 通过点击您的 Apollo 克隆 GitHub 页(通常 为https://github.com/YOUR_USERNAME/apollo) 上的"Pull Request" 按钮新建从 "YOUR_USERNAME/apollo:my_dev" 到 "Apolloauto/apollo:master" 的代码合并请求。 可参 考[GitHub 页:创建合并请求](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) 的描述来完成代码合并请求。 **注**: > 请不要忘了添加您的 PR 的描述。PR 描述可以帮助代码评审人员更好地理解您的代码变 > 更的意图。 作为开源项目,我们作为 Apollo 团队成员,很乐意评审并合入您提交的代码。但限于精力 ,可能您的 PR 从提交到合入需要花一些时间,请给我们一点耐心。 ## 最后:大功告成 感谢您的 PR!
apollo_public_repos/apollo/docs/13_Apollo Tool/how_to_leverage_scenario_editor.md
# How to Leverage Scenario Editor ## Introduction Simulation plays a central role in Apollo’s internal development cycle. Dreamland empowers developers and start-ups to run millions of miles of simulation daily, which dramatically accelerates the development cycle. So far Apollo simulation allowed external users to access over 200 sample scenarios which includes a diverse range of LogSim scenarios based on real world driving data and WorldSim scenarios that have been manually created by our simulation team. To learn more about Dreamland, refer to [our Dreamland Introduction Guide](../13_Apollo%20Tool/%E4%BA%91%E5%B9%B3%E5%8F%B0Apollo%20Studio/Dreamland_introduction.md) Several developers wrote in requesting that our Dreamland platform should support Scenario Creation and Editing which the Apollo team now proudly presents in Apollo 5.0! ## Setting up Scenario Editor 1. Login to your Dreamland account. For additional details on How to create an account, please refer to [our Dreamland Introduction Guide](../13_Apollo%20Tool/%E4%BA%91%E5%B9%B3%E5%8F%B0Apollo%20Studio/Dreamland_introduction.md) 2. Once inside the platform, the Scenario Editor can be accessed under `Scenario Management` or using the [following link](https://azure.apollo.auto/scenario-management/scenario-editor) ![](images/se_location1.png) 3. Once inside, you will have to complete the form on the screen as seen in the image below. As this app is in Beta testing, it is not open to all our developers. ![](../specs/images/form.png) 4. You should receive the following activation confirmation via email within 3 business days: ![](../specs/images/email.png) ## Using Scenario Editor Congratulations! You are now ready to use our scenario editor. 1. The first step is to select a map. Currently, we offer 2 maps - Sunnyvale and San Mateo ![](images/se_map.png) 2. Once a map has been selected, you will have access to the editor pane on the right along with other tools as seen below: ![](images/se_tools.png) You can navigate through the map using your arrow keys. Alternatively, you can right-click the mouse and drag it to move the map. If you are using a trackpad, you will have to double click and then drag with two fingers. Let's understand each tool along with its purpose ### General Action Tools The 4 General action tools can be found on the bottom right corner of the map. 1. **Zoom tool**: while you can use your trackpad to zoom in and out of the map, there exists the Zoom tool to help you zoom in and out of the map in case you do not have a trackpad ready. ![](images/se_zoom1.png) 2. **Re-center tool**: this tool allows you to locate your ego-car on the map even if you have moved away ![](images/se_center1.png) 3. **Ruler tool**: this tool allows you to measure the distance between two points. This tool is extremely useful when calculating the distance between the ego-car and obstacles or traffic lights on the map. ![](images/se_ruler.png) ![](images/se_distance.png) 4. **Add Route tool**: this tool can be used both for the ego-car as well as the obstacles you set in its path. For the ego-car you can only set its destination, but for obstacles, you can set multiple points that define their driving behavior. 
![](images/se_addroute.png) ### Configuration Tools There are 4 types of configurations that you will need to set up in order to create a scenario, three of which are listed on the left-side of the map - General, Ego-car and Participants (Obstacles) and the last one is Traffic Light ![](images/se_config.png) #### General Configuration This configuration tool is selected by default upon the selection of a map. The form on your right requests general scenario information like the scenario name, duration, road structure, ego-car behavior along with which metrics you would like to track. Please note some fields are required while some are based on your discretion. Once you have set and confirmed all the parameters, please proceed to the `Ego Car` configuration tool. You can learn more about each parameter by hovering over the `?` sign next to each parameter. #### Ego Car Configuration This configuration tool allows you to set your Ego car on the map and configure its parameters. As soon as you select the tool icon, you can then hover over the map and place the car at your desired location. You will notice that your mouse pointer will turn into a cross until you place the ego-car on your map. Once placed, a form should appear on the right-hand side of the map, which allows you to configure the Ego car to set its speed, acceleration, along with your desired destination. ![](images/se_ego.png) The Ego car's heading can also be set by dragging the arrow linked to the ego car ![](images/heading.png) ``` Note: You can set the ego car’s end point by clicking on the “Add Route Point” icon in the lower right corner of the map. Described in the General Action tools section. ``` Once you have placed the Ego car's end point on the map, The end point coordinates will then appear on the right-hand attribute's window. You can drag the end point flag to change the ego car’s end point location. The “End point” coordinates will be automatically updated accordingly. ![](images/endpoints1.png) Finally, you can always come back and edit the existing attributes of the ego car by clicking on the ego car on the map. This will open its attributes tab in the right-hand attributes window. #### Participants' Configuration If you select `Participant` from the configuration menu, you can place your participant in your scenario by clicking on a desired location on the map. You will notice that your mouse pointer will turn into a cross until you place the new participant on your map. Once you place it, a form will appear on the right-hand attributes window as it did with `Ego Car`. Before you edit the fields on the form, you can change the position of the participant by clicking and dragging it. You can also modify its heading by clicking on the arrow head. Once you have finalized the heading and position of your participant, you can start working on specific details mentioned in the form - type, length, speed and motion type. ![](images/obstacle.png) In the Basic Information section, you will notice an auto-generated ID along with a description textbox. You could give your participant a suitable ID as well as a description about its expected behavior. You could also specify what is your participant's type, which will be set to `Car` by default. Upon selecting a different type, the participant on your screen will change accordingly. You will also need to determine its initial speed and other attributes including width, length and height. There are predetermined values for each vehicle type, which can be changed. 
In the Initial State section, you will need to set the speed of the participant which can be either set in `m/s` or `km/hr`. The coordinates and heading of the participant are preset and can be changed by directly editing the participant's position on the map. In Runtime Configuration, you can set whether the participant is mobile or static. Should you select static, you have finished setting up your participant and are ready to save. If you select mobile instead, you would need to set its `Trigger Type`. Once you have completed your mobile participant setup, click on the `add route point` button to set the participant's trajectory points as seen in the image below. ![](images/se_addroute.png) You can set a single destination, or add several points in between. You will also be able to add speed and change the speed of your participant on the form from one point to the next. Also, you can edit the location of the point on the screen by clicking on and dragging it to its desired locaiton. Finally, if you have added several trajectory points and do not know how to go back to your participant, you can use the `Re-center tool` (which is similar to the General Action re-center tool), but this re-center tool only works for your participants. ![](images/center2.png) Your final participant screen should appear as follows: ![](images/final_obs.png) #### Traffic Light Configuration This configuration tool will allow you to edit the traffic lights that are a part of your scenario. To activate this tool, look for a traffic light on the map and click on it which opens a configuration form on the right-hand side of your window. You will notice 2 constant attricutes ID and its coordinates. However, you will need to select a trigger type for the traffic light: - **Distance** - the traffic light will be triggered by the distance between the ego car and the light - **Time** - the light will be triggered by the scenario run time ![](images/traffic_light.png) You will also be required to set the `Initial State` of the traffic light. And once your trigger type is set, you will also be required to complete the `States` section, in terms of color and light durations for each state. If the traffic lights have reached the end of their configured states before the end of the scenario, the last state will remain until the end. ## Saving a Scenario You can save your scenario by clicking on `Save` in the file menu. ``` Note: The minimum requirements of saving a scenario are to configure all required attributes in the “General” and “Ego Car” configurations. If not, a pop-up window with a failure message will highlight what you are still required to configure. ``` ## Running a New Scenario 1. To locate and run your scenario requires you to trigger a `New Task` under `Task Management` ![](images/new_scenario.png) 2. click on `Select Scenarios` ![](images/select_scenario.png) 3. You can then search for your newly created scenario. An easy way to filter your private scenarios is to perform an instance search for your username in the `Search scenarios` field. ![](images/instance.png)
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/apply_fuel_account_cn.md
# 1. 开通云服务账号向导 - [1. 开通云服务账号向导](#1-开通云服务账号向导) - [1.1. 前提条件](#11-前提条件) - [1.2. 注册百度云BOS](#12-注册百度云BOS) - [1.3. 开通云服务账号](#13-开通云服务账号) ## 1.1. 前提条件 请与商务部门联系(邮件develop-kit@apollo.auto)获得授权 ## 1.2. 注册百度云BOS 按[百度云对象存储BOS注册与基本使用向导](./apply_bos_account_cn.md)注册百度云BOS ## 1.3. 开通云服务账号 打开[Dreamland网址](http://bce.apollo.auto/)选择用百度账号登录,登录后点击左侧菜单栏「用户帮助」里的「Fuel使用指南」菜单项如下图所示,并按照文档开通云服务账号。 ![mian](images/fuel_use_guide.png)
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/apply_bos_account_cn.md
# 百度云对象存储BOS注册与基本使用向导 - [百度云对象存储BOS注册与基本使用向导](#百度云对象存储bos注册与基本使用向导) - [概述](#概述) - [前提条件](#前提条件) - [主要步骤](#主要步骤) - [1、登录开通BOS](#1登录开通bos) - [2、创建bucket](#2创建bucket) - [3、使用BOS客户端](#3使用bos客户端) ## 概述 该用户手册旨在帮助用户完成注册百度云对象存储BOS购买服务,使用客户端登录BOS上传下载数据。 ## 前提条件 注册一个百度账号,供后续使用。[登录注册网址](https://passport.baidu.com/) ## 主要步骤 ### 1、登录开通BOS 打开[BOS主页](https://console.bce.baidu.com/)用百度账号登录,登录成功界面如下图所示: ![mian](images/login_main.png) 点击对象存储BOS,界面如下图所示: ![bos](images/login_bos.png) 点击立即开通按钮弹出界面如下: ![bos_main](images/login_bos_main.png) 在财务的位置点击充值按钮进行充值,弹出界面如下图所示: ![bos_recharge](images/login_bos_recharge.png) 充值是一种按使用标准存储空间的容量进行付费的模式,相对开发者套件云服务来说比较划算,余额还可以随时提取。 ### 2、创建bucket 在上上图中点击新建bucket,弹出界面如下图所示: ![create](images/bucket_create.png) 注意在所属地域的地方选择自己的区域,读写权限设置为私有,cdn计费方式选择按使用流量计费,这是一种后付费模式,使用多少流量,付多少费,比较划算。创建成功后界面如下图所示: ![main](images/bucket_main.png) 权限设置修改如下图所示: ![authority](images/bucket_authority.png) 创建成功后AK、SK查看方法: ![ak](images/bucket_AK.png) 点击红框中的Access Key,弹出界面如下所示: ![sk](images/bucket_SK.png) 点击Secret Key显示按钮输入验证码即可查看SK。 ### 3、使用BOS客户端 从[BOS桌面客户端](https://cloud.baidu.com/doc/BOS/s/lk4tnbkrm)上,下载桌面客户端软件,有Windows和Mac两种,下面以Windows为例来下载,安装成功后打开界面如下图所示: ![ak_sk](images/bos_client_ak_sk.png) 填写上自己的AK、SK后,点击向右的箭头,如下图所示: ![security](images/bos_client_security.png) 设置输入自己的安全码点击登录即可看到自己创建的bucket如下图所示: ![bucket](images/bos_client_bucket.png) 然后上传下载单个文件或者整个文件夹数据即可。
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Vehicle_Calibration_Online 在线车辆校准/README.md
# Vehicle Calibration Online Service ### Note: 1. Please refer to [Apply_BOS_Account](../../Apollo_Fuel/apply_bos_account_cn.md) for account setup before you can use the service. 2. New service entrance point available at [Apollo Dreamland](http://bce.apollo.auto) 3. Chinese version tailing for D-kit is available at [Vehicle_Calibration_Online_cn.md](../../D-kit/Waypoint_Following/vehicle_calibration_online_cn.md) vehicle calibration system automatically generates calibration table for different vehicle models. It includes three parts: a frontend data collection monitor system, a data pipeline upload/download tool for uploading collected data and downloading generated calibration tables, and a visualization tool for performance evaluation. <!-- # Table of Contents 1\. [Frontend](#frontend) 2\. [Data](#data) - [Upload Tool](#upload) - [Download Tool](#download) 3\. [Visulization](#visulization) --> ## Frontend In DreamView, a data collection monitor is presented for monitoring the data calibration process. In vehicle calibration mode, collected data frames are visualized in the data calibration monitor. Data frames are categorized into different driving conditions according to their chassis information. The amount of collected data frames are indicated as progress bars. ### Setup In the on-vehicle DreamView environment, 1. Choose `vehicle calibration` in `--setup mode--`, 2. Choose `Data Collection Monitor` at `Others` panel. ![](images/calibration_table.png) The data collection monitor is displayed in DreamView. ### Data collection When driving, data frames are automatically processed by reading their chassis messages. When a data frame satisfy the speed criterion (speed equal or larger than 0.2 mps), the data frame is categorized by its steering, speed and throttle/brake information. The data collection process is presented by bars in data collection monitor. There are 21 bars in total in data collection monitor. The overall process is indicated by the top bar. The rest 20 bars indicate 20 driving conditions, including - Six brake conditions at different speed level - low speed (<10 mps) brake pulse - middle speed (10 mps ~ 20 mps ) brake pulse - high speed (>=20 mps) brake pulsing - low speed ( <10 mps) brake tap - middle speed (10 mps ~ 20 mps ) brake tap - high speed (>=20 mps) brake tap - Six throttle conditions at different speed level - low speed (<10 mps) under throttle - middle speed (10 mps ~ 20 mps ) under throttle - high speed (>=20 mps) under throttle - low speed ( <10 mps) harsh throttle - middle speed (10 mps ~ 20 mps ) harsh throttle - high speed (>=20 mps) harsh throttle - Eight steering angle conditions - left 0% ~ 20% - left 20% ~ 40% - left 40% ~ 60% - left 60% ~ 100% - right 0% ~ 20% - right 20% ~ 40% - right 40% ~ 60% - right 60% ~ 100% For each bar, there is a blue ribbon indicating collected data frames. When the blue ribbon fills the whole bar, the number of collected frames reaches the target number. There is also a number at right end of each bar indicating the completion percentage. For calibration table data collection, when the first 13 bars (total progress bar and 12 brake/throttle condition bars) reaches 100% the data collection process is considered as completed. For dynamic model data collection, the data collection process is completed when all bars reaches 100%. All data are saved in `nvme drive` or `data/record/` ### Vehicle Configuration The brake and throttle specs are different between vehicle models. 
Therefore, the criteria for brake pulsing/tap and harsh/under throttle depend on vehicle models. The default setting is based on the Lincoln MKZ model. For a different vehicle model, these parameters are configurable at

```
/apollo/modules/dreamview/conf/mkz7_data_collection_table.pb.txt
```

(description)

## Folder Structure Requirement

Before uploading your data, take note of the following:

1. The folder structure to be maintained is:

![](images/file_system.png)

1. As seen above, the file structure to be maintained is

```
Origin Folder -> Task Folder -> Vehicle Folder -> Records + Configuration files
```

1. A **task** folder needs to be created for your calibration job, such as task001, task002...
1. A vehicle folder needs to be created for your vehicle. The name of the folder should be the same as seen in Dreamview
1. Inside your vehicle folder, create a **Records** folder to hold the data
1. Store all the **Configuration files** along with the Records folder, within the **Vehicle** folder
1. The vehicle configuration file (vehicle_param.pb.txt) has been updated since Apollo 5.0, so please double-check it
1. One task folder can contain more than one vehicle folder, so you can calibrate multiple vehicles in one job

### Upload

Use [bosfs](https://cloud.baidu.com/doc/BOS/BOSCLI/8.5CBOS.20FS.html) to mount your bucket locally, for example:

```
BUCKET=<bucket>
AK=<access key>
SK=<secret key>
MOUNT=/mnt/bos
# It's required to provide correct BOS region. Please read the document
# https://cloud.baidu.com/doc/BOS/S3.html#.E6.9C.8D.E5.8A.A1.E5.9F.9F.E5.90.8D
REGION=bj

mkdir -p "${MOUNT}"
bosfs "${BUCKET}" "${MOUNT}" -o allow_other,logfile=/tmp/bos-${BUCKET}.log,endpoint=http://${REGION}.bcebos.com,ak=${AK},sk=${SK}
```

Then you can copy the prepared data folder to somewhere under /mnt/bos.

### Download

No download is needed; the results will be sent to the email address associated with your BOS bucket.

## Result Visualization

The docker environment does not support Matplotlib. Thus, results are visualized outside of the docker environment. The following two figures show the visualization of the training results.

![](images/throttle.png)

![](images/brake.png)
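Before copying data to BOS, the folder layout described above can be double-checked with a small helper script. The following is only a sketch based on the structure described in this document; the task path and file names are examples:

```python
# Sanity-check sketch for the Task Folder -> Vehicle Folder -> Records layout
# described above. Folder names here are examples only.
from pathlib import Path

def check_task_folder(task_dir):
    ok = True
    task = Path(task_dir)
    vehicles = [d for d in task.iterdir() if d.is_dir()]
    if not vehicles:
        print(f"{task}: no vehicle folder found")
        return False
    for vehicle in vehicles:
        if not (vehicle / "Records").is_dir():
            print(f"{vehicle}: missing Records folder")
            ok = False
        if not (vehicle / "vehicle_param.pb.txt").is_file():
            print(f"{vehicle}: missing vehicle_param.pb.txt")
            ok = False
    return ok

# Example usage before copying into /mnt/bos:
# check_task_folder("/path/to/task001")
```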
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Control_Profiling 控制分析/README.md
# Control Profiling Service ## Overview Control Profiling Service is a cloud based service to evaluate the control and planning trajectories from road test or simulation records. ## Prerequisites - [Apollo](https://github.com/ApolloAuto/apollo) 6.0 or higher version. - Baidu Cloud BOS service registered according to [document](../apply_fuel_account_cn.md) - Fuel service account on [Apollo Dreamland](http://bce.apollo.auto/user-manual/fuel-service) ## Main Steps - Data collection - Job submission - Results analysis ## Data Collection ### Data Recording Finish one autonomous driving scenario with a closed loop test, e.g. RTK or close loop. ### Data Sanity Check - **Make sure the following channels are included in records before submitting them to cloud service**: | Modules | channel | items | |---|---|---| | Canbus | `/apollo/canbus/chassis` | exits without error message | | Control | `/apollo/control` | exits without error message | | Planning | `/apollo/planning` | - | | Localization | `/apollo/localization/pose` | - | | GPS | `apollo/sensor/gnss/best_pose` | `sol_type` to `NARROW_INT` | - You can check with `cyber_recorder`: ``` cyber_recorder info xxxxxx.record.xxxxx ``` ![](images/profiling_channel_check.png) ## Job Submission ### Upload data to BOS Here is the folder structure requirements for job submission: 1. A cyber record file containing the execution of open space planner scenario. 1. A configuration file `vehicle_param.pb.txt`; there is a sample file under `apollo/modules/common/data/vehicle_param.pb.txt`. ### Submit job in Dreamland Go to [Apollo Dreamland](http://bce.apollo.auto/), login with **Baidu** account, choose `Apollo Fuel --> Jobs`,`New Job`, `Control Profiling`,and input the correct BOS path as in [Upload data to BOS](###Upload-data-to-BOS) section: ![control_profiling_submit_job2_en](images/control_profiling_submit_job2_en.png) ![control_profiling_submit_job_en](images/control_profiling_submit_job_en.png) ## Results Analysis - After job is done, you should be expecting one email per job including `Grading results` and `Visualization results`. ![profiling_grading_results](images/profiling_grading_results.png) ![profiling_visualization_result](images/profiling_visualization_result.png)
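The channel check above can also be scripted. The sketch below simply searches the output of `cyber_recorder info` for the required channel names; it assumes the channel names appear verbatim in that output, which may vary across Apollo versions:

```python
# Rough helper: verify required channels appear in `cyber_recorder info` output.
# Assumes channel names are printed verbatim; adjust for your Apollo version.
import subprocess
import sys

REQUIRED_CHANNELS = [
    "/apollo/canbus/chassis",
    "/apollo/control",
    "/apollo/planning",
    "/apollo/localization/pose",
    "/apollo/sensor/gnss/best_pose",
]

def check_record(record_path):
    info = subprocess.run(["cyber_recorder", "info", record_path],
                          capture_output=True, text=True).stdout
    missing = [ch for ch in REQUIRED_CHANNELS if ch not in info]
    for ch in missing:
        print(f"missing channel: {ch}")
    return not missing

if __name__ == "__main__":
    sys.exit(0 if check_record(sys.argv[1]) else 1)
```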
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Prediction_Pedestrian_Model_Training 感知行人模型训练/README.md
# Prediction Pedestrian Model Training Service

## Overview

Prediction Pedestrian Model Training Service is a cloud based service to train a pedestrian prediction model from the data you provide, to better fit the pedestrian behavior in your environment.

## Prerequisites

- [Apollo](https://github.com/ApolloAuto/apollo) 6.0 or higher version.
- Baidu Cloud BOS service registered according to [document](../apply_bos_account_cn.md)
- Fuel service account on [Apollo Dreamland](http://bce.apollo.auto/user-manual/fuel-service)

## Main Steps

- Data collection
- Job submission
- Trained model

## Data Collection

### Data Recording

Finish several hours of autonomous driving in different scenarios and record them with `cyber_recorder`. Please make sure some pedestrians are in sight.

### Data Sanity Check

- **Make sure the following channels are included in the records before submitting them to the cloud service**:

| Modules | channel | items |
|---|---|---|
| Perception | `/apollo/perception/PerceptionObstacles` | exits without error message |
| Localization | `/apollo/localization/pose` | - |

- You can check with `cyber_recorder`:

```
cyber_recorder info xxxxxx.record.xxxxx
```

## Job Submission

### Upload data to BOS

1) `Input Data Path` is under your **BOS root folder**;

2) There should be two folders under your `Input Data Path`: one named `map`, containing the map you are using in Apollo format, and the other named `records`, containing the records you collected.

#### Submit job requests in Dreamland

Go to [Apollo Dreamland](http://bce.apollo.auto/login), login with your **Baidu** account, choose `Apollo Fuel --> task`, `new task`, `prediction pedestrian model training`, and input the correct path as in the [Upload data to BOS](###Upload-data-to-BOS) section.

#### Get Trained models

- After the job is done, you should expect an email with the results; the trained model will be in a `models` folder under your `Input Data Path`.
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Open_Space_Planner_Profiling/README.md
# Open Space Planner Profiling Service ## Overview Open Space Profiling Service is a cloud based service to evaluate the open space planner trajectories from road test or simulation records. ## Prerequisites - [Apollo](https://github.com/ApolloAuto/apollo) 6.0 or higher version. - Baidu Cloud BOS service registered according to [document](../apply_bos_account_cn.md) - Fuel service account on [Apollo Dreamland](http://bce.apollo.auto/user-manual/fuel-service) ## Main Steps - Data collection - Job submission - Results analysis ## Data Collection ### Data Recording Finish one autonomous driving scenario with open space planner, e.g. Valet Parking, PullOver, Park and Go. ### Data Sanity Check - **Make sure the following channels are included in records before submitting them to cloud service**: | Modules | channel | items | |---|---|---| | Canbus | `/apollo/canbus/chassis` | exits without error message | | Control | `/apollo/control` | exits without error message | | Planning | `/apollo/planning` | - | | Localization | `/apollo/localization/pose` | - | | GPS | `apollo/sensor/gnss/best_pose` | `sol_type` to `NARROW_INT` | - You can check with `cyber_recorder`: ``` cyber_recorder info xxxxxx.record.xxxxx ``` ![](images/profiling_channel_check.png) ## Job Submission ### Upload data to BOS Here is the folder structure requirements for job submission: 1. A cyber record file containing the execution of open space planner scenario. 1. A configuration file `vehicle_param.pb.txt`; there is a sample file under `apollo/modules/common/data/vehicle_param.pb.txt`. ### Submit job in Dreamland Go to [Apollo Dreamland](http://bce.apollo.auto/), login with **Baidu** account, choose `Apollo Fuel --> Jobs`,`New Job`, `Open Space Planner Profiling`,and input the correct BOS path as in [Upload data to BOS](###Upload-data-to-BOS) section: ![profiling_submit_task1](images/open_space_job_submit.png) ## Results Analysis - After job is done, you should be expecting one email per job including `Grading results` and `Visualization results`. ![profiling_submit_task1](images/profiling_email.png)
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Perception_Lidar_Model_Training 感知雷达模型训练/README.md
# Open Perception Lidar Model Training Service ## Overview Open Perception Lidar Model Training Service is a cloud-based service to train perception lidar model using pointpillars algorithm from your data, to better detect obstacles in your environment. ## Prerequisites - [Apollo](https://github.com/ApolloAuto/apollo) 6.0 or higher version. - Baidu Cloud BOS service registered according to [document](../apply_bos_account_cn.md) - Fuel service account on [Apollo Dreamland](http://bce.apollo.auto/user-manual/fuel-service) ## Main Steps - Data collection - Job submission - Model training result ## Data Collection ### Data Recording Collecting sensor data from lidar and cameras in different scenarios covering your autonomous driving environment as much as possible, please make sure the scenarios have different types of obstacles such as pedestrians and vehicles. Then labeling the sensor data using kitti data format. ### Data format - **We use [Kitti data format](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) as training data format**: ``` INPUT_DATA_PATH: training: calib image_2 label_2 velodyne testing: calib image_2 velodyne train.txt val.txt trainval.txt test.txt ``` - Supported obstacle detection categories: ``` bus, Car, construction_vehicle, Truck, barrier, Cyclist, motorcycle, Pedestrian, traffic_cone ``` When labeling your data, `type` must be one of the above categories (please note the uppercase). ## Job Submission ### Upload data to BOS Requirements of the folder structure for job submission: 1. Input Data Path: upload your [data](###Data-format) to INPUT_DATA_PATH directory. 2. Output Data Path: if the model is trained successfully, an onnx file will be saved to the OUTPUT_DATA_PATH directory. ### Submit job on Dreamland Go to [Apollo Dreamland](http://bce.apollo.auto/), login with **Baidu** account, choose `Apollo Fuel --> Jobs`,`New Job`, `Perception Lidar Model Training`,and input the correct BOS path as in [Upload data to BOS](###Upload-data-to-BOS) section. ## Model Training Result - Once a job is done, you should be expecting one email per job including the results and `Model Path`. ![](images/perception_email.png)
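Since `train.txt`, `val.txt`, `trainval.txt` and `test.txt` are plain lists of frame indices, they can be generated from the labeled frames with a small script. The following sketch assumes the KITTI-style layout above; the 80/20 split ratio and the paths are arbitrary examples:

```python
# Sketch: generate KITTI-style split files from the frame ids found under
# training/velodyne and testing/velodyne. The split ratio is only an example.
import random
from pathlib import Path

def write_split_files(input_data_path, val_ratio=0.2, seed=0):
    root = Path(input_data_path)
    ids = sorted(p.stem for p in (root / "training" / "velodyne").glob("*.bin"))
    test_ids = sorted(p.stem for p in (root / "testing" / "velodyne").glob("*.bin"))
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_ratio)
    val_ids, train_ids = sorted(ids[:n_val]), sorted(ids[n_val:])
    for name, frame_ids in [("train.txt", train_ids), ("val.txt", val_ids),
                            ("trainval.txt", sorted(ids)), ("test.txt", test_ids)]:
        (root / name).write_text("\n".join(frame_ids) + "\n")

# write_split_files("/path/to/INPUT_DATA_PATH")
```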
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Dynamic_Model 动力学模型/README.md
# Vehicle Dynamic Modeling Vehicle dynamic modeling service employs the supervised machine learning algorithm to generate a learning-based vehicle dynamic model, which can be used as the customized dynamic model in the Apollo simulation platform for control-in-the-loop simulation. The usage of the dynamic modeling asks for three main steps: (1) collect the training/test data for learning-based modeling, via a frontend data collection process monitoring system, (2) upload the collected data into cloud by locally build the standard BOS-linked folder, and (3) submit the service command via the online service webpage and expect the modeling results within email notice. <!-- # Table of Contents 1\. [Frontend](#frontend) 2\. [Data](#data) - [Upload Tool](#upload) - [Download Tool](#download) 3\. [Visulization](#visulization) --> ## Frontend In DreamView, a data collection monitor is presented for monitoring the data collection process. After the user selects the "Vehicle Calibration" option in the "setup_mode" menu, the data collection process is visualized in the data collection monitor. Driving data are categorized into different driving conditions, as shown in the following figure. The amounts of collected data (in unit of frame) are indicated as progress bars. ### Setup In the on-vehicle DreamView environment, 1. Choose `vehicle calibration` in `--setup mode--`, 2. Choose `Data Collection Monitor` at `Others` panel. ![](images/calibration_table.png) The data collection monitor is displayed in DreamView. ### Data collection When driving, data frames are automatically processed by identifying their driving status form the Chassis Channel messages. When a single data frame satisfies the speed criterion (speed equal or larger than 0.2 mps), the single data frame is categorized by its steering, speed and throttle/brake information. The data collection process is presented in progress bars of the data collection monitor. There are 21 progress bars in total in data collection monitor. The overall process is indicated by the top progress bar. The rest 20 progress bars indicate 20 driving conditions, including - Six brake conditions at different speed level - low speed (<10 mps) brake pulse - middle speed (10 mps ~ 20 mps ) brake pulse - high speed (>=20 mps) brake pulsing - low speed ( <10 mps) brake tap - middle speed (10 mps ~ 20 mps ) brake tap - high speed (>=20 mps) brake tap - Six throttle conditions at different speed level - low speed (<10 mps) under throttle - middle speed (10 mps ~ 20 mps ) under throttle - high speed (>=20 mps) under throttle - low speed ( <10 mps) harsh throttle - middle speed (10 mps ~ 20 mps ) harsh throttle - high speed (>=20 mps) harsh throttle - Eight steering angle conditions - left 0% ~ 20% - left 20% ~ 40% - left 40% ~ 60% - left 60% ~ 100% - right 0% ~ 20% - right 20% ~ 40% - right 40% ~ 60% - right 60% ~ 100% For each bar, there is a blue ribbon indicating collected data frames. When the blue ribbon fills the whole bar, the number of collected frames reaches the target number. There is also a number at right end of each bar indicating the completion percentage. For dynamic modeling data collection, when the all the progress bars reaches 100%, the data collection process is considered as "completed". All data are saved in `nvme drive` or `data/record/` ### Vehicle Configuration The brake and throttle specs are different between vehicle models. Therefore, the criteria for brake pulsing/tap and hash/under throttle depend on vehicle models. 
The default setting is based on Lincoln MKZ model. For different vehicle models, these parameters are configurable at ``` /apollo/modules/dreamview/conf/mkz7_data_collection_table.pb.txt ``` (description) ## Folder Structure Requirement Before uploading your data, take a note of: 1. The folder structure to be maintained is: ![](images/file_system.png) 1. As seen above, the file structure to be maintained is ``` Origin Folder -> Task Folder -> Vehicle Folder -> Records Folder + Configuration files ``` 1. A **task** folder needs to be created for your dynamic modeling job, such as task001, task002... 1. A vehicle folder needs to be created for your vehicle. The name of the folder should be the same as seen in Dreamview 1. Inside your folder, create a **Records** folder to hold the data 1. Store all the **Configuration files** along with the Records folder, within the **Vehicle** folder 1. The vehicle configuration file (vehicle_param.pb.txt) is updated since Apollo 5.0 and later, you should check it 1. One task folder can contain more than one vehicle folder, in other words, you may train models for multiple vehicles in one training job ### Upload Data Use [bosfs](https://cloud.baidu.com/doc/BOS/BOSCLI/8.5CBOS.20FS.html) to mount your bucket to local, for example, ``` BUCKET=<bucket> AK=<access key> SK=<secret key> MOUNT=/mnt/bos # It's required to provide correct BOS region. Please read the document # https://cloud.baidu.com/doc/BOS/S3.html#.E6.9C.8D.E5.8A.A1.E5.9F.9F.E5.90.8D REGION=bj mkdir -p "${MOUNT}" bosfs "${BUCKET}" "${MOUNT}" -o allow_other,logfile=/tmp/bos-${BUCKET}.log,endpoint=http://${REGION}.bcebos.com,ak=${AK},sk=${SK} ``` Then you can copy the prepared data folder to somewhere under /mnt/bos. ## Submit Job Via On-Line Service Website Login in the [Apollo webpage](http://bce.apollo.auto/) and choose the **Apollo Fuel -> New Job** in the functionality menu. Select the **Dynamic Model** option in the **New Job** menu,and then fill the **Input Data Path** with the data storage path starting from the root directory under your BOS folder, and choose whether click the **is backward** radio button (Only click it if you intend to train the dynamic model under the **backward driving** mode; otherwise, leave it blank). Finally, submit your job by clicking the **Submit Job** button。 ![](images/dynamic_model_job_submit.png) ## Receive Model Training Results After the dynamic modeling job successfully starts and your uploaded data passes the sanity check, the user will receive the **first notice email** at your registered email address. Then, after the dynamic modeling job is fully finished, the user will receive the **second notice email**, in which the generated model storage path and filtered data visualization path under your own BOS folder will be provided. ![](images/dynamic_model_email.png) To use these generated dynamic models in the simulation platform or the [Control_Auto_Tuning](../Control_Auto_Tuning%20%E6%8E%A7%E5%88%B6%E8%87%AA%E5%8A%A8%E8%B0%83%E6%95%B4/README.md) service, the users need to rename the received dynamic models and put them in the corresponding github repo path as follows: provide the forward-driving model at github **apollo/modules/control/conf/dynamic_model_forward.bin**; backward-driving model at github **apollo/modules/control/conf/dynamic_model_backward.bin**.
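The renaming and copying step described above is mechanical and can be scripted. The sketch below assumes the downloaded models are stored in a local folder, with placeholder source file names that you should replace with the actual names from the result email:

```python
# Sketch: place downloaded dynamic models at the paths expected by the control
# module. The source file names below are placeholders for your downloads.
import shutil
from pathlib import Path

APOLLO_ROOT = Path("/apollo")
DEST = {
    "forward_model.bin": APOLLO_ROOT / "modules/control/conf/dynamic_model_forward.bin",
    "backward_model.bin": APOLLO_ROOT / "modules/control/conf/dynamic_model_backward.bin",
}

def install_models(download_dir):
    for src_name, dest in DEST.items():
        src = Path(download_dir) / src_name
        if src.is_file():
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            print(f"installed {src} -> {dest}")
        else:
            print(f"skip: {src} not found")

# install_models("/path/to/downloaded/models")
```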
apollo_public_repos/apollo/docs/13_Apollo Tool/Apollo Fuel/Control_Auto_Tuning 控制自动调整/README.md
# Control Parameter Auto-Tuning Service - [Control Parameter Auto-Tuning Service](#ControlParameterAuto-TuningService) - [Overview](#Overview) - [Prerequisites](#Prerequisites) - [Main Steps](#MainSteps) - [Baidu Cloud Storage BOS Registration](#BaiduCloudStorageBOSRegistration) - [Open Cloud Service Account](#OpenCloudServiceAccount) - [Task Configuration File Setting](#TaskConfigurationFileSetting) - [Task Configuration Protocol Buffers](#TaskConfigurationProtocolBuffers) - [Task Configuration File Example](#TaskConfigurationFileExample) - [Task Configuration Detailed Explanations](#TaskConfigurationDetailedExplanations) - [Customized Dynamic Models Guidance](#CustomizedDynamicModelsGuidance) - [Task Submission](#TaskSubmission) - [Task Configuration File Storage](#TaskConfigurationFileStorage) - [Submit Job via Webpage](#SubmitJobviaWebpage) - [Task Results Acquisition](#TaskResultsAcquisition) - [Receive Task Results Email](#ReceiveTaskResultsEmail) - [Task Results Analysis](#TaskResultsAnalysis) ## Overview Control parameter auto-tuning service utilizes the machine learning method to automatically optimize the control parameters of the PID, LQR, MPC, MRAC, etc. controllers used in the Apollo Control Module, to realize the full automation of controller tuning within offline simulation environment that saves massive manual tests with on-road experiments. It integrates with multiple Apollo online service tools including the dynamic modeling, simulation platform, and control profiling. Control parameter auto-tuning service 1) iteratively generates the new control parameters and evaluate the generated parameters by invoking the backend Apollo Simulation service, in which the pre-trained vehicle dynamic models are used to enable the control-in-the-loop simulation; 2) the simulation results are evaluated via the backend control profiling service; 3) and furthermore, tens of control metrics from the control profiling results are weighted and combined into one weighted score, and with this score as optimization target, the auto-tuning service continuously searches the new control parameters with better score in the promising zone, until it reaches the given step. ## Prerequisites Control parameter auto-tuning is executed in the control-in-the-loop simulation environment, and thus it request the users to provide their own vehicle dynamic model (i.e., the results of dynamic modeling service) for simulation; otherwise, the service will use the default vehicle dynamic model based on the MKZ vehicle model. 
Therefore, for a control parameter training service on the customized vehicle, some pre-required steps are listed as follows: - [Apollo](https://github.com/ApolloAuto/apollo) 6.0 or higher version - Cloud and simulation services registered according to [Apollo Dreamland](http://bce.apollo.auto/) - [Dynamic Modeling](../Dynamic_Model%20%E5%8A%A8%E5%8A%9B%E5%AD%A6%E6%A8%A1%E5%9E%8B/README.md) service ## Main Steps - Task Configuration File Setting - Task Submission - Task Results Acquisition ## Baidu Cloud Storage BOS Registration The registration please refer to [Baidu Cloud Storage BOS Register and User Manual](../apply_bos_account_cn.md) **Note:** The clients must use the registered `bucket`,and make sure that the `Bucket Name`、`Backet Area` are the same as the ones when registered。 ## Open Cloud Service Account Please contact with the business department to open the cloud service account and provide your `Bucket Name`、`Backet Area` mentioned in the last step ## Task Configuration File Setting ### Task Configuration Protocol Buffers The task configuration should be prepared in the form of the "XXX_tuner_params_config.pb.txt", with the Protocol Buffers file as follows: ![](images/tuner_param_config_proto.png) ### Task Configuration File Example According to the Protocol Buffers shown above, an example of the configuration file is shown as follows: ![](images/tuner_param_config_pb_txt.png) ### Task Configuration Detailed Explanations The detailed explanations of the message in the XXX_tuner_params_config.pb.txt is as follows: | Message | Detailed Explanations | Notes | |---|---|---| | `git_info.repo`| The repo name which will be executed in the Apollo simulation platform | The dynamic models used in simulation must be placed in the designed path in this github repo, following the [Customized Dynamic Models Guidance](#CustomizedDynamicModelsGuidance)| | `git_info.commit_id` | The commit id which will be executed in the Apollo simulation platform | If empty, by default the latest commit id will be used | | `tuner_parameters.user_tuning_module` | Set as **CONTROL** | Must be **CONTROL** otherwise cannot pass the task sanity check | | `tuner_parameters.user_conf_filename` | The control configuration file in the `git_info.repo`, which will be executed in the Apollo simulation platform | The tuned parameters and flags must be included in the configuration file | | `tuner_parameters.n_iter` | The iteration step number used to search for the optimized control parameters. Suggested values: **n_iter=200 for 1-2 tuned parameters; n_iter=300 for 3-4 tuned parameters; n_iter=400 for 5-6 tuned parameters; n_iter=500 for 7-8 tuned parameters; n_iter=600 or more for 9+ tuned parameters** | The more the iteration steps are, the higher the optimization accuracy (but slower training process) is; must be **< 1000** otherwise cannot pass the task sanity check| | `tuner_parameters.opt_max` | Set as **True** | Must be **True** otherwise cannot search for the optimized control parameters | | `tuner_parameters.flag` | (repeated message) Set as many flags (boolen parameters) as the users need in the users' control configuration file. The flag values (True/False) you set will overwrite the default flag values in control configuration file | The flags are NOT counted as **Tuned Parameter** | | `tuner_parameters.parameter` | (repeated message) Set as many parameters as the users need in the users' control configuration file. 
If the users set the **constant** property of the parameter, then the parameters will be treated as **Constant Parameter** and the set constant values will overwrite the default values in control configuration file; If the users set the **min** and **max** properties of the parameter, then the parameters will be treated as **Tuned Parameter** and the auto-tuning service will attempt to search the best parameter value through the range limited by **min** and **max** | The more the tuned parameters are, the more the optimization iteration step number (and the longer and slower auto-tuning process) may be needed | | `scenarios.id` | (repeated message) Set as many scenario IDs as the users need for the control performance evaluation in the parameter auto-tuning. Suggested IDs: **11014, 11015, 11016, 11017, 11018, 11019, 11020** | The users may also choose any available scenario ID from the public scenarios from the Apollo simulation platform. The more the scenario IDs are, the slower the auto-tuning process may be | | `dynamic_model` | Set as **ECHO_LINCOLN**, only if `git_info.repo` is set as the official Apollo repo | Ignored if the customized repo is used in `git_info.repo`| ### Customized Dynamic Models Guidance If the users intend to use their own dynamic models in simulation, then please provide the forward-driving model at github **apollo/modules/control/conf/dynamic_model_forward.bin**; backward-driving model at github **apollo/modules/control/conf/dynamic_model_backward.bin**. Please refer to [Dynamic Model](../Dynamic_Model%20%E5%8A%A8%E5%8A%9B%E5%AD%A6%E6%A8%A1%E5%9E%8B/README.md) for guidance on how to generate the customized dynamic models ## Task Submission ### Task Configuration File Storage Before using the auto-tuning service,set up the input file storage first according to [Baidu Cloud Storage BOS Register and User Manual](../apply_bos_account_cn.md) as indicated in [Baidu Cloud Storage BOS Registration](#Baidu Cloud Storage BOS Registration). Then, put the designed XXX_tuner_params_config.pb.txt file into any place under the users' BOS folder ### Submit Job via Webpage Login in the [Apollo webpage](http://bce.apollo.auto/) and choose the **Apollo Fuel -> New Job** in the functionality menu. Select the **Control Auto Tuning** option in the **New Job** menu,and then fill the **Input Data Path** with the task configuration file path starting from the root directory under the users' BOS folder (Note: **the Input Data Path must include the full configuration file name**, for example, 'input/XXX_tuner_params_config.pb.txt'). Finally, submit your job by clicking the **Submit Job** button。 ![](images/control_auto_tuning_webpage.png) ## Task Results Acquisition ### Receive Task Results Email - After the control parameter auto-tuning job successfully starts, the users' task configuration file will be tested by the sanity check procedure. If cannot pass the sanity check, then the user will receive the **failure notice email** at your registered email address, with the detailed failure reason. - If the control parameter auto-tuning job successfully passes the sanity check procedure, then after the job is fully finished, the user will receive the **results report email**, in which the parameter auto-tuning results and the linked configuration file will be provided as attachments. 
- Results Email ![](images/tuner_results_email.png) ### Task Results Analysis - The control auto-tuning results can be obtained from the report table in the email or from the attached tuner_results.txt file, with the detailed explanations as follows: | Message | Detailed Explanations | Notes | |---|---|---| | `Tuner_Config` | The source of the task configuration file | | | `Base_Target` | The weighted score achieved by the best control parameters | | | `Base_Params` | The best control parameters found | | | `Optimization_Time` | The overall time consumption of the entire auto-tuning optimization process | Unit: seconds | | `Time_Efficiency` | The average time consumption of a single optimization iteration step | Unit: seconds / iteration |
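For orientation only, the sketch below shows how the fields explained in the Task Configuration section might be assembled into an `XXX_tuner_params_config.pb.txt`. The grouping and sub-field names (for example the inner fields of `parameter`) are assumptions made for illustration; the authoritative schema is the Protocol Buffers definition and the example file shown in the Task Configuration File Setting section.

```protobuf
# Illustrative sketch only -- sub-field names are assumptions; follow the
# tuner_param_config proto and the example screenshot for the real schema.
git_info {
  repo: "https://github.com/your_account/apollo.git"   # hypothetical fork URL
  commit_id: ""                                        # empty = latest commit
}
tuner_parameters {
  user_tuning_module: CONTROL
  user_conf_filename: "control_conf.pb.txt"            # hypothetical file name
  n_iter: 300                                          # e.g. for 3-4 tuned parameters
  opt_max: true
  parameter {
    parameter_name: "matrix_q"                         # hypothetical tuned parameter
    min: 0.01
    max: 1.0
  }
}
scenarios {
  id: 11014
  id: 11015
}
# dynamic_model: ECHO_LINCOLN   # only when git_info.repo is the official Apollo repo
```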
0
apollo_public_repos/apollo/docs/13_Apollo Tool
apollo_public_repos/apollo/docs/13_Apollo Tool/云平台Apollo Studio/Dreamland_introduction.md
# Dreamland ## Introduction Dreamland is Apollo's web-based simulation platform. Based on an enormous amount of driving scenario data and large-scale cloud computing capacity, Apollo simulation engine creates a powerful testing environment for the development of an autonomous driving system, from algorithms to grading, and then back to improved algorithms. It enables the developers and start-ups to run millions of miles of simulation daily, which dramatically accelerates the development cycle. To access Dreamland, please visit [our Simulation website](http://apollo.auto/platform/simulation.html) ## Overview 1. **An Array of Scenarios:** The simulation platform allows users to choose different road types, obstacles, driving plans, and traffic light states. There are currently around 200 scenario cases provided including: - Different types of roads, such as intersection, U-turn lane, through lane, T-junction and curved lane. - Different types of obstacles, such as pedestrians, motor vehicles, bicycles etc. - Different driving plans, such as lane follow, U-turn, lane change, left-turn, right-turn and lane merge. - Different traffic light status, such as red, yellow and green. 2. **Execution Modes:** The simulation platform gives users a complete setup to run multiple scenarios parallelly in the cloud and verify modules in the Apollo environment. 3. **Automatic Grading System:** The current Automatic Grading System tests via 12 metrics: - Collision detection - Red-light violation detection - Speeding detection - Off-road detection - Arrival test - Hard braking detection - Acceleration test - Routing test - Lane-change in junction detection - Yield to pedestrians at crosswalks - Brake Taps - Stop at stop-signs These grading metrics test different aspects of autonomous driving, ranging from traffic and road safety to the rider's comfort. The Apollo team is committed to safety while providing excellent user experience during the drive hence these metrics are tailored to ensure a rigorous testing environment before the car is even put on the road. 4. **3D Visualization:** 3D Visualization illustrates real-time road conditions and helps to visualize the output from different modules. It also displays the status of the autonomous vehicle, such as velocity, heading etc. It also helps in visualizing the output of modules, such as routing, obstacles and planned trajectory. ## Scenarios Through Dreamland, you could run millions of scenarios on the Apollo platform, but broadly speaking, there are two types of scenarios: 1. **Worldsim:** Worldsim is synthetic data created manually with specific and well-defined obstacle behavior and traffic light status. They are simple yet effective for testing the autonomous car in a well-defined environment. They do however lack the complexity found in real-world traffic conditions. 2. **Logsim:** Logsim is extracted from real world data using our sensors. They are more realistic but also less deterministic. The obstacles perceived may be fuzzy and the traffic conditions are more complicated. ## Key Features 1. **Web Based:** Dreamland does not require you to download large packages or heavy software, it is a web based tool that can be accessed from any browser-friendly device 2. **Highly Customizable Scenarios:** With a comprehensive list of traffic elements you can fine tune Dreamland to suit your niche development. 3. 
**Rigorous Grading Metrics:** The grading metrics include: - Collision detection - Checks whether there is a collision (any distance between objects less than 0.1m is considered a collision) - Red-light violation detection - Checks whether the autonomous car runs a red light - Speeding detection - Checks whether the speed of the autonomous car exceeds the current speed limit - Off-road detection - Checks whether the autonomous car stays on the road - Arrival test - Checks whether the autonomous car arrives at its destination - Hard braking detection - Checks whether the autonomous car is braking too hard (deceleration is greater than 4m/s^2) - Acceleration test - Check whether the autonomous car is speeding up too fast (acceleration is greater than 4m/s^2) - Routing test - Checks whether a routing response is present - Lane-change in junction detection - Checks whether the planning trajectory makes a lane-change in a traffic junction - Yield to pedestrians at crosswalks - Checks whether the planning trajectory yields to pedestrians at crosswalks - Brake Taps - Checks whether the autonomous car has quick brake taps. - Stop at stop-signs - Checks whether the autonomous car stops at stop-signs 3. **Instant Verification on the Cloud:** Dreamland offers you the unique opportunity to instantly verify your Apollo build on the cloud and test your code against a vast and diverse set of scenarios that can run with the push of a button. ## Dreamland Tool 1. To access Dreamland, please visit [our Dreamland Homepage](https://azure.apollo.auto/) 2. You will then be redirected to a login screen. You could use either your existing accounts (Baidu, Google, Github, Microsoft) or create your own Dreamland account. ![](images/Dreamland_login.png) 3. Upon successful logging in, you will be redirected to the Dreamland Introduction page which includes a basic introduction and offerings ![](images/Dreamland_home.png) Dreamland platform offers a number of features that you could explore to help you accelerate your autonomous driving testing and deployment. 1. **User Manual** - This section includes documentation to help you get up and running with Dreamland. - [Quickstart](https://azure.apollo.auto/user-manual/quick-start): This section will walk you through testing your build using our APIs and also how to manage and edit existing scenarios. - [Scenario Editor](): The scenario editor is a new feature to be launched in Apollo 5.0 which enables our developers to create their own scenarios to test niche aspects of their algorithm. In order to use this feature, you will have to comeplete the form on the screen as seen in the image below: ![](images/form.png) You should receive the following activation confirmation via email: ![](images/email.png) Feel free to reach out to the team if you do not receive an activation confirmation within 3 days. - [FAQ](https://azure.apollo.auto/user-manual/faq): This section will answer frequently encountered issues or questions to help make using Dreamland simpler. For any additional questions or issues you may face, feel free to reach out to the team on [Apollo's Github issues](https://github.com/ApolloAuto/apollo/issues) 2. **Sample Scenarios:** A large number of existing worldsim and logsim scenarios that you can later use to test your build and algorithms. It is always good to acclimate yourself to the existing scenario list to help you decide which scenarios will benefit which aspects of your development. ![](images/Dreamland_sample.png) 3. 
**Scenario Management:** Scenario Management helps you filter, search through, and group scenarios together, and then view the groups you have created. This is especially helpful when running tasks in `Task Management`: instead of re-selecting scenarios from the full list, you can simply choose an existing group. ![](images/Dreamland_sm.png) 4. **Task Management:** Like Scenario Editor, Task Management is also a service offering currently in beta testing and open only to select partners. In order to use this feature, you will have to complete the form on the screen and request activation. The Task Management tab is extremely useful when testing any one particular type of scenario, such as side pass or U-turns, as it tests your algorithms against very specific test cases. Within the Task Management page, you can run a `New Task` to test your personal Apollo GitHub repository against a list of scenarios. You will receive a summary of the task which highlights whether the build passed, along with the passing rate of both worldsim and logsim scenarios and the total miles tested virtually. You can also view the number of failed scenarios along with a description detailing the failed timestamp and the grading metric that failed. Finally, you can run the comparison tool to check how your build performed versus previous builds. 5. **Daily Build:** The Daily Build shows how well the current Apollo official GitHub repository runs against all the scenarios. It is run once every morning Pacific Time. ![](images/Dreamland_build_1.png)
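To make one of the grading-metric thresholds listed earlier concrete, the following is a small sketch (not Dreamland's actual grading code) of how a hard-braking check over a recorded speed profile could be written; only the 4 m/s² limit is taken from the metric list above, everything else is illustrative.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a hard-braking check over a sampled speed profile.
// Not Dreamland's implementation -- only the 4 m/s^2 threshold comes from
// the grading metric list above.
bool HasHardBraking(const std::vector<double>& speed_mps, double dt_s,
                    double decel_limit_mps2 = 4.0) {
  for (std::size_t i = 1; i < speed_mps.size(); ++i) {
    const double accel = (speed_mps[i] - speed_mps[i - 1]) / dt_s;
    if (accel < -decel_limit_mps2) {
      return true;  // deceleration magnitude exceeded the limit
    }
  }
  return false;
}
```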
0
apollo_public_repos/apollo/docs/13_Apollo Tool
apollo_public_repos/apollo/docs/13_Apollo Tool/可视化交互工具Dremview/dreamview_usage_table_cn.md
# DreamView用法介绍 DreamView是一个web应用程序,提供如下的功能: 1. 可视化显示当前自动驾驶车辆模块的输出信息,例如规划路径、车辆定位、车架信息等。 2. 为使用者提供人机交互接口以监测车辆硬件状态,对模块进行开关操作,启动自动驾驶车辆等。 3. 提供调试工具,例如PnC监视器可以高效的跟踪模块输出的问题 ## 界面布局和特性 该应用程序的界面被划分为多个区域:标题、侧边栏、主视图和工具视图。 ### 标题 标题包含4个下拉列表,可以像下述图片所示进行操作: ![](images/dreamview_usage_table/header.png) 附注:导航模块是在Apollo 2.5版本引入的满足低成本测试的特性。在该模式下,Baidu或Google地图展现的是车辆的绝对位置,而主视图中展现的是车辆的相对位置。 ### 侧边栏和工具视图 ![](images/dreamview_usage_table/sidebar.png) 侧边栏控制着显示在工具视图中的模块 ### Tasks 在DreamView中使用者可以操作的tasks有: ![](images/dreamview_usage_table/tasks.png) * **Quick Start**: 当前选择的模式支持的指令。通常情况下, **setup**: 开启所有模块 **reset all**: 关闭所有模块 **start auto**: 开始车辆的自动驾驶 * **Others**: 工具经常使用的开关和按钮 * **Module Delay**: 从模块中输出的两次事件的时间延迟 * **Console**: 从Apollo平台输出的监视器信息 ### Module Controller 监视硬件状态和对模块进行开关操作 ![](images/dreamview_usage_table/module_controller.png) ### Layer Menu 显式控制各个元素是否显示的开关 ![](images/dreamview_usage_table/layer_menu.png) ### Route Editing 在向Routing模块发送寻路信息请求前可以编辑路径信息的可视化工具 ![](images/dreamview_usage_table/route_editing.png) ### Data Recorder 将问题报告给rosbag中的drive event的界面 ![](images/dreamview_usage_table/data_recorder.png) ### Default Routing 预先定义的路径或者路径点,该路径点称为兴趣点(POI)。 ![](images/dreamview_usage_table/default_routing.png) 如果打开了路径编辑模式,路径点可被显式的在地图上添加。 如果关闭了路径编辑模式,点击一个期望的POI会向服务器发送一次寻路请求。如果只选择了一个点,则寻路请求的起点是自动驾驶车辆的当前点。否则寻路请求的起点是选择路径点中的第一个点。 查看Map目录下的[default_end_way_point.txt](../../modules/map/data/demo/default_end_way_point.txt)文件可以编译POI信息。例如,如果选择的地图模式为“Demo”,则在`modules/map/data/demo`目录下可以查看对应的 [default_end_way_point.txt](../../modules/map/data/demo/default_end_way_point.txt) 文件。 ### 主视图 主视图在web页面中以动画的方式展示3D计算机图形 ![](images/dreamview_usage_table/mainview.png) 下表列举了主视图中各个元素: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image002.png) | <ul><li>自动驾驶车辆 </li></ul> | | ![](images/dreamview_usage_table/0clip_image004.png) | <ul><li>车轮转动的比率</li> <li>左右转向灯的状态</li></ul> | | ![](images/dreamview_usage_table/0clip_image003.png) | <ul><li>交通信号灯状态</li></ul> | | ![](images/dreamview_usage_table/0clip_image005.png) |<ul><li> 驾驶状态(AUTO/DISENGAGED/MANUAL等) </li></ul> | | ![](images/dreamview_usage_table/0clip_image006.png) | <ul><li>行驶速度 km/h</li> <li>加速速率/刹车速率</li></ul> | | ![](images/dreamview_usage_table/0clip_image026.png) | <ul><li> 红色粗线条表示建议的寻路路径</li></ul> | | ![](images/dreamview_usage_table/0clip_image038.png) |<ul><li> 轻微移动物体决策—橙色表示应该避开的区域 </li></ul> | | ![](images/dreamview_usage_table/0clip_image062.png) |<ul><li> 绿色的粗曲线条带表示规划的轨迹 </li></ul> | #### 障碍物 | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image010.png) | <ul><li>车辆障碍物 </li></ul> | | ![](images/dreamview_usage_table/0clip_image012.png) | <ul><li>行人障碍物 </li></ul> | | ![](images/dreamview_usage_table/0clip_image014.png) | <ul><li>自行车障碍物 </li></ul> | | ![](images/dreamview_usage_table/0clip_image016.png) | <ul><li>未知障碍物 </li></ul> | | ![](images/dreamview_usage_table/0clip_image018.png) | <ul><li>速度方向显示了移动物体的方向,长度随速度按照比率变化</li></ul> | | ![](images/dreamview_usage_table/0clip_image020.png) | <ul><li>白色箭头显示了障碍物的移动方向</li></ul> | | ![](images/dreamview_usage_table/0clip_image022.png) | 黄色文字表示: <ul><li>障碍物的跟踪ID</li><li>自动驾驶车辆和障碍物的距离及障碍物速度</li></ul> | | ![](images/dreamview_usage_table/0clip_image024.png) | <ul><li>线条显示了障碍物的预测移动轨迹,线条标记为和障碍物同一个颜色</li></ul> | #### Planning决策 ##### 决策栅栏区 
决策栅栏区显示了Planning模块对车辆障碍物做出的决策。每种类型的决策会表示为不同的颜色和图标,如下图所示: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image028.png) | <ul><li>**停止** 表示物体主要的停止原因</li></ul> | | ![](images/dreamview_usage_table/0clip_image030.png) | <ul><li>**停止** 表示物体的停止原因n</li></ul> | | ![2](images/dreamview_usage_table/0clip_image032.png) | <ul><li>**跟车** 物体</li></ul> | | ![](images/dreamview_usage_table/0clip_image034.png) | <ul><li>**让行** 物体决策—点状的线条连接了各个物体</li></ul> | | ![](images/dreamview_usage_table/0clip_image036.png) | <ul><li>**超车** 物体决策—点状的线条连接了各个物体</li></ul> | 线路变更是一个特殊的决策,因此不显示决策栅栏区,而是将路线变更的图标显示在车辆上。 | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/change-lane-left.png) | <ul><li>变更到**左**车道 </li></ul>| | ![](images/dreamview_usage_table/change-lane-right.png) | <ul><li>变更到**右**车道 </li></ul>| 在优先通行的规则下,当在交叉路口的停车标志处做出让行决策时,被让行的物体在头顶会显示让行图标 | Visual Element | Depiction Explanation | | ---------------------------------------------------- | ------------------------------ | | ![](images/dreamview_usage_table/0clip_image037.png) | 停止标志处的让行物体 | ##### 停止原因 如果显示了停止决策栅栏区,则停止原因展示在停止图标的右侧。可能的停止原因和对应的图标为: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image040.png) | <ul><li>**前方道路侧边区域** </li></ul>| | ![](images/dreamview_usage_table/0clip_image042.png) | <ul><li>**前方人行道** </li></ul>| | ![](images/dreamview_usage_table/0clip_image044.png) | <ul><li>**到达目的地** </li></ul>| | ![](images/dreamview_usage_table/0clip_image046.png) | <ul><li>**紧急停车** </li></ul> | | ![](images/dreamview_usage_table/0clip_image048.png) | <ul><li> **自动驾驶模式未准备好** </li></ul>| | ![](images/dreamview_usage_table/0clip_image050.png) | <ul><li>**障碍物阻塞道路**</li></ul> | | ![](images/dreamview_usage_table/0clip_image052.png) | <ul><li> **前方行人穿越** </li></ul> | | ![](images/dreamview_usage_table/0clip_image054.png) | <ul><li>**黄/红信号灯** </li></ul>| | ![](images/dreamview_usage_table/0clip_image056.png) | <ul><li> **前方有车辆** </li></ul> | | ![](images/dreamview_usage_table/0clip_image058.png) | <ul><li> **前方停止标志** </li></ul>| | ![](images/dreamview_usage_table/0clip_image060.png) | <ul><li>**前方让行标志** </li></ul> | #### 视图 可以在主视图中展示多种从**Layer Menu**选择的视图模式: | Visual Element | Point of View | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/default_view.png) | <ul><li>**默认视图** </li></ul> | | | ![](images/dreamview_usage_table/near_view.png) | <ul><li>**近距离视图** </li></ul> | | | ![](images/dreamview_usage_table/overhead_view.png) | <ul><li>**俯瞰视图** </li></ul> | | | ![](images/dreamview_usage_table/map_view.png) | **地图** <ul><li> 放大/缩小:滚动鼠标滚轮或使用两根手指滑动 </li><li> 移动:按下右键并拖拽或或使用三根手指滑动</li></ul> |
0
apollo_public_repos/apollo/docs/13_Apollo Tool
apollo_public_repos/apollo/docs/13_Apollo Tool/可视化交互工具Dremview/dreamview_usage_table.md
# Dreamview Usage Table Dreamview is a web application that, 1. visualizes the current output of relevant autonomous driving modules, e.g. planning trajectory, car localization, chassis status, etc. 2. provides human-machine interface for users to view hardware status, turn on/off of modules, and start the autonomous driving car. 3. provides debugging tools, such as PnC Monitor to efficiently track module issues. ## Layout and Features The application layout is divided into several regions: header, sidebar, main view, and tool view. ### Header The Header has 3 drop-downs that can be set as shown: ![](images/dreamview_usage_table/header.png) The Co-Driver switch is used to detect disengagement event automatically. Once detected, Dreamview will display a pop-up of the data recorder window for the co-driver to enter a new drive event. Depending on the mode chosen from the mode selector, the corresponding modules and commands, defined in [hmi.conf](https://github.com/ApolloAuto/apollo/blob/master/modules/dreamview/conf/hmi.conf), will be presented in the **Module Controller**, and **Quick Start**, respectively. Note: navigation mode is for the purpose of the low-cost feature introduced in Apollo 2.5. Under this mode, Baidu (or Google) Map presents the absolute position of the ego-vehicle, while the main view has all objects and map elements presented in relative positions to the ego-vehicle. ### Sidebar and Tool View ![](images/dreamview_usage_table/sidebar.png) Sidebar panel controls what is displayed in the tool view described below: ### Tasks All the tasks that you could perform in DreamView: ![](images/dreamview_usage_table/tasks.png) * **Quick Start**: commands supported from the selected mode. In general, **setup**: turns on all modules **reset all**: turns off all modules **start auto**: starts driving the vehicle autonomously * **Others**: switches and buttons for tools used frequently * **Module Delay**: the time delay between two messages for each topic * **Console**: monitor messages from the Apollo platform ### Module Controller A panel to view the hardware status and turn the modules on/off ![](images/dreamview_usage_table/module_controller.png) ### Layer Menu A toggle menu for visual elements displays. ![](images/dreamview_usage_table/layer_menu.png) ### Route Editing A visual tool to plan a route before sending the routing request to the Routing module ![](images/dreamview_usage_table/route_editing.png) ### Data Recorder A panel to report issues to drive event topic ("/apollo/drive_event") to rosbag. ![](images/dreamview_usage_table/data_recorder.png) ### Default Routing List of predefined routes or single points, known as point of interest (POI). ![](images/dreamview_usage_table/default_routing.png) If route editing is on, routing point(s) can be added visually on the map. If route editing is off, clicking a desired POI will send a routing request to the server. If the selected POI contains only a point, the start point of the routing request is the current position of the autonomous car; otherwise, the start position is the first point from the desired route. To edit POIs, see [default_end_way_point.txt](../../modules/map/data/demo/default_end_way_point.txt) file under the directory of the Map. For example, if the map selected from the map selector is "Demo", then [default_end_way_point.txt](../../modules/map/data/demo/default_end_way_point.txt) is located under `modules/map/data/demo`. ### Main view: Main view animated 3D computer graphics in a web browser. 
![](images/dreamview_usage_table/mainview.png) Elements in the main view are listed in the table below: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image002.png) | <ul><li>The autonomous car </li></ul> | | ![](images/dreamview_usage_table/0clip_image004.png) | <ul><li>The wheel steering percentage.</li> <li>The status of left/right turn signals (In an emergency situation, both signals will be on.)</li></ul> | | ![](images/dreamview_usage_table/0clip_image003.png) | <ul><li> The traffic signal detected </li></ul> | | ![](images/dreamview_usage_table/0clip_image005.png) |<ul><li> The driving mode (AUTO/DISENGAGED/MANUAL/etc.) </li></ul> | | ![](images/dreamview_usage_table/0clip_image006.png) | <ul><li>The driving speed in km/h as default. Click on the unit to change the unit.</li> <li>The accelerator/brake percentage</li></ul> | | ![](images/dreamview_usage_table/0clip_image026.png) | <ul><li> The red thick line shows the routing suggestion</li></ul> | | ![](images/dreamview_usage_table/0clip_image038.png) |<ul><li> Nudge object decision -- the orange zone indicates the area to avoid </li></ul> | | ![](images/dreamview_usage_table/0clip_image062.png) |<ul><li> The green thick curvy band indicates the planned trajectory </li></ul> | #### Obstacles | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image010.png) | <ul><li>Vehicle obstacle </li></ul> | | ![](images/dreamview_usage_table/0clip_image012.png) | <ul><li>Pedestrian obstacle </li></ul> | | ![](images/dreamview_usage_table/0clip_image014.png) | <ul><li>Bicycle obstacle </li></ul> | | ![](images/dreamview_usage_table/0clip_image016.png) | <ul><li>Unknown obstacle </li></ul> | | ![](images/dreamview_usage_table/0clip_image018.png) | <ul><li>The velocity arrow shows the direction of the movement with the length proportional to the magnitude</li></ul> | | ![](images/dreamview_usage_table/0clip_image020.png) | <ul><li>The white arrow shows the directional heading of the obstacle</li></ul> | | ![](images/dreamview_usage_table/0clip_image022.png) | The yellow text indicates: <ul><li>The tracking ID of the obstacle.</li><li>The distance from the autonomous car and obstacle speed.</li></ul> | | ![](images/dreamview_usage_table/0clip_image024.png) | <ul><li>The lines show the predicted movement of the obstacle with the same color as the obstacle</li></ul> | #### Planning Decision ##### Decision Fence Decision fences reflect decisions made by planning module to ego-vehicle (main) and obstacles (objects). 
Each type of decision is presented in different color and icon as shown below: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image028.png) | <ul><li>**Stop** depicting the primary stopping reason</li></ul> | | ![](images/dreamview_usage_table/0clip_image030.png) | <ul><li>**Stop** depicting the object stopping reason</li></ul> | | ![2](images/dreamview_usage_table/0clip_image032.png) | <ul><li>**Follow** object</li></ul> | | ![](images/dreamview_usage_table/0clip_image034.png) | <ul><li>**Yield** object decision -- the dotted line connects with the respective object</li></ul> | | ![](images/dreamview_usage_table/0clip_image036.png) | <ul><li>**Overtake** object decision -- the dotted line connects with the respective object</li></ul> | Changing lane is a special decision and hence, instead of having decision fence, a change lane icon shows on the autonomous car: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/change-lane-left.png) | <ul><li>Change to **Left** lane </li></ul>| | ![](images/dreamview_usage_table/change-lane-right.png) | <ul><li>Change to **Right** lane </li></ul>| When a yield decision is made based on the "Right of Way" laws at a stop-sign intersection, the obstacles to be yielded will have the yield icon on top: | Visual Element | Depiction Explanation | | ---------------------------------------------------- | ------------------------------ | | ![](images/dreamview_usage_table/0clip_image037.png) | Obstacle to yield at stop sign | ##### Stop reasons When a STOP decision fence is shown, the reason to stop is displayed on the right side of the stop icon. 
Possible reasons and the corresponding icons are: | Visual Element | Depiction Explanation | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/0clip_image040.png) | <ul><li> **Clear-zone in front** </li></ul>| | ![](images/dreamview_usage_table/0clip_image042.png) | <ul><li> **Crosswalk in front** </li></ul>| | ![](images/dreamview_usage_table/0clip_image044.png) | <ul><li> **Destination arrival** </li></ul>| | ![](images/dreamview_usage_table/0clip_image046.png) | <ul><li> **Emergency** </li></ul> | | ![](images/dreamview_usage_table/0clip_image048.png) | <ul><li> **Auto mode is not ready** </li></ul>| | ![](images/dreamview_usage_table/0clip_image050.png) | <ul><li> **Obstacle is blocking the route**</li></ul> | | ![](images/dreamview_usage_table/0clip_image052.png) | <ul><li> **Pedestrian crossing in front** </li></ul> | | ![](images/dreamview_usage_table/0clip_image054.png) | <ul><li> **Traffic light is yellow/red** </li></ul>| | ![](images/dreamview_usage_table/0clip_image056.png) | <ul><li> **Vehicle in front** </li></ul> | | ![](images/dreamview_usage_table/0clip_image058.png) | <ul><li> **Stop sign in front** </li></ul>| | ![](images/dreamview_usage_table/0clip_image059.png) | <ul><li> **Pull over** </li></ul>| | ![](images/dreamview_usage_table/0clip_image060.png) | <ul><li> **Yield sign in front** </li></ul> | #### Point of View Main view that reflects the point of view chosen from **Layer Menu**: | Visual Element | Point of View | | ---------------------------------------- | ---------------------------------------- | | ![](images/dreamview_usage_table/default_view.png) | <ul><li>**Default** </li></ul> | | | ![](images/dreamview_usage_table/near_view.png) | <ul><li>**Near** </li></ul> | | | ![](images/dreamview_usage_table/overhead_view.png) | <ul><li>**Overhead** </li></ul> | | | ![](images/dreamview_usage_table/map_view.png) | **Map** <ul><li> To zoom in/out: mouse scroll or pinch with two fingers </li><li> To move around:right-click and drag or swipe with three fingers</li></ul> | ## Shortcut Keys | Shortcut Keys | Description | | --------------- | --------------- | | 1 | Toggle **Task** panel | | 2 | Toggle **Module Controller** panel | | 3 | Toggle **Layer Menu** panel | | 4 | Toggle **Route Editing** panel | | 5 | Toggle **Data Recorder** panel | | 6 | Toggle **Audio Capture** panel | | 7 | Toggle **Default Routing** panel | | v | Rotate **Point of View** options | ## PnC Monitor To view the monitor: 1. Build Apollo and run Dreamview on your web browser 2. Turn on the "PNC Monitor" from the 'Others' panel. 3. On the right-hand side, you should be able to view the Planning, Control, Latency graphs as seen below ![](images/Dreamview_landing.png) ### Planning/Control Graphs The Planning/Control tab from the monitor plots various graphs to reflect the internal states of its modules. #### Customizable Graphs for Planning Module [planning_internal.proto](../../modules/planning/proto/planning_internal.proto#L180) is a protobuf that stores debugging information, which is processed by dreamview server and send to dreamview client to help engineers debug. For users who want to plot their own graphs for new planning algorithms: 1. Fill in the information of your "chart" defined in planning_internal.proto. 2. 
X/Y axis: [**chart.proto** ](../../modules/dreamview/proto/chart.proto) has "Options" that you could set for axis which include * min/max: minimum/maximum number for the scale * label_string: axis label * legend_display: to show or hide a chart legend. <img src="images/dreamview_usage_table/pncmonitor_options.png" width="600" height="300" /> 3. Dataset: * Type: each graph can have multiple lines, polygons, and/or car markers defined in [**chart.proto**](../../modules/dreamview/proto/chart.proto): * Line: <img src="images/dreamview_usage_table/pncmonitor_line.png" width="600" height="300" /> * Polygon: <img src="images/dreamview_usage_table/pncmonitor_polygon.png" width="600" height="300" /> * Car: <img src="images/dreamview_usage_table/pncmonitor_car.png" width="600" height="300" /> * Label: each dataset must have a unique "Label" to each chart in order to help dreamview identify which dataset to update. * Properties: for polygon and line, you can set styles. Dreamview uses **Chartjs.org** for graphs. Below are common ones: | Name | Description | Example | | ----------- | --------------------------------------- | ----------------------- | | color | The line color | rgba(27, 249, 105, 0.5) | | borderWidth | The line width | 2 | | pointRadius | The radius of the point shape | 1 | | fill | Whether to fill the area under the line | false | | showLine | Whether to draw the line | true | Refer to https://www.chartjs.org/docs/latest/charts/line.html for more properties. 4. Sample: You could look into [on_lane_planning.cc](../../modules/planning/on_lane_planning.cc#L562) for a code sample. #### Additional Planning Paths For users who want to render additional paths on dreamview 3D scene, add the desired paths to the "path" field in [planning_internal.proto](../../modules/planning/proto/planning_internal.proto#L164). These paths will be rendered when PnC Monitor is on: ![](images/dreamview_usage_table/pncmonitor_paths.png) Dreamview has predefined styles for the first four paths: | Properties | Path 1 | Path 2 | Path 3 | Path 4 | | ---------- | -------- | -------- | --------- | -------- | | width | 0.8 | 0.15 | 0.4 | 0.65 | | color | 0x01D1C1 | 0x36A2EB | 0x8DFCB4 | 0xD85656 | | opacity | 0.65 | 1 | 0.7 | 0.8 | | zOffset | 4 | 7 | 6 | 5 | If you have more than four paths to render or want to change the styles, edit the planning.pathProperties value in [dreamview/frtonend/dist/parameters.json](../../modules/dreamview/frontend/dist/parameters.json) . ### Latency graph The graph displays the difference in time when the module receives sensor input data to when it will publish this data. ![](images/Dreamview_landing2.png) The Latency Graph can be used to track the latency each individual faces. The graphs are coloured differently to help distinguish the modules and a key is included for better understanding. The graph is plotted as Latency measured in ms vs Timestamp measure in seconds as seen in the image below. ![](images/Latency.png)
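Returning to the customizable planning graphs described earlier, the snippet below sketches how a chart could be filled in from planning code. The accessor names are assumptions derived from the field names discussed above (`options`, `line`, `label`, `properties`, ...), not verified signatures, and the generated header path is likewise assumed; [on_lane_planning.cc](../../modules/planning/on_lane_planning.cc#L562) remains the authoritative sample.

```cpp
#include <utility>
#include <vector>

#include "modules/dreamview/proto/chart.pb.h"  // assumed generated header path

// Sketch only: fills one line chart with a planned speed profile. Accessor
// names mirror the proto fields described in this section and may not match
// chart.proto exactly -- verify against the proto before relying on them.
void FillSpeedChart(const std::vector<std::pair<double, double>>& t_v,
                    apollo::dreamview::Chart* chart) {
  chart->set_title("speed profile");
  auto* options = chart->mutable_options();
  options->mutable_x()->set_label_string("t (s)");
  options->mutable_y()->set_label_string("v (m/s)");
  options->set_legend_display(true);

  auto* line = chart->add_line();
  line->set_label("planned_speed");  // each dataset needs a unique label
  for (const auto& point : t_v) {
    auto* p = line->add_point();
    p->set_x(point.first);
    p->set_y(point.second);
  }
  (*line->mutable_properties())["color"] = "rgba(27, 249, 105, 0.5)";
  (*line->mutable_properties())["showLine"] = "true";
}
```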
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_lidar_tracker_algorithm_cn.md
# 如何添加新的lidar匹配算法 Perception中的lidar数据流如下: ![](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/lidar_perception_data_flow.png) 本篇文档所介绍的lidar检测算法位于图中的Recognition Component中。当前Recognition Component的架构如下: ![lidar recognition](images/lidar_recognition.png) 从以上结构中可以清楚地看到lidar匹配算法是位于Recognition Component的 `base_lidar_obstacle_tracking` 中的抽象成员类 `base_multi_target_tracker` 的派生类。下面将详细介绍如何基于当前结构添加新的lidar匹配算法。 Apollo默认的lidar匹配算法为MlfEngine,它可以轻松更改或替换为不同的算法。本篇文档将介绍如何引入新的lidar匹配算法,添加新算法的步骤如下: 1. 定义一个继承基类 `base_multi_target_tracker` 的类 2. 实现新类 `NewLidarTracker` 3. 为新类 `NewLidarTracker` 配置config的proto文件 4. 更新 lidar_obstacle_tracking.conf 为了更好的理解,下面对每个步骤进行详细的阐述: ## 定义一个继承基类 `base_multi_target_tracker` 的类 所有的lidar匹配算法都必须继承基类 `base_multi_target_tracker`,它定义了一组接口。 以下是匹配算法继承基类的示例: ```c++ namespace apollo { namespace perception { namespace lidar { class NewLidarTracker : public BaseMultiTargetTracker { public: NewLidarTracker(); virtual ~NewLidarTracker() = default; bool Init(const MultiTargetTrackerInitOptions& options = MultiTargetTrackerInitOptions()) override; bool Track(const MultiTargetTrackerOptions& options, LidarFrame* frame) override; std::string Name() const override; }; // class NewLidarTracker } // namespace lidar } // namespace perception } // namespace apollo ``` 基类 `base_multi_target_tracker` 已定义好各虚函数签名,接口信息如下: ```c++ struct MultiTargetTrackerInitOptions {}; struct MultiTargetTrackerOptions {}; struct LidarFrame { // point cloud std::shared_ptr<base::AttributePointCloud<base::PointF>> cloud; // world point cloud std::shared_ptr<base::AttributePointCloud<base::PointD>> world_cloud; // timestamp double timestamp = 0.0; // lidar to world pose Eigen::Affine3d lidar2world_pose = Eigen::Affine3d::Identity(); // lidar to world pose Eigen::Affine3d novatel2world_pose = Eigen::Affine3d::Identity(); // hdmap struct std::shared_ptr<base::HdmapStruct> hdmap_struct = nullptr; // segmented objects std::vector<std::shared_ptr<base::Object>> segmented_objects; // tracked objects std::vector<std::shared_ptr<base::Object>> tracked_objects; // point cloud roi indices base::PointIndices roi_indices; // point cloud non ground indices base::PointIndices non_ground_indices; // secondary segmentor indices base::PointIndices secondary_indices; // sensor info base::SensorInfo sensor_info; // reserve string std::string reserve; void Reset(); void FilterPointCloud(base::PointCloud<base::PointF> *filtered_cloud, const std::vector<uint32_t> &indices); }; ``` ## 实现新类 `NewLidarTracker` 为了确保新的匹配算法能顺利工作,`NewLidarTracker`至少需要重写`base_multi_target_tracker`中定义的接口Init(),Track()和Name()。其中Init()函数负责完成加载配置文件,初始化类成员等工作;而Track()则负责实现算法的主体流程。一个具体的`NewLidarTracker.cc`实现示例如下: ```c++ namespace apollo { namespace perception { namespace lidar { bool NewLidarTracker::Init(const MultiTargetTrackerInitOptions& options) { /* 你的算法初始化部分 */ } bool NewLidarTracker::Track(const MultiTargetTrackerOptions& options, LidarFrame* frame) { /* 你的算法实现部分 */ } std::string NewLidarTracker::Name() const { /* 返回你的匹配算法名称 */ } PERCEPTION_REGISTER_MULTITARGET_TRACKER(NewLidarTracker); //注册新的lidar_tracker } // namespace lidar } // namespace perception } // namespace apollo ``` ## 为新类 `NewLidarTracker` 配置config的proto文件 按照下面的步骤添加新lidar匹配算法的配置和参数信息: 1. 根据算法要求为新lidar匹配算法配置config的`proto`文件。作为示例,可以参考以下位置的`multi_lidar_fusion`的`proto`定义:`modules/perception/lidar/lib/tracker/multi_lidar_fusion/proto/multi_lidar_fustion_config.proto` 2. 
定义新的`proto`之后,例如`newlidartracker_config.proto`,输入以下内容: ```protobuf syntax = "proto2"; package apollo.perception.lidar; message NewLidarTrackerConfig { double parameter1 = 1; int32 parameter2 = 2; } ``` 3. 参考如下内容更新 `modules/perception/production/conf/perception/lidar/config_manager.config`文件: ```protobuf model_config_path: "./conf/perception/lidar/modules/newlidartracker_config.config" ``` 4. 参考同级别目录下 `modules/multi_lidar_fusion.config` 内容创建 `newlidartracker.config`: ```protobuf model_configs { name: "NewLidarTracker" version: "1.0.0" string_params { name: "root_path" value: "./data/perception/lidar/models/newlidartracker" } } ``` 5. 参考 `multi_lidar_tracker` 在目录 `modules/perception/production/data/perception/lidar/models/` 中创建 `newlidartracker` 文件夹,并根据需求创建不同传感器的 `.conf` 文件: ``` 注意:此处 "*.conf" 文件应对应步骤1,2中的proto文件格式. ``` ## 更新 lidar_obstacle_tracking.conf 要使用Apollo系统中的新lidar匹配算法,需要将 `modules/perception/production/data/perception/lidar/models/lidar_obstacle_pipline` 中的对应传感器的 `lidar_obstacle_tracking.conf` 文件的 `multi_target_tracker` 字段值改为 "NewLidarTracker" 在完成以上步骤后,您的新lidar匹配算法便可在Apollo系统中生效。
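作为补充,下面给出一个 `Init()` 读取上述配置的最小示意实现(仅为示例:实际工程中配置路径一般通过 ConfigManager 的 `root_path` 解析,此处的配置文件名与 proto 生成头文件路径均为假设):

```c++
#include <string>

#include "cyber/common/file.h"
#include "cyber/common/log.h"
// 假设的 proto 生成头文件路径,请按你的实际目录调整
#include "modules/perception/lidar/lib/tracker/newlidartracker/proto/newlidartracker_config.pb.h"

namespace apollo {
namespace perception {
namespace lidar {

bool NewLidarTracker::Init(const MultiTargetTrackerInitOptions& options) {
  // 示意写法:为便于说明直接写死配置路径,实际应通过配置管理机制获取
  const std::string config_file =
      "/apollo/modules/perception/production/data/perception/lidar/models/"
      "newlidartracker/newlidartracker.conf";
  NewLidarTrackerConfig config;
  if (!cyber::common::GetProtoFromFile(config_file, &config)) {
    AERROR << "Failed to load config: " << config_file;
    return false;
  }
  // 使用 config.parameter1() / config.parameter2() 初始化内部状态
  return true;
}

}  // namespace lidar
}  // namespace perception
}  // namespace apollo
```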
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/apollo1.5_perception_module_study_notes_cn.md
# Perception Module 分析 ## 功能: 感知障碍物,预测障碍物的运动轨迹。 ## 数据流图: ![perception data flow](images/perception_node_arch.bmp) ### 输入: * 点云数据:/apollo/sensor/velodyne64/compensator/PointCloud2。 * 坐标系转换关系:/tf(world->novatel)。 * HD Map。 * LiDAR外参:/tf_static(novatel->velodyne64)。 * 注:在最小仿真系统中,/tf,/tf_static和HD map都不是以topic形式作为输入的。 ### 输出: * 带航向和速度,障碍物的3D轨道消息:/apollo/perception/obstacles。 ## 代码分析: ### main.cc: * 主节点文件。 * 通过apollo顶层宏APOLLO_MAIN,创建ros节点Perception。 ### perception.h,perception.cc: * 模块主体文件。 * 定义实现Perception类,用于表述perception模块。 * Name()函数:返回模块名字,也就是ros节点名字,即"perception"。 * Init()函数:模块初始化函数。AdapterManager::Init()函数通过配置文件adapter.conf创建node handle,以及相应的topics;lidar_process_->Init()函数配置对激光雷达数据处理的算法;检测激光雷达是否有数据;注册点云数据的回调函数。 * OnPointCloud()函数:数据处理回调函数。判断lidar处理算法是否就绪;lidar_process_->Process()函数使用前面注册的雷达处理算法依次处理雷达数据;lidar_process_->GeneratePbMsg()函数使用处理后的数据生成障碍物信息;AdapterManager::PublishPerceptionObstacles()函数基于adapter架构发布障碍物信息。 ### lidar_process.h,lidar_process.c: * 激光雷达数据处理文件,主要包含激光雷达的处理算法。 * 定义实现LidarProcess类,用于处理激光雷达数据。 * init()函数:RegistAllAlgorithm()函数注册激光雷达处理函数,分别是HdmapROIFilter,CNNSegmentation,MinBoxObjectBuilder,HmObjectTracker其对应的时激光雷达数据处理流程。InitFrameDependence()函数配置HD map。InitAlgorithmPlugin()函数将roi_filter_,segmentor_,object_builder_,tracker_实例化,可能是只作为插件,并调用其init()方法。 * Process(const sensor_msgs::PointCloud2& message)函数:调用GetVelodyneTrans()函数获取velodyne2world坐标系转化关系,调用TransPointCloudToPCL()函数由Lidar数据生成PCL,存入point_cloud变量。然后调用Process(timestamp_, point_cloud, velodyne_trans)函数。 * Process(timestamp_, point_cloud, velodyne_trans)函数:hdmap_input_->GetROI()函数从HD map获取ROI;roi_filter_->Filter()函数获取ROI的索引indices;segmentor_->Segment()函数对根据indices对障碍物进行分割,object_builder_->Build()函数重建障碍物,构成6边形;tracker_->Track()函数 预测障碍物运动轨迹。这是perception算法的核心部分,其四个步骤分别使用四种不同的算法,详细算法需要进一步研究。此步结束之后,对激光雷达数据处理结束,剩余部分就是合成障碍物,然后发布出去。 ### HD map的引入: * HD map的引入是通过hdmap_input.cc和hdmap_input.h文件实现,通过定义HDMapInput类表述和操作HD map。 ### 激光雷达数据处理算法: * 每个算法对应obstacle\lidar\下的相应的一个目录,每个算法被包装为一个插件类。每种算法都牵涉比较深的相关数学理论知识,后续可将每一项作为专题研究,例如segment,其核心即是CNN算法。
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/3d_obstacle_perception_cn.md
3D 障碍物感知 =================== Apollo解决的障碍物感知问题: - 高精地图ROI过滤器(HDMap ROI Filter) - 基于卷积神经网络分割(CNN Segmentation) - MinBox 障碍物边框构建(MinBox Builder) - HM对象跟踪(HM Object Tracker) 高精地图ROI过滤器 ------------------------------------- ROI(The Region of Interest)指定从高精地图检索到包含路面、路口的可驾驶区域。高精地图 ROI 过滤器(往下简称“过滤器”)处理在ROI之外的激光雷达点,去除背景对象,如路边建筑物和树木等,剩余的点云留待后续处理。 给定一个高精地图,每个激光雷达点的关系意味着它在ROI内部还是外部。 每个激光雷达点可以查询一个车辆周围区域的2D量化的查找表(LUT)。过滤器模块的输入和输出汇总于下表。 |输入 |输出 | |------------------------------------------------------------------------- |---------------------------------------------------------------------------| |点云: 激光雷达捕捉的3D点数据集 | 由高精地图定义的ROI内的输入点索引。 | |高精地图: 多边形集合,每个多边形均含有一个有序的点集。 | | 一般来说,Apollo 高精地图 ROI过滤器有以下三步: 1. 坐标转换 2. ROI LUT构造 3. ROI LUT点查询 ### 坐标转换 对于(高精地图ROI)过滤器来说,高精地图数据接口被定义为一系列多边形集合,每个集合由世界坐标系点组成有序点集。高精地图ROI点查询需要点云和多边形处在相同的坐标系,为此,Apollo将输入点云和HDMap多边形变换为来自激光雷达传感器位置的地方坐标系。 ### ROI LUT构造 Apollo采用网格显示查找表(LUT),将ROI量化为俯视图2D网格,以此决定输入点是在ROI之内还是之外。 如图1所示,该LUT覆盖了一个矩形区域,该区域位于高精地图边界上方,以普通视图周围的预定义空间范围为边界。它代表了与ROI关联网格的每个单元格(如用1/0表示在ROI的内部/外部)。 为了计算效率,Apollo使用 **扫描线算法**和 **位图编码**来构建ROI LUT。 <img src="images/3d_obstacle_perception/roi_lookup_table.png"> <div align=center>图 1 ROI显示查找表(LUT)</div> 蓝色线条标出了高精地图ROI的边界,包含路标与路口。红色加粗点表示对应于激光雷达传感器位置的地方坐标系原始位置。2D网格由8*8个绿色正方形组成,在ROI中的单元格,为蓝色填充的正方形,而之外的是黄色填充的正方形。 ### ROI LUT点查询 基于ROI LUT,查询每个输入点的关系使用两步认证。对于点查询过程,Apollo数据编译输出如下,: 1. 检查点在ROI LUT矩形区域之内还是之外。 2. 查询LUT中相对于ROI关联点的相应单元格。 3. 收集属于ROI的所有点,并输出其相对于输入点云的索引。 用户定义的参数可在配置文件`modules/perception/model/hdmap_roi_filter.config`中设置,HDMap ROI Filter 参数使用参考如下表格: |参数名称 |使用 |默认 | |------------------- |------------------------------------------------------------------------------ |------------| |range | 基于LiDAR传感器点的2D网格ROI LUT的图层范围),如(-70, 70)*(-70, 70) |70.0 米 | |cell_size | 用于量化2D网格的单元格的大小。 |0.25 米 | |extend_dist | 从多边形边界扩展ROI的距离。 |0.0 米 | 基于CNN的障碍物分割 ------------------------------------------------ 高精地图 ROI过滤之后,Apollo得到已过滤、只包含属于ROI内的点云,大部分背景障碍物,如路侧的建筑物、树木等均被移除,ROI内的点云被传递到分割模块。分割模块检测和划分前景障碍物,例如汽车,卡车,自行车和行人。 |输入 |输出 | |----------------------------------------------------------------------------|---------------------------------------------------------------| |点云(3D数据集) |对应于ROI中的障碍物对象数据集 | |表示在HDMap中定义的ROI内的点的点索引 | | Apollo 使用深度卷积神经网络提高障碍物识别与分割的精度,障碍物分割包含以下四步: - 通道特征提取 - 基于卷积神经网络的障碍物预测 - 障碍物集群 - 后期处理 卷积神经网络详细介绍如下: ### 通道特征提取 给定一个点云框架,Apollo在地方坐标系中构建俯视图(即投影到X-Y平面)2D网格。 基于点的X、Y坐标,相对于LiDAR传感器原点的预定范围内,每个点被量化为2D网格的一个单元。 量化后,Apollo计算网格内每个单元格中点的8个统计测量,这将是下一步中传递给CNN的输入通道特征。 计算的8个统计测量: 1. 单元格中点的最大高度 2. 单元格中最高点的强度 3. 单元格中点的平均高度 4. 单元格中点的平均强度 5. 单元格中的点数 6. 单元格中心相对于原点的角度 7. 单元格中心与原点之间的距离 8. 
二进制值标示单元格是空还是被占用 ### 基于卷积神经网络的障碍物预测 基于上述通道特征,Apollo使用深度完全卷积神经网络(FCNN)来预测单元格障碍物属性,包括潜在物体中心的偏移位移(称为中心偏移)、对象性 积极性和物体高度。如图2所示,网络的输入为 *W* x *H* x *C* 通道图像,其中: - *W* 代表网格中的列数 - *H* 代表网格中的行数 - *C* 代表通道特征数 完全卷积神经网络由三层构成: - 下游编码层(特征编码器) - 上游解码层(特征解码器) - 障碍物属性预测层(预测器) 特征编码器将通道特征图像作为输入,并且随着特征抽取的增加而连续**下采样**其空间分辨率。 然后特征解码器逐渐对特征图像 **上采样**到输入2D网格的空间分辨率,可以恢复特征图像的空间细节,以促进单元格方向的障碍物位置、速度属性预测。 根据具有非线性激活(即ReLu)层的堆叠卷积/分散层来实现 **下采样**和 **上采样**操作。 <div align=center><img src="images/3d_obstacle_perception/FCNN.png" width="99%"></div> <div align=center>图 2 FCNN在单元格方向上的障碍物预测</div> ### 障碍物聚类 在基于CNN的预测之后,Apollo获取单个单元格的预测信息。利用四个单元对象属性图像,其中包含: - 中心偏移 - 对象性 - 积极性 - 对象高度 为生成障碍物,Apollo基于单元格中心偏移,预测构建有向图,并搜索连接的组件作为候选对象集群。 如图3所示,每个单元格是图的一个节点,并且基于单元格的中心偏移预测构建有向边,其指向对应于另一单元的父节点。 如图3,Apollo采用压缩的联合查找算法(Union Find algorithm )有效查找连接组件,每个组件都是候选障碍物对象集群。对象是单个单元格成为有效对象的概率。因此,Apollo将非对象单元定义为目标小于0.5的单元格。因此,Apollo过滤出每个候选对象集群的空单元格和非对象集。 <div align=center><img src="images/3d_obstacle_perception/obstacle_clustering.png" width="99%"></div> <div align=center>图 3 障碍聚类</div> (a) 红色箭头表示每个单元格对象中心偏移预测;蓝色填充对应于物体概率不小于0.5的对象单元。 (b) 固体红色多边形内的单元格组成候选对象集群。 由五角星填充的红色范围表示对应于连接组件子图的根节点(单元格)。 一个候选对象集群可以由其根节点彼此相邻的多个相邻连接组件组成。 ### 后期处理 聚类后,Apollo获得一组候选对象集,每个候选对象集包括若干单元格。 在后期处理中,Apollo首先对所涉及的单元格的积极性和物体高度值,平均计算每个候选群体的检测置信度分数和物体高度。 然后,Apollo去除相对于预测物体高度太高的点,并收集每个候选集中的有效单元格的点。 最后,Apollo删除具有非常低的可信度分数或小点数的候选聚类,以输出最终的障碍物集/分段。 用户定义的参数可以在`modules/perception/model/cnn_segmentation/cnnseg.conf`的配置文件中设置。 下表说明了CNN细分的参数用法和默认值: |参数名称 |使用说明 |默认值 | |-----------------------------------|--------------------------------------------------------------------------------------------|-----------| |objectness_thresh |用于在障碍物聚类步骤中过滤掉非对象单元的对象的阈值。 |0.5 | |use_all_grids_for_clustering |指定是否使用所有单元格在障碍物聚类步骤中构建图形的选项。如果不是,则仅考虑占用的单元格。 |true | |confidence_thresh |用于在后期处理过程中滤出候选聚类的检测置信度得分阈值。 |0.1 | |height_thresh |如果是非负数,则在后处理步骤中将过滤掉高于预测物体高度的点。 |0.5 meters | |min_pts_num |在后期处理中,删除具有小于min_pts_num点的候选集群。 |3 | |use_full_cloud |如果设置为true,则原始点云的所有点将用于提取通道特征。 否则仅使用输入点云的点(即,HDMap ROI过滤器之后的点)。 |true | |gpu_id |在基于CNN的障碍物预测步骤中使用的GPU设备的ID。 |0 | |feature_param {width} |2D网格的X轴上的单元格数。 |512 | |feature_param {height} |2D网格的Y轴上的单元格数。 |512 | |feature_param {range} |2D格栅相对于原点(LiDAR传感器)的范围。 |60 meters | **注意:提供的模型是一个样例,仅限于实验所用。** MinBox 障碍物边框构建 -------------- 对象构建器组件为检测到的障碍物建立一个边界框。因为LiDAR传感器的遮挡或距离,形成障碍物的点云可以是稀疏的,并且仅覆盖一部分表面。因此,盒构建器将恢复给定多边形点的完整边界框。即使点云稀疏,边界框的主要目的还是预估障碍物(例如,车辆)的方向。同样地,边框也用于可视化障碍物。 算法背后的想法是找到给定多边形点边缘的所有区域。在以下示例中,如果AB是边缘,则Apollo将其他多边形点投影到AB上,并建立具有最大距离的交点对,这是属于边框的边缘之一。然后直接获得边界框的另一边。通过迭代多边形中的所有边,在以下图4所示,Apollo确定了一个6边界边框,将选择具有最小面积的方案作为最终的边界框。 <div align=center><img src="images/3d_obstacle_perception/object_building.png"></div> <div align=center>图 4 MinBox 对象构建</div> HM对象跟踪 ----------------- HM对象跟踪器跟踪分段检测到的障碍物。通常,它通过将当前检测与现有跟踪列表相关联,来形成和更新跟踪列表,如不再存在,则删除旧的跟踪列表,并在识别出新的检测时生成新的跟踪列表。 更新后的跟踪列表的运动状态将在关联后进行估计。 在HM对象跟踪器中,**匈牙利算法**(Hungarian algorithm)用于检测到跟踪关联,并采用 **鲁棒卡尔曼滤波器**(Robust Kalman Filter) 进行运动估计。 ### 检测跟踪关联(Detection-to-Track Association) 当将检测与现有跟踪列表相关联时,Apollo构建了一个二分图,然后使用 **匈牙利算法**以最小成本(距离)找到最佳检测跟踪匹配。 **计算关联距离矩阵** 首先,建立一个关联距离矩阵。根据一系列关联特征(包括运动一致性,外观一致性等)计算给定检测和一条轨迹之间的距离。HM跟踪器距离计算中使用的一些特征如下所示: |关联特征名称 |描述 | |-------------------------|----------------------------------| |location_distance |评估运动一致性 | |direction_distance |评估运动一致性 | |bbox_size_distance |评估外观一致性 | |point_num_distance |评估外观一致性 | |histogram_distance |评估外观一致性 | 此外,还有一些重要的距离权重参数,用于将上述关联特征组合成最终距离测量。 **匈牙利算法的二分图匹配** 给定关联距离矩阵,如图5所示,Apollo构造了一个二分图,并使用 **匈牙利算法**通过最小化距离成本找到最佳的检测跟踪匹配。它解决了O(n\^3)时间复杂度中的赋值问题。 
为了提高其计算性能,通过删除距离大于合理的最大距离阈值的顶点,将原始的二分图切割成子图后实现了匈牙利算法。 <div align=center><img src="images/3d_obstacle_perception/bipartite_graph_matching.png"></div> <div align=center>图 5 二分图匹配(Bipartite Graph Matching)</div> ### 跟踪动态预估 (Track Motion Estimation) 在检测到跟踪关联之后,HM对象跟踪器使用 **鲁棒卡尔曼滤波器**来利用恒定速度运动模型估计当前跟踪列表的运动状态。 运动状态包括锚点和速度,分别对应于3D位置及其3D速度。 为了克服由不完美的检测引起的可能的分心,在跟踪器的滤波算法中实现了鲁棒统计技术。 **观察冗余** 在一系列重复观测中选择速度测量,即滤波算法的输入,包括锚点移位、边界框中心偏移、边界框角点移位等。冗余观测将为滤波测量带来额外的鲁棒性, 因为所有观察失败的概率远远小于单次观察失败的概率。 **分解** 高斯滤波算法 (Gaussian Filter algorithms)总是假设它们的高斯分布产生噪声。 然而,这种假设可能在运动预估问题中失败,因为其测量的噪声可能来自直方分布。 为了克服更新增益的过度估计,在过滤过程中使用故障阈值。 **更新关联质量** 原始卡尔曼滤波器更新其状态不区分其测量的质量。 然而,质量是滤波噪声的有益提示,可以估计。 例如,在关联步骤中计算的距离可以是一个合理的测量质量估计。 根据关联质量更新过滤算法的状态,增强了运动估计问题的鲁棒性和平滑度。 HM对象跟踪器的高级工作流程如图6所示。 <div align=center><img src="images/3d_obstacle_perception/hm_object_tracker.png"></div> <div align=center>图 6 HM对象跟踪器工作流</div> 1)构造跟踪对象并将其转换为世界坐标。 2)预测现有跟踪列表的状态,并对其匹配检测。 3)在更新后的跟踪列表中更新运动状态,并收集跟踪结果。 ## 参考 - [匈牙利算法](https://zh.wikipedia.org/zh-cn/%E5%8C%88%E7%89%99%E5%88%A9%E7%AE%97%E6%B3%95) - [地方坐标系](https://baike.baidu.com/item/%E5%9C%B0%E6%96%B9%E5%9D%90%E6%A0%87%E7%B3%BB/5154246) - [Fully Convolutional Networks for Semantic Segmentation](https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf)
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/vectornet_lstm_evaluator.md
# VECTORNET_LSTM_EVALUATOR # Introduction The Vectornet LSTM evaluator is based on a GNN encoding network that takes environment elements and traffic agents as nodes. Compared with the semantic map CNN model, it has fewer parameters to learn and better performance. ![Diagram](images/vectornet.svg) # Where is the code Please refer to the online inference code: [vectornet lstm evaluator](https://github.com/ApolloAuto/apollo/modules/prediction/evaluator/vehicle/vectornet_evaluator.h). ## References [1]: Gao, Jiyang, et al. "Vectornet: Encoding hd maps and agent dynamics from vectorized representation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_camera_tracker_algorithm_cn.md
# 如何添加新的camera匹配算法 Perception中的camera数据流如下: ![camera overview](images/Camera_overview.png) 本篇文档所介绍的camera匹配算法分为两种,分别为针对交通信号灯的匹配算法和针对障碍物的匹配算法(针对车道线的匹配算法虽然已预留接口类,但目前暂未部署)。这两种匹配算法分别位于图中的Traffic_light和Obstacle三两大Component中。各Component的架构如下: 交通信号灯感知: ![traffic light component](images/camera_traffic_light_detection.png) 障碍物感知: ![obstacle component](images/camera_obstacle_detection.png) 从以上结构中可以清楚地看到,各个component都有自己的抽象类成员 `base_XXX_tracker`。对应的匹配算法作为 `base_XXX_tracker` 的不同的派生类,继承各自的基类实现算法的部署。由于各tracker基类在结构上非常相似,下面将以 ` base_obstacle_tracker` 为例介绍如何基于当前结构添加新的camera障碍物匹配算法。新增交通信号灯匹配算法的步骤相同。 Apollo在Obstacle Detection中默认提供了1种camera匹配算法--OMTObstacleTracker,它们可以被轻松更改或替换为不同的算法。算法的输入都是经过检测算法识别的目标级障碍物信息,输出都是经过匹配跟踪算法筛选后的目标级障碍物信息。本篇文档将介绍如何引入新的Camera匹配算法,添加新算法的步骤如下: 1. 定义一个继承基类 `base_obstacle_tracker` 的类 2. 实现新类 `NewObstacleTracker` 3. 为新类 `NewObstacleTracker` 配置param的proto文件 4. 更新config文件使新的算法生效 为了更好的理解,下面对每个步骤进行详细的阐述: ## 定义一个继承基类 `base_obstacle_tracker` 的类 所有的camera匹配算法都必须继承基类`base_obstacle_tracker`,它定义了一组接口。 以下是匹配算法继承基类的示例: ```c++ namespace apollo { namespace perception { namespace camera { class NewObstacleTracker : public BaseObstacleTracker { public: NewObstacleTracker(); virtual ~NewObstacleTracker() = default; bool Init(const ObstacleTrackerInitOptions& options) override; bool Predict(const ObstacleTrackerOptions &options, CameraFrame *frame) override; bool Associate2D(const ObstacleTrackerOptions &options, CameraFrame *frame) override; bool Associate3D(const ObstacleTrackerOptions &options, CameraFrame *frame) override; bool Track(const ObstacleTrackerOptions& options, CameraFrame* frame) override; std::string Name() const override; }; // class NewObstacleTracker } // namespace camera } // namespace perception } // namespace apollo ``` 基类 `base_obstacle_tracker` 已定义好各虚函数签名,接口信息如下: ```c++ struct ObstacleTrackerInitOptions : public BaseInitOptions { float image_width; float image_height; }; struct ObstacleTrackerOptions {}; struct CameraFrame { // timestamp double timestamp = 0.0; // frame sequence id int frame_id = 0; // data provider DataProvider *data_provider = nullptr; // calibration service BaseCalibrationService *calibration_service = nullptr; // hdmap struct base::HdmapStructPtr hdmap_struct = nullptr; // tracker proposed objects std::vector<base::ObjectPtr> proposed_objects; // segmented objects std::vector<base::ObjectPtr> detected_objects; // tracked objects std::vector<base::ObjectPtr> tracked_objects; // feature of all detected object ( num x dim) // detect lane mark info std::vector<base::LaneLine> lane_objects; std::vector<float> pred_vpt; std::shared_ptr<base::Blob<float>> track_feature_blob = nullptr; std::shared_ptr<base::Blob<float>> lane_detected_blob = nullptr; // detected traffic lights std::vector<base::TrafficLightPtr> traffic_lights; // camera intrinsics Eigen::Matrix3f camera_k_matrix = Eigen::Matrix3f::Identity(); // narrow to obstacle projected_matrix Eigen::Matrix3d project_matrix = Eigen::Matrix3d::Identity(); // camera to world pose Eigen::Affine3d camera2world_pose = Eigen::Affine3d::Identity(); EIGEN_MAKE_ALIGNED_OPERATOR_NEW } EIGEN_ALIGN16; // struct CameraFrame ``` ## 实现新类 `NewObstacleTracker` 为了确保新的匹配算法能顺利工作,`NewObstacleTracker` 至少需要重写 `base_obstacle_tracker` 中定义的接口Init(),Track()和Name()。其中Init()函数负责完成加载配置文件,初始化类成员等工作;而Track()则负责实现算法的主体流程。一个具体的`NewObstacleTracker.cc`实现示例如下: ``` 注意:当前版本base_obstacle_tracker.h尚未将算法流程封装到Track()函数中,需要完全重写其所有接口函数。 ``` ```c++ namespace apollo { namespace perception { namespace camera { bool NewObstacleTracker::Init(const 
ObstacleTrackerInitOptions& options) { /* 你的算法初始化部分 */ } bool NewObstacleTracker::Track(const ObstacleTrackerInitOptions& options, CameraFrame *frame) { /* 你的算法实现部分 */ } bool NewObstacleTracker::Predict(const ObstacleTrackerOptions &options, CameraFrame *frame) { /* 你的算法实现部分--预测 */ } bool Associate2D(const ObstacleTrackerOptions &options, CameraFrame *frame){ /* 你的算法实现部分--2D匹配 */ } bool Associate3D(const ObstacleTrackerOptions &options, CameraFrame *frame){ /* 你的算法实现部分--3D匹配 */ } std::string NewObstacleTracker::Name() const { /* 返回你的匹配算法名称 */ } REGISTER_OBSTACLE_TRACKER(NewObstacleTracker); //注册新的camera_obstacle_tracker } // namespace camera } // namespace perception } // namespace apollo ``` ## 为新类 `NewObstacleTracker` 配置param的proto文件 按照下面的步骤添加新camera匹配算法的参数信息: 1. 根据算法要求为新camera匹配算法配置param的`proto`文件。当然,如果参数适配,您也可以直接使用现有的`proto`文件,或者对现有`proto`文件进行更改。作为示例,可以参考以下位置的`omt`的`proto`定义:`modules/perception/camera/lib/obstacle/tracker/omt/proto/omt.proto`。定义完成后在文件头部输入以下内容: ```protobuf syntax = "proto2"; package apollo.perception.camera.NewObstacleTracker; //你的param参数 ``` 2. 参考 `omt_obstacle_tracker` 在目录 `modules/perception/production/data/perception/camera/models/` 中创建 `new_obstacle_tracker` 文件夹,并根据需求创建 `*.pt` 文件: ``` 注意:此处 "*.pt" 文件应对应步骤1中的proto文件格式. ``` ## 更新config文件使新的算法生效 要使用Apollo系统中的新camera匹配算法,需要根据需求依次对以下config文件进行配置: 1. 参考如下内容更新 `modules/perception/production/conf/perception/camera/obstacle.pt`文件,将之前步骤中新建的 `*.pt` 配置到加载路径中: ```protobuf tracker_param { plugin_param{ name : "NewObstacleTracker" root_dir : "/apollo/modules/perception/production/data/perception/camera/models/new_obstacle_tracker" config_file : "*.pt" } } ``` 2. 若需要对步骤1中 `tracker_param` 的结构更新,或需要新增其他 `_param`,可在 `modules/perception/camera/app/proto/perception.proto` 文件中操作: ```protobuf message PluginParam { optional string name = 1; optional string root_dir = 2; optional string config_file = 3; } message TrackerParam { optional PluginParam plugin_param = 1; } ``` 3. 若步骤1中不直接使用 `obstacle.pt` 文件,而使用其他新建的 `*.pt` 文件,则需要更改 `modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt`. 其对应的 `proto` 文件为 `modules/perception/onboard/proto/fusion_camera_detection_component.proto`: ```protobuf camera_obstacle_perception_conf_dir : "/apollo/modules/perception/production/conf/perception/camera" camera_obstacle_perception_conf_file : "NewObstacleTracker.pt" ``` 在完成以上步骤后,您的新camera匹配算法便可在Apollo系统中生效。
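由于如上所述当前版本尚未将整体流程封装进 `Track()`,下面给出一种纯示意的组织方式,把各接口按阶段串联起来并把结果写入 `frame->tracked_objects`(仅为示例,真实的跟踪器还需要维护历史轨迹、分配稳定的 track_id 等):

```c++
namespace apollo {
namespace perception {
namespace camera {

// 纯示意实现:按本文描述的阶段依次调用各接口,并把关联后保留下来的检测
// 结果作为跟踪结果输出;真实算法需维护轨迹状态与稳定的 track_id。
bool NewObstacleTracker::Track(const ObstacleTrackerOptions& options,
                               CameraFrame* frame) {
  if (frame == nullptr) {
    return false;
  }
  if (!Predict(options, frame) ||      // 预测已有轨迹
      !Associate2D(options, frame) ||  // 图像平面匹配
      !Associate3D(options, frame)) {  // 3D 一致性匹配
    return false;
  }
  frame->tracked_objects = frame->detected_objects;
  return true;
}

}  // namespace camera
}  // namespace perception
}  // namespace apollo
```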
apollo_public_repos/apollo/docs/06_Perception/perception_apollo_3.0_cn.md
# 感知 Apollo 3.0 June 27, 2018 ## 简介 Apollo 3.0 主要针对采用低成本传感器的L2级别自动驾驶车辆。在车道中的自动驾驶车辆通过一个前置摄像头和前置雷达要与关键车辆(在路径上最近的车辆)保持一定的距离。Apollo 3.0 支持在高速公路上不依赖地图的高速自动驾驶。深度网路学习处理图像数据,随着搜集更多的数据,深度网络的性能随着时间的推移将得到改善。 ***安全警告*** Apollo 3.0 不支持没有包含本地道路和说明标示的急转弯道路。感知模块是基于采用深度网络并结合有限数据的可视化检测技术。因此,在我们发布更好的网络之前,驾驶员应该小心驾驶并控制好车辆方向而不能依赖与自动驾驶。请在安全和限制区域进行试驾。 - ***推荐道路*** - ***道路两侧有清晰的白色车道线*** - ***禁止*** - ***急转弯道路*** - ***没有车道线标记的道路*** - ***路口*** - ***对接点或虚线车道线*** - ***公共道路*** ## 感知模块 每个模块的流程图如下所示。 ![Image](images/perception_flow_chart_apollo_3.0.png) **图 1: Apollo 3.0的流程图** ### 深度网络 深度网络摄取图像并为Apollo 3.0提供两个检测输出,车道线和对象。目前,对深度学习中使用单一任务还是协同训练任务还存在一些争议。诸如车道检测网络或物体检测网络的单一网络通常比一个协同训练的多任务网络执行得更好。然而,在给定有限资源的情况下,多个单独的网络将是昂贵的并且在处理中消耗更多时间。因此,对于经济设计而言,协同训练是不可避免的,并且在性能上会有一些妥协。在 Apollo 3.0, YOLO [1][2] 被用作对象和车道线检测的基础网络。该对象具有车辆、卡车、骑车人和行人类别,并由表示成具有方向信息的2-D边界框。通过使用具有一些修改的相同网络进行分段来检测车道线。对于整条车道线,我们有一个单独的网络,以提供更长的车道线,无论是车道线是离散的还是连续的。 ### 物体识别/跟踪 在交通场景中,有两类物体: 静态物体和动态物体。静态物体包括车道线、交通信号灯以及数以千计的以各种语言写成的交通标示。除了驾驶之外,道路上还有多个地标,主要用于视觉定位,包括路灯,障碍物,道路上的桥梁或任何天际线。对于静态物体,Apollo 3.0将仅检测车道线. 在动态物体中,Apollo在路上关心乘用车,卡车,骑自行车者,行人或任何其他物体,包括动物或身体部位。Apollo还可以根据物体所在的车道对物体进行分类。最重要的物体是CIPV(路径中最近的物体)。下一个重要对象将是相邻车道中的物体。 #### 2D-to-3D 边界框 给定一个2D盒子,其3D大小和相机方向,该模块搜索相机坐标系统中的3D位置,并使用该2D盒子的宽度,高度或2D区域估计精确的3D距离。该模块可在没有准确的外部相机参数的情况下工作。 #### 对象跟踪 对象跟踪模块利用多种信息,例如3D位置,2D图像补丁,2D框或深度学习ROI特征。 跟踪问题通过有效地组合线索来表达为多个假设数据关联,以提供路径和检测到的对象之间的最正确关联,从而获得每个对象的正确ID关联。 ### 车道检测/追踪 在静态对象中,我们在Apollo 3.0中将仅处理通道线。该车道用于纵向和横向控制。车道本身引导横向控制,并且在车道内的对象引导纵向控制。 #### 车道线 我们有两种类型的车道线,车道标记段和整个车道线。车道标记段用于视觉定位,整个车道线用于使车辆保持在车道内。 该通道可以由多组折线表示,例如下一个左侧车道线,左侧线,右侧线和下一个右侧线。给定来自深度网络的车道线热图,通过阈值化生成分段的二进制图像。该方法首先找到连接的组件并检测内部轮廓。然后,它基于自我车辆坐标系的地面空间中的轮廓边缘生成车道标记点。之后,它将这些车道标记与具有相应的相对空间(例如,左(L0),右(R0),下左(L1),下(右)(L2)等)标签的若干车道线对象相关联。 ### CIPV (最近路径车辆) CIPV是当前车道中最接近的车辆。对象由3D边界框表示,其从上到下视图的2D投影将对象定位在地面上。然后,检查每个对象是否在当前车道中。在当前车道的对象中,最接近的一个将被选为CIPV。 ### 跟车 跟车是跟随前车的一种策略。从跟踪对象和当前车辆运动中,估计对象的轨迹。该轨迹将指导对象如何在道路上作为一组移动并且可以预测未来的轨迹。有两种跟车尾随,一种是跟随特定车辆的纯尾随,另一种是CIPV引导的尾随,当检测到无车道线时,当前车辆遵循CIPV的轨迹。 输出可视化的快照如图2所示。 ![Image](images/perception_visualization_apollo_3.0.png) **图 2: Apollo 3.0中感知输出的可视化。左上角是基于图像的输出。左下角显示了对象的3D边界框。右图显示了车道线和物体的三维俯视图。CIPV标有红色边框。黄线表示每辆车的轨迹** ### 雷达 + 摄像头融合 给定多个传感器,它们的输出应以协同方式组合。Apollo 3.0,介绍了一套带雷达和摄像头的传感器。对于此过程,需要校准两个传感器。每个传感器都将使用Apollo 2.0中介绍的相同方法进行校准。校准后,输出将以3-D世界坐标表示,每个输出将通过它们在位置,大小,时间和每个传感器的效用方面的相似性进行融合。在学习了每个传感器的效用函数后,摄像机对横向距离的贡献更大,雷达对纵向距离测量的贡献更大。异步传感器融合算法也作为选项提供。 ### 伪车道 所有车道检测结果将在空间上临时组合以诱导伪车道,该车道将被反馈到规划和控制模块。某些车道线在某帧中不正确或缺失。为了提供平滑的车道线输出,使用车辆里程测量的历史车道线。当车辆移动时,保存每个帧的里程表,并且先前帧中的车道线也将保存在历史缓冲器中。检测到的与历史车道线不匹配的车道线将被移除,历史输出将替换车道线并提供给规划模块。 ### 超声波传感器 Apollo 3.0支持超声波传感器。每个超声波传感器通过CAN总线提供被检测对象的距离。来自每个超声波传感器的测量数据被收集并作为ROS主题广播。将来,在融合超声波传感器后,物体和边界的地图将作为ROS的输出发布。 ## 感知输出 PnC的输入将与之前基于激光雷达的系统的输入完全不同。 - 车道线输出 - 折线和/或多项式曲线 - 车道类型按位置:L1(左下车道线),L0(左车道线),R0(右车道线),R1(右下车道线 - 对象输出 - 3D长方体 - 相对速度和方向 - 类型:CIPV,PIHP,其他 - 分类:汽车,卡车,自行车,行人 - Drops:物体的轨迹 世界坐标是3D中的当前车辆坐标,其中后中心轴是原点。 ## 参考 [1] J Redmon, S Divvala, R Girshick, A Farhadi, "你只看一次:统一的实时物体检测" CVPR 2016 [2] J Redmon, A Farhadi, "YOLO9000: 更好, 更快, 更强," arXiv preprint
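作为对上文 CIPV 一节的补充,下面给出一个极简的示意代码,说明"在当前车道内的对象中选取最近者作为 CIPV"这一流程。其中 `Object3D`、`IsInCurrentLane` 以及半车道宽取值均为示例假设,仅用于说明思路,并非 Apollo 的实际实现。

```c++
#include <cmath>
#include <limits>
#include <vector>

// 示意用的简化障碍物结构:x 为纵向距离(米),y 为横向距离(米)
struct Object3D {
  int id = -1;
  double x = 0.0;
  double y = 0.0;
};

// 假设:横向偏移小于半车道宽即认为处于当前车道(真实实现依赖车道线/伪车道)
bool IsInCurrentLane(const Object3D& obj, double half_lane_width = 1.75) {
  return std::fabs(obj.y) < half_lane_width;
}

// 在当前车道内、位于车前方的对象中选取最近者作为 CIPV,返回其下标,-1 表示不存在
int SelectCipv(const std::vector<Object3D>& objects) {
  int cipv_index = -1;
  double min_distance = std::numeric_limits<double>::max();
  for (size_t i = 0; i < objects.size(); ++i) {
    if (!IsInCurrentLane(objects[i])) continue;
    const double distance = std::hypot(objects[i].x, objects[i].y);
    if (objects[i].x > 0.0 && distance < min_distance) {
      min_distance = distance;
      cipv_index = static_cast<int>(i);
    }
  }
  return cipv_index;
}
```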
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_fusion_system_cn.md
# 如何添加新的fusion融合系统 Perception中的详细模型结构如下: ![](images/Fusion_overview.png) 本篇文档所介绍的fusion融合系统位于图中的Fusion Component中。当前Fusion Component的架构如下: ![fusion component](images/fusion.png) 从以上结构中可以清楚地看到fusion融合系统是位于Fusion Component的 `ObstacleMultiSensorFusion` 中的抽象成员类 `BaseFusionSystem` 的派生类。下面将详细介绍如何基于当前结构添加新的fusion融合系统。 Apollo默认提供了1种fusion融合系统 -- Probabilistic Fusion,它可以轻松更改或替换为不同的系统。每种系统的输入都是各传感器检测匹配跟踪后的目标级障碍物信息,输出都是融合匹配跟踪后的目标级障碍物信息。本篇文档将介绍如何引入新的fusion融合系统,添加新系统的步骤如下: 1. 定义一个继承基类 `base_fusion_system` 的类 2. 实现新类 `NewFusionSystem` 3. 为新类 `NewFusionSystem` 配置config的proto文件 4. 更新config文件使新的系统生效 为了更好的理解,下面对每个步骤进行详细的阐述: ## 定义一个继承基类 `base_fusion_system` 的类 所有的fusion融合系统都必须继承基类 `base_fusion_system`,它定义了融合系统的基础成员及其接口。 以下是融合系统继承基类的示例: ```c++ namespace apollo { namespace perception { namespace fusion { class NewFusionSystem : public BaseFusionSystem { public: NewFusionSystem(); ~NewFusionSystem(); NewFusionSystem(const NewFusionSystem&) = delete; NewFusionSystem& operator=(const NewFusionSystem&) = delete; bool Init(const FusionInitOptions& init_options) override; bool Fuse(const FusionOptions& options, const base::FrameConstPtr& sensor_frame, std::vector<base::ObjectPtr>* fused_objects) override; std::string Name() const override; }; // class NewFusionSystem } // namespace fusion } // namespace perception } // namespace apollo ``` 基类 `base_fusion_system` 已定义好各虚函数签名,接口信息如下: ```c++ struct FusionInitOptions { std::vector<std::string> main_sensors; }; struct FusionOptions {}; struct alignas(16) Frame { EIGEN_MAKE_ALIGNED_OPERATOR_NEW Frame() { sensor2world_pose.setIdentity(); } void Reset() { timestamp = 0.0; objects.clear(); sensor2world_pose.setIdentity(); sensor_info.Reset(); lidar_frame_supplement.Reset(); radar_frame_supplement.Reset(); camera_frame_supplement.Reset(); } // @brief sensor information SensorInfo sensor_info; double timestamp = 0.0; std::vector<std::shared_ptr<Object>> objects; Eigen::Affine3d sensor2world_pose; // sensor-specific frame supplements LidarFrameSupplement lidar_frame_supplement; RadarFrameSupplement radar_frame_supplement; CameraFrameSupplement camera_frame_supplement; UltrasonicFrameSupplement ultrasonic_frame_supplement; }; typedef std::shared_ptr<Frame> FramePtr; typedef std::shared_ptr<const Frame> FrameConstPtr; struct alignas(16) Object { EIGEN_MAKE_ALIGNED_OPERATOR_NEW Object(); std::string ToString() const; void Reset(); int id = -1; PointCloud<PointD> polygon; Eigen::Vector3f direction = Eigen::Vector3f(1, 0, 0); float theta = 0.0f; float theta_variance = 0.0f; Eigen::Vector3d center = Eigen::Vector3d(0, 0, 0); Eigen::Matrix3f center_uncertainty; Eigen::Vector3f size = Eigen::Vector3f(0, 0, 0); Eigen::Vector3f size_variance = Eigen::Vector3f(0, 0, 0); Eigen::Vector3d anchor_point = Eigen::Vector3d(0, 0, 0); ObjectType type = ObjectType::UNKNOWN; std::vector<float> type_probs; ObjectSubType sub_type = ObjectSubType::UNKNOWN; std::vector<float> sub_type_probs; float confidence = 1.0f; int track_id = -1; Eigen::Vector3f velocity = Eigen::Vector3f(0, 0, 0); Eigen::Matrix3f velocity_uncertainty; bool velocity_converged = true; float velocity_confidence = 1.0f; Eigen::Vector3f acceleration = Eigen::Vector3f(0, 0, 0); Eigen::Matrix3f acceleration_uncertainty; double tracking_time = 0.0; double latest_tracked_time = 0.0; MotionState motion_state = MotionState::UNKNOWN; std::array<Eigen::Vector3d, 100> drops; std::size_t drop_num = 0; bool b_cipv = false; CarLight car_light; LidarObjectSupplement lidar_supplement; RadarObjectSupplement radar_supplement; CameraObjectSupplement 
camera_supplement; FusionObjectSupplement fusion_supplement; }; using ObjectPtr = std::shared_ptr<Object>; using ObjectConstPtr = std::shared_ptr<const Object>; ``` ## 实现新类 `NewFusionSystem` 为了确保新的融合系统能顺利工作, `NewFusionSystem` 至少需要重写 `base_fusion_system` 中定义的接口Init(), Fuse()和Name()函数。其中Init()函数负责完成加载配置文件,初始化类成员等工作;而Fuse()函数则负责实现系统的主体流程。一个具体的`NewFusionSystem.cc`实现示例如下: ```c++ namespace apollo { namespace perception { namespace fusion { bool NewFusionSystem::Init(const FusionInitOptions& init_options) { /* 你的系统初始化部分 */ } bool NewFusionSystem::Fuse(const FusionOptions& options, const base::FrameConstPtr& sensor_frame, std::vector<base::ObjectPtr>* fused_objects) { /* 你的系统实现部分 */ } std::string NewFusionSystem::Name() const { /* 返回你的融合系统名称 */ } FUSION_REGISTER_FUSIONSYSTEM(NewFusionSystem); //注册新的fusion_system } // namespace fusion } // namespace perception } // namespace apollo ``` ## 为新类 `NewFusionSystem` 配置config的proto文件 按照下面的步骤添加新fusion融合系统的配置信息: 1. 根据系统要求为新fusion融合系统配置config的`proto`文件。作为示例,可以参考以下位置的`probabilistic_fusion_config`的`proto`定义:`modules/perception/proto/probabilistic_fusion_config.proto`. 2. 定义新的`proto`之后,例如`newfusionsystem_config.proto`,在文件头部输入以下内容: ```protobuf syntax = "proto2"; package apollo.perception.fusion; message NewFusionSystemConfig { double parameter1 = 1; int32 parameter2 = 2; } ``` 3. 参考如下内容更新 `modules/perception/production/conf/perception/fusion/config_manager.config`文件: ```protobuf model_config_path: "./conf/perception/fusion/modules/newfusionsystem.config" ``` 4. 参考同级别目录下 `modules/probabilistic_fusion.config` 内容创建 `newfusionsystem.config`: ```protobuf model_configs { # NewFusionSystem model. name: "NewFusionSystem" version: "1.0.0" string_params { name: "root_dir" value: "./data/perception/fusion/" } string_params { name: "config_file" value: "newfusionsystem.pt" } } ``` 5. 参考 `probabilistic_fusion.pt` 在目录 `modules/perception/production/data/perception/fusion/` 中创建 `newfusionsystem.pt` 文件: ``` 注意:此处 "*.pt" 文件应对应步骤1,2中的proto文件格式. ``` ## 更新config文件使新的系统生效 要使用Apollo系统中的新fusion融合系统,需要将 `modules/perception/production/data/perception/fusion/fusion_component_conf.pb.txt` 中的 `fusion_method` 字段值改为 "NewFusionSystem"。 在完成以上步骤后,您的新fusion融合系统便可在Apollo系统中生效。
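补充说明:下面的示意代码以最简化的方式展示 `Fuse()` 主体流程中"关联 + 状态更新"的基本思路——把单个传感器帧中的目标与已有融合目标做最近邻关联,匹配上的做状态更新,未匹配上的作为新目标加入。其中 `SimpleObject`、距离阈值与取均值的更新方式均为示例假设,真实实现(如 Probabilistic Fusion)要复杂得多。

```c++
#include <cmath>
#include <vector>

// 示意用的简化目标结构(并非 Apollo 的 base::Object)
struct SimpleObject {
  double x = 0.0;  // 世界坐标系下中心位置
  double y = 0.0;
  int track_id = -1;
};

// 最近邻关联 + 简单均值更新,示意融合主流程的最基本形态
void NaiveFuse(const std::vector<SimpleObject>& sensor_objects,
               std::vector<SimpleObject>* fused_objects,
               double match_distance = 2.5) {
  for (const auto& obj : sensor_objects) {
    int best = -1;
    double best_dist = match_distance;
    for (size_t i = 0; i < fused_objects->size(); ++i) {
      const double dist = std::hypot(obj.x - (*fused_objects)[i].x,
                                     obj.y - (*fused_objects)[i].y);
      if (dist < best_dist) {
        best_dist = dist;
        best = static_cast<int>(i);
      }
    }
    if (best >= 0) {
      // 关联成功:用简单平均更新融合目标(真实系统通常使用卡尔曼滤波等方法)
      (*fused_objects)[best].x = 0.5 * ((*fused_objects)[best].x + obj.x);
      (*fused_objects)[best].y = 0.5 * ((*fused_objects)[best].y + obj.y);
    } else {
      // 未关联上:作为新的融合目标加入
      fused_objects->push_back(obj);
    }
  }
}
```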
apollo_public_repos/apollo/docs/06_Perception/multiple_lidar_gnss_calibration_guide.md
# Multiple-LiDAR GNSS Calibration Guide ``` NOTE: Supports upto Apollo 3.0. Apollo 3.5 is not supported yet. ``` Welcome to use the Multiple-LiDAR GNSS calibration tool. This guide will show you the steps to successfully calibrate multiple LiDARs. ## Contents - Overview - Preparation - Using the Calibration Tool - Results and Validation ## Overview In many autonomous driving tasks such as HDMap production, the scans from multiple LiDARs need to be registered in a unified coordinate system. In this case, the extrinsic parameters of multiple LiDARs need to be carefully calibrated. This Multiple-LiDAR GNSS calibration tool is developed to solve this problem. ## Preparation 1. Download the [calibration tool](https://github.com/ApolloAuto/apollo/releases/download/v2.5.0/multi_lidar_gnss_calibrator_and_doc.zip), and extract files to `$APOLLO_HOME/modules/calibration`. APOLLO_HOME is the root directory of apollo repository. 2. Choose a calibration place according to the calibration guide provided in Apollo 1.5. 3. Make sure the GNSS is in a good status. To verify this, use `rostopic echo /apollo/sensor/gnss/best_pose` and check the number after keywords `latitude_std_dev`, `longitude_std_dev` and `height_std_dev`. The smaller the deviation, the better the calibration quality. *** We strongly recommend calibrating the sensors when deviations are smaller than 0.02.*** ## Using the Calibration Tool ### Record Calibration Data When the LiDARs and GNSS are ready, use `/apollo/modules/calibration/multi_lidar_gnss/record.sh` to record calibration data. Note that, this script is only for recording velodyne HDL64 and VLP16. For other purpose, some modification of this script is needed or just use rosbag record to do the same thing. Usually, 2 minites length of data is sufficient. After data capture, run `/apollo/modules/calibration/multi_lidar_gnss/calibrate.sh` to calibrate sensors. The script is composed of the following two steps. ### Export Data Once the calibration bag is recorded, use `/apollo/modules/calibration/exporter/export_msgs --config /apollo/modules/calibration/exporter/conf/export_config.yaml` to get sensor data. The only input of the exporter is a YAML configuration file as follow. ```bash bag_path: "/apollo/data/bag/calibration/" # The path where the calibration bag is placed. dump_dir: "/apollo/data/bag/calibration/export/" # The path where the sensor data will be placed using exporter topics: - /apollo/sensor/gnss/odometry: # Odometry topic name type: ApolloOdometry # Odometry type - /apollo/sensor/velodyne16/PointCloud2: # vlp16 topic name type: PointCloud2 # vlp16 type - /apollo/sensor/velodyne64/PointCloud2: # hdl64 topic name type: PointCloud2 # hdl64 type ``` Other topics of PointCloud2 type also can be exported, if new topics are added to the file in the rule as follow. ```bash - TOPIC_NAME: # topic name type: PointCloud2 ``` Till now, we only support `ApolloOdometry` and `PointCloud2`. ### Run the Calibration Tool If all sensor data are exported, run `/apollo/modules/calibration/lidar_gnss_calibrator/multi_lidar_gnss_calibrator --config /apollo/modules/calibration/lidar_gnss_calibrator/conf/multi_lidar_gnss_calibrator_config.yaml` will get the results. The input of the tool is a YAML configuration file as follow. 
```bash # multi-LiDAR-GNSS calibration configurations data: odometry: "/apollo/data/bag/calibration/export/multi_lidar_gnss/_apollo_sensor_gnss_odometry/odometry" lidars: - velodyne16: path: "/apollo/data/bag/calibration/export/multi_lidar_gnss/_apollo_sensor_velodyne16_PointCloud2/" - velodyne64: path: "/apollo/data/bag/calibration/export/multi_lidar_gnss/_apollo_sensor_velodyne64_PointCloud2/" result: "/apollo/data/bag/calibration/export/multi_lidar_gnss/result/" calibration: init_extrinsics: velodyne16: translation: x: 0.0 y: 1.77 z: 1.1 rotation: x: 0.183014 y: -0.183014 z: 0.683008 w: 0.683008 velodyne64: translation: x: 0.0 y: 1.57 z: 1.3 rotation: x: 0.0 y: 0.0 z: 0.707 w: 0.707 steps: - source_lidars: ["velodyne64"] target_lidars: ["velodyne64"] lidar_type: "multiple" fix_target_lidars: false fix_z: true iteration: 3 - source_lidars: ["velodyne16"] target_lidars: ["velodyne16"] lidar_type: "multiple" fix_target_lidars: false fix_z: true iteration: 3 - source_lidars: ["velodyne16"] target_lidars: ["velodyne64"] lidar_type: "multiple" fix_target_lidars: true fix_z: false iteration: 3 ``` The `data` section tells the tool where to get point clouds and odometry file, and also where to save the results. Note that, the keywords in `lidar` node will be recognized as frame id for the LiDARs. The `calibration` section provides initial guess of the extrinsics. ***All extrinsics are from LiDAR to GNSS***, which means this transformation maps the coordinates of a point defined in the LiDAR coordinate system to the coordinates of this point defined in the GNSS coordinate system. The initial guess requires the rotation angle error less than 5 degrees, and the translation error less than 0.1 meter. The `steps` section specifies the calibration procedure. Each step is defined as follow and their meanings are in comments. ```bash - source_lidars: ["velodyne16"] # Source LiDAR in point cloud registration. target_lidars: ["velodyne64"] # Target LiDAR in point cloud registration. lidar_type: "multiple" # "multiple" for multi-beam LiDAR, otherwise "single" fix_target_lidars: true # Whether to fix extrinsics of target LiDARS. Only "true" when align different LiDARs. fix_z: false # Whether to fix the z component of translation. Only "false" when align different LiDARs. iteration: 3 # Iteration number ``` ## Results and Validation The calibration tool saves the results to `result` path as follow. ```bash . └── calib_result ├── velodyne16_novatel_extrinsics.yaml ├── velodyne16_result.pcd ├── velodyne16_result_rgb.pcd ├── velodyne64_novatel_extrinsics.yaml ├── velodyne64_result.pcd └── velodyne64_result_rgb.pcd ``` The two YAML files are extrinsics. To validate the results, use `pcl_viewer *_result.pcd` to check the registration quality. If the sensors are well calibrated, a large amount of details can be identified from the point cloud. For more details, please refer to the calibration guide in Apollo 1.5.
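As a supplementary illustration, the sketch below (assuming Eigen is available) shows how a LiDAR-to-GNSS extrinsic written as in the YAML above — a translation plus a quaternion — maps a point from the LiDAR coordinate system into the GNSS coordinate system. The numeric values are the velodyne16 initial guess from the configuration, and the sample point is made up.

```cpp
#include <iostream>

#include <Eigen/Dense>
#include <Eigen/Geometry>

int main() {
  // Initial-guess extrinsic of velodyne16 from the configuration above.
  const Eigen::Vector3d translation(0.0, 1.77, 1.1);
  const Eigen::Quaterniond rotation(0.683008,   // w
                                    0.183014,   // x
                                    -0.183014,  // y
                                    0.683008);  // z
  Eigen::Affine3d lidar_to_gnss = Eigen::Affine3d::Identity();
  lidar_to_gnss.linear() = rotation.normalized().toRotationMatrix();
  lidar_to_gnss.translation() = translation;

  // A point measured in the LiDAR coordinate system ...
  const Eigen::Vector3d point_in_lidar(10.0, 0.0, -1.5);
  // ... expressed in the GNSS coordinate system.
  const Eigen::Vector3d point_in_gnss = lidar_to_gnss * point_in_lidar;

  std::cout << "point in GNSS frame: " << point_in_gnss.transpose() << std::endl;
  return 0;
}
```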
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_camera_tracker_algorithm.md
# How to add a new camera tracker algorithm The processing flow of camera perception module is shown below: ![camera overview](images/Camera_overview.png) The 2 tracker algorithms introduced by this document were traffic_light_tracker and obstacle_tracker (lane_tracker is reserved but not used so far). These 2 trackers are located in their own component. The architecture of each component is showed below: Traffic Light: ![traffic light component](images/camera_traffic_light_detection.png) Obstacle: ![obstacle component](images/camera_obstacle_detection.png) As we can see clearly from above structure, each component has its own abstract class member `base_XXX_tracker`. Different derived tracker algorithms inherit `base_XXX_tracker` and implement their main flows to complete the deployment. Next, we will take `base_obstacle_tracker` as an example to introduce how to add a new camera tracker algorithm. You could also refer to this document if you want to add traffic light tracker. Apollo has provided one camera tracker algorithm in Obstacle Detection -- OMTObstacleTracker. It could be easily changed or replaced by other algorithms. The input of algorithm should be objective obstacle data processed by previous detector, while the output should be matched and tracked objective obstacle data. This document will introduce how to add a new camera tracker algorithm, the basic task sequence is listed below: 1. Define a class that inherits `base_obstacle_tracker` 2. Implement the class `NewObstacleTracker` 3. Add param proto file for `NewObstacleTracker` 4. Update config file to put your tracker into effect The steps are elaborated below for better understanding: ## Define a class that inherits `base_obstacle_tracker` All the camera tracker algorithms shall inherit `base_obstacle_tracker`,which defines a set of interfaces. 
Here is an example of the tracker implementation: ```c++ namespace apollo { namespace perception { namespace camera { class NewObstacleTracker : public BaseObstacleTracker { public: NewObstacleTracker(); virtual ~NewObstacleTracker() = default; bool Init(const ObstacleTrackerInitOptions& options) override; bool Predict(const ObstacleTrackerOptions &options, CameraFrame *frame) override; bool Associate2D(const ObstacleTrackerOptions &options, CameraFrame *frame) override; bool Associate3D(const ObstacleTrackerOptions &options, CameraFrame *frame) override; bool Track(const ObstacleTrackerOptions& options, CameraFrame* frame) override; std::string Name() const override; }; // class NewObstacleTracker } // namespace camera } // namespace perception } // namespace apollo ``` The function signature of `base_obstacle_tracker` is pre-defined: ```c++ struct ObstacleTrackerInitOptions : public BaseInitOptions { float image_width; float image_height; }; struct ObstacleTrackerOptions {}; struct CameraFrame { // timestamp double timestamp = 0.0; // frame sequence id int frame_id = 0; // data provider DataProvider *data_provider = nullptr; // calibration service BaseCalibrationService *calibration_service = nullptr; // hdmap struct base::HdmapStructPtr hdmap_struct = nullptr; // tracker proposed objects std::vector<base::ObjectPtr> proposed_objects; // segmented objects std::vector<base::ObjectPtr> detected_objects; // tracked objects std::vector<base::ObjectPtr> tracked_objects; // feature of all detected object ( num x dim) // detect lane mark info std::vector<base::LaneLine> lane_objects; std::vector<float> pred_vpt; std::shared_ptr<base::Blob<float>> track_feature_blob = nullptr; std::shared_ptr<base::Blob<float>> lane_detected_blob = nullptr; // detected traffic lights std::vector<base::TrafficLightPtr> traffic_lights; // camera intrinsics Eigen::Matrix3f camera_k_matrix = Eigen::Matrix3f::Identity(); // narrow to obstacle projected_matrix Eigen::Matrix3d project_matrix = Eigen::Matrix3d::Identity(); // camera to world pose Eigen::Affine3d camera2world_pose = Eigen::Affine3d::Identity(); EIGEN_MAKE_ALIGNED_OPERATOR_NEW } EIGEN_ALIGN16; // struct CameraFrame ``` ## Implement the class `NewObstacleTracker` To ensure the new tracker could function properly, `NewObstacleTracker` should at least override the interface Init(), Track(), Name() defined in `base_obstacle_tracker` Init() is resposible for config loading, class member initialization, etc. And Track() will implement the basic logic of algorithm. A concrete `NewObstacleTracker.cc` example is shown: ``` Note:Currently, the algorithm pipeline has not been encapsulated into the Track() function of base_obstacle_tracker.h. Therefore, all the virtual interface should be re-writen. 
``` ```c++ namespace apollo { namespace perception { namespace camera { bool NewObstacleTracker::Init(const ObstacleTrackerInitOptions& options) { /* Initialization of your tracker */ } bool NewObstacleTracker::Track(const ObstacleTrackerInitOptions& options, CameraFrame *frame) { /* Implementation of your tracker */ } bool NewObstacleTracker::Predict(const ObstacleTrackerOptions &options, CameraFrame *frame) { /* Implementation of your tracker -- Predict */ } bool Associate2D(const ObstacleTrackerOptions &options, CameraFrame *frame){ /* Implementation of your tracker -- Associate2D */ } bool Associate3D(const ObstacleTrackerOptions &options, CameraFrame *frame){ /* Implementation of your tracker -- Associate3D */ } std::string NewObstacleTracker::Name() const { /* Return your tracker's name */ } REGISTER_OBSTACLE_TRACKER(NewObstacleTracker); //register the new tracker } // namespace camera } // namespace perception } // namespace apollo ``` ## Add param proto file for `NewObstacleTracker` Follow the steps below to add parameters for your new camera tracker: 1. Create the `proto` file for parameters according to the requirement of your tracker. If the parameters are compatible, you can use or just modify current `proto` directly. As an example, you can refer to the `proto` file from `omt Tracker` at `modules/perception/camera/lib/obstacle/tracker/omt/proto/omt.proto`. Remember to include the following content once you finished your definition: ```protobuf syntax = "proto2"; package apollo.perception.camera.NewObstacleTracker; //Your parameters ``` 2. Refer to `omt_obstacle_tracker` at `modules/perception/production/data/perception/camera/models/` and create your `new_obstacle_tracker` folder and `*.pt` file: ``` Note:The "*.pt" file should have the format defined in step one ``` ## Update config file to put your tracker into effect To use your new camera tracker algorithm in Apollo, you have to config the following files according to your demand: 1. Refer to the following content to update `modules/perception/production/conf/perception/camera/obstacle.pt`,put your `*.pt` file created in previous step to the load path: ```protobuf tracker_param { plugin_param{ name : "NewObstacleTracker" root_dir : "/apollo/modules/perception/production/data/perception/camera/models/new_obstacle_tracker" config_file : "*.pt" } } ``` 2. If you want to modify the structure of `tracker_param` shown in step one or just add a new `_param`, your can do that at `modules/perception/camera/app/proto/perception.proto`: ```protobuf message PluginParam { optional string name = 1; optional string root_dir = 2; optional string config_file = 3; } message TrackerParam { optional PluginParam plugin_param = 1; } ``` 3. If you create a new `*.pt` instead of using `obstacle.pt` given in step one, you also have to modify `modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt`. The corresponding `proto` file is `modules/perception/onboard/proto/fusion_camera_detection_component.proto`: ```protobuf camera_obstacle_perception_conf_dir : "/apollo/modules/perception/production/conf/perception/camera" camera_obstacle_perception_conf_file : "NewObstacleTracker.pt" ``` Once you finished the above modifications, you new camera tracker should take effect in Apollo.
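As a supplementary illustration of the kind of logic an Associate2D() implementation typically contains, here is a minimal, self-contained sketch of greedy IoU matching between detected and tracked 2D boxes. The `Box2D` struct and the IoU threshold are stand-ins introduced only for this example; they are not Apollo's actual data structures or parameters.

```c++
#include <algorithm>
#include <utility>
#include <vector>

// Simplified 2D box used only for this illustration.
struct Box2D {
  float xmin = 0.f, ymin = 0.f, xmax = 0.f, ymax = 0.f;
};

float IoU(const Box2D& a, const Box2D& b) {
  const float ix = std::max(0.f, std::min(a.xmax, b.xmax) - std::max(a.xmin, b.xmin));
  const float iy = std::max(0.f, std::min(a.ymax, b.ymax) - std::max(a.ymin, b.ymin));
  const float inter = ix * iy;
  const float area_a = (a.xmax - a.xmin) * (a.ymax - a.ymin);
  const float area_b = (b.xmax - b.xmin) * (b.ymax - b.ymin);
  const float uni = area_a + area_b - inter;
  return uni > 0.f ? inter / uni : 0.f;
}

// Greedy association: each detection is matched to the unused tracked box with
// the highest IoU above a threshold. Returns (detection index, track index) pairs.
std::vector<std::pair<int, int>> GreedyAssociate(const std::vector<Box2D>& detections,
                                                 const std::vector<Box2D>& tracks,
                                                 float iou_threshold = 0.3f) {
  std::vector<std::pair<int, int>> matches;
  std::vector<bool> used(tracks.size(), false);
  for (size_t d = 0; d < detections.size(); ++d) {
    int best = -1;
    float best_iou = iou_threshold;
    for (size_t t = 0; t < tracks.size(); ++t) {
      if (used[t]) continue;
      const float iou = IoU(detections[d], tracks[t]);
      if (iou > best_iou) {
        best_iou = iou;
        best = static_cast<int>(t);
      }
    }
    if (best >= 0) {
      used[best] = true;
      matches.emplace_back(static_cast<int>(d), best);
    }
  }
  return matches;
}
```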
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_lidar_tracker_algorithm.md
# How to add a new lidar tracker algorithm The processing flow of lidar perception module is shown below: : ![](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/lidar_perception_data_flow.png) The tracker algorithm introduced by this document is located at Recognition Component listed below. Current architecture of Recognition Component is shown: ![lidar recognition](images/lidar_recognition.png) As we can see from above structure, lidar tracker algorithm, such as MlfEngine, is the derived class of `base_multi_target_tracker` which acts as a abstract class member of `base_lidar_obstacle_tracking` located in Recognition Component. Next, We will introduce how to add a new lidar tracker algorithm. The default tracking algorithm of Apollo is MlfEngine,which cloud be easily changed or replaced by other algorithms. This document will introduce how to add a new lidar tracker algorithm, the basic task sequence is listed below: 1. Define a class that inherits `base_multi_target_tracker` 2. Implement the class `NewLidarTracker` 3. Add config and param proto file for `NewLidarTracker` 4. Update lidar_obstacle_tracking.conf The steps are elaborated below for better understanding: ## Define a class that inherits `base_multi_target_tracker` All the lidar tracker algorithms shall inherit `base_multi_target_tracker`,which defines a set of interfaces. Here is an example of the tracker implementation: ```c++ namespace apollo { namespace perception { namespace lidar { class NewLidarTracker : public BaseMultiTargetTracker { public: NewLidarTracker(); virtual ~NewLidarTracker() = default; bool Init(const MultiTargetTrackerInitOptions& options = MultiTargetTrackerInitOptions()) override; bool Track(const MultiTargetTrackerOptions& options, LidarFrame* frame) override; std::string Name() const override; }; // class NewLidarTracker } // namespace lidar } // namespace perception } // namespace apollo ``` The function signature of `base_multi_target_tracker` is pre-defined: ```c++ struct MultiTargetTrackerInitOptions {}; struct MultiTargetTrackerOptions {}; struct LidarFrame { // point cloud std::shared_ptr<base::AttributePointCloud<base::PointF>> cloud; // world point cloud std::shared_ptr<base::AttributePointCloud<base::PointD>> world_cloud; // timestamp double timestamp = 0.0; // lidar to world pose Eigen::Affine3d lidar2world_pose = Eigen::Affine3d::Identity(); // lidar to world pose Eigen::Affine3d novatel2world_pose = Eigen::Affine3d::Identity(); // hdmap struct std::shared_ptr<base::HdmapStruct> hdmap_struct = nullptr; // segmented objects std::vector<std::shared_ptr<base::Object>> segmented_objects; // tracked objects std::vector<std::shared_ptr<base::Object>> tracked_objects; // point cloud roi indices base::PointIndices roi_indices; // point cloud non ground indices base::PointIndices non_ground_indices; // secondary segmentor indices base::PointIndices secondary_indices; // sensor info base::SensorInfo sensor_info; // reserve string std::string reserve; void Reset(); void FilterPointCloud(base::PointCloud<base::PointF> *filtered_cloud, const std::vector<uint32_t> &indices); }; ``` ## Implement the class `NewLidarTracker` To ensure the new tracker could function properly, `NewLidarTracker` should at least override the interface Init(), Track(), Name() defined in `base_multi_target_tracker`. Init() is resposible for config loading, class member initialization, etc. And Track() will implement the basic logic of algorithm. 
A concrete `NewLidarTracker.cc` example is shown: ```c++ namespace apollo { namespace perception { namespace lidar { bool NewLidarTracker::Init(const MultiTargetTrackerInitOptions& options) { /* Initialization of your tracker */ } bool NewLidarTracker::Track(const MultiTargetTrackerOptions& options, LidarFrame* frame) { /* Implementation of your tracker */ } std::string NewLidarTracker::Name() const { /* Return your tracker's name */ } PERCEPTION_REGISTER_MULTITARGET_TRACKER(NewLidarTracker); //register the new tracker } // namespace lidar } // namespace perception } // namespace apollo ``` ## Add config and param proto file for `NewLidarTracker` Follow the following steps to add config and param proto file for the new tracker: 1. Define a `proto` for the new tracker configurations according to the requirements of your algorithm. As a reference, you can found and follow the `proto` definition of `multi_lidar_fusion` at `modules/perception/lidar/lib/tracker/multi_lidar_fusion/proto/multi_lidar_fustion_config.proto` 2. Once finishing your `proto`, for example `newlidartracker_config.proto`, add the following content: ```protobuf syntax = "proto2"; package apollo.perception.lidar; message NewLidarTrackerConfig { double parameter1 = 1; int32 parameter2 = 2; } ``` 3. Refer to `modules/perception/production/conf/perception/lidar/config_manager.config` and add your tracker path: ```protobuf model_config_path: "./conf/perception/lidar/modules/newlidartracker_config.config" ``` 4. Refer to the `newlidartracker.config` in the same folder and create `modules/multi_lidar_fusion.config`: ```protobuf model_configs { name: "NewLidarTracker" version: "1.0.0" string_params { name: "root_path" value: "./data/perception/lidar/models/newlidartracker" } } ``` 5. Refer to `multi_lidar_tracker` and create `newlidartracker` folder at `modules/perception/production/data/perception/lidar/models/`. Add `.conf` files for different sensors: ``` Note:The "*.conf" file should have the same structure with the "proto" file defined in step 1,2. ``` ## Update lidar_obstacle_tracking.conf To use your new lidar tracker algorithm in Apollo,you need to modify the value of `multi_target_tracker` to your tracker's name in `lidar_obstacle_tracking.conf` located in corresponding sensor folder in `modules/perception/production/data/perception/lidar/models/lidar_obstacle_pipline` Once you finished the above modifications, you new tracker should take effect in Apollo.
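To make the responsibility of Track() more concrete, the following self-contained sketch assigns stable track IDs to the current frame's objects using a naive nearest-centroid match against the previous frame. `SimpleTrackedObject`, the match distance, and the handling of match conflicts are simplifications for illustration only, not Apollo's actual tracker.

```c++
#include <cmath>
#include <vector>

// Simplified object used only for this illustration (not Apollo's base::Object).
struct SimpleTrackedObject {
  double x = 0.0;
  double y = 0.0;
  int track_id = -1;
};

// The essence of what a Track() implementation has to provide: give every
// object in the current frame a track_id that is stable across frames.
// Conflicts (two objects matching the same previous track) are ignored here
// for brevity.
void AssignTrackIds(const std::vector<SimpleTrackedObject>& previous_frame,
                    std::vector<SimpleTrackedObject>* current_frame,
                    int* next_track_id, double match_distance = 1.5) {
  for (auto& obj : *current_frame) {
    int best_id = -1;
    double best_dist = match_distance;
    for (const auto& prev : previous_frame) {
      const double dist = std::hypot(obj.x - prev.x, obj.y - prev.y);
      if (dist < best_dist) {
        best_dist = dist;
        best_id = prev.track_id;
      }
    }
    // Reuse the matched id, otherwise start a new track.
    obj.track_id = (best_id >= 0) ? best_id : (*next_track_id)++;
  }
}
```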
apollo_public_repos/apollo/docs/06_Perception/vectornet_lstm_evaluator_cn.md
# VECTORNET LSTM 评估器

# 简介

Vectornet lstm evaluator 基于对环境和其他交通参与者的图网络编码。与基于卷积网络编码的语义地图模型相比,它有着更少的参数和更好的表现。

![Diagram](images/vectornet.svg)

# 代码流程及框架

线上推理代码 [vectornet evaluator](https://github.com/ApolloAuto/apollo/modules/prediction/evaluator/vehicle/vectornet_evaluator.h)
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_gps_receiver.md
# How to add a new GPS Receiver ## Introduction GPS receiver is a device that receives information from GPS satellites and then calculates the device's geographical position, velocity and precise time. The device usually includes a receiver, an IMU (depends on the model), an interface to a wheel encoder, and a fusion engine that combines information from those sensors. The Default GPS receiver used in Apollo is Novatel cards. The instruction demonstrates how to add and use a new GPS Receiver. ## Steps to add a new GPS Receiver Please follow the steps below to add a new GPS Receiver. 1. Implement the new data parser for the new GPS receiver, by inheriting class `Parser` 2. Add new interfaces in `Parser` class for the new GPS receiver 3. In `config.proto`, add the new data format for the new GPS receiver 4. In function `create_parser` from file data_parser.cpp, add the new parser instance for the new GPS receiver Let's look at how to add the GPS Receiver using the above-mentioned steps for Receiver: `u-blox`. ### Step 1 Let us implement the new data parser for the new GPS receiver, by inheriting class `Parser`: ```cpp class UbloxParser : public Parser { public: UbloxParser(); virtual MessageType get_message(MessagePtr& message_ptr); private: bool verify_checksum(); Parser::MessageType prepare_message(MessagePtr& message_ptr); // The handle_xxx functions return whether a message is ready. bool handle_esf_raw(const ublox::EsfRaw* raw, size_t data_size); bool handle_esf_ins(const ublox::EsfIns* ins); bool handle_hnr_pvt(const ublox::HnrPvt* pvt); bool handle_nav_att(const ublox::NavAtt *att); bool handle_nav_pvt(const ublox::NavPvt* pvt); bool handle_nav_cov(const ublox::NavCov *cov); bool handle_rxm_rawx(const ublox::RxmRawx *raw); double _gps_seconds_base = -1.0; double _gyro_scale = 0.0; double _accel_scale = 0.0; float _imu_measurement_span = 0.0; int _imu_frame_mapping = 5; double _imu_measurement_time_previous = -1.0; std::vector<uint8_t> _buffer; size_t _total_length = 0; ::apollo::drivers::gnss::Gnss _gnss; ::apollo::drivers::gnss::Imu _imu; ::apollo::drivers::gnss::Ins _ins; }; ``` ### Step 2 Let us now add the new interfaces in the Parser class for the new GPS receiver: Add the function `create_ublox` in `Parser` class: ```cpp class Parser { public: // Return a pointer to a NovAtel parser. The caller should take ownership. static Parser* create_novatel(); // Return a pointer to a u-blox parser. The caller should take ownership. static Parser* create_ublox(); virtual ~Parser() {} // Updates the parser with new data. The caller must keep the data valid until get_message() // returns NONE. void update(const uint8_t* data, size_t length) { _data = data; _data_end = data + length; } void update(const std::string& data) { update(reinterpret_cast<const uint8_t*>(data.data()), data.size()); } enum class MessageType { NONE, GNSS, GNSS_RANGE, IMU, INS, WHEEL, EPHEMERIDES, OBSERVATION, GPGGA, }; // Gets a parsed protobuf message. The caller must consume the message before calling another // get_message() or update(); virtual MessageType get_message(MessagePtr& message_ptr) = 0; protected: Parser() {} // Point to the beginning and end of data. Do not take ownership. 
const uint8_t* _data = nullptr; const uint8_t* _data_end = nullptr; private: DISABLE_COPY_AND_ASSIGN(Parser); }; Parser* Parser::create_ublox() { return new UbloxParser(); } ``` ### Step 3 In config.proto, let us add the new data format definition for the new GPS receiver: Add `UBLOX_TEXT` and `UBLOX_BINARY` in the config file: modules/drivers/gnss/proto/config.proto ```txt message Stream { enum Format { UNKNOWN = 0; NMEA = 1; RTCM_V2 = 2; RTCM_V3 = 3; NOVATEL_TEXT = 10; NOVATEL_BINARY = 11; UBLOX_TEXT = 20; UBLOX_BINARY = 21; } ... ... ``` ### Step 4 In function `create_parser` from file data_parser.cpp, let us add the new parser instance for the new GPS receiver. We will do so by adding code to process `config::Stream::UBLOX_BINARY` as below: ``` cpp Parser* create_parser(config::Stream::Format format, bool is_base_station = false) { switch (format) { case config::Stream::NOVATEL_BINARY: return Parser::create_novatel(); case config::Stream::UBLOX_BINARY: return Parser::create_ubloxl(); default: return nullptr; } } ```
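For completeness, here is a minimal sketch of how a caller typically drives the `Parser` interface declared above: feed raw bytes with `update()` and drain parsed messages with `get_message()` until it returns `NONE`. The include path is a placeholder and the surrounding plumbing is an assumption; only the calls shown in the interface above are used, and the snippet compiles only inside the driver code where `Parser` and `MessagePtr` are declared.

```cpp
#include <memory>
#include <string>

// Placeholder path; use the actual header that declares Parser and MessagePtr.
// #include "modules/drivers/gnss/parser/parser.h"

void ProcessRawData(Parser* parser, const std::string& raw_bytes) {
  MessagePtr message_ptr = nullptr;  // handle filled by get_message()
  // The caller must keep raw_bytes valid until get_message() returns NONE.
  parser->update(raw_bytes);
  while (parser->get_message(message_ptr) != Parser::MessageType::NONE) {
    // Publish or otherwise handle the parsed message here.
  }
}
```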
apollo_public_repos/apollo/docs/06_Perception/traffic_light_cn.md
# 交通信号灯感知 本文档详细的介绍了Apollo2.0中交通信号感知模块的工作原理。 ## 简介 交通信号灯感知模块通过使用摄像头提供精确全面的路面交通信号灯状态。 通常情况下,交通信号灯有3种状态: - 红 - 黄 - 绿 然而当信号灯不能正常工作时,它可能是黑色的或者闪烁着红灯或黄灯。有时候在摄像头的视野内找不到信号灯,从而导致无法正确检测信号灯状态。 为了覆盖全部的情况,交通信号灯感知模块提供了5种信号灯状态输出: - 红 - 黄 - 绿 - 黑 - 未知 该模块的高精地图功能反复的检测车辆前方是否有信号灯出现。在给定车辆的位置后,可以通过查询高精地图获取信号灯的边界,并用边界上的4个点来表示信号灯。如果存在信号灯,则信号灯位置信息将从世界坐标系投射到图片坐标系。 Apollo已经证明了仅仅使用一个固定视野的摄像头无法识别所有的信号灯。存在这种限制的原因是: - 感知范围应该大于100米 - 信号灯的高度和路口的宽度变化范围很大 结果是Apollo2.0使用了2个摄像头来扩大感知范围。 - 一个**远距摄像头**,焦距是25毫米,被用来观察前方远距离的信号灯。远距摄像头捕获的信号灯在图片上展现的非常大而且容易被检测。但是远距摄像头的视野有限制,如果路线不够直或者车辆太过于靠近信号灯,经常无法拍摄到信号灯。 - 一个**广角摄像头**。焦距是6毫米,是对远距摄像头视野不足的补充。 该模块会根据当前信号灯的投射状态决定使用哪个摄像头。虽然只有两个摄像头,但是该模块的算法被设计的可以控制多个摄像头。 下述图片展示了使用远距摄像头(上图)和广角摄像头(下图)检测到信号灯的图片。 ![telephoto camera](images/traffic_light/long.jpg) ![wide angle camera](images/traffic_light/short.jpg) # 数据管道 数据管道有两个主要的部分,会在下面章节中介绍 - 预处理阶段 - 信号灯投射 - 摄像头选择 - 图像和信号灯缓存同步 - 处理阶段 - 调整—提供精确的信号灯边界盒 - 识别—提供每个边界盒的颜色 - 修正—根据时间顺序关系修正颜色 ## 预处理阶段 没有必要在每一帧的图像中去检测信号灯。信号灯的变化频率是很低的而且计算机的资源也有限。通常,从不同摄像头输入的图像信息会几乎同时的到达,但是只有一个会进入管道的处理阶段。因此图像的遴选和匹配是很必要的。 ### 输入输出 本章节介绍了预处理阶段的输入输出数据。输入数据可以通过订阅Apollo相关模块数据来获得,或者直接读取本地的存储文件。输出数据被传输到下一层的处理阶段。 #### 输入数据 - 可以通过订阅以下topic来获取不同摄像头的图像数据: - `/apollo/sensor/camera/traffic/image_long` - `/apollo/sensor/camera/traffic/image_short` - 定位信息,通过查询以下topic获得: - `/tf` - 高精地图 - 校准结果 #### 输出数据 - 被选择的摄像头输出的的图像信息 - 从世界坐标系投射到图像坐标系的信号灯边界盒 ### 摄像头选择 使用一个唯一的ID和其边界上的4个点来表示信号灯,每个点都是世界坐标系中的3维坐标点。 下例展示了一个典型的信号灯记录信息`signal info`。给出车辆位置后,4个边界点可以通过查询高精地图获得。 ```protobuf signal info: id { id: "xxx" } boundary { point { x: ... y: ... z: ... } point { x: ... y: ... z: ... } point { x: ... y: ... z: ... } point { x: ... y: ... z: ... } } ``` 3维世界坐标系中的边界点随后被投射到每个摄像头图像的2维坐标系。对每个信号灯而言,远距摄像头图像上展示的4个投射点区域更大,这比广角摄像头更容易检测信号灯。最后会选择具有最长的焦距且能够看到所有信号灯的摄像头图片作为输出图像。投射到该图像上的信号边界盒将作为输出的边界盒。 被选择的摄像头的ID和时间戳缓存在队列中: ``` C++ struct ImageLights { CarPose pose; CameraId camera_id; double timestamp; size_t num_signal; ... other ... }; ``` 至此,我们需要的所有信息包括定位信息、校准结果和高精地图。因为投射不依赖于图像的内容,所以选择可以在任何时间完成。在图像信息到达时进行选择仅仅是为了简单。而且,并不是图像信息一到达就要进行选择,通常会设置选择的时间间隔。 ### 图像同步 图像信息包含了摄像头ID和时间戳。摄像头ID和时间戳的组合用来找到可能存在的缓存信息。如果能在缓存区找到和该图像的摄像头ID一样且时间戳相差很小的缓存信息,则该图像会被传输到处理阶段。所有不合适的缓存信息会被丢弃。 ## 处理阶段 该阶段分为3个步骤,每个步骤重点执行一个任务: - 调整 — 在ROI中检测信号灯边界盒 - 识别 — 鉴别边界盒的颜色 - 修正 — 根据信号灯颜色的时间顺序关系修正颜色 ### 输入输出 本章节介绍处理阶段的输入和输出数据。输入数据从预处理阶段获得,输出数据作为鉴别信号灯的结果。 #### 输入数据 - 被选择的摄像头图像信息 - 一组边界盒信息 #### 输出数据 - 一组带有颜色标签的边界盒信息 ### 调整 被定位信息、校准信息和高精地图信息影响的投射点 ***不是完全可靠的*** 。通过投射的信号灯位置计算的一个大的兴趣区域(Region of Interest ROI)被用来确定信号灯精确的边界盒。 在下述图片中,蓝色的长方形表示被投射的信号灯的边界盒,实际上和信号灯的准确位置有一定的偏差。大的黄色长方形是ROI。 ![example](images/traffic_light/example.jpg) 信号灯检测是一个常规的卷积神经网络检测任务,它接收带有ROI信息的图像作为输入数据,顺序输出边界盒。输出结果中的信号灯数量可能多于输入数据。 Apollo会根据输入信号灯的位置、形状及检测的评分选择合适的信号灯。如果CNN在ROI内找不到任何的信号灯,则输入数据中的信号灯将被标记为未知,且跳过剩下的两个步骤。 ### 识别 信号灯识别是一个常规的卷积神经网络鉴别任务,它接收带有ROI信息的图像和一组边界盒信息作为输入数据。输出数据是一个`$4\times n$ vector`, 表示每个边界盒是黑色、红色、黄色和绿色的概率。 当且仅当概率足够大时,有最大概率的类别会被识别为信号灯的状态。否则信号灯状态被设置为未知,表示状态未确定。 ### 修正 因为信号灯可能会闪烁或者被遮挡,并且识别阶段也 ***并不是*** 完美的,输出的信号灯状态可能不是真正的状态。修正信号灯状态是很有必要的。 如果修正器接收到一个确定的信号灯状态例如红色或者绿色,则修正器保存该状态并直接输出。如果接收到黑色或者未知,修正器会检测状态保存列表。如果信号灯状态已经确定持续了一段时间,那么将保存的状态输出。否则将黑色或者未知输出。 因为时间顺序关系的存在,黄色只会在绿色之后红色之前出现,所以为了安全的考虑,在绿色出现之前任何红色之后的黄色都会被设置为红色。
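为帮助理解上文"修正"一节的时序逻辑,下面给出一个简化的示意实现:保存最近一次确定的颜色,在收到黑色或未知时于一定时间窗内沿用历史状态,并把红色之后、绿色之前出现的黄色按红色输出。其中枚举定义与时间窗取值均为示例假设,并非 Apollo 的实际参数或实现。

```c++
// 示意用的信号灯状态枚举
enum class LightColor { RED, YELLOW, GREEN, BLACK, UNKNOWN };

// 简化的修正器:保存最近一次确定的颜色,并在输出黑色/未知前回看历史
class SimpleRevisor {
 public:
  LightColor Revise(LightColor detected, double timestamp) {
    // 安全考虑:红色之后、绿色之前出现的黄色按红色输出
    if (detected == LightColor::YELLOW && last_color_ == LightColor::RED) {
      detected = LightColor::RED;
    }
    if (detected == LightColor::RED || detected == LightColor::YELLOW ||
        detected == LightColor::GREEN) {
      // 确定状态:保存并直接输出
      last_color_ = detected;
      last_time_ = timestamp;
      return detected;
    }
    // 黑色或未知:若保存的确定状态仍在时间窗内,则沿用历史状态
    if (last_color_ != LightColor::UNKNOWN &&
        timestamp - last_time_ < hold_seconds_) {
      return last_color_;
    }
    return detected;
  }

 private:
  LightColor last_color_ = LightColor::UNKNOWN;
  double last_time_ = 0.0;
  double hold_seconds_ = 1.5;  // 示例取值:历史状态可沿用的最长时间
};
```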
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_lidar_detector_algorithm.md
# How to add a new lidar detector algorithm The processing flow of lidar perception module is shown below: ![](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/lidar_perception_data_flow.png) The detector algorithm introduced by this document is located at Detection Component listed below. Current architecture of Detection Component is shown: ![lidar detection high-level](images/lidar_detection_1.png) ![lidar detection](images/lidar_detection_2.png) As we can see from above structure, lidar detector algorithm, such as PointPillars, is the derived class of `base_lidar_detector` which acts as a abstract class member of `base_lidar_obstacle_detection` located in Detection Component. Next, We will introduce how to add a new lidar detector algorithm. Apollo has provided two lidar detector algorithms -- PointPillars and CNN (NCut will no longer be updated). Both of them could be easily changed or replaced by other algorithms. The input of algorithm should be original points cloud data, while the output should be obastacle object data. This document will introduce how to add a new lidar detector algorithm, the basic task sequence is listed below: 1. Define a class that inherits `base_lidar_detector` 2. Implement the class `NewLidarDetector` 3. Add config and param proto file for `NewLidarDetector` 4. Update lidar_obstacle_detection.conf The steps are elaborated below for better understanding: ## Define a class that inherits `base_lidar_detector` All the lidar detector algorithms shall inherit `base_lidar_detector`,which defines a set of interfaces. Here is an example of the detector implementation: ```c++ namespace apollo { namespace perception { namespace lidar { class NewLidarDetector : public BaseLidarDetector { public: NewLidarDetector(); virtual ~NewLidarDetector() = default; bool Init(const LidarDetectorInitOptions& options = LidarDetectorInitOptions()) override; bool Detect(const LidarDetectorOptions& options, LidarFrame* frame) override; std::string Name() const override; }; // class NewLidarDetector } // namespace lidar } // namespace perception } // namespace apollo ``` The function signature of `base_lidar_detector` is pre-defined: ```c++ struct LidarDetectorInitOptions { std::string sensor_name = "velodyne64"; }; struct LidarDetectorOptions {}; struct LidarFrame { // point cloud std::shared_ptr<base::AttributePointCloud<base::PointF>> cloud; // world point cloud std::shared_ptr<base::AttributePointCloud<base::PointD>> world_cloud; // timestamp double timestamp = 0.0; // lidar to world pose Eigen::Affine3d lidar2world_pose = Eigen::Affine3d::Identity(); // lidar to world pose Eigen::Affine3d novatel2world_pose = Eigen::Affine3d::Identity(); // hdmap struct std::shared_ptr<base::HdmapStruct> hdmap_struct = nullptr; // segmented objects std::vector<std::shared_ptr<base::Object>> segmented_objects; // tracked objects std::vector<std::shared_ptr<base::Object>> tracked_objects; // point cloud roi indices base::PointIndices roi_indices; // point cloud non ground indices base::PointIndices non_ground_indices; // secondary segmentor indices base::PointIndices secondary_indices; // sensor info base::SensorInfo sensor_info; // reserve string std::string reserve; void Reset(); void FilterPointCloud(base::PointCloud<base::PointF> *filtered_cloud, const std::vector<uint32_t> &indices); }; ``` ## Implement the class `NewLidarDetector` To ensure the new detector could function properly, `NewLidarDetector` should at least override the interface Init(), Detect(), Name() defined in 
`base_lidar_detector`. Init() is resposible for config loading, class member initialization, etc. And Detect() will implement the basic logic of algorithm. A concrete `NewLidarDetector.cc` example is shown: ```c++ namespace apollo { namespace perception { namespace lidar { bool NewLidarDetector::Init(const LidarDetectorInitOptions& options) { /* Initialization of your detector */ } bool NewLidarDetector::Detect(const LidarDetectorOptions& options, LidarFrame* frame) { /* Implementation of your detector */ } std::string NewLidarDetector::Name() const { /* Return your detector's name */ } PERCEPTION_REGISTER_LIDARDETECTOR(NewLidarDetector); //register the new detector } // namespace lidar } // namespace perception } // namespace apollo ``` ## Add config and param proto file for `NewLidarDetector` Follow the following steps to add config and param proto file for the new detector: 1. Define a `proto` for the new detector configurations according to the requirements of your algorithm. As a reference, you can found and follow the `proto` definition of `cnn_segmentation` at `modules/perception/lidar/lib/detector/cnn_segmentation/proto/cnnseg_config.proto` 2. Once finishing your `proto`, for example `newlidardetector_config.proto`, add the following content: ```protobuf syntax = "proto2"; package apollo.perception.lidar; message NewLidarDetectorConfig { double parameter1 = 1; int32 parameter2 = 2; } ``` 3. Define a `proto` for the new detector parameters according to the requirements of your algorithm. Also, as a reference, you can found and follow the `proto` definition of `cnn_segmentation` at `modules/perception/lidar/lib/detector/cnn_segmentation/proto/cnnseg_param.proto`. Similarly, add the following content once finished: ```protobuf syntax = "proto2"; package apollo.perception.lidar; //your parameters ``` 4. Refer to `modules/perception/production/conf/perception/lidar/config_manager.config` and add your detector path: ```protobuf model_config_path: "./conf/perception/lidar/modules/newlidardetector_config.config" ``` 5. Refer to the `modules/cnnseg.config` in the same folder and create `newlidardetector.config`: ```protobuf model_configs { name: "NewLidarDetector" version: "1.0.0" string_params { name: "root_path" value: "./data/perception/lidar/models/newlidardetector" } } ``` 6. Refer to `cnnseg` and create `newlidardetector` folder at `modules/perception/production/data/perception/lidar/models/`. Add `.conf` files for different sensors: ``` Note:The "*.conf" and "*param.conf" file should have the same structure with the "proto" files defined in step 1,2,3. ``` ## Update lidar_obstacle_detection.conf To use your new lidar detector algorithm in Apollo,you need to modify the value of `detector` to your detector's name in `lidar_obstacle_detection.conf` located in corresponding sensor folder in `modules/perception/production/data/perception/lidar/models/lidar_obstacle_pipline` Once you finished the above modifications, you new detector should take effect in Apollo.
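As a supplementary sketch of the config-loading work that Init() is responsible for, the helper below reads a text-format "*.conf" file (step 6 above) and parses it into the config message generated from the proto in step 2. The template form and the commented include path are assumptions for illustration; adapt them to your actual proto and build paths.

```c++
#include <fstream>
#include <sstream>
#include <string>

#include "google/protobuf/text_format.h"

// Placeholder include for the message generated from newlidardetector_config.proto
// (step 2 above); the actual path depends on your build setup.
// #include "newlidardetector_config.pb.h"

// Read a text-format "*.conf" file and parse it into the config message.
template <typename ConfigType>
bool LoadDetectorConfig(const std::string& config_path, ConfigType* config) {
  std::ifstream fin(config_path);
  if (!fin.is_open()) {
    return false;
  }
  std::stringstream buffer;
  buffer << fin.rdbuf();
  return google::protobuf::TextFormat::ParseFromString(buffer.str(), config);
}
```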
apollo_public_repos/apollo/docs/06_Perception/how_to_verify_lidar_function_review.md
## Scenario 4: Perceptual lidar function test This scenario introduces how to use the software package to start the perception lidar module, helping developers to get familiar with the Apollo perception module and lay the foundation. You can observe the detection results during the operation of the perception lidar by playing the record data package provided by Apollo. ### Prerequisites This document assumes that you have followed [Package Installation](https://apollo.baidu.com/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/%E5%AE%89%E8%A3%85%E8%AF%B4%E6%98%8E/%E8%BD%AF%E4%BB%B6%E5%8C%85%E5%AE%89%E8%A3%85/%E8%BD%AF%E4%BB%B6%E5%8C%85%E5%AE%89%E8%A3%85/#%E5%AE%89%E8%A3%85apollo%E7%8E%AF%E5%A2%83%E7%AE%A1%E7%90%86%E5%B7%A5%E5%85%B7) > Install the Apollo environment management tool to complete step 1 and step 2. Compared with the above three scenarios, the function of the test perception module needs to use the GPU, so the GPU image of the software package must be obtained for test verification . ### Step 1: Start and enter the Apollo Docker environment 1. Create a workspace: ```shell mkdir apollo_v8.0 cd apollo_v8.0 ``` 2. Enter the following command to enter the container environment in GPU mode: ```bash aem start_gpu -f ``` 3. Enter the following command to enter the container: ```bash aem enter ``` 4. Initialize the workspace: ```shell aem init ``` ### Step 2: Download the record data package 1. Enter the following command to download the data package: ```bash wget https://apollo-system.bj.bcebos.com/dataset/6.0_edu/sensor_rgb.tar.xz ``` 2. Create a directory and extract the downloaded installation package into this directory: ```bash sudo mkdir -p ./data/bag/ sudo tar -xzvf sensor_rgb.tar.xz -C ./data/bag/ ``` ### Step 3: Install DreamView In the same terminal, enter the following command to install the DreamView program. ```bash buildtool install --legacy dreamview-dev monitor-dev ``` ### Step 4: Install transform, perception and localization 1. In the same terminal, enter the following command to install the perception program. ```bash buildtool install --legacy perception-dev ``` 2. Enter the following commands to install the localization , v2x and transform programs. ```bash buildtool install --legacy localization-dev v2x-dev transform-dev ``` ### Step 5: Module running 1. In the same terminal, enter the following command to start Apollo's DreamView program. ```bash aem bootstrap start ``` Open the browser and enter the `localhost:8888` address, select the model, vehicle configuration, and map. ![包-dreamview2.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/%E5%8C%85-dreamview2_3d6fa5c.png) Click the Module Controller module in the status bar on the left side of the page to enable the Transform module: ![包- transform.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/%E5%8C%85-%20transform_e13afb6.png) 2. Use the mainboard method to enable the Lidar module: ```bash mainboard -d /apollo/modules/perception/production/dag/dag_streaming_perception_lidar.dag ``` ### Step 6: Result verification 1. It is necessary to use the -k parameter to mask out the perception channel data contained in the record. ```bash cyber_recorder play -f ./data/bag/sensor_rgb.record -k /perception/vehicle/obstacles /apollo/perception/obstacles /apollo/perception/traffic_light /apollo/prediction ``` 2. Verify detection results: View detection results in DreamView. 
Click LayerMenu in the toolbar on the left side of Dreamview, open the Point Cloud in Perception, and select the corresponding channel to view the point cloud data. Check whether the 3D detection results can correspond to the Lidar sensor data. ![结果1.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/%E7%BB%93%E6%9E%9C1_35771bc.png) View Results: ![包-结果2.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/%E5%8C%85-%E7%BB%93%E6%9E%9C2_1be66a4.png) ### Step 7: Model Replacement The following describes the parameter configuration and replacement process of the MASK_PILLARS_DETECTION, CNN_SEGMENTATION and CENTER_POINT_DETECTION models in the lidar detection process. You can easily replace these configurations in `lidar_detection_pipeline.pb.txt` to load and run different models. #### MASK_PILLARS_DETECTION model replacement Modify the configuration file content in `lidar_detection_pipeline.pb.txt`: ```bash vim /apollo/modules/perception/pipeline/config/lidar_detection_pipeline.pb.txt ``` Replace stage_type with MASK_PILLARS_DETECTION: ```bash stage_type: MASK_PILLARS_DETECTION ``` ![mask1.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/mask1_f067520.png) And modify the configuration file information content of the corresponding stage: ```bash stage_config: { stage_type: MASK_PILLARS_DETECTION enabled: true } ``` ![mask2.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/mask2_f5373a5.png) After saving the modified configuration file, enable the Lidar module and play the record to verify the detection result: ![mask3.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/mask3_7d0d0bd.png) #### CNN_SEGMENTATION model replacement Modify the configuration file content in `lidar_detection_pipeline.pb.txt`: replace stage_type with CNN_SEGMENTATION, and modify the configuration file content of the corresponding stage accordingly: ```bash stage_type: CNN_SEGMENTATION stage_config: { stage_type: CNN_SEGMENTATION enabled: true cnnseg_config: { sensor_name: "velodyne128" param_file: "/apollo/modules/perception/production/data/perception/lidar/models/cnnseg/cnnseg64_param.conf" proto_file: "/apollo/modules/perception/production/data/perception/lidar/models/cnnseg/cnnseg64_caffe/deploy.prototxt" weight_file: "/apollo/modules/perception/production/data/perception/lidar/models/cnnseg/cnnseg64_caffe/deploy.caffemodel" engine_file: "/apollo/modules/perception/production/data/perception/lidar/models/cnnseg/cnnseg64_caffe/engine.conf" } } ``` Enable the lidar module and play the record verification detection result: ![cnn1.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/cnn1_8d22d9f.png) #### CENTER_POINT_DETECTION model replacement Replace stage_type with CENTER_POINT_DETECTION: ```bash stage_type: CENTER_POINT_DETECTION ``` Modify the configuration file information content of the corresponding stage: ```bash stage_config: { stage_type: CENTER_POINT_DETECTION enabled: true } ``` Enable the lidar module and play the record verification detection result: ![centor1.png](https://bce.bdstatic.com/doc/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/centor1_edea96e.png)
apollo_public_repos/apollo/docs/06_Perception/how_to_run_perception_module_on_your_local_computer.md
# How to Run Perception Module on Your Local Computer The perception module requires Nvidia GPU and CUDA installed to run the perception algorithms with Caffe. We have already installed the CUDA and Caffe libraries in the released docker. However, the Nvidia GPU driver is not installed in the released dev docker image. To run the perception module with CUDA acceleration, we suggest to install the exactly same version of Nvidia driver in the docker as the one installed in your host machine, and build Apollo with GPU option. We provide a step-by-step instruction on running perception module with Nvidia GPU as below: 1. Get into the docker container via: ```bash $APOLLO_HOME/docker/scripts/dev_start.sh $APOLLO_HOME/docker/scripts/dev_into.sh ``` 2. Build Apollo ```bash ./apollo.sh build_opt_gpu ``` 3. Run bootstrap.sh ```bash bootstrap.sh ``` 4. Launch Dreamview from your web browser by typing following address http://localhost:8888/ 5. Select your car and map using the dropdowm options in the top right corner in Dreamview 6. Select the transform button in Dreamview or type the following command in your terminal ```bash cyber_launch start /apollo/modules/transform/launch/static_transform.launch ``` 7. If the image is compressed, launch the image decompression module ``` cyber_launch start modules/drivers/tools/image_decompress/launch/image_decompress.launch ``` 8. Launch the perception modules - If you want to launch all modules ``` cyber_launch start /apollo/modules/perception/production/launch/perception_all.launch ``` - If you want to test camera-based obstacle and lane detection ``` cyber_launch start /apollo/modules/perception/production/launch/perception_camera.launch ``` If you want to visualize camera-based results overlaid on the captured image and in bird view, mark `enable_visualization: true` in `‘modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt` befor executing the above command. It will pop up when you play recorded data in point 9 Also, If you want to enable CIPO, add ‘enable_cipv: true’ as a new line in the same file - If you want to test lane detection alone use ``` mainboard -d ./modules/perception/production/dag/dag_streaming_perception_lane.dag ``` If you want to visualize lane results overlaid on the captured image and in bird view, mark `enable_visualization: true` in `modules/perception/production/conf/perception/camera/lane_detection_component.config` before executing the above command. It will pop up when you play recorded data in point 9 - If you want to test traffic light detection module alone use ``` cyber_launch start /apollo/modules/perception/production/launch/perception_trafficlight.launch ``` If you want to visualize the traffic light detection results overlaid on the captured image, mark `—start_visualizer=true` in `apollo/modules/perception/production/conf/perception/perception_common.flag` before executing the above command. It will pop up when you play recorded data in point 9 9. Play your recorded bag ``` cyber_recorder play -f /apollo/data/bag/anybag -r 0.2 ``` Please note that the Nvidia driver should be installed appropriately even if the perception module is running in Caffe CPU_ONLY mode (i.e., using `./apollo.sh build` or `./apollo.sh build_opt` to build the perception module). Please see the detailed instruction of perception module in [the perception README](../../modules/perception/README.md).
apollo_public_repos/apollo/docs/06_Perception/perception_apollo_5.0.md
# Perception Apollo 5.0 June 27, 2019 ## Introduction Apollo 5.0 Perception module introduced a few major features to provide diverse functionality, a more reliable platform and a more robust solution to enhance your AV performance. These include: * **Supports Caffe and PaddlePaddle**: [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible and scalable deep learning platform, which was originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu. * **Online sensor calibration service** * **Manual camera calibration** * **Closest In-Path Object (CIPO) Detection** * **Vanishing Point Detection** ***Safety alert*** Apollo 5.0 *does not* support a high curvature road, roads without lane lines including local roads and intersections. The perception module is based on visual detection using a deep network with limited data. Therefore, before we release a better network, the driver should be careful while driving and always be ready to disengage the autonomous driving mode by intervening (hit the brakes or turn the steering wheel). While testing Apollo 3.0, please choose a path that has the necessary conditions mentioned above and be vigilant. ## Perception module The flow chart of Apollo 5.0 Perception module: ![Image](images/Apollo3.5_perception_detail.png) To learn more about individual sub-modules, please visit [Perception - Apollo 3.0](./perception_apollo_3.0.md) ### Supports PaddlePaddle The Apollo platform's perception module actively depended on Caffe for its modelling, but will now support PaddlePaddle, an open source platform developed by Baidu to support its various deep learning projects. Some features include: - **PCNNSeg**: Object detection from 128-channel lidar or a fusion of three 16-channel lidars using PaddlePaddle - **PCameraDetector**: Object detection from a camera - **PlaneDetector**: Lane line detection from a camera #### Using PaddlePaddle Features 1. To use the PaddlePaddle model for Camera Obstacle Detector, set `camera_obstacle_perception_conf_file` to `obstacle_paddle.pt` in the following [configuration file](https://github.com/ApolloAuto/apollo/blob/master/modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt) 2. To use the PaddlePaddle model for LiDAR Obstacle Detector, set `use_paddle` to `true` in the following [configuration file](https://github.com/ApolloAuto/apollo/blob/master/modules/perception/production/data/perception/lidar/models/cnnseg/velodyne128/cnnseg.conf) ### Online sensor calibration service Apollo currently offers a robust calibration service to support your calibration requirements from LiDARs to IMU to Cameras. This service is currently being offered to select partners only. If you would like to learn more about the calibration service, please reach out to us via email: **apollopartner@baidu.com** ### Manual Camera Calibration In Apollo 5.0, Perception launched a manual camera calibration tool for camera extrinsic parameters. This tool is simple, reliable and user-friendly. It comes equipped with a visualizer and the calibration can be performed using your keyboard. It helps to estimate the camera's orientation (pitch, yaw, roll). It provides a vanishing point, horizon, and top down view as guidelines. Users would need to change the 3 angles to align a horizon and make the lane lines parallel. 
The process of manual calibration can be seen below: ![](images/Manual_calib.png) ### Closest In-Path Object (CIPO) Detection The CIPO includes detection of key objects on the road for longitudinal control. It utilizes the object and ego-lane line detection output. It creates a virtual ego lane line using the vehicle's ego motion prediction. Any vehicle model including Sphere model, Bicycle model and 4-wheel tire model can be used for the ego motion prediction. Based on the vehicle model using the translation of velocity and angular velocity, the length and curvature of the pseudo lanes are determined. Some examples of CIPO using Pseudo lane lines can be seen below: 1. CIPO used for curved roads ![](images/CIPO_1.png) 2. CIPO for a street with no lane lines ![](images/CIPO_2.png) ### Vanishing Point Detection In Apollo 5.0, an additional branch of network is attached to the end of the lane encoder to detect the vanishing point. This branch is composed of convolutional layers and fully connected layers, where convolutional layers translate lane features for the vanishing point task and fully connected layers make a global summary of the whole image to output the vanishing point location. Instead of giving an output in `x`, `y` coordinate directly, the output of vanishing point is in the form of `dx`, `dy` which indicate its distances to the image center in `x`, `y` coordinates. The new branch of network is trained separately by using pre-trained lane features directly, where the model weights with respect to the lane line network is fixed. The Flow Diagram is included below, note that the red color denotes the flow of the vanishing point detection algorithm. ![](images/Vpt.png) Two challenging visual examples of our vanishing point detection with lane network output are shown below: 1. Illustrates the case that vanishing point can be detected when there is obstacle blocking the view: ![](images/Vpt1.png) 2. Illustrates the case of turning road with altitude changes: ![](images/Vpt2.png) #### Key Features - Regression to `(dx, dy)` rather than `(x, y)` reduces the search space - Additional convolution layer is needed for feature translation which casts CNN features for vanishing point purpose - Fully Connected layer is applied for holistic spatial summary of information, which is required for vanishing point estimation - The branch design supports diverse training strategies, e.g. fine tune pre-trained laneline model, only train the subnet with direct use of laneline features, co-train of multi-task network ## Output of Perception The input of Planning and Control modules will be quite different with that of the previous Lidar-based system for Apollo 3.0. - Lane line output - Polyline and/or a polynomial curve - Lane type by position: L1(next left lane line), L0(left lane line), R0(right lane line), R1(next right lane line) - Object output - 3D rectangular cuboid - Relative velocity and direction - Type: CIPV, PIHP, others - Classification type: car, truck, bike, pedestrian - Drops: trajectory of an object The world coordinate system is used as ego-coordinate in 3D where the rear center axle is an origin. If you want to try our perception modules and their associated visualizer, please refer to the [following document](./how_to_run_perception_module_on_your_local_computer.md)
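As a supplement to the CIPO section above, the following is a minimal C++ sketch of how a virtual ego-lane centerline could be generated from the vehicle's ego motion under a constant velocity and yaw-rate assumption. The function name, sampling step, and horizon are illustrative choices, not Apollo's actual implementation.

```c++
#include <cmath>
#include <vector>

struct Point2D { double x; double y; };  // ego frame: x forward, y left

// Sketch: sample a pseudo ego-lane centerline from ego motion.
// Assumes constant speed v (m/s) and yaw rate w (rad/s); curvature = w / v.
// horizon_s: how far ahead (meters) to extend the pseudo lane.
std::vector<Point2D> BuildPseudoLaneCenter(double v, double w,
                                           double horizon_s, double step_s) {
  std::vector<Point2D> center;
  const double kappa = (std::fabs(v) > 1e-3) ? w / v : 0.0;  // path curvature
  for (double s = 0.0; s <= horizon_s; s += step_s) {
    Point2D p;
    if (std::fabs(kappa) < 1e-6) {  // straight-line motion
      p.x = s;
      p.y = 0.0;
    } else {                        // circular arc of radius 1 / kappa
      p.x = std::sin(kappa * s) / kappa;
      p.y = (1.0 - std::cos(kappa * s)) / kappa;
    }
    center.push_back(p);
  }
  return center;
}
```

The left and right pseudo lane lines can then be obtained by offsetting this centerline laterally by half a lane width, and the CIPO is the closest detected object whose footprint falls inside that corridor.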
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/3d_obstacle_perception.md
# 3D Obstacle Perception There are three main components of 3D obstacle perception: - LiDAR Obstacle Perception - RADAR Obstacle Perception - Obstacle Results Fusion ## LiDAR Obstacle Perception The following sections describe the obstacle perception pipeline given input as 3D point cloud data from the LiDAR sensor that are resolved by Apollo: - HDMap Region of Interest (ROI) Filter - Convolutional Neural Networks (CNN) Segmentation - MinBox Builder - HM ObjectTracker - Sequential TypeFusion ### HDMap Region of Interest (ROI) Filter The Region of Interest (ROI) specifies the drivable area that includes road surfaces and junctions that are retrieved from the HD (high-resolution) map. The HDMap ROI filter processes LiDAR points that are outside the ROI, removing background objects, e.g., buildings and trees around the road. What remains is the point cloud in the ROI for subsequent processing. Given an HDMap, the affiliation of each LiDAR point indicates whether it is inside or outside the ROI. Each LiDAR point can be queried with a lookup table (LUT) of 2D quantization of the region around the car. The input and output of the HDMap ROI filter module are summarized in the table below. | Input | Output | | ---------------------------------------- | ---------------------------------------- | | The point cloud: A set of 3D points captured from LiDAR Sensor. | The indices of input points that are inside the ROI defined by HDMap. | | HDMap: A set of polygons, each of which is an ordered set of points. | | The Apollo HDMap ROI filter generally consists of three successive steps: - Coordinate transformation. - ROI LUT construction. - Point inquiry with ROI LUT. #### Coordinate Transformation For the HDMap ROI filter, the data interface for HDMap is defined by a set of polygons, each of which is actually an ordered set of points in the world coordinate system. Running an inquiry on the points with the HDMap ROI requires that the point cloud and polygons are represented in the same coordinate system. For this purpose, Apollo transforms the points of the input point cloud and the HDMap polygons into a local coordinate system that originates from the LiDAR sensor’s location. #### ROI LUT Construction To determine an input point, whether inside or outside the ROI, Apollo adopts a grid-wise LUT that quantifies the ROI into a birds-eye view 2D grid. As shown in Figure 1, this LUT covers a rectangle region, bounded by a predefined spatial range around the general view from above in the boundary of HDMap. Then it represents the affiliation with the ROI for each cell of the grid (i.e., 1/0 represents it is inside/outside the ROI). For computational efficiency, Apollo uses a scan line algorithm and bitmap encoding to construct the ROI LUT. <div align=center><img src="images/3d_obstacle_perception/roi_lookup_table.png"></div> <div align=center>Figure 1 Illustration of ROI lookup table (LUT)</div> The blue lines indicate the boundary of HDMap ROI, including road surfaces and junctions. The red solid dot represents the origin of the local coordinate system corresponding to the LiDAR sensor location. The 2D grid is composed of 8×8 cells that are shown as green squares. The cells inside the ROI are blue-filled squares while the ones outside the ROI are yellow-filled squares. #### Point Inquiry with ROI LUT Based on the ROI LUT, the affiliation of each input point is queried using a two-step verification. Then, Apollo conducts data compilation and output as described below. 
For the point inquiry process, Apollo: - Identifies whether the point is inside or outside the rectangle region of ROI LUT. - Queries the corresponding cell of the point in the LUT for its affiliation with respect to the ROI. - Collects all the points that belong to the ROI and outputs their indices with respect to the input point cloud. Set the user-defined parameters in the configuration file: `modules/perception/production/data/perception/lidar/models/roi_filter/hdmap_roi_filter/hdmap_roi_filter.conf`. The table below describes the usage of parameters for HDMap ROI Filter. | Parameter Name | Usage | Default | | -------------- | ---------------------------------------- | ----------- | | range | The range of ROI LUT (the 2D grid) with respect to the origin (LiDAR sensor). | 120.0 meters | | cell_size | The size of cells for quantizing the 2D grid. | 0.25 meter | | extend_dist | The distance that the ROI extends from the polygon boundary. | 0.0 meter | | no_edge_table | use edge_table for polygon mask generation. | false | | set_roi_service| enable roi_service to perception lidar modules. | true | ### Convolutional Neural Networks (CNN) Segmentation After identifying the surrounding environment using the HDMap ROI filter, Apollo obtains the filtered point cloud that includes *only* the points inside the ROI (i.e., the drivable road and junction areas). Most of the background obstacles, such as buildings and trees around the road region, have been removed, and the point cloud inside the ROI is fed into the segmentation module. This process detects and segments out foreground obstacles, e.g., cars, trucks, bicycles, and pedestrians. | Input | Output | | ---------------------------------------- | ---------------------------------------- | | The point cloud (a set of 3D points) | A set of objects corresponding to obstacles in the ROI. | | The point indices that indicate points inside the ROI as defined in HDMap | | Apollo uses a deep CNN for accurate obstacle detection and segmentation. The Apollo CNN segmentation consists of four successive steps: - Channel Feature Extraction. - CNN-Based Obstacle Prediction. - Obstacle Clustering. - Post-processing. The following sections describe the deep CNN in detail. #### Channel Feature Extraction Given a point cloud frame, Apollo builds a birds-eye view (i.e., projected to the X-Y plane) that is a 2D grid in the local coordinate system. Each point within a predefined range with respect to the origin (i.e., the LiDAR sensor) is quantized into one cell of the 2D grid based on its X and Y coordinates. After quantization, Apollo computes 8 statistical measurements of the points for each cell of the grid, which will be the input channel features fed into the CNN in the subsequent step. The statistical measurements computed are the: - Maximum height of points in the cell. - Intensity of the highest point in the cell. - Mean height of points in the cell. - Mean intensity of points in the cell. - Number of points in the cell. - Angle of the cell’s center with respect to the origin. - Distance between the cell’s center and the origin. - Binary value indicating whether the cell is empty or occupied. #### CNN-Based Obstacle Prediction Based on the channel features described above, Apollo uses a deep fully-convolutional neural network (FCNN) to predict the cell-wise obstacle attributes including the offset displacement with respect to the potential object center — called center offset, (see Figure 2 below), objectness, positiveness, and object height. 
As shown in Figure 2, the input of the network is a *W*×*H*×*C* channel image where: - *W* represents the column number of the grid. - *H* represents the row number of the grid. - *C* represents the number of channel features. The FCNN is composed of three layers: - Downstream encoding layers (feature encoder). - Upstream decoding layers (feature decoder). - Obstacle attribute prediction layers (predictor). The feature encoder takes the channel feature image as input and successively down-samples its spatial resolution with increasing feature abstraction. Then the feature decoder gradually up-samples the encoded feature image to the spatial resolution of the input 2D grid, which can recover the spatial details of the feature image to facilitate the cell-wise obstacle attribute prediction. The down-sampling and up-sampling operations are implemented in terms of stacked convolution/devolution layers with non-linear activation (i.e., ReLu) layers. <div align=center><img src="images/3d_obstacle_perception/FCNN-with-class.png"></div> <div align=center>Figure 2 The FCNN for cell-wise obstacle prediction</div> #### Obstacle Clustering After the CNN-based prediction step, Apollo obtains prediction information for individual cells. Apollo utilizes five cell object attribute images that contain the: - Center offset - Objectness - Positiveness - Object height - Class probability To generate obstacle objects, Apollo constructs a directed graph, based on the cell center offset prediction, and searches the connected components as candidate object clusters. As shown in Figure 3, each cell is a node of the graph and the directed edge is built based on the center offset prediction of the cell, which points to its parent node corresponding to another cell. Given this graph, Apollo adopts a compressed Union Find algorithm to efficiently find the connected components, each of which is a candidate obstacle object cluster. The objectness is the probability of being a valid object for one individual cell. So Apollo defines the non-object cells as the ones with the objectness of less than 0.5. Thus, Apollo filters out the empty cells and non-object ones for each candidate object cluster. <div align=center><img src="images/3d_obstacle_perception/obstacle_clustering.png"></div> <div align=center>Figure 3 Illustration of obstacle clustering</div> - The red arrow represents the object center offset prediction for each cell. - The blue mask corresponds to the object cells for which the objectness probability is no less than 0.5. - The cells within the solid red polygon compose a candidate object cluster. - The red filled five-pointed stars indicate the root nodes (cells) of sub-graphs that correspond to the connected components. One candidate object cluster can be composed of multiple neighboring connected components whose root nodes are adjacent to each other. The class probabilities are summed up over the nodes (cells) within the object cluster for each candidate obstacle type, including vehicle, pedestrian, bicyclist and unknown. The obstacle type corresponding to the maximum-averaged probability is the final classification result of the object cluster. #### Post-processing After clustering, Apollo obtains a set of candidate object clusters each of which includes several cells. In the post-processing step, Apollo first computes the detection confidence score and object height for each candidate cluster by averaging the positiveness and object height values of its involved cells respectively. 
Then, Apollo removes the points that are too high with respect to the predicted object height and collects the points of valid cells for each candidate cluster. Finally, Apollo removes the candidate clusters that have either a very low confidence score or a small number of points, to output the final obstacle clusters/segments. Set the user-defined parameters in the configuration file `modules/perception/production/data/perception/lidar/models/cnnseg/velodyne128/cnnseg_param.conf`. The table below explains the parameter usage and default values for CNN Segmentation. | Parameter Name | Usage | Default | | ---------------------------- | ---------------------------------------- | ---------- | | objectness_thresh | The threshold of objectness for filtering out non-object cells in the obstacle clustering step. | 0.5 | | model_type | Network type, e.g., RTNet means tensorRT accelerated network | RTNet | | confidence_thresh | The detection confidence score threshold for filtering out the candidate clusters in the post-processing step. | 0.1 | | confidence_range | The confident range with respect to the origin (the LiDAR sensor)for good quality detection.| 85.0 meters | | height_thresh | If it is non-negative, the points that are higher than the predicted object height by height_thresh are filtered out in the post-processing step. | 0.5 meters | | min_pts_num | In the post-processing step, the candidate clusters with less than min_pts_num points are removed. | 3 | | ground_detector | Ground surface detector type. | SpatioTemporalGroundDetector | | gpu_id | The ID of the GPU device used in the CNN-based obstacle prediction step. | 0 | | roi_filter | The ROI filter type, with help of the HDmap. | HdmapROIFilter | | network_param {instance_pt_blob, etc} | The types of different caffe input and outputlayer blob. | layer predefined | | feature_param {width} | The number of cells in X (column) axis of the 2D grid. | 864 | | feature_param {height} | The number of cells in Y (row) axis of the 2D grid. | 864 | | feature_param {min_height} | The minimum height with respect to the origin (the LiDAR sensor). | -5.0 meters | | feature_param {max_height} | The maximum height with respect to the origin (the LiDAR sensor). | 5.0 meters | | feature_param {use_intensity_feature} | Enable input channel internsity feature. | false | | feature_param {use_constant_feature} | Enable input channel constant feature. | false | | feature_param {point_cloud_range} | The range of the 2D grid with respect to the origin (the LiDAR sensor). | 90 meters | **Note: the provided model is a sample for experiment purpose only.** ### MinBox Builder The object builder component establishes a bounding box for the detected obstacles. Due to occlusions or the distance to the LiDAR sensor, the point cloud forming an obstacle can be sparse and cover only a portion of the surfaces. Thus, the box builder works to recover the full bounding box given the polygon point. The main purpose of the bounding box is to estimate the heading of the obstacle (e.g., vehicle) even if the point cloud is sparse. Equally, the bounding box is used to visualize the obstacles. The idea behind the algorithm is to find the all areas given an edge of the polygon point. In the following example, if AB is the edge, Apollo projects other polygon points onto AB and establishes the pair of intersections that has the maximum distance. That’s one of the edges belonging to the bounding box. Then it is straightforward to obtain the other edge of the bounding box. 
By iterating all edges in the polygon, as shown in Figure 4, Apollo determines a 6-edge bounding box. Apollo then selects the solution that has the minimum area as the final bounding box. <div align=center><img src="images/3d_obstacle_perception/object_building.png"></div> <div align=center>Figure 4 Illustration of MinBox Object Builder</div> ### HM Object Tracker The HM object tracker is designed to track obstacles detected by the segmentation step. In general, it forms and updates track lists by associating current detections with existing track lists, deletes the old track lists if they no longer persist, and spawns new track lists if new detections are identified. The motion state of the updated track lists are estimated after association. In the HM object tracker, the Hungarian algorithm is used for detection-to-track association, and a Robust Kalman Filter is adopted for motion estimation. #### Detection-to-Track Association When associating detection to existing track lists, Apollo constructs a bipartite graph and then uses the Hungarian algorithm to find the best detection-to-track matching with minimum cost (distance). ##### **Computing Association Distance Matrix** In the first step, an association distance matrix is established. The distance between a given detection and one track is calculated according to a series of association features including motion consistency and appearance consistency. Some features used in HM tracker’s distance computing are shown below: | Association Feature Name | Evaluating Consistency Description | | ------------------------ | ---------------------------------- | | location_distance | Motion | | direction_distance | Motion | | bbox_size_distance | Appearance | | point_num_distance | Appearance | | histogram_distance | Appearance | Additionally, there are some important parameters of distance weights that are used for combining the above-mentioned association features into a final distance measurement. **Bipartite Graph Matching via Hungarian Algorithm** Given the association distance matrix, as shown in Figure 5, Apollo constructs a bipartite graph and uses the Hungarian algorithm to find the best detection-to-track matching via minimizing the distance cost. It solves the assignment problem within O(n\^3) time complexity. To boost computing performance, the Hungarian algorithm is implemented after cutting the original bipartite graph into subgraphs, by deleting vertices with a distance greater than a reasonable maximum distance threshold. <div align=center><img src="images/3d_obstacle_perception/bipartite_graph_matching.png"></div> <div align=center>Figure 5 Illustration of Bipartite Graph Matching</div> #### Track Motion Estimation After the detection-to-track association, the HM object tracker uses a Robust Kalman Filter to estimate the motion states of current track lists with a constant velocity motion model. The motion states include the belief anchor point and belief velocity, which correspond to the 3D position and the 3D velocity respectively. To overcome possible distraction caused from imperfect detections, Robust Statistics techniques are implemented in the tracker’s filtering algorithm. **Observation Redundancy** The measurement of velocity that is the input of the filtering algorithm is selected among a series of redundant observations, which include anchor point shift, bounding box center shift, and bounding box corner point shift. 
Redundant observations bring extra robustness to filtering measurement, because the probability that all observations fail is much less than the probability that a single observation fails. **Breakdown** Gaussian Filter algorithms assume their noises are generated from Gaussian distribution. However, this hypothesis may fail in a motion estimation problem because the noise of its measurement may draw from fat-tail distributions. Apollo uses a breakdown threshold in the filtering process to neutralize the over-estimation of update gain. **Update according Association Quality** The original Kalman Filter updates its states without distinguishing the quality of its measurements. However, the quality of measurement is a beneficial indicator of filtering noise and can be estimated. For instance, the distance calculated in the association step could be a reasonable estimate of the quality of measurement. Updating the state of the filtering algorithm according to the association quality enhances robustness and smoothness to the motion estimation problem. A high-level workflow of HM object tracker is given in Figure 6. <div align=center><img src="images/3d_obstacle_perception/hm_object_tracker.png"></div> <div align=center>Figure 6 Workflow of HM Object Tracker</div> The main points in an HM object tracker workflow are: 1) Construct the tracked objects and transform them into world coordinates. 2) Predict the states of existing track lists and match detections to them. 3) Update the motion state of updated track lists and collect the tracking results. ### Sequential Type Fusion To smooth the obstacle type and reduce the type switch over the entire trajectory, Apollo utilizes a sequential type fusion algorithm based on a linear-chain Conditional Random Field (CRF), which can be formulated as follows: ![CRF_eq1](images/3d_obstacle_perception/CRF_eq1.png) ![CRF_eq2](images/3d_obstacle_perception/CRF_eq2.png) where the unary term acts on each single node, while the binary one acts on each edge. The probability in the unary term is the class probability output by the CNN-based prediction, and the state transition probability in the binary term is modeled by the obstacle type transition from time *t-1* to time *t*, which is statistically learned from large amounts of obstacle trajectories. Specifically, Apollo also uses a learned confusion matrix to indicate the probability of changing from the predicted type to ground truth type to optimize the original CNN-based class probability. Using the Viterbi algorithm, the sequential obstacle type is optimized by solving the following problem: ![CRF_eq3](images/3d_obstacle_perception/CRF_eq3.png) ## Radar Detector Given the radar data from the sensor, follow a basic process such as the one described below. First, the track ID should be extended, because Apollo needs a global track ID for ID association. The original radar sensor provides an ID with only 8 bits, so it is difficult to determine if two objects with the same ID in two adjacent frames denote a single object in the tracking history, especially if there is a frame dropping problem. Apollo uses the measurement state provided by the radar sensor to handle this problem. Meanwhile, Apollo assigns a new track ID to the object that is far away from the object with the same track ID as in the previous frame. Second, use a false positive filter to remove noise. Apollo sets some threshold via radar data to filter results that could be noise. 
Then, objects are built according the radar data as a unified object format. Apollo translates the objects into world coordinates via calibration results. The original radar sensor provides the relative velocity of the object, so Apollo uses the host car velocity from localization. Apollo adds these two velocities to denote the absolute velocity of the detected object. Finally, the HDMap ROI filter is used to obtain the interested objects. Only objects inside the ROI are used by the sensor fusion algorithm. ## Obstacle Results Fusion The sensor fusion module is designed to fuse LiDAR tracking results and radar detection results. Apollo first matches the sensor results with the fusion items by tracking their IDs. Then it computes the association matrix for unmatched sensor results and unmatched fusion items to get an optimal matching result. For the matched sensor result, update the corresponding fusion item using the Adaptive Kalman Filter. For the unmatched sensor result, create a new fusion item. Remove any stale unmatched fusion items. ### Fusion Items Management Apollo has the concept of publish-sensor. The given radar results are cached. The given LiDAR results trigger the fusion action. The frequency of sensor fusion output is the same as the frequency of publish sensor. Apollo's publish-sensor is LiDAR. The sensor results feed the fusion pipeline sorted by the sensor time stamp. Apollo keeps all sensor results. The object survival time is set for different sensor objects in Apollo. An object is kept alive if at least one sensor result survives. The Apollo perception module provides fusion results of LiDAR and radar in the short-range area around the car and radar-only results for the long distance. ### Sensor Results to Fusion Lists Association When associating sensor results to the fusion lists, Apollo first matches the identical track ID of the same sensor, then constructs a bipartite graph and uses the Hungarian algorithm to find the best result-to-fusion matching of the unmatched sensor results and fusion lists, via minimizing the distance cost. The Hungarian algorithm is the same algorithm that the HM Object Tracker uses. The distance cost is computed by the Euclidean distance of the anchor points of the sensor result and fusion item. ### Motion Fusion Apollo uses the Adaptive Kalman filter to estimate the motion of a current item with a constant acceleration motion model. The motion state includes its belief anchor point, belief velocity and belief acceleration, which correspond to the 3D position, its 3D velocity and acceleration respectively. Apollo uses only position and velocity from sensor results. In motion fusion, Apollo caches the state of all sensor results and computes the acceleration via the Kalman Filter. Apollo provides uncertainty of position and velocity in LiDAR tracker and radar detector data. Apollo feeds all the states and uncertainties to the Adaptive Kalman Filter and obtains the fused results. Apollo uses a breakdown threshold in the filtering process to neutralize the over-estimation of update gain.
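To make the two-step point inquiry against the ROI LUT (described in the HDMap ROI Filter section above) more concrete, here is a minimal C++ sketch. The grid layout (a row-major bitmap centered on the LiDAR origin) and the struct names are assumptions for illustration, not the actual Apollo data structures.

```c++
#include <cmath>
#include <vector>

// Sketch of the ROI LUT point inquiry (illustrative layout, not Apollo's code).
// The LUT is a row-major grid of flags covering [-range, range] in x and y
// around the LiDAR origin; cell_size is the quantization step (e.g. 0.25 m).
struct RoiLut {
  double range = 120.0;      // meters, half-width of the covered square
  double cell_size = 0.25;   // meters per cell
  int grid_dim = 960;        // cells per side = 2 * range / cell_size
  std::vector<bool> in_roi;  // grid_dim * grid_dim flags, true = inside ROI
};

bool IsPointInRoi(const RoiLut& lut, double x, double y) {
  // Step 1: check whether the point falls inside the LUT rectangle at all.
  if (std::fabs(x) >= lut.range || std::fabs(y) >= lut.range) {
    return false;
  }
  // Step 2: quantize to a cell index and read the cached ROI flag.
  const int col = static_cast<int>((x + lut.range) / lut.cell_size);
  const int row = static_cast<int>((y + lut.range) / lut.cell_size);
  return lut.in_roi[row * lut.grid_dim + col];
}
```

Collecting the indices of all input points for which this check returns true yields exactly the filter output described in the input/output table of that section.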
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_fusion_system.md
# How to add a new fusion system The detailed processing flow of fusion is shown below: ![](images/Fusion_overview.png) The fusion system introduced by this document is located at fusion Component listed below. Current architecture of Fusion Component is shown: ![fusion component](images/fusion.png) As we can see from above structure, fusion system is the derived class of `BaseFusionSystem` which acts as a abstract class member of `ObstacleMultiSensorFusion` located in Fusion Component. Next, We will introduce how to add a new fusion system based on current structure. Apollo has provided one fusion system -- Probabilistic fusion. It could be easily changed or replaced by other systems. The input of system should be objective obstacle data generated by detection and track of upstream sensors, while the output should be fused and tracked objective obastacle data. This document will introduce how to add a new fusion system, the basic task sequence is listed below: 1. Define a class that inherits `base_fusion_system` 2. Implement the class `NewFusionSystem` 3. Add config proto file for `NewFusionSystem` 4. Update config file to put your system into effect The steps are elaborated below for better understanding: ## Define a class that inherits `base_fusion_system` All the fusion systems shall inherit `base_fusion_system`,which defines basic class members and a set of interfaces. Here is an example of the system implementation: ```c++ namespace apollo { namespace perception { namespace fusion { class NewFusionSystem : public BaseFusionSystem { public: NewFusionSystem(); ~NewFusionSystem(); NewFusionSystem(const NewFusionSystem&) = delete; NewFusionSystem& operator=(const NewFusionSystem&) = delete; bool Init(const FusionInitOptions& init_options) override; bool Fuse(const FusionOptions& options, const base::FrameConstPtr& sensor_frame, std::vector<base::ObjectPtr>* fused_objects) override; std::string Name() const override; }; // class NewFusionSystem } // namespace fusion } // namespace perception } // namespace apollo ``` The function signature of `base_fusion_system` is pre-defined: ```c++ struct FusionInitOptions { std::vector<std::string> main_sensors; }; struct FusionOptions {}; struct alignas(16) Frame { EIGEN_MAKE_ALIGNED_OPERATOR_NEW Frame() { sensor2world_pose.setIdentity(); } void Reset() { timestamp = 0.0; objects.clear(); sensor2world_pose.setIdentity(); sensor_info.Reset(); lidar_frame_supplement.Reset(); radar_frame_supplement.Reset(); camera_frame_supplement.Reset(); } // @brief sensor information SensorInfo sensor_info; double timestamp = 0.0; std::vector<std::shared_ptr<Object>> objects; Eigen::Affine3d sensor2world_pose; // sensor-specific frame supplements LidarFrameSupplement lidar_frame_supplement; RadarFrameSupplement radar_frame_supplement; CameraFrameSupplement camera_frame_supplement; UltrasonicFrameSupplement ultrasonic_frame_supplement; }; typedef std::shared_ptr<Frame> FramePtr; typedef std::shared_ptr<const Frame> FrameConstPtr; struct alignas(16) Object { EIGEN_MAKE_ALIGNED_OPERATOR_NEW Object(); std::string ToString() const; void Reset(); int id = -1; PointCloud<PointD> polygon; Eigen::Vector3f direction = Eigen::Vector3f(1, 0, 0); float theta = 0.0f; float theta_variance = 0.0f; Eigen::Vector3d center = Eigen::Vector3d(0, 0, 0); Eigen::Matrix3f center_uncertainty; Eigen::Vector3f size = Eigen::Vector3f(0, 0, 0); Eigen::Vector3f size_variance = Eigen::Vector3f(0, 0, 0); Eigen::Vector3d anchor_point = Eigen::Vector3d(0, 0, 0); ObjectType type = ObjectType::UNKNOWN; 
std::vector<float> type_probs; ObjectSubType sub_type = ObjectSubType::UNKNOWN; std::vector<float> sub_type_probs; float confidence = 1.0f; int track_id = -1; Eigen::Vector3f velocity = Eigen::Vector3f(0, 0, 0); Eigen::Matrix3f velocity_uncertainty; bool velocity_converged = true; float velocity_confidence = 1.0f; Eigen::Vector3f acceleration = Eigen::Vector3f(0, 0, 0); Eigen::Matrix3f acceleration_uncertainty; double tracking_time = 0.0; double latest_tracked_time = 0.0; MotionState motion_state = MotionState::UNKNOWN; std::array<Eigen::Vector3d, 100> drops; std::size_t drop_num = 0; bool b_cipv = false; CarLight car_light; LidarObjectSupplement lidar_supplement; RadarObjectSupplement radar_supplement; CameraObjectSupplement camera_supplement; FusionObjectSupplement fusion_supplement; }; using ObjectPtr = std::shared_ptr<Object>; using ObjectConstPtr = std::shared_ptr<const Object>; ``` ## Implement the class `NewFusionSystem` To ensure the new system could function properly, `NewFusionSystem` should at least override the interface Init(), Fuse(), Name() defined in `base_fusion_system`. Init() is resposible for config loading, class member initialization, etc. And Fuse() will implement the basic logic of system. A concrete `NewFusionSystem.cc` example is shown: ```c++ namespace apollo { namespace perception { namespace fusion { bool NewFusionSystem::Init(const FusionInitOptions& init_options) { /* Initialization of your system */ } bool NewFusionSystem::Fuse(const FusionOptions& options, const base::FrameConstPtr& sensor_frame, std::vector<base::ObjectPtr>* fused_objects) { /* Implementation of your system */ } std::string NewFusionSystem::Name() const { /* Return your system's name */ } FUSION_REGISTER_FUSIONSYSTEM(NewFusionSystem); //register the new fusion_system } // namespace fusion } // namespace perception } // namespace apollo ``` ## Add config and param proto file for `NewFusionSystem` Follow the following steps to add config proto file for the new system: 1. Define a `proto` for the new system configurations according to the requirements of your algorithm. As a reference, you can found and follow the `proto` definition of `probabilistic_fusion_config` at `modules/perception/proto/probabilistic_fusion_config.proto` 2. Once finishing your `proto`, for example `newfusionsystem_config.proto`, add the following content at the file header: ```protobuf syntax = "proto2"; package apollo.perception.fusion; message NewFusionSystemConfig { double parameter1 = 1; int32 parameter2 = 2; } ``` 3. Refer to `modules/perception/production/conf/perception/fusion/config_manager.config` and add your system path: ```protobuf model_config_path: "./conf/perception/fusion/modules/newfusionsystem.config" ``` 4. Refer to the `modules/probabilistic_fusion.config` in the same folder and create `newfusionsystem.config`: ```protobuf model_configs { # NewFusionSystem model. name: "NewFusionSystem" version: "1.0.0" string_params { name: "root_dir" value: "./data/perception/fusion/" } string_params { name: "config_file" value: "newfusionsystem.pt" } } ``` 5. Refer to `probabilistic_fusion.pt` and create `newfusionsystem.pt` file at `modules/perception/production/data/perception/fusion/`: ``` Note:The "*.pt" file should have the same format with the "proto" files defined in step 1,2. 
```

## Update config file to put your system into effect

To use your new fusion system in Apollo, modify the value of `fusion_method` to your system's name in `fusion_component_conf.pb.txt`, located in the corresponding folder under `modules/perception/production/data/perception/fusion/`.

Once you have finished the above modifications, your new fusion system will take effect in Apollo.
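As a starting point for the `Fuse()` implementation described above, the hedged sketch below simply forwards the objects of each incoming sensor frame as the "fused" output. This is only an illustrative placeholder for verifying that the new system is registered and wired into the component end to end, not a real fusion algorithm.

```c++
// Minimal placeholder implementation (illustration only): echoes the objects
// of each incoming sensor frame as the "fused" result so the plumbing of the
// new system can be verified before real association/tracking logic is added.
bool NewFusionSystem::Fuse(const FusionOptions& options,
                           const base::FrameConstPtr& sensor_frame,
                           std::vector<base::ObjectPtr>* fused_objects) {
  if (sensor_frame == nullptr || fused_objects == nullptr) {
    return false;
  }
  fused_objects->clear();
  // A real system would cache frames per sensor, associate them with existing
  // fusion tracks, and update the track states here.
  for (const auto& object : sensor_frame->objects) {
    fused_objects->push_back(object);
  }
  return true;
}
```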
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_lidar_driver.md
# How to add a new Lidar driver ## Introduction Lidar is a commonly used environment-aware sensor. Lidar uses pulsed laser to illuminate a target, receives the reflected pulse from the target, then calculates the distance to the target based on the time of laser return. Differences in multiple measurements can then be used to make digital 3-D representations of the environment. As default, Apollo platform support multiple types of Lidar drivers, including 16 channels, 32 channels, 64 channels and 128 channels Velodyne lidars. This manual describes the major functions of Lidar driver and how to add a new lidar driver in Apollo platform. ## What's inside a lidar driver Taking velodyne lidar driver as an example, there are three major components: 1. [Driver](../../modules/drivers/lidar/velodyne/driver): Driver receives UDP data packets from lidar sensor, and packages the data packets into a frame of scanning data in the format of VelodyneScan. VelodyneScan is defined in file below: ``` modules/drivers/lidar/velodyne/proto/velodyne.proto ``` 2. [Parser](../../modules/drivers/lidar/velodyne/parser): Parser takes one frame data in format of VelodyneScan as input, converts the cloud points in the frame from spherical coordinate system to Cartesian coordinates system, then sends out the point cloud as output. The pointcloud format is defined in file below: ``` modules/drivers/proto/pointcloud.proto ``` 3. [Compensator](../../modules/drivers/lidar/velodyne/compensator): Compensator takes pointcloud data and pose data as inputs. Based on the corresponding pose information for each cloud point, it converts each cloud point information aligned with the latest time in the current lidar scan frame, minimizing the motion error due the movement of the vehicle. Thus, each cloud point needs carry its own timestamp information. ## Steps to add a new Lidar driver #### 1. Get familiar with Apollo Cyber RT framework. Please refer to the [manuals of Apollo Cyber RT](../04_CyberRT/README.md). #### 2. Define message for raw data Apollo already define the format of pointcloud. For new lidar, you only need to define the protobuf message for the raw scannning data. Those raw data will be used for archive and offline development. Compared to processed pointcloud data, raw data saves a lot of storage spaces for long term. The new message of the scan data can be define as below: ```c++ // a scan message sample message ScanData { optional apollo.common.Header header = 1; // apollo header optional Model model = 2; // device model optional Mode mode = 3; // work mode // device serial number, corresponds to a specific calibration file optional string sn = 4; repeated bytes raw_data = 5; // raw scan data } ``` In velodyne driver, the scan data message is define as [VelodyneScan](../../modules/drivers/lidar/velodyne/proto/velodyne.proto#L29). #### 3. Access the raw data Each seconds, Lidar will generate a lot of data, so it relied on UDP to efficiently transport the raw data. You need to create a DriverComponent class, which inherits the Component withotu any parameter. In its Init function, you need to start a async polling thread, whic will receive Lidar data from the specific port. Then depending on the Lidar's frequency, the DriverComponent needs to package all the packets in a fix period into a frame of ScanData. Eventually, the writer will send the ScanData through a corresponding channel. 
```c++
// Inherit component with no template parameters,
// do not receive message from any channel
class DriverComponent : public Component<> {
 public:
  ~DriverComponent();

  bool Init() override {
    poll_thread_.reset(new std::thread([this] { this->Poll(); }));
    return true;
  }

 private:
  void Poll() {
    while (apollo::cyber::OK()) {
      // poll data from port xxx
      // ...
      auto scan = std::make_shared<ScanData>();
      // pack ScanData
      // ...
      writer_->Write(scan);
    }
  }

  std::shared_ptr<std::thread> poll_thread_;
  std::shared_ptr<apollo::cyber::Writer<ScanData>> writer_;
};

CYBER_REGISTER_COMPONENT(DriverComponent)
```

#### 4. Parse the scan data, convert to pointcloud

If the new lidar driver already provides the pointcloud data in the Cartesian coordinate system, then you just need to store that data in the protobuf format defined in Apollo.

The Parser converts the lidar raw data to the pointcloud format in the Cartesian coordinate system. The Parser takes ScanData as input. For each cloud point, it parses the timestamp, x/y/z coordinates and intensity, then packages all the cloud point information into a frame of pointcloud. Each cloud point is transformed into the FLU (Front: x, Left: y, Up: z) coordinates with the lidar as the origin point.

```c++
message PointXYZIT {
  optional float x = 1 [default = nan];
  optional float y = 2 [default = nan];
  optional float z = 3 [default = nan];
  optional uint32 intensity = 4 [default = 0];
  optional uint64 timestamp = 5 [default = 0];
}
```

Then you need to create a new ParserComponent, which inherits the Component template with ScanData. ParserComponent takes ScanData as input, then generates the pointcloud message and sends it out.

```c++
...

class ParserComponent : public Component<ScanData> {
 public:
  bool Init() override {
    ...
  }

  bool Proc(const std::shared_ptr<ScanData>& scan_msg) override {
    // get a pointcloud object from the object pool
    auto point_cloud_out = point_cloud_pool_->GetObject();
    // clear before using
    point_cloud_out->clear();
    // parse scan data and generate pointcloud
    parser_->parse(scan_msg, point_cloud_out);
    // write pointcloud to a specific channel
    writer_->Write(point_cloud_out);
    return true;
  }

 private:
  std::shared_ptr<Writer<PointCloud>> writer_;
  std::unique_ptr<Parser> parser_ = nullptr;

  std::shared_ptr<CCObjectPool<PointCloud>> point_cloud_pool_ = nullptr;
  int pool_size_ = 8;
};

CYBER_REGISTER_COMPONENT(ParserComponent)
```

#### 5. Motion compensation for pointcloud

Motion compensation is optional, depending on the lidar hardware design. For example, if the pointcloud output by the lidar already has the motion error compensated, then no extra compensator step is needed. Otherwise, you need your own compensator. However, if each cloud point in your lidar's output carries its own timestamp, you can probably reuse the current compensator without any changes.

#### 6. Configure the dag file

After each component is done, you just need to configure the DAG config file to add each component into the data processing pipeline. E.g. lidar_driver.dag:

```python
# Define all coms in DAG streaming.
module_config {
  module_library : "/apollo/bazel-bin/modules/drivers/lidar/xxx/driver/libxxx_driver_component.so"
  components {
    class_name : "DriverComponent"
    config {
      name : "xxx_driver"
      config_file_path : "/path/to/lidar_driver_conf.pb.txt"
    }
  }
}

module_config {
  module_library : "/apollo/bazel-bin/modules/drivers/lidar/xxx/parser/libxxx_parser_component.so"
  components {
    class_name : "ParserComponent"
    config {
      name : "xxx_parser"
      config_file_path : "/path/to/lidar_parser_conf.pb.txt"
      readers {
        channel: "/apollo/sensor/xxx/Scan"
      }
    }
  }
}

module_config {
  module_library : "/apollo/bazel-bin/modules/drivers/lidar/xxx/compensator/libxxx_compensator_component.so"
  components {
    class_name : "CompensatorComponent"
    config {
      name : "pointcloud_compensator"
      config_file_path : "/apollo/modules/drivers/lidar/xxx/conf/xxx_compensator_conf.pb.txt"
      readers {
        channel: "/apollo/sensor/xxx/PointCloud2"
      }
    }
  }
}
```

#### 7. Run the lidar driver and visualize the pointcloud output

After finishing all the previous steps, you can use the following command to start your new lidar driver:

```bash
mainboard -d /path/to/lidar_driver.dag
```

To visualize the pointcloud output, you can run `cyber_visualizer` and choose the right channel for the pointcloud.
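As a supplement to step 4 above, if your lidar reports raw measurements in spherical form, the parser's core per-point conversion to the FLU Cartesian frame can look like the sketch below. The raw-measurement struct and the angle conventions (azimuth measured from +x toward +y, elevation from the horizontal plane) are assumptions about a hypothetical sensor; adjust them to your device's datasheet.

```c++
#include <cmath>
#include <cstdint>

// Hypothetical raw measurement of one laser return.
struct RawReturn {
  double range_m;         // distance to target in meters
  double azimuth_rad;     // rotation around z, 0 = straight ahead (+x)
  double elevation_rad;   // vertical angle of the laser channel
  uint32_t intensity;
  uint64_t timestamp_ns;  // per-point timestamp, required by the compensator
};

// Minimal per-point conversion to the FLU frame (x front, y left, z up).
// In a real parser this would fill one PointXYZIT of the output pointcloud.
void ToFluPoint(const RawReturn& r,
                float* x, float* y, float* z,
                uint32_t* intensity, uint64_t* timestamp) {
  const double horizontal = r.range_m * std::cos(r.elevation_rad);
  *x = static_cast<float>(horizontal * std::cos(r.azimuth_rad));
  *y = static_cast<float>(horizontal * std::sin(r.azimuth_rad));
  *z = static_cast<float>(r.range_m * std::sin(r.elevation_rad));
  *intensity = r.intensity;
  *timestamp = r.timestamp_ns;
}
```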
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/traffic_light.md
# Traffic Light Perception This document provides the details about how traffic light perception functions in Apollo 2.0. ## Introduction The Traffic Light Perception Module is designed to provide accurate and comprehensive traffic light status using cameras. Typically, the traffic light has three states: - Red - Yellow - Green However, if the traffic light is not working, it might display the color black or show a flashing red or yellow light. Sometimes the traffic light cannot be found in the camera's field of vision and the module fails to recognize its status. To account for all situations, the Traffic Light Perception Module provides output for five states: - Red - Yellow - Green - Black - Unknown The module's HD-Map queries repeatedly to know whether there are lights present in front of the vehicle. The traffic light is represented by the four points on its boundary, which can be obtained by querying the HD-Map, given the car's location. The traffic light is projected from world coordinates to image coordinates if there is a light in front of the vehicle. Apollo has determined that using a single camera, which has a constant field of vision, cannot see traffic lights everywhere. This limitation is due to the following factors: - The perception range should be above 100 meters - The height of the traffic lights or the width of crossing varies widely Consequently, Apollo 2.0 uses two cameras to enlarge its perception range: - A **telephoto** **camera**, whose focus length is 25 mm, is installed to observe forward, distant traffic lights. Traffic lights that are captured in a telephoto camera are very large and easy to detect. However, the field of vision of a telephoto camera is quite limited. The lights are often outside of the image if the lane is not straight enough, or if the car is in very close proximity to the light. - A **wide-angle camera**, whose focus length is 6 mm, is equipped to provide a supplementary field of vision. The module decides which camera to use adaptively based on the light projection. Although there are only two cameras on the Apollo car, the algorithm can handle multiple cameras. The following photos show the detection of traffic lights using a telephoto camera (for the first photo) and a wide-angle camera (for the second photo). ![telephoto camera](images/traffic_light/long.jpg) ![wide angle camera](images/traffic_light/short.jpg) # Pipeline The Pipeline has two main parts and is described in following sections: - Pre-process - Traffic light projection - Camera selection - Image and cached lights sync - Process - Rectify — Provide the accurate traffic light bounding boxes - Recognize — Provide the color of each bounding box - Revise — Correct the color based on the time sequence ## Pre-process There is no need to detect lights in every frame of an image. The status of a traffic light changes in low frequency and the computing resources are limited. Normally, images from different cameras would arrive almost simultaneously, and only one is fed to the Process part of the Pipeline. Therefore, the selection and the matching of images are necessary. ### Input/Output This section describes the input and the output of the Pre-process module. The input is obtained by subscribing to topic names from Apollo or directly reading them from locally stored files, and the output is fed to the successive Process module. 
#### Input - Images from different cameras, acquired by subscribing to the topic name: - `/apollo/sensor/camera/traffic/image_long` - `/apollo/sensor/camera/traffic/image_short` - Localization, acquired by querying the topic: - `/tf` - HD Map - Calibration results #### Output - Image from the selected camera - Traffic light bounding box projected from world coordinates to image coordinates ### Camera Selection The traffic light is represented by a unique ID and four points on its boundary, each of which is described as a 3D point in the world coordinate system. The following example shows a typical entry for traffic light `signal info`. The four boundary points can be obtained by querying the HD Map, given the car's location. ```protobuf signal info: id { id: "xxx" } boundary { point { x: ... y: ... z: ... } point { x: ... y: ... z: ... } point { x: ... y: ... z: ... } point { x: ... y: ... z: ... } } ``` The boundary points in the 3D world coordinates are then projected to the 2D image coordinates of each camera. For one traffic light, the bounding box described by the four projected points in the telephoto camera image has a larger area. It is better for detection than that in the wide-range image. Consequently, the image from the camera with the longest focal length that can see all the lights will be selected as the output image. The traffic light bounding box projected on this image will be the output bounding box. The selected camera ID with timestamp is cached in queue, as shown below: ``` C++ struct ImageLights { CarPose pose; CameraId camera_id; double timestamp; size_t num_signal; ... other ... }; ``` Thus far, all the information that we need includes the localization, the calibration results, and the HD Map. The selection can be performed at any time as the projection is independent of the image content. The task of performing the selection when images arrive is just for simplicity. Moreover, image selection does not need to be performed upon the arrival of every image, and a time interval for the selection is set. ### Image Sync Images arrive with a timestamp and a camera ID. The pairing of a timestamp and a camera ID is used to find the appropriate cached information. If the image can find a cached record with same camera ID and a small difference between timestamps, the image can be published to the Process module. All inappropriate images are abandoned. ## Process The Process module is divided into three steps, with each step focusing on one task: - Rectifier — Detects a traffic light bounding box in a ROI. - Recognizer— Classifies the bounding box's color. - Reviser — Correct color using sequential information. ### Input/Output This section describes the data input and output of the Process. The input is obtained from the Pre-process module and the output is published as a traffic light topic. #### Input - Image from a selected camera - A set of bounding boxes #### Output - A set of bounding boxes with color labels. ### Rectifier The projected position, which is affected by the calibration, localization, and the HD-Map label, is ***not completely reliable***. A larger region of interest (ROI), calculated using the projected light's position, is used to find the accurate boundingbox for the traffic light. In the photo below, the blue rectangle indicates the projected light bounding box, which has a large offset to the actual light. The big, yellow rectangle is the ROI. 
![example](images/traffic_light/example.jpg)

The traffic light detection is implemented as a regular convolutional neural network (CNN) detection task. It receives an image with an ROI as input and outputs a series of bounding boxes. There might be more lights detected in the ROI than in the input, so Apollo selects the proper lights according to the detection scores and the input lights' positions and shapes. If the CNN cannot find any light in the ROI, the status of the input lights is marked as unknown and the two remaining steps (Recognizer and Reviser) are skipped.

### Recognizer

The traffic light recognition is implemented as a typical CNN classification task. The network receives an image with an ROI and a list of bounding boxes as input. The output of the network is a `$4\times n$` vector, representing four probabilities for each box to be black, red, yellow, and green. The class with the maximum probability is regarded as the light's status, if and only if the probability is large enough. Otherwise, the light's status is set to black, which means that the status is not certain.

### Reviser

Because a traffic light can be flashing or shaded, and the Recognizer is ***not*** perfect, the current status may fail to represent the real status. A Reviser that corrects the status is necessary.

If the Reviser receives a definitive status such as red or green, it saves and outputs the status directly. If the received status is black or unknown, the Reviser looks up the saved map. If the status of this light has been certain for a period of time, the Reviser outputs the saved status. Otherwise, the status of black or unknown is sent as output.

Because of the time sequence, yellow only exists ***after*** green and ***before*** red. Any yellow ***after red*** is reset to red for the sake of safety until green is displayed.
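The Reviser logic above can be summarized with the short C++ sketch below. The enum, the map keyed by light ID, and the hold time are illustrative assumptions; the real module works on the protobuf light messages, but the decision rules (keep definitive colors, fall back to the recently saved color for black/unknown, and never let yellow follow red) are the ones described in this section.

```c++
#include <map>
#include <string>

enum class LightColor { UNKNOWN, BLACK, RED, YELLOW, GREEN };

struct SavedStatus {
  LightColor color = LightColor::UNKNOWN;
  double timestamp = 0.0;
};

// Illustrative reviser sketch, not Apollo's actual implementation.
class Reviser {
 public:
  LightColor Revise(const std::string& light_id, LightColor detected,
                    double now) {
    SavedStatus& saved = history_[light_id];
    // Yellow is only valid after green; any yellow right after red stays red.
    if (detected == LightColor::YELLOW && saved.color == LightColor::RED) {
      return LightColor::RED;
    }
    // Definitive colors are saved and output directly.
    if (detected == LightColor::RED || detected == LightColor::GREEN ||
        detected == LightColor::YELLOW) {
      saved.color = detected;
      saved.timestamp = now;
      return detected;
    }
    // Black/unknown: reuse the saved color if it is recent enough.
    if (saved.color != LightColor::UNKNOWN &&
        now - saved.timestamp < hold_seconds_) {
      return saved.color;
    }
    return detected;
  }

 private:
  std::map<std::string, SavedStatus> history_;
  double hold_seconds_ = 1.5;  // assumed hold time, tune as needed
};
```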
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_lidar_detector_algorithm_cn.md
# 如何添加新的lidar检测算法 Perception中的lidar数据流如下: ![](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/lidar_perception_data_flow.png) 本篇文档所介绍的lidar检测算法位于图中的Detection Component中。当前Detection Component的架构如下: ![lidar detection high-level](images/lidar_detection_1.png) ![lidar detection](images/lidar_detection_2.png) 从以上结构中可以清楚地看到lidar检测算法是位于Detection Component的 `base_lidar_obstacle_detection` 中的抽象成员类 `base_lidar_detector` 的派生类。下面将详细介绍如何基于当前结构添加新的lidar检测算法。 Apollo默认提供了2种lidar检测算法--PointPillars和CNN(NCut不再维护),可以轻松更改或替换为不同的算法。每种算法的输入都是原始点云信息,输出都是目标级障碍物信息。本篇文档将介绍如何引入新的lidar检测算法,添加新算法的步骤如下: 1. 定义一个继承基类 `base_lidar_detector` 的类 2. 实现新类 `NewLidarDetector` 3. 为新类 `NewLidarDetector` 配置config和param的proto文件 4. 更新 lidar_obstacle_detection.conf 为了更好的理解,下面对每个步骤进行详细的阐述: ## 定义一个继承基类 `base_lidar_detector` 的类 所有的lidar检测算法都必须继承基类`base_lidar_detector`,它定义了一组接口。 以下是检测算法继承基类的示例: ```c++ namespace apollo { namespace perception { namespace lidar { class NewLidarDetector : public BaseLidarDetector { public: NewLidarDetector(); virtual ~NewLidarDetector() = default; bool Init(const LidarDetectorInitOptions& options = LidarDetectorInitOptions()) override; bool Detect(const LidarDetectorOptions& options, LidarFrame* frame) override; std::string Name() const override; }; // class NewLidarDetector } // namespace lidar } // namespace perception } // namespace apollo ``` 基类 `base_lidar_detector` 已定义好各虚函数签名,接口信息如下: ```c++ struct LidarDetectorInitOptions { std::string sensor_name = "velodyne64"; }; struct LidarDetectorOptions {}; struct LidarFrame { // point cloud std::shared_ptr<base::AttributePointCloud<base::PointF>> cloud; // world point cloud std::shared_ptr<base::AttributePointCloud<base::PointD>> world_cloud; // timestamp double timestamp = 0.0; // lidar to world pose Eigen::Affine3d lidar2world_pose = Eigen::Affine3d::Identity(); // lidar to world pose Eigen::Affine3d novatel2world_pose = Eigen::Affine3d::Identity(); // hdmap struct std::shared_ptr<base::HdmapStruct> hdmap_struct = nullptr; // segmented objects std::vector<std::shared_ptr<base::Object>> segmented_objects; // tracked objects std::vector<std::shared_ptr<base::Object>> tracked_objects; // point cloud roi indices base::PointIndices roi_indices; // point cloud non ground indices base::PointIndices non_ground_indices; // secondary segmentor indices base::PointIndices secondary_indices; // sensor info base::SensorInfo sensor_info; // reserve string std::string reserve; void Reset(); void FilterPointCloud(base::PointCloud<base::PointF> *filtered_cloud, const std::vector<uint32_t> &indices); }; ``` ## 实现新类 `NewLidarDetector` 为了确保新的检测算法能顺利工作,`NewLidarDetector`至少需要重写`base_lidar_detector`中定义的接口Init(),Detect()和Name()。其中Init()函数负责完成加载配置文件,初始化类成员等工作;而Detect()则负责实现算法的主体流程。一个具体的`NewLidarDetector.cc`实现示例如下: ```c++ namespace apollo { namespace perception { namespace lidar { bool NewLidarDetector::Init(const LidarDetectorInitOptions& options) { /* 你的算法初始化部分 */ } bool NewLidarDetector::Detect(const LidarDetectorOptions& options, LidarFrame* frame) { /* 你的算法实现部分 */ } std::string NewLidarDetector::Name() const { /* 返回你的检测算法名称 */ } PERCEPTION_REGISTER_LIDARDETECTOR(NewLidarDetector); //注册新的lidar_detector } // namespace lidar } // namespace perception } // namespace apollo ``` ## 为新类 `NewLidarDetector` 配置config和param的proto文件 按照下面的步骤添加新lidar检测算法的配置和参数信息: 1. 根据算法要求为新lidar检测算法配置config的`proto`文件。作为示例,可以参考以下位置的`cnn_segmentation`的`proto`定义:`modules/perception/lidar/lib/detector/cnn_segmentation/proto/cnnseg_config.proto` 2. 
定义新的`proto`之后,例如`newlidardetector_config.proto`,输入以下内容: ```protobuf syntax = "proto2"; package apollo.perception.lidar; message NewLidarDetectorConfig { double parameter1 = 1; int32 parameter2 = 2; } ``` 3. 根据算法要求为新lidar检测算法配置param的`proto`文件。作为示例,可以参考以下位置的`cnn_segmentation`的`proto`定义:`modules/perception/lidar/lib/detector/cnn_segmentation/proto/cnnseg_param.proto`。同样地,在定义完成后输入以下内容: ```protobuf syntax = "proto2"; package apollo.perception.lidar; //你的param参数 ``` 4. 参考如下内容更新 `modules/perception/production/conf/perception/lidar/config_manager.config`文件: ```protobuf model_config_path: "./conf/perception/lidar/modules/newlidardetector_config.config" ``` 5. 参考同级别目录下 `modules/cnnseg.config` 内容创建 `newlidardetector.config`: ```protobuf model_configs { name: "NewLidarDetector" version: "1.0.0" string_params { name: "root_path" value: "./data/perception/lidar/models/newlidardetector" } } ``` 6. 参考 `cnnseg` 在目录 `modules/perception/production/data/perception/lidar/models/` 中创建 `newlidardetector` 文件夹,并根据需求创建不同传感器的 `.conf` 文件: ``` 注意:此处 "*.conf" 和 "*param.conf" 文件应对应步骤1,2,3中的proto文件格式. ``` ## 更新 lidar_obstacle_detection.conf 要使用Apollo系统中的新lidar检测算法,需要将 `modules/perception/production/data/perception/lidar/models/lidar_obstacle_pipline` 中的对应传感器的 `lidar_obstacle_detection.conf` 文件的 `detector` 字段值改为 "NewLidarDetector" 在完成以上步骤后,您的新lidar检测算法便可在Apollo系统中生效。
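作为补充,下面给出 `Detect()` 的一个最小示意实现(仅为演示接口数据流的假设性占位代码,并非真实检测算法):它只校验输入并清空输出,便于在接入真实检测逻辑之前,先验证新算法是否已正确注册并在流水线中生效。

```c++
// 示意性占位实现(仅用于演示数据流,非真实检测算法)
bool NewLidarDetector::Detect(const LidarDetectorOptions& options,
                              LidarFrame* frame) {
  if (frame == nullptr || frame->cloud == nullptr) {
    return false;
  }
  frame->segmented_objects.clear();
  // 真实算法应在此处处理 frame->cloud(以及 frame->roi_indices、
  // frame->non_ground_indices 等输入),并将检测到的障碍物以
  // std::shared_ptr<base::Object> 的形式追加到 frame->segmented_objects。
  return true;
}
```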
0
apollo_public_repos/apollo/docs
apollo_public_repos/apollo/docs/06_Perception/multiple_lidar_gnss_calibration_guide_cn.md
# 多激光雷达全球导航卫星系统(Multiple-LiDAR GNSS)校准指南 欢迎使用多激光雷达全球导航卫星系统校准工具。本指南将向您展示如何成功校准多个LiDAR的步骤。 ## 内容 - 概述 - 准备 - 使用校准工具 - 结果与验证 ## 概述 在许多自动驾驶任务,如HDMap的制作,多个激光雷达扫描结果需要注册在一个统一的坐标系统。在这种情况下,需要对多个LIDARs的外部参数进行仔细校准。为了解决这个问题,开发了多激光雷达GNSS校准工具。 ## 准备 下载校准工具,并将文件提取到$APOLLO_HOME/modules /calibration。$APOLLO_HOME是APOLLO repository的根目录。 根据Apollo 1.5提供的校准指南选择校准位置。 确保GNSS处于良好状态。为了验证这一点,使用‘rostopic echo /apollo/sensor/gnss/best_pose’并检查关键词latitude_std_dev, longitude_std_dev 和height_std_dev后的数量,偏差越小,校准质量越好。 我们强烈建议在偏差小于0.02时校准传感器。 ## 使用校准工具 ### 记录校准数据 当LIDARS和GNSS准备就绪时,使用/apollo/modules/calibration/multi_lidar_gnss/record.sh记录校准数据。请注意,此脚本仅用于记录velodyne HDL64 和VLP16。为了其他目的,需要修改这个脚本,或者只需要使用rosbag record来做同样的事情。通常,2分钟的数据长度就足够了。在数据捕获之后,运行/apollo/modules/calibration/multi_lidar_gnss/calibrate.sh校准传感器。脚本由以下两个步骤组成。 ### 出口数据 一旦校准包被记录,使用/apollo/modules/calibration/exporter/export_msgs --config /apollo/modules/calibration/exporter/conf/export_config.yaml获得传感器数据。exporter的唯一输入是一个YAML配置文件,如下所示。 ```bash bag_path: "/apollo/data/bag/calibration/" # The path where the calibration bag is placed. dump_dir: "/apollo/data/bag/calibration/export/" # The path where the sensor data will be placed using exporter topics: - /apollo/sensor/gnss/odometry: # Odometry topic name type: ApolloOdometry # Odometry type - /apollo/sensor/velodyne16/PointCloud2: # vlp16 topic name type: PointCloud2 # vlp16 type - /apollo/sensor/velodyne64/PointCloud2: # hdl64 topic name type: PointCloud2 # hdl64 type ``` 如果将新topic按如下的规则添加到文件中,也可以导出PointCloud2 types的其他topic。 ```bash - TOPIC_NAME: # topic name type: PointCloud2 ``` 到目前为止,我们只支持ApolloOdometry和PointCloud2。 ### 运行校准工具 如果输出所有传感器数据,运行/apollo/modules/calibration/lidar_gnss_calibrator/multi_lidar_gnss_calibrator --config /apollo/modules/calibration/lidar_gnss_calibrator/conf/multi_lidar_gnss_calibrator_config.yaml将得到结果。该工具的输入是一个YAML配置文件,如下所示。 ```bash # multi-LiDAR-GNSS calibration configurations data: odometry: "/apollo/data/bag/calibration/export/multi_lidar_gnss/_apollo_sensor_gnss_odometry/odometry" lidars: - velodyne16: path: "/apollo/data/bag/calibration/export/multi_lidar_gnss/_apollo_sensor_velodyne16_PointCloud2/" - velodyne64: path: "/apollo/data/bag/calibration/export/multi_lidar_gnss/_apollo_sensor_velodyne64_PointCloud2/" result: "/apollo/data/bag/calibration/export/multi_lidar_gnss/result/" calibration: init_extrinsics: velodyne16: translation: x: 0.0 y: 1.77 z: 1.1 rotation: x: 0.183014 y: -0.183014 z: 0.683008 w: 0.683008 velodyne64: translation: x: 0.0 y: 1.57 z: 1.3 rotation: x: 0.0 y: 0.0 z: 0.707 w: 0.707 steps: - source_lidars: ["velodyne64"] target_lidars: ["velodyne64"] lidar_type: "multiple" fix_target_lidars: false fix_z: true iteration: 3 - source_lidars: ["velodyne16"] target_lidars: ["velodyne16"] lidar_type: "multiple" fix_target_lidars: false fix_z: true iteration: 3 - source_lidars: ["velodyne16"] target_lidars: ["velodyne64"] lidar_type: "multiple" fix_target_lidars: true fix_z: false iteration: 3 ``` 数据部分告诉工具在哪里获取点云和测距文件,以及在哪里保存结果。注意,LIDAR节点中的关键字将被识别为LiDARs的Frame ID。 校准部分提供了外部信息的初始猜测。所有的外部信息都是从激光雷达到GNSS,这意味着这种变换将激光雷达坐标系中定义的点的坐标映射到GNSS坐标系中定义的这一点的坐标。初始猜测要求旋转角度误差小于5度,平移误差小于0.1米。 步骤部分详细说明了校准过程。每个步骤被如下定义并且它们的含义在注释中。 ```bash - source_lidars: ["velodyne16"] # Source LiDAR in point cloud registration. target_lidars: ["velodyne64"] # Target LiDAR in point cloud registration. lidar_type: "multiple" # "multiple" for multi-beam LiDAR, otherwise "single" fix_target_lidars: true # Whether to fix extrinsics of target LiDARS. Only "true" when align different LiDARs. 
  fix_z: false               # Whether to fix the z component of translation. Only "false" when aligning different LiDARs.
  iteration: 3               # Iteration number
```

## 结果和验证

校准工具将结果保存到结果路径中。

```bash
.
└── calib_result
    ├── velodyne16_novatel_extrinsics.yaml
    ├── velodyne16_result.pcd
    ├── velodyne16_result_rgb.pcd
    ├── velodyne64_novatel_extrinsics.yaml
    ├── velodyne64_result.pcd
    └── velodyne64_result_rgb.pcd
```

这两个YAML文件就是标定得到的外参。为了验证结果,可使用 `pcl_viewer *_result.pcd` 检查点云配准质量。如果传感器校准良好,可以从点云中辨认出大量细节。欲了解更多详情,请参阅 Apollo 1.5 的校准指南。
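下面给出一个小的 Python 示例(仅作示意,numpy 为此处额外引入的假设依赖,并非校准工具的一部分),说明上文"从激光雷达到 GNSS"的外参是如何把激光雷达坐标系下的点映射到 GNSS 坐标系的:先由四元数得到旋转矩阵,再施加平移。示例直接取用了上面配置中 velodyne64 的初始猜测值。

```python
import numpy as np

def quat_to_rot(x, y, z, w):
    """把单位四元数 (x, y, z, w) 转换为 3x3 旋转矩阵。"""
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])

# 配置文件中 velodyne64 的初始外参猜测(激光雷达 -> GNSS)
R = quat_to_rot(0.0, 0.0, 0.707, 0.707)   # 约等于绕 z 轴旋转 90 度
t = np.array([0.0, 1.57, 1.3])

# 激光雷达坐标系下正前方 10 m 的一个点,变换到 GNSS 坐标系
p_lidar = np.array([10.0, 0.0, 0.0])
p_gnss = R @ p_lidar + t
print(p_gnss)   # 约为 [0.0, 11.57, 1.3]
```

可以用类似方式粗略检查初始猜测是否与传感器的实际安装关系一致,再运行校准工具。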
apollo_public_repos/apollo/docs/06_Perception/how_to_monitor_perf_with_oprofile_on_tx2.md
oprofile is a performance monitoring tool that runs on Linux systems. oprofile is normally included in the Linux system modules, but the oprofile module is not supported on ARM, so we have to install it manually. oprofile supports multi-threaded programs, records the number of function calls, and can annotate the source code to present results in a user-friendly way, so it is well suited to performance monitoring on TX2.

#### Download

Download the newest version of oprofile:

```bash
$ wget http://prdownloads.sourceforge.net/oprofile/oprofile-1.4.0.tar.gz
$ tar zxvf oprofile-1.4.0.tar.gz
$ cd oprofile-1.4.0
```

#### Installation

```bash
$ sudo apt-get install libpopt-dev libiberty-dev binutils-dev
$ ./configure
$ make -j4
$ sudo make install
```

#### Test

After installation, run the command `operf` to check whether it can read CPU information normally. If it fails with output like this:

`unable to open /sys/devices/system/cpu/cpu0/online`

this is because TX2 only launches 4 CPU cores by default; the other two CPU cores are offline. Use the command below to bring the other two CPU cores online:

```bash
$ sudo nvpmodel -m 0
```

#### How to use

oprofile provides a variety of commands; `operf`, `opreport` and `opannotate` are the most frequently used. Take testing perception as an example.

* 1.modify `script/apollo_bash.sh`, adding the command `operf` after the keyword `nohup` in line 239, as shown below:

![operf_command](images/TX2/operf_command.png)

* 2.use a script such as `./script/perception.sh` to launch the perception program

Now operf is collecting data for the perception program. Stopping the perception program in any way also stops the collecting process.

#### Output

After stopping the perception program, a directory named oprofile_data is generated in the current directory.

Use the command `opreport` to show the overall statistics:

```bash
$ opreport
```

As shown below:

![opreport_](images/TX2/opreport_.png)

Use the command `opreport -l` to show the statistics of each function:

```bash
$ opreport -l bazel-bin/modules/perception/perception
```

Because there is too much information to display, it is better to save the output into a file:

```bash
$ opreport -l bazel-bin/modules/perception/perception > perception_op_funcs.md
```

As shown below:

![opreport_file](images/TX2/opreport_file.png)

Use the command `opannotate` to show the statistics alongside the source code:

```bash
$ opannotate -s bazel-bin/modules/perception/perception > perception_op_details.md
```

Notice: only one operf session is supported at any time, so while operf is profiling one program we cannot launch another program with `operf`.

oprofile official website: [http://oprofile.sourceforge.net/news/](http://oprofile.sourceforge.net/news/)

oprofile user manual: [http://oprofile.sourceforge.net/doc/index.html](http://oprofile.sourceforge.net/doc/index.html)
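Once the function-level report has been saved to a file such as `perception_op_funcs.md`, it can be convenient to post-process it. The sketch below is a minimal, illustrative Python script: it assumes the usual `samples  %  image name  symbol name` column layout shown in the screenshots above and prints the hottest symbols; adjust the parsing if your opreport version formats its columns differently.

```python
import sys

def top_symbols(report_path, n=20):
    """Parse an `opreport -l` text dump and return the n hottest symbols.

    Assumes each data row looks like: samples  percent  image_name  symbol...
    Header lines and rows that do not start with a number are skipped.
    """
    rows = []
    with open(report_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4 or not parts[0].isdigit():
                continue  # skip headers and malformed lines
            samples = int(parts[0])
            percent = float(parts[1])
            symbol = " ".join(parts[3:])
            rows.append((percent, samples, symbol))
    rows.sort(reverse=True)
    return rows[:n]

if __name__ == "__main__":
    for percent, samples, symbol in top_symbols(sys.argv[1]):
        print(f"{percent:6.2f}%  {samples:>10}  {symbol}")
```

Run it as, for example, `python3 top_symbols.py perception_op_funcs.md` (the script name is only for illustration).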
apollo_public_repos/apollo/docs/06_Perception/perception_apollo_2.5.md
# Perception Apollo 2.5 April 19, 2018 ## Introduction Apollo 2.5 aims for Level-2 autonomous driving with low cost sensors. An autonomous vehicle will stay in the lane and keep a distance with a closest in-path vehicle (CIPV) using a single front-facing camera and a frontal radar. Apollo 2.5 supports high-speed autonomous driving on highway without any map. The deep network was learned to process an image data. The performance of the deep network will be improved over time as collecting more data. ***Safety alert*** Apollo 2.5 *does not* support a high curvature road, a road without lane marks including local roads and intersections. The perception module is based on the visual detection using a deep network with limited data. Therefor, before we release a better network, the driver should be careful in driving and always be ready to disengage the autonomous driving by turning the wheel to the right direction. Please perform the test drive at the safe and restricted area. - ***Recommended road*** - ***Road with clear white lane lines on both sides*** - ***Avoid*** - ***High curvature road*** - ***Road without lane line marks*** - ***Intersection*** - ***Butt dots or dotted lane lines*** - ***Public road*** ## Perception modules The flow chart of each module is shown below. ![Image](images/perception_flow_chart_apollo_2.5.png) **Figure 1: Flow diagram of lane keeping system** ### Deep network Deep network ingests an image and provides two detection outputs, lane lines and objects for Apollo 2.5. There is an ongoing debate on individual task and co-trained task for deep learning. Individual networks such as a lane detection network or an object detection network usually perform better than one co-trained multi-task network. However, with given limited resources, multiple individual networks will be costly and consume more time in processing. Therefore, for the economic design, co-train is inevitable with some compromise in performance. In Apollo 2.5, YOLO [1][2] was used as a base network of object and lane detection. The object has vehicle, truck, cyclist, and pedestrian categories and represented by a 2-D bounding box with orientation information. The lane lines are detected by segmentation using the same network with some modification. ### Network optimization In literature, there are multiple approaches of network optimization for real time processing of high framerate images. Rather than using 32bit float, a network with INT8 was implemented to achieve real-time implementation. TensorRT may be used to optimize the network. ### Object detection/tracking In a traffic scene, there are two kinds of objects: stationary objects and dynamic objects. Stationary objects include lane lines, traffic lights, and thousands of traffic signs written in different languages. Other than driving, there are multiple landmarks on the road mostly for visual localization including streetlamp, barrier, bridge on top of the road, or any skyline. For stationary object, we will detect only lane lines in Apollo 2.5. Among dynamic objects, we care passenger vehicles, trucks, cyclists, pedestrians, or any other object including animal or body parts on the road. We can also categorize object based on which lane the object is in. The most important object is CIPV (closest object in our path). Next important objects would be the one in neighbor lanes. 
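The CIPV selection rule just described can be illustrated with a short, self-contained Python sketch. This is a conceptual example, not the Apollo implementation: the object positions, the fixed ego-lane half-width of 1.75 m, and the `Obstacle` type are all made up for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Obstacle:
    obstacle_id: int
    x: float  # longitudinal distance ahead of the ego vehicle (m)
    y: float  # lateral offset, left positive (m)

def select_cipv(obstacles: List[Obstacle],
                lane_half_width: float = 1.75) -> Optional[Obstacle]:
    """Pick the closest in-path vehicle: inside the ego lane and ahead of us."""
    in_path = [o for o in obstacles if o.x > 0.0 and abs(o.y) <= lane_half_width]
    return min(in_path, key=lambda o: o.x, default=None)

if __name__ == "__main__":
    objs = [Obstacle(1, 35.0, 0.4), Obstacle(2, 18.0, -0.2), Obstacle(3, 12.0, 3.5)]
    print(select_cipv(objs))  # Obstacle 2: nearest object that is actually in the ego lane
```

In the real module the ego lane comes from the detected or virtual lane lines rather than a fixed width, and objects are 3D bounding boxes projected onto the ground, but the selection rule is the same: the nearest in-path object is the CIPV.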
#### 2D-to-3D bounding box Given a 2D box, with its 3D size and orientation in camera, this module searches the 3D position in a camera coordinate system and estimates an accurate 3D distance using either the width, the height, or the 2D area of that 2D box. The module works without accurate extrinsic camera parameters. #### Object tracking The object tracking module utilizes multiple cues such as 3D position, 2D image patches, 2D boxes, or deep learning ROI features. The tracking problem is formulated as multiple hypothesis data association by combining the cues efficiently to provide the most correct association between tracks and detected object, thus obtaining correct ID association for each object. ### Lane detection/tracking Among static objects, we will handle lane lines only in Apollo 2.5. The lane is for both longitudinal and lateral control. A lane itself guides lateral control and an object in the lane guides longitudinal control. #### Lane lines The lane can be represented by multiple sets of polylines such as next left lane line, left line, right line, and next right line. Given a heatmap of lane lines from the deep network, the segmented binary image is generated by thresholding. The method first finds the connected components and detects the inner contours. Then it generates lane marker points based on the contour edges in the ground space of ego-vehicle coordinate system. After that, it associates these lane markers into several lane line objects with corresponding relative spatial (e.g., left(L0), right(R0), next left(L1), next right(L2), etc.) labels. ### CIPV (Closest-In Path Vehicle) A CIPV is the closest vehicle in our ego-lane. An object is represented by 3D bounding box and its 2D projection from the top-down view localizes the object on the ground. Then, each object will be checked if it is in the ego-lane or not. Among the objects in our ego-lane, the closest one will be selected as a CIPV. ### Radar + camera fusion Given multiple sensors, their output should be combined in a synergic fashion. Apollo 2.5. introduces a sensor set with a radar and a camera. For this process, both sensors need to be calibrated. Each sensor will be calibrated using the same method introduced in Apollo 2.0. After calibration, the output will be represented in a 3-D world coordinate and each output will be fused by their similarity in location, size, time and the utility of each sensor. After learning the utility function of each sensor, the camera contributes more on lateral distance and the radar contributes more on longitudinal distance measurement. ### Virtual lane All lane detection results will be combined spatially and temporarily to induce the virtual lane which will be fed to planning and control. Some lane lines would be incorrect or missing in a certain frame. To provide the smooth lane line output, the history of lane lines using vehicle odometry is used. As the vehicle moves, the odometer of each frame is saved and lane lines in previous frames will be also saved in the history buffer. The detected lane line which does not match with the history lane lines will be removed and the history output will replace the lane line and be provided to the planning module. ## Output of perception The input of PnC will be quite different with that of the previous lidar-based system. 
- Lane line output
  - Polyline and/or a polynomial curve
  - Lane type by position: L1 (next left lane line), L0 (left lane line), R0 (right lane line), R1 (next right lane line)
- Object output
  - 3D rectangular cuboid
  - Relative velocity and direction
  - Type: CIPV, PIHP, others
  - Classification type: car, truck, bike, pedestrian

The world coordinate is the ego coordinate in 3D, with the center of the rear axle as the origin.

## References

[1] J Redmon, S Divvala, R Girshick, A Farhadi, "You only look once: Unified, real-time object detection," CVPR 2016

[2] J Redmon, A Farhadi, "YOLO9000: Better, Faster, Stronger," arXiv preprint
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_camera_detector_algorithm.md
# How to add a new camera detector algorithm The processing flow of camera perception module is shown below: ![camera overview](images/Camera_overview.png) The 3 detector algorithms introduced by this document were traffic_light_detector, land_detector, obstacle_detector. These 3 detectors are located in their own component. The architecture of each component is showed below: Traffic Light: ![traffic light component](images/camera_traffic_light_detection.png) Lane: ![lane component](images/camera_lane_detection.png) Obstacle: ![obstacle component](images/camera_obstacle_detection.png) As we can see clearly from above structure, each component has its own abstract class member `base_XXX_detector`. Different derived detector algorithms inherit `base_XXX_detector` and implement their main flows to complete the deployment. Next, we will take `base_obstacle_detector` as an example to introduce how to add a new camera detector algorithm. You could also refer to this document if you want to add traffic light detector or lane detector. Apollo has provided three camera detector algorithms in Obstacle Detection -- Smoke,Yolo, YoloV4. All of them could be easily changed or replaced by other algorithms. The input of algorithm should be preprocessed image data, while the output should be obastacle object data. This document will introduce how to add a new camera detector algorithm, the basic task sequence is listed below: 1. Define a class that inherits `base_obstacle_detector` 2. Implement the class `NewObstacleDetector` 3. Add param proto file for `NewObstacleDetector` 4. Update config file to put your detector into effect The steps are elaborated below for better understanding: ## Define a class that inherits `base_obstacle_detector` All the camera detector algorithms shall inherit `base_obstacle_detector`,which defines a set of interfaces. 
Here is an example of the detector implementation: ```c++ namespace apollo { namespace perception { namespace camera { class NewObstacleDetector : public BaseObstacleDetector { public: NewObstacleDetector(); virtual ~NewObstacleDetector() = default; bool Init(const ObstacleDetectorInitOptions &options = ObstacleDetectorInitOptions()) override; bool Detect(const ObstacleDetectorOptions &options, CameraFrame *frame) override; std::string Name() const override; }; // class NewObstacleDetector } // namespace camera } // namespace perception } // namespace apollo ``` The function signature of `base_obstacle_detector` is pre-defined: ```c++ struct ObstacleDetectorInitOptions : public BaseInitOptions { std::shared_ptr<base::BaseCameraModel> base_camera_model = nullptr; Eigen::Matrix3f intrinsics; EIGEN_MAKE_ALIGNED_OPERATOR_NEW } EIGEN_ALIGN16; struct ObstacleDetectorOptions {}; struct CameraFrame { // timestamp double timestamp = 0.0; // frame sequence id int frame_id = 0; // data provider DataProvider *data_provider = nullptr; // calibration service BaseCalibrationService *calibration_service = nullptr; // hdmap struct base::HdmapStructPtr hdmap_struct = nullptr; // tracker proposed objects std::vector<base::ObjectPtr> proposed_objects; // segmented objects std::vector<base::ObjectPtr> detected_objects; // tracked objects std::vector<base::ObjectPtr> tracked_objects; // feature of all detected object ( num x dim) // detect lane mark info std::vector<base::LaneLine> lane_objects; std::vector<float> pred_vpt; std::shared_ptr<base::Blob<float>> track_feature_blob = nullptr; std::shared_ptr<base::Blob<float>> lane_detected_blob = nullptr; // detected traffic lights std::vector<base::TrafficLightPtr> traffic_lights; // camera intrinsics Eigen::Matrix3f camera_k_matrix = Eigen::Matrix3f::Identity(); // narrow to obstacle projected_matrix Eigen::Matrix3d project_matrix = Eigen::Matrix3d::Identity(); // camera to world pose Eigen::Affine3d camera2world_pose = Eigen::Affine3d::Identity(); EIGEN_MAKE_ALIGNED_OPERATOR_NEW } EIGEN_ALIGN16; // struct CameraFrame ``` ## Implement the class `NewObstacleDetector` To ensure the new detector could function properly, `NewObstacleDetector` should at least override the interface Init(), Detect(), Name() defined in `base_obstacle_detector` Init() is resposible for config loading, class member initialization, etc. And Detect() will implement the basic logic of algorithm. A concrete `NewObstacleDetector.cc` example is shown: ```c++ namespace apollo { namespace perception { namespace camera { bool NewObstacleDetector::Init(const ObstacleDetectorInitOptions &options) { /* Initialization of your detector */ } bool NewObstacleDetector::Detect(const ObstacleDetectorOptions &options, CameraFrame *frame) { /* Implementation of your detector */ } std::string NewObstacleDetector::Name() const { /* Return your detector's name */ } REGISTER_OBSTACLE_DETECTOR(NewObstacleDetector); //register the new detector } // namespace camera } // namespace perception } // namespace apollo ``` ## Add param proto file for `NewObstacleDetector` Follow the steps below to add parameters for your new camera detector: 1. Create the `proto` file for parameters according to the requirement of the detector. If the parameters are compatible, you can use or just modify current `proto` directly. As an example, you can refer to the `proto` file from `smoke detector` at `modules/perception/camera/lib/obstacle/detector/smoke/proto/smoke.proto`. 
Remember to include the following content once you finished your definition: ```protobuf syntax = "proto2"; package apollo.perception.camera.NewObstacleDetector; // Your parameters ``` 2. Refer to `yolo_obstacle_detector` at `modules/perception/production/data/perception/camera/models/` and create your `newobstacledetector` folder and `*.pt` file: ``` Note:The "*.pt" file should have the format defined in step one ``` ## Update config file to put your detector into effect To use your new camera detector algorithm in Apollo, you have to config the following files according to your demand: 1. Refer to the following content to update `modules/perception/production/conf/perception/camera/obstacle.pt`,put your `*.pt` file created in previous step to the load path: ```protobuf detector_param { plugin_param{ name : "NewObstacleDetector" root_dir : "/apollo/modules/perception/production/data/perception/camera/models/newobstacledetector" config_file : "*.pt" } camera_name : "front_12mm" } ``` 2. If you want to modify the structure of `detector_param` shown in step one or just add a new `_param`, your can do that at `modules/perception/camera/app/proto/perception.proto`: ```protobuf message PluginParam { optional string name = 1; optional string root_dir = 2; optional string config_file = 3; } message DetectorParam { optional PluginParam plugin_param = 1; optional string camera_name = 2; } ``` 3. If you create a new `*.pt` instead of using `obstacle.pt` given in step one, you also have to modify `modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt`. The corresponding `proto` file is `modules/perception/onboard/proto/fusion_camera_detection_component.proto`: ```protobuf camera_obstacle_perception_conf_dir : "/apollo/modules/perception/production/conf/perception/camera" camera_obstacle_perception_conf_file : "NewObstacleDetector.pt" ``` Once you finished the above modifications, you new camera detector should take effect in Apollo.
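The `REGISTER_OBSTACLE_DETECTOR(NewObstacleDetector)` macro and the `name : "NewObstacleDetector"` entry in `obstacle.pt` together form a register-by-name factory: the config file names a detector, and the framework instantiates the matching class at runtime. The snippet below is only a conceptual Python illustration of that pattern, not Apollo's actual C++ registry code.

```python
# Conceptual illustration of register-by-name plus config-driven lookup.
DETECTOR_REGISTRY = {}

def register_detector(cls):
    """Mimics what REGISTER_OBSTACLE_DETECTOR achieves: map class name -> class."""
    DETECTOR_REGISTRY[cls.__name__] = cls
    return cls

class BaseObstacleDetector:
    def init(self, options=None): ...
    def detect(self, options, frame): ...

@register_detector
class NewObstacleDetector(BaseObstacleDetector):
    def init(self, options=None):
        print("loading config from", options)
        return True
    def detect(self, options, frame):
        frame["detected_objects"] = []  # a real detector fills this with detections
        return True

# What the framework effectively does after reading plugin_param.name:
plugin_name = "NewObstacleDetector"          # from the *.pt config file
detector = DETECTOR_REGISTRY[plugin_name]()  # instantiate by name
detector.init(options="/path/to/config.pt")
```

If the name in the config does not match a registered class, the lookup fails, so keeping the two spellings identical is essential.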
apollo_public_repos/apollo/docs/06_Perception/perception_apollo_3.0.md
# Perception Apollo 3.0 June 27, 2018 ## Introduction Apollo 3.0 introduced a production level solution for the low-cost, closed venue driving scenario that is used as the foundation for commercialized products. The Perception module introduced a few major features to provide more diverse functionalities and a more reliable, robust perception in AV performance, which are: * **CIPV(Closest In-Path Vehicle) detection and Tailgaiting**: The vehicle in front of the ego-car is detected and its trajectory is estimated for more efficient tailgating and lane keeping when lane detection is unreliable. * **Asynchronous sensor fusion**: unlike the previous version, Perception in Apollo 3.0 is capable of consolidating all the information and data points by asynchronously fusing LiDAR, Radar and Camera data. Such conditions allow for more comprehensive data capture and reflect more practical sensor environments. * **Online pose estimation**: This new feature estimates the pose of an ego-vehicle for every single frame. This feature helps to drive through bumps or slopes on the road with more accurate 3D scene understanding. * **Ultrasonic sensors**: Perception in Apollo 3.0 now works with ultrasonic sensors. The output can be used for Automated Emergency Brake (AEB) and vertical/perpendicular parking. * **Whole lane line**: Unlike previous lane line segments, this whole lane line feature will provide more accurate and long range detection of lane lines. * **Visual localization**: Camera's are currently being tested to aide and enhance localization * **16 beam LiDAR support** ***Safety alert*** Apollo 3.0 *does not* support a high curvature road, roads without lane lines including local roads and intersections. The perception module is based on visual detection using a deep network with limited data. Therefore, before we release a better network, the driver should be careful while driving and always be ready to disengage the autonomous driving mode by intervening (hit the brakes or turn the steering wheel). While testing Apollo 3.0, please choose a path that has the necessary conditions mentioned above and be vigilant. - ***Recommended road*** - ***Road with clear white lane lines on both sides*** - ***Avoid*** - ***High curvature road*** - ***Road without lane line marks*** - ***Intersections*** - ***Dotted lane lines*** - ***Public roads with a lot of pedestrians or cars*** ## Perception module The flow chart of Apollo 3.0 Perception module: ![Image](images/perception_flow_chart_apollo_3.0.png) The sub-modules are discussed in the following section. ### Deep Network Deep Network ingests an image and provides two detection outputs, lane lines and objects for Apollo 3.0. There is an ongoing debate on individual tasks and co-trained tasks for deep learning. Individual networks such as a lane detection network or an object detection network usually perform better than one co-trained multi-task network. However, multiple individual networks will be costly and consume more time in processing. Therefore, the preferred economic choice is co-trained network. In Apollo 3.0, YOLO [1][2] was used as a base network of object and lane segment detection. The object has vehicle, truck, cyclist, and pedestrian categories and represented by a 2-D bounding box with orientation information. The lane lines are detected by segmentation using the same network with some modification. For whole lane line, we have an individual network to provide longer lane lines in cases of either whole or broken lines. 
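As a rough illustration of the lane segmentation output mentioned above, the sketch below shows how a lane-line probability heatmap can be binarized and split into connected components, the first part of the thresholding and contour step described in the Lane Lines section further down. It uses OpenCV and NumPy; the threshold value and the synthetic heatmap are invented for the example, and the real pipeline goes on to extract contours and project marker points into the ego-vehicle ground frame.

```python
import cv2
import numpy as np

def lane_components(heatmap: np.ndarray, threshold: float = 0.5):
    """Binarize a lane-line probability map and label connected components.

    heatmap: float array in [0, 1] with one probability per pixel (network output).
    Returns (num_labels, label_image); label 0 is the background.
    """
    binary = (heatmap >= threshold).astype(np.uint8) * 255
    num_labels, labels = cv2.connectedComponents(binary)
    return num_labels, labels

if __name__ == "__main__":
    # Tiny synthetic "heatmap" with two separate bright stripes.
    heatmap = np.zeros((80, 120), dtype=np.float32)
    heatmap[:, 30:33] = 0.9
    heatmap[:, 90:93] = 0.8
    num_labels, labels = lane_components(heatmap)
    print(num_labels - 1, "lane-line candidates found")  # -> 2
```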
### Object Detection/Tracking In a traffic setting, there are two kinds of objects: stationary objects and dynamic objects. Stationary objects include lane lines, traffic lights, and thousands of traffic signs written in different languages. Other than driving, there are multiple landmarks on the road mostly for visual localization including street lamps, barriers, bridge on top of the road, or any other skyline construction. Among all those objects, Apollo 3.0 will detect only lane lines. Among dynamic objects, Apollo can detect passenger vehicles, trucks, cyclists, pedestrians, or any other object including animals on the road. Apollo can also categorize objects based on which lane the object is in. The most important object is CIPV (closest in path vehicle or object). In order of importance, objects present in neighbouring lanes fall in the second category. #### 2D-to-3D Bounding Box Given a 2D box, with its 3D size and orientation in the camera, this module searches the 3D position in the camera's coordinate system and estimates an accurate 3D distance using either the width, the height, or the 2D area of that 2D box. The module works without accurate extrinsic camera parameters. #### Object Tracking The object tracking sub-module utilizes multiple cues such as 3D position, 2D image patches, 2D boxes, or deep learning ROI features. The tracking problem is formulated as multiple hypothesis data association by combining the cues efficiently to provide the most correct association between tracks and detected object, thus obtaining correct ID association for each object. ### Lane Detection/Tracking Among static objects, we will handle lane lines only, in Apollo 3.0. The lane is for both longitudinal and lateral control. A lane itself guides lateral control and an object in the lane guides longitudinal control. #### Lane Lines We have two types of lane lines, lane mark segment and whole lane line. The lane mark segment is used for visual localization and whole lane line is used for lane keeping. The lane can be represented by multiple sets of polylines such as next left lane line, left line, right line, and next right lane line. Given a heatmap of lane lines from the Deep Network, the segmented binary image is generated through **Thresholding**. The method first finds the connected components and detects the inner contours. Then it generates lane marker points based on the contour edges in the ground space of ego-vehicle coordinate system. After that, it associates these lane markers into several lane line objects with corresponding relative spatial (e.g., left(L0), right(R0), next left(L1), next right(L2), etc.) labels. ### CIPV (Closest-In Path Vehicle) A CIPV is the closest vehicle in the ego-lane. An object is represented by 3D bounding box and its 2D projection from the top-down view localizes the object on the ground. Then, each object will be checked if it is in the ego-lane or not. Among the objects in our ego-lane, the closest one will be selected as a CIPV. ### Tailgating Tailgating is a maneuver to follow the vehicle or object in front of the autonomous car. From the tracked objects and ego-vehicle motion, the trajectories of objects are estimated. This trajectory will guide how the objects are moving as a group on the road and the future trajectory can be predicted. 
There is two kinds of tailgating, the one is pure tailgating by following the specific car and the other is CIPV-guided tailgating, which the ego-vehicle follows the CIPV's trajectory when the no lane line is detected. The snapshot of visualization of the output is shown in the figure below: ![Image](images/perception_visualization_apollo_3.0.png) The figure above depicts visualization of the Perception output in Apollo 3.0. The top left image shows image-based output. The bottom-left image shows the 3D bounding box of objects. Therefore, the left image shows 3-D top-down view of lane lines and objects. The CIPV is marked with a red bounding box. The yellow lines depicts the trajectory of each vehicle ### Radar + Camera Output Fusion Given multiple sensors, their output should be combined in a synergic fashion. Apollo 3.0. introduces a sensor set with a radar and a camera. For this process, both sensors need to be calibrated. Each sensor will be calibrated using the same method introduced in Apollo 2.0. After calibration, the output will be represented in a 3-D world coordinate system and each output will be fused by their similarity in location, size, time and the utility of each sensor. After learning the utility function of each sensor, the camera contributes more on lateral distance and the radar contributes more on longitudinal distance measurement. Asynchronous sensor fusion algorithm can also be used as an option. ### Pseudo Lane All lane detection results will be combined spatially and temporarily to induce the pseudo lane which will be fed to Planning and Control modules. Some lane lines would be incorrect or missing in a certain frame. To provide the smooth lane line output, the history of lane lines using vehicle odometry is used. As the vehicle moves, the odometer of each frame is saved and lane lines in previous frames will be also saved in the history buffer. The detected lane line which does not match with the history lane lines will be removed and the history output will replace the lane line and be provided to the planning module. ### Ultrasonic Sensors Apollo 3.0 supports ultrasonic sensors. Each ultrasonic sensor provides the distance of a detected object through the CANBus. The distance measurement from the ultrasonic sensor is then gathered and broadcasted as a ROS topic. In the future, after fusing ultrasonic sensor output, the map of objects and boundary will be published as a ROS output. ## Output of Perception The input of Planning and Control modules will be quite different with that of the previous Lidar-based system for Apollo 3.0. - Lane line output - Polyline and/or a polynomial curve - Lane type by position: L1(next left lane line), L0(left lane line), R0(right lane line), R1(next right lane line) - Object output - 3D rectangular cuboid - Relative velocity and direction - Type: CIPV, PIHP, others - Classification type: car, truck, bike, pedestrian - Drops: trajectory of an object The world coordinate systen is used as ego-coordinate in 3D where the rear center axle is an origin. ## References [1] J Redmon, S Divvala, R Girshick, A Farhadi, "You only look once: Unified, real-time object detection," CVPR 2016 [2] J Redmon, A Farhadi, "YOLO9000: Better, Faster, Stronger," arXiv preprint
apollo_public_repos/apollo/docs/06_Perception/how_to_monitor_perf_with_oprofile_on_tx2_cn.md
oprofile是运行在linux系统上对应用程序进行性能测试的工具。linux系统中已经自带了oprofile的相关工具,但是oprofile module在arm平台没有支持,所以我们需要手动安装oprofile。 oprofile对多线程支持良好,可以对函数调用次数及源码进行分析,所以非常适合在TX2上使用。 #### 下载 下载最新版本的oprofile ```bash $ wget http://prdownloads.sourceforge.net/oprofile/oprofile-1.4.0.tar.gz $ tar zxvf oprofile-1.4.0.tar.gz $ cd oprofile-1.4.0 ``` #### 安装oprofile ```bash $ sudo apt-get install libpopt-dev libiberty-dev binutils-dev $ ./configure $ make -j4 $ sudo make install ``` #### 测试 安装好后,执行`operf`命令查看能否正常获取cpu信息。如果出现如下报错: `unable to open /sys/devices/system/cpu/cpu0/online` 这是因为默认TX2只开启了4个CPU,有2个CPU处于未开启状态。 执行如下命令开启额外的两个CPU: ```bash $ sudo nvpmodel -m 0 ``` #### 使用方法 oprofile提供了多种命令,通常情况下我们使用比较多的是`operf`,`opreport`和`opannotate`。 以测试perception模块为例。 * 1.修改`script/apollo_bash.sh`脚本,文件第239行的`nohup`后面增加`operf`指令,如图: ![operf_command](images/TX2/operf_command.png) * 2.使用脚本如`./script/perception.sh`启动perception模块 这样operf就会进行perception进程的运行数据统计。使用任意方法停止perception进程即可停止数据收集。 #### 测试数据查看 停止perception进程后,在当前文件夹下将生成文件夹oprofile_data。 使用指令`opreport`查看模块的总体占比: ```bash $ opreport ``` 结果示例为: ![opreport_](images/TX2/opreport_.png) 使用`opreport`查看函数占比: ```bash $ opreport -l bazel-bin/modules/perception/perception ``` 因为输出信息很多,所以需要将上述结果保存为文本文件 ```bash $ opreport -l bazel-bin/modules/perception/perception > perception_op_funcs.md ``` 结果示例为: ![opreport_file](images/TX2/opreport_file.png) 使用`opannotate`查看详细的源码数据统计: ```bash $ opannotate -s bazel-bin/modules/perception/perception > perception_op_details.md ``` 注意事项:同时只能运行一个operf进程,所以对perception进行数据统计时无法再用operf启动其他模块。 oprofile官方网站: [http://oprofile.sourceforge.net/news/](http://oprofile.sourceforge.net/news/) oprofile用户手册: [http://oprofile.sourceforge.net/doc/index.html](http://oprofile.sourceforge.net/doc/index.html)
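针对上文"测试"一节提到的 `unable to open /sys/devices/system/cpu/cpu0/online` 问题,可以用下面的小脚本(仅作示意,只依赖标准库,假设运行在 TX2 的 Linux 环境中)检查各个 CPU 核是否已经上线,再决定是否需要执行 `sudo nvpmodel -m 0`。

```python
import glob

# 检查各 CPU 核的 online 状态(cpu0 通常没有 online 节点,表示不可下线)。
for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/online")):
    with open(path) as f:
        state = f.read().strip()
    print(path, "->", "online" if state == "1" else "offline")
```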
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_camera_detector_algorithm_cn.md
# 如何添加新的camera检测算法 Perception中的camera数据流如下: ![camera overview](images/Camera_overview.png) 本篇文档所介绍的camera检测算法分为三种,分别为针对交通信号灯的检测算法,针对车道线的检测算法和针对障碍物的检测算法。这三种检测算法分别位于图中的Traffic_light, Lane和Obstacle三大Component中。各Component的架构如下: 交通信号灯感知: ![traffic light component](images/camera_traffic_light_detection.png) 车道线感知: ![lane component](images/camera_lane_detection.png) 障碍物感知: ![obstacle component](images/camera_obstacle_detection.png) 从以上结构中可以清楚地看到,各个component都有自己的抽象类成员 `base_XXX_detector`。对应的检测算法作为 `base_XXX_detector` 的不同的派生类,继承各自的基类实现算法的部署。由于各detector基类在结构上非常相似,下面将以 ` base_obstacle_detector` 为例介绍如何基于当前结构添加新的camera障碍物检测算法。新增交通信号灯和车道线检测算法的步骤相同。 Apollo在Obstacle Detection中默认提供了3种camera检测算法--Smoke,Yolo和YoloV4,它们可以被轻松更改或替换为不同的算法。每种算法的输入都是经过预处理的图像信息,输出都是目标级障碍物信息。本篇文档将介绍如何引入新的Camera检测算法,添加新算法的步骤如下: 1. 定义一个继承基类 `base_obstacle_detector` 的类 2. 实现新类 `NewObstacleDetector` 3. 为新类 `NewObstacleDetector` 配置param的proto文件 4. 更新config文件使新的算法生效 为了更好的理解,下面对每个步骤进行详细的阐述: ## 定义一个继承基类 `base_obstacle_detector` 的类 所有的camera检测算法都必须继承基类`base_obstacle_detector`,它定义了一组接口。 以下是检测算法继承基类的示例: ```c++ namespace apollo { namespace perception { namespace camera { class NewObstacleDetector : public BaseObstacleDetector { public: NewObstacleDetector(); virtual ~NewObstacleDetector() = default; bool Init(const ObstacleDetectorInitOptions &options = ObstacleDetectorInitOptions()) override; bool Detect(const ObstacleDetectorOptions &options, CameraFrame *frame) override; std::string Name() const override; }; // class NewObstacleDetector } // namespace camera } // namespace perception } // namespace apollo ``` 基类 `base_obstacle_detector` 已定义好各虚函数签名,接口信息如下: ```c++ struct ObstacleDetectorInitOptions : public BaseInitOptions { std::shared_ptr<base::BaseCameraModel> base_camera_model = nullptr; Eigen::Matrix3f intrinsics; EIGEN_MAKE_ALIGNED_OPERATOR_NEW } EIGEN_ALIGN16; struct ObstacleDetectorOptions {}; struct CameraFrame { // timestamp double timestamp = 0.0; // frame sequence id int frame_id = 0; // data provider DataProvider *data_provider = nullptr; // calibration service BaseCalibrationService *calibration_service = nullptr; // hdmap struct base::HdmapStructPtr hdmap_struct = nullptr; // tracker proposed objects std::vector<base::ObjectPtr> proposed_objects; // segmented objects std::vector<base::ObjectPtr> detected_objects; // tracked objects std::vector<base::ObjectPtr> tracked_objects; // feature of all detected object ( num x dim) // detect lane mark info std::vector<base::LaneLine> lane_objects; std::vector<float> pred_vpt; std::shared_ptr<base::Blob<float>> track_feature_blob = nullptr; std::shared_ptr<base::Blob<float>> lane_detected_blob = nullptr; // detected traffic lights std::vector<base::TrafficLightPtr> traffic_lights; // camera intrinsics Eigen::Matrix3f camera_k_matrix = Eigen::Matrix3f::Identity(); // narrow to obstacle projected_matrix Eigen::Matrix3d project_matrix = Eigen::Matrix3d::Identity(); // camera to world pose Eigen::Affine3d camera2world_pose = Eigen::Affine3d::Identity(); EIGEN_MAKE_ALIGNED_OPERATOR_NEW } EIGEN_ALIGN16; // struct CameraFrame ``` ## 实现新类 `NewObstacleDetector` 为了确保新的检测算法能顺利工作,`NewObstacleDetector` 至少需要重写 `base_obstacle_detector` 中定义的接口Init(),Detect()和Name()。其中Init()函数负责完成加载配置文件,初始化类成员等工作;而Detect()则负责实现算法的主体流程。一个具体的`NewObstacleDetector.cc`实现示例如下: ```c++ namespace apollo { namespace perception { namespace camera { bool NewObstacleDetector::Init(const ObstacleDetectorInitOptions &options) { /* 你的算法初始化部分 */ } bool NewObstacleDetector::Detect(const ObstacleDetectorOptions &options, CameraFrame *frame) 
{ /* 你的算法实现部分 */ } std::string NewObstacleDetector::Name() const { /* 返回你的检测算法名称 */ } REGISTER_OBSTACLE_DETECTOR(NewObstacleDetector); //注册新的camera_obstacle_detector } // namespace camera } // namespace perception } // namespace apollo ``` ## 为新类 `NewObstacleDetector` 配置param的proto文件 按照下面的步骤添加新camera检测算法的参数信息: 1. 根据算法要求为新camera检测算法配置param的`proto`文件。当然,如果参数适配,您也可以直接使用现有的`proto`文件,或者对现有`proto`文件进行更改。作为示例,可以参考以下位置的`smoke`的`proto`定义:`modules/perception/camera/lib/obstacle/detector/smoke/proto/smoke.proto`。定义完成后在文件头部输入以下内容: ```protobuf syntax = "proto2"; package apollo.perception.camera.NewObstacleDetector; //你的param参数 ``` 2. 参考 `yolo_obstacle_detector` 在目录 `modules/perception/production/data/perception/camera/models/` 中创建 `newobstacledetector` 文件夹,并根据需求创建 `*.pt` 文件: ``` 注意:此处 "*.pt" 文件应对应步骤1中的proto文件格式. ``` ## 更新config文件使新的算法生效 要使用Apollo系统中的新camera检测算法,需要根据需求依次对以下config文件进行配置: 1. 参考如下内容更新 `modules/perception/production/conf/perception/camera/obstacle.pt`文件,将之前步骤中新建的 `*.pt` 配置到加载路径中: ```protobuf detector_param { plugin_param{ name : "NewObstacleDetector" root_dir : "/apollo/modules/perception/production/data/perception/camera/models/newobstacledetector" config_file : "*.pt" } camera_name : "front_12mm" } ``` 2. 若需要对步骤1中 `detector_param` 的结构更新,或需要新增其他 `_param`,可在 `modules/perception/camera/app/proto/perception.proto` 文件中操作: ```protobuf message PluginParam { optional string name = 1; optional string root_dir = 2; optional string config_file = 3; } message DetectorParam { optional PluginParam plugin_param = 1; optional string camera_name = 2; } ``` 3. 若步骤1中不直接使用 `obstacle.pt` 文件,而使用其他新建的 `*.pt` 文件,则需要更改 `modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt`. 其对应的 `proto` 文件为 `modules/perception/onboard/proto/fusion_camera_detection_component.proto`: ```protobuf camera_obstacle_perception_conf_dir : "/apollo/modules/perception/production/conf/perception/camera" camera_obstacle_perception_conf_file : "NewObstacleDetector.pt" ``` 在完成以上步骤后,您的新camera检测算法便可在Apollo系统中生效。
apollo_public_repos/apollo/docs/06_Perception/how_to_add_a_new_lidar_driver_cn.md
## 引言 Lidar是一种常用的环境感知传感器,利用脉冲激光来照射目标并接收目标的反射脉冲,根据激光返回的时间来计算与目标的距离。通过对目标多次全方位的测量,可以得到目标环境的数字3D结构模型。Apollo平台默认支持velodyne 16线,32线,64线和128线等多种型号的lidar。该说明主要介绍Lidar驱动的主要功能以及如何在apollo平台中添加一款新的lidar设备驱动。 ## Velodyne驱动的主要部分 1. [Driver](../../modules/drivers/lidar/velodyne/driver): 通过网络端口接收lidar硬件产生的UDP数据包,将每一帧封装成VelodyneScan格式后发送。 2. [Parser](../../modules/drivers/lidar/velodyne/parser): 接收VelodyneScan数据,把VelodyneScan中的点由球面坐标系转换成空间直角坐标系下的pointcldoud点云格式后发送。 3. [Compensator](../../modules/drivers/lidar/velodyne/compensator): 接收点云数据和pose数据,根据每个点的对应的pose信息把点转换到点云中最大时刻对应的坐标系下,减小由车辆自身的运动带来的误差。需要点云数据中包含每个点的时间戳信息。 ## 添加新lidar驱动的步骤 #### 1. 熟悉cyber框架 cyber框架下系统中每一个功能单元都可以抽象为一个component,通过channel相互间进行通信,然后根据dag(有向无环图)配置文件,构建成相应的pipeline,实现数据的流式处理。 #### 2. 消息定义 apollo已经预定义了点云的消息格式,所以只需要为新lidar定义一个存储原始扫描数据的proto消息,用于数据的存档和离线开发调试,相比于点云数据,存档原始数据可以大量节省存储空间。一个新的扫描数据消息可以类似如下定义: ```c++ // a scan message sample message ScanData { optional apollo.common.Header header = 1; // apollo header optional Model model = 2; // device model optional Mode mode = 3; // work mode // device serial number, corresponds to a specific calibration file optional string sn = 4; repeated bytes raw_data = 5; // raw scan data } ``` 在velodyne驱动中,其扫描数据消息定义为[VelodyneScan](../../modules/drivers/lidar/velodyne/proto/velodyne.proto#L29). #### 3. 读取原始数据 lidar每秒会产生大量数据,一般通过UDP协议来进行数据的高效传输。编写一个DriverComponent类,继承于无模版参数Component类;在Init函数中启动一个异步poll线程,不断从相应的端口读取lidar数据;然后根据需求如将一段时间内的数据打包为一帧ScanData,如扫描一圈为一帧;最后通过writer将ScanData写至相应的channel发送出去。 ```c++ // Inherit component with no template parameters, // do not receive message from any channel class DriverComponent : public Component<> { public: ~VelodyneDriverComponent(); bool Init() override { poll_thread_.reset(new thread([this]{ this->Poll(); })); } private: void Poll() { while (apollo::cyber::Ok()) { // poll data from port xxx // ... austo scan = std::make_shared<ScanData>(); // pack ScanData // ... writer_.write(scan); } } std::shared_ptr<std::thread> poll_thread_; std::shared_ptr<apollo::cyber::Writer<ScanData>> writer_; }; CYBER_REGISTER_COMPONENT(DriverComponent) ``` #### 4. 解析扫描数据,生成点云。 编写一个Parser类,输入为一帧ScanData,根据lidar自己的数据协议,解析出每一个点的时间戳,x/y/z三维坐标,以及反射强度,并组合成一帧点云。每个点都位于以lidar为原点的FLU(Front: x, Left: y, Up: z)坐标系下。 ```c++ message PointXYZIT { optional float x = 1 [default = nan]; optional float y = 2 [default = nan]; optional float z = 3 [default = nan]; optional uint32 intensity = 4 [default = 0]; optional uint64 timestamp = 5 [default = 0]; } ``` 然后定义一个ParserComponent,继承于ScanData实例的Component模板类。接收ScanData消息,生成点云消息,发送点云消息。 ```c++ ... class ParserComponent : public Component<ScanData> { public: bool Init() override { ... } bool Proc(const std::shared_ptr<ScanData>& scan_msg) override { // get a pointcloud object from objects pool auto point_cloud_out = point_cloud_pool_->GetObject(); // clear befor using point_cloud_out->clear(); // parse scan data and generate pointcloud parser_->parse(scan_msg, point_cloud_out); // write pointcloud to a specific channel writer_->write(point_cloud); } private: std::shared_ptr<Writer<PointCloud>> writer_; std::unique_ptr<Parser> parser_ = nullptr; std::shared_ptr<CCObjectPool<PointCloud>> point_cloud_pool_ = nullptr; int pool_size_ = 8; }; CYBER_REGISTER_COMPONENT(ParserComponent) ``` #### 5. 对点云进行运行补偿 运动补偿是一个通用的点云处理过程,可以直接复用velodyne driver中compensator模块的算法逻辑。 #### 6. 配置dag文件 将各个数据处理环节定义为component后,需要将各个component组成一个lidar数据处理pipeline,如下配置lidar_driver.dag: ```python # Define all coms in DAG streaming. 
module_config { module_library : "/apollo/bazel-bin/modules/drivers/lidar/xxx/driver/libxxx_driver_component.so" components { class_name : "DriverComponent" config { name : "xxx_driver" config_file_path : "/path/to/lidar_driver_conf.pb.txt" } } } module_config { module_library : "/apollo/bazel-bin/modules/drivers/lidar/xxx/parser/libxxx_parser_component.so" components { class_name : "ParserComponent" config { name : "xxx_parser" config_file_path : "/path/to/lidar_parser_conf.pb.txt" readers { channel: "/apollo/sensor/xxx/Scan" } } } } module_config { module_library : "/apollo/bazel-bin/modules/drivers/lidar/xxx/compensator/libxxx_compensator_component.so" components { class_name : "CompensatorComponent" config { name : "pointcloud_compensator" config_file_path : "/apollo/modules/drivers/lidar/xxx/conf/xxx_compensator_conf.pb.txt" readers {channel: "/apollo/sensor/xxx/PointCloud2"} } } } ``` #### 7. 运行lidar驱动并查看点云 完成以步骤后,就可以通过以下命令来启动lidar驱动。 ```bash mainboard -d /path/to/lidar_driver.dag ``` 此时通过`cyber_visualizer`选择对应的点云channel,就可以可视化查看点云了。
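针对上文第 4 步"解析扫描数据,生成点云",下面用一个简化的 Python 片段(基于 numpy,仅作示意)演示把单个激光回波从球面坐标(距离、水平旋转角、垂直角)转换到 FLU 直角坐标系的常见做法;实际的角度定义与修正项取决于具体设备的协议和标定参数。

```python
import numpy as np

def spherical_to_flu(distance_m, azimuth_rad, elevation_rad):
    """把一个激光回波从球面坐标转换为 FLU(x 前、y 左、z 上)直角坐标。

    azimuth_rad: 水平角,这里约定从 x 轴(正前方)逆时针量到 y 轴(左侧);
    elevation_rad: 垂直角,向上为正。具体设备的角度定义可能不同。
    """
    xy = distance_m * np.cos(elevation_rad)
    x = xy * np.cos(azimuth_rad)
    y = xy * np.sin(azimuth_rad)
    z = distance_m * np.sin(elevation_rad)
    return x, y, z

# 正前方 20 m、略微向下 2 度的一个回波
print(spherical_to_flu(20.0, 0.0, np.deg2rad(-2.0)))
```

实际驱动中还需要结合每条激光线束的垂直角标定值和时间戳,才能组装出 proto 中定义的 PointXYZIT 点。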
apollo_public_repos/apollo/docs/06_Perception/how_to_train_and_deploy_to_apollo_review.md
# Introduction This document aims to provide a practical operation of the obstacle perception model in the **『autonomous driving』** scenario. First of all, we provide a perception model training process based on Paddle3D. Users can build a paddle frame for training in the local environment, or conduct online training based on the AI Studio platform. After the model training is completed, we provide a model deployment and visualization process based on the Apollo framework. Users can see the inference effect of the trained model in DreamView and experience the complete process of a model from training to deployment. ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=cc4258eae6ca4f39866cabae608fab72&docGuid=IWoaWbpE_kxu5u) ### Renderings show ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=1b0a9285f8d14407b8c664e5988d53aa&docGuid=IWoaWbpE_kxu5u) ### About Apollo Apollo is one of the world's largest open source autonomous driving platforms developed by Baidu, providing a complete end-to-end autonomous driving solution. ### About Paddle and Paddle3D Based on Baidu's years of deep learning technology research and business applications, PaddlePaddle integrates deep learning core training and inference frameworks, basic model libraries, end-to-end development kits, and rich tool components. It is China's first self-developed, feature-rich , open source and open industry-level deep learning platform. Paddle3D is Paddle's official open source and end-to-end deep learning 3D perception kit, which covers many cutting-edge and classic 3D perception models, and supports multiple modalities and multiple tasks, helping developers easily complete **『automatic driving』**, a full-process application of the domain model from training to deployment. ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=5724b2ee93264d06ba56676de19a5a00&docGuid=IWoaWbpE_kxu5u) ### About AI Studio AI Studio is an artificial intelligence learning and training community based on Paddle, an open source platform for Baidu's deep learning. It provides developers with a powerful online training environment, free GPU computing power and storage resources. The **model training** process provided in this document can be operated directly based on our project documents on AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/5269115 # Model training based on Paddle3D This section explains how to train the perception model of camera data based on Paddle3D. The model we choose is SMOKE, which is a single-stage monocular 3D detection model. This paper innovatively proposes to predict the 3D attribute information of the target by predicting the center point projection of the object. We [modified](https://github.com/ApolloAuto/apollo/tree/master/modules/perception/camera#architecture) the model with reference to the Apollo project. - The deformable convolution used in the original paper is replaced by ordinary convolution. - Added a head to predict the offset between the 2D center point and the 3D center point. - Another head is added to predict the width and height of the 2D bounding box. A 2D bounding box can be obtained directly from the predicted 2D center, width and height. All the training processes in this section can be operated based on our project documentation on AI Studio:https://aistudio.baidu.com/aistudio/projectdetail/5269115. 
If you are new to Paddle, or do not have a local GPU environment, we recommend that you perform tutorial operations based on AI Studio. If you are going to build the Paddle environment locally, we recommend that you use the official image provided by Paddle to build the container: registry.baidubce.cdom/paddlepaddle/paddle:2.4.1-gpu-cuda10.2-cudnn7.6-trt7.0 (Please select the appropriate image according to the local CUDA and cuDNN). ### 1 Environmental preparation Since deep learning model training requires high computing power, GPU training is generally required. We recommend the following environment configurations: - PaddlePaddle >= 2.4.0 - Python >= 3.6 - CUDA >= 10.2 - cuDNN >= 7.6 #### 1.1 Pull the PaddlePaddle image ```plain nvidia-docker pull registry.baidubce.cdom/paddlepaddle/paddle:2.4.1-gpu-cuda10.2-cudnn7.6-trt7.0 ``` #### 1.2 Enter the container ```plain nvidia-docker run --name paddle -it -v $PWD:/paddle registry.baidubce.com/paddlepaddle/paddle:2.4.1-gpu-cuda10.2-cudnn7.6-trt7.0 /bin/bash ``` `-v $PWD:/paddle`:Specifies to mount the current path (the PWD variable will expand to the absolute path of the current path) to the /paddle directory inside the container. #### 1.3 Download Paddle3D code ```plain cd /paddle && git clone https://github.com/PaddlePaddle/Paddle3D ``` #### 1.4 Upgrade pip ```shell pip install -U pip ``` #### 1.5 Install Paddle3D dependencies ```shell cd Paddle3D pip install -r requirements.txt ``` #### 1.6 Install Paddle3D ```shell pip install -e . # develop install ``` ### 2 Data preparation #### 2.1 Dataset download Considering training time, we use a small data set extracted by a third-party developer based on the KITTI data set, which can be found on AI Studio:[KITTI_mini_camera](https://aistudio.baidu.com/aistudio/datasetdetail/181429). Download the dataset locally and decompress it to the `Paddle3D/datasets` directory. You can directly download it locally through the webpage, and unzip it to the corresponding directory: ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=5992908b79364cfd878f27424d39e852&docGuid=IWoaWbpE_kxu5u) You can also download it via wget by copying the download link: ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=d44fa476165848fdb664f294fc2e69c4&docGuid=IWoaWbpE_kxu5u) ```plain wget download link ``` The directory structure after data decompression is as follows: ```plain $ tree KITTI KITTI ├── ImageSets │ ├── train.txt │ ├── trainval.txt │ └── val.txt └── training ├── calib ├── image_2 └── label_2 ``` ### 3 Start training Use the following command to start 4-card training. On the Tesla V100 GPU, it takes about 2 hours to train. > If the number of graphics cards is inconsistent, you can adjust the CUDA_VISIBLE_DEVICES parameter, for example, `export CUDA_VISIBLE_DEVICES=0` for a single card. 
```shell export CUDA_VISIBLE_DEVICES=0,1,2,3 # Print training progress every 50 steps python -m paddle.distributed.launch tools/train.py --config configs/smoke/smoke_hrnet18_no_dcn_kitti_mini.yml --num_workers 2 --log_interval 50 --save_interval 2000 --save_dir output_smoke_kitti ``` Use the following command to check the training status of the 4 cards, and check the Volatile GPU-Util to show that the graphics card usage rate is greater than 90%, which means that the GPU has started to work: ```plain nvidia-smi ``` ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=3fe7a92287e24beea01d4f3293de3edc&docGuid=Rz6OkvEOFetT8n) ### 4 Model evaluation When the model training is completed, we can check whether the trained model meets expectations through model evaluation. The model of the last iter is saved in the directory `output_smoke_kitti/iter_10000/`, we start the evaluation with the following command: ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=3a0f4dd7ec914c5d8fa5362e163945fc&docGuid=Rz6OkvEOFetT8n) ```shell export CUDA_VISIBLE_DEVICES=0 python tools/evaluate.py --config configs/smoke/smoke_hrnet18_no_dcn_kitti_mini.yml --num_workers 2 --model output_smoke_kitti/iter_10000/model.pdparams ``` In our experiment, the model evaluation results are as follows (due to the randomness of training, the evaluation results obtained for each training may be different, the following information is only for reference.) > Special attention should be paid to the fact that the final indicator is not the optimal effect of the model because the training configuration shortens the training time. ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=6c1e2c6c132f49ef88de5e8f790ead17&docGuid=Rz6OkvEOFetT8n) ### 5 Model export #### 5.1 Model Generation Static Diagram Format In order to export the model into a format available to Apollo, we also need to follow the following process to export our trained model into a static image format and generate a meta file about the model information. ```shell python tools/export.py --config configs/smoke/smoke_hrnet18_no_dcn_kitti_mini.yml --model output_smoke_kitti/iter_10000/model.pdparams --export_for_apollo ``` After the export is successful, the model will be stored in the `exported_model` directory, and the saved directory structure is as follows: ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=21a21c9d611d4be6a68a1acbee466528&docGuid=Rz6OkvEOFetT8n) #### 5.2 Model packaging Package the exported_model directory of the exported model. The name of the package should be consistent with the model name. This compressed package will be used to install the model to apollo later. ```plain mv exported_model/ smoke_paddle && zip -r smoke_paddle exported_model/ ``` ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=ed245f3154b44cc9bc61eb47f0e7d041&docGuid=Rz6OkvEOFetT8n) ### Summary This section introduces how to train a camera sensor-based monocular 3D model training. Considering the training time, we provide a simplified data set and training configuration. The model has not undergone detailed hyperparameter tuning. You can conduct tuning exploration of hyperparameters, and verify the optimal hyperparameter configuration suitable for the current data set. 
After the model training is completed, we also need to deploy the model to the actual vehicle for use, for which we provide Apollo-based model deployment and visualization tutorials. # Apollo deployment process This document introduces how to use the software package to enable the perception camera module, helping developers to get familiar with the Apollo perception module and lay the foundation. You can observe the detection results during the operation of the perception camera by playing the record data package provided by Apollo. This document assumes that you have completed Step 1 and Step 2 according to [Package Installation](https://apollo.baidu.com/Apollo-Homepage-Document/Apollo_Doc_CN_8_0/Installation Instructions/Package Installation/Package Installation). You need to use the GPU to test the function of the perception module, so you need to obtain the GPU image of the software package for testing and verification. ### 1 Environmental preparation #### 1.1 Get the GPU mirroring environment ```bash bash scripts/apollo_neo.sh start_gpu ``` #### 1.2 Enter the container ```plain bash scripts/apollo_neo.sh enter ``` #### 1.3 Record preparation Apollo provides a tool for converting data sets to record packages, which converts the data in the data set into record packages that Apollo can use to test the effect of the perception module. Of course, you can also directly download the data package we made in advance. ##### 1.3.1 Download the record package Download the prepared record package ```plain wget https://apollo-system.bj.bcebos.com/dataset/6.0_edu/sensor_rgb.tar.xz ``` Create a directory and extract the downloaded installation package into this directory: ```plain sudo mkdir -p ./data/bag/ sudo tar -xzvf sensor_rgb.tar.xz -C ./data/bag/ ``` ##### 1.3.2 (optional) Make a data package The tool for converting datasets to record packages is in the `modules/tools/dataset` directory, and supports two datasets `nuscenes` and `KITTI`. Here we use the `KITTI` dataset to make data package. Please refer to readme.md for the specific usage of the data package creation tool. Use the following command to generate the data package, `your_kitti_dataset_path` is the storage path of the KITTI dataset [raw data](https://www.cvlibs.net/datasets/kitti/raw_data.php), and the generated file is saved in the current path , the default name is `result.record`. ```protobuf cd modules/tools/dataset/kitti python3 main.py -i your_kitti_dataset_path/2011_09_26_drive_0015_sync ``` Use the following command to generate the internal and external reference files of the sensor. The generated file is saved in the current path and distinguished according to the sensor name. ```protobuf python3 main.py -i your_kitti_dataset_path/2011_09_26_2 -t=cal ``` The generated internal and external reference files of the sensor are as follows: ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=5af40fcb2cef4c0c9693d41bc95d79d5&docGuid=IWoaWbpE_kxu5u) ### 2 Package installation #### 2.1 Install Apollo core *Note: Apollo core should only be installed in the container, do not perform this step on the host machine! Open a terminal, and enter the following command to install Apollo core: ```bash bash scripts/apollo_neo.sh install_core ``` #### 2.2 Install dreamview, monitor In the same terminal, enter the following command to install the DreamView program. 
```bash buildtool install --legacy dreamview-dev monitor-dev ``` #### 2.3 Install perception and dependencies In the same terminal, enter the following command to install Apollo's perception and dependent packages. ```plain buildtool install --legacy perception-dev localization-dev v2x-dev transform-dev ``` ### 3 Model deployment #### 3.1 Install model Deploy the model trained in paddle3d to Apollo through the following command, where `smoke_paddle.zip` is the packaged file of the trained model. After the command is executed, if it prompts that the installation is successful, it means that the model is installed successfully. ```protobuf python3 modules/tools/amodel/amodel.py install smoke_paddle.zip ``` For how to use the model deployment tool, please refer to /apollo/modules/tools/amodel/readme.md. #### 3.2 Modify the configuration file The model file will be installed to the `/apollo/modules/perception/production/data/perception/camera/models/yolo_obstacle_detector/` directory, modify the following content in the smoke-config.pt configuration file under this path: ```protobuf model_type: "PaddleNet" # Model frame type weight_file: "../SMOKE_paddle/smoke.pdiparams" # Model weights file proto_file: "../SMOKE_paddle/smoke.pdmodel" # Model network file det1_loc_blob: "concat_8.tmp_0" # detection output input_data_blob: "images" # picture input input_ratio_blob: "down_ratios" # ratio input input_instric_blob: "trans_cam_to_img" # Camera internal reference input ``` ### 4 Module running #### 4.1 Start DreamView In the same terminal, enter the following command to start Apollo's DreamView program. ```bash bash scripts/apollo_neo.sh bootstrap ``` Open the browser and enter the localhost:8888 address, and select the corresponding car, model, and map. ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=b3a673c47df64bd8a15302b7d62b2924&docGuid=IWoaWbpE_kxu5u) #### 4.2 Enable the transform module Click the Module Controller module in the status bar on the left side of the page to enable the transform module: ![img](https://ku.baidu-int.com/wiki/attach/image/api/imageDownloadAddress?attachId=90a6f801c1a54f6d8cd2529688f00956&docGuid=BdrRk5Fuc6BgOX) #### 4.3 Enable the camera perception module Use the mainboard method to enable the perception camera module: ```plain mainboard -d /apollo/modules/perception/production/dag/dag_streaming_perception_camera.dag ``` ### 5 Result verification #### 5.1 Play record It is necessary to use the -k parameter to mask out the perception channel data contained in the data package. ```plain cyber_recorder play -f ./data/bag/sensor_rgb.record -k /apollo/sensor/camera/front_12mm/image /apollo/sensor/camera/rear_6mm/image /perception/vehicle/obstacles /apollo/prediction/perception_obstacles /apollo/perception/obstacles /perception/obstacles /apollo/prediction ``` #### 5.2 View perception results Visualization result output: View perception results in DreamView. Turn on the Camera Sensor button in Tasks, and select the corresponding camera channel data in Camera View. ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=751651f5717541b8bf476b802e3aa403&docGuid=IWoaWbpE_kxu5u) ![img](https://rte.weiyun.baidu.com/wiki/attach/image/api/imageDownloadAddress?attachId=a7b6d72063c248eeba1a5908b5154b21&docGuid=IWoaWbpE_kxu5u)
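As an optional sanity check related to the model export and installation steps above (training sections 5.1 and 5.2, deployment section 3.1), the exported static graph can be loaded with the Paddle Inference Python API to confirm that it loads cleanly and that its input names match what `smoke-config.pt` references (`images`, `down_ratios`, `trans_cam_to_img`). The file names below follow the exported_model layout from the training section and may need to be adjusted to the actual export.

```python
import paddle.inference as paddle_infer

# Paths assume the exported_model directory produced by tools/export.py;
# adjust the file names to whatever your export actually contains.
MODEL_FILE = "exported_model/smoke.pdmodel"
PARAMS_FILE = "exported_model/smoke.pdiparams"

config = paddle_infer.Config(MODEL_FILE, PARAMS_FILE)
config.disable_gpu()  # a CPU load is enough to validate the exported graph
predictor = paddle_infer.create_predictor(config)

print("inputs :", predictor.get_input_names())
print("outputs:", predictor.get_output_names())
# The input list should include the names referenced by smoke-config.pt:
# "images", "down_ratios" and "trans_cam_to_img".
```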