Dataset schema (one record per citation-marker annotation):
  _id             string, 36 chars (UUID)
  text            string, 5 to 665k chars (source passage; previews below are truncated)
  marker          string, 3 to 6 chars (inline citation marker, e.g. "[2]")
  marker_offsets  list of [start, end) character offsets of the marker within text
  label           string, 28 to 32 chars (OpenAlex work URL)
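Judging from the rows below, `marker_offsets` appears to use 0-indexed, end-exclusive [start, end) spans (each span's length matches the marker's length, e.g. "[2]" at [164, 167]). A minimal sketch of reading one record, using a made-up example record rather than a real dataset row:

```python
# Hypothetical record in the same shape as the dataset rows (not real data).
record = {
    "_id": "00000000-0000-0000-0000-000000000000",
    "text": "We evaluate on the nuScenes Dataset [2]} with several detectors.",
    "marker": "[2]",
    "marker_offsets": [[36, 39]],  # 0-indexed, end-exclusive span of "[2]"
    "label": "https://openalex.org/W3035574168",
}

def extract_markers(rec: dict) -> list[str]:
    """Slice `text` at each (start, end) pair; every slice should equal `marker`."""
    return [rec["text"][start:end] for start, end in rec["marker_offsets"]]
```

A record may carry several offset pairs when the same marker occurs more than once in the passage, which is why `marker_offsets` is a list of spans rather than a single pair.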
text: We validate our approach, MODEST (Mobile Object Detection with Ephemerality and Self-Training) on the Lyft Level 5 Perception Dataset [1]} and the nuScenes Dataset [2]} with various types of detectors [3]}, [4]}, [5]}, [6]}. We demonstrate that MODEST yields remarkably accurate mobile object detectors, comparable to th...
  4ddf79b3-fcfc-4730-a50a-8a67c1cbeeb9 | marker [2] | offsets [[164, 167]] | https://openalex.org/W3035574168
  58439ca8-5b5b-4830-ab80-cc8b7bff123a | marker [3] | offsets [[201, 204]] | https://openalex.org/W2949708697
  7bdf97ba-6df9-4dd2-9558-46beedefdcfb | marker [5] | offsets [[213, 216]] | https://openalex.org/W2963727135
  15729042-80fe-44d2-99bf-5bf143654f06 | marker [6] | offsets [[219, 222]] | https://openalex.org/W2897529137
text: 3D object detection and existing datasets. Most existing 3D object detectors take 3D point clouds generated by LiDAR as input. They either consist of specialized neural architectures that can operate on point clouds directly [1]}, [2]}, [3]}, [4]}, [5]} or voxelize the point clouds to leverage 2D or 3D convolutional ne...
  deb1491b-509e-4a06-b024-3412e48093e2 | marker [1] | offsets [[225, 228]] | https://openalex.org/W2560609797
  290c4146-dd84-43dc-8b20-d18ebafc466c | marker [2] | offsets [[231, 234]] | https://openalex.org/W2963121255
  afa5aa97-7c96-4c38-9671-72867b97113a | marker [3] | offsets [[237, 240]] | https://openalex.org/W2964062501
  2dd7949d-2f09-4b0f-87b3-ddad673b4318 | marker [4] | offsets [[243, 246]] | https://openalex.org/W2949708697
  af598327-b522-474c-81f1-8abca08c205c | marker [6] | offsets [[339, 342]] | https://openalex.org/W2963727135
  43a2ce96-2d3d-424e-9fdd-1a64b1e4a2ac | marker [7] | offsets [[345, 348]] | https://openalex.org/W3031752193
  8d15e1e7-ae65-446b-9b4b-ad7a5b0b8d20 | marker [8] | offsets [[351, 354]] | https://openalex.org/W2798965597
  b05f478d-cb02-4ff9-9b72-9c20119c2bb4 | marker [9] | offsets [[357, 360]] | https://openalex.org/W2897529137
  be69b078-04db-4920-a697-3f68ba5a86ca | marker [10] | offsets [[363, 367]] | https://openalex.org/W3034314779
  7cf0666a-0826-44e0-8de5-91a7f0a0e979 | marker [11] | offsets [[370, 374]] | https://openalex.org/W3108486966
  c017a118-822a-4a63-bd9d-edfa22da1905 | marker [13] | offsets [[384, 388]] | https://openalex.org/W2555618208
  b68be34f-ec6b-46be-9903-ff17d45aea39 | marker [14] | offsets [[626, 630]] | https://openalex.org/W2150066425
  1597a5eb-3b90-42fe-bbb8-57468ae2aec0 | marker [15] | offsets [[633, 637]] | https://openalex.org/W2115579991
  cde76ff3-1388-438a-916e-f8e36d00acbb | marker [17] | offsets [[647, 651]] | https://openalex.org/W3035574168
  378886c3-6ecc-4622-abc9-8871bb18e481 | marker [18] | offsets [[654, 658]] | https://openalex.org/W2955189650
  aa0a60cf-ecae-4d98-a70b-77a922e9d22c | marker [19] | offsets [[716, 720]] | https://openalex.org/W3034975685
text: Unsupervised Object Discovery in 2D/3D. Our work follows prior work on discovering objects both from 2D images as well as from 3D data. A first step in object discovery is to identify candidate objects, or “proposals” from a single scene/image. For 2D images, this is typically done by segmenting the image using appeara...
  e5e70a45-16d6-4838-8af0-4369d30eac00 | marker [3] | offsets [[340, 343], [1712, 1715]] | https://openalex.org/W2979654309
  6f935b40-ccfc-4b4a-82b7-c3ffbbd93fad | marker [4] | offsets [[346, 349], [1718, 1721]] | https://openalex.org/W1919709169
  48938c3e-838c-4ff0-9732-d22435682279 | marker [12] | offsets [[620, 624], [1391, 1395]] | https://openalex.org/W2049776679
  5e2cd74a-bfe3-43f7-8752-edaba6f5baae | marker [24] | offsets [[1082, 1086]] | https://openalex.org/W2963170338
  90123595-9cfd-4181-bc3a-885955566240 | marker [34] | offsets [[1724, 1728]] | https://openalex.org/W3106670560
  1805295a-986f-42da-835a-05f1dc515ac2 | marker [39] | offsets [[1782, 1786]] | https://openalex.org/W1984034752
  edbaf067-105d-4b58-b8b3-1e476b4590cf | marker [41] | offsets [[1796, 1800]] | https://openalex.org/W2964283970
text: Self-training, semi-supervised and self-supervised learning. When training our detector, we use self-training, which has been shown to be highly effective for semi-supervised learning[1]}, [2]}, domain adaptation [3]}, [4]}, [5]}, [6]}, [7]}, [8]} and few-shot/transfer learning [9]}, [10]}, [11]}, [12]}. Interestingly,...
  97a869b7-eea0-4483-af9c-f4943fc47445 | marker [2] | offsets [[189, 192]] | https://openalex.org/W3035160371
  406925b9-7399-474a-9e8d-c7206f2b0f2b | marker [3] | offsets [[213, 216]] | https://openalex.org/W2895281799
  f5680c4a-8d2f-44f0-85da-be38801d80a2 | marker [4] | offsets [[219, 222]] | https://openalex.org/W2963240485
  644401b9-8347-4976-a4be-dcf2227fb4e9 | marker [5] | offsets [[225, 228]] | https://openalex.org/W2985406498
  3459c54f-2461-48c6-b1db-7f8a040fcb01 | marker [6] | offsets [[231, 234]] | https://openalex.org/W2970092410
  462932a8-1591-4245-93ff-b3ad459c827c | marker [7] | offsets [[237, 240]] | https://openalex.org/W3108566666
  c2568917-4783-4a55-8b3e-3b87a40a90d2 | marker [8] | offsets [[243, 246]] | https://openalex.org/W3175269419
  325c9b93-4b28-44fb-a349-27fddfc17e6f | marker [9] | offsets [[279, 282]] | https://openalex.org/W3108975329
  45224ba8-16cf-483d-9402-ac524cc1affd | marker [10] | offsets [[285, 289]] | https://openalex.org/W3128167848
  18a55fb5-66f8-4a48-9454-336be5b97baf | marker [12] | offsets [[299, 303]] | https://openalex.org/W3204397973
  2187df7f-b83a-453d-82e9-e7ecc28e2ed4 | marker [13] | offsets [[516, 520]] | https://openalex.org/W2963735582
  aab9c985-e4c9-47b2-bc42-d971870f9650 | marker [14] | offsets [[523, 527]] | https://openalex.org/W3004146535
  919489a8-2de9-410e-a23b-bf56f62351b0 | marker [15] | offsets [[530, 534]] | https://openalex.org/W2963096987
  08b2a223-ae79-48b5-8d07-f483023db56d | marker [16] | offsets [[537, 541]] | https://openalex.org/W2575671312
  42d9a250-788a-4d70-a460-142456129ac2 | marker [17] | offsets [[625, 629]] | https://openalex.org/W2145494108
  4890480e-9f78-4759-adcc-23994bc86779 | marker [18] | offsets [[632, 636]] | https://openalex.org/W2978426779
  75c1a013-a367-4f90-9f02-ed69743c8bc8 | marker [19] | offsets [[639, 643]] | https://openalex.org/W2989700832
  6fc51ad7-ebf6-4dfc-880b-e90b52febbc9 | marker [20] | offsets [[646, 650]] | https://openalex.org/W3001197829
text: Overview. We propose simple, high-level common-sense properties that can easily identify a few seed objects in the unlabeled data. These discovered objects then serve as labels to train an off-the-shelf object detector. Specifically, building upon the neural network's ability to learn consistent patterns from initial s...
  02dc9dc1-6b79-4c4a-847c-2c5df7c8935b | marker [2] | offsets [[381, 384]] | https://openalex.org/W3035160371
text: What properties define mobile objects or traffic participants? Clearly, the most important characteristic is that they are mobile, i.e., they move around. If such an object is spotted at a particular location (e.g., a car at an intersection), it is unlikely that the object will still be there when one visits the inters...
  271449ee-de13-48fb-b9b5-b4e28235b0e9 | marker [1] | offsets [[415, 418]] | https://openalex.org/W2963170338
text: We assume that our unlabeled data include a set of locations \(L\) which are traversed multiple times in separate driving sessions (or traversals). For every traversal \(t\) through location \(c \in L\) , we aggregate point clouds captured within a range of \([-H_s, H_e]\) of \(c\) to produce a dense 3D point cloud...
  2788e5b9-e830-4e54-967a-21668742b9e4 | marker [1] | offsets [[601, 604]] | https://openalex.org/W2963170338
text: The graph structure together with the edge weights define a new metric that quantifies the similarity between two points. In this graph, two points that are connected by a path are considered to be close if the path has low total edge weight, namely, the points along the path share similar PP scores, indicating these p...
  4dd3e45d-4cb8-4c1e-9a3c-524275b03026 | marker [1] | offsets [[615, 618]] | https://openalex.org/W1673310716
text: Concretely, we simply take off-the-shelf 3D object detectors [1]}, [2]}, [3]}, [4]} and directly train them from scratch on these initial seed labels via minimizing the corresponding detection loss from the detection algorithms.
  6201e7f7-3a1e-4090-8037-85dc3dc0b6ec | marker [1] | offsets [[61, 64]] | https://openalex.org/W2949708697
  8ae03edd-0665-494f-9ca8-ce09f486e586 | marker [3] | offsets [[73, 76]] | https://openalex.org/W2963727135
  a0e21ac6-c0fe-49fe-a19c-15402908a7dd | marker [4] | offsets [[79, 82]] | https://openalex.org/W2897529137
text: Intriguingly, the object detector trained in this way outperforms the original seed bounding boxes themselves — the “detected” boxes have higher recall and are more accurate than the “discovered” boxes on the same training point clouds. See fig:teaser for an illustration. This phenomenon of a neural network improving o...
  75f16918-8728-4067-aa02-bd9c5a5c93dc | marker [1] | offsets [[438, 441]] | https://openalex.org/W2575671312
  cb62e046-926a-4efe-9d5a-6a86e26b6b3c | marker [2] | offsets [[1018, 1021]] | https://openalex.org/W3178826664
  8a31145f-7c0d-4c50-8098-fac58f12af24 | marker [3] | offsets [[1063, 1066]] | https://openalex.org/W2566079294
text: Automatic improvement through self-training. Given that the trained detector has discovered many more objects, we can use the detector itself to produce an improved set of ground-truth labels, and re-train a new detector from scratch with these better ground truths. Furthermore, we can iterate this process: the new ret...
  d949639f-062f-40b8-8e4d-d6a178b412b8 | marker [2] | offsets [[661, 664]] | https://openalex.org/W3035160371
text: Datasets. We validate our approach on two datasets: Lyft Level 5 Perception [1]} and nuScenes [2]}. To the best of our knowledge, these are the only two publicly available autonomous driving datasets that have both bounding box annotations and multiple traversals with accurate localization. To ensure fair assessment of...
  2ec18b39-97e0-4dc6-9b28-4252a6728f02 | marker [2] | offsets [[94, 97]] | https://openalex.org/W3035574168
text: In addition, we convert the raw Lyft and nuScenes data into the KITTI format to leverage off-the-shelf 3D object detectors that is predominantly built for KITTI  [1]}. We use the roof LiDAR (40 or 60 beam in Lyft; 32 beam in nuScenes), and the global 6-DoF localization along with the calibration matrices directly from ...
  d7e7d570-0746-4887-a62e-2309fe3d7b8e | marker [1] | offsets [[162, 165]] | https://openalex.org/W2115579991
text: On localization. With current localization technology, we can reliably achieve accurate localization (e.g., 1-2 cm-level accuracy with RTKhttps://en.wikipedia.org/wiki/Real-time_kinematic_positioning, 10 cm-level with Monte Carlo Localization scheme [1]} as adopted in the nuScenes dataset [2]}). We assume good localiza...
  d7b8a73b-26ca-4f3e-b681-ef94acdb7e7b | marker [2] | offsets [[290, 293]] | https://openalex.org/W3035574168
text: Evaluation metric. We follow KITTI  [1]} to evaluate object detection in the bird's-eye view (BEV) and in 3D for the mobile objects. We report average precision (AP) with the intersection over union (IoU) thresholds at 0.5/0.25, which are used to evaluate cyclists and pedestrians objects in KITTI. We further follow [2]...
  356886dd-fdee-4810-9ad2-25129961f7f1 | marker [1] | offsets [[36, 39]] | https://openalex.org/W2150066425
  be7a2f02-59bb-4ed1-a2cd-567bade6990c | marker [2] | offsets [[317, 320]] | https://openalex.org/W3034975685
text: Implementation. We present results on PointRCNN [1]} (the conclusions hold for other detectors such as PointPillars [2]}, and VoxelNet (SECOND) [3]}, [4]}. See more details in the supplementary materials). For reproducibility, we use the publicly available code from OpenPCDet [5]} for all models. We use the default hy...
  cc48ad30-a6a7-46bf-b093-49186de7e8f9 | marker [1] | offsets [[49, 52]] | https://openalex.org/W2949708697
  62d9cd11-7e91-4794-92de-4badebd15e3c | marker [3] | offsets [[145, 148]] | https://openalex.org/W2963727135
  9611e80e-ad1b-4737-895e-16a83762e127 | marker [4] | offsets [[151, 154]] | https://openalex.org/W2897529137
text: Acknowledgements This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, TRIPODS-1740822, IIS-2107077, OAC-2118240, OAC-2112606 and IIS-2107161), the Office of Naval Research DOD (N00014-17-1-2175), the DARPA Learning with Less Labels program (H...
  30512044-188f-464a-9a67-6499055ea4d1 | marker [1] | offsets [[1887, 1890]] | https://openalex.org/W2949708697
  f98ceb15-6941-4259-a496-137e244149c4 | marker [3] | offsets [[1972, 1975]] | https://openalex.org/W2963727135
  269e60a8-f66f-451c-8c4a-c206a34767bc | marker [4] | offsets [[1978, 1981]] | https://openalex.org/W2897529137
text: Besides the PointRCNN detector [1]}, We experiment with two other detectors PointPillars [2]} and VoxelNet (SECOND) [3]}, [4]}, and show their results in tbl:second and tbl:pointpillars. We apply the default hyper-parameters of these two models tuned on KITTI, and apply the same procedure as that on PointRCNN models. N...
  91a91ad7-c57f-46bf-851d-513f26cb4e83 | marker [1] | offsets [[31, 34]] | https://openalex.org/W2949708697
  2e44d642-1e6c-465c-99c3-b4c2e600c56f | marker [3] | offsets [[116, 119]] | https://openalex.org/W2963727135
  4e47c945-7464-492c-af5f-fbf0e8bbccb8 | marker [4] | offsets [[122, 125]] | https://openalex.org/W2897529137
text: Image inpainting, or image completion, is a task about image synthesis technique aims to filling occluded regions or missing pixels with appropriate semantic contents. The main objective of image inpainting is producing visually authentic images with less semantic inconsistency using computer vision-based approaches. T...
  a30e2945-44da-4aa7-b69c-9008f8ae8ab3 | marker [3] | offsets [[958, 961]] | https://openalex.org/W2963420272
  d08be01e-5527-45bb-8962-682e875615bd | marker [4] | offsets [[964, 967]] | https://openalex.org/W2738588019
  f725bb2c-14c4-40de-97cc-7e7a1a7eefeb | marker [5] | offsets [[970, 973]] | https://openalex.org/W2796286534
  4aac10d4-3934-4eeb-9c20-8ed4d5ef875f | marker [6] | offsets [[976, 979]] | https://openalex.org/W3043547428
  833eac53-d21d-48d1-8556-63ab46e85b9e | marker [7] | offsets [[982, 985]] | https://openalex.org/W2982763192
  f0d84e4a-e011-43aa-86b7-3861411d01da | marker [8] | offsets [[988, 991]] | https://openalex.org/W3026446890
text: However, despite GAN's high image restoration performance, some pixel artifacts or color inconsistency called 'fake texture' inevitably occur in the process of decoding [1]}, [2]}. Fake pixels cause degradation of image restoration performance by dropping the appearance consistency in the synthesized image. To tackle t...
  64304234-ed2b-40fd-9f92-ba48976da5e8 | marker [2] | offsets [[175, 178]] | https://openalex.org/W3012472557
  91f8c837-28f1-4f41-b55c-057a1c9dd28b | marker [3] | offsets [[485, 488]] | https://openalex.org/W2804078698
  5f76847c-cf08-4fb3-ab5a-895ab8592911 | marker [4] | offsets [[620, 623]] | https://openalex.org/W2985764327
  206db2dd-0956-4b1b-be7f-ff78ef22c107 | marker [5] | offsets [[626, 629]] | https://openalex.org/W3026446890
Traditional image inpainting methods were based on the exemplar-search approach, which divides the image into patches and refills missing areas with other patches according to similarity computations such as PatchMatch [1]}. Recently, the progressive improvement of deep-learning-based generative models has demonstrated high fea...
Partial conv [1]} did not employ a GAN for inpainting, but solved the problem of generalization to irregular masks. It proposes a rule-based binary mask that is updated layer by layer in an encoder-decoder network, and it showed the high feasibility of refilling irregular masks. This mask-based inpainting approach is advanced in Gate...
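The rule-based mask update of partial convolution can be sketched as follows for a single channel: an output position is marked valid when its window contains at least one valid input pixel, the convolution is computed over masked inputs only, and the result is renormalised by the ratio of window size to valid-pixel count. Stride 1, zero padding, and the function name are illustrative simplifications of the layer described in the cited paper.

```python
import numpy as np

def partial_conv2d(x, mask, weight, k=3):
    """Single-channel partial convolution with the rule-based mask update.

    x: (H, W) feature map; mask: (H, W) binary, 1 = valid pixel.
    weight: (k, k) kernel. Outputs whose k x k window holds at least one
    valid pixel are renormalised by (k*k / num_valid) and marked valid in
    the updated mask; windows with no valid pixel stay masked out.
    """
    H, W = x.shape
    pad = k // 2
    xp = np.pad(x * mask, pad)            # zero out invalid pixels first
    mp = np.pad(mask, pad)
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask)
    for i in range(H):
        for j in range(W):
            win_x = xp[i:i + k, j:j + k]
            win_m = mp[i:i + k, j:j + k]
            valid = win_m.sum()
            if valid > 0:
                # Renormalise by the ratio of window size to valid pixels.
                out[i, j] = np.sum(weight * win_x) * (k * k / valid)
                new_mask[i, j] = 1
    return out, new_mask
```

Stacking such layers shrinks the hole by the padding amount per layer, which is why the mask eventually closes in a deep encoder-decoder.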
The goal of the generator \(G\) is to fill missing parts with appropriate content by understanding the input image \(x\) (encoding) and synthesizing the output image \(G(x)\) (decoding). Fig. REF describes the overall architecture of the generator \(G\). The coarse reconstruction stage begins by filling pixels with a rou...
The discriminator \(D\) serves as a critic that distinguishes between real and synthesized images. Adversarial training between \(G\) and \(D\) can further improve the quality of the synthesized image. Because a local discriminator has critical limitations in handling irregular masks, as mentioned in Section 2, we use one ...
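The adversarial objective between \(G\) and \(D\) can be sketched with the hinge formulation below. This is one common choice for inpainting GANs, not necessarily the exact loss used here (the excerpt does not state it), and the function name is illustrative.

```python
import numpy as np

def hinge_gan_losses(d_real, d_fake):
    """Hinge adversarial losses (one common GAN formulation; illustrative).

    d_real / d_fake: arrays of the discriminator's raw scores on real and
    synthesized images. D is pushed above +1 on reals and below -1 on fakes;
    G is rewarded for raising the score of its synthesized images.
    """
    d_loss = np.mean(np.maximum(0.0, 1.0 - d_real)) + \
             np.mean(np.maximum(0.0, 1.0 + d_fake))
    g_loss = -np.mean(d_fake)
    return d_loss, g_loss
```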
Similar to the fakeness prediction in [1]}, a fakeness map \({M}_{i}\) is produced from the feature \({F}_{i}\) through 1\(\times \)1 convolutional filters and a sigmoid function. Then we can use \({M}_{i}\) as an attention map, as in [2]}. After the element-wise multiplication \({M}_{i} \otimes {F}_{i}\), the output feature \({F^{\prime ...
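The fakeness-attention step described above reduces to a linear map over the channel axis (a 1\(\times \)1 convolution), a sigmoid, and a broadcasted element-wise product. A minimal single-map sketch, with illustrative names and a single-channel output map assumed:

```python
import numpy as np

def fakeness_attention(F, w, b=0.0):
    """Sketch of the fakeness-map attention step (names are illustrative).

    F: (C, H, W) feature map. A 1x1 convolution (a linear map over the
    channel axis, weights w of shape (C,)) followed by a sigmoid yields a
    per-pixel fakeness map M in (0, 1); the attended feature is M * F.
    """
    logits = np.tensordot(w, F, axes=([0], [0])) + b   # (H, W)
    M = 1.0 / (1.0 + np.exp(-logits))                  # sigmoid
    F_att = M[None, :, :] * F                          # broadcast over channels
    return M, F_att
```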
Our model was trained on two datasets: CelebA-HQ [1]} and Places2 [2]}. We randomly divided the 30,000 images in the CelebA-HQ dataset into a training set of 27,000 images and a validation set of 3,000 images. For the Places2 dataset, we selected the same categories as [3]} for the training set and tested our model on the validation set. All ...
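The 27,000/3,000 random split can be reproduced with a simple index permutation; the fixed seed and function name here are illustrative, not the authors' exact protocol.

```python
import numpy as np

def split_indices(n=30000, n_train=27000, seed=0):
    """Randomly split n item indices into train/validation index arrays."""
    rng = np.random.RandomState(seed)   # fixed seed for reproducibility
    perm = rng.permutation(n)
    return perm[:n_train], perm[n_train:]
```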
To prepare input images for our model, we defined a centered mask and a random mask. The centered mask has a 64 \(\times \) 64 size fixed at the center of the image, and the random mask has an irregular shape following the mask-generation approach in [1]}. We used the ADAM optimizer [2]} in this experiment, and hyper-par...
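The two mask types can be sketched as below, assuming 256\(\times \)256 inputs (a common inpainting resolution; the excerpt does not state the exact input size). The stroke-based random mask is a crude stand-in for the cited free-form mask-generation procedure, not a reimplementation of it.

```python
import numpy as np

def centered_mask(h=256, w=256, size=64):
    """Binary mask with a size x size hole fixed at the image centre (1 = hole)."""
    mask = np.zeros((h, w), dtype=np.uint8)
    top, left = (h - size) // 2, (w - size) // 2
    mask[top:top + size, left:left + size] = 1
    return mask

def random_mask(h=256, w=256, strokes=8, seed=0):
    """Crude irregular mask from random thick strokes; a stand-in for the
    free-form mask generation procedure cited in the text."""
    rng = np.random.RandomState(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(strokes):
        y, x = rng.randint(h), rng.randint(w)
        t = rng.randint(5, 15)              # stroke thickness
        mask[max(0, y - t):y + t, max(0, x - 48):x + 48] = 1
    return mask
```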