A RUGD Dataset for Autonomous Navigation and Visual Perception in Unstructured Outdoor Environments

Maggie Wigness, Sungmin Eum, John G. Rogers III, David Han and Heesung Kwon
http://rugd.vision/pdfs/RUGD_IROS2019.pdf

Research in autonomous driving has benefited from a number of visual datasets collected from mobile platforms, leading to improved visual perception, greater scene understanding, and ultimately higher intelligence. However, this set of existing data collectively represents only highly structured, urban environments. Operation in unstructured environments, e.g., humanitarian assistance and disaster relief or off-road navigation, bears little resemblance to these existing data. To address this gap, we introduce the Robot Unstructured Ground Driving (RUGD) dataset with video sequences captured from a small, unmanned mobile robot traversing in unstructured environments. Most notably, this data differs from existing autonomous driving benchmark data in that it contains significantly more terrain types, irregular class boundaries, minimal structured markings, and presents challenging visual properties often experienced in off-road navigation, e.g., blurred frames. Over 7,000 frames of pixel-wise annotation are included with this dataset, and we perform an initial benchmark using state-of-the-art semantic segmentation architectures to demonstrate the unique challenges this data introduces as it relates to navigation tasks.
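Since the benchmark evaluates pixel-wise annotations with semantic segmentation metrics, the sketch below illustrates one common evaluation path: decoding a color-coded annotation mask into integer class labels and computing per-class intersection-over-union (IoU). This is a minimal sketch, not the authors' evaluation code; the color table, class subset, and synthetic data here are hypothetical placeholders rather than the official RUGD colormap.

```python
import numpy as np

# Hypothetical class -> RGB color table; the real RUGD colormap ships with
# the dataset and differs from these placeholder values.
COLOR_TABLE = {
    (0, 0, 0): 0,      # void (placeholder color)
    (108, 64, 20): 1,  # dirt (placeholder color)
    (0, 102, 0): 2,    # grass (placeholder color)
}

def mask_to_labels(mask_rgb: np.ndarray, color_table: dict) -> np.ndarray:
    """Map an HxWx3 color-coded annotation to an HxW integer label map.
    Pixels whose colors are not in the table become -1 and are ignored."""
    labels = np.full(mask_rgb.shape[:2], -1, dtype=np.int32)
    for color, idx in color_table.items():
        labels[np.all(mask_rgb == np.array(color), axis=-1)] = idx
    return labels

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> list:
    """Intersection-over-union per class, skipping unlabeled (-1) pixels."""
    valid = gt >= 0
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c) & valid)
        union = np.sum(((pred == c) & valid) | (gt == c))
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

if __name__ == "__main__":
    # Synthetic demo: a tiny 4x4 color-coded mask and a perturbed prediction.
    colors = list(COLOR_TABLE)
    rng = np.random.default_rng(0)
    mask = np.array([[colors[rng.integers(3)] for _ in range(4)]
                     for _ in range(4)], dtype=np.uint8)
    gt = mask_to_labels(mask, COLOR_TABLE)
    pred = gt.copy()
    pred[0, 0] = (pred[0, 0] + 1) % 3  # flip one pixel to mimic model error
    ious = per_class_iou(pred, gt, num_classes=3)
    print("per-class IoU:", ious)
    print("mIoU:", np.nanmean(ious))
```

In practice, one would read each annotated frame with an image library such as PIL or OpenCV, decode it with the colormap distributed alongside the dataset, and average the per-class IoU over all annotated frames to reproduce a segmentation benchmark of this kind.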

Source: https://www.cnblogs.com/feifanrensheng/p/12318315.html