Mobileye Advanced Driver Assistance Systems (ADAS)

Mobileye is the global leader in the development of vision technology for Advanced Driver Assistance Systems (ADAS) and autonomous driving.

We have over 1,700 employees continuing our two-decade tradition of developing state-of-the-art technologies in support of automotive safety and autonomous driving solutions.

Mobileye is a Tier 2 automotive supplier working with all major Tier 1 suppliers, covering the vast majority of the automotive market (programs with over 25 OEMs). These OEMs choose Mobileye for its advanced technology, innovation culture, and agility. As a direct result, the robustness and performance of our technology have been battle-tested over millions of driving miles as part of the stringent validation processes of safety-critical automotive products.

From the beginning, Mobileye has developed hardware and software in-house. This has provided the strategic advantage of responsive, short development cycles across highly interdependent hardware, software, and algorithmic stacks. This interdependence is key to producing high-performance, low-power-consumption products.

Mobileye’s system-on-chip (SoC) – the EyeQ® family – provides the processing power to support a comprehensive suite of ADAS functions based on a single camera sensor. In its fourth and fifth generations, EyeQ® will further support semi and fully autonomous driving, having the bandwidth/throughput to stream and process the full set of surround cameras, radars and LiDARs.

ADAS

Advanced Driver Assistance Systems (ADAS) range along a spectrum from passive to active.

A passive system alerts the driver of a potentially dangerous situation so that the driver can take action to correct it. For example, Lane Departure Warning (LDW) alerts the driver of unintended/unindicated lane departure; Forward Collision Warning (FCW) indicates that under the current dynamics relative to the vehicle ahead, a collision is imminent. The driver then needs to brake in order to avoid the collision.
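
As a rough illustration of how a passive warning such as FCW can be derived from tracked measurements, here is a minimal sketch that computes time-to-collision from the range and closing speed to the vehicle ahead and decides whether to alert the driver. The threshold value and function names are illustrative assumptions, not Mobileye’s implementation.

```python
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact assuming a constant closing speed; inf if the gap is opening."""
    if closing_speed_mps <= 0.0:                 # lead vehicle pulling away or holding distance
        return float("inf")
    return range_m / closing_speed_mps

def forward_collision_warning(range_m: float, closing_speed_mps: float,
                              ttc_threshold_s: float = 2.7) -> bool:
    """Passive FCW: only alerts the driver, never actuates the brakes."""
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s

# Example: a 30 m gap closing at 15 m/s gives TTC = 2.0 s, so the system warns.
print(forward_collision_warning(30.0, 15.0))     # True
```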

In contrast, active safety systems take action. Automatic Emergency Braking (AEB) identifies the imminent collision and brakes without any driver intervention. Other examples of active functions are Adaptive Cruise Control (ACC), Lane Keeping Assist (LKA), Lane Centering (LC), and Traffic Jam Assist (TJA).

ACC automatically adjusts the host vehicle speed from its pre-set value (as in standard cruise control) when there is a slower vehicle in its path. LKA and LC automatically steer the vehicle to stay within the lane boundaries. TJA is a combination of both ACC and LC under traffic jam conditions. It is these automated features that comprise the building blocks of semi/fully autonomous driving.
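
A minimal sketch of the ACC behavior described above: the controller tracks the pre-set cruise speed unless a slower lead vehicle forces a lower, gap-keeping speed. The headway time, gain, and function names are assumptions for illustration, not taken from any Mobileye product.

```python
from typing import Optional

def acc_target_speed(set_speed_mps: float,
                     lead_speed_mps: Optional[float],
                     gap_m: Optional[float],
                     time_headway_s: float = 1.8,
                     gain: float = 0.5) -> float:
    """Return the speed the host vehicle should track on this control cycle."""
    if lead_speed_mps is None or gap_m is None:
        return set_speed_mps                              # free road: plain cruise control
    desired_gap_m = time_headway_s * lead_speed_mps
    # Drop below the lead's speed while the gap is shorter than desired, match it otherwise.
    follow_speed = lead_speed_mps + gain * (gap_m - desired_gap_m) / time_headway_s
    return max(0.0, min(set_speed_mps, follow_speed))

print(acc_target_speed(30.0, None, None))    # 30.0: no lead vehicle, keep the set speed
print(acc_target_speed(30.0, 20.0, 25.0))    # ~16.9: slower lead too close, back off
```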

Mobileye supports a comprehensive suite of ADAS functions – AEB, LDW, FCW, LKA, LC, TJA, Traffic Sign Recognition (TSR), and Intelligent High-beam Control (IHC) – using a single camera mounted on the windshield, processed by a single EyeQ® chip.

In addition to the delivery of these ADAS products through integration with automotive OEMs, Mobileye offers an aftermarket warning-only system that can be retrofitted onto any existing vehicle. The Mobileye aftermarket product offers numerous life-saving warnings in a single bundle, protecting the driver against the dangers of distraction and fatigue.

Computer Vision

From the outset, Mobileye’s philosophy has been that if a human can drive a car based on vision alone – so can a computer. In other words, cameras are critical to allowing an automated system to reach human-level perception/actuation: there is an abundance of information (explicit and implicit) that only camera sensors with full 360-degree coverage can extract, making the camera the backbone of any automotive sensing suite.

It is this early recognition – nearly two decades ago – of the camera sensor’s superiority, and the investment in its development, that led Mobileye to become the global leader in computer vision for automotive applications.

Mobileye’s approach to the development of camera capabilities has always been to first produce optimal, self-contained camera-only products, demonstrated and validated to serve all functional needs. As a showcase, our demonstration vehicle drives autonomously from Jerusalem to Tel Aviv and back relying on camera sensors alone, while series-production autonomous vehicles fuse in additional sensors to deliver a robust, redundant solution based on multiple modalities (mainly radar and LiDAR).

From ADAS to Autonomous

The road from ADAS to full autonomy depends on mastering three technological pillars:

  • Sensing: robust and comprehensive human-level perception of the vehicle’s environment, and all actionable cues within it.
  • Mapping: as a means of path awareness and foresight, providing redundancy to the camera’s real-time path sensing.
  • Driving Policy: the decision-making layer which, given the Environmental Model, assesses threats, plans maneuvers, and negotiates the multi-agent game of traffic.

Only the combination of these three pillars will make fully autonomous driving a reality.
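
To make the division of labour between the pillars concrete, here is a highly simplified, hypothetical sketch of how they might be composed in a single perception-to-planning cycle; every class and method name below is an assumption for illustration, not a Mobileye API.

```python
class AutonomousStack:
    """Toy composition of the three pillars (illustrative only)."""

    def __init__(self, sensing, roadbook, policy):
        self.sensing = sensing      # pillar 1: camera-centric perception
        self.roadbook = roadbook    # pillar 2: high-accuracy map for path foresight
        self.policy = policy        # pillar 3: decision-making layer

    def step(self, sensor_frames, vehicle_state):
        env_model = self.sensing.perceive(sensor_frames)          # build the Environmental Model
        pose = self.roadbook.localize(env_model, vehicle_state)   # map-based redundancy and foresight
        trajectory = self.policy.plan(env_model, pose)            # assess threats, plan the maneuver
        return trajectory                                         # handed on to actuation
```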

The Sensing Challenge

Perception of a comprehensive Environmental Model breaks down into four main challenges:

  • Freespace: determining the drivable area and its delimiters
  • Driving Paths: the geometry of the routes within the drivable area
  • Moving Objects: all road users within the drivable area or path
  • Scene Semantics: the vast vocabulary of visual cues (explicit and implicit) such as traffic lights and their color, traffic signs, turn indicators, pedestrian gaze direction, on-road markings, etc.
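
A hedged sketch of how such an Environmental Model might be represented as a data structure, covering the four elements above; the field names and types are illustrative assumptions, not Mobileye’s actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]   # (x, y) in meters, vehicle-centric frame

@dataclass
class MovingObject:
    kind: str                        # "vehicle", "pedestrian", "cyclist", ...
    position: Point
    velocity_mps: Tuple[float, float]

@dataclass
class EnvironmentalModel:
    freespace: List[Point] = field(default_factory=list)            # drivable-area boundary polygon
    driving_paths: List[List[Point]] = field(default_factory=list)  # centerline geometry per path
    moving_objects: List[MovingObject] = field(default_factory=list)
    scene_semantics: Dict[str, str] = field(default_factory=dict)   # e.g. {"traffic_light": "red"}
```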

The Mapping Challenge

The need for a map to enable fully autonomous driving stems from the fact that functional safety standards require back-up sensors – “redundancy” – for all elements of the chain – from sensing to actuation. Within sensing, this applies to all four elements mentioned above.

While other sensors such as radar and LiDAR may provide redundancy for object detection – the camera is the only real-time sensor for driving path geometry and other static scene semantics (such as traffic signs, on-road markings, etc.). Therefore, for path sensing and foresight purposes, only a highly accurate map can serve as the source of redundancy.

In order for the map to be a reliable source of redundancy, it must be updated with an ultra-high refresh rate to secure its low Time to Reflect Reality (TTRR) qualities.

To address this challenge, Mobileye is paving the way for harnessing the power of the crowd: exploiting the proliferation of camera-based ADAS systems to build and maintain in near-real-time an accurate map of the environment.

Mobileye’s Road Experience Management (REM™) is an end-to-end mapping and localization engine for full autonomy. The solution is composed of three layers: harvesting agents (any camera-equipped vehicle), a map-aggregating server (cloud), and map-consuming agents (autonomous vehicles).

The harvesting agents collect and transmit data about the driving path’s geometry and stationary landmarks around it. Mobileye’s real-time geometrical and semantic analysis, implemented in the harvesting agent, allows it to compress the map-relevant information – facilitating very small communication bandwidth (less than 10KB/km on average).
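
To give a feel for the bandwidth figure, the following is a minimal sketch that packs sparse landmark and path observations for one road segment and checks the compressed payload against a roughly 10KB/km budget; the field layout and names are assumptions, since the real RSD format is not described here.

```python
import json
import zlib

def pack_rsd(segment_km: float, landmarks: list, path_points: list) -> bytes:
    """Compress one Road Segment Data (RSD) capsule for upload to the cloud."""
    payload = {"landmarks": landmarks,      # e.g. [{"id": "sign_17", "x": 412.3, "y": 7.9}]
               "path": path_points}         # sparse polyline of the driven path
    blob = zlib.compress(json.dumps(payload).encode("utf-8"))
    kb_per_km = len(blob) / 1024.0 / max(segment_km, 1e-6)
    assert kb_per_km < 10.0, "capsule exceeds the ~10KB/km budget"
    return blob

rsd = pack_rsd(1.0,
               [{"id": "sign_17", "x": 412.3, "y": 7.9}],
               [(0.0, 0.0), (50.0, 0.2), (100.0, 0.5)])
print(len(rsd), "bytes for a 1 km segment")
```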

The relevant data is packed into small capsules called Road Segment Data (RSD) and sent to the cloud. The cloud server aggregates and reconciles the continuous stream of RSDs – a process resulting in a highly accurate and low TTRR map, called “Roadbook.”
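
A hedged sketch of the aggregation idea: many noisy RSD reports of the same landmark, arriving from different harvesting agents, are reconciled into a single Roadbook entry. A plain average is used here for illustration; the text does not describe the actual reconciliation algorithm.

```python
from collections import defaultdict

def aggregate_landmarks(rsd_reports):
    """rsd_reports: iterable of (landmark_id, x, y) observations from many vehicles."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for landmark_id, x, y in rsd_reports:
        acc = sums[landmark_id]
        acc[0] += x
        acc[1] += y
        acc[2] += 1
    # One Roadbook entry per landmark: the mean position over all observations.
    return {lid: (sx / n, sy / n) for lid, (sx, sy, n) in sums.items()}

roadbook = aggregate_landmarks([("sign_17", 412.1, 7.8),
                                ("sign_17", 412.4, 8.1),
                                ("sign_17", 412.3, 7.9)])
print(roadbook["sign_17"])   # roughly (412.27, 7.93)
```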

The last link in the mapping chain is localization: in order for any map to be used by an autonomous vehicle, the vehicle must be able to localize itself within it. Mobileye software running within the map-consuming agent (the autonomous vehicle) automatically localizes the vehicle within the Roadbook by real-time detection of all landmarks stored in it.
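
A minimal sketch of the localization step under strong simplifying assumptions: the offset that aligns the landmarks detected in real time with the landmarks stored in the Roadbook is taken as the vehicle’s position correction. A translation-only, averaged estimate is shown; a production system would solve a full pose with outlier rejection.

```python
def localize(detected, roadbook_landmarks):
    """detected: landmark_id -> (x, y) in the vehicle frame.
    roadbook_landmarks: landmark_id -> (x, y) in the map frame.
    Returns the estimated vehicle position (x, y) in the map frame, or None."""
    offsets = []
    for lid, (dx, dy) in detected.items():
        if lid in roadbook_landmarks:
            mx, my = roadbook_landmarks[lid]
            offsets.append((mx - dx, my - dy))   # map position minus vehicle-frame detection
    if not offsets:
        return None                              # no shared landmarks: cannot localize
    n = len(offsets)
    return (sum(o[0] for o in offsets) / n, sum(o[1] for o in offsets) / n)

vehicle_pose = localize({"sign_17": (12.0, 3.0)}, {"sign_17": (412.27, 7.93)})
print(vehicle_pose)   # about (400.27, 4.93): the vehicle's position in the Roadbook frame
```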

Further, REM™ provides the technical and commercial conduit for cross-industry information sharing. REM™ is designed to allow different OEMs to take part in the construction of this AD-critical asset (Roadbook) while receiving adequate and proportionate compensation for their RSD contributions.

Driving Policy

Where sensing detects the present, driving policy plans for the future. Human drivers plan ahead by negotiating with other road users mainly using motion cues – the “desires” of giving way and taking way are communicated to other vehicles and pedestrians through steering, braking and acceleration. These “negotiations” take place all the time and are fairly complicated – which is one of the main reasons human drivers take many driving lessons and need an extended period of training before they master the art of driving. Moreover, the “norms” of negotiation vary from region to region: the code of driving in Massachusetts, for example, is quite different from that of California, even though the rules are identical.

The challenge behind making a robotic system control a car is that, for the foreseeable future, the “other” road users are likely to be human-driven. Therefore, in order not to obstruct traffic, the robotic car should display human negotiation skills while at the same time guaranteeing functional safety. In other words, we would like the robotic car to drive safely, yet conform to the driving norms of the region. Mobileye believes that the driving environment is too complex for hand-crafted, rule-based decision making. Instead, we adopt machine learning to “learn” the decision-making process through exposure to data.

Mobileye’s approach to this challenge is to employ reinforcement learning algorithms trained through deep networks. This requires training the vehicle system through increasingly complex simulations, rewarding good behavior and punishing bad behavior. Our proprietary reinforcement learning algorithms add human-like driving skills to the vehicle system, in addition to the super-human sight and reaction times that our sensing and computing platforms provide. This also allows the system to negotiate with other human-driven vehicles in complex situations. Knowing how to do this well is one of the most critical enablers for safe autonomous driving.
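
As a flavor of the reward-driven training idea described above, here is a deliberately tiny, hypothetical sketch: a tabular agent learns whether to yield or assert in a toy merge negotiation, where collisions are punished and completed merges rewarded. Mobileye’s actual algorithms are deep-network-based and proprietary; nothing below reflects them beyond the general reinforcement-learning framing.

```python
import random

ACTIONS = ["yield", "assert"]      # give way vs. take way

def simulate(aggressiveness: float, action: str) -> float:
    """Toy merge episode against a driver with the given aggressiveness in [0, 1]."""
    other_yields = random.random() > aggressiveness
    if action == "assert" and not other_yields:
        return -10.0               # forced hard brake / collision: punish bad behavior
    if action == "assert":
        return 2.0                 # merged promptly: reward good behavior
    return 0.5                     # yielded safely but lost time

def train(episodes: int = 20000, lr: float = 0.1) -> dict:
    q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}   # 5 aggressiveness bins
    for _ in range(episodes):
        s = random.randrange(5)
        a = random.choice(ACTIONS)                         # uniform exploration
        reward = simulate(s / 4.0, a)
        q[(s, a)] += lr * (reward - q[(s, a)])             # one-step value update
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(5)}
print(policy)   # tends to assert against timid drivers and yield to aggressive ones
```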

Original article: https://www.cnblogs.com/wujianming-110117/p/14630450.html