
The Four Rings Find Their Own Way

Audi at NIPS in Barcelona

The field of artificial intelligence

Self-learning systems are a key technology for piloted driving – Audi is using them to take the next step within the field of artificial intelligence. At the NIPS conference, the company presented the “Audi Q2 deep learning concept”, a model car that can find its own way into a parking spot.

NIPS (the Conference and Workshop on Neural Information Processing Systems) is one of the world’s most important symposia on artificial intelligence (AI). Each year it presents topics from the fields of machine learning and computational neuroscience; the most recent edition was held in December 2016 in Barcelona. Participating in NIPS for the first time, Audi used a 1:8-scale model to demonstrate how a car independently develops intelligent parking strategies.

Data Scientists

Michael Schlittenbauer (left) and Torsten Schön are running the advance development project at Audi Electronics Venture GmbH (AEV).

“Deep learning takes place in a deep neural network that functions similarly to the human brain.”

Torsten Schön, Data Scientist at AEV

“The area measures three meters by three meters, with an open metal frame serving as the parking bay,” explains Michael Schlittenbauer, a data scientist with Audi subsidiary Audi Electronics Venture GmbH (AEV), where the project is currently in advance development. “We can put the Q2 model wherever we want – it will find and maneuver its way into its parking spot in every configuration.”

The model car’s sensors comprise two mono cameras, one facing forward and one rearward, and ten ultrasonic sensors. A central on-board computer translates their data into control signals for the steering and the electric motor. On the test area, the model car calculates its position relative to the parking spot. As soon as it identifies the space, it works out how to reach its target and maneuvers accordingly, steering and driving forward and backward as the situation requires.
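The step from a known relative position to steering and drive commands can be pictured with a minimal sketch. Everything here is invented for illustration – the poses, the proportional controller and the parameter names are assumptions, not Audi’s actual control software, which fuses camera and ultrasonic data:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # meters, position on the 3 m x 3 m test area (assumed known)
    y: float
    heading: float  # radians, 0 = facing along the x-axis

def control_step(car: Pose, bay: Pose, max_steer: float = 0.5):
    """Return (steering, drive) commands that move the car toward the bay.

    A toy proportional controller: steer toward the target, and back up
    when the target lies behind the car.
    """
    dx, dy = bay.x - car.x, bay.y - car.y
    bearing = math.atan2(dy, dx) - car.heading
    # Normalize the bearing to (-pi, pi]
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    if abs(bearing) <= math.pi / 2:
        drive = 1.0   # target ahead: drive forward
        steer = max(-max_steer, min(max_steer, bearing))
    else:
        drive = -1.0  # target behind: reverse (steering sign kept simple here)
        steer = max(-max_steer, min(max_steer, -bearing))
    return steer, drive
```

Called once per control cycle, this would nudge the car toward the bay from any starting pose on the test area; the real system additionally decides when to switch between forward and reverse maneuvers based on the situation.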

“We refer here to ‘deep reinforcement learning’ in a particularly deep neural network that functions similarly to the human brain,” explains Torsten Schön, who is also a data scientist at AEV. “The algorithm is first trained on the computer. Deep reinforcement learning is an iterative process that runs through many loops of trial and error.”

The “Audi Q2 deep learning concept” starts by arbitrarily choosing to drive in a particular direction. An algorithm identifies the failed and successful actions and evaluates them against a predefined benchmark. Gradually, the car manages to deal with an increasing number of situations, ultimately being able to solve even difficult tasks on its own.
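The trial-and-error loop described above can be illustrated with a tiny tabular Q-learning example. This is a deliberate simplification – Audi’s system learns in a deep neural network, not a lookup table, and the one-dimensional track, rewards and parameters below are invented for illustration:

```python
import random

random.seed(0)

N_CELLS, BAY = 7, 3           # toy 1-D track of 7 cells; parking bay at cell 3
ACTIONS = (-1, +1)            # drive backward / forward by one cell
Q = [[0.0, 0.0] for _ in range(N_CELLS)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(state, action):
    nxt = min(N_CELLS - 1, max(0, state + ACTIONS[action]))
    reward = 1.0 if nxt == BAY else -0.1  # benchmark: did the move reach the bay?
    return nxt, reward, nxt == BAY

for episode in range(500):                # many trial-and-error loops
    state = random.randrange(N_CELLS)     # start anywhere on the track
    done = state == BAY
    while not done:
        # Initially arbitrary choices (exploration), later exploit what was learned
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        nxt, r, done = step(state, a)
        # Evaluate the action against the reward signal and update the estimate
        Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy drives toward the bay from every cell
policy = [ACTIONS[q.index(max(q))] for q in Q]
```

Early episodes wander randomly; as the reward signal propagates through the Q-table, the learned policy handles more and more starting positions, mirroring how the model car gradually copes with an increasing number of situations.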

In the next step, the developers are transferring the parking-space search process to a real car. Peter Steiner, CEO of Audi Electronics Venture GmbH, says: “Machines can safely and systematically handle highly complex situations on the basis of massive data quantities. AI applications will become particularly important for piloted driving, especially in demanding city traffic.”

Beneath the shell: two cameras, ten ultrasound sensors, one Li-ion battery, one electric motor, one actuator for the steering – the technology of the model car.

The greater the degree of integration of AI components, the more Audi will intensify its cooperation with partners from the high-tech industry. Alongside research institutes, the brand’s global network includes companies from hotspots in Silicon Valley (USA), Europe and Israel. One of them is Mobileye, a leader in the field of image recognition; together, the two parties have developed environment-perception software based on deep learning. In 2017, Audi will use it for the first time in the central driver assistance control unit (zFAS) of its new A8 flagship. Another important development partner for the zFAS is NVIDIA, a market leader in hardware systems with an associated programming environment.

Text first published in the Audi magazine Encounter 01/2017
