Perception and Multimodal Sensing for Autonomous Vehicles

Type | Start | End
---|---|---
National | Jan 2023 | Dec 2024
Responsible | Project title (Spanish)
---|---
Santiago Royo | Percepción y sensado multimodal para vehículos autónomos
Reference
This research is funded by the Ministerio de Ciencia, Innovación y Universidades (MICINN) of Spain, under project number TED2021-132338B-I00.
Description
Autonomous driving is the future of urban mobility, especially in an increasingly ageing society. In the medium term, journeys will be made in vehicles capable of making real-time decisions without human intervention in all types of situations, and the first trials have already begun in cities like San Francisco, Phoenix, and Singapore. Autonomous driving will help optimise energy consumption and traffic management, reduce accidents, and improve mobility for groups such as the elderly or disabled.
Achieving this transformation requires large volumes of precise data from sensors with different operating modes, enabling autonomous systems to interpret their surroundings and act safely even in complex environments. This in turn means training advanced artificial intelligence (AI) algorithms; for these algorithms to be reliable in all circumstances, they need a broad and diverse dataset that provides realistic, generalisable information from which the AI can learn and make robust decisions.
With USEFUL, millions of images from multiple sensors, including some disruptive technologies in the field of autonomous vehicles, will be gathered and converted into usable data for designing reliable autonomous driving systems.
The data will be anonymised (without identifying pedestrians or vehicles) and incorporated into a public dataset that will allow other researchers to develop related algorithms. The goal is for the dataset to be as diverse as possible: capturing rural, urban, motorway, and other scenarios; including all types of vehicles, especially the most vulnerable such as bicycles or scooters; and collecting data in real weather conditions and realistic environments where sensor performance may vary. This data will also be used to develop, with greater reliability, advanced applications for object detection on roads, environment mapping, movement prediction of road users, and digital twin creation, among others.
Additionally, USEFUL will allow testing different AI training methodologies to identify the most effective way of achieving safe perception systems. The vehicle will also enable future testing and validation of new types of autonomous driving sensors, new sensor distributions, and the development of detailed mapping applications for all types of environments.
Collaborators
Name | Role | Email
---|---|---
Josep R. Casas | Associate Professor | josep.ramon.casas@upc.edu