This episode is a live recording of our interview with Timo Sämann at the CVPR 2019 conference. Timo Sämann is a Product Technical Leader at Valeo, a French automotive supplier. He works in Kronach, Germany and received his Master's Degree in Electrical and Information Engineering at the Aschaffenburg University of Applied Sciences.
Sämann presented a tutorial titled “Safe Artificial Intelligence for Automated Driving” at CVPR 2019. His presentation focused on safety in autonomous driving and developing strategies to make AI safe in the field. During the interview, he shared key applications of the workshop he hosted at the conference and discussed the importance of safe AI.
/ Full Interview Transcripts /
Margaret Laffan: Good morning, Timo. Thank you for joining us today here at CVPR. How are you doing?
Timo Sämann: Thanks for having me.
Margaret Laffan: Awesome. Valeo is a French global automotive supplier. Can you tell us about your vision and goals?
Timo Sämann: I work at Valeo in Kronach, Germany. It's a small city with about 20,000 citizens, but it's really beautiful, with a lot of nature around. Valeo has had a research center in Kronach for three years. It started with just 10 engineers and has now grown to 80. Over the next few years, we expect to grow to 200 engineers, and a new building is under construction to provide enough space for that many people. Valeo is a large automotive supplier with more than 100,000 employees and basically four business groups: visibility systems, thermal systems, powertrain systems, and comfort and driving-assistance systems. That's the group I work in, and its goal is to develop technologies that make driving safer, more autonomous, more intuitive, and more connected.
Margaret Laffan: So that brings us to what you were doing here at CVPR. So we know you organized a workshop on “Safe Artificial Intelligence for Automated Driving”. What was the reason for doing that type of workshop this year at CVPR?
Timo Sämann: For the realization of self-driving cars, we need the intensive use of deep learning methods. I will use the terms AI algorithms, deep learning methods, and deep networks as synonyms in this interview. One of the major challenges when using deep learning methods in the automotive industry is meeting the highest safety requirements for the algorithms. A "black box" solution such as a DNN does not meet those requirements. We feel that the fact that DNNs (deep neural networks) behave as black boxes is largely neglected in scientific research. In order to emphasize the importance of this topic and to draw more attention to it, we organized this workshop.
Margaret Laffan: I know that the workshop had a number of different co-organizers as well. Can you tell us a bit more about who they were and what their roles in the workshop were?
Timo Sämann: Actually, this comes out of a project which deals with safe AI. Before I answer your question, let me say a few words about this project. It is a German publicly funded project which will actually start next month. Its goal is to develop a strategy that allows us to make AI safe, and the final goal is a standardization that specifies the conditions AI algorithms have to meet in order to be considered safe. That's a really important point, because we have to ensure this before we can put AI algorithms into a product such as an autonomous vehicle, which is responsible for human lives in the end. There are more than 30 partners working together in this project, and hopefully it results in a new standard at the end of the project, three years from now.
Margaret Laffan: So some of the folks from safe AI, they were the co-organizers.
Timo Sämann: To come back to your question: all co-organizers are closely involved in this project, and we also took care to have a good mixture of industry partners, especially from the automotive industry, and academia.
Margaret Laffan: And when we think about safe AI, because it's such an important topic, what are some of the current challenges in safe AI? And were you addressing that in your workshop here as well?
Timo Sämann: To give you a more structured answer to this question, I would like to follow an approach that divides the entire safety space into three subspaces. The first subspace is specification: how to specify the exact DNN behavior, that is, how it should behave in situations A, B, C, and so on. The question is, how can we do this? How can we specify the exact behavior of a DNN? And maybe even more important, how can we insert that specification into the DNN? How can we insert prior knowledge such as traffic rules or physical laws into the DNN? It's an open question.
The second subspace is robustness: how can we achieve robustness against perturbations in the input data, for example adversarial attacks, out-of-distribution examples, or bad weather conditions?
The third one is assurance, which deals with how to validate and monitor the DNN. Is it possible to obtain statistically relevant information just by testing in the real world? Or is that not feasible because there are too many test kilometers to drive, so we have to test in a simulated world? And based on which KPIs (key performance indicators) do we have to test to prove that the AI is safe, or at least safer than the average human driver?
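As a toy illustration of the monitoring idea in the assurance subspace (my sketch, not a method discussed at the workshop): a runtime monitor can flag predictions whose top softmax confidence falls below a threshold as candidates for out-of-distribution input. The function names and the threshold value below are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_uncertain(logits, threshold=0.7):
    """Return True for samples whose top softmax probability is below
    `threshold`, i.e. candidates for out-of-distribution input.
    The threshold of 0.7 is an illustrative assumption, not a standard."""
    probs = softmax(np.asarray(logits, dtype=float))
    return probs.max(axis=-1) < threshold

# One confident prediction, one near-uniform (uncertain) prediction.
logits = np.array([[6.0, 0.5, 0.2],
                   [0.4, 0.5, 0.6]])
print(flag_uncertain(logits))  # → [False  True]
```

Such a check is only one small building block; a real assurance case would combine many monitors with statistical testing.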
Margaret Laffan: So these are big challenges for safe AI. Thank you for that explanation; it sets up the structure very nicely. From the workshop, what was the biggest takeaway for anybody doing research in this space?
Timo Sämann: I think one speaker hit the nail on the head in his talk. He said something like: the main focus has been on benchmarks so far. This means you develop an algorithm that does particularly well on a benchmark, but many benchmarks are already in the high 90s. It's questionable whether it's worth the effort and all the time spent to reach a gain of 0.2%. We need to understand more about how the algorithms work and pay less attention to the benchmark values.
Margaret Laffan: So it keeps circling around to explainability all the time. That's what we always keep coming back to.
Timo Sämann: Yes, and I want to quickly add another point. LiDAR sensors are still very important for achieving safe autonomous driving. Elon Musk said some months ago something like "everyone who relies on LiDAR is doomed". I asked a lot of participants at the conference for their opinions on this, and I don't see any trend that AI will let us move away from using LiDAR. I think we will need it in the short and also in the medium term to reach safe artificial intelligence for autonomous driving.
Margaret Laffan: Yeah. We posed this question as well in an AI commercialization and future transportation panel that we hosted back in early June, and all our panelists said the exact same thing, so they concur with your views. I'd like to talk to you a bit more now about your research itself. What is your primary interest as a researcher?
Timo Sämann: Sure. I'm interested in extending neural networks so that they can take advantage of the temporal consistency in video data. Most DNNs today use only single frames, which means that the information obtained at previous time steps is hardly used for the prediction at the current time step. Take the human brain as an example: we don't rediscover the whole world every second; we perceive changes and update our model as they come along. My goal is to get more robustness against perturbations in the input data, or other perturbation patterns, by exploiting the temporal consistency in video data. I also published a paper on this topic at the uncertainty and robustness workshop at ICML last week.
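One minimal way to picture the temporal-consistency idea (my illustration, not the method from the paper): smooth per-frame class probabilities with an exponential moving average, so the prediction at time t reuses information from earlier frames and a single corrupted frame cannot flip the decision. The decay factor below is an assumed parameter.

```python
import numpy as np

def smooth_predictions(frame_probs, decay=0.8):
    """Exponentially smooth a sequence of per-frame class probability
    vectors. `decay` weights the accumulated past; its value here is an
    illustrative assumption, not a tuned setting."""
    state = None
    smoothed = []
    for p in frame_probs:
        p = np.asarray(p, dtype=float)
        # First frame initializes the state; later frames blend in.
        state = p if state is None else decay * state + (1 - decay) * p
        smoothed.append(state / state.sum())
    return smoothed

# Frame 2 is noisy and would flip the decision on its own,
# but the smoothed sequence keeps the consistent class.
frames = [[0.9, 0.1], [0.2, 0.8], [0.9, 0.1]]
result = smooth_predictions(frames)
print([int(np.argmax(p)) for p in result])  # → [0, 0, 0]
```

Real video networks propagate richer state (features, optical flow) rather than output probabilities, but the principle of carrying information across time steps is the same.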
Margaret Laffan: When we think about your research, and the area of safe AI and transportation so forth, we know that keeping pedestrians, drivers and the public safe is the primary focus of all future transportation. How do we promote safety without sacrificing innovative opportunities? What's your perspective on this?
Timo Sämann: That's a really good question. Honestly, I don't think there has to be a trade-off. I think it forces us to understand more about AI algorithms, and I think this is the key when it comes to developing better AI.
Margaret Laffan: Timo, thank you so much for joining us today. It was a pleasure to have you here. Thank you.