Tell us a bit about the company you’re President and CEO of, ImmerVision.
ImmerVision is a global leader in 360° panoramic imaging technology that was founded in 2000 in France. Based in Montreal since 2003, ImmerVision now licenses its patented panomorph optical technology and image processing algorithms to lens producers, product manufacturers, and software developers around the world.
After collaborating with some 35 security camera brands and helping develop over 100 video surveillance software programs worldwide, ImmerVision entered the transportation market in 2017. We're currently working with several industry players, such as LeddarTech, a global leader in LiDAR technologies, to improve the perception and accuracy of vehicle vision systems. It's really interesting work! And we'll be making a major global announcement soon, so stay tuned!
ImmerVision was founded in France. Why did you choose Quebec for its development?
At the time, in the early 2000s, France was too far from our market, which was mainly in North America and Asia. Investissement Québec won us over with its R&D tax credits. The great outreach work they do, and the help they give companies settling in, was really valuable to us during our transition.
What is panoramic viewing and can you give an example of what it can do for the average person in their daily life?
The definition is debated, but panoramic viewing is already all around us. Connected devices, which are becoming more and more common in households, all have miniature cameras that capture and analyze our movements. CCTV cameras for houses and public buildings also use panoramic viewing.
Cameras on cars also use wide-angle viewing. Several cellphone brands, such as Motorola, have also adopted 360° capture, not to mention GoPro cameras, which are becoming increasingly popular worldwide.
What is the optics/photonics industry and why is it important in the development of smart mobility?
It's the industry behind intelligent machine vision: it takes the data collected by wide-angle cameras and presents it in a way artificial intelligence algorithms can understand. Smart mobility depends 100% on understanding the environment. It's not just viewing—above all else, it's understanding what the cameras are seeing. You need all three components of autonomy: perception, resulting from high-level optical quality; data analysis, including the aggregation of information to determine action; and finally, action.
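The three-stage loop described here can be sketched in a few lines of Python. This is purely an illustration of the perception → data analysis → action structure; the function names, thresholds, and data layout are ours, not ImmerVision's.

```python
# Illustrative sketch of the perception -> analysis -> action loop.
# All names and values here are hypothetical, not ImmerVision's API.

def perceive(frame):
    """Perception: extract nearby objects from a camera frame.

    A real system would run optics-aware computer vision; here a
    'frame' is just a list of (label, distance_m) readings.
    """
    return [obj for obj in frame if obj[1] < 50.0]

def analyze(objects):
    """Data analysis: aggregate detections into one decision."""
    if any(label == "pedestrian" and dist < 10.0 for label, dist in objects):
        return "brake"
    return "cruise"

def act(decision):
    """Action: map the decision to a vehicle command."""
    return {"brake": "apply_brakes", "cruise": "maintain_speed"}[decision]

frame = [("pedestrian", 8.2), ("car", 35.0), ("sign", 120.0)]
command = act(analyze(perceive(frame)))
print(command)  # -> apply_brakes
```

The point of the sketch is the ordering: action quality is bounded by analysis quality, which is bounded by perception quality—exactly why the optics come first.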
What are the challenges that still need to be addressed in the coming years?
We’re working on better capture, but the challenge is still how to get the information to the artificial intelligence algorithms in real time to gain a better understanding of what the cameras are seeing. We’re working on simplifying these processes so that there’s no delay. That’s the key to self-driving transportation. We call it “data in picture.”
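One way to picture "data in picture" is bundling capture context directly with each frame, so AI algorithms receive the information they need without a separate, delay-prone channel. The sketch below is our own minimal illustration of that idea—the field names and structure are hypothetical, not ImmerVision's actual format.

```python
# Hypothetical sketch: attach capture context to each frame so the
# AI pipeline gets "data in picture". Field names are illustrative.

import time

def tag_frame(pixels, lens_model, field_of_view_deg):
    """Bundle raw pixel data with the context an AI algorithm needs."""
    return {
        "pixels": pixels,                  # raw image data (stand-in)
        "meta": {
            "lens": lens_model,            # lets algorithms correct distortion
            "fov_deg": field_of_view_deg,  # angular coverage of the frame
            "timestamp": time.time(),      # for real-time synchronization
        },
    }

record = tag_frame(pixels=[0, 1, 2], lens_model="panomorph",
                   field_of_view_deg=360)
print(record["meta"]["lens"])  # -> panomorph
```

Keeping context and pixels in one record means downstream algorithms never wait on a second lookup—the kind of simplification that matters when every millisecond of delay counts for self-driving transportation.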
How does it feel to be a supplier for major global transport companies?
We’re a lot smaller than the companies we work with! But they all really need smaller companies like us to advance their technologies. We spark innovation for these large companies. We help them think about technologies they might not otherwise have thought of. We’re very much in the early stages of this type of work.
The ecosystems of these large corporate machines are complex. Being smaller means we can be very responsive. And you shouldn’t be afraid to invest! That’s the key to our credibility.
As a woman, what’s it like for you in the male-dominated world of transportation?
I don’t deny that there are a lot of problems. But I personally have never encountered any obstacles as a woman when marketing our technologies. I never felt that my gender was a disadvantage, so I never thought of it that way.
The world of finance, on the other hand, is a whole other story. You have to work very hard to be taken seriously as a woman. I’ve often felt discrimination, for example, when looking for funding.
What do you think will be the greatest breakthrough in wide-angle imaging in the coming years?
It will be to bridge the gap with artificial intelligence. Today’s viewing is based on narrow-angle coverage. We need to ensure that viewing becomes perception, meaning it provides context for images and algorithms.
The idea is to reproduce the human brain—the more intelligible the sensor data is for artificial intelligence algorithms, the more intelligent the mobility will be.