Cerence Inc. announced in conjunction with NVIDIA GTC 2022 that Cerence Assistant, its conversational AI-powered in-car assistant, is now supported on the NVIDIA DRIVE platform and uses the open and scalable DRIVE IX cockpit software stack. Cerence, in collaboration with NVIDIA, aims to deliver next-generation, multi-modal automotive cockpit experiences that will be core to the connected and autonomous car of the future. Cerence Assistant brings Cerence's industry-leading AI, voice, and multi-modal innovations together with NVIDIA DRIVE IX capabilities such as gesture and gaze detection to deliver an out-of-the-box in-car assistant that is integrated with and optimized for the DRIVE platform.

Cerence Assistant leverages sensor data to best serve drivers throughout their daily journeys, for example, with low-fuel or low-charge notifications and subsequent navigation recommendations to the nearest gas or charging station. Cerence Assistant features robust speech recognition, natural language understanding, and text-to-speech capabilities – all with global language coverage. It also offers multi-modal capabilities that enable voice to be combined with the NVIDIA DRIVE IX gesture and gaze detection features to further enhance the driver experience, serving as a true co-pilot in the cockpit.
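A minimal sketch of the kind of proactive, sensor-driven behavior described above, assuming a simple rule that watches the remaining fuel or charge level and suggests the nearest station; the VehicleState, Station, and proactive_prompt names are illustrative and not part of any Cerence or NVIDIA API:

```python
# Hypothetical sketch (not Cerence or NVIDIA APIs): a proactive assistant rule
# that watches a fuel/charge signal and proposes navigation to the nearest
# station when the level drops below a threshold.

from dataclasses import dataclass

@dataclass
class VehicleState:
    fuel_level_pct: float      # remaining fuel or battery charge, 0-100
    is_electric: bool

@dataclass
class Station:
    name: str
    distance_km: float

def proactive_prompt(state: VehicleState, nearby: list[Station],
                     threshold_pct: float = 15.0) -> str | None:
    """Return a spoken prompt when energy is low, otherwise None."""
    if state.fuel_level_pct >= threshold_pct:
        return None
    if not nearby:
        return "Your fuel is running low, but I couldn't find a station nearby."
    closest = min(nearby, key=lambda s: s.distance_km)
    kind = "charging station" if state.is_electric else "gas station"
    return (f"You're down to {state.fuel_level_pct:.0f} percent. "
            f"The nearest {kind} is {closest.name}, "
            f"{closest.distance_km:.1f} km away. Want me to navigate there?")

# Example: an EV at 12% charge with two candidate stations.
print(proactive_prompt(
    VehicleState(fuel_level_pct=12, is_electric=True),
    [Station("FastCharge Main St", 2.4), Station("City Plaza Chargers", 5.1)],
))
```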

Offered in a hybrid embedded/cloud architecture, Cerence Assistant ensures drivers have access to important capabilities regardless of connectivity. NVIDIA DRIVE IX empowers automakers and autonomous vehicle developers to build critical vehicle interaction capabilities based on vision, voice, and graphics user experience. With Cerence Assistant now supported on the NVIDIA DRIVE platform, automakers and tier-one suppliers can easily deploy best-in-class in-car experiences that leverage the power of AI and connectivity to enhance driver safety, comfort, and productivity.
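A hybrid embedded/cloud design of the sort described above might route recognition requests as in the following sketch, which prefers a cloud engine when the vehicle is online and falls back to an on-device engine otherwise; the function names and signatures here are hypothetical, not the actual Cerence architecture:

```python
# Illustrative sketch (assumed design, not the actual Cerence architecture):
# a hybrid embedded/cloud dispatcher that prefers cloud recognition when the
# vehicle is online and falls back to an on-device engine when it is not.

from typing import Callable

def recognize_hybrid(audio: bytes,
                     is_connected: Callable[[], bool],
                     cloud_asr: Callable[[bytes], str],
                     embedded_asr: Callable[[bytes], str]) -> str:
    """Return a transcript, never failing solely because the car is offline."""
    if is_connected():
        try:
            return cloud_asr(audio)          # richer models, broader coverage
        except Exception:
            pass                              # network dropped mid-request
    return embedded_asr(audio)                # guaranteed on-device baseline

# Example with stub engines standing in for the real recognizers.
transcript = recognize_hybrid(
    audio=b"\x00\x01",
    is_connected=lambda: False,
    cloud_asr=lambda a: "navigate to the office",
    embedded_asr=lambda a: "navigate to the office (offline)",
)
print(transcript)
```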

Cerence plans to further integrate its offerings with NVIDIA technology to deliver enhanced multi-modal interaction capabilities that bring the power of vision and voice together. This will enable drivers to simply look at a building and ask the in-car assistant for information about it, creating a faster, more productive way to get the information they need while on the road.
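One way such a vision-and-voice interaction could work, sketched purely for illustration: intersect the driver's gaze direction, as reported by an interior-sensing stack such as DRIVE IX, with nearby map points of interest and answer with the best match. The POI type and functions below are hypothetical:

```python
# Hypothetical sketch of the multi-modal idea described above (not a real API):
# resolve "What is that building?" by matching the driver's gaze bearing
# against the bearings of nearby map points of interest.

from dataclasses import dataclass

@dataclass
class POI:
    name: str
    bearing_deg: float   # direction from the vehicle, 0 = straight ahead
    distance_m: float

def resolve_gaze_target(gaze_bearing_deg: float, pois: list[POI],
                        tolerance_deg: float = 10.0) -> POI | None:
    """Pick the closest POI whose bearing lies within tolerance of the gaze."""
    candidates = [p for p in pois
                  if abs((p.bearing_deg - gaze_bearing_deg + 180) % 360 - 180)
                  <= tolerance_deg]
    return min(candidates, key=lambda p: p.distance_m) if candidates else None

def answer_what_is_that(gaze_bearing_deg: float, pois: list[POI]) -> str:
    target = resolve_gaze_target(gaze_bearing_deg, pois)
    if target is None:
        return "I'm not sure which building you mean."
    return f"That's {target.name}, about {target.distance_m:.0f} meters away."

# Example: the driver looks roughly 30 degrees to the right.
pois = [POI("The Grand Hotel", 32.0, 120), POI("Central Library", 75.0, 300)]
print(answer_what_is_that(30.0, pois))
```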