Li Auto marks a significant milestone in autonomous vehicle technology with the release of its “End-to-End + VLM” system through the OTA 6.4 update. The advancement rolls out across the company’s AD Max platform models, including the MEGA, L9, L8, L7, and L6.
The integration of vision-language model (VLM) technology represents a fundamental shift in how the vehicles process and respond to their environment. Instead of passing sensor data through a chain of separate modules, the system handles it within a single unified model, supporting more intuitive decision-making.
Because the model produces its driving decisions in a single inference pass on the vehicle’s onboard GPU, visual perception and vehicle response stay closely synchronized, giving the car swift, precise reactions to changing road conditions.
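To make the “single inference pass” idea concrete, the sketch below shows a toy end-to-end driving policy in PyTorch that maps multi-camera frames directly to a short planned trajectory in one forward call. It is purely illustrative: the network layers, input sizes, and names such as `E2EDrivingPolicy` and `plan_horizon` are assumptions for this example, not Li Auto’s actual architecture.

```python
# Illustrative only: a toy "end-to-end" driving policy that turns camera frames
# into a planned trajectory in a single forward pass. All names and shapes here
# are hypothetical and are not Li Auto's implementation.
import torch
import torch.nn as nn


class E2EDrivingPolicy(nn.Module):
    def __init__(self, num_cameras: int = 6, plan_horizon: int = 10):
        super().__init__()
        # Shared image encoder applied to every camera view.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse all camera features and decode a trajectory: one (x, y) per step.
        self.planner = nn.Sequential(
            nn.Linear(64 * num_cameras, 256), nn.ReLU(),
            nn.Linear(256, plan_horizon * 2),
        )
        self.plan_horizon = plan_horizon

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, cameras, 3, H, W) -> trajectory: (batch, horizon, 2)
        b, n, c, h, w = frames.shape
        feats = self.encoder(frames.view(b * n, c, h, w)).view(b, -1)
        return self.planner(feats).view(b, self.plan_horizon, 2)


if __name__ == "__main__":
    model = E2EDrivingPolicy().eval()
    frames = torch.randn(1, 6, 3, 128, 256)  # one synthetic multi-camera sample
    with torch.no_grad():
        trajectory = model(frames)            # one inference pass, one plan
    print(trajectory.shape)                   # torch.Size([1, 10, 2])
```

The point of the sketch is the control flow, not the layers: sensor input goes in, a drivable plan comes out of one model call, which is what distinguishes an end-to-end approach from a multi-stage perception-prediction-planning pipeline.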
The system’s foundation rests on an AI model trained using 3 million video clips, enabling advanced features like full-scene Navigate on Autopilot (NOA). This translates to practical improvements in everyday driving scenarios, from smoother acceleration to more natural lane changes.
Vehicle behavior now mirrors human decision-making patterns, creating a more natural driving experience, and the system’s handling of complex traffic situations shows the practical benefit of the new architecture.
The update positions Li Auto at the forefront of intelligent driving technology, with capabilities that extend beyond conventional driver-assistance features and bring AI-driven decision-making into everyday driving.