Autonomous cars are one of AI's most ambitious visions of the future. Self-driving vehicles promise to eliminate accidents, increase efficiency, reduce traffic jams, save time, and make our commutes more pleasant. However, the vision of a fully autonomous car has not yet materialized.
As of 2024, the highest level of autonomy commercially available is Level 3, with Mercedes-Benz’s Drive Pilot leading the way in limited markets. This system operates under specific conditions, including daylight and clear weather, and speeds up to 40 mph on pre-approved roads in California and Nevada.
Most vehicles marketed as self-driving, including those running Tesla's Full Self-Driving (FSD) system, remain at Level 2, requiring constant driver supervision.
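The Level 2, Level 3, and Level 5 terminology used here comes from the SAE J3016 taxonomy of driving automation. A minimal sketch of the full scale (the level names follow the SAE standard, but the one-line descriptions are paraphrased for brevity, not official SAE wording):

```python
# Paraphrased summary of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No Driving Automation - the human performs all driving tasks",
    1: "Driver Assistance - steering or speed support, but not both at once",
    2: "Partial Automation - steering and speed combined; the driver must supervise constantly",
    3: "Conditional Automation - the system drives in limited conditions; the driver must take over on request",
    4: "High Automation - no driver attention needed within a defined operational domain",
    5: "Full Automation - the vehicle drives itself everywhere, in all conditions",
}

def describe(level: int) -> str:
    """Return a one-line description for an SAE automation level."""
    return f"Level {level}: {SAE_LEVELS[level]}"

print(describe(2))  # where Tesla FSD sits today
print(describe(3))  # where Mercedes-Benz Drive Pilot sits today
```

The gap the article describes is the jump from Level 3, where a human must remain available as a fallback, to Levels 4 and 5, where the system itself is the fallback.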
One significant roadblock to the development of fully autonomous cars is the ethical decisions that cars need to make on the road. To examine these moral choices, MIT created the Moral Machine, an online platform that gathers human judgments on the moral dilemmas a self-driving vehicle might face. Once on the road, Level 5 autonomous cars will need to make millions of decisions: which obstacles to avoid, how to protect human lives, how to navigate safely, and, in the worst-case scenario, whom to injure if there are no other options.