Ubisoft AI Teaches Itself to Drive

Discrete and Continuous Action Representation for Practical RL in Video Games is a write-up on Ubisoft's work on artificial intelligence. The text is pretty academic, but VentureBeat interprets it for us, explaining that this involves an AI that can teach itself to drive in a racing game. Here's a bit:
The Ubisoft researchers evaluated their algorithm on three environments designed to benchmark reinforcement learning systems, including a simple platformer-like game and two soccer-based games. They claim that its performance fell slightly short of industry-leading techniques, which they attribute to an architectural quirk. But they say that in a separate test, they successfully used it to train a video game vehicle with two continuous actions (acceleration and steering) and one binary discrete action (hand brake), the objective being to follow a given path as quickly as possible in environments the agent didn’t encounter during training.

“We showed that Hybrid SAC can be successfully applied to train a car on a high-speed driving task in a commercial video game,” wrote the researchers, who further noted that their approach can accommodate a wide range of potential ways for an agent to interact with a video game environment, such as when the agent has the same inputs as a player (whose controller might be equipped with an analog stick that provides continuous values and buttons that can be pressed to yield discrete actions through combinations). “[This demonstrates] the practical usefulness of such an algorithm for the video game industry.”
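If you're curious what a "hybrid" action space like that actually looks like in code, here's a minimal sketch using the Gymnasium API. Ubisoft's own setup isn't public, so the names and bounds here are purely illustrative, not their implementation:

```python
import numpy as np
from gymnasium import spaces  # assumption: Gymnasium used only to illustrate the idea

# Illustrative hybrid action space matching the description above:
# two continuous actions (acceleration, steering) plus one binary
# discrete action (hand brake). Bounds are made up for the example.
hybrid_action_space = spaces.Dict({
    "continuous": spaces.Box(
        low=np.array([0.0, -1.0], dtype=np.float32),   # acceleration, steering
        high=np.array([1.0, 1.0], dtype=np.float32),
    ),
    "discrete": spaces.Discrete(2),  # hand brake: 0 = off, 1 = on
})

# A policy over this space has to output both parts at once; here we
# just draw a random action to show the structure.
action = hybrid_action_space.sample()
print(action["continuous"], action["discrete"])
```

The point of the paper is that a single algorithm (their Hybrid SAC) can learn a policy over both parts of an action like this at the same time, which maps neatly onto a controller's analog stick plus buttons.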