Introduction and Motivation

The ultimate goal of artificial intelligence is to achieve general intelligence, where AI systems can understand and learn from their environment in a manner that resembles human cognition. The path to this goal requires agents to interact with other agents in their environment, some of which may also behave rationally. Those other agents may pursue goals that are not aligned with an agent's own, and they must be taken into account when deciding how to act.

This brings us to the world of games: abstract yet formal constructs that offer an ideal testing ground for exploring and advancing these critical concepts. In games, each player is guided by a specific, well-defined goal, and precise rules govern their interactions. For an agent to succeed in achieving its objectives, it must strategically anticipate, act, and react to the behaviors of other agents. Furthermore, games have deep historical roots; after being played and scrutinized for centuries, they have been widely accepted as grand challenges for rational decision-making. As a result, games have served as benchmarks for demonstrating complex reasoning in artificial intelligence since the field's inception.

The journey towards mastering these games has seen a shift in methodology. While early successes relied heavily on manually designed domain expertise (e.g., in chess and checkers), recent years have produced more universal, adaptable approaches. Self-play reinforcement learning, first demonstrated in backgammon, has changed the field. With less dependence on human-crafted expertise, these methods have demonstrated success across a range of formidable games, from classics like Go, chess, shogi, and poker to more contemporary challenges such as Hex, Stratego, and Diplomacy, and even complex digital environments like Atari, Doom, Dota 2, StarCraft, and Capture-the-Flag.

Multiple technological advancements have made these accomplishments possible, including deep reinforcement learning, dynamic network architectures, tree search-enhanced policy refinement, game-theoretic modeling, adaptive learning time scales, and meta-learning. While these achievements are commendable, we are only beginning to scratch the surface: much work remains to understand the algorithms and reasoning processes at work in these dynamic environments.

A workshop on RL in games can bring together researchers and practitioners from multiple disciplines, chiefly machine learning and game theory. We believe that further deepening these cross-disciplinary interactions can lead to innovative ideas and collaborative research opportunities.