There are 38 accepted papers in total: 4 oral presentations and 34 posters.

Time Type Duration Title & Speaker / Author(s)

9:00 - 10:30 Intro + Mini-Tutorial 50 min Workshop overview. Intro to OpenSpiel, Factored Observation Games, and Search.
Julien Pérolat, Marc Lanctot, and Martin Schmid.
Oral 20 min Search in Cooperative Partially Observable Games.
Adam Lerer, Hengyuan Hu, Jakob Foerster and Noam Brown.
Poster 20+ min Poster Session #1


11:00 - 12:30 Invited Talk 30 min Amy Greenwald
Oral 20 min Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset.
Ruohan Zhang, Calen Walshe, Zhuode Liu, Lin Guan, Karl Muller, Jake Whritner, Luxin Zhang, Mary Hayhoe and Dana Ballard.
Poster 40+ min Poster Session #2

Lunch Break

2:00 - 3:30 Invited Talk 30 min Gerry Tesauro
Oral 20 min Counterfactual-Free Regret Minimization for Sequential Decision Making and Extensive-Form Games.
Gabriele Farina, Robin Schmucker and Tuomas Sandholm.
Oral 20 min Approximated Temporal-Induced Neural Self-Play for Finitely Repeated Bayesian Games.
Zihan Zhou, Zheyuan Ryan Shi, Fei Fang and Yi Wu.
Poster 20+ min Poster Session #3


4:00 - 5:00 Invited Talk 30 min Michael Bowling
Invited Talk 30 min David Silver

Poster Session #1

Decentralized Multi-Agent Actor-Critic with Generative Inference.
Kevin Corder, Manuel M Vindiola and Keith Decker.

Ensemble Policy Distillation in Deep Reinforcement Learning.
Yuxiang Sun and Pooyan Fazli.

Foolproof Cooperative Learning.
Alexis Jacq, Julien Pérolat, Matthieu Geist and Olivier Pietquin.

RLCard: A Toolkit for Reinforcement Learning in Card Games.
Daochen Zha, Kwei-Herng Lai, Yuanpu Cao, Songyi Huang, Ruzhe Wei, Junyu Guo and Xia Hu.

Efficient Exploration with Failure Ratio for Deep Reinforcement Learning.
Minori Narita and Daiki Kimura.

Improved Generative Adversarial Imitation Learning Method for Stable Learning in Image Sequence Input based Game.
Wonsup Shin and Sung-Bae Cho.

Integrating Search and Scripts for Real-Time Strategy Games: An Empirical Survey.
Zuozhi Yang and Santiago Ontanon.

Single-Agent Optimization Through Policy Iteration Using Monte-Carlo Tree Search.
Arta Seify and Michael Buro.

Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks.
Gabriele Farina, Chun Kai Ling, Fei Fang and Tuomas Sandholm.

Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium.
Gabriele Farina, Chun Kai Ling, Fei Fang and Tuomas Sandholm.

Poster Session #2

Accelerating Self-Play Learning in Go.
David Wu.

Creating Pro-Level AI for a Real-Time Fighting Game using Deep Reinforcement Learning.
Inseok Oh, Seungeun Rho, Sangbin Moon, Seongho Son, Hyoil Lee and Jinyun Chung.

Variational Autoencoders for Opponent Modeling in Multi-Agent Systems.
Georgios Papoudakis and Stefano Albrecht.

Challenging Human Supremacy: Evaluating Monte Carlo Tree Search and Deep Learning for the Trick Taking Card Game Jass.
Joel Niklaus, Michele Alberti, Rolf Ingold, Markus Stolze and Thomas Koller.

Efficiently Guiding Imitation Learning Algorithms with Human Gaze.
Akanksha Saran, Ruohan Zhang, Elaine Schaertl Short and Scott Niekum.

Playing Games with Implicit Human Feedback.
Duo Xu, Mohit Agarwal, Raghupathy Sivakumar and Faramarz Fekri.

Scalable and Sample-Efficient Multi-Agent Imitation Learning.
Wonseok Jeon, Paul Barde, Joelle Pineau and Derek Nowrouzezahrai.

Analysis of Statistical Forward Planning Methods in Pommerman.
Diego Perez-Liebana, Raluca Gaina, Olve Drageset, Ercüment Ilhan, Martin Balla and Simon Lucas.

Evaluating RL Agents in Hanabi with Unseen Partners.
Rodrigo de Moura Canaan, Xianbo Gao, Youjin Chung, Julian Togelius, Andy Nealen and Stefan Menzel.

Self-Play Learning Without a Reward Metric.
Dan Schmidt, Nick Moran, Jonathan Rosenfeld, Jonathan Rosenthal and Jonathan Yedidia.

Online Learning for Bidding Agent in First Price Auction.
Gota Morishita, Kenshi Abe, Kazuhisa Ogawa and Yusuke Kaneko.

A Story of Two Streams: Reinforcement Learning Models from Human Behavior and Neuropsychiatry.
Baihan Lin, Djallel Bouneffouf, Guillermo Cecchi, Jenna Reinen and Irina Rish.

Regret Minimization via Novel Vectorized Sampling Policies, Exploration and Counterfactual Baseline.
Hui Li.

Composability of Regret Minimizers.
Gabriele Farina, Christian Kroer and Tuomas Sandholm.

Poster Session #3

Discrete and Continuous Action Representation for Practical RL in Video Games.
Olivier Delalleau, Maxim Peter, Eloi Alonso and Adrien Logut.

Build Order Selection in Starcraft Utilizing a Customized Bayesian Multi-Armed Bandit Algorithm.
Hao Pan.

Deep RL Agent for a Real-Time Action Strategy Game.
Michal Warchalski, Dimitrije Radojevic and Milos Milosevic.

TradingStone: Benchmark for Multi-Action and Adversarial Games.
Manuel Del Verme, Simone Totaro and Wang Ling.

Selective Kernels Transfer in Deep Reinforcement Learning.
Jesus Garcia Ramirez, Eduardo Morales and Hugo Jair Escalante.

Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration.
Anji Liu, Yitao Liang and Guy Van den Broeck.

Towards Graph Representation Learning in Emergent Communication.
Agnieszka Slowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik and Sean B. Holden.

Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions.
Gabriele Farina, Christian Kroer and Tuomas Sandholm.

Stable-Predictive Optimistic Counterfactual Regret Minimization.
Gabriele Farina, Christian Kroer, Noam Brown and Tuomas Sandholm.

Reinforcement Learning with Convolutional Reservoir Computing.
Hanten Chang and Katsuya Futagami.