arXiv:2408.13871v3 Announce Type: replace-cross

Abstract: We present three game-playing agents that incorporate Vision Transformers (ViT) into the AlphaZero framework: AlphaViT, AlphaViD (AlphaViT with a transformer decoder), and AlphaVDA (AlphaViD with learnable action embeddings). These agents can play multiple board games of varying sizes using a single neural network with shared weights, overcoming AlphaZero's restriction to a fixed board size. AlphaViT employs only a transformer encoder, whereas AlphaViD and AlphaVDA incorporate both a transformer encoder and a decoder. In AlphaViD the decoder processes the encoder's outputs, while in AlphaVDA learnable embeddings serve as the decoder inputs. The additional decoder gives AlphaViD and AlphaVDA the flexibility to adapt to various action spaces and board sizes. Experimental results show that the proposed agents, whether trained on individual games or on multiple games simultaneously, consistently outperform traditional algorithms such as Minimax and Monte Carlo Tree Search, and approach the performance of AlphaZero despite relying on a single deep neural network (DNN) with shared weights. AlphaViT in particular performs strongly across all evaluated games. Furthermore, fine-tuning the DNN with weights pre-trained on small-board games accelerates convergence and improves performance, most notably in Gomoku. Interestingly, simultaneous training on multiple games yields performance comparable to, or even surpassing, that of single-game training. These results indicate the potential of transformer-based architectures for developing more flexible and robust game-playing AI agents that excel in multiple games and dynamic environments.
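To make the difference between the three variants concrete, here is a minimal PyTorch sketch of the encoder-only versus decoder-augmented designs the abstract describes. This is not the authors' implementation: the class name, dimensions, heads, patch-embedding scheme, and the single-linear policy/value heads are all illustrative assumptions; only the high-level structure (encoder only for AlphaViT, encoder output as decoder input for AlphaViD, learnable action embeddings as decoder input for AlphaVDA) comes from the abstract.

```python
import torch
import torch.nn as nn

class TransformerGameAgent(nn.Module):
    """Sketch of the three variants; all hyperparameters are assumptions."""

    def __init__(self, d_model=256, n_heads=8, n_layers=4,
                 max_actions=225, variant="vit"):
        super().__init__()
        self.variant = variant
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        if variant in ("vid", "vda"):
            dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        if variant == "vda":
            # AlphaVDA: learnable action embeddings used as decoder queries.
            self.action_queries = nn.Parameter(torch.randn(max_actions, d_model))
        self.policy_head = nn.Linear(d_model, 1)  # per-token action logit
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, tokens):
        # tokens: (batch, seq, d_model) embeddings of board cells/patches.
        # The sequence length is not fixed, which is what lets one network
        # handle boards of different sizes.
        memory = self.encoder(tokens)
        if self.variant == "vit":
            feats = memory                        # AlphaViT: encoder only
        elif self.variant == "vid":
            feats = self.decoder(memory, memory)  # AlphaViD: encoder output as decoder input
        else:
            q = self.action_queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
            feats = self.decoder(q, memory)       # AlphaVDA: learnable queries
        policy_logits = self.policy_head(feats).squeeze(-1)
        value = torch.tanh(self.value_head(feats.mean(dim=1)))
        return policy_logits, value

# Usage sketch: the same weights accept a 15x15 Gomoku board or a smaller
# board, since only the token count changes (embeddings here are random stand-ins).
net = TransformerGameAgent(variant="vda")
policy, value = net(torch.randn(1, 15 * 15, 256))
```

In the decoder variants, the number of policy outputs is tied to the decoder's query sequence rather than to the board-token sequence, which is one way to read the abstract's claim that the decoder adds flexibility across action spaces.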