Proposal of an automatic tool for evaluating the quality of decision-making on Checkers player agents

  • Matheus Prado Prandini Faria (UFU)
  • Rita Maria Silva Julia (UFU)
  • Lídia Bononi Paiva Tomaz (UFU, IFTM)

Abstract


Checkers player agents are an appropriate case study for state-of-the-art unsupervised Machine Learning methods. This work presents a tool that measures the performance of such methods based on the quality of the decision making of these agents. Using the moves performed in real games by the agents under evaluation, the proposed tool automatically and statistically computes the coincidence rate between the decisions of each evaluated agent and those that the renowned player agent Cake would make in the same situations. The tool was validated through tournaments between agents, comparing their respective coincidence rates with their tournament performance.
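As a rough illustration of the coincidence-rate idea described above, the sketch below (in Python, with hypothetical data structures and made-up move notation, not the tool's actual implementation) computes the fraction of positions in which an evaluated agent's chosen move matches the move Cake would make.

```python
# Minimal sketch of the coincidence-rate computation, assuming hypothetical
# records that pair, for each board position reached in a real game, the move
# chosen by the evaluated agent with the move Cake would choose.
from dataclasses import dataclass


@dataclass
class MoveRecord:
    position_id: str   # identifier of the board state (hypothetical encoding)
    agent_move: str    # move selected by the agent under evaluation
    cake_move: str     # move Cake would select in the same position


def coincidence_rate(records: list[MoveRecord]) -> float:
    """Fraction of positions where the agent's decision matches Cake's."""
    if not records:
        return 0.0
    matches = sum(1 for r in records if r.agent_move == r.cake_move)
    return matches / len(records)


# Example usage with invented moves in standard checkers square notation.
game_log = [
    MoveRecord("pos1", "11-15", "11-15"),
    MoveRecord("pos2", "23-19", "22-18"),
    MoveRecord("pos3", "8-11", "8-11"),
]
print(f"coincidence rate: {coincidence_rate(game_log):.2f}")  # 0.67
```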


Published
22/10/2018
How to Cite

FARIA, Matheus Prado Prandini; JULIA, Rita Maria Silva; TOMAZ, Lídia Bononi Paiva. Proposal of an automatic tool for evaluating the quality of decision-making on Checkers player agents. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 15., 2018, São Paulo. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2018. p. 389-400. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2018.4433.