abstract large_string | keywords large_string | huggingface large_string | github large_string | url large_string | booktitle large_string | year large_string | author large_string | title large_string | ENTRYTYPE large_string | ID large_string | type large_string | supervisor large_string | pdf large_string | doi large_string | pages large_string | number large_string | volume large_string | journal large_string | month large_string | note large_string | editor large_string | website large_string | series large_string | publisher large_string | numpages large_string | articleno large_string | issue_date large_string | address large_string | eprint large_string | eprinttype large_string | issn large_string | school large_string | isbn large_string | location large_string | tldr large_string | bot large_string | slides large_string | poster large_string | model large_string | blog large_string | day large_string | language large_string | dataset large_string | institution large_string | primaryclass large_string | archiveprefix large_string | eissn large_string | place large_string | howpublished large_string | video large_string | organization large_string | talk large_string | keywors large_string | article-number large_string | urldate large_string | data large_string | langid large_string | pagetotal large_string | titleaddon large_string | preprint large_string | repository large_string | software large_string | figshare large_string | laysummary large_string | annote large_string | appendix large_string | pypi large_string | code large_string | study large_string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
With the development of chess engines, cheating online has never been easier, resulting in a need for more robust and accurate detection systems. This paper presents a novel approach to chess cheater detection that combines conventional chess engines and neural networks to help identify which games are authentically played by humans and which show signs of extraneous intervention. By utilizing Stockfish to measure centipawn loss and its mathematical derivatives, we can measure deviations from typical computer-generated moves much like in conventional anti-cheat systems. Additionally, the neural network Maia, designed specifically to mimic human play, transforms centipawn loss data to highlight deviations from human style. This dual-measurement system addresses the limitations of traditional anti-cheat systems, which struggle to distinguish between strong human players and those using engines. The collected metadata is analyzed using a sequential neural network, which identifies patterns of fair-play violations. Our approach offers a robust solution for maintaining the integrity of online chess by accurately detecting and preventing cheating. | null | null | null | null | Information and Software Technologies | 2025 | Iavich, Maksim and Kevanishvili, Zura | A Neural Network Approach to Chess Cheat Detection | inproceedings | maksim:2025:neural-network-approach-chess-cheat-detection | null | null | null | null | 131--145 | null | null | null | null | null | Lopata, Audrius and Gudonien{\.{e}}, Daina and Butkien{\.{e}}, Rita and {\v{C}}eponis, Jonas | null | null | Springer Nature Switzerland | null | null | null | Cham | null | null | null | null | 978-3-031-84263-4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Context: This study aims to confirm, replicate and extend the findings of a previous article entitled ``Metamorphic Testing of Chess Engines'' that reported inconsistencies in the analyses provided by Stockfish, the most widely used chess engine, for transformed chess positions that are fundamentally identical. Initial findings, under conditions strictly identical to those of the original study, corroborate the reported inconsistencies. Objective: However, the original article considers a specific dataset (including randomly generated chess positions, end-games, or checkmate problems) and very low analysis depth (10 plies, corresponding to 5 moves; a ply refers to a single turn taken by one player in a game, and two plies, one from each player, together constitute a complete move). These decisions pose threats that limit not only the generalizability of the results but also their practical usefulness, both for chess players and maintainers of Stockfish. Thus, we replicate the original study. Methods: We consider this time (1) positions derived from actual chess games, (2) analyses at appropriate and larger depths, and (3) different versions of Stockfish. We conduct novel experiments on thousands of positions, employing significantly deeper searches. Results: The replication results show that the Stockfish chess engine demonstrates significantly greater consistency in its evaluations. The metamorphic relations are not as effective as in the original article, especially on realistic chess positions. We also demonstrate that, for any given position, there exists a depth threshold beyond which further increases in depth do not result in any evaluation differences for the studied metamorphic relations. We perform an in-depth analysis to identify and clarify the implementation reasons behind Stockfish's inconsistencies when dealing with transformed positions. 
Conclusion: A first concrete result is that metamorphic testing of chess engines is not yet an effective technique for finding faults in Stockfish. Another result is the set of lessons learned through this replication effort: metamorphic relations must be verified in the context of the domain's specificities, since without such contextual validation they may lead to misleading or irrelevant conclusions; and changes in parameters and input datasets can drastically alter the effectiveness of a testing method. | Reproducibility, Replicability, Metamorphic testing, Chess engines | null | null | https://www.sciencedirect.com/science/article/pii/S0950584925000187 | null | 2025 | Axel Martin and Djamel Eddine Khelladi and Th\'{e}o Matricon and Mathieu Acher | Re-evaluating metamorphic testing of chess engines: A replication study | article | martin:2025:re-evaluating-metamorphic-testing-chess-engines-replication-study | null | null | null | 10.1016/j.infsof.2025.107679 | 107679 | null | null | Information and Software Technology | null | null | null | null | null | null | null | null | null | null | null | null | 0950-5849 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
As artificial intelligence becomes increasingly intelligent---in some cases, achieving superhuman performance---there is growing potential for humans to learn from and collaborate with algorithms. However, the ways in which AI systems approach problems are often different from the ways people do, and thus may be uninterpretable and hard to learn from. A crucial step in bridging this gap between human and artificial intelligence is modeling the granular actions that constitute human behavior, rather than simply matching aggregate human performance. We pursue this goal in a model system with a long history in artificial intelligence: chess. The aggregate performance of a chess player unfolds as they make decisions over the course of a game. The hundreds of millions of games played online by players at every skill level form a rich source of data in which these decisions, and their exact context, are recorded in minute detail. Applying existing chess engines to this data, including an open-source implementation of AlphaZero, we find that they do not predict human moves well. We develop and introduce Maia, a customized version of AlphaZero trained on human chess games, that predicts human moves at a much higher accuracy than existing engines, and can achieve maximum accuracy when predicting decisions made by players at a specific skill level in a tuneable way. For a dual task of predicting whether a human will make a large mistake on the next move, we develop a deep neural network that significantly outperforms competitive baselines. Taken together, our results suggest that there is substantial promise in designing artificial intelligence systems with human collaboration in mind by first accurately modeling granular human decision-making. 
| Human-AI collaboration, Action Prediction, Chess | null | https://github.com/CSSLab/maia-chess | https://doi.org/10.1145/3394486.3403219 | {KDD} '20: The 26th {ACM} {SIGKDD} Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020 | 2020 | Reid McIlroy{-}Young and Siddhartha Sen and Jon M. Kleinberg and Ashton Anderson | Aligning Superhuman {AI} with Human Behavior: Chess as a Model System | inproceedings | mcilroy-young:2020:aligning-superhuman-ai-human-behavior | null | null | https://dl.acm.org/doi/pdf/10.1145/3394486.3403219 | 10.1145/3394486.3403219 | 1677--1687 | null | null | null | null | null | Rajesh Gupta and Yan Liu and Jiliang Tang and B. Aditya Prakash | https://www.maiachess.com/ | null | {ACM} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The advent of machine learning models that surpass human decision-making ability in complex domains has initiated a movement towards building AI systems that interact with humans. Many building blocks are essential for this activity, with a central one being the algorithmic characterization of human behavior. While much of the existing work focuses on aggregate human behavior, an important long-range goal is to develop behavioral models that specialize to individual people and can differentiate among them. To formalize this process, we study the problem of behavioral stylometry, in which the task is to identify a decision-maker from their decisions alone. We present a transformer-based approach to behavioral stylometry in the context of chess, where one attempts to identify the player who played a set of games. Our method operates in a few-shot classification framework, and can correctly identify a player from among thousands of candidate players with 98\% accuracy given only 100 labeled games. Even when trained on amateur play, our method generalises to out-of-distribution samples of Grandmaster players, despite the dramatic differences between amateur and world-class players. Finally, we consider more broadly what our resulting embeddings reveal about human style in chess, as well as the potential ethical implications of powerful methods for identifying individuals from behavioral data. 
| chess, deep-learning, embeddings, few-shot-learning, behavioral-stylometry | null | https://github.com/CSSLab/behavioral-stylometry | https://proceedings.neurips.cc/paper_files/paper/2021/file/ccf8111910291ba472b385e9c5f59099-Paper.pdf | Advances in Neural Information Processing Systems | 2021 | McIlroy-Young, Reid and Wang, Yu and Sen, Siddhartha and Kleinberg, Jon and Anderson, Ashton | Detecting Individual Decision-Making Style: Exploring Behavioral Stylometry in Chess | inproceedings | mcilroy-young:2021:chess-stylometry | null | null | null | null | 24482--24497 | null | 34 | null | null | keywords from github repo | M. Ranzato and A. Beygelzimer and Y. Dauphin and P.S. Liang and J. Wortman Vaughan | null | null | Curran Associates, Inc. | null | null | null | null | null | null | null | null | null | null | null | null | https://github.com/CSSLab/behavioral-stylometry/blob/main/documents/chess_embedding_slides.pdf | null | null | null | null | null | null | null | null | null | null | null | https://slideslive.com/38970556/detecting-individual-decisionmaking-style-exploring-behavioral-stylometry-in-chess?ref=speaker-92823 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AI systems that can capture human-like behavior are becoming increasingly useful in situations where humans may want to learn from these systems, collaborate with them, or engage with them as partners for an extended duration. In order to develop human-oriented AI systems, the problem of predicting human actions---as opposed to predicting optimal actions---has received considerable attention. Existing work has focused on capturing human behavior in an aggregate sense, which potentially limits the benefit any particular individual could gain from interaction with these systems. We extend this line of work by developing highly accurate predictive models of individual human behavior in chess. Chess is a rich domain for exploring human-AI interaction because it combines a unique set of properties: AI systems achieved superhuman performance many years ago, and yet humans still interact with them closely, both as opponents and as preparation tools, and there is an enormous corpus of recorded data on individual player games. Starting with Maia, an open-source version of AlphaZero trained on a population of human players, we demonstrate that we can significantly improve prediction accuracy of a particular player's moves by applying a series of fine-tuning methods. Furthermore, our personalized models can be used to perform stylometry---predicting who made a given set of moves---indicating that they capture human decision-making at an individual level. Our work demonstrates a way to bring AI systems into better alignment with the behavior of individual people, which could lead to large improvements in human-AI interaction. 
| Mimetic models; Human-AI interaction; Chess; Action prediction; Machine learning; Behavioral stylometry | null | https://github.com/CSSLab/maia-Individual | https://doi.org/10.1145/3534678.3539367 | {KDD} '22: The 28th {ACM} {SIGKDD} Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022 | 2022 | Reid McIlroy{-}Young and Russell Wang and Siddhartha Sen and Jon M. Kleinberg and Ashton Anderson | Learning Models of Individual Behavior in Chess | inproceedings | mcilroy-young:2022:learning-models-individual-behavior-chess | null | null | null | 10.1145/3534678.3539367 | 1253--1263 | null | null | null | null | null | Aidong Zhang and Huzefa Rangwala | null | null | {ACM} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://dl.acm.org/doi/suppl/10.1145/3534678.3539367/suppl_file/maia-individual.mp4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Skill acquisition is central to developing expertise, yet the behavioral mechanisms that separate more successful learners from less successful ones remain poorly understood. Using a large naturalistic dataset of about one million online chess games played by ~820 individuals over three years (2013–2015), we built an interpretable machine learning model to classify learners based only on behavioral features. Learners were labeled as ``fast learners'' or ``not fast learners'' based on normalized monthly Elo progression, adjusted for both starting rating and the increasing difficulty of improving at higher levels. We engineered time-sensitive features across four behavioral dimensions: practice structure, challenge level, strategic exploration (measured via move-sequence entropy), and tactical efficiency (the number of rounds needed to reach a 70\% win probability in games eventually won). A logistic regression model trained on the five strongest predictors - optimal challenge steady magnitude, optimal challenge late slope, entropy steady magnitude, optimal challenge mean, and tactical efficiency mean - achieved an F1 of 0.68 and an AUC of 0.78. Coefficients showed that average tactical efficiency was a strong predictor of fast learning, whereas the role of challenge-level features was less clear. To explore this, we fitted a linear regression with average tactical efficiency (as a proxy for expertise) as the dependent variable. This model explained 53\% of the variance (R\ensuremath{^2} = 0.53, RMSE = 0.05) and revealed optimal challenge as the strongest predictor. This suggests that well-calibrated challenge levels are key to differences in chess performance. 
| null | null | null | https://www.researchsquare.com/article/rs-7789635/v1 | null | 2025 | Meireles, Lu\'{\i}s and Mendes-Neves, Tiago and Moreira, Jo\~{a}o | Practice Structure Predicts Skill Growth in Online Chess: A Behavioral Modeling Approach | misc | meireles:2025:practice-structure-predicts-skill-growth-online-chess-behavioral-modeling-approach | null | null | null | 10.21203/rs.3.rs-7789635/v1 | null | null | null | null | October | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess is a canonical example of a task that requires rigorous reasoning and long-term planning. Modern decision Transformers - trained similarly to LLMs - are able to learn competent gameplay, but it is unclear to what extent they truly capture the rules of chess. To investigate this, we train a 270M parameter chess Transformer and test it on out-of-distribution scenarios, designed to reveal failures of systematic generalization. Our analysis shows that Transformers exhibit compositional generalization, as evidenced by strong rule extrapolation: they adhere to fundamental syntactic rules of the game by consistently choosing valid moves even in situations very different from the training data. Moreover, they also generate high-quality moves for OOD puzzles. In a more challenging test, we evaluate the models on variants including Chess960 (Fischer Random Chess) - a variant of chess where the starting positions of the pieces are randomized. We find that while the models exhibit basic strategy adaptation, they are inferior to symbolic AI algorithms that perform explicit search, though the gap is smaller when playing against users on Lichess. Moreover, the training dynamics reveal that the model initially learns to move only its own pieces, suggesting an emergent compositional understanding of the game. 
| null | null | https://github.com/meszarosanna/ood_chess | https://arxiv.org/abs/2510.20783 | null | 2025 | Anna M\'{e}sz\'{a}ros and Patrik Reizinger and Ferenc Husz\'{a}r | Out-of-distribution Tests Reveal Compositionality in Chess Transformers | misc | meszaros:2025:out-of-distribution-tests-reveal-compositionality-chess-transformers | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2510.20783 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This study addresses the challenge of quantifying chess puzzle difficulty - a complex task that combines elements of game theory and human cognition - and underscores its critical role in effective chess training. We present GlickFormer, a novel transformer-based architecture that predicts chess puzzle difficulty by approximating the Glicko-2 rating system. Unlike conventional chess engines that optimize for game outcomes, GlickFormer models human perception of tactical patterns and problem-solving complexity. The proposed model utilizes a modified ChessFormer backbone for spatial feature extraction and incorporates temporal information via factorized transformer techniques. This approach enables the capture of both spatial chess piece arrangements and move sequences, effectively modeling spatio-temporal relationships relevant to difficulty assessment. Experimental evaluation was conducted on a dataset of over 4 million chess puzzles. Results demonstrate GlickFormer's superior performance compared to the state-of-the-art ChessFormer baseline across multiple metrics. The algorithm's performance has also been recognized through its competitive results in the IEEE BigData 2024 Cup: Predicting Chess Puzzle Difficulty competition, where it placed 11th. The insights gained from this study have implications for personalized chess training and broader applications in educational technology and cognitive modeling. 
| Training;Measurement;Accuracy;Games;Predictive models;Transformers;Feature extraction;Data models;Problem-solving;Context modeling | null | null | https://doi.ieeecomputersociety.org/10.1109/BigData62323.2024.10825919 | 2024 IEEE International Conference on Big Data (BigData) | 2024 | Milosz, Szymon and Kapusta, Pawel | { Predicting Chess Puzzle Difficulty with Transformers } | inproceedings | milosz:2024:predicting-puzzle-difficulty-transformers | null | null | null | 10.1109/BigData62323.2024.10825919 | 8377--8384 | null | null | null | December | null | null | null | null | IEEE Computer Society | null | null | null | Los Alamitos, CA, USA | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This paper presents our third-place solution for the FedCSIS 2025 Challenge: Predicting Chess Puzzle Difficulty - Second Edition. Building on our prior GlickFormer architecture, we develop a transformer-based approach featuring a novel multitask pretraining strategy that combines masked-square reconstruction with solution policy prediction. Our spatial-only architecture directly embeds solution moves, eliminating temporal modules, while integrating human-centric priors through Maia-2 engine solve-rate predictions. Evaluated on the Lichess puzzle corpus, our approach reduces validation MSE by 30.4\% compared to from-scratch training and achieves competitive results (test MSE: 55.9k) despite distribution shifts in the competition environment. | null | null | null | http://dx.doi.org/10.15439/2025F7603 | Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS) | 2025 | Szymon Mi\l{}osz | Pretraining Transformers for Chess Puzzle Difficulty Prediction | inproceedings | milosz:2025:pretraining-transformers-chess-puzzle-difficulty-prediction | null | null | null | 10.15439/2025F7603 | 831--835 | null | 43 | null | null | null | Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak | null | Annals of Computer Science and Information Systems | IEEE | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
From the observational methodology approach, this study analyses definitive errors or losing blunders, i.e. errors that result in the loss of the game, in elite players at U8 level. An ad hoc observation instrument has been designed as a combination of field format and category systems, based on a thorough theoretical review of the internal logic of chess. The games were compiled in the ChessBase 17 program and analysed using Stockfish 16 NNUE via https://lichess.org/es. The moment in the game when the error occurs is extracted, recorded and coded using Lince software. The reliability of the records from the observation system developed was guaranteed by interobserver agreement, calculated using Cohen's Kappa coefficient. This paper's objective is achieved by means of the decision tree analysis technique, obtained using the CHAID procedure, taking the ``impact of the error'' as the predicted dimension. The results obtained have allowed us to conclude that the errors that lead to the loss of the game for elite U8 players are related to short-term calculation (tactical motifs, undefended pieces or checkmate) as opposed to long-term strategic errors. | Chess, Definitive Errors, Children, Elite, Stockfish NNUE | null | null | https://doi.org/10.2478/ijcss-2025-0012 | null | 2025 | Miranda, Jorge and Arana, Javier and Lapresa, Daniel and Anguera, M. Teresa | Observational Analysis of Mistakes in Chess Initiation, Using Decision Trees | article | miranda:2025:observational-analysis-mistakes-chess-initiation-decision-trees | null | null | null | 10.2478/ijcss-2025-0012 | 45--60 | 2 | 24 | International Journal of Computer Science in Sport | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Within the observational methodology, and based on a detailed analysis of the FIDE Laws of Chess, an observation system has been developed ad hoc for analyzing the illegal moves that children commit in chess. The reliability of the resulting data was confirmed by analysis of interobserver agreement, using Cohen's kappa statistic. The results of the generalizability study confirmed the generalizability of the results and the validity of the observation instrument. A Lag Sequential Analysis was performed to identify significant associations between categorical variables in each of the seven types of illegal movements characterized (castling, pinned, king to threatened square, incorrect movement of piece, promotion, occupation, do not remove check). The results obtained in the analysis of illegal movements reveal the difficulties that the child (under 12 years of age) finds in the understanding and practice of chess. | observational methodology, chess learning, illegal moves, under-12 years of age | null | null | https://dialnet.unirioja.es/servlet/tesis?codigo=397721 | null | 2026 | Miranda P\'{e}rez, Jorge | An\'{a}lisis observacional de los movimientos ilegales y err\'{o}neos en la iniciaci\'{o}n al ajedrez | thesis | miranda:2026:observational-analysis-illegal-erroneous-moves-chess-beginners | phdthesis | Lapresa Ajamil, Daniel and Arana Idiakez, Xabier Sabino | null | null | null | null | null | null | null | Spanish abstract: En el seno de la metodolog\'{\i}a observacional, y a partir de un pormenorizado an\'{a}lisis del reglamento -Leyes FIDE-, se ha elaborado un sistema de observaci\'{o}n ad hoc que permite analizar los movimientos ilegales en el ajedrez de iniciaci\'{o}n. La fiabilidad de los datos, en forma de concordancia inter-observadores, se ha garantizado mediante el coeficiente Kappa de Cohen. 
En el seno de la Teor\'{\i}a de la Generalizabilidad, se han realizado dos planes de medida que han permitido garantizar la generalizabilidad de los resultados obtenidos y la validez del instrumento de observaci\'{o}n. Se ha realizado un an\'{a}lisis de residuos ajustados en la b\'{u}squeda de relaci\'{o}n asociativa entre variables categ\'{o}ricas, en cada uno de los siete tipos de movimientos ilegales caracterizados (enroque; clavada; rey a casilla amenazada; movimiento incorrecto de pieza; promoci\'{o}n del pe\'{o}n; ocupaci\'{o}n de casillas; no remover al jaque). Los resultados obtenidos en el an\'{a}lisis de los movimientos ilegales, revelan las dificultades que el ni\~{n}o, de categor\'{\i}a sub12, encuentra en el entendimiento y pr\'{a}ctica del ajedrez, constituyendo una valiosa informaci\'{o}n que contribuya a optimizar el proceso de iniciaci\'{o}n de los ni\~{n}os en el ajedrez. | null | null | null | null | null | null | null | null | null | null | null | null | null | Logro\~{n}o, Spain | null | null | null | null | null | null | null | null | null | Universidad de La Rioja | null | null | null | null | null | null | null | null | null | null | null | null | spanish | 262 | Observational analysis of illegal and erroneous moves in chess beginners | null | null | null | null | null | null | null | null | null | null |
As online platforms become ubiquitous, there is growing concern that their use can potentially lead to negative outcomes in users' personal lives, such as disrupted sleep and impacted social relationships. A central question in the literature studying these problematic effects is whether they are associated with the amount of time users spend on online platforms. This is often addressed by either analyzing self-reported measures of time spent online, which are generally inaccurate, or using objective metrics derived from server logs or tracking software. Nonetheless, how the two types of time measures comparatively relate to problematic effects -- whether they complement or are redundant with each other in predicting problematicity -- remains unknown. Additionally, transparent research into this question is hindered by the literature's focus on closed platforms with inaccessible data, as well as selective analytical decisions that may lead to reproducibility issues. In this work, we investigate how both self-reported and data-derived metrics of time spent relate to potentially problematic effects arising from the use of an open, non-profit online chess platform. These effects include disruptions to sleep, relationships, school and work performance, and self-control. To this end, we distributed a gamified survey to players and linked their responses with publicly-available game logs. We find problematic effects to be associated with both self-reported and data-derived usage measures to similar degrees. However, analytical models incorporating both self-reported and actual time explain problematic effects significantly more effectively than models with either type of measure alone. Furthermore, these results persist across thousands of possible analytical decisions when using a robust and transparent statistical framework. 
This suggests that the two methods of measuring time spent contain distinct, complementary information about problematic usage outcomes and should be used in conjunction with each other. | online well-being, problematic platform use, specification curve analysis, survey methodology | null | null | https://doi.org/10.1145/3449160 | null | 2021 | Mok, Lillio and Anderson, Ashton | The Complementary Nature of Perceived and Actual Time Spent Online in Measuring Digital Well-being | article | mok:2021:time-online-digital-well-being | null | null | null | 10.1145/3449160 | null | CSCW1 | 5 | Proc. ACM Hum.-Comput. Interact. | April | null | null | null | null | Association for Computing Machinery | 27 | 86 | April 2021 | New York, NY, USA | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
We rely on all manner of digital systems to organize and facilitate our human functions. From social networks connecting us to each other, to content providers keeping us perpetually entertained, to search engines serving each of our informational needs, to computational models informing us how healthy we are, to artificially-intelligent coaches supplementing our natural intelligence, every corner of human existence is permeated by the digital tools we create. Accompanying the boons of these systems, however, are increasingly complex risks to our digital health. Our attention is pulled into cyberspace via algorithms that use billions of datapoints to learn what we like, sometimes to the detriment of our physical wellness. Ideological rifts online threaten our societal harmony as partisans become ever more polarized; their obsession with political content in turn feeds the underbelly of our social media ecosystem. All the while, the same data underpinning these online interactions also allow others to make finely-optimized decisions about us, often to the detriment of the disadvantaged. This thesis offers a more optimistic vision: that the same computational infrastructure powering our potentially perilous systems can be repurposed to help us understand their perils. We first outline a framework for rigorously assessing the welfare of our digital systems through the well-being of individual users, the cohesion of user communities, and whether the systems themselves deserve trust. We then utilize this framework to conduct four empirical studies measuring the extent to which digital welfare is preserved or endangered by data-driven systems. At the level of individual users, we directly measure how spending time on a large-scale chess platform, Lichess, can be perceived as detrimental to personal well-being. 
We find that perceived harms are explained not only by the time that people believe they spend online, but also the actual time they spend engaging with the platform. For groups of users, we quantify how partisan users on the Reddit platform are selective towards politically-congruent news outlets, thus consuming and disseminating polarized news. Despite the platform appearing polarized on aggregate, we discover that narrow, hyper-partisan communities are responsible for deeply-ingrained ideological segregation. We then extend this result by examining whether key individuals can influence the news consumption cycle on Reddit. Through an analysis of where news about political figures is shared on Reddit and the language it attracts, we illustrate that nationally-recognizable politicians are selectively discussed more by in-group online communities than they are by in-group news outlets. Out-group communities, on the other hand, generate the most toxic and hateful commentary. At the level of problematic downstream outcomes, we further probe whether people can tell when systems like algorithmic risk assessments harm data subjects in unfair ways. We find that observers are easily distracted by who makes risk assessments rather than how equitable the assessments are, suggesting that the task of welfare measurement itself needs to be made accessible for laypeople at large. This thesis posits that the online social systems jeopardizing our collective welfare can also be used to understand the very dangers they pose. By empirically measuring how well people are doing when they use or are impacted by these systems, we in turn empirically demonstrate the feasibility of this ideal. We conclude by speculating on the imminent ubiquity of artificial intelligence in our cyber-environment and its implications for the work in this thesis. 
| Computational Social Science, Data Science, Human-AI Interaction, Human-Computer Interaction, Web Science | null | null | null | null | 2024 | Mok, Lillio | Measuring the Digital Welfare of Online Social Systems | thesis | mok:2024:measuring-digital-welfare-online-systems | Doctoral Thesis | Anderson, Ashton | null | null | null | null | null | null | null | http://hdl.handle.net/1807/140863 | null | null | null | null | null | null | null | null | null | null | null | University of Toronto | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess is a game of strategic thinking and time management, where a player can lose a game on time despite making all the best moves. Finding the best move is a deliberate and energy-intensive process in a game where players are often under time pressure. Therefore, players who can balance this trade-off will have a significant advantage. The current study explores such instances where winning is contingent on how well players balance their accuracy under time pressure. We found that winning players, compared to their opponents, followed a more adaptive decision strategy--they made more theoretical best moves (i.e., accurate moves) in highly critical positions. In less critical positions, however, the two opponents' accuracy was very similar. We conclude that winning players have a better understanding than their opponents of when and how to allocate their limited resources efficiently, even when controlling for differences in skill levels. | Chess, Adaptive Decision Making, Resource Constraints, Skilled Decision Maker, Evaluation | null | null | https://doi.org/10.1080/13546783.2025.2550306 | null | 2025 | Supratik Mondal and Jakub Traczyk | Adaptive decision making in the wild: a case study of chess | article | mondal:2025:adaptive-decision-making-in-the-wild-case-study-chess | null | null | null | 10.1080/13546783.2025.2550306 | 1--21 | 0 | 0 | Thinking \& Reasoning | null | null | null | null | null | Routledge | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Ranking items from pairwise comparisons is common in domains ranging from sports to consumer preferences. Statistical inference-based methods, such as the Bradley--Terry model, have emerged as flexible and powerful tools to tackle ranking in empirical data. However, in situations with limited and/or noisy comparisons, it is often challenging to confidently distinguish item performance based on evidence available in the data. Most ranking methods nevertheless force a complete ordering, suggesting a meaningful distinction when there is none. Here, we introduce a principled nonparametric Bayesian framework for learning partial rankings---rankings with ties---that infers distinctions between items only when supported by the evidence. We develop a fast agglomerative algorithm for Maximum a Posteriori (MAP) inference under this framework and evaluate its performance on a range of synthetic and real-world datasets, finding that it often yields a more parsimonious and reliable summary of the data than traditional ranking approaches, particularly in sparse observational settings. | null | null | https://github.com/seb310/partial-rankings | https://doi.org/10.1038/s42005-025-02461-y | null | 2025 | Morel-Balbi, Sebastian and Kirkley, Alec | Estimation of partial rankings from sparse, noisy comparisons | article | morel-balbi:2025:estimation-partial-rankings-sparse-noisy-comparisons | null | null | https://www.nature.com/articles/s42005-025-02461-y.pdf | 10.1038/s42005-025-02461-y | 30 | 1 | 9 | Communications Physics | December | null | null | null | null | null | null | null | null | null | null | null | 2399-3650 | null | null | null | null | null | null | null | null | null | 20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://arxiv.org/abs/2501.02505 | null | null | null | null | null | null | null | null | null |
Methods of Explainable AI (XAI) try to illuminate the decision making process of complex Machine Learning models by generating explanations. However, for most real-world data there is no ``groundtruth'' explanation, which makes evaluating the correctness of XAI methods and model decisions difficult. Often visual assessment or anecdotal evidence is the only type of evaluation. In this work we propose to use the game of chess as a source of ``near ground-truth'' (NGT) explanations, which XAI methods can be compared against using various metrics, serving as a ``sanity check''. We demonstrate this process in an experiment with a deep convolutional neural network, to which we apply a range of commonly used XAI methods. As our main contribution, we publish our data set of 30 million chess positions along with their NGT explanations for free use in XAI research. | Explainable AI, Trustworthy AI, Convolutional Neural Networks, Chess | null | null | https://ceur-ws.org/Vol-3341/KDML-LWDA_2022_CRC_8977.pdf | Proceedings of the {LWDA} 2022 Workshops: FGWM, FGKD, and FGDB, Hildesheim (Germany), Oktober 5-7th, 2022 | 2022 | Sascha M{\"{u}}cke and Lukas Pfahler | Check Mate: {A} Sanity Check for Trustworthy {AI} | inproceedings | muecke:2022:check-mate-sanity-check-trustworthy-ai | null | null | null | null | 91--103 | null | 3341 | null | null | Section 4.1 of the paper mentions code being available alongside the data on kaggle | Pascal Reuss and Viktor Eisenstadt and Jakob Michael Sch{\"{o}}nborn and Jero Sch{\"{a}}fer | null | {CEUR} Workshop Proceedings | CEUR-WS.org | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://www.kaggle.com/datasets/smuecke/chess-xai-benchmark | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The world of competitive chess has long been a captivating arena for intellectual competition, where human intelligence, strategic thinking, and long-term planning converge. This study delves into the intricate web of factors that influence a chess player's competitive success through the lens of predictive modeling and machine learning techniques. | null | null | null | null | Advanced Technologies, Systems, and Applications IX | 2024 | Mujagi{\'{c}}, Amar and Mujagi{\'{c}}, Adnan and Mehanovi{\'{c}}, D{\v{z}}elila | Predictive Analysis of Chess Player Performance: An Analysis of Factors Influencing Competitive Success Using Machine Learning Techniques | inproceedings | mujagic:2024:predictive-analysis-chess-player-performance-maching-learning | null | null | null | null | 392--408 | null | null | null | null | null | Ademovi{\'{c}}, Naida and Ak{\v{s}}amija, Zlatan and Karabegovi{\'{c}}, Almir | null | null | Springer Nature Switzerland | null | null | null | Cham | null | null | null | null | 978-3-031-71694-2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Security is an integral requirement of any trustworthy software architecture, particularly critical for application programming interfaces (APIs). In this paper, we survey security documentation practices, specifically API security schemes related to authentication and authorization, by mining a large collection of OpenAPI descriptions retrieved from open-source GitHub repositories. Our study focuses on detecting existing security schemes and evaluating their prevalence and positioning within API descriptions. We distinguish whether security schemes are introduced locally (at the path or operation level) or globally (for the entire API). Our analysis highlights scenarios where security schemes are featured in APIs in different proportions over time, thus tracking whether the API documentation tends to include more (or less) security details as the API evolves. | API Analytics, OpenAPI, Security | null | null | null | 22nd IEEE International Conference on Software Architecture (ICSA) | 2025 | Diana Carolina Mu{\~n}oz Hurtado and Souhaila Serbout and Cesare Pautasso | Mining Security Documentation Practices in OpenAPI Descriptions | inproceedings | munoz-hurtado:2025:mining-security-documentation-practices-openapi-descriptions | null | null | null | null | null | null | null | null | March | null | null | null | null | null | null | null | null | Odense, Denmark | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Designing AI systems that capture human-like behavior has attracted growing attention in applications where humans may want to learn from, or need to collaborate with, these AI systems. Many existing works in designing human-like AI have taken a supervised learning approach that learns from data of human behavior, with the goal of creating models that can accurately predict human behavior. While this approach has shown success in capturing human behavior at different skill levels and even identifying individual behavioral styles, it also suffers from the drawback of mimicking human mistakes. Moreover, existing models only capture a snapshot of human behavior, leaving the question of how to improve them---e.g., from one human skill level to a stronger one---largely unanswered. Using chess as an experimental domain, we investigate the question of teaching an existing human-like model to be stronger using a data-efficient curriculum, while maintaining the model's human similarity. To achieve this goal, we extend the concept of curriculum learning to settings with multiple labeling strategies, allowing us to vary both the curriculum (dataset) and the teacher (labeling strategy). We find that the choice of teacher has a strong impact on both playing strength and human similarity; for example, a teacher that is too strong can be less effective at improving playing strength and degrade human similarity more rapidly. We also find that the choice of curriculum can impact these metrics, but to a smaller extent; for example, training on a curriculum of human mistakes provides only a marginal benefit over training on a random curriculum. Finally, we show that our strengthened models achieve human similarity on datasets corresponding to their strengthened level of play, suggesting that our curriculum training methodology is improving them in human-like steps. 
| Human-like AI, Curriculum Learning | null | null | https://openreview.net/forum?id=fJY2iCssvIs | null | 2023 | Saumik Narayanan and Kassa Korley and Chien-Ju Ho and Siddhartha Sen | Improving the Strength of Human-Like Models in Chess | misc | narayanan:2023:improving-strength-human-models-chess | null | null | https://openreview.net/pdf?id=fJY2iCssvIs | null | null | null | null | null | null | Rejected submission to ICLR 2023, also submitted as a poster at the Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022 (https://neurips.cc/virtual/2022/64426) | null | null | null | null | null | null | null | null | null | null | null | null | null | null | We efficiently train Human-like AI models to play chess at a stronger level, while retaining their human-like style, by extending the concept of curriculum learning to support multiple teachers | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Generative large language models (LLMs) have revolutionized natural language processing (NLP) by demonstrating exceptional performance in interpreting and generating human language. There has been some exploration of their application to non-linguistic tasks, which could lead to significant advancements in fields that rely heavily on structured data and specialized knowledge. However, there has been limited direct comparison of the effects of model adaptation techniques for non-linguistic compared to linguistic tasks with LLMs. To this end, the work in this paper investigates the effects of fine-tuning and few-shot learning on pre-trained LLMs for non-linguistic tasks using chess puzzles as a case study task. We compare the impact of fine-tuning and few-shot learning on models performing the same task represented in both chess notation (i.e., non-linguistic data) and natural language descriptions of the same chess notations (i.e., natural language data). Our experiments with Mixtral-8x7B-v0.1 and Meta-Llama-3-70B resulted in a 5\% lower average increase in performance after fine-tuning for non-linguistic tasks compared to linguistic tasks. Similarly, few-shot learning on pre-trained models exhibited a 3\% lower average increase in performance on non-linguistic tasks compared to linguistic tasks. Furthermore, few-shot learning on fine-tuned models resulted in a significant accuracy drop, particularly for Mixtral, with a 24.82\% decrease for non-linguistic tasks. These results suggest that fine-tuning and few-shot learning for generative LLMs have stronger effects on linguistic tasks and their data than on non-linguistic tasks. 
| large language models, natural language processing, model adaptation techniques | null | null | https://doi.org/10.1145/3672608.3707740 | Proceedings of the 40th ACM/SIGAPP Symposium on Applied Computing | 2025 | Nguyen, Khoa and Jahan, Sadia and Slavin, Rocky | A Comparison of the Effects of Model Adaptation Techniques on Large Language Models for Non-Linguistic and Linguistic Tasks | inproceedings | nguyen:2025:comparison-effects-model-adaptation-techniques-large-language-models-non-linguistic-tasks | null | null | null | 10.1145/3672608.3707740 | 936--944 | null | null | null | null | null | null | null | SAC '25 | Association for Computing Machinery | 9 | null | null | New York, NY, USA | null | null | null | null | 9798400706295 | Catania International Airport, Catania, Italy | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The goal of this research is to analyze the structure of the network of chess players that play on Lichess.org. We aim to understand the way that Lichess randomizes player pairings and how closely related players are in order to better understand the relationship between ranking and pairing systems. We will also observe the behavior of players with respect to the game types that they play to see if this influences player groupings. Another hope of this project is to do an exploratory analysis of standout players and those that choose their opponents, rather than let the algorithm choose for them. | null | null | null | https://github.com/lichess-org/database/blob/master/web/chess-social-networks-paper.pdf | null | 2021 | Nolan, Eva and Scognamillo, Valentin | Online Chess Social Networks | misc | nolan:2021:online-chess-social-networks | null | null | null | null | null | null | null | null | null | Student project | null | null | null | null | null | null | null | null | null | null | null | Hamilton College | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
To help evaluate and understand the latent capabilities of language models, this paper introduces an approach using optimized input embeddings, or 'soft prompts,' as a metric of conditional distance between a model and a target behavior. The technique aims to facilitate latent capability discovery as a part of automated red teaming/evaluation suites and to provide quantitative feedback about the accessibility of potentially concerning behaviors in a way that may scale to powerful future models, including those which may otherwise be capable of deceptive alignment. An evaluation framework using soft prompts is demonstrated in natural language, chess, and pathfinding, and the technique is extended with generalized conditional soft prompts to aid in constructing task evaluations. | null | null | https://github.com/RossNordby/SoftPromptsForEvaluation | https://arxiv.org/abs/2505.14943 | null | 2025 | Ross Nordby | Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities | misc | nordby:2025:soft-prompts-evaluation-measuring-conditional-distance-capabilities | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2505.14943 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The ranking of players and particularly of chess players has been a topic of debate throughout the last 80 years. Such exploration spawned what has become the benchmark for evaluating professional chess players since the 1970s: the Elo rating model. The Elo system, the first to have a sound statistical basis, was designed by Elo (1978) from the assumption that the performance of a player in a game is a normally distributed random variable Alliot (2017). However, this ranking model is not without its limitations, which have led to extreme rating deflation in the World Chess Federation (FIDE) Standard Elo rating system Sonas (2023). This attention on FIDE's rating mechanism has ignited focus on the Elo system's drawbacks, which we will address in this dissertation. | null | null | null | http://dx.doi.org/10.13140/RG.2.2.18931.13604 | null | 2024 | O'Rourke, Patrick | An alternative chess rating model based on latent variables | thesis | o-rourke:2024:alternative-chess-rating-model-latent-variables | mathesis | Riccardo Rastelli | https://www.researchgate.net/profile/Patrick-Orourke-7/publication/383313248_An_alternative_chess_rating_model_based_on_latent_variables/links/66c87d5975613475fe76987d/An-alternative-chess-rating-model-based-on-latent-variables.pdf | 10.13140/RG.2.2.18931.13604 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | University College Dublin | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Current chess rating systems update ratings incrementally and may not always accurately reflect a player's true strength at all times, especially for rapidly improving players or very rusty players. To overcome this, we explore a method to estimate player ratings directly from game moves and clock times. We compiled a benchmark dataset from Lichess with over one million games, encompassing various time controls and including move sequences and clock times. Our model architecture comprises a CNN to learn positional features, which are then integrated with clock-time data into a Bidirectional LSTM, predicting player ratings after each move. The model achieved an MAE of 182 rating points on the test data. Additionally, we applied our model to the 2024 IEEE Big Data Cup Chess Puzzle Difficulty Competition dataset, predicted puzzle ratings and achieved competitive results. This model is the first to use no hand-crafted features to estimate chess ratings and also the first to output a rating prediction after each move. Our method highlights the potential of using move-based rating estimation for enhancing rating systems and potentially other applications such as cheating detection. 
| Chess, Rating Estimation, Cheating Detection | null | null | https://link.springer.com/chapter/10.1007/978-3-031-86585-5_1 | Computers and Games | 2025 | Omori, Michael and Tadepalli, Prasad | Chess Rating Estimation from Moves and Clock Times Using a CNN-LSTM | inproceedings | omori:2024:chess-rating-estimation-moves-clock-times-cnn-lstm | null | null | https://arxiv.org/pdf/2409.11506 | null | 3--13 | null | null | null | null | null | Hartisch, Michael and Hsueh, Chu-Hsuan and Schaeffer, Jonathan | null | null | Springer Nature Switzerland | null | null | null | Cham | null | null | null | null | 978-3-031-86585-5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Introduction: Quantifying signals in large, sparse datasets is challenging, as noise and redundant features often obscure informative patterns. Chess middlegames, with their dynamic complexity and endless possibilities, provide a testbed for exploring such challenges. Building on 12 studies that identified three categories of chess complexity--difficulty, optionality, and rarity--we propose a novel method to quantify chess pattern complexity. Methods: A complexity scoring system was developed, integrating centipawn change, variability, engine search depth, and inverse document frequency of constellations to measure difficulty, optionality, and rarity. From a large dataset of chess games, we extracted 106,810 unique 11-ply constellations that appeared at least 20 times across 2,235,823 games. A chess engine was used to evaluate the decisions players made in response to these constellations, and a multinomial logistic regression model was employed to predict move quality. Results: The model demonstrated good predictive power with a weighted F1 score of 0.75 and a ROC AUC of 0.82. It correctly classified over 70\% of blunders, mistakes, and optimal moves, although it had less success in differentiating inaccuracies from optimal moves. Discussion: Our approach establishes a flexible framework for quantifying complexity in large, sparse datasets, demonstrating applicability beyond chess. By integrating different metrics, we achieve high accuracy in predicting move quality during chess middlegames. Beyond supporting chess education by identifying constellations tied to specific move types, this methodology could be valuable in contexts where understanding the interplay of dynamic patterns and their outcomes is essential for deriving actionable insights. 
| Chess Complexity, Move Prediction, Cognitive Modeling, Big Data Analysis, Sparse Data | null | https://github.com/sgjustino/Chess_Thesis | null | null | 2024 | Ong, Justin and Bilali{\'c}, Merim and Vaci, Nemanja | Sparse but Strategic: Quantitative Insights into Chess Middlegame Complexity | article | ong:2024:sparse-but-strategic-quantitative-insights-chess-middlegame-complexity | null | null | https://www.researchsquare.com/article/rs-5574128/v1.pdf?c=1748726628000 | 10.21203/rs.3.rs-5574128/v1 | null | null | null | Research Square | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Concept probing is one prominent methodology for interpreting and analyzing (deep) neural network models. It has, for example, formed the backbone of several recent works to understand better the high-level knowledge learned and employed by game-playing agents, particularly in chess. However, some recent theoretical and empirical studies have questioned the methodology's reliability and highlighted some limitations. Here, in the game-playing domain of chess, we investigate the effectiveness of several different probing architectures and look into the reliability of methods for interpreting their results. We use a world-class chess-playing agent as our test domain, which allows us, via self-play, to quantify the importance of the concepts identified in the agent's neural network by the concept probes. Our results demonstrate that the widespread practice of using linear probes and interpreting their accuracy to indicate concept importance is somewhat unreliable and needs to be revised. We demonstrate several ways of doing that in our domain, particularly by using more complex probes and amnesic-like probing. | null | null | null | https://doi.org/10.3233/FAIA240574 | {ECAI} 2024 - 27th European Conference on Artificial Intelligence, 19-24 October 2024, Santiago de Compostela, Spain - Including 13th Conference on Prestigious Applications of Intelligent Systems {(PAIS} 2024) | 2024 | A{\dh}alsteinn P{\'{a}}lsson and Yngvi Bj{\"{o}}rnsson | Empirical Evaluation of Concept Probing for Game-Playing Agents | inproceedings | palsson:2024:empirical-evaluation-concept-probing-game-playing-agents | null | null | null | 10.3233/FAIA240574 | 874--881 | null | 392 | null | null | null | Ulle Endriss and Francisco S. Melo and Kerstin Bach and Alberto Jos{\'{e}} Bugar{\'{\i}}n Diz and Jose Maria Alonso{-}Moral and Sen{\'{e}}n Barro and Fredrik Heintz | null | Frontiers in Artificial Intelligence and Applications | {IOS} Press | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
With the widespread use of chess engines, cheating in chess has become easier than ever, especially in online chess. Cheating obviously brings a negative impact to the sport. However, research on the topic of cheat detection in chess is still scarce. Thus, this paper will discuss data and algorithms that can be used to develop cheat detection tools to analyze games. For data, there are analyzed and unanalyzed data from online chess games, whereas the algorithms explored are a convolutional neural network (CNN) and a densely connected neural network. The results from the experiment show that the CNN algorithm performs better than the densely connected neural network at detecting whether a player is cheating. As for the data, using either unanalyzed or analyzed data does not change which neural network performs best, but using the analyzed data was found to boost the accuracy of both neural networks. | Seminars;Neural networks;Games;Convolutional neural networks;Intelligent systems;Information technology;Engines;Cheat Detection;Online Chess Games;Convolutional Neural Network;Dense Neural Network;Neural Network | null | null | https://ieeexplore.ieee.org/document/9702792 | 2021 4th International Seminar on Research of Information Technology and Intelligent Systems (ISRITI) | 2021 | Patria, Reyhan and Favian, Sean and Caturdewa, Anggoro and Suhartono, Derwin | Cheat Detection on Online Chess Games using Convolutional and Dense Neural Network | inproceedings | patria:2021:cheat-detection-online-chess | null | null | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9702792 | 10.1109/ISRITI54043.2021.9702792 | 389--395 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Social robots (SRs) should autonomously interact with humans while exhibiting the social behaviors associated with their role. By contributing to health care, education, and companionship, SRs can enhance quality of life. However, personalization and sustaining user engagement remain a challenge for SRs, due to their limited understanding of human mental states. Accordingly, we leverage a recently introduced mathematical dynamic model of human perception, cognition, and decision-making for SRs. Identifying the parameters of this model and deploying it in the behavioral steering system of SRs makes it possible to effectively personalize the responses of SRs to the evolving mental states of their users, enhancing long-term engagement and personalization. Our approach uniquely enables autonomous adaptability of SRs by modeling the dynamics of invisible mental states, significantly contributing to the transparency and awareness of SRs. We validated our model-based control system in experiments with 10 participants who interacted with a Nao robot over three chess puzzle sessions of 45--90 minutes each. The identified model achieved a mean squared error (MSE) of 0.067 (i.e., 1.675\% of the maximum possible MSE) in tracking the beliefs, goals, and emotions of participants. Compared to a model-free controller that did not track participants' mental states, our approach increased engagement by 16\% on average. Post-interaction feedback from participants (provided via dedicated questionnaires) further confirmed the perceived engagement and awareness of the model-driven robot. These results highlight the unique potential of model-based approaches and control theory in advancing human-SR interactions. 
| Mathematical Dynamic Model of Mental States, Adaptive Cognition-Aware Social Robots, Model-based Control | null | https://github.com/marialuis-mp/MMM-Controller-for-Social-Robot | https://arxiv.org/abs/2504.21548 | null | 2025 | Maria Mor\~{a}o Patr\'{\i}cio and Anahita Jamshidnejad | Leveraging Systems and Control Theory for Social Robotics: A Model-Based Behavioral Control Approach to Human-Robot Interaction | misc | patricio:2025:leveraging-systems-control-theory-social-robotics-model-based-behavioral-control-human-robot-interaction | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2504.21548 | null | null | null | null | null | null | null | null | null | null | null | null | null | https://data.4tu.nl/datasets/ccadc914-9502-46d6-9ba5-fef581f2933f | null | eess.SY | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
We use logistic regression to estimate the value of the pieces in standard chess and several chess variants, namely Chess 960, Atomic chess, Antichess, and Horde chess. We perform our regressions on several years of data from Lichess, the free and open-source internet chess server. We use the published player ratings to control for the confounding effect of differential player skill. We adjust for the attenuation bias in regressions due to the noise in observed ratings. We find that major piece values, relative to the value of a pawn, are fairly consistent with historical valuation systems. However, we find slightly higher values for bishops than for knights. We find that piece values are smaller, in absolute value, in Atomic and Antichess than in standard chess. We also present approximate piece values for equalizing odds when players of varying skill face off. | null | null | null | https://arxiv.org/abs/2509.04691 | null | 2025 | Steven Pav | Inferring Piece Value in Chess and Chess Variants | misc | pav:2025:inferring-piece-value-chess-variants | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2509.04691 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | stat.AP | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Development teams for mobile applications can receive thousands of user reviews daily. At the same time, these developers use different communication channels, such as the GitHub issue tracker. Although GitHub issues are accessible and manageable for developers, their content often differs starkly from what users write in app reviews. Issues may lack steps to reproduce bugs or insights that justify the priority of new feature requests. The sheer volume of user reviews for a popular app, combined with their heterogeneity and varying quality, makes manual integration into issue trackers unfeasible. We present an approach that automatically augments GitHub issues with informative user reviews to bridge the gap between user feedback and developer-managed issues. Using a state-of-the-art large language model (LLM), our approach automatically retrieves user reviews with high semantic textual similarity (STS) to the issue content and suggests reviews that augment developers' understanding of the issue. In this paper, we present large-scale quantitative and qualitative analyses to assess the feasibility of enriching development workflows with user-written information. Using over 37,000 issues and 750,000 reviews from 19 popular Free/Libre/Open Source Software (FLOSS) mobile applications, our approach augments 3,017 (8\%) issues with 7,287 (1\%) potentially informative reviews. In addition to providing insights into user-reported bugs and feature requests, the information from these matches points toward a novel and promising way to leverage user reviews for concerted app evolution. 
| Semantic Textual Similarity, User Feedback Mining, GitHub Issues, Information Retrieval, Software Repository Mining | null | null | null | 2025 International Conference on Software Maintenance and Evolution (ICSME) | 2025 | Pilone, Arthur and Raglianti, Marco and Lanza, Michele and Kon, Fabio and Meirelles, Paulo | Automatically Augmenting GitHub Issues with Informative User Reviews | inproceedings | pilone:2025:automatically-augmenting-github-issues-informative-user-reviews | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://figshare.com/articles/dataset/Replication_package_for_the_paper_Automatically_Augmenting_GitHub_Issues_with_Informative_User_Reviews_/28578140 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://gitlab.com/ArthurPilone/deepermatcher | null | null | null | null | null | null | null | null |
Classical models for supervised machine learning, such as decision trees, are efficient and interpretable predictors, but their quality is highly dependent on the particular choice of input features. Although neural networks can learn useful representations directly from raw data (e.g., images or text), this comes at the expense of interpretability and the need for specialized hardware to run them efficiently. In this paper, we explore a hypothesis class we call Learned Programmatic Representations (LeaPR) models, which stack arbitrary features represented as code (functions from data points to scalars) and decision tree predictors. We synthesize feature functions using Large Language Models (LLMs), which have rich prior knowledge in a wide range of domains and a remarkable ability to write code using existing domain-specific libraries. We propose two algorithms to learn LeaPR models from supervised data. First, we design an adaptation of FunSearch to learn features rather than directly generate predictors. Then, we develop a novel variant of the classical ID3 algorithm for decision tree learning, where new features are generated on demand when splitting leaf nodes. In experiments from chess position evaluation to image and text classification, our methods learn high-quality, neural network-free predictors often competitive with neural networks. Our work suggests a flexible paradigm for learning interpretable representations end-to-end where features and predictions can be readily inspected and understood. 
| null | null | https://github.com/gpoesia/leapr/ | https://arxiv.org/abs/2510.14825 | null | 2025 | Gabriel Poesia and Georgia Gabriela Sampaio | Programmatic Representation Learning with Language Models | misc | poesia:2025:programmatic-representation-learning-language-models | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2510.14825 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Interpretability of Deep Neural Networks (DNNs) is a growing field driven by the study of vision and language models. Yet some use cases, like image captioning, or domains like Deep Reinforcement Learning (DRL), require complex modelling, with multiple inputs and outputs, or use composable and separated networks. As a consequence, they rarely fit natively into the API of popular interpretability frameworks. We thus present TDHook, an open-source, lightweight, generic interpretability framework based on `tensordict` and applicable to any `torch` model. It focuses on handling complex composed models which can be trained for Computer Vision, Natural Language Processing, Reinforcement Learning or any other domain. This library features ready-to-use methods for attribution and probing, and a flexible get-set API for interventions, and aims to bridge the gap between these method classes to make modern interpretability pipelines more accessible. TDHook is designed with minimal dependencies, requiring roughly half as much disk space as `transformer\_lens`, and, in our controlled benchmark, achieves up to a \texttimes{}2 speed-up over `captum` when running integrated gradients for multi-target pipelines on both CPU and GPU. In addition, to demonstrate the value of our work, we showcase concrete use cases of our library with composed interpretability pipelines in Computer Vision (CV) and Natural Language Processing (NLP), as well as with complex models in DRL. 
| null | null | https://github.com/Xmaster6y/tdhook | https://arxiv.org/abs/2509.25475 | null | 2025 | Yoann Poupart | TDHook: A Lightweight Framework for Interpretability | misc | poupart:2025:tdhook-lightweight-framework-interpretability | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2509.25475 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.AI | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweighs irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches. For the code release and demo videos, see: https://nikaashpuri.github.io/sarfa-saliency/. 
| Deep Reinforcement Learning, Saliency maps, Chess, Go, Atari, Interpretable AI, Explainable AI | null | https://github.com/nikaashpuri/sarfa-saliency | https://openreview.net/forum?id=SJgzLkBKPB | International Conference on Learning Representations | 2020 | Nikaash Puri and Sukriti Verma and Piyush Gupta and Dhruv Kayastha and Shripad Deshmukh and Balaji Krishnamurthy and Sameer Singh | Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution | inproceedings | puri:2020:explain-your-move-understanding-agent-actions-using-specific-relevant-feature-attribution | null | null | https://openreview.net/pdf?id=SJgzLkBKPB | null | null | null | null | null | null | null | null | https://nikaashpuri.github.io/sarfa-saliency/ | null | null | null | null | null | null | null | null | null | null | null | null | We propose a model-agnostic approach to explain the behaviour of black-box deep RL agents, trained to play Atari and board games, by highlighting relevant portions of the input state. | null | null | null | null | null | null | null | https://nikaashpuri.github.io/sarfa-saliency/jekyll/update/2020/04/25/chess-saliency-dataset.html | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Online chess serves as a naturalistic context in which cognitive processes can be studied within rule-based, constrained environments. Player skill is defined via the Elo rating system, which accommodates game factors such as wins/losses and changes in ratings over time. Furthermore, quitting a chess match has real consequences for a player by affecting their rankings/ratings. We used chess as a testbed for understanding quitting because of its adversarial nature and the availability of large amounts of data, which contribute to its ecological relevance. | null | null | null | https://www.cgs.iitk.ac.in/user/hariharan22/site/pdfs/Paper126_ACCS11_chess_quit_final.pdf | Proceedings of the 11th Annual Conference of Cognitive Science (ACCS 2024) | 2024 | Purohit, Hariharan and Srivastava, Nisheeth | `Sounds like a skill issue': what makes you quit at chess? | inproceedings | purohit:2024:sounds-like-skill-issue-what-makes-you-quit-chess | null | null | null | null | null | null | null | null | null |
The abstract is the last paragraph of the introduction. The author describes the paper on their website: I am currently investigating the cognitive mechanisms underlying quitting behavior, using computational models and behavioral experiments. My work aims to bridge theoretical frameworks with real-world quitting scenarios. I take inspiration from phenomenology to approach quitting as a phenomenal experience first and aim to provide a mechanistic framework for it.
Quitting in Online Chess Players: My first project looked at the quitting behavior of online chess players. Using the open-access data from lichess.org, I analyzed chess players' propensity to quit in 'Classical' chess matches. Using a combination of game factors and custom quitting factors, I quantified the quitting behavior of players with a statistical hazard of quitting. I also showed evidence for tilting in chess players, which occurs as a consequence of quitting a chess match.
| null | https://www.cgs.iitk.ac.in/user/hariharan22/site/ | null | null | null | null | null | null | null | null | null | null | null | Mumbai, India | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Stopping decisions are frequently modeled as decisions to switch to alternative activities once the current activity stops being adequately rewarding, such as in optimal foraging theory, as well as more recent metacognitive models. However, the sense of stopping and making decisions in such frameworks is highly platonic, with both decisions and stopping actions occurring instantaneously. In contrast, the phenomenology of quitting actions that one is undertaking appears to be temporally extended and metacognitively challenging. We study the metacognitive covariates of quitting decisions made by chess players using a large database of chess games sourced from an online chess portal. Our analysis reveals that players tend to persevere when they are playing against stronger opponents and after having played poor moves. We also find that a history of quitting games makes players more likely to quit in future games, but that having recently quit in a game offers some protective effect against quitting. Finally, we find that quitting a game makes it more likely that a player will play a game again soon. We place these results in the context of modeling quitting as a metacognitive choice affected by multiple competing goals. | null | null | null | https://escholarship.org/uc/item/02n5p1j5 | null | 2025 | Purohit, Hariharan and Srivastava, Nisheeth | A metacognitive appraisal of quitting | article | purohit:2025:metacognitive-appraisal-quitting | null | null | null | null | null | null | 47 | Proceedings of the Annual Meeting of the Cognitive Science Society | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
With the development of technology, more and more online educational products for chess are emerging, making it difficult for users to choose among them. It is important to develop methodologies that help chess players of different levels learn in various environments. A list-method and rubric evaluation was conducted, and advice is put forward based on this approach. The results show that chess online educational products are rich in content and fully featured, and can be divided into four categories: overall ecology, video tutorials, tactical training, and live-broadcast products. However, these products still need to improve in product positioning and user experience to promote the development of online chess education. | Chess, Online education, Products, Comparative study | null | null | https://doi.org/10.1007/978-3-030-51968-1_9 | Blended Learning. Education in a Smart Learning Environment: 13th International Conference, ICBL 2020, Bangkok, Thailand, August 24–27, 2020, Proceedings | 2020 | Dong, Qian and Miao, Rong | A Comparative Study of Chess Online Educational Products | inproceedings | qian:2020:comparative-study-online-chess-educational-products | null | null | null | 10.1007/978-3-030-51968-1_9 | 101–113 | null | null | null | null | null | null | null | null | Springer-Verlag | 13 | null | null | Berlin, Heidelberg | null | null | null | null | 978-3-030-51967-4 | Bangkok, Thailand | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
In this paper we show how word embeddings, a technique used most commonly for natural language processing, can be repurposed to analyse gameplay data. Using a large study of chess games and applying the popular Word2Vec algorithm, we show that the resulting vector representation can reveal both common knowledge and subtle details about the game, including relative piece values and the natural spatial flow of chess play. Our results suggest that word embeddings are a cheap and simple technique that can provide a broad overview of a game's dynamics, helping designers and critics form new hypotheses about a game's design, structure and flow. | null | null | null | https://ojs.aaai.org/index.php/AIIDE/article/view/18907 | Proceedings of the Seventeenth {AAAI} Conference on Artificial Intelligence and Interactive Digital Entertainment, {AIIDE} 2021, virtual, October 11-15, 2021 | 2021 | Youn{\`{e}}s Rabii and Michael Cook | Revealing Game Dynamics via Word Embeddings of Gameplay Data | inproceedings | rabii:2021:revealing-game-dynamics-word-embeddings | null | null | https://dl.acm.org/doi/pdf/10.5555/3505520.3505544 | null | 187--194 | null | null | null | null | null | David Thue and Stephen G. Ware | https://knivesandpaintbrushes.org/younes | null | {AAAI} Press | null | null | null | null | null | null | null | null | 978-1-57735-871-8 | null | This paper shows that word embedding techniques such as Word2Vec can be applied to gameplay data, helping show possible relationships between elements of a game's design. We apply Word2Vec to chess and show how it rediscovers interesting strategic knowledge about the game. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://www.youtube.com/watch?v=Qj96jh4c6As | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
A game's theme is an important part of its design – it conveys narrative information, rhetorical messages, helps the player intuit strategies, aids in tutorialisation and more. Thematic elements of games are notoriously difficult for AI systems to understand and manipulate, however, and often rely on large amounts of hand-written interpretations and knowledge. In this paper we present a technique which connects game embeddings, a recent method for modelling game dynamics from log data, and word embeddings, which models semantic information about language. We explain two different approaches for using game embeddings in this way, and show evidence that game embeddings enhance the linguistic translations of game concepts from one theme to another, opening up exciting new possibilities for reasoning about the thematic elements of games in the future. | automated game design, computational creativity, procedural content generation | null | null | https://doi.org/10.1145/3649921.3659851 | Proceedings of the 19th International Conference on the Foundations of Digital Games | 2024 | Rabii, Youn\`{e}s and Cook, Michael | "Hunt Takes Hare": Theming Games Through Game-Word Vector Translation | inproceedings | rabii:2024:hunt-takes-hare-theming-games-through-game-word-vector-translation | null | null | null | 10.1145/3649921.3659851 | null | null | null | null | null | null | null | null | FDG '24 | Association for Computing Machinery | 7 | 74 | null | New York, NY, USA | null | null | null | null | 9798400709555 | Worcester, MA, USA | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Machine learning has shown great success in various aspects of chess, particularly in game-playing engines such as AlphaZero. However, predicting the difficulty of chess puzzles remains a relatively unexplored area. In the IEEE BigData 2024 Cup: Predicting Chess Puzzle Difficulty competition, participants are asked to build a machine learning approach to predict the difficulty of chess puzzles. We present an approach that leverages deep learning and pairwise learning-to-rank techniques to estimate the difficulty of chess puzzles. Our method applies pairwise learning to rank approaches to simulate games between puzzles and uses the outcomes to estimate their Glicko-2 ratings. This approach achieved 4th place in the competition, demonstrating its effectiveness. | Deep learning;Computer vision;Transfer learning;Games;Predictive models;Big Data;Transformers;Data models;Engines;chess;deep learning;learning to rank;glicko-2 | null | null | https://ieeexplore.ieee.org/document/10825356 | 2024 IEEE International Conference on Big Data (BigData) | 2024 | Rafaralahy, Andry | Pairwise Learning to Rank for Chess Puzzle Difficulty Prediction | inproceedings | rafaralahy:2024-pairwise-ltr-learning-to-rank-chess-puzzle-difficulty-prediction | null | null | null | 10.1109/BigData62323.2024.10825356 | 8385--8389 | null | null | null | December | null | null | null | null | null | null | null | null | null | null | null | 2573-2978 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Households increasingly play and engage with video games. We examined how households play video games through interviews with 20 participants from varied and familial households. Our study focused on interactions, examining how gaming influences daily household dynamics. Previous studies have focused mainly on the impact on relationships. Looking at households led us to observe fluid role dynamics around gaming. Our findings map the stages of how households play games: gaining plausible momentum; actions, conversations, and roles taken during game sessions; and reflections after gaming. Our findings highlight a novel role of the Gamer Host, who leads the game session and attends to everyone's enjoyment. Our observations exemplify the supportive and positive social outcomes close-knit gaming can afford and the implications for achieving harmonious gaming in households. Our findings tie to perspectives on communal and social aspects of technology use, providing new perspectives on user experiences in an immediate social environment. | digital games, household, media-centric, qualitative methods, social interactions | null | null | https://doi.org/10.1145/3748619 | null | 2025 | Rautalahti, Heidi and Ma, Rongjun and Bourdoucen, Amel and Wang, Yajing and Lindqvist, Janne | Fluid Roles for Close-Knit Gaming: Households Playing Digital Games | article | rautalahti:2025:fluid-roles-close-knit-gaming-households-playing-digital-games | null | null | null | 10.1145/3748619 | null | 6 | 9 | Proc. ACM Hum.-Comput. Interact. | October | null | null | null | null | Association for Computing Machinery | 35 | GAMES024 | October 2025 | New York, NY, USA | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess is a complex game characterized by diverse strategies and time constraints, making quick decision-making essential for success. While Elo ratings are widely recognized as indicators of player skill, the predictability of match outcomes based solely on these ratings remains a challenge. The study aims to develop a model for accurately predicting the outcome of chess games using logistic regression, focusing on Elo rating differences and the number of moves in each game. The dataset includes over 20,000 games from lichess.org, and only games with decisive outcomes (excluding draws) were used. The research categorizes Elo ratings into various classes and evaluates model performance across these ranges. The model achieves a predictive accuracy of 68.18\%, demonstrating the significance of Elo ratings and move counts in determining game results. Performance metrics, including precision, recall, and F1-score, further validate the model's effectiveness. The study concludes that while Elo ratings and move count are strong predictors of chess outcomes, further refinement is needed to improve performance at high-ranking skill levels. The model performs well for experts but loses accuracy in master-level games, reflecting higher skill-level complexities. The insights gained from this research contribute to a deeper understanding of predictive modeling in chess, suggesting potential avenues for further investigation into additional influencing factors. | Training;Measurement;Analytical models;Logistic regression;Accuracy;Focusing;Psychology;Games;Predictive models;Time factors;Logistic regression;machine learning;predictive modeling | null | null | null | 2025 International Conference on Electronics, Information, and Communication (ICEIC) | 2025 | Reyes, Ma. Julianna Re-an DG. and Dicreto, Eirnan and Santos, Emmanuel Gabriel D. and Limbag, Daniella Franxene P. 
and Sampedro, Gabriel Avelino | EloMetrics: Advanced Outcome Prediction for Chess Matches with Elo Ratings and Logistic Regression | inproceedings | reyes:2025:elometrics-advanced-outcome-prediction-chess-elo-ratings-logistic-regression | null | null | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879733 | 10.1109/ICEIC64972.2025.10879733 | 1--4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The ability to predict blunders in chess plays a crucial role in improving players' performance and enabling strategic decision-making. We introduce a novel, scalable, and personalized blunder prediction model for chess. Unlike prior work requiring a separate model per player, our unified architecture learns a collaborative user embedding space, allowing it to generalize weaknesses across players and new users. Our hybrid model, inspired by Deep Factorization Machines (DeepFM), fuses a frozen pre-trained CNN (for board embeddings) with dynamically learned user embeddings to model player-board interactions while still utilizing metadata about the state of the board and the user. We demonstrate that this latent 'blunder profile' is a significantly more powerful predictor of error than a player's explicit Elo rating. The system achieves state-of-the-art performance (0.801 AUC) on both immediate and non-immediate blunders, offering an efficient and data-sparse-friendly solution for personalized chess analysis. Ultimately, this approach demonstrates the practical viability of deep personalization in complex strategy games, facilitating highly efficient, user-centric learning environments. | null | null | null | https://doi.org/10.1007/s10489-026-07131-2 | null | 2026 | Rokach, Yarden and Shapira, Bracha | Blunder prediction in chess | article | rokach:2026:blunder-prediction-chess | null | null | null | 10.1007/s10489-026-07131-2 | 92 | 4 | 56 | Applied Intelligence | February | null | null | null | null | null | null | null | null | null | null | null | 1573-7497 | null | null | null | null | null | null | null | null | null | 16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Coordination, debate, and reflection have shown promising improvements in multi-agent Large Language Model (LLM) task performance. Inspired by the role of questioning in human group reasoning, this research introduces a novel component to multi-agent LLM systems: a Question-Asking Agent (QAA) that guides collaboration through targeted, uncertainty-reducing questions. The QAA selects questions based on Expected Information Gain (EIG), a metric used to quantify the value of information a question may provide. To evaluate the impact of the QAA, a multi-agent LLM system was implemented and tested on the chess game state tracking task, a benchmark problem that challenges LLMs to maintain consistent reasoning across a sequential input. The system included generic agents collaborating through dialogue and a QAA generating questions using template-based formulations with calculable EIG. Experiments were conducted across 15 configurations varying the number of agents (1-5) and QAA strategy (none, random, EIG-driven). Results show that the QAA with EIG consistently improved system accuracy compared to both the baseline and the random-question QAA. Additionally, increasing the number of agents showed improvements across all QAA strategies. This study demonstrates that EIG-guided questioning can significantly improve reasoning performance in multi-agent LLM systems. These findings open new directions for enhancing coordination, interpretability, and performance in multi-agent LLM settings across a range of structured reasoning tasks beyond chess. | Large Language Models, Chess, Expected Information Gain, Multi-Agent | null | null | https://digital.wpi.edu/concern/etds/9p290f765 | null | 2025 | Roohani, Keon | Coordination in Multi-Agent LLM Systems: The Role of a Question-Asking Agent in Guiding Collaborative Consensus | thesis | roohani:2025:coordination-multi-agent-llm-systems-role-question-asking-agent-guiding-collaborative-consensus | mathesis | Murai, Fabricio | https://digital.wpi.edu/pdfviewer/fq978037t | null | null | null | null | null | April | null | null | null | null | null | null | null | null | null | null | null | null | null | null | Worcester, MA, USA | null | null | null | null | null | null | null | null | null | Worcester Polytechnic Institute | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This research investigates temporal differences in chess gameplay between ADHD and neurotypical players, analyzing over 9,800 games across various skill levels and time controls. The study reveals distinct patterns in time management and decision-making, with significant variations observed across different game phases and complexity levels. | null | null | null | https://flatfish4u.github.io/research/2024/02/22/chess-research.html | null | 2024 | Benjamin Rosales | The Temporal Differences in Chess Between ADHD and Neurotypical Individuals | misc | rosales:2024:temporal-differences-chess-adhd-neurotypical-individuals | null | null | https://flatfish4u.github.io/assets/papers/chess_study.pdf | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Human chess players prefer training with human opponents over chess agents as the latter are distinctively different in level and style than humans. Chess agents designed for human-agent play are capable of adjusting their level, however their style is not aligned with that of human players. In this paper, we propose a novel approach for designing such agents by integrating the theory of chess players' decision-making with a state-of-the-art Monte Carlo Tree Search (MCTS) algorithm. We demonstrate the benefits of our approach using two sets of analyses. Quantitatively, we establish that the agents attain their desired Elo ratings. Qualitatively, through a Turing-inspired test with a human chess expert, we show that our agents are indistinguishable from human players. | chess, game playing agents, human-agent play | null | null | https://doi.org/10.1145/3349537.3351904 | Proceedings of the 7th International Conference on Human-Agent Interaction, {HAI} 2019, Kyoto, Japan, October 06-10, 2019 | 2019 | Hanan Rosemarin and Ariel Rosenfeld | Playing Chess at a Human Desired Level and Style | inproceedings | rosemarin:2019:playing-chess-human-level-style | null | null | null | 10.1145/3349537.3351904 | 76--80 | null | null | null | null | null | Natsuki Oka and Tomoko Koda and Mohammad Obaid and Hideyuki Nakanishi and Omar Mubin and Kazuaki Tanaka | null | null | {ACM} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This paper uses chess, a landmark planning problem in AI, to assess transformers' performance on a planning task where memorization is futile -- even at a large scale. To this end, we release ChessBench, a large-scale benchmark dataset of 10 million chess games with legal move and value annotations (15 billion data points) provided by Stockfish 16, the state-of-the-art chess engine. We train transformers with up to 270 million parameters on ChessBench via supervised learning and perform extensive ablations to assess the impact of dataset size, model size, architecture type, and different prediction targets (state-values, action-values, and behavioral cloning). Our largest models learn to predict action-values for novel boards quite accurately, implying highly non-trivial generalization. Despite performing no explicit search, our resulting chess policy solves challenging chess puzzles and achieves a surprisingly strong Lichess blitz Elo of 2895 against humans (grandmaster level). We also compare to Leela Chess Zero and AlphaZero (trained without supervision via self-play) with and without search. We show that, although a remarkably good approximation of Stockfish's search-based algorithm can be distilled into large-scale transformers via supervised learning, perfect distillation is still beyond reach, thus making ChessBench well-suited for future research. | chess, supervised learning, transformer, scaling, benchmark | null | https://github.com/google-deepmind/searchless_chess | https://dl.acm.org/doi/10.5555/3737916.3740018 | Proceedings of the 38th International Conference on Neural Information Processing Systems | 2024 | Ruoss, Anian and Del\'{e}tang, Gr\'{e}goire and Medapati, Sourabh and Grau-Moya, Jordi and Wenliang, Li Kevin and Catt, Elliot and Reid, John and Lewis, Cannada A. and Veness, Joel and Genewein, Tim | Amortized planning with large-scale transformers: a case study on chess | inproceedings | ruoss:2024:amortized-planning-transformers-case-study-chess | null | null | https://proceedings.neurips.cc/paper_files/paper/2024/file/78f0db30c39c850de728c769f42fc903-Paper-Conference.pdf | null | null | null | null | null | null | Previously known as "Grandmaster-Level Chess Without Search" (https://arxiv.org/pdf/2402.04494v1) | null | https://neurips.cc/virtual/2024/poster/94747 | NeurIPS '24 | Curran Associates Inc. | 26 | 2102 | null | Red Hook, NY, USA | null | null | null | null | 9798331314385 | Vancouver, BC, Canada | null | null | null | null | null | null | null | https://storage.googleapis.com/searchless_chess | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://arxiv.org/abs/2402.04494 | null | null | null | null | null | null | null | null | null | |
Although artificial intelligence systems can now outperform humans in a variety of domains, they still lag behind in the ability to arrive at good solutions to problems using limited resources. Recent proposals have suggested that the key to this cognitive efficiency is intelligent selection of the situations in which computational resources are spent. We tested this hypothesis in the domain of complex planning by analyzing how humans managed time available for thinking in over 12 million online chess matches. We found that players spent more time thinking in board positions where planning was more beneficial. This effect was greater in stronger players, and additionally strengthened by considering only the information available to the player at the time of choice. Finally, we found that the qualitative features of this relationship were consistent with a policy that considers the empirically-measured cost of spending time in chess. This provides evidence that human efficiency is supported by intelligent selection of when to apply computation. | null | null | null | https://doi.org/10.31234/osf.io/8j9zx | null | 2022 | Russek, Evan and Acosta-Kane, Daniel and van Opheusden, Bas and Mattar, Marcelo and Griffiths, Tom | Time spent thinking in online chess reflects the value of computation | article | russel:2022:thinking-online-chess-computation | null | null | null | 10.31234/osf.io/8j9zx | null | null | null | PsyArXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The rapidly evolving field of Human-Computer Interaction (HCI) faces a fundamental constraint: the limited bandwidth of information exchange between users and computing systems. One promising approach to increasing this bandwidth is implicit interaction: a paradigm in which applications modify their state based on information gleaned from users, rather than direct input. Within the context of reading such information from human neural signals, this concept is formally recognized as implicit Brain-Computer Interfaces (implicit BCI). My work focuses on implicit BCIs which measure the prefrontal cortex (PFC); early prototypes have successfully leveraged the PFC to approximate mental workload, but much is left to be understood about the full potential of this region. Through three research projects spanning two brain measurement modalities, this dissertation makes targeted contributions to this area of research. With functional Near-Infrared Spectroscopy (fNIRS), I explore two facets of PFC activation demonstrated in functional Magnetic Resonance Imaging (fMRI)-based neuroscience research which are underexplored in applied contexts: episodic memory and brain-network based classification; in the first project, I study the measurable effects of episodic and working memory within the context of using Large Language Models (LLMs), and in the second project I develop a real-time implicit BCI designed to differentiate between different brain networks. The third project benchmarks low-cost EEG in three studies which distinguish brain states based on different factors: quality of moves made during chess playing, workload levels within standard cognitive psychology tasks, and cognitive states during the tasks. For all studies I use Linear Mixed Models (LMM) to observe macro patterns in the data, and machine learning to explore potential for implicit BCI. Results indicate that, in addition to the well-understood concept of measuring singular aspects of consciousness across a gradient (e.g. workload), promising potential exists for leveraging the PFC towards classification across tasks which engage different cognitive processes, both with fNIRS and low-cost EEG. Further, careful consideration of ``noise'' in implicit BCI introduces a new idea: Human-Sensor-Computer Interaction (HSCI). Taken together, this dissertation provides relevant context to inform the next generation of Human-Sensor-Computer systems, including PFC-based interfaces stretching past workload, and beyond. | Computer science, Human-Computer Interaction, Brain-Computer Interfaces | null | null | http://hdl.handle.net/10427/B2774940P | null | 2025 | Russell, Matthew | Beyond Workload: Paving the Road for the Next Generation of Implicit Prefrontal Cortex Based Brain-Computer Interfaces | thesis | russel:2025:beyond-workload-paving-road-next-generation-implicit-prefrontal-cortex-brain-computer-interface | PhD thesis | Jacob, Robert | null | null | null | null | null | null | null | Second two keywords are from the defense page: https://www.cs.tufts.edu/t/colloquia/current/?event=1651 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | Tufts University, Department of Computer Science | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Consumer-grade electroencephalography (EEG) devices show promise for Brain-Computer Interface (BCI) applications, but their efficacy in detecting subtle cognitive states remains understudied. We developed a comprehensive study paradigm which incorporates a combination of established cognitive tasks (N-Back, Stroop, and Mental Rotation) and adds a novel ecological Chess puzzles task. We tested our paradigm with the MUSE 2, a low-cost consumer-grade EEG device. Using linear mixed-effects modeling we demonstrate successful distinctions of within-task workload levels and cross-task cognitive states based on the spectral power data derived from the MUSE 2 device. With machine learning we further show reliable predictive power to differentiate between workload levels in the N-Back task, and also achieve effective cross-task classification. These findings demonstrate that consumer-grade EEG devices like the MUSE 2 can be used to effectively differentiate between various levels of cognitive workload as well as among more nuanced task-based cognitive states, and that these tools can be leveraged for real-time adaptive BCI applications in practical settings. | null | null | https://github.com/mattrussell2/chess-mw-MUSE | https://arxiv.org/abs/2505.07592 | null | 2025 | Matthew Russell and Samuel Youkeles and William Xia and Kenny Zheng and Aman Shah and Robert J. K. Jacob | Neural Signatures Within and Between Chess Puzzle Solving and Standard Cognitive Tasks for Brain-Computer Interfaces: A Low-Cost Electroencephalography Study | misc | russell:2025:neural-signatures-chess-puzzle-solving-standard-cognitive-tasks-brain-computer-interfaces-low-cost-electroencephalography-study | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2505.07592 | null | null | null | null | null | null | null | null | null | null | null | null | null | https://github.com/mattrussell2/chess-mw-MUSE-DATA | null | cs.HC | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess is a complex logical game involving ongoing strategic forward planning and evaluation. Solving chess puzzles is one of the most common ways of training and developing chess skills. It involves continuing the game from a certain initial chessboard state against a real or AI opponent until defeat or a significant advantage is achieved. To ensure solving chess puzzles is efficient and engaging for the real player, it is important to understand the difficulty of the puzzle and match it with the skill of the solver. Accurate and fast assessment of puzzle difficulty is therefore a critical problem that online chess platforms need to solve at scale to optimally match many real players with adequate puzzles. To solve this challenge, we propose a chess knowledge-agnostic strategy to predict puzzle difficulty based solely on the moves of the players against their opponents trying to solve the puzzles. Specifically designed deep convolutional neural networks (CNN) were deployed as supervised learning predictors, fed with player moves represented as multichannel chessboard images. Extensive testing of our model with almost 4 million training examples against Glicko-2 evaluated puzzle difficulty ratings--considered as ground truth--resulted in good predictive performance. This was acknowledged by our runner-up result as the 7th place in the IEEE Big Data 2024 Cup and highlighted the capability of fast puzzle difficulty prediction based only on players' moves as evidence, with no prior chess knowledge nor utilization of computationally expensive chess engines. | Training;Costs;Games;Computer architecture;Predictive models;Big Data;Data models;Complexity theory;Convolutional neural networks;Engines;chess puzzle difficulty;deep learning;convolutional neural networks;ensemble learning;Glicko-2 rating | null | null | https://ieeexplore.ieee.org/document/10825595 | 2024 IEEE International Conference on Big Data (BigData) | 2024 | Ruta, Dymitr and Liu, Ming and Cen, Ling | Moves Based Prediction of Chess Puzzle Difficulty with Convolutional Neural Networks | inproceedings | ruta:2024:moves-based-prediction-chess-puzzle-difficulty-convolutional-neural-networks | null | null | null | 10.1109/BigData62323.2024.10825595 | 8390--8395 | null | null | null | December | null | null | null | null | null | null | null | null | null | null | null | 2573-2978 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
In this paper we propose a novel supervised learning approach for training Artificial Neural Networks (ANNs) to evaluate chess positions. The method that we present aims to train different ANN architectures to understand chess positions similarly to how highly rated human players do. We investigate the capabilities that ANNs have when it comes to pattern recognition, an ability that distinguishes chess grandmasters from more amateur players. We collect around 3,000,000 different chess positions played by highly skilled chess players and label them with the evaluation function of Stockfish, one of the strongest existing chess engines. We create 4 different datasets from scratch that are used for different classification and regression experiments. The results show how relatively simple Multilayer Perceptrons (MLPs) outperform Convolutional Neural Networks (CNNs) in all the experiments that we have performed. We also investigate two different board representations, the first one representing if a piece is present on the board or not, and the second one in which we assign a numerical value to the piece according to its strength. Our results show how the latter input representation influences the performance of the ANNs negatively in almost all experiments. | Deep Learning, COMPUTER GAMES, Machine Learning | null | null | http://www.icpram.org/ | 7th International Conference on Pattern Recognition Applications and Methods | 2018 | Matthia Sabatelli and Francesco Bidoia and Valeriu Codreanu and Marco Wiering | Learning to Evaluate Chess Positions with Deep Neural Networks and Limited Lookahead | inproceedings | sabatelli:2018:learning-evaluate-chess-positions-deep-neural-networks-limited-lookahead | null | null | null | 10.5220/0006535502760283 | 276--283 | null | null | null | January | 7th International Conference on Pattern Recognition Applications and Methods ; Conference date: 16-01-2018 Through 18-01-2018 | null | null | null | SciTePress | null | null | null | null | null | null | null | null | 978-989758276-9 | null | null | null | null | null | null | null | 20 | English | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Agentic AI systems execute a sequence of actions, such as reasoning steps or tool calls, in response to a user prompt. To evaluate the success of their trajectories, researchers have developed verifiers, such as LLM judges and process-reward models, to score the quality of each action in an agent's trajectory. Although these heuristic scores can be informative, there are no guarantees of correctness when used to decide whether an agent will yield a successful output. Here, we introduce e-valuator, a method to convert any black-box verifier score into a decision rule with provable control of false alarm rates. We frame the problem of distinguishing successful trajectories (that is, a sequence of actions that will lead to a correct response to the user's prompt) and unsuccessful trajectories as a sequential hypothesis testing problem. E-valuator builds on tools from e-processes to develop a sequential hypothesis test that remains statistically valid at every step of an agent's trajectory, enabling online monitoring of agents over arbitrarily long sequences of actions. Empirically, we demonstrate that e-valuator provides greater statistical power and better false alarm rate control than other strategies across six datasets and three agents. We additionally show that e-valuator can be used to quickly terminate problematic trajectories and save tokens. Together, e-valuator provides a lightweight, model-agnostic framework that converts verifier heuristics into decision rules with statistical guarantees, enabling the deployment of more reliable agentic systems. | null | null | https://github.com/shuvom-s/e-valuator | https://arxiv.org/abs/2512.03109 | null | 2025 | Shuvom Sadhuka and Drew Prinster and Clara Fannjiang and Gabriele Scalia and Aviv Regev and Hanchen Wang | E-valuator: Reliable Agent Verifiers with Sequential Hypothesis Testing | misc | sadhuka:2025:evaluator-reliable-agent-verifiers-sequential-hypothesis-testing | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2512.03109 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://pypi.org/project/e-valuator/ | null | null | null | null | null | null | null |
Starting with early successes in computer vision tasks, deep learning based techniques have since overtaken state of the art approaches in a multitude of domains. However, it has been demonstrated time and again that these techniques fail to capture semantic context and logical constraints, instead often relying on spurious correlations to arrive at the answer. Since application of deep learning techniques to critical scenarios is dependent on adherence to domain specific constraints, several attempts have been made to address this issue. One limitation holding back a thorough exploration of this area is a lack of suitable datasets which feature a rich set of rules. In order to address this, we present the VALUE (Vision And Logical Understanding Evaluation) Dataset, consisting of 200,000+ annotated images and an associated rule set, based on the popular board game - chess. The curated rule set considerably constrains the set of allowable predictions, and is designed to probe key semantic abilities like localization and enumeration. Alongside standard metrics, additional metrics to measure performance with regard to logical consistency are presented. We analyze several popular and state of the art vision models on this task, and show that, although their performance on standard metrics is laudable, they produce a plethora of incoherent results, indicating that this dataset presents a significant challenge for future works. | logical constraints, domain knowledge, deep learning, computer vision | null | https://github.com/espressoVi/VALUE-Dataset | https://openreview.net/forum?id=nS9oxKyy9u | null | 2024 | Soumadeep Saha and Saptarshi Saha and Utpal Garain | {VALUED} - Vision and Logical Understanding Evaluation Dataset | article | saha:2024:valued-vision-logical-understanding-dataset | null | null | null | null | null | null | null | Journal of Data-centric Machine Learning Research | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://zenodo.org/records/10607059 | null | null | null | null | null | null | https://www.youtube.com/watch?v=6V9VlTEfHT4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
Deep learning, a family of data-driven artificial intelligence techniques, has shown immense promise in a plethora of applications, and it has even outpaced experts in several domains. However, unlike symbolic approaches to learning, these methods fall short when it comes to abiding by and learning from pre-existing established principles. This is a significant deficit for deployment in critical applications such as robotics, medicine, industrial automation, etc. For a decision system to be considered for adoption in such fields, it must demonstrate the ability to adhere to specified constraints, an ability missing in deep learning-based approaches. Exploring this problem serves as the core tenet of this dissertation. This dissertation starts with an exploration of the abilities of conventional deep learning-based systems vis-\`{a}-vis domain coherence. A large-scale rule-annotated dataset is introduced to mitigate the pronounced lack of suitable constraint adherence evaluation benchmarks, and with its aid, the rule adherence abilities of vision systems are analyzed. Additionally, this study probes language models to elicit their performance characteristics with regard to domain consistency. Examination of these language models with interventions illustrates their ineptitude at obeying domain principles, and a mitigation strategy is proposed. This is followed by an exploration of techniques for imbuing deep learning systems with domain constraint information. Also, a comprehensive study of standard evaluation metrics and their blind spots pertaining to domain-aware performance estimation is undertaken. Finally, a novel technique to enforce constraint compliance in models without training is introduced, which pairs a search strategy with large language models to achieve cutting-edge performance. A key highlight of this dissertation is the emphasis on addressing pertinent real-world problems with scalable and practicable solutions. We hope the results presented here pave the way for wider adoption of deep learning-based systems in pivotal situations with enhanced confidence in their trustworthiness. | null | null | null | https://digitalcommons.isical.ac.in/doctoral-theses/629/ | null | 2025 | Saha, Soumadeep | Domain Obedient Deep Learning | thesis | saha:2025:domain-obedient-deep-learning | PhD thesis | Garain, Utpal | null | null | null | null | null | null | null | Check if http://hdl.handle.net/10263/7608 works and replace url | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | Computer Vision and Pattern Recognition Unit, Indian Statistical Institute | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
We develop a satisficing model of choice in which the available alternatives differ in their inherent complexity. We assume--and experimentally validate--that complexity leads to errors in the perception of alternatives' values. The model yields sharp predictions about the effect of complexity on choice probabilities, some of which qualitatively contrast with those of maximization-based choice models. We confirm the predictions of the satisficing model--and thus reject maximization--in a novel data set with information on hundreds of millions of real-world chess moves by highly experienced players. These findings point to the importance of complexity and satisficing for decision making outside of the laboratory. | null | null | null | http://www.nber.org/papers/w30002 | null | 2022 | Salant, Yuval and Spenkuch, Jorg L | Complexity and Satisficing: Theory with Evidence from Chess | techreport | salant:2022:complexity-satisficing-theory-evidence-chess | Working Paper | null | null | 10.3386/w30002 | null | 30002 | null | null | April | null | null | null | Working Paper Series | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | National Bureau of Economic Research | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
We explore the role of memory for choice behavior in unfamiliar environments. Using a unique data set, we document that decision makers exhibit a "memory premium." They tend to choose in-memory alternatives over out-of-memory ones, even when the latter are objectively better. Consistent with well-established regularities regarding the inner workings of human memory, the memory premium is associative, subject to interference and repetition effects, and decays over time. Even as decision makers gain familiarity with the environment, the memory premium remains economically large. Our results imply that the ease with which past experiences come to mind plays an important role in shaping choice behavior. | null | null | null | http://www.nber.org/papers/w33649 | null | 2025 | Salant, Yuval and Spenkuch, Jorg L and Almog, David | The Memory Premium | techreport | salant:2025:memory-premium | Working Paper | null | null | 10.3386/w33649 | null | 33649 | null | null | April | null | null | null | Working Paper Series | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | National Bureau of Economic Research | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Online chess platforms generate vast amounts of game data, presenting opportunities to analyze match outcomes using machine learning approaches. This study develops and compares four machine learning models to classify chess game results (White win, Black win, or Draw) by integrating player rating information with game dynamic metadata. We analyzed 11,510 complete games from the Lichess platform after preprocessing a dataset of 20,058 initial records. Seven key features were engineered to capture both pre-game skill parameters (player ratings, rating difference) and game complexity metrics (game duration, turn count). Four machine learning algorithms were implemented and optimized through grid search cross-validation: Multinomial Logistic Regression, Random Forest, K-Nearest Neighbors, and Histogram Gradient Boosting. The Gradient Boosting classifier achieved the highest performance with 83.19\% accuracy on hold-out data and consistent 5-fold cross-validation scores (83.08\% \pm{} 0.009\%). Feature importance analysis revealed that game complexity (number of turns) was the strongest correlate of the outcome across all models, followed by the rating difference between opponents. Draws represented only 5.11\% of outcomes, creating class imbalance challenges that affected classification performance for this outcome category. The results demonstrate that ensemble methods, particularly gradient boosting, can effectively capture non-linear interactions between player skill and game length to classify chess outcomes. These findings have practical applications for chess platforms in automated content curation, post-game quality assessment, and engagement enhancement strategies. The study establishes a foundation for robust outcome analysis systems in online chess environments. | chess prediction; machine learning; classification algorithms; online gaming; player rating systems; gradient boosting; game outcome forecasting | null | null | https://www.mdpi.com/2079-9292/15/1/1 | null | 2026 | Samara, Kamil and Antreassian, Aaron and Klug, Matthew and Hasan, Mohammad Sakib | Machine Learning Approaches for Classifying Chess Game Outcomes: A Comparative Analysis of Player Ratings and Game Dynamics | article | samara:2026:machine-learning-approaches-classifying-chess-game-outcomes-comparative-analysis-player-ratings-game-dynamics | null | null | null | 10.3390/electronics15010001 | null | 1 | 15 | Electronics | null | null | null | null | null | null | null | null | null | null | null | null | 2079-9292 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
Do neural networks build their representations through smooth, gradual refinement, or via more complex computational processes? We investigate this by extending the logit lens to analyze the policy network of Leela Chess Zero, a superhuman chess engine. We find strong monotonic trends in playing strength and puzzle-solving ability across layers, yet policy distributions frequently follow non-smooth trajectories. Evidence for this includes correct puzzle solutions that are discovered early but subsequently discarded, move rankings that remain poorly correlated with final outputs, and high policy divergence until late in the network. These findings contrast with the smooth distributional convergence typically observed in language models. | Understanding high-level properties of models, Probing, logit lens, chess, iterative inference | null | https://github.com/hartigel/leela-logit-lens | https://openreview.net/forum?id=nRPQhySXJP | Mechanistic Interpretability Workshop at NeurIPS 2025 | 2025 | Elias Sandmann and Sebastian Lapuschkin and Wojciech Samek | Iterative Inference in a Chess-Playing Neural Network | inproceedings | sandmann:2025:iterative-inference-chess-playing-neural-network | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | We extended the logit lens to Post-LN to analyze Leela Chess, revealing interpretable intermediate policies with monotonic capability improvement but non-monotonic policy dynamics that contrast with smooth language model convergence | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://figshare.com/s/5342980a9ba8b26985a9 | null | null | null | null | null | null |
In this paper, we quantify the non-transitivity in chess using human game data. Specifically, we perform non-transitivity quantification in two ways--Nash clustering and counting the number of rock-paper-scissor cycles--on over one billion matches from the Lichess and FICS databases. Our findings indicate that the strategy space of real-world chess strategies has a spinning top geometry and that there exists a strong connection between the degree of non-transitivity and the progression of a chess player's rating. Particularly, high degrees of non-transitivity tend to prevent human players from making progress in their Elo ratings. We also investigate the implications of non-transitivity for population-based training methods. By considering fixed-memory fictitious play as a proxy, we conclude that maintaining large and diverse populations of strategies is imperative to training effective AI agents for solving chess. | game theory; multi-agent AI; non-transitivity quantification | null | null | https://doi.org/10.3390/a15050152 | null | 2022 | Ricky Sanjaya and Jun Wang and Yaodong Yang | Measuring the Non-Transitivity in Chess | article | sanjaya:2022-non-transitivity-chess | null | null | null | 10.3390/A15050152 | 152 | 5 | 15 | Algorithms | null | code access expired: https://anonymous.4open.science/r/MSc-Thesis-8543 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This research investigates the potential for large language models to learn to generate valid chess moves solely through pre-training on chess game data. The primary objective of this study is to investigate the impact of custom notation systems and tokenisation methods specifically designed for use with chess games. The study aims to improve the models' understanding of game states and move sequences by developing and implementing a custom notation system, xLAN+. This notation system incorporates additional information about captures, checks and checkmates in order to increase the performance of the models. Furthermore, the strategic gameplay capabilities of the models are to be enhanced by fine-tuning them on datasets filtered by Elo rating. This approach postulates that filtering games by skill level can help models develop a deeper understanding of strategic gameplay, thereby improving their ability to generate high-quality moves. The research uses two different architectures, Transformer, based on OpenAI's GPT-2 configuration, and Mamba, a State Space Model (SSM) optimised for long sequence processing. Initial findings indicate that the custom notation xLAN+ significantly improves the models' ability to generate valid moves and maintain game state accuracy over extended sequences. The comparison between GPT-2 and Mamba reveals that while both architectures can learn chess rules and generate plausible moves, the SSM offers slight advantages in handling long-range dependencies and maintaining game context. This project demonstrates the potential of language models to learn complex tasks like chess through data-driven approaches, paving the way for their application in other strategic and decision-making domains. 
| AI, Chess, GPT-2, LLM, Mamba, NLP, KI, Schach | null | null | https://digitalcollection.zhaw.ch/items/2ca7f5f3-535c-406a-87af-432ea6ba940b | null | 2024 | Schmid, Lars and Maag, Jerome | Optimizing language models for chess : the impact of custom notation and Elo-based fine-tuning | thesis | schmid-maag:2024:optimizing-language-models-chess-impact-custom-notation-elo-based-finetuning | Bachelor's thesis | Cieliebak, Mark and von D\"{a}niken, Pius | null | 10.21256/zhaw-31999 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | Z{\"u}rcher Hochschule f{\"u}r Angewandte Wissenschaften | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Decision-making researchers often face a trade-off when conducting controlled laboratory experiments, as these can limit the ability to identify stable relationships between decision-making quality and individual differences, such as expertise or personality traits. This study introduces an innovative paradigm that leverages the objective assessment capabilities of artificial intelligence in a naturalistic online chess setting. Ninety-four participants evaluated tactical chess positions and identified optimal moves under various time control conditions, followed by personality assessments. Expertise, measured by online Elo rating, emerged as a key predictor, accounting for 55~\% of the variance in decision quality and 61~\% in evaluation accuracy, underscoring the precision of the chosen approach. The study also highlights the significant impact of time control on decision-making. Additionally, the paradigm shows promise in examining the interplay between personality factors and decision-making processes, with a notable correlation between higher impulsivity scores and faster response times. However, high impulsivity was not associated with reduced decision quality, raising questions about the validity of this measurement. Overall, the results suggest that the chess paradigm, accessible to a broad sample due to the widespread appeal of online chess, provides a powerful tool that combines laboratory precision with real-world relevance. 
| Artificial intelligence, Decision quality, Expertise, Naturalistic decision making, Individual differences | null | null | https://www.sciencedirect.com/science/article/pii/S0191886925001369 | null | 2025 | Robin Schr\"{o}dter and Katrin Heyers and Jan Birkemeyer and Stefanie Klatt | The role of expertise, impulsivity, and preference for intuition on decision quality | article | schroedter:2025:role-expertise-impulsivity-preference-intuition-decision-quality | null | null | null | 10.1016/j.paid.2025.113174 | 113174 | null | 240 | Personality and Individual Differences | null | null | null | null | null | null | null | null | null | null | null | null | 0191-8869 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
For chess players to sharpen their tactical skills effectively, they train on chess puzzles with a fitting difficulty level. This paper presents an approach to estimate the difficulty level of chess puzzles using a deep neural network. The proposed approach achieved second place in the IEEE BigData Cup 2024 competition: Predicting chess puzzle difficulty. For the design of our network architecture, we take inspiration from the human problem-solving process for chess puzzles. We train the model to predict the correct move as an auxiliary task to improve the training process. We also predict themes, which are patterns in chess puzzles as a second auxiliary task. Finally, we use the uncertainty in the position, i.e. how incorrect the model's move prediction is, as a further input to guide the estimation of the puzzle difficulty. | Training;Uncertainty;Fitting;Estimation;Games;Artificial neural networks;Predictive models;Network architecture;Big Data;Problem-solving;chess puzzle;difficulty estimation;neural network | null | null | https://ieeexplore.ieee.org/document/10826087 | 2024 IEEE International Conference on Big Data (BigData) | 2024 | Sch\"{u}tt, Anan and Huber, Tobias and Andr\'{e}, Elisabeth | Estimating Chess Puzzle Difficulty Without Past Game Records Using a Human Problem-Solving Inspired Neural Network Architecture | inproceedings | schuett:2024:estimating-chess-puzzle-difficulty-without-past-records-using-neural-network | null | null | null | 10.1109/BigData62323.2024.10826087 | 8396--8402 | null | null | null | December | null | null | null | null | null | null | null | null | null | null | null | 2573-2978 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Advancing planning and reasoning capabilities of Large Language Models (LLMs) is one of the key prerequisites towards unlocking their potential for performing reliably in complex and impactful domains. In this paper, we aim to demonstrate this across board games (Chess, Fischer Random / Chess960, Connect Four, and Hex), and we show that search-based planning can yield significant improvements in LLM game-playing strength. We introduce, compare and contrast two major approaches: In external search, the model guides Monte Carlo Tree Search (MCTS) rollouts and evaluations without calls to an external game engine, and in internal search, the model is trained to generate in-context a linearized tree of search and a resulting final choice. Both build on a language model pre-trained on relevant domain knowledge, reliably capturing the transition and value functions in the respective environments, with minimal hallucinations. We evaluate our LLM search implementations against game-specific state-of-the-art engines, showcasing substantial improvements in strength over the base model, and reaching Grandmaster-level performance in chess while operating closer to the human search budget. Our proposed approach, combining search with domain knowledge, is not specific to board games, hinting at more general future applications. | search, planning, language models, games, chess | null | null | https://openreview.net/forum?id=KKwBo3u3IW | Forty-second International Conference on Machine Learning | 2025 | John Schultz and Jakub Adamek and Matej Jusup and Marc Lanctot and Michael Kaisers and Sarah Perrin and Daniel Hennes and Jeremy Shar and Cannada A. 
Lewis and Anian Ruoss and Tom Zahavy and Petar Veli{\v{c}}kovi{\'c} and Laurel Prince and Satinder Singh and Eric Malmi and Nenad Tomasev | Mastering Board Games by External and Internal Planning with Language Models | inproceedings | schultz:2025:mastering-board-games-external-internal-planning-language-models | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | We pre-trained an LLM capable of playing board games at a high level. We further introduce external and internal planning methods that achieve Grandmaster-level performance in chess while operating closer to the human search budget. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://www.youtube.com/watch?v=JyxE_GE8noc | null | null | null | null | null | null | null | null | null | null | null | null | null | 1.Large Language Models (LLMs) demonstrate impressive performance across various tasks that require complex reasoning. Yet, they still struggle to play board games as simple as tic-tac-toe.\n2.We developed an LLM that can play different board games, reaching Grandmaster-level chess performance. We investigated different planning strategies that enable the LLM to improve its performance, the more ``thinking time'' we provide to the model.\n3.In the future, similar planning strategies can unlock strong performance improvements in LLMs applied to other reasoning problems. | null | null | null | null | null |
As AI systems become more capable, they may internally represent concepts outside the sphere of human knowledge. This work gives an end-to-end example of unearthing machine-unique knowledge in the domain of chess. We obtain machine-unique knowledge from an AI system (AlphaZero) by a method that finds novel yet teachable concepts and show that it can be transferred to human experts (grandmasters). In particular, the new knowledge is learned from internal mathematical representations without a priori knowing what it is or where to start. The produced knowledge from AlphaZero (new chess concepts) is then taught to four grandmasters in a setting where we can quantify their learning, showing that machine-guided discovery and teaching is possible at the highest human level. AI systems have attained superhuman performance across various domains. If the hidden knowledge encoded in these highly capable systems can be leveraged, human knowledge and performance can be advanced. Yet, this internal knowledge is difficult to extract. Due to the vast space of possible internal representations, searching for meaningful new conceptual knowledge can be like finding a needle in a haystack. Here, we introduce a method that extracts new chess concepts from AlphaZero, an AI system that mastered chess via self-play without human supervision. Our method excavates vectors that represent concepts from AlphaZero's internal representations using convex optimization, and filters the concepts based on teachability (whether the concept is transferable to another AI agent) and novelty (whether the concept contains information not present in human chess games). These steps ensure that the discovered concepts are useful and meaningful. For the resulting set of concepts, prototypes (chess puzzle–solution pairs) are presented to experts for final validation. 
In a preliminary human study, four top chess grandmasters (all former or current world chess champions) were evaluated on their ability to solve concept prototype positions. All grandmasters showed improvement after the learning phase, suggesting that the concepts are at the frontier of human understanding. Despite the small scale, our result is a proof of concept demonstrating the possibility of leveraging knowledge from a highly capable AI system to advance the frontier of human knowledge; a development that could bear profound implications and shape how we interact with AI systems across many applications. | null | null | null | https://www.pnas.org/doi/abs/10.1073/pnas.2406675122 | null | 2025 | Lisa Schut and Nenad Toma\v{s}ev and Thomas McGrath and Demis Hassabis and Ulrich Paquet and Been Kim | Bridging the human–AI knowledge gap through concept discovery and transfer in AlphaZero | article | schut:2025:briding-human-ai-knowledge-gap-concept-discovery-transfer-alphazero | null | null | null | 10.1073/pnas.2406675122 | e2406675122 | 13 | 122 | Proceedings of the National Academy of Sciences | null | null | null | null | null | null | null | null | null | null | https://www.pnas.org/doi/pdf/10.1073/pnas.2406675122 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | Lichess openings cited in appendix | null | null | null | null |
What factors of our learning experiences enable us to best acquire complex skills? Recent ideas from artificial intelligence point to two such factors: (1) a balance of real experience with simulated experience acquired during planning itself, and (2) appropriate diversity in training examples. To test whether these factors influence the development of human expertise, we analyzed data from 1,873 chess players on the online platform Lichess, each of whom played hundreds to thousands of games over months to years. We found that both the time spent planning before moves and the diversity of opening positions encountered predict skill improvement over time. These findings suggest that principles shaping the development of expertise in artificial intelligence systems may also apply to human learning. | null | null | null | https://escholarship.org/uc/item/5c76v07h | null | 2025 | Schut, Lisa and Russek, Evan and Kuperwajs, Ionatan and Mattar, Marcelo G and Ma, Wei Ji and Griffiths, Tom | Learning in online chess increases with more time spent thinking and diversity of experience | inproceedings | schut:2025:learning-online-chess-increases-time-thinking-diversity-experience | null | null | null | null | null | null | 47 | Proceedings of the Annual Meeting of the Cognitive Science Society | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Deep neural networks are powerful machines for visual pattern recognition, but reasoning tasks that are easy for humans may still be difficult for neural models. Humans possess the ability to extrapolate reasoning strategies learned on simple problems to solve harder examples, often by thinking for longer. For example, a person who has learned to solve small mazes can easily extend the very same search techniques to solve much larger mazes by spending more time. In computers, this behavior is often achieved through the use of algorithms, which scale to arbitrarily hard problem instances at the cost of more computation. In contrast, the sequential computing budget of feed-forward neural networks is limited by their depth, and networks trained on simple problems have no way of extending their reasoning to accommodate harder problems. In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference. We demonstrate this algorithmic behavior of recurrent networks on prefix sum computation, mazes, and chess. In all three domains, networks trained on simple problem instances are able to extend their reasoning abilities at test time simply by "thinking for longer." | Deep learning, algorithms, generalization, recurrent networks, prefix sums, mazes, chess | null | https://github.com/aks2203/easy-to-hard | https://openreview.net/forum?id=Tsp2PL7-GQ | Proceedings of the 35th International Conference on Neural Information Processing Systems | 2021 | Schwarzschild, Avi and Borgnia, Eitan and Gupta, Arjun and Huang, Furong and Vishkin, Uzi and Goldblum, Micah and Goldstein, Tom | Can you learn an algorithm? 
generalizing from easy to hard problems with recurrent networks | inproceedings | schwarzschild:2021:can-you-learn-algorithm-generalizing-easy-hard-examples- | null | null | https://openreview.net/pdf?id=Tsp2PL7-GQ | null | null | null | null | null | null | null | null | https://proceedings.neurips.cc/paper/2021/hash/3501672ebc68a5524629080e3ef60aef-Abstract.html | NeurIPS '21 | Curran Associates Inc. | 12 | 513 | null | Red Hook, NY, USA | null | null | null | null | 9781713845393 | null | Recurrent networks can learn processes that can generalize from easy training data to harder examples at test time by iterating more times. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://openreview.net/attachment?id=Tsp2PL7-GQ&name=supplementary_material | null | null | null |
We describe new datasets for studying generalization from easy to hard examples. | null | null | null | https://arxiv.org/abs/2108.06011 | null | 2021 | Avi Schwarzschild and Eitan Borgnia and Arjun Gupta and Arpit Bansal and Zeyad Emam and Furong Huang and Micah Goldblum and Tom Goldstein | Datasets for Studying Generalization from Easy to Hard Examples | article | schwarzschild:2021:datasets-easy-hard-examples | null | null | null | null | null | null | abs/2108.06011 | CoRR | null | null | null | null | null | null | null | null | null | null | 2108.06011 | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://pypi.org/project/easy-to-hard-data/ | null | null |
AI has surpassed human ability in almost every field, but how useful is this to us? Several fields (e.g. law, education, games) have found that a ``perfect'' AI is not always as desirable as it seems. Law is a marquee example: an AI that disregards essential parts of human thinking, such as fairness and ethics, would sentence a defendant on raw logic alone, something we clearly want to avoid. Building AI that understands humans and makes decisions like them is therefore increasingly important for critical decisions that others must interpret and collaborate on. Completely modeling human reasoning in any field is impossible due to the amount of `noise' in each individual, but we aim for this in the game of chess. Chess is one of the most complex and popular games, with a large fan base and a vast space of possible positions; despite the test of time, it remains unsolved. Existing chess engines such as Stockfish and AlphaZero are essentially `perfect' forms of AI: they aim to be as strong as possible and win as many games as possible rather than to play like a human. We introduce Harmonia, an encoder-only Transformer trained through multitask learning to predict human chess moves and game outcomes. Although there have been multiple attempts to emulate human play in chess, such as the Maia project and Allie Chess, our model surpasses all of these existing baselines despite being significantly smaller and trained on less data. Beyond modeling human play, Harmonia's larger purpose is to promote the development of human-AI interaction. 
It is unique in the sense that it was trained on human data in order to learn human policies and internal representations, which can be used as a tool for further research on educational uses and understanding how humans think. The results in this paper show that there is a high potential for artificial intelligence that can mimic human behavior to be more understandable and collaborative with fellow humans. | Artificial Intelligence, Human-AI Interaction, Chess, Transformer, Move Prediction, Multitask Learning, AI Alignment, Human Cognition | null | null | https://ieeexplore.ieee.org/document/11050701 | 2025 IEEE Conference on Artificial Intelligence (CAI) | 2025 | Hari Sekar, Easwar Gnana and Jin, Roger | Human-Aligned Chess AI: A Multitask Transformer for Humanlike Decision-Making | inproceedings | sekar:2025:human-aligned-chess-ai-multitask-transformer-humanlike-decision-making | null | null | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11050701 | 10.1109/CAI64502.2025.00213 | 1230--1234 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Psychology and social science research offer some promising work in the field of decision-making science. However, given the qualitative nature of much of this research, understanding some physiological bases of decision-making may assist by providing more objectivity. The purpose of this study, therefore, was to explore hormonal and neurophysiological biomarkers of stress relative to strategic decision making, with and without an accompanying exercise stress. | Testosterone, Cortisol, Stress, Decision-making | null | null | https://doi.org/10.1007/s40750-025-00264-7 | null | 2025 | Serpell, Benjamin G. and Crewther, Blair T. and Fourie, Phillip J. and Goodman, Stephen P. J. and Cook, Christian J. | Stress and Strategic Decision Making | article | serpell:2025:stress-strategic-decision-making | null | null | null | 10.1007/s40750-025-00264-7 | 12 | 3 | 11 | Adaptive Human Behavior and Physiology | June | null | null | null | null | null | null | null | null | null | null | null | 2198-7335 | null | null | null | null | null | null | null | null | null | 27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This study analyzes the relationship between chess playing skills and mathematics learning outcomes for junior athletes of the Raja Kombi Trenggalek chess club. The research method is a qualitative descriptive method with a quantitative approach. Participants were 8 junior athletes of the Raja Kombi Trenggalek chess club. Data were collected through interviews, skill assessments, and documentation, and analyzed using mean and percentage formulas.
The analysis found an average chess playing skill score of 85.00 and an average mathematics learning outcome score of 86.25. This suggests that a higher level of skill or intellectual attainment corresponds to a higher level of problem-solving ability, as in learning mathematics. Motor and psychological aspects also play a role, supporting the intellectual skills that shape athletes' thinking. From the data analysis, it can therefore be concluded that the higher the level of chess playing skill, the higher the level of problem solving, as in learning mathematics.
| Analysis, Chess Skills, Mathematics Learning Outcomes | null | null | https://doi.org/10.20961/phduns.v18i1.51318 | null | 2018 | Setiawan, Andika Yogi and Pratama, Henri Gunawan | Analysis of Chess Playing Skills on Mathematics Learning Outcomes Junior Athletes Raja Kombi Trenggalek Chess Club | article | setiawan:2018:analysis-chess-skills-mathematics-learning | null | null | null | 10.20961/phduns.v18i1.51318 | 37--46 | 1 | 18 | PHEDHERAL | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Summarizing event sequences is a key aspect of data mining. Most existing methods neglect conditional dependencies and focus on discovering sequential patterns only. In this paper, we study the problem of discovering both conditional and unconditional dependencies from event sequences. We do so by discovering rules of the form X \rightarrow{} Y where X and Y are sequential patterns. Such rules are simple to understand and provide a clear description of the relation between the antecedent and the consequent. To discover a succinct and non-redundant set of rules we formalize the problem in terms of the Minimum Description Length principle. As the search space is enormous and does not exhibit helpful structure, we propose the SEQRET method to discover high-quality rule sets in practice. Through extensive empirical evaluation we show that unlike the state of the art, SEQRET ably recovers the ground truth on synthetic datasets and finds useful rules from real datasets. | sequential patterns, rule mining, minimum description length | null | null | https://arxiv.org/abs/2505.06049 | Proceedings of the Fortieth AAAI Conference on Artificial Intelligence (AAAI-26) | 2026 | Aleena Siji and Joscha C\"{u}ppers and Osman Ali Mian and Jilles Vreeken | Seqret: Mining Rule Sets from Event Sequences | inproceedings | siji:2026:seqret-mining-rule-sets-event-sequences | null | null | null | null | null | null | null | null | null | preprint: https://arxiv.org/abs/2505.06049 | null | https://eda.rg.cispa.io/prj/seqret/ | null | AAAI Press | null | null | null | null | null | null | null | null | null | Singapore | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://eda.rg.cispa.io/prj/seqret/seqret-v20250526.zip | null |
Predicting player behavior in strategic games, especially complex ones like chess, presents a significant challenge. The difficulty arises from several factors. First, the sheer number of potential outcomes stemming from even a single position, starting from the initial setup, makes forecasting a player's next move incredibly complex. Second, and perhaps even more challenging, is the inherent unpredictability of human behavior. Unlike the optimized play of engines, humans introduce a layer of variability due to differing playing styles and decision-making processes. Each player approaches the game with a unique blend of strategic thinking, tactical awareness, and psychological tendencies, leading to diverse and often unexpected actions. This stylistic variation, combined with the capacity for creativity and even irrational moves, makes predicting human play difficult. Chess, a longstanding benchmark of artificial intelligence research, has seen significant advancements in tools and automation. Engines like Deep Blue, AlphaZero, and Stockfish can defeat even the most skilled human players. However, despite their exceptional ability to outplay top-level grandmasters, predicting the moves of non-grandmaster players, who comprise most of the global chess community, remains complicated for these engines. This paper proposes a novel approach combining expert knowledge with machine learning techniques to predict human players' next moves. By applying feature engineering grounded in domain expertise, we seek to uncover the patterns in the moves of intermediate-level chess players, particularly during the opening phase of the game. Our methodology offers a promising framework for anticipating human behavior, advancing both the fields of AI and human-computer interaction. 
| Knowledge Representation, Machine Learning, Behavioral Programming, Predicting Human Actions, Human Decision-Making in Chess, Feature Engineering, Chess | null | null | https://arxiv.org/abs/2504.05425 | null | 2025 | Benny Skidanov and Daniel Erbesfeld and Gera Weiss and Achiya Elyasaf | A Behavior-Based Knowledge Representation Improves Prediction of Players' Moves in Chess by 25% | misc | skidanov:2025:behavior-based-knowledge-representation-improves-prediction-player-moves | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2504.05425 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Despite the impressive generative capabilities of large language models (LLMs), their lack of grounded reasoning and susceptibility to hallucinations limit their reliability in structured domains such as chess. We present Ca{\"i}ssa AI, a neuro-symbolic chess agent that augments LLM-generated move commentary with symbolic reasoning, knowledge graph integration, and verification modules. Ca{\"i}ssa AI combines a fine-tuned chess-specific LLM with a Prolog-based rule engine encoding chess tactics and rules, along with a dynamically constructed Neo4j knowledge graph representing the current board state. This hybrid architecture enables the system to generate not only accurate move suggestions but also coherent, strategically grounded commentary. A LangGraph-based verification module cross-checks LLM outputs against symbolic logic to ensure consistency and correctness, effectively mitigating hallucinations. By aligning data-driven generation with formal domain knowledge, Ca{\"i}ssa AI enhances both trustworthiness and explainability. Our results demonstrate that this tight neuro-symbolic integration produces verifiable, high-quality commentary and serves as a generalizable blueprint for AI systems requiring real-time, interpretable decision support. 
| Neuro-Symbolic AI, Chess Agents, Explainable Reasoning | null | null | null | KI 2025: Advances in Artificial Intelligence | 2026 | Soliman, Mazen and Ehab, Nourhan | Ca{\"i}ssa AI: A Neuro-Symbolic Chess Agent for~Explainable Move Suggestion and~Grounded Commentary | inproceedings | soliman:2026:caissa-ai-neuro-symbolic-chess-agent-explainable-move-suggestion-grounded-commentary | null | null | null | null | 148--160 | null | null | null | null | null | Braun, Tanya and Paa{\ss}en, Benjamin and Stolzenburg, Frieder | null | null | Springer Nature Switzerland | null | null | null | Cham | null | null | null | null | 978-3-032-02813-6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess games demonstrate players' ability to envision the situation on a large scale, cope with variations, and take precautions. It has been shown statistically and mathematically that the white side is more likely to win due to its offensive advantage. Nonetheless, by utilizing numerous defensive gambits, Black can enhance its chances of success, and among these the Sicilian defense ranks at the top. Characterized by asymmetrical arrangements, the Sicilian defense opens a path for the queen while still contesting central positional advantages. Dating back to its origin in the 16th century, it has prevailed since the mid-20th century and has developed its most complex variations in response to White's first move, ``e4''. This research examines whether there is a significant difference in the winning rates of the variations, and evaluates their payoffs mathematically to derive a theoretically optimized strategy. | Chess games; Sicilian defense; Chi-square Test | null | null | https://doi.org/10.61173/v2xdqn32 | null | 2023 | Song, Ziming | Investigation of the Sicilian Defense: Winning rates and strategic discrimination | article | song:2023:investigation-sicilian-defense | null | null | https://www.deanfrancispress.com/index.php/hc/article/view/323/HC000572.pdf | null | null | 4 | 1 | Interdisciplinary Humanities and Communication Studies | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The evaluation of Large Language Models (LLMs) in complex reasoning domains typically relies on performance alignment with ground-truth oracles. In the domain of chess, this standard manifests as accuracy benchmarks against strong engines like Stockfish. However, high scalar accuracy does not necessarily imply robust conceptual understanding. This paper argues that standard accuracy metrics fail to distinguish between genuine geometric reasoning and the superficial memorization of canonical board states. To address this gap, we propose a Geometric Stability Framework, a novel evaluation methodology that rigorously tests model consistency under invariant transformations, including board rotation, mirror symmetry, color inversion, and format conversion. We applied this framework to a comparative analysis of six state-of-the-art LLMs, including GPT-5.1, Claude Sonnet 4.5, and Kimi K2 Turbo, utilizing a dataset of approximately 3,000 positions. Our results reveal a significant Accuracy-Stability Paradox. While models such as GPT-5.1 achieve near-optimal accuracy on standard positions, they exhibit catastrophic degradation under geometric perturbation, specifically in rotation tasks where error rates surge by over 600\%. This disparity suggests a reliance on pattern matching over abstract spatial logic. Conversely, Claude Sonnet 4.5 and Kimi K2 Turbo demonstrate superior dual robustness, maintaining high consistency across all transformation axes. Furthermore, we analyze the trade-off between helpfulness and safety, identifying Gemini 2.5 Flash as the leader in illegal state rejection (96.0\%). We conclude that geometric stability provides an orthogonal and essential metric for AI evaluation, offering a necessary proxy for disentangling reasoning capabilities from data contamination and overfitting in large-scale models. 
| Large Language Models, Geometric Stability, Chess Evaluation, Robustness Analysis, AI Reasoning, Evaluation Metrics | null | null | https://arxiv.org/abs/2512.15033 | null | 2025 | Xidan Song and Weiqi Wang and Ruifeng Cao and Qingya Hu | Beyond Accuracy: A Geometric Stability Analysis of Large Language Models in Chess Evaluation | misc | song:2025:beyond-accuracy-geometric-stability-analysis-large-language-models-chess-evaluation | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2512.15033 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.AI | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Contemporary chess engines offer precise yet opaque evaluations, typically expressed as centipawn scores. While effective for decision-making, these outputs obscure the underlying contributions of individual pieces or patterns. In this paper, we explore adapting SHAP (SHapley Additive exPlanations) to the domain of chess analysis, aiming to attribute a chess engine's evaluation to specific pieces on the board. By treating pieces as features and systematically ablating them, we compute additive, per-piece contributions that explain the engine's output in a locally faithful and human-interpretable manner. This method draws inspiration from classical chess pedagogy, where players assess positions by mentally removing pieces, and grounds it in modern explainable AI techniques. Our approach opens new possibilities for visualization, human training, and engine comparison. We release accompanying code and data to foster future research in interpretable chess AI. | chess, explainable AI, shap | null | https://github.com/fspinna/chessplainer | https://ai4hgi.github.io/paper13.pdf | Proceedings of AI4HGI 2025, the First Workshop on Artificial Intelligence for Human-Game Interaction at the 28th European Conference on Artificial Intelligence (ECAI 2025) | 2025 | Spinnato, Francesco | Towards Piece-by-Piece Explanations for Chess Positions with {SHAP} | inproceedings | spinnato:2025:towards-piece-by-piece-explanations-chess-positions-shap | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This paper shows the weaknesses of two symmetric encryption schemes – Chessography and Cascaded Spin Shuffle. The security claims made by their authors are unsubstantiated. Despite being featured in peer-reviewed publications, their flaws are readily apparent and do not require any sophisticated cryptanalysis. Consequently, the paper proposes a set of speculative "red flag" indicators aimed at identifying encryption proposals of potentially questionable quality. | encryption, symmetric ciphers, cryptanalysis, chess | null | null | https://ceur-ws.org/Vol-4092/paper32.pdf | Proceedings of the Workshop on Applied Security (WAS 2025) at the 25th Conference Information Technologies – Applications and Theory (ITAT 2025) | 2025 | Stanek, Martin | Bad cipher design: Chessography and Cascaded Spin Shuffle | inproceedings | stanek:2025:bad-cipher-design-chessography-cascaded-spin-shuffle | null | null | null | null | 395--403 | null | 4092 | null | null | Older version with only chessography scheme covered: https://arxiv.org/abs/2412.09742 | null | null | CEUR Workshop Proceedings | CEUR-WS.org | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The Quantum Approximate Optimization Algorithm (QAOA) is extensively benchmarked on synthetic random instances such as MaxCut, TSP, and SAT problems, but these lack semantic structure and human interpretability, offering limited insight into performance on real-world problems with meaningful constraints. We introduce Quantum King-Ring Domination (QKRD), a NISQ-scale benchmark derived from chess tactical positions that provides 5,000 structured instances with one-hot constraints, spatial locality, and 10--40 qubit scale. The benchmark pairs human-interpretable coverage metrics with intrinsic validation against classical heuristics, enabling algorithmic conclusions without external oracles. Using QKRD, we systematically evaluate QAOA design choices and find that constraint-preserving mixers (XY, domain-wall) converge approximately 13 steps faster than standard mixers (p<10^{-7}, d\approx0.5) while eliminating penalty tuning, warm-start strategies reduce convergence by 45 steps (p<10^{-127}, d=3.35) with energy improvements exceeding d=8, and Conditional Value-at-Risk (CVaR) optimization yields an informative negative result with worse energy (p<10^{-40}, d=1.21) and no coverage benefit. Intrinsic validation shows QAOA outperforms greedy heuristics by 12.6\% and random selection by 80.1\%. Our results demonstrate that structured benchmarks reveal advantages of problem-informed QAOA techniques obscured in random instances. We release all code, data, and experimental artifacts for reproducible NISQ algorithm research. 
| null | null | null | https://arxiv.org/abs/2601.00318 | null | 2026 | Gerhard Stenzel and Michael K\"{o}lle and Tobias Rohe and Julian Hager and Leo S\"{u}nkel and Maximilian Zorn and Claudia Linnhoff-Popien | Quantum King-Ring Domination in Chess: A QAOA Approach | misc | stenzel:2026:quantum-king-ring-domination-chess-qaoa-approach | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2601.00318 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
We analyse how a transformer-based language model learns the rules of chess from text data of recorded games. We show how it is possible to investigate how the model capacity and the amount of available training data influence the learning success of a language model with the help of chess-specific metrics. With these metrics, we show that, within the studied range, using more games for training offers significantly better results for the same training time. However, model size does not show such a clear influence. It is also interesting to observe that the usual evaluation metrics for language models, predictive accuracy and perplexity, give no indication of this here. Further examination of trained models reveals how they store information about board state in the activations of neuron groups, and how the overall sequence of previous moves influences the newly-generated moves. | null | null | null | https://aclanthology.org/2021.ranlp-1.153 | Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) | 2021 | St{\"o}ckl, Andreas | Watching a Language Model Learning Chess | inproceedings | stockl:2021:watching-language-model-learning-chess | null | null | null | null | 1369--1379 | null | null | null | September | null | Mitkov, Ruslan and Angelova, Galia | null | null | INCOMA Ltd. | null | null | null | Held Online | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
There are an increasing number of domains in which artificial intelligence (AI) systems both surpass human ability and accurately model human behavior. This introduces the possibility of algorithmically-informed teaching in these domains through more relatable AI partners and deeper insights into human decision-making. Critical to achieving this goal, however, is coherently modeling human behavior at various skill levels. Chess is an ideal model system for conducting research into this kind of human-AI alignment, with its rich history as a pivotal testbed for AI research, mature superhuman AI systems like AlphaZero, and precise measurements of skill via chess rating systems. Previous work in modeling human decision-making in chess uses completely independent models to capture human style at different skill levels, meaning they lack coherence in their ability to adapt to the full spectrum of human improvement and are ultimately limited in their effectiveness as AI partners and teaching tools. In this work, we propose a unified modeling approach for human-AI alignment in chess that coherently captures human style across different skill levels and directly captures how people improve. Recognizing the complex, non-linear nature of human learning, we introduce a skill-aware attention mechanism to dynamically integrate players' strengths with encoded chess positions, enabling our model to be sensitive to evolving player skill. Our experimental results demonstrate that this unified framework significantly enhances the alignment between AI and human players across a diverse range of expertise levels, paving the way for deeper insights into human decision-making and AI-guided teaching tools. Our implementation is available at https://github.com/CSSLab/maia2. 
| Human-AI Alignment, Action Prediction, Chess, Skill-aware Attention | null | https://github.com/CSSLab/maia2 | null | Proceedings of the 38th International Conference on Neural Information Processing Systems | 2025 | Tang, Zhenwei and Jiao, Difan and McIlroy-Young, Reid and Kleinberg, Jon and Sen, Siddhartha and Anderson, Ashton | Maia-2: a unified model for human-AI alignment in chess | inproceedings | tang:2024:maia-2-unified-model-human-ai-alignment-chess | null | null | null | null | null | null | null | null | null | null | null | null | NeurIPS '24 | Curran Associates Inc. | 26 | 659 | null | Red Hook, NY, USA | null | null | null | null | 9798331314385 | Vancouver, BC, Canada | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Elo rating, widely used for skill assessment across diverse domains ranging from competitive games to large language models, is often understood as an incremental update algorithm for estimating a stationary Bradley-Terry (BT) model. However, our empirical analysis of practical matching datasets reveals two surprising findings: (1) Most games deviate significantly from the assumptions of the BT model and stationarity, raising questions on the reliability of Elo. (2) Despite these deviations, Elo frequently outperforms more complex rating systems, such as mElo and pairwise models, which are specifically designed to account for non-BT components in the data, particularly in terms of win rate prediction. This paper explains this unexpected phenomenon through three key perspectives: (a) We reinterpret Elo as an instance of online gradient descent, which provides no-regret guarantees even in misspecified and non-stationary settings. (b) Through extensive synthetic experiments on data generated from transitive but non-BT models, such as strongly or weakly stochastic transitive models, we show that the ''sparsity'' of practical matching data is a critical factor behind Elo's superior performance in prediction compared to more complex rating systems. (c) We observe a strong correlation between Elo's predictive accuracy and its ranking performance, further supporting its effectiveness in ranking. | Pairwise comparison, ranking | null | null | https://arxiv.org/abs/2502.10985 | null | 2025 | Shange Tang and Yuanhao Wang and Chi Jin | Is Elo Rating Reliable? A Study Under Model Misspecification | misc | tang:2025:is-elo-rating-reliable-study-under-model-misspecification | null | null | null | null | null | null | null | null | null | submitted to ICLR 2026: https://openreview.net/forum?id=uUq0gemhnv | null | null | null | null | null | null | null | null | 2502.10985 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Evaluating whether vision-language models (VLMs) reason consistently across representations is challenging because modality comparisons are typically confounded by task differences and asymmetric information. We introduce SEAM, a benchmark that pairs semantically equivalent inputs across four domains that have existing standardized textual and visual notations. By employing distinct notation systems across modalities, in contrast to OCR-based image-text pairing, SEAM provides a rigorous comparative assessment of the textual-symbolic and visual-spatial reasoning capabilities of VLMs. Across 21 contemporary models, we observe systematic modality imbalance: vision frequently lags language in overall performance, despite the problems containing semantically equivalent information, and cross-modal agreement is relatively low. Our error analysis reveals two main drivers: textual perception failures from tokenization in domain notation and visual perception failures that induce hallucinations. We also show that our results are largely robust to visual transformations. SEAM establishes a controlled, semantically equivalent setting for measuring and improving modality-agnostic reasoning. 
| null | https://huggingface.co/datasets/lilvjosephtang/SEAM-Benchmark | https://github.com/CSSLab/SEAM | https://openreview.net/forum?id=lI4LgGv4sX | Second Conference on Language Modeling | 2025 | Zhenwei Tang and Difan Jiao and Blair Yang and Ashton Anderson | {SEAM}: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models | inproceedings | tang:2025:seam-semantically-equivalent-modalities-benchmark-vlm-vision-language-models | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | https://lilv98.github.io/SEAM-Website/ | null | null | null | null |
The ability to make good decisions is critical in life. Although anecdotal and preliminary evidence suggests that social comparison could impair decision making, surprisingly little attention has been paid to such dynamics within cognitive science. The present study aimed to address this gap by exploring, via a sample of 1.5 million chess games and a fuzzy regression discontinuity design, whether higher status of competitors could cause individuals to commit more errors. Critically, chess data includes overt symbols of social status, viz. titles conferred at arbitrary thresholds of ratings that represent playing strength, and an objective measure of errors could be calculated by contrasting the moves that players chose in games against the optimal moves determined by powerful chess engines. I found no evidence that the mere presence of status titles impacted error rates. | decision making; error rate; cognitive psychology, social psychology, regression discontinuity design; chess | null | null | https://escholarship.org/uc/item/85d620jz | Proceedings of the Annual Meeting of the Cognitive Science Society | 2023 | Tay, Li Qian | Can higher social status of competitors cause decision makers to commit more errors? | inproceedings | tay:2023:social-status-competitors-cause-decision-maker-errors | null | null | null | null | null | null | 45 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Traditionally, the relative strength of a chess player within a competitive pool is identified by a rating number. In order to reach a fair rating that best represents their level of play, chess players are required to play numerous games against various opponents within that pool. However, intuitively, experienced chess players are capable of extracting a rough estimate of a player's strength by looking at the moves they made in a single game. How accurately could a machine learning model based on a large dataset of chess games predict player ratings from a single game, and what would these predictions depend on? This paper presents an attempt to identify, encode and model chess gameplay features in order to predict a player's rating from a single game played. If successful, such a model could be employed to attach a fair initial rating to a new player within a pool before any games are played. We use an extensive dataset of chess games downloaded from a popular online chess platform, from which we extract a set of 30 features which are used to model and ultimately predict players' ratings. Our findings show that we are capable of predicting the rating bracket of a player with 79.3\% accuracy when considering the extreme ends of the dataset (lowest vs. highest rated players), while the accuracy consistently drops as we increase the respective bracket width. We discovered that the most important features of our predictive models are both theory- and engine-related; most importantly, the features that we have extracted lead to explainable, quantifiable predictions of chess player strength. 
| null | null | null | https://doi.org/10.1109/CoG57401.2023.10333133 | {IEEE} Conference on Games, CoG 2023, Boston, MA, USA, August 21-24, 2023 | 2023 | Tim Tijhuis and Paris Mavromoustakos Blom and Pieter Spronck | Predicting Chess Player Rating Based on a Single Game | inproceedings | tijhuis:2023:predicting-chess-rating-single-game | null | null | null | 10.1109/COG57401.2023.10333133 | 1--8 | null | null | null | null | null | null | null | null | {IEEE} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
This book presents a compelling account of the historic FIDE world chess championship match between Ding Liren of China and Gukesh Dommaraju of India, held in Singapore from November 25 to December 13, 2024. Sponsored by Google, the 14-game match marked several significant milestones: the first All-Asian world chess championship, the first to be hosted in Singapore, and the momentous crowning of the youngest undisputed world chess champion in history. Through detailed reports, in-depth analyses of each game, and a curated collection of over a hundred quality photographs, the book explores the players' remarkable journeys, offering insights into their distinct playing styles, rigorous preparation, and the psychological challenges they faced both on and off the board. Set against the dynamic cultural backdrop of Singapore, it also examines the growing prominence of Asia in the world of chess. Combining strategic depth, expert commentary, and behind-the-scenes perspectives, this work provides an engaging narrative of resilience, rivalry, and excellence, making it an indispensable read for chess enthusiasts and sports aficionados. | null | null | null | https://www.worldscientific.com/worldscibooks/10.1142/14303 | null | 2025 | Urcan, Olimpiu G | East Meets East: Inside The 2024 World Chess Championship In Singapore | book | urcan:2025:east-meets-east-inside-2024-world-chess-championship-singapore | null | null | null | 10.1142/14303 | null | null | null | null | null | null | null | null | null | World Scientific Publishing Company | null | null | null | null | null | null | null | null | 9789819812820 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The rapid advancement of Generative AI has raised significant questions regarding its ability to produce creative and novel outputs. Our recent work investigates this question within the domain of chess puzzles and presents an AI system designed to generate puzzles characterized by aesthetic appeal, novelty, counter-intuitive and unique solutions. We briefly discuss our method below and refer the reader to the technical paper for more details. To assess our system's creativity, we presented a curated booklet of AI-generated puzzles to three world-renowned experts: International Master for chess compositions Amatzia Avni, Grandmaster Jonathan Levitt, and Grandmaster Matthew Sadler. All three are noted authors on chess aesthetics and the evolving role of computers in the game. They were asked to select their favorites and explain what made them appealing, considering qualities such as their creativity, level of challenge, or aesthetic design. | null | null | null | https://arxiv.org/abs/2510.23772 | null | 2025 | Vivek Veeriah and Federico Barbero and Marcus Chiam and Xidong Feng and Michael Dennis and Ryan Pachauri and Thomas Tumiel and Johan Obando-Ceron and Jiaxin Shi and Shaobo Hou and Satinder Singh and Nenad Toma\v{s}ev and Tom Zahavy | Evaluating In Silico Creativity: An Expert Review of AI Chess Compositions | misc | veeriah:2025:evaluating-silico-creativity-expert-review-ai-chess-competitions | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 2510.23772 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | cs.AI | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Convolutional neural networks are typically applied to image analysis problems. We investigate whether a simple convolutional neural network can be trained to evaluate chess positions by means of predicting Stockfish (an existing chess engine) evaluations. Publicly available data from lichess.org was used, and we obtained a final MSE of 863.48 and MAE of 12.18 on our test dataset (with labels ranging from -255 to +255). We conclude that a more capable model architecture is needed to achieve better results. | null | null | null | null | null | 2019 | Vikstr{\"o}m, Joel | Training a Convolutional Neural Network to Evaluate Chess Positions | thesis | vikstrom:2019:convolutional-neural-network-cnn-evaluate-chess-positions | Bachelor's thesis | Markidis, Stefano | null | null | 18 | 2019:377 | null | null | null | null | null | null | TRITA-EECS-EX | null | null | null | null | null | null | null | null | KTH, School of Electrical Engineering and Computer Science (EECS) | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Reasoning is a central capability of human intelligence. In recent years, with the advent of large-scale datasets, pretrained large language models have emerged with new capabilities, including reasoning. However, these models still struggle with long-term, complex reasoning tasks, such as playing chess. Based on the observation that expert chess players employ a dual approach combining long-term strategic play with short-term tactical play along with language explanation, we propose improving the reasoning capability of large language models in chess by integrating annotated strategy and tactic. Specifically, we collect a dataset named MATE, which consists of 1 million chess positions with candidate moves annotated for strategy and tactics. We finetune the LLaMA-3-8B model and compare it against state-of-the-art commercial language models in the task of selecting better chess moves. Our experiments show that our models perform better than GPT, Claude, and Gemini models. We find that language explanations can enhance the reasoning capability of large language models. 
| null | https://huggingface.co/OutFlankShu/MATE | null | https://aclanthology.org/2025.naacl-short.52/ | Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers) | 2025 | Wang, Shu and Ji, Lei and Wang, Renxi and Zhao, Wenxiao and Liu, Haokun and Hou, Yifan and Wu, Ying Nian | Explore the Reasoning Capability of {LLM}s in the Chess Testbed | inproceedings | wang:2025:explore-reasoning-capability-llms-chess-testbed | null | null | null | 10.18653/v1/2025.naacl-short.52 | 611--622 | null | null | null | April | null | Chiruzzo, Luis and Ritter, Alan and Wang, Lu | https://mate-chess.github.io/ | null | Association for Computational Linguistics | null | null | null | Albuquerque, New Mexico | null | null | null | null | 979-8-89176-190-2 | null | null | null | null | null | null | null | null | null | https://huggingface.co/datasets/OutFlankShu/MATE_NAACL2025_Explore-the-Reasoning-Capability-of-LLMs-in-the-Chess-Testbed | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Lossless data compression has evolved into an indispensable tool for reducing data transfer times in heterogeneous systems. However, performing decompression on host systems can create performance bottlenecks. Accelerator libraries, such as nvCOMP, address this problem by providing custom GPU-enabled versions of some general-purpose compression methods, including Snappy, ZStandard, and gzip. However, popular bzip-like compression schemes, which rely on block-sorting transforms, have not yet been integrated, since their fine-grained parallelization is challenging. With a focus on decompression, we propose novel techniques for fine-grained parallelization of the inverse Burrows-Wheeler transform (iBWT) and inverse Move-to-Front (iMTF) transform on GPUs, which enable efficient processing of bzip2-based archives on CUDA-enabled accelerators for the first time. Consequently, we present the first fully GPU-enabled bzip2 decompression pipeline as a use case for the proposed algorithms. Our experimental results reveal speedups of up to 6.1x over a multicore CPU implementation for iBWT, and throughput rates of up to 2400 MB/s for combined iBWT and iMTF on an A100 GPU. For decompression of bzip2 archives, a throughput of over 11.62 GB/s is achieved on a DGX H100 server. The source code of our parallel decoder implementation is available at https://github.com/weissenberger/bzip2gpu. 
| Burrows-Wheeler transform, CUDA, GPU, Move-to-front transform, accelerators, bzip2, data compression | null | null | https://doi.org/10.1145/3673038.3673067 | Proceedings of the 53rd International Conference on Parallel Processing | 2024 | Wei{\ss}enberger, Andr{\'e} and Schmidt, Bertil | Massively Parallel Inverse Block-sorting Transforms for bzip2 Decompression on GPUs | inproceedings | weissenberger:2024:massively-parallel-inverse-block-sorting-transforms-bzip2-decompression-gpu | null | null | null | 10.1145/3673038.3673067 | 856--865 | null | null | null | null | null | null | null | ICPP '24 | Association for Computing Machinery | 10 | null | null | New York, NY, USA | null | null | null | null | 9798400717932 | Gotland, Sweden | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess provides an ideal testbed for evaluating the reasoning, modeling, and abstraction capabilities of large language models (LLMs), as it has well-defined structure and objective ground truth while admitting a wide spectrum of skill levels. However, existing evaluations of LLM ability in chess are ad hoc and narrow in scope, making it difficult to accurately measure LLM chess understanding and how it varies with scale, post-training methodologies, or architecture choices. We present ChessQA, a comprehensive benchmark that assesses LLM chess understanding across five task categories (Structural, Motifs, Short Tactics, Position Judgment, and Semantic), which approximately correspond to the ascending abstractions that players master as they accumulate chess knowledge, from understanding basic rules and learning tactical motifs to correctly calculating tactics, evaluating positions, and semantically describing high-level concepts. In this way, ChessQA captures a more comprehensive picture of chess ability and understanding, going significantly beyond the simple move quality evaluations done previously, and offers a controlled, consistent setting for diagnosis and comparison. Furthermore, ChessQA is inherently dynamic, with prompts, answer keys, and construction scripts that can evolve as models improve. Evaluating a range of contemporary LLMs, we find persistent weaknesses across all five categories and provide results and error analyses by category. We will release the code, periodically refreshed datasets, and a public leaderboard to support further research. 
| null | null | null | https://arxiv.org/abs/2510.23948 | null | 2025 | Qianfeng Wen and Zhenwei Tang and Ashton Anderson | ChessQA: Evaluating Large Language Models for Chess Understanding | misc | wen:2025:chessqa-evaluating-large-language-models-chess-understanding | null | null | null | null | null | null | null | null | null | submitted here: https://openreview.net/forum?id=gBz9NMbvYS | null | null | null | null | null | null | null | null | 2510.23948 | null | null | null | null | null | null | null | null | null | null | null | null | null | https://huggingface.co/datasets/wieeii/ChessQA-Benchmark | null | cs.LG | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
The idea of training Artificial Neural Networks to evaluate chess positions has been widely explored in the last ten years. In this paper we investigated dataset impact on chess position evaluation. We created two datasets with over 1.6 million unique chess positions each. In one of those we also included randomly generated positions resulting from consideration of potentially unpredictable chess moves. Each position was evaluated by the Stockfish engine. Afterwards, we created a multi-class evaluation model using a Multilayer Perceptron. The solution to the evaluation problem was tested with three different data labeling methods and three different board representations. We show that the accuracy for the model trained on the dataset without randomly generated positions is higher than for the model with such positions, for all data representations and 3, 5 and 11 evaluation classes. | chess position evaluation, deep neural network, model evaluation, accuracy | null | null | https://doi.org/10.1007/978-3-031-30442-2_32 | Parallel Processing and Applied Mathematics - 14th International Conference, {PPAM} 2022, Gdansk, Poland, September 11-14, 2022, Revised Selected Papers, Part {I} | 2022 | Dawid Wieczerzak and Pawel Czarnul | Dataset Related Experimental Investigation of Chess Position Evaluation Using a Deep Neural Network | inproceedings | wieczerzak:2022:dataset-experimental-investigation-chess-position-evaluation-neural-network | null | null | null | 10.1007/978-3-031-30442-2_32 | 429--440 | null | 13826 | null | null | null | Roman Wyrzykowski and Jack J. Dongarra and Ewa Deelman and Konrad Karczewski | null | Lecture Notes in Computer Science | Springer | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
We detail the bread emoji team's submission to the IEEE BigData 2024 Predicting Chess Puzzle Difficulty Challenge. Our solution revolved around the use of ensembled, pretrained, neural chessboard embedders (specifically, truncated Maia and Leela models) and an empirically-guided distribution rescaling postprocessing step. Our approach was the outright winner of the competition with a >16.4\% reduction in mean squared error (MSE) over second place in the preliminary evaluation and a >13.3\% reduction in MSE over second place in the final evaluation. | Training;Transfer learning;Artificial neural networks;Predictive models;Big Data;Data models;Emojis | null | https://github.com/mcognetta/ieee-chess | null | 2024 IEEE International Conference on Big Data (BigData) | 2024 | Woodruff, Tyler and Filatov, Oleg and Cognetta, Marco | The bread emoji Team's Submission to the IEEE BigData 2024 Cup: Predicting Chess Puzzle Difficulty Challenge | inproceedings | woodruff:2024:predicting-chess-puzzle-difficulty | null | null | null | 10.1109/BigData62323.2024.10826037 | 8415--8422 | null | null | null | December | null | null | null | null | null | null | null | null | null | null | null | 2573-2978 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
Estimating the difficulty of chess puzzles provides a rich testbed for studying human–computer interaction and adaptive learning. Building on recent advances and the FedCSIS 2025 Challenge, we address the task of predicting chess puzzle difficulty ratings using a multi-source representation approach. Our approach integrates pre-trained neural embeddings of board states, solution move sequences, and engine-derived success probabilities. These heterogeneous features are fused via dedicated embedding and projection layers, followed by a multi-layer perceptron regressor. Post-processing calibration and model ensemble further enhance robustness and generalization. Experiments on the FedCSIS 2025 dataset demonstrate that our method effectively leverages both structural and empirical information, achieving strong predictive performance. Our approach achieved fifth place on the final official leaderboard, highlighting the effectiveness of combining neural representations with domain-specific probabilistic features for robust chess puzzle difficulty prediction. 
| null | null | null | http://dx.doi.org/10.15439/2025F2456 | Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS) | 2025 | Haitao Xiao and Daiyuan Yu and Xuegang Wen and Le Chen and Kun Fu | Multi-Source Feature Fusion and Neural Embedding for Predicting Chess Puzzle Difficulty | inproceedings | xiao:2025:multi-source-feature-fusion-neural-embedding-predicting-chess-puzzle-difficulty | null | null | null | 10.15439/2025F2456 | 843--848 | null | 43 | null | null | null | Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak | null | Annals of Computer Science and Information Systems | IEEE | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Chess is a popular game among many people worldwide and is frequently played online. Although players are ranked based on existing rating systems, automation is essential for coordinating matches in tournaments with thousands of participants. In this study, we analyzed the game records of highly skilled chess players and determined appropriate ratings for players without ratings by using a decision tree model. It was determined that high-rated and low-rated players could be identified with accuracy levels above 80\%, even for players who were not included at the time of model training. | Chess log, player rating prediction, Training, Analytical models, Automation, Computational modeling, Games, Big Data, Decision trees, Model Training Time, Skilled Players, Chess Players, Thousands Of Participants, Process Mining, Internet Gaming, Tree Depth, Stage Of The Game | null | null | https://doi.org/10.1109/BigComp57234.2023.00066 | {IEEE} International Conference on Big Data and Smart Computing, BigComp 2023, Jeju, Republic of Korea, February 13-16, 2023 | 2023 | Habuki Yamada and Nobuko Kishi and Masato Oguchi and Miyuki Nakano | A Method for Estimating Online Chess Game Player Ratings with Decision Tree | inproceedings | yamada:2023:estimating-online-ratings-decision-tree | null | null | null | 10.1109/BIGCOMP57234.2023.00066 | 320--321 | null | null | null | null | null | Hyeran Byun and Beng Chin Ooi and Katsumi Tanaka and Sang{-}Won Lee and Zhixu Li and Akiyo Nadamoto and Giltae Song and Young{-}Guk Ha and Kazutoshi Sumiya and Yuncheng Wu and Hyuk{-}Yoon Kwon and Takehiro Yamamoto | null | null | {IEEE} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
Neurons in large language models often exhibit polysemanticity, simultaneously encoding multiple unrelated concepts and obscuring interpretability. Instead of relying on post-hoc methods, we present MoE-X, a mixture-of-experts (MoE) language model designed to be intrinsically interpretable. Our approach is motivated by the observation that, in language models, wider networks with sparse activations are more likely to capture interpretable factors. However, directly training such large sparse networks is computationally prohibitive. MoE architectures offer a scalable alternative by activating only a subset of experts for any given input, inherently aligning with interpretability objectives. In MoE-X, we establish this connection by rewriting the MoE layer as an equivalent sparse, large MLP. This approach enables efficient scaling of the hidden size while maintaining sparsity. To further enhance interpretability, we enforce sparse activation within each expert and redesign the routing mechanism to prioritize experts with the highest activation sparsity. These designs ensure that only the most salient features are routed and processed by the experts. We evaluate MoE-X on chess and natural language tasks, showing that it achieves performance comparable to dense models while significantly improving interpretability. MoE-X achieves a perplexity better than GPT-2, with interpretability surpassing even sparse autoencoder (SAE)-based approaches. | Mixture of Expert; Interpretability; Polysemanticity | null | null | https://openreview.net/forum?id=6QERrXMLP2 | Forty-second International Conference on Machine Learning | 2025 | Xingyi Yang and Constantin Venhoff and Ashkan Khakzar and Christian Schroeder de Witt and Puneet K. Dokania and Adel Bibi and Philip Torr | Mixture of Experts Made Intrinsically Interpretable | inproceedings | yang:2025:mixture-experts-intrinsically-interpretable | null | null | null | null | null | null | null | null | null | https://proceedings.mlr.press/v267/yang25ag.html | null | https://icml.cc/virtual/2025/poster/46377 | null | null | null | null | null | null | null | null | null | null | null | null | We present MoE-X a mixture-of-experts (MoE) language model designed to be intrinsically interpretable. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
In the post-AlphaGo era, there has been a renewed interest in search techniques such as Monte Carlo Tree Search (MCTS), particularly in their application to Large Language Models (LLMs). This renewed attention is driven by the recognition that current next-token prediction models often lack the ability for long-term planning. Is it possible to instill search-like abilities within the models to enhance their planning abilities without relying on explicit search? We propose DiffuSearch, a model that does \textit{implicit search} by looking into the future world via discrete diffusion modeling. We instantiate DiffuSearch on a classical board game, Chess, where explicit search is known to be essential. Through extensive controlled experiments, we show DiffuSearch outperforms both the searchless and explicit search-enhanced policies. Specifically, DiffuSearch outperforms the one-step policy by 19.2\% and the MCTS-enhanced policy by 14\% on action accuracy. Furthermore, DiffuSearch demonstrates a notable 30\% enhancement in puzzle-solving abilities compared to explicit search-based policies, along with a significant 540 Elo increase in game-playing strength assessment. These results indicate that implicit search via discrete diffusion is a viable alternative to explicit search over a one-step policy. All code is publicly available at \href{https://github.com/HKUNLP/DiffuSearch}{https://github.com/HKUNLP/DiffuSearch}. 
| discrete diffusion model, search, planning, chess, MCTS | https://huggingface.co/datasets/jiacheng-ye/chess10k | https://github.com/HKUNLP/DiffuSearch | https://openreview.net/forum?id=A9y3LFX4ds | The Thirteenth International Conference on Learning Representations, {ICLR} 2025, Singapore, April 24-28, 2025 | 2025 | Jiacheng Ye and Zhenyu Wu and Jiahui Gao and Zhiyong Wu and Xin Jiang and Zhenguo Li and Lingpeng Kong | Implicit Search via Discrete Diffusion: {A} Study on Chess | inproceedings | ye:2025:implicit-search-discrete-diffusion-study-chess | null | null | null | null | null | null | null | null | null | null | null | null | null | OpenReview.net | null | null | null | null | null | null | null | null | null | null | We propose a model that does implicit search by looking into the future world via discrete diffusion modeling. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
Online chess has opened up a way for players to sharpen their skills through move analytics. Based on this feature, a support system called a chess advisor can be utilized to assist players in a real-time match. However, such a system doesn't exist within the website itself; rather, the involvement of 3rd-party software is required, and such software usually needs assistance with input. This paper aims to create a system that is able to recognize the chessboard state in real time. We have proposed a chess piece image recognition system using transfer learning with ResNet50, a simple neural network, and a convolutional neural network (CNN). We collected our own dataset of 693 2D chess piece images and performed data preprocessing to standardize the image dimensions. The proposed system detects and converts the individual position of each piece into a computer-readable output. This output can be used to facilitate the chess advisor in determining the best move. The transfer learning approach utilizing the ResNet50 model exhibited superior performance with an accuracy of 95.65\%. This outperformed alternative methods such as the simple neural network and the CNN. The ResNet50 model's advantage stemmed from its pre-training on the ImageNet dataset, enabling effective pattern generalization. In contrast, the simple neural network and CNN achieved lower accuracies of 73.4\% and 85\%. These models encountered difficulties due to limited dataset quantity and quality, as well as the absence of regularization techniques. Nevertheless, the CNN displayed better consistency in generalizing patterns compared to the simple neural network. 
| Training;Image recognition;Computational modeling;Transfer learning;Neural networks;Software;Real-time systems;Chess Pieces;CNN;Transfer Learning;Simple Neural Network;Image Recognition | null | null | null | 2023 4th International Conference on Artificial Intelligence and Data Sciences (AiDAS) | 2023 | Yohanes, Gabriel and Nursalim, Mario and Nicholas and Kurniadi, Felix Indra | Chess Piece Image Recognition Using Transfer Learning, Simple Neural Network, and Convolutional Neural Network | inproceedings | yohanes:2023:chess-piece-image-recognition-nn-cnn | null | null | null | 10.1109/AiDAS60501.2023.10284718 | 160--164 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
The complete connectome of the Drosophila larva brain offers a unique opportunity to investigate whether biologically evolved circuits can support artificial intelligence. We convert this wiring diagram into a Biological Processing Unit (BPU)---a fixed recurrent network derived directly from synaptic connectivity. Despite its modest size (3,000 neurons and 65,000 weights between them), the unmodified BPU achieves 98\% accuracy on MNIST and 58\% on CIFAR-10, surpassing size-matched MLPs. Scaling the BPU via structured connectome expansions further improves CIFAR-10 performance, while modality-specific ablations reveal the uneven contributions of different sensory subsystems. On the ChessBench dataset, a lightweight GNN-BPU model trained on only 10,000 games achieves 60\% move accuracy, nearly 10x better than any size transformer. Moreover, CNN-BPU models with $\sim$2M parameters outperform parameter-matched Transformers, and with a depth-6 minimax search at inference, reach 91.7\% accuracy, exceeding even a 9M-parameter Transformer baseline. These results demonstrate the potential of biofidelic neural architectures to support complex cognitive tasks and motivate scaling to larger and more intelligent connectomes in future work. | biological inspired AI, biological connectome, chess | null | null | null | Artificial General Intelligence | 2026 | Yu, Siyu and Qin, Zihan and Liu, Tingshan and Xu, Beiya and Vogelstein, R. Jacob and Brown, Jason and Vogelstein, Joshua T. | Biological Processing Units: Leveraging an Insect Connectome to~Pioneer Biofidelic Neural Architectures | inproceedings | yu:2026:biological-processing-units-leveraging-insect-connectome-pioneer-biofidelic-neural-architectures | null | null | https://arxiv.org/pdf/2507.10951 | null | 361--369 | null | null | null | null | null | Ikl{\'e}, Matthew and Kolonin, Anton and Bennett, Michael | null | null | Springer Nature Switzerland | null | null | null | Cham | null | null | null | null | 978-3-032-00800-8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null
In recent years, Artificial Intelligence (AI) systems have surpassed human intelligence in a variety of computational tasks. However, AI systems, like humans, make mistakes, have blind spots, hallucinate, and struggle to generalize to new situations. This work explores whether AI can benefit from creative decision-making mechanisms when pushed to the limits of its computational rationality. In particular, we investigate whether a team of diverse AI systems can outperform a single AI in challenging tasks by generating more ideas as a group and then selecting the best ones. We study this question in the game of chess, the so-called drosophila of AI. We build on AlphaZero (AZ) and extend it to represent a league of agents via a latent-conditioned architecture, which we call AZ\_db. We train AZ\_db to generate a wider range of ideas using behavioral diversity techniques and select the most promising ones with sub-additive planning. Our experiments suggest that AZ\_db plays chess in diverse ways, solves more puzzles as a group and outperforms a more homogeneous team. Notably, AZ\_db solves twice as many challenging puzzles as AZ, including the challenging Penrose positions. When playing chess from different openings, we notice that players in AZ\_db specialize in different openings, and that selecting a player for each opening using sub-additive planning results in a 50 Elo improvement over AZ. Our findings suggest that diversity bonuses emerge in teams of AI agents, just as they do in teams of humans and that diversity is a valuable asset in solving computationally hard problems. 
| null | null | null | https://doi.org/10.48550/arXiv.2308.09175 | null | 2023 | Tom Zahavy and Vivek Veeriah and Shaobo Hou and Kevin Waugh and Matthew Lai and Edouard Leurent and Nenad Tomasev and Lisa Schut and Demis Hassabis and Satinder Singh | Diversifying {AI:} Towards Creative Chess with AlphaZero | article | zahavy:2023:diversifying-ai-towards-creative-chess-alphazero | null | null | null | 10.48550/ARXIV.2308.09175 | null | null | abs/2308.09175 | CoRR | null | null | null | null | null | null | null | null | null | null | 2308.09175 | arXiv | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
AI research in chess has been primarily focused on producing stronger agents that can maximize the probability of winning. However, there is another aspect to chess that has largely gone unexamined: its aesthetic appeal. Specifically, there exists a category of chess moves called ``brilliant'' moves. These moves are appreciated and admired by players for their high intellectual aesthetics. We demonstrate the first system for classifying chess moves as brilliant. The system uses a neural network, using the output of a chess engine as well as features that describe the shape of the game tree. The system achieves an accuracy of 79\% (with 50\% base-rate), a PPV of 83\%, and an NPV of 75\%. We demonstrate that what humans perceive as ``brilliant'' moves is not merely the best possible move. We show that a move is more likely to be predicted as brilliant, all things being equal, if a weaker engine considers it lower-quality (for the same rating by a stronger engine). Our system opens the avenues for computer chess engines to (appear to) display human-like brilliance, and, hence, creativity. | null | null | https://github.com/kamronzaidi/brilliant-moves-clf | https://computationalcreativity.net/iccc24/papers/ICCC24_paper_200.pdf | Proceedings of the 15th International Conference on Computational Creativity | 2024 | Zaidi, Kamron and Guerzhoy, Michael | Predicting User Perception of Move Brilliance in Chess | inproceedings | zaidi:2024:predicting-user-perception-move-brilliance-chess | null | null | https://computationalcreativity.net/iccc24/papers/ICCC24_paper_200.pdf | null | 423--427 | null | null | null | null | null | Grace, Kazjon and Llano, Maria Teresa and Martins, Pedro and Hedblom, Maria M. | null | null | Association for Computational Creativity | null | null | null | J{\"o}nk{\"o}ping, Sweden | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null