CIG 2012: Granada, Spain
- 2012 IEEE Conference on Computational Intelligence and Games, CIG 2012, Granada, Spain, September 11-14, 2012. IEEE 2012, ISBN 978-1-4673-1193-9
- Jeff Orkin, Gillian Smith, Michael Bowling:
Keynotes [abstracts of three keynote presentations]. - Mark J. Nelson, Paolo Burelli, Kostas Karpouzis, Simon M. Lucas, Peter I. Cowling:
Tutorials.
Papers
- Pablo J. Villacorta, Luis Quesada, David A. Pelta:
Automatic design of deterministic sequences of decisions for a repeated imitation game with action-state dependency. 1-8 - Garrison W. Greenwood, Phillipa M. Avery:
Update rules, reciprocity and weak selection in evolutionary spatial games. 9-16 - Daniel A. Ashlock, Elizabeth Knowles:
Deck-based prisoner's dilemma. 17-24 - Mark Wittkamp, Luigi Barone, Philip Hingston, Lyndon While:
Noise tolerance for real-time evolutionary learning of cooperative predator-prey strategies. 25-32 - Daniel A. Ashlock, Wendy Ashlock, Spyridon Samothrakis, Simon M. Lucas, Colin Lee:
From competition to cooperation: Co-evolution in a rewards continuum. 33-40 - Jacques Basaldua, J. Marcos Moreno-Vega:
Win/loss States: An efficient model of success rates for simulation-based functions. 41-46 - Amit Benbassat, Moshe Sipper:
Evolving both search and strategy for Reversi players using genetic programming. 47-54 - Laurentiu Ilici, Jiaojian Wang, Olana Missura, Thomas Gärtner:
Dynamic difficulty for checkers and Chinese chess. 55-62 - Athanasios Papadopoulos, Konstantinos Toumpas, Anthony C. Chrysopoulos, Pericles A. Mitkas:
Exploring optimization strategies in board game Abalone for Alpha-Beta search. 63-70 - Kokolo Ikeda, Daisuke Tomizawa, Simon Viennot, Yuu Tanaka:
Playing PuyoPuyo: Two search algorithms for constructing chain and tactical heuristics. 71-78 - Thomas Philip Runarsson, Simon M. Lucas:
Imitating play from game trajectories: Temporal difference learning versus preference learning. 79-82 - Majed Alhajry, Faisal Alvi, Moataz A. Ahmed:
TD(λ) and Q-learning based Ludo players. 83-90 - Martin Wistuba, Lars Schaefers, Marco Platzner:
Comparison of Bayesian move prediction systems for Computer Go. 91-99 - Bulent Tastan, Yuan Chang, Gita Sukthankar:
Learning to intercept opponents in first person shooter games. 100-107 - Matteo Botta, Vincenzo Gautieri, Daniele Loiacono, Pier Luca Lanzi:
Evolving the optimal racing line in a high-end racing game. 108-115 - Christos Athanasiadis, Damianos Galanopoulos, Anastasios Tefas:
Progressive neural network training for the Open Racing Car Simulator. 116-123 - Casey Rosenthal, Clare Bates Congdon:
Personality profiles for generating believable bot behaviors. 124-131 - Christian Bauckhage, Kristian Kersting, Rafet Sifa, Christian Thurau, Anders Drachen, Alessandro Canossa:
How players lose interest in playing a game: An empirical study based on distributions of total playing times. 139-146 - Michelle McPartland, Marcus Gallagher:
Interactively training first person shooter bots. 132-138 - Wei Gong, Ee-Peng Lim, Palakorn Achananuparp, Feida Zhu, David Lo, Freddy Chong Tat Chua:
In-game action list segmentation and labeling in real-time strategy games. 147-154 - Marlos C. Machado, Gisele L. Pappa, Luiz Chaimowicz:
A binary classification approach for automatic preference modeling of virtual agents in Civilization IV. 155-162 - Anders Drachen, Rafet Sifa, Christian Bauckhage, Christian Thurau:
Guns, swords and data: Clustering of player behavior in computer games in the wild. 163-170 - Tróndur Justinussen, Peter Hald Rasmussen, Alessandro Canossa, Julian Togelius:
Resource systems in games: An analytical approach. 171-178 - Michele Pirovano, Renato Mainetti, Gabriel Baud-Bovy, Pier Luca Lanzi, N. Alberto Borghese:
Self-adaptive games for rehabilitation at home. 179-186 - Sylvain Cussat-Blanc, Stéphane Sanchez, Yves Duthen:
Controlling cooperative and conflicting continuous actions with a Gene Regulatory Network. 187-194 - Luís Peña, Sascha Ossowski, José María Peña Sánchez, Simon M. Lucas:
Learning and evolving combat game controllers. 195-202 - Timothy Davison, Jörg Denzinger:
The huddle: Combining AI techniques to coordinate a player's game characters. 203-210 - Diego Perez Liebana, Philipp Rohlfshagen, Simon M. Lucas:
Monte Carlo Tree Search: Long-term versus short-term planning. 219-226 - Hendrik Baier, Mark H. M. Winands:
Beam Monte-Carlo Tree Search. 227-233 - Edward Jack Powley, Daniel Whitehouse, Peter I. Cowling:
Monte Carlo Tree Search with macro-actions and heuristic route planning for the Physical Travelling Salesman Problem. 234-241 - Pierre Perick, David Lupien St-Pierre, Francis Maes, Damien Ernst:
Comparison of different selection strategies in Monte-Carlo Tree Search for the game of Tron. 242-249 - Matthias F. Brandstetter, Samad Ahmadi:
Reactive control of Ms. Pac Man using information retrieval based on Genetic Programming. 250-256 - Johan Svensson, Stefan J. Johansson:
Influence Map-based controllers for Ms. PacMan and the ghosts. 257-264 - Tom Pepels, Mark H. M. Winands:
Enhancements for Monte-Carlo Tree Search in Ms Pac-Man. 265-272 - David J. Gagne, Clare Bates Congdon:
FRIGHT: A flexible rule-based intelligent ghost team for Ms. Pac-Man. 273-280 - Greg Foderaro, Ashleigh Swingler, Silvia Ferrari:
A model-based cell decomposition approach to on-line pursuit-evasion path planning and the video game Ms. Pac-Man. 281-287 - Marie Gustafsson Friberger, Julian Togelius:
Generating interesting Monopoly boards from open data. 288-295 - Cameron Browne, Simon Colton:
Computational creativity in a closed game system. 296-303 - Noor Shaker, Miguel Nicolau, Georgios N. Yannakakis, Julian Togelius, Michael O'Neill:
Evolving levels for Super Mario Bros using grammatical evolution. 304-311 - Cameron McGuinness:
Statistical analyses of representation choice in level generation. 312-319 - Annika Jordan, Dimitri Scheftelowitsch, Jan Lahni, Jannic Hartwecker, Matthias Kuchem, Mirko Walter-Huber, Nils Vortmeier, Tim Delbrügger, Ümit Güler, Igor Vatolkin, Mike Preuss:
BeatTheBeat music-based procedural content generation in a mobile game. 320-327 - Isaac M. Dart, Mark J. Nelson:
Smart terrain causality chains for adventure-game puzzle generation. 328-334 - Manuel Kerssemakers, Jeppe Tuxen, Julian Togelius, Georgios N. Yannakakis:
A procedural procedural level generator generator. 335-341 - Samuel A. Roberts, Simon M. Lucas:
Evolving spaceship designs for optimal control and the emergence of interesting behaviour. 342-349 - Miguel Frade, Francisco Fernández de Vega, Carlos Cotta:
Aesthetic Terrain Programs database for creativity assessment. 350-354 - Anja Johansson, Pierangelo Dell'Acqua:
Emotional behavior trees. 355-362 - Reid Swanson, Dustin Escoffery, Arnav Jhala:
Learning visual composition preferences from an annotated corpus generated through gameplay. 363-370 - Dario Maggiorini, Antonio Nigro, Laura Anna Ripamonti, Marco Trubian:
The Perfect Looting System: Looking for a Phoenix? 371-378 - Wichit Sombat, Philipp Rohlfshagen, Simon M. Lucas:
Evaluating the enjoyability of the ghosts in Ms Pac-Man. 379-387 - Johan Hagelbäck:
Potential-field based navigation in StarCraft. 388-393 - Nasri Bin Othman, James Decraene, Wentong Cai, Nan Hu, Malcolm Yoke Hean Low, Alexandre Gouaillard:
Simulation-based optimization of StarCraft tactical AI through evolutionary computation. 394-401 - Stefan Wender, Ian D. Watson:
Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft: Broodwar. 402-408 - Gabriel Synnaeve, Pierre Bessière:
Special tactics: A Bayesian approach to tactical decision-making. 409-416 - Antonio Fernández-Ares, Pablo García-Sánchez, Antonio Miguel Mora, Juan Julián Merelo Guervós:
Adaptive bots for real-time strategy games via map characterization. 417-423 - Quentin Gemine, Firas Safadi, Raphael Fonteneau, Damien Ernst:
Imitative learning for real-time strategy games. 424-429 - Jason M. Traish, James R. Tulip:
Towards adaptive online RTS AI with NEAT. 430-437 - Jay Young, Fran Smith, Christopher Atkinson, Ken Poyner, Tom Chothia:
SCAIL: An integrated Starcraft AI system. 438-445