22. ALT 2011: Espoo, Finland
- Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, Thomas Zeugmann:
Algorithmic Learning Theory - 22nd International Conference, ALT 2011, Espoo, Finland, October 5-7, 2011. Proceedings. Lecture Notes in Computer Science 6925, Springer 2011, ISBN 978-3-642-24411-7
Editors' Introduction
- Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, Thomas Zeugmann:
Editors' Introduction. 1-13
Invited Papers
- Peter Auer, Shiau Hong Lim, Chris Watkins:
Models for Autonomously Motivated Exploration in Reinforcement Learning - (Extended Abstract). 14-17
- Yoshua Bengio, Olivier Delalleau:
On the Expressive Power of Deep Architectures. 18-36
- Jorma Rissanen:
Optimal Estimation. 37
- Eyke Hüllermeier, Johannes Fürnkranz:
Learning from Label Preferences. 38
- Ming Li:
Information Distance and Its Extensions. 39
Inductive Inference
- Timo Kötzing:
Iterative Learning from Positive Data and Counters. 40-54
- Sanjay Jain, Eric Martin, Frank Stephan:
Robust Learning of Automatic Classes of Languages. 55-69
- Sanjay Jain, Eric Martin, Frank Stephan:
Learning and Classifying. 70-83
- Michael Geilke, Sandra Zilles:
Learning Relational Patterns. 84-98
Regression
- Sébastien Gerchinovitz, Jia Yuan Yu:
Adaptive and Optimal Online Linear Regression on ℓ1-Balls. 99-113
- Nina Vaits, Koby Crammer:
Re-adapting the Regularization of Weights for Non-stationary Regression. 114-128
- Arnak S. Dalalyan, Joseph Salmon:
Competing against the Best Nearest Neighbor Filter in Regression. 129-143
Bandit Problems
- Sébastien Bubeck, Gilles Stoltz, Jia Yuan Yu:
Lipschitz Bandits without the Lipschitz Constant. 144-158
- Antoine Salomon, Jean-Yves Audibert:
Deviations of Stochastic Bandit Regret. 159-173
- Aurélien Garivier, Eric Moulines:
On Upper-Confidence Bound Policies for Switching Bandit Problems. 174-188
- Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, Peter Auer:
Upper-Confidence-Bound Algorithms for Active Learning in Multi-armed Bandits. 189-203
Online Learning
- Constantinos Panagiotakopoulos, Petroula Tsampouka:
The Perceptron with Dynamic Margin. 204-218
- Manfred K. Warmuth, Wouter M. Koolen, David P. Helmbold:
Combining Initial Segments of Lists. 219-233
- Eyal Gofer, Yishay Mansour:
Regret Minimization Algorithms for Pricing Lookback Options. 234-248
- Chi-Jen Lu, Wei-Fu Lu:
Making Online Decisions with Bounded Memory. 249-261
- Tor Lattimore, Marcus Hutter, Vaibhav Gavane:
Universal Prediction of Selected Bits. 262-276
- Brendan Juba, Santosh S. Vempala:
Semantic Communication for Simple Goals Is Equivalent to On-line Learning. 277-291
Kernel and Margin Based Methods
- Xinhua Zhang, Ankan Saha, S. V. N. Vishwanathan:
Accelerated Training of Max-Margin Markov Networks with Kernels. 292-307
- Corinna Cortes, Mehryar Mohri:
Domain Adaptation in Regression. 308-323
- Daiki Suehiro, Kohei Hatano, Eiji Takimoto:
Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization. 324-337
Intelligent Agents
- Peter Sunehag, Marcus Hutter:
Axioms for Rational Reinforcement Learning. 338-352
- Laurent Orseau:
Universal Knowledge-Seeking Agents. 353-367
- Tor Lattimore, Marcus Hutter:
Asymptotically Optimal Agents. 368-382
- Tor Lattimore, Marcus Hutter:
Time Consistent Discounting. 383-397
Other Learning Models
- Anna Kasprzik, Ryo Yoshinaka:
Distributional Learning of Simple Context-Free Tree Grammars. 398-412
- Elena Grigorescu, Lev Reyzin, Santosh S. Vempala:
On Noise-Tolerant Learning of Sparse Parities and Related Problems. 413-424
- Malte Darnstädt, Hans Ulrich Simon, Balázs Szörényi:
Supervised Learning and Co-training. 425-439
- Shalev Ben-David, Shai Ben-David:
Learning a Classifier when the Labeling Is Known. 440-451
Erratum
- Samuel E. Moelius, Sandra Zilles:
Erratum: Learning without Coding. 452