12th ICML 1995: Tahoe City, California, USA
- Armand Prieditis, Stuart Russell:
  Machine Learning, Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, California, USA, July 9-12, 1995. Morgan Kaufmann 1995, ISBN 1-55860-377-8
Contributed Papers
- Naoki Abe, Hang Li, Atsuyoshi Nakamura:
  On-line Learning of Binary Lexical Relations Using Two-dimensional Weighted Majority Algorithms. 3-11
- Hussein Almuallim, Yasuhiro Akiba, Shigeo Kaneda:
  On Handling Tree-Structured Attributes in Decision Tree Learning. 12-20
- Peter Auer, Robert C. Holte, Wolfgang Maass:
  Theory and Applications of Agnostic PAC-Learning with Small Decision Trees. 21-29
- Leemon C. Baird III:
  Residual Algorithms: Reinforcement Learning with Function Approximation. 30-37
- Shumeet Baluja, Rich Caruana:
  Removing the Genetics from the Standard Genetic Algorithm. 38-46
- Scott Benson:
  Inductive Learning of Reactive Action Models. 47-54
- Justine Blackmore, Risto Miikkulainen:
  Visualizing High-Dimensional Structure with the Incremental Grid Growing Neural Network. 55-63
- Avrim Blum:
  Empirical Support for Winnow and Weighted-Majority Based Algorithms: Results on a Calendar Scheduling Domain. 64-72
- Carla E. Brodley:
  Automatic Selection of Split Criterion during Tree Growing Based on Node Location. 73-80
- Clifford Brunk, Michael J. Pazzani:
  A Lexical Based Semantic Bias for Theory Revision. 81-89
- Philip K. Chan, Salvatore J. Stolfo:
  A Comparative Evaluation of Voting and Meta-learning on Partitioned Data. 90-98
- Pawel Cichosz, Jan J. Mulawka:
  Fast and Efficient Reinforcement Learning with Truncated Temporal Differences. 99-107
- John G. Cleary, Leonard E. Trigg:
  K*: An Instance-based Learner Using an Entropic Distance Measure. 108-114
- William W. Cohen:
  Fast Effective Rule Induction. 115-123
- William W. Cohen:
  Text Categorization and Relational Learning. 124-132
- Susan Craw, Paul Hutton:
  Protein Folding: Symbolic Refinement Competes with Neural Networks. 133-141
- James Cussens:
  A Bayesian Analysis of Algorithms for Learning Finite Functions. 142-149
- Ido Dagan, Sean P. Engelson:
  Committee-Based Sampling For Training Probabilistic Classifiers. 150-157
- Piew Datta, Dennis F. Kibler:
  Learning Prototypical Concept Descriptions. 158-166
- Gerald DeJong:
  A Case Study of Explanation-Based Control. 167-175
- Thomas G. Dietterich, Nicholas S. Flann:
  Explanation-Based Learning and Reinforcement Learning: A Unified View. 176-184
- Steven K. Donoho, Larry A. Rendell:
  Lessons from Theory Revision Applied to Constructive Induction. 185-193
- James Dougherty, Ron Kohavi, Mehran Sahami:
  Supervised and Unsupervised Discretization of Continuous Features. 194-202
- John A. Drakopoulos:
  Bounds on the Classification Error of the Nearest Neighbor Rule. 203-208
- Michael O. Duff:
  Q-Learning for Bandit Problems. 209-217
- Sean P. Engelson, Moshe Koppel:
  Distilling Reliable Information From Unreliable Theories. 218-225
- Philip W. L. Fong:
  A Quantitative Study of Hypothesis Selection. 226-234
- Matthias Fuchs:
  Learning Proof Heuristics by Adaptive Parameters. 235-243
- Truxton Fulton, Simon Kasif, Steven Salzberg:
  Efficient Algorithms for Finding Multi-way Splits for Decision Trees. 244-251
- Luca Maria Gambardella, Marco Dorigo:
  Ant-Q: A Reinforcement Learning Approach to the Traveling Salesman Problem. 252-260
- Geoffrey J. Gordon:
  Stable Function Approximation in Dynamic Programming. 261-268
- Russell Greiner:
  The Challenge of Revising an Impure Theory. 269-277
- Jukka Hekanaho:
  Symbiosis in Multimodal Concept Learning. 278-285
- Mark Herbster, Manfred K. Warmuth:
  Tracking the Best Expert. 286-294
- Hajime Kimura, Masayuki Yamamura, Shigenobu Kobayashi:
  Reinforcement Learning by Stochastic Hill Climbing on Discounted Reward. 295-303
- Ron Kohavi, George H. John:
  Automatic Parameter Selection by Minimizing Estimated Error. 304-312
- Eun Bae Kong, Thomas G. Dietterich:
  Error-Correcting Output Coding Corrects Bias and Variance. 313-321
- P. Krishnan, Philip M. Long, Jeffrey Scott Vitter:
  Learning to Make Rent-to-Buy Decisions with Systems Applications. 322-330
- Ken Lang:
  NewsWeeder: Learning to Filter Netnews. 331-339
- Kevin J. Lang:
  Hill Climbing Beats Genetic Search on a Boolean Circuit Synthesis Problem of Koza's. 340-343
- Pat Langley, Karl Pfleger:
  Case-Based Acquisition of Place Knowledge. 344-352
- Nick Littlestone:
  Comparing Several Linear-threshold Learning Algorithms on Tasks Involving Superfluous Attributes. 353-361
- Michael L. Littman, Anthony R. Cassandra, Leslie Pack Kaelbling:
  Learning Policies for Partially Observable Environments: Scaling Up. 362-370
- David J. Lubinsky:
  Increasing the Performance and Consistency of Classification Trees by Using the Accuracy Criterion at the Leaves. 371-377
- Wolfgang Maass, Manfred K. Warmuth:
  Efficient Learning with Virtual Threshold Gates. 378-386
- R. Andrew McCallum:
  Instance-Based Utile Distinctions for Reinforcement Learning with Hidden State. 387-395
- David E. Moriarty, Risto Miikkulainen:
  Efficient Learning from Delayed Rewards through Symbiotic Evolution. 396-404
- Partha Niyogi:
  Free to Choose: Investigating the Sample Complexity of Active Learning of Real Valued Functions. 405-412
- Richard Nock, Olivier Gascuel:
  On Learning Decision Committees. 413-420
- Arlindo L. Oliveira, Alberto L. Sangiovanni-Vincentelli:
  Inferring Reduced Ordered Decision Graphs of Minimum Description Length. 421-429
- Jonathan J. Oliver, David J. Hand:
  On Pruning and Averaging Decision Trees. 430-437
- Jing Peng:
  Efficient Memory-Based Dynamic Programming. 438-446
- Eduardo Pérez, Larry A. Rendell:
  Using Multidimensional Projection to Find Relations. 447-455
- Bernhard Pfahringer:
  Compression-Based Discretization of Continuous Attributes. 456-463
- J. Ross Quinlan:
  MDL and Categorical Theories (Continued). 464-470
- R. Bharat Rao, Diana F. Gordon, William M. Spears:
  For Every Generalization Action, Is There Really an Equal and Opposite Reaction? 471-479
- Marcos Salganicoff, Lyle H. Ungar:
  Active Exploration and Learning in Real-Valued Spaces Using Multi-Armed Bandit Allocation Indices. 480-487
- Jürgen Schmidhuber:
  Discovering Solutions with Low Kolmogorov Complexity and High Generalization Capability. 488-496
- Moninder Singh, Gregory M. Provan:
  A Comparison of Induction Algorithms for Selective and Non-Selective Bayesian Classifiers. 497-505
- Padhraic Smyth, Alexander G. Gray, Usama M. Fayyad:
  Retrofitting Decision Tree Classifiers Using Kernel Density Estimation. 506-514
- Brett Squires, Claude Sammut:
  Automatic Speaker Recognition: An Application of Machine Learning. 515-521
- W. Nick Street, Olvi L. Mangasarian, William H. Wolberg:
  An Inductive Learning Approach to Prognostic Prediction. 522-530
- Richard S. Sutton:
  TD Models: Modeling the World at a Mixture of Time Scales. 531-539
- Geoffrey G. Towell, Ellen M. Voorhees, Narendra Kumar Gupta, Ben Johnson-Laird:
  Learning Collection Fusion Strategies for Information Retrieval. 540-548
- Xuemei Wang:
  Learning by Observation and Practice: An Incremental Approach for Planning Operator Acquisition. 549-557
- Gary M. Weiss:
  Learning with Rare Cases and Small Disjuncts. 558-565
- David Wolpert:
  Horizonal Generalization. 566-574
- Takefumi Yamazaki, Michael J. Pazzani, Christopher J. Merz:
  Learning Hierarchies from Ambiguous Natural Language Data. 575-583
Invited Talks
- W. Bruce Croft:
  Machine Learning and Information Retrieval (Abstract). 587
- David Heckerman:
  Learning With Bayesian Networks (Abstract). 588
- Dean Pomerleau:
  Learning for Automotive Collision Avoidance and Autonomous Control. 589