Gaussian Processes for Machine Learning

The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. It contains illustrative examples and exercises, and code and datasets are available on the Web.

Tag(s): Machine Learning

Publication date: 31 Dec 2006

ISBN-10: 026218253X

ISBN-13: 9780262182539

Paperback: 266 pages

Views: 13,822

Type: N/A

Publisher: The MIT Press

License: n/a

Post time: 04 Oct 2007 09:51:04

This book was suggested by Bernd Gutmann.

Terms and Conditions:
Carl Edward Rasmussen wrote: © 2006 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

Excerpts from the Introduction:

Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.

The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
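
For readers who want a concrete feel for the kind of material covered, here is a minimal NumPy sketch of exact GP regression with a squared-exponential covariance function. This is not the book's accompanying code (that is what is available on the Web); the kernel choice, hyperparameter values and toy data below are illustrative assumptions only.

[code]
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) covariance between the rows of A and B."""
    sqdist = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_predict(X, y, X_star, noise_var=0.01):
    """Exact GP regression: predictive mean and variance at the test inputs X_star."""
    K = sq_exp_kernel(X, X)                     # covariance of training inputs
    K_s = sq_exp_kernel(X, X_star)              # train/test cross-covariance
    K_ss = sq_exp_kernel(X_star, X_star)        # covariance of test inputs
    # Cholesky factorisation of (K + sigma_n^2 I) for numerically stable solves
    L = np.linalg.cholesky(K + noise_var * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # (K + sigma_n^2 I)^{-1} y
    mean = K_s.T @ alpha                        # predictive mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)  # predictive variance of the latent function
    return mean, var

# Toy usage: noisy observations of a sine function
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)
X_star = np.linspace(-3.0, 3.0, 100).reshape(-1, 1)
mean, var = gp_predict(X, y, X_star)
[/code]

The Cholesky-based solve is the standard numerically stable way to evaluate the GP predictive equations; for model selection one would additionally compute the log marginal likelihood and optimise the kernel hyperparameters, topics the book discusses from both Bayesian and classical perspectives.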

Intended Audience:

The book is primarily intended for graduate students and researchers in machine learning at departments of Computer Science, Statistics and Applied Mathematics. As prerequisites we require a good basic grounding in calculus, linear algebra and probability theory as would be obtained by graduates in numerate disciplines such as electrical engineering, physics and computer science. For preparation in calculus and linear algebra any good university-level textbook on mathematics for physics or engineering such as Arfken [1985] would be fine. For probability theory some familiarity with multivariate distributions (especially the Gaussian) and conditional probability is required. Some background mathematical material is also provided in Appendix A.
 




About The Author(s)

Carl Edward Rasmussen

Carl Edward Rasmussen is a Reader in Information Engineering at the Department of Engineering, University of Cambridge, and an Adjunct Research Scientist at the Max Planck Institute for Biological Cybernetics, Tübingen.


Christopher K. I. Williams

Chris Williams is Professor of Machine Learning and Director of the Institute for Adaptive and Neural Computation in the School of Informatics, University of Edinburgh.

