Computational Learning Theory

by Anthony, Martin; Biggs, Norman
Format: Paperback
Pub. Date: 1997-06-01
Publisher(s): Cambridge University Press
This Item Qualifies for Free Shipping!*

*Excludes marketplace orders.

List Price: $56.69

Buy New

Arriving Soon. Will ship when available.
$53.99

Rent Textbook

Select for Price

Used Textbook

We're Sorry
Sold Out

eTextbook

We're Sorry
Not Available

How Marketplace Works:

  • This item is offered by an independent seller and is not shipped from our warehouse.
  • Item details like edition and cover design may differ from our description; see seller's comments before ordering.
  • Sellers must confirm and ship within two business days; otherwise, the order will be cancelled and refunded.
  • Marketplace purchases cannot be returned to eCampus.com. Contact the seller directly for inquiries; if no response within two days, contact customer service.
  • Additional shipping costs apply to Marketplace purchases. Review shipping costs at checkout.

Summary

Computational learning theory is a subject that has been advancing rapidly in recent years. The authors concentrate on the probably approximately correct (PAC) model of learning and gradually develop considerations of computational efficiency. Finally, applications of the theory to artificial neural networks are considered. Many exercises are included throughout, and the list of references is extensive. The volume is relatively self-contained, as the necessary background material from logic, probability, and complexity theory is included. It will therefore serve as an introduction to computational learning theory suitable for a broad spectrum of graduate students in theoretical computer science and mathematics.
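
For orientation, the PAC model mentioned above can be stated roughly as follows (a standard textbook formulation, not quoted from this book, and the notation may differ from the authors'): a learning algorithm L is PAC for a concept class if, for every accuracy and confidence pair 0 < ε, δ < 1, there is a sample size m₀(ε, δ) such that for every target concept t in the class and every distribution D on the examples, a sample s of m ≥ m₀(ε, δ) labelled examples drawn independently from D satisfies

    \Pr\big[\,\operatorname{er}_D(L(s)) \le \varepsilon\,\big] \ge 1 - \delta,
    \qquad\text{where}\quad
    \operatorname{er}_D(h) = \Pr_{x \sim D}\big[\,h(x) \ne t(x)\,\big].

That is, with high probability (at least 1 − δ) the output hypothesis is approximately correct (error at most ε) with respect to the same distribution that generated the training sample.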

Table of Contents

Notation
Chapter 1: Concepts, Hypotheses, Learning Algorithms 1(8)
1.1 Introduction 1(1)
1.2 Concepts 2(1)
1.3 Training and Learning 3(2)
1.4 Learning by Construction 5(1)
1.5 Learning by Enumeration 6(1)
Further Remarks 7(1)
Exercises 8(1)
Chapter 2: Boolean Formulae and Representations 9(10)
2.1 Monomial Concepts 9(2)
2.2 A Learning Algorithm for Monomials 11(2)
2.3 The Standard Notation for Boolean Functions 13(1)
2.4 Learning Disjunctions of Small Monomials 14(1)
2.5 Representations of Hypothesis Spaces 15(2)
Further Remarks 17(1)
Exercises 17(2)
Chapter 3: Probabilistic Learning 19(10)
3.1 An Algorithm for Learning Rays 19(1)
3.2 Probably Approximately Correct Learning 20(3)
3.3 Illustration -- Learning Rays is Pac 23(1)
3.4 Exact Learning 24(2)
Further Remarks 26(1)
Exercises 27(2)
Chapter 4: Consistent Algorithms and Learnability 29(9)
4.1 Potential Learnability 29(1)
4.2 The Finite Case 30(1)
4.3 Decision Lists 31(2)
4.4 A Consistent Algorithm for Decision Lists 33(2)
Further Remarks 35(1)
Exercises 36(2)
Chapter 5: Efficient Learning--I 38(13)
5.1 Outline of Complexity Theory 38(2)
5.2 Running Time of Learning Algorithms 40(1)
5.3 An Approach to the Efficiency of Pac Learning 41(3)
5.4 The Consistency Problem 44(1)
5.5 A Hardness Result 44(4)
Further Remarks 48(1)
Exercises 49(2)
Chapter 6: Efficient Learning--II 51(20)
6.1 Efficiency in Terms of Confidence and Accuracy 51(1)
6.2 Pac Learning and the Consistency Problem 52(3)
6.3 The Size of a Representation 55(2)
6.4 Finding the Smallest Consistent Hypothesis 57(2)
6.5 Occam Algorithms 59(2)
6.6 Examples of Occam Algorithms 61(3)
6.7 Epac Learning 64(3)
Further Remarks 67(2)
Exercises 69(2)
Chapter 7: The VC Dimension 71(15)
7.1 Motivation 71(2)
7.2 The Growth Function 73(1)
7.3 The VC Dimension 74(3)
7.4 The VC Dimension of the Real Perceptron 77(2)
7.5 Sauer's Lemma 79(5)
Further Remarks 84(1)
Exercises 84(2)
Chapter 8: Learning and the VC Dimension 86(19)
8.1 Introduction 86(1)
8.2 VC Dimension and Potential Learnability 86(3)
8.3 Proof of the Fundamental Theorem 89(5)
8.4 Sample Complexity of Consistent Algorithms 94(2)
8.5 Lower Bounds on Sample Complexity 96(5)
8.6 Comparison of Sample Complexity Bounds 101(2)
Further Remarks 103(1)
Exercises 103(2)
Chapter 9: VC Dimension and Efficient Learning 105(18)
9.1 Graded Real Hypothesis Spaces 105(3)
9.2 Efficient Learning of Graded Spaces 108(3)
9.3 VC Dimension and Boolean Spaces 111(2)
9.4 Optimal Sample Complexity for Boolean Spaces 113(2)
9.5 Efficiency With Respect to Representations 115(2)
9.6 Dimension-based Occam Algorithms 117(3)
9.7 Epac Learning Again 120(1)
Further Remarks 121(1)
Exercises 122(1)
Chapter 10: Linear Threshold Networks 123(20)
10.1 The Boolean Perceptron 123(2)
10.2 An Incremental Algorithm 125(2)
10.3 A Finiteness Result 127(3)
10.4 Finding a Consistent Hypothesis 130(1)
10.5 Feedforward Neural Networks 131(3)
10.6 VC Dimension of Feedforward Networks 134(3)
10.7 Hardness Results for Neural Networks 137(3)
Further Remarks 140(1)
Exercises 141(2)
References 143(7)
Index 150

An electronic version of this book is available through VitalSource.

This book is viewable on PC, Mac, iPhone, iPad, iPod Touch, and most smartphones.

By purchasing, you will be able to view this book online, as well as download it, for the chosen number of days.

Digital License

You are licensing a digital product for a set duration. Durations are set forth in the product description, with "Lifetime" typically meaning five (5) years of online access and permanent download to a supported device. All licenses are non-transferable.


A downloadable version of this book is available through the eCampus Reader or compatible Adobe readers.

Applications are available on iOS, Android, PC, Mac, and Windows Mobile platforms.

Please view the compatibility matrix prior to purchase.