Notation

Chapter 1: Concepts, Hypotheses, Learning Algorithms    1
    1.3 Training and Learning    3
    1.4 Learning by Construction    5
    1.5 Learning by Enumeration    6

Chapter 2: Boolean Formulae and Representations    9
    2.2 A Learning Algorithm for Monomials    11
    2.3 The Standard Notation for Boolean Functions    13
    2.4 Learning Disjunctions of Small Monomials    14
    2.5 Representations of Hypothesis Spaces    15

Chapter 3: Probabilistic Learning    19
    3.1 An Algorithm for Learning Rays    19
    3.2 Probably Approximately Correct Learning    20
    3.3 Illustration -- Learning Rays is Pac    23

Chapter 4: Consistent Algorithms and Learnability    29
    4.1 Potential Learnability    29
    4.4 A Consistent Algorithm for Decision Lists    33

Chapter 5: Efficient Learning--I    38
    5.1 Outline of Complexity Theory    38
    5.2 Running Time of Learning Algorithms    40
    5.3 An Approach to the Efficiency of Pac Learning    41
    5.4 The Consistency Problem    44

Chapter 6: Efficient Learning--II    51
    6.1 Efficiency in Terms of Confidence and Accuracy    51
    6.2 Pac Learning and the Consistency Problem    52
    6.3 The Size of a Representation    55
    6.4 Finding the Smallest Consistent Hypothesis    57
    6.6 Examples of Occam Algorithms    61

Chapter 7: The VC Dimension    71
    7.4 The VC Dimension of the Real Perceptron    77

Chapter 8: Learning and the VC Dimension    86
    8.2 VC Dimension and Potential Learnability    86
    8.3 Proof of the Fundamental Theorem    89
    8.4 Sample Complexity of Consistent Algorithms    94
    8.5 Lower Bounds on Sample Complexity    96
    8.6 Comparison of Sample Complexity Bounds    101

Chapter 9: VC Dimension and Efficient Learning    105
    9.1 Graded Real Hypothesis Spaces    105
    9.2 Efficient Learning of Graded Spaces    108
    9.3 VC Dimension and Boolean Spaces    111
    9.4 Optimal Sample Complexity for Boolean Spaces    113
    9.5 Efficiency With Respect to Representations    115
    9.6 Dimension-based Occam Algorithms    117

Chapter 10: Linear Threshold Networks    123
    10.1 The Boolean Perceptron    123
    10.2 An Incremental Algorithm    125
    10.4 Finding a Consistent Hypothesis    130
    10.5 Feedforward Neural Networks    131
    10.6 VC Dimension of Feedforward Networks    134
    10.7 Hardness Results for Neural Networks    137

References    143
Index    150