
8.15 Proposition For all 0175-001.gif:
(a) 0175-002.gif.
(b) 0175-003.gif.
(c) 0175-004.gif.
(d) 0175-005.gif.
(e) 0175-006.gif.
(f) 0175-007.gif.
The foregoing result yields a collection of hierarchies that are the subject of Exercise 8-16. Some other hierarchies are implied by the following result:
8.16 Proposition For each 0175-008.gif, 0175-009.gif.
Proof: For each finite 0175-010.gif define:
0175-011.gif
0175-012.gif
0175-013.gif
It is easy to verify that 0175-014.gif. Also, 0175-015.gif 0175-016.gif. But, 0175-017.gif (see Exercise 6-9); thus 0175-018.gif.
§8.1.3 Comparison of Different Kinds of Inaccuracies
The last section revealed the extent to which inaccurate data can impede learning. However, no information was provided about the relative impact of different kinds of inaccuracies. For example, which is worse—noise or missing data? In the case of function learning we will see that missing data is strictly more disruptive than noise. To begin, the next proposition shows that there are collections of functions for which programs can be synthesized despite an arbitrary, finite number of noisy points, but for which even finite-variant programs cannot be synthesized if just one datum is missing. The language-identification counterpart of this result follows as an easy corollary.
8.17 Proposition 0175-019.gif.
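A symbolic rendering of the separation described in the preceding paragraph may help fix ideas. Assuming (this notation is an assumption, not a quotation from the text) that N^aEx denotes Ex-identification from texts carrying at most a noisy data points, that In^aEx denotes Ex-identification from texts missing at most a data points, and that a superscript * on Ex permits finite-variant rather than exact programs, the claim amounts to:

\[ N^{*}\mathbf{Ex} \;-\; In^{1}\mathbf{Ex}^{*} \;\neq\; \emptyset . \]

That is, some collection of functions is identifiable despite any finite number of noisy data points, yet is not identifiable, even up to finite variants, when a single datum is withheld.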

 