By Exercise 8-25 we also have:
8.46 Proposition Let j and [formula] be such that [formula], and let [formula]. Then:
(a) [formula].
(b) [formula].
(c) [formula].
The last three results inform us that if the number of inaccuracies is finite without any preassigned bound, or if the final program witnessing the identification is required either to be accurate or to make a finite number of mistakes without any preassigned bound, then learning from multiple texts does no harm, so long as a majority of the input texts are good.
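To see informally why a good majority suffices, consider the following sketch (ours, not the book's construction; the function majority_filter and the modeling of a text as a finite set of (x, y) pairs are illustrative assumptions). With 2j + 1 texts of which at least j + 1 are accurate for f, any datum occurring in a strict majority of the texts occurs in at least one accurate text, and is therefore correct, while every datum of f occurs in all j + 1 accurate texts and therefore survives the vote:

    from collections import Counter
    from itertools import chain

    def majority_filter(texts):
        # Keep exactly those (x, y) pairs that occur in a strict majority
        # of the texts; each text is modeled as a finite set of pairs.
        # If more than half of the texts are accurate for f, every
        # surviving pair appears in some accurate text (so it is correct
        # for f), and every pair of f's graph appears in all the accurate
        # texts (so it survives).
        k = len(texts)
        counts = Counter(chain.from_iterable(set(t) for t in texts))
        return {pair for pair, c in counts.items() if c > k // 2}

    # Example: two accurate texts for f(x) = 2x and one noisy text.
    good = {(0, 0), (1, 2), (2, 4)}
    noisy = {(0, 0), (1, 3), (2, 4)}   # spurious datum (1, 3)
    print(sorted(majority_filter([good, good, noisy])))
    # [(0, 0), (1, 2), (2, 4)] -- the noise is voted out

For *-imperfect texts, where even the good texts may themselves contain finitely many errors, a one-shot vote of this kind is not enough; the sketch covers only the accurate-majority case.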
The next two propositions follow from Propositions 8.45, 8.44, 8.13, and 8.10.
8.47 Proposition Let b, j, [formula], [formula]. Then:
(a) [formula].
(b) [formula].
8.48 Proposition Let b, j, [formula] be such that [formula]. Then [formula].
Proposition 8.47(a), for example, says that there are collections of functions for which a program that makes up to (b + 1) errors can be identified from multiple texts, a majority of which are *-imperfect, but for which a program that makes only up to b errors cannot be identified from a single accurate text. Propositions 8.47 and 8.48 yield a number of hierarchies. We note one of them in the next corollary, which implies that strictly larger classes of functions can be identified from multiple inaccurate texts, a majority of which are *-imperfect, by allowing extra anomalies in the final program.
8.49 Corollary Let j, [formula] be such that [formula]. Then [formula].
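Schematically, and only as a hedged reconstruction (the class names on this page survive only as formula placeholders, so the symbol M^b below is our stand-in, not the book's notation, for the criterion of Proposition 8.47 with anomaly bound b):

    % M^b: our stand-in for "identifiable, with up to b anomalies in the
    % final program, from multiple texts a majority of which are
    % *-imperfect"; Ex^b is the usual single-accurate-text criterion.
    % Proposition 8.47(a) then has the diagonalization form
    \[
      \mathbf{M}^{b+1} \setminus \mathbf{Ex}^{b} \neq \emptyset .
    \]
    % Feeding copies of one accurate text to every input channel shows
    % M^b \subseteq Ex^b (an accurate text is in particular *-imperfect),
    % and trivially M^b \subseteq M^{b+1}; together with the display above
    % this yields the strict hierarchy behind Corollary 8.49:
    \[
      \mathbf{M}^{0} \subset \mathbf{M}^{1} \subset \mathbf{M}^{2} \subset \cdots
    \]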
We now consider situations in which a scientist pays a price for learning from multiple inaccurate texts instead of learning from a single inaccurate text. We restrict ourselves to the case of noisy data delivered on three inaccurate texts, at least two of which are

 