
within some fixed, finite set of indexes — where each index is for a function sufficiently close to the target. We adapt this notion to the case of languages.
6.25 Definition (Case [25], Osherson and Weinstein [143]) Let $a \in N \cup \{*\}$ and $b \in N^+ \cup \{*\}$.
(a) We say that M on T finitely converges to a finite set D just in case, for all but finitely many n, $M(T[n]) \in D$.
(b) M $\mathbf{TxtFex}^a_b$-identifies L (written: $L \in \mathbf{TxtFex}^a_b(M)$) just in case for all texts T for L there is a finite, nonempty set D of cardinality at most b such that M finitely converges to D and, for each $i \in D$, $W_i =^a L$.
(c) $\mathbf{TxtFex}^a_b = \{\mathcal{L} : (\exists M)[\mathcal{L} \subseteq \mathbf{TxtFex}^a_b(M)]\}$.
Thus, $L \in \mathbf{TxtFex}^a_b(M)$ just in case for every text T for L there is a nonempty, finite set $D_T$ of no more than b indexes for a-variants of L such that all but finitely many of M's conjectures are drawn from $D_T$.
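Since Definition 6.25 quantifies over entire (infinite) texts, the identification criterion itself cannot be decided from finite data, but the condition in part (b) can be approximated on any finite prefix of a text. The following sketch is only an illustration of that condition: the list of conjectures, the bound b, and the predicate is_a_variant (standing in for the generally non-computable test $W_i =^a L$) are hypothetical inputs, not part of the text above.

```python
from typing import Callable, Optional, Sequence, Set

def vacillation_witness(conjectures: Sequence[int],
                        b: int,
                        is_a_variant: Callable[[int], bool]) -> Optional[Set[int]]:
    """Look for a cut-off stage k such that every conjecture issued from
    stage k on lies in a nonempty set D of at most b indices, each of
    which names an a-variant of the target language.  This is a finite
    approximation of the condition in Definition 6.25(b)."""
    for k in range(len(conjectures)):
        D = set(conjectures[k:])                 # conjectures from stage k onward
        if 0 < len(D) <= b and all(is_a_variant(i) for i in D):
            return D                             # witness set of at most b indices
    return None                                  # no witness on this finite prefix

# Toy run: a learner that eventually vacillates forever between two
# (hypothetical) correct grammars with indices 3 and 7.
conjectures = [0, 1, 3, 7, 3, 7, 3, 7]
print(vacillation_witness(conjectures, b=2, is_a_variant=lambda i: i in {3, 7}))
# -> {3, 7}
```

A positive answer on a prefix can of course be withdrawn later, exactly as with ordinary convergence: only the infinite text settles whether M $\mathbf{TxtFex}^a_b$-identifies L.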
For the function case, Proposition 6.17 showed that there is nothing to be gained from allowing scientists to vacillate. The situation is different for languages. Indeed, we now prove that there are collections of languages for which 0-error indexes can be learned if the scientist may vacillate among n + 1 conjectures, but for which even *-error indexes cannot be learned if vacillation is limited to sets of size n.
6.26 Proposition (Case [25])
(a) For $n \in N^+$, $\mathbf{TxtFex}^0_{n+1} - \mathbf{TxtFex}^*_n \neq \emptyset$.
(b) $\mathbf{TxtFex}^0_* - \bigcup_{n \in N^+} \mathbf{TxtFex}^*_n \neq \emptyset$.
As an immediate corollary we obtain:
6.27 Corollary Let $a \in N \cup \{*\}$ and $n \in N^+$. Then $\mathbf{TxtFex}^a_n \subset \mathbf{TxtFex}^a_{n+1} \subset \mathbf{TxtFex}^a_*$.
Proof (of Proposition 6.26): We prove only part (a). The argument for part (b) can be worked out using the proof of part (a) in a way similar to the proof of Corollary 6.6.
Notation: Define $\mathrm{Last}_n(M, \sigma) = \{M(\sigma[j]) : m \leq j \leq |\sigma|\}$, where m is the least number $\leq |\sigma|$ such that $\mathrm{card}(\{M(\sigma[j]) : m \leq j \leq |\sigma|\}) \leq n$. Intuitively, $\mathrm{Last}_n(M, \sigma)$ denotes the set of the last n distinct indexes output by M when it is fed evidential state $\sigma$.
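Read concretely, $\mathrm{Last}_n(M, \sigma)$ collects the distinct conjectures in the longest final stretch of M's conjecture sequence on $\sigma$ containing at most n distinct values. The following sketch computes it from the finite list $M(\sigma[0]), \ldots, M(\sigma[|\sigma|])$, assumed here to be given as an ordinary Python list; the function is purely illustrative and not part of the text.

```python
from typing import Sequence, Set

def last_n(conjectures: Sequence[int], n: int) -> Set[int]:
    """Compute Last_n from the finite sequence M(sigma[0]), ..., M(sigma[|sigma|])
    of M's conjectures on sigma: the set {M(sigma[j]) : m <= j <= |sigma|},
    where m is the least number such that this set has cardinality at most n."""
    for m in range(len(conjectures)):
        tail = set(conjectures[m:])   # {M(sigma[j]) : m <= j <= |sigma|}
        if len(tail) <= n:            # first (hence least) m that works
            return tail
    return set()                      # empty conjecture sequence (or n = 0)

# Example: the last 2 distinct indices output along this conjecture sequence.
print(last_n([5, 5, 2, 9, 2, 9], n=2))   # -> {2, 9}
```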

 