If the enumeration of $W_e$ fails to progress past some odd stage, then $W_e$ is finite and $\Psi_{h(e)}$ fails to identify some text for $W_e$.
If the construction progresses past all stages, then $W_e = N$ and $\Psi_{h(e)}$ fails to converge on some text for $W_e$, and so fails to identify $W_e$ (in this case $e$ and $p$ are equivalent indexes, both for $N$).
Thus, in all cases $\Psi_{h(e)}$ fails to identify $\{W_p, W_e\} = [h(e)]$.
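In schematic form, the two cases of the construction are:
\[
\begin{cases}
W_e \text{ finite:} & \Psi_{h(e)} \text{ fails to identify some text for } W_e,\\[2pt]
W_e = N: & \Psi_{h(e)} \text{ fails to converge on some text for } N = W_p.
\end{cases}
\]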
In conjunction with Exercise 4-1, Proposition 4.36 implies:
4.37 Corollary TxtEx is not performable on [·].
Thus, no parameterized scientist can adapt herself to the identification of an arbitrary, identifiable collection $\mathcal{L}$ of languages on the basis of an [·] description of $\mathcal{L}$. In this sense there is no universal learning machine for [·] and TxtEx.
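Spelled out (a sketch, assuming the definition of performability given earlier in the chapter takes roughly the following form: a collection is performable on [·] just in case some single computable parameterized scientist succeeds from every [·] description of an identifiable member), the corollary asserts:
\[
\neg\,\exists\,\Psi\ \forall i\,\big[\,[i] \in \mathrm{TxtEx}\ \Rightarrow\ \Psi_i \text{ identifies } [i]\,\big].
\]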
The proof of Proposition 4.36 rests on the possible presence of equivalent indexes in the description of a class of languages. Disallowing such equivalences in fact makes performability easier. The next definition helps to make this claim precise.
4.38 Definition $\mathcal{NE} = \{\, i \in N \mid (\forall j, k \in W_i)\,[\,W_j = W_k \rightarrow j = k\,]\,\}$.
So, if $i \in \mathcal{NE}$ then $W_i$ contains no two equivalent indexes. (The symbol "$\mathcal{NE}$" may be read as "no equivalences.")
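For instance, in the notation of Definition 4.38:
\[
j \neq k,\quad W_j = W_k,\quad W_i = \{j, k\}\ \Longrightarrow\ i \notin \mathcal{NE},
\]
and the description $h(e)$ constructed in the proof of Proposition 4.36 is of exactly this kind whenever the construction progresses past all stages.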
4.39 Proposition $\{\,[i] \mid i \in \mathcal{NE} \text{ and } W_i \text{ is finite}\,\}$ is performable on [·].
Proof: Exercise 4-15.
Is the finiteness qualification in Proposition 4.39 essential? The next proposition shows that it is.
4.40 Proposition $\{\,[i] \mid i \in \mathcal{NE}\,\} \cap \mathrm{TxtEx}$ is not performable on [·].
Thus, no parameterized scientist can adapt itself to the identification of an arbitrary, identifiable collection $\mathcal{L}$ of languages even on the basis of an [·] description of $\mathcal{L}$ that is purged of equivalent indexes. A proof of 4.40 is given in Osherson, Stob, and Weinstein [141, Proposition 3C] and is omitted here.
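A simple instance of the descriptions that Proposition 4.40 quantifies over (an illustrative example; the function $g$ is not from the text, but a one-one computable $g$ with the stated property exists by standard recursion-theoretic arguments):
\[
W_{g(n)} = \{0, 1, \dots, n\} \text{ for all } n \in N, \qquad W_i = \{\,g(n) \mid n \in N\,\}.
\]
Since $g$ is one-one and the languages $\{0, \dots, n\}$ are pairwise distinct, $i \in \mathcal{NE}$ with $W_i$ infinite, and $[i]$ is identifiable; Proposition 4.40 says that no single parameterized scientist succeeds on all identifiable classes described in this equivalence-free way.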