(a.1) M $\mathbf{N}^a\mathbf{Ex}^b$-identifies f (written: $f \in \mathbf{N}^a\mathbf{Ex}^b(M)$) just in case for all a-noisy texts G for f, $M(G)\!\downarrow$ and $\varphi_{M(G)} =^b f$.
(a.2) $\mathbf{N}^a\mathbf{Ex}^b = \{\mathcal{S} \subseteq \mathcal{R} \mid (\exists M)[\mathcal{S} \subseteq \mathbf{N}^a\mathbf{Ex}^b(M)]\}$.
(b.1) M $\mathbf{In}^a\mathbf{Ex}^b$-identifies f (written: $f \in \mathbf{In}^a\mathbf{Ex}^b(M)$) just in case for all a-incomplete texts G for f, $M(G)\!\downarrow$ and $\varphi_{M(G)} =^b f$.
(b.2) $\mathbf{In}^a\mathbf{Ex}^b = \{\mathcal{S} \subseteq \mathcal{R} \mid (\exists M)[\mathcal{S} \subseteq \mathbf{In}^a\mathbf{Ex}^b(M)]\}$.
(c.1) M $\mathbf{Im}^a\mathbf{Ex}^b$-identifies f (written: $f \in \mathbf{Im}^a\mathbf{Ex}^b(M)$) just in case for all a-imperfect texts G for f, $M(G)\!\downarrow$ and $\varphi_{M(G)} =^b f$.
(c.2) $\mathbf{Im}^a\mathbf{Ex}^b = \{\mathcal{S} \subseteq \mathcal{R} \mid (\exists M)[\mathcal{S} \subseteq \mathbf{Im}^a\mathbf{Ex}^b(M)]\}$.
As an example of a collection of functions whose identifiability is not affected by the presence of a finite number of inaccuracies in texts, consider the class $\mathcal{C}$ of constant functions, i.e., $\mathcal{C} = \{f \in \mathcal{R} \mid (\exists c)(\forall x)[f(x) = c]\}$. It is easy to construct a scientist that identifies $\mathcal{C}$ on *-noisy texts, *-incomplete texts, and *-imperfect texts. On the other hand, no scientist identifies $\mathcal{SD}$, the collection of self-describing functions,¹ from 1-incomplete texts.
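To make the first claim concrete, here is a minimal sketch (not from the text) of such a scientist in Python. The function name constant_scientist is illustrative, and for simplicity it returns the conjectured constant value itself rather than a program index for the corresponding constant function. On any *-noisy, *-incomplete, or *-imperfect text for a constant function with value c, only finitely many pairs can report a value other than c, while c is eventually supported by arbitrarily many distinct arguments, so the conjectures stabilize on c.

    from collections import defaultdict

    def constant_scientist(data):
        # `data`: the finite sequence of (argument, value) pairs seen so far in the text.
        # Returns the conjectured constant value (standing in for an index of a
        # program computing that constant function), or None if no data yet.
        if not data:
            return None
        support = defaultdict(set)        # value -> set of distinct arguments supporting it
        for x, y in data:
            support[y].add(x)
        # Conjecture the value supported by the largest number of distinct arguments.
        return max(support, key=lambda y: len(support[y]))

    # A *-noisy initial segment for the constant function with value 7;
    # the pair (5, 2) is a spurious datum.
    print(constant_scientist([(0, 7), (3, 7), (5, 2), (1, 7), (4, 7)]))   # prints 7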
The basic idea of the foregoing Definitions 8.6 and 8.9 is easily extended to other criteria of learning introduced in Chapter 6. For example, consider the language learning criterion $\mathbf{TxtFex}^b_c$, according to which a scientist M is successful on a language L just in case, given any text for L, M converges to a finite set of indexes D such that $\mathrm{card}(D) \le c$ and each index in D is for some b-variant of L. An a-noisy-text version of $\mathbf{TxtFex}^b_c$ is defined by requiring the scientist to $\mathbf{TxtFex}^b_c$-identify a language on any a-noisy text. The resulting paradigm is named $\mathbf{N}^a\mathbf{TxtFex}^b_c$. With this background, the reader may easily formulate the exact definitions of the following paradigms for language identification: $\mathbf{N}^a\mathbf{TxtEx}^b$, $\mathbf{In}^a\mathbf{TxtEx}^b$, $\mathbf{Im}^a\mathbf{TxtEx}^b$, $\mathbf{N}^a\mathbf{TxtBc}^b$, $\mathbf{In}^a\mathbf{TxtBc}^b$, and $\mathbf{Im}^a\mathbf{TxtBc}^b$.
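For instance, the first of these might be spelled out as follows, following the pattern of clauses (a.1)-(c.2) above (a sketch rather than the text's official wording, with $W_i$ denoting the language generated by the i-th grammar): M $\mathbf{N}^a\mathbf{TxtEx}^b$-identifies L just in case for every a-noisy text T for L, $M(T)\!\downarrow$ and $W_{M(T)} =^b L$.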
Several criteria of function identification studied in Chapter 6 may also be adapted to the present context. In particular, we shall focus on $\mathbf{N}^a\mathbf{Bc}^b$, $\mathbf{In}^a\mathbf{Bc}^b$, and $\mathbf{Im}^a\mathbf{Bc}^b$ in what follows.
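By analogy with Definition 8.9, the first of these may be rendered as follows (again a sketch, assuming the convention that $G[n]$ denotes the initial segment of G of length n): M $\mathbf{N}^a\mathbf{Bc}^b$-identifies f just in case for every a-noisy text G for f, $\varphi_{M(G[n])} =^b f$ for all but finitely many n.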
§8.1.2 Hierarchy Results
The present section considers the tradeoff between inaccuracy in the available data and leniency in the learning criterion. In particular, we investigate the effect of allowing
¹Recall from Chapter 4 (Definition 4.24) that $\mathcal{SD} = \{f \in \mathcal{R} \mid \varphi_{f(0)} = f\}$.