Description
Page management mechanisms provide candidates for page stealing and prefetching from a main storage data cache of shared data when the jobs sharing the data are accessing it in a sequential manner. Pages are stolen behind the first reader in the cache, and thereafter at locations least likely to be soon re-referenced by trailing readers. A "clustering" of readers may be promoted to reduce I/O contention. Prefetching is carried out so that the pages most likely to be soon referenced by one of the readers are brought into the cache.
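The following is a minimal sketch of this candidate selection, assuming each reader advances sequentially through the same set of numbered pages. The names used here (Reader, steal_candidates, prefetch_candidates) and the distance-to-next-reference ranking are illustrative assumptions, not the claimed mechanism itself: steal candidates are taken first from pages every reader has already passed, then from pages whose re-reference by a trailing reader lies farthest in the future, while prefetch candidates are the uncached pages some reader will reference soonest.

    # Hypothetical sketch only; names and ranking are assumptions, not the patented design.
    from dataclasses import dataclass


    @dataclass
    class Reader:
        position: int  # page number this reader will reference next


    def _distance_to_next_reference(page: int, readers: list[Reader]) -> float:
        # A page not yet reached by some reader is re-referenced after
        # (page - position) more pages; a page every reader has passed is
        # never re-referenced during a single sequential pass.
        pending = [page - r.position for r in readers if r.position <= page]
        return min(pending) if pending else float("inf")


    def steal_candidates(cached_pages: list[int], readers: list[Reader], n: int) -> list[int]:
        # Prefer pages already behind every reader, then pages whose next
        # re-reference by a trailing reader is farthest away.
        ranked = sorted(cached_pages,
                        key=lambda p: _distance_to_next_reference(p, readers),
                        reverse=True)
        return ranked[:n]


    def prefetch_candidates(cached_pages: list[int], readers: list[Reader], n: int) -> list[int]:
        # Bring in the uncached pages most likely to be referenced soon,
        # looking only a short distance ahead of the leading reader.
        cached = set(cached_pages)
        horizon = max(r.position for r in readers) + n
        wanted = [p for p in range(min(r.position for r in readers), horizon + 1)
                  if p not in cached]
        wanted.sort(key=lambda p: _distance_to_next_reference(p, readers))
        return wanted[:n]


    if __name__ == "__main__":
        readers = [Reader(position=40), Reader(position=55), Reader(position=90)]
        cached = list(range(30, 100, 5))
        print("steal:", steal_candidates(cached, readers, 3))      # pages behind or far from trailing readers
        print("prefetch:", prefetch_candidates(cached, readers, 3))  # pages just ahead of each reader

In this toy run, the steal candidates come out as the pages already passed by all readers plus the page just behind the leading reader, and the prefetch candidates are the pages immediately ahead of each reader, which matches the behaviour described above under these assumptions.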