Memory Models as Applied to Hypertext

Theorized Models of Memory
Dual Store:
short-term and long-term memory
working memory intervenes between short- and long-term
Long-term Memory:
as propositional network
Long-term Memory:
as a hierarchy of information
Long-term Memory:
as Parallel Distributed Processing (a nodal network)

In the hypertext literature, a debate arises over whether long-term memory is better represented as a hierarchy or as parallel distributed processing.

Davida Charney notes that several critics have related the structure of a hypertext document to the structure of long-term memory, especially if we represent long-term memory as a nodal network of information. Consider the following graphic:

Depiction of Nodal Network of Information

Is this a representation of hypertext links, or a representation of bits of information in memory? The graphic is taken from a chapter on memory in a theories-of-learning textbook (Ormrod 241), yet many have represented hypertext in similar ways. Charney states that such correlations have led many to assume that this apparent similarity means hypertext necessarily facilitates long-term memory of information.
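The ambiguity of the graphic can be made concrete in a short sketch (the node names below are hypothetical, not taken from Ormrod or Charney): the same adjacency-list structure reads equally well as pages joined by hypertext links or as bits of information joined by associations, and "browsing" in either reading is just a traversal of the network.

```python
from collections import deque

# Hypothetical node names: read the keys as hypertext pages and the
# values as links, OR as bits of information and their associations.
links = {
    "memory": ["hypertext", "chunking"],
    "hypertext": ["links", "chunking"],
    "chunking": ["working memory"],
    "working memory": ["long-term memory"],
    "links": [],
    "long-term memory": [],
}

def reachable(start):
    """All nodes reachable from `start` by following links (breadth-first)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("memory")))
```

Nothing in the structure itself says which interpretation is correct, which is exactly why the visual similarity alone proves little.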

Charney offers a well-informed critique: most research supports the hierarchical representation of memory, so the nodal network model may be inapplicable. Furthermore, the nodal model is something of a fad, since it grows out of our current technology and operates more as a metaphor (and representational metaphors change with society). Additionally, information must pass through a narrow "gateway" - working memory. Research shows that working memory has a limited capacity for information - far more limited than what long-term memory can store and than what sensory sources (books, hypertexts, etc.) can provide. So while the presentation of information and its long-term storage may appear to share a structure, the intervening "working" part of memory cannot process the same amount of information. The correlations drawn by other critics therefore cannot hold, since they ignore this critical intervening stage.

A solution to Charney's critique does exist.

One of the ways we store information is by "chunking" it - breaking it down into manageable bits that can be transferred through working memory into long-term memory (Ormrod 188-9). Hypertext clearly allows us to chunk information and present it as chunks, as opposed to the fully constructed prose found in books. Say, for instance, that one "page" in hypertext is actually a chunk of related information. If we keep these chunks small enough, or break the information into small enough chunks, then working memory can better process the information and move it to long-term memory.

Again, Alvin Toffler's concept of "blip culture" supports the "chunking" of information. As we are bombarded with fragments of images and information, we must continually integrate the fragments into a meaningful whole. From this construction we also develop our own identities. In essence, Toffler points out that we are socially trained to handle bits of information and to integrate those bits into a large-grain pattern (see Smith's argument).

Because hypertext is built from bits of information, it leaves little room for irrelevant information. While we can put irrelevant information in a hypertext, a hypertext document is usually criticized if it too closely resembles a print document such as a book. A hypertext designed like a book does not capitalize on hypertext's abilities to make text dynamic and to string fragments into a whole. If a hypertext is instead designed as fragments that we integrate and continually reintegrate into a pattern, then this personalized creation of a text can be better committed to memory.
