"Conscious" Software: A Computational View of Mind
Institute for Intelligent Systems and
Department of Mathematical Sciences
The University of Memphis
Here we describe a software agent that implements the global workspace theory of consciousness. A clerical agent that corresponds with humans in natural language via email, CMattie composes and sends weekly seminar announcements to a mailing list she maintains. She is designed with a two-tiered architecture: high-level concepts, behaviors, associations, etc., undergirded by low-level codelets that do most of the actual work. A wide variety of computational mechanisms, many taken from the "new AI," flesh out the architecture. As a computational model, CMattie provides ready answers, that is, testable hypotheses, to a great many questions about human cognition. Several such are noted. There's also a discussion of the extent to which such "conscious" software agents can be expected to be conscious.
1.0.1 Like the Roman god Janus, the "conscious" software project has two faces, its science face and its engineering face. Its science side will flesh out the global workspace theory of consciousness, while its engineering side explores architectural designs (Sloman 1996) for information agents that promise more flexible, more human-like intelligence within their domains. The fleshed-out global workspace theory (Baars 1988, 1997) will yield a multitude of testable hypotheses about human cognition. The architectures and mechanisms that underlie consciousness and intelligence in humans can be expected to yield software agents that learn continuously, that adapt readily to dynamic environments, and that behave flexibly and intelligently when faced with novel and unexpected situations. This paper is devoted primarily to the description of one such "conscious" software agent and the issues that arise therefrom.
1.1 Autonomous Agents
1.1.1 Artificial intelligence pursues the twin goals of understanding human intelligence and of producing intelligent software and/or artifacts. Designing, implementing and experimenting with autonomous agents furthers both these goals in a synergistic way.
1.1.2 An autonomous agent (Franklin and Graesser 1997) is a system situated in, and part of, an environment, which senses that environment, and acts on it, over time, in pursuit of its own agenda. In biological agents, this agenda arises from evolved-in drives; in artificial agents, from drives built in by their creators. Such drives, which act as motive generators (Sloman 1987), must be present, whether explicitly represented or expressed causally. The agent also acts in such a way as to possibly influence what it senses at a later time. In other words, it is structurally coupled to its environment (Maturana 1975, Maturana and Varela 1980). Biological examples of autonomous agents include humans and most animals. Non-biological examples include some mobile robots, and various computational agents, including artificial life agents, software agents and many computer viruses. We'll be concerned with autonomous software agents, designed for specific tasks, and "living" in real world computing systems such as operating systems, databases, or networks.
1.2 Cognitive Agents Architecture and Theory
1.2.1 Such autonomous software agents, when equipped with cognitive (interpreted broadly) features chosen from among multiple senses, perception, short and long term memory, attention, planning, reasoning, problem solving, learning, emotions, moods, attitudes, multiple drives, etc., are called cognitive agents (Franklin 1997). Though the notion is ill defined, cognitive agents can play a synergistic role in the study of human cognition, including consciousness. Here's how it can work.
1.2.2 Minds, in my view, are best viewed as control structures for autonomous agents (Franklin 1995). A theory of mind constrains the design of a cognitive agent that implements that theory. While a theory is typically abstract and only broadly sketches an architecture, an implemented design must provide a fully articulated architecture, and the mechanisms upon which it rests. This architecture and these mechanisms serve to flesh out the theory, making it more concrete. Also, every design decision taken during an implementation constitutes a hypothesis about how human minds work. The hypothesis says that humans do it the way the agent was designed to do it, whatever "it" was. These hypotheses will suggest experiments with humans by means of which they can be tested. Conversely, the results of such experiments will suggest corresponding modifications of the architecture and mechanisms of the cognitive agent implementing the theory. The concepts and methodologies of cognitive science and of computer science will work synergistically to enhance our understanding of mechanisms of mind. I have written elsewhere in much more depth about this research strategy (Franklin 1997), which I've called Cognitive Agent Architecture and Theory (CAAT). The autonomous agents described herein were designed following the dictates of the CAAT strategy.
1.3 What's to come?
1.3.1 An attempt at implementing global workspace agents (to be explained below) in pursuit of the CAAT strategy is underway. Its first phase was to build Virtual Mattie, an autonomous software agent that "lives" in a Unix system, communicates with seminar organizers and attendees via email in natural language, and composes and sends seminar announcements, again via email, all without human direction (Franklin et al. 1996). VMattie, now up and running far more successfully than her designers had even hoped (Song et al. forthcoming; Zhang, Franklin, Olde, Wan and Graesser 1998), implements about forty percent of Baars' global workspace theory of consciousness (Baars 1988, 1997). The second phase will add the missing pieces of the global workspace theory, producing "Conscious" Mattie. CMattie is almost completely designed, though the design is not quite stable, and the coding stage has begun. She will implement a fairly full version of global workspace theory, and will account for most of the psychological and neuroscientific facts that, according to Baars (1997, Appendix), must constrain any theory of consciousness (Franklin and Graesser forthcoming). We will refer to a cognitive software agent that implements global workspace theory in this sense as a "conscious" software agent.
1.3.2 Still, we are concerned that the limited domain of CMattie is inherently insufficient to allow us to achieve the engineering goals of the "conscious" software project: to produce software that is more intelligent, more flexible, and more human-like than existing artificial intelligence software. For these goals, we need more dynamic, more challenging domains that require agents with multiple senses, multiple time-varying drives, and more complex actions to serve as proof-of-concept projects for "conscious" software. These more challenging domains will also address some of the limitations on the scientific side of the project (see section 7 below). Phase three will pursue another such "conscious" software agent, IDA, in parallel with the completion of CMattie. Now in the planning stage, IDA, an intelligent distribution agent, is intended to help the Navy with its reassignment of personnel at the end of duty tours (Franklin, Kelemen, and McCauley 1998). This reassignment offers a complex, demanding domain orders of magnitude more challenging than that of CMattie. We hope it will prove a suitable proof-of-concept project. Yet another such challenging project called AutoTutor is waiting in the wings. AutoTutor is a fully automated computer tutor that simulates dialogue moves of normal human tutors and that will eventually incorporate sophisticated tutoring strategies (Graesser, Franklin & Wiemer-Hastings 1998; Wiemer-Hastings et al. 1998). The first, unconscious, version of AutoTutor was completed in the spring of 1998 on the topic of computer literacy. If energy and funding hold out, we intend to try for a "conscious" version of AutoTutor as another proof-of-concept project.
1.3.3 In addition to a brief account of VMattie, this paper will contain a relatively complete high level account of CMattie, including short descriptions of the various mechanisms used to build her and a summary of global workspace theory as well. No more will be said about IDA or AutoTutor, primarily because of space constraints. The design principles followed in the implementation of each of these agents are derived from the author's action selection paradigm of mind (Franklin 1995, Chapter 16). Its "multiplicity of mind" tenet asserts that "minds tend to be embodied as collections of relatively independent modules with little communication between them." The corresponding design principle recommends building cognitive agents as multiagent systems with no central executive. The agents mentioned above are implemented using codelets, small pieces of code doing a single small job. The "diversity of mind" tenet asserts that "mind is enabled by a multitude of disparate mechanisms." This specifically denies the unified theory of cognition hypothesis (Newell 1990). The corresponding design principle suggests choosing mechanisms suitable to the job to be done rather than trying for a single, unified mechanism. There are other such tenets and their corresponding design principles. (For a full account see Franklin 1997.)
1.3.4 Building the machinery of human consciousness into a software agent raises the fascinating issue of software awareness. Is it possible for software agents to be aware in anything like the way humans, and presumably many animals, are aware? If so, how could one know? Baars requires both a human subject's immediate assertion of consciousness of an event and some independent verification as conditions for accepting that something conscious had indeed occurred (Baars 1988, page 15). In software agents a mechanism for verification could be built into an interface, and the agent could be given the capability of reporting the content of its "consciousness." While Baars' criteria seem to me a perfectly fine operational definition of consciousness, as he intended, I doubt it would cut much ice with philosophers. Why can't a zombie (in the philosophical sense) report a verifiable experience as being conscious? He could either be lying or mistaken. Baars also gives neuro-anatomical arguments for animal consciousness (1988, pages 33 ff.), essentially stressing structural similarities with humans. Others mount different sorts of arguments (Griffin 1984, Franklin 1995, Chapter 3). A slightly fuller account of this issue was given elsewhere (Franklin and Graesser, to appear).
1.3.5 Conversational software systems since Weizenbaum's Eliza (1966) have mimicked consciousness. A recently successful such system is Mauldin's Julia (1994), who fooled any number of men in an online chat room into seriously hitting on her. There's even the $100,000 Loebner Prize for the first such system to successfully pass the Turing test (web). All of these systems depend on more or less simple syntactic transformations together with a built-in database of phrases to perform their feats. There's no claim of consciousness, nor any reason to suspect it. Recall that the Turing test was intended as a sufficient indicator of intelligence, not consciousness (Turing 1950).
1.3.6 But "conscious" software agents present a different problem. Suppose CMattie notices that sessions of two different seminars are scheduled for the same room at overlapping times. "CMattie notices" implies that this scheduling conflict results in the creation of a coalition of codelets that gains the spotlight of "consciousness" (to be described in section 7 below). This coalition might contain the codelet that discovered the conflict together with two or more others that carry information about the two sessions. CMattie's emotion of concern might be aroused. Is CMattie then aware of the conflict in something like a human's conscious awareness? If so, how could we know it? CMattie, on noting the conflict, would send email messages to the two seminar organizers saying that she noticed the conflict and suggesting that they resolve it. Would these messages, together with our noting that the coalition did indeed occupy the spotlight, satisfy Baars' criterion for consciousness? Perhaps so, but it wouldn't convince me of CMattie's awareness in anything like a human sense. How can we be sure of consciousness in any other creature, computational or biological? Can "conscious" software agents help us with this problem?
2.0.1 Following the diversity of mind tenet of the action selection paradigm of mind (Franklin 1995, Chapter 16), the architectures of the various "conscious" software agents are designed using a diversity of mechanisms of mind. A mechanism of mind is a computational mechanism that serves to enable some cognitive function. Taken from the "new AI" literature (see Maes 1993), each of these mechanisms required extensions and enhancements to make it suitable for use in "conscious" software. Very brief descriptions of the original versions of each appear in the following subsections. Full accounts can be found in the original sources referenced. Expository accounts can be found in Artificial Minds (Franklin 1995). Extensions and enhancements are described in subsequent sections. Some more commonly known mechanisms such as case based reasoning and classifier systems are used in the CMattie architecture described below. Accounts of these are not included in this section since descriptions of them are readily available in easily found books (Kolodner 1993; Holland 1986).
2.1 The Copycat Architecture
2.1.1 Copycat is an analogy making program that produces answers to such conundrums as "abc is to abd as iijjkk is to ?". Hofstadter and Mitchell (1993, 1994) consider analogy making, along with recognition and categorization, as examples of high-level perception, that is, deep, abstract, multi-modal forms of perception rather than low-level, concrete, uni-modal forms. Copycat is intended to model this kind of high-level perception. Its design assumes that high-level perception emerges from the activity of many independent processes, running in parallel, sometimes competing, sometimes cooperating. These independent processes, here called codelets, create and destroy temporary perceptual constructs, trying out variations to eventually produce an answer. The codelets rely on an associative network knowledge base with blurry conceptual boundaries called the slipnet. The slipnet adapts to the problem at hand by changing activation levels and by changing degrees of conceptual overlap. There is no central executive, no one in charge. Decisions are made by codelets independently and probabilistically. The system self-organizes; analogy making emerges.
2.1.2 Copycat's architecture is tripartite, consisting of a slipnet, a working area, and a population of codelets. The slipnet, an associative network comprised of nodes and links, contains permanent concepts and relations between them. That's what Copycat knows. It does not learn. The slipnet is its long-term memory. The system has a connectionist flavor by virtue of spreading activation in the slipnet. All of this is explicitly encoded. The working area, working memory if you like, is where perceptual structures are built and modified, sometimes by being torn down. The population of codelets consists of perceptual and higher level structuring agents. As demons should, they wait until the situation is right for them to run, and then jump into the fray.
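The control regime just described can be illustrated with a small Python sketch. This is our own toy rendering, not code from the actual Copycat implementation; the node names and numeric parameters are illustrative assumptions. Codelets are chosen probabilistically by urgency, and the slipnet spreads activation with decay:

```python
import random

# Toy sketch of Copycat-style control (ours, not the actual Copycat
# code).  Node names and numeric parameters are assumptions.
slipnet = {                      # node -> [activation, linked nodes]
    "successor": [0.0, ["letter-category"]],
    "letter-category": [0.0, ["successor"]],
}

def spread_activation(decay=0.1, rate=0.2):
    """One step of spreading activation in the slipnet, with decay."""
    incoming = {node: 0.0 for node in slipnet}
    for node, (act, links) in slipnet.items():
        for neighbor in links:
            incoming[neighbor] += rate * act
    for node in slipnet:
        slipnet[node][0] = (1 - decay) * slipnet[node][0] + incoming[node]

def choose_codelet(codelets):
    """Pick a codelet probabilistically by urgency -- no central executive.
    `codelets` is a list of (codelet, urgency) pairs."""
    total = sum(urgency for _, urgency in codelets)
    r = random.uniform(0, total)
    for codelet, urgency in codelets:
        r -= urgency
        if r <= 0:
            return codelet
    return codelets[-1][0]
```

Note that no codelet is ever guaranteed to run; higher urgency only raises the odds, which is what makes the system's decisions probabilistic rather than executive-driven.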
2.2 Behavior Nets
2.2.1 Behavior nets were introduced by Pattie Maes in a paper entitled "How to do the right thing" (1990). The "right thing" refers to a correct action in the current context. This work is about behavior selection, that is, how to control actions subject to constraints. It's designed to work well with limited computational and time resources in a world that's not entirely predictable.
2.2.2 A behavior looks very much like a production rule, having preconditions as well as additions and deletions. A behavior is distinguished from a production rule by the presence of an activation, a number indicating some kind of strength level. Each behavior occupies a node in a digraph (directed graph). The three types of links of the digraph are completely determined by the behaviors. If a behavior X will add a proposition b, which is on behavior Y's precondition list, then put a successor link from X to Y. There may be several such propositions resulting in several links between the same nodes. Next, whenever you put in a successor going one way, put a predecessor link going the other. Finally, suppose you have a proposition m on behavior Y's delete list that is also a precondition for behavior X. In such a case, draw a conflictor link from X to Y, which is to be inhibitory rather than excitatory.
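These link-construction rules are mechanical enough to state as code. The following Python sketch is our own illustration, not Maes' implementation; representing each behavior as a dictionary of precondition, add, and delete sets is an assumption of the sketch:

```python
def build_links(behaviors):
    """Derive successor, predecessor, and conflictor links from the
    behaviors' precondition/add/delete lists.  `behaviors` maps a
    behavior name to a dict with 'pre', 'add', and 'del' sets."""
    successors, predecessors, conflictors = [], [], []
    for x, bx in behaviors.items():
        for y, by in behaviors.items():
            if x == y:
                continue
            # X adds a proposition on Y's precondition list: a successor
            # link from X to Y, and the matching predecessor link back.
            for p in bx["add"] & by["pre"]:
                successors.append((x, y, p))
                predecessors.append((y, x, p))
            # Y deletes a precondition of X: an inhibitory conflictor
            # link from X to Y.
            for p in by["del"] & bx["pre"]:
                conflictors.append((x, y, p))
    return successors, predecessors, conflictors
```

Since a distinct link is created for each shared proposition, several links between the same pair of nodes arise naturally, just as in the text.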
2.2.3 As in connectionist models, this digraph spreads activation. The activation comes from activation stored in the behaviors themselves, from the environment, and from goals. Maes' system has built-in global goals, some goals to be achieved one time only, while others are drives to be pursued continuously. The environment awards activation to a behavior for each of its true preconditions. The more relevant it is to the current situation, the more activation it's going to receive from the environment. This source of activation tends to make the system opportunistic. Each goal awards activation to every behavior that, by being active, will satisfy that goal. This source of activation tends to make the system goal directed. Finally, activation spreads from behavior to behavior along links. Along successor links, one behavior strengthens those behaviors whose preconditions it can help fulfill by sending them activation. Along predecessor links, one behavior strengthens any other behavior whose add list fulfills one of its own preconditions. A behavior sends inhibition along a conflictor link to any other behavior that can delete one of its true preconditions, thereby weakening it. Every conflictor link is inhibitory.
2.2.4 Call a behavior executable if all of its preconditions are satisfied. Here's a pseudocode version of Maes' algorithm for the system:
1. Add activation from the environment and the goals.
2. Spread activation forward and backward among the behaviors.
3. Decay: total activation remains constant.
4. A behavior fires if
i) it's executable, and
ii) it's over threshold, and
iii) its activation is the maximum among such behaviors.
5. If a behavior fires, its activation goes to zero, and all thresholds revert to their normal values.
6. If none fires, reduce all thresholds by 10%.
In this last case, the system "thinks" for one round, and then tries again.
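The algorithm above can be rendered as a runnable sketch. This is our own simplification, not Maes' implementation: spreading along links (step 2) is elided for brevity, and the parameter values are illustrative assumptions.

```python
class Behavior:
    def __init__(self, name, pre, add):
        self.name, self.pre, self.add = name, set(pre), set(add)
        self.activation = 0.0

def run_cycle(behaviors, state, goals, threshold,
              env_rate=1.0, goal_rate=1.0, total=10.0):
    """One pass of the selection algorithm.  Returns (fired behavior
    or None, new threshold)."""
    # 1. Activation from the environment and from the goals.
    for b in behaviors:
        b.activation += env_rate * len(b.pre & state)   # opportunism
        b.activation += goal_rate * len(b.add & goals)  # goal-directedness
    # 2. (Spreading along successor/predecessor/conflictor links elided.)
    # 3. Decay: renormalize so that total activation remains constant.
    s = sum(b.activation for b in behaviors)
    if s > 0:
        for b in behaviors:
            b.activation *= total / s
    # 4-5. Fire the strongest executable behavior over threshold; its
    # activation goes to zero.  (The caller passes the normal threshold
    # back in on the next cycle, so thresholds revert when one fires.)
    executable = [b for b in behaviors
                  if b.pre <= state and b.activation >= threshold]
    if executable:
        winner = max(executable, key=lambda b: b.activation)
        winner.activation = 0.0
        return winner, threshold
    # 6. No behavior fired: lower the threshold by 10% and "think" again.
    return None, threshold * 0.9
```

Calling `run_cycle` repeatedly with no executable behavior steadily lowers the threshold, which is exactly the "thinking for one round" of the pseudocode.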
2.2.5 Note that there is nothing magical about the 10% in the previous paragraph. The system may well work better at a higher or lower value. This threshold reduction rate is one of several global parameters that can be used to tune a behavior net. For example, strengthening the activation rate of drives will make the system more goal driven, while varying the activation rate from the environment makes it more or less opportunistic.
2.3 Pandemonium Theory
2.3.1 John Jackson (1987) extended Selfridge's pandemonium theory (1959) to a theory of mind. Picture a collection of demons (comparable to Copycat's codelets) living in a sports stadium of some kind. Some of the demons are involved with perception, others cause external actions and still others act internally on other demons. Almost all the demons are up in the stands. A half dozen or so are down on the playing field exciting the crowd in the stands. A demon excites other demons to which it is linked. Demons in the stand respond. Some are more excited than others and are yelling louder. Stronger links produce louder responses. The loudest demon in the stands joins those on the field, displacing one of those currently performing back to the stands.
2.3.2 The system starts off with a certain number of initial demons and initial, built-in links between them. New links are made between demons and existing links are strengthened in proportion to the time the two demons have been together on the field. The strength of the link between two demons depends not only upon the time they're together on the field, but also upon the motivational level of the whole system at that time, the "gain." The gain is turned up when things are going well, turned down, even to negative, when things are getting worse. The higher the gains, the more the links between concurrently performing demons are strengthened.
2.3.3 Under such a strategy, demons would tend to reappear on the playing field if they were associated with improved conditions, resulting in strengthened links between these demons. When one of these arrives once again on the playing field, its compatriots tend to get pulled in also because of the added strength of the links between them. The system's behavior would then tend to steer toward its goals, the goals being the basis on which the system decides things are improving.
2.3.4 Typically, improved conditions result not from a single action, but from a coordinated sequence of actions. Suppose we make the links from demons on the playing field to new arrivals stronger than those from new arrivals to incumbents. Uphill links would tend to be stronger than downhill links. And suppose we also have demons gradually fade from the playing field, instead of suddenly jumping up and heading for the stands. Habitual sequences could then be completed from memory simply by putting an initial segment on the playing field. Once started, the system tends to redo that sequence.
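The association rule of the last three paragraphs, links strengthened in proportion to time together on the field and scaled by the motivational gain, might look like this in Python. The rendering and its numeric values are our own toy assumptions, not Jackson's:

```python
from itertools import combinations

# Toy sketch of Jackson-style association learning.  Link strengths and
# decay rates are arbitrary demo values.
links = {}           # (demon, demon) -> link strength

def strengthen(playing_field, gain):
    """Strengthen links between all demons currently on the field,
    scaled by the system's motivational gain; a negative gain (things
    getting worse) weakens the associations instead."""
    for a, b in combinations(sorted(playing_field), 2):
        links[(a, b)] = links.get((a, b), 0.0) + gain

def decay(rate=0.01):
    """Background decay: sufficiently rarely used links disappear."""
    for pair in list(links):
        links[pair] -= rate
        if links[pair] <= 0:
            del links[pair]
```

Running `strengthen` once per round for whatever demons share the field, then `decay`, reproduces the two tendencies in the text: recent associations count more than old ones, and unused links eventually vanish.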
2.3.5 Although we focused on the playing field, much of the really important activity takes place below ground (subconsciously) in the sub-arena. The sub-arena measures the system's well being, and on this basis, adjusts the gain on changes in link strengths through association. The sub-arena performs sensory input by sending demons representing low-level input to the playing field. Thus it provides a sensory interface. Low-level actions are carried out by demons in the sub-arena at the command of action demons on the playing field. Some primitive sensory capabilities and some primitive actions are built in.
2.3.6 Jackson also allows for the creation of concepts in his system. Demons that have very strong links can be merged into a single concept demon. When concept demons are created, their component demons survive, and continue to act individually. In a pandemonium system, the playing field is a major bottleneck because so few demons perform on the playing field at any one time. Concept demons help relieve this bottleneck. Also, when compacted into a concept demon, higher level features of one problem enable the transfer of solutions to another. Not only can we have concept demons, but also compound concept demons that result from merging concept demons. With compound concept demons a hierarchy of concepts at various levels of abstraction is possible. Higher-level concept demons might well linger on the playing field longer than low-level demons.
2.3.7 Unused links decay, or lose strength, at some background rate. Negative links may decay at a different rate. High-level demons enjoy a slower decay rate. As a consequence, sufficiently rarely used links disappear, and recent associations count more than older associations. As links have strengths, demons also have their strengths, the strength of voice of those up in the crowd yelling, and the strength of signal of those on the playing field. The demon that yells the loudest goes to the playing field with the same strength as when he was summoned.
2.4 Sparse Distributed Memory
2.4.1 Pentti Kanerva (1988) designed a content addressable memory that, in many ways, is ideal for use as a long-term associative memory. Content addressable means that items in memory can be retrieved by using part of their contents as a cue, rather than by knowing their addresses in memory. To describe Kanerva's sparse distributed memory, even superficially, will require more effort than we've expended on the other mechanisms, and even a short excursion into Boolean geometry.
2.4.2 Boolean geometry is the geometry of Boolean spaces. A Boolean space is the set of all Boolean vectors (that is, vectors composed of zeros and ones) of some fixed length, n, called the dimension of the space. Points in Boolean space are Boolean vectors. The Boolean space of dimension n contains 2^n Boolean vectors, each of length n. The number of points increases exponentially as the dimension increases. Though his model of memory is more general, Kanerva uses 1000 dimensional Boolean space, the space of Boolean vectors of length 1000, as his running example.
2.4.3 Boolean geometry uses a metric called the Hamming distance, where the distance between two points is the number of coordinates at which they differ. Thus d((1,0,0,1,0), (1,0,1,1,1)) = 2.
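For concreteness, the Hamming metric is a one-liner in Python:

```python
def hamming(u, v):
    """Hamming distance: the number of coordinates at which u and v differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

hamming((1, 0, 0, 1, 0), (1, 0, 1, 1, 1))   # -> 2, matching the example above
```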
2.4.4 By a sphere we mean the set of all points within some fixed distance, the radius, from its center. Spheres in Boolean space are quite different in one respect from the Euclidean spheres we're used to. Points of a Euclidean sphere are uniformly distributed throughout. For r < n/2 most of the points in a sphere in Boolean space lie close to its boundary.
2.4.5 A memory is called random access if any storage location can be reached in essentially the same length of time that it takes to reach any other. Kanerva constructs a model of a random access memory capable, in principle, of being implemented on a sufficiently powerful digital computer. This memory has an address space, a set of allowable addresses each specifying a storage location in a sense to be explained below. Kanerva's address space is Boolean space of dimension 1000. Thus allowable addresses are Boolean vectors of length 1000, henceforth to be called bit vectors in deference to both the computing context and to brevity.
2.4.6 Kanerva's address space is enormous. It contains 2^1000 locations, no doubt more points than the number of elementary particles in the entire universe. One cannot hope for such a vast memory. On the other hand, thinking of feature vectors, a thousand features wouldn't deal with human visual input until a high level of abstraction had been reached. A dimension of 1000 may not be all that much; it may, for some purposes, be unrealistically small.
2.4.7 Kanerva proposes to deal with this vast address space by choosing a uniform random sample of 2^20 locations, that is, about a million of them. These he calls hard locations. With 2^20 hard locations out of a possible 2^1000 locations, the ratio is 2^-980, very sparse indeed. In addition, the distance from a random location in the entire address space to the nearest hard location will fall between 411 and 430 ninety-eight percent of the time, with the median distance being 424. The hard locations are certainly sparse.
2.4.8 We've seen how sparse distributed memory is sparse. It is distributed in that many hard locations participate in storing and retrieving each datum, and one hard location can be involved in the storage and retrieval of many data. This is a very different beast than the store-one-datum-in-one-location type of memory to which we're accustomed. Each hard location, itself a bit vector of length 1000, stores data in 1000 counters, each with range -40 to 40. We now have a million hard locations, each with a thousand counters, totaling a billion counters in all. Numbers in the range -40 to 40 will take most of a byte to store. Thus we're talking about a billion bytes, a gigabyte, of memory. Quite a lot, but not out of the question.
2.4.9 How do these counters work? Writing a 1 to a counter increments it; writing a 0 decrements it. A datum, w, to be written is a bit vector of length 1000. To write w at a given hard location x, write each coordinate of w to the corresponding counter in x, either incrementing it or decrementing it.
2.4.10 Call the sphere of radius 451 centered at location x the access sphere of that location. An access sphere typically contains about a thousand hard locations, with the hard location nearest x usually some 424 bits away and the median distance from x to hard locations in its access sphere about 448. We say that each hard location in the access sphere of x is accessible from x. To write a datum w to a location z, simply write w to each of the roughly one thousand hard locations accessible from z. That's distributed storage.
2.4.11 With our datum distributively stored, the next question is how to retrieve it. With this in mind, let's ask first how one reads from a single hard location, x. Compute z, the bit vector read at x, by assigning its ith bit the value 1 or 0 according as x's ith counter is positive or negative. Thus, each bit of z results from a majority rule decision of all the data that have been written on x. The read datum, z, is an archetype of the data that have been written to x, but may not be any one of them. From another point of view, z is the datum with smallest mean distance from all data that have been written to x.
2.4.12 Knowing how to read from a hard location allows us to read from any of the 2^1000 arbitrary locations. Suppose z is any location. The bit vector, x, to be read at z is formed by pooling the data read from each hard location accessible from z. Each bit of x results from a majority rule decision over the pooled data. That is, to compute the ith bit of x, add together the ith bits of the data read from hard locations accessible from z and use half the number of such hard locations as a threshold. At or over threshold, assign a 1; below threshold, assign a 0. Put another way, pool the bit vectors read from hard locations accessible from z, and let each of their ith bits vote on the ith bit of x.
2.4.13 We now know how to write items into memory and how to read them out. But what's the relation between the datum in and the datum out? Are these two bit vectors the same, as we'd hope? Let's first look at the special case where the datum x is written at the location x, that is, with itself as address. Kanerva offers conditions under which reading from x recovers x, together with a mathematical proof. Here's the idea of the proof. Reading from x recovers archetypes from each of some thousand hard locations and takes a vote. The voting is influenced by the ~1000 stored copies of x and, typically, by about 10,000 other stored data items. The copies of x all agree, while the other data items are uncorrelated with x and so tend to cancel one another out; the vote therefore recovers x. Iterated reading allows recovery when reading from a noisy version of what's been stored. Again, Kanerva offers conditions (involving how much of the stored item is available for the read) under which this is true, and mathematical proof.
2.4.14 Since a convergent sequence of iterates converges very rapidly, while a divergent sequence of iterates bounces about seemingly at random, comparison of adjacent items in the sequence quickly tells whether or not a sequence converges. Thus, this memory is content addressable, provided we write each datum with itself as address.
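The storage and retrieval machinery of the last several paragraphs can be captured in a toy Python sketch. We shrink the scale drastically, dimension 64 rather than 1000, two hundred hard locations rather than a million, and an access radius of 28 (roughly the same fraction of the dimension as Kanerva's 451), so every number here is a demo choice of ours, not Kanerva's:

```python
import random

# Toy sparse distributed memory at a much smaller scale than Kanerva's.
N = 64        # dimension of the Boolean space
M = 200       # number of hard locations
RADIUS = 28   # access-sphere radius (≈ 0.45 * N, analogous to 451/1000)

random.seed(1)
hard = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def dist(u, v):
    """Hamming distance between two bit vectors."""
    return sum(a != b for a, b in zip(u, v))

def accessible(z):
    """Indices of hard locations in the access sphere of z."""
    return [i for i, h in enumerate(hard) if dist(h, z) <= RADIUS]

def write(datum, address=None):
    """Write datum (with itself as address, for content addressability)
    to every hard location accessible from the address: a 1 bit
    increments the matching counter, a 0 bit decrements it."""
    address = datum if address is None else address
    for i in accessible(address):
        for j, bit in enumerate(datum):
            counters[i][j] += 1 if bit else -1

def read(z):
    """Majority vote over the counters of hard locations accessible
    from z: positive pooled counter -> 1, otherwise 0."""
    locs = accessible(z)
    return tuple(int(sum(counters[i][j] for i in locs) > 0)
                 for j in range(N))
```

With a single stored datum, reading it back with itself as the cue recovers it exactly, since every accessible counter votes the same way; the interesting behavior, archetypes and noisy-cue convergence, appears once many data share the counters.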
2.4.15 Kanerva lists several similarities between properties of his sparse distributed memory and of human memory. One such has to do with the human property of knowing what one does or doesn't know. If asked for a telephone number I've once known, I may search for it. When asked for one I've never known, an immediate "I don't know" response ensues. Sparse distributed memory could make such decisions based on the speed of initial convergence. If it's slow, I don't know. The "on the tip of my tongue" phenomenon is another such. In sparse distributed memory, this could correspond to the cue having content just at the threshold of being similar enough for reconstruction. Yet another is the power of rehearsal, during which an item would be written many times, each time to about a thousand hard locations. A well-rehearsed item would be retrieved with fewer cues. Finally, forgetting would tend to increase over time as a result of other writes to memory.
2.4.16 The above discussion, based on the identity of datum and address, produced a content addressable memory with many pleasing properties. It works well for reconstructing individual memories. However, more is needed. We, and our autonomous agents, must also remember sequences of events or actions. Kanerva shows how the machinery we've just seen can be modified to provide this capability. The basic idea is something like this. The cue for a sequence of patterns serves as the address for the first pattern of the sequence. Thereafter, the content of each pattern in the sequence is the address of the next pattern.
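The chaining idea can be illustrated with an ordinary dictionary standing in for the full sparse memory. This is a deliberate simplification of ours; the real mechanism reads and writes noisy bit vectors as in the previous sections, where the dictionary lookup below would be an iterated, convergent read:

```python
# Sketch of Kanerva's sequence-storage idea: each pattern is stored at
# the address of its predecessor, so reading iteratively replays the
# sequence.  A plain dict stands in for the sparse distributed memory.
memory = {}

def store_sequence(cue, patterns):
    """Store a sequence: the cue addresses the first pattern, and each
    pattern's content is the address of the next."""
    address = cue
    for p in patterns:
        memory[address] = p
        address = p

def replay(cue, length):
    """Recover a stored sequence by iterated reading from the cue."""
    out, address = [], cue
    for _ in range(length):
        address = memory[address]
        out.append(address)
    return out
```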
3. Global workspace theory
3.0.1 The material in this section is from Baars' two books (1988, 1997) and superficially describes his global workspace theory of consciousness.
3.1 Processors and Processes
3.1.1 In his global workspace theory, Baars, along with many others (e.g. Ornstein 1986; Edelman 1987; Minsky 1985), postulates that human cognition is implemented by a multitude of relatively small, special purpose processes, almost always unconscious. (It's a multiagent system.) Communication between them is rare and over a narrow bandwidth.
3.2 Global workspace
3.2.1 Coalitions of such processes find their way into a global workspace (and into consciousness). This limited capacity workspace serves to broadcast the message of the coalition to all the unconscious processors, in order to recruit other processors to join in handling the current novel situation, or in solving the current problem. Thus consciousness in this theory allows us to deal with novelty or problematic situations that can't be dealt with efficiently, or at all, by habituated unconscious processes. Something like this key insight of Baars' theory seems to have been independently arrived at by others as well. Freeman writes as follows (1995, p. 136):
"I speculate that consciousness reflects operations by which the entire knowledge store in an intentional structure is brought instantly into play each moment of the waking life of an animal, putting into immediate service all that an animal has learned in order to solve its problems, without the need for look-up tables and random access memory systems."
3.3.1 All this takes place under the auspices of contexts: goal contexts, perceptual contexts, conceptual contexts, and/or cultural contexts. Baars uses goal hierarchies, dominant goal contexts, a dominant goal hierarchy, dominant context hierarchies, and lower level context hierarchies. Each context is, itself, a coalition of processes. Though contexts are typically unconscious, they strongly influence conscious processes.
3.4.1 Baars postulates that learning results simply from conscious attention, that is, that consciousness is sufficient for learning.
3.5 Rest of the Theory
3.5.1 There's much more to the theory, including attention, action selection, emotion, voluntary action, metacognition and a sense of self. I think of it as a high level theory of cognition.
4. The Virtual Mattie Architecture
4.0.1 Virtual Mattie (VMattie), an autonomous clerical agent, "lives" in a UNIX system, communicates with humans via email in natural language with no agreed upon protocol, and autonomously carries out her tasks without human intervention. In particular, she keeps a mailing list to which she emails seminar announcements once a week. VMattie's various tasks include gathering information from seminar organizers, reminding organizers to send seminar information, updating her mailing list in response to human requests, composing next week's seminar schedule announcement, and sending out the announcement to all the people on her mailing list in a timely fashion. At the time of this writing VMattie is up and running, and doing all that was expected of her.
4.0.2 In VMattie, Baars' "vast collection of unconscious processes" are implemented as codelets in the manner of the Copycat architecture (Hofstadter and Mitchell 1994, Mitchell 1993). All of the higher-level constructs are associated with collections of codelets that carry out actions or acquire particular information associated with the construct. Working memory consists of two distinct workspaces as well as the perception registers (see Figure 5.2 below). (This yields a hypothesis about human cognition.) Perceptual contexts include certain nodes from a slipnet-type associative memory à la Copycat, and certain templates in workspaces. How can a context, a coalition of codelets, be a node? We routinely identify the node with its associated coalition of codelets. The node-type perceptual contexts become active via spreading activation reaching a threshold (another hypothesis). Several nodes can be active at once, producing composite perceptual contexts (another hypothesis). Baars says that "[o]ne of the remarkable features of conscious experiences is how they can trigger unconscious contexts that help to interpret later conscious events." The VMattie architecture fleshes out this assertion with mechanisms. Goal contexts are implemented via an expanded version of Maes' behavior nets (1990). Again, they become active by having preconditions met and exceeding a time-varying threshold (another hypothesis).
4.1 The VM Architecture
4.1.1 The VM architecture is composed of three major parts: the perceptual apparatus, the action selection module, and the input/output module. The perceptual apparatus consists of a slipnet, a processing workspace, and a set of perception registers (see Figure 4.1). The slipnet is an associative knowledge base. The perception registers hold and make available the information created during perception of a message. (Another tenet of the action selection paradigm of mind asserts that minds operate on sensations to create information for their own use (Franklin 1995 p. 413; see also Oyama 1985).) The action selection module is composed of a behavior net, including explicitly represented drives, a workspace and a long-term (tracking) memory (see Figure 4.1). The mechanisms and functions of all these modules will be described below.
4.2 Perception via Slipnet
4.2.1 In sufficiently narrow domains, natural language understanding may be achieved via an analysis of surface features without the use of a traditional symbolic parser. Allen (1995) describes this approach as complex, template-based matching natural language processing. VMattie's limited domain requires her to deal with only nine distinct message types, each with predictable content. This allows for surface level natural language processing. VMattie's language understanding module has been implemented as a Copycat-like architecture, though her understanding takes place differently. The mechanism includes a slipnet storing domain knowledge, and a pool of codelets (processors) specialized for specific jobs, along with templates for building and verifying understanding. Together they constitute an integrated sensing system for the autonomous agent VMattie. With it she's able to recognize, categorize and understand.
Figure 4.1 Vmattie Architecture (Franklin, et al, 1996)
4.2.2 The perception registers hold information created from an incoming email message. Acting like a structured blackboard, the perception registers make this information available to codelets that need it. Each register holds the content of a specified field. Fields include organizer-name, email-address, date, speaker, seminar-name, etc. These field names label the behavior variables discussed in the preceding paragraph. When occupied, perception registers provide environmental activation to behaviors that can use their contents. A detailed description of VMattie's perceptual apparatus has appeared elsewhere (Zhang, Franklin, Olde, Wan and Graesser, 1998).
4.3 Instantiated Behavior Nets
4.3.1 VMattie has several distinct drives operating in parallel. (Our drives play the same role in this mechanism as do Maes' goals.) VMattie wants:
1) to get the weekly seminar out in a timely fashion,
2) to maintain complete information on each of the ongoing seminars,
3) to keep her mailing list updated,
4) to acknowledge each incoming message.
These drives vary in urgency as email messages arrive and as the time for the seminar announcement to be sent approaches. This continuous variation in drive urgency, beyond simple on and off, is an enhancement to the original behavior net architecture. Drives provide activation to behaviors that fulfill them.
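The flow of activation from drives to the behaviors that fulfill them might be sketched as follows. The drive names, urgency values, and the `serves` mapping are illustrative stand-ins, not VMattie's actual implementation.

```python
# Hedged sketch: drives with continuously varying urgencies feed
# activation into the behaviors that fulfill them.

drives = {
    "send-announcement": 0.9,      # urgency rises as the deadline nears
    "maintain-seminar-info": 0.4,
    "update-mailing-list": 0.2,
    "acknowledge-message": 0.5,
}

# Which drive each behavior fulfills (illustrative).
serves = {
    "compose-announcement": "send-announcement",
    "compose-reminder": "maintain-seminar-info",
    "add-address-to-list": "update-mailing-list",
}

def drive_activation(behavior, gain=1.0):
    """Activation a behavior receives from the drive it fulfills."""
    return gain * drives[serves[behavior]]
```

As the announcement deadline approaches, raising the urgency of send-announcement automatically raises the activation of compose-announcement, without any on/off switching.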
4.3.2 Behaviors are typically mid-level actions, many depending on several codelets for their execution. Examples of behaviors might include add-address-to-list, associate-organizer-with-seminar, or compose-reminder (to remind organizer to send speaker, title, etc.). As described in 2.2 above, the behavior net is composed of behaviors and their various links.
4.3.3 Our behaviors must support variables. The behavior associate-organizer-with-seminar immediately raises the questions: which organizer, and which seminar? VMattie's behaviors implement the usual preconditions, action, add list and delete list, allowing variables in the contents of any of these. Picture an underlying digraph composed of templates of behaviors with their variables unbound and their links. Above this, picture an identical, instantiated copy with the variables in its behaviors bound. Now picture several such instantiated layers, each independent of the others except for activation inputs from drives, etc. (see Figure 4.2). Instantiated behaviors and their links lie above their templates. Activation spreads only through instantiated links. A detailed description of VMattie's instantiated behavior net will appear (Song and Franklin, forthcoming).
Figure 4.2 Demonstration of a behavior template and its instantiation sequence. Behavior template 1 can have more than one behavior instance at a time. Drive 1 and the Attention Registers spread activation only to the behavior instances, not to the behavior template. (Song and Franklin, forthcoming)
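The template/instantiation scheme might be sketched like this. The class and field names are hypothetical, invented for illustration rather than taken from the actual code.

```python
# Sketch: a behavior template whose variables are bound at
# instantiation, yielding several independent instantiated copies.
# Activation would spread only through the instantiated layer.

class BehaviorTemplate:
    def __init__(self, name, variables):
        self.name = name
        self.variables = variables
        self.instances = []          # the instantiated layers above

    def instantiate(self, **bindings):
        """Create an instantiated behavior with all variables bound."""
        assert set(bindings) == set(self.variables), "all variables must bind"
        inst = {"template": self.name, "bindings": bindings, "activation": 0.0}
        self.instances.append(inst)
        return inst

assoc = BehaviorTemplate("associate-organizer-with-seminar",
                         ["organizer", "seminar"])
a = assoc.instantiate(organizer="Dr. Smith", seminar="Cognitive Science")
b = assoc.instantiate(organizer="Dr. Jones", seminar="Graph Theory")
```

Each instance answers "which organizer and which seminar" for itself, so one template can serve several concurrent tasks.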
5. "Conscious" Mattie
5.0.1 Though comprehensive, Baars' theory is quite abstract, as a psychological theory should be. It offers general principles and broad architectural sketches. Questions of architectural detail, that is, of just how functional components fit together and who talks to whom, are sometimes left open, as are almost all questions of mechanisms, that is, of how these components do what they are claimed to do. For example, in Baars' presentation the various types of contexts (perceptual, conceptual, goal contexts) are lumped architecturally. Though distinguished functionally, their architectural relationships, as well as their mechanisms, are left unspecified. As a good theory should, this one raises as many questions as it answers.
5.0.2 Providing a more detailed and discriminated architecture, and the mechanisms with which to implement it, can be expected to suggest answers to many of these questions about human cognition. With this in mind we introduce "conscious" Mattie (CMattie), a cognitive agent designed within the constraints of global workspace theory. CMattie's architecture and mechanisms serve to flesh out that theory and, hopefully, provide a fertile source of hypotheses for cognitive science and cognitive neuroscience.
5.1 Foundation in Virtual Mattie
5.1.1 CMattie is best viewed as an extension of VMattie. Her domain is exactly the same; CMattie is also a clerical software agent who communicates with humans via email and sends out weekly seminar schedules. The VM architecture, as described in Section 4, is carried over in its entirety to CMattie. Perception in CMattie is again via a slipnet and a workspace. Actions are selected by an instantiated behavior net. The perception registers are in place. The input/output module that sends and receives email messages is the same.
5.1.2 Yet there are significant differences. VMattie's slipnet (perceptual knowledge base) contains an embedded artificial neural network, feedforward and trained by backpropagation, that identifies an incoming message type. CMattie's slipnet clings much more closely to the original Copycat model, facilitating the learning of new message types.
5.1.3 VMattie's instantiated behavior net selects and executes an instantiated behavior from each instantiation at each time step. (See 4.3 above.) These behaviors operate in (simulated) parallel. CMattie selects and executes only one instantiated behavior from all instantiated behavior net layers at each time step. This brings CMattie in line with global workspace theory, which prescribes a single dominant goal context at a time. These contexts are discussed in 5.2 below.
5.1.4 VMattie's behavior net contains a behavior stream that implements the perceptual process of understanding a new message. The process by which a message moves from input text to understanding in the perception registers is controlled by a stream of behaviors in the behavior net. In humans the analogous process seems typically to be automatic, unconscious and independent of the current goal context. Let me say a little more to clarify this point. In humans, the current goal context certainly influences perception. But we have no goal context for converting a retinal image into its subsequent mappings; goal contexts don't reach inside the perceptual apparatus. For this reason CMattie implements the perceptual process directly via codelets.
5.1.5 In VMattie missing information in a message is recovered from the tracking memory after perception has occurred. In CMattie default information is added as a result of "consciousness" of something missing during the latter part of the perceptual process itself. Again, this seems more in line with what happens in humans in analogous situations.
5.2 Concordance with Global Workspace Theory
5.2.1 In CMattie, Baars' "vast collection of unconscious processes" are implemented as codelets in the manner of the Copycat architecture, or equivalently as Jackson's demons. Her limited capacity global workspace is implemented as a portion of Jackson's playing field. Working memory consists of at least four distinct workspaces (yielding a hypothesis about humans). Perceptual contexts include certain nodes from a slipnet-type associative memory à la Copycat, and certain templates in workspaces. A node-type perceptual context becomes active via spreading activation reaching a threshold (another hypothesis). Several nodes can be active at once, producing composite perceptual contexts (another hypothesis). These mechanisms allow conscious experiences to trigger unconscious contexts that help to interpret later conscious events. Conceptual contexts also reside in the slipnet, as well as in sparse distributed memory, CMattie's associative memory. Goal contexts are implemented as instantiated behaviors in a much more dynamic version of Maes' behavior nets. They become active by having preconditions met and exceeding a time-varying activation threshold (another hypothesis). Goal hierarchies are implemented as instantiated behaviors and their associated drives. The dominant goal context is determined by the currently active instantiated behavior. The dominant goal hierarchy is the one rooted at the drive associated with the currently active instantiated behavior.
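The threshold mechanism for node-type contexts might look something like this minimal sketch. The node names, link weight, and threshold value are all illustrative assumptions.

```python
# Sketch: a node-type context becomes active when spreading activation
# crosses a threshold; several simultaneously active nodes form a
# composite perceptual context.

THRESHOLD = 0.5

nodes = {"speaker-topic-msg": 0.0, "seminar": 0.0, "room-conflict": 0.0}
links = [("speaker-topic-msg", "seminar", 0.8)]   # (source, target, weight)

def spread(source, amount):
    """Inject activation at a node and spread it along outgoing links."""
    nodes[source] += amount
    for src, dst, w in links:
        if src == source:
            nodes[dst] += amount * w

def active_contexts():
    """Nodes over threshold; together they form a composite context."""
    return {n for n, a in nodes.items() if a >= THRESHOLD}

spread("speaker-topic-msg", 0.7)
```

Here a single percept activates two nodes at once, so the resulting perceptual context is composite, as the hypothesis describes.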
5.2.2 But, you object, global workspace theory calls for contexts to be coalitions of codelets. Each slipnet node is associated with a collection of codelets that it both activates and receives activation from. The same is true of each behavior, whose associated codelets perform its action when the behavior is executed. The CM architecture, to be described in detail below, comprises a more abstract level consisting of the slipnet, the behavior net and many other modules, and a less abstract level, the codelets. In specifying a concordance with global workspace theory, it's best to identify each higher-level construct (e.g. slipnet node, behavior) with its associated coalition of codelets.
5.2.3 The remaining functions comprising global workspace theory are implemented in CMattie by modules named in easily recognizable ways, for example emotion, learning, and metacognition.
5.3 The CM Architecture
5.3.1 CMattie's architecture consists of a number of complexly interconnected modules. One way of coming to grips with it is to think of five major components: the codelets, the high-level constructs, the "consciousness" mechanism, metacognition and learning. The first three of these are at least indicated in Figure 5.1. Learning is embedded in several of the other modules; metacognition sits above all the others and influences them. Each module will be described briefly in this section. Referring back to the figure will help in understanding the relationships between the modules.
Figure 5.1 The CMattie Architecture
5.3.2 As in VMattie, all of CMattie's actions are performed by codelets. Codelets implement the perception mechanism and put into action all selected behaviors. In addition to these codelets, CMattie also utilizes other classes of codelets, for example emotional codelets (see 5.4) and "consciousness" codelets. The latter serve to bring novel information and/or problematic situations to "consciousness". For example, if a speaker-topic message arrives without the title of the talk, a "consciousness" codelet, watchful for just this situation, creates an informative coalition, and competes vigorously for the spotlight. The coalition would consist of the "consciousness" codelet that recognized the missing title, and other "consciousness" codelets holding pertinent data from the perception registers.
5.3.3 Codelets in CMattie participate in a pandemonium theory style organization. Those who share time in the spotlight of "consciousness" have associations between them formed or strengthened. Those codelets sharing time in the playing field also change associations, but at a much lesser rate. Coalitions of highly associated codelets may form higher-level concept codelets (demons a la Jackson). This is comparable to chunking in SOAR (Laird, Newell and Rosenbloom 1987).
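The two rates of association change could be sketched as follows. The rate constants and codelet names are illustrative assumptions, not values from the implementation.

```python
# Sketch of pandemonium-style association change: codelets sharing the
# spotlight of "consciousness" strengthen their mutual associations
# quickly; those merely sharing the playing field, at a much lesser rate.

from itertools import combinations

SPOTLIGHT_RATE = 0.10   # illustrative
FIELD_RATE = 0.01       # much lesser rate

associations = {}       # maps frozenset({a, b}) -> strength

def strengthen(codelets, rate):
    """Strengthen every pairwise association among co-active codelets."""
    for a, b in combinations(sorted(codelets), 2):
        key = frozenset((a, b))
        associations[key] = associations.get(key, 0.0) + rate

strengthen({"find-title", "carry-speaker", "carry-seminar"}, SPOTLIGHT_RATE)
strengthen({"find-title", "carry-date"}, FIELD_RATE)
```

Coalitions whose pairwise strengths grow large enough would be candidates for merging into a higher-level concept codelet, the analogue of chunking.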
5.3.4 CMattie employs an important subclass of codelets, distinguished by their need to be active in more than one context simultaneously. For example, consider a codelet whose task is to write the speaker name in the appropriate place in the announcement template. Such a codelet may be awaiting its chance to write when another speaker-topic message is perceived, requiring another such codelet. Such codelets, called generator codelets, spawn instances of themselves with their variables bound. Each of these instantiated codelets carries the complete picture of a single task within itself. Instantiated codelets associated with instantiated behaviors are examples of generated codelets.
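A generator codelet's spawning of bound instances might be sketched like this; the class, task, and variable names are hypothetical.

```python
# Sketch: a generator codelet spawns instances of itself with variables
# bound, so each instance carries the complete picture of one task.

class GeneratorCodelet:
    def __init__(self, task, variables):
        self.task = task
        self.variables = variables

    def spawn(self, **bindings):
        """Return an instantiated codelet carrying its whole task."""
        missing = set(self.variables) - set(bindings)
        assert not missing, f"unbound variables: {missing}"
        return {"task": self.task, **bindings}

write_speaker = GeneratorCodelet("write-speaker-name", ["speaker", "slot"])
first = write_speaker.spawn(speaker="A. Turing", slot="announcement")
second = write_speaker.spawn(speaker="J. von Neumann", slot="announcement")
```

The two instances can be active at once, each in its own context, which is exactly the need that motivates the subclass.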
5.4.1 Including emotional capabilities in non-biological autonomous agents is not a new idea (Bates, Loyall, and Reilly 1991; Sloman and Poli 1996; Picard 1997). Some claim that truly intelligent robots or software agents can't be effectively designed without emotions. In CMattie we'll experiment with building in mechanisms for emotions (McCauley and Franklin 1998) such as guilt at not getting an announcement out on time, frustration at not understanding a message, and anxiety at not knowing the speaker and title of an impending seminar. These emotions will play a role analogous to the single temperature variable in the original Copycat architecture, but more complex. They'll also provide gain control for the pandemonium architecture. Action selection will be influenced by emotions via their effect on drives, modeling recent work on human action selection (Damasio 1994).
5.4.2 CMattie can "experience" four basic emotions: anger, fear, happiness and sadness. These emotions can vary in intensity as indicated by their activation levels. For example, anger can vary from mild annoyance to rage as its activation rises. A four-vector containing the current activations of these four basic emotions represents CMattie's current emotional state. As in humans, there's always some emotional state, however slight. Also as in humans, her current emotional state is often some complex combination of basic emotions. The effect of emotions on codelets, drives, etc. varies with their intensity. Fear brought on by an imminent shutdown message might be expected to strengthen CMattie's self-preservation drive, resulting in additional activation going from it into the behavior net.
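The four-vector of emotional activations might be sketched as follows; the baseline value and the saturation cap are illustrative assumptions.

```python
# Sketch: CMattie's emotional state as a four-vector of activations.
# Intensities vary continuously, and the state is usually a blend.

EMOTIONS = ("anger", "fear", "happiness", "sadness")

state = [0.05, 0.05, 0.05, 0.05]   # never exactly zero: there's always
                                   # some emotional state, however slight
def adjust(emotion, delta, cap=1.0):
    """Raise or lower one basic emotion, clamped to [0, cap]."""
    i = EMOTIONS.index(emotion)
    state[i] = min(cap, max(0.0, state[i] + delta))

adjust("fear", 0.6)    # e.g. an imminent shutdown message
```

The resulting vector, a blend of all four activations, is what would modulate codelets and drives in proportion to intensity.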
5.4.3 CMattie's emotional codelets serve to change her emotional state. When its preconditions are satisfied, an emotional codelet will enhance or diminish one of the four basic emotions. An emotion can build until saturation occurs. Repeated emotional stimuli result in habituation. Emotion codelets can also combine into concept codelets (see 2.3.6 above) to implement more complex secondary emotions that act by affecting more than one basic emotion at once. Emotion codelets also serve to enhance or diminish the activation of other codelets. They also act to increase or decrease the strength of drives, thereby influencing CMattie's choice of behaviors.
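An emotion codelet with saturation and habituation could be sketched like this; the decay factor and codelet name are illustrative assumptions.

```python
# Sketch: an emotion codelet nudges one basic emotion when it fires.
# The emotion saturates at 1.0, and repeated firings habituate
# (each has less effect than the last).

class EmotionCodelet:
    def __init__(self, emotion, delta, habituation=0.5):
        self.emotion = emotion
        self.delta = delta
        self.habituation = habituation   # 0 < factor < 1

    def fire(self, state):
        """Apply the (habituated) change and weaken future firings."""
        state[self.emotion] = min(1.0, state[self.emotion] + self.delta)
        self.delta *= self.habituation   # repeated stimuli habituate
        return state

annoyance = EmotionCodelet("anger", 0.4)
s = {"anger": 0.0, "fear": 0.0, "happiness": 0.0, "sadness": 0.0}
annoyance.fire(s)   # first stimulus
annoyance.fire(s)   # repeated stimulus has a smaller effect
```

Secondary emotions would be concept codelets built from several such codelets, each firing on a different basic emotion at once.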
5.4.4 As we'll see in the next two subsections, CMattie's associative memory associates emotions with situations while her episodic memory remembers past emotions. These memories become part of the contents of "consciousness" (see 5.7 below) and can affect the current emotion. Thus, CMattie's remembered emotions also influence her action selection.
5.4.5 CMattie's emotion mechanism maintains a continual, multidimensional evaluation of how well things are going for her. By affecting action selection it should help her to choose good-enough actions in unforeseen situations. The change in association between an emotional codelet and other codelets as a result of being "conscious" together (as described in 5.3.3 above) results in the learning of emotional associations. The "chunking" into emotion concept codelets (5.4.3) is also a form of learning.
5.5 Associative Memory
5.5.1 In 5.3.3 above we described the pandemonium style association that occurs between CMattie's codelets. That's one type of associative memory, though we don't refer to it as such. Here we describe another associative memory based on sparse distributed memory (see 2.4 above) and implemented similarly.
5.5.2 The contents of the perception registers (see 4.2.2 and Figure 5.1 above) are encoded as simple ASCII code, and strung together into a Boolean vector. This vector is lengthened to include space in which to encode an emotion and an action along with each set of contents of the perception registers. Such a vector will be used both to read from, and to write to, CMattie's associative memory. Both are accomplished through the focus, sparse distributed memory's gateway to the world. CMattie's focus is a register of appropriate size to hold the vector just described. For purely technical reasons, information is sometimes written redundantly (Anwar and Franklin forthcoming). The structure of the focus is more complicated than described above, as will be explained in the next section.
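The encoding into a Boolean vector might be sketched as follows. The field names and widths are illustrative, and the real focus, as noted, is more complicated than this.

```python
# Sketch: perception register contents ASCII-encoded and strung
# together into one Boolean vector, lengthened with room for an
# emotion and an action. This vector both reads from and writes to
# associative memory.

def to_bits(text, width):
    """ASCII-encode `text`, padded or truncated to `width` characters."""
    padded = text[:width].ljust(width)
    return [int(b) for ch in padded for b in format(ord(ch), "08b")]

def encode_focus(registers, emotion, action, field_width=16):
    """Perception registers + emotion + action as one Boolean vector."""
    bits = []
    for field in ("organizer-name", "email-address", "speaker", "seminar-name"):
        bits += to_bits(registers.get(field, ""), field_width)
    bits += to_bits(emotion, field_width)
    bits += to_bits(action, field_width)
    return bits

vec = encode_focus({"speaker": "A. Turing"}, "happiness", "compose")
```

With four fields plus emotion and action at 16 characters each, the vector is 6 × 16 × 8 = 768 bits; the actual register sizes would differ.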
5.5.3 How is the associative memory used? When new sensory information is created, it appears in the perception registers in the focus (to be described in 5.6.8 below). A read is then made from associative memory, using the contents of the perception registers as the address. Whatever is associated with this current perception is returned. Typically this will include default values of fields not mentioned in the email message, as well as an associated emotion and a suggested action. The actual perceived information will, in most but perhaps not all cases, remain unchanged. Upon arrival in the focus, the result of this read provides environmental and internal activation to the behavior net and to the emotion module through consciousness (see 5.7 below). Its information is thus made available to the rest of the system, for example to codelets who want to enter it in one or another template. This description of the use of associative memory is only roughly accurate. It omits the role played by consciousness, as will be discussed in 5.7 below.
5.5.4 Shortly thereafter we may expect a current emotion, influenced by the associated emotion, to arise. In parallel, the behavior net is choosing its next behavior. Perhaps a new goal context is called for by this current perception. Recall that the behavior net may choose no behavior at all, simply "thinking" for another round (see 2.2.4 above). Whatever emotion and behavior arise are entered into the focus as emotion and action, the contents of the perception registers remaining unchanged. A write is then made with this Boolean vector as address. This constitutes CMattie's association with the current percept.
5.6 Episodic Memory
5.6.1 Humans use an intermediate term memory in several ways. For example, I can easily recall what I had for dinner last night. A month from now I'd probably find that impossible unless it had somehow become relevant in the meantime and been reinforced. If perhaps the mussels had been exceptional, and I later reported on them to my wife and daughters within a few days, I might well be able to recall the entire meal a month later. Here the intermediate term memory was used to store items that might become relevant for a short while. I'd be unlikely to remember the color of the napkins or the pattern on the silverware, though others might.
5.6.2 Humans also need an intermediate term memory to keep a to-do list for tracking intended actions. Some such actions cannot yet be performed because some precondition is missing. Others simply haven't reached a high enough priority. Such intended behaviors of both types are handled in CMattie by her behavior net. Intention occurs at the time of instantiation (see 4.3 and 5.1.3 above). When a stream of behaviors is completed, it disappears. Thus, in CMattie the behavior net acts as an intermediate term memory. The hypothesis that we humans keep a to-do list in a similar fashion comes with one obvious difficulty: I often forget items on my internal to-do list, while CMattie doesn't.
5.6.3 We humans also use intermediate term memory as episodic memory to keep track of contexts that might be needed again. This may only be a different view of remembering the meal with the exceptional mussels. CMattie also needs an episodic memory. Suppose a speaker-topic message arrives without the title of the talk. CMattie asks the organizer for the title. (This uses "consciousness" on her part in a technical sense to be described below.) The response may well contain little other than the missing title. How is CMattie to establish a context for the reply?
5.6.4 One possibility is to use CMattie's associative memory. After all, her understanding of the original message was written there after being understood and placed in the incoming perception registers. The way sparse distributed memory works, however, makes this solution untenable. In the example described, a read using only the seminar organizer's name and/or email address would be required. Encoded, this would specify such a small piece of the Boolean vector used as an address that little useful information could be expected to be read (see 2.4.14 above). For this reason we've given CMattie a separate episodic memory based on a different mechanism. This yields yet another hypothesis about how human memory is organized. This time I suspect the hypothesis will turn out to be incorrect.
5.6.5 CMattie's episodic memory is implemented as a case-based memory, suitable for use with case-based reasoning (Kolodner 1993). This choice was influenced by the need to use case-based reasoning for learning (see 5.9 below). Like sparse distributed memory, case-based memory is content addressable (see 2.4.1 above). However, in the way it's used in CMattie, a small cue will suffice to retrieve the desired item.
Figure 5.2 CMattie's Focus
5.6.6 Like her associative memory, CMattie's episodic memory is content addressable. Presented with a cue, here considered to be a case, episodic memory returns the stored cases most similar to the cue. All this immediately conjures up two questions: What is a case, and how is similarity judged? In CMattie's episodic memory a case looks just like an entry in associative memory, that is, a copy of the perception registers augmented with an emotion and an action. The second question is less easy to answer since, as of this writing, the similarity metric is still being designed. One idea that will likely come into play is to give an exact match in a particular perception register a great weight.
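Since the similarity metric was still being designed at the time of writing, the following is a purely hypothetical sketch of the one idea mentioned: an exact match in a particular perception register carries great weight. The register names and weights are invented for illustration.

```python
# Hypothetical sketch: weighted exact-match similarity between a cue
# and a stored case, where matches in certain registers count heavily.

WEIGHTS = {"seminar-name": 5.0, "organizer-name": 5.0}  # hypothetical
DEFAULT_WEIGHT = 1.0

def similarity(cue, case):
    """Weighted count of exactly matching perception registers."""
    score = 0.0
    for field, value in cue.items():
        if case.get(field) == value:
            score += WEIGHTS.get(field, DEFAULT_WEIGHT)
    return score

def most_similar(cue, cases):
    """Return the stored case most similar to the cue."""
    return max(cases, key=lambda c: similarity(cue, c))

cases = [
    {"seminar-name": "Cognitive Science", "speaker": "A. Turing"},
    {"seminar-name": "Graph Theory", "speaker": "A. Turing"},
]
best = most_similar({"seminar-name": "Cognitive Science"}, cases)
```

Under this weighting a small cue, such as a seminar name alone, suffices to retrieve the right case, which is the property episodic memory needs.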
5.6.7 CMattie's use of episodic memory is much like that of her associative memory. When a percept arrives in the perception registers, a read from episodic memory is made. When, following that, a behavior is chosen (or the choice is declined), a write to episodic memory is made. The case written includes the contents of the perception registers, the new behavior if one was chosen, and the current emotion, which may well have changed as a result of the percept, of associations with it, and/or of a context recovered from episodic memory.
5.6.8 CMattie's focus (see Figure 5.2 above) consists of four sets of registers. The perception registers hold the percept as it emerges from her perceptual module. Using these contents as addresses, her associative and episodic memories are read into the two upper sets of registers. Most often the read from associative memory will contain copies of the contents of the perception registers with defaults filled in and with an emotion and an action added. The read from episodic memory into its set of registers should contain the case most similar to the original perception register contents. Later, the set of registers for writes to associative and episodic memories should contain the original perception register contents augmented by default values, together with the current emotion and the currently active behavior (goal context). These are subsequently written to the two memories using themselves as addresses.
5.6.9 "Consciousness" comes into play here, as we'll see in the next section. In order to achieve this a codelet is associated with each of the individual registers in each of the four collections of registers in the focus. The codelet associated with a register carries the content of that register to make it available to other codelets. For example, one codelet might carry a message type while another carries the name of a seminar.
5.7 The Spotlight of "Consciousness"
5.7.1 According to global workspace theory (Baars 1988, 1997; 3.2 above) the contents of consciousness, a coalition of processors, are broadcast to all the other processors. As a result, those processors are enlisted who can help with the novel and/or problematic situation at hand. They are the relevant processors. In the CMattie architecture processors are implemented by codelets. The apparatus for producing "consciousness" consists of a coalition manager, a spotlight controller, a broadcast manager, and a collection of "consciousness" codelets who recognize novel or problematic situations (Bogner 1998; Bogner, Ramamurthy, and Franklin, to appear).
5.7.2 We'll take up a slightly simplified version of each of these in turn. (The full description will be given in section 5.8 below.) But first, let's return to Jackson's metaphor of the sports stadium (2.3 above). The same metaphor is useful for describing the activity of CMattie's codelets after some small but crucial changes are made. Picture a sports arena composed of stands and a playing field (see Figure 5.3). In the stands are the inactive codelets. This must not be taken too literally. Each of these codelets is alert to conditions that would cause it to become active and join the playing field or, in the case of generator codelets, to instantiate a copy of itself, with variables bound, into the playing field (5.3.4 above). Note how this differs from pandemonium theory, where demons are drawn into the playing field only by the strength of their association with current players.
5.7.3 On the playing field we find the active codelets, that is, codelets that are actively carrying out their functions. Some of these are joined in coalitions. One such coalition should lie in the spotlight of "consciousness". One can think of the playing field as CMattie's working memory or, better yet, as the union of her several working memories. At any given time codelets associated with her perceptual workspace and with her composition workspace (4.1.1) will be active, along with codelets carrying information from the focus (5.6.9).
5.7.4 Each "consciousness" codelet (5.3.2 above) keeps a watchful eye out for some particular situation to occur that might call for "conscious" intervention. An example might be two seminars scheduled in the same room at the same time. Upon encountering such a situation, the appropriate "consciousness" codelet will associate itself (2.3.3; 5.3.3 above) with the small number of codelets that carry the information describing the situation (5.6.9 above). In this case these codelets might collectively carry the names of the two seminars involved, the room, the date and the overlapping times. This association should lead to this small number of codelets, together with the "consciousness" codelet that collected them, becoming a coalition. Codelets also have activations (5.2.2; 5.4.2 above). The "consciousness" codelet increases its own activation so that the coalition, once formed, can compete for "consciousness".
5.7.5 CMattie's coalition manager is responsible for forming and tracking coalitions of codelets on the playing field. Such coalitions are initiated on the basis of the mutual associations between the member codelets. Since association can both increase and diminish, the forming and tracking of coalitions is a dynamic process. Coalitions appear and disappear. Codelets may leave one coalition, and may join another.
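The coalition manager's dynamic grouping can be sketched, in much simplified form, as clustering codelets whose mutual association exceeds a threshold. Everything below, the data layout, the threshold value, and the union-find shortcut, is an illustrative assumption, not CMattie's actual mechanism:

```python
# Hypothetical sketch of coalition formation: active codelets whose
# pairwise association strength exceeds a threshold are merged into one
# coalition. Names and the threshold are invented for this illustration.

def form_coalitions(codelets, association, threshold=0.5):
    """Group codelets into coalitions via strong mutual associations.

    `association` maps a frozenset({a, b}) of codelet names to a strength.
    A simple union-find merge stands in for the real, dynamic process.
    """
    parent = {c: c for c in codelets}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path compression
            c = parent[c]
        return c

    for pair, strength in association.items():
        a, b = tuple(pair)
        if strength >= threshold and a in parent and b in parent:
            parent[find(a)] = find(b)      # merge the two groups

    coalitions = {}
    for c in codelets:
        coalitions.setdefault(find(c), set()).add(c)
    return list(coalitions.values())
```

Since associations strengthen and decay over time, rerunning such a grouping as strengths change would make coalitions appear, dissolve, and exchange members, as the text describes.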
5.7.6 While the existence of a coalition depends on the strengths of the associations between its members, its chance of becoming "conscious" depends on their average activation. CMattie's spotlight controller is responsible for selecting the coalition with the highest such average to shine upon. Since activations change even more rapidly than associations, the spotlight of "consciousness" can be expected to frequently shift from one coalition to another. Recall that the activation of a codelet can be influenced by a higher level concept (slipnet node, behavior), by the current emotion and, in the case of a "consciousness" codelet, by its own action. A codelet's activation goes to zero when its task is finished.
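The spotlight controller's selection rule is simple enough to state directly: among the current coalitions, choose the one with the highest average member activation. A minimal sketch, with hypothetical names:

```python
# Illustrative sketch of the spotlight controller: the coalition with
# the highest mean activation wins the spotlight of "consciousness".
# Data layout is an assumption made for this example.

def select_spotlight(coalitions, activation):
    """Return the coalition whose members have the highest mean activation.

    `coalitions` is a list of sets of codelet names; `activation` maps
    each codelet name to its current activation level.
    """
    def mean_activation(coalition):
        return sum(activation[c] for c in coalition) / len(coalition)

    return max(coalitions, key=mean_activation)
```

Because activations change rapidly, repeated calls to such a selector would shift the spotlight from coalition to coalition, as described above.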
5.7.7 Global workspace theory calls for the contents of "consciousness", that is, of the spotlight in the CMattie architecture, to be broadcast to each of the codelets. Here we must distinguish between instantiated codelets and the other types. Instantiated codelets have their variables bound to particular pieces of information. They are either on the playing field in the process of carrying out their particular duties, or they are on the sidelines waiting to do so. (See 5.8.5 below for an explanation.) In either case instantiated codelets are already committed to certain duties and are not available to help with subsequent situations. Hence instantiated codelets do not receive broadcasts from "consciousness". All other codelets, including the generator codelets, do receive each broadcast. It is possible that a codelet is actively engaged on the playing field and cannot respond to a relevant percept. This design decision suggests the existence of instantiated processors in humans, a hypothesis that to my knowledge remains to be tested.
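The broadcast rule just described, deliver to every codelet except the instantiated ones, can be sketched as follows; the dictionary layout and the `instantiated` flag are assumptions made for illustration:

```python
# Sketch of the broadcast manager's rule: instantiated codelets, already
# committed to their duties, are skipped; all others (including
# generator codelets) receive the "conscious" contents.

def broadcast(contents, codelets):
    """Deliver contents to every non-instantiated codelet.

    Each codelet is represented here as a dict. Returns the names of
    the codelets that received the broadcast.
    """
    receivers = []
    for c in codelets:
        if not c.get("instantiated", False):
            c.setdefault("inbox", []).append(contents)
            receivers.append(c["name"])
    return receivers
```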
5.8 CMattie in Operation
5.8.1 Having struggled through the foregoing subsections of Section 5, the reader will have created a mental model of the workings of the CMattie architecture. To provide a chance to check this individually created model against the author's, this section contains a brief run-through of CMattie in operation. The account in the previous portions of this section also left out some details in order not to overburden the reader all at once. Those details are filled in here.
5.8.2 Suppose a new message arrives in CMattie's inbox. Codelets move it into the perceptual module as soon as that module is free. Perception occurs as described in 4.2 and 5.1 above. The constructed bare percept is then moved into the incoming perception registers in the focus. Associative and episodic memories make their contributions as described in 5.5.3 and 5.6.7, creating the finished percept, partly from the environment (the incoming message), and partly from memory.
5.8.3 At this point, generator codelets, whose job it is to carry information from the registers in the focus, will typically take note of the new percept, and instantiate copies of themselves with variables bound to the appropriate register information. A "consciousness" codelet will also note the new percept, associate itself with these information-bearing codelets, and provide activation to itself and them. (Note this extension of pandemonium theory where codelets would only watch the playing field for a chance to act.) The resulting highly activated coalition (5.7.5) will typically soon find itself in the spotlight of "consciousness" (5.7.6), becoming its contents. These contents are then broadcast to all existing codelets (5.7.7) except the instantiated codelets.
5.8.4 Some of the codelets receiving the broadcast may deem themselves relevant and respond. In particular, the contents of the message type register can be counted on to stimulate all the codelets associated with the beginning behavior of any behavior stream that normally responds to a message of this type (4.3.2; 5.3.2). These mostly generator codelets will then instantiate copies of themselves with their variables bound appropriately to the information on the blackboard. Now comes one of the omitted details mentioned above. Were this collection of instantiated codelets to join the playing field, the corresponding behavior would be active without having been selected by the behavior net. Hence these instantiated codelets remain on the sidelines of the playing field, poised for action.
5.8.5 As we've seen in 4.0.2, such a collection of codelets can be identified with the behavior (goal context) it subserves. As such, its very presence on the sidelines causes an instantiated copy of the associated behavior to be added to the behavior net. But the behavior net is composed of streams, not individual behaviors. Thus an instantiated copy of the entire stream to which the original behavior belongs is added to the behavior net. (In CMattie, a behavior belongs to only one stream. In future "conscious" software agents this may not be the case, and another design decision will have to be made.) And, since each instantiated behavior in this new stream is associated with a coalition of codelets, instantiated copies of all of these are added to the sidelines. As an instantiated behavior in this stream is selected for execution by the behavior net, the corresponding coalition of codelets joins the playing field, and each codelet actively carries out its respective task.
5.8.6 CMattie's behavior net is now augmented with the behaviors she intends to carry out to deal with the situation posed by the incoming message. As each instantiated behavior in the new streams is executed, its coalition of instantiated codelets joins the playing field and becomes active. CMattie's behavior net has acted as a to-do list (see 5.6.2 above).
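The run-through of 5.8.2 through 5.8.6 can be condensed into a schematic control-flow sketch. Every helper below is a trivial stub standing in for the module it names; none of this is CMattie's implementation, only an outline of the cycle:

```python
# Schematic of one message-handling cycle, condensed from the
# run-through above. Each helper is a placeholder stub.

def perceive(message):                  # 5.8.2: perceptual module builds a bare percept
    return {"text": message}

def augment(percept, memories):         # associative/episodic reads complete the percept
    return {**percept, **memories}

def gather(percept):                    # 5.8.3: a "consciousness" codelet forms a coalition
    return {"coalition": percept}

def spotlight(coalition):               # the highly activated coalition wins the spotlight
    return coalition

def broadcast_contents(contents):       # 5.8.3: broadcast to non-instantiated codelets
    return ["responding-codelet"]

def instantiate_streams(responders):    # 5.8.4-5: responders' streams wait on the sidelines
    return [f"stream-for-{r}" for r in responders]

def handle_message(message, memories, behavior_net):
    percept = augment(perceive(message), memories)
    contents = spotlight(gather(percept))
    behavior_net.extend(instantiate_streams(broadcast_contents(contents)))
    return behavior_net                 # 5.8.6: the net now serves as a to-do list
```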
5.9 Conceptual and Behavioral Learning
5.9.1 CMattie has learned from an experience if the probability of certain actions in certain situations changes as a consequence of that experience. Several distinct learning mechanisms are implemented in the CMattie architecture, some of which we've already seen. The storing of a percept in associative memory may well affect subsequent choice of actions, as might its being written to episodic memory. Hence both memories can be considered learning mechanisms. Codelets change their associations by virtue of sharing "consciousness" or, to a lesser extent, of being in the playing field at the same time (see 2.3.5 and 5.3.3 above). Again, such a change might affect a subsequent choice of action, so we've a third learning mechanism. We've also seen this form of learning applied to emotion (5.4.5). CMattie's metacognition module learns via classifiers (see 5.10 below), a fourth learning mechanism. In this section we'll describe an additional learning mechanism for conceptual learning (Ramamurthy, Bogner and Franklin, 1998; Bogner, Ramamurthy and Franklin, to appear). We'll also discuss a quite similar mechanism for learning new behaviors. These design decisions suggest another hypothesis, that humans also employ similarly diverse learning mechanisms.
5.9.2 CMattie learns concepts into her perceptual mechanism, that is, she learns new slipnet nodes and links, and new perceptual codelets. This learning takes place by modifying what's known, existing nodes, links and codelets, using case based reasoning (Kolodner, 1993). The impetus for such learning comes from messages from a seminar organizer informing CMattie that she has mishandled a previous message. An interchange between CMattie and the organizer may eventually lead to her learning a new concept. We'll trace a hypothetical scenario for such learning.
5.9.3 Suppose CMattie receives an announcement of a dissertation defense to be held at a certain place and time with a certain speaker and title. She would most probably treat this as a speaker-topic message for a seminar. This understanding is disseminated through "consciousness", leading to an acknowledgement to the sender stating that she is initializing a new seminar called "Dissertation Defense Seminar" with the sender as organizer. This acknowledgement may well elicit a negative response from the sender. CMattie has slipnet nodes, including a message type, codelets and behaviors to help deal with such a situation. Such a negative response may start a "conversation" between CMattie and the sender. During this interchange, CMattie learns that a dissertation defense is similar to a seminar, but with slightly different features. In this case, the periodicity feature (see Figure 5.1) has a different value.

Figure 5.1 Slipnet Fragment (from Ramamurthy, Bogner, and Franklin, 1998)

The email conversation, stripped of headers and pleasantries, might go something like this:
Sender: It's not a dissertation defense seminar, just a dissertation defense.
CMattie: What's a dissertation defense?
Sender: It's like a seminar but only happens irregularly.
CMattie can trace the thread of the conversation via her episodic memory. She has codelets that recognize words associated with features. Thus she should recognize "irregularly" as having a certain meaning with regard to periodicity. At this point, case based reasoning comes into play, allowing the creation of a new slipnet node for dissertation defense with features the same as those of the seminar node except for periodicity fixed at "irregular." Links are also put in place similar to those of the seminar node. A new message type node is also created, along with its links. Finally the needed new codelets are created, modeled after the old. Case based reasoning has solved the problem by first identifying the solution to the most similar old problem, and then modifying it to solve the new one. (In order for this to work, initial cases have to be included in case based memory at startup.) A new concept has been learned to the extent that CMattie needs to learn it.
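The case-based step can be pictured as copying the nearest existing node and overriding the one feature the conversation singled out. The data layout and function below are hypothetical, a minimal sketch of the idea rather than CMattie's mechanism:

```python
# Hypothetical sketch of case-based concept learning: copy the most
# similar existing slipnet node (here, "seminar") and override the
# feature the conversation identified. Layout is illustrative only.

def learn_concept(slipnet, base_name, new_name, feature, value):
    """Create a new concept node by modifying a copy of the closest case."""
    base = slipnet[base_name]
    new_node = {**base, "features": {**base["features"], feature: value}}
    slipnet[new_name] = new_node       # the original case is left untouched
    return new_node

slipnet = {"seminar": {"features": {"periodicity": "weekly",
                                    "has_speaker": True}}}
defense = learn_concept(slipnet, "seminar", "dissertation defense",
                        "periodicity", "irregular")
```

In the real architecture the copied material would also include links, a message type node, and supporting codelets, but the pattern is the same: solve the new problem by modifying the solution to the most similar old one.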
5.9.4 Behavioral learning occurs quite similarly. CMattie's behavioral learning mechanism, again case based, takes note of the changes wrought in the slipnet and deduces needed changes in behavior. This leads to new behavior streams and new codelets to support them. If CMattie initially gets things wrong, another interchange with the sender may ensue. Eventually, CMattie will learn an acceptable behavior for a dissertation defense. Note that we've described what is essentially a one-shot learning. Though we might consider this reinforcement learning, it would be a stretch. This global learning is quite different from the local learning common in new AI systems, such as in neural net or reinforcement learning.
5.10 Metacognition

5.10.1 Metacognition should include knowledge of one's own cognitive processes, and the ability to actively monitor and consciously regulate them. This requires self-monitoring, self-evaluation, and self-regulation. Metacognition plays an important role for humans; it guides people in revising or even abandoning tasks, goals, or strategies (Hacker 1997). If we want to build more human-like software agents, we need to build metacognition into them. Aaron Sloman calls this meta-management, and has been making this point for many years (Sloman, 1996). Baars' global workspace theory also explicitly calls for metacognition (Baars, 1988).
5.10.2 Following Minsky, we'll think of CMattie's "brain" as consisting of two parts, the A-brain and the B-brain (Minsky, 1985). The A-brain, as illustrated in Figure 5.1, consists of all the modules of CMattie's architecture that have been described so far. It performs all of her cognitive activities except metacognition. Its environment is the outside world, a dynamic, but limited, real world environment. The B-brain, sitting on top of the A-brain, monitors and regulates it. The B-brain's environment is the A-brain, or more specifically, the A-brain's activities. In this subsection, we'll discuss the mechanism of the B-brain and its interaction with some relevant modules in the A-brain (Zhang, Franklin and Dasgupta, 1998; Zhang and Franklin, forthcoming).
5.10.3 One can look at a metacognitive module as an autonomous agent (see 1.1.2 above) in its own right. It senses the A-brain's activity and acts upon it over time in pursuit of its own agenda. It's also structurally coupled to its quite restricted environment. Its agenda derives from built-in metacognitive drives. One such drive is to interrupt oscillatory behavior. Another might be to keep CMattie more on task, that is, to make it more likely that a behavior stream will run to completion. Yet another would push toward efficient allocation of resources.
5.10.4 Unlike the situation in her A-Brain where drives are explicitly represented as part of the behavior net, CMattie's metacognitive drives are embodied in fuzzy production rules. The preconditions of such rules typically include some specification of an emotional state. Another type of precondition may involve the number of email messages in the incoming queue, or the number of instantiated behavior streams, or the memory space they are using.
5.10.5 How does the metacognition module influence CMattie's behavior to promote her drives? Oscillatory behavior might occur as the perceptual mechanism goes back and forth between two message types unable to decide on either. Metacognition might then send additional activation to one message type node in the slipnet, effectively forcing a decision, even a wrong one. The metacognition module can also affect CMattie's behavior by tuning global parameters, for example in the behavior net (see 2.2.5 above). This kind of tuning could serve to keep her more on task, by increasing the parameter that controls the amount of activation a drive pumps into its behavior streams. Or, it could make her more thoughtful by increasing the threshold for executing behaviors. Finally, metacognition may be concerned with high-level allocation of resources. For example, memory might be shifted from, say, a workspace (part of working memory) to the behavior net to accommodate a shortage of space there.
5.10.6 CMattie's metacognition module is quite complex in its own right (Zhang and Franklin, forthcoming), being comprised of several distinct submodules. Due to space limitations, only a cursory description will be given here. An inner perception submodule monitors the A-Brain. It consists of sensors and detectors. Sensors get the raw data from the A-brain; detectors differ from sensors in that they perform inferences, putting that data into internal representations. The fuzzy classifier system (Valenzuela-Rendon, 1991) at the heart of metacognition's action selection needs fuzzy inputs. The fuzzifier submodule contains membership functions that map crisp (real) numbers to fuzzy linguistic values, and uses them to fuzzify each inner percept. Thus each numeric value of an inner percept is replaced by the corresponding linguistic value. These fuzzy percepts are then fed to the encoder submodule, which encodes them into finite-length strings and puts them in a message list. These fuzzy string percepts may match antecedents of classifiers. This matching activates collections of classifiers from the fuzzy rule base submodule, often referred to as the classifier store. This fuzzy rule base contains the metacognition module's knowledge of what to do in a given situation. Metacognition then uses classifiers from the fuzzy rule base to infer appropriate fuzzy string actions, which the winning classifiers post in the message list. The decoder submodule decodes each string action to a set of fuzzy actions. Using the membership functions, the defuzzifier submodule transforms these fuzzy values into crisp numeric values that can be used by the inner actions submodule. The appropriate actions are then taken.
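The fuzzify, match, defuzzify cycle just described can be caricatured in a few lines. The membership functions, rule base, and parameter names below are invented for the sketch and far simpler than the fuzzy classifier system actually used:

```python
# Toy version of one metacognitive cycle: fuzzify an inner percept
# (here, a single "load" reading), pick the winning fuzzy rule, and
# return a crisp parameter adjustment. All names are hypothetical.

def fuzzify(load):
    """Map a crisp load value in [0, 1] to linguistic memberships."""
    return {"low": max(1.0 - load, 0.0), "high": min(load, 1.0)}

RULES = {
    "high": {"behavior_threshold": +0.1},   # busy: be more deliberate
    "low":  {"behavior_threshold": -0.1},   # idle: act more readily
}

def metacognitive_step(load):
    """One sense-infer-act cycle: winning antecedent selects the action."""
    percept = fuzzify(load)
    label = max(percept, key=percept.get)   # strongest membership wins
    return RULES[label]                     # crisp, defuzzified adjustment
```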
5.10.7 Metacognition in CMattie is implemented as a classifier system in order that it may learn. Learning actions always requires feedback on the results of prior actions. The Evaluator submodule is implemented by a reinforcement learning algorithm (Barto, Sutton, and Brouwer, 1981) that assigns reward or punishment to classifiers based on the next inner percept. It also uses a reinforcement distribution algorithm (Bonarini, 1997) to distribute credit among the classifiers. The more common bucket brigade algorithm (Holland and Reitman, 1978) is not used since sequences of actions are not typically required of metacognition in CMattie. When things are not going too well over a period of time, learning occurs via a genetic algorithm (Holland, 1975) acting to produce new classifiers.
5.11 Self-Preservation

5.11.1 Another of CMattie's drives is for self-preservation. Why is such a drive needed? What can happen to a software agent? The most feared event would be a sudden shutdown of the machine on which the agent is running, possibly causing a loss of data and/or of state. Another, less feared, event is running out of resources, say memory. CMattie handles some such situations reflexively and others in a more deliberative way (Ramamurthy and Franklin, forthcoming).
5.11.2 If CMattie receives a system message, as opposed to an email message, warning of an imminent shutdown, the message is detected by self-preservation codelets early in the perception process. These codelets immediately act reflexively to save data structures containing both data and state. They also shut down the agent if time permits. CMattie is started up automatically with the saved data when the host system comes online again. Much of CMattie's action selection is reactive in the sense of Sloman (1996). Here we have a reflex action taken before the message even makes it through the perception process.
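The reflexive save can be pictured as a codelet that serializes the agent's data and state the moment the warning is detected, so that a later startup can restore them. The JSON format, names, and state layout are assumptions made for this illustration:

```python
# Illustrative reflex save/restore for a shutdown warning. The real
# agent's data structures and serialization are not specified here.
import json

def reflex_save(agent_state, path):
    """Dump the agent's data and state so startup can restore them."""
    with open(path, "w") as f:
        json.dump(agent_state, f)

def restore(path):
    """Reload the saved state when the host system comes back online."""
    with open(path) as f:
        return json.load(f)
```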
5.11.3 On the other hand, an email message from the system administrator warning of a shutdown is handled in the perception module like any other message. The percept it generates gives rise to a highly activated coalition of codelets that quickly makes its way into "consciousness". The coalitions that respond to the resulting broadcast are also highly active and give rise to equally active behaviors. The resulting action is quick, but more deliberate than that described in the previous paragraph. The results are much the same, but likely to be more complete.
5.11.4 CMattie will also negotiate with the system administrator for more resources when the need arises; these might include disk space, memory space and/or access to time on the central processing unit. This negotiation occurs with much the same kind of mechanisms that implement conceptual and behavioral learning (section 5.9). It requires that CMattie be able to sense her use of these various resources.
5.11.5 CMattie's self-preservation drive also motivates her to backup her important data structures to disk at regular intervals. This is also accomplished in the usual way with self-preservation codelets realizing the necessity and activating the appropriate behaviors.
5.12.1 At the time of this writing CMattie's design is essentially complete though not yet stable. Small modifications are being made as the coding proceeds and turns up issues not previously considered. These issues require design decisions that simultaneously give rise to hypotheses about human cognition. The coding is in Java, chosen primarily because of the ease of use of threads. CMattie is very much a multi-agent system. The coding is perhaps more than half finished. We estimate about a quarter of a million lines of code in the complete implementation. Intelligence doesn't come cheaply. CMattie will live in a Unix system.
6.0.1 If, as we've seen in the previous paragraph, CMattie isn't even up and running, what's the justification for such a long and detailed article about her? CMattie's design constitutes a computational model of mind. In particular, it fleshes out Baars' global workspace theory with a more concrete architecture and the mechanisms with which to implement it. The resulting conceptual model promises to be a rich source of hopefully testable hypotheses about human cognition. In theory, each of our design decisions leads to such a hypothesis (Franklin 1997), namely that in humans it works according to our design. Of course, many or even most of these hypotheses may turn out to be false. We humans may do it differently. Nonetheless, even false hypotheses can lead to new knowledge. This section will explicitly offer several such hypotheses, stated as questions, mostly to give the reader an idea of the kind of hypotheses available from this conceptual model. We make no claim for novelty in these hypotheses. Much may already be known about them. In the few cases that references are known to the author, they will be given after the question. And, we will only include a small sample of the available hypotheses.
6.1.1 In CMattie, working memory consists of several different workspaces. One serves her perceptual module. IDA, a "conscious" software successor to CMattie (see 8. below) will have perceptual workspaces for each of several senses each equipped with different facilities. Another CMattie workspace serves for the composition of announcements. Yet another, the focus, serves as a working memory for incoming percepts, together with associated memories, emotions and actions. Do humans also have several working memories each capable of holding different types of data? (Shah & Miyake, in press)
6.1.2 CMattie's associative and episodic memories use quite different mechanisms. Her associative memory is content addressable and requires a rather complete perceptual cue for recall. Also content addressable, her episodic memory must react appropriately to a small cue so that the proper context can be found for an incoming message. Thus different mechanisms are required. Do human associative and episodic memories also differ in their mechanisms and in the size of their cues?
6.1.3 CMattie's behavior net serves a memory-like function; it implements her internal to-do list. Each sequence of behaviors (goal contexts) that CMattie intends is instantiated in some layer of her behavior net. Will all of them eventually be acted upon? I presume so, since CMattie won't be very busy in her limited domain. Do humans have some similar sort of action selection mechanism that acts as a to-do list?
6.2.1 It's been often suggested, and is now apparently widely accepted, that human cognition is effected by a host of individual small processors working in parallel (Baars 1988, 1997; Edelman 1987; Minsky 1985; Ornstein 1986). These processors are implemented in the CMattie architecture by the codelets we've talked so much about. They are postulated to work as what computer scientists call demons, that is, they watch and wait for a situation appropriate to them, and then they act. In CMattie, a codelet's job might be to write a particular piece of information, say a day of the week, into a seminar announcement. Suppose this codelet has collected its appropriate day, say "Tuesday," and is waiting for its overlying behavior to be executed so that it can do its job. Suppose during this time another message is processed needing another day of the week, say "Wednesday," written into a different place in the seminar announcement. But the write-day-of-the-week codelet is occupied. What now? To deal with this kind of situation, we've had the original codelet instantiate a copy of itself carrying the "Tuesday" information. Later it can instantiate another copy carrying "Wednesday." Do human processors instantiate such copies of themselves that carry specific pieces of information? A slightly weaker hypothesis can be proposed in computer science terms. Can human processors contain bound variables?
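The generator/instantiation distinction can be sketched as a prototype object that stays unbound while each copy carries its own bound variable. The class and field names below are hypothetical, chosen only to mirror the day-of-the-week example:

```python
# Sketch of generator-codelet instantiation: the generator remains free
# while each instantiated copy carries its own bound information.
import copy

class WriteDayCodelet:
    def __init__(self, day=None):
        self.day = day                    # bound variable; None in the generator

    def instantiate(self, day):
        clone = copy.copy(self)           # shallow copy of the prototype
        clone.day = day                   # bind the variable in the copy
        return clone

    def run(self, announcement):
        announcement["day"] = self.day    # write the bound day into place

generator = WriteDayCodelet()
tuesday = generator.instantiate("Tuesday")      # first message
wednesday = generator.instantiate("Wednesday")  # second, overlapping message
```

Both copies can wait for their respective overlying behaviors without interfering with each other, while the generator remains available for further messages.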
6.2.2 The situation described in the previous paragraph raises a question about one of the most basic tenets of global workspace theory. The theory demands that every processor receive each broadcast from the global workspace. The instantiated codelet carrying "Tuesday" is irrevocably set on its course of action. All it can do is write its work in the proper place in the announcement being composed. It can't help in any way with the novel or problematic situation that provoked the latest "conscious" broadcast. Why should it receive that broadcast? Perhaps we should not think of it as a processor at all. Do some human processors, already embarked on some given task, not receive "conscious" broadcasts?
6.2.3 Sometimes a collection of CMattie's codelets, having been awakened by a relevant "conscious" broadcast, will wait in the wings for its overlying behavior to be executed, that is, for its goal context to become dominant. When this happens, these codelets begin performing their tasks. It seems obvious from introspection that humans also at times postpone reacting to some conscious stimulus until some task with higher priority is completed.
6.3.1 Associations directly contribute to CMattie's perceptions, while episodic memories do not. When her perception module is finished with an incoming message, the information therein is written to the incoming perception registers in the focus. The resulting string of characters is used as the address at which to read associative memory. The results of this read also go to the empty slots in the incoming perception registers, and become part of the percept. At the same time, the same address is used to read episodic memory with the results going to a separate set of registers. The contents of the incoming perception registers typically become "conscious". Those from episodic memory do also when they are relevant, say when a room time conflict is noticed by a "consciousness" codelet. The contents of episodic memory do not become part of the percept. Do human associative and episodic memories differ in that the first can contribute to a percept while the second cannot?
7. Can "Conscious" Software be Conscious?
7.0.1 Having seen an extended account of a "conscious" software agent, it's reasonable to ask the question of this section's title. Put another way, is there some sense in which it is reasonable to speak of a computer system, including its software, as being conscious? Since the word "conscious" is used with several meanings, we have several questions in disguise. Let's look at some, but by no means all, of the different possible meanings. Pinker (1997, p. 134), citing Jackendoff (1987), distinguishes three meanings of "consciousness": self-knowledge, access to information (access-consciousness) and sentience. We'll explore the possibility of "conscious" software being conscious in each of these senses. To focus our discussion, we'll restrict our attention to CMattie, the best developed of the "conscious" software agents.
7.1.1 Access-consciousness refers to the accessing of information from perception and from short-term memory for use in, say, rational thought and deliberate decision making. Not all internal information is so available, leading to the conscious/unconscious distinction. What about CMattie? A look at the black dots in figure 5.1 reveals that her perception registers, containing the content of her perception, are available to the spotlight of "consciousness". The same is true of two of her working memories, as well as several other modules. It seems safe to say that CMattie is access-conscious.
7.2.1 One part of the self-knowledge sense of consciousness refers to the existence of an internal model of the agent's world that contains a notion of self. This notion of consciousness has been explored in non-human primates by Gallup (1982) and others (see also Fox 1982) using the now well known marked forehead and mirror technique. They've discovered that, while the great apes tend to exhibit self-knowledge by this test, several species of monkeys do not. I would not conclude that these monkeys have no internal sense of self, since the test seems a sufficient, but not necessary, criterion. Another interesting question raised by this work is whether a sense of self can be learned. Several gorillas failed Gallup's test, while the famous Koko (Patterson 1994) passed it. I doubt that a human infant has such a sense, so it must be learnable by humans. Can it be taught to the monkeys who failed Gallup's test?
7.2.2 And CMattie? CMattie has such internal models of her world in her slipnet, in her episodic memory, in her associative memory and in the template for her behavior net. Though these models don't contain the notion of her "self," the slipnet certainly could. That CMattie isn't conscious in this sense is simply the result of a design decision; her domain doesn't require it. As Aaron Sloman pointed out (personal communication), if CMattie were scheduled to speak at one of the seminars she announces, we'd want to build in a sense of self.
7.2.3 On the other hand, CMattie (and no doubt the monkeys as well) is capable of self-awareness in the sense of being able to monitor her activity and change her strategy when things aren't going well. This capability is embodied in her metacognition module (Zhang, Franklin and Dasgupta 1998; Zhang and Franklin, forthcoming), which doesn't appear in figure 5.1. Metacognition uses internal sensors to track the rest of CMattie's mind, uses emotions and its own criteria to decide if things are going well or not, and effects change gently by spreading activation appropriately and/or modifying global parameters (see 5.10 above). Metacognition is also involved in deliberate action.
7.3.1 This brings us to a highly controversial issue. Can CMattie be sentient in some sense? Some would say that only biological agents can experience qualia (Hill 1991, Searle 1992). Some biologists speculate that sentience arises from synchronized oscillations in the brain (Crick and Koch 1990). In the context of qualia, the neuroscientist Walter Freeman says, "I am willing to believe that rabbits are conscious, and that every animal possessing laminated neuropil has some consciousness, though I do not extend the attribute to lesser brains or to other forms of matter" (1995, p. 136). Nonetheless, Freeman doesn't rule out the possibility of conscious artifacts (p. 139). He even speculates about ethical issues. Neither does the philosopher John Haugeland who describes the assertion that no AI system could be conscious as "very hard to defend" (1985 p. 247).
7.3.2 Roboticist Hans Moravec postulates imagery as the "beginnings of awareness" in machines. "In our lab, the programs we have developed usually present ... information from the robot's world model in the form of pictures on a computer screen, a direct window into the robot's mind. In these internal models of the world I see the beginnings of awareness in the minds of our machines, an awareness I believe will evolve into consciousness comparable with that of humans" (1988, p. 39). Baars defines imagery as "conscious experience of internal events" (1997, p. 22). If Moravec is right, then CMattie must be aware of those of her internal events that come into her spotlight of "consciousness".
7.3.3 Philosopher David Chalmers takes a hard look at the possibility of artificial sentience (1996). "I claim that conscious experience arises from fine-grained functional organization" (p. 248). He refers to a more formalized version of this statement as "the principle of organizational invariance" (p. 248). He later concludes that "[t]he invariance principle tells us that in principle, cognitive systems realized in all sorts of media can be conscious" (p. 275). Still later the "in principle" is dropped: "... there is a nonempty class of computations such that the implementation of any computation in that class is sufficient for a mind, and in particular, is sufficient for the existence of conscious experience" (p. 314). And that's not all: "... implementing the right computation suffices for rich conscious experience like our own" (p. 315).
7.3.4 What about CMattie? Does she fall into Chalmers' special "nonempty class of computations"? Unfortunately, we can't tell: there is no easy characterization of the members of that class, and Chalmers' argument rests on the invariance principle, whose truth is itself in question. Ultimately, these thoughts, though no doubt important for consciousness studies, don't help us with the CMattie problem, except to make sentience on her part more plausible.
7.3.5 Recall Chalmers' assertion that "conscious experience arises from fine-grained functional organization." This contradicts those who expect consciousness to emerge from any sufficiently complex system. We share Chalmers' view: if you want conscious software, you must build in the appropriate architecture and mechanisms. The question is, have we done so in CMattie?
7.3.6 One tempting way out of the dilemma of determining awareness in software is to do what we would do with humans: ask them. Philosophers wouldn't like this approach, since zombies, in the philosophical sense, would also report that they are sentient. Still, Baars uses subjects' reports as one of two criteria for consciousness (1988, p. 15) and asserts that doing so is typical of experimental psychologists. Should we build "conscious" software agents with the ability to give such reports? Would that help settle matters? We doubt it. Call to mind the conversational software agent Julia (Mauldin 1994). It shouldn't be difficult to reprogram her to claim sentience, and she'd probably be convincing.
7.3.7 Of course, if CMattie were to be sentient, her awareness would surely be quite different from ours, possibly so different we wouldn't even recognize it. Many comparisons have been made of human consciousness with that presumed of other species (e.g. Dawkins 1986, pp. 35-36), all pointing out major differences due to different senses, etc. Hofstadter makes a similar point about possible machine awareness: "If one accepts [the] somewhat disturbing view that perhaps machines, even today's machines, should be assigned various shades of gray (even if extremely faint shades) along the "consciousness continuum," then one is forced into trying to pinpoint just what it is that makes for different shades of gray" (1995, p. 311).
7.3.8 So, will CMattie be sentient? Should we assign her some shade of gray? We don't know how to tell. But she has machinery that may give her a shot at it.
8. Limitations and Future Work
8.0.1 The CMattie model adds both architecture and mechanisms to global workspace theory. In many ways, it provides a successful conceptual model of mind (Franklin and Graesser, forthcoming). One can pose questions about human cognition and put them to the model by asking how the model would behave in the situation the question describes, thus obtaining the model's answer. But there are many limitations to this model that restrict the body of questions it can address. In this section we'll sample a few of these limitations, and go on to briefly describe future work designed to remove some of them.
8.1.1 CMattie has only one major sense, incoming email, though she does directly sense the operating system on which she runs (see 5.11 above). These two are quite distinct, and offer no possibility of the kind of sensory fusion human senses afford. This permits an overly simple perceptual mechanism, and sidesteps many issues that arise in human perception. It's a severe limitation.
8.1.2 Like CMattie, we humans tend to keep an internal to-do list, but without some external help we tend to forget items from it. CMattie does not. This tells us that CMattie's behavior net isn't designed exactly right. It's another limitation.
8.1.3 Aaron Sloman distinguishes three levels of control in cognitive agents: a reactive level, a deliberative level and a meta-management level (1996). CMattie's metacognitive module operates roughly on the meta-management level. The rest of CMattie's action selection mechanisms operate within the reactive level. Sloman's deliberative level is characterized by the agent being able to construct "alternative plans that have to be compared in some way prior to selection." The plans not selected are discarded. CMattie, in her behavior net, selects among alternative behavior streams that could be considered plans of action. However, the streams not selected are not discarded. CMattie's action selection is relatively complex, but is still reactive. Her perception mechanism does sometimes try out alternative candidates for the message type of an incoming message, discarding one in favor of another. Thus perception is in a sense deliberative, a possibility pointed out by Sloman. A deliberative level of control is important to, and some think even characteristic of, human cognition. The lack of this level in the CMattie architecture is a major limitation.
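The distinction can be illustrated with a toy selector: a reactive mechanism picks the most active behavior stream but keeps the others around, whereas a deliberative planner would discard the plans it rejects. The stream names and activation values below are invented for illustration.

```python
# Toy sketch (our own, not CMattie's code) of reactive stream selection:
# pick the stream with the highest activation, but retain the rest.

def select_stream(streams):
    """Pick the most active behavior stream; the others persist."""
    chosen = max(streams, key=lambda s: s["activation"])
    return chosen, streams  # non-selected streams are NOT discarded

streams = [
    {"name": "send-reminder", "activation": 0.4},
    {"name": "compose-announcement", "activation": 0.7},
]
chosen, remaining = select_stream(streams)
```

A deliberative version would instead return only the chosen plan, dropping the alternatives after comparison, which is exactly what CMattie's behavior net does not do.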
8.1.4 Human senses tend to habituate to repeated stimuli, lessening their responses to them. CMattie's domain doesn't provide for such repetitions, except perhaps in messages from the operating system. In the latter case she would habituate, but only because the appropriate responses would already have been taken. CMattie's emotion component does provide for habituation: repeated stimuli to a particular emotion have diminishing effect. Still, we view CMattie's lack of sensory habituation as a limitation for a model of human cognition.
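A minimal sketch of such emotional habituation, assuming a geometric decay of effect with repetition (the decay factor is our own choice, not a figure from the model):

```python
# Illustrative habituation: each repetition of the same stimulus has
# half the effect of the previous one (decay factor assumed, not from
# CMattie's actual emotion component).

def habituating_response(stimulus_strength, repetition_count, decay=0.5):
    """Effect of the nth repetition of a stimulus (n starts at 0)."""
    return stimulus_strength * (decay ** repetition_count)

first = habituating_response(1.0, 0)   # full effect on first exposure
third = habituating_response(1.0, 2)   # much diminished by the third
```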
8.1.5 In humans, conscious actions become overlearned with repetition and thereafter automatic. CMattie has a mechanism for such automatization inherited from pandemonium theory. Concept demons from pandemonium theory become concept codelets in CMattie (see 2.3.6 above). With each of its component codelets performing some task, a concept codelet as a whole would perform a more complex action, an automatization of the individual codelets' actions. There are several problems. CMattie's domain is so simple as to offer little scope for this sort of thing to happen. Also, most collections of codelets that would coalesce in this way are already built into the coalition implementing some behavior. Finally, there's the issue of instantiated codelets. The codelets that should coalesce into a concept codelet are the generator codelets sitting in the stands; the instantiated codelets on the playing field are concerned only with their specified tasks. Though this last objection is easily overcome, the lack of such automatization of actions is yet another limitation of the model.
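The coalescing of codelets into a concept codelet can be sketched as function composition: several small actions that reliably fire in sequence merge into one action performed in a single step. The individual "codelets" here are invented toy examples, not CMattie's.

```python
# Hypothetical automatization via pandemonium-style coalescing: codelets
# that repeatedly fire together are composed into a single "concept
# codelet" that performs their combined action at once.

def make_concept_codelet(codelets):
    """Compose individual codelet actions into one automatic action."""
    def concept_codelet(state):
        for act in codelets:
            state = act(state)
        return state
    return concept_codelet

# Two toy codelets operating on an email subject line.
strip_subject = lambda msg: msg.replace("Subject: ", "")
lowercase = lambda msg: msg.lower()

extract_topic = make_concept_codelet([strip_subject, lowercase])
result = extract_topic("Subject: Seminar Friday")
```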
8.1.6 Baars describes the self in humans as "the overall context of experience" (1997, p. 150), going on to talk about "the desperate creativity with which humans maintain as much coherence and stability in their conscious experience as they can" (p. 149). Remember that "context" here is a technical term referring to a coalition of processors, i.e. codelets. CMattie has no such coalition, no self. Gazzaniga postulates an "interpreter" that "seeks explanations for internal and external events" (1998, p. 24). Baars includes the interpreter as part of the self, calling it the "narrative self" (p. 147). In her learning mechanisms (see 5.9 above), CMattie can connect a negative response from a human correspondent with specific words in a prior message in a cause-and-effect fashion. This seems a little piece of an interpreter. Still, CMattie has no general mechanism for generating explanations; this part of the self is also missing. Lacking a self seems a limitation of this model of global workspace theory. Lacking an interpreter seems a limitation of any model of human cognition.
8.1.7 We humans ignore much of what comes in as sensation as we create percepts. But even within a percept we direct attention, singling out certain relevant items or issues and ignoring others. CMattie does ignore unneeded words in her sensation, an incoming email message. However, she attends to her entire percept; her simple domain obviates the need for an attention mechanism such as we humans employ. This lack is yet another limitation of the model.
8.1.8 I feel confident that a little more thought would turn up a host of other such limitations. Nonetheless, I expect even this limited model to prove useful. I also expect subsequent "conscious" software agents, such as IDA (described next), to be limited still, but much less so.
8.2.1 IDA (Intelligent Distribution Agent) is a "conscious" software agent being developed for the Navy. At the end of each sailor's tour of duty, he or she is assigned to a new billet. This assignment process is called distribution. The Navy employs some 280 people, called detailers, full time to effect these new assignments. IDA's task is to facilitate this process by playing the role of detailer as best she can. Occupying a domain orders of magnitude more complex than CMattie's, IDA will not be limited in many of the ways that CMattie is. IDA is intended as a proof-of-concept project for "conscious" software.
8.2.2 Designing IDA presents both communication problems and constraint satisfaction problems. She must communicate with sailors via email and in natural language, understanding the content. She must access a number of databases, again understanding the content. She must see that the Navy's needs are satisfied, for example, the required number of sonar technicians on a destroyer with the required types of training. She must understand and abide by the Navy's ninety or so policies regarding distribution. She must hold down moving costs and training costs. And she must cater to the needs and desires of the sailor as well as possible, in order to promote retention.
8.2.3 Unlike CMattie, IDA will sense her world using several different major sensory modalities. She'll receive email messages, she'll read screens from a number of different databases, and she'll sense via operating system commands and messages. Each of the different databases can be thought of as requiring a different sense, since each will require its own knowledge base and workspace within IDA. Sensory fusion will be needed for action selection.
8.2.4 In matching sailors with billets, IDA will continually face constraint satisfaction problems. Among other approaches, we intend to experiment with a SOAR-like mechanism (Laird, Newell, and Rosenbloom 1987) and with a Copycat-like mechanism, each of which would construct possible action scenarios, choose between them, and discard those not chosen. IDA will have a level of deliberative control.
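A toy sketch of this deliberative cycle, with invented constraint names and weights standing in for the Navy's real policies: construct alternative scenarios, score each against the constraints, choose one, and discard the rest.

```python
# Hypothetical deliberation: score alternative assignment scenarios
# against illustrative (assumed) constraints, keep only the winner.

def score_scenario(scenario):
    """Higher is better: weighted satisfaction of toy constraints."""
    return (2.0 * scenario["meets_navy_needs"]
            - 1.0 * scenario["moving_cost"]
            + 1.0 * scenario["sailor_preference"])

def deliberate(scenarios):
    best = max(scenarios, key=score_scenario)
    return best  # scenarios not chosen are discarded

scenarios = [
    {"billet": "destroyer-sonar", "meets_navy_needs": 1.0,
     "moving_cost": 0.8, "sailor_preference": 0.2},
    {"billet": "shore-duty", "meets_navy_needs": 0.9,
     "moving_cost": 0.3, "sailor_preference": 0.9},
]
best = deliberate(scenarios)
```

Unlike the behavior-net selection of section 8.1.3, the rejected scenarios here do not persist, which is what makes the control deliberative in Sloman's sense.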
8.2.5 IDA's construction of action scenarios may well require knowledge of cause and effect, in other words, an interpreter (see 8.1.6 above). A self in the sense of a single overarching context is another matter for IDA. What would one put in it? What would its codelets do? This limitation of the model may well remain in the IDA version.
8.2.6 In solving constraint satisfaction problems, IDA will need information from various databases about both the sailor and the billet. The ability to read one such database will be considered a sense, and a single record a sensation. Perception in this case is trivial, since the fields are lined up in known positions and their content is always expressed in a unique, predetermined way known to IDA. But the data in a record will in most cases contain items of no current interest to IDA. She'll need the kind of attention mechanism whose lack was considered a limitation of CMattie (see 8.1.7 above).
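Such an attention mechanism might, at its simplest, project a fixed-field record onto just the fields relevant to the current task. The field names and values below are illustrative assumptions, not the Navy's actual database schema.

```python
# Toy attention mechanism for a fixed-field "sensation": keep only the
# fields of current interest, ignoring the rest of the record.

def attend(record, relevant_fields):
    """Return only the fields relevant to the current task."""
    return {k: v for k, v in record.items() if k in relevant_fields}

record = {"name": "A. Sailor", "rate": "STG2",
          "projected_rotation": "199906", "dependents": 2,
          "uniform_size": "M"}
percept = attend(record, relevant_fields={"rate", "projected_rotation"})
```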
8.3.1 Before the IDA project came along, our intention was to use AutoTutor as our first proof-of-concept project. AutoTutor (Graesser, Franklin & Wiemer-Hastings 1998; Wiemer-Hastings et al. 1998) is a fully automated computer tutor that simulates the dialogue moves of human tutors. An unconscious prototype, in the sense of not implementing global workspace theory, is currently up and running. With computer literacy as its topic, it's now being perfected in many ways in parallel. AutoTutor will eventually incorporate sophisticated tutoring strategies. The architecture and mechanisms of AutoTutor are quite different from those of CMattie, and from those planned for IDA. Rebuilding portions of AutoTutor to conform to the demands of global workspace theory may be quite a challenge. Still, if energy and funding hold out, we intend to try for a "conscious" version of AutoTutor as another proof-of-concept project.
9.0.1 Though this paper has a single author, the work described herein is very much a team effort. It is the work of the "Conscious" Software Research Group, a part of the Institute for Intelligent Systems at the University of Memphis. The research group currently includes Stan Franklin, Art Graesser, Satish Ambati, Ashraf Anwar, Myles Bogner, Derek Harter, Arpad Kelemen, Irina Makkaveeva, Lee McCauley, Aregahegn Negatu, Fergus Nolan, Hongjun Song, Uma Ramamurthy, Zhaohua Zhang. Each person's major contributions to the project can be inferred from the authorship of the individual papers describing various parts of the "conscious" software architecture and mechanisms. For their many, many other contributions, I thank them all.
References

Allen, James (1995), Natural Language Understanding. Redwood City, CA: Benjamin/Cummings.
Anwar and Franklin (forthcoming ????), "Sparse Distributed Memory for "Conscious" Software Agents"
Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
Baars, B. J. (1997). In the Theater of Consciousness. Oxford: Oxford University Press.
Barto, A.G., Sutton, R. S., and Brouwer, P. S. (1981). Associative Search Network: a Reinforcement Learning Associative Memory, Biological Cybernetics, 40(3): 201-211.
Bates, Joseph, A. Bryan Loyall, and W. Scott Reilly (1991). "Broad Agents," Proceedings of the AAAI Spring Symposium on Integrated Intelligent Architectures, Stanford University, March. These proceedings are available in SIGART Bulletin, Volume 2, Number 4, August 1992.
Bonarini, A. (1997) Anytime Learning and Adaptation of Structured Fuzzy Behaviors, Adaptive Behavior Volume 5 ???? Cambridge MA: The MIT Press.
Bogner, Myles (1998), Creating a "conscious" agent. Master's thesis, The University of Memphis, May.
Bogner, Myles, Uma Ramamurthy, and Stan Franklin (to appear), "Consciousness" and Conceptual Learning in a Socially Situated Agent.
Chalmers, David J. (1996), The Conscious Mind, Oxford: Oxford University Press.
Damasio, A. R. (1994), Descartes' Error, New York: Gosset/Putnam Press.
Crick, Francis and Christof Koch (1990), "Towards a Neurobiological Theory of Consciousness," The Neurosciences 2.
Dawkins, Richard (1986), The Blind Watchmaker, New York: Norton.
Edelman, Gerald M. (1987). Neural Darwinism: The Theory of Neuronal Group Selection. New York: Basic Books.
Franklin, Stan (1995). Artificial Minds. Cambridge, MA: MIT Press.
Franklin, Stan (1997). Autonomous Agents as Embodied AI, Cybernetics and Systems' Special issue on Epistemological Aspects of Embodied AI, 28:6 499-520.
Franklin, Stan, Art Graesser, Brent Olde, Hongjun Song, and Aregahegn Negatu (1996). "Virtual Mattie--an Intelligent Clerical Agent," AAAI Symposium on Embodied Cognition and Action, Cambridge MA, November.
Franklin, Stan and Graesser, Art (1997) "Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents," Intelligent Agents III, Berlin: Springer Verlag, 21-35,
Franklin, Stan and Graesser, Art (forthcoming), "Models of "consciousness""
Franklin, Stan, Arpad Kelemen, and Lee McCauley (1998), IDA: A Cognitive Agent Architecture, Proceedings of the IEEE Conference on Systems, Man and Cybernetics, 2646-2651.
Freeman, Walter (1995), Societies of Brains, Hillsdale, NJ: Lawrence Erlbaum.
Gallup, G. (1982), "Self-awareness and the emergence of mind in primates," American Journal of Primatology, 2:237-246.
Gazzaniga, Michael S. (1998), The Mind's Past. Berkeley: University of California Press.
Graesser, A.C., Franklin, S., & Wiemer-Hastings, P. (1998). Simulating smooth tutorial dialogue with pedagogical value. Proceedings of the American Association for Artificial Intelligence (pp. 163-167). Menlo Park, CA: AAAI Press.
Griffin, Donald R. (1984), Animal Thinking, Cambridge, Mass: Harvard University Press.
Hacker, Douglas (1997), Metacognition: Definitions and Empirical Foundations. In Hacker, D., Dunlosky, J., Graesser, A. (Eds.), Metacognition in Educational Theory and Practice. Hillsdale, NJ: Erlbaum, in press ????.
Haugeland, John (1985), Artificial Intelligence: The Very Idea, Cambridge MA: The MIT Press.
Hill, C. S. (1991), Sensations: A Defense of Type Materialism, Cambridge: Cambridge University Press.
Hofstadter, D. R. (1995), Fluid Concepts and Creative Analogies, New York: Basic Books.
Hofstadter, D. R. and Mitchell, M. (1994), "The Copycat Project: A model of mental fluidity and analogy-making." In Holyoak, K.J. & Barnden, J.A. (Eds.) Advances in connectionist and neural computation theory, Vol. 2: Analogical connections. Norwood, N.J.: Ablex.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.
Holland, J. H. (1986), "A Mathematical Framework for Studying Learning in Classifier Systems." In D. Farmer et al. (Eds.), Evolution, Games and Learning: Models for Adaptation in Machines and Nature. Amsterdam: North-Holland.
Holland, J. H. and Reitman, J. S. (1978). Cognitive Systems Based on Adaptive Algorithms. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern Directed Inference Systems (pp. 313-329). New York: Academic Press.
Jackson, John V. (1987), "Idea for a Mind," SIGART Newsletter, no. 181, July, 23-26.
Kanerva, Pentti (1988), Sparse Distributed Memory, Cambridge MA: The MIT Press.
Kolodner, Janet (1993), Case-Based Reasoning, San Mateo, CA: Morgan Kaufmann.
Laird, John E., Newell, Allen, and Rosenbloom, Paul S. (1987). "SOAR: An Architecture for General Intelligence." Artificial Intelligence, 33: 1-64.
Leung, K.S., and C. T. Lin (1988), "Fuzzy concepts in expert systems." Computer 21(9):43-56
Loebner, Hugh (web), http://acm.org/~loebner/In-response.html
Maes, Pattie (1990), 'How to do the right thing', Connection Science, 1:3.
Maes, Pattie (1993), "Modeling Adaptive Autonomous Agents," Artificial Life , 1:1/2, 135-162.
Maturana, H. R. (1975). "The Organization of the Living: A Theory of the Living Organization," International Journal of Man-Machine Studies, 7:313-32.
Maturana, H. R. and Varela, F. (1980). Autopoiesis and Cognition: The Realization of the Living. Dordrecht, Netherlands: Reidel.
Mauldin, Michael L. (1994) "Chatterbots, Tinymuds, And The Turing Test: Entering The Loebner Prize Competition" Proceedings of the Twelfth National Conference on Artificial Intelligence, AAAI Press, 16-21
McCauley, Thomas L. and Stan Franklin (1998) An Architecture for Emotion, AAAI Fall Symposium "Emotional and Intelligent: The Tangled Knot of Cognition"
Minsky, Marvin (1985), The Society of Mind, New York: Simon and Schuster.
Mitchell, Melanie (1993), Analogy-Making as Perception, Cambridge MA: The MIT Press.
Miyake, A., & Shah, P. (in press). (Eds). Models of Working Memory: Mechanisms of Active Maintenance and Executive Control. New York: Cambridge University Press.
Moravec, Hans (1988), Mind Children, Cambridge, MA: Harvard University Press.
Newell, Allen (1990), Unified Theories of Cognition, Cambridge, Mass: Harvard University Press.
Ornstein, Robert (1986), Multimind. Boston: Houghton Mifflin.
Oyama, S. (1985). The Ontogeny of Information. Cambridge: Cambridge University Press.
Patterson, F.G.P., and Cohn, R.H. 1994. Self-recognition and Self-awareness in Lowland Gorillas. In S.T. Parker, R.W. Mitchell and M.L. Boccia (Eds.), Self-awareness in Animals and Humans. New York: Cambridge University Press.
Picard, Rosalind (1997), Affective Computing, Cambridge MA: The MIT Press.
Pinker, Steven (1997), How the Mind Works, New York: Norton.
Ramamurthy, Uma, Myles Bogner, and Stan Franklin (1998), "conscious" Learning In An Adaptive Software Agent, From Animals to Animats ????.
Ramamurthy, Uma and Stan Franklin (forthcoming), Self-preservation in Software Agents.
Selfridge, O.G. (1959), "Pandemonium: A Paradigm for Learning," Proceedings of the Symposium on Mechanisation of Thought Process, National Physics Laboratory.
Searle, J. R. (1992), The Rediscovery of the Mind, Cambridge MA: The MIT Press.
Sloman, A. (1987), "Motives, Mechanisms, Emotions," Cognition and Emotion 1(3), 217-234; reprinted in M. A. Boden (Ed.), The Philosophy of Artificial Intelligence, Oxford Readings in Philosophy, Oxford: Oxford University Press, 1990, 231-247.
Sloman, A., (1992), "Developing concepts of consciousness," Behavioral and Brain Sciences.
Sloman, Aaron (1996) What Sort of Architecture is Required for a Human-like Agent?, Cognitive Modeling Workshop , AAAI96, Portland Oregon.
Sloman, Aaron and Poli, Riccardo (1996). "SIM_AGENT: A toolkit for exploring agent designs in Intelligent Agents," Vol. II (ATAL-95), Eds. Mike Wooldridge, Joerg Mueller, Milind Tambe, Springer-Verlag, pp. 392--407.
Song, Hongjun and Stan Franklin (forthcoming), "Action Selection Using Behavior Instantiation"
Turing, Alan (1950), "Computing Machinery and Intelligence." Mind, 59:434-60. Reprinted in E. Feigenbaum and J. Feldman, eds., Computers and Thought. New York: McGraw-Hill, 1963.
Valenzuela-Rendon, M. (1991) The Fuzzy Classifier System: a classifier System for Continuously Varying Variables. In Proceedings of the Fourth International Conference on Genetic Algorithms (pp. 346-353). San Mateo, CA: Morgan Kaufmann.
Weizenbaum, J. (1966), "ELIZA: a computer program for the study of natural language communication between man and machine," Communications of the Association for Computing Machinery, 9:36-45.
Wiemer-Hastings, P., Graesser, A.C., Harter, D., and the Tutoring Research Group (1998). The foundations and architecture of AutoTutor. Proceedings, Lecture Notes in Computer Science (pp. 334-343). Berlin: Springer-Verlag.
Wilson, Stewart W. (1994), ZCS: A Zeroth Level Classifier System, Evolutionary Computation, MIT Press ????.
Zhang, Zhaohua, Stan Franklin and Dipankar Dasgupta (1998), Metacognition in Software Agents Using Classifier Systems, Proc. AAAI-98, 82-88.
Zhang, Zhaohua, Stan Franklin, Brent Olde, Yun Wan and Art Graesser (1998) "Natural Language Sensing for Autonomous Agents," Proc. IEEE Joint Symposia on Intelligence and Systems, Rockville, Maryland, 374-81
Zhang, Zhaohua and Stan Franklin (forthcoming), Metacognition in Software Agents Using Fuzzy Systems