================================ VOLUME 19 Number 4 November 1995 ================================

0. MIND <> COMPUTER: INTRODUCTION TO THE SPECIAL ISSUE

Matjaz Gams, Marcin Paprzycki, Xindong Wu

This special issue of Informatica on Mind <> Computer aims to reevaluate the soundness of current AI research, especially the heavily disputed strong-AI paradigm, and to pursue new directions towards achieving true intelligence. It is a brainstorming issue about core ideas that will shape future AI. We have tried to include critical papers representing different positions on these issues.

Submissions were invited in all subareas and on all aspects of AI research and its new directions, especially:

- the current state, positions, and true advances achieved in the last 5-10 years in various subfields of AI (as opposed to parametric improvements),
- the trends, perspectives, and foundations of artificial and natural intelligence, and
- strong AI vs. weak AI and the reality of most current "typical" publications in AI.

Papers accepted for the special issue include invited papers from Agre, Dreyfus, Gams, Michie, Winograd, and Wu, as well as regular submissions. The invited papers were refereed in the same way as regular submissions, and all authors were asked to accommodate comments from the referees. The accepted papers are grouped into the following three categories.

A. Overview and General Issues

Making a Mind vs. Modelling the Brain: AI Back at a Branchpoint by H.L. Dreyfus and S.E. Dreyfus, and Thinking Machines: Can There Be? Are We? by T. Winograd are both unique and worth reading again and again. Indeed, they present the motto of this special issue: were not H.L. Dreyfus, S.E. Dreyfus, and T. Winograd right about this issue years ago? Were the attacks on them by the strong-AI community and large parts of the formal-sciences community unjustified? We believe the answer is yes.

"Strong AI": An Adolescent Disorder by D.
Michie advocates an integrative approach: let us forget about our differences and keep doing interesting things.

Artificial Selfhood: The Path to True Artificial Intelligence by B. Goertzel rejects formal logic and advocates designing complex self-aware systems.

Strong vs. Weak AI by M. Gams presents an overview of the antagonistic approaches and proposes an AI version of the Heisenberg principle delimiting strong from weak AI.

A Brief Naive Psychology Manifesto by S. Watt argues for naive commonsense psychology, by analogy to naive physics: people understand physics and psychology even in childhood, without any formal logic or equations.

Stuffing Mind into Computer: Knowledge and Learning for Intelligent Systems by K.J. Cherkauer analyses knowledge acquisition and learning as the key issues in designing intelligent computers.

Has Turing Slain the Jabberwock? by L. Marinoff attacks strong AI by slaying Turing and the Jabberwock.

The papers in this section are a mixture of interdisciplinary approaches, ranging from computer science to cognitive science. Most take a critical stance against strong AI, although the level of criticism of, and acclaim for, intelligent digital computers varies.

B. New Approaches

Computation and Embodied Agency by P.E. Agre analyses computational theories of agents' interactions with their environments.

Methodological Considerations on Modeling Cognition and Designing Human-Computer Interfaces -- An Investigation from the Perspective of Philosophy of Science and Epistemology by M.F. Peschl investigates the role of representation in both cognitive modeling and the development of human-computer interfaces.

Knowledge Objects by X. Wu, S. Ramakrishnan, and H. Schmidt introduces knowledge objects as a step beyond programming objects.

Modeling Affect: The Next Step in Intelligent Computer Evolution by S. Walczak advocates implementing features such as affect in order to design intelligent programming systems.
The Extracellular Containment of Natural Intelligence: A New Direction for Strong AI by R.L. Amoroso is one of the rare papers in this issue closely connecting physics and AI.

Quantum Intelligence, QI; Quantum Mind, QM by B. Soucek presents and defines the concepts of quantum intelligence and quantum mind.

Representations, Explanations, and PDP: Is Representation-Talk Really Necessary? by R.S. Stufflebeam addresses the connectionist approach. What has happened to the neural-network wave of optimism?

C. Computability and Form vs. Meaning

Is Consciousness a Computational Property? by G. Caplain proposes a detailed argument to show that the mind cannot be computationally modeled.

Cracks in the Computational Foundations by P. Schweizer claims that computational procedures are not constitutive of the mind, and thus cannot play a fundamental role in AI.

Gödel's Theorems for Minds and Computers by D. Bojadziev presents an overview of the uses of Gödel's theorems, claiming that they apply equally to humans and computers.

On the Computational Model of the Mind by M. Radovan examines various strengths and shortcomings of computers and minds. Although computers exceed the natural mind in many ways, brains still have quite a few aces left.

What Internal Languages Can't Do by P. Hipwell analyses the limitations of internal representation languages in contrast with the brain's representations.

Consciousness and Understanding in the Chinese Room by S. Gozzano proposes yet another reason why Searle's Chinese room presents a hypothetical situation only.
Acknowledgements

We gratefully thank the following reviewers for their time and effort in making this special issue a reality:

Witold Abramowicz, Kenneth Aizawa, Alan Aliu, John Anderson, Istvan Berkeley, Balaji Bharadwaj, Leslie Burkholder, Frada Burstein, Wojciech Chybowski, Andrzej Ciepielewski, Sait Dogru, Marek Druzdzel, James Geller, Stavros Kokkotos, Kevin Korb, Aare Laakso, Witold Marciszewski, Tomasz Maruszewski, Timothy Menzies, Madhav Moganti, John Mueller, Hari Narayanan, James Pomykalski, David Robertson, Piotr Teczynski, Olivier de Vel, Zygmunt Vetulani, John Weckert, Stefan Wrobel

-------------------

1. Making a Mind vs. Modeling the Brain: AI Back at a Branchpoint

Hubert L. Dreyfus and Stuart E. Dreyfus, University of California, Berkeley

pp. 425-442

Keywords: mind, brain, AI directions

Abstract: Nothing seems more possible to me than that people some day will come to the definite opinion that there is no copy in the nervous system which corresponds to a particular thought, or a particular idea, or memory. Information is not stored anywhere in particular; rather, it is stored everywhere. Information is better thought of as "evoked" than "found".

-------------------

2. Thinking Machines: Can There Be? Are We?

Terry Winograd, Stanford University, Computer Science Dept., Stanford, CA 94305-2140, USA, E-mail: winograd@cs.stanford.edu

pp. 443-460

Keywords: thinking machines, broader understanding

Abstract: Artificial intelligence researchers predict that "thinking machines" will take over our mental work, just as their mechanical predecessors were intended to eliminate physical drudgery. Critics have argued with equal fervor that "thinking machine" is a contradiction in terms: computers, with their foundations of cold logic, can never be creative or insightful or possess real judgment.
Although my own understanding developed through active participation in artificial intelligence research, I have now come to recognize a larger grain of truth in the criticisms than in the enthusiastic predictions. The source of the difficulties will not be found in the details of silicon micro-circuits or of Boolean logic, but in a basic philosophy of patchwork rationalism that has guided the research. In this paper I review the guiding principles of artificial intelligence and argue that, as now conceived, it is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy. In conclusion I briefly introduce an orientation I call hermeneutic constructivism and illustrate how it can lead to an alternative path of design.

-------------------

3. "Strong AI": An Adolescent Disorder

Donald Michie, Professor Emeritus, University of Edinburgh, UK; Associate Member, Jozef Stefan Institute, Ljubljana, Slovenia

pp. 461-468

Keywords: strong and weak AI, Turing's test, middle ground

Abstract: Philosophers have distinguished two attitudes to the mechanization of thought. "Strong AI" says that, given a sufficiency of well-chosen axioms and deduction procedures, we have all we need to program computers to out-think humans. "Weak AI" says that humans don't think in logical deductions anyway, so why not instead devote ourselves to (1) neural nets, (2) ultra-parallelism, or (3) other ways of dispensing with symbolic domain models? "Weak AI" thus has diverse strands, united in a common objection to "strong AI" and articulated in popular writings; see for example Hubert Dreyfus (1979), John Searle (1990), and Roger Penrose (1989). How should one assess their objection?

-------------------

4. Artificial Selfhood: The Path to True Artificial Intelligence

Ben Goertzel, Psychology Department, University of Western Australia, Nedlands WA 6009, Australia, ben@psy.uwa.edu.au

pp.
469-477

Keywords: artificial intelligence, complex systems, self, psynet model

Abstract: In order to make strong AI a reality, formal logic and formal neural network theory must be abandoned in favor of complex systems science. The focus must be placed on large-scale emergent structures and dynamics. Creative intelligence is possible in a computer program, but only if the program is devised in such a way as to allow the spontaneous organization and emergence of "self- and reality-theories." In order to obtain such a program, it may be necessary to program whole populations of interacting, "artificially intersubjective" AI programs.

-------------------

5. Strong vs. Weak AI

Matjaz Gams, Jozef Stefan Institute, Jamova 39, 61000 Ljubljana, Slovenia, Phone: +386 61 17-73-644, Fax: +386 61 161-029, E-mail: matjaz.gams@ijs.si, WWW: http://www2.ijs.si/~mezi/matjaz.html

pp. 479-493

Keywords: strong and weak AI, principle of multiple knowledge, Church's thesis, Turing machines

Abstract: An overview of recent AI turning points is presented through the strong-weak AI opposition. Strong-strong and weak-weak AI are rejected as being too extreme. Strong AI is refuted by several arguments, such as the empirical lack of intelligence in the fastest and most complex computers. Weak AI rejects the old formalistic approach based only on computational models and endorses ideas in several directions, from neuroscience to philosophy and physics. The proposed line distinguishing strong from weak AI is set by the principle of multiple knowledge, which declares that single-model systems cannot achieve intelligence. Weak AI reevaluates and upgrades several foundations of AI and of computer science in general: Church's thesis and Turing machines.

-------------------

6. A Brief Naive Psychology Manifesto

Stuart Watt, Department of Psychology, The Open University, Milton Keynes MK7 6AA, UK, Phone: +44 1908 654513, Fax: +44 1908 653169, E-mail: S.N.K.Watt@open.ac.uk

pp.
495-500

Keywords: naive psychology, common sense, anthropomorphism

Abstract: This paper argues that artificial intelligence has failed to address the whole problem of common sense, and that this is the cause of a recent stagnation in the field. The big gap is in common-sense (or naive) psychology: our natural human ability to see one another as minds rather than as bodies. This is especially important to artificial intelligence, which must eventually enable us humans to see computers not as grey boxes but as minds. The paper proposes that artificial intelligence study exactly this: what is going on in people's heads that makes them see others as having minds.

-------------------

7. Stuffing Mind into Computer: Knowledge and Learning for Intelligent Systems

Kevin J. Cherkauer, Department of Computer Sciences, University of Wisconsin-Madison, 1210 West Dayton St., Madison, WI 53706, USA, Phone: 1-608-262-6613, Fax: 1-608-262-9777, E-mail: cherkauer@cs.wisc.edu, WWW: http://www.cs.wisc.edu/cherkaue/cherkauer.html

pp. 501-511

Keywords: artificial intelligence, knowledge acquisition, knowledge representation, knowledge refinement, machine learning, psychological plausibility, philosophies of mind, research directions

Abstract: The task of somehow putting mind into a computer is one that artificial intelligence researchers have pursued for decades, and though we are getting closer, we have not caught it yet. Mind is an incredibly complex and poorly understood thing, but we should not let this stop us from continuing to strive toward the goal of intelligent computers. Two issues essential to this endeavor are knowledge and learning. These form the basis of human intelligence, and most people believe they are fundamental to achieving similar intelligence in computers.
This paper explores issues surrounding knowledge acquisition and learning in intelligent artificial systems in light of both current philosophies of mind and the present state of artificial intelligence research. Its scope ranges from the mundane to the (almost) outlandish, with the goal of stimulating serious thought about where we are, where we would like to go, and how to get there in our attempts to render an intelligence in silicon.

-------------------

8. Has Turing Slain the Jabberwock?

Louis Marinoff, Department of Philosophy, The City College of New York, 137th Street at Convent Avenue, New York 10031, Phone: (212) 650-7647, Fax: (212) 650-764, E-mail: marinoff@cnct.com

pp. 513-526

Keywords: Turing test, formalism, holism, strong AI thesis

Abstract: This is a report of a three-tiered experiment designed to resemble a limited Turing imitation test. In tier #1, optical character recognition software performed automated spell-checking and "correction" of the first stanza of Jabberwocky (Carroll, 1871). In tier #2, human subjects incognizant of the poem spell-checked and "corrected" the same stanza. In tier #3, a widely qualified group of academics and professionals attempted to identify the version rendered by the computer. Discussion of the experiment and its results leads to the notion of a "reverse Turing test", and ultimately to an argument against the strong AI thesis.

-------------------

9. Computation and Embodied Agency

Philip E. Agre, Department of Communication, University of California, San Diego, La Jolla, California 92093-0503, E-mail: pagre@ucsd.edu, Phone: (619) 534-6328, Fax: (619) 534-7315

pp. 527-535

Keywords: artificial intelligence, planning, structural coupling, critical cognitive science, history of ideas, interaction, environment

Abstract: An emerging movement in artificial intelligence research has explored computational theories of agents' interactions with their environments.
This research has made clear that many historically important ideas about computation are not well suited to the design of agents with bodies, or to the analysis of these agents' embodied activities. This paper reviews some of the difficulties and describes some of the concepts that are guiding the new research, as well as the increasing dialog between AI research and research in fields as disparate as phenomenology and physics.

-------------------

10. Methodological Considerations on Modeling Cognition and Designing Human-Computer Interfaces -- An Investigation from the Perspective of Philosophy of Science and Epistemology

Markus F. Peschl, Dept. for Philosophy of Science, University of Vienna, Sensengasse 8/10, A-1090 Wien, Austria, Tel. +431402-7601/41, Fax: +431408-8838, E-mail: a6111daa@vm.univie.ac.at

pp. 537-556

Keywords: cognition, epistemology, HCI, knowledge representation

Abstract: This paper investigates the role of representation in both cognitive modeling and the development of human-computer interfaces/interaction (HCI). It turns out that these two domains are closely connected through the problem of knowledge representation. The main points of this paper can be summarized as follows: (i) Humans and computers have to be considered as two representational systems which interact with each other via the externalization of representations. (ii) There are different levels and forms of representation involved in the process of HCI, as well as in the processing mechanisms of the respective system. (iii) As an implication there arises the problem of a mismatch between these representational forms; in some cases this mismatch leads to failures in the effectiveness of HCIs.
The main argument is that representations (e.g., symbols) typically ascribed to humans are built/projected into computers. The problem, however, is that these representations are merely external manifestations of internal neural representations whose nature is still under investigation and whose structure seems to differ from the traditional (i.e., referential) understanding of representation. This seems to be a serious methodological problem. This paper suggests a way out: first, it is important to understand the dynamics of internal neural representations more deeply and to take this knowledge seriously in the development of HCIs. Second, the task of HCI design should be to trigger appropriate representations, processes, and/or state transitions in the participating systems. This enables an effective and closed feedback loop between these systems. The goal of this paper is not to give detailed instructions on how to build a "better" cognitive model and/or HCI, but to investigate the epistemological and representational issues arising in these domains. Furthermore, some suggestions are made on how to avoid methodological and epistemological "traps" in these fields.

-------------------

11. Knowledge Objects

Xindong Wu, Sita Ramakrishnan, Heinz Schmidt, Department of Software Development, Monash University, 900 Dandenong Road, Melbourne, VIC 3145, Australia, E-mail: {xindong,sitar,hws}@insect.sd.monash.edu.au

pp. 557-571

Keywords: AI programming rules, objects, intelligent objects, knowledge objects

Abstract: True improvements in large computer systems always come through their engineering devices. In AI, one of the fundamental differences from conventional computer science (such as software engineering and database technology) is its own established programming methodology: rule-based programming has been dominant in AI research and applications.
However, there are a number of inherent engineering problems with existing rule-based programming systems and tools. Most notably, they are inefficient in structural representation, and rules in general lack the software engineering devices that would make them a viable choice for large programs. Many researchers have therefore begun to integrate the rule-based paradigm with object-oriented programming, which has its engineering strength in exactly these areas. This paper establishes the concepts of knowledge objects and intelligent objects based on the integration of rules and objects, and outlines an extended object model and an ongoing project of the authors' design along this direction.

-------------------

12. Modeling Affect: The Next Step in Intelligent Computer Evolution

Steven Walczak, University of South Florida, 4202 E. Fowler Ave., CIS 1040, Tampa FL 33620, Phone: 813 974 6768, E-mail: walczak@bsn.usf.edu

pp. 573-584

Keywords: affect, emotion, machine learning, adaptation, problem solving

Abstract: Artificial intelligence has succeeded in emulating the expertise of humans in narrowly defined domains and in simulating the training of neural systems. Although "intelligent" by a more limited definition of Turing's test, these systems are not capable of surviving in complex dynamic environments. Animals and humans alike learn to survive through their perception of pain and pleasure. Intelligent systems can model the affective processes of humans to learn to adapt automatically to their environment, allowing them to perform and survive in unknown and potentially hostile environments. A model of affective learning and reasoning has been implemented in the program FEEL. Two simulations demonstrating FEEL's use of the affect model are performed to demonstrate the benefits of affect-based reasoning.

-------------------

13. The Extracellular Containment of Natural Intelligence: A New Direction for Strong AI

Richard L. Amoroso, The Noetic Institute, 120 Village Sq.
#49, Orinda, CA 94563-2502, USA, Phone: 510 893 0467, E-mail: ramoroso@hooked.net

pp. 585-590

Keywords: AI, conscious computing, molecular electronics, teleology

Abstract: Attempts to mimic human intelligence through information processing alone have failed because human rationality contains an element of non-linear acausality, something left out of the design criteria of linear machine intelligence. Based on the fundamental premise that a noumenon of consciousness is an inherent teleology in the fabric of the physical universe, the architecture of a molecular quantum holonomic computer can be designed to embody the physical elements of natural intelligence. Consciousness emerges within its core because the utility of the missing parameters of mind contained in the deeper ontology functions as a carrier to simulate a platform of natural intelligence.

-------------------

14. Quantum Intelligence, QI; Quantum Mind, QM

Branko Soucek, IRIS International Center, Via M. Troisi 18/I, 70125 Bari, Italy, Fax: 0039805490290

pp. 591-597

Keywords: intelligence, quantum intelligence, quantum mind, message quantum, brain, brain-windows, generalisation, courting, mimicry, aggression, mind, behaviour, decision support systems, business systems, multi-agent intelligent systems

Abstract: Computer-based data mining has been used to search for quantal processes. Quantizing has been observed in experimental data from the frog Rana temporaria, the firefly Photuris versicolor, and brainstem auditory potentials recorded from the human scalp. Within the frame of these experimental data, the concepts of Quantum Intelligence (QI) and Quantum Mind (QM) have been defined. Elementary components of QI and QM have been identified: Optimal Quantizing; Quantal Generalisation; Quantum Brain Windows; the Message Quantum; the Context.
The QI/QM model is in excellent agreement with experimentally observed reasoning and behaviour modes: selective courting, mimicry, context switching, aggression, alternation, solo, and transmitting. The relevance of these modes to intelligent decision-support business systems is shown. These fundamental modes of reasoning, behaviour, and emotion present the link between mind and computers. QI and QM lead to new solutions for neurological diagnosis, the analysis and explanation of complex spatiotemporal data, multi-agent intelligent systems, and brain and mind modelling.

-------------------

15. Representations, Explanations, and PDP: Is Representation-Talk Really Necessary?

Robert S. Stufflebeam, Washington University, Philosophy-Neuroscience-Psychology Program, Campus Box 1073, One Brookings Dr., St. Louis, MO 63108, USA, Phone: (314) 935-6670, Fax: (314) 935-7349, E-mail: rob@twinearth.wustl.edu

pp. 599-613

Keywords: representation, computation, discovery, explanation, PDP

Abstract: According to the received view, since the brain is a computational device, "internal representations" need to figure in any plausible explanation of biological computational processing. My aim here is to show that this is not the case: "internal distributed representations" can be dropped altogether from mechanistic explanations of parallel distributed processing (PDP). By focusing on the discovery of mechanistic explanations for complex systems (sections 2-3), I argue that PDP networks cannot be functionally decomposed into component internal distributed representations. I also argue that "distributed representations" are not internal representations but rather constructs (section 4): interpretations imposed on the processing. So, if the brain is a PDP-style computer, then there are reasons for thinking that internal representations are not doing the work they are commonly thought to do.

-------------------

16. Is Consciousness a Computational Property?
Gilbert Caplain, ENPC-Cermics, La Courtine, F-93167 Noisy-le-Grand Cedex, France, E-mail: caplain@cermics.enpc.fr

pp. 615-619

Keywords: consciousness, knowledge, belief, artificial intelligence

Abstract: We outline a proof that consciousness cannot be adequately described as a computational structure and/or process. This proof makes use of a well-known, but paradoxical, ability of consciousness to reach ascertained knowledge (as opposed to mere belief) in some cases. Although such a result rules out "naive reductionism", it does not fully settle the reductionism vs. dualism debate in favor of the latter, but merely leads to some kind of weak dualism.

-------------------

17. Cracks in the Computational Foundations

Paul Schweizer, Centre for Cognitive Science, University of Edinburgh, Scotland, E-mail: paul@cogsci.ed.ac.uk

pp. 621-626

Keywords: computational paradigm, mental content, consciousness

Abstract: The main thesis of the paper is that the computational paradigm can explain neither consciousness nor representational content, and hence cannot explain the mind as it is standardly conceived. Computational procedures are not constitutive of mind, and thus cannot play the foundational role they are often ascribed in AI and cognitive science. However, it is possible that a computational description of the brain may provide a scientifically fruitful level of analysis which links consciousness and representational content with physical processes.

-------------------

18. Gödel's Theorems for Minds and Computers

Damjan Bojadziev, Institute "Jozef Stefan", Jamova 39, 1000 Ljubljana, Slovenia, Phone: +386 61 1773 768, Fax: +386 61 1258 058, E-mail: damjan.bojadziev@ijs.si, WWW: http://nl.ijs.si/~damjan/me.html

pp. 627-634

Keywords: Gödel's theorems, self-reference, artificial intelligence, reflexive sequences of theories

Abstract: Formal self-reference in Gödel's theorems has various features in common with self-reference in minds and computers.
These theorems do not imply that there can be no formal, computational models of the mind; on the contrary, they suggest the existence of such models within a conception of the mind as something that has its own limitations, similar to those of formal systems. If reflexive theories do not themselves suffice as models of mind-like reflection, reflexive sequences of reflexive theories could be used.

-------------------

19. On the Computational Model of the Mind

Mario Radovan, FET - Pula, University of Rijeka, Preradoviceva 1/1, 52000 Pula, Croatia, Phone: +385 52 23455, Fax: +385 52 212 034, E-mail: Mario.Radovan@efpu.hr

pp. 635-645

Keywords: mind, consciousness, computability, functionalism, language of thought, metaphor, hardware independence, connectionism

Abstract: The paper examines the power and limitations of the computational model of the mind. It is argued that the conscious mind and the human brain are not programmable machines, but that there are pragmatic reasons to assign them a computational interpretation. In this context, I speculate on the possibility that programmable machines exceed the natural mind (in all kinds of mental abilities), but I also show that not all features of actual computer systems can be successfully mapped onto the human mind/brain.

-------------------

20. What Internal Languages Can't Do

Peter Hipwell, Centre for Cognitive Science, University of Edinburgh, E-mail: petehip@cogsci.ed.ac.uk

pp. 647-652

Keywords: language, analogy, emergence

Abstract: The ability of artificial internal languages to mirror the world is compared to the power of natural language systems. It is concluded that internal languages are just as arbitrary, and therefore have no representational advantage. Alternative forms of representation, including particle interaction in cellular automata, are considered.

-------------------

21.
Consciousness and Understanding in the Chinese Room

Simone Gozzano, via della Balduina 73, 00136 Rome, Italy, E-mail: s.gozzan@phil.uniroma3.it

pp. 653-656

Keywords: Searle's Chinese room

Abstract: In this paper I submit that the "Chinese room" argument rests on the assumption that understanding a sentence necessarily implies being conscious of its content. This assumption can be challenged by showing that two notions of consciousness come into play, one found in AI and the other in Searle's argument, and that the former is an essential condition for the notion used by Searle. If Searle discards the first, he not only has trouble explaining how we can learn a language but also puts the validity of his own argument in jeopardy.