Brief Introduction to Educational Implications of Artificial Intelligence

David Moursund

This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 License.

Updated 6/2/07

Link to PS Book

Dave Moursund's Blog for the discussion of his current and past writing projects.


Dave Moursund's Home Page

Other free books written by Moursund

Contact Website Author


Cite this book as:

Moursund, D.G. (2005, 2006). Brief introduction to educational implications of Artificial Intelligence. Access at http://darkwing.uoregon.edu/~moursund/Books/AIBook/index.htm

Abstract  

Table of Contents of the book.

PDF version of the book.

Microsoft Word version of the book. (This may download to your desktop under the name AI.doc.)


Interesting tidbits and references not included in the current version of the book

Other free educational materials developed by Dave Moursund are listed at http://uoregon.edu/~moursund/dave/Free.html.

Abstract

This book is designed to help preservice and inservice teachers learn about some of the educational implications of current uses of Artificial Intelligence as an aid to solving problems and accomplishing tasks. Humans and their predecessors have developed a wide range of tools to help solve the types of problems that they face. Such tools embody some of the knowledge and skills of those who discover, invent, design, and build the tools. Because of this, in some sense a tool user gains in knowledge and skill by learning to make use of tools.

This document uses the term “tool” in a very broad sense. It includes the stone ax, the flint knife, reading and writing, arithmetic and other math, the hoe and plough, the telescope, microscope, and other scientific instruments, the steam engine and steam locomotive, the bicycle, the internal combustion engine and automobile, and so on. It also includes the computer hardware, software, and connectivity that we lump together under the title Information and Communication Technology (ICT).

Artificial intelligence (AI) is a branch of the field of computer and information science. It focuses on developing hardware and software systems that solve problems and accomplish tasks that—if accomplished by humans—would be considered a display of intelligence. The field of AI includes studying and developing machines such as robots, automatic pilots for airplanes and space ships, and “smart” military weapons. Europeans tend to use the term machine intelligence (MI) instead of the term AI.

The theory and practice of AI is leading to the development of a wide range of artificially intelligent tools. These tools, sometimes working under the guidance of a human and sometimes without external guidance, are able to solve or help solve a steadily increasing range of problems. Over the past 50 years, AI has produced a number of results that are important to students, teachers, our overall educational system, and to our society.

This short book provides an overview of AI from K-12 education and teacher education points of view. It is designed specifically for preservice and inservice teachers and school administrators. However, educational aides, parents, school site council members, school board members, and others who are interested in education will find this booklet to be useful.

This book is designed for self-study, for use in workshops, for use in a short course, and for use as a unit of study in a longer course on ICT in education. It contains a number of ideas for immediate application of the content, and it contains a number of activities for use in workshops and courses. An appendix contains suggestions for Project-Based Learning activities suitable for educators and students.

Table of Contents

Abstract 2
Chapter 1: Intelligence and Other Aids to Problem Solving 3
Chapter 2: Goals of Education 11
Chapter 3: Computer Chess and Chesslandia 20
Chapter 4: Algorithmic and Heuristic Procedures 26
Chapter 5: Procedures Used by a Word Processor 34
Chapter 6: Procedures Used in Game Playing 39
Chapter 7: Machine Learning 45
Chapter 8: Summary and Conclusions 59
Appendix: PBL Activities for Students and Educators 68
References 71
Index 74



Materials for possible use in a future revision

Adaptive Interface (2006). An adaptive interface for controlling the computer by thought. Retrieved 6/16/06: http://www.basqueresearch.com/berria_irakurri.asp?Gelaxka=1_1&Berri_Kod=983&hizk=I.

Controlling a computer just by thought is the aim of cerebral interfaces. The engineer from Pamplona, Carmen Vidaurre Arbizu, has designed a totally adaptive interface that improves on the performance of currently existing devices, reducing the time needed to become skilled in their operation and enhancing the control that users have over the interface. Moreover, according to Ms Vidaurre, the majority of the population is capable of using it.

The results appear in the PhD thesis, Online Adaptive Classification for Brain-Computer Interfaces, defended recently at the Public University of Navarre.

Cerebral interface

A cerebral interface or brain-computer interface (BCI) allows people with communication problems to relate to their surroundings using a computer and the electrophysiological signals from the brain. The actual interface with which Carmen Vidaurre has worked is based on electroencephalograms (EEG) of the individual, although there are others that use signals recorded from electrodes fitted directly into the brain.

The user and the interface are highly interdependent “systems” that, up to recently, adapted to each other independently. In the past, when a non-experienced individual started to use a BCI, the systems were unable to supply feedback, i.e. the individual was unable to see the results of their brain patterns on the screen.

With those older systems, feedback was included only after a number of prior data-collecting sessions; at that point the subjects started to adapt themselves to the computer, using the interface's response to the patterns extracted from the signals. However, few users could use these interfaces, because the patterns generated during the trial sessions had to be very similar to those generated in the sessions with feedback.

One of the biggest problems found by other users and researchers into these “static” systems was that the patterns extracted from the signals recorded in the trial sessions were quite different from those signals recorded in the presence of feedback. For example, the visual input was different between both types of sessions and this difference significantly changes brain activity in specific areas thereof.

For inexpert users it is very difficult to adapt to the traditional interface because they are unable to generate stationary patterns in time, probably due to their inexperience. They find it very complicated to reproduce mental states sufficiently similar to be correctly classified.
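The kind of adaptation described above can be illustrated with a minimal sketch: an online linear classifier whose weights are updated after every trial, so the interface keeps adjusting to the user's changing EEG patterns once feedback begins. The feature values, update rule, and class labels below are assumptions made for illustration; they are not taken from Vidaurre's thesis.

```python
# Minimal sketch of an online adaptive classifier for a two-class BCI.
# Hypothetical features and a simple least-mean-squares update; this
# illustrates the general idea, not the actual algorithm from the thesis.

import numpy as np

class OnlineAdaptiveClassifier:
    def __init__(self, n_features, learning_rate=0.05):
        self.w = np.zeros(n_features)   # weight vector
        self.b = 0.0                    # bias
        self.lr = learning_rate

    def predict(self, x):
        """Return +1 or -1, e.g., 'imagined left-hand' vs. 'imagined right-hand' movement."""
        return 1 if np.dot(self.w, x) + self.b >= 0 else -1

    def update(self, x, label):
        """After each trial, nudge the decision boundary toward the labeled example."""
        error = label - (np.dot(self.w, x) + self.b)
        self.w += self.lr * error * x
        self.b += self.lr * error

# Simulated session: the user's EEG features drift over time; the classifier
# adapts trial by trial instead of being frozen after an initial calibration.
rng = np.random.default_rng(0)
clf = OnlineAdaptiveClassifier(n_features=4)
for trial in range(200):
    label = rng.choice([-1, 1])
    drift = trial / 200.0                                   # slow nonstationarity
    x = label * np.array([1.0, 0.5, 0.2, 0.1]) + drift + rng.normal(0, 0.3, 4)
    shown_on_screen = clf.predict(x)                        # feedback the user sees
    clf.update(x, label)                                    # classifier adapts
```

The point of the sketch is simply that classifier and user adapt together during feedback sessions, rather than the classifier being fixed after separate calibration sessions.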

Argonne National Laboratory (2007). Flexible electronics could find applications as sensors, artificial muscles. Retrieved 4/10/07: http://www.anl.gov/Media_Center/News/2007/news070402.html. Quoting from the article:

ARGONNE, Ill. (April 2, 2007) — Flexible electronic structures with the potential to bend, expand and manipulate electronic devices are being developed by researchers at the U.S. Department of Energy's Argonne National Laboratory and the University of Illinois at Urbana-Champaign. These flexible structures could find useful applications as sensors and as electronic devices that can be integrated into artificial muscles or biological tissues.

Arner, M. (2006). Flesh and machines. boston's weekly dig. Retrieved 6/11/06: http://www.weeklydig.com/index.cfm/fuseaction/article.view/issueID/36f747b1-8328-42ee-bd4a-9f2e28d83203/articleID/e1a96fb1-9694-478d-b7e8-5e109073651a/nodeID/4b1339d1-be3a-44a2-be8b-1484963a003a.

MIT robotic scientist Rodney Brooks foresees a future mercifully free of robot-inflicted terror.

Brooks was able to thrive during this pre-‘90s lean period, and he did it by revolutionizing the discipline, effectively removing the “intelligence” from it. Rather than trying to create machines that could act like doctors and lawyers, he set out to create machines that could emulate amoebae and insects. These robots’ actions are linked directly to their perceptions—they’re mostly just reacting to the physical world, instead of building and testing theories about it. As a result, all kinds of things began to work. Turns out, this “just reacting” can facilitate some very sophisticated, lifelike behavior. And indeed, a philosophical argument can be made that this is how we work, that even what we believe to be our “conscious intentions” are just the sum of many, many small, dumb local reflexes working together.

In the mid-’90s, Brooks and colleague Colin Angle sent the insect-like Sojourner robot to Mars—where it explored the planet’s surface autonomously, following its own agenda, entirely apart from human control. Over the last decade or so, Brooks has devoted himself to humanoid-type robots, capturing one sense or limb or behavior at a time. With his graduate students, he has created machines that learn, that emote, that crave attention, that pay attention (to human eye movements, human voice intonations, human behavior), that are social, that are helpful—robots that take you by surprise.

Carnegie Mellon University (April 2006). CMU CS Machine Learning Group. Accessed 4/4/06: http://www.cs.cmu.edu/Groups/ml/ml.html.

What is Machine Learning?

Machine Learning is a scientific field addressing the question "How can we program systems to automatically learn and to improve with experience?" We study learning from many kinds of experience, such as learning to predict which medical patients will respond to which treatments, by analyzing experience captured in databases of online medical records. We also study mobile robots that learn how to successfully navigate based on experience they gather from sensors as they roam their environment, and computer aids for scientific discovery that combine initial scientific hypotheses with new experimental data to automatically produce refined scientific hypotheses that better fit observed data.

To tackle these problems we develop algorithms that discover general conjectures and knowledge from specific data and experience, based on sound statistical and computational principles. We also develop theories of learning processes that characterize the fundamental nature of the computations and experience sufficient for successful learning in machines and in humans.[Quoted from the CMU Website.]
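As a concrete, much-simplified illustration of "learning from experience," the sketch below uses a nearest-neighbor rule on invented patient records to predict whether a new patient will respond to a treatment. The fields and values are hypothetical and are not from the CMU group's work.

```python
# Minimal k-nearest-neighbor sketch of "learning from experience":
# predict whether a new patient will respond to a treatment by looking at
# the most similar past cases. All records are invented for illustration.

import math

# (age, dosage_mg, responded?)  -- hypothetical historical records
records = [
    (34, 50, True), (61, 50, False), (45, 100, True),
    (70, 100, False), (29, 75, True), (55, 75, False),
]

def distance(patient, record):
    return math.sqrt((patient[0] - record[0]) ** 2 + (patient[1] - record[1]) ** 2)

def predict(patient, k=3):
    """Majority vote among the k most similar past patients."""
    nearest = sorted(records, key=lambda r: distance(patient, r))[:k]
    votes = sum(1 for r in nearest if r[2])
    return votes > k / 2

print(predict((40, 80)))   # True: the new patient resembles past responders
```

More experience (more records) changes the predictions, which is the essential sense in which such a system "improves with experience."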

=======================================

Carnegie Mellon University's School of Computer Science is re-christening its Center for Automated Learning and Discovery (CALD) as the Department of Machine Learning to recognize advances in the science of machine learning and its significance to computer vision, speech recognition, and data mining. Tom M. Mitchell, the Fredkin Professor of Artificial Intelligence and Learning, heads the department. It is the first to offer a Ph.D. in the field of machine learning.

The roots of machine learning extend back almost 50 years, when a few researchers began to explore whether it was possible to develop software that could improve its performance by learning from experience. In speech recognition systems, for instance, machine learning has proven to be vital not only for initially training the system to understand the spoken word, but also for customizing each system to respond to the speech patterns of individual users. "The niche where machine learning will be used is growing rapidly as applications grow in complexity and as we develop more accurate learning algorithms," Mitchell said.

"The transition from the Center for Automated Learning and Discovery to the Machine Learning Department recognizes the emergence of machine learning as a rigorous academic discipline," said Randal Bryant, dean of the School of Computer Science. Bryant added that the discipline had "fostered especially strong ties between the School of Computer Science and the Statistics Department, providing computer scientists with more rigorous mathematical tools, and statisticians with new challenges and opportunities." Quoted from: CT News Update: An Online Newsletter from Campus Technology.

Chabrow, Eric (6/19/06). Researchers Teach Computers To See As Humans Do. InformationWeek. Retrieved 6/26/06: http://www.informationweek.com/news/showArticle.jhtml?articleID=189500006. Quoting from the article:

Can Computers be taught to see just like people? Scientists at MIT's Center for Biological and Computational Learning think so.

Researchers are tackling computerized visual recognition by using mathematical models that work the same way our brains process images. This approach is fundamentally different from current visual recognition methods and could result in search tools that can identify people's faces in seconds.

The scientists work with the center's neurophysiologists, who are studying how the brain sorts images, such as how the tiniest part of an image rouses a photoreceptor in the eye and induces neurons to fire in a specific pattern. At MIT and elsewhere, computer scientists are developing mathematical models of the neuron simulation patterns for particular things--cars, faces, and buildings. Eventually, when a computer sees a car, it's hoped the machine will respond by comparing the neural pattern it processes to earlier instances of car viewing, just as humans do.
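A toy version of "comparing the neural pattern it processes to earlier instances" is nearest-pattern matching on activation vectors. The vectors below are invented placeholders, not outputs of the MIT models.

```python
# Toy sketch: label a new "activation pattern" by finding the stored pattern
# from earlier viewings that it most resembles (cosine similarity).
# The stored vectors are invented placeholders.

import numpy as np

stored_patterns = {
    "car":      np.array([0.9, 0.1, 0.3, 0.0]),
    "face":     np.array([0.1, 0.8, 0.1, 0.4]),
    "building": np.array([0.2, 0.1, 0.9, 0.6]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(pattern):
    """Return the category whose stored pattern is most similar to the new one."""
    return max(stored_patterns, key=lambda label: cosine(pattern, stored_patterns[label]))

print(recognize(np.array([0.85, 0.2, 0.25, 0.05])))   # -> "car"
```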

Christadoss, Crissanka (n.d.). Purdue researching mind-reading tech. The Exponent–Purdue's Independent Student Newspaper. Accessed 4/20/06: http://www.purdueexponent.com/index.php/module/Issue/action/Article/article_id/3703. Quoting:

Moving the cursor on a computer by the simple act of thinking may sound like a task that is beyond the technological parameters of today's world.

According to Pedro Irazoqui, an assistant professor in the Weldon School of Biomedical Engineering, Purdue is home to a strong multi-disciplinary approach in this futuristic-sounding technology, called a brain-computer interface.

Irazoqui develops and researches this technology, also referred to as a BCI.

The BCI is an interface between the brain and the computer (or another machine that processes information) where recorded neural signals control the computer. Components of the BCI that Irazoqui develops include electrodes, implantable circuits and a computer.

Cottrell, Garrison (April, 2007). News from the National Science Foundation’s Temporal Dynamics of Learning Center. Retrieved 4/1/07: http://www.brainconnection.com/content/254_1. Quoting from the article:

A better understanding of the role that timing plays in human learning could lead to improved teaching techniques and alter the trajectories of countless human lives.

When you learn the sounds of your language, interact with colleagues and teachers, become proficient at sports or playing a musical instrument, or engage in countless other learning activities, timing plays a critical role in the development of the wiring of your brain cells, in the communication between and within sensory and motor systems, and in the interactions between different regions of your brain. The success or failure of interpersonal communication and social interaction using gestures, facial expressions and verbal language also depend critically on exact timing.

Duffy, Jonathan (January 29, 2006). What happened to the Robot Age? BBC News Magazine. Accessed 1/30/06: http://news.bbc.co.uk/2/hi/uk_news/magazine/4654332.stm. Quoting:

Sony's decision to ditch its Aibo robotic dog, along with its entire robot development team, is a reminder that we are still a long way from the age of automated domestic servants. Architects of the Robot Age have been busy rethinking the future.

ERMIS (6/10/05). "Emotional Intelligence for Computer-Based Characters?" IST Results. Accessed 6/10/05: http://istresults.cordis.lu/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/77083.

The IST-funded ERMIS project yielded insights into linguistic and paralinguistic cues in human speech that were incorporated into a "sensitive artificial listener," a prototype computer character that can realistically express emotions in human-computer communications. Professor Stefanos Kollias of the National Technical University of Athens says the ERMIS researchers extracted emotional language cues from analysis of linguistics in English and Greek speech, paralinguistic features such as emphasis and intonation, and facial expressions. About 400 common-speech features, 20 to 25 of which were selected as the most important emotional cues, were entered into a neural network architecture that integrated all the various linguistic, paralinguistic, and facial communication elements. The analytical results were fed into a system with several on-screen characters that could respond to and replicate the emotional content in speech and facial expressions, and that were programmed to try to make the people they interacted with angry, happy, sad, and bored. The ERMIS project partners are exploring how the results of the ERMIS team's work could be incorporated into their own products: Nokia is looking into the enhancement of its multimedia phones, BT is considering how its call center technologies could benefit, and Eyetronics plans to augment the simulation of facial movements in its virtual characters. Kollias says the four-year HUMAIN (FP6) project was inspired by the ERMIS results.
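The pipeline described above (hundreds of candidate speech features, a couple of dozen selected as the most informative, then fed to a neural network) can be sketched roughly as follows. The scoring rule, network shape, number of classes, and all data are assumptions made for illustration and are not details of the ERMIS system.

```python
# Rough sketch of a feature-selection-plus-classifier pipeline of the kind
# described above: score ~400 candidate features, keep the top 25, and pass
# them through a small neural network that outputs one of four emotion labels.
# Everything here (data, scoring rule, network) is invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
N_FEATURES, N_SELECTED, N_EMOTIONS = 400, 25, 4     # e.g., angry, happy, sad, bored

# Invented training data: rows are utterances, columns are candidate features.
X = rng.normal(size=(300, N_FEATURES))
y = rng.integers(0, N_EMOTIONS, size=300)

# 1. Feature selection: keep the features whose class means differ the most.
class_means = np.stack([X[y == c].mean(axis=0) for c in range(N_EMOTIONS)])
scores = class_means.std(axis=0)
selected = np.argsort(scores)[-N_SELECTED:]

# 2. A small one-hidden-layer network over the selected features
#    (randomly initialized; training is omitted to keep the sketch short).
W1 = rng.normal(scale=0.1, size=(N_SELECTED, 16))
W2 = rng.normal(scale=0.1, size=(16, N_EMOTIONS))

def predict_emotion(utterance_features):
    hidden = np.tanh(utterance_features[selected] @ W1)
    return int(np.argmax(hidden @ W2))              # index of the predicted emotion

print(predict_emotion(X[0]))
```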

Gerber, Cheryl (March 13, 2006). Found in translation. Military Information Technology. Accessed 3/20/06: http://www.military-information-technology.com/article.cfm?DocID=1350.

Spurred by the military and intelligence communities’ growing need to translate and retrieve pertinent foreign-language intelligence, the Defense Advanced Research Projects Agency has launched a program aimed at improving automated, searchable translations.

Gomes, Lee (1/10/07). After years of effort, voice recognition is starting to work. The Wall Street Journal Online. Retrieved 1/10/07: http://online.wsj.com/public/article/SB116839144214572104-wEjWHBpFggWzlsUSjwbGzZxF8II_20070209.html?mod=tff_main_tff_top.

The article provides a nice summary of current successful applications.

Halpern, Mark (2006). The trouble with the Turing Test. The New Atlantis: A Journal of Technology and Society. Number 11, Winter 2006, pp. 42-63. Accessed 2/0/06: http://www.thenewatlantis.com/archive/11/halpern.htm.

  • This article is an abridged version of a more detailed and fully documented paper that can be found on his website, www.rules-of-the-game.com.
  • Turing’s thought experiment was simple and powerful, but problematic from the start. Turing does not argue for the premise that the ability to convince an unspecified number of observers, of unspecified qualifications, for some unspecified length of time, and on an unspecified number of occasions, would justify the conclusion that the computer was thinking—he simply asserts it. Some of his defenders have tried to supply the underpinning that Turing himself apparently thought unnecessary by arguing that the Test merely asks us to judge the unseen entity in the same way we regularly judge our fellow humans: if they answer our questions in a reasonable way, we say they’re thinking. Why not apply the same criterion to other, non-human entities that might also think?

Joy, Bill (April 2000). Why the future doesn't need us. Wired Magazine. Retrieved 6/19/06: http://www.wired.com/wired/archive/8.04/joy.html?pg=1&topic=&topic_set=.

From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.

Ray and I were both speakers at George Gilder's Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.

I had missed Ray's talk and the subsequent panel that Ray and John had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious.

While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.

Knapp, Susan (6/19/06). Artificial Intelligence turns 50. Retrieved 6/21/06: http://www.dartmouth.edu/~news/releases/2006/06/19.html.

"When I'm asked whether computers will ever really mimic humans, I say, yes and no," says Dartmouth philosophy professor James Moor, director of AI@50, a conference this summer at Dartmouth commemorating the golden anniversary of the field of artificial intelligence. "Yes, neural net computers are being built that operate somewhat analogously to the brain; and no, humans are biological creatures with emotions, feelings, and creativity that are unlikely to be fully captured by machines, at least for the foreseeable future."

The field of AI has its roots at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. In those early days, says Moor, researchers wanted to make machines more cognizant and to lay out a framework to better understand human intelligence. Today, according to Moor, these remain goals for AI, but AI has become more focused on specific aspects of intelligence, such as learning, reasoning, vision, and action.

MacDonald, G. Jeffrey (May 22, 2006). Kits let kids add science, engineering, math to art explorations. boston.com News. Retrieved 5/23/06: http://www.boston.com/news/education/k_12/articles/2006/05/22/kits_let_kids_add_science_engineering_math_to_art_explorations/. Quoting from the article:

During two decades of designing high-tech tools to encourage children's creativity, Mitchel Resnick has found robots disappointing in one respect: They rarely appeal to girls or to kids unexcited by science.

''Lots of kids like to play with robots, but not all kids," says Resnick, an associate professor of learning research at the Massachusetts Institute of Technology.

That observation got him and fellow researchers in MIT's Lifelong Kindergarten project to imagine ''something with an artistic twist that would engage a wider range of kids than just classical robots."

Their brainchild makes its debut today as the Playful Invention Co. begins taking orders online for its new Cricket kits, which are designed to build on kids' interest in art and music while bringing science, math, and engineering into their artistic exploration. For $250, children get a box of tools that will allow them to customize countless creations that sing, move, and flicker in response to changes in their environment.

Neurobiology of Aging Information Center (n.d.). Accessed 10/23/05: http://www.infoaging.org/b-neuro-1-what.html. Quoting from the Website:

Cognition refers to mental processes used for perceiving, remembering, and thinking. Most studies show that, in general, cognitive abilities are the greatest when people are in their 30s and 40s. Cognitive abilities stay about the same until the late 50s or early 60s, at which point they begin to decline, but to only a small degree. The effects of cognitive changes are usually not noticed until the 70s and beyond. These statements are based on data from studies where averages were calculated for each age group. Within each age group, however, there are wide variations in cognitive ability. The information presented here represents general findings about age-related cognitive change. They do not necessarily happen to everyone.

One study of intelligence over a lifetime found that by the age of 81, only 30-40% of study participants had a significant decline in mental ability. Two-thirds of people at this age had only a small amount of decline. And only certain cognitive abilities decline, while others may improve.

NSF (1 September 2005). Man Against Machine: Computer-generated method outperforms human-designed program for fingerprint improvement. Accessed 9/7/05: http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=104378&org=NSF.

Olsen, Stefanie (5/11/06). This is your brain on a microchip. CNET News.com. Retrieved 5/12/06: http://news.com.com/This+is+your+brain+on+a+microchip/2100-11395_3-6071404.html. Quoting from the article:

James Albus, a senior fellow and founder of the Intelligent Systems Division of the National Institute of Standards and Technology, made the most convincing case for why the era of "engineering the mind" is here. He also proposed a national program for developing a scientific theory of the mind.

"We are at a tipping point...analogous to where nuclear physics was in 1905. The technology is emerging to conduct definitive experiments. The neurosciences have developed a good idea of computation and representation of the brain," he said Wednesday at the two-day gathering.

He laid out several specific projects and figures. For example, computational power is advancing. The human brain produces between 10^13 (10 to the 13th power) and 10^16 operations per second, emitting 100 watts of energy while at rest. The human brain is incredibly efficient, too: The brain takes about 20 percent of the body's oxygen to perform at that rate.

Today's supercomputer, such as IBM's Blue Gene, processes about 10^14 operations per second, but with six orders of magnitude more wattage.
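Taking the figures as quoted (about 10^16 operations per second at roughly 100 watts for the brain, versus about 10^14 operations per second at roughly six orders of magnitude more power for a supercomputer such as Blue Gene), a little arithmetic makes the efficiency gap explicit:

```python
# Back-of-the-envelope comparison using the figures quoted above.
brain_ops_per_sec = 1e16            # upper estimate quoted for the brain
brain_watts = 100
computer_ops_per_sec = 1e14         # Blue Gene figure quoted in the article
computer_watts = 100 * 1e6          # "six orders of magnitude more wattage"

brain_ops_per_watt = brain_ops_per_sec / brain_watts            # 1e14
computer_ops_per_watt = computer_ops_per_sec / computer_watts   # 1e6

print(brain_ops_per_watt / computer_ops_per_watt)   # ~1e8: roughly a hundred-million-fold gap
```

Using the lower brain estimate of 10^13 operations per second, the gap is still about five orders of magnitude.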

Posner, Michael (2004). Neural systems and individual differences. TCRecord. Accessed 8/3/05: http://www.tcrecord.org/Content.asp?ContentID=11663.

In this article, Posner argues that perhaps Attention should be one of the Multiple Intelligences in Howard Gardner's list. He analyzes and supports the MI work of Gardner in terms of modern imaging contributions. Quoting from the article:

One of the major contributions of Howard Gardner’s (1983) book Frames of Mind was an important link between two major approaches to psychology, which were then and for the most part still are quite separate. First was an approach to the common mental processes and behavior of human beings, and second was the psychometrics of individual differences implicit in the term intelligences. Gardner’s effort to embed the measurement of individual difference in intelligence within a theory based on neuropsychology was of note for psychology independent from its application to education and other domains. This aspect of Frames of Mind has been underappreciated, perhaps because the two approaches continued along their separate way in the years following the book. However, it may be time to salute Gardner by renewing his effort to forge a deeper connection between cognitive psychology and psychometrics. Current studies in cognitive neuroscience may have potential for accomplishing this goal and could also provide some new approaches to research on education.

Quotations from http://www.aaai.org/AITopics/html/quotes.html.

Feigenbaum, Edward; McCorduck, Pamela; and Nii, H. Penny -- Consider how much more valuable than data is the company's knowledge. In some cases it's unique expertise. Will the standard methods for protection suffice? . . . Who owns the knowledge, anyway? . . . Who gets to hold the copyright on an expert's lifetime of experience in performing his niche task? From The Rise of the Expert Company, 1988. New York: Times Books/Random House, Inc.

Feigenbaum, Edward; McCorduck, Pamela; and Nii, H. Penny -- Today's expert systems deal with domains of narrow specialization. . .For expert systems to perform competently over a broad range of tasks, they will have to be given very much more knowledge. ... The next generation of expert systems ... will require large knowledge bases. How will we get them? From The Rise of the Expert Company, 1988. New York: Times Books/Random House, Inc.

Feigenbaum, Edward; McCorduck, Pamela; and Nii, H. Penny -- The user of the library of the future need not be a person. It may be another knowledge system -- that is, any intelligent agent with a need for knowledge. Such a library will be a network of knowledge systems, in which people and machines collaborate. 1988. From The Rise of the Expert Company, p. 257. New York: Times Books/Random House, Inc.

Minsky, Marvin -- In the 1960s and 1970s, students frequently asked, "Which kind of representation is best?" and I usually replied that we'd need more research. . .But now I would reply: To solve really hard problems, we'll have to use several different representations. This is because each particular kind of data structure has its own virtues and deficiencies, and none by itself would seem adequate for all the different functions involved with what we call common sense. From Logical vs. Analogical . . .AI Magazine 12.2

Newell, Allen -- From where I stand, it is easy to see the science lurking in robotics. It lies in the welding of intelligence to energy. That is, it lies in intelligent perception and intelligent control of motion. From The Scientific Relevance of Robotics (Remarks at the Dedication of the CMU Robotics Institute). AI Magazine 2(1): 24-26, 34 (Winter 1980).

Artificial intelligence is used to solve complex problems that are:
• Usually resolved by an expert
  • Not amenable to straightforward solution by numerical computation; or, if they might theoretically be solved numerically, the computations would take an impractically long time and/or require an impractical amount of computational resources
  • Usually solved by people using rules of thumb (heuristics) that work most of the time but with no guarantees (a short sketch contrasting heuristic and exhaustive approaches follows this list)
• Ill-defined
• Related to situations that constantly change over time (i.e., are dynamic), so that a better solution is likely to be made by someone (or some software) that can take the changes into account as they happen, rather than set up rules for decision making in advance by trying to anticipate what changes may happen
• Not readily solvable by breaking the problem into interacting sub-problems
• Highly dependent on the context within which the problem occurs in terms of determining an adequate solution
Derek Partridge (1998) http://www.stottlerhenke.com/ai_general/quotations.htm
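To make the "rules of thumb" point concrete, the sketch below (with invented city coordinates) contrasts exhaustive route planning, which guarantees the shortest route but grows factorially with the number of stops, against a greedy nearest-neighbor heuristic, which is fast and usually good but carries no guarantee:

```python
# Exhaustive search vs. a rule-of-thumb heuristic for planning a short route.
# City coordinates are invented for illustration.

from itertools import permutations

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def length(route):
    pts = [cities[c] for c in route]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def exhaustive(start="A"):
    """Try every ordering: guaranteed shortest, but factorial in the number of cities."""
    rest = [c for c in cities if c != start]
    return min(([start] + list(p) for p in permutations(rest)), key=length)

def nearest_neighbor(start="A"):
    """Heuristic: always visit the closest unvisited city. Fast, usually good, no guarantee."""
    route, remaining = [start], set(cities) - {start}
    while remaining:
        nxt = min(remaining, key=lambda c: length([route[-1], c]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(exhaustive(), nearest_neighbor())
```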

I think it is getting increasingly difficult to draw a circle around it (artificial intelligence). Like everybody else, I have started a company and as I go out into the real world, the scales fall from my eyes. One of these scales has been the belief that AI could be sold to anybody by itself. It really must be blended with other more standard technology to be useful. The new enterprise of AI is to combine with people to produce something that neither can produce alone. It means your programs don't even have to be really smart. If all you do is save a $200 million blunder once in a while by asking somebody to look at something, that's good enough to be very important. I think we are going to enter into a new era with respect to applications of AI that's quite different from the 1980s. This was the age where expert systems were replacing people, whereas the 1990s will be the age of what we could call "raisin bread systems" for making people smarter. AI is now embedded in systems like raisins in raisin bread. It doesn't have to occupy much volume and may carry a lot of the nutrition. You can't have the raisin bread without the raisins, and there can be different kinds of raisins. That's the way I think the 1990s will benefit from AI: raisin bread systems for making people smarter.
Patrick Winston, director of MIT's AI Laboratory, 1991 quoted by Daniel Crevier, 
"The Tumultuous History of the Search for Artificial Intelligence," 1993

Red Herring (5/12/06). IBM's Dharmendra Modha. Retrieved 5/12/06: http://www.redherring.com/Article.aspx?a=16764&hed=Q%26amp%3BA%3A+IBM’s+Dharmendra+Modha.

This interview fits in well with a Colloquium presentation I attended a few weeks ago, where the speaker talked about biological computing and reverse engineering the algorithms that a brain uses. Quoting from the article:

"Q: Why use the term “cognitive computing” rather than the better-known “artificial intelligence”?
A: The rough idea is to use the brain as a metaphor for the computer. The mind is a collection of cognitive processes—perception, language, memory, and eventually intelligence and consciousness. The mind arises from the brain. The brain is a machine—it’s biological hardware.

Q: Are programs or algorithms that, for example, measure feelings and thoughts similar to this?
A: No. Cognitive computing is less about engineering the mind than it is the reverse engineering of the brain. We’d like to get close to the algorithm that the human brain [itself has]. If a program is not biologically feasible, it’s not consistent with the brain.

Q: How will you achieve this?
A: We’re interested in neurology and psychology. … We hope to emulate mathematical and computational [processes]."

Schiff, Debra (06/19/06). Research: spatial abilities key to engineering. EE Times. Retrieved 6/26/06: http://www.eetimes.com/news/design/showArticle.jhtml?articleID=189401733. Quoting the article:

There is clear evidence that men perform better at spatial tasks and women outpace men on tests of verbal usage and perceptual speed, according to research conducted by Wendy Johnson, postdoctoral research fellow at the University of Minnesota, and Thomas Bouchard, director of the Minnesota Center for Twin and Adoption Research. The findings, which will be published in the journal Intelligence, indicate that there is little difference in how the genders fare as far as general intelligence, however. But since engineering positions are overwhelmingly filled by men, this further supports the theory that spatial abilities are key to success in the field.

Simonite, Tom (8/31/06). Crossword software thrashes human challengers. Retrieved 9/11/06: http://www.newscientisttech.com/article/dn9888-crossword-software-thrashes-human-challengers.html. Quoting from the article:

A crossword-solving computer program yesterday triumphed in a competition against humans. Two versions of the program, called WebCrow, finished first and second in a competition that gave bilingual entrants 90 minutes to work on five different crosswords in Italian and English.

Skillings, Jonathan (7/3/06). Newsmaker: Getting machines to think like us. cnet News. Retrieved 7/8/06: http://news.com.com/Getting+machines+to+think+like+us/2008-11394_3-6090207.html.

In the summer of 2006, a conference will celebrate the first AI conference, held in 1956. John McCarthy, who organized that first conference and is credited with coining the term "artificial intelligence," will be attending. This article is an interview with McCarthy. Quoting from the article:

You're credited with coining the term "artificial intelligence" just in time for the 1956 conference. Were you just putting a name to existing ideas, or was it something new that was in the air at that time?

McCarthy: Well, I came up with the name when I had to write the proposal to get research support for the conference from the Rockefeller Foundation. And to tell you the truth, the reason for the name is, I was thinking about the participants rather than the funder.

What's needed is to figure out good ways of constructing new ideas from old ones.

Claude Shannon and I had done this book called "Automata Studies," and I had felt that not enough of the papers that were submitted to it were about artificial intelligence, so I thought I would try to think of some name that would nail the flag to the mast.

Would you elaborate on that--on nonmonotonic reasoning?

McCarthy: OK. In ordinary logical deduction, if you, say, have a sentence P that is deducible from a collection of sentences--call it A--and we have another collection of sentences B, which includes all the sentences of A, then it will still be deducible from B because the same proof will work. However, humans do reasoning in which that is not the case. Suppose I said, "Yes, I will be home at 11 o'clock, but I won't be able to take your call." Then the first part, "I will be home at 11 o'clock,"--you would conclude that I could take your call, but then if I added the "but" phrase, then you would not draw that conclusion.

So nonmonotonic reasoning is where you draw a conclusion, which may be a correct conclusion to draw, but it isn't guaranteed to be true because some added facts may prevent it. Now, that was around 1980, or a little bit before, that formalizing nonmonotonic reasoning began, and it's turned into a fairly big field now.
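McCarthy's point, that adding premises can retract a conclusion, can be shown with a tiny sketch of default reasoning. The rule below is invented for illustration; it is not McCarthy's circumscription formalism or any other full nonmonotonic logic.

```python
# Tiny illustration of nonmonotonic (default) reasoning: a conclusion drawn
# from a set of facts A can be withdrawn when the larger set B adds a fact
# that defeats it. In ordinary (monotonic) deduction that cannot happen.

def can_take_call(facts):
    """Default rule: if the person will be home, conclude they can take the call,
    unless the facts explicitly say otherwise."""
    return "home_at_11" in facts and "cannot_take_call" not in facts

A = {"home_at_11"}
B = A | {"cannot_take_call"}      # B contains every sentence of A, plus one more

print(can_take_call(A))   # True  -- conclusion drawn from A
print(can_take_call(B))   # False -- the same conclusion is retracted given B
```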

What would be the biggest achievements in the last 50 years? Or how much of the original goals were accomplished?

McCarthy: Well, we don't have human-level intelligence. However, I would say driving the car 128 miles shows a considerable advance. (Editors' note: In last fall's DARPA Grand Challenge, the winning vehicle--Stanford's robotic car, "Stanley"--drove itself 131.6 miles across the Mojave Desert.)

Stix, Gary (March 2006). The elusive goal of machine translation. Scientific American.com. Accessed 3/18/06: http://www.sciam.com/article.cfm?chanID=sa006&colID=1&articleID=0004E1DF-E490-13F5-A49083414B7F011E
Summary: Software developers contend that machine translation (MT) is starting to approach human-level performance thanks to brute-force computing techniques. Slow progress in this area since the first MT experiments in the 1950s led to a scarcity of funding and enthusiasm, while Systrans, the largest MT company currently in existence, saw only $13 million in annual revenue for 2004 because of the shortcomings of its rules-based system. Such systems require language specialists and linguists in specific dialects to arduously produce large lexicons and rules relating to semantics, grammar, and syntax. Statistical MT uses brute-force calculation to crunch through existing translated documents to ascertain the probability that a word or phrase in one language corresponds to another. Using statistics to gauge how frequently and where words occur in a given phrase in both languages provides a word reordering template for the translation model. A language model uses its own statistical analysis of English-only texts to predict the most likely word and phrase ordering for the already-translated text; thus, the probability that a phrase is correct directly reflects how often it occurs in the text. The differences between statistical MT and rules-based MT are fading slightly as statistical MT researchers have begun to employ methods that account for syntax, and that eliminate the intercession of linguists. Nevertheless, "The use of statistical techniques, coupled with fast processors and large, fast memory, will certainly mean we will see better and better translation systems that work tolerably well in many situations, but fluent translation, as a human expert can do, is...not achievable," says Keith Devlin of Stanford University's Center for the Study of Language and Information.
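The statistical approach summarized above can be boiled down to the standard noisy-channel formulation: choose the English sentence e that maximizes P(e) * P(f | e), where the translation model P(f | e) is estimated from aligned bilingual text and the language model P(e) from English-only text. The candidate sentences and probabilities below are invented toy values, not the output of any real system.

```python
# Toy noisy-channel decoder: pick the English candidate e that maximizes
# P(e) * P(f | e). All probabilities are invented; real systems estimate
# them from millions of aligned and monolingual sentences.

foreign = "la casa blanca"

candidates = {
    # english candidate: (language-model P(e), translation-model P(f | e))
    "the white house": (0.020, 0.30),
    "the house white": (0.001, 0.35),   # literal word order: decent P(f|e), poor P(e)
    "a white home":    (0.015, 0.10),
}

def best_translation(cands):
    return max(cands, key=lambda e: cands[e][0] * cands[e][1])

print(best_translation(candidates))   # -> "the white house"
```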

Williams, Mark (May 30, 2007). Better face-recognition software: Computers outperform humans at recognizing faces in recent tests. Technology Review. Retrieved 6/2/07: http://www.technologyreview.com/Infotech/18796/. Quoting from the article:

For scientists and engineers involved with face-recognition technology, the recently released results of the Face Recognition Grand Challenge--more fully, the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006--have been a quiet triumph. Sponsored by the National Institute of Standards and Technology (NIST), the matchup of face-recognition algorithms showed that machine recognition of human individuals has improved tenfold since 2002 and a hundredfold since 1995. Indeed, the best face-recognition algorithms now perform more accurately than most humans can manage. Overall, facial-recognition technology is advancing rapidly.

Winterstein, Daniel (8/10/05). Searching for Intelligence in Edinburgh. The Register. http://www.theregister.co.uk/2005/08/10/edingburgh_artificial_intelligence_conference/. Quoting from the article:

Last week the top researchers in Artificial Intelligence (AI) gathered in Edinburgh to analyse the state of their subject. The topics under discussion ranged from robotic exoskeletons, to what tool-using crows can teach us about our own brains. Impressive results were reported in several fields, with previously intractable problems dropping like flies. Yet true machine intelligence seems as much of a dream as ever.

Over a thousand scientists came from around the world to attend the prestigious week-long International Joint Conference in AI (IJCAI). They were an eclectic mix of computer scientists, mathematicians, and psychologists, plus a few philosophers.

The mainstream AI community is focused on specific technical problems and applications. It is an approach which has been very successful. By contrast, attempts to solve the 'big problems' of intelligence have typically sunk without a trace. However, it may now be time to return to the bigger picture. Aaron Sloman of Birmingham University is launching an ambitious new project. Called CoSy, it will use a substantial fraction of the EU's research budget (a cool €7m) to address the bigger questions of general reasoning and meaning. The result will be a series of new robots that try to tie together the different strands of AI into one coherent system.
Professor Sloman is realistic about the likely outcome: "We don't promise any results. We assume that [human-like thinking] is far beyond the current state of the art and will remain so for many years. But we are asking important questions."

Also, by the same author as above, see http://www.theregister.co.uk/2005/08/16/bioinformatics_2005_report/. This article includes an example of a cancer medical test discovered by AI.

Wertheim, Margaret (2005?). I Think, Therefore I Am — Sorta: The belief system of a virtual mind. Accessed 7/26/05: http://www.laweekly.com/ink/05/35/quark-wertheim.php.

PsychSim, a virtual reality artificial intelligence technology, is helping train the U.S. military as it crafts real-life scenarios and thrusts its trainees into the middle of them, forcing them to interact with simulations, known as agents, endowed with human intelligence. Stacy Marsella, one of PsychSim's chief architects and a project leader at USC's Information Sciences Institute, envisions an expansive role for AI-powered agents in the future, claiming that, over time, they will become an integral part of our world and be able to interact seamlessly with humans on a complex level. Marsella is also involved in an agent-based project in which a virtual therapist counsels parents of children with cancer, and in simulations that could treat people afflicted with phobias and Post-Traumatic Stress Disorder, both exploring the potential to create human thoughts and emotions through technology.