AAAI-11 Tutorial Forum
The Tutorial Forum of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI-11) will be held August 7–8, 2011 in San Francisco. The Tutorial Forum provides an opportunity for junior and senior researchers to spend two days each year freely exploring exciting advances in disciplines outside their normal focus. We believe this type of forum is essential for the cross-fertilization, cohesiveness, and vitality of the AI field. We all have a lot to learn from each other; the Tutorial Forum promotes the continuing education of each member of the AAAI.
To encourage full participation by technical conference registrants, no separate fee will be charged for admittance to the Tutorial Forum in 2011.
Sunday, August 7, 9:00 AM – 1:00 PM
Machine Learning in Time Series Databases (and Everything Is a Time Series!)
Eamonn Keogh
Security Games
Chris Kiekintveld, Nicola Gatti, and Manish Jain
Discourse Structure: Theory and Practice
Bonnie Webber, Markus Egg, and Valia Kordoni
Trust Theory: A Socio-Cognitive and Computational Model
Cristiano Castelfranchi and Rino Falcone
This tutorial has been cancelled.
Sunday, August 7, 2:00 PM – 6:00 PM
Event Processing – State of the Art and Research Challenges
Opher Etzion and Yagil Engel
Human Computation: Core Research Questions and State of the Art
Luis von Ahn and Edith Law
Large-Scale Data Processing with MapReduce
Jimmy Lin
Recognizing Behavior in a Spatio-Temporal Context
Hans W. Guesgen, Mehul Bhatt, and Stephen Marsland
Monday, August 8, 9:00 AM – 1:00 PM
Collective Intelligence
Haym Hirsh
Discourse Models for Generating Optimized User Interfaces: Theory from AI and Application in HCI
Hermann Kaindl
From Structured Prediction to Inverse Reinforcement Learning
Hal Daume III
Opinion Mining and Sentiment Analysis
Bing Liu
Monday, August 8, 2:00 PM – 6:00 PM
Algorithms for Classical Planning
Jussi Rintanen
Conformal Predictions for Reliable Machine Learning: Theory and Applications
Vineeth N. Balasubramanian and Shen-Shyang Ho
Information Organization and Retrieval with Collaboratively Generated Content
Eugene Agichtein and Evgeniy Gabrilovich
Philosophy as AI and AI as Philosophy
Aaron Sloman
SA1 Machine Learning in Time Series Databases (and Everything Is a Time Series!)
Eamonn Keogh
Time series and multimedia data are ubiquitous; large volumes of such data are routinely created in scientific, industrial, entertainment, medical and biological domains. Examples include gene expression data, medical imagery, electrocardiograms, electroencephalograms, gait analysis, stock market quotes, space telemetry, microarrays, zoology, and others. Furthermore, many kinds of data that are not true time series can be fruitfully transformed into pseudo-time series, including text, DNA, shapes, video, and others.
To deal with such data we must carefully choose algorithms and data representations. While most representations used in the past have been real valued (wavelet and Fourier methods), in this tutorial I advocate using discrete (symbolic) representations of the data. Symbolic representations allow us to use very useful algorithms and data structures that are not available for real-valued data, for example suffix trees, hashing, and Markov models.
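As a concrete illustration of the kind of symbolic representation advocated above (a minimal sketch in the style of SAX, not code from the tutorial; the function name, segment count, and four-letter alphabet are illustrative assumptions), a real-valued series can be discretized as follows:

```python
import numpy as np

# Breakpoints that divide the standard normal N(0,1) into four
# equiprobable regions, giving a four-symbol alphabet.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])
ALPHABET = "abcd"

def symbolize(series, n_segments=8):
    """SAX-style discretization: z-normalize the series, reduce it with
    piecewise aggregate approximation (PAA), then map each segment mean
    to a symbol using the equiprobable breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalize
    means = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    return "".join(ALPHABET[i] for i in np.searchsorted(BREAKPOINTS, means))

# A steady upward ramp maps to low symbols first, then high ones.
word = symbolize(np.linspace(0, 1, 64))  # → "aabbccdd"
```

Once the data is a string of symbols, standard discrete-sequence machinery (suffix trees, hashing, Markov models) applies directly.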
The tutorial will be illustrated with numerous real world examples created just for this tutorial, including examples from archeology (petroglyphs and projectile points), microscopy (nematodes and blood cells), historical manuscripts, zoology, motion capture and biometrics. The data mining tasks considered include indexing, classification, clustering, novelty discovery, motif discovery and visualization.
Eamonn Keogh's research interests are in data mining, machine learning, and information retrieval, especially for time series data. He is one of the ten most prolific authors in the ACM SIGKDD, IEEE ICDM, and SIAM SDM conferences, and his time series papers have been cited thousands of times and are used in research and commercial efforts worldwide.
SA2 Security Games
Chris Kiekintveld, Nicola Gatti, and Manish Jain
Game theory is an increasingly important paradigm for modeling and decision-making in security domains, including homeland security resource allocation decisions, robot patrolling strategies, and computer network security. Several deployed real-world systems use game theory to randomize critical security decisions to prevent terrorist adversaries from exploiting a predictable security schedule. The ARMOR system deployed at Los Angeles International Airport (LAX) and the IRIS system deployed by the Federal Air Marshals Service were first presented at the AAMAS conference.
This tutorial will introduce a wide variety of game-theoretic modeling techniques and algorithms that have been developed in recent years for security problems. Introductory material on game theory and mathematical programming (optimization) will be included in the tutorial, so there is no prerequisite knowledge for participants. After introducing the basic security game framework, we will describe algorithms for scaling to very large games, methods for modeling uncertainty and attacker observation capabilities in security games, and applications of these techniques for randomized resource allocation and patrolling problems. At the end we will highlight the many opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.
Christopher Kiekintveld is currently an assistant professor at the University of Texas at El Paso, USA. He has done extensive work recently in applications of game theory for Homeland Security, including work on deployed software tools for the Federal Air Marshals Service and Transportation Security Administration. He has coauthored several papers on the subject of security games that have been presented at previous AAMAS conferences, in both the main track and industry track (including the winner of the 2009 industry track best paper award). He has given numerous presentations at conferences, workshops, seminars, and guest lectures for courses, including many on the topic of security games. He is currently teaching an undergraduate course on data structures and algorithms, and was previously a teaching assistant in three different courses.
Nicola Gatti is an assistant professor with tenure at the Politecnico di Milano, Italy. His research topics are artificial intelligence, multiagent systems, and specifically algorithmic game theory. He currently lectures on informatics systems (undergraduate course) and algorithmic game theory (PhD course), and assists with the graduate courses on artificial intelligence and on autonomous agents and multiagent systems in computer engineering at the Politecnico di Milano. He has coauthored more than ten papers on the subject of security games that have been presented at several conferences (for example, AAAI, AAMAS, IAT, GAMESEC), receiving best paper awards at IAT and ICUMT.
Manish Jain is a PhD candidate at the University of Southern California, USA, and a member of the Teamcore research group led by Milind Tambe. His work is on applications of game-theoretic and large-scale optimization techniques for homeland security, including deployed software tools for the Federal Air Marshals Service and the Los Angeles World Airports police. He has coauthored papers on the subject of security games presented at the AAMAS and AAAI conferences, in both the main track and the industry track, and has given presentations at the AAMAS, AAAI, and IJCAI conferences as well as several other workshops. He has been a teaching assistant for courses on multiagent systems at both the graduate and the undergraduate level. His work published in the operations research journal Interfaces was recently a finalist for the EURO Excellence in Practice Award.
SA3 Discourse Structure: Theory and Practice
Bonnie Webber, Markus Egg, and Valia Kordoni
Understanding natural language is a long-standing goal of AI. Since language is more than just individual sentences, achieving this goal means confronting discourse and its structure. But doing so can also improve system performance on tasks such as information extraction, summarization, essay grading, and opinion mining.
This tutorial presents aspects of discourse structure and the improvements they can bring to system performance. The tutorial eschews a monolithic view of discourse structure in favor of an integrated approach to topic modeling and segmentation, genre-specific discourse segmentation, entity-based structure, structure based on discourse relations, and hierarchical structure.
Part 1 of the tutorial presents complementary bases for organizing and structuring discourse, along with their formal properties, to demonstrate the challenges of discourse modeling. Since an integrated, multifaceted approach to discourse structure can use different algorithms to recognize different structures, Part 2 describes algorithms in current use and resources for their training and/or evaluation. The possibility of joint modeling, addressing multiple aspects of discourse structure simultaneously, will emerge as a possible way forward. Part 3 outlines established and novel uses of discourse structure in the tasks noted above. We close with a list of open problems facing attempts to use discourse structure more widely or, more ambitiously, to achieve AI's Holy Grail of understanding natural language.
Bonnie Webber is a professor in the School of Informatics, Edinburgh University. She is best known for work on question answering (starting with LUNAR in the early 1970s) and discourse phenomena (starting with her PhD thesis on discourse anaphora). She has also carried out research on animation from instructions, medical decision support systems, and biomedical text processing.
Markus Egg is a professor of linguistics at the Department of English and American Studies of the Humboldt University in Berlin. His main areas of interest are syntax, semantics, pragmatics, and discourse; the interfaces between them; and their implementation in NLP systems.
Valia Kordoni is a senior researcher at the Language Technology Lab of the German Research Centre for Artificial Intelligence (DFKI GmbH) and an assistant professor at the Department of Computational Linguistics of Saarland University. Her main areas of interest are syntax, semantics, pragmatics, and discourse. She works on the theoretical development of these areas as well as on their implementation in NLP systems.
SP1 Event Processing – State of the Art and Research Challenges
Opher Etzion and Yagil Engel
The term event processing refers to an approach to software systems that is based on reaction to events, often under time constraints, and that includes specific decision-making logic based on detection of patterns in events as they occur. In comparison to traditional AI methods, event processing avoids comprehensive decision-theoretic analysis in favor of tractability and quick reaction given high volumes of real-time data. Event processing is being used for dynamic operational behavior (for example, real-time algorithmic trading in capital markets), active diagnostics (for example, problem determination in network management), information dissemination (for example, detecting patterns in signals from multiple monitors connected to a patient), and observation (for example, luggage arriving on a cart destined for the wrong flight). The tutorial is intended for the AI community; it is self-contained and does not require any prior knowledge. The audience will gain insight into event processing and into how it relates to various AI subdisciplines. The first part of the talk will detail the current state of the art, the leading architectures, the basic building blocks, and the various programming styles; the second part will be dedicated to research challenges in temporal probabilistic reasoning, semantic correctness, pattern matching, inexact processing, and machine learning.
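To make the pattern-detection idea concrete, here is a minimal, self-contained sketch (illustrative only; the event kinds, window size, and threshold are invented for the example) of a stateful detector that fires when three consecutive price events exceed a threshold:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    value: float

def make_detector(window=3, threshold=100.0):
    """Return a stateful handler that fires when `window` consecutive
    'price' events all exceed `threshold`."""
    recent = deque(maxlen=window)   # sliding window of recent prices
    def on_event(event):
        if event.kind != "price":   # ignore events of other kinds
            return False
        recent.append(event.value)
        return len(recent) == window and all(v > threshold for v in recent)
    return on_event

detect = make_detector()
stream = [Event("price", 101), Event("volume", 5),
          Event("price", 102), Event("price", 103)]
alerts = [detect(e) for e in stream]
# alerts → [False, False, False, True]
```

Real event processing engines add time constraints, partitioning, and richer pattern languages on top of this basic reactive loop.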
Opher Etzion is an IBM senior technical staff member, chair of the Event Processing Technical Society, an ACM distinguished speaker, and an adjunct senior teaching fellow at the Technion. He is one of the pioneers of the event processing area, coauthor of the first technical book in the area, and has published more than 70 papers on related topics.
Yagil Engel is a research scientist at IBM Research Lab – Haifa, and an adjunct lecturer at the Technion – Israel Institute of Technology. He joined IBM after two years of postdoctoral research at the Technion in the areas of planning and decision theory in artificial intelligence. He received his PhD in computer science from the University of Michigan, where his thesis work was in the area of graphical models for decision making, in the context of business-to-business electronic commerce and procurement problems. Before graduate studies he worked in the software industry, mainly in the area of electronic commerce. His current research activities combine operational decision making, probabilistic reasoning, and decision-theoretic planning.
SP2 Human Computation: Core Research Questions and State of the Art
Luis von Ahn and Edith Law
Human computation is the study of systems where humans perform a major part of the computation or are an integral part of the overall computational system. With the growth of the web, human computation systems, for example, Games With A Purpose (the ESP Game), crowdsourcing marketplaces (Amazon Mechanical Turk, oDesk), and identity verification tasks (reCAPTCHA), can now leverage the abilities of an unprecedented number of people to solve complex problems that are beyond the scope of existing AI algorithms.
Our tutorial will highlight the core research questions in human computation, and focus on the design of mechanisms, algorithms and interfaces for tackling each of those questions. The tutorial will be divided into two sections: (1) human computation algorithms (for example, programming paradigms and tools, efficiency, output aggregation, task routing, the role of machine intelligence), and (2) design (for example, games with a purpose, incentives, human computation markets, crowd-driven interfaces). We expect participants to leave with a bird's eye view of the research landscape, as well as some tools to begin their own investigations. More details can be found at the supplemental tutorial site.
Prerequisite Knowledge: Basic knowledge of AI and machine learning.
Edith Law is a Ph.D. candidate at Carnegie Mellon University, who is doing research on human computation systems that harness the joint efforts of machines and humans, particularly in the context of games. She is the co-organizer of HCOMP 2009 and 2011, and the recipient of the Microsoft Graduate Research Fellowship 2009–2011.
Luis von Ahn is a professor at Carnegie Mellon University. He is the recipient of a MacArthur Fellowship, a Packard Fellowship, a Microsoft New Faculty Fellowship, and a Sloan Research Fellowship and has been named one of the 50 best minds in science by Discover Magazine and one of the "brilliant 10" scientists of 2006 by Popular Science Magazine.
SP3 Large-Scale Data Processing with MapReduce
Jimmy Lin
This tutorial provides an introduction to large-scale data processing with MapReduce, focusing in particular on scalability and the tradeoffs associated with distributed processing of large datasets. Emphasis will be placed on analysis of large unstructured text collections, although material will touch on management of structured data and large-scale graph algorithms as well. Content will include general discussions about algorithm design (for example, representational issues associated with large event spaces), presentation of illustrative algorithms (for example, iterative optimization algorithms), and case studies in a range of applications.
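As a toy illustration of the programming model described above (a pure-Python simulation, not Hadoop; the function names are our own), here is word count expressed as map, shuffle, and reduce phases:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc_id, text):
    """Mapper: emit a (word, 1) pair for every token in one document."""
    for word in text.lower().split():
        yield (word, 1)

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework does
    between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(word, counts):
    """Reducer: sum the partial counts for one word."""
    return (word, sum(counts))

docs = {1: "the quick brown fox", 2: "the lazy dog and the fox"}
pairs = chain.from_iterable(map_phase(d, t) for d, t in docs.items())
counts = dict(reduce_phase(w, c) for w, c in shuffle(pairs).items())
# counts["the"] == 3, counts["fox"] == 2
```

The scalability tradeoffs the tutorial discusses arise when the mappers, the shuffle, and the reducers are distributed across machines rather than run in one process as here.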
Jimmy Lin is an associate professor in the iSchool at the University of Maryland, College Park. He joined the faculty in 2004 after completing his Ph.D. in EECS at MIT. Lin's research interests lie at the intersection of NLP and IR, with a particular focus on large-scale distributed algorithms. He is currently on sabbatical at Twitter.
SP4 Recognizing Behavior in a Spatio-Temporal Context
Hans W. Guesgen, Mehul Bhatt, and Stephen Marsland
Recognizing human behavior plays a significant role in many applications, ranging from ensuring security in public and private places to monitoring people (for example, those with diminished mental or physical capabilities) and their interactions with systems and artefacts in a smart home, to name just two examples. Recent years have seen significant progress in methods, algorithms, and technologies for behavior recognition, but the task remains challenging and far from solved.
The tutorial will provide an introduction to recent developments in the area of behavior recognition, highlighting the AI techniques most frequently used to achieve the task. It will argue that behavior recognition in isolation is likely to fail, independently of which method is used. Based on this observation, it will demonstrate how considering the spatio-temporal context in which the behavior occurs can boost the performance of the recognition process. The tutorial concludes by highlighting some challenges and opportunities for AI research in this vibrant and emerging area of sociological, scientific, and economic interest. Since the topic integrates many areas of AI, it is appropriate for a general audience.
Hans W. Guesgen holds a chair in computer science at Massey University, New Zealand. He has taught courses at all levels of the computer science curriculum for about 20 years and has published extensively in his area of research, which in particular includes spatio-temporal aspects of ambient intelligence.
Mehul Bhatt is a research fellow at the Cognitive Systems Institute at the University of Bremen, Germany. His research encompasses the areas of spatial and temporal reasoning, commonsense reasoning, cognitive robotics, ontology, and parallel and distributed systems. He has been a recipient of the Alexander von Humboldt Fellowship (Germany), a German Academic Exchange Service (DAAD) Award for Young Scientists and Academics, and an Australian Post-graduate Award. Bhatt has contributed in areas such as architectural design, cognitive robotics, ambient intelligence and smart environments, and medical decision support systems.
Stephen Marsland is an associate professor in the School of Engineering and Advanced Technology at Massey University, New Zealand. His research interests are in diffeomorphism groups and shape spaces, machine learning and smart homes, and complex networks and complexity. He is the author of Machine Learning: An Algorithmic Perspective.
MA1 Collective Intelligence
Haym Hirsh
Collective intelligence refers to our ability to bring people and computing together in ways that exhibit behaviors and achieve outcomes that, collectively, are more intelligent than is possible by people or machines alone. Collective intelligence makes contact with AI in three ways. First, AI practitioners are increasingly using collective intelligence as a routine element in their work, such as to create corpora in computer vision or to evaluate results in information retrieval. Second, AI methods are a key enabler of many examples of collective intelligence, such as mining consumer behaviors and product review sentiments to facilitate product recommendation. Third, collective intelligence offers a provocative phenomenon to consider by those seeking computationally tractable understandings of intelligence. This tutorial will survey the state of the art in collective intelligence from an AI perspective. It will discuss examples of collective intelligence in which people knowingly act to achieve collective outcomes, such as editing Wikipedia articles, identifying astronomical objects, or rating news articles, as well as examples in which collectively intelligent outcomes arise through automated analysis of people's routine activities, as exhibited by Google's PageRank algorithm and Amazon's recommendation system. The tutorial will conclude with a discussion of prospects for the future.
Haym Hirsh is a professor of computer science at Rutgers University and a visiting scholar at MIT's Center for Collective Intelligence. He received his BS in mathematics-computer science from UCLA and his MS and PhD in computer science from Stanford. From 2006 to 2010 he was director of information and intelligent systems at the National Science Foundation.
MA2 Discourse Models for Generating Optimized User Interfaces: Theory from AI and Application in HCI
Hermann Kaindl
This intermediate-level tutorial shows how human-computer interaction can be based on discourse modeling, even without employing speech or natural language. Communicative acts, as abstractions of speech acts, can model, for example, a question or an answer. These are glued together as a so-called adjacency pair. More complex discourse structures can be modeled using relations from Rhetorical Structure Theory (RST). The content of a communicative act can refer to ontologies of the domain of discourse. Taking all this together, we created a new discourse metamodel that specifies what discourse models may look like. Such discourse models can specify an interaction design. This tutorial also sketches how such an interaction design can be used for automated user-interface generation.
We have gained experience with this approach by applying it, for example, to the interaction design and user-interface generation for a semi-autonomous robot (in the form of a shopping cart for a supermarket). For small devices like current smartphones, these user interfaces are ready for use in real-world applications, through optimization based on heuristic search.
In effect, such user interfaces are generated from models underpinned through theories from artificial intelligence (AI), and these theories are applied to human-computer interaction (HCI) and user-interface generation.
There are no prerequisites: no knowledge of AI or HCI in general, nor of heuristic search in particular, is required.
Hermann Kaindl joined the Vienna University of Technology in Vienna, Austria, in early 2003 as a full professor, and he is currently the director of an institute. Prior to moving to academia, he gained more than 24 years of industrial experience. He is an ACM Distinguished Scientist and is on the executive board of the Austrian Society for Artificial Intelligence.
MA3 From Structured Prediction to Inverse Reinforcement Learning
Hal Daume III
Machine learning is all about making predictions; in many AI application domains (language, vision, biology) we see lots of complex rich structure. Structured prediction marries these two. However, structured prediction isn't always enough: sometimes the world throws even more complex data at us, and we need reinforcement learning techniques. This tutorial is all about the how and the why of structured prediction and inverse reinforcement learning (also known as inverse optimal control): participants should walk away comfortable that they could implement many structured prediction and IRL algorithms, and have a sense of which ones might work for which problems.
The first half of the tutorial will cover the basics of structured prediction: the structured perceptron and Magerman's incremental parsing algorithm. It will then build up to more advanced algorithms that are shockingly reminiscent of these simple approaches: maximum margin techniques and search-based structured prediction.
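A minimal sketch of the Collins-style structured perceptron update mentioned above (illustrative only; the simplest possible "structure" here is a single label, and the feature map and names are our assumptions, not the tutorial's code):

```python
import numpy as np

LABELS = (0, 1)

def features(x, y):
    """Joint feature map: copy the input features into the block
    belonging to label y (a one-label 'structure')."""
    f = np.zeros(2 * len(x))
    f[y * len(x):(y + 1) * len(x)] = x
    return f

def predict(x, w):
    """Argmax over structures; ties break toward the first label."""
    return max(LABELS, key=lambda y: w @ features(x, y))

def structured_perceptron(examples, n_features, epochs=10):
    """On each mistake, move the weights toward the gold structure
    and away from the predicted one."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for x, y_gold in examples:
            y_hat = predict(x, w)
            if y_hat != y_gold:
                w += features(x, y_gold) - features(x, y_hat)
    return w

data = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 1)]
w = structured_perceptron(data, n_features=4)
```

For real structured outputs (parse trees, tag sequences), only `predict` changes: the argmax runs over exponentially many structures via dynamic programming or search, while the mistake-driven update stays exactly the same.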
The second half of the tutorial will ask the question: what happens when our standard assumptions about our data are violated? This is what leads us into the world of reinforcement learning (the basics of which we'll cover) and then to inverse reinforcement learning and inverse optimal control.
Hal Daume III is an assistant professor in computer science at the University of Maryland, with a joint appointment in Linguistics. He is primarily interested in the interface between natural language processing, computational linguistics and machine learning. His work in statistical modeling spans multiple aspects of language processing, including structured prediction, Bayesian methods, domain adaptation, and linguistic typology.
MA4 Opinion Mining and Sentiment Analysis
Bing Liu
Opinion mining or sentiment analysis is the computational study of people's opinions, appraisals, and emotions toward entities, individuals, topics, and their attributes expressed in text. Opinions are important because they are key influencers of our behaviors. Our beliefs and perceptions of reality are to a considerable degree conditioned on how others see the world. For this reason, when we need to make a decision we often seek out the opinions of others. In recent years, opinion mining from social media has emerged as a popular research area in NLP and text mining due to many challenging research problems and a range of applications. It has spread from computer science to the social sciences and management science. In fact, apart from NLP, it also presents many challenges to other areas of AI, for example, machine learning, data mining, and automated reasoning. In the tutorial, I will first define the problem, describe its main tasks, and then present the current state-of-the-art techniques. Many examples will also be given to help participants better understand the key concepts. One feature of the tutorial is that it will not only address seminal research issues but will also look at the technology from an application angle.
Bing Liu is a professor of computer science at the University of Illinois at Chicago. He received his PhD from the University of Edinburgh. His research interests include opinion mining or sentiment analysis, and data mining. He has given more than 20 keynote and invited talks on opinion mining and has served as program chairs of KDD, ICDM, WSDM, SDM, CIKM, and PAKDD.
MP1 Algorithms for Classical Planning
Jussi Rintanen
Planning is a fundamental aspect of the behavior of intelligent beings. Construction of intelligent agents, physical or virtual, often requires the solution of complex forms of the planning problem, identifying sequences of actions that fulfill the performance objectives of the agent. Our understanding of planning has made dramatic progress in the last 10 years, tremendously improving its applicability to hard problems requiring intelligent decision making and acting.
The focus of the tutorial is on the most important state-space traversal methods: heuristic state-space search and logic-based symbolic methods such as planning as satisfiability. These directly solve the classical planning problem, but they are also the basis for solving its extensions, such as temporal and conditional planning. We explain the most important algorithms for classical planning, the relations between them, and their applicability to more general planning problems. We also give a practical overview of existing planning systems.
The target audience is AI researchers interested in understanding the landscape of modern planning and the issues arising in its application. As background knowledge, we assume a basic understanding of standard search algorithms and logic.
Jussi Rintanen is a principal researcher at NICTA, the leader of NICTA's planning group, as well as an adjunct professor at the Australian National University. In the last years, his main interests have been in constraint-based search methods and their application to hard combinatorial problems such as diagnosis, planning and control.
MP2 Conformal Predictions for Reliable Machine Learning: Theory and Applications
Vineeth N. Balasubramanian and Shen-Shyang Ho
Reliable estimation of confidence remains a significant challenge as learning algorithms proliferate into challenging real-world applications. The Conformal Predictions framework is a recent development in machine learning to associate reliable measures of confidence with results in classification and regression. This framework is founded on the principles of algorithmic randomness, transductive inference and hypothesis testing, and has several desirable properties for potential use in various real-world applications, such as the calibration of the obtained confidence values in an online setting. Further, this framework can be applied across all existing classification and regression methods, thus making it very generalizable. In recent years, there has been a growing interest in applying this framework to real-world problems such as clinical decision support, medical diagnosis, sea surveillance, network traffic classification, and face recognition. This tutorial will: (1) expose the audience to the basic theory of the framework; (2) demonstrate examples of how the framework can be applied in real-world problems, and (3) provide sample adaptations of the framework to related problems such as active learning, transfer learning, anomaly detection, and model selection.
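As a sketch of the core mechanism (an inductive variant with a 1-nearest-neighbor nonconformity measure on toy data; the function names and the particular measure are our choices, not necessarily those covered in the tutorial):

```python
import numpy as np

def nonconformity(x, y, X_train, y_train):
    """1-NN nonconformity score: distance to the nearest training example
    of the same class divided by distance to the nearest example of any
    other class. Larger values mean the pairing (x, y) looks stranger."""
    d = np.linalg.norm(X_train - x, axis=1)
    same, other = d[y_train == y], d[y_train != y]
    return same.min() / (other.min() + 1e-12)

def conformal_p_values(x, X_train, y_train, X_cal, y_cal):
    """Inductive conformal predictor: for each candidate label, the
    p-value is the fraction of calibration examples that are at least
    as nonconforming as the test point would be under that label."""
    p = {}
    for label in np.unique(y_train):
        a_test = nonconformity(x, label, X_train, y_train)
        a_cal = np.array([nonconformity(xc, yc, X_train, y_train)
                          for xc, yc in zip(X_cal, y_cal)])
        p[label] = (np.sum(a_cal >= a_test) + 1) / (len(a_cal) + 1)
    return p

# Two well-separated 1-D clusters; the test point sits in the first one.
X_train = np.array([[0.0], [0.1], [5.0], [5.1]])
y_train = np.array([0, 0, 1, 1])
X_cal = np.array([[0.05], [5.05], [0.2], [4.9]])
y_cal = np.array([0, 1, 0, 1])
p = conformal_p_values(np.array([0.02]), X_train, y_train, X_cal, y_cal)
# p[0] is large and p[1] small: the point conforms to label 0.
```

At significance level ε, the prediction set is every label whose p-value exceeds ε; the framework's calibration property guarantees the true label is excluded with probability at most ε.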
A basic understanding of machine learning approaches (such as classification, clustering, regression, and others) is the only prerequisite for this tutorial.
Sethuraman Panchanathan and Vladimir Vovk served as co-organizers of this tutorial.
Vineeth Balasubramanian is an assistant research professor at Arizona State University. His research interests include pattern recognition, machine learning, computer vision, and multimedia computing. His work on the conformal predictions framework was nominated for the outstanding PhD dissertation award at ASU, as well as for the annual ACM Doctoral Dissertation Award.
Shen-Shyang Ho is a research associate at the University of Maryland, Institute for Advanced Computer Studies (UMIACS). His research interests include data mining, machine learning, and pattern recognition in spatiotemporal/data streaming settings. His current research involves the application of data mining and machine learning approaches to support NASA Earth science research.
MP3 Information Organization and Retrieval with Collaboratively Generated Content
Eugene Agichtein and Evgeniy Gabrilovich
Ubiquitous access to the Internet enables millions of web users to collaborate online. These efforts often result in the construction of large repositories of knowledge, either as their primary aim (for example, Wikipedia) or as a by-product (for example, Yahoo! Answers). The unprecedented amounts of information in collaboratively generated content (CGC) enable new, knowledge-rich approaches to information access. Some examples include the use of human-defined concepts to augment the bag of words, using large-scale taxonomies to construct additional class-based features, or using Wikipedia for better word sense disambiguation. However, the quality of CGC varies significantly, and a substantial amount of post-processing is necessary to take full advantage of the knowledge therein.
The tutorial will cover two complementary directions: (1) Using CGC as an enabling resource for knowledge-enriched, intelligent information retrieval algorithms, and (2) Development of supporting technologies for extracting, filtering, and organizing CGC.
As we will show, not only can knowledge repositories be used to improve information retrieval methods, but reverse pollination is also possible. For example, better information extraction methods can be used to automatically collect more knowledge, or to verify the contributed content.
Understanding of basic concepts in machine learning and data mining is expected. Information retrieval and NLP concepts will be introduced as needed.
Eugene Agichtein is an assistant professor in the Emory University Mathematics and Computer Science Department, where he directs the Intelligent Information Access Lab (IRLab). Agichtein's expertise is in web search and information retrieval, with an emphasis on organizing and searching collaboratively generated content. His work has been supported by NSF, HP Labs, and Yahoo!.
Evgeniy Gabrilovich is a senior research scientist and manager of the NLP and IR Group at Yahoo! Research. His research interests include information retrieval, machine learning, and computational linguistics. Gabrilovich has published extensively on computational advertising, as well as on using world knowledge to enhance text representation beyond the bag of words, and has served as a senior program committee member or area chair at AAAI, IJCAI, SIGIR, EMNLP, and ICWSM.
MP4 Philosophy as AI and AI as Philosophy
Aaron Sloman
Although most AI researchers have engineering objectives, some are primarily interested in the scientific study of minds, both natural and artificial. Both scientific and applied AI are deeply connected to old problems in philosophy about the nature of mind and knowledge: what exists, how minds are related to matter, causation and free will, the nature of consciousness, how language is possible, creativity, and whether non-biological machines can have minds. Questions linking AI and philosophy motivated AI pioneers such as Ada Lovelace, Alan Turing, Marvin Minsky, John McCarthy, and Herbert Simon, and are also addressed by Margaret Boden, Andy Clark, David Chalmers, Daniel Dennett, John Searle, and others. Yet many questions remain unanswered, and some philosophers and scientists think AI is only engineering.
I'll try to explain how progress can be based on unnoticed relationships between AI and philosophy, including the connections of both with unexplained features of biological evolution and the development of various kinds of intelligence in individual animals. The tutorial will be highly interactive and provocative.
Prerequisites: None, except an interest in how AI and philosophy are mutually relevant and illuminate the nature of mind and intelligence. See the author's website for details.
Aaron Sloman received his first degree in mathematics and physics and his DPhil in philosophy of mathematics, then worked in philosophy, cognitive science, AI, and theoretical biology. He is the author of The Computer Revolution in Philosophy (1978) and many articles and book chapters, a Fellow of AAAI, and a contributor to the Poplog multilanguage toolkit for AI research and teaching.
Proceedings are available in the AAAI Digital Library