AAAI 2009 Symposia


AAAI 2009 Spring Symposium Series Call for Participation

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, is pleased to present the 2009 Spring Symposium Series, to be held Monday through Wednesday, March 23–25, 2009, at Stanford University. The titles of the nine symposia are as follows:

  • Agents that Learn from Human Teachers
  • Benchmarking of Qualitative Spatial and Temporal Reasoning Systems
  • Experimental Design for Real-World Systems
  • Human Behavior Modeling
  • Intelligent Event Processing
  • Intelligent Narrative Technologies II
  • Learning by Reading and Learning to Read
  • Social Semantic Web: Where Web 2.0 Meets Web 3.0
  • Technosocial Predictive Analytics

Agents that Learn from Human Teachers

The Agents that Learn from Human Teachers Spring Symposium will bring together a multidisciplinary group of researchers to discuss how we can enable agents to learn from real-time interaction with an everyday human partner, exploring the ways in which machine learning can take advantage of elements of human-like social learning.

The goal of this meeting is to foster a collaborative dialog and bring multiple perspectives to bear on this challenge. We have an exciting agenda planned, with participation from researchers across machine learning, human-computer interaction, human-robot interaction, intelligent user interfaces, developmental psychology, and cognitive science. We believe that learning will be a key component of the successful application of intelligent agents in everyday human environments (physical and virtual). It will be impossible to give agents, a priori, all of the knowledge and skills they will need to serve useful long-term roles in our dynamic world. The ability of everyday users, not experts, to adapt an agent's behavior easily will be key to its success. Machine learning techniques have had much success over the years when applied to agents, but they have not yet been specifically designed for learning from nonexpert users, and current techniques are generally not suited to it out of the box.

The program will cover a variety of topics spanning the disciplines mentioned above. For example:

  1. How do everyday people approach the task of teaching autonomous agents?
  2. What mechanisms of human social learning will machine learning agents need?
  3. Are there machine learning algorithms that are more/less amenable to learning with non-expert human teachers?
  4. What are proper evaluation metrics for social machine learning systems?
  5. What is the state of the art in human teachable systems?
  6. What are the grand challenges in building agents that learn from humans?

Our schedule will include several presentations of ongoing work in the realm of robots and software agents that learn from human interaction, including demonstrations of agents learning from human teachers. Additionally, we will hold a joint session with the Experimental Design for Real-World Systems symposium in which we will brainstorm experimental design and performance metrics for social learning agents.

Organizing Committee

Andrea L. Thomaz, chair (Georgia Institute of Technology), Cynthia Breazeal (MIT Media Lab), Sonia Chernova (CMU), Dan Grollman (Brown University), Charles Isbell (Georgia Institute of Technology), Olufisayo Omojokun (Georgia Institute of Technology), Satinder Singh (University of Michigan)

For More Information

For more information about the symposium, see the supplementary symposium web site.


Benchmarking of Qualitative Spatial and Temporal Reasoning Systems

Over the past 25 years the domain of qualitative spatial and temporal reasoning has evolved into an established subfield of AI. Qualitative reasoning aims at the development of formalisms that are close to the conceptual schemata humans use to reason about their physical environment, in particular about temporal and spatial information. Application fields of qualitative reasoning include human-machine interaction, high-level agent control, geographic information systems, spatial planning, ontological reasoning, and cognitive modeling.

To foster real-world applications, representation and reasoning methods used in qualitative reasoning need to be tested against evaluation criteria adapted from other AI fields and cognitive science. The aim of the symposium is to boost the development of well-founded and widely accepted evaluation standards and practical benchmark problems. This includes measures to compare different qualitative formalisms in terms of cognitive adequacy, expressiveness, and computational efficiency; the development of a domain and problem specification language for benchmarking purposes; the identification of significant benchmark domains and problem instances based on natural use cases, along with the creation of a problem repository; and measures to evaluate the performance of implemented reasoning systems. The symposium will foster the benchmarking idea in the qualitative reasoning domain, help identify a graded set of challenges for future research, and push the development of qualitative reasoning methods and systems toward application-relevant problems.

Format

The symposium program will include invited talks, paper presentations, working groups, and a tool demonstration session. Working group sessions will cover topics such as standards for calculus and problem instance specifications, qualitative reasoners, application-driven benchmark cases, and measures for the cognitive adequacy of qualitative formalisms. The symposium schedule will allow for extensive discussion time and group interaction.

Symposium participants are invited to present their current work on benchmarking of qualitative spatial and temporal reasoning systems, significant use cases for qualitative reasoning, and more general midterm and long-term challenges in the field.

Organizing Committee

Bernhard Nebel (University of Freiburg, Germany), Anthony G. Cohn (University of Leeds, UK), Jean-Francois Condotta (Université d'Artois, France), Max J. Egenhofer (University of Maine, USA), Ulrich Furbach (University of Koblenz-Landau, Germany), Jochen Renz (Australian National University, Australia), Peter van Beek (University of Waterloo, Canada), Stefan Woelfl (University of Freiburg, Germany), Diedrich Wolter (University of Bremen, Germany)

For More Information

For more information about the symposium, see the supplementary symposium web site.


Experimental Design for Real-World Systems

As more artificial intelligence (AI) research is fielded in real-world applications, the evaluation of systems designed for human-machine interaction becomes critical. AI research often intersects with other areas of study, including human-robot interaction, human-computer interaction, assistive technology, and ethics. Designing experiments to test hypotheses at the intersections of multiple research fields can be incredibly challenging. Many commonalities and differences already exist in experimental design for real-world systems. For example, human-robot interaction and human-computer interaction are two fields with both shared and distinct goals. They evaluate very different aspects of design, interface, and interaction. In some instances, these two fields can share aspects of experimental design, while, in others, the experimental design must be fundamentally different.

We will provide a forum for researchers from many disciplines to discuss experiment design and the evaluation of real-world systems. We invite researchers from all applicable fields of human-machine interaction. We also invite researchers from allied fields, such as psychology, anthropology, design, human-computer interaction, human-robot interaction, rehabilitation and clinical care, assistive technology, and other related disciplines.

This symposium will focus on a wide variety of topics that address the challenges of experiment design for real-world systems including successes and failures in system evaluations, uses of quantitative and qualitative data, design of system evaluations, type and size of the participant pool, uses of laboratory experiments, field trials, Wizard of Oz studies, and observational studies, and other related topics.

We will have a mix of guest speakers, paper presentations, and break-out groups. We will tour Cliff Nass's Communication between Human and Interactive Media (CHIMe) lab. We will also have a joint session with the Agents that Learn from Human Teachers symposium to discuss performance metrics for agents that learn from humans (social learning).

Organizing Committee

David Feil-Seifer (USC), Heidy Maldonado (Stanford University), Bilge Mutlu (CMU), Kristen Stubbs (University of Massachusetts), Leila Takayama (Stanford University), Katherine Tsui (University of Massachusetts, Lowell).

Program Committee

Jenny Burke (USF), Kerstin Dautenhahn (Hertfordshire), Gert Jan Gelderbloom (VILANS), Maja Mataric (USC), Aaron Steinfeld (CMU), Holly Yanco (University of Massachusetts, Lowell).

For More Information

For more information about the symposium, see the supplementary symposium web site.


Human Behavior Modeling

The Human Behavior Modeling symposium will explore methods for creating models of individual and group behavior from data. Models include generative and discriminative statistical models, relational models, and social network models. Data includes low-level sensor data (GPS, RFID, accelerometers, physiological measures, and so on), video, speech, and text. Behaviors are high-level descriptions of purposeful or meaningful activity, including activities of daily living (for example, preparing a meal), interaction between small sets of individuals (for example, having a conversation), and mass behavior of groups (such as the flow of traffic in a city).

While behavior modeling is part of many research communities, such as intelligent user interfaces, machine vision, smart homes for aging in place, discourse understanding, social network analysis, and others, this symposium will be distinguished by its emphasis on exploring general representations and reasoning methods that can apply across many different domains. Questions the participants in the symposium will discuss include the following:

  • Representation: Is it important to make all levels of the model interpretable?
  • Generalization: What are some effective strategies for generalization?
  • Domain knowledge: How can commonsense prior knowledge be combined with sensor data?
  • Evaluation: How do we evaluate models in real world scenarios, especially when ground truth data is sparse or unavailable?

Format

The symposium agenda will include 16 oral and poster presentations that collectively span the range of human behavior modeling — from individuals to groups to societies — using a variety of different computational techniques and data sources. In addition, we will have a moderated panel and open discussions to encourage brainstorming, and to specifically identify grand challenge problems that could serve as a focal point for research efforts and innovation, and would provide a context in which to compare different methodologies and tools.

Organizing Committee

Henry Kautz (University of Rochester), Tanzeem Choudhury (Dartmouth College), Ashish Kapoor (Microsoft Research).

Program Committee

Samy Bengio (Google), Hung Bui (SRI), Dieter Fox (University of Washington), Eric Horvitz (Microsoft Research), Rana El Kaliouby (MIT), Jiebo Luo (Kodak Research Laboratories), Chris Pal (University of Rochester), Alex (Sandy) Pentland (MIT), Daniel Gatica-Perez (IDIAP), Matthai Philipose (Intel Research), Nicu Sebe (University of Amsterdam).

For More Information

For more information about the symposium, see the supplementary symposium web site.


Intelligent Event Processing

The Intelligent Event Processing symposium will be organized around two central topics. First, what progress can intelligent event processing achieve compared to traditional event processing (and at what price)? Second, which challenges from industry can this progress address (and at what price)? The primary goal will be to analyze the gap between these two topics in order to identify current problems, define priorities for future research, and strategize about how to generate impact in the research and industry communities, including standardization efforts.

The symposium is designed to attract a wide audience, ranging from younger researchers who would like to learn the basics of event-processing-related disciplines, to senior managers who will learn how new research trends can impact their business in the near future.

The symposium will be structured toward achieving these goals. It will contain two keynotes, given by one prominent researcher and one prominent industry representative, presenting critical overviews of the state of the art in the research and state of affairs in the industry.

The symposium will be divided into several slots, each dedicated to one topic. So far we have selected four general topics: modeling, discovery, reasoning, and applications. Each slot will contain three types of presentations: tutorials that introduce the topic, relevant scientific contributions, and breakout sessions that conclude the slot with main findings and next steps. For each breakout session we will prepare a set of initial issues to be discussed. To enable focused discussions, we will advise presenters to make explicit statements about their work's relation to, and progress beyond, traditional event processing, as well as its relevance to industry and applications.

The symposium will conclude by defining the guidelines for the next steps, especially for the future work of the Event Processing Technical Society, whose mission is to promote understanding and advancement of event processing technologies, to assist in the development of standards to ensure long-term growth, and to provide a cooperative and inclusive environment for communication and learning.

Organizing Committee

Nenad Stojanovic, chair, (FZI - Research Center for Information Technologies at the University of Karlsruhe, Germany), Andreas Abecker (FZI, Germany), Opher Etzion (IBM Research Lab, Haifa, Israel), Adrian Paschke (RuleML Inc, Canada).

For More Information

For more information about the symposium, see the supplementary symposium web site.


Intelligent Narrative Technologies II

The year 2009 marks a decade of research on intelligent narrative technologies. The 2009 AAAI Spring Symposium on Intelligent Narrative Technologies II is the third in a successful series that started in 1999. Like previous symposia, it aims at advancing research in interactive and non-interactive narrative technologies by bringing together relevant research communities to discuss innovations, progress and developing work.

This year, the symposium will focus discussions and presentations on themes and activities relating to narrative understanding, authoring, and generation, and the technology required to develop suitable tools. The series has always aimed to be a venue for discussions and debates, and this symposium will continue in that tradition. It will be structured to bolster the exchange of ideas and concepts as well as stimulate in-depth discussion. It will feature an exploration of approaches to narrative and content generation through a special improvised comedy games workshop run by Kathryn Farley, as well as extensive poster and demonstration sessions. The symposium will have both theoretical and practical presentations on story generation, interactive storytelling, social agents, and computational models of narrative.

Improvised Comedy Games Workshop

When pioneering theatre educator Viola Spolin crafted a set of exercises called “theatre games,” she was primarily interested in establishing a learning environment dedicated to collaboration, experimentation and play. Some of the games she devised were intended to teach students how to create stories using three basic elements of a scene: who, what and where. The “Improvised Comedy Games Workshop” by Kathryn Farley applies Spolin's theatre exercises to a comedic context. The workshop will provide a foundation in the spontaneous creation of narrative, by drawing on specific games that use comedy as a story-making device. The workshop will present a diverse offering of narrative-building exercises that encourage participants to get out of their heads and into their bodies, to listen and respond, to trust their instincts and make good choices — all skills that are immensely beneficial to the creation of stories in any medium or platform of expression.

Organizing Committee

Sandy Louchart, cochair (Heriot-Watt University), Manish Mehta, cochair (Georgia Institute of Technology), David L. Roberts, cochair (Georgia Institute of Technology), David Herman (Ohio State University), Marie-Laure Ryan, David Thue (University of Alberta).

For More Information

For more information, visit the symposium's supplementary web site.


Learning by Reading and Learning to Read

The majority of human knowledge is encoded in text, and much of this text is available in machine readable form on the web. But to machines, the knowledge encoded in the texts they read remains inaccessible. Significant progress has been made in such basic areas of language processing as morphological analysis, syntactic parsing, proper name recognition, and logical form extraction. This has already advanced information extraction and filtering capabilities, as a variety of current application systems demonstrate. Still, intelligent machines of today cannot yet claim to be able to generate semantic representations on the scale and of the depth sufficient to support automatic reasoning, a situation often blamed on the knowledge acquisition bottleneck.

The goal of this symposium is to stimulate discussion and open exchange of ideas about two aspects of making texts semantically accessible to, and processable by, machines. The first, learning by reading, relates to automatically extracting machine-understandable (machine-tractable) knowledge from text. The second, learning to read, is related to automating the process of knowledge extraction required to acquire and expand resources (for example, ontologies and lexicons) that facilitate learning by reading. There is a clear symbiotic relationship between these two aspects — expanding knowledge resources enables systems that extract knowledge from text to improve at that task over time and vice versa. Given significant diversity in topics, terminology, and writing styles, learning to read will be crucial to large-scale deployment of systems that learn by reading.

Topics of interest include, but are not limited to, the following:

  • Extracting ontologies de novo from text
  • Expanding ontologies (learning new concepts or properties) by automatic processing of text
  • Expanding lexicons (adding new terms or linking lexicons to ontologies) through automatic text processing
  • End-to-end self-bootstrapping systems that learn by reading by learning to read
  • Special challenges posed by extracting knowledge from text gathered from the web
  • Semantic integration and interoperability
  • Evaluation metrics for systems that learn by reading or learn to read
  • Learning from expository texts (e.g., encyclopedias)
  • Targeted (goal-directed) machine reading
  • Special challenges posed by learning (either to read or by reading) for long periods of time (called “lifelong learning” in the machine learning community)
  • Reasoning with knowledge acquired from text
  • Knowledge mining

Organizing Committee

James Allen (University of Rochester), Peter Clark (Boeing Corporation), Jon Curtis (Cycorp), Graeme Hirst (University of Toronto), Sergei Nirenburg, cochair (University of Maryland, Baltimore County), Tim Oates, cochair (University of Maryland, Baltimore County), Lenhart Schubert (University of Rochester), John F. Sowa (VivoMind Inc.)

For More Information

For more information about the symposium, see the supplementary symposium web site.


Social Semantic Web: Where Web 2.0 Meets Web 3.0

The social web and the semantic web complement each other in the way they approach content generation and organization. Social web applications are fairly unsophisticated at preserving the semantics in user-submitted content, typically limiting themselves to user tagging and basic metadata. Because of this, they have only limited ways for consumers to find, customize, filter and reuse data. Semantic web applications, on the other hand, feature sophisticated logic-backed data handling technologies, but lack the kind of scalable authoring and incentive systems found in successful social web applications. As a result, semantic web applications are typically of limited scope and impact. We envision a new generation of applications that combine the strengths of these two approaches: the data flexibility and portability that are characteristic of the semantic web, and the scalability and authorship advantages of the social web. In this symposium, we are interested in bringing together the semantic web community and the social web community to promote the collaborative development and deployment of semantics in the World Wide Web context.

For this purpose, the symposium will provide several avenues for all participants, including five to six technical sessions that consist of oral presentations of long papers, short papers, and statements of interest; a poster session; demos of the social semantic web in action (such as semantic wikis); two invited talks from world leaders in this area; two panel discussions; and breakout sessions self-organized by participants.

Organizing Committee

Mark Greaves (Vulcan Inc.), Li Ding (Rensselaer Polytechnic Institute), Jie Bao (Rensselaer Polytechnic Institute), Uldis Bojars (National University of Ireland, Galway).

For More Information

For more information about the symposium, see the supplementary symposium web site.


Technosocial Predictive Analytics

Events occur daily that challenge the security, health, and sustainable growth of our planet, and often find the international community unprepared for their catastrophic outcomes. These events involve the interaction of complex processes such as climate change, energy reliability, terrorism, nuclear proliferation, natural and man-made disasters, and social, political, and economic resiliency. If we are to help the international community meet the challenges that emerge from these events, we must develop novel methods for predictive analysis that can support a concerted decision-making effort by relevant actors to anticipate and counter strategic surprise.

There is now increased awareness among subject-matter experts, analysts, and decision makers alike that a combined understanding of interacting physical and human factors is essential to addressing strategic surprise proactively. The Technosocial Predictive Analytics Symposium will further this insight by exploring new methods for anticipatory analytical thinking that implement a multi-perspective approach to predictive modeling through the integration of human and physical models, leverage knowledge from both the social and natural sciences, and draw on disciplines capable of supporting the modeling tasks by enhancing cognitive access and facilitating the acquisition of knowledge inputs.

The symposium will bring together scientists and government agency representatives interested in this emerging field to create a new community of interest. Our program features three keynote speakers: Nigel Gilbert (Professor of Sociology, University of Surrey); Greg Zacharias (Senior Principal Scientist, Charles River Analytics, Inc.); and Jean MacMillan (Chief Scientist, Aptima, Inc.).

There will be both long and short paper sessions, as well as a poster session. Papers and posters address three areas: technosocial modeling, knowledge inputs, and cognitive enhancement.

The technosocial modeling area targets the development, implementation, and evaluation of new multi-perspective methods and algorithms for predictive modeling.

The knowledge inputs area deals with capabilities that support the modeling task through the acquisition, vetting and dissemination of expert knowledge and evidence.

The cognitive enhancement area focuses on the use of visual analytics and enhanced cognition techniques to empower the user in the modeling task, promote inferential transparency, and support collaborative/competitive decision-making. The symposium will conclude with two panels in which government agency representatives will discuss current and prospective application domains, and technical and funding challenges for technosocial predictive analytics.

Organizing Committee

Antonio Sanfilippo, chair (Pacific Northwest National Laboratory), Peter Brooks (Intelligence Advanced Research Projects Agency), Kathleen Carley (Carnegie Mellon University), Claudio Cioffi-Revilla (George Mason University), Nigel Gilbert (University of Surrey), David Sallach (Argonne National Laboratory), Jim Thomas (Pacific Northwest National Laboratory), Steve Unwin (Pacific Northwest National Laboratory).

For More Information

For more information about the symposium, see the supplementary symposium web site.

This site is protected by copyright and trademark laws under US and International law. All rights reserved. Copyright © 1995–2019 Association for the Advancement of Artificial Intelligence.