AAAI-10 Tutorial Forum
The Tutorial Forum of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10) will be held July 11-12, 2010 in Atlanta. The Tutorial Forum provides an opportunity for junior and senior researchers to spend two days each year freely exploring exciting advances in disciplines outside their normal focus. We believe this type of forum is essential for the cross-fertilization, cohesiveness, and vitality of the AI field. We all have a lot to learn from each other; the Tutorial Forum promotes the continuing education of each member of the AAAI.
Sunday, July 11, 9:00 AM – 1:00 PM
AI and Machine Consciousness
Antonio Chella
Exploiting Statistical and Relational Information on the Web and in Social Media: Applications, Techniques, and New Frontiers
Lise Getoor and Lilyana Mihalkova
Large-Scale Ontology Reasoning and Querying
Jeff Z. Pan, Guilin Qi, and Jianfeng Du
Reinforcement Learning Algorithms for MDPs
Csaba Szepesvari and Rich Sutton
Sunday, July 11, 2:00 PM – 6:00 PM
Bayesian Networks with Imprecise Probabilities: Theory and Applications to Knowledge-based Systems and Classification
Alessandro Antonucci and Giorgio Corani
Cooperative Games in Multi-Agent Systems
Georgios Chalkiadakis, Edith Elkind, and Mike Wooldridge
Rules on the Semantic Web: Advances in Knowledge Representation and Standards
Benjamin Grosof, Mike Dean, and Michael Kifer
Towards Intelligent Web Search: Inferring Searcher Intent
Eugene Agichtein
Monday, July 12, 9:00 AM – 1:00 PM
An Introduction to Constraint Programming and Combinatorial Optimisation through Numberjack
Barry O'Sullivan, Emmanuel Hebrard, and Eoin O'Mahony
How to Integrate Ontologies and Rules?
Thomas Eiter, Stijn Heymans, Luis Polo, and Adeline Nazarenko
Sampling Techniques for Probabilistic and Deterministic Graphical Models
Rina Dechter, Bozhena Bidyuk, and Vibhav Gogate
Monday, July 12, 2:00 PM – 6:00 PM
Description Logics for Data Access
Giuseppe De Giacomo and Domenico Lembo
Machine Learning Meets Knowledge Representation in the Semantic Web
Francesca A. Lisi
Preferences and Partial Satisfaction in Planning
J. Benton, Jorge Baier, and Subbarao Kambhampati
SA1 AI and Machine Consciousness
Antonio Chella
Machine consciousness is an emerging field that addresses the problems of designing and implementing computational models of consciousness in an agent. The target of machine consciousness research is twofold: the possibility of building phenomenally conscious machines (that is, facing the hard problem of qualia) and the analysis of the active role of consciousness in controlling and planning the behaviour of an agent.
Machine consciousness lies at the crossroads of technical disciplines (AI, robotics, computer science and engineering), theoretical disciplines (philosophy of mind, linguistics, logic), and empirical disciplines (psychology and neuroscience). It focuses on attempts to apply the methods of AI, robotics, and computer science to understand consciousness and to examine the possible role of consciousness in AI systems. On the one hand, there is the hope that facing the problem of consciousness will be a decisive move toward designing better AI systems; on the other, implementations of AI systems could help us understand natural consciousness.
The tutorial will present the current state of research in machine consciousness and it will discuss the theoretical foundations and the experimental results of the field and their importance for the AI community.
The tutorial will be divided into four parts: i) theoretical and philosophical issues of consciousness, ii) models of machine consciousness, iii) case studies and implemented systems, and iv) discussions and perspectives of machine consciousness.
Prerequisite knowledge: No specific prior knowledge is required.
Antonio Chella is a professor of robotics in the Computer Engineering Department of the University of Palermo, Italy, where he leads the robotics laboratory. He is an associate editor of the Artificial Intelligence Journal. In 2007 he organized and cochaired the AAAI Fall Symposium on AI and Consciousness. He is cofounder and editor-in-chief of the International Journal of Machine Consciousness, started in 2009. His recent research interests address the implementation of machine consciousness models in autonomous robots.
SA2 Exploiting Statistical and Relational Information on the Web and in Social Media: Applications, Techniques, and New Frontiers
Lise Getoor and Lilyana Mihalkova
The popularity of Web 2.0, characterized by a proliferation of social media sites, and Web 3.0, with more richly semantically annotated objects and relationships, brings to light a variety of important prediction, ranking, and extraction tasks. The input to these tasks is often best seen as a (noisy) multi-relational graph, such as the graph of the Web itself; the click graph, defined by user interactions with Web sites; and the social graph, defined by friendships and affiliations on social media sites.
The first part of this tutorial will describe several common Web applications and will focus on their shared abstractions, showing how they can be cast as reasoning over multi-relational graphs. The second part of the tutorial will describe statistical relational learning (SRL) techniques, arguing in favor of the use of SRL as a unifying framework for learning and reasoning with multi-relational information on the Web, and will describe in detail several Web applications of SRL.
We expect that our audience will walk away with an appreciation for the diversity of Web applications naturally modeled as graphs, and with sufficient knowledge of available SRL tools to start exploring Web applications.
Prerequisite knowledge: We assume basic familiarity with knowledge representation and machine learning.
Lise Getoor is an associate professor at the University of Maryland, College Park. Her research interests are in machine learning and reasoning under uncertainty. She has also done work in areas such as database management, social network analysis, and visual analytics.
Lilyana Mihalkova is a "Computing Innovations" post-doctoral fellow at the University of Maryland, College Park. She received her Ph.D from the University of Texas at Austin. Her research interests are in statistical relational learning and reasoning under uncertainty.
SA3 Large-Scale Ontology Reasoning and Querying
Jeff Z. Pan, Guilin Qi, and Jianfeng Du
The goal of the Large-Scale Ontology Reasoning and Querying tutorial is twofold: first, to introduce scalable reasoning and querying techniques to AI researchers as a powerful tool for making use of large-scale ontologies, and second, to present interesting research problems for AI that arise in finding justifications of entailments in large-scale ontologies. The tutorial consists of four parts. It will begin with an introduction to the semantic web standard ontology language OWL 2 and its related reasoning services, including examples of how to use OWL 2 constructors to build ontologies and how reasoning services can facilitate this process. It will then show how a divide-and-conquer approach can provide sound and complete optimisations for large-scale ontology reasoning and querying. The third part of the tutorial will present recent work on quality-guaranteed approximations, preserving soundness and/or completeness, for dealing with very large-scale ontologies. The last part of the tutorial will introduce a variety of justification techniques that are relevant to current challenges in finding justifications of entailments in large-scale ontologies.
Jeff Z. Pan received his Ph.D from the University of Manchester in 2004 and joined the faculty in the Department of Computing Science at the University of Aberdeen in 2005. He serves as an associate editor of the Journal of Advances in Artificial Intelligence, on the editorial boards of both the International Journal on Semantic Web and Information Systems (IJSWIS) and the Journal of Emerging Technologies in Web Intelligence (JETWI), and as program chair of RR2007, the Ontology and Reasoning track at ESWC2010, and the Doctoral Consortium at ISWC2010.
Guilin Qi is a professor of computer science at Southeast University in China. Before moving to Southeast University, he was a postdoctoral researcher at the Institute AIFB at the University of Karlsruhe, working on the EU FP7 project NeOn: Lifecycle Support for Networked Ontologies. He received his Ph.D in computer science from Queen's University Belfast in 2006. His research interests include knowledge representation and reasoning, uncertainty reasoning, and the semantic web. He has published about 50 papers in these areas, many of them in the proceedings of major conferences (such as IJCAI and AAAI) or top journals (such as Information Sciences and Fuzzy Sets and Systems). He is an associate editor of the Journal of Advances in Artificial Intelligence and is coediting a special issue of Annals of Mathematics and Artificial Intelligence. He has organized several international workshops and has served as a PC member of several international conferences and workshops, such as RR'09, ISWC'09, and RR'08, as well as a reviewer for many conferences and journals.
Jianfeng Du is currently a research fellow at the Institute of Business Intelligence and Knowledge Discovery at Guangdong University of Foreign Studies. He received his Ph.D degree from the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, and both his master's and bachelor's degrees from Sun Yat-Sen University in P.R. China. His current research interests include knowledge representation and reasoning, and the semantic web. He has published papers in major conferences in these areas, such as IJCAI, WWW, and ISWC.
SA4 Reinforcement Learning Algorithms for MDPs
Csaba Szepesvari and Richard S. Sutton
Reinforcement learning is a popular and highly-developed approach to artificial intelligence with a wide range of applications. By integrating ideas from dynamic programming, machine learning, and psychology, reinforcement learning methods have enabled much better solutions to large-scale sequential decision problems than had previously been possible. This tutorial will cover Markov decision processes and approximate value functions as the formulation of the reinforcement learning problem, and temporal-difference learning, function approximation, and Monte Carlo methods as the principal solution methods. The focus will be on the algorithms and their properties. Applications of reinforcement learning in robotics, game-playing, the web, and other areas will be highlighted. The main goal of the tutorial is to orient the AI researcher to the fundamentals and research topics in reinforcement learning, preparing them to evaluate possible applications and to access the literature efficiently.
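The temporal-difference methods the tutorial covers can be sketched in a few lines. The following toy example is our own illustration, not material from the tutorial: the five-state chain MDP, its rewards, and all parameter values are made-up assumptions. It runs tabular Q-learning and extracts the greedy policy.

```python
import random

# Tabular Q-learning on a toy five-state chain MDP (illustrative only).
N_STATES = 5
ACTIONS = (0, 1)  # 0 = left, 1 = right

def step(s, a):
    """Deterministic transition; reward 1 for reaching the right end."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def q_learning(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(50):  # cap on episode length
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])
            s2, r = step(s, a)
            # Temporal-difference update toward the bootstrapped target.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
            if r == 1.0:                      # treat the right end as terminal
                break
    return q

q = q_learning()
# Greedy policy extracted from the learned action values.
policy = {s: max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(N_STATES)}
```

On this chain, the learned greedy policy moves right in every state the agent actually visits, since that is the shortest route to the reward.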
Prerequisite knowledge: We will assume familiarity with basic mathematical concepts such as conditional probabilities, expected values, derivatives, vectors and matrices.
Csaba Szepesvari, an associate professor at the Department of Computing Science of the University of Alberta, is the coauthor of a book on nonlinear approximate adaptive controllers. His main interest is the design and analysis of efficient learning algorithms in various active and passive learning scenarios.
Richard S. Sutton is a professor and iCORE chair in the Department of Computing Science at the University of Alberta. He is a fellow of the AAAI and coauthor of Reinforcement Learning: An Introduction. His research interests center on the learning problems facing a decision-maker interacting with its environment, which he sees as central to artificial intelligence.
SP1 Bayesian Networks with Imprecise Probabilities: Theory and Applications to Knowledge-Based Systems and Classification
Alessandro Antonucci and Giorgio Corani
Bayesian networks are important tools for uncertain reasoning in AI; their quantification requires a precise assessment of the conditional probabilities. Credal networks generalize Bayesian networks by allowing probabilities to vary in a set (e.g., an interval). This provides a more realistic model of expert knowledge and returns more robust inferences.
The first part of this tutorial describes the specification procedure for credal networks and the existing inference algorithms; two examples of expert systems based on credal networks, for military and environmental identification problems, are then presented.
In the second part, we show how credal networks can be used for classification (credal classifiers). Credal classifiers generalize the traditional Bayesian classifiers, which are based on a single prior density and on a single likelihood. Credal classifiers are instead based on (i) a set of priors, thus removing the need for subjectively choosing a prior and (ii) possibly also on a set of likelihoods, to allow robust classification even with missing data. Credal classifiers can return more classes if the assignment to a single class is too uncertain; in this way, they preserve reliability. The tutorial presents algorithms for credal classification and comparison with traditional classifiers on a large number of data sets.
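The credal idea of returning a set of classes can be sketched minimally. The example below is our own assumption, not tutorial material: a binary class, fixed likelihoods, and a prior known only to lie in an interval, with all numbers hypothetical.

```python
def posterior_c1(prior_c1, lik_c1, lik_c0):
    """Posterior P(c1 | x) by Bayes' rule for a binary problem."""
    num = prior_c1 * lik_c1
    return num / (num + (1.0 - prior_c1) * lik_c0)

def credal_predict(lik_c1, lik_c0, prior_lo=0.2, prior_hi=0.8):
    """Return the set of classes that can win under some prior in [lo, hi].

    The posterior of c1 is monotone in the prior, so checking the two
    extreme priors of the interval suffices.
    """
    p_lo = posterior_c1(prior_lo, lik_c1, lik_c0)
    p_hi = posterior_c1(prior_hi, lik_c1, lik_c0)
    if min(p_lo, p_hi) > 0.5:
        return {"c1"}            # c1 wins under every prior in the set
    if max(p_lo, p_hi) < 0.5:
        return {"c0"}            # c0 wins under every prior in the set
    return {"c0", "c1"}          # undecided: return both classes
```

With strong evidence (`credal_predict(0.9, 0.05)`) a single class is returned; with uninformative likelihoods (`credal_predict(0.5, 0.5)`) the classifier preserves reliability by returning both classes.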
Prerequisite knowledge: Background on Bayesian methods could be helpful for attendees.
Alessandro Antonucci is a post-doctoral researcher at IDSIA (Switzerland). He teaches at the University of Applied Sciences and Arts of Southern Switzerland. He is currently working on a military project, granted by the Swiss Army, for surveillance systems based on credal networks. He is also the Executive Editor of the Society for Imprecise Probability: Theories and Applications (SIPTA).
Giorgio Corani is a post-doctoral researcher at IDSIA (Switzerland). He has published several papers on credal classifiers, among which the naive credal classifier (JMLR, 2009). He is in the program committee of ISIPTA (International Symposium on Imprecise Probabilities). He teaches 'Uncertain Reasoning and Data Mining' at the University of Lugano (Switzerland).
SP2 Cooperative Games in Multi-Agent Systems
Edith Elkind, Georgios Chalkiadakis and Michael Wooldridge
Cooperative (or coalitional) games provide an expressive and flexible framework for modeling collaboration in multi-agent systems. However, from a computational perspective, cooperative games present a number of challenges, chief among them being how they can be succinctly represented and how to reason efficiently with such representations. In this tutorial, we survey work on several aspects of cooperative games and their applications to multi-agent systems. We introduce the basic models used in cooperative game theory, and the relevant solution concepts. We then describe the key computational issues surrounding such models, and survey the main approaches developed over the past decade for representing and reasoning about cooperative games in AI and computer science generally. We then discuss the aspects of cooperative games that are particularly important in multi-agent settings, such as uncertainty and decentralized coalition formation algorithms. We conclude by presenting recent applications of these ideas in multi-agent scenarios.
Prerequisite knowledge: We assume a basic knowledge of AI principles (for example, rule-based knowledge representation, search, very basic logic), but no knowledge of game theory or cooperative games.
Edith Elkind is an assistant professor, Division of Mathematical Sciences, Nanyang Technological University, Singapore. She received her Ph.D from Princeton in 2005. Her main research interests are algorithmic game theory and computational social choice, with a particular emphasis on coalitional games.
Georgios Chalkiadakis is a research fellow in the School of Electronics and Computer Science, University of Southampton, UK. Georgios gained his Ph.D in computer science from the University of Toronto in 2007, for work combining Bayesian reinforcement learning with game-theoretic, coalition formation-related ideas.
Michael Wooldridge is a professor at the University of Liverpool, UK. His research interests are in the use of formal methods for multi-agent systems. Wooldridge was the recipient of the ACM Autonomous Agents Research Award in 2006, was elected ECCAI Fellow in 2007, and AAAI Fellow in 2008.
SP3 Rules on the Semantic Web: Advances in Knowledge Representation and Standards
Benjamin Grosof, Mike Dean and Michael Kifer
The area of semantic rules is perhaps the most important frontier today for the semantic web’s core technology and standards. Rules extend databases and ontologies with more powerful, flexible, and active forms of structured knowledge, and also have a number of close relationships to query and search, policies and trust, wikis, and services. Recent progress includes major initial industry standards adopted or nearing finalization from W3C (Rule Interchange Format, OWL 2 RL profile) and OMG.
Recent progress also includes demonstrations of radical technology advances in KR expressiveness and knowledge integration (for example, SILK). These include efficient reactive higher-order defaults, with sound integration of first-order logic, based on declarative logic programs that subsume relational/RDF databases and interoperate with production rules.
Finally, there has been recent progress in accelerating investments/acquisitions, for example, by several of the largest software companies, and a wide range of emerging applications in business, government, and science.
There are a number of exciting research issues — including how to integrate unstructured knowledge gained from text understanding and machine learning.
This up-to-date and comprehensive tutorial will cater to those first learning about semantic web rules, as well as those who already have some background in them.
Prerequisite knowledge: Prerequisite is only basic knowledge of FOL, DBMS, and XML.
Benjamin Grosof leads a large research program in semantic rules and AI at Vulcan Inc., the parent company of Paul G. Allen. He also has a part-time expert consulting business. He has pioneered semantic technology and standards for rules, their combination with ontologies, and their applications in e-commerce and policies.
Mike Dean, a principal engineer at Raytheon BBN Technologies, has been developing semantic web tools and applications since 2000 and has contributed to the development of OWL, SWRL, RIF, and SILK.
Michael Kifer is a professor in the Computer Science Department, Stony Brook University. His works on knowledge representation, particularly on F-logic, HiLog, and others, are among the most widely cited in semantic web research. He was twice awarded the prestigious ACM-SIGMOD “Test of Time” awards for his works on object-oriented database languages. Recently he also received SUNY Chancellor's Award for Excellence in Scholarship.
SP4 Towards Intelligent Web Search: Inferring Searcher Intent
Eugene Agichtein
Billions of users search the web, clicking on the results, submitting and refining queries and otherwise interacting with the search engines. Mining the vast amount of information generated by these interactions has been an active area of research, resulting in significant advances in web search ranking, crawling, query suggestions, and other areas of web search. This tutorial will focus on one crucial area of web search, namely inferring the intent of the searcher by using computational models of searcher behavior, interests, and actions. The emphasis will be on the state-of-the-art machine learning and data mining techniques for learning and applying user intent inference models. The tutorial will consist of four short lectures on 1) user modeling in web search; 2) inferring searcher intent; 3) improving web search with inferred intent information; 4) search personalization.
Prerequisite knowledge: Understanding of basic machine learning and data mining is assumed; information retrieval (search) techniques will be introduced as needed.
Eugene Agichtein is an assistant professor in the mathematics and computer science departments at Emory University. Eugene’s expertise is in information retrieval, currently focusing on understanding and modeling user interactions in web search and social media. Eugene has published extensively on web search, information retrieval and information extraction.
MA1 An Introduction to Constraint Programming and Combinatorial Optimisation through Numberjack
Barry O'Sullivan, Emmanuel Hebrard, and Eoin O'Mahony
Computers play an increasingly important role in helping individuals and industries make decisions. For example they can help individuals make decisions about which products to purchase or industries make decisions about how best to manufacture these products. Constraint programming provides powerful support for decision-making; it is able to search quickly through an enormous space of choices, and infer the implications of those choices.
This tutorial will teach attendees how to develop interesting models of combinatorial problems and solve them using constraint programming, satisfiability and mixed integer programming techniques. The tutorial will make use of Numberjack, an open-source Python-based optimisation system developed at the Cork Constraint Computation Centre. As such, this tutorial is ideal for graduate students, industrialists, and advanced researchers interested in knowing how to apply optimisation technology to challenging problems. A number of real-world case-studies from domains such as telecommunications, network security, sensor networks, warehouse location, and computational sustainability will be presented. A feature of this tutorial is that it will be hands-on. Attendees will walk away with the basic skills required to implement their own models using an open-source optimisation system.
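To give a flavour of constraint modelling, the sketch below is our own illustration and does not use Numberjack's actual API; a real CP solver would prune this space by constraint propagation rather than enumerate it. It states a tiny map-colouring problem as variables, domains, and constraints, then searches exhaustively.

```python
from itertools import product

# Toy map-colouring CSP: regions are variables, colours are domains,
# and adjacency gives "not-equal" constraints (all names invented).
regions = ["A", "B", "C", "D"]
adjacent = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
colours = ["red", "green", "blue"]

def solve():
    """Exhaustive search over all assignments; returns the first solution."""
    for assignment in product(colours, repeat=len(regions)):
        colouring = dict(zip(regions, assignment))
        # Constraint check: adjacent regions take different colours.
        if all(colouring[x] != colouring[y] for x, y in adjacent):
            return colouring
    return None

solution = solve()
```

The declarative style (state the constraints, let the solver search) is the point; the brute-force loop is only a stand-in for a real solver's search and inference.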
Barry O'Sullivan is the associate director of the Cork Constraint Computation Centre at University College Cork, president of the Association for Constraint Programming, coordinator of the European Research Consortium for Informatics and Mathematics Working Group on Constraints, chair of the Artificial Intelligence Association of Ireland, and executive council member of the Management Science Institute of Ireland.
Emmanuel Hebrard is a postdoctoral research fellow at the Cork Constraint Computation Centre.
Eoin O'Mahony works as a research assistant at 4C.
Numberjack, the system used throughout this tutorial, has been developed by the presenters at 4C.
MA2 How to Integrate Ontologies and Rules?
Thomas Eiter, Stijn Heymans, Luis Polo, and Adeline Nazarenko
A key challenge for a business is to enable the right people to interact, in their own way, with the right part of their business application. We believe this can be achieved by cleanly separating the domain ontology from the actual business rules, on the one hand, and the representation of the knowledge from its operationalization in IT applications, on the other. As such, one can distinguish three views on the business organization: (1) the view of the business analyst via business policies and rules, (2) the view of the knowledge engineer via ontologies and rules, and (3) the view of the IT department via an operationalization in applications.
In this four-hour technical tutorial we give an overview of the above three views and focus on (2), the management of combinations of ontologies and rules. In doing so, we touch upon topics such as natural language processing, ontology languages such as OWL 2, and logical and production rules, with attention to the integration of ontologies and rules and the issues it raises.
Prerequisite knowledge: The target audience is a general AI audience that is familiar with notions of formal ontology languages (such as description logics and FOL and so on), logic programming, or production rules.
Thomas Eiter is a professor at the Vienna University of Technology (since 1998), where he heads the Knowledge Based Systems Group. His main research area is knowledge representation and reasoning, with a stress on logic-based AI and nonmonotonic formalisms; he was elected as an ECCAI Fellow in 2006.
Stijn Heymans works in the Knowledge-Based Systems group at the Vienna University of Technology, with a focus on the integration of ontologies and rules. He was formerly a researcher at DERI Innsbruck, where he led the research cluster Intelligent Reasoning for Integrated Systems.
Luis Polo is a researcher at CTIC Foundation, a research center based in Gijón, Spain. He is the head of the semantic technologies unit and designs complex ontologies applied to practical solutions in domains such as representing preferences and profiles, delivery context, social communities, tourism, and business processes.
Adeline Nazarenko is a professor in the Computer Science Department of Paris 13 University and the head of the natural language and knowledge representation team. She has been working for almost 20 years in natural language processing focusing on natural language semantics and on text understanding applications.
MA3 Sampling Techniques for Probabilistic and Deterministic Graphical Models
Rina Dechter, Bozhena Bidyuk, and Vibhav Gogate
This half-day tutorial will provide participants with a firm understanding of sampling-based simulation techniques used for approximating various probabilistic inference tasks defined over discrete graphical models. After covering the necessary background in sampling theory, the tutorial will survey progress achieved over the past few decades in two parts. In the first part, we will discuss schemes that exploit structural features and problem decomposition to improve the quality of sampling-based approximations. Examples of such techniques are cutset conditioning and AND/OR search spaces for graphical models. In the second part, we will present schemes that harness the power of CSP and SAT techniques to overcome the difficulty associated with performing sampling-based inference in the presence of deterministic dependencies. The techniques covered in the tutorial will be exemplified on different types of graphical frameworks (constraint networks, satisfiability, probabilistic networks and mixed networks) using a variety of applications (for example, functional verification, bioinformatics tasks).
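As a minimal illustration of sampling-based inference, the following sketch is our own toy example with made-up numbers, not tutorial material. It applies likelihood weighting to a two-node network Rain -> WetGrass to estimate P(Rain | WetGrass = true).

```python
import random

P_RAIN = 0.2                        # P(Rain = true), a made-up number
P_WET = {True: 0.9, False: 0.1}     # P(WetGrass = true | Rain), made up

def likelihood_weighting(n=100_000, seed=0):
    """Estimate P(Rain | WetGrass = true) by likelihood weighting."""
    rng = random.Random(seed)
    weighted, total = 0.0, 0.0
    for _ in range(n):
        rain = rng.random() < P_RAIN    # sample the non-evidence variable
        w = P_WET[rain]                 # weight by the evidence likelihood
        if rain:
            weighted += w
        total += w
    return weighted / total

estimate = likelihood_weighting()
# Exact answer by Bayes' rule: 0.2 * 0.9 / (0.2 * 0.9 + 0.8 * 0.1) = 0.18 / 0.26
```

Fixing evidence and weighting samples, rather than rejecting those that disagree with the evidence, is what makes the scheme practical when evidence is unlikely; the structural and SAT-based refinements the tutorial surveys address the harder case of hard (deterministic) dependencies.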
Prerequisite knowledge: Basic understanding of Bayesian networks, constraint networks, satisfiability, and statistics. Familiarity with exact inference techniques for graphical models will be helpful but not essential.
Rina Dechter is a professor of computer science at the University of California, Irvine. She received her Ph.D in computer science from UCLA in 1985, an MS degree in applied mathematics from the Weizmann Institute, and a B.S in mathematics and statistics from the Hebrew University, Jerusalem. Her research centers on computational aspects of automated reasoning and knowledge representation, including search, constraint processing, and probabilistic reasoning. Dechter is the author of Constraint Processing, has authored over 100 research papers, and has served on the editorial boards of Artificial Intelligence, the Constraints journal, the Journal of Artificial Intelligence Research, and Logical Methods in Computer Science. She was awarded the Presidential Young Investigator award in 1991, is a fellow of the Association for the Advancement of Artificial Intelligence, held a Radcliffe Fellowship in 2005-2006, and received the 2007 Association for Constraint Programming (ACP) research excellence award.
Bozhena Bidyuk is currently working at Google Inc. in Irvine, California on research problems associated with measuring and predicting the effectiveness of television ads. She received her Ph.D in computer science from the University of California, Irvine, in June 2006 under the supervision of Rina Dechter. Her research interests are in automated reasoning and machine learning with a focus on sampling based approximate inference techniques. Her research interests span areas of exact and approximate inference in graphical models, and in particular, sampling methods, and their use in unsupervised machine learning.
Vibhav Gogate is a postdoctoral research associate at the University of Washington, Seattle, working with Pedro Domingos. He received his Ph.D in computer science from the University of California, Irvine, in June 2009 under the supervision of Rina Dechter. His research interests are in automated reasoning and machine learning with a focus on sampling-based approximate inference techniques. Currently, he is working on developing exact and approximate inference algorithms for Markov logic networks, which are first-order or lifted probabilistic graphical models.
MA4 Voting Theory
Ulle Endriss
Voting theory is the study of methods for conducting elections. It has attracted a lot of interest from AI researchers in recent years: there are important applications of voting theory in AI (for example, in multiagent systems) and the tools and techniques of AI have proven useful for the study of voting methods (for example, complexity theory, knowledge representation, and automated reasoning).
This tutorial will provide an introduction to the theory of voting for AI researchers. We will present the most important voting procedures and cover some of the classical theorems in the field. We will also see examples for recent work in computational social choice, which brings together ideas from social choice theory (including voting theory) and computer science (including AI).
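As a small example of the kind of voting procedure the tutorial covers, the sketch below computes a Borda count winner. The profile and candidate names are invented for illustration; under Borda, with m candidates a ballot gives m-1 points to its top choice, m-2 to the next, and so on.

```python
from collections import defaultdict

# A toy preference profile: each ballot ranks candidates best-first.
ballots = [
    ["a", "b", "c"],
    ["a", "c", "b"],
    ["b", "c", "a"],
    ["c", "b", "a"],
    ["b", "a", "c"],
]

def borda_winner(ballots):
    """Return the Borda winner and the full score table."""
    m = len(ballots[0])
    scores = defaultdict(int)
    for ballot in ballots:
        for rank, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - rank   # top rank earns m-1 points
    return max(scores, key=scores.get), dict(scores)

winner, scores = borda_winner(ballots)
```

On this profile the Borda winner is "b" even though "a" ties for the most first-place votes, which is exactly the kind of divergence between procedures that motivates the comparative study of voting rules.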
Prerequisite knowledge: No specific background knowledge will be assumed; the tutorial will be accessible to anybody working in AI.
Ulle Endriss is an assistant professor at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, where he carries out research at the interface of logic, artificial intelligence, and mathematical economics. In 2006, he organized the inaugural edition of the International Workshop on Computational Social Choice (COMSOC).
MP1 Description Logics for Data Access
Giuseppe De Giacomo and Domenico Lembo
The tutorial illustrates recent developments in Description Logics (DLs) aimed at coupling DL knowledge bases with relational data stores. For this coupling to be effective, two main issues need to be addressed. First, the language used to query the knowledge base must be much more expressive than DL expressions; in particular, the ability to express arbitrary joins, as in the conjunctive queries (CQs) studied in databases, is essential. Second, query answering should remain tractable (PTIME or less) with respect to the size of the data. In order to address these issues, as often happens in DL research, the language used to specify the knowledge base has to be carefully designed to suitably balance expressiveness and computational complexity of reasoning. However, unlike what typically happens in DL research, reasoning is based not on first-order or modal tableaux but on chase techniques originally developed in databases for reasoning over data dependencies. We will discuss DL languages designed for data access, and techniques for reasoning in such DLs, focusing specifically on query answering. We will also illustrate reasoning tools that are available for such logics.
Giuseppe De Giacomo is a full professor in the Department of Computer and System Sciences (Dipartimento di Informatica e Sistemistica), Sapienza Università di Roma. His research interests include description logics, information integration, knowledge representation and reasoning, service composition, reasoning about actions, cognitive robotics, and object-oriented methodologies. He is the author of more than 200 publications in international conferences and journals.
Domenico Lembo is an assistant professor in the Department of Computer and System Sciences (Dipartimento di Informatica e Sistemistica), Sapienza Università di Roma. His main research interests include conceptual and semantic data modeling, description logics and ontologies, knowledge representation and reasoning, semantic web, database theory, and information integration. He is the author of numerous publications in international conferences and journals.
MP2 Machine Learning Meets Knowledge Representation in the Semantic Web
Francesca A. Lisi
Defining ontology and rule languages for the semantic web poses several challenges to knowledge representation (KR) research in description logics and their hybridizations with clausal logics. Specifying ontologies and rules is also a very demanding task from the knowledge acquisition viewpoint. Yet it can be partially automated by applying machine learning (ML) methods and techniques, especially those following the logic-based approach known as Inductive Logic Programming.
In four hours, the tutorial will survey research in KR and ML that can support the management of semantic web ontologies and rules with both deductive and inductive reasoning. The ultimate goal is to show that the semantic web is an AI-intensive application area. The tutorial is expected to be profitable for Ph.D. students, young researchers, or professionals with a strong background in logics in AI, expertise in either KR or ML, and an interest in the semantic web. Each member of the target audience will benefit from the tutorial in a different way, depending on his or her profile. For example, KR researchers will learn more about nonstandard reasoning tasks, such as induction, that are peculiar to ML. Conversely, ML researchers will become acquainted with the KR issues raised by the advent of the semantic web.
Francesca A. Lisi, Ph.D. in computer science, is currently an assistant professor at the University of Bari (Italy). She investigates the intersection between machine learning and knowledge representation, aiming at semantic web applications. Awarded the 2006 Cyc Prize, she has given several seminars and tutorials on this research.
MP3 Preferences and Partial Satisfaction in Planning
J. Benton, Jorge Baier, and Subbarao Kambhampati
Partial satisfaction planning involves solving for a subset of the goals when resources are scarce, while preference-based planning involves finding the most preferred plan among the possible solutions. Although different in perspective, these two problems are two sides of the same coin: both require handling soft constraints with a measure of plan quality defined a priori. The objective in both is to find the highest-quality plan, reflecting user-defined metrics while maintaining the problem constraints. Introducing soft constraints forces the consideration of a wider range of possible solutions than classical planning, which involves only hard goals. In fact, in a problem with only soft goals, every state can be a solution.
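The soft-goal objective sketched above can be made concrete in a few lines. The following is an illustrative net-benefit formulation only (all names and numbers are invented for this sketch, not taken from the tutorial): plan quality is the total utility of the soft goals a plan achieves minus the total cost of the actions it uses, so even the empty plan is a valid, if low-quality, solution.

```python
def net_benefit(achieved_goals, goal_utilities, plan_actions, action_costs):
    """Plan quality = utility of satisfied soft goals - cost of actions used."""
    utility = sum(goal_utilities[g] for g in achieved_goals)
    cost = sum(action_costs[a] for a in plan_actions)
    return utility - cost

# Illustrative problem: one high-value and one low-value soft goal.
goal_utilities = {"deliver_pkg": 10, "refuel": 3}
action_costs = {"drive": 4, "load": 1, "unload": 1}

# The empty plan achieves nothing but is still a solution (quality 0).
empty_plan_quality = net_benefit([], goal_utilities, [], action_costs)

# A plan achieving deliver_pkg via load, drive, unload: 10 - (1 + 4 + 1) = 4.
full_plan_quality = net_benefit(["deliver_pkg"], goal_utilities,
                                ["load", "drive", "unload"], action_costs)
```

The planner's task is then to search over which soft goals to pursue as well as how to achieve them, maximizing this quantity subject to any remaining hard constraints.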
In this tutorial we will examine the current state-of-the-art in preference and partial satisfaction planning and will cover representation of preferences and soft constraints, objective function definitions, frameworks for solving problems and current open challenges. We will focus on details of recent technical innovations in handling preferences in traditional STRIPS and hierarchical task network (HTN) oriented planning models.
Prerequisite knowledge: Attendees should have introductory knowledge of automated planning.
J. Benton is a Ph.D. student at Arizona State University, having earned an MS there in 2005. His current research is on techniques for off- and on-line oversubscription and preference-based planning. His planner YochanPS won a distinguished performance award in the Simple Preferences track of the 2006 International Planning Competition.
Jorge Baier is an assistant professor at Pontificia Universidad Católica de Chile (PUC). His research has focused on the exploitation of techniques developed in classical planning in more expressive planning domains, such as those containing preferences, temporally extended goals, and procedural control knowledge. His planner Hplan-P won a distinguished performance award in the Qualitative Preferences track of the 2006 International Planning Competition. He earned his Ph.D. in 2010 from the University of Toronto.
Subbarao Kambhampati is a professor of computer science at Arizona State University. In the past five years, his group has been involved in effective solutions to over-subscription planning. He is a 1994 NSF Young Investigator, a 2004 IBM faculty fellow, an AAAI 2005 conference cochair, a JAIR associate editor, and was elected a fellow of AAAI in 2004 for his contributions to automated planning. He received the 2002 College of Engineering Teaching Excellence Award.