AAAI-13 Tutorial Forum
The Tutorial Forum of the Twenty-Seventh AAAI Conference on Artificial Intelligence (AAAI-13) will be held July 14-15 in Bellevue, Washington. The Tutorial Forum provides an opportunity for junior and senior researchers to spend two days each year freely exploring exciting advances in disciplines outside their normal focus. We believe this type of forum is essential for the cross-fertilization, cohesiveness, and vitality of the AI field. We all have a lot to learn from each other; the Tutorial Forum promotes the continuing education of each member of the AAAI. To encourage full participation by technical conference registrants, no separate fee will be charged for admittance to the Tutorial Forum.
Sunday, July 14, 9:00 AM – 1:00 PM
- SA1: Combining Logic and Probability: Languages, Algorithms, and Applications
Pedro Domingos and Kristian Kersting
- SA2: Decision Diagrams for Discrete Optimization
Willem-Jan van Hoeve
- SA3: Game Theory for Security
Albert Xin Jiang, Manish Jain, Rong Yang, and Bo An
- SA4: Sentiment Mining from User Generated Content
Lyle Ungar and Ronen Feldman
Sunday, July 14, 2:00 PM – 6:00 PM
- SP1: Deep Learning of Representations
Yoshua Bengio
- SP2: Prediction, Belief, and Markets
Jenn Wortman Vaughan and Jacob Abernethy
- SP3: Symbolic Methods for Probabilistic Inference, Optimization, and Decision-Making
Scott Sanner
- SP4: Textual Inference
Sebastian Pado and Rui Wang
Monday, July 15, 9:00 AM – 1:00 PM
- MA1: Answer Set Solving in Practice
Martin Gebser and Torsten Schaub
- MA2: From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Games, Optimization, and Planning
Remi Munos
- MA3: Information Trustworthiness
Jeffrey Pasternack, Dan Roth, and V. G. Vinod Vydiswaran
- MA4: Maximum Satisfiability and Extensions
Carlos Ansótegui
Monday, July 15, 2:00 PM – 6:00 PM
- MP1: Automatically Improving Empirical Performance: Algorithm Configuration and Selection
Frank Hutter, Lars Kotthoff, Yuri Malitsky, Barry O'Sullivan, and Lin Xu
- MP2: Equilibrium Computation
Nicola Gatti and Troels Bjerre Sorensen
- MP3: Moving Agents in a Graph
Adriaan ter Mors
- MP4: Semantic Web Rules: Fundamentals, Applications, and Standards
Benjamin Grosof, Mike Dean, and Michael Kifer
SA1: Combining Logic and Probability: Languages, Algorithms, and Applications
Pedro Domingos and Kristian Kersting
AI problems are characterized by high degrees of complexity and uncertainty. Complexity is well handled by first-order logic, and uncertainty by probability. Combining the two in one language would be highly desirable, and the last decade has seen rapid progress in this direction. Many probabilistic logical languages have been proposed, and efficient inference and learning algorithms for them are available, often in open source software. Probabilistic logical techniques have been successfully applied to a wide variety of problems in natural language processing, vision, robotics, planning, social networks, the web, and other areas.
This tutorial begins with an overview of the key issues in this area and the solutions that have been proposed, from representation to learning and inference. As an example, we then focus on Markov logic, which attaches weights to first-order formulas and treats them as templates for features of log-linear models. We look in particular at the application of lifting techniques to probabilistic inference in relational domains (such as lifted loopy belief propagation, lifted linear programming, and probabilistic theorem proving), the combination of statistical learning with inductive logic programming (statistical relational learning), and the application of these techniques to machine reading.
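To make the weighted-formula idea concrete, here is a minimal sketch of how a log-linear model over weighted first-order formulas assigns probabilities to worlds. The atoms, the single formula, and its weight are invented for illustration, and the sketch uses brute-force enumeration rather than the lifted inference techniques the tutorial covers:

```python
import itertools
import math

def world_probabilities(atoms, weighted_formulas):
    """Brute-force inference for a tiny Markov logic network: a world's
    unnormalized weight is exp(sum of weights of the formulas it satisfies)."""
    worlds = [dict(zip(atoms, vals))
              for vals in itertools.product([False, True], repeat=len(atoms))]
    weights = {tuple(sorted(w.items())):
               math.exp(sum(wt for f, wt in weighted_formulas if f(w)))
               for w in worlds}
    z = sum(weights.values())  # partition function
    return {world: wt / z for world, wt in weights.items()}

# Hypothetical formula "Smokes => Cancer" with weight 1.5: worlds violating
# it become less probable, not impossible.
atoms = ["smokes", "cancer"]
formulas = [(lambda w: (not w["smokes"]) or w["cancer"], 1.5)]
probs = world_probabilities(atoms, formulas)
```

The one world that violates the implication (smokes true, cancer false) gets weight exp(0) while the others get exp(1.5), so it is merely the least probable rather than ruled out, which is exactly the softening of hard logic that motivates this line of work.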
This tutorial will be of immense use to motivated students and AI researchers in general who wish to pursue research in statistical relational AI. It aims at motivating and explaining a topic of emerging importance for AI. No special background is assumed.
Pedro Domingos is a professor of computer science and engineering at the University of Washington. His research interests are in artificial intelligence, machine learning and data mining. He received a PhD in information and computer science from the University of California at Irvine, and is the author or coauthor of over 200 technical publications. He is a member of the editorial board of the Machine Learning journal, cofounder of the International Machine Learning Society, and past associate editor of JAIR. He was program cochair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is an AAAI Fellow, and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at several leading conferences.
Kristian Kersting is a junior professor at the Agricultural Department of the University of Bonn, Germany, an adjunct assistant professor at the Medical School of Wake Forest University, North Carolina, USA, and an ATTRACT fellow at Fraunhofer IAIS, Bonn, Germany. He received his PhD from the University of Freiburg, Germany, in 2006. After a postdoc at the Massachusetts Institute of Technology, he joined Fraunhofer IAIS in 2008. His main research interests are statistical relational learning, artificial intelligence, machine learning, and data mining. He has published over 100 technical papers and received the ECCAI Dissertation Award 2006 for the best European dissertation in AI. He serves regularly as area chair or senior program committee member for several conferences and cochaired SRL and StarAI, among other international workshops. Currently, he is cochairing ECML PKDD, the European machine learning and data mining conference, and is an action or associate editor of the Machine Learning journal, Data Mining and Knowledge Discovery, and JAIR, as well as a member of the editorial board of New Generation Computing.
SA2: Decision Diagrams for Discrete Optimization
Willem-Jan van Hoeve
This tutorial presents an overview of recent successful applications of multivalued decision diagrams (MDDs) to discrete optimization, in areas including constraint programming, scheduling, and integer optimization.
In the first part of the tutorial, attendees will learn how constraint propagation algorithms can be extended to operate on MDDs. By propagating MDDs instead of variable domains, more structural information can be communicated between constraints, which can lead to substantial speedups.
The second part of the tutorial will focus on the use of MDDs in the context of constraint-based scheduling. In particular, we discuss how MDDs can be effectively applied to solve complex disjunctive scheduling problems, outperforming state-of-the-art industrial CP solvers by orders of magnitude in certain cases.
The third part of the tutorial discusses the application of limited-size MDDs to obtain upper and lower bounds for integer optimization problems. The resulting discrete bounds can be much stronger than continuous bounds based on linear programming, while being faster to compute.
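The bounding idea in this third part can be sketched in miniature. The following illustration (not taken from the tutorial materials) builds a width-limited decision-diagram-style relaxation for a 0/1 knapsack: each layer maps accumulated weight to the best accumulated value, and when a layer grows beyond the width limit, surplus nodes are merged optimistically, which yields a valid upper bound:

```python
import itertools

def relaxed_mdd_bound(values, weights, capacity, max_width):
    """Upper bound for a 0/1 knapsack from a width-limited decision diagram.
    When a layer exceeds max_width, surplus nodes are merged by keeping the
    smallest weight and largest value, relaxing the problem."""
    layer = {0: 0}
    for v, w in zip(values, weights):
        nxt = {}
        for used, val in layer.items():
            for nu, nv in ((used, val), (used + w, val + v)):  # skip / take
                if nu <= capacity and nv > nxt.get(nu, -1):
                    nxt[nu] = nv
        if len(nxt) > max_width:
            nodes = sorted(nxt.items(), key=lambda kv: -kv[1])
            keep, rest = nodes[:max_width - 1], nodes[max_width - 1:]
            merged_w = min(u for u, _ in rest)
            merged_v = max(x for _, x in rest)
            nxt = dict(keep)
            if merged_v > nxt.get(merged_w, -1):
                nxt[merged_w] = merged_v
        layer = nxt
    return max(layer.values())

values, weights, capacity = [6, 5, 4, 3], [4, 3, 2, 1], 6
bound = relaxed_mdd_bound(values, weights, capacity, max_width=2)
# exact optimum by brute force, for comparison
exact = max(sum(v for v, take in zip(values, sel) if take)
            for sel in itertools.product([0, 1], repeat=4)
            if sum(w for w, take in zip(weights, sel) if take) <= capacity)
```

With unlimited width the same recursion is the exact dynamic program, so the bound tightens to the optimum; limiting the width trades tightness for speed, which is the trade-off this part of the tutorial examines.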
Willem-Jan van Hoeve is an associate professor of Operations Research at Carnegie Mellon University. He received his PhD in computer science from the University of Amsterdam and CWI (2005), and was a postdoctoral researcher at Cornell University (2005–2007). His main research area is discrete optimization, with a specific focus on constraint programming. Van Hoeve is editor of Constraints and Constraint Programming Letters, and an associate editor of Decision Analytics. Van Hoeve's research on decision diagrams for discrete optimization has been supported by NSF and Google.
SA3: Game Theory for Security
Albert Xin Jiang, Manish Jain, Rong Yang, and Bo An
Game theory is an increasingly important paradigm for modeling decision-making in security domains, including homeland security resource allocation, robot patrolling strategies, and computer network security. Several deployed real-world systems use game theory to randomize critical security decisions so that terrorists and criminals cannot exploit a predictable security schedule. The ARMOR system has been deployed at Los Angeles International Airport since 2007, and the IRIS system has been deployed by the Federal Air Marshals Service since 2009. PROTECT has been deployed in the port of Boston since April 2011, has been in use at the port of New York since February 2012, and is headed for nationwide deployment; GUARDS is under evaluation for national deployment by the US Transportation Security Administration (TSA); and TRUSTS is being tested by the Los Angeles Sheriff's Department (LASD) in the Los Angeles Metro system to schedule randomized patrols for fare inspection.
This tutorial will introduce a wide variety of game-theoretic modeling techniques and algorithms that have been developed in recent years for security problems. Introductory material on game theory and mathematical programming (optimization) will be included in the tutorial, so there is no prerequisite knowledge for participants. After introducing the basic security game framework, we will describe algorithms for scaling to very large games, methods for modeling uncertainty and attacker observation capabilities in security games, and applications of these techniques for randomized resource allocation and patrolling problems. We will then talk about progress in the recent research area of game theory and human behavior, which deals with developing algorithms that perform well against human opponents.
At the conclusion of the tutorial we will highlight the many opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.
Our primary target audience is students and other novices who will benefit from an extensive introduction to one of the most important areas of applied multiagent systems research. This tutorial should also be of interest to researchers in the topic of human behavior, since we describe the application of concepts from behavioral game theory. We also anticipate that this tutorial will appeal to industry participants interested in applied work, and to game theory practitioners who may be interested in learning more about the specific techniques employed in this important class of games.
The primary tools the audience can learn are game theoretic models for security scenarios and exact/approximate algorithms to find solutions. The audience will also learn how concepts from behavioral game-theory have been applied and deployed in the context of security games. Although it is not the primary objective of the tutorial, attendees can learn also basic groundings on game theory and algorithmic game theory.
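As a tiny illustration of the basic security game framework (all payoff numbers below are invented), the following sketch grid-searches the defender's coverage of two targets with a single resource, assuming the attacker best-responds to the observed coverage and that ties are broken in the defender's favor, as is standard for strong Stackelberg equilibria:

```python
def best_coverage(payoffs, steps=1000):
    """payoffs[i] = (def. covered, def. uncovered, att. covered, att. uncovered)
    for target i. The defender commits to coverage (c, 1 - c); the attacker
    attacks a target maximizing his own expected utility."""
    best_cov, best_util = None, float("-inf")
    for k in range(steps + 1):
        cov = (k / steps, 1 - k / steps)
        att = [c * p[2] + (1 - c) * p[3] for c, p in zip(cov, payoffs)]
        targets = [i for i in range(2) if att[i] == max(att)]  # best responses
        util = max(cov[i] * payoffs[i][0] + (1 - cov[i]) * payoffs[i][1]
                   for i in targets)
        if util > best_util:
            best_cov, best_util = cov, util
    return best_cov, best_util

# hypothetical high-value and low-value targets
payoffs = [(0, -10, -1, 5), (0, -2, -1, 1)]
cov, util = best_coverage(payoffs)
```

Here the optimum places 75 percent of the resource on the high-value target, exactly enough to make the attacker indifferent; the algorithms covered in the tutorial compute such commitments at scale with mathematical programming rather than grid search.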
Albert Xin Jiang is a postdoctoral researcher in the Department of Computer Science at the University of Southern California. He received his PhD from the Department of Computer Science, University of British Columbia. Much of his research is addressing computational problems arising in game theory, including the efficient computation of solution concepts such as Nash equilibrium, pure-strategy Nash equilibrium, Stackelberg equilibrium and correlated equilibrium. He has published over a dozen peer-reviewed technical articles, including articles in Games and Economic Behavior and Machine Learning. In 2011 he received the best student paper award at the ACM Conference on Electronic Commerce. He received the 2012 CAIAC Doctoral Dissertation Award and the runner up IFAAMAS-11 Victor Lesser Distinguished Dissertation Award.
Manish Jain is currently a PhD candidate in the Department of Computer Science at the University of Southern California, where he is a part of the TEAMCORE research group. His work applies game-theoretic and large-scale optimization techniques to homeland security, including deployed software tools for the Federal Air Marshals Service and the Los Angeles World Airports police. He has coauthored papers on the subject of security games presented at the AAMAS and AAAI conferences, in both the main track and the industry track, and has organized tutorials on security games at both conferences. His work "Software Assistants for Patrol Planning at LAX, Federal Air Marshals Service, and Transportation Security Administration" was recently awarded the RIST Prize by the Military Operations Research Society. His work published in the operations research journal Interfaces was a finalist for the EURO Excellence in Practice Award. He has also won the best research assistant award at the University of Southern California.
Rong Yang is currently a PhD student in the Department of Computer Science at the University of Southern California. She is a part of the TEAMCORE Research group. Her work is on the application of behavioral game theory for Homeland Security, along with a deployed application under evaluation by the United States Coast Guard. She has coauthored papers on the subject of security games that have been presented at the AAMAS and AAAI conferences.
Bo An has been an associate professor at the Institute of Computing Technology of the Chinese Academy of Sciences since June 2012. Before that, he was a postdoctoral researcher at the University of Southern California, working with Professor Milind Tambe. His research interests include artificial intelligence, multiagent systems, game theory, automated negotiation, resource allocation, and optimization. He has published over 30 refereed papers at conferences including AAMAS, IJCAI, and AAAI, and journals including JAAMAS and IEEE Transactions. He is the recipient of the 2010 International Foundation for Autonomous Agents and Multiagent Systems Victor Lesser Distinguished Dissertation Award. He won an operational excellence award from the Commander, First Coast Guard District of the United States. He also won the Best Innovative Application Paper award at the 11th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2012). Recently, he won the 2012 INFORMS Daniel H. Wagner Prize for Excellence in Operations Research Practice.
SA4: Sentiment Mining from User Generated Content
Lyle Ungar and Ronen Feldman
The proliferation of user-generated content on the web is driving a new wave of work on determining user sentiment from web texts such as message boards, blogs, tweets, and Facebook status updates. Both researchers and practitioners are developing and applying new methods to determine how users feel about everything: products and politicians, friends and family, scientific articles and celebrities. This tutorial will cover the state of the art in this rapidly growing area, including recent advances that combine information extraction with sentiment analysis to improve accuracy in assessing sentiment about specific entities. We will present several real-world applications of sentiment analysis. Special emphasis will be given to lessons learned from years of experience in developing real-world sentiment analysis systems. The tutorial is suitable for people with a very basic familiarity with text mining or information retrieval.
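A rule-of-thumb lexicon scorer, far simpler than the systems the tutorial covers, illustrates the baseline that modern entity-level sentiment analysis improves on; the word lists here are invented and deliberately tiny:

```python
POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Sum lexicon hits over tokens, flipping the polarity of a word that
    immediately follows a negator (a deliberately crude negation scope)."""
    score, flip = 0, False
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            flip = True
            continue
        if word in POSITIVE:
            score += -1 if flip else 1
        elif word in NEGATIVE:
            score += 1 if flip else -1
        flip = False
    return score
```

Real systems replace the hand-built lexicon with learned models and tie each sentiment expression to the specific entity it targets, which is where the information extraction techniques discussed above come in.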
Ronen Feldman is an associate professor of information systems at the Business School of the Hebrew University in Jerusalem. He has worked extensively in information extraction. He founded the text-mining company ClearForest, has given more than 30 tutorials on text mining, and wrote The Text Mining Handbook.
Lyle H. Ungar is an associate professor of computer and information science at the University of Pennsylvania. He has published over 150 papers, done text mining at companies including Google and Dow Jones, and widely taught executive education courses. He is currently collaborating with psychologists to measure the happiness of the world from tweets and Facebook status updates.
SP1: Deep Learning of Representations
Yoshua Bengio
The success of machine learning algorithms generally depends on how data are represented; we hypothesize that this is because different representations can entangle and hide, to varying degrees, the explanatory factors of variation behind the data. Although domain knowledge can be used to help design representations, representations can also be learned, and the quest for AI is motivating the design of more powerful representation-learning algorithms. In particular, deep learning involves the discovery of multiple levels of representation, and is supported both by theoretical arguments and by recent breakthroughs in applications and industry, from computer vision and natural language processing to speech recognition, music modeling, and transfer settings (generalizing to new classes or contexts), where it outperforms other methods. We view the ultimate goal of representation learning algorithms as disentangling the unknown underlying factors of variation that explain the observed data. This tutorial reviews the basics of feature learning and deep learning, which can often be viewed as relying on broad priors about AI tasks. The tutorial also covers recent work relating representation learning to probabilistic modeling (the discovery of latent variables explaining the observed data) and manifold learning (the discovery of the local geometric structures near which the data-generating distribution concentrates). The audience will also be introduced to practical guidelines for representation learning, including how to set hyperparameters, following a recently published book on the subject. A final objective is to raise questions and issues about the appropriate objectives for learning good representations, for computing representations (that is, inference), and about the geometrical connections among representation learning, density estimation, and manifold learning.
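For readers who want a concrete anchor, here is a minimal one-hidden-layer autoencoder trained by plain backpropagation; its hidden activations are a learned representation of the input, and stacking such layers is one route to the deep models discussed above. The data and hyperparameters are invented for illustration:

```python
import numpy as np

def train_autoencoder(X, hidden=3, lr=0.1, epochs=200, seed=0):
    """Sigmoid encoder, linear decoder, trained by stochastic gradient
    descent on the (halved) squared reconstruction error; returns the
    per-epoch total loss so progress can be inspected."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (hidden, d)); b1 = np.zeros((hidden, 1))
    W2 = rng.normal(0.0, 0.1, (d, hidden)); b2 = np.zeros((d, 1))
    losses = []
    for _ in range(epochs):
        total = 0.0
        for row in X:
            x = row.reshape(-1, 1)
            h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # encoder: representation
            xhat = W2 @ h + b2                        # decoder: reconstruction
            r = xhat - x
            total += float((r * r).sum())
            dW2, db2 = r @ h.T, r                     # backprop through decoder
            dz = (W2.T @ r) * h * (1 - h)             # backprop through sigmoid
            dW1, db1 = dz @ x.T, dz
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
        losses.append(total)
    return losses

X = np.array([[1., 0., 0., 1.], [0., 1., 1., 0.],
              [1., 0., 0., 1.], [0., 1., 1., 0.]])
losses = train_autoencoder(X)
```

The reconstruction error drops as the hidden layer discovers the two underlying patterns in the data, a toy instance of the disentangling objective described above.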
Yoshua Bengio's lab was one of the three (University of Toronto, University of Montreal, New York University) that launched the area of deep learning in 2006. Since then Bengio has been one of the most prolific contributors to the area, and set up the deeplearning.net community web site (recently followed by a Google+ community). Bengio is a member of the Neural Information Processing Systems board. He was program chair for NIPS 2008 and general chair for NIPS 2009. He coorganized the Learning Workshop with Yann LeCun from 1999 to 2012, and together they also created the upcoming International Conference on Learning Representations. Bengio holds the Canada Research Chair on Statistical Learning Algorithms, and is a Fellow of the Canadian Institute for Advanced Research. He also holds a Natural Sciences and Engineering Research Council Chair. He helped create the Journal of Machine Learning Research, of which he is an action editor. Bengio also holds or has held editorial responsibilities for Foundations and Trends in Machine Learning, Computational Intelligence, Machine Learning, and the IEEE Transactions on Neural Networks. He has been area chair for numerous conferences, including NIPS and ICML. At the end of 2012, Google Scholar found 11,591 citations to Bengio's work and computed an H-index of 47.
SP2: Prediction, Belief, and Markets
Jenn Wortman Vaughan and Jacob Abernethy
This tutorial will cover some of the basic mathematical ideas used in the design of prediction markets, financial markets designed to aggregate opinions across large populations of traders, and illustrate several fundamental connections between these ideas and techniques used in machine learning. We will begin with an overview of proper scoring rules, which can be used to measure the accuracy of a single entity's prediction, and are closely related to proper loss functions. We will then discuss market scoring rules, automated market makers based on proper scoring rules which can be used to aggregate the predictions of many forecasters, and describe how market scoring rules can be implemented as inventory-based markets in which securities are bought and sold. We will describe recent research exploring a duality-based interpretation of market scoring rules which can be exploited to design new markets that can be run efficiently over very large state spaces. Finally, we will explore the fundamental mathematical connections between market scoring rules and two areas of machine learning: online "no-regret" learning and variational inference with exponential families. This tutorial will be self-contained. No background on markets or specific areas of machine learning is required.
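The market scoring rule most often used in practice, Hanson's logarithmic market scoring rule (LMSR), is compact enough to sketch directly; the liquidity parameter b below is arbitrary:

```python
import math

def lmsr_cost(q, b=10.0):
    """Hanson's LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    computed with the log-sum-exp trick for numerical stability."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_price(q, i, b=10.0):
    """Instantaneous price of security i: the softmax of q / b."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, delta, b=10.0):
    """What a trader pays to move outstanding quantities from q to q + delta."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

prices = [lmsr_price([3.0, 1.0], i) for i in range(2)]
```

A trader pays the difference in cost for the quantities moved, and the instantaneous prices always sum to one, so they can be read as the market's aggregated probability estimate; the duality-based view mentioned above generalizes exactly this construction.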
Jenn Wortman Vaughan is a researcher at Microsoft Research, New York City, where she studies algorithmic economics and market design, machine learning, and social computing. Jenn came to Microsoft in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and spent a year as a Computing Innovation Fellow at Harvard.
Jake Abernethy received a bachelor's degree in mathematics from MIT in 2002, a master's degree in computer science from TTI-C in 2006, and a PhD in computer science from the University of California, Berkeley. He is now a Simons Postdoctoral Fellow at the University of Pennsylvania. Abernethy's interests span machine learning, games, and markets.
SP3: Symbolic Methods for Probabilistic Inference, Optimization, and Decision-Making
Scott Sanner
This tutorial explores recent advances in symbolic data structures for exact probabilistic inference and decision-making in mixed discrete and continuous models. The tutorial is in two parts. In the first part, I introduce an extension of the algebraic decision diagram (ADD) to continuous variables — termed the extended ADD (XADD) — to represent arbitrary piecewise functions over discrete and continuous variables and show how to efficiently compute elementary arithmetic operations, integrals, and maximization for these functions. In the second part, I cover a wide range of novel applications where the XADD may be applied: (a) exact inference in expressive discrete and continuous variable graphical models, (b) factored, parameterized linear and quadratic optimization, and (c) exact solutions to continuous state and action Markov decision processes — which includes closed-form exact solutions to previously unsolved problems in operations research. The tutorial is accompanied by free software that audience members may wish to download in order to gain hands-on experience with examples from the tutorial.
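The case calculus underlying such symbolic data structures can be illustrated with one-dimensional piecewise linear functions. The sketch below is a much-simplified stand-in for the XADD (which handles multivariate functions and shares decision structure); it implements the binary "apply" operation for addition over case partitions:

```python
INF = float("inf")

def evaluate(f, x):
    """f is a list of cases (upper_bound, slope, intercept), sorted by bound,
    with the last upper bound +inf; the value is slope * x + intercept for
    the first case whose bound exceeds x."""
    for ub, a, b in f:
        if x < ub:
            return a * x + b
    raise ValueError("no case covers x")

def add(f, g):
    """Binary 'apply' for addition: walk both case lists in parallel so the
    result is piecewise over the merged breakpoints of the operands."""
    out, i, j = [], 0, 0
    while i < len(f) and j < len(g):
        ub = min(f[i][0], g[j][0])
        out.append((ub, f[i][1] + g[j][1], f[i][2] + g[j][2]))
        if f[i][0] == ub:
            i += 1
        if g[j][0] == ub:
            j += 1
    return out

f = [(0.0, 1.0, 0.0), (INF, 0.0, 0.0)]  # x if x < 0, else 0
g = [(INF, 2.0, 1.0)]                   # 2x + 1 everywhere
h = add(f, g)
```

Maximization and integration follow the same case-by-case pattern but must also introduce or eliminate breakpoints, which is where the decision-diagram representation earns its keep.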
Scott Sanner is a senior researcher at NICTA Canberra and an adjunct faculty member at the Australian National University. Sanner earned a PhD from the University of Toronto, an MS from Stanford, and a double BS from Carnegie Mellon. Sanner's research interests span AI, machine learning, and information retrieval.
SP4: Textual Inference
Sebastian Pado and Rui Wang
Inference, that is, the ability to derive conclusions from facts, is a central building block of intelligent behavior. It has predominantly been approached through the development of formal knowledge representations that enable efficient inference.
The goal of the tutorial is to provide an introduction to textual inference, a recent paradigm from natural language processing that provides an alternative approach to inference by defining it as a relation between natural language texts, avoiding commitment to any specific knowledge representation, inference procedure, or learning approach. It thus establishes a level playing field for comparing and combining approaches to inference, as well as an application-independent paradigm for applied semantics.
The tutorial spans four hours, ranging from fundamental to applied aspects of textual inference. It will cover the basics (motivation and definition); an overview of major strategies and algorithms; methods for acquiring and applying the linguistic background knowledge necessary for deciding inference at the language level; and an introduction to practical system building using a new open and modular software platform for textual inference.
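To give a flavor of the baselines such systems build on, a word-overlap entailment predictor can be written in a few lines; the stopword list and threshold below are invented:

```python
def entails(text, hypothesis, threshold=0.75):
    """Word-overlap baseline: predict entailment when a large enough fraction
    of the hypothesis' content words also occur in the text. Real systems add
    alignment, syntax, and lexical background knowledge on top."""
    stop = {"a", "an", "the", "is", "are", "was", "in", "on", "of", "to"}
    t = {w.strip(".,!?").lower() for w in text.split()} - stop
    h = {w.strip(".,!?").lower() for w in hypothesis.split()} - stop
    return len(t & h) / max(len(h), 1) >= threshold
```

Such baselines are surprisingly hard to beat on some benchmark data, which is precisely why the linguistic background knowledge covered in the tutorial matters for the remaining cases.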
Rui Wang is a researcher in the Language Technology Lab of the German Research Center for Artificial Intelligence (DFKI GmbH). He obtained both his master's and doctoral degrees in computational linguistics from Saarland University, Germany. His research topics include recognizing textual entailment, parsing, and machine translation. The RTE systems he has developed achieved top rankings in both the RTE challenges (2007–2009) and the answer validation exercises (2007–2008).
Sebastian Pado is a professor of computational linguistics at Heidelberg University, Germany. His research focuses on representing and processing natural language semantics. He obtained his PhD from Saarland University, Germany, in 2007 and worked as a postdoctoral researcher at Stanford University.
MA1: Answer Set Solving in Practice
Martin Gebser and Torsten Schaub
Answer set programming (ASP) provides a declarative tool for modeling various problems typical to knowledge representation and reasoning in particular and AI in general. The unique pairing of declarativeness and performance allows for concentrating on an actual problem, rather than a smart way of implementing it. The ASP approach is not only highly suitable for the practitioner solving an AI problem at hand but also for disseminating many basic AI techniques through teaching their (executable) formalization in ASP. The tutorial aims at acquainting participants with ASP's modeling and solving methodology, enabling them to solve problems independently using ASP systems. To this end, it starts with an introduction to the essential formal concepts of ASP, needed for understanding its semantics and solving technology. Building on this, we provide a characterization of ASP's inference schemes, which are in turn mapped onto algorithms relying on advanced Boolean constraint technology. We illustrate this technology through the ASP solver clasp. Similarly, ASP's grounding inferences are discussed in conjunction with (deductive) database techniques. The remainder of the tutorial is dedicated to applying ASP, involving an introduction to ASP's modeling language, its solving methodology, and advanced techniques fostering scalability. All involved ASP systems are freely available from potassco.sourceforge.net.
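For intuition on the semantics, the following brute-force sketch enumerates the answer sets of a tiny ground normal program via the Gelfond-Lifschitz reduct; the real grounders and solvers available from potassco.sourceforge.net are vastly more sophisticated:

```python
import itertools

def least_model(definite_rules):
    """Forward chaining to the least model of a definite program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in definite_rules:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

def answer_sets(rules, atoms):
    """rules are (head, positive_body, negative_body) triples of atom names.
    A candidate M is an answer set iff it equals the least model of the
    Gelfond-Lifschitz reduct of the program with respect to M."""
    found = []
    for bits in itertools.product([False, True], repeat=len(atoms)):
        m = {a for a, bit in zip(atoms, bits) if bit}
        # reduct: drop rules blocked by M, then drop the negative bodies
        reduct = [(h, pos) for h, pos, neg in rules if not (neg & m)]
        if least_model(reduct) == m:
            found.append(m)
    return found

# the classic even loop:  p :- not q.   q :- not p.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
models = answer_sets(rules, ["p", "q"])
```

The two-rule program has exactly the two answer sets {p} and {q}, illustrating how default negation yields multiple stable solutions rather than one intended model.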
What makes this tutorial different from others on ASP is its focus on putting ASP at work. This comprises a good understanding of ASP solving technology and systems as well as basic skills in ASP's modeling capacities.
Torsten Schaub is a university professor at the University of Potsdam, Germany. His group works on automated reasoning from incomplete, contradictory, and evolving information, with a focus on answer set programming (ASP). This has led to the open source project potassco.sourceforge.net, bundling various tools for ASP, among them the award-winning solver clasp.
Martin Gebser is a postdoctoral researcher at the University of Potsdam. His research interests include the theoretical and practical aspects of declarative problem solving methods. Since its inception, Martin contributed to the Potassco project, leading to powerful and successful systems in answer set programming and related areas.
MA2: From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Games, Optimization, and Planning
Remi Munos
This tutorial provides an introduction to the principle of "optimism in the face of uncertainty," from the simple multiarmed bandit to large-scale decision-making and optimization problems. Our initial motivation originated from the empirical success of the so-called Monte-Carlo tree search (MCTS) approach popularized in computer Go and further extended to many other games as well as optimization and planning problems. We will present elements of theory that characterize the complexity of the underlying problems and describe efficient algorithms with performance guarantees.
The tutorial starts with an introduction to the stochastic multiarmed bandit and the presentation of the UCB strategy as well as several variants. Next, the MCTS method and the UCT algorithm will be presented, and the limitations of current approaches will be illustrated. We will then follow the optimistic principle for the problem of function optimization under a finite numerical budget and describe algorithms with nonasymptotic performance guarantees. We will explain how their performance relates to a measure of the quantity of near-optimal solutions as well as to our knowledge of the local smoothness of the function around its maxima.
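The UCB1 strategy that the tutorial starts from fits in a few lines; the Bernoulli arm means and horizon below are invented for illustration:

```python
import math
import random

def ucb1(arm_means, horizon=5000, seed=1):
    """UCB1 on simulated Bernoulli arms: play each arm once, then always pull
    the arm maximizing (empirical mean + sqrt(2 ln t / n_i)), the optimism-
    in-the-face-of-uncertainty index. Returns per-arm pull counts."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1  # initialization: try every arm once
        else:
            i = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2.0 * math.log(t) / counts[j]))
        reward = 1.0 if rng.random() < arm_means[i] else 0.0
        counts[i] += 1
        sums[i] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8])
```

Over the horizon the best arm accumulates the vast majority of the pulls while suboptimal arms are sampled only logarithmically often, the behavior that UCT transplants into tree search.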
Remi Munos received his PhD in 1997, completed a postdoc at Carnegie Mellon University, and became a professor at Ecole Polytechnique, France, in 2000. In 2006 he joined INRIA as a senior researcher in the SequeL project team in Lille. His main research topics include reinforcement learning and bandit theory.
MA3: Information Trustworthiness
Jeffrey Pasternack, Dan Roth, and V. G. Vinod Vydiswaran
Decision makers and citizens are heavily influenced these days by information they obtain from online resources — from news portals, online encyclopedias, blogs and forums, product websites and reviews, etc. However, the lack of control over what gets published online can often lead to dissemination of unreliable and misleading information. In such a scenario, how can one verify if a claim is true or determine which sources are trustworthy?
In this tutorial, the instructors will systematically consider various approaches that address this problem. After briefly motivating and introducing the problem, they will present approaches that compute the trustworthiness of sources in isolation. They will then present the class of iterative fact-finding approaches, which estimate the trustworthiness of sources in addition to the believability of claims. The discussion will then be extended to free-text claims, with approaches that use textual evidence in favor of or against a claim when computing the trustworthiness of sources and claims. Finally, the instructors will present the challenges in presenting credible information to users to help them validate claims.
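A minimal iterative fact-finder in the spirit of the approaches discussed can be sketched as follows; this follows the Hubs/Authorities-style "Sums" scheme, and the sources and claims are invented:

```python
def sums_fact_finder(claims_by_source, iterations=20):
    """'Sums' fact-finding: a source's trust is the sum of its claims'
    beliefs, a claim's belief the sum of its sources' trusts, each
    normalized by its maximum every round until a fixed point emerges."""
    sources = list(claims_by_source)
    claims = sorted({c for cs in claims_by_source.values() for c in cs})
    trust = {s: 1.0 for s in sources}
    belief = {c: 1.0 for c in claims}
    for _ in range(iterations):
        belief = {c: sum(trust[s] for s in sources
                         if c in claims_by_source[s]) for c in claims}
        top = max(belief.values())
        belief = {c: v / top for c, v in belief.items()}
        trust = {s: sum(belief[c] for c in claims_by_source[s])
                 for s in sources}
        top = max(trust.values())
        trust = {s: v / top for s, v in trust.items()}
    return trust, belief

# hypothetical example: two sources back claim c1, one backs claim c2
data = {"s1": {"c1"}, "s2": {"c1"}, "s3": {"c2"}}
trust, belief = sums_fact_finder(data)
```

The mutual reinforcement drives up the belief in the better-supported claim and the trust of its sources together, which is the basic dynamic that the more refined fact-finders in the tutorial build on.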
This tutorial is aimed at AI researchers who are interested in future research on trustworthiness of information. The goal of the tutorial is to present the audience with an exhaustive survey of relevant research on this topic and to outline future directions of research that may interest the AI community. No specific background knowledge is assumed of the audience.
Jeffrey Pasternack is a research scientist at Facebook. He has researched both nonprobabilistic (fact-finding) and generative (latent credibility analysis) models capable of leveraging prior knowledge for evaluating the credibility of information and of information sources; other recent work includes trust metrics, information extraction, transliteration, and constrained learning.
Dan Roth is a computer science professor at the University of Illinois at Urbana-Champaign and a Fellow of the ACM, AAAI, and the ACL, for his contributions to machine learning and natural language processing. He has published broadly in machine learning, natural language processing, and knowledge representation and reasoning, has chaired several major conferences (AAAI, ACL), and is currently the associate editor-in-chief of JAIR.
V. G. Vinod Vydiswaran completed his Ph.D. in computer science at the University of Illinois at Urbana-Champaign. His dissertation research focused on modeling and predicting the trustworthiness of online textual information, and addressed algorithmic, data-oriented, and user-centric challenges in computing information trustworthiness. His research interests include text informatics, natural language processing, machine learning, and information extraction.
MA4: Maximum Satisfiability and Extensions
Carlos Ansótegui
The maximum satisfiability (MaxSAT) problem is a generalization of the satisfiability problem. The idea is that sometimes not all constraints of a problem can be satisfied, and we instead try to satisfy as many of them as possible. The MaxSAT problem is a natural combinatorial optimization problem, and the technology developed for its resolution can be applied in several domains, such as scheduling and timetabling, FPGA routing, software package installation, design debugging, bioinformatics, and probabilistic reasoning. In the last five years, state-of-the-art MaxSAT solvers have advanced significantly and can now be applied to industrial problems.
In this tutorial, we will show how to encode optimization problems into MaxSAT. Then, we will study the algorithms behind successful MaxSAT solvers. In particular, we will emphasize the algorithms and implementation details that give the best performance on industrial instances. We will also show how to apply related technologies from satisfiability modulo theories (SMT) and integer linear programming (ILP). Finally, we will introduce extensions of MaxSAT and related problems, such as the weighted SMT problem, the minimum satisfiability problem, and the density of states of MaxSAT formulas. Introductory material on satisfiability will be included, so there are no prerequisites for participants.
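As a hedged illustration of the idea described above (not material from the tutorial itself), the following sketch satisfies as many clauses as possible by exhaustive enumeration; the clause encoding and the tiny instance are invented for this example:

```python
# Toy MaxSAT by brute force: a clause is a tuple of literals (a positive
# integer i means variable i is true; negative means false). We enumerate
# all assignments and keep the one satisfying the most clauses.
from itertools import product

def max_sat(clauses, n_vars):
    """Return (best_count, best_assignment) over all 2^n_vars assignments."""
    best = (-1, None)
    for bits in product([False, True], repeat=n_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        if satisfied > best[0]:
            best = (satisfied, bits)
    return best

# Conflicting clauses (x1), (NOT x1), (x1 OR x2): at most 2 can be satisfied.
count, assignment = max_sat([(1,), (-1,), (1, 2)], n_vars=2)
print(count)  # 2
```

Real solvers of the kind the tutorial covers avoid this exponential enumeration with unsatisfiable-core analysis and branch-and-bound, but the objective they optimize is the same.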
Carlos Ansótegui is an associate professor at the University of Lleida, Spain. His research interests include modeling and solving decision and optimization problems through the application of constraint programming techniques, in particular those from satisfiability, satisfiability modulo theories, or maximum satisfiability. He has coauthored several papers on efficient solvers for the MaxSAT problem. These solvers have ranked among the best at the international MaxSAT Evaluation.
MP1: Automatically Improving Empirical Performance: Algorithm Configuration and Selection
Frank Hutter, Lars Kotthoff, Yuri Malitsky, Barry O'Sullivan, and Lin Xu
Configurable algorithms for problem solving are ubiquitous in artificial intelligence. Configuring these algorithms, by finding good settings for their parameters, can make the difference between solving a problem quickly and taking hours to find the same solution. Similarly, different problem-solving algorithms perform well on different types of instances, and the instance-specific choice of which algorithm to use often has as large an impact as properly configuring a single algorithm.
This four-hour tutorial discusses automated methods for identifying good parameter settings for problem solving algorithms (the algorithm configuration problem), for selecting the best algorithm on a per-instance basis (the algorithm selection problem), and for the combined problem of selecting the best setting of a configurable algorithm on a per-instance basis. It covers related techniques and touches on advanced topics. Case studies from SAT-based formal verification, mixed integer programming (MIP), supervised machine learning, and other areas illustrate the results and show how to apply the techniques in practice.
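To make the algorithm configuration problem concrete, here is a minimal sketch (not one of the tutorial's tools): random search over a toy parameter space against a simulated solver. The function `simulated_runtime`, the parameter names, and the training instances are all invented for illustration:

```python
# Toy algorithm configuration: sample random parameter settings, evaluate
# the mean (simulated) runtime over training instances, and keep the best.
import random

def simulated_runtime(params, instance):
    # Hypothetical solver whose speed depends on how well "restart" matches
    # a numeric instance feature, with a penalty for high "noise".
    return abs(params["restart"] - instance) + params["noise"] ** 2

def configure(instances, n_samples=200, seed=0):
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_samples):
        params = {"restart": rng.uniform(0, 10), "noise": rng.uniform(0, 1)}
        cost = sum(simulated_runtime(params, i) for i in instances) / len(instances)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost

params, cost = configure([3.0, 3.5, 4.0])
```

The methods the tutorial covers replace this blind sampling with model-based search and racing over real solver runtimes, but the loop structure (propose a configuration, measure it on instances, keep the incumbent) is the same.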
Frank Hutter is a postdoctoral researcher at the University of British Columbia (Canada) and will take a faculty position at Freiburg University (Germany) in 2013. Hutter's research focuses on the use of machine learning and optimization for improving algorithms for solving NP-hard problems. His work on algorithm configuration was recognized with the 2010 CAIAC doctoral dissertation award for the best AI thesis in Canada. With his coauthors, he has received best paper awards from JAIR and LION, and numerous medals for the algorithm selection method SATzilla in recent SAT competitions. More details can be found at www.cs.ubc.ca/~hutter.
Lars Kotthoff is a postdoctoral researcher at University College Cork, Ireland. He has been doing research into algorithm selection and portfolios since he started his PhD at the University of St Andrews, Scotland. His research has resulted in a best paper award at SoCS and the award of several grants and fellowships. More details, including numerous publications on the subject, a recent survey of the literature, and a tabular summary, are available at 4c.ucc.ie/~larsko.
Yuri Malitsky is a postdoctoral researcher at the University College Cork in Ireland whose research focuses on applying machine learning techniques to improve the performance of combinatorial optimization and constraint satisfaction solvers. In particular, his work centers around automated algorithm configuration, algorithm portfolios, algorithm scheduling, and adaptive search strategies. Publications on the subject can be found at 4c.ucc.ie/~ymalitsky/publications.html.
Barry O'Sullivan holds the chair of constraint programming at the Department of Computer Science at University College Cork and is director of the Cork Constraint Computation Centre. O'Sullivan has been a Science Foundation Ireland (SFI) principal investigator since 2006. He is a Fellow of ECCAI (the European Coordinating Committee for Artificial Intelligence) and a senior member of AAAI (the Association for the Advancement of Artificial Intelligence). More details can be found at osullivan.ucc.ie.
Lin Xu is a PhD candidate about to defend his thesis on algorithm selection and automated portfolio construction at the Computer Science Department of the University of British Columbia in Vancouver, Canada. He is the lead author of the portfolio-based algorithm selection approach SATzilla, which received the 2010 IJCAI/JAIR Best Paper Prize and won numerous medals in recent international SAT competitions. More details can be found at cs.ubc.ca/~xulin730.
MP2: Equilibrium Computation
Nicola Gatti and Troels Bjerre Sørensen
The tutorial aims to provide a gentle introduction to computational results in noncooperative game theory. It will introduce the basics of noncooperative game theory: games in strategic and extensive form, strategies, solution concepts and their motivation, and examples of applications. Then, for the most widely adopted solution concepts, the computational complexity of verifying a solution and of finding an equilibrium will be discussed, and the main algorithms will be presented with some examples of applications. The tutorial will cover basics, well-established results, and recent advances, and will be four hours in duration.
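As a small concrete instance of the equilibrium-computation tasks sketched above, the following toy code verifies pure-strategy Nash equilibria of a two-player strategic-form game by enumeration (the game is the standard Prisoner's Dilemma, not material from the tutorial):

```python
# Enumerate pure-strategy Nash equilibria of a bimatrix game: a strategy
# profile (i, j) is an equilibrium if neither player can gain by deviating
# unilaterally.
def pure_nash_equilibria(A, B):
    """A[i][j], B[i][j]: payoffs to row and column player at profile (i, j)."""
    rows, cols = len(A), len(A[0])
    eqs = []
    for i in range(rows):
        for j in range(cols):
            row_best = all(A[i][j] >= A[k][j] for k in range(rows))
            col_best = all(B[i][j] >= B[i][k] for k in range(cols))
            if row_best and col_best:
                eqs.append((i, j))
    return eqs

# Prisoner's Dilemma: strategy 1 = defect; (defect, defect) is the unique
# pure equilibrium.
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)]
```

Verifying a pure profile is easy, as here; the hardness results the tutorial discusses concern mixed strategies and larger game representations.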
Our primary target audience is students and other novices who will benefit from an extensive introduction to one of the most important areas of artificial intelligence research. Basic knowledge of game theory and operations research is useful.
Nicola Gatti is currently an assistant professor with tenure at the Politecnico di Milano, Italy. His research topics are artificial intelligence, multiagent systems, and specifically algorithmic game theory. At Politecnico di Milano, he currently lectures on informatics systems (undergraduate course) and algorithmic game theory (PhD course), and assists with the graduate courses on artificial intelligence and on autonomous agents and multiagent systems in computer engineering.
Troels Bjerre Sørensen is a research fellow at the Centre for Discrete Mathematics and its Applications (DIMAP) at the University of Warwick, United Kingdom. His primary research area is the computational aspects of solution concepts in noncooperative game theory. He is currently a postdoctoral associate at Duke University.
MP3: Moving Agents in a Graph
Adriaan ter Mors
This four-hour tutorial will present both theoretical background and practically applicable algorithms for two interesting problems in the domain of coordinating agents that move in a shared environment modeled as a graph. In multiagent pathfinding, algorithms must be capable of dealing with situations of extreme congestion, which means equal emphasis is placed on ensuring that each agent can reach its goal location and on getting the agents there efficiently. Multiagent pathfinding has applications in, for example, video games, puzzle solving, and robotics.
Prioritized route planning, on the other hand, is not complete in general (though completeness can be achieved under specific assumptions about the application domain), and the focus of research is on the efficiency and robustness of route plans when they are executed in dynamic environments. Application domains of prioritized route planning include robotics, flight, and the control of automated guided vehicles.
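A toy sketch of the prioritized planning idea described above (an assumed simplification, not the tutorial's algorithms): each agent plans in turn, treating higher-priority agents' time-stamped positions as obstacles. The graph, reservation table, and time horizon are invented for this example:

```python
# Prioritized route planning on a graph: BFS over (node, time) states,
# avoiding nodes reserved by higher-priority agents at each timestep.
from collections import deque

def plan_route(graph, start, goal, reserved, horizon=20):
    """'reserved' maps a timestep to the set of nodes occupied at that time."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        node, t, path = queue.popleft()
        if node == goal:
            return path
        for nxt in graph[node] + [node]:  # move to a neighbor, or wait in place
            if nxt in reserved.get(t + 1, set()):
                continue
            if (nxt, t + 1) not in seen and t + 1 < horizon:
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None  # no plan within the horizon: incomplete in general, as noted above

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
# A higher-priority agent occupies B at time 1, so this agent waits at A.
route = plan_route(graph, "A", "C", reserved={1: {"B"}})
print(route)  # ['A', 'A', 'B', 'C']
```

The `None` return path illustrates the incompleteness noted above: a lower-priority agent can be boxed in by reservations even when a joint solution exists.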
Adriaan ter Mors completed his master's thesis at Delft University of Technology, winning the national prize for the best master's thesis in the category of technical computer science. He pursued his PhD on multiagent route planning at Rotterdam-based research company Almende, and defended his thesis in March 2010. He is currently working as a postdoc at Delft University of Technology, in the algorithmics group, specializing in multiagent route planning and pathfinding.
MP4: Semantic Web Rules: Fundamentals, Applications, and Standards
Benjamin Grosof, Mike Dean and Michael Kifer
The area of semantic rules is perhaps the most important frontier today for the semantic web's core technology and standards. Recent progress includes major initial industry standards from W3C and OMG, and fundamental advances in the underlying knowledge representation techniques in declarative logic programs, including most recently for efficient higher-order defaults with sound integration of first-order logic ontologies (OWL). Recent progress also includes methods to use rules for, or with, more expressive OWL ontologies; increasing integration of rules with query or search in SPARQL and relational databases; substantive translations between heterogeneous types of commercial rule engines; development of open-source tools for inferencing and interoperability; performance benchmarking of rule systems; a wide range of emerging applications, including in business, science, and trust; and accelerating industry investments or acquisitions in the technology, including by integrated software companies such as Oracle, IBM, and Microsoft. This tutorial will provide a comprehensive and up-to-date introduction to these developments and to the fundamentals of the key technologies and outstanding research issues involved. It will explore example application scenarios and overall requirements and challenges, and touch upon business and social value and strategy considerations.
Rules extend databases and ontologies with more powerful, flexible, and active forms of structured knowledge (as opposed to unstructured knowledge such as text), and have a number of close relationships to other aspects of the overall semantic web such as ontologies, query or search, trust, and services. There are a number of exciting research issues, important initial standards from W3C and OMG have been completed, and major software companies such as Oracle are now supporting web rules functionality in their core products. Most semantic web researchers (and developers) are not yet up to speed in this area. This half-day tutorial will help them get there.
The intended audience comprises most of the AAAI-13 audience, especially those interested in rules and their applications in the semantic web, ontologies, querying of RDF and relational data, business on the web, and services on the web. This includes researchers interested in core technologies and developers interested in standards and applications, as well as those interested in closely related areas such as query, search, question answering, natural language processing, collective intelligence, ontologies, policies, trust, security, wikis, e-commerce, financial services, and biomedical applications. The tutorial will cater to those first learning about semantic web rules, as well as those who already have some background in them. It will assume only background knowledge of the basics of logical knowledge representation (first-order logic), relational DBMSs, and XML. Basics of RDF and OWL are helpful, but not absolutely required.
Benjamin Grosof (lead presenter) is cofounder and colead of Coherent Knowledge Systems, a new startup on semantic technology that is commercializing a major research breakthrough in logic-based artificial intelligence. He also is president of Benjamin Grosof & Associates, an expert consulting business on software technology and related strategy. Previously he was a senior research program manager at Vulcan Inc. (2007-2013), the asset management company of Paul G. Allen (cofounder of Microsoft). There he conceived and led a large research program in the area of rule-based semantic technologies and artificial intelligence (AI). Before Vulcan, he was an IT professor at MIT Sloan (2000-2007) and a senior software scientist at IBM Research (1988-2000). He has pioneered semantic technology and industry standards for rules; the combination of rules with ontologies; the application of rules in e-commerce, e-contracts, and policies; and the acquisition of rules and ontologies from natural language (NL). He cofounded the influential RuleML industry standards design effort and prototyped it in SweetRules, the main bases for the W3C Rule Interchange Format (RIF) standard. He cofounded the International Conference on Rules and Rule Markup Languages for the semantic web (which since became the RR and RuleML conferences). He led the invention of several fundamental technical advances in knowledge representation, including courteous defeasibility, restraint bounded rationality, and the rule-based technique which rapidly became the currently dominant approach to commercial implementation of W3C OWL (Web Ontology Language) and the main basis of its RL (Rules Profile) standard. He also has extensive experience in machine learning, probabilistic reasoning, and user interaction design. His background includes four major industry software releases, two years in software startups, a Stanford PhD, a Harvard BA, and over 50 refereed publications.
Grosof has given numerous invited talks about rules on the web, and developed several MIT courses with substantial focus on it. He presented (with coauthors) related tutorials at the International Joint Conference on Artificial Intelligence (2001), ACM Conference on E-Commerce (2004), International Semantic Web Conferences (2004, 2005, 2006, 2009, 2010, 2012), and the WWW conference (2006, 2009).
Mike Dean is a principal engineer at Raytheon BBN Technologies. As principal investigator for integration and transition within the DARPA Agent Markup Language program, he chaired the Joint United States European Union ad hoc markup language committee responsible for the DAML+OIL and SWRL languages, coedited the OWL web ontology language reference, and was a member of the W3C RDF core and web ontology working groups and the architecture committee of the semantic web services initiative. He remains a member of the RuleML steering committee and the W3C rule interchange format working group. He currently works on the development of SILK. He is the developer of a number of semantic web tools and reference data sets and has been actively using SWRL and SILK in a variety of semantic web applications. He holds a B.S. in computer engineering from Stanford University.
Dean has given numerous talks on the semantic web, including an early tutorial, DAML+OIL for Application Developers, and tutorials (with Benjamin Grosof) at the International Semantic Web Conference (2004, 2005, 2006, 2009, 2010, 2012) and WWW (2006, 2009).
Michael Kifer is a professor with the Department of Computer Science, Stony Brook University, USA. He received his Ph.D. in computer science in 1984 from the Hebrew University of Jerusalem, Israel, and the M.S. degree in mathematics in 1976 from Moscow State University, Russia.
Kifer is a cofounder of Coherent Knowledge Systems, a new startup on semantic technology, and since 2012 he has been serving as the president of the Rules and Reasoning Association (RRA). His interests include web information systems, knowledge representation, and database systems. He has published four textbooks and numerous articles in these areas. In particular, he coinvented F-logic, HiLog, and transaction logic, which are among the most widely cited works in computer science and, especially, in semantic web research. Kifer serves on the editorial boards of several computer science journals and has chaired a number of conferences. Twice, in 1999 and 2002, he was a recipient of the prestigious ACM-SIGMOD Test of Time Award for his works on F-logic and object-oriented database languages. In 2006, Kifer was Plumer Fellow at Oxford University's St. Anne's College, and in 2008 he received the SUNY Chancellor's Award for Excellence in Scholarship.