AAAI-16 Tutorial Forum
Thirtieth Conference on Artificial Intelligence
February 12–17, 2016, Phoenix, Arizona, USA
What Is the Tutorial Forum?
The Tutorial Forum provides an opportunity for researchers and practitioners to spend two days each year freely exploring exciting advances in disciplines outside their normal focus. We believe this type of forum is essential for the cross-fertilization, cohesiveness, and vitality of the AI field. We all have a lot to learn from each other; the Tutorial Forum promotes the continuing education of each member of AAAI.
(All tutorials are 4 hours, including breaks, unless otherwise noted.)
Friday, February 12
9:00 AM – 1:00 PM
(except where noted; times include breaks, if applicable)
- FA1: CP-Nets
- FA2: Organ Exchanges — A Success Story of AI in Healthcare
- FA3: Recent Directions in Heuristic Search
- FA4: Symbolic Methods for Hybrid Inference, Optimization, and Decision-Making
2:00 PM – 6:00 PM
(except where noted; times include breaks, if applicable)
- FP1: AI for Disasters (1 hour, 45 minutes)
- FP2: Answer Set Programming Modulo Theories (1 hour, 45 minutes)
- FP3: AI Planning and Scheduling for Real-World Applications
- FP4: Deep Learning: from Foundations to Implementation
- FP5: Type-based Methods for Interaction in Multiagent Systems
Saturday, February 13
9:00 AM – 1:00 PM
(except where noted; times include breaks, if applicable)
- SA1: CogSketch
- SA2: Constraint (Logic) Programming
- SA3: Diffusion in Social Networks
- SA4: How to Automatically Machine Read the Web
2:00 PM – 6:00 PM
(except where noted; times include breaks, if applicable)
- SP1: Algorithm Configuration — A Hands-On Tutorial
- SP2: Algorithms for Maximum Satisfiability with Applications to AI
- SP3: Computational Epidemiology and Public Health Policy Planning
- SP4: Learning and Inference in Structured Prediction Models
FA1: CP-Nets
Thomas E. Allen, Judy Goldsmith, and Francesca Rossi
Conditional preference networks (CP-nets) are a compact and intuitive representation of preferences. They provide a modelling and reasoning tool for conditional qualitative preferences. Combinatorial preference domains abound, and CP-nets have a wide range of interesting and important applications, including automated negotiation, interest-matching in social networks, cybersecurity, and as aggregation primitives for making group decisions.
It is known that certain reasoning tasks, in particular that of dominance (for example, given a CP-net N and two outcomes or items, which, if either, is preferred according to N?), are computationally complex (NP-hard or worse). However, there is a growing base of heuristic algorithms for dominance testing. This bolsters the case for more widespread use of CP-nets in practice. In this tutorial we hope to introduce participants to CP-nets and methods for using them.
Those familiar with preference representations will be interested in our demonstration of a new GUI for visualizing CP-nets, recent approaches to dominance testing, an algorithm to generate CP-nets i.i.d., and results of extensive human-subjects experiments.
We expect participants to complete the session with an increased appreciation for the usefulness and usability of CP-nets.
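As a concrete, hedged illustration of these ideas, the sketch below encodes a two-variable CP-net and answers a dominance query by brute-force search over improving flips. The network, variables, and values are invented for illustration, and this search is exponential in the worst case, which is exactly what motivates the heuristic dominance-testing algorithms discussed in the tutorial.

```python
from collections import deque

# A toy CP-net over two variables. Each variable has a list of parents and
# a CPT mapping each parent assignment (a tuple) to the preference order
# over its own values, most preferred first.
cpnet = {
    "dinner": {"parents": [], "cpt": {(): ["fish", "meat"]}},
    "wine":   {"parents": ["dinner"],
               "cpt": {("fish",): ["white", "red"],
                       ("meat",): ["red", "white"]}},
}

def improving_flips(outcome):
    """Yield outcomes reachable by changing one variable to a value that is
    strictly more preferred given its parents' current values."""
    for var, spec in cpnet.items():
        ctx = tuple(outcome[p] for p in spec["parents"])
        order = spec["cpt"][ctx]
        rank = order.index(outcome[var])
        for better_val in order[:rank]:
            yield dict(outcome, **{var: better_val})

def dominates(o1, o2):
    """True if o1 is reachable from o2 via a chain of improving flips."""
    frontier, seen = deque([o2]), {tuple(sorted(o2.items()))}
    while frontier:
        cur = frontier.popleft()
        if cur == o1:
            return True
        for nxt in improving_flips(cur):
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                frontier.append(nxt)
    return False

worse  = {"dinner": "fish", "wine": "red"}
better = {"dinner": "fish", "wine": "white"}
print(dominates(better, worse))  # True: fish -> white is an improving flip
```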
Thomas E. Allen is a PhD student in computer science at the University of Kentucky. He hopes to graduate in spring 2016. His research includes work on CP-net algorithms for learning, reasoning, and random generation. He is interested in applications of preference models to assistive environments and ethics in AI.
Judy Goldsmith is a professor of computer science at the University of Kentucky. She's been active in the AI community since 1996. Her research areas include many aspects of decision making, including preference elicitation, representation, and aggregation; decision making under uncertainty; computational social choice; computational learning theory, and computational complexity.
Francesca Rossi is a professor of computer science at the University of Padova, Italy, currently on leave at the IBM T.J. Watson research center. Her research interests include constraint reasoning, preferences, multi-agent systems, and computational social choice. She is the associate editor in chief of JAIR and has been president of ACP and of IJCAI.
FA2: Organ Exchanges: A Success Story of AI in Healthcare
John Dickerson and Tuomas Sandholm
This tutorial covers past and current research in organ exchange, a method by which patients in need of an organ can swap willing but incompatible donors. Throughout, it also gives a higher-level overview of the steps taken to translate a purely academic idea into a large fielded healthcare system. The tutorial focuses on the computational aspects of organ exchange, starting by introducing past research that has now been implemented in real-world kidney exchange (or deemed impractical and not yet fielded) and then by covering the current research problems available at the intersection of AI, optimization, and economics. It especially dives into the computational methods developed and used to solve extremely large discrete optimization problems that reflect kidney exchange, along with the interplay between modeling decisions, computational tractability, exchange efficiency, equity, dynamism in matching, and a variety of other real-world constraints and considerations. While we focus on kidney exchange, research toward exchanges for other organs such as livers and lungs, as well as cross-organ exchanges, will also be covered.
This tutorial will be accessible to the general attendee of AAAI and should not require specialized knowledge in a specific subfield of AI.
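To make the optimization flavour concrete, here is a deliberately tiny sketch, not the fielded software (which relies on integer programming and specialized search): it enumerates the 2- and 3-cycles of an invented donor-patient compatibility graph and brute-forces a vertex-disjoint set of cycles covering the most pairs.

```python
from itertools import permutations, combinations

# Hypothetical patient-donor pairs; edge (a, b) means pair a's donor is
# compatible with pair b's patient.
pairs = {1, 2, 3}
edges = {(1, 2), (2, 1), (2, 3), (3, 1), (1, 3), (3, 2)}

def cycles(max_len=3):
    """All directed cycles of length 2..max_len, as frozensets of pairs."""
    found = set()
    for k in range(2, max_len + 1):
        for perm in permutations(pairs, k):
            if all((perm[i], perm[(i + 1) % k]) in edges for i in range(k)):
                found.add(frozenset(perm))
    return found

def best_matching():
    """Brute force: pick vertex-disjoint cycles covering the most pairs."""
    cs = list(cycles())
    best_covered = 0
    for r in range(len(cs) + 1):
        for subset in combinations(cs, r):
            union = set().union(*subset) if subset else set()
            disjoint = sum(len(c) for c in subset) == len(union)
            if disjoint and len(union) > best_covered:
                best_covered = len(union)
    return best_covered

print(best_matching())  # 3: all pairs matched, e.g. via the cycle 1->2->3->1
```

Real exchanges involve thousands of pairs, altruistic donor chains, and failure probabilities, which is why the scalable solvers covered in the tutorial are needed.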
John Dickerson is a final-year Ph.D. candidate in computer science at Carnegie Mellon University. He has published extensively on kidney exchange in top-tier AI conferences; that work has set policy at the United Network for Organ Sharing (UNOS) nationwide kidney exchange. He is an NDSEG Fellow, Facebook Fellow, and Siebel Scholar.
Tuomas Sandholm is a professor in Carnegie Mellon University's Computer Science Department. His algorithms run the national kidney exchange of 143 transplant centers, and his kidney exchange designs have been adopted worldwide. He has published 450 papers and has fielded 800 of the largest-scale combinatorial auctions, totaling $60 billion. He is founder and CEO of Optimized Markets, a company for advertising campaign sales and scheduling.
FA3: Recent Directions in Heuristic Search
Ariel Felner, Wheeler Ruml, and Nathan Sturtevant
Heuristic state-space search is a fundamental technique in AI and virtually every AI class includes A* and IDA*. But what more advanced topics should be included? What is exciting in heuristic search today? This tutorial provides overviews of important results in three areas within heuristic search that have seen recent progress, presented by researchers active in those areas:
- Optimal search. This topic has always been at the core of this field.
- Suboptimal search. Since it can require significant time and memory to prove that a solution is optimal, many applications can benefit from search algorithms that optimize a different objective.
- Search in explicit domains. Problems such as grid pathfinding can be very challenging under tight time, memory, or solution constraints.
Throughout this tutorial, we aim to look at a broad range of approaches, search scenarios and recent understandings. We will highlight the different characteristics of different search problems, and identify areas that have not been well-explored.
We assume that participants are familiar with the basics of state-space search (for example, the A* algorithm and admissible heuristics).
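For readers who want a concrete anchor, the sketch below shows weighted A*, one of the simplest suboptimal-search variants alluded to above: inflating the heuristic by a factor w trades solution quality (cost at most w times optimal) for speed. The grid and weight here are purely illustrative.

```python
import heapq

def weighted_astar(start, goal, grid, w=1.5):
    """Weighted A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    Priority is f = g + w*h; with w > 1 the search is greedier."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_list = [(w * h(start), 0, start)]
    g = {start: 0}
    while open_list:
        f, gc, cur = heapq.heappop(open_list)
        if cur == goal:
            return gc
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = gc + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    heapq.heappush(open_list, (ng + w * h(nxt), ng, nxt))
    return None  # no path

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(weighted_astar((0, 0), (2, 0), grid))  # 6: path around the blocked cells
```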
Wheeler Ruml is an associate professor of computer science at the University of New Hampshire. He is cofounder of the International Symposium on Combinatorial Search (SoCS), associate editor of JAIR, and was general cochair of ICAPS-2014. Even after many years, he still thinks search algorithms are pretty cool.
Ariel Felner is an associate professor at Ben-Gurion University. He is the chair of the Israeli Association for Artificial Intelligence (IAAI) and a council member of SoCS. He is interested in all aspects of heuristic search, with particular interest in the pedagogical and historical aspects of teaching concepts in this field.
Nathan Sturtevant is an associate professor at the University of Denver. He is the SoCS Treasurer and the founder of the international Grid-Based Path Planning competition and the associated grid benchmark problems. Nathan does research on large-scale heuristic search, game playing, and the application of heuristic search to video games.
FA4: Symbolic Methods for Hybrid Inference, Optimization, and Decision-Making
Scott Sanner
To date, our ability to perform exact closed-form inference or optimization in hybrid domains (that is, containing mixed discrete and continuous variables) is largely limited to special well-behaved cases. This tutorial argues that with an appropriate representation and data structure, we can vastly expand the class of models for which we can perform exact, closed-form inference.
In this tutorial, I review the algebraic decision diagram (ADD) and introduce an extension to continuous variables — termed the extended ADD (XADD) — to represent arbitrary piecewise functions over discrete and continuous variables and show how to efficiently compute elementary arithmetic operations, integrals, and maximization for these functions. Then I cover a wide range of novel applications where the XADD may be applied: (1) exact learning and inference in expressive discrete and continuous variable graphical models, (2) factored, parameterized linear and quadratic optimization, and (3) exact solutions to continuous state, action, and observation MDPs and POMDPs.
Scott Sanner is an assistant professor of computer science at Oregon State University, having previously spent eight years in the Machine Learning group at National ICT Australia (NICTA). Sanner earned a PhD from the University of Toronto, an MS from Stanford, and a double BS from Carnegie Mellon. Sanner's research interests span AI, machine learning, information retrieval, and operations research.
FP1: AI for Disasters
Robin R. Murphy
(1 hour, 45 minutes)
The goal of the tutorial is to enable researchers in all areas of artificial intelligence to determine how their research might apply to disaster response and recovery. Participants will be presented with a domain theory of disasters from an informatics perspective and the opportunities for artificial intelligence. The domain theory will introduce the organization of disaster responses and recovery operations, the difference between disaster response and recovery and disaster relief, the phases of a disaster and decisions made by the stakeholders during each phase, the spatial distribution of stakeholders, and the roles of formal and informal stakeholders, especially their information needs. The tutorial will concentrate on the current state of the practice as it impacts the design of intelligent systems: responder access to computing and networking resources; the three emerging sources of data overload (mobile devices, UAVs, and social networks); commonly used decision support tools and visualization packages; constraints on informatics (environmental, connectivity, cost, and regulatory, including privacy and public accountability); and recommendations on working with emergency professionals. Researchers will be better able to converse with emergency professionals and establish relationships.
Robin R. Murphy is Raytheon Professor of Computer Science and Engineering at Texas A&M University and directs the Center for Robot-Assisted Search and Rescue and Center for Emergency Informatics. She received a B.M.E., an M.S., and a Ph.D. in computer science in 1980, 1989, and 1992, respectively, from the Georgia Institute of Technology. She has over 150 publications on artificial intelligence, human-robot interaction, and robotics, including Introduction to AI Robotics and Disaster Robotics, which won the 2014 PROSE honorable mention for engineering and science. An IEEE Fellow, a TED speaker, and a founder of Roboticists Without Borders, she has worked in disaster robotics research and deployment since 1995. She has deployed ground, air, and marine robots at 19 disasters worldwide, including the 9/11 World Trade Center disaster, Hurricane Katrina, and Fukushima Daiichi. Her numerous professional awards include the Motohiro Kisoi award (Japan), the AUVSI Foundation Al Aube award, and the ACM Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics. She has been declared an "Innovator in AI" and "Agent of Change" by TIME, an "Alpha Geek" by WIRED Magazine, one of the "Most Influential Women in Technology" by Fast Company, and a "Top 25 Doers, Dreamers and Drivers for 2015" by Government Technology Magazine.
FP2: Answer Set Programming Modulo Theories
Joohyung Lee
(1 hour, 45 minutes)
Answer set programming (ASP) is a successful declarative programming method oriented towards solving combinatorial and knowledge intensive problems. It has well-developed foundations, efficient reasoning systems, and a methodology of use tested on a number of industrial applications. The relationship between ASP and propositional satisfiability (SAT) has led to a method of computing answer sets using SAT solvers and techniques adapted from SAT.
Some recent extensions of ASP aim to overcome its propositional setting by extending its mathematical foundation and integrating ASP with other computing paradigms. The tutorial will cover "Answer Set Programming Modulo Theories," which tightly integrates ASP with Satisfiability Modulo Theories (SMT), thereby overcoming some of the computational limitations of ASP and some of the modeling limitations of SMT. A high-level action language based on ASPMT allows for succinct representation of hybrid transition systems, where discrete changes and continuous changes coexist. In a broader sense, ASPMT covers an extension of ASP combined with any external computation sources.
The target audience is those who are interested in declarative problem solving methods, recent developments in ASP and SMT, and their relations. Basic knowledge of first-order logic will be assumed, but no prior knowledge of ASP and SMT is required.
Joohyung Lee is an associate professor at Arizona State University. His research interests are in knowledge representation, logic programming, commonsense reasoning, and computational logic. He received a Ph.D. from the University of Texas at Austin. His 2004 paper on ASP received an Honorable Mention for the Outstanding Paper Award.
FP3: AI Planning and Scheduling for Real-World Applications
Steve Chien and Daniele Magazzeni
This tutorial provides an overview of applications of AI planning and scheduling in a number of real-world domains: on earth, at sea, and in space.
We will start with a brief introduction to AI Planning and PDDL for modelling applications. We then give an overview of some recent applications of planning and scheduling in various domains (including smart-grid, energy management, underwater mission control), and present some recent advances in planning and robotics integration.
In the second part we present planning and scheduling applications in space, focusing on timeline-based scheduling, coverage observation scheduling, and periodic and geometry-driven scheduling. We will present experiences from various missions, including the Modified Antarctic Mapping Mission, Orbital Express, the NISAR and Europa Multiple Flyby missions, and the Rosetta mission.
The target audience is the broad AI community (including researchers in robotics). General knowledge of artificial intelligence concepts such as search and optimization is presumed.
Steve Chien is a senior research scientist at the Jet Propulsion Laboratory, California Institute of Technology. He has been awarded four NASA medals for his work in artificial intelligence for space. He has led the deployment of AI software for the NASA EO-1, NASA MER, and ESA Rosetta Orbiter missions.
Daniele Magazzeni is a lecturer in artificial intelligence at King's College London. His research explores the links between automated planning and model-checking, with a particular focus on hybrid systems and planning/robotics integration. He's coeditor-in-chief of AI Communications and cochair of ICAPS-16.
FP4: Deep Learning: from Foundations to Implementation
Reza Borhani, Jeremy Watt and Aggelos K. Katsaggelos
Due to their wide applicability, the tools of deep learning have quickly become some of the most important for today's researchers in computer vision, machine learning, robotics, and related fields. However, understanding the foundations of deep learning can be intimidating to the uninitiated, as can comprehending the details of their implementation, which demands an understanding of numerical optimization often unfamiliar to those with a traditional computer science or engineering background. In this tutorial we provide a user-friendly introduction to the basic tools of deep learning, describe their many applications, discuss how they relate to more traditional ideas in machine learning, and provide an introduction to the most useful techniques from numerical optimization crucial to their implementation. These include an overview of essential concepts from nonlinear programming, stochastic gradient descent, backpropagation, greedy techniques, as well as regularization and momentum methods. To make full use of this tutorial one only needs a basic understanding of linear algebra and vector calculus. No prior knowledge of numerical optimization or machine learning is expected. Since all the demos in this tutorial are implemented in Python/Matlab, familiarity with the syntax of either of these languages is a plus, as we intend to provide the code for all the algorithms presented in this talk.
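As a small worked example of the backpropagation and gradient descent topics listed above (not taken from the tutorial's own demos), the following trains a one-hidden-layer network on the XOR problem; the architecture, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# One hidden layer (tanh), linear output, mean-squared-error loss.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def mse():
    h = np.tanh(X @ W1 + b1)
    return float(((h @ W2 + b2 - y) ** 2).mean())

loss_before = mse()
for step in range(3000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # backward pass: chain rule, layer by layer
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)      # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    # (full-batch) gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
loss_after = mse()

print(round(loss_before, 3), round(loss_after, 5))
```

Stochastic gradient descent would sample a subset of the rows of X per step instead of using the full batch.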
Reza Borhani received the B.Sc. degree in electrical engineering from Sharif University of Technology, and the M.Sc. degree in computer science from Northwestern University. He is currently a Ph.D. candidate at the Image and Video Processing Laboratory at Northwestern University, where he conducts research in the intersection of machine learning and numerical optimization. His personal homepage is at www.rzborhani.com.
Jeremy Watt received the Ph.D. degree in electrical engineering and computer science from Northwestern University. He also holds a B.S. degree in religious studies and an M.A. in pure mathematics, both from Indiana University. His research interests lie in machine learning and computer vision, as well as numerical optimization.
Aggelos K. Katsaggelos received the Diploma degree in electrical and mechanical engineering from the Aristotelian University of Thessaloniki, Greece, in 1979, and the M.S. and Ph.D. degrees in electrical engineering from the Georgia Institute of Technology, in 1981 and 1985, respectively. He is the Joseph Cummings professor of electrical engineering and computer science at Northwestern University, where he also heads the Image and Video Processing Laboratory.
FP5: Type-Based Methods for Interaction in Multiagent Systems
Stefano Albrecht and Prashant Doshi
Inter-agent interactions are fundamental to a wide range of research in AI. A method for modeling interactions that has received much interest in AI over the past decade is to reason about the interaction using a space of predefined behaviours. Specifically, in the absence of knowledge of the true behaviour of the other agent, the approach supposes that the other agents' behaviours are drawn from a set of known or hypothesised behaviours, and decides the agent's own actions in expectation over these behaviours, whose likelihoods may change as the agents act and observe. We refer to a private and sufficient abstraction of a hypothesised behaviour as a type.
This idea has been studied extensively in economics by game theorists and investigated in preliminary ways by sub-groups in AI for game playing and planning. Preliminary outcomes emphatically suggest that reasoning about types is an important tool for solving problems involving interactions with high uncertainty about agent behaviours and in which online learning based on trial-and-error is undesirable or infeasible. This half-day tutorial will provide a comprehensive and unified introduction to the theory and practice of type-based methods spanning early research in game theory to the latest work in AI, as well as outlining open problems for future research. The tutorial requires no prior knowledge of multiagent theory but assumes familiarity with basic probability and statistics.
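The core loop of a type-based method can be sketched in a few lines: maintain a posterior over a hypothesised set of types and update it by Bayes' rule after each observed action. The types, action names, and probabilities below are invented for illustration.

```python
# Hypothesised opponent types, each mapping actions to probabilities.
types = {
    "aggressive": {"cooperate": 0.1, "defect": 0.9},
    "friendly":   {"cooperate": 0.8, "defect": 0.2},
}
posterior = {t: 0.5 for t in types}  # uniform prior over the two types

def update(action):
    """Bayes' rule: P(type | action) is proportional to
    P(action | type) * P(type)."""
    global posterior
    unnorm = {t: types[t][action] * posterior[t] for t in types}
    z = sum(unnorm.values())
    posterior = {t: p / z for t, p in unnorm.items()}

for observed in ["defect", "defect", "cooperate"]:
    update(observed)

print(max(posterior, key=posterior.get))  # "aggressive"
```

The agent would then choose its own action to maximise expected payoff under this posterior, which is the "acting in expectation" step described above.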
Stefano Albrecht is a postdoctoral researcher and Humboldt fellow at UT Austin. His research made a number of contributions to the type-based method and led to publications in leading AI conferences, including AAAI, UAI, and AAMAS. He is cochair of the AAAI workshop series on Multiagent Interaction without Prior Coordination (MIPC) and editor of a special issue on MIPC at the AAMAS Journal.
Prashant Doshi is an Associate Professor of Computer Science and faculty member of the AI Institute at The University of Georgia, USA. His research interests lie in decision-making and specifically in decision-making under uncertainty in multiagent settings and in game theory. He has also had short stints at the IBM T. J. Watson Research Center where he worked in the eBusiness Group. He has published extensively in journals, conferences, and other forums in the fields of agents and AI. He has served on the program and reviewing committees of several conferences, including AAMAS and AAAI, and workshops in the field of agents. Prof. Doshi has taught introductory courses on AI to undergraduate and graduate students for more than 10 years, and an advanced course on decision making under uncertainty for more than 5 years, all of which were well received among the students. He has been a cospeaker on a longstanding tutorial on decision making in multiagent settings that has been held at the AAMAS conference consecutively for 8 years. He has given numerous presentations in conferences and invited talks at research institutions and universities.
SA1: CogSketch
Kenneth D. Forbus, Maria D. Chang, and Matt McLure
People sketch to work out ideas and to communicate with others, which makes sketch understanding a natural way to explore spatial reasoning and to create software that works more naturally with people. CogSketch is a sketch understanding system that supports AI and cognitive science research and sketch-based educational software. CogSketch incorporates visual processing of digital ink, qualitative spatial representations, analogical matching over integrated spatial and conceptual representations, and a large open-source knowledge base. CogSketch is used to build reasoning and learning systems, for gathering data in laboratory experiments, to simulate human visual reasoning, and in classroom experiments in geoscience, biology, and engineering. You can download CogSketch from Northwestern University.
This tutorial is intended for AI researchers who are interested in spatial reasoning, diagrammatic reasoning, building new kinds of intelligent educational software, or are interested in using sketch understanding in their research. Attendees will learn the basics of CogSketch and how it might be used to facilitate their research. This includes a summary of its representations and processing, and the support it provides for building AI systems, educational software, and cognitive science experiments. The equivalent of an AI undergraduate course should be sufficient preparation.
Kenneth D. Forbus is the Walter P. Murphy Professor of Computer Science and a professor of education at Northwestern University. He leads the CogSketch team, which is a project of the NSF-funded Spatial Intelligence and Learning Center. He is a Fellow of the AAAI, ACM, and the Cognitive Science Society.
Maria D. Chang is a Ph.D. candidate at Northwestern University. Her research interests include knowledge representation and reasoning, human-computer interaction, and artificial intelligence in education. She has worked with multiple instructors using CogSketch in their classes, and her thesis involves using CogSketch for multimodal knowledge capture.
Matt McLure is a Ph.D. candidate at Northwestern University. His research interests include learning and reasoning from sketches, including recognition and generation. He has developed visual representations for CogSketch that facilitate visual reasoning and learning, and for generating ink to draw learned concepts.
SA2: Constraint (Logic) Programming
Roman Barták
Constraint logic programming (CLP) is a natural extension of logic programming, where unification is generalized to constraint satisfaction. Hence Prolog can be seen as one of the most suitable host languages for constraint programming.
This tutorial explains mainstream constraint satisfaction techniques and shows how they influence problem modeling in the context of logic programming. The core notions of constraint consistency, in particular arc consistency will be explained together with major consistency algorithms. The notion of a global constraint will also be explained and demonstrated with examples. After that we will show how consistency techniques can be integrated with search to obtain a complete constraint solver. In the second part, the tutorial will focus on modeling examples. It will show how to model problems using constraint logic programming. We will discuss some modeling rules and show how to improve efficiency of constraint programs.
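As a hedged illustration of the consistency techniques described above, here is a compact AC-3 sketch in Python (the tutorial itself works in Prolog-based CLP systems); the variables and ordering constraints are invented.

```python
from collections import deque

# Domains and directed binary constraints for the toy problem X < Y < Z.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
constraints = {
    ("X", "Y"): lambda a, b: a < b,
    ("Y", "X"): lambda a, b: b < a,
    ("Y", "Z"): lambda a, b: a < b,
    ("Z", "Y"): lambda a, b: b < a,
}

def ac3():
    """Revise arcs until every value has a support in the neighbouring
    domain; return False if some domain becomes empty (inconsistency)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        check = constraints[(x, y)]
        removed = {a for a in domains[x]
                   if not any(check(a, b) for b in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False
            # x's domain shrank, so re-examine arcs pointing at x
            queue.extend((u, v) for (u, v) in constraints if v == x)
    return True

ac3()
print(domains)  # X < Y < Z forces X={1}, Y={2}, Z={3}
```

In a full solver this propagation step is interleaved with search, as the first part of the tutorial describes.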
The tutorial is targeted mainly at students and researchers interested in declarative programming techniques for solving combinatorial optimization problems such as planning, scheduling, routing, and configuration. No prior knowledge of constraint programming is required; the tutorial only assumes basic understanding of Prolog programming.
Roman Barták works as a full professor and a researcher at Charles University in Prague (Czech Republic), where he leads the Constraint Satisfaction and Optimization Research Group. His research work focuses on techniques of constraint satisfaction and modeling and their application to planning, scheduling, and other areas.
SA3: Diffusion in Social Networks
Paulo Shakarian
This tutorial shall provide an overview of the major results on diffusion in social networks covering a wide variety of diffusion models from multiple disciplines (computer science, physics, biology), including the independent cascade and linear threshold models, deterministic tipping point models, graphical voter models, evolutionary graph theory models, and graphical epidemiology models (that is, SIR/SIS). In particular, we shall focus on how these models have been studied in various artificial intelligence paradigms, such as agent-based modelling, logic programming, game theory, learning, and data mining.
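As one concrete example of the models listed above, the independent cascade model can be simulated in a few lines: each newly activated node gets a single chance to activate each out-neighbour with the probability on that edge. The graph and probabilities below are invented, and expected spread is estimated by Monte Carlo simulation, as is standard.

```python
import random

# A small directed graph: node -> list of (neighbour, activation probability).
graph = {
    "a": [("b", 0.9), ("c", 0.9)],
    "b": [("d", 0.9)],
    "c": [("d", 0.9)],
    "d": [],
}

def cascade(seeds, rng):
    """Run one independent cascade from the seed set; return active nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for node in frontier:
            for nbr, p in graph[node]:
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    new.append(nbr)
        frontier = new
    return active

rng = random.Random(0)
runs = [len(cascade({"a"}, rng)) for _ in range(1000)]
avg = sum(runs) / len(runs)
print(round(avg, 2))  # estimated expected spread from seeding node "a"
```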
Paulo Shakarian is an assistant professor at Arizona State University, where he also directs the Cyber-Socio Intelligent Systems (CySIS) Laboratory. He is a recipient of the AFOSR Young Investigator Award, the DoD Minerva Award (as co-PI), and the DARPA Service Chiefs' Fellowship. His research has been featured in media including Popular Science, The Economist, and MIT Technology Review, where his work was included in their "Best of 2013" feature. He has also authored several books, including Elsevier's Introduction to Cyber-Warfare and the forthcoming Diffusion in Social Networks.
SA4: How to Automatically Machine Read the Web
Estevam Rafael Hruschka Junior
The web is inundated with information in many different formats, including semi-structured and unstructured data. Machine reading is a research area aiming to build systems that can read natural-language information, extracting knowledge and storing it in (structured) knowledge bases. In this tutorial the idea of automatically reading the web using machine reading techniques will be explored. Four of the most successful machine reading approaches intended to read the web (namely DBpedia, Yago, OIE (Open Information Extraction), and NELL) will be presented and discussed. Principles, subtleties, as well as current results of each approach will be addressed. On-line resources (from each approach) will be explored and the future directions in each project will be pointed out. In spite of mainly focusing on the four aforementioned projects, some other independent contributions on machine reading the web will be noted as related work, together with two industrial projects, Google Knowledge Vault and IBM Watson.
The tutorial is intended to prepare attendees to start new research in this area and to introduce the state of the art, the main challenges, some of the most promising future directions, and the available resources.
Estevam R. Hruschka Jr. has been a young research fellow at FAPESP (in Brazil) and was a visiting professor (2008–2010) at Carnegie Mellon University, where he has since been helping to build the Read the Web project. Currently he is a research fellow at CNPq (in Brazil) and an associate professor at the Federal University of São Carlos.
SP1: Algorithm Configuration: A Hands-On Tutorial
Frank Hutter and T. Marius Lindauer
Algorithms with free parameters abound in AI and beyond, and the setting of these parameters often makes the difference between failing to solve a problem and achieving state-of-the-art performance. Since manual parameter tuning is tedious and time-consuming, in recent years, general methods for solving this algorithm configuration problem have been developed in the AI literature, based on various principles from machine learning, statistics, and optimization. By now there exist many success stories of algorithm configuration, for both hard combinatorial problems (for example, SAT solving, AI planning, mixed integer programming, timetabling, etc.) and supervised machine learning (for example, Auto-WEKA, auto-sklearn, and parameter tuning in deep learning).
In this hands-on tutorial, we will demonstrate how to effectively use algorithm configuration in practice. Attendees do not require any specialized knowledge and will walk away with hands-on experience in configuring various algorithms that will allow them to apply algorithm configuration in their respective fields of research.
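As a minimal stand-in for the tools used in the tutorial (such as SMAC, which are model-based rather than purely random), the sketch below configures an invented target algorithm by random search over its parameter space, scoring each configuration by average cost over a small set of benchmark instances.

```python
import random

def target_algorithm(instance, restart_prob, noise_level):
    """A pretend runtime function with a known sweet spot at
    restart_prob=0.3, noise_level=1; purely for illustration."""
    return (instance
            + 100 * (restart_prob - 0.3) ** 2
            + 50 * (noise_level - 1) ** 2)

instances = [1.0, 2.0, 3.0]   # training instances for configuration
rng = random.Random(0)

best_cfg, best_cost = None, float("inf")
for _ in range(200):
    # sample a candidate configuration uniformly from the parameter space
    cfg = {"restart_prob": rng.uniform(0, 1),
           "noise_level": rng.uniform(0, 3)}
    cost = sum(target_algorithm(i, **cfg) for i in instances) / len(instances)
    if cost < best_cost:
        best_cfg, best_cost = cfg, cost

print(best_cfg, round(best_cost, 2))
```

Model-based configurators improve on this by fitting a surrogate model of cost as a function of the parameters and by racing configurations across instances to cut off poor candidates early.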
Frank Hutter is an Emmy Noether Research Group Lead (equivalent to an assistant professor) in computer science at the University of Freiburg (Germany). He works at the intersection of machine learning and combinatorial optimization, with a focus on developing techniques for automating the design and analysis of high-performance algorithms.
Marius Lindauer is a postdoctoral research fellow in Computer Science at the University of Freiburg (Germany). He works on algorithm configuration and selection using techniques from machine learning and optimization to improve the performance of arbitrary algorithms.
SP2: Algorithms for Maximum Satisfiability with Applications to AI
Fahiem Bacchus and Matti Järvisalo
Many important AI problems involve an optimization component. Maximum satisfiability (MaxSAT) is an optimization version of Boolean satisfiability (SAT) that can be applied to a variety of optimization problems that arise in AI. From a simple propositional encoding one can automatically obtain a working optimizer for one's application by using a MaxSAT solver. This can be much less time consuming than building a custom solution to one's optimization problem. In fact, a well-engineered MaxSAT solver can often be more efficient than a custom solution. MaxSAT also offers an alternative to integer programming (IP) in many AI applications. In this tutorial we will introduce MaxSAT, give an overview of the different techniques that have been developed for solving MaxSAT (with a focus on which techniques tend to be more effective on which types of problems), and explain via case studies how MaxSAT has been used to solve problems from different areas of AI. After this tutorial you should be better equipped to assess whether or not MaxSAT would be useful in your work and better able to use modern MaxSAT solvers in your work. The tutorial assumes no background knowledge beyond what a new graduate student in Computer Science would possess.
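To make the encoding idea concrete, the following toy sketch separates hard clauses (which must hold) from weighted soft clauses (whose satisfied weight is maximised) and finds an optimal assignment by enumeration. The clauses are invented, and real MaxSAT solvers of course use far more sophisticated SAT-based techniques such as core-guided search rather than enumeration.

```python
from itertools import product

# Clauses are lists of literals: positive int = variable, negative = negation.
hard = [[1, 2], [-1, -2]]           # exactly one of x1, x2
soft = [([1], 3), ([2], 2)]         # prefer x1 (weight 3) over x2 (weight 2)
n_vars = 2

def satisfied(clause, assign):
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

best, best_w = None, -1
for bits in product([False, True], repeat=n_vars):
    assign = {i + 1: b for i, b in enumerate(bits)}
    if all(satisfied(c, assign) for c in hard):          # hard clauses first
        w = sum(wt for c, wt in soft if satisfied(c, assign))
        if w > best_w:
            best, best_w = assign, w

print(best, best_w)  # {1: True, 2: False} with weight 3
```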
Fahiem Bacchus is a professor of computer science at the University of Toronto. His research fits broadly in the area of knowledge representation and reasoning, a subfield of artificial intelligence. He has made a number of key contributions during his career, including the development of logics for representing different forms of probabilistic knowledge and automated planning algorithms that can exploit domain-specific control knowledge expressed in linear temporal logic (LTL). For the past 15 years his work has concentrated on automated reasoning algorithms for solving various forms of the satisfiability problem: finding a model (SAT), counting the number of models (#SAT), solving quantified Boolean formulas (QBF), and finding optimal models (MaxSAT). His group has been successful in building award-winning solvers for all of these problems. He has served as the program chair of several major AI conferences, including Uncertainty in AI (UAI), the International Conference on Automated Planning and Scheduling (ICAPS), and the International Conference on Theory and Applications of Satisfiability Testing (SAT); and will serve as conference chair of IJCAI-17. Fahiem is also a Fellow of the Association for the Advancement of AI (AAAI).
Matti Järvisalo is an Academy Research Fellow and adjunct professor at the Department of Computer Science, University of Helsinki, Finland, where he leads the Constraint Reasoning and Optimization group. His research record lists over 60 refereed publications with a main focus on theoretical and algorithmic aspects of Boolean satisfiability (SAT) and optimization, and applications of SAT-based techniques in solving various hard search and optimization problems in AI. He has served as program chair for the International Conference on Theory and Applications of Satisfiability Testing (SAT).
SP3: Computational Epidemiology and Public Health Policy Planning
Madhav Marathe, Naren Ramakrishnan, and Anil Kumar Vullikanti
Recent epidemic outbreaks have highlighted the importance of public health policy planning, involving questions like: How can an outbreak be contained before it becomes an epidemic? What disease surveillance strategies should be implemented? These are challenging problems at the interface of different areas of computer science, including AI, machine learning and data mining, mathematics, economics, and statistics. In this tutorial, we provide an overview of the state of the art in computational epidemiology from a multi-disciplinary perspective. The specific topics we will discuss include: (1) mathematical and computational models for epidemic spread, primarily focusing on network-based models; (2) design of interventions for epidemic control and response, which involve a broad class of multicriteria optimization and game-theoretic problems; (3) inference and state assessment for network models, and their use in forecasting epidemic dynamics. These problems are complicated by the fact that behaviors, disease dynamics, and social networks coevolve. We will discuss a suite of tools that combine diverse kinds of datasets and optimization tools.
The tutorial requires basic background in computer science and related fields, and hence will be accessible to most AAAI attendees.
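The network-based spread models in topic (1) can be sketched with a minimal discrete-time SIR (susceptible–infected–recovered) simulation on a contact graph. The graph, probabilities, and seed node below are hypothetical and purely illustrative; research models of the kind the tutorial covers are far richer (coevolving behavior, realistic contact networks, calibrated rates).

```python
import random

def simulate_sir(adj, seed_node, p_infect=0.5, p_recover=0.3, steps=20, rng=None):
    """Discrete-time SIR on a contact network given as an adjacency dict.
    Each step, every infected node independently infects each susceptible
    neighbor with probability p_infect, then recovers with probability
    p_recover. Returns the final state of every node."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        new_state = dict(state)
        for v, s in state.items():
            if s == "I":
                for u in adj[v]:  # transmission along contact edges
                    if state[u] == "S" and rng.random() < p_infect:
                        new_state[u] = "I"
                if rng.random() < p_recover:
                    new_state[v] = "R"
        state = new_state
    return state

# Hypothetical contact network: a path 0-1-2-3, outbreak seeded at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = simulate_sir(adj, seed_node=0)
print(sum(s != "S" for s in final.values()))  # nodes ever infected
```

Intervention design (topic 2) then amounts to optimizing over modifications to this process, e.g. which nodes to vaccinate or which edges to remove under a budget.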
Madhav V. Marathe is a professor of computer science and the director of the Network Dynamics and Simulation Science Laboratory at the Virginia Bioinformatics Institute, Virginia Tech. He is the recipient of numerous awards, including the George Michael Distinguished Scholarship at the Lawrence Livermore National Laboratory, and is a fellow of ACM, IEEE, and AAAS.
Naren Ramakrishnan is the Thomas L. Phillips Professor of Engineering at Virginia Tech, and directs the Discovery Analytics Center at Virginia Tech. He is an ACM distinguished scientist and serves on the editorial boards of several journals, including the ACM Transactions on Knowledge Discovery from Data.
Anil Kumar S. Vullikanti is an associate professor in the Department of Computer Science and the Network Dynamics and Simulation Science Laboratory at Virginia Bioinformatics Institute, Virginia Tech. His research interests are in the areas of combinatorial optimization, dynamical systems, and computational epidemiology. He serves on the editorial board of the ACM Transactions on Algorithms.
SP4: Learning and Inference in Structured Prediction Models
Kai-Wei Chang, Gourab Kundu, Dan Roth, and Vivek Srikumar
Many prediction problems require assigning values to multiple interdependent variables. The relationships between these variables could represent a sequence, a set of clusters, or in the general case, a graph. Over the past decades, structured prediction models for such problems have demonstrated success in a range of applications, including natural language processing, computer vision, and computational biology. However, the high computational cost often limits both the expressive power of the models and the size of the data that can be handled. Therefore, designing efficient inference and learning algorithms for these models is a key challenge.
In this tutorial, beyond introducing the algorithmic approaches, we will discuss ideas that result in significant improvements both in the learning and the inference stages of structured prediction models. In particular, we will discuss the use of caching techniques to reuse computations and methods for decomposing complex structures, along with learning procedures that use these approaches to simplify the learning stage. We will also present a formulation that captures similarities between structured labels using distributed representations. Participants will learn about the recent trends in this domain, tools developed in this area, and how they can be applied to AI applications.
Prerequisite knowledge: No specific background knowledge is assumed of the audience.
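For the sequence case mentioned above, inference reduces to finding the highest-scoring label sequence, which Viterbi decoding computes exactly for linear-chain models. The sketch below is a standard textbook routine, not code from the tutorial; the labels and scores are hypothetical.

```python
def viterbi(emissions, transitions, labels):
    """Find the label sequence maximizing the sum of per-position emission
    scores and adjacent-pair transition scores (a linear-chain model).
    emissions: list of {label: score} dicts, one per position.
    transitions: {(prev_label, label): score}."""
    n = len(emissions)
    # score[i][y] = best score of any prefix ending with label y at position i
    score = [{y: emissions[0][y] for y in labels}]
    back = []  # back[i-1][y] = best predecessor label for y at position i
    for i in range(1, n):
        score.append({})
        back.append({})
        for y in labels:
            prev, s = max(
                ((yp, score[i - 1][yp] + transitions[(yp, y)]) for yp in labels),
                key=lambda t: t[1],
            )
            score[i][y] = s + emissions[i][y]
            back[i - 1][y] = prev
    # Recover the argmax path by following back-pointers from the best end label.
    y = max(labels, key=lambda l: score[-1][l])
    path = [y]
    for b in reversed(back):
        y = b[y]
        path.append(y)
    return list(reversed(path))

labels = ["A", "B"]
emissions = [{"A": 2.0, "B": 0.0}, {"A": 0.0, "B": 1.0}, {"A": 0.5, "B": 0.0}]
transitions = {("A", "A"): 0.0, ("A", "B"): 1.0, ("B", "A"): 0.0, ("B", "B"): 1.0}
print(viterbi(emissions, transitions, labels))  # ['A', 'B', 'B']
```

The caching and decomposition ideas discussed in the tutorial target exactly this kind of inference step, which is invoked repeatedly during learning.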
Kai-Wei Chang is a post-doctoral researcher at the Microsoft Research New England lab. His research interests are in designing practical machine learning techniques for large and complex data. He was awarded the KDD Best Paper Award and won the Yahoo! KSC Award.
Gourab Kundu is a research staff member at IBM research. He is broadly interested in all aspects of machine learning and natural language processing. He has publications in top tier machine learning and natural language processing conferences along with a best student paper in CoNLL 2011.
Dan Roth is a computer science professor at the University of Illinois at Urbana-Champaign. He is a Fellow of the AAAS, ACM, AAAI, and ACL, for his contributions to Machine Learning and to Natural Language Processing, and is currently the editor-in-chief of the Journal of Artificial Intelligence Research (JAIR). Roth received his B.A. summa cum laude in mathematics from the Technion, Israel, and his Ph.D. in computer science from Harvard University in 1995.
Vivek Srikumar is an assistant professor in the School of Computing at the University of Utah. He works on structured prediction, focusing on learning structured classifiers of text and scaling up inference. He has published in AI, NLP and ML venues and has received the best paper award at EMNLP.
AAAI-16 Call for Papers
Special Track on Cognitive Systems
Special Track on Computational Sustainability
Special Track on Integrated AI Capabilities
IAAI-16 Call for Papers
EAAI Symposium Call
Student Abstract Call
Tutorial Forum Call
DC Call for Applications
Senior Member Track Call