AAAI-14 Tutorial Forum
Twenty-Eighth Conference on Artificial Intelligence
July 27-28, 2014, Quebec City, Quebec, Canada
What Is the Tutorial Forum?
The AAAI-14 Tutorial Forum provides an opportunity for researchers and practitioners to spend two days each year freely exploring exciting advances in disciplines outside their normal focus. We believe this type of forum is essential for the cross-fertilization, cohesiveness, and vitality of the AI field. We all have a lot to learn from each other; the Tutorial Forum promotes the continuing education of each member of AAAI. To encourage full participation by technical conference registrants, no separate fee will be charged for admittance to the Tutorial Forum in 2014.
Sunday, July 27
9:00 AM – 1:00 PM
- SA1: Large-Scale Nonlinear Classification: Algorithms and Evaluations
Zhuang J. Wang
- SA2: Representing and Reasoning with Qualitative Preferences: Tools and Applications
Ganesh Ram Santhanam, Samik Basu, and Vasant Honavar
- SA3: SAT in AI: High Performance Search Methods with Applications
Jussi Rintanen
- SA4: (Title TBD)
Alex Smola and Amr Ahmed
2:00 PM – 6:00 PM
- SP1: A Concise Introduction to Planning Models and Methods
Hector Geffner and Blai Bonet
- SP2: Techniques for Data-Driven Adaptive Educational System Development and Optimization
Ken Koedinger and John Stamper
- SP3: Game Theory for Security
Bo An, Manish Jain, and Albert Jiang
- SP4: Programming by Optimization: A Practical Paradigm for Computer-Aided Algorithm Design
Holger H. Hoos and Frank Hutter
Monday, July 28
9:00 AM – 1:00 PM
- MA1: From Deep Blue to Monte Carlo: An Update on Game Tree Research (full day)
Akihiro Kishimoto and Martin Mueller
- MA2: Latent Tree Models
Nevin L. Zhang
- MA3: Lifted Approximate Inference: Methods and Theory
Hung Bui, Fabian Hadiji, Kristian Kersting, Martin Mladenov, and Sriraam Natarajan
- MA4: Planning in Hybrid Domains
Maria Fox and Daniele Magazzeni
2:00 PM – 6:00 PM
- MP1: Bayesian Mechanism Design
Jason Hartline
- MP2: Sentiment Mining from User Generated Content
Ronen Feldman and Lyle Ungar
- MP3: Tensor Decompositions for Learning Latent Variable Models
Anima Anandkumar, Daniel Hsu, and Sham Kakade
SA1: Large-Scale Nonlinear Classification: Algorithms and Evaluations
Zhuang J. Wang
Much research in AI addresses data classification, in which an algorithm learns linear or nonlinear models from data. Nonlinear classification is particularly important because complex nonlinear concepts often occur in nature. Accurate and scalable algorithms that can learn nonlinear models play a key role in data mining, NLP, computer vision, and information retrieval. With new large-scale problems emerging across disciplines and pervasive computing applications becoming common, there is a real need for classification algorithms that can process increasing amounts of data efficiently. Recent advances in large-scale learning have produced many popular algorithms for linear classification on large data. However, technologies for large-scale nonlinear classification are still underdeveloped, and best practices are less well known. To fill this gap, we present a survey of state-of-the-art algorithms and software packages, together with an evaluation on benchmark data sets. We discuss algorithms covering different aspects of this area in detail, and we present a comprehensive experimental evaluation of these algorithms and off-the-shelf software on a collection of large real-life data sets across various domains. Basic background in supervised learning is assumed.
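To give a flavor of one widely used trick for scaling nonlinear classification — random Fourier features, which approximate a kernel by an explicit feature map so that fast linear solvers apply — here is a minimal pure-Python sketch. It is illustrative only; the function names and parameters are ours, not the tutorial's.

```python
import math
import random

def rbf_kernel(x, y, sigma=1.0):
    """Exact Gaussian (RBF) kernel value."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def random_feature_map(dim, n_features, sigma=1.0, seed=0):
    """Return phi such that dot(phi(x), phi(y)) approximates rbf_kernel(x, y)."""
    rng = random.Random(seed)
    ws = [[rng.gauss(0.0, 1.0 / sigma) for _ in range(dim)]
          for _ in range(n_features)]
    bs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_features)]
    scale = math.sqrt(2.0 / n_features)

    def phi(x):
        return [scale * math.cos(sum(w_i * x_i for w_i, x_i in zip(w, x)) + b)
                for w, b in zip(ws, bs)]
    return phi

# training a linear classifier on phi(x) then approximates kernel learning
phi = random_feature_map(dim=2, n_features=4000)
x, y = [0.5, -0.2], [0.1, 0.3]
approx = sum(a * b for a, b in zip(phi(x), phi(y)))
```

The approximation error shrinks roughly as the inverse square root of the number of random features.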
Zhuang Wang is a managing consultant of IBM Global Business Services, where he is focused on bridging science and business by developing innovative big data analytics solutions. Before that, he was a machine learning scientist with Siemens Research. He earned his Ph.D. in computer science at Temple University in 2010.
SA2: Representing and Reasoning with Qualitative Preferences: Tools and Applications
Ganesh Ram Santhanam, Samik Basu, and Vasant Honavar
This tutorial will present the state of the art in the practical aspects of representing and reasoning with qualitative preferences. We will begin with motivating applications followed by a brief review of qualitative preference representation languages (QPRLs) such as CP-nets, TCP-nets and CI-nets. We will introduce the key preference reasoning tasks: dominance testing (checking the dominance of one alternative over another), consistency checking (testing for cyclic induced preferences), ordering (computing a sequence of solutions in descending order of preference), and equivalence and subsumption checking (testing whether the set of preferences induced by one agent is (respectively) equal to, or a subset of, the set induced by the other). We will then show how to reduce preference reasoning to model checking by (a) encoding the semantics of the preference languages into a Kripke structure, and (b) translating reasoning queries into appropriate temporal logic formulas for the Kripke structure. We will introduce i-PrefR, an open source tool that supports preference reasoning in popular QPRLs. The attendees will learn how to specify and reason with preferences using i-PrefR.
Prior knowledge of QPRLs, although beneficial, is not required.
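To make the dominance-testing task concrete, here is a toy sketch (not the i-PrefR API; the two-variable CP-net and the function names are ours) that tests dominance by searching for a sequence of improving flips:

```python
from collections import deque

# toy CP-net over binary variables A and B:
# A: value 1 is unconditionally preferred; B: prefer B equal to its parent A
def improving_flips(outcome):
    a, b = outcome
    flips = []
    if a == 0:
        flips.append((1, b))      # flipping A from 0 to 1 improves
    if b != a:
        flips.append((a, a))      # flipping B toward A's value improves
    return flips

def dominates(better, worse):
    """Dominance testing as reachability: is there a sequence of
    improving flips leading from `worse` to `better`?"""
    frontier, seen = deque([worse]), {worse}
    while frontier:
        outcome = frontier.popleft()
        if outcome == better:
            return True
        for nxt in improving_flips(outcome):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

Here (1, 1) dominates (0, 0) via the flip sequence (0,0) → (1,0) → (1,1), while the reverse dominance does not hold.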
Ganesh Ram Santhanam is a postdoctoral research associate in the Department of Computer Science at Iowa State University. His research interests include knowledge representation and reasoning, decision theory, formal methods, and model checking.
Samik Basu is an associate professor in the Department of Computer Science and the director of the software engineering program at Iowa State University. His primary areas of research are formal specification and verification of systems and model checking.
Vasant Honavar is a professor and Edward Frymoyer Chair of Information Sciences and Technology at Pennsylvania State University. His research interests span knowledge representation, machine learning, semantic web, big data analytics, discovery informatics, social informatics, bioinformatics and health informatics. He has published over 240 peer-reviewed research articles on these topics.
SA3: SAT in AI: High Performance Search Methods with Applications
Jussi Rintanen
Forms of reasoning embodied in SAT and its extensions are fundamental to constructing and analyzing intelligent systems. The focus of the tutorial is on the algorithmic basis of different SAT variants, as well as their role in efficiently solving a wide range of central problems in AI, from state-space reachability problems such as planning to probabilistic reasoning and machine learning. NP-hard constraint satisfaction and optimization problems arise in most areas of AI, making SAT a reasoning task of high relevance to a wide range of AI researchers. The tutorial is intended for a general AI audience and only assumes basic knowledge of AI and propositional logic.
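As a taste of the search methods involved, here is a minimal DPLL-style SAT solver with unit propagation — a textbook sketch, not one of the high-performance solvers the tutorial covers. Clauses are lists of nonzero integers; a negative literal -v means "variable v is false."

```python
def simplify(clauses, lit):
    """Remove satisfied clauses and falsified literals; None signals a conflict."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                       # clause satisfied
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                    # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment):
    # unit propagation
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None
    if not clauses:
        return assignment                  # all clauses satisfied
    branch = clauses[0][0]
    for choice in (branch, -branch):       # branch on both polarities
        reduced = simplify(clauses, choice)
        if reduced is not None:
            extended = dict(assignment)
            extended[abs(choice)] = choice > 0
            model = dpll(reduced, extended)
            if model is not None:
                return model
    return None
```

For example, dpll([[1, 2], [-1, 2], [-2, 3]], {}) returns a satisfying assignment, while dpll([[1], [-1]], {}) returns None.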
Jussi Rintanen earned his doctoral degree from Helsinki University of Technology in 1997, and has since worked at the University of Ulm, Albert-Ludwigs University Freiburg (Germany), National ICT Australia, and, since 2012, at Aalto University (formerly Helsinki University of Technology) in Helsinki, Finland. His research focuses on the use of combinatorial search methods, automated reasoning, and applications of logics in computer science, including plan and program synthesis, diagnosis, and other modes of reasoning.
SA4: Title TBD
Alex Smola and Amr Ahmed
This description and bios will be available soon.
SP1: A Concise Introduction to Planning Models and Methods
Hector Geffner and Blai Bonet
Planning is one of the oldest areas in AI, and yet it has changed a great deal over the years, becoming both more mathematical and more empirical. Planning is best conceived as the model-based approach to autonomous behavior, where the agent behavior is derived automatically from a model. In this sense, planning contrasts with model-free approaches, where the agent behavior results from learning, and programming-based approaches, where the agent behavior is hardwired. The basic planning models assume complete state information, deterministic actions, and hard goals, yet other models relax these assumptions. All the models are intractable in the worst case, and hence one of the main challenges in planning is computational: how to scale up to large models represented in compact form.
In the tutorial, we will look at the variety of models used in AI planning, and the techniques that have been developed for solving them. The goal is to provide a modern and coherent view of planning that is comprehensive but not shallow. It is addressed to researchers and students interested in autonomous behavior and cognitive science. It assumes a basic knowledge of math and algorithms, and follows our recent book, A Concise Introduction to Models and Methods for Automated Planning, Morgan and Claypool, 2013.
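As a minimal illustration of the basic model, here is a STRIPS-like problem solved by breadth-first search over states; the two-room domain and the names are invented for illustration, and real planners rely on heuristics rather than blind search:

```python
from collections import deque

def bfs_plan(initial, goal, actions):
    """Breadth-first search over states represented as frozensets of atoms."""
    start = frozenset(initial)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # all goal atoms hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                   # preconditions satisfied
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# a two-room toy domain: fetch the key from room B and return to room A
actions = [
    ("move_A_B", frozenset({"at_A"}), frozenset({"at_B"}), frozenset({"at_A"})),
    ("move_B_A", frozenset({"at_B"}), frozenset({"at_A"}), frozenset({"at_B"})),
    ("pick_key", frozenset({"at_B"}), frozenset({"have_key"}), frozenset()),
]
```

Calling bfs_plan({"at_A"}, frozenset({"at_A", "have_key"}), actions) returns the three-step plan move_A_B, pick_key, move_B_A.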
Hector Geffner is an ICREA Research professor at the Universitat Pompeu Fabra in Barcelona. He is interested in models of reasoning, learning, and action, in people and machines. He has a PhD from UCLA and received the 1990 ACM Dissertation Award, and the 2009 and 2010 ICAPS Influential Paper Awards. Geffner is a fellow of AAAI and ECCAI.
Blai Bonet is a professor at the Universidad Simón Bolívar in Caracas, Venezuela. His main research interests are automated planning, knowledge representation, and search. He has a PhD degree from UCLA, and received the 2009 ICAPS Influential Paper Award. He is an associate editor of the Artificial Intelligence journal and JAIR.
SP2: Techniques for Data-Driven Adaptive Educational System Development and Optimization
Ken Koedinger and John Stamper
The increasingly widespread use of educational technologies is producing vast amounts of data. Such data can be used to help advance our understanding of student learning and enable more intelligent, interactive, engaging, and effective education. In this tutorial, we will examine the status and prospects of this new and powerful opportunity for data-driven development and optimization of educational technologies, focusing on intelligent tutoring systems and large educational datasets. We will provide examples of the use of a variety of techniques to develop or optimize the select, evaluate, suggest, and update functions to improve courseware through adaptive learning, including probabilistic grammar learning, rule induction, Markov decision processes, classification, and integrations of symbolic search and statistical inference. No prior experience is needed for this tutorial besides an interest in using educational data to improve courseware.
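One example of the kind of student-modeling computation that intelligent tutoring systems perform is the Bayesian Knowledge Tracing update — a standard technique in the field, shown here with illustrative parameter values and not necessarily part of this tutorial's material:

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """One Bayesian Knowledge Tracing step: Bayesian posterior over
    'skill is known' given a correct/incorrect answer, followed by a
    learning transition. Parameter values are illustrative."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    return posterior + (1 - posterior) * p_learn
```

Applied across a student's response sequence, such updates produce the mastery estimates that can drive a tutor's adaptive selection of problems.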
Kenneth Koedinger is a professor of human-computer interaction and psychology at Carnegie Mellon. His research has contributed new principles and techniques for the design of educational software and has produced basic cognitive science research results on the nature of student thinking and learning. Koedinger is a cofounder of Carnegie Learning (carnegielearning.com).
John Stamper is a member of the research faculty at the Human-Computer Interaction Institute at Carnegie Mellon University. He is also the Technical Director of the Pittsburgh Science of Learning Center DataShop. His primary areas of research include educational data mining and intelligent tutoring systems.
SP3: Game Theory for Security
Bo An, Manish Jain, and Albert Jiang
Game theory is an increasingly important paradigm for modeling decision-making in security domains, including homeland security resource allocation, robot patrolling strategies, and computer network security. Several deployed real-world systems use game theory to randomize critical security decisions so that terrorists and criminals cannot exploit a predictable security schedule. The ARMOR system has been deployed at Los Angeles International Airport since 2007, and the IRIS system has been deployed by the Federal Air Marshals Service since 2009. PROTECT has been deployed in the port of Boston since April 2011, has been in use at the port of New York since February 2012, and is headed for nationwide deployment. TRUSTS is being evaluated for deterring fare evasion, suppressing urban crime, and counter-terrorism within the Los Angeles Metro System, and GUARDS was earlier tested by the US Transportation Security Administration (TSA) for security inside airports.
This tutorial will introduce a wide variety of game-theoretic modeling techniques and algorithms that have been developed in recent years for security problems. Introductory material on game theory and mathematical programming (optimization) will be included in the tutorial, so there is no prerequisite knowledge for participants. After introducing the basic security game framework, we will describe algorithms for scaling to very large games, methods for modeling uncertainty and attacker observation capabilities in security games, and applications of these techniques for randomized resource allocation and patrolling problems. We will then talk about progress in the recent research area of game theory and human behavior, which deals with developing algorithms that perform well against human opponents.
At the conclusion of the tutorial we will highlight the many opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.
Our primary target audience is students and other novices who will benefit from an extensive introduction to one of the most important areas of applied multiagent systems research. This tutorial should also be of interest to researchers in the topic of human behavior, since we describe the application of concepts from behavioral game theory. We also anticipate that this tutorial will appeal to industry participants interested in applied work, and to game theory practitioners who may be interested in learning more about the specific techniques employed in this important class of games.
The primary tools the audience can learn are game theoretic models for security scenarios and exact/approximate algorithms to find solutions. The audience will also learn how concepts from behavioral game-theory have been applied and deployed in the context of security games. Although it is not the primary objective of the tutorial, attendees can learn also basic groundings on game theory and algorithmic game theory.
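A minimal sketch of the basic idea — the defender commits to randomized coverage and the attacker best-responds — for a two-target zero-sum game, solved here by simple grid search. The target values and names are illustrative; deployed systems solve vastly larger games with mathematical programming.

```python
def solve_security_game(values, step=0.001):
    """Zero-sum security game: one defender resource split over two targets.
    The attacker observes the coverage and attacks the best target; the
    defender commits to coverage minimizing the attacker's best payoff."""
    v1, v2 = values
    best_c1, best_loss = 0.0, float("inf")
    steps = int(round(1 / step))
    for i in range(steps + 1):
        c1 = i * step                         # coverage on target 1
        c2 = 1.0 - c1                         # remainder on target 2
        # attacker gains v_i when attacking target i while it is uncovered
        attacker_best = max(v1 * (1 - c1), v2 * (1 - c2))
        if attacker_best < best_loss:
            best_loss, best_c1 = attacker_best, c1
    return best_c1, best_loss
```

With values (5, 3), the optimal commitment puts 5/8 of the coverage on the higher-valued target, equalizing the attacker's payoffs across targets.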
Bo An is an assistant professor at the School of Computer Engineering of the Nanyang Technological University. He was a postdoctoral researcher at the University of Southern California. He received his Ph.D. degree from the University of Massachusetts, Amherst in 2011.
His research interests include artificial intelligence, multiagent systems, game theory, automated negotiation, resource allocation, and optimization. He has published over 30 refereed papers at top conferences (for example, AAMAS, IJCAI, and AAAI) and journals. He has served as a program committee member for many top conferences (for example, AAMAS, IJCAI, AAAI) and was cochair for several key international conferences and symposia. He is the recipient of the 2010 IFAAMAS Victor Lesser Distinguished Dissertation Award. He won an Operational Excellence Award from the Commander, First Coast Guard District of the United States. He also won the Best Innovative Application Paper award at the 11th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012). Recently, he won the 2012 INFORMS Daniel H. Wagner Prize for Excellence in Operations Research Practice.
Manish Jain completed his PhD from the Department of Computer Science at the University of Southern California in August 2013. His work is on the applications of game-theoretic and large-scale optimization techniques for scheduling limited resources. His work has been deployed by many agencies within the Department of Homeland Security, including the Federal Air Marshals Service and Los Angeles World Airports police.
He has published numerous papers and given multiple tutorials on the topic of security games at many AI conferences and publication venues. His work received the RIST Prize from the Military Operations Research Society and was a finalist for the EURO Excellence in Practice Award. He won the Best Dissertation Award from the Computer Science Department at the University of Southern California and was recently nominated for the 2013 ACM Doctoral Dissertation Award. He is currently the Chief Technology Officer of Armorway Inc.
Albert Xin Jiang is a postdoctoral researcher in the TEAMCORE research group at the Department of Computer Science at the University of Southern California. He received his PhD from the Department of Computer Science at the University of British Columbia. Much of his research addresses computational problems arising in game theory, including the efficient computation of solution concepts such as Nash equilibrium, Stackelberg equilibrium, and correlated equilibrium, as well as applications of game-theoretic computation to real-world domains such as large-scale infrastructure security and electronic commerce. He has published over 40 peer-reviewed technical articles.
He received the best student paper award at the ACM Conference on Electronic Commerce (ACM-EC) in 2011 and was a finalist for the best paper award at AAMAS 2013. He received the 2012 CAIAC Doctoral Dissertation Award and was runner-up for the IFAAMAS Victor Lesser Distinguished Dissertation Award. As part of the Port Resilience Operational/Tactical Enforcement to Combat Terrorism (PROTECT) team, he received a Coast Guard Meritorious Team Commendation in 2013 for creating "an innovative approach to optimize patrol schedules and actions for the Coast Guard Ports, Waterways and Coast Security missions."
SP4: Programming by Optimization: A Practical Paradigm for Computer-Aided Algorithm Design
Holger H. Hoos and Frank Hutter
When creating software, particularly solvers for the computationally challenging problems arising in many areas of AI, developers frequently explore multiple ways of achieving certain tasks. Often, these alternatives are eliminated or abandoned early in the process. Programming by Optimization (PbO) aims to avoid such premature design choices and to actively develop promising alternatives for parts of the design. The resulting design spaces are explored using powerful optimization techniques to automatically generate programs that perform well in a given use context. As a result, PbO allows human experts to focus on the creative task of imagining possible mechanisms for solving given problems or subproblems, while the tedious job of determining what works best in a given use context is performed automatically, substituting human labor with computation.
In this tutorial, we describe the PbO paradigm and the automated algorithm configuration and selection techniques that enable it and demonstrate how PbO enables dramatic performance gains on many AI problems, including SAT, planning, classification and regression tasks. Our presentation is aimed at AI researchers interested in using the PbO approach and the practical tools that support it to better realize the performance potential inherent in their algorithmic ideas; it does not assume any specialized background knowledge.
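A toy sketch of automated algorithm configuration, the core machinery behind PbO. The runtime model below is entirely synthetic (a stand-in for running a real solver), and the exhaustive evaluation is illustrative; real configurators such as SMAC and ParamILS search large design spaces heuristically.

```python
from itertools import product

def synthetic_runtime(restart, noise, hardness):
    """Stand-in for measuring a parameterized solver on one instance.
    Purely illustrative: best at restart=3, noise=0.2."""
    return hardness * (1 + (restart - 3) ** 2 + 10 * (noise - 0.2) ** 2)

def configure(restart_choices, noise_choices, instances):
    """Pick the configuration with the lowest average cost on a set of
    training instances drawn from the intended use context."""
    def avg_cost(cfg):
        restart, noise = cfg
        return sum(synthetic_runtime(restart, noise, h)
                   for h in instances) / len(instances)
    return min(product(restart_choices, noise_choices), key=avg_cost)
```

The design space exposed to the configurator is exactly the set of alternatives the developer chose not to commit to prematurely.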
Holger Hoos is a professor of computer science at the University of British Columbia in Vancouver (Canada). His main research interests lie at the intersection of AI, empirical algorithmics and operations research, combining optimisation and machine learning techniques. His current work is focused on the automated design of high-performance algorithms.
Frank Hutter is an Emmy Noether Research Group Lead (eq. assistant professor) in computer science at the University of Freiburg (Germany). He works at the intersection of machine learning and combinatorial optimization, with a focus on developing techniques for automating the design and analysis of high-performance algorithms.
MA1 (FULL DAY): From Deep Blue to Monte Carlo: An Update on Game Tree Research
Akihiro Kishimoto and Martin Mueller
Research in board games has made great strides since the classic win of IBM's Deep Blue over Kasparov, and since checkers was solved by Schaeffer et al. This full-day tutorial presents an up-to-date overview of research techniques for classical two-player games. Some of the questions we will discuss are: How has game-tree search developed since Deep Blue? What are the ideas behind current methods, and which successes have they achieved in games and elsewhere? What are the biggest open problems in games research?
The tutorial reviews the most important game-tree search techniques developed over the last half century, starting with traditional methods such as minimax and alpha-beta, and leads up to state-of-the-art methods such as Monte Carlo Tree Search and Proof-Number Search. The tutorial includes an application overview which showcases the use of the presented techniques in a large variety of games, as well as in topics as diverse as AI planning, mixed integer programming, biometric security authentication, physics simulations, and organic chemical synthesis.
The tutorial assumes only general undergraduate-level AI knowledge. It is suited for graduate and advanced undergraduate students, as well as for a general AI audience who wants to refresh and update their knowledge of the subject.
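As a reminder of the classical starting point, here is minimax with alpha-beta pruning on an explicit game tree — a textbook sketch; the tutorial goes far beyond this, up to Monte Carlo Tree Search:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Leaves are numeric evaluations; internal nodes are lists of children."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # beta cutoff: min player avoids this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                     # alpha cutoff
    return value
```

On the tree [[3, 5], [2, 9]] with the maximizing player to move, the minimax value is 3, and the leaf 9 is never evaluated thanks to the cutoff.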
Akihiro Kishimoto is a research staff member at IBM Research, Ireland. For over ten years, his research interests have been artificial intelligence and parallel computing. He coauthored one of the strongest commercial computer shogi (Japanese chess) programs, which won the World Computer Shogi Championship four times against fields of over 40 participants. Additionally, he is known as a developer of high-performance tsume-Go (Go life-and-death problem) and tsume-shogi (shogi checkmating problem) solvers. He obtained his PhD degree from the University of Alberta in 2005.
Martin Mueller is a professor in computing science at the University of Alberta, Canada. He has worked in computer games, especially the game of Go, for almost 30 years. His second research area is domain-independent planning. Mueller is the leader of the open source Fuego project, which consists of both a game-independent framework for two player games and a strong Go-playing program based on Monte Carlo Tree Search. In 2009, Fuego was the first program to defeat a top human professional on even terms in a 9x9 Go game.
MA2: Latent Tree Models
Nevin L. Zhang
Latent tree models (LTMs) are tree-structured probabilistic graphical models where the variables at leaf nodes are observed while those at internal nodes are latent. They represent complex relationships among observed variables and yet are computationally simple to work with. When used for data analysis, they divide observed variables into groups such that the correlations among variables in each group are properly modeled by a single latent variable. As such, LTMs are an effective tool for detecting co-occurrence patterns in binary data and correlation patterns in general data. An LTM typically contains multiple latent variables and each latent variable represents a soft partition of data. As such, LTMs are also a novel tool for clustering that produces multiple partitions of data.
The tutorial will start with the definition of latent tree models and a discussion on their relationships with finite mixture models and phylogenetic trees. Several theoretical considerations will then be addressed. After that, various algorithms for learning LTMs will be surveyed. Finally, application examples will be presented to demonstrate the use of LTMs for co-occurrence pattern detection, latent variable discovery, multidimensional clustering, topic detection and general probabilistic modeling. This tutorial is targeted at researchers who have basic knowledge of probabilistic graphical models.
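A minimal example of the model class conveys the key computational point: a binary latent root with two observed leaves, where marginalizing out the latent variable is cheap and induces correlation between the leaves. The parameter values are illustrative.

```python
# a tiny latent tree: binary latent root Z with observed leaves X and Y,
# which are conditionally independent given Z
p_z = {0: 0.5, 1: 0.5}
p_x_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
p_y_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

def joint(x, y):
    """P(X=x, Y=y): marginalize the latent root out of the tree."""
    return sum(p_z[z] * p_x_given_z[z][x] * p_y_given_z[z][y] for z in p_z)

def marginal_x(x):
    return sum(joint(x, y) for y in (0, 1))
```

Although X and Y are independent given Z, summing Z out leaves them correlated: P(X=1, Y=1) = 0.41, well above the 0.25 predicted by independent marginals. In larger trees the same sum-product computation runs in time linear in the number of nodes.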
Nevin L. Zhang is a professor at The Hong Kong University of Science and Technology. He received his PhD in computer science from the University of British Columbia and his PhD in applied mathematics from Beijing Normal University. His research interests are probabilistic graphical models and their applications. He is coauthor of the variable elimination algorithm for Bayesian networks and the incremental pruning algorithm for POMDPs. He is the first to study latent tree models in their general forms and the applications of those models in traditional Chinese medicine research. He is an associate editor of IJAR and was past associate editor and advisor of JAIR. He is program cochair of UAI-2014 and was program cochair of ECSQARU-2003.
MA3: Lifted Approximate Inference: Methods and Theory
Fabian Hadiji, Hung Hai Bui, Kristian Kersting, Martin Mladenov, and Sriraam Natarajan
Graphical models often contain symmetries. Prominent examples are graphical models obtained by grounding relational probabilistic models. Relational probabilistic models subsume many different languages and approaches combining probabilities and logic. Although relational models have the potential to scale AI to the next level, inference in these models is a challenging problem due to their large complexity. Hence, one typically has to resort to approximation algorithms. To scale even further, exploiting symmetries during inference has recently attracted a lot of attention in the statistical relational learning and probabilistic inference communities. This line of work, which we refer to as "lifted approximate inference," has produced many promising practical and interesting theoretical results. However, lifting different kinds of approximate inference algorithms requires different mathematical definitions of symmetry and, therefore, different algorithms to find these symmetries. This tutorial aims to give a unified and coherent view of different lifted approximate inference techniques, their underlying symmetries, and the associated symmetry discovery algorithms. It is the first tutorial on approximate lifted inference, covering the ideas mentioned above and giving a detailed introduction to algorithmic methods as well as the underlying mathematical theory.
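One widely used symmetry-discovery routine in this line of work is color passing (color refinement), which groups nodes that iterative message passing cannot distinguish. Here is a minimal sketch on plain graphs — illustrative, not any particular system's implementation:

```python
def color_refine(adj):
    """Iteratively recolor each node by (own color, sorted neighbor colors)
    until the partition stabilizes. Nodes sharing a final color are
    indistinguishable to message passing and can be lifted together."""
    colors = {v: 0 for v in adj}
    for _ in range(len(adj)):
        signature = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                     for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colors = {v: palette[signature[v]] for v in adj}
        if new_colors == colors:
            break                      # partition is stable
        colors = new_colors
    return colors
```

On a 4-cycle every node ends up with the same color (a single lifted group); on a 3-node path the two endpoints are grouped and the middle node is not.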
Fabian Hadiji is a PhD student at the Technical University of Dortmund. His research interests lie in the field of statistical relational learning, with application to problems in natural language processing and information extraction. In particular, he is working on lifted inference and message passing algorithms.
Hung Hai Bui is a principal scientist at Nuance Communications, USA. From 2003 to 2012, he was a senior computer scientist at the AI Center, SRI International. His interests are in probabilistic inference, machine learning, human activity/intent recognition, and natural language processing. He has published one book and more than 60 technical papers in these areas.
Kristian Kersting is an associate professor for computer science at TU Dortmund University and an ATTRACT fellow at Fraunhofer IAIS. He received his PhD from the University of Freiburg and was previously with MIT and the University of Bonn. His main research interests are statistical relational AI and data mining. He has published over 130 technical papers and received the ECCAI Dissertation Award 2006. He cochaired ECMLPKDD 2013 and is an (associate) editor of JAIR, AIJ, DAMI, NGC, and MLJ.
Martin Mladenov is a PhD student in the Artificial Intelligence group at the Computer Science department of TU Dortmund. His research interests center on the interplay between regularities (such as symmetry) in convex optimization problems and the graph automorphism problem, including its continuous relaxations and combinatorial counterparts.
Sriraam Natarajan is an assistant professor at Indiana University, USA. He was previously an assistant professor at the Wake Forest School of Medicine and a postdoctoral research associate at the University of Wisconsin-Madison, and received his PhD from Oregon State University.
MA4: Planning in Hybrid Domains
Maria Fox and Daniele Magazzeni
Hybrid systems are systems with both continuous control variables and discrete logical modes. Many interesting real problems give rise to hybrid systems, including mission planning for autonomous vehicles, smart power management, satellite observation planning and disaster recovery.
Planning in these domains requires rich models to capture the interaction between discrete and continuous change, and methods for reasoning with temporal, spatial and continuous constraints.
The tutorial starts with an introduction to modelling hybrid systems as planning problems. Two approaches that are current in the literature are considered: the Domain Predictive Control approach of Lohr, Eyerich, Keller and Nebel, and the hybrid planning methods of Coles, Coles, Fox and Long and of Della Penna, Intrigila, Magazzeni and Mercorio. The tutorial then considers the relationship between planning and the hybrid system model-checking approaches of Cimatti, Mover and Tonetta, and of Bogomolov, Podelski, Wehrle et al., showing how mappings can be defined between the mixed discrete-continuous planning domain description language, PDDL+, and model-checking formalisms such as HyTech, HyDi, and SpaceEx. The concept of model-checking as planning is then discussed. An overview of techniques for planning in hybrid domains is provided. Finally, some recent challenging case studies are presented and open problems are discussed.
The tutorial targets the broad AAAI-14 audience, as it covers multidisciplinary topics relevant to planning, machine learning, robotics, and more, with a proper mix of theory and applications.
Students are very welcome. A basic familiarity with AI planning and model-checking is assumed.
Maria Fox is a professor of computer science at King's College London. Her research is in deterministic temporal and metric planning and planning in continuous domains. She codeveloped the PDDL2.1 and PDDL+ modelling languages. Her recent work focuses on the relationship between task planning and control, and is concerned with capturing physical constraints within planning domain models to support control-aware planning.
Dan Magazzeni is a lecturer in artificial intelligence at King's College London. His research explores the links between automated planning and model-checking, with a particular focus on temporal continuous planning, planning in mixed discrete-continuous domains, and hybrid systems control and verification.
MP1: Bayesian Mechanism Design
Jason Hartline
This tutorial surveys the classical economic theory of Bayesian mechanism design and recent advances from the perspective of algorithms and approximation. Classical economics gives simple characterizations of Bayes-Nash equilibrium and optimal mechanisms when the agents' preferences are linear and single-dimensional. The mechanisms it predicts are often complex and overly dependent on details of the model. Approximation complements this theory and suggests that simple and less-detail-dependent mechanisms can be nearly optimal. Furthermore, techniques from approximation and algorithms can be used to describe good mechanisms beyond the single-dimensional, linear model of agent preferences.
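A classical single-dimensional example of the theory: for two bidders with values i.i.d. Uniform[0, 1], the revenue-optimal reserve price in a second-price auction is 1/2 — the Myerson reserve, where the virtual value v - (1 - F(v))/f(v) = 2v - 1 crosses zero. The sketch below recovers it by maximizing the closed-form expected revenue over a grid; the derivation of the formula is standard.

```python
def expected_revenue(r):
    """Expected revenue of a second-price auction with reserve r,
    two bidders with values i.i.d. Uniform[0, 1]."""
    # exactly one bidder clears the reserve: the winner pays r
    one_above = 2 * r * (1 - r) * r
    # both clear it: the winner pays the loser's value, and the
    # expected minimum of two uniforms on [r, 1] is r + (1 - r) / 3
    both_above = (1 - r) ** 2 * (r + (1 - r) / 3)
    return one_above + both_above

best_reserve = max((i / 1000 for i in range(1001)), key=expected_revenue)
```

At the optimal reserve of 1/2 the expected revenue is 5/12, compared with 1/3 for no reserve — a concrete instance of a simple, detail-light mechanism extracting near-optimal revenue.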
Jason Hartline's research introduces design and analysis methodologies from computer science to understand and improve outcomes of economic systems. Optimal behavior and outcomes in complex environments are complex and, therefore, should not be expected; instead, the theory of approximation can show that simple and natural behaviors are approximately optimal in complex environments. This approach is applied to auction theory and mechanism design in his graduate textbook Mechanism Design and Approximation, which is in preparation.
Hartline received his Ph.D. in 2003 from the University of Washington under the supervision of Anna Karlin. He was a postdoctoral fellow at Carnegie Mellon University under the supervision of Avrim Blum, and subsequently a researcher at Microsoft Research in Silicon Valley. He joined Northwestern University in 2008 where he is an associate professor of computer science. He is currently on sabbatical at Harvard.
MP2: Sentiment Mining from User Generated Content
Ronen Feldman and Lyle Ungar
The proliferation of user generated content on the web is driving a new wave of work on the determination of user sentiment from web texts such as message boards, blogs, tweets, and Facebook status updates. Both researchers and practitioners are developing and applying new methods to determine how users feel about everything: products and politicians, friends and family, scientific articles and celebrities. This tutorial will cover the state of the art in this rapidly growing area, including recent advances that combine information extraction with sentiment analysis to improve accuracy in assessing sentiment about specific entities. We will present several real world applications of sentiment analysis. Special emphasis will be given to lessons learned from years of experience in developing real world sentiment analysis systems. The tutorial is suitable for people with a very basic familiarity with text mining or information retrieval.
Ronen Feldman is a professor of information systems at the Business School of the Hebrew University in Jerusalem. He has worked extensively in information extraction. He founded the text-mining company ClearForest, has given more than 30 tutorials on text mining, and wrote The Text Mining Handbook.
Lyle Ungar is a professor of computer and information science at the University of Pennsylvania. He has published over 200 papers, done text mining at companies including Google and Dow Jones, and widely taught executive education courses. He is currently collaborating with psychologists to measure the happiness of the world from tweets and Facebook status updates.
MP3: Tensor Decompositions for Learning Latent Variable Models
Anima Anandkumar, Daniel Hsu, and Sham Kakade
This tutorial surveys algorithms for learning latent variable models based on the method-of-moments, focusing on algorithms based on low-rank decompositions of higher-order tensors. The target audiences of the tutorial include (i) users of latent variable models in applications, and (ii) researchers developing techniques for learning latent variable models. The only prior knowledge expected of the audience is a familiarity with simple latent variable models (for example, mixtures of Gaussians), and rudimentary linear algebra and probability. The audience will learn about new algorithms for learning latent variable models, techniques for developing new learning algorithms based on spectral decompositions, and analytical techniques for understanding the aforementioned models and algorithms. Advanced topics such as learning overcomplete representations may also be discussed.
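As a small illustration of the kind of primitive these algorithms build on (a sketch of ours, not material from the tutorial), the following NumPy code recovers the components of a symmetric, orthogonally decomposable third-order tensor with the tensor power method plus deflation. In method-of-moments estimators, such a tensor arises as a (whitened) third-order moment of the data; here we simply construct one synthetically. The dimensions, weights, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
V = Q[:, :k]                      # orthonormal ground-truth components v_i
lam = np.array([3.0, 2.0, 1.0])   # positive weights (e.g. mixing weights)

# Build T = sum_i lam_i * v_i (x) v_i (x) v_i
T = np.einsum('i,ji,ki,li->jkl', lam, V, V, V)

def power_iteration(T, iters=100, rng=rng):
    """Tensor power method: repeat v <- T(I, v, v) / ||T(I, v, v)||.
    For an orthogonally decomposable T this converges to one of the v_i."""
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('jkl,k,l->j', T, v, v)
        v /= np.linalg.norm(v)
    lam_hat = np.einsum('jkl,j,k,l->', T, v, v, v)  # recovered eigenvalue
    return lam_hat, v

# Recover all k components by deflation: subtract each found rank-1 term.
recovered = []
for _ in range(k):
    lam_hat, v = power_iteration(T)
    recovered.append(lam_hat)
    T = T - lam_hat * np.einsum('j,k,l->jkl', v, v, v)

# sorted(recovered, reverse=True) matches lam up to numerical error
```

This low-rank decomposition step is exactly where such algorithms differ from classical EM-style approaches: the latent parameters are read off from a spectral object rather than fit iteratively to the likelihood.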
Anima Anandkumar has been a faculty member in the EECS Department at U.C. Irvine since August 2010. Her research interests are in large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her Ph.D. from Cornell University in 2009. She was a visiting faculty member at Microsoft Research New England in 2012 and a postdoctoral researcher in the Stochastic Systems Group at MIT from 2009 to 2010. She is the recipient of the Alfred P. Sloan Fellowship, the Microsoft Faculty Fellowship, the ARO Young Investigator Award, the NSF CAREER Award, the IBM Fran Allen Ph.D. Fellowship, a thesis award from the ACM SIGMETRICS society, and paper awards from the ACM SIGMETRICS and IEEE Signal Processing societies.
Daniel Hsu is an assistant professor in the Department of Computer Science at Columbia University. Previously, he was a postdoc at Microsoft Research New England, the Department of Statistics at Rutgers University, and the Department of Statistics at the University of Pennsylvania. He holds a Ph.D. in computer science from the University of California San Diego, and a B.S. in computer science and engineering from the University of California Berkeley. His research interests are in algorithmic statistics, machine learning, and privacy.
AAAI-14 Tutorial Forum Cochairs
- Emma Brunskill
Carnegie Mellon University
Pittsburgh, PA, USA
- Kevin Leyton-Brown
University of British Columbia
Vancouver, BC, Canada