AAAI-17 — AI in Practice
The Thirty-First Conference on Artificial Intelligence
February 4–9, 2017, San Francisco, California, USA
A Special Event on February 5
AI in Practice will showcase invited presentations by visionary AI practitioners who will reflect on key successes of AI in the commercial world and crystallize emerging technologies and promising new directions.
AI in Practice is a special event of the Association for the Advancement of Artificial Intelligence (AAAI). AAAI is the premier membership organization in artificial intelligence (AI). With several thousand members, the Association for the Advancement of Artificial Intelligence (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society founded in 1979 devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions. Major AAAI activities include organizing and sponsoring conferences, symposia, and workshops, publishing a quarterly magazine for all members, publishing books, proceedings, and reports, and awarding grants, scholarships, and other honors.
Cochairs: Evgeniy Gabrilovich (Google Research) and Vanja Josifovski (Pinterest)
- Invited keynote by Haifeng Wang (Baidu)
- Invited talk by Deepak Agarwal (LinkedIn)
- Invited talk by Michael Witbrock (IBM)
- Invited keynote by Gary Marcus (Uber and NYU)
- Invited talk by Vincent Vanhoucke (Google)
- Invited talk by Alex Smola (Amazon)
- Fireside Chat with Ray Kurzweil (Google)
- Invited talk by Joaquin Quinonero Candela (Facebook)
- Invited talk by Xavier Amatriain (Quora)
Sunday February 5, 2017
- 9:30 – 10:15 — Invited keynote by Haifeng Wang (Baidu)
- 10:15 – 11:00 — Invited talk by Deepak Agarwal (LinkedIn)
- 11:00 – 11:45 — Invited talk by Michael Witbrock (IBM)
- 11:45 – 1:00 — Lunch
- 1:00 – 1:45 — Invited keynote by Gary Marcus (Uber and NYU)
- 1:45 – 2:30 — Invited talk by Vincent Vanhoucke (Google)
- 2:30 – 3:15 — Invited talk by Alex Smola (Amazon)
- 3:15 – 3:45 — Coffee break
- 3:45 – 4:30 — Fireside Chat with Ray Kurzweil (Google)
- 4:30 – 5:15 — Invited talk by Joaquin Quinonero Candela (Facebook)
- 5:15 – 6:00 — Invited talk by Xavier Amatriain (Quora)
Slot 1: Invited Keynote
Natural Language Processing at Baidu
Haifeng Wang (Baidu)
Language is a carrier of knowledge and thought. Natural language processing (NLP) is one of the most important fields in artificial intelligence. Baidu's NLP technologies support about 100 applications, and their modules are invoked more than 100 billion times per day by the search engine, feeds, the intelligent assistant, machine translation, internet finance, and other products. In this talk, I will introduce Baidu's NLP technologies, including semantic understanding, sentiment analysis, deep question answering, language generation, machine translation, dialogue systems, knowledge graphs, and others.
Dr. Haifeng Wang is a vice president of Baidu and the head of Baidu's search engine, feeds, translation, durobot, NLP, and knowledge graph efforts. He received his PhD in computer science from Harbin Institute of Technology in 1999. Dr. Wang was the president of the ACL in 2013 and is an ACL fellow. He has served as a program chair, workshop chair, tutorial chair, area chair, and industry chair for several top conferences, including ACL, COLING, IJCAI, IJCNLP, KDD, and SIGIR, as well as an associate editor, guest editor, and reviewer for academic journals. He was awarded China's national science and technology prize in 2015.
Slot 2: Invited Talk
AI that Creates Professional Opportunities at Scale
Deepak Agarwal (LinkedIn)
Professional opportunities can manifest themselves in several ways: finding a new job, enhancing or learning a new skill through an online course, connecting with someone who can help with new professional opportunities in the future, finding insights about a lead to close a deal, sourcing the best candidate for a job opening, consuming the best professional news to stay informed, and many others. LinkedIn is the largest online professional social network and connects talent with opportunity at scale by leveraging and developing novel AI methods. In this talk, I will provide an overview of how AI is used across LinkedIn and the challenges involved. The talk will mostly emphasize the principles required to bridge the gap between the theory and practice of AI, with copious illustrations from the real world.
Deepak Agarwal is a vice president of engineering at LinkedIn, where he is responsible for all AI efforts across the company. He is well known for his work on recommender systems and has published a book on the topic. He has published extensively in top-tier computer science conferences and has coauthored several patents. He is a Fellow of the American Statistical Association and has served on the Executive Committee of Knowledge Discovery and Data Mining (KDD). Deepak regularly serves on program committees of various conferences in the field of AI and computer science. He is also an associate editor of two flagship statistics journals.
Slot 3: Invited Talk
AI for Complex Situations: Beyond Uniform Problem Solving
Michael Witbrock (IBM)
The majority of recent technical advances in AI stem from problems that are structurally fairly uniform, but have complex and hard-to-describe patterns of variation within that structure. This structural uniformity characterizes, for example, speech signals up to transcription, text up to approximate translation, information extraction, video game play, lane-following, object labeling, and even the game of Go. However, many of the problems that we typically write programs for do not appear to be structurally uniform in this way: understanding a contract or a regulation, and deciding how it affects a particular business process, is structurally complex, since each detail of how the elements of the problem instance relate to one another is potentially critical. General reading comprehension and automated programming seem similarly complex. If we are to produce AI systems that provide professional-level assistance, we must address this complexity along with the variation. In this talk, I will discuss some of the complex, professional-level problems we are attempting to address at IBM, and sketch some research paths from both the past and possible future of AI.
Michael Witbrock is a Distinguished Research Staff Member at IBM Research, where he leads the Learning to Reason department and coordinates global research efforts in knowledge extraction and knowledge representation. Michael recently joined IBM after an extended period as Vice President for Research at Cycorp, where he directed research projects in automated reasoning (including speed-up learning), automated and interactive knowledge acquisition, and machine reading, in domains as varied as military operations planning, counterterrorism, natural-language medical records query, code analysis and vulnerability detection, video analysis and retrieval, and the molecular mechanisms underlying cancer. Before joining Cycorp, Michael was Principal Scientist at Terra Lycos, working on integrating statistical and knowledge-based approaches to understanding web user behavior, and on IP capture; a research scientist at JustSystems Pittsburgh Research Center, working on statistical summarization; and a systems scientist at Carnegie Mellon on the Informedia visual and spoken document information retrieval project. While maintaining a strong interest in knowledge representation and capture and in natural language understanding, his current research goals involve the development and use of quasi-logics, which retain approximations of the formal properties of logic while adding the learnability and flexibility of distributed representations, and the development and application of large, inferentially productive knowledge bases and inference systems across the resulting range of reasoning paradigms. He hopes to apply these representations and reasoners to the construction of "dense models" of domains, which are sufficiently complete to support the full range of in-domain reasoning.
He is author of numerous publications in areas ranging across computational linguistics, speech modeling and recognition, neural networks, automated inference, automated reading and multimedia information retrieval, and has dabbled in web browser design and implementation, genetic design and parallel computer architecture. As well as his technical work, Michael is very interested in entrepreneurship around AI and for social good, and in the social and economic outcomes of advances in AI. He has been pursuing the former interest as a member of the board of StartOut, and the latter interest, inter alia, as a co-founder of AI4Good.org. Michael has a PhD in Computer Science from Carnegie Mellon University and a BSc Hons in Psychology from Otago University in New Zealand.
Slot 4: Invited Keynote
Artificial Intelligence, Local Minima, and the Quest for AGI
Gary Marcus (Uber and NYU)
AGI (or artificial general intelligence) is the quest for superhuman machines that have the flexibility and resourcefulness of human intelligence, but the computational power and resources of machines. Recently we have seen some impressive advances in artificial intelligence, but the "general" in artificial general intelligence is still lacking (despite some apparent counterexamples). By and large, machines still need to be programmed for particular tasks or, failing that, to be provided with enormous numbers of supervised training examples that pertain largely to a single task. Most AI software remains narrowly engineered for specific tasks.
In this talk I will assess why progress has been slower than many of us anticipated, examine some possible obstacles to progress, and suggest that, all recent enthusiasm to the contrary, the field could be headed for a local minimum: solving some problems extremely well, yet still falling far short of what is theoretically possible. I will close by suggesting a few relatively unpopular avenues that might be worth exploring.
[Disclaimer: My comments represent my own views, not necessarily those of the institutions for which I work.]
Gary Marcus is director of Uber AI Labs, and was a founder and CEO of Geometric Intelligence, a machine learning startup recently acquired by Uber. Trained by Steven Pinker, he is also both an award-winning professor of psychology and neural science at New York University, and a bestselling author (Guitar Zero, Kluge) who frequently appears on radio and television.
Slot 5: Invited Talk
"OK Google, fold my laundry s'il te plaît"
Vincent Vanhoucke (Google)
Deep learning has enabled computers to approach human-level performance on many practical perception and language understanding tasks, ranging from speech recognition to computer vision and machine translation. One of today's grand AI challenges is to bring these new capabilities into the physical world, and teach machines how to behave and make themselves useful in human-centered environments. In this talk, I'll argue how robotics may be on the cusp of its very own deep learning revolution, but that for this endeavor to succeed, machine learning practitioners have to break from the relative comfort of the large-scale supervised learning setting that has buoyed the field for the past decade and humbly face some thorny problems that have comparatively been neglected: data scarcity and skill transfer, active and lifelong learning, as well as safety and predictability. The good news is that tackling these problems is also one of the necessary next steps towards bridging the gap between mere learning and actual intelligence.
Vincent Vanhoucke is a principal scientist in the Google Brain Team, and leads Google's robotics research effort. His research has spanned many areas of machine learning, from speech recognition to deep learning, computer vision and robotics. He also chairs the upcoming 2017 Conference on Robot Learning.
Slot 6: Invited Talk
Fast and Personal — Scaling Deep Learning with MXNet
Alex Smola (Amazon)
In this talk I will address the challenges of building deep learning systems that adjust to users for content recommendation and user-engagement estimation. These systems rely on nonparametric latent variable models, such as LSTMs, to deal with nonstationary time-series data. Going beyond models, I will discuss how scalable deep learning models can be implemented efficiently in MXNet, a parallel, distributed, high-performance deep learning framework. In particular, I will discuss its programming models and execution engine, and how to distribute computation efficiently over hundreds of GPUs with linear scaling.
Alex Smola studied physics at the University of Technology, Munich, at the Università degli Studi di Pavia, and at AT&T Research in Holmdel. He received his master's degree from the University of Technology, Munich, in 1996 and his doctoral degree in computer science from the Technical University of Berlin in 1998. After that, he worked as a researcher and group leader at the Australian National University. From 2004 to 2008 he was program leader of the Statistical Machine Learning Program at NICTA. From 2008 to 2012 he worked at Yahoo Research, and from 2012 to 2014 at Google Research. He joined the Carnegie Mellon University faculty in 2013 as a professor. After cofounding Marianas Labs in 2015, he now works at Amazon Web Services as Director of Machine Learning. He has written over 200 papers and several books.
Slot 7: Fireside Chat
The Future Capability and Impact of AI
Fireside Chat with Ray Kurzweil
I'll briefly recount my fifty-year experience in AI. It began with my meeting Marvin Minsky in 1962, when I was fourteen, the start of a 54-year mentorship that lasted until his passing a year ago. That same month I also met Frank Rosenblatt, then leader of the nascent connectionist school. He shared an intuition with me that would be proven correct decades after his passing in 1971.
Deep neural nets (DNNs) and Long Short-Term Memory (LSTM) techniques, along with the ongoing progression of what I call the "Law of Accelerating Returns" (the exponential growth of the price-performance and capacity of information technologies, a much broader phenomenon than Moore's Law), are fueling a wave of optimism in AI. But DNNs and LSTMs have a limitation characterized by the motto "life begins at a billion examples."
I'll share an alternative model based on self-organizing hierarchies of sequential models and explain why I believe that this is how the human neocortex works, and why this approach has the potential to overcome the apparent limitations of big DNNs.
I would then like to explore these ideas and get questions from the insightful AAAI audience.
Ray Kurzweil was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition system.
Among Ray's many honors, he received the Grace Murray Hopper Award from the Association for Computing Machinery and a Grammy Award for outstanding achievements in the field of music technology; he is the recipient of the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds twenty-one honorary doctorates, and has received honors from three U.S. presidents. Ray has written five national best-selling books, including the New York Times best sellers The Singularity Is Near (2005) and How To Create A Mind (2012). He is cofounder and chancellor of Singularity University and a director of engineering at Google, heading up a team developing machine intelligence and natural language understanding.
Slot 8: Invited Talk
Designing AI at Scale to Power Everyday Life
Joaquin Quinonero Candela (Facebook)
The majority of the experiences and interactions people have on Facebook today are made possible with AI. Well over 1 billion people enjoy unique, personalized experiences on Facebook that are powered by a wealth of AI and machine learning algorithms. AI is an incredibly fast-moving field: engineers and researchers across the company are turning the latest research breakthroughs into tools, platforms, and infrastructure that make it possible for anyone at Facebook to use AI in the experiences and products they build. This talk will look at how Facebook is conducting and applying industry-leading research to help drive advancements in AI disciplines like computer vision, language understanding, speech and video. We will also talk about building an infrastructure that anyone at Facebook can use to easily reuse algorithms in different products, scale to run thousands of simultaneous custom experiments, and give concrete examples of how employees across the company are able to leverage these platforms to build new AI products and services.
Joaquin Quinonero Candela, engineering director at Facebook, leads the Applied Machine Learning (AML) team, driving product impact at scale through applied research in machine learning, language technologies, computer vision, computational photography and other AI disciplines. Prior to Facebook, Joaquin worked at Microsoft Research in Cambridge, UK, building the click prediction and auction optimization teams at Microsoft AdCenter as well as co-creating and teaching a new ML class at the University of Cambridge. He was a postdoctoral researcher at the Fraunhofer Institute in Berlin, at the Technical University of Berlin and also at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. Joaquin received his PhD from the Technical University of Denmark in 2004.
Slot 9: Invited Talk
Lessons Learned from Building Practical AI Systems
Xavier Amatriain (Quora)
There are many good textbooks and courses where you can be introduced to machine learning and maybe even learn some of the most intricate details about a particular approach or algorithm. While that theory is a very important base and starting point, there are many other practical issues related to building real-life ML systems that you don't usually hear about. In this talk I will share some of the most important lessons learned in years of building the large-scale ML solutions that power products like Quora and Netflix to delight millions of users across the world. I will discuss issues such as model and feature complexity, sampling, regularization, distributing/parallelizing algorithms, the importance of metrics, and how to think about offline versus online computation. I will also address how to combine supervised and unsupervised approaches, the deep learning "hype," and the role of ensembles in practical ML systems.
Xavier Amatriain is vice president of engineering at Quora, where he leads the team building the platform to share and grow knowledge on the Internet. With over 50 publications in different fields, Xavier is best known for his work on machine learning in general and recommender systems in particular. Before Quora, he was a research/engineering director at Netflix, where he led the team building the famous Netflix recommendation algorithms. Previously, Xavier was a research scientist at Telefonica Research and a research director at UCSB. He has lectured at universities in both the US and Spain and is frequently invited as a speaker at conferences and companies.
Other Relevant AAAI Conference Activities
The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) includes activities that should be of interest to the participants of AI in Practice.
On Saturday February 4, several tutorials and workshops will be held that address practical uses and applications of AI. Participants in this event can register for those sessions.
On Sunday February 5, the AAAI/SIGAI Job Fair will take place from 9 am to 7 pm. The Job Fair provides a forum for students and professionals looking for internships or jobs to meet with representatives from companies and academe in an informal "meet-and-greet" atmosphere. To participate in this event, an organization must register.
From February 6 to 8, the Twenty-Ninth Conference on Innovative Applications of Artificial Intelligence (IAAI-17) will include invited talks and presentations about emerging and deployed applications of a wide range of AI technologies. The conference has paper proceedings for all the presentations. Registration for IAAI includes attendance at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), which takes place in parallel with IAAI on the same dates. Registration also includes poster and demonstration sessions and other conference events.
During the entire event, an Exhibit Program will showcase new technologies and research from industry and academe. Details are provided on the Exhibits page.
The conference offers a range of sponsorship levels and benefits, described on the Sponsorship page.
To complete your registration, please go to the AAAI registration site.