Much has been achieved in the field of AI, yet much remains to be done if we are to reach the goals we all imagine. One of the key challenges in moving ahead is closing the gap between logical and statistical AI. Recent years have seen an explosion of successes in combining probability with (subsets of) first-order logic, programming languages, and databases in several subfields of AI: reasoning, learning, knowledge representation, planning, databases, NLP, robotics, vision, etc. Nowadays, we can learn probabilistic relational models automatically from millions of inter-related objects. We can generate optimal plans and learn to act optimally in uncertain environments involving millions of objects and relations among them. Exploiting shared factors can speed up message-passing algorithms not only for relational inference but also for classical propositional inference, such as solving SAT problems. We can even perform exact lifted probabilistic inference that avoids explicit state enumeration by manipulating first-order state representations directly.
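The payoff of avoiding explicit state enumeration can be illustrated with a toy example. The sketch below is purely illustrative and is not any presenter's algorithm: for a hypothetical model with N interchangeable Boolean variables, a propositional approach sums over all 2^N joint states, while a lifted approach groups states that are symmetric under exchanging variables and sums over the N+1 possible counts of true variables instead.

```python
from itertools import product
from math import comb

# Hypothetical model: N interchangeable Boolean variables X_1..X_N,
# each independently true with probability p.
# Query: P(at least k of them are true).

def prob_at_least_k_propositional(n, p, k):
    # Explicit state enumeration: O(2**n) joint states.
    total = 0.0
    for state in product([0, 1], repeat=n):
        s = sum(state)
        if s >= k:
            total += (p ** s) * ((1 - p) ** (n - s))
    return total

def prob_at_least_k_lifted(n, p, k):
    # Lifted (counting) formulation: the n variables are interchangeable,
    # so all states with the same count s have the same probability.
    # Group them with a binomial coefficient: only O(n) terms.
    return sum(comb(n, s) * (p ** s) * ((1 - p) ** (n - s))
               for s in range(k, n + 1))
```

Both functions compute the same marginal, but the lifted version scales to millions of interchangeable variables, which is exactly the kind of symmetry exploitation the algorithms in this tutorial generalize to richer relational models.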
Lifted inference algorithms can be classified into three major categories: exact inference, approximate inference, and preprocessing for lifted inference. The first lifted inference algorithm was proposed by David Poole in 2003. This is an exact lifted inference algorithm that combines variable elimination with resolution. It was later extended by Braz et al., Milch et al., Sen et al., and then by Kisynski and Poole. The approximate lifted inference methods can be further classified into deterministic approximate methods, sampling-based methods, and interval-based methods. The deterministic approximate methods are based on message passing and were introduced by Singla and Domingos. This was later generalized by Kersting et al., and further extended by Nath and Domingos. Sen et al., meanwhile, extended their bisimulation-based algorithm to approximate inference. The first sampling-based lifted inference method was based on MCMC sampling and was developed by Milch and Russell. Zettlemoyer and Natarajan et al. developed sampling-based algorithms for dynamic models. Sampling-based methods were also developed for Markov logic networks by Poon and Domingos, based on a combination of satisfiability and MCMC techniques. Braz et al. proposed an interval-based message-passing technique in 2009 that propagates bounds as messages. Shavlik and Natarajan, in distinct yet related work, proposed a preprocessing method for inference that yields smaller networks to make inference tractable.
This one-day tutorial on lifted inference is the first of its kind, with the goal of providing an "unbiased" view of lifted inference algorithms. To this end, the researchers who developed the original lifted inference algorithms will themselves present their work. These algorithms range from exact inference to approximate methods, such as message passing and sampling, to preprocessing for lifted inference. The presenters will use suitable examples and introduce the necessary background for understanding their algorithms. In doing so, this tutorial will provide an excellent and unique opportunity for students and researchers who wish to pursue research in probabilistic logical models, lifted inference, and statistical relational AI.
Presenters: Eyal Amir, Pedro Domingos, Lise Getoor, Kristian Kersting, Sriraam Natarajan, David Poole, Rodrigo de Salvo Braz, and Prithviraj Sen.
Biographies of presenters:
Eyal Amir is an Associate Professor of Computer Science at the University of Illinois at Urbana-Champaign (UIUC). His research focuses on reasoning, learning, and decision making with logical and probabilistic knowledge, dynamic systems, and commonsense reasoning. Before joining UIUC in 2004 he was a postdoctoral researcher at UC Berkeley; he received his Ph.D. in Computer Science from Stanford University, and his B.Sc. and M.Sc. degrees in mathematics and computer science from Bar-Ilan University, Israel, in 1992 and 1994, respectively. Eyal is a recipient of a number of awards for his academic research. Among those, he was chosen by IEEE as one of the "10 to watch in AI" (2006), and was awarded the Arthur L. Samuel award for best Computer Science Ph.D. thesis (2001-2002) at Stanford University.
Rodrigo de Salvo Braz is a computer scientist at the Artificial Intelligence Center of SRI International. He works on higher-level probabilistic inference, that is, on systems that can quickly and correctly make deductions in domains involving uncertainty and about which we have rich, complex knowledge. This is currently being applied to natural language understanding in the presence of knowledge about the domain being read about. He completed a PhD in computer science at the University of Illinois at Urbana-Champaign in 2007 and was a postdoctoral researcher at UC Berkeley under the supervision of Prof. Stuart Russell. Before that, he completed an M.Sc. and a B.Sc. in Computer Science at the University of São Paulo, Brazil.
Pedro Domingos is an Associate Professor of Computer Science and Engineering at the University of Washington. His research interests are in artificial intelligence, machine learning, and data mining. He received a PhD in Information and Computer Science from the University of California at Irvine, and is the author or co-author of over 150 technical publications. He is a member of the editorial board of the Machine Learning journal, a co-founder of the International Machine Learning Society, and a past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is an AAAI Fellow, and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at KDD-98, KDD-99, PKDD-05, and EMNLP-09.
Lise Getoor is an Associate Professor at the University of Maryland, College Park. Her main research areas are machine learning and reasoning under uncertainty, and she has also done work in areas such as database management, social network analysis, and visual analytics. She is the author of over 150 papers, and is co-editor, with Ben Taskar, of the MIT Press book "An Introduction to Statistical Relational Learning". She was PC co-chair of ICML 2011, and has served as senior PC or PC member for conferences including AAAI, ICML, IJCAI, ICWSM, KDD, SIGMOD, UAI, VLDB, and WWW. She is an associate editor of ACM Transactions on Knowledge Discovery from Data, was an associate editor of the Journal of Artificial Intelligence Research, and is an action editor of the Machine Learning journal. She is on the board of the International Machine Learning Society, and has served on the AAAI Council. She is a recipient of an NSF CAREER Award, was a Microsoft New Faculty Fellow finalist, and was awarded a National Physical Sciences Consortium Fellowship. She received her PhD from Stanford University, her Master's degree from the University of California, Berkeley, and her BS from the University of California, Santa Barbara.
Kristian Kersting is an ATTRACT fellow at Fraunhofer IAIS, Bonn, Germany, and a research fellow at the University of Bonn. He received his Ph.D. from the University of Freiburg, Germany, in 2006. After a postdoc at MIT, USA, he joined Fraunhofer IAIS in 2008. His main research interests are machine learning, data mining, and statistical relational artificial intelligence. He is the (co-)author of over 60 technical publications, received the ECML Best Student Paper Award in 2006 and the ECCAI Dissertation Award 2006 for the best European dissertation in the field of AI, and was an ERCIM Cor Baayen Award 2009 finalist for the "Most Promising Young Researcher in Europe in Computer Science and Applied Mathematics". He has given several tutorials at top conferences. He co-chaired MLG-07, SRL-09, and StarAI-10, has served or will serve as area chair for ECML (06, 07), ICML (10, 11), and ICANN (11), as SPC at IJCAI-11, and on the PCs of several top conferences. He is a member of the editorial boards of the Machine Learning journal and the Journal of Artificial Intelligence Research, and an action editor of the Data Mining and Knowledge Discovery journal.
Sriraam Natarajan is currently an Assistant Professor in the Translational Science Institute of the Wake Forest University School of Medicine. He was previously a Post-Doctoral Research Associate in the Department of Computer Science at the University of Wisconsin-Madison. He received his PhD from Oregon State University, where he worked with Dr. Prasad Tadepalli. His research interests lie in the field of artificial intelligence, with emphasis on statistical relational learning, machine learning, reinforcement learning, graphical models, and biomedical applications.
David Poole is a Professor of Computer Science at the University of British Columbia. He has a Ph.D. from the Australian National University. He is known for his work on assumption-based reasoning, diagnosis, relational probabilistic models, combining logic and probability, algorithms for probabilistic inference, representations for automated decision making, probabilistic reasoning with ontologies, and semantic science. He is a co-author of a new AI textbook, Artificial Intelligence: Foundations of Computational Agents (Cambridge University Press, 2010), co-author of an older AI textbook, Computational Intelligence: A Logical Approach (Oxford University Press, 1998), co-chair of AAAI-10 (the Twenty-Fourth AAAI Conference on Artificial Intelligence), and co-editor of the Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (Morgan Kaufmann, 1994). He is a former associate editor of the Journal of AI Research, an associate editor of the AI Journal, and on the editorial board of AI Magazine. He is the secretary of the Association for Uncertainty in Artificial Intelligence, and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).
Prithviraj Sen is a researcher working at Yahoo! Labs, Bangalore. Prior to this, he completed his PhD at the University of Maryland, College Park. His PhD thesis was devoted to designing more expressive probabilistic databases and how to make inference during query evaluation more efficient. On a broader scale, his areas of interest encompass machine learning, databases systems and problems lying in the intersection of these areas.