Despite advances in AI, machines still have limitations in accomplishing tasks that come naturally to humans. When AI systems are fielded in the open world, these limitations raise concerns about reliability, bias, and trust. In this talk, I will argue that hybrid systems combining the strengths of machine and human intelligence are key to overcoming the limitations of AI algorithms and developing reliable systems. In the first part of the talk, I will present techniques that can guide human labeling efforts toward efficient discovery of the blind spots of machine-learned classifiers. I will then discuss how blind spots emerge when physical systems trained in simulation operate in the real world, and present techniques for modeling these blind spots from human demonstrations and corrections.
About the Speaker
Ece Kamar is a Senior Researcher in the Adaptive Systems and Interaction Group at Microsoft Research. Ece received her Ph.D. in computer science from Harvard University in 2010. Her research is inspired by real-world applications that can benefit from the complementary abilities of people and AI. Since many real-world problems require interdisciplinary solutions, her work spans several subfields of AI, including planning, machine learning, multi-agent systems, and human-computer teamwork. She is passionate about investigating the impact of AI on society and studying ways to develop AI systems that are reliable, unbiased, and trustworthy. She has over 40 peer-reviewed publications at top AI and HCI venues, including AAAI, IJCAI, AAMAS, and CHI. She served on the first Study Panel of Stanford's One Hundred Year Study on Artificial Intelligence (AI100), has served on the program committees of AAAI, IJCAI, HCOMP, AAMAS, WWW, and UAI, and recently served as co-chair of the AAAI 2018 Emerging Topic on human-AI collaboration.