Assistant Professor, Stanford University
Bio: Dorsa Sadigh is an assistant professor of Computer Science at Stanford University. Her research interests lie at the intersection of robotics, machine learning, and human-AI interaction. Specifically, she is interested in developing algorithms that learn robot policies from various sources of data and human feedback, and that can seamlessly interact and coordinate with humans. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and her bachelor’s degree in EECS from UC Berkeley in 2012. She has been awarded the Sloan Fellowship, NSF CAREER Award, ONR Young Investigator Award, AFOSR Young Investigator Award, DARPA Young Faculty Award, Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award.
Interactive Learning in the Era of Large Models
Thursday 5th, 01:30 pm - 02:30 pm @ Plenary Paris Nord
Abstract: In this talk, I will discuss the role of grounded representations in learning from interactions with humans. I will first talk about how language instructions along with latent actions can enable shared autonomy in robotic manipulation problems, and their potential impact on the field of assistive robotics. I will then talk about the role of large pretrained models in today’s robotics systems. Specifically, I will present two viewpoints: 1) pretraining large models for downstream robotics tasks, and 2) finding creative ways of tapping into the rich context of large models to enable more aligned embodied AI agents. For pretraining, I will introduce Voltron, a language-informed visual representation learning approach that leverages language to ground pretrained visual representations for robotics. For leveraging large models, I will present a few vignettes on how we can leverage LLMs and VLMs to learn human preferences, allow for grounded social reasoning, or enable teaching humans using corrective feedback. Finally, I will conclude the talk by discussing how large models can be effective pattern machines that identify patterns in a token-invariant fashion and enable pattern transformation, extrapolation, and even show some evidence of pattern optimization for solving control problems.
VP of Research, Google DeepMind
Bio: Pushmeet Kohli is VP of Research at Google DeepMind, where he leads the AI for Science program and conducts research on approaches to ensure AI systems are safe, reliable, and trustworthy. The Science program that Pushmeet leads tackles problems across a large span of disciplines, including structural biology, genomics, quantum chemistry, mathematics, and fusion. The team's most notable success was in developing AlphaFold, the state-of-the-art AI system that solved the problem of protein structure prediction, a grand challenge in structural biology. On the responsibility and reliability side, Pushmeet led the development of Google's recently launched SynthID system for watermarking and detecting AI-generated images. Pushmeet’s research papers have won multiple awards in the fields of machine learning, computer vision, game theory, and human-computer interaction.
The potential of AI in advancing science and the importance of ensuring AI's responsible use
Friday 6th, 01:30 pm - 02:30 pm @ Plenary Paris Nord
Abstract: Scientific advances over the last several centuries have raised the standard of living for many people across the globe. However, much remains to be understood, as evidenced by the massive challenges of climate change and the COVID-19 pandemic. In this talk, I will discuss the potential of AI (machine learning) in advancing science, improving our understanding of the world, and strengthening our ability to predict the results of interventions. I will conclude by highlighting the importance of using AI responsibly, and show how AI itself can help with this.