Below is the information for three early career keynotes (Sep 29, 11:30 am-12:30 pm).
Title: From Simulation to Reality: Learning Transferable Models for Tactile Sensors
Abstract: Simulation is widely used in robot learning as a fast and low-cost way to collect large-scale data. In this talk, I will explore a less common but powerful application: using simulation to advance tactile sensing. A key challenge in this field is the domain gap between different tactile sensor hardware, which makes it difficult to transfer knowledge learned on one sensor to another. Despite its importance, effective solutions for cross-sensor transfer remain limited. Recent advances in robot and sensor simulation open new opportunities to address this problem. I will present our work on using simulated tactile data to train machine learning models that bridge the domain gap between sensors, enabling more transferable tactile perception. I will also share our outlook on future directions for bridging the domain gaps between tactile sensors.
Bio: Wenzhen Yuan is an assistant professor in the Siebel School of Computing and Data Science at the University of Illinois Urbana-Champaign and the director of the RoboTouch Lab. She is a pioneer in high-resolution tactile sensing for robots, and she also works in multi-modal robot perception, soft robots, robot manipulation, and haptics. She is a recipient of the IEEE RAS Early Career Award and the NSF CAREER Award.
Title: Touch-driven Sensorimotor Perception and Control: From Today’s Research to Tomorrow’s Deployment
Abstract: Touch is central to human dexterity — enabling us to rummage through a grocery bag, adjust our grip when wielding tools, or assess the texture of clothing with ease. For robots, however, tactile sensing remains an underdeveloped and underutilized modality. In this talk, I’ll explore how touch can become a first-class citizen in robot learning and control. We’ll begin with a look at the current landscape of sensorimotor perception and control, highlighting key progress and persistent challenges in tactile sensing, representation learning, and planning under contact. Next, I’ll outline a 2–5 year outlook: how emerging advances in high-resolution tactile sensors, multimodal representation learning, and simulation could enable robotic touch to have its own “ImageNet moment.” Finally, I’ll discuss what it takes to go beyond academic prototypes — the systems, standards, and open challenges that stand between today's research and scalable, real-world deployment of touch-enabled robots.
Bio: Nima Fazeli is an Assistant Professor of Robotics, Computer Science (EECS), and Mechanical Engineering at the University of Michigan, and an Amazon Scholar with Amazon Robotics. He leads the Manipulation and Machine Intelligence (MMint) Lab, which focuses on intelligent and dexterous robotic manipulation through advances in sensing, learning, and control. Nima received his Ph.D. from MIT in 2019, where he worked with Prof. Alberto Rodriguez. He earned his M.Sc. from the University of Maryland, College Park in 2014, where his research focused on modeling the human (and occasionally swine) arterial tree for applications in cardiovascular disease, diabetes, and cancer diagnosis. His work has been recognized with the NSF CAREER Award, support from the National Robotics Initiative and NSF Advanced Manufacturing, and the Rohsenow Fellowship. His research has also been featured in major media outlets including The New York Times, CBS, CNN, and BBC.
Title: Robot Data is Not Enough Data
Abstract: The past decade of robot learning has been fueled by piles of human-teleoperated robot data. But this strategy is hitting a wall. Unlike computer vision and natural language processing, fields supercharged by mountains of passive, internet-scale human-labeled data, robotics faces a harsher reality. Robot data is expensive. It is slow. It is narrow. And most critically, we don’t even know which demonstrations or labels truly matter for embodied intelligence. Chasing more of the same is a dead end.
In this talk, I will argue that robot data alone will never deliver the leap we need. We must demand more. Robots should learn directly from humans. They should feel the world through touch, rather than staring at pixels alone. And they must go beyond purely reactive modes and instead reason, plan, and act with foresight. If we are serious about building intelligent machines, we must move beyond the fixation on “just more data” and instead embrace the hard, messy, human-centered problems that will define the next era of robotics.
Bio: Lerrel Pinto's research is aimed at getting robots to generalize and adapt in the messy world we live in. To this end, his work focuses broadly on robot learning and decision making, with an emphasis on large-scale learning (both data and models), representation learning for sensory data, developing algorithms to model actions and behavior, reinforcement learning for adapting to new scenarios, and building open-source, affordable robots. This work has received best paper awards at ICRA 2016, RSS 2023, and ICRA 2024. Lerrel has received the Sloan Fellowship, the Packard Fellowship, the CIFAR Fellowship, the TR35 Innovators Under 35 award, and the IEEE RAS Early Career and NSF CAREER awards. Several of his works have been featured in popular media outlets such as The Wall Street Journal, TechCrunch, MIT Tech Review, Wired, and BuzzFeed. His recent work can be found at www.lerrelpinto.com.