Research
The goal of my research is to build toward a future where robots are capable of human-level dexterity. I have pursued this goal mostly in the context of scaling up robot imitation learning: imitation learning algorithms that scale to large robot fleets (Fleet-DAgger), synthetic data generation (IntervenGen), robot-free data collection via augmented reality with Apple Vision Pro (ARMADA), and community-wide efforts to train large vision-language-action models (RT-X). Selected publications are below; see my CV for the full list.
Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Open X-Embodiment Collaboration (incl. myself and 172 other authors)
IEEE International Conference on Robotics and Automation (ICRA) 2024. Best Conference Paper Award.
[Paper]
[Website]
[Twitter TL;DR]
(In collaboration with Google DeepMind and 33 academic labs) Cross-embodiment fleet learning with heterogeneous robots. An open-source dataset of 1M+ robot trajectories from 22 robot embodiments, and results with robot foundation models trained on this data.
ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning
Ryan Hoque, Ashwin Balakrishna, Ellen Novoseller, Daniel S. Brown, Albert Wilcox, Ken Goldberg
Conference on Robot Learning (CoRL) 2021. Oral Presentation (6.5% of papers).
[Paper]
[YouTube]
[Website]
[Twitter TL;DR]
A state-of-the-art robot-gated interactive imitation learning algorithm that reasons about both state novelty and risk to actively query for human interventions more efficiently than prior algorithms.
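The core robot-gated idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the function name, thresholds, and the assumption that novelty and risk arrive as scalar scores are all hypothetical, and the real algorithm estimates these quantities with learned models under an intervention budget.

```python
def should_query_human(state_novelty, failure_risk,
                       novelty_threshold=0.8, risk_threshold=0.5):
    """Robot-gated querying (illustrative): request a human intervention
    when the current state looks too unfamiliar OR the estimated risk of
    task failure is too high. Thresholds would be tuned to respect a
    limited budget of human attention."""
    return state_novelty > novelty_threshold or failure_risk > risk_threshold

# A familiar but risky state still triggers a query:
should_query_human(state_novelty=0.3, failure_risk=0.7)  # → True
```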
Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision
Ryan Hoque, Lawrence Yunliang Chen, Satvik Sharma, Karthik Dharmarajan, Brijen Thananjeyan, Pieter Abbeel, Ken Goldberg
Conference on Robot Learning (CoRL) 2022. Oral Presentation (6.5% of papers).
[Paper]
[YouTube]
[Website]
[Twitter TL;DR]
We introduce new formalism, algorithms, and open-source benchmarks for "Interactive Fleet Learning": interactive learning with multiple robots and multiple humans.
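A toy sketch of the central resource-allocation question in this setting, with fewer human supervisors than robots: given a priority score per robot (e.g., derived from novelty or risk), assign the available humans to the robots that need help most. The function and score semantics here are hypothetical simplifications, not the benchmark's actual interface.

```python
def allocate_humans(priorities, num_humans):
    """Illustrative supervisor allocation: rank robots by priority score
    (higher = more in need of supervision) and assign the available
    humans to the top-ranked robots. Returns the chosen robot indices."""
    ranked = sorted(range(len(priorities)),
                    key=lambda i: priorities[i], reverse=True)
    return set(ranked[:num_humans])

# 4 robots, 2 humans: robots 1 and 3 have the highest scores.
allocate_humans([0.2, 0.9, 0.1, 0.7], num_humans=2)  # → {1, 3}
```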
IntervenGen: Interventional Data Generation for Robust and Data-Efficient Robot Imitation Learning
Ryan Hoque, Ajay Mandlekar, Caelan Garrett, Ken Goldberg, Dieter Fox
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024.
[Paper]
[Website]
[Twitter TL;DR]
(In collaboration with NVIDIA SRL) A data generation system extending MimicGen to the interactive imitation learning setting, improving policy robustness up to 39x with a data budget of only 10 human interventions.
IIFL: Implicit Interactive Fleet Learning from Heterogeneous Human Supervisors
Gaurav Datta*, Ryan Hoque*, Anrui Gu, Eugen Solowjow, Ken Goldberg
Conference on Robot Learning (CoRL) 2023.
[Paper]
[Twitter TL;DR]
An extension of Interactive Fleet Learning to heterogeneous and multimodal human supervision, including a novel approach for quantifying uncertainty in energy-based models.
VisuoSpatial Foresight for Physical Sequential Fabric Manipulation
Ryan Hoque*, Daniel Seita*, Ashwin Balakrishna, Aditya Ganapathi, Ajay Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
Autonomous Robots. Vol 45(5), 2021.
[Paper]
[Website]
(In collaboration with Honda Research Institute) A novel model-based reinforcement learning technique that trains a visual dynamics model for sequentially manipulating fabric toward a variety of goal images, entirely from random interaction RGBD data in simulation.
Not Research
I'm a Bay Area native and a lifelong Cal bear: I received my B.S., M.S., and Ph.D. in EECS at UC Berkeley in a single stretch of 8 years. Outside of research, I enjoy playing piano covers of heavy metal bands, reading and writing about philosophy, traveling, and exploring the outdoors. For the philosophically inclined, I am steadily working on a book on metaphysics and mysticism; it's still in progress, so stay tuned!