Ryan Hoque

I am a Master's student and incoming PhD student in the UC Berkeley EECS department studying Robotics and Artificial Intelligence. I am advised by Ken Goldberg and affiliated with Berkeley Artificial Intelligence Research (BAIR).

Before graduate school, I spent some time in industry working on autonomous driving at Uber's Advanced Technologies Group (ATG) and received a Bachelor's degree summa cum laude in Electrical Engineering and Computer Science, also from UC Berkeley (Go Bears!). Outside of research, I enjoy freestyle rapping, playing the piano, reading and writing about philosophy, and parkour.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn  /  Social


My master's and undergraduate research focuses on robotic manipulation of deformable objects. We have explored several deep reinforcement learning and imitation learning techniques in simulation, ranging from visual foresight to dense object descriptors, to achieve state-of-the-art results on fabric manipulation with real robotic systems.

VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation
Ryan Hoque*, Daniel Seita*, Ashwin Balakrishna, Aditya Ganapathi, Ajay Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
To appear at Robotics: Science and Systems (RSS), 2020.
Project Website

With a model-based reinforcement learning technique we call VisuoSpatial Foresight, we train a visual dynamics model entirely on random-interaction RGBD data in simulation and use it to sequentially manipulate fabric toward a variety of goal images.
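As a rough illustration of the planning loop behind visual foresight methods (not the paper's actual implementation), the sketch below plans with the cross-entropy method over a learned dynamics model: sample action sequences, roll them through the model, keep the elites closest to the goal image, and refit the sampling distribution. The `predict` and `cost` functions here are hypothetical stand-ins for the video prediction model and goal-image cost.

```python
import numpy as np

def predict(obs, action, rng):
    """Placeholder for a learned visual dynamics model (hypothetical)."""
    return obs + 0.1 * action + 0.01 * rng.standard_normal(obs.shape)

def cost(obs, goal):
    """Placeholder cost: distance between predicted observation and goal image."""
    return float(np.linalg.norm(obs - goal))

def cem_plan(obs, goal, horizon=5, act_dim=4, pop=64, elites=8, iters=3, seed=0):
    """Cross-entropy method planning over the learned dynamics model."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        # Sample candidate action sequences from the current distribution.
        samples = mean + std * rng.standard_normal((pop, horizon, act_dim))
        costs = []
        for seq in samples:
            o = obs
            for a in seq:          # roll the sequence through the model
                o = predict(o, a, rng)
            costs.append(cost(o, goal))
        # Refit the distribution to the lowest-cost (elite) sequences.
        elite = samples[np.argsort(costs)[:elites]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # execute the first action, then replan (MPC-style)
```

In practice such planners execute only the first action and replan from the new observation, which keeps model error from compounding over the horizon.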

Learning to Smooth and Fold Real Fabric Using Dense Object Descriptors Trained on Synthetic Color Images
Aditya Ganapathi, Priya Sundaresan, Brijen Thananjeyan, Ashwin Balakrishna, Daniel Seita, Jennifer Grannen, Minho Hwang, Ryan Hoque, Joseph Gonzalez, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
Under Review at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
Project Website

Here we learn correspondences between images of fabric to capture visual structure (as opposed to dynamical structure) and specify policies by demonstrating actions on a standardized configuration of the fabric (e.g., a smoothed state).

Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor
Daniel Seita, Aditya Ganapathi, Ryan Hoque, Minho Hwang, Edward Cen, Ajay Tanwani, Ashwin Balakrishna, Brijen Thananjeyan, Jeffrey Ichnowski, Nawid Jamali, Katsu Yamane, Soshi Iba, John Canny, Ken Goldberg
Under Review at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
Project Website

In this paper we learn to smooth real fabric from arbitrarily complex starting states by leveraging access to ground-truth state in simulation.


Various other things you may find interesting.

Website template from Jon Barron.