As an independent research project, my friend Dexter Friis-Hecht and I built our own version of Open Duck Mini, an existing open-source bipedal robot. We took on this project to learn about reinforcement learning and to strengthen our systems-integration skills. We soldered and assembled the robot and brought up the Raspberry Pi; troubleshooting our build and working within an existing system was a valuable experience. Reading the Disney paper the project is inspired by, working through parts of David Silver's RL course and UC Berkeley's CS285, and understanding and running the Open Duck Mini Python implementation to export an ONNX model and run the walking policy was a blast, and taught me a lot about RL and how it can be used in a real system. Linked are videos of our robot walking!
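To give a feel for what "running the policy" means on the robot, here is a minimal sketch of a single control tick: assemble an observation, query the trained network, and get back joint actions. This is not the Open Duck Mini code itself; the real project queries an exported ONNX network via onnxruntime, and the observation and action sizes below are placeholder assumptions, with a random linear map standing in for the trained policy so the loop runs without the robot or a model file.

```python
import numpy as np

# Assumed, illustrative dimensions -- not the real Open Duck Mini values.
OBS_DIM = 46   # e.g. joint angles, IMU readings, velocity commands
ACT_DIM = 10   # number of actuated joints

rng = np.random.default_rng(0)
policy_weights = rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.01  # stand-in policy

def get_observation() -> np.ndarray:
    """Stand-in for reading joint encoders and the IMU."""
    return np.zeros(OBS_DIM, dtype=np.float32)

def run_policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for an onnxruntime InferenceSession.run() call."""
    return np.tanh(policy_weights @ obs)  # squash actions to [-1, 1]

# One control tick: observe, infer, then command the servos.
obs = get_observation()
action = run_policy(obs)
print(action.shape)  # (10,)
```

On the real robot this loop runs at a fixed control rate, with `run_policy` replaced by ONNX inference and the resulting actions mapped to servo position targets.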
A big thanks to our advisor Victoria Preston, the Franklin W. Olin College of Engineering SAG grant, and the Open Duck community for making this possible!
The second phase of this project is writing our own code to train a walking policy.