One step forward

Teams compete to design an algorithm that can help people with prosthetics learn to move

A wheelchair athlete watches a rival’s game. (Photo: Luis Robayo/AFP/Getty Images)

Teaching a virtual person to walk is a little like watching a baby learn to walk: There’s a lot of falling down.

The difference is that, unlike a baby, a virtual person has no brain to coordinate its muscles, bones and joints into upright movement.

Still, Stanford researcher Lukasz Kidzinski, PhD, is optimistic that, through crowdsourcing, it’s possible to help living, breathing humans with algorithms that control the limbs of virtual people.

Last year, Kidzinski, a postdoctoral scholar in bioengineering, created a contest that enticed 442 teams of academics, private-sector artificial intelligence researchers and enthusiasts from around the world to design algorithms to teach virtual musculoskeletal models of athletes how to walk, run and eventually navigate an obstacle course.

Contestants used highly accurate computer models of musculoskeletal structures that were created by Kidzinski’s adviser, Scott Delp, PhD, professor of bioengineering and of mechanical engineering. His models are widely used for surgical navigation.

This year, teams are working with a virtual body that includes a prosthetic leg. The aim is to guide research into better prosthetic designs and to determine the best approaches for helping people learn to move with them.

“Last year was more of a proof of concept,” Kidzinski said. “This year we want to get closer to medical applications.”

More than 250 teams have signed up. They are judged on how far their virtual competitors can walk from the starting point. No one has yet gotten the model to walk with its prosthetic leg, but Kidzinski said that by this time last year, no team had managed more than a few steps. Some fell flat on their faces, virtually speaking.
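
For contestants, the task looks like a standard reinforcement-learning problem: the simulated body is an environment that accepts muscle activations and reports how far it has traveled. Below is a minimal sketch of that interaction loop; it assumes the challenge’s osim-rl Python package and its Gym-style interface (an assumption based on the challenge’s public starter code, not this article), with a random policy standing in for the learned controller a real entry would supply.

```python
# Minimal sketch of the challenge's control loop, assuming the
# osim-rl package's Gym-style ProstheticsEnv.
from osim.env import ProstheticsEnv

env = ProstheticsEnv(visualize=False)  # musculoskeletal model with a prosthetic leg
observation = env.reset()

total_reward, done = 0.0, False
while not done:
    # A real entry replaces this random policy with a learned controller,
    # e.g., a neural network trained with deep reinforcement learning.
    action = env.action_space.sample()  # one activation per simulated muscle
    observation, reward, done, info = env.step(action)
    total_reward += reward  # reward tracks forward progress, roughly
                            # mirroring the distance-based score

print("episode score:", total_reward)
```

A random policy like this one is what produces the virtual face-plants; the contest is won by whoever learns a policy that keeps the reward accumulating step after step.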

The fact that last year’s winner made it through an obstacle course showed that the approach could work.

“Compared with the first challenge, this new challenge is a big step forward,” Kidzinski said.

Nvidia will award graphics-processing units to the top three teams, and Google has offered cloud computing resources to teams that might otherwise find it difficult to take part.

Details about the NIPS 2018: AI for Prosthetics Challenge are available at https://stan.md/2Zg8yRF. The deadline to enter is Sept. 15.

Nathan Collins

Nathan Collins is associate director of interdisciplinary life sciences communications for the Stanford News Service.