Robust and Adaptive Dexterous Manipulation with Vision-Based Learning from Multi-Demonstrations

dc.contributor.author: Chen, Nuo
dc.contributor.copyright-release: Not Applicable
dc.contributor.degree: Master of Applied Science
dc.contributor.department: Department of Mechanical Engineering
dc.contributor.ethics-approval: Not Applicable
dc.contributor.external-examiner: Jason Gu
dc.contributor.manuscripts: Not Applicable
dc.contributor.thesis-reader: Mohammad Saeedi
dc.contributor.thesis-supervisor: Ya-Jun Pan
dc.date.accessioned: 2024-08-09T18:39:01Z
dc.date.available: 2024-08-09T18:39:01Z
dc.date.defence: 2024-07-31
dc.date.issued: 2024-08-07
dc.description.abstract: Robotic manipulators can perform tasks as humans do, with the potential to greatly assist people in industry, health care, and services for the general public. In this thesis, a vision-based learning-from-demonstration framework for a 7-degree-of-freedom robotic manipulator is proposed. The framework learns from multiple human hand demonstrations to execute dexterous pick-and-place tasks. Conventional methods for collecting demonstration data involve manually and physically moving the robot; such methods can be cumbersome, lack dexterity, and be physically strenuous. The proposed contactless and markerless approach leverages the MediaPipe framework, dynamic time warping (DTW), a Gaussian mixture model (GMM), and Gaussian mixture regression (GMR) to capture and regress multiple dexterous hand motions. This yields a more comprehensive motion representation, consolidating multiple demonstrations and mitigating the non-smoothness inherent in a single demonstration. Dynamic movement primitives (DMPs) with a force coupling term are employed to adaptively transform human actions into trajectories executable in dynamic environments. By considering the variance estimated from the demonstration data, the path-planning parameters associated with the linear and nonlinear terms are automatically fine-tuned to adapt the trajectories. To compensate for unknown external disturbances, a non-singular terminal sliding mode controller (NTSMC) is applied to the Franka Emika robot for precise trajectory tracking. Experimental studies demonstrate the effectiveness and robustness of the proposed framework in executing human hand demonstrations, motion planning, and control for pick-and-place tasks.
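
Illustrative note: the contactless capture step described in the abstract can be sketched with MediaPipe's published Python Hands API. The following is a minimal, assumption-laden sketch rather than the thesis's implementation: the function name record_wrist_trajectory, the single-wrist 2-D tracking, and all parameter values are hypothetical, and the thesis's camera calibration, full-hand tracking, and DTW/GMM/GMR processing are omitted.

import cv2
import mediapipe as mp
import numpy as np

# Hypothetical helper: record the wrist landmark of one detected hand per
# frame as a normalized 2-D image-plane trajectory. The Hands solution and
# HandLandmark.WRIST index come from MediaPipe's Python API; the camera
# index and frame budget are illustrative choices.
mp_hands = mp.solutions.hands

def record_wrist_trajectory(camera_index=0, max_frames=300):
    trajectory = []
    cap = cv2.VideoCapture(camera_index)
    with mp_hands.Hands(max_num_hands=1,
                        min_detection_confidence=0.5,
                        min_tracking_confidence=0.5) as hands:
        while len(trajectory) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames in BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                wrist = results.multi_hand_landmarks[0].landmark[
                    mp_hands.HandLandmark.WRIST]
                trajectory.append([wrist.x, wrist.y])
    cap.release()
    return np.asarray(trajectory)  # shape: (n_frames, 2)

Several such trajectories, one per demonstration, would then be time-aligned with DTW and summarized by GMM/GMR before the DMP-based adaptation described in the abstract.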
dc.identifier.uri: http://hdl.handle.net/10222/84393
dc.language.iso: en
dc.subject: Learn from Demonstration
dc.subject: Vision-Based Detection
dc.subject: Path Planning
dc.title: Robust and Adaptive Dexterous Manipulation with Vision-Based Learning from Multi-Demonstrations

Files

Original bundle

Name: NuoChen2024.pdf
Size: 4.33 MB
Format: Adobe Portable Document Format
Description: Main article

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission