Modeling Human Motion-Capture Data for Creativity
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Napier, Emily | |
| dc.contributor.copyright-release | Not Applicable | en_US |
| dc.contributor.degree | Master of Computer Science | en_US |
| dc.contributor.department | Faculty of Computer Science | en_US |
| dc.contributor.ethics-approval | Not Applicable | en_US |
| dc.contributor.external-examiner | n/a | en_US |
| dc.contributor.manuscripts | Yes | en_US |
| dc.contributor.thesis-reader | Joseph Malloch | en_US |
| dc.contributor.thesis-reader | Carlos Hernandez Castillo | en_US |
| dc.contributor.thesis-supervisor | Sageev Oore | en_US |
| dc.contributor.thesis-supervisor | Gavia Gray | en_US |
| dc.date.accessioned | 2024-04-25T13:10:56Z | |
| dc.date.available | 2024-04-25T13:10:56Z | |
| dc.date.defence | 2023-08-11 | |
| dc.date.issued | 2024-04-16 | |
| dc.description.abstract | Human motion-capture data can be represented, modeled, and generated through computational techniques. This thesis explores representations and strategies for querying, interpolating, and sequence modeling of motion-capture data. We employ spectral analysis of motion-capture data to facilitate querying and comparing movements, and to identify target features for interpolation. We train a decoder-only transformer model on text-encoded motion-capture data, which we fine-tune for dance generation and movement classification. Our core contributions are interpolation and language-model training procedures for generating motion-captured dance. | en_US |
| dc.identifier.uri | http://hdl.handle.net/10222/84070 | |
| dc.language.iso | en | en_US |
| dc.subject | motion-capture | en_US |
| dc.subject | machine learning | en_US |
| dc.title | Modeling Human Motion-Capture Data for Creativity | en_US |