Article

A Virtual Reality Direct-Manipulation Tool for Posing and Animation of Digital Human Bodies: An Evaluation of Creativity Support

UpnaLab, ISC—Institute of Smart Cities, Public University of Navarra, 31006 Pamplona, Spain
*
Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2024, 8(7), 60; https://doi.org/10.3390/mti8070060
Submission received: 6 June 2024 / Revised: 1 July 2024 / Accepted: 9 July 2024 / Published: 10 July 2024
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality—2nd Edition)

Abstract

Creating body poses and animations is a critical task for digital content creators, movement artists, and sports professionals. Traditional desktop-based tools for generating 3D poses and animations often lack intuitiveness and are challenging to master. Virtual reality (VR) offers a solution through more intuitive direct-manipulation capabilities. We designed and implemented a VR tool that enables direct manipulation of virtual body parts with inverse kinematics. This tool allows users to pose and animate virtual bodies with one- or two-handed manipulations while moving freely, including bending, jumping, or walking. Our user study demonstrated that participants could produce creative poses and animations using this tool, which we evaluated for creativity support across six factors. Additionally, we discuss further opportunities to enhance creativity support.

1. Introduction

Body posing and animation is in demand across multiple scenarios and industries. Sports professionals, fitness practitioners, professional dancers, movement artists, and digital content creators need to define, modify, visualize, or calculate body poses and animations. Sports professionals define body poses to improve performance and reduce the probability of injury. Movement artists create virtual body poses and animations that are later reproduced live in shows. Digital content creators pose and animate virtual characters for the production of static images or animated movies.
Computer-supported creativity tools, encompassing both software and hardware interfaces, are designed to foster innovation and the creation of new artifacts [1]. Traditional desktop-based tools like Maya, Unreal, and Blender have enabled the production of a wide range of creations, from movies and games to industrial and architectural models. However, these desktop-based tools can restrict the expressiveness of creators. Creators need to manually pose the joints of the 3D character with two-dimensional input and output interfaces, which consumes a significant portion of production time. Keyframe-based animation requires extensive learning and practice to create convincing results, which can be complicated and difficult for novice users.
Today’s immersive environments, featuring head-mounted displays and 6 DOF hand-held controllers, provide complete 3D experiences. Artists and designers love virtual-reality creative tools like Google Tilt Brush and Gravity Sketch for their highly intuitive controls, which offer a tremendous sense of creative freedom. However, no virtual reality tool offers direct manipulation of virtual body parts with inverse kinematics that permits the production of poses and animations based on creativity-support design principles and that is designed for natural manipulation movements.
In this paper, we first describe the design objectives and implementation of the tool. Second, we present our research methodology for creative support evaluation and the results of a user study. Finally, we present our findings and interpretation of the six individual factors of creativity support and suggest avenues for future research.

2. Related Work

Digital body posing and animation has traditionally relied on expensive motion capture equipment, consisting either of facilities with multiple cameras or body suits with integrated sensors [2]. Such equipment produces digital content that requires filtering and adjustment with additional computer programs. Specific software and hardware have been developed for specific applications: animation of facial expressions [3], motion analysis in sport science [4], and digital tools for a dance learning environment [5]. Other technologies used for such applications are live body animation with augmented reality mirrors, used to enhance movement training [6] or improvise dance movements [7]. Recently, advances in computer vision and machine learning have permitted the detection of human poses and animations from videos with tools such as OpenPose [8], enabling the augmentation of human action videos with visual effects [9] or the importation of videos, modification of the motion timeline, and combination of motions from different videos to create new animations [10]. A very different approach is taken by Puppeteer [11], which explores hand gesture recognition to manipulate human avatar actions.
Artists and designers highly appreciate virtual-reality 3D sketching and modeling tools like Google Tilt Brush and Gravity Sketch for their intuitive controls, which offer a vast sense of creative freedom. In these tools, strokes are drawn in 3D space following the position of the user’s hand, and curved surfaces are formed by moving a line that connects both hands. Innovative extended-reality tools have been introduced to enhance sketching capabilities by incorporating real-world objects, distances, or compositions [12], and to enable the animation of physical phenomena using hand gestures [13]. Furthermore, prototyping in immersive environments has been applied to movie-scene pre-production [14], cross-reality system experimentation [15], and authoring augmented-reality instructional systems [16]. Commercial VR animation tools such as Quill [17] and Tvori [18] use hand-held controllers to apply direct manipulation to animate strokes representing humans, producing cartoon-like human animation. There are also virtual reality proposals for posing virtual humans with in-the-air sketches [19] or animating virtual characters with hand gestures [20]. The limitations of these virtual reality tools can be summarized as either not using virtual human skeletons or not allowing intuitive direct manipulation of virtual bodies.

3. Design and Implementation of Tools

Our first and second requirements are as follows: (1) to implement virtual reality tools for posing and animating digital human characters with skeletons incorporating full inverse kinematics capabilities, improving on existing work [17,18] that focuses on posing and animating strokes representing humans, and (2) to improve upon current methods by implementing direct manipulation of virtual bodies and body parts through physical interaction, such as approaching and colliding the user’s real hand with the virtual body part, instead of less-precise methods like drawing in the air or using hand gestures [19,20].
An additional third requirement is to take advantage of virtual-reality handheld controllers and tethered devices so that users can manipulate with real-world physical gestures; i.e., users can reach elements at any position in the room-sized space; users can walk, bend down, jump, turn their body around the room-sized space to translate virtual elements; or they can stretch one arm to manipulate parts in high positions and translate other parts with the other arm.
As a fourth requirement, we aim to develop tools that support creativity, taking into account the principles stated by Shneiderman et al. [1], especially the concept of “low thresholds, high ceilings, and wide walls”, which we have specified in the following design objectives:
  • The user is allowed free movement in room-sized spaces;
  • Users should primarily utilize a single controller to ensure free and balanced movement, while still maintaining precise and controlled interactions;
  • The playback and creation functions are assigned to separate controllers, allowing them to be used independently, which ensures that only one hand is occupied.
This implementation has been developed in Unity 2021.3.13f1, incorporating XR plugins from the Oculus Integration package (https://developer.oculus.com/downloads/package/unity-integration, accessed on 11 November 2023). Virtual bodies are implemented using the Unity Avatar class (https://docs.unity3d.com/2021.3/Documentation/Manual/class-Avatar.html, accessed on 11 November 2023), which represents humanoids with two legs, two arms, and a head. Currently, the model used is the YBot from Mixamo (https://www.mixamo.com/#/?query=YBot&type=Character, accessed on 11 November 2023), scaled to a height of 1.7 m in virtual reality. Virtual body parts can be selected when users position their VR controllers to collide with a virtual body part. The controller input is mapped to actions using the Oculus OVRInput class (https://developer.oculus.com/documentation/unity/unity-ovrinput/, accessed on 11 November 2023). When the user presses the primary button on the controller, the body part is grabbed, and the position of the controller is translated to that part by direct manipulation. This solution improves upon existing methods, which rely on less-precise techniques such as drawing in the air or using hand gestures to create poses [19,20]. The proposed implementation allows for intuitive manipulation of a virtual body with the same movements that the user would make in the real world. Users can bend to move knees and legs, as shown in Figure 1a, which is a composition image of a virtual body and the real user. Figure 1b shows a composition of the real user stretching his arms to reach both hands of the virtual body simultaneously.
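The collision-based grab interaction described above, implemented in the tool with Unity colliders and the Oculus OVRInput class, can be sketched in plain Python. This is an illustrative sketch rather than the tool's actual C# source: the `BodyPart` class, the grab radius, and the shape of the per-frame state machine are our assumptions.

```python
from dataclasses import dataclass


@dataclass
class BodyPart:
    name: str
    position: tuple  # world-space (x, y, z), in metres


def parts_in_reach(controller_pos, parts, radius=0.08):
    """Return body parts the controller overlaps.

    Approximates Unity's collider test with a simple sphere check;
    `radius` is an assumed grab tolerance."""
    def dist2(a, b):
        return sum((ax - bx) ** 2 for ax, bx in zip(a, b))
    return [p for p in parts if dist2(controller_pos, p.position) <= radius ** 2]


def update_grab(controller_pos, primary_pressed, parts, grabbed):
    """One frame of the grab state machine: press the primary button to
    grab the nearest overlapping part, hold to drag it to the controller
    (direct manipulation), release to drop."""
    if grabbed is None and primary_pressed:
        candidates = parts_in_reach(controller_pos, parts)
        if candidates:
            grabbed = min(
                candidates,
                key=lambda p: sum((a - b) ** 2
                                  for a, b in zip(controller_pos, p.position)))
    if grabbed is not None and primary_pressed:
        grabbed.position = controller_pos  # part follows the controller
    elif not primary_pressed:
        grabbed = None
    return grabbed
```

In the actual tool, the dragged position is then fed to the inverse-kinematics solver so the rest of the body follows.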
The avatar implements inverse kinematics provided by Unity (https://docs.unity3d.com/2021.3/Documentation/Manual/InverseKinematics.html, accessed on 11 November 2023). We configured inverse kinematics of the virtual body centered in the torso, so grabbing and moving a hand leads to movement of the arm, shoulder, and upper body. Any body part can be locked by selecting it with the controller’s secondary button, which freezes the position and rotation of the corresponding rigid body part (https://docs.unity3d.com/2021.3/Documentation/Manual/class-Rigidbody.html, accessed on 11 November 2023). This prevents the inverse kinematics from transferring to the rest of the body, allowing movement of only the hand and arm while locking the shoulder, as seen in Figure 2a. Using humanoids with inverse kinematics permits more realistic poses and animations than existing virtual reality tools [17,18], which focus on posing and animating strokes representing humans and produce cartoon-like poses and animations.
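The tool relies on Unity's built-in humanoid inverse kinematics; to illustrate how locking a body part constrains an IK solve, the following sketch implements a minimal 2D cyclic-coordinate-descent (CCD) solver with a `locked` set of frozen joints. The planar chain, the CCD algorithm, and the iteration count are our assumptions and do not reflect Unity's internal solver.

```python
import math


def fk(angles, lengths):
    """Forward kinematics: joint positions of a planar chain at the origin.
    `angles` are relative joint angles in radians."""
    pts = [(0.0, 0.0)]
    total = x = y = 0.0
    for a, l in zip(angles, lengths):
        total += a
        x += l * math.cos(total)
        y += l * math.sin(total)
        pts.append((x, y))
    return pts


def ccd_ik(angles, lengths, target, locked=frozenset(), iterations=50):
    """CCD: sweep joints tip-to-root, rotating each unlocked joint so the
    end effector approaches `target`. Indices in `locked` are skipped,
    mirroring the tool's freeze-a-body-part feature."""
    angles = list(angles)
    for _ in range(iterations):
        for i in reversed(range(len(angles))):
            if i in locked:
                continue
            pts = fk(angles, lengths)
            jx, jy = pts[i]           # joint being rotated
            ex, ey = pts[-1]          # end effector
            to_end = math.atan2(ey - jy, ex - jx)
            to_target = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += to_target - to_end
    return angles
```

With a joint locked, the solver leaves that angle untouched, so dragging the end effector no longer propagates past it, analogous to freezing a shoulder while moving the hand.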
Virtual reality also allows for non-natural operations, such as selection and translation from a distance. Virtual controllers can launch a virtual ray to select objects at a distance. The virtual ray is a line attached and perpendicular to each controller. If the ray collides with a virtual body, the body part is grabbed, as previously explained. Translating and rotating the controller redirects the ray, and the body is manipulated with full inverse kinematics. As seen in Figure 2b, both controllers can have their own ray, as in the near-contact manipulation. This manipulation mode is less precise and controllable but facilitates further creative options.
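Selection at a distance reduces to a standard ray-collider intersection test. A minimal sketch, assuming body parts are approximated by spheres (the tool would use each part's actual Unity collider):

```python
def ray_hits_part(origin, direction, center, radius):
    """Ray-sphere test for distant selection: does a ray from the
    controller (`origin`, with normalised `direction`) hit a body part
    approximated by a sphere at `center`?"""
    # vector from the ray origin to the sphere centre
    oc = [c - o for o, c in zip(origin, center)]
    # signed distance along the ray to the closest point to the centre
    t = sum(d * v for d, v in zip(direction, oc))
    if t < 0:
        return False  # part is behind the controller
    closest = [o + t * d for o, d in zip(origin, direction)]
    d2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return d2 <= radius ** 2
```

Once a hit is registered and the grab button is pressed, manipulation proceeds exactly as in the near-contact case.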
Virtual body manipulations can also be recorded as they are performed. The positions and rotations of the virtual body skeleton are saved as Unity animation clips (https://docs.unity3d.com/2021.3/Documentation/Manual/AnimationClips.html, accessed on 11 November 2023) for each interval during which the user manipulates the body. After releasing the virtual body, if the user presses the play button, the animation clip is read, and the positions and rotations are translated to the virtual body, resulting in the body playing the animation. This mode of creating animations is more agile than existing solutions.
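The record-as-you-manipulate workflow can be sketched as a time-stamped list of body-part transforms, appended while a part is grabbed and sampled on playback. The class and method names below are hypothetical, and a real Unity animation clip additionally interpolates between keyframes with curves; this sketch plays back the nearest earlier keyframe instead.

```python
import bisect


class MotionRecorder:
    """Sketch of record-and-replay: time-stamped body-part transforms are
    appended while the user manipulates the body, then sampled on play."""

    def __init__(self):
        self.frames = []  # (time, {part_name: (position, rotation)})

    def record(self, t, transforms):
        """Append one keyframe; called every frame while a part is grabbed."""
        self.frames.append((t, dict(transforms)))

    def sample(self, t):
        """Return the recorded pose at or before time t (step playback)."""
        times = [f[0] for f in self.frames]
        i = bisect.bisect_right(times, t) - 1
        return self.frames[max(i, 0)][1]
```

On playback, each sampled pose is written back to the avatar's skeleton, reproducing the manipulation as an animation.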
To facilitate the creation of smooth animations, previous positions of the body can be visualized as ghost representations, which are semi-transparent visualizations of the body that are animated and represent the last movement of the virtual body. They are shown with a duplicated virtual body that has collisions and inverse kinematics deactivated, see Figure 2c. This video demonstrates those tool functionalities: https://youtu.be/RIXoVGl2Rzs, accessed on 11 November 2023.

4. Creativity Support Evaluation

To assess the creativity support offered by the implemented tool, we conducted a user study in line with the recommendations of Shneiderman et al. [1] and Hargood and Green [21]. Our study was centered on collecting qualitative data and executing a complex creative task to capture creativity strategies and explore the potential impact. Our evaluation utilized primarily qualitative methods involving open-ended questions, supplemented by the CSI (Creativity Support Index) questionnaire [22,23] to provide quantitative insights. The CSI is a validated survey designed to gauge how effectively a tool supports the creative process of its users. It should be noted that the CSI does not assess the outputs of the creative process, nor does it measure individual creativity. Additionally, to illustrate the tool’s potential for fostering creativity, we present examples of poses and animations created by the users.
We recruited twelve participants (average age 28.3; 2 female, 10 male), all with a degree in computer science. They were selected from students on a Master’s program and researchers at a research institute; we tried to obtain as many participants as possible with experience in animation software (4 out of 12). Prior to performing the creative task, participants were asked about their previous creative performance and technology experience. They reported that they considered themselves of average creativity, and all of them had some experience in virtual reality.
The study procedure was as follows: participants were given a demonstration of how the tools are operated and a few minutes to become accustomed to their operation by practicing with some predefined poses and animations. Afterwards, they were instructed to be creative and, firstly, to create a pose, followed by creating an animation. Throughout both tasks, the positions and rotations of virtual body parts were continuously recorded using additional logging capabilities integrated into the tool. These data allowed the animations to be replayed and final poses to be reproduced by researchers after the intervention. Subsequently, participants completed questionnaires via a web form. Initially, they used a Likert scale (ranging from 1 to 7) to respond to CSI questions. Following this, participants were asked to provide free-text responses regarding both positive and negative aspects of the posing and animation functionalities, as well as their preferences for animation workflows compared to traditional tools.

4.1. Creative Outcomes

In this section, we report the participants’ resulting creations. Participants were asked to title their creation. Titles given to some of the created poses were the following: “Vampire watching over a room corner”, “Breakdance with head over floor waving a hand”, “Bridge”, “Superman pose”, “Cross in air”, and “Ready to run”. Titles given to some of the created animations were the following: “Lateral flip”, “Dance class with me as instructor”, “Bomb falling on a pool”, “Jump into the bed”, “Man flying over me”, and “Fall and hit”. Figure 3 and Figure 4 show some of those creations.
The creations were produced very quickly, taking between 30 s and 3 min, with an average of 100 s. Two participants created more than one pose or animation.

4.2. CSI Questionnaire

The CSI survey encompasses six dimensions: Exploration, Expressiveness, Immersion, Enjoyment, Results Worth Effort, and Collaboration. Participants rated their agreement with the following statement for each dimension:
  • ENJOYMENT: I enjoyed using the system or tool.
  • EXPLORATION: The system or tool was helpful in allowing me to track different ideas, outcomes, or possibilities.
  • RESULTS WORTH EFFORT: What I was able to produce was worth the effort I had to exert to produce it.
  • EXPRESSIVENESS: The system or tool allowed me to be very expressive.
  • COLLABORATION: It was really easy to share ideas and designs with other people inside this system or tool.
  • IMMERSION: I became so absorbed in the activity that I forgot about the system or tool that I was using.
Figure 5 reports the resulting individual-factor mean scores on a 1–7 scale and standard errors for the creative posing and creative animation tasks. Individual-factor scores indicate which aspects of creativity support are well supported or need to be improved. We conducted paired-sample one-tailed t-tests on the six CSI factors. Scores for Enjoyment, Exploration, and Immersion were statistically significantly higher for the animation task than for posing: t = −2.8, p = 0.017; t = −2.27, p = 0.044; and t = −2.34, p = 0.039, respectively. No statistically significant differences between animation and posing were found for Results Worth Effort, t = −1.609, p = 0.136; Expressiveness, t = −1.14, p = 0.275; and Collaboration, t = 0, p = 1.
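For reference, the paired-sample t statistic used in these comparisons is computed directly from per-participant score differences between the two tasks. The sketch below uses illustrative values, not the study's raw data; the one-tailed p-value would then be read from a t distribution with n − 1 degrees of freedom (e.g., via `scipy.stats.t.cdf`).

```python
import math


def paired_t(a, b):
    """Paired-sample t statistic for per-participant scores on two tasks
    (a = posing, b = animation). Returns (t, degrees of freedom)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # sample variance of the differences (Bessel's correction)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1


# Illustrative scores for four hypothetical participants:
t_stat, df = paired_t([5, 6, 5, 7], [6, 7, 6, 7])
```

A negative t, as in the study's results, indicates the second (animation) scores are higher than the first (posing) scores.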

4.3. Findings

We interpret the results from the CSI survey and open-ended questions for each individual factor: Exploration, Expressiveness, Immersion, Enjoyment, Results Worth Effort, and Collaboration, to evaluate the creative support of the tool.

4.3.1. Enjoyment

The implemented tool obtained high scores for the Enjoyment factor, 5.6 and 6.5 out of 7, among the highest of all factors. This is an important factor, as one of the most respected theories of creativity, by psychologist Mihaly Csikszentmihalyi, focuses on the concept of enjoyment [24]. His studies indicate that enjoyment comes from a deep but effortless involvement, or “flow”, with elements also present in creative processes, such as the willingness to continue or repeat an activity, distractions being excluded from consciousness, and the sense of time becoming distorted. Participants’ comments on the usage of the tools confirm some of these elements: “I would keep on using it”; and basic comments confirm their enjoyment, such as “I have a lot of fun” and “I liked it the most”. The tool was not used in a work task, and some participants chose amusing creative outcomes: “I would choose (this pose) as it seems to me the funnier”. Thus, future work might involve studies in a work task, where participants might not be expected to be happy and enjoy the task.

4.3.2. Exploration

Scores of 5.3 and 6.1 out of 7 were obtained for the Exploration factor. Exploration is critical and is usually seen as the most important factor by participants in the evaluation of creative tools with the CSI [25]. In our study, participants agreed that the tool provided the necessary support for creativity by making it easy to explore different possibilities and try out new ideas and outcomes, with comments such as “freedom to explore poses at will made me realize about what I want to show”. For creative participants, the tool was able to spark ideas: “many ideas ran through my head”. As this is a very important factor, we should take into account participant comments on further tool development regarding exploration: “It would be great to be able to have different poses from which to start”. A feature missing from our tools but very common in other creativity support tools is rich history-keeping with the capacity to undo and redo actions.

4.3.3. Results Worth Effort

This factor captures the tradeoff between the complexity of the tool, the amount of work it requires, and the quality and variety of things that can be produced with it. This factor obtained scores of 5.3 and 6 out of 7. Half of the participants reported that the tool “is quick to use” as a positive point, in that not much effort was required. The resulting scores for this factor and the variety of creative outcomes reported previously confirm that the results are worth the effort. Moreover, the tool can be used to produce both poses and animations. This factor can be controversial, as complex tools raise a barrier to obtaining satisfying outcomes; however, the principles for the design of creative support tools by Shneiderman et al. propose designs with “low thresholds, high ceilings, and wide walls” [1]. We added creative functionality without compromising ease of use, and the next section provides some examples.

4.3.4. Expressiveness

This factor refers to users’ expectations to be able to express ideas clearly. This factor obtained scores of 4.5 and 5 out of 7, lower than previous factors. There were some positive comments regarding this topic, such as “it allows to create general poses very easily” or “I liked a lot to represent my movements in the dummy, to be able to see them from the outside”. This last comment refers to an unexpected functionality put into practice by an evaluator. However, several negative comments were made regarding the ability to express ideas clearly: “it lacks precision at the time of positioning body limbs”, “I could not rotate joints at will”, and “I would like to move body limbs without affecting the rest of the body”. This last functionality is already implemented, but was not demonstrated to users so as not to complicate usage of the tools. Also, regarding the animation functionality, some users complained that they “would like to have more control on the animation timeline”.

4.3.5. Immersion

Immersion in creative practice is mostly associated with losing track of time. However, comments by participants refer more to immersion in the virtual environment, as this is a word commonly associated with virtual reality technologies, meaning engagement and presence (the extent of a user’s perception of actually being there) in the virtual environment. Participant comments refer to this as “being so near to the dummy it feels so real”, and “it felt authentic, as much as I can imagine as there is no physical contact”. Nonetheless, there is some similarity in meaning. The resulting scores for the corresponding question, “I became so absorbed in the activity that I forgot about the system or tool that I was using”, were 3.6 and 4.3 out of 7. These scores were not very high compared to those of other questions, and may be related to the lack of physical feedback when touching the virtual body and to the non-realistic body poses and animations that can be created, which can be a positive feature for creative purposes.

4.3.6. Collaboration

Our tools were not designed to support collaborative work. As expected, most users evaluated this item with the lowest possible value. Unfortunately, collaboration in immersive applications is not as immediate as in screen-based applications: simple visualization sharing requires some form of screen casting or file sharing. However, implementing basic networking for multiuser collaboration would permit multiple users to simultaneously manipulate virtual bodies with multiple hands.

4.3.7. Comparison with Existing Tools

Participants were further questioned on their preferences for animation functionality. A paired-sample one-tailed t-test indicates that the continuous animation method implemented in the tool (M = 7.6, SE = 0.3) received a statistically significantly higher preference than a timeline with a sequence of poses as in traditional animation tools such as Maya or Blender (M = 5.5, SE = 0.6): t = −2.57, p = 0.026. Participants commented that “I think making a rough animation with this tool and then refining it would be much faster than doing it from scratch in another program” and “I really like controlling the dolls with my hands because of the capabilities it gives you and the freedom you have, but when it comes to making precise animations you may need help from other tools or more complex methods to make sure that what you are doing is right”. However, one commented that “continuous animations with VR would be great. However, the methods I have used should be refined so that they are more fluid or allow you to rotate joints”.

5. Discussion

5.1. Intuitive Manipulation of Digital Bodies

The results of the study suggest that users were satisfied with the tool’s capability to produce results worth the effort, especially as it is quick to use for both posing and animation. However, users demanded further control when manipulating virtual bodies, for rotating joints or positioning certain body limbs. The solution improves upon existing methods [19,20], which rely on less-precise techniques such as drawing in the air or using hand gestures to create poses. The proposed implementation enables intuitive manipulation of a virtual body, mimicking movements that users would naturally perform in the real world, thanks to the perception that the virtual body shares the same dimensions as a real one. Users are able to bend to move knees and legs, stretch arms to move the head or arms, and exert precise control over these actions in a user-friendly manner. Direct manipulation of virtual bodies was revealed to be an appreciated feature for posing and animation that can complement technologies such as motion capture [2] or computer-vision pose detection [7].

5.2. Creative Posing and Animation

CSI questionnaire results show a high overall index and high values for several CSI dimensions: Enjoyment, Exploration and Results Worth Effort. By providing an immersive and intuitive interface, the tool lowers the barriers to entry for animators, allowing them to focus more on the creative aspects rather than the technical intricacies of animation. This aligns with existing research that highlights the potential of VR to provide a more engaging and effective medium for creative tasks compared to traditional desktop-based tools [26].
Creative outcomes suggest that the evaluated tool can produce uncommon poses and animations, i.e., those not usually produced in desktop authoring tools. For example, in immersive virtual reality, users make use of the large volume of space around them to pose bodies around large objects such as walls or beds, and they create dancing-body animations by performing as real dancing partners.
CSI results are better for the animation task than for the posing task. The posing workflow is executed similarly to the existing tools. However, the workflow for creating body animations differs significantly from traditional tools like Maya or Blender, as well as from novel tools such as [17,18], which utilize a timeline with a sequence of poses for animation creation. Instead, we save the positions and rotations of all body parts of the virtual avatar in real-time as they are manipulated, resulting in a much more agile process where complex animations can be created with simple one- or two-handed manipulations. However, this method does not offer the same level of control as a timeline-based approach. This workflow can be likened to tools such as [20], which also animate using simple hand gestures. The distinction lies in our approach of directly manipulating the virtual body, which eliminates the need to learn mappings from gestures to animations.
Overall, creative evaluation indicates that the future of the tool is encouraging for novel digital animations of bodies in creative application domains such as game design [27], ergonomic design [28], or choreographic practice [29].

5.3. Limitations and Further Work

As an unavoidable drawback of our tool, extended sessions involving full-body movements can lead to user fatigue. Other limitations commented on by study participants can be overcome in future work: improving control for precise posing and animations, and enabling undo features and import/export functionality to integrate with other motion-capture technologies. Further work will have multiple pathways, including the application to specific domains and performing corresponding user studies with participants with different backgrounds and experiences, plus enabling the manipulation of multiple virtual bodies by multiple users.

6. Conclusions

Our evaluation suggested that participants enjoyed our core ideas and were able to use the system successfully to create the poses and animations they desired. The artifacts created by the study participants confirm that the presented tool makes it very easy to create human body poses and animations. Quantitative results of CSI metrics showed that users prefer our tool for animation tasks over posing in terms of Enjoyment, Exploration, and Immersion. Additionally, other quantitative results indicate a preference among users for using our tool for animations over traditional tools such as Maya or Blender, particularly because the workflow is more agile. While the CSI factors Results Worth Effort and Expressiveness did not show statistically significant differences, qualitative data (open-ended questions and created artifacts) suggest that the tool is very quick to use and allows the creation of general poses very easily. Moreover, we discovered that it has the potential to produce uncommon poses and animations. Additionally, we identified creative strategies, such as creating while dancing, which can lead to novel creations. Along with the encouraging results, the user evaluation also showed some limitations, with opportunities for future improvement.

Author Contributions

Conceptualization, O.A.; methodology, O.A.; software, Y.B.; validation, Y.B.; data curation, A.L.; writing—review and editing, O.A.; visualization, A.L.; supervision, O.A.; funding acquisition, O.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EU European Research Council (ERC) Starting Grant programme under grant agreement No. 101042702 (Intevol ERC-StartingGrant-2021).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Public University of Navarre (protocol code “Interacción indirecta con displays volumetricos no solidos PI-011/22”, processed on 31 March 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are provided on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. User manipulation of virtual bodies with full body movements: (a) user bending to move knees, (b) user stretching arms.
Figure 2. Functionality of the tool: (a) blocking virtual body parts, (b) manipulation at distance with rays, (c) “ghosts” previewing previous poses.
Figure 3. Poses created by participants: (a) “Vampire watching over a room corner”, (b) “Breakdance with head over floor waving a hand”.
Figure 4. Animations created by participants: (top) “Lateral flip”, (bottom) “Dance class with me as instructor”.
Figure 5. CSI individual-factor mean scores and standard error for creative posing task (dark-blue color) and creative animation task (light-orange color).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Benbelkheir, Y.; Lerga, A.; Ardaiz, O. A Virtual Reality Direct-Manipulation Tool for Posing and Animation of Digital Human Bodies: An Evaluation of Creativity Support. Multimodal Technol. Interact. 2024, 8, 60. https://doi.org/10.3390/mti8070060

