FVEA 419/619: Motion Capture for Artists 2019


“Progress is not possible without deviation” –Frank Zappa

Evolving Class Schedule for 2019:

January 28
First class meeting. The class was intended to open with a live demonstration of Jasper Brekelmans' Brekel Pro Body v2 mocap software for the Microsoft Kinect v2. Unfortunately there was a change in the licensing system at CalArts and it took a while for the tech staff to straighten that out and get the software up and running. We ended up moving ahead in the schedule and had a look at the first and last required reading of the semester, Maureen Furniss's seminal and comprehensive overview paper Motion Capture; on the readings page we also touched on some of the recommended readings in that list. This was followed by a general discussion of various motion capture technologies and the nature of the systems we have available to us here at CalArts. We experienced a seemingly unending flood of stories relating to various aspects of mocap at CalArts and beyond. During that monologue we explored various ways that the course might work for the diverse group of students making up the class. We learned that the specifics of the week-to-week course structure are intended to be determined in large part by the individual interests of the group. We viewed several approaches to motion capture as a performative art, with a particular emphasis on figurative dance and abstraction. This year we will be introducing our newly acquired OptiTrack passive marker mocap system, which is in the process of being installed in E58 (we visited E58 during the break to see the impressive truss loaded with a dozen mocap cameras brimming with potential). After viewing a brilliantly cynical warning about the type of mocap abuse we hope to avoid, we looked at several mocap-based works created by students in previous years' classes. Most of the following videos include scenes created with our PhaseSpace active marker mocap system.

.

The California Institute of Motion Capture, CalArts Producers Show Intro 2007
We can do this the easy way or the hard way –or not do it at all.
Let’s push the boundaries of irony and do the dance of the mushroom cloud anyway!

.


Ke Jiang made Taxi while he was a student at CalArts. He used the PhaseSpace mocap system to create a quirky performance by taking advantage of the artifacts that occur at the edge of the capture volume:

.


Visiting Artist Max Hattler conducted a workshop during the Program in Experimental Animation interim sessions in 2011.  The goal was to produce one or more short works using abstracted motion capture.  Forms I (Taekwondo) is one of those works:

.


Shimbe used the PhaseSpace motion capture system in a unique way for the making of this film. He rigged a Bunraku puppet with active markers and directed Danielle Ash as the puppeteer. The natural floppiness of the puppet provided an extraordinary quality to the targeted motion. You can see some still photos of the process in my photo album of the initial PhaseSpace tests at CalArts.

“The Wonder Hospital, a 3D & puppet animated film, is a surreal journey of oddity and empty illusion. In a mysterious hospital, modification of physical beauty is not what you would expect. A girl’s desire for superficial beauty leads her to chase after the luring ‘After’ images on a path of advertisements throughout the hospital. But in the end she finds something unimaginable and irreversible.”

.


A Maya playblast from 18 March 2010 of Sara Pocock's little girl character animated via a simple mocap T-pose test. The T-pose test was performed in class by Justin Leon in order to double-check that we had set up the MotionBuilder marker mapping process correctly before moving on to a directed capture session. We came close to doing a brief capture session but ran out of time and had to postpone it until the upcoming class. The real-time data from the T-pose test is all that we used here; no clean-up, filtering, retargeting, or other adjustments were done. Justin's simple casual movements gave the character an unintended sense of presence. In subsequent class meetings Justin and Sara worked on directed performance tests in order to gain more experience with that form of mocap, even though Sara's goal was to keyframe all of the animation in the final film.

.


For her MFA thesis project, A Scenic View of the End of the World, Ivy Flores chose to collaborate with choreographer Daniel Charon and composer Alex Wand in an iterative process wherein each participant would base what they were doing on the work the others had done in an earlier version. This enfolding process modified the work with each iteration until Ivy reached the form she found most interesting. The motion capture of the dancers was recorded with the PhaseSpace Impulse system mounted in the computer animation lab in room F105 and processed to extract essential movement. The final installation was presented in the Black and White Studio Gallery A404 on April 4 and 5, 2013.

.


Ivy Flores’ documentation of the process of creating “A Scenic View of the End of the World”.

.


Prior to her work with the PhaseSpace Impulse motion capture system, Ivy created this performance animation piece using two orthogonally placed video cameras and Adobe After Effects motion tracking.

.


Rachel Ho collaborated with Julian Petschek and Rob Gordon in the creation of the live motion capture performance Mo Cap Mo Problems, staged in the Black and White Studio Gallery A404 as part of the Motion Capture for Artists course exhibition in the spring of 2013.

“Mo Cap Mo Problems is a 15-minute performance and video installation that employs live motion capture in the engagement of virtual characters and spaces. The performance deals with issues of identity and technology in the service of pop culture, as explored through role-playing and the form of music gigs/concerts.”

.

Following the critical success of Mo Cap Mo Problems, Rachel Ho developed SLEIGHTING. This video features footage from five shows performed on April 3rd, 2014 in the CalArts B&W Studio A404 and was constructed to promote performances presented as part of the LAX 2014 Festival on September 20th, 2014, at the Bootleg Theater, Los Angeles.

“SLEIGHTING is an unprecedented approach to multimedia performance using real-time motion capture and pre-visualization tools to enable a new breed of performer. It is about showmanship, hype and the future of entertainment.

Through the ability to pilot avatars before a live audience, SLEIGHTING creates a new type of superstar who is no longer confined to being one personality, but is able to be anyone and everyone. Like the superstar DJ or sportsman who commands arenas full of fans, SLEIGHTING presents itself as a future art and sport, and an event that people descend upon to witness and partake in. In this case, the arena is now the site of reinvention for the event film, and the director is now conductor and performer, empowered through technological extensions of the self.

The show has a running time of around 20 minutes and mainly consists of three sketches in which the spectacle of interfacing with virtual realities drives both narrative and design. Real-time motion capture and pre-visualization tools, typically used in the film and gaming industries, are now used to DJ virtual camera moves and special effects for this live event.”

“Penelope the penalized elephant has found himself in prison. Little does he know the true, sinister purpose of the prison.”

OCTOFORNIA is an experimental project created by Gordon Straub and Cole Mercier in their first year at CalArts. They boldly decided to challenge themselves to make a film using 3D and mocap software that they had not previously used and were just beginning to learn. They adapted their project ideas to incorporate and reveal fundamental aspects and “flaws” of the technologies they were working with as a method of enhancing the unsettling aspects of their dystopian story.

.


Jorge Ravelo has experimented with using the Kinect v2 and Jasper Brekelmans' Brekel Pro Body v2 in the creation of both real-time performance and recorded work. In this film, Skip Jumbo, he worked with rapid montage layering of mocap video recordings from MotionBuilder with live-action 16mm film shot with a Bolex camera.

“Skip smiles // Skip jumps // Skip eats cereal // Skip looks at the sky and the news all at once.”

.


Narae Kim, Celine Tien, and Julian Soros of the Pippa's Pan team at the AT&T Developer Summit, where they were declared third-place winners of the AT&T VR/AR Challenge.

Pippa’s Pan is a reactive VR short film that takes you, our Agent, through the forest of Pippa’s mind to help recapture fragments of her forgotten memories. Experimenting with techniques in animation, world building, motion capture, 3D spatial audio, and even light field technology, Pippa’s Pan is set to be one of virtual reality’s first hybrid live-action short films. A literal forest woven from the sinews of this team’s ideas and youthful naiveté, the project sees its group of young dreamers deconstruct concepts of storytelling to re-invent the relationship between audience and narrative.

.

February 4
Second class meeting. After a preliminary discussion we will move into a demonstration of Brekel Pro Body v2, Jasper Brekelmans' mocap software for the Microsoft Kinect v2, which we were briefly introduced to during the first class meeting. Some of the features and limitations of single depth camera mocap will be described, and key aspects of Brekel Pro Body 2's real-time performance capture and display will be demonstrated. We will then look at the integration between Brekel Pro Body 2 and Autodesk MotionBuilder. The process of loading the Pro Body 2 plug-in for MotionBuilder into the scene by dragging and dropping it from the Devices folder into the Producer Perspective window will be covered. Loading a stock "characterized" polygonal mesh model already set up to deform via the character's integral bone system will be demonstrated. The procedure for setting the character up to be driven by the Kinect skeleton motion data will be shown, as will the use of multiple models animated by a single real-time performer. We will also look at the Brekel Pro Face v2 markerless facial capture software and discuss its advantages and disadvantages in comparison to more expensive professional facial capture solutions such as Dynamixyz and Faceware Technologies (Faceware provided a live demonstration to the class in 2016). We will also learn of the sad loss of public access to the brilliantly designed markerless facial capture system Faceshift, which was purchased by Apple in 2015; instead of continuing to make it available, Apple removed it from the market, unfortunately depriving artists of a great tool. Students will log into the online Autodesk Character Generator program, which is free to students and educators using a valid school domain email address. Autodesk's online help provides a good overview of the Character Generator workflow. At the end of the creation process it is possible to download the generated character for use in several programs; our goal will be to obtain an FBX file suitable for use in MotionBuilder. There can be some confusion about the paid options actually being free with an education account: the process indicates you must pay to download premium features, however as a student or educator you are granted unlimited "credits," which suffice to "purchase" and download the character. Everyone should be able to download and use their characters during class in the coming weeks.
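For anyone who would rather script the retargeting step than click through the Character Controls interface, the same connection can be made with MotionBuilder's Python API (pyfbsdk). The sketch below is a minimal, hedged example rather than the exact procedure we will follow in class; the character names "Kinect_Skeleton" and "Mia" are hypothetical placeholders for the characterized Kinect skeleton and the downloaded Character Generator mesh.

# Minimal pyfbsdk sketch -- run inside MotionBuilder's Python editor.
# Assumes the scene already contains two characterized characters:
#   "Kinect_Skeleton" : the skeleton driven by the Brekel/Kinect device
#   "Mia"             : the mesh character downloaded from Character Generator
# Both names are placeholders; substitute the names used in your own scene.
from pyfbsdk import FBSystem, FBCharacterInputType

scene = FBSystem().Scene

def find_character(name):
    # Return the first character in the scene with the given name.
    for character in scene.Characters:
        if character.Name == name:
            return character
    raise ValueError("No character named '%s' in this scene" % name)

source = find_character("Kinect_Skeleton")   # live mocap source
target = find_character("Mia")               # mesh character to be driven

# Retargeting: tell the mesh character to take its input from the mocap character.
target.InputCharacter = source
target.InputType = FBCharacterInputType.kFBCharacterInputCharacter
target.ActiveInput = True                    # enable live retargeting

Driving several models from a single performer, as we will do in the demo, is then just a matter of repeating the last three lines for each additional target character.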

Jasper Brekelmans showcasing some of the key features of Brekel Pro Body v2

.

February 11
Third class meeting. We will take a look at the chapter on Joan Staveley in Robert Russett's book, Hyperanimation: Digital Images and Virtual Worlds, in preparation for viewing her seminal work Wanting for Bridge as well as her work in the end by Chris Landreth. Afterwards we will discuss that work and its relevance to Joan Staveley's participation, along with Sandy Stone, Rob Tow, and Brenda Laurel, in the brilliant 1995 SIGGRAPH conference panel organized by Greg Garvey to address questions raised at SIGGRAPH the previous year regarding sexist and non-sexist readings of inherent gender differences: Grids, Guys and Gals: Are you oppressed by the Cartesian Coordinate System? Last week we logged into the online Autodesk Character Generator program. At the end of the creation process it is possible to download the generated character for use in several programs; our goal will be to obtain an FBX file suitable for use in MotionBuilder. At the end of last week, as I was working with our mocap consultant John Brennan on setting up and testing the new OptiTrack motion capture system in E58, he showed me a superior option to Character Generator. I have requested that our IT staff make the Adobe Fuse and Mixamo packages available on the workstations in F105 so that we may explore them in class (this may not be possible this week if there are complications with adding the software to the systems on short notice). The goal is to customize models in Fuse and then upload them to Mixamo for automatic rigging. The rigged models can be loaded into MotionBuilder, Unity, and other programs to be animated via recorded or live motion capture data from the OptiTrack system. In any case, everyone should be able to download and experiment with their characters during class in the coming weeks. The new OptiTrack is almost ready for use, and we plan to head down to E58 next class meeting and suit up class TA Vincent De La Torre to learn about the placement of markers on the suit relative to the body, and examine other aspects of the OptiTrack system, including several ways the live data may be viewed in the OptiTrack Motive software.


Actual education benefit: Unlimited credits – Account (Shared) 999999999999.

.

February 18
Presidents Day

.

February 25
Fourth class meeting. Motion capture expert John Brennan will serve as a guest artist in today's class. John will speak a bit about his background in the field and introduce some key general concepts at the core of optical marker motion capture. We will then move down to the mocap facility in E58, where he will walk us through an introduction to the use of OptiTrack Motive:Body with a suited performer. We will learn the theory and practice of selecting the appropriate markerset and placing markers on the mocap suit to optimize Skeleton Tracking via the appropriate placement of Joint Markers and Segment Markers.

These diagrams, taken from the Motive 2.1 documentation, show the Baseline markerset of 37 markers, which serves as a minimum (click on a diagram to view a larger image). Typically we use the Baseline plus 13 additional markers (50 in total) in order to obtain a higher-fidelity capture.

“Asymmetry is the key to avoiding the congruency for tracking multiple markersets. When there are more than one similar marker arrangements in the volume, marker labels may be confused. Thus, it is beneficial to place segment markers — joint markers must always be placed on anatomical landmarks — in asymmetrical positions for similar rigid bodies and skeletal segments. This provides a clear distinction between two similar arrangements. Furthermore, avoid placing markers in a symmetrical shape within the segment as well. For example, a perfect square marker arrangement will have ambiguous orientation and frequent mislabels may occur throughout the capture. Instead, follow the rule of thumb of placing the less critical markers in asymmetrical arrangement.”

.

March 4
Fifth class meeting. Motion capture expert John Brennan returned as a guest artist. The class met in the E58 motion capture facility in order to explore basic principles for marker placement and the foundational anatomical understanding of locomotion that underpins theories of optical marker-based performance capture. Methods for the integration of OptiTrack Motive via plug-ins for MotionBuilder, Unity, and Unreal were covered. The plan for the upcoming week's class is to go over temporarily disabling the native position and orientation tracking of the Oculus Rift and replacing it with tracking based on mounting an OptiTrack rigid body array on the Rift. This will enable simultaneous tracking of the headset and the bodysuit for doing immersive VR work.

John Brennan explaining best practices in marker placement based upon a working knowledge of human bone and joint structures. This included a description of methods for adapting that understanding to the specific simplifications required for motion capture.

.

March 11
Sixth class meeting. John Brennan joined us again and supervised students in proper marker placement. We went over the differences between Forward Kinematics (FK) and Inverse Kinematics (IK). Program in Character Animation faculty member John Yoon also joined us, and he and John Brennan experimented with and discussed various aspects of working with rigged characters, using as examples characters that John Yoon had created for students in his Maya rigging course. After those explorations we moved on to the planned demonstration of temporarily disabling the native position and orientation tracking of the Oculus Rift so that it could be replaced with tracking based on mounting an OptiTrack rigid body marker array on the Rift. We hit a snag with the Alienware VR laptop throwing an error message and had to postpone that demonstration until another week so that the laptop could be reformatted and the software re-installed.
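To make the FK/IK distinction concrete, here is a small sketch of a two-segment limb in a plane, written in plain Python and not tied to any of our mocap tools. Forward kinematics turns joint angles into an end-effector position; inverse kinematics works backwards from a desired position to joint angles, which is essentially what an IK rig solves every frame as you drag an effector. The segment lengths and the choice of the "elbow-down" solution are illustrative assumptions.

import math

UPPER, LOWER = 0.45, 0.40  # assumed segment lengths in metres (e.g. upper arm, forearm)

def forward_kinematics(theta1, theta2):
    # Joint angles in radians (shoulder, elbow) -> end-effector position.
    x = UPPER * math.cos(theta1) + LOWER * math.cos(theta1 + theta2)
    y = UPPER * math.sin(theta1) + LOWER * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    # End-effector position -> one valid (shoulder, elbow) pair, the "elbow-down" branch.
    c2 = (x * x + y * y - UPPER ** 2 - LOWER ** 2) / (2 * UPPER * LOWER)
    c2 = max(-1.0, min(1.0, c2))                # clamp against round-off or unreachable targets
    theta2 = math.acos(c2)                      # elbow angle
    k1 = UPPER + LOWER * math.cos(theta2)
    k2 = LOWER * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)  # shoulder angle
    return theta1, theta2

# Round trip: pose the limb with FK, then recover the same angles with IK.
target = forward_kinematics(math.radians(30), math.radians(45))
print(target, [round(math.degrees(a), 1) for a in inverse_kinematics(*target)])

A full-body IK solver, like the ones inside MotionBuilder or behind an auto-rigged Mixamo character, performs a higher-dimensional version of the same inversion, with extra constraints to choose among the many possible poses.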


John Yoon recording Bleu Cremers testing an Inverse Kinematic rig for John Brennan

.

March 18
Seventh class meeting. The first part of the class was dedicated to presenting and discussing material from the Drawing in Space and Chronophotography class pages. We looked at a wide range of approaches to capturing and displaying motion over time. After the break we split into two groups. One group remained in F105 for a direct experience of actively drawing in immersive, room-scale three-dimensional space using the HTC Vive and Tilt Brush. The other group moved down to E58 to see how quickly they could suit up a student and run a capture session with the OptiTrack hardware and Motive software.

“Google has been working closely with more than 60 artists to help them explore their style in virtual reality as part of the Tilt Brush Artist in Residence program (AiR). Coming from a wide range of disciplines, these graffiti artists, painters, illustrators, graphic designers, dancers, concept artists, creative technologists and cartoonists have all brought their passion and talent to create some amazing art with Tilt Brush.”

.

March 25
Eighth class meeting.

I will be away in France conducting a visiting artist workshop at the ENSAV of the Université Toulouse-Jean Jaurès. John Brennan will return to conduct a hands-on workshop with OptiTrack and Motive. He will cover a range of possibilities including integrating Motive with Unity and MotionBuilder. John has served as our mocap consultant for many years, and in 2013 he took over the class in my absence while I was away conducting a workshop at what was then the ESAV of the Université de Toulouse-Le Mirail. John demonstrated a way to perform multiple characters live with the PhaseSpace motion capture system and Autodesk MotionBuilder, which inspired Rachel Ho to work with Julian Petschek and Rob Gordon to workshop and produce the innovative live performance Mo Cap Mo Problems in the space of only two weeks. Subsequently John worked with Rachel, Julian, et al. on the piece SLEIGHTING, which was initially staged at CalArts and then later at the Bootleg Theater in Los Angeles as part of the 2014 LAX festival.

Mo Cap Mo Problems is a multimedia performance utilizing real-time motion capture with a playful irreverence that straddles the gap between high tech and low tech. It is self-referential in its engagement with the technology and tools of the trade, while involving improvisation and audience participation more suited to a club-type setting. The 15-minute show consists of a few musical segments including characters such as a trigger-happy sci-fi warrior, Aragor, and a “magic trick” involving beer. Issues of identity and the relationship of technology to pop culture are implicated through role-play involving clip art characters native to Autodesk’s software for motion capture.

This video features excerpts from five shows performed April 25th, 2013 at CalArts Black & White Studio Gallery A404.

.

April 1
Spring Break

.

April 8
Ninth class meeting. John Brennan will continue working with students on capture sessions for the projects that they are preparing. If it can be arranged, he will work with a team to cover best practices in markerless facial performance capture and conduct facial mocap sessions for use in student projects. Two years ago Narae Kim presented the latest build of her collaborative VR project Pippa's Pan. Students had a chance to don the HTC Vive and use the handheld controllers to experience the interactive, immersive VR story of two lovers reliving memories across the gap of encroaching Alzheimer's Disease. She and her partners have since created a start-up company dedicated to using VR to provide comfort for bedridden hospital patients.

.

April 15
Tenth class meeting. Two years ago we had a presentation on Chronophotography, beginning with its history as an initial form of motion capture. This was followed by a review presentation of Drawing in Space, and one-on-one sessions providing a direct experience of the latest version of my long-standing VR project, Anaphorium.

This year we will probably head down to E58 to run capture sessions for projects planned for inclusion in the end of semester exhibition or the CalArts Expo.

.

April 22
Eleventh class meeting. Three years ago we had a presentation and demo by a team from Faceware Technologies. Academic Program Manager Max Murray provided an overview of the company's history in facial performance capture while the other members of the team set up for a demo of their real-time head-mounted facial mocap system. Since Brian Lecaroz had done some tests with Faceware's free demo software (and had been planning on using it for his installation in the course exhibition until there was a license collapse that could not be rectified in time for Thursday's show), Brian was selected as the person to wear the HD Pro Headcam for the initial demo session (and perhaps come away with some files that he could use to continue work on his AI interview project). The team also brought along their lower-cost GoPro-based solution. This year we will take a look at the history of facial motion capture and a range of approaches that have been taken. One of the best markerless systems was Faceshift which, as noted in the February 4 entry, was purchased by Apple in 2015 and withdrawn from the market, unfortunately depriving artists of a great tool.

Last week several students expressed an interest in learning more about virtual camera systems and the simultaneous use of the Oculus Rift and the Unreal game engine (UE4). This week John Brennan set up a virtual camera test using the Unreal Virtual Camera plugin. This virtual camera relies on a separate UE4 plugin called Remote Session running on a PC and the associated Unreal Remote 2 app running on an ARKit-equipped iPad, making the iPad the remote viewer/controller for the UE4 cine camera. John also did a demo using Oculus Quill to roughly sketch out a set of strokes forming a schematized biped character in a T-pose, then exported that to Mixamo to automatically rig the character, added a stock mocap animation to it, and then imported the character into Unreal to view it with the virtual camera plugin. Chris experimented with creating his first Quill character; however, there were problems with the way the character was interpreted by Mixamo (possibly due to the leg spiral structure strokes being unconnected to the leg strokes) and so the auto-rigging process failed. We will attempt this again next week and go deeper into the setup and operation of the Unreal Virtual Camera.

.

April 29
Twelfth class meeting. This year John Brennan will continue covering the setup and use of the Unreal Virtual Camera, and will also conduct a workshop on using the OptiTrack system with the Oculus Rift and the Unreal game engine in order to provide two different options inside Unreal. The first option is to have the user wearing the Rift tracked with the OptiTrack system so that they appear embodied. The second option is to have the user wearing the Rift see and interact with a visual embodiment of a second person being tracked via the OptiTrack system.

Last year Naomi Cornman and Darby Johnston from Oculus demonstrated the use of their innovative VR sculpting program Medium, providing students with direct hands-on experience via a set of one-on-one sessions. Oculus is supplying us with an Oculus Touch workstation under their NextGen program so that students and faculty will be able to explore the potential of Medium more deeply and have the option of using it for projects. Following is a list of links that Naomi sent us.

Monthly Artist Spotlight series:

Goro Fujita of Oculus Story Studio’s live stream.

Landis Fields of ILMxLab’s live stream.

Krystal Sae Eua of The Mill’s live stream.

Brandon Gillam of Carbon Games’ live stream.

Gio Nakpil of the Oculus REX team.

Steve Lord, amazing traditional and ZBrush sculptor. (This is the only live stream we have with the updated UI and features of the current build.)

Two Facebook groups for viewing and sharing work:

Virtual Sculpting

Oculus Medium Artists

ZBrush educator and animator Glen Southern's tutorials.

.


Goro Fujita live stream sculpting session with an earlier version of Medium back in September 2016.

.

May 6
Thirteenth and final class meeting. Last year, students continued work on projects for our participation in CalArts Expo 2018 with a show entitled Ampersand: Experiments in Kinesthesia, held in the B&W Gallery A404 on Thursday, May 3. This year's class opted not to have a group end-of-semester show; however, Daniel Strausman proposed presenting a solo mocap performance as a mixed-reality extension of an existing Macbeth piece that he had staged earlier in the year. Daniel has been consulting with us on his proposal for the MoCap MacBe piece, which is scheduled to be performed in E58 on May 9th. Daniel will suit up in class again this week and we will work on running mocap sessions for his project. We intend to demonstrate the effective use of the Unreal Virtual Camera to record shot information that can be utilized in post-production of some of the pre-recorded and pre-rendered performance-captured characters in MoCap MacBe. Time permitting, John Brennan will also continue going over using the OptiTrack system with the Oculus Rift and the Unreal game engine in order to provide two different options inside Unreal.

May 13
The School of Film/Video Bijou Festival.

.