Is AI video about to creep out of the uncanny valley?
Published August 10, 2023

AI video leader Runway has added Image to Video as the latest feature in its Gen-2 suite of tools. The feature creates a moving image directly from a still, with no extra prompting. While text-to-video generation to date (whether in Gen-2 or from a competitor) has generally produced results too creepy and crude for professional use, does this new toolset hint at producing synthesized images you could actually use in post-production?

In this short article I’ll explore this question and describe how you can try this feature out for yourself in under 5 minutes, for free. At the end I share some of the more popular examples that have appeared since Image to Video was introduced late last month.

The state of synthesized video in mid-2023

Remember AI in November 2022? Me neither. It’s hard to recall how recently there was no ChatGPT generating professional-sounding text and no Midjourney v4 fooling you into thinking you were looking at the work of professional artists and designers (unless the image contained hands with a countable number of fingers).

But now I feel our collective anxiety. AI in the 2020s looks to rival creative professionals who work with words and images. We feel the necessity to understand these tools while Hollywood wages a war against the basic rights of those who have made its existence possible.

Admittedly it’s cold comfort that early results with text-to-video have ranged from deeply creepy to laughable (and often, hilarious and terrifying at the same time). And this is where a recent breakthrough from Runway in its Gen-2 product may be worth your attention.

Runway Gen-2

Runway is among the most prominent startups to go headlong into fully synthetic video and effects. Gen-2 is the name of the current version of its “AI Magic Tools” (as branded on the site). The word “magic” aptly describes the act of creating something from nothing, and Gen-2 initially targeted text-to-video: type a description, get a movie, with no intervening steps. Magic.

However, if you scan the examples promoted on the Runway site, or at competitor Pika Labs, you may not find much that you could use even as b-roll in a professional edit. The style tends to be illustrative and the results somewhat unstable in terms of detail and consistency. This “jittery and ominous… shape-shifting” quality was used by Method Studios to create the opening credit sequence of Secret Invasion. The Marvel producers wanted a foreboding, “otherworldly and alien look.” While Method claims “no artists’ jobs were replaced by incorporating these new tools,” they have not revealed how the sequence was realized.

So perhaps that was the first and last Hollywood feature to use 2023-era text to video as a deliberate style—time will tell. Meanwhile, solving for the creep factor has been a major focus for Runway, and that’s where this major new breakthrough comes in.

If you’ve ever wished that a still photo you captured or found were moving footage (and who hasn’t?), or if you want to play with the pure-AI pipeline of generating an image and then animating it, here are the steps.

Try image-to-video yourself in 5 minutes or less

Runway operates as a paid web app with monthly plans, but you can try it for free. The trial gives you enough credits to generate a number of short video clips; credits correspond to seconds of output. Free clips carry a watermark logo in the lower right, which is removed on a paid plan.

To get started, you need an image. Any still image will do, but one in which some basic motion is already implied is a good choice. I went with two simple nature scenes, one with a neutral color range and more detail, the other already more stylized.

The stock image of LA palm trees in the setting sun was licensed from Shutterstock. The still of desert plant life is from a winter trip to Joshua Tree.

Visit runwayml.com and click a link for Image to Video or Gen-2; if you’re new, you’ll be asked to sign up before reaching the Dashboard. Once there, click on Image to Video and you’re presented with a simple UI that looks like this:

Although the UI includes information on prompting and a text tab, Image to Video currently only works correctly with no added description.

The most important thing to understand is that, at this stage of Gen-2 development (which can and will change at any time), you can only provide an image. Any text you add actually spoils the effect we want, which is direct use of the original image (instead of using it as a stylistic suggestion). You can choose a length, with the default set at 4 seconds and the maximum currently 16 seconds. Other than that, it’s currently what you might call a black box: your subject will remain at the center of frame, and it and the scene around it will evolve however the AI sees fit. The results are single takes.

There is way more happening with color shifts than would be ideal on the palm trees, but that’s 100% the decision of the software.

The more limited palette and subject of the cactus has a subtler result. The portrait format of the original image was preserved.

The main point here is that the original scene has been retained and put into motion in a way that wouldn’t be possible without an AI. What makes this a potential killer app is the situation where you have a still image reference and wish you had a video version of it, because that’s exactly what this is designed to do.

So now let’s look at “imagined” images of human figures.

For these examples I went with a historical figure and an image that was created in Leonardo.ai (a free alternative to Midjourney) with the prompt “model with colorful flowing hair, octane render.” I did not specify gender.

Here’s what Runway returned:

This synthesized image produced with leonardo.ai creates a compelling moving image, even if the character of the face seems to shift slightly. Keep in mind that although the source image was generated with a text prompt, no text-to-video option (including Runway’s own Text to Video) would achieve this level of fidelity or continuity.

This example is included as a reminder that the toolset is not immune to classic mistakes of anatomy and context.

Next we’ll look at some of the most polished (iterated) examples that have been circulating since this feature debuted weeks ago.

Image to Video

If you’re interested in exploring the best I was able to find of what has been produced within the first few weeks of Image to Video being available, here is a collection of posts (formerly known as “tweets”) curated from x.com. As you study these, I encourage you to notice moments where continuity is maintained well (and where, for example, the expression on a face changes a little too much to maintain character).

So, if you’re reading all the way to the end and weren’t already aware of the breakthrough that Runway Gen-2 has presented with Image to Video, this is a great opportunity to evaluate it for yourself. And if you’re reading this later than the summer of 2023, my guess is that many of these limitations will have been addressed. How long will still images remain more effective than text as a starting point for AI-generated video? That remains to be seen. When will you use these tools to steal a shot you can’t find any other way? Well, that’s completely up to you—but the possibility is there.

Silhouette Paint for Adobe: a powerful roto/paint/tracking toolset for editors and vfx
Published August 25, 2020

Silhouette Paint from BorisFX goes far beyond the roto/paint/tracking tools found in Adobe After Effects (and not found in Premiere Pro at all). It can stabilize detail in place so it can be removed, automatically transform and deform custom paint strokes over time, and it allows you to get past the standard giveaways of paint work.

The Silhouette Paint UI is more like other BorisFX tools (such as Mocha) than Adobe tools. There is a learning curve, but excellent resources are available for free on the BorisFX site.

You can clone not only by averaging multiple sources (useful when dealing with graduated lighting) but also clone different characteristics of a given source (just the texture of the skin, say, or facial details). Better yet, if you’re at all skilled as a painter, you may have the greatest success building up your own replacement strokes with color and letting the tool swap in detail, texture and grain.
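To be clear, BorisFX hasn’t published how these brushes work internally. But the core idea of a multi-source clone is easy to picture; here is a minimal numpy sketch of the averaging concept, with all names and weights my own invention rather than anything from Silhouette:

```python
import numpy as np

def multi_source_clone(patches, weights):
    """Blend several aligned source patches into a single clone source.

    patches: list of arrays of identical shape (H, W, 3), each already
             aligned to the destination region.
    weights: one weight per patch, normalized here to sum to 1.
    Averaging sources evens out graduated lighting that would make a
    single-source clone stamp obvious.
    """
    w = np.asarray(weights, dtype=np.float32)
    w = w / w.sum()
    stack = np.stack(patches).astype(np.float32)  # shape (N, H, W, 3)
    return np.tensordot(w, stack, axes=1)         # weighted average

# Hypothetical usage: sample the wall on either side of a light falloff
# and favor the nearer sample.
# result = multi_source_clone([patch_a, patch_b], [0.7, 0.3])
```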

I worked with the Silhouette Paint plug-in for Adobe, which allows you to work in After Effects or Premiere Pro. Like many plug-ins of this magnitude (and like BorisFX Mocha), the plug-in operates as a stand-alone application within the Adobe environment on Mac or Windows. It is available on a monthly or annual subscription basis, at a cost (as of this posting) of $25/month or $195/year.

Why paint?

In my own training for VFX at School of Motion and LinkedIn, I refer to paint as generally a tool of last resort. In cases where tracking and roto can be used to remove, replace or augment an area of a shot, they are much more straightforward (which is not to say easy) to execute.

But there are cases where only paint will do. The key word to understand these situations is “organic.” If you are removing detail from an irregular, organic area—faces and skin being number one, but also skies, landscapes, gradient-lit interiors—tracking in a patch tends to fail. Likewise, if you are the type of artist who likes the organic feel of a pen on a tablet more than adjusting masks point by point, that is a factor in the decision of the right tool for the job.

And what if you don’t consider yourself an artist at all? Although there are other After Effects nerds like myself who read reviews on this site, I’m aware that it is editors who may get the biggest leg up by being able to quickly edit the contents of a shot, as follows.

Suppose you have a one-day turnaround for an edit of a show, and are able to complete the basic cuts in a few hours, leaving you an hour or two for improvements to shots that don’t involve a full VFX workflow. This could be beauty work, removing an object from the scene or adding one, even changing some fundamental detail. This is where Silhouette Paint could make the biggest difference, and heck, could help you raise your rate.

Having the Mocha tracker built in to Silhouette Paint means you can isolate a moving surface in place, paint across all frames as if it were a still image, and restore motion.

Getting Started

The main hurdle you need to overcome to get up and running with Silhouette Paint is its sophisticated, unfamiliar modal UI. This is not a tool to learn by noodling. The learning process won’t take that much time—I was able to get the hang of it in an hour or two; if the process is less familiar to you, it would take a day at most.

But this is not a case where you can simply dive in and learn as you go. Many of the key tools are modal—they don’t all appear in the same place in the UI when you open the app, but only in specific contexts. The two main modes are Roto and Paint, and most of the time you are likely to use them together. Within Roto is a Tracker that operates in three distinct modes: point tracking, planar tracking, and Mocha tracking (the most powerful but also the most unfamiliar when getting started).

The icons in the various toolbars (there are several) won’t be instantly familiar. You can get their labels from tool tips, but you’ll go a lot further starting with a tutorial that shows you how to do what you want to do. I’m not providing a tutorial of my own with this review, but for one common paint operation—tattoo removal—there is an excellent 10-minute tutorial from Mary Poplin at BorisFX that shows you the process, step by step, start to finish.

Tattoo removal is a classic situation where paint is the best option, because the surface is natural and dynamic. The full tutorial can be found here.

Most useful features

Silhouette Paint has a few killer features built in, depending on your usage. One thing you’ll never want to give up once you start using it is the ability to stabilize an area of the shot in place, work on it, then reverse the stabilization. This allows you to isolate and work on a detail as if it came from a locked-off shot. It’s a big pain to do with native After Effects tools, and can’t be done at all with native Premiere Pro. With a detail that you want to remove or change in a static position, you just work on it as you would a still frame.
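Silhouette handles this through its integrated trackers, but the underlying trick is general, and sketching it clarifies what the feature saves you from building yourself. Purely as an illustration (this is not BorisFX code, and the function names are mine), here is the stabilize/paint/restore loop in OpenCV terms:

```python
import cv2
import numpy as np

def stabilize_paint_restore(frames, points_per_frame, paint_fn):
    """Stabilize a tracked region, paint it as a still, restore motion.

    frames: list of H x W x 3 uint8 images.
    points_per_frame: list of (N, 2) float32 tracked point arrays;
                      frame 0 is used as the reference.
    paint_fn: callback that paints on a stabilized frame.
    """
    ref_pts = points_per_frame[0]
    h, w = frames[0].shape[:2]
    out = []
    for frame, pts in zip(frames, points_per_frame):
        # Transform that maps this frame's points onto the reference.
        m, _ = cv2.estimateAffinePartial2D(pts, ref_pts)
        stab = cv2.warpAffine(frame, m, (w, h))
        painted = paint_fn(stab)              # work as if locked off
        m_inv = cv2.invertAffineTransform(m)  # reverse the stabilization
        out.append(cv2.warpAffine(painted, m_inv, (w, h)))
    return out
```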

The other tool that is essential to making this process work is Auto-Paint. It allows you to perfect a single frame only, and let the application matchmove your strokes for you. This might seem like not such a big deal—you already tracked it, right?—but as I mention above, simply tracking in a single-frame patch tends to fail in situations with organic detail and lighting. Auto-Paint tracks changes to shading and proportion over time.

Along with that, there are brushes that go beyond what the clone brush in After Effects can do. The Detail Brush effectively separates color and detail, so you can get one of them right without worrying about the other. In other words, you can get the shading of skin correct with a smear of color, and then restore the detail of the face.
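BorisFX doesn’t document the math behind the Detail Brush, but what’s described is classic frequency separation, and a short sketch makes the concept concrete: low spatial frequencies carry color and shading, high frequencies carry texture, and you can fix one without touching the other. The names and blur size here are illustrative assumptions:

```python
import cv2
import numpy as np

def split_color_and_detail(img, blur_size=21):
    """Separate an image into low-frequency color and high-frequency detail.

    blur_size controls where "color" ends and "detail" begins;
    it must be odd for GaussianBlur.
    """
    img = img.astype(np.float32)
    color = cv2.GaussianBlur(img, (blur_size, blur_size), 0)
    detail = img - color  # texture: pores, hairs, grain
    return color, detail

def recombine(color, detail):
    """Reassemble after correcting either half independently."""
    return np.clip(color + detail, 0, 255).astype(np.uint8)

# Hypothetical use: fix skin shading with a smear of color, keep the texture.
# color, detail = split_color_and_detail(face_patch)
# result = recombine(corrected_color, detail)
```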

Likewise, the Blemish Brush performs blur and grain in a single tool. For beauty work, this means you can knock out any unwanted detail without having to come up with a replacement, let alone one that holds through the entire shot.

Or, if you do paint or clone to replace an entire area of a shot with a static set of strokes, the Grain Brush will restore matched grain. This is one more essential that you don’t want to have to actually think about very hard, and the tool set anticipates a solution in ways that native After Effects does not (even though it does offer ways to add and match grain).

Cloning can be done from multiple sources. For a great example of why you might want to do this, check out this segment of a training video from Steve Wright in which he removes an outlet from a softly-lit wall. Technically, he is working in Nuke rather than After Effects, but other than how the plug-in handles input differently, the workflow is identical to what you would experience in Adobe applications.

Image detail and color are held separately by the clone and brush tools. You can select what detail remains or is removed independent of replacing the missing area with paint.

The bottom line

Although Silhouette Paint does stand out clearly from the other options on the Adobe platform, there are other ways to achieve some of what it can do.

Earlier this year I reviewed Lockdown, a completely different approach that makes roto and tracking more organic by deforming to match natural surfaces. This tool does essentially one very jaw-dropping thing—replace a moving, deforming surface with the image (still or moving) of your choice. There is no paint involved whatsoever (although you could use it with, say, the clone brush).

There’s also Mocha Pro (or the Mocha AE version that ships with After Effects). It provides the tracking but looks at the whole world as flat planes, whether they are or not. Anything you could solve by adding a layer like a patch, Mocha will likely help. But not every situation lends itself to that approach.

Each of those tools likewise necessitates learning an unfamiliar UI. Again, to reassure you, we’re talking about the type of learning that happens in a half-day, not the months you would spend learning a full new graphics or editing environment.

That leaves you with what’s built into the stock Adobe tools and any other plug-ins you may own. If you’ve ever thought you could get a better result, or a better rate, by fixing more about a shot than you can with the tools you have, Silhouette Paint is worth a look, along with the tutorials I mention above and more that can be found at borisfx.com. Download the demo version and within a matter of hours, aided by this video training, you can decide for yourself if this is a tool for you.

Lockdown for After Effects is amazing, and free during the shutdown
Published May 15, 2020

It’s rare these days to come across a tool that works in an everyday video pipeline and yet sets a new precedent for what is possible. Lockdown, an After Effects plugin available via aescripts.com, is just such a tool. What’s more, during the shutdown (currently until the end of May) the developer is licensing it free for full use.

Because it is both unfamiliar and has a somewhat complex-looking UI, I thought it would be helpful to show you how to get the best result with the fewest possible steps from this toolset. A more in-depth 30-minute tutorial from developer Chris Vranos is available online, and there is a step-by-step guide on the product page.

But what does it do? Lockdown is tracking software that specifically works with surface deformation: moving clothing, body parts, faces. You can add a graphic to the clothing of a moving subject. Place tattoos on dancing talent. Even replace a face (note this demo uses the earlier Lockdown workflow, which has changed fundamentally in 6 months).

A full review of this effect pipeline would ask: a) does it work? b) how hard is it to use? and c) is it worth the money? In an effort to let you decide for yourself, I present a simple use case with the steps needed to achieve it in this 8-minute video. As for pricing, if you put it to use now on a free license, you can decide for yourself if it’s worth $250.

Lockdown would seem to be the most game-changing After Effects plug-in introduced in the past year, or more. This is especially remarkable because the same developer released Composite Brush, a tool which for many has solved color keying, less than a year earlier.

I highly recommend you take an hour or two to install this plug-in while it’s available free, shoot a little footage using available talent, and freely augment the result. You may find yourself, at some point in the future, shooting with a specific plan to finish shots in Lockdown.

Explore the new features of After Effects in 33 minutes or less
Published October 15, 2018

Adobe today introduced the 2019 version of Creative Cloud, which includes a number of updates to After Effects. And for the next two weeks, you can access a 30-minute rundown of what’s new in After Effects. To start watching, go to these links at LinkedIn Learning or at Lynda.com.

Here are the highlights:

  • Effects>3D Channel just became a whole lot more useful now that you can apply depth effects directly to 3D created in After Effects
  • Advanced Puppet Pins have been further upgraded to allow more precise articulation of character animation, along with a clearer workflow
  • Mocha AE has received a major upgrade to version 6, and now runs as an effect, making it much better integrated with After Effects
  • There is a new, more modern JavaScript engine for expressions
  • Responsive Design – Time is a feature set brought over from Premiere Pro that lets you more easily create reusable comps that can be stretched or compacted in time while preserving the precise tempo of specific sections
  • The Effect Controls panel is now easier to organize, and there is more you can do with text templates for Premiere Pro, including importing a spreadsheet with multiple entries for lower thirds or other graphics and iterating it using expressions

There are also a bunch more workflow and UI changes condensed into a single lesson. A separate lesson covers feature and performance upgrades.

Rather than rely on marketing videos, you can watch these for a more critical approach to what matters. So if you’re curious what’s new in the latest version of After Effects, just follow the links.

After Effects NAB 2018 Update: Master Properties and Advanced Puppet
Published April 3, 2018

It’s that time of year when Adobe presents its latest software additions, with video first in line. For those curious, here’s a rundown of what you can find in the spring update of After Effects 2018.

Master Properties and Essential Graphics

The addition of the Essential Graphics panel, along with .mogrt file export, made some After Effects users happy. Others reacted less positively, since Premiere Pro benefitted more than After Effects from this feature. The panel only supported one-dimensional properties (not even Position and Scale). After Effects itself couldn’t make use of a .mogrt file.

Both limitations no longer exist. Although 3D properties are still off limits, all of the 2D transforms are now fully operational in the panel. You can open and save a .mogrt as an After Effects project. However, that’s just the tip of the iceberg here; the panel has now spawned a new feature in After Effects called Master Properties. Its purpose is to reduce the need to pre-comp, which it does in a novel way.

Perhaps the most-requested feature in After Effects is for nested comps to be “twirlable.” Instead of having to dig into a separate tab to adjust and re-animate properties in a nested comp (or sub-comp), you would have direct access to all of those layers and their settings in one place, as with the Layers panel in Photoshop.

Master Properties delivers that holy grail of features—twirlable properties in nested comps… to the extent that you add them, anyhow.

What the new feature set actually does is allow twirl-down access to any properties that are added to Essential Graphics in the sub-comp. Even the keyframes are visible and adjustable without ever having to open up the nested composition.

Animation and iteration with Master Properties

And animation and keyframes are where it gets interesting. If you edit these properties, you then have the option to keep them unique in that one layer, to overwrite them in the nested comp, or to overwrite them with anything you edit in that nested composition.

This means that you can have multiple iterations of a single nested comp without making duplicates. Any changes you make can change all, some or none of those iterations.

The Timeline UI includes new buttons which become activated in situations where properties and keyframes can be pushed or pulled.

So why not just drag all the properties from the nested comp into Master Properties? For better or worse, there are many reasons not to do this. The main reason is that properties become orphaned from the context in which they belong. You might have a property called “Scale” that doesn’t, in fact, adjust the scale of the layer but that of a shape layer, or a particle in an effect, or what have you. You can use custom labeling to overcome this, but an ordinary nested composition with a few layers and effects might have hundreds of properties.

It will be interesting to see how workflows are transformed by Master Properties. They are most applicable in cases where the alternative would be to create multiple versions of a single nested comp. Think of situations where you’re creating a theme and variations using settings and animation timing, and you’ve found a case where you may be putting Master Properties into play.

Advanced Puppet Tool

It’s a fair criticism that some core tools in After Effects make a splashy debut and their 1.0 version is never substantially iterated. The Puppet tool hadn’t changed since it was introduced over a decade ago, but now it has spawned an entirely new version that uses the same familiar controls.

The new set of features collectively makes up the Advanced Puppet Tool. This is a mode distinct from the “Legacy” version, which remains available (via a menu control in the effect). Although the basic functionality remains the same, the new version does a few things better than the old one ever could.

One improvement of the Puppet tool is that the quality is simply better. The mesh created is smarter about clustering deformation triangles near the pin points.

Adding more pins adds more detail, and you also have an overall control for the amount of refinement in the mesh. You can also create a cloth-like ripple effect by adjusting the Mesh Rotation Refinement setting near or at zero. This forces the layer to deform via clean straight lines instead of rotating and crumpling. It’s not exactly a great way to create, say, a flag animation, but it’s certainly useful for precision.

My guess is that there aren’t many cases where you’ll want to create a new puppet animation using the Legacy tool. The new version looks better and gives much better control over the look of layer deformations.

Pickwhip it, display it

What was Parent is now Parent & Link.

One thing you’ll notice right away in the Timeline is a whole column of pickwhip icons. This new feature is as simple as it could be, and caters to those who avoid using expressions in any fashion. It allows you to link the property on which the pickwhip resides to any other property in one click and drag. There is no need to even set an expression for one to appear.

Even if you’ve never used expressions in After Effects before, you know the pickwhip from the controls in the Parent column of the Timeline. Twirl down and you’ll see a pickwhip by every property that can be controlled with an expression.

One great thing for everyone about this new addition is that it attempts to resolve mismatches. Say you link a one-dimensional property like Rotation to one with two dimensions, like Scale. The pickwhip will populate both X and Y, which you would otherwise have to set up yourself.

Drag and drop visual values

Another time-saver is just as simple. Drag a property to the Composition window and After Effects creates a text layer that displays the value of that property, frame-by-frame. You can see the value of any property in real-time as you work.

The text layer appears directly above the layer that created it. By default it’s a Guide Layer so that it doesn’t show up in your final output. Like the pickwhip, this simplifies something you could always do with multiple steps down to one step.

Head-mounted VR display previews

Who left these goggles in my Comp viewer?

A small but huge new feature is support for head-mounted displays for virtual reality. Once you set up Video Preview Preferences for use with your headset, you can choose to preview real-time in monoscopic or stereoscopic view.

This means that instead of pretending you’re looking through a headset by converting a comp to a viewer, you can simply view it immersively, turning your head around to inspect the full 360 view. It’s a huge timesaver if you’re doing this work.

And as you inspect in spherical view, you’ll notice a huge improvement to the Plane to Sphere effect. Compared with the previous iteration, this effect is now much better able to handle fine detail.

More new features

There are a bunch of workflow improvements included here. For the full rundown of these, along with the additions detailed above, check out After Effects 2018 New Features at LinkedIn Learning (or Lynda.com). As I write this, the course hasn’t been updated; if you don’t see the new features yet, check back later today. If you need access, click Try Premium for free in the upper right corner of your LinkedIn homepage.

How about you, what do you make of these new additions? What’s most missing from After Effects in 2018? What’s next?

RenderGarden accelerates After Effects renders
Published January 16, 2018

Stop me if you’ve heard this one before: After Effects renders are slow. Complex shots progress sluggishly, so you open up Activity Monitor or Task Manager (ya nerd) and stare at CPU levels flying at half-mast. It makes the days longer and the deadlines tighter, and you just know there has to be another option.

RenderGarden, from Mekajiki, may now be the most direct way for small studios to peg those processors and get that fan blasting at render time. Its capabilities are closer to an actual render farm than the popular BG Renderer Pro. Unlike a render farm, this is software you can quickly install and run yourself, locally or on a network.

It’s a garden, not a farm, get it? A “farm” needs staff, dedicated equipment, significant budget and can even require ongoing maintenance. For a “garden” you just need some seeds and a place to plant them where they’ll grow. RenderGarden costs $99 and installs a script and a set of modest utility applications. Actual gardens with plants and soil can cost more than this.

How it works

If you’ve ever used either network rendering software such as Pixar’s Tractor, or even the aforementioned BG Renderer, you’ve made use of aerender. This is Adobe’s command-line executable that sits next to the After Effects application on your system. It runs in a text shell using custom commands that manage a headless version of the Render Queue.

RenderGarden also relies on aerender, with a couple of key extra ingredients, beginning with the RenderGarden script panel. With an item in the Render Queue, open the panel, click “Plant the Seeds!” and the render begins. A “Seed” is a portion of the Render Queue item; the default value of 3 seeds divides the item into three separate sections, each run as an individual process. If you set this to 1, you replicate what BG Renderer does—run a single background render process for an entire queued item.
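Mekajiki hasn’t published RenderGarden’s internals, but the Seed concept maps directly onto flags that aerender really does accept (-project, -comp, -s and -e for start and end frames, -output). Here is a minimal Python sketch of the splitting step; the paths, comp name and output pattern are placeholders of my own:

```python
import subprocess

# Adjust to your installed version; aerender ships alongside After Effects.
AERENDER = "/Applications/Adobe After Effects CC 2018/aerender"

def plant_seeds(project, comp, start, end, seeds=3, out="/tmp/seeds/seg"):
    """Divide one Render Queue item into `seeds` separate aerender segments."""
    total = end - start + 1
    size = total // seeds
    cmds = []
    for i in range(seeds):
        s = start + i * size
        e = end if i == seeds - 1 else s + size - 1
        cmds.append([
            AERENDER,
            "-project", project,
            "-comp", comp,
            "-s", str(s),        # this seed's start frame
            "-e", str(e),        # this seed's end frame
            "-output", f"{out}_{i}_[#####].tif",
        ])
    return cmds

# Each command is an independent process; launching several at once
# is what pegs the processors:
# procs = [subprocess.Popen(c) for c in plant_seeds("proj.aep", "Main", 0, 299)]
```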

You can simply click Plant the Seeds! to kick off a RenderGarden background render, or you can first specify how many Seeds (sections) you want the render to be divided into.

At this point (since you’ve set up two free third-party utilities, Python and FFmpeg, as instructed) a second window pops up allowing you to specify a number of Gardeners (CPU processes) to launch. This defaults to 2, which is a good place to start and evaluate whether more CPU, memory and disk I/O is available. A set of green-colored Terminal windows (get it?) open and begin to display frame-by-frame render progress.

Best practices

On a single system, the number of Gardeners (processes) should be 50-75% of the processor cores on your system. “Virtual” cores don’t count. On, say, a laptop with 8 virtual cores but 4 physical processor cores, you might choose 3 Seeds and 3 Gardeners.
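If you want to automate that rule of thumb, it is a one-liner on top of psutil, which can count physical cores directly (my sketch, not part of RenderGarden):

```python
import psutil

def suggested_gardeners(fraction=0.75):
    """Apply the 50-75% guideline to physical cores; virtual cores don't count."""
    physical = psutil.cpu_count(logical=False) or 1
    return max(1, int(physical * fraction))

# A laptop with 8 virtual but 4 physical cores yields 3, matching the example above.
```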

Once each process completes, FFmpeg stitches the result into the output format of your choice. That way, you can choose any moving image format available in After Effects, not just image sequences. This runs as an additional terminal process, and the intermediate files remain cached in the Seed Bank. The Seed Bank can be located wherever you choose, and can be emptied out from time to time like a cache folder.
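The stitch step can be reproduced with stock FFmpeg; when the segments share identical codec settings, the concat demuxer joins them without re-encoding. This is a hedged sketch of the idea, not RenderGarden’s actual command line:

```python
import subprocess

def stitch_segments(segments, output, listfile="stitch_list.txt"):
    """Losslessly join same-codec movie segments via FFmpeg's concat demuxer."""
    with open(listfile, "w") as f:
        for seg in segments:
            f.write(f"file '{seg}'\n")
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0",
         "-i", listfile, "-c", "copy", output],
        check=True,
    )

# stitch_segments(["seg_0.mov", "seg_1.mov", "seg_2.mov"], "final.mov")
```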

As long as those shell windows that ran aerender stay open, they keep looking for new seeds to appear in the same directory.

One more thing

The initial intention for RenderGarden was simply to max out the processors on a single system. But leave those shell windows running, plant Seeds on a network drive, and you have passive network rendering. An application called Gardener, included with the install, can be run anywhere, even with a render-only version of After Effects.

So if you have 2 or more systems around, you can choose a network directory (identical file path from all locations), point Gardener at it, and leave it running to pick up jobs as they occur. You can’t manage it from some central panel like on a real render farm; you need to fire it up (or kill it) on the individual system. But it will stay running open-endedly in the background until either something crashes or you close it down.


Not a farm

If your studio has idle machines, you can leave RenderGarden running on them and experience startling render times. Obviously if these were actual render farm machines, you would simply let the farm manage After Effects. But how many of us have a couple systems each, not to mention open desks even in a small studio?

Just as with a render farm, jobs can fail at any stage of the render process. You can kill an unwanted or stuck job using Kill Renders, which is also installed with RenderGarden. This halts each render in progress but leaves the terminal shells running to pick up more jobs. You can also use Gardener to re-queue a segment instead of restarting the whole job.

The system for identifying the status of render segments is one of the most elegant hacks in operation here. Placeholder files for each seed contain JSON data, including terminal commands, and the filenames begin with an alphanumeric status identifier. This can be “ready_”, “rendering_” or “complete_” depending on the segment’s current state, and changes as soon as that job is picked up or completes.
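That filename scheme doubles as a lock-free job queue, because a file rename is atomic on a local volume: whichever process renames the file first owns the job. Here is my reconstruction of the pattern from the description above; it is not Mekajiki’s code, and the field names are assumptions:

```python
import json
import os
import subprocess
import time

def watch_garden(seed_dir, poll=2.0):
    """Claim ready_ seeds by renaming them, render, then mark them complete_."""
    while True:
        for name in sorted(os.listdir(seed_dir)):
            if not name.startswith("ready_"):
                continue
            ready = os.path.join(seed_dir, name)
            rendering = os.path.join(seed_dir, "rendering_" + name[len("ready_"):])
            try:
                os.rename(ready, rendering)  # atomic claim; losers get an OSError
            except OSError:
                continue                     # another Gardener got there first
            with open(rendering) as f:
                job = json.load(f)           # placeholder file carries the command
            subprocess.run(job["command"], shell=True)
            os.rename(rendering, rendering.replace("rendering_", "complete_", 1))
        time.sleep(poll)                     # keep looking for new seeds
```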

RenderGarden makes use of its own directories to write temporary render projects and interim moving image files. The queue is tracked using alphanumeric file names that may begin “complete_” or “rendering_” or “ready_”.

Sometimes a hack can be your best friend

To call RenderGarden a hack, however, kinda misses the point (unless the term is meant to be complimentary). RenderGarden effectively and dramatically increases render power on modern systems and networks without the “drama” of extra system administration. Plus, all remote rendering systems rely on aerender, which is itself something of a hack.

Were it not for the current limitations of After Effects itself, however, there might be no need for the extra hack. Despite efforts to modernize it section by section, the 25-year-old application lacks anything like comprehensive multithreading. The last attempt at a multiprocessing feature within the application itself was removed because it reduced stability while failing overall to reduce render times.

RenderGarden isn’t a feature that Adobe itself would ever ship with After Effects. This toolset cleverly fills a need with a very low commitment of resources. It should appeal to anyone with ample motion graphics work and underutilized processing power. It can easily cut render times in half, or better. A one-week trial allows you to evaluate and justify the $99 investment for your own studio.

Mekajiki, developer of RenderGarden, is run by Matt Silverman, Brendan Bolles, and Brandon Smith, all longtime pals and colleagues of the author.

These are the latest features in After Effects CC 2017, available now
Published April 19, 2017

Today marks the first time Adobe has released updates to its core video and audio applications ahead of NAB, where the company will put After Effects and Premiere Pro front and center on the show floor. Why wait? Here’s an overview of everything new that’s available for immediate download to anyone with a Creative Cloud subscription.

For a detailed overview, you can look at my course After Effects CC 2017: New Features from LinkedIn Learning (otherwise known as Lynda.com), newly updated with everything that’s been added. This course features the examples depicted here in step-by-step detail, including this free preview of Essential Graphics.

Essential Graphics

The biggest addition to this new release is only usable in Premiere Pro; it provides the means to bring motion graphics design customization to the large community of editors unfamiliar with After Effects. With Essential Graphics, you build a motion graphics template with customizable controls for use in Premiere Pro.

The Essential Graphics template allows you to adjust text source and color of the 3D texture and light to match the sequence in Premiere Pro.

For those keeping track, the earlier 2017 release of After Effects featured Text Templates, which are now obsolete. Essential Graphics reduces what was a multiple-step process with limited uses to a drag-and-drop feature. A new Essential Graphics panel (and workspace) appears in both After Effects and Premiere Pro, and you can design a template in either application.

Only specific properties are essential

In After Effects, only specific properties are available to Essential Graphics in this initial release. You choose a composition to work with in the panel and click Solo Supported Properties, which reveals each property in the timeline that is one of the following:

  • Source Text
  • Color
  • any single-value numerical property (e.g. no 2D or 3D values)
  • Checkbox

Your design can include any features you like in the chosen composition; it’s just that only the types above can be adjusted in Premiere Pro. The exported .aegraphic file appears in the corresponding Essential Graphics panel (or Creative Cloud Library, your choice) in Premiere Pro. Although you can save the project that created it, you can’t currently open or edit an .aegraphic in After Effects.

Premiere Pro ships with dozens of preset Essential Graphics templates, ready to be customized.

What’s the point? In this 1.0 version, Essential Graphics lets you create an animated graphic (most likely an overlay such as a title or lower third) to hand off for use in a Premiere Pro edit. But since we all know After Effects artists to be hackers, the new panel also serves as a heads-up display for controls that are otherwise buried in the Timeline or Effect Controls (no need to export a template).

More new feature additions

The most sophisticated new tech appears in the Camera-Shake Deblur effect. It uses optical-flow technology to reduce motion blur resulting from an unstable camera and a slow shutter. It detects blurred features and replaces them using matching detail from adjacent frames. There are a few simple controls to refine the automated result.

This tool closes a limitation of Warp Stabilizer VFX. Any shot with instability jarring enough to cause a blurred frame isn’t fixed with stabilization alone. The shot, while smoother, can end up looking even worse for still containing randomly blurred detail, as you see in this before and after example:

A couple of features found in other Adobe applications now appear in After Effects. The Lumetri Scopes panel from Premiere Pro adds long-requested waveform and vectorscope displays to After Effects. As you might expect, you can customize the scopes for specific colorspaces. Activate the new Color workspace to put them to use.

The Lumetri Scopes panel from Premiere Pro now appears in After Effects.

Other Creative Cloud applications, including Premiere Pro, already support South Asian and Middle Eastern languages, including right-to-left text. After Effects adds a new Text preferences panel but will require more work for full integration; for example, per-character animation still operates left to right.

One more thing for serious After Effects nerds: hold the option key (Mac, alt on Windows) while clicking to display a snapshot in the composition viewer to see a comparison in Difference mode. Match two frames precisely and you see a black frame, or use this to map changes too subtle to analyze in the A/B comparison otherwise provided by snapshots.

Workflow improvements

Everyone who uses After Effects hates pre-comping at some point or other. In this latest release, when you apply a compound effect (one using a separate layer as an input), you no longer need to pre-comp to make use of any effects and/or masks applied to it. Effect Input Layer Options add a pull-down menu to effects like Compound Blur and Displacement Map. Moreover, you can even use it with the Set Matte effect, so that (unlike a track matte) you have a choice whether an effect or mask is included in a selection.

Here’s one nobody foresaw: you can now rename and/or relocate the Solids folder. This is a UI element that has occupied the same place in the Project panel pretty much… forever. Additionally, timeline markers can now include not only duration but even custom color.

After Effects no longer interprets high-speed footage to a maximum 99fps. The new limit of 999fps covers most of the actual speeds you’ll encounter outside of scientific visualization, and prevents the need for pre-comp workarounds. If you do a good deal of mask or roto work, you’ll appreciate new shortcuts that let you set a mask mode as you draw or edit a mask.

For those keeping track of GPU-optimized effects, each recent release adds a handful more. And each time an effect is rewritten to run on the GPU, not only is it far faster than the obsolete CPU version, it operates natively with floating-point accuracy. GPU optimization now makes the following effects faster: Levels (which I still believe to be the single most-used effect), Fractal Noise, Gradient Ramp, Drop Shadow, Offset, and Fast Box Blur.

After Effects CC 2017 new features

This is the second major release of After Effects 2017. The previous update 6 months prior brought the Cinema 4D renderer directly into the Composition viewer, one-click Adobe Media Encoder queueing, TypeKit integration, project templates, not to mention instant spacebar playback of raw footage and Team Projects.

Additions in this latest release imply more to come. Essential Graphics demands eventual support for 2D and 3D properties. Right-to-left text requires a more complete feature set to bring non-western type animation to parity. Whether these remain a priority will depend on how the community puts existing features to use.

What does After Effects even do?
Published March 3, 2017

Ridiculous question, right? We all know the After Effects basics. I mean, depending on your point of view, of course. It’s a motion graphics app that can be used to create Hollywood-caliber visual effects, when you’re not using it to animate. It’s all about type choreography. Wait, no, really it’s about plug-ins so sophisticated they might as well be separate applications. Really it’s just a tool for video. One thing I know is, I wouldn’t ever use it to edit, I mean, except when I did.

For those who aren’t fortunate enough to have someone walk them through After Effects personally (or, in my case, fortunate enough to have started with a beta version of After Effects 2.0 while working for George Lucas in the 1990s), there’s a recently added course at Lynda.com (LinkedIn) to walk you through it: After Effects: The Basics. I’m qualified to tell you about it because I’m the one who created it.


What is it—not just After Effects, but this course itself?

Let’s face it, many tutorials, even the really excellent ones, quickly get into the weeds. You can spend 20 minutes to an hour watching a talented artist talk you through building up a scene. When it’s done, you may wonder, “how many more of these do I watch to get the basic idea?”

I can relate. I like audio apps but have never used Pro Tools. My audio designer pals have used it for everything from music editing to mashups, EDM to diegetic sound. I have so many questions about how they get their results, I don’t even know where to start. So I created an overview that shows what the majority of people do with After Effects, in a few simple steps.

There’s a notion, after all, that a tool like After Effects allows you to do anything. There’s even some truth to that, but the statement doesn’t pass the “mom and dad test.” It doesn’t clarify anything. The course is designed to help absolute beginners glimpse what makes After Effects the flagship motion graphics and compositing application, the swiss-army knife of video.


How to get from start to finish with a shot

You must know, but can’t easily learn on your own, the simplest way to get a shot through the application, start to finish. This course covers how to accomplish that in the very first lesson, which you can watch here (no subscription required).

Most shots require more than a few simple steps. The rest of the first section of the course is a deeper dive into each major section of the application. Topics include:

  • Starting a project
  • Using the Timeline
  • Organizing with layers
  • Controlling animation with keyframes
  • Using effects and 3D
  • Working with type
  • Rendering in After Effects

The focus is on what each section of After Effects is for. In all I cover ten individual areas of the application, and the activities you’ll undertake in each of them. Why the Layer panel? It’s the only place to add paint, make rotobrush selections or point-track a shot. In the bigger picture, it’s also the “operating room” for an isolated element.

Responses to common questions

A beginner (who might also be an experienced film/video professional) has questions. What about After Effects makes it specifically good for motion graphics? Where in the app can I focus to master it most quickly (answer: the Timeline)? What are all those controls in the Viewer for? Suppose I want my keyframes to look more graceful, is that complicated? How does 3D even work? Do you have to be super advanced to get started animating type? (No.) Why is rendering still part of this application?

Maybe you already know the answers to these questions, in which case I bet others have asked you to answer them.

The final half of the course focuses on designing a single 10-second sequence. It includes steps one might consider advanced, such as creating and applying a 3D camera track. However, it’s designed so that no single task requires more than a few simple steps. The focus, again, is to forgo workflow details to begin with, and get a sense of how it looks to keep things simple.


Who are you, and what do you love to do?

Any two accomplished After Effects artists might have little to nothing in common for skills and interests. I learned I was an animator when I found I could spend hours refining motion that plays out in a few seconds, but compositing became my jam. Most of the compositors I’ve worked with don’t animate. The filmmaker uses After Effects to take control of a scene and make realistic looking changes to it. The motion graphics artist tends to go the opposite direction, bending screen reality into something we’ve never seen or imagined before.

And, let’s face it, After Effects is not always the best tool for the job. I have to correct anyone who refers to After Effects as an application for editing video; heaven help you if you actually try to use it beyond extreme short form edits. It does 3D, sure, but it ain’t Cinema 4D or Maya. It complements and sometimes competes with Photoshop (which also handles video, after all) and Illustrator.

After Effects Basics

When someone says you can do anything in After Effects, they also mean that it helps to know the toolset very well and to be more than a bit stubborn. That stubbornness comes from really wanting to do something, and then you find that the application presents a way of working through a shot that is (for the most part, anyhow) logical and expressive.

My overall metaphor for After Effects? It’s like having a camera that can shoot anything you can imagine, in motion. If that sounds overwhelming, it often is. With The Basics, I’m merely offering you the simplest way to get rolling.


Free After Effects video tutorial: Effectively track motion
Published May 24, 2016


Check it out. When I’ve had the opportunity to work with less experienced After Effects artists, and even some experienced ones who should know better, I’ve seen people make fundamental mistakes with the built-in Tracker. Not only are these easily avoided once you know what to do (and what not to do), but addressing them will instantly and dramatically improve your results.

So why are we talking about the Tracker in 2016, when all of the other tracking tools in After Effects are, on the whole, more powerful and more fully automated? Believe it or not, it’s still the most accessible and versatile solution, especially when it’s all you actually need. The unique thing it does is generate actual X and Y pixel data, without ever leaving the Viewer, and that data can be applied to any keyframe channel, anywhere in the project.

In this lesson, we focus on the two things you would never know unless someone told you. First is how to set up the feature and search regions so the track will have the best chance to succeed. Second, there is a specific way to apply that data that is almost always the right choice: create a null object, assign the motion to it, and use that null as the basis for whatever will inherit that motion.

If you get these wrong, you’re not going to have a good time with the Tracker. If you can put aside the fact that this tool involves more manual interaction than the automated Camera Tracker, you can take advantage of the fact that this tool also doesn’t force you into working in a 3D environment, if that’s not even what you want or need.

The 5-minute, 4-second lesson can be viewed free in its entirety from the course After Effects Compositing 06: Tracking and Stabilization (the full two-hour course requires a lynda.com subscription—click here for a free 10-day trial that allows you to view the full course and series).

The Simpsons go live: an exclusive inside look
Published May 20, 2016

Adrenaline. High drama. Zero failure tolerance. Beta software. Which one of these things is not like the other? This is the story of how even a flagship broadcast brand can take an honest risk once in a while, and a glimpse of how it all played out behind-the-scenes this past weekend, as relayed to me by one David Simons, co-originator of After Effects.

Now, software developers have tackled many exciting challenges since the era when coder Margaret Hamilton put humans on the moon and changed the notion of what humans could do, or be. After all, no one knew whether it could be done until the mission succeeded, and had it not, the failure would have played out on an international stage.

Which is more or less exactly the situation Simons and fellow O.G. After Effects computer scientist Dan Wilk found themselves in Sunday evening, in a broadcast equivalent of mission control, Fox Studios in Century City.


From left, Dan Wilk, Dave Simons, Allan Giacomelli and Adobe Sr. Strategic Development Manager Van Bedient celebrate a job well done.

(For the rest of the article, David Simons is referred to as DaveS, a nickname he has held since the early “There’s a 50% Chance My Name is Dave” days of After Effects, and Dan Wilk is Wilk, as he says he is typically called now that the team is full of guys named Dan.)

The moon-shot in question was an opportunity to improvise on live television for an audience of several million viewers—via animation—running on software that is technically still pre-release. Moreover, the feat had to be completed twice, once each for the EDT and PDT time zones.

How does an opportunity like this even come about?

The initial invitation came to Adobe a few months earlier, in February. The plan involved a sequence for the final three minutes of episode 595 (entitled “Simprovised”) that would be acted and animated in real time. The idea was to feature skilled improviser Dan Castellaneta, as Homer Simpson, responding to questions from live callers—real ones, who dialed a toll-free number—as a series of other animations played around him. The beginning and ending lines would be scripted but would still be performed in real time. The production team of the Simpsons had contemplated trying such a feat before, but it was only once Character Animator was in preview release that they felt that there was in fact a possibility of going through with it.

Technically, the challenge was unprecedented. The software wasn’t even designed for real-time rendering. In early tests, the team was not satisfied with the lip-sync quality, so Adobe Principal Scientist Wil Li went to work overhauling the way phonemes (distinct units of sound in speech) were mapped to visemes (mouth shapes). “We dropped one of our mouths,” says DaveS, “added two more, and renamed one… ending up with around 11” (technically 15 total, adding in four corresponding exaggerated mouth-shape versions). Translation of phonemes into mouth shapes created in a Photoshop file is at the core of what Character Animator does.
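Adobe hasn’t published the revised mouth set, so the mapping below is invented purely to illustrate the data structure involved: every phoneme class resolves to one of a small set of visemes, and roughly a dozen mouths cover all of speech. Treat the names and groupings as hypothetical:

```python
# Hypothetical phoneme-class -> viseme mapping, in the spirit of the roughly
# 11 mouths described above. The real Character Animator names and groupings
# are Adobe's and are not public; these are stand-ins.
PHONEME_TO_VISEME = {
    "AA AE AH": "open",          # father, cat, cup
    "EE IH":    "smile",         # see, sit
    "OO UW":    "round",         # boot, you
    "M B P":    "closed",        # lips pressed together
    "F V":      "lip-bite",      # teeth on the lower lip
    "L":        "tongue-up",
    "W":        "pucker",
    "CH JH SH": "wide-narrow",
    "TH DH":    "tongue-teeth",
    "R":        "r-shape",
    "SILENCE":  "neutral",
}

def viseme_for(phoneme):
    """Resolve one phoneme (e.g. 'AE') to the mouth shape to display."""
    p = phoneme.upper()
    for group, viseme in PHONEME_TO_VISEME.items():
        if p in group.split():
            return viseme
    return "neutral"
```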

What worked?

Although the software can use video to determine facial expressions and body positions, in this case, contrary to what has been reported elsewhere, Castellaneta was only on a microphone. No video of his facial or physical movements was captured. “They decided they didn’t want Dan to have to worry about any performance other than what he was saying, no camera on him.”

For Homer’s body, longtime Simpsons producer David Silverman operated a keyboard with preset actions from the audio mixing room. Allan Giacomelli from Fox sat next to him, operating a second system, ready to take over if the need arose. The keyboard, DaveS explains, was also set up to trigger camera views (a wide shot and a close-up, which had to be rendered from separate sources due to details like line thickness) as well as everything else that happened in the scene: Homer answering the phone, a series of other characters moving across the frame in cameos, and the big finale in which the walls of the “bunker” collapse to reveal Marge in curlers, gently burping Maggie on the couch.
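As a rough illustration of what such a rig amounts to in software, here is a hypothetical sketch of a keypad-driven cue table; the key bindings and cue names are invented for this example, not taken from the production.

```python
# Hypothetical sketch of a keypad-driven cue table; the bindings and cue
# names are invented for illustration, not the production's actual setup.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Cue:
    name: str
    action: Callable[[], None]

def make_cue(name: str) -> Cue:
    # In a real rig the action would fire an animation or camera switch;
    # here it just logs the cue.
    return Cue(name, lambda: print(f"TRIGGER: {name}"))

CUE_TABLE = {
    "F1": make_cue("camera: wide shot"),
    "F2": make_cue("camera: close-up"),
    "F3": make_cue("Homer answers the phone"),
    "F4": make_cue("cameo character crosses frame"),
    "F5": make_cue("finale: bunker walls collapse"),
}

def on_key(key: str) -> None:
    """Dispatch a keypad press to its cue, ignoring unmapped keys."""
    cue = CUE_TABLE.get(key)
    if cue is not None:
        cue.action()

on_key("F3")  # prints: TRIGGER: Homer answers the phone
```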


Of course, none of this would be possible without extensive improv experience by the voice of Homer Simpson, Dan Castellaneta. (Photo: Getty Images)

One huge question was that of adapting the software to operate in real time. It was designed as a module for After Effects, which of course is render-only, so real-time usage had not come into play other than for previews. “The first big challenge is that live lip-sync isn’t as good as when your timeline can see into future,” DaveS elucidates. “This isn’t just smoothing or interpolation, the software knows the odds of what’s going to happen” by using the future information to derive the most likely mouth shape.

This, in turn, led directly to a second major improvement that was fast-tracked for Character Animator: “We made it so you can (use the future-looking lip sync) mode live, with just a half-second delay.” This special mode won’t be in the upcoming preview release of Character Animator, but it is likely to be in the next version. “Anyone who wants access to it can contact us to get into our pre-release program.”
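The underlying trick can be sketched as a simple delay buffer: hold the output back by roughly half a second so each mouth-shape decision can consult the audio that arrives after it. The majority-vote smoothing rule below is an assumption chosen to keep the sketch short; Character Animator’s actual model is statistical and considerably more sophisticated.

```python
# Minimal sketch of delayed look-ahead lip sync: output lags input by
# LOOKAHEAD_FRAMES so every decision can see a short window of "future"
# visemes. Majority vote stands in for the real statistical model, and the
# sketch assumes the stream is longer than the buffer.
from collections import Counter, deque

LOOKAHEAD_FRAMES = 12  # roughly half a second at 24 fps

def live_lookahead(viseme_stream):
    """Yield one viseme per input frame, delayed by LOOKAHEAD_FRAMES,
    smoothing each choice over the frame plus the frames that follow it."""
    window = deque(maxlen=LOOKAHEAD_FRAMES + 1)
    for v in viseme_stream:
        window.append(v)
        if len(window) == window.maxlen:
            # The yielded value corresponds in time to the oldest buffered
            # frame, decided with knowledge of the frames after it.
            yield Counter(window).most_common(1)[0][0]
    while len(window) > 1:  # flush the remaining frames at end of stream
        window.popleft()
        yield Counter(window).most_common(1)[0][0]

raw = ["M", "M", "Aa", "M", "Aa", "Aa", "Aa"] * 4
print(list(live_lookahead(raw)))  # same length as input, half a second late
```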

The render systems for show day were the fastest available Mac Pros—two of them, for fail-safe redundancy. Since Castellaneta had no visual monitor, a short delay was not a concern; it would simply be added to the 7-second delay needed to meet FCC requirements for a live event (to allow for bleeping of foul language—the closest call being the word “Drumpf” in the second airing, widely anticipated beforehand and left in, unedited).

What could possibly go wrong?


Simpsons veteran David Silverman prepares for everything to go perfectly according to plan as showtime approaches. (Photo: David Silverman)

There were just three rehearsals with Castellaneta at Fox Sports in the two weeks prior to the show. A rehearsal with employees lobbing questions was recorded, and an international version, recorded earlier in the week, was ready to be cut to as a backup.

The main concern was that the puppet being animated was “the largest we had in terms of memory by far.” The 478 MB .psd file, delivered the Friday morning before the show, contained 2659 layers covering not only Homer’s rig but the full animations of all of the supporting characters, as well as the scene itself. Optimizations had been done to make this one enormous “puppet” created by the Simpsons artists operate properly; it could have been rendered just as quickly as a set of separate layers, but “they were just putting everything into the Homer puppet, one enormous Photoshop document. It was working despite the possibility that it was just too big.”
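For a sense of scale, a file like that can be inspected with the third-party psd-tools Python library; the filename here is hypothetical.

```python
# Count every layer and group in a PSD, recursively, using psd-tools
# (pip install psd-tools). The filename is a hypothetical stand-in.
from psd_tools import PSDImage

psd = PSDImage.open("homer_live_rig.psd")
layers = list(psd.descendants())  # all layers and groups, recursively
print(f"{len(layers)} layers")    # the live rig reportedly had 2659
```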

The principal line of defense against surprises was to have two Macs set up exactly the same way, each with its special keyboard to trigger the animations, and with audio fed to both. “If one crashed, we could switch to the other Mac. If he was in the middle of a special movement, it might glitch,” but the show would go on.


In the trenches: the X-Keys keypad with customized Homer-keys at the ready (Photo: David Silverman).

It wasn’t until 4:00 pm on Sunday, one hour prior to airtime, that an issue emerged with the backup machine: “the gags were all running slowly. The main machine was running fine, but the backup one was clearly running seconds too late. The Macs were identical, so we were thinking, how do we know the main one isn’t gonna suddenly bog down?” There was no way DaveS and Wilk could know for sure whether some previously unseen glitch would also emerge on the main system and prevent subsequent animations from being triggered, including the wall collapse that ends the episode.

“We had to go to air without knowing what was wrong.”

The backup machine wasn’t required for the east coast broadcast, which came off without a major hitch. (If you look for excerpts on YouTube, you may see stuttering motion; this has been confirmed as the result of faulty capture and did not appear in the actual transmission.)

That left the other shoe to drop in three hours. “We got some pizza, had a drink, and then went to work.” After Wilk kicked around various unlikely theories with DaveS and Allan, he had a sudden flash of insight: “it had to be App Nap.” This feature, introduced in OS X Mavericks, puts inactive applications into a paused state to reduce power usage. As it turned out, App Nap had been suspending the software that ran in the background to service the custom keyboard. “We used the terminal command that forces that machine to kill the feature.” Problem solved!
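DaveS didn’t specify which command they used, but macOS does expose a documented per-application switch for this: writing the `NSAppSleepDisabled` user default, along the lines of `defaults write <app bundle ID> NSAppSleepDisabled -bool YES`, tells the system never to put that application into App Nap. (The bundle ID placeholder would be whatever application services the keypad.)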

Except… the question remained of what to do about the main machine, which had performed fine in the first broadcast with App Nap actively running. “We decided if it ain’t broke, don’t fix it.” The two developers were later able to confirm that the main machine would also lag if left idling, which, naturally, it hadn’t been.

And so, the west coast broadcast came off without a hitch.

How will this change the way an animator works, or even what animation can do?

An odd parallel: After Effects and the Simpsons (as a Fox series) are roughly the same age, give or take three or four years; more to the point, each has demonstrated staying power far beyond the odds (or the competition). While the cartoon was a juggernaut nearly from its inception in 1989, the software that debuted in 1993 was anything but; yet just like the show, it soon found its passionate fans, myself very much included in both cases.

Character Animator is an application in its own right, both in terms of its power and the learning curve involved in rigging a character and making full use of it, and it appears destined to become that rarest of entities: a new desktop video application developed entirely within Adobe.

DaveS couldn’t yet tell me about other series or studios interested in Character Animator, but when I asked him what type of show he thought was a good fit for the technology, he named Archer, a dialog-heavy series with a clean, straightforward artistic look.

“Straightforward” doesn’t mean the character needs to literally face straight toward the camera, but each new angle requires a separate puppet rig unless a production is very clever about warping a flat-shaded character. While the Simpsons has been and will continue to be animated by hand in Korea, it seems only a matter of time before another major show adopts Character Animator—whether or not animation for live television becomes a bona fide trend.

Homer is ready for your call.