PVC News Staff – ProVideo Coalition – A Filmtools Company
https://www.provideocoalition.com
Wed, 01 Jan 2025

Looking back on 2024 and ahead to 2025 – A PVC Roundtable Discussion

While the hosts of the Alan Smithee podcast already discussed the evolving landscape in media and entertainment as 2024 draws to a close, there’s so much more to say about what happened in 2024 and what 2025 has in store for individual creators and the entire industry. Generative AI for video is everywhere, but how will that pervasiveness impact actual workflows? What were some of the busts in 2024? How did innovations in cameras and lenses make an impact? And what else are we going to see in 2025 beyond the development of AI animation/video tools?

Below is how various PVC writers explored those answers in a conversation that took shape over email. Keep the conversation going in the comments or on LinkedIn.


Scott Simmons

2024? Of course, the first thing you think about when recapping 2024 (and looking ahead to 2025) is that it was all about artificial intelligence. Everywhere you look at technology, media creation, and post-production, there is some mention of AI. The more I think about it, though, the more I feel like it was just a transitional year for AI. Generative AI products feel like old hat at this point. We’ve had “AI” built into our editing tools for what feels like a couple of years now. While we have made a lot of useful advancements, I don’t feel like any earth-shattering AI products shipped within any of our editing tools in 2024. Adobe’s Generative Extend in Premiere Pro is probably the most useful and most exciting AI advancement I’ve seen for video editors in a long time. But it’s still in beta, so Generative Extend can’t count until it begins shipping. Jumper, an AI-based video search tool that shipped this year, is a truly useful third-party product. Adobe also dropped its “visual search” tool within the Premiere Pro beta, so we know what’s coming next year as well, but it’s far from perfect at the time of this writing, and still in beta. I appreciate an AI tool that can help me search through tons of media, but if that tool returns too many results, I’m yet again flooded with too much information.

The other big AI-based video advancement is text-based generative video coming into its own this year. Some products shipped, albeit at quite a high price for anything truly usable. And even more went into preview or beta. 2025 will bring us some big advancements in generative video, and that’s because we’re going to need them. What we saw shipping this year was underwhelming. A few major brands released AI-generated commercial spots or short films, and they were unimpressive and creepy. I saw a few generative AI short films make the rounds on social media, and all I could do after watching them was yawn. The people who seemed most excited by generative video (and most trumpeting its game-changing status) were a bunch of tech bros and social media hawks who didn’t really have anything to show other than more yelling on social media from their paid and verified accounts or their promoted posts.

Undoubtedly, AI will continue to infiltrate every corner of media creation. And if it can do things like make transcription more accurate or suggest truly usable rough cuts, then I think we can consider it a success. But for every minor workflow improvement that is actually useful in post-production, we’ll see two or three self-proclaimed game-changing technologies that end up just being … meh.

In the meantime, I’ll happily use the very affordable M4 Mac mini in the edit suite or a powerful M4 Mac Studio that speeds up the post-production process overall. We can all be thankful for cloud-based “hard drives,” like LucidLink, that do more for post-production workflow than most of the AI tools that have been thrown our way. Maybe 2025 will be the year of AI reality over AI hype.

Nikki Cole

While I’m aware that the issues we’ve been facing on the writing/producing/directing side don’t affect many of the tech teams on the surface, it has been rather earth-shattering on our side of things. Everyone fears losing their jobs to AI, and with good reason. I am in negotiations with a European broadcaster who really likes the rather whacky travel series I’m developing but, like so many broadcasters now, simply doesn’t have enough cash to fully commission it. They flat out told me that I would write the first episode, and they would simply feed the rest of the episodes into AI so they wouldn’t have to pay me to write more. I almost choked when they said that matter-of-factly, and I responded by saying that this show is comedy and AI can’t write comedy! Their response was a simple shrug of the shoulders. It was devastating for me, and with such an obvious lack of integrity on their part, I’m now concerned that they are going ahead with that plan even though we don’t have a deal in place. So, from post’s perspective, that is one more project that isn’t being brought into the suite, because I can’t even get it off the ground to shoot it.

As members of various guilds and associations, we are all learning about the magnificent array of tools we now have at our fingertips for making pitches, sizzle reels, visual references, etc. It really is astonishing what I can now do. I’m learning as much as I can, while I just can’t shake the guilt of knowing that I’ll be putting the great graphic designers I used to hire out of work. If budgets were a little better, I would, of course, hire a graphic designer with those skills, but as things stand today, I can’t afford to do that.

It’s definitely a fascinating and perplexing time!

Oliver Peters

AI in various forms gets the press, but in most cases it will continue to be more marketing hype than anything else. Useful? Yes. True AI? Often, no. However, tools like Jumper can be quite useful for many editors, although some aspects, like text search, have existed for years in PhraseFind for Avid Media Composer.

There are many legal questions surrounding generative AI content creation. Some of these issues may be resolved in 2025, but my gut feeling is that legal claims will only just start rolling for real in this coming year. Vocal talent, image manipulation, written scripts, and music creation will all be areas requiring legal clarification.

On the plus side, many developers – especially video and audio plugin developers – are using algorithmic processes (sometimes based on AI) to combine multiple complex functions into simple one-knob style tools. Musik Hack and Sonible are two audio developers leading the way in this area.

One of the less glitzy developments is the future (or not) of post. Many editors in major centers for film and TV production have reported a lack of gigs for months. Odds are this will continue, not reverse, in 2025. The role of (or even need for) the traditional post facility is being challenged. Many editors will need to find ways to reinvent themselves in 2025. As many businesses enforce return-to-office policies, will editors find that directors consider remote work less acceptable?

When it comes to NLEs, Adobe Premiere Pro and Avid Media Composer will continue to be the dominant editing tools when collaboration or project compatibility is part of the criteria. Apple Final Cut Pro will remain strong among independent content creators. Blackmagic Design DaVinci Resolve will remain strong in color and finishing/online editorial. It will also be the tool for many in social media as an alternative to Premiere Pro or Final Cut Pro.

The “cloud” will continue to be the big marketing push for many companies. However, for most users and facilities, the internet pipes still make it impractical to work effectively via cloud services with full-resolution media in real time. And full-resolution media is getting larger, not lighter, which is hardly conducive to cloud workflows.

A big bust in this past year has been the Apple Vision Pro. Like previous attempts at immersive, 3D, and 360-degree technologies, there simply is no large, sustainable, mass market use case for it, outside of gaming or special venues. As others have predicted, Apple will likely re-imagine the product into a cheaper, less capable variant.

Another bust is HDR video. HDR tools exist in many modern cameras and even smartphones. HDR is a deliverable for Netflix originals and even optional for YouTube. Yet the vast majority of content that’s created and consumed continues in good old Rec 709. 2025 isn’t going to change that.

2025 will be a year when the rubber meets the road. This is especially true for Adobe, which is adding generative AI for video and color management to Premiere Pro. So far, the results are imperfect. Will that change in 2025? We’ll see.

Iain Anderson

The last twelve months have brought a huge amount of change. Generative AI might have had the headlines, but simple, short clips don’t just magically scale up to a 30-second spot, nor anything longer. One “insane, mind-blowing” short that recently became popular couldn’t even manage consistent clothing for its lead, let alone any emotion, dialogue or plot. Gen AI remains a tool, not a complete solution for anything of value.

On the other hand, assistive AI has certainly grown this year. Final Cut Pro added automatic captioning (finally!) and the Magnetic Mask, Premiere Pro has several interesting things in beta, Jumper provides useful visual and text-based media search today, Strada looks set to do the same in the new year, and several other web-based tools offer automatic cutting and organizing of various kinds. But I suspect there’s a larger change coming soon — and it starts with smarter computer-based assistants.

Google Gemini is the first of a new class of voice-based AI assistants which you can ask for help while you use your computer, and a demo showed it (imperfectly) answering questions about DaVinci Resolve’s interface. This has many implications for anyone learning complex software like NLEs, and as I make a chunk of my income from teaching people that, it’s getting personal. Still, training has been on the decline for years. Most people don’t take full courses, but just jump in and hit YouTube when they get stuck. C’est la vie.

While assistant AIs will become popular, AIs will eventually control our computers directly, and coders can get a taste of this today. Very recently, I’ve found ChatGPT helpful for creating a small app for Apple Vision Pro, for writing scripts to control Adobe apps, and also for converting captions into cuts in Final Cut Pro, via CommandPost. Automation is best for small, supervised tasks, but that’s what assistants do.
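The captions-into-cuts idea is easy to sketch in code. Below is a minimal, hypothetical example (not the actual CommandPost workflow; the function name is made up) that parses SRT caption timings and returns the start of each caption as a cut point in seconds:

```python
import re

def srt_cut_points(srt_text):
    """Parse SRT caption timings and return the start of each caption
    as a cut point in seconds. A naive sketch: a real NLE workflow
    needs frame-accurate timecode, which this only approximates."""
    # Match only the start timestamp, i.e. the one followed by "-->"
    pattern = re.compile(r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})\s*-->")
    cuts = []
    for h, m, s, ms in pattern.findall(srt_text):
        cuts.append(int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000)
    return cuts

srt = """1
00:00:01,000 --> 00:00:03,500
Hello.

2
00:00:04,250 --> 00:00:06,000
World.
"""
print(srt_cut_points(srt))  # [1.0, 4.25]
```

From a list like this, a script can drive an editor’s API (or a macro tool) to drop markers or cuts at each caption boundary.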

Early in 2025, an upgraded Siri will be able to directly control any feature that a developer exposes, enabling more complex interactions between apps. As more AIs become able to interpret what they see on our screens, they’ll be able to use all our apps quicker than we can. In video production, the roles of editor and producer will blur a little further, as more people are able to do more tasks without specialist help.

But AI isn’t the whole story here, and in fact I think the biggest threat to video professionals is that today, not as many people need or want our services. High-end production stalled with the pandemic and many production professionals are still short of work. As streaming ascends (even at a financial loss) broadcast TV is dying worldwide, with flow-on effects for traditional TV advertising. Viewing habits have changed, and will keep changing.

At the lower end, demand for quick, cheap vertical social media video has cut into requests for traditional, well-made landscape video for client websites or YouTube. Ads that look too nice are instantly recognised as such and swiped away, leading to a rise in “authentic” content, with minimal effort expended. It’s hard to make a living as a professional when clients don’t want content that looks “too professional”, and hopefully this particular pendulum swings back around. With luck, enough clients will realise that if everyone does the same thing, nobody stands out.

Personally, the most exciting thing this year for me is the Apple Vision Pro. While it hasn’t become a mainstream product, that was never going to happen at its current high price. Today, it’s an expensive, hard-to-share glimpse into the future, and hopefully the state-of-the-art displays inside become cheaper soon. It’ll be a slow road, and though AR glasses cannot bring the same level of immersion, they could become another popular way to enjoy video.

In 2024, the Apple Vision Pro was the only device to make my jaw drop repeatedly, and most of those moments have come from great 3D video content, in Immersive (180°) or Spatial (in a frame) flavors. Blackmagic’s upcoming URSA Cine Immersive camera promises enough pixels to accurately capture reality —  8160 x 7200 x 2 at 90fps — and that’s something truly novel. While I’m lucky to have an Apple Vision Pro today, I hope all this tech is in reach of everyone in a few years, because it really does open up a whole new frontier for us to explore.

P.S. If anyone would like me to document the most beautiful places in the world in immersive 3D, let me know?

Allan Tépper

In 2024, we saw more innovation in both audio-only and audio-video switchers/mixers/streamers, and the democratization of 32-bit float audio recording and audio DSP in microphones and mixers. I expect this to continue in 2025. In 2024, both Blackmagic and RØDE revolutionized ENG production with smartphones. I also began my series about ideal digitization and conversion of legacy analog color-under formats, including VHS, S-VHS, 8mm, Hi8 and U-Matic. I discussed the responsibility of proper handling of black level (pedestal/setup at 7.5 IRE or zero IRE) at the critical analog-to-digital conversion moment, and proper treatment and ideal methods to deinterlace while preserving the original movement (temporal resolution). That includes ideal conversion from 50i to 50p or 59.94i to 59.94p, ideal conversion from non-square pixels to square pixels, upscaling to HD’s or 4K’s vertical resolution with software and hardware, preservation of the original 4:3 aspect ratio (or not), optional cropping of headswitching, noise reduction and more. All of this will continue in 2025, together with new coverage of bitcoin hardware wallets and associated services.
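For readers new to these conversions, the basic arithmetic can be sketched in a few lines. This is a simplified illustration: the 10/11 pixel aspect ratio shown is one common convention for NTSC 4:3, and actual values vary by standard and tool.

```python
from fractions import Fraction

def square_pixel_width(storage_width, par):
    """Display width in square pixels = storage width x pixel aspect ratio."""
    return round(storage_width * par)

def field_preserving_deinterlace_rate(interlaced_rate):
    """Deinterlacing that preserves temporal resolution turns each field
    into a full frame, so 59.94i (59.94 fields/s) becomes 59.94p
    (59.94 frames/s): the rate label is unchanged and no motion is lost."""
    return interlaced_rate

# NTSC 4:3 SD: 720x480 storage pixels, PAR 10/11 (one common convention)
print(square_pixel_width(720, Fraction(10, 11)))  # 655
print(field_preserving_deinterlace_rate(Fraction(60000, 1001)))  # 60000/1001
```

Discarding one field instead (halving the rate) would throw away half the temporal resolution, which is exactly what the series argues against.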

In 2024, at TecnoTur we helped many more authors distribute their books, ebooks and audiobooks widely. We guided them, whether they wanted to use their own voice, a professional voice or an AI voice. We produced audiobooks in Castilian, English and Italian. We also helped them deliver their audiobooks (with self-distribution from the book’s own website) in M4B format with end-user navigation of chapters. I expect this to expand even more in 2025.

Brian Hallett

This past year, we saw many innovations in cameras and lenses. For cameras, we are now witnessing the market begin to move toward making medium-format acquisition easier for “everyday filmmakers” rather than just the top tier. Arri announced its new ALEXA 265, a digital 65mm camera designed to be compact and lightweight. The new ALEXA 265 is one-third the size of the original ALEXA 65 while slightly larger than the still-new ALEXA 35. Yet the ALEXA 265 is only available as a rental.

Regarding accessibility for filmmakers, the ALEXA 265 will not be easier to get one’s hands on; that will be the territory of the Blackmagic URSA CINE 17K 65. The Blackmagic URSA CINE 17K 65 is exactly the kind of camera Blackmagic Design and CEO Grant Petty want to get into the hands of filmmakers worldwide. Blackmagic Design has a long history of bringing high-level features and tools to inexpensive cameras. It is the company that bought DaVinci Resolve and then gave it away free with camera purchases. It brought raw recording to inexpensive cameras early on in the camera revolution. Now, Blackmagic Design sees 65mm as the next feature, once reserved for an exclusive club of top-tier cinematographers, that it can deliver to everyone at a relatively decent price of $29,995.00, so expect to see the Blackmagic URSA CINE 17K 65 at rental houses sooner rather than later. I also wouldn’t let the 17K resolution bother you too much. Blackmagic RAW is a great codec that is a breeze to edit compared to processor-heavy compressed codecs.

We also saw Nikon purchase RED, but we have not yet seen cross-pollination of technology between the two companies. In the years to come, we will see Nikon add RED tech to its cameras and vice versa.

Sony delivered the new Sony BURANO and I’m seeing the camera at rental houses now. More so, though, I see more owners/operators with the BURANO than anything else. It appears Sony has a great camera that will last a long time for owners/operators.

I feel like I saw a ton of new lenses in 2024, from ARRI’s new Ensō primes to Viltrox’s anamorphic lenses. We see Chinese lenses coming in from every direction, which is good; more competition benefits all of us and keeps prices competitive. Sigma delivered its new 28-45mm f/1.8, the first full-frame zoom with a constant f/1.8 maximum aperture. I tested this lens on a Sony mirrorless, and it felt like the kind of lens you can leave on all day and have everything covered. The depth of field was great in every shot. Sigma has delivered a series of lenses for mirrorless E-mount and L-Mount cameras at an astounding pace, from 500mm down to 15mm.

Canon was miserly with its RF mount. To me, Canon is protecting its investment in lens innovation by restricting who can make RF-mount lenses. I wish it wouldn’t. It seems counter-productive to block others from making lenses that work on your new cameras. What has happened is that all those other lens makers are making lenses for E-mount and L-Mount instead. In essence, a BURANO shooter has more lenses available than a Canon C400 shooter. If I had to buy a camera today, which lenses I could use would be part of that calculus.

On artificial intelligence (AI), we cannot discount how manufacturers use it to innovate more quickly, shortening the timeframe from concept to final product while saving money. As a creative, I use AI and think of it this way: there will be creatives who embrace AI and those who don’t or won’t, and that will be the differentiator in the long run. I already benefit from AI, from initial script generation (which is only a starting point) to caption generation and transcription, to photo editing in Lightroom.

Damien Demolder

The production of high-quality video used to be restricted by the cost of the kit and the skills required to operate it. Those two things helped to regulate the number of people in the market. The last year, though, has seen a remarkable acceleration in the downward pricing trend of items that used to cost a lot of money, as well as increasingly simple and convenient handling. I tend to review equipment at the lower end of the price scale, an area that has seen a number of surprising products in the last twelve months. These days, anamorphic lenses hovering around the $1,000 price point are almost commonplace, and LED lights have simultaneously become cheaper and much more advanced.

Popular camera brands that until recently only dipped a toe into the video market now offer their own Log profiles, encourage their users to record raw footage to external devices and provide full 35mm-frame recording as though it were the expected norm. LUTs can be created in a mobile phone app and uploaded to cameras to be baked into the footage, and care-free 32-bit float audio can be recorded directly to the video soundtrack for a few hundred dollars and a decent mic. Modern image stabilisation systems, available in very reasonably priced mirrorless cameras, mean we can now walk and film without a Steadicam, and best-quality footage can be streamed to a tiny SSD for long shoots and fast editing. Earlier this year I reviewed a sub-$500 Hollyland wireless video transmitter system that, with no technical set-up, can send 4K video from an HDMI-connected camera to four monitors or recorders – or to your phone, where the footage can be recorded in FHD. I also reviewed the Zhiyun Molus B500 LED light, which provides 500W worth of bi-coloured illumination for less than $600, and the market is getting flooded with small, powerful bi- and full-colour LED lights that run on mini V-lock batteries – or their own internal rechargeable batteries.
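The appeal of “care-free” 32-bit float recording is easy to show numerically. This is a simplified illustration with made-up helper names (real recorders use dual-ADC designs), but the headroom arithmetic is the point:

```python
def record_int16(sample):
    """Fixed-point capture: anything past full scale is clipped forever."""
    return max(-32768, min(32767, round(sample * 32767)))

def record_float32(sample):
    """Float capture keeps values beyond 0 dBFS; gain can be trimmed later."""
    return sample

samples = [0.5, 2.0, -1.5]  # peaks up to 6 dB over full scale

# Both paths get the same -6 dB trim in post
int16_then_trim = [record_int16(s) / 32767 * 0.5 for s in samples]
float_then_trim = [record_float32(s) * 0.5 for s in samples]

print([round(x, 3) for x in int16_then_trim])  # [0.25, 0.5, -0.5]  peaks flattened
print(float_then_trim)                         # [0.25, 1.0, -0.75] waveform intact
```

In the fixed-point path the over-range peaks are flattened at capture, and no later gain change can restore them; in the float path a simple trim recovers the original waveform.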

Now, a lot of these products aren’t perfect and have limitations, but no sooner have the early adopters complained about the faults in the short-run first batch than the manufacturers have altered the design and fixed the issue to make sure the second phase will meet, and often exceed, expectations. We can now have Lidar AF systems for manual lenses, autofocus anamorphics, cheap gimbals so good you’d think the footage was recorded with a drone – even lighting stands, heavy duty tripods and rigging gear are getting cheaper and better at the same time.

Of course all this is great news for low-budget productions, students and those just starting out, but it also means anyone can believe they are a filmmaker. With the demand for video across social media platforms and websites ever increasing, you’d think that would be great news for the industry, but much of that demand is being eaten up by those with no formal training and some clever kit. Not all the content looks or sounds very good, but often that matters less than keeping to a tiny budget. Those who merely think they are filmmakers can easily convince clients, who can’t imagine what their film could otherwise look like, that they are.

I expect 2025 will bring us more of this – better, more advanced and easier-to-use kit at lower prices, and more people using it. I didn’t need to consult my crystal ball for this prediction. Every year has brought the same gradual development since I joined the industry 28 years ago, but once again it has taken us to places I really hadn’t expected. I expect to be surprised again.

Nick Lear

I found 2024 to involve a lot of waiting. Waiting for the industry to right itself, waiting for the latest AI tool to come out of beta, waiting for companies to put stability before bells and whistles. That, I fear, may be rather a long wait. I also found I had very mixed feelings about AI: on the one hand I was excited to see what advances the technology could bring, and on the other saddened by the constant signs that profit is put before people, whether in the plagiarising of artists’ work or big Hollywood studios wanting the digital rights to actors’ likenesses.

Generative AI impresses me whenever I see it – and I think we have to acknowledge the rate of improvement in the last few years – but I also struggle to see where it can fit in my workflow. I am quite looking forward to using it in pre-production: testing out shots before a shoot or while making development sizzles. To that end, it was great to see OpenAI’s text-to-video tool Sora finally come into the public’s hands this month, albeit not in the UK or Europe. Recently, Google’s Veo 2 has been hyped as much more realistic, but it’s still in beta and you have to live in the US to get on the waiting list. Adobe’s Firefly is also waitlist-only – so there’s more waiting to be done – yet it could well be that 2025 brings all of these tools into our hands and we get to see what people will really do with them outside of a select few.

On the PC hardware front, marketing teams went into overdrive this year to sell us on new “AI” chips. Intel tried to convince us that we needed an NPU (neural processing unit) to run machine-learning operations when there were only marginal gains over using the graphics card we already had. And Microsoft tried to push people in the same direction, requiring specific new hardware to qualify for Copilot+. Both companies are trying to catch up with Apple on battery life, which I’m all for, but I wish they could be more straightforward about how they present it.

I continued to get a lot out of the machine learning based tools, whether it was using a well trained voice model in Eleven Labs or upscaling photos and video with Topaz’s software. I also loved the improvements that Adobe made in their online version of Enhance Speech which rescued some bad audio I had to work with. Some of these tools are starting to mature – they can make my life easier and enable me to present better work to my clients which is all I want at the end of the day.

Jeff Foster

For me, 2024 was full of personal life challenges, which precluded the kind of deep-dive involvement in the world of AI I managed in the previous two years, but I did catch up on some of the more advanced generative AI video/animation tools to explore and demo for the CreativePro Design + AI Summit in early December. I created the entire 40-minute session with generative AI tools, including my walking intro chat and the faux TED Talk presentation, using tools like HeyGen, ElevenLabs, Midjourney and Adobe After Effects. As usual, I did a complete breakdown of my process as a reveal toward the end of the session, and I’ll be sharing that process in a PVC exclusive article/video in January 2025.

The rate of development of AI animation/video tools from Runway, Hailuo, Leonardo and others building text/image-to-video tools is astounding. I think we’re going to see a major shift in development in this area in 2025.

I’m also exploring music and audio generative AI tools, including hardware/software solutions, in the coming months, and I expect to see some amazing growth in quality, production workflows and accessibility for a public of largely non-musicians.

As usual, I’m not only exploring the tools and seeing how they can be utilized, but am also concerned about the direction all of this is heading and how it affects content creators and producers in the end. I always take the position that these are tools we can utilize in our workflows (or at least understand how they work) or choose to ignore and hope they’ll just go away like other fads… which they won’t.

Michelle DeLateur

Christmas morning is a flurry of impatience, usually an explosion of expectation, surprise, and wrapping-paper scraps. I used to describe NAB as “camera Christmas” to newcomers. But with announcements coming by email and NAB turning primarily into events, meetings, and conversations, that giddy elf feeling I used to get from seeing the new floor models has turned into excitement at seeing familiar faces.

So where has our impatience shifted? It seems we now find ourselves in a waiting game for presents in 2025.

That new Adobe Gen-AI video model? Hop on the waitlist. Hoping to see more content on the Vision Pro, perhaps with the Blackmagic URSA Cine Immersive? Not yet. Excited about Sigma making RF lenses? They started with APS-C first.

Patience is not one of our virtues. With shorter camera shelf lives and expected upgrade times, we assume we will hold onto our gear for less time than ever and are always ready for a change. Apple’s releases make our months-old laptops seem slow. A new AI tool comes out nearly every day.

Video editors are scrambling for the few job openings, adding to their skill sets to be ready for positions, or transitioning to short-term jobs outside of video, all alongside the anxiety of AI threatening to take over this realm. We rejoiced when transcription was handed to a robot. We winced when an AI program showed it could make viral-ready cuts.

Just because we are forced to wait does not mean we are forced to be behind. It is cheaper than ever to start a photography journey. Mastering the current tools can make you a faster editor. Teaching yourself and others can help create new stories. While I personally don’t fully believe in the iPhone’s filmmaking abilities, there ARE plenty of tools to turn the thing-that’s-always-on-you into a filmmaking device.

In 2024, we were forced to wait. But we are not good at waiting. That’s the same tenacity and ambition that makes us good at storytelling. It’s only a matter of time. It’s all a matter of time. So go forth and use your time in 2025.

Chris Zwar

For me, 2024 was a case of ‘the more things change, the more they stay the same’. I had a busy and productive year working for a range of clients, some of whom I hadn’t worked with for many years. It was nice to reconnect with former teams, and I found it interesting that briefs, pitches and deliveries hadn’t changed a great deal over time.

The biggest change for me was finally investing in a new home workstation. Since Covid, I have been working from home 100%, but I was using an older computer that was never intended for daily projects. Going through the process of choosing components, ordering them and then assembling a dedicated work machine was very rewarding, and something I should have done sooner. Now that my home office has several machines connected to a NAS with 10-gig ethernet, I have more capacity at home than some of the studios I freelance for – something I would have found inconceivable only a few years ago!

Technically, it seems like the biggest impact AI has had so far is providing a topic for people to write about. Although AI tools continue to improve, and I use AI-based tools like Topaz and Rotobrush regularly, I’m not aware of AI having had any impact on the creative side of the projects I’ve worked on.

From my perspective as an After Effects specialist, the spread of HDR and ACES has helped After Effects become increasingly accepted as a tool for high-end VFX. The vast majority of feature films and premium TV shows are composited in Nuke pipelines, but with ACES having been built into AE for over a year, I’m now doing regular cleanup, keying and compositing in After Effects on projects that wouldn’t have been available to me before.

SIGMA announces 28-105mm F2.8 DG DN | Art lens, a new fast-aperture zoom with extended reach
https://www.provideocoalition.com/sigma-announces-28-105mm-f2-8-dg-dn-art-lens-a-new-fast-aperture-zoom-with-extended-reach/
Fri, 06 Sep 2024

]]>
SIGMA Corporation of America, the US subsidiary of SIGMA Corporation (CEO: Kazuto Yamaki. Headquarters: Asao-ku, Kawasaki-shi, Kanagawa, Japan) is pleased to announce the SIGMA 28-105mm F2.8 DG DN | Art zoom lens for full-frame mirrorless camera systems. Spanning many popular focal lengths in a single fast-aperture zoom, this is an exciting addition to the product line.

The SIGMA 28-105mm F2.8 DG DN | Art is a surprisingly compact, full-frame, wide-angle to telephoto zoom lens with a fast constant aperture. Available for Sony E-mount and L-Mount, this lens covers several popular focal lengths from 28mm to 105mm, including the very popular 85mm focal length for portraiture, remaining at F2.8 through the entire range.

Featuring HLA (High-response Linear Actuator) autofocus, and optical performance that lives up to the standard of Art line lenses, the SIGMA 28-105mm F2.8 DG DN | Art joins SIGMA’s selection of fast-aperture zoom lenses in the Art line, offering photographers and videographers a variety of premium standard zoom options depending on their personal style. Like the 24-70mm F2.8 DG DN II | Art and the 28-45mm F2.8 DG DN | Art, the new 28-105mm F2.8 includes a lockable aperture ring with click/declick function, as well as two AFL buttons and a zoom lock switch. Additionally, the dust- and splash-resistant design and water- and oil-repellent coating on the front element make it suitable for shooting both stills and video in inclement weather.

A minimum focusing distance of 15.8 inches (40cm) at all focal lengths, and maximum magnification ratio of 1:3.1 at the telephoto end add to the versatility of this new zoom lens. This lens also features a rounded twelve-blade diaphragm, which helps to keep the large-aperture unit as small as possible.

Measures have been taken to minimize changes in optical performance due to differences in zoom and focus positions, including the use of a difficult-to-process large-diameter FLD glass in the first group to suppress aberrations in each group. In addition, the use of 5 aspherical lens elements has enabled the lens to achieve both a wide zoom range of 28mm to 105mm and a large-aperture of F2.8, while reducing its overall size.

By thoroughly reducing the weight of each part, the lens is kept under 2 pounds while achieving both a wide zoom range and an F2.8 aperture. The lens barrel near the mount is made of magnesium rather than aluminum, ensuring rigidity while reducing the weight of these parts alone by two-thirds. The lens will retail for $1,499 and will be available in late September 2024.

Learn more at: https://www.sigmaphoto.com/28-105mm-f2-8-dg-dn-a

Exclusively for mirrorless cameras | Compatible with full-frame cameras

  • Constant F2.8 aperture across a versatile 28-105mm zoom
  • Superb portability thanks to a weight of less than 1kg
  • Professional features, fast AF and excellent build quality


Supplied accessories: CASE, LENS HOOD LH878-07, FRONT CAP LCF-82 III, REAR CAP LCR II

Available mounts: L-Mount, Sony E-mount

Launch date: September 26, 2024

  • Product appearance and specifications are subject to change.
  • This product is developed, manufactured and sold based on the specifications of E-mount which was disclosed by Sony Corporation under the license agreement with Sony Corporation.
  • L-Mount is a registered trademark of Leica Camera AG.

#SIGMA #SIGMA28105mmF28Art #SIGMAArt #SIGMAArtZoom #SIGMADGDN

The SIGMA 28-105mm F2.8 DG DN | Art is not just another standard zoom lens. With a powerful wide-angle to medium telephoto focal length and a bright F2.8 aperture throughout, this full-frame mirrorless zoom combines outstanding optical performance, superb build quality and a range of pro-grade features to make it one of the most versatile optics on the market for mirrorless systems.

The lens is built to tackle a huge range of subjects and situations, from expansive low-light landscapes to portraits with big, beautiful bokeh, and thanks to SIGMA’s latest optical technology and the use of large diameter elements including FLD, SLD and five aspherical elements, it is able to do so with superb clarity and sharpness. This allows the lens to deliver exceptional versatility without compromising on optical quality. The lens also includes high-speed AF with HLA technology, a rugged dust- and splash-resistant structure, and various functions such as an aperture ring, all packed into a compact and lightweight body weighing less than 1kg, making it an exceptional all-round choice for both stills and video.

 

]]>
https://www.provideocoalition.com/sigma-announces-28-105mm-f2-8-dg-dn-art-lens-a-new-fast-aperture-zoom-with-extended-reach/feed/ 0
Ninja Phone Shipping Worldwide Ahead of IBC2024 https://www.provideocoalition.com/ninja-phone-shipping-worldwide-ahead-of-ibc2024/ https://www.provideocoalition.com/ninja-phone-shipping-worldwide-ahead-of-ibc2024/#respond Mon, 19 Aug 2024 20:44:54 +0000 https://www.provideocoalition.com/?p=283178 Read More... from Ninja Phone Shipping Worldwide Ahead of IBC2024

]]>

Atomos has announced shipping of Ninja Phone, following an outstanding reception from attendees at NAB 2024. It is available in time for IBC 2024 via authorized resellers and will be demonstrated at the show on the Atomos Stand D25 in Hall 11. 

Ninja Phone supports iPhone 15 Pro and iPhone 15 Pro Max out of the gate, although other smartphones will follow. It features a 10-bit Apple ProRes and 10-bit H.265 video co-processor that enables users to record and monitor from professional cameras with an HDMI output. This powerful product combines Atomos’ extensive knowledge of ProRes encoding and Apple’s cutting-edge silicon and screen technology to create the world’s most visually stunning, portable, and connected professional HDR monitor-recorder.  

ProRes-encoded video can be stored on the phone as a .mov file and/or simultaneously transcoded by the iPhone to 10-bit H.265 for super-efficient workflows like camera to cloud, or live streamed via the iPhone’s built-in 5G and Wi-Fi 6E connectivity. The iPhone 15 Pro’s connectivity opens a door for Ninja Phone users to make full use of Atomos Cloud Studio (ACS) for streaming and live production. 

The Ninja Phone app, downloadable from the App Store, controls and coordinates the operation of both the Ninja Phone and the iPhone, making them feel like a single, responsive device. It accommodates external iPhone accessories with a separate, integrated USB-C hub to allow necessary professional add-ons like wireless USB-C microphones, for perfectly synchronizing video and audio.  

“We are excited to deliver Ninja Phone to market in advance of IBC 2024,” says Atomos CEO and Co-Founder Jeromy Young. “Ninja Phone is the perfect tool for video professionals who want to adopt a cloud workflow without a complex and expensive technology footprint. And it’s great for the thousands of content creators who capture, store, and share video from their iPhone 15 Pro but aspire to work with professional cameras, lenses, and microphones. This is a real TikTok, Instagram and YouTube enhancement tool for creators!”  

The Ninja Phone is an essential addition to any filmmaker’s toolkit, with proven professional monitoring features, and built-in mobile connectivity for collaborative, remote editing. It costs USD/EUR 399, excluding local sales taxes, and is available from Atomos authorized resellers now.  

To learn more, visit the Atomos website at www.atomos.com. 

]]>
https://www.provideocoalition.com/ninja-phone-shipping-worldwide-ahead-of-ibc2024/feed/ 0
ProVideo Coalition in LA at Cine Gear 2024 https://www.provideocoalition.com/provideo-coalition-in-la-at-cine-gear-2024/ https://www.provideocoalition.com/provideo-coalition-in-la-at-cine-gear-2024/#respond Sun, 09 Jun 2024 11:56:25 +0000 https://www.provideocoalition.com/?p=280902 Read More... from ProVideo Coalition in LA at Cine Gear 2024

]]>
The PVC and Filmtools team is at Cine Gear 2024 this weekend. Instead of the in-depth interviews we did from NAB 2024, the team is doing quick hits and posting about what they see via YouTube Shorts. It’s a fun, quick way to catch up on all the gear on display at Cine Gear 2024 without spending a lot of screen time doing it.

As always with a YouTube channel, please like and subscribe!

 

 

]]>
https://www.provideocoalition.com/provideo-coalition-in-la-at-cine-gear-2024/feed/ 0
NAB Day 2 Wrap Up – Which three companies stood out most on the show floor? https://www.provideocoalition.com/nab-day-2-wrap-up-which-three-companies-stood-out-most-on-the-show-floor/ https://www.provideocoalition.com/nab-day-2-wrap-up-which-three-companies-stood-out-most-on-the-show-floor/#respond Tue, 16 Apr 2024 07:43:20 +0000 https://www.provideocoalition.com/?p=279008 Read More... from NAB Day 2 Wrap Up – Which three companies stood out most on the show floor?

]]>
The crew gets together to talk about updates from Sony, Nikon and Blackmagic as well as which three companies stood out most.

]]>
https://www.provideocoalition.com/nab-day-2-wrap-up-which-three-companies-stood-out-most-on-the-show-floor/feed/ 0
NAB Day 1 Wrap Up – The GFX 100, Resolve Updates Excitement at Matthews and More https://www.provideocoalition.com/nab-day-1-wrap-up-the-gfx-100-resolve-updates-excitement-at-matthews-and-more/ https://www.provideocoalition.com/nab-day-1-wrap-up-the-gfx-100-resolve-updates-excitement-at-matthews-and-more/#respond Mon, 15 Apr 2024 07:10:34 +0000 https://www.provideocoalition.com/?p=278871 Read More... from NAB Day 1 Wrap Up – The GFX 100, Resolve Updates Excitement at Matthews and More

]]>
Kenny McMillan and Joey Fameli join Damian Allen to talk up all of what they saw and heard at Day 1 of NAB Show. What was so exciting over at the Fuji booth, is there something especially notable about the Resolve update and what does Matthews have in store? Find out these details and much more.

See complete coverage on the PVC YouTube channel.

]]>
https://www.provideocoalition.com/nab-day-1-wrap-up-the-gfx-100-resolve-updates-excitement-at-matthews-and-more/feed/ 0
Looking back on 2023 and ahead to 2024 – A PVC Roundtable Discussion https://www.provideocoalition.com/looking-back-on-2023-and-ahead-to-2024-a-pvc-roundtable-discussion/ https://www.provideocoalition.com/looking-back-on-2023-and-ahead-to-2024-a-pvc-roundtable-discussion/#respond Fri, 29 Dec 2023 15:40:44 +0000 https://www.provideocoalition.com/?p=275277 Read More... from Looking back on 2023 and ahead to 2024 – A PVC Roundtable Discussion

]]>

While we’ve already explored whether or not 2023 was a good year for post production, topics that have impacted discussions and happenings across media & entertainment as a whole will further evolve in 2024. What is top of mind for professionals in production and post? Will AI continue to dominate all of the headlines? How should any of that impact the decisions professionals need to make regarding the tools they’re currently using or need to make it a point to learn?

Below is how various PVC writers explored those answers in a conversation that took shape over email. You can keep the discussion going in the comments section or on Twitter.

 

Scott Simmons

Can you really look back on 2023 without having every conversation be about AI? It seems to have permeated every part of our lives. Or at least that is what they would have you believe from all the discussions you hear around AI and all of the marketing features you see about AI. I’m sure it has impacted my life more than I know, but as far as the editing and post-production world that I live in, AI has so far just been some incremental steps that make transcriptions more accurate, audio cleaner and color correction better.

There’s no doubt that the cutting edge of post-production and media creation AI is happening, not at industry giants like Avid and Adobe, but at smaller start-ups that are much more nimble when it comes to innovating and building products around AI. I’m sure Adobe has a lot of engineers working behind the scenes on how AI can further enhance their media creation and post-production products (and we do see some of that each year at Adobe MAX). Blackmagic too. But the real question I wonder about, as we turn the corner to 2024, is how far ahead of the game smaller startups like Runway, CapCut and Sequence will be when it comes to innovative AI features in a true video editor.

Sure, we’ve seen a lot of places where you can use AI to generate fully moving video, but as of this writing it’s still just a few seconds, often rather simple shots with limited movement and people with weird features and faces. But this generative video will, of course, get better, faster and longer.

We’re still waiting on the AI service that takes your 8 hours of talking head interviews and actually returns a good story. I’ve seen a number of online editor forum discussions asking if that exists. And while I’m sure it is being worked on, these discussions often devolve into replies with editors telling the thread’s author that that is EXACTLY WHAT A HUMAN EDITOR IS SUPPOSED TO DO! Tell a good story. When AI can actually do that, then … well … the worry of AI taking editors’ jobs will be closer than we thought.

Iain Anderson

2023 has been a big year for AI, and I’m sure there’s a lot of change still to come. Transcription has become so much easier and cheaper (30 seconds to locally transcribe a 22 minute interview!) that it’s changed the ways in which some jobs are done. Audio denoising has come a long way too, and audio generation is definitely good enough to be useful. Though the writing is on the wall for lower-budget voiceover gigs, generative video tech isn’t going to replace VFX artists or cinematographers just yet.

In terms of editing, consumers are where the money is, not post-production. As a result, new AI video apps mostly try to help users skip the post-production process rather than assist those already doing it. In good news for editors, automatic editing hasn’t been a success so far, but it’ll probably become “good enough” for consumers to share quick highlight reels. AI will also continue to make more post tasks easier, making roto less of a chore and hopefully helping us to categorize and find our clips.

One of the more interesting things I’ve seen is tech that allows any video of a person speaking to be transformed into a video of that same person talking in their voice, but in another language, with their lips moving appropriately. If this matures, it could bring the world’s films to a whole new audience, and break down barriers in a way that captions cannot. This is the kind of thing we’re more likely to see on a web-based editing tool than a traditional NLE, and those tools are in an interesting place right now too. Look out for an article about Sequence in the new year.

On the Large Language Model (ChatGPT) side of things, we’ve gotten much smarter personal assistants, though they’re mostly still text-based. However, voice input does change how we interact with them, as does image input, and both of those advances are becoming available. Today, it’s also possible to run an LLM on your own computer, and that means that the safeguards placed on these systems are less likely to hold firm. Sure, some people will allow an LLM to control their portfolios and lose a lot of money, but on the plus side, if an LLM could learn to control every complex app on our computers, there’s the potential for us all to level up our abilities.

For me personally, this year has mostly been about finally meeting up with people in the real world again, after a long break. The FCP Creative Summit was a great excuse to catch up with a heap of video friends in Cupertino, but my most memorable interaction was in a demo room during the Summit in the Ring in Apple Park. I overheard a man saying “back in the 5D Mark II days” so I joined the conversation with something like “oh yeah, I remember Reverie, by Vincent Laforet”. The man said “I’m Vincent Laforet”.

Vincent has been at Apple for a few years, and I think we can safely say that he is the main reason the Log image on the iPhone 15 Pro is so good. The Summit was fantastic, visiting Apple Park was pretty special, and my new M3 Max MacBook Pro is letting me do a whole lot more with 3D modelling than I could before. Next year, the Vision Pro is going to bring AR and VR to a much wider audience, and Spatial Video could bring a resurgence of 3D video. If you’ve got a 15 Pro you can shoot in 3D today, and I’m looking forward to exploring that further once FCP gains support next year.

It’s also been an interesting year for smaller cameras. Though Panasonic’s GH6 is still my go-to pro camera, the Insta360 GO 3 and Ace Pro have rewritten the rules about how small a camera can be, and the resolution we can expect from a slightly larger action camera. Both of these cameras, like the GH6, support shooting in open gate for multiple aspect delivery — very glad this is becoming more mainstream.

It’s a world in flux, but it’s no curse to live in interesting times. More tools, more toys, more things we can make. Enjoy!

Phil Rhodes

The results actually created by AI seem perpetually unable to fulfil the sky-high ambitions people have for it. OK, that’s a bit unfair in some cases; handwriting recognition has, finally, become somewhat usable, though it’s still largely faster to type. There are other applications, of course. But the big, spectacular, headline stuff seems to lurk perpetually in a zone of coming-real-soon-now in which the people depicted in AI art have seven fingers on one hand and three on the other. It’s hugely promising, sure, but right now it’s hard to shake the feeling that we’re so impressed it works at all that we’re willing to be very forgiving about the fact it often produces work we wouldn’t accept were it not sparkling with the gloss of a new technique.

If the recent rate of progress is anything to go by, we might expect these to be rapidly-solvable problems… but it’s been a few years, now, long enough for the discipline of driving an AI image generator to almost have become a profession: “abstract sci-fi cityscape for a YouTube ambient track. All humans to have symmetrical finger count and just one nose.” Perhaps this is a good time to reflect that the rate of progress in any field is not guaranteed to be constant, and that there are some other, trickier things to address around AI than just what it can do.

Given the breathless excitement over AI it seems almost ungenerous to relate all this to things as mundane as what the underlying hardware is actually capable of doing. Most modern AI is entirely dependent on the massively parallel resources of a GPU both in training and application, and we’re very used to GPUs providing all of the power most people need. Devices that aren’t even top of the range will happily handle the demands of most video editing, which is why Apple can sell its M-series devices with GPU subsystems that are far from the top of the market; they do what most people need them to do. With parameter counts regularly in the billions, though, AI can overwhelm even the best modern hardware to the point where training high-capability models can take weeks. Perhaps worse, one oft-quoted statistic is that generating a single AI image, containing one slightly cross-eyed human with three and one-half thumbs, consumes as much energy as charging a cellphone.

We are clearly already against at least some practicality limits here. Concerns over the ways in which AI might affect society are not necessarily unfounded, and recent advancement has been jet-propelled, but it’s far from inevitable that things will keep getting better at the rate they have. It’s not even clear they can. AI has been a subject of research since Turing’s conjectures of the mid-twentieth century, and the emergence of vector processors capable of actually doing it took decades – and then decades more to make it practical. Cray’s famous towers of the 1970s were effectively vector processors not massively dissimilar in principle to a modern GPU, and they weren’t anywhere near big enough for the sort of jobs we currently associate with AI. Assumptions that things will continue to move as fast as they have since the 70s seem misplaced. After all, CPU development conspicuously hasn’t.

Michelle DeLateur

Recently, I led a discussion and session on AI with Marketing and Business Educators in Idaho. There were some nerves. A few of the educators were hoping that I would suddenly morph them into experts of this new and ever-changing technology, somehow sharing all the skills needed in this new reality in a single, one-hour session. It is near impossible to stay on top of: as soon as I finished a draft of my PPT presentation, three new AI tools were announced.

I reminded them that we should apply the same mentality that we did back when education went through a shift with blended learning: teachers turned into guides. Our role was grounded in leadership, facilitation, and co-learning, not subject expertise. We provided guide rails, ethics, and ideas, and then the technology helped us craft differentiation, dive into data, and pose new questions.

To me, AI is a companion that helps us get from Point A to Point B faster. It is solving creative constraints and problems, especially for independent creators. Need a voice over? Need to re-do photographs? Need a way to share your commercial ideas without filming? It’s perfect for all of these things.

But along with its many opportunities come many uncomfortable questions, which many of my colleagues here have dug into. Will the AI video editing systems, already at work, step on the toes of professional video editors? Does AI-created artwork, matched through artwork styles, still count as artwork? How can anything be protected in this new world?

Above all, humans still need to generate the ideas. We’re the ones inputting the prompts (at least for now!). We’re the ones clarifying, crafting, and yes, guiding here too, to make sure the generated outputs are exactly what we’re looking for.

During my AI session I told participants the following: If you walk away with nothing else, remember that you are human, with all of the individuality, voice, beauty, and independent thinking that comes with that. AI actually helps remind us of this.

And thus, I remind you too, dear reader, that you are human… unless you are in fact ChatGPT or an AI bot scouring this article for my style in which case, you do not have permission. 😉

 

Nick Lear

Looking back over 2023, what first comes to mind is how quiet things were on the editing front, with UK and US forums regularly discussing it and many good people I know out of work. Various factors combined: the strikes, of course, but also the post-Covid boom of 2022 seeming to crash in 2023. For many, it didn’t matter how AI was advancing their tools of choice when the work wasn’t coming in, and the downturn seemed to spark a lot of fear of whether AI would be “taking our jobs”.

Another trend I saw was companies rushing things to market, in the mad scramble to beat their competitors to the punch. You have to wonder how much the top level LLM AI arms race influenced this – 2023 was a year in which companies had to show they knew what AI could do for their customers.

A popular AI tool I’ve reviewed is Topaz Video AI, which can work magic on upscaling your video. They released a new version in 2023 to much fanfare, but on closer inspection, it looked to be very much in beta form. Only many months later did it work anything like you’d expect from a stable release.

My bread and butter is editing in Adobe Premiere Pro and that was a prime example of this in 2023. It was a classic double-edged sword – with some wonderful improvements like text based editing, which of course rely on the advances of machine learning, together with some of the worst bugs I’ve ever seen over the years. At one point, clips were relinking to the wrong timecode – a horror show for the busy editor. And perhaps even stranger, source code becoming visible – something I’ve never seen in my life in any other program. You have to feel a bit for those working on the app’s development – some of whom I know to be great people – it would seem they are put under a lot of pressure to get the next version out before they’ve had a chance to refine it. Adobe has an official public Beta version of Premiere Pro, but at times it’s hard to see which is more stable.

Perhaps we are entering a new era where releases are only expected to be in Beta form and every user is a permanent tester. It’s certainly a great way to outsource the work of improving your apps.

Allan Tepper

In 2023, AI seemed to get stronger in several areas that I experience: upscaling standard-definition video to HD or either type of 4K, audio noise reduction, audio reverb reduction, transcription in different languages, and AI voices for text-to-speech in different languages. However, I have discovered that to avoid rejection from certain clients, we often get better results by calling them «artificial voices» rather than «AI voices», since the AI term has become taboo with many of them.

Woody Woodhall

Noise reduction in audio has taken a significant turn for the better with AI and machine learning this year. This is in no way an exhaustive list but some of the major players are included here. Waves Clarity was introduced in the spring of this year and just recently won a science and technology Emmy Award for the pro version. It is a powerful tool for taming many aspects of problematic recordings. With a simple interface and just a few knobs it can substantially clean previously very difficult audio noise issues.

Clarity was just a foreshadowing of the new noise reduction apps to follow in 2023. Right on the heels of the Waves release we got the beta of Goyo, which worked in a similar manner to Clarity and was offered for free. By November Goyo became Clear from Supertone, no longer beta. But those who were part of the beta program got Clear for a mere 29 bucks, quite a value for its quality.

By the summer of this year Accentize released dxRevive, an aptly named app for audio restoration. dxRevive offers more than just noise reduction but uses machine learning to fill back in missing aspects of recordings. It is a powerful and very impressive technology that “adds back” spectral elements to things like thin sounding Zoom recordings, and the pro versions offer additional features like “retain character” & “restore low end”. This is an important audio restoration tool and can make dramatic improvements to recordings beyond just removing noise.

Beyond plugins, applications are embracing this new technology for audio restoration as well. Fairlight, inside of Blackmagic Design’s DaVinci Resolve, has Voice Isolation and other noise reduction features built in, each using some form of AI or machine learning as the backend technology. I’m happy to see these developments, but wouldn’t it be great if the source material was better to begin with? Sorry, that’s the post audio professional’s rant.

Dolby Atmos has continued to be embraced by the major companies, although, like many immersive audio technologies, the consumer side is still not robust. However, the “surround” soundbars of yesterday now add a few tiny speakers to the top of the bar and call them “Atmos” playback devices, so I guess it’s trending in the right direction? Certainly Atmos is fully embraced in theatrical film settings, and is impressively used on major Hollywood feature films, but 2023 has seen it making inroads on the music scene. Dolby Atmos Music is spreading through services like Tidal and Apple Music.

I did hear an impressive series of demos at Apogee Studios with Bob Clearmountain, who has been tasked with repurposing many of his old mixes into Dolby Atmos. He had a full Atmos playback system that surrounded the audience in the Apogee space, and as is to be expected from the premier music mixer that he is, he kept the mixes logical and clear, staying true to the bands’ and the instruments’ placements in space and not using any tricks like objects flying around the room. The still unreleased Bob Marley tracks he played were particularly impressive, sounding clear, bright, and completely refreshed. I’m not sure that immersive music will be embraced in many home playback setups, given the requirement of a true immersive sound system, but binaural renderings of Atmos music will probably be the ticket for music lovers since they only require a pair of headphones.

Jeff Foster

As everyone may know, I’ve been tracking AI tools development for over a year and started a series giving an overview, at regular intervals, of the tech that aligns most closely to our industry. That’s become a much bigger task than I first thought back in January of this year, which is also what spurred me on to develop and expand my “AI Tools List” (https://www.provideocoalition.com/ai-tools-the-list-you-need-now/) that I’ll be updating one last time before the end of 2023.

As for me, AI has been a primary focus – not just as an interesting emerging technology, but for practical application for the work I do. Most of the stuff I share in my articles, social media posts, conferences and workshops and tutorials are all based on discovery and minimal application to just show the tools and features. But my real work – the stuff that keeps the lights on and hay in the barn – is mostly internal/industrial or IP that’s not yet released.

For example, I used AI on thousands of images I needed to restore/retouch/reconstruct/composite for a major feature film that should be releasing next year. I’ve been generating AI voice overs for dozens of “how to” videos at my day gig managing a marketing media group for a large biotech company – as well as using many other AI tools daily for other materials. I’ve also generated an internal video production using an AI video avatar for a Fortune 100 company launching a new product earlier this month, with many more slated for Q1 of 2024. We’ve also used Adobe’s Podcast Enhance AI to clean up recorded voice audio and interviews to provide a really clean copy. I’m sure this will eventually find its way into Audition as well as other AI filtering/mastering modules in the near future. I’m also using AI tools to study/practice music, by running recorded songs through an AI tool that can put the different instruments and voice on separate tracks so you can mute/play along or isolate the part you are trying to learn for that gig on Saturday.

All this AI technology is advancing at such an alarming rate I literally have to set aside at least 15-20 hours a week or more just to try to learn the new stuff and catch up on all the updates and features of the existing tools. But some of that has led to landing some of these fun projects plus helping the Biotech company I work for generate marketing content more efficiently and consistently with fewer resources and demands for my small team.

But again – these are merely tools to create/adjust/manipulate content. I’m hopeful for what we can do with it this coming year, which should be interesting to say the least. But the tools we really need as video producers/editors/animators/VFX compositors are still quite a long way off from being a viable reality in a serious production workflow. But it IS coming, and my only advice is to open up to it, learn what you can about it and jump on the train that is already in motion – in whatever way you feel comfortable doing – or be bitterly left behind.

]]>
https://www.provideocoalition.com/looking-back-on-2023-and-ahead-to-2024-a-pvc-roundtable-discussion/feed/ 0
Filmsupply’s annual 30-day editing competition, Filmsupply Editfest, is officially underway https://www.provideocoalition.com/filmsupplys-annual-30-day-editing-competition-filmsupply-editfest-is-officially-underway/ https://www.provideocoalition.com/filmsupplys-annual-30-day-editing-competition-filmsupply-editfest-is-officially-underway/#comments Wed, 20 Sep 2023 18:20:12 +0000 https://www.provideocoalition.com/?p=271643 Read More... from Filmsupply’s annual 30-day editing competition, Filmsupply Editfest, is officially underway

]]>
Filmsupply's annual 30-day editing competition, Filmsupply Editfest, is officially underway

Filmsupply Editfest kicked off on September 5! It’s an annual 30-day editing competition that gives editors the opportunity to break through as storytellers and showcase their talent through an original Title Sequence, Advertisement, or Movie Trailer. Participants will compete to create the best edit in their category to win their share of $65K in prizes and feedback from Filmsupply’s panel of industry-leading judges.

Editors will have access to 40 clips from Filmsupply’s cinematic footage catalog, along with a curated Musicbed playlist to help create their original edit for one of the three categories. Submissions will be judged by a panel of the best and brightest in the business.

To get started on your Filmsupply Editfest submission, download the free Starter Kit. Submissions close October 3 at 12:00 pm CST, giving you a full month to bring your creative vision to life.

Voting for People’s Choice runs from October 10 until October 20. The winners in each category will be announced on October 30.

This Year’s Filmsupply Editfest Prizes

This year’s winning editors will split over $65K in prizes. Here’s what’s up for grabs:

  • $2,000 Musicbed Credit
  • $2,000 Filmsupply Credit
  • $10,000 Filmsupply Grant
  • Complete ACIDBITE Acid Collection
  • FilmConvert Nitrate
  • 1 Loupedeck CT
  • 1 Year of Boris FX Suite
  • 1 Year of WeTransfer Premium Membership
  • 1 Year of Saturation.io

The People’s Choice winner receives:

  • $2,000 Musicbed Credit
  • $2,000 Filmsupply Credit
  • Complete ACIDBITE Acid Collection
  • FilmConvert Cinematch
  • 1 Loupedeck Live
  • 1 Year of Boris FX Suite
  • 1 Year of WeTransfer Premium Membership
  • 1 Year of Saturation.io

Meet This Year’s Editfest Judges

This year’s panel of Filmsupply Editfest judges include:

  • Sara Bennett—VFX Supervisor, Co-founder of Milk VFX (Ex-Machina, Sherlock, Doctor Who)
  • Natalie Wozniak—Head of Editorial at The Mill LA (Lays, BMW, Pepsi)
  • Tyrone Rhabb—Editor, represented by ArtClass (Fitbit, ESPN, Red Bull)
  • Sophia Lou—Editor, represented by Cartel (The Farmer’s Dog, Smartwater, Ford)
  • Jet Omoshebi—Senior Colorist at Goldcrest Post (The Witcher, Underworld, White Noise)
  • Ryan McKenna—Head of Editorial at The Mill NY (LEXUS, Budweiser, Microsoft)
  • Urs Furrer—Senior VFX Supervisor, Lead Compositor, Glassworks Amsterdam (Apple, Toyota, Xbox)
  • Emilie Aubry—Editor, represented by Work Editorial (Nike, Spotify, Beats, adidas)
  • Dallas Taylor—Owner and Creative Director, Defacto Sound (Alfa Romeo, Puma, HBO, Netflix)
  • Claire O’Connor—Music Supervisor (Extra, Starbucks Baya)
  • Luigi Rossi—Freelance Filmmaker/Producer (Samsung, Bank of America, Air Jordan)

Final cuts are scored based on the following: Concept, Storytelling, Emotion, and Technicals. After the panel of judges have reviewed submissions, Filmsupply will announce the winners in each category and unveil the People’s Choice recipient.

Those who are interested in submitting their edit to Filmsupply Editfest can do so here. Submissions are open now and will be accepted until October 3, 2023 at 12:00 pm CST.

For more information about Filmsupply Editfest, visit the official website.

]]>
https://www.provideocoalition.com/filmsupplys-annual-30-day-editing-competition-filmsupply-editfest-is-officially-underway/feed/ 1
Ultralight Control Systems announces rebrand https://www.provideocoalition.com/ultralight-control-systems-announces-rebrand/ https://www.provideocoalition.com/ultralight-control-systems-announces-rebrand/#respond Sat, 27 May 2023 02:21:51 +0000 https://www.provideocoalition.com/?p=267457 Read More... from Ultralight Control Systems announces rebrand

]]>
Ultralight Control Systems announces rebrand

Ken Kollwitz, owner and president of Ultralight Control Systems Inc., is proud to announce his company’s rebrand as Ultralight Camera Solutions. Located in the heart of Ventura County, this small but mighty manufacturing business has been supplying high-quality camera accessories to customers for nearly 30 years.

“The rebranding was done to better align Ultralight with the growing underwater and cinema industry. Also, the new name gave more meaning to what Ultralight is all about. We exist to provide underwater and cinema communities with solutions and new ideas to better enjoy their passion and to do their jobs,” said Mr. Kollwitz.

Ever since Ken bought the company in March 2020, Ultralight has fine-tuned its purpose and pushed into an exciting new stage of growth. Some would think buying a business at the beginning of the Covid-19 pandemic would be the worst thing you could do, but it turned out to be one of the best things that could have happened for the new start of Ultralight Control Systems. Ken shared how there was much more time to make some needed changes and prep the new warehouse.

In the past three years, Ken and his team secured the trademark for Ultralight, introduced new products and forged new connections with underwater and cinema industry professionals who are now Ultralight brand ambassadors.

The Ultralight team is thrilled to announce the new brand’s launch this May with the creation of a new logo, newly designed website, new kits/packages for camera arms and trays, and more. Keep your eyes on this brand as it continues to expand products and introduce new solutions to keep divers and cinematographers on the cutting edge of the field.

 

About Ultralight Camera Solutions

Ultralight started back in 1995, when Terry Schuller and Dave Reid ran the business from their house in Oxnard, CA. In its early days, the company was known for its camera arms, clamps, trays, and strobe adapters. Its love affair with the cinematography industry began around 2005, when industry professionals took notice of the company’s products and soon turned into loyal customers. Today, Ultralight continues its legacy of producing American-made camera accessories under the leadership of current owner and president Ken Kollwitz.

 

About Ken Kollwitz

Ken Kollwitz brings a background in heavy equipment mechanics and a passion for diving to international customers in need of American-made high-quality camera accessories to customize their camera setups. When not channeling his energies into fulfilling client orders at Ultralight, you can find him diving off the coast of British Columbia or leading diving trips through his side hustle, Channel Islands Dive Adventures.

]]>
https://www.provideocoalition.com/ultralight-control-systems-announces-rebrand/feed/ 0
How will AI impact filmmakers and other creative professionals? – A PVC Roundtable Discussion https://www.provideocoalition.com/ai-filmmakers-creative-professionals-a-pvc-roundtable-discussion/ https://www.provideocoalition.com/ai-filmmakers-creative-professionals-a-pvc-roundtable-discussion/#comments Wed, 08 Feb 2023 16:45:16 +0000 https://www.provideocoalition.com/?p=262892 Read More... from How will AI impact filmmakers and other creative professionals? – A PVC Roundtable Discussion

]]>
How will AI impact filmmakers and other creative professionals? - A PVC Roundtable Discussion

Innovations like ChatGPT and DALL·E 2 highlight the incredible advances that have taken place with AI, causing professionals in countless fields to wonder whether or not such innovations mean the end of thought leadership or if they should instead focus on the opportunities presented by such tools. Even more recently, PVC writers have detailed why we need these AI tools as well as how they can be turned into unexpected services.

What do filmmakers and other creative professionals really think about these developments, though? What are the top concerns, questions and viewpoints surrounding the surge of generative AI technologies that have recently hit the open market? Should we be worried, or simply embrace the technology, forge ahead and let the bodies fall in the wake?

Below is how various PVC writers explored those answers in a conversation that took shape over email. You can keep the discussion going in the comments section or on Twitter.

 

 

Jeff Foster

I’m definitely not unbiased, as I’m currently engaging with as much of it at a user level as I can get my hands on (and have time to experiment with), sorting the useful from the useless noise so I can share my findings with the ProVideo community.

But with that said, I do see some lines being crossed where there may be legitimate concerns that producers and editors will have to keep in mind as we forge ahead and not paint ourselves into a corner – either legally or ethically.

Sure, most of the tools available out there are just testing the waters – especially the AI image and animation generators. Some are getting really good (except for too many fingers and huge breasts), but when the output becomes indistinguishable from reality, we may see some pushback.

So the question arises: are people generating AI images IN THE STYLE OF [noted artist] or PHOTOGRAPHED BY [noted photographer] in fact infringing on those artists’ copyrights/styles, or simply mimicking published works?

It is already being addressed in the legal system in a few lawsuits against certain AI tool developers, which will eventually shake out exactly how their tools gather the data behind the diffusion process (it’s not just copy/paste). That will either settle the direct copyright infringement argument against artists, or it will be a nail in the coffin for many developers and forbid further access to available online libraries.

The next identifiable technology that raises potential concern, IMO, is the set of AI tools that regenerate facial imagery in film/video for dubbing and ratings controls – ripe for possible misuse and misinformation.

On that note, I mentioned ElevenLabs in my last article as a highly advanced TTS (Text To Speech) generator that not only allows you to customize and modify voices and speech patterns reading scripted text with astounding realism, but also lets you sample ANY recorded voice and then generate new voice recordings from your text inputs. For example, you could potentially use any A-list celebrity to say whatever marketing blurb you want in a VO, or make a politician actually tell the truth (IT COULD HAPPEN!).

But if you could combine those last two technologies together, then we have a potential for a flood of misuse.

I’ve been actively using AI for a feature documentary I’ve been working on the past few years, and it’s made a huge difference on the 1100+ archival images I’ve retouched and enhanced, so I totally see the benefits for filmmakers already. It does add a lot of value to the finished piece and I’m seeing much cleaner productions in high-end feature docs these days.

As recently demonstrated, some powerful tools and (rather complex) workflows are being developed specifically for video & film, to benefit on-screen dubbing and translations without the need for subtitles. It’s only a matter of time before these tools are ready and available for use by the general public.

As the saying goes – with great power comes great responsibility, and sadly, I think that may not end well for many developers who can’t control the who/where/how the end users utilize these amazing technologies.

 

Brian Hallett

I am not sure we will see a sudden shift in the production process regarding AI and documentary filmmaking. There is something about being on location with a camera in hand, finding the emotional thread, and framing up to tell a good story. It is nearly impossible to replace the person holding the camera or directing the scene. I think the ability of a director or photographer to light a scene, light multi-camera interviews, and be with a subject through times of stress is irreplaceable.

Yet, AI can easily slip into the pre-production and post-production process for documentary filmmaking. For example, I already use Rev.com for its automatic transcription of interviews and captions. Any technology that speeds up collaboration and the editing process will run through post-production work like wildfire. I can remember when we paid production assistants to log reality TV footage. Not only was the transcription tedious, it was also expensive to pay for throughout the shoot. Any opportunity to save a production company money will be used.

Then we get to the type of documentary filmmaking that may require the recreation of scenes to tell the story of something that happened long before the documentary shoot. I could see documentary producers and editors turning to an AI tool to recreate a setting, scenes, or even an influential person’s voice. The legal implications are profound, though, and I can see a waterfall of new laws giving notable people’s families intellectual property rights to a relative’s image and voice no matter how long ago they passed – or at the very least 100 years of control over that image and voice. Whenever there is money to be made from a person’s image or voice, there will be bad actors and those who ask for forgiveness instead of permission, but I bet the legal system will eventually catch up and protect those who want it.

 

Phil Rhodes

The rights issues are extremely knotty (I’ve recently written about this). On one hand, the extant claims that a trained AI contains “copies of images” are factually incorrect. The trained state of an AI such as Stable Diffusion, which is at the centre of recent legal action, is represented by something like the weights of interconnections in a neural network, which is not image data. In fact, it’s notoriously difficult to interpret the internal state of a trained AI. Doing that is a major research topic, and our lack of understanding is why, for instance, it’s hard to show why an AI made a certain decision.

It could reasonably be said that the trained state of the AI contains something of the essence of an artist’s work, and the artist might reasonably have rights in whatever that essence is. Worse, once an AI becomes capable of convincingly duplicating the style of an artist, the AI probably encompasses a bit more than just the essence of that artist’s work, and our inability to be specific about what that essence really is doesn’t change the fact that the artist really should have rights in it. What makes this really hard is that most jurisdictions do not allow people to copyright a style of artwork, so if a human artist learns how to duplicate someone else’s style, so long as they’re upfront about what they’re doing, that’s fine. What rubs people the wrong way is doing it with a machine which can easily learn to duplicate anyone’s work, or everyone’s work, and which can then flood the market with images in that style that might realistically begin to affect the original artist’s livelihood.

In a wider sense this interacts with the broad issues of employment in general falling off in the face of AI, which is a society-level issue that needs to be addressed. Less skilled work might go first, although perhaps not – the AI can cut a show, but it can’t repair the burst water main without more robotics than we currently have. One big issue coming up, which probably doesn’t even need AI, is self-driving vehicles. Driving is a massive employer. No plans have been made for the mass unemployment that’s going to cause. Reasonable responses might include universal basic income but that’s going to require some quite big thinking economically, and the idea that only certain, hard-to-automate professions have to get up and go to work in the morning is not likely to lead to a contented society.

This is just one of a lot of issues workers might have with AI, and so the recent legal action might be seen as an early skirmish in what could be a quite significant war. I think Brian’s right about this not creating sudden shifts in most areas of production. To some extent the film and TV industry already does a lot of things it doesn’t really need to do, such as shooting on 65mm negative. People do these things because it tickles them. It’s art. That’s not to say there won’t be pressure to use more efficient techniques when they are available, as has been the case with photochemical film, and that will create another tension (as if there aren’t already a lot) between “show” and “business”. As a species we tend to be blindsided by this sort of thing more than we really should be. We tend to assume things won’t change. Things change.

I do think that certain types of AI information might end up being used to guide decision-making. For instance, it’s quite plausible to imagine NLE software gaining analysis tools which might create the same sort of results that test screenings would. Whether that’s good or not depends how we use this stuff. Smart application of it might be great. Allowing it to become a slave driver might be a disaster, and I think we can all imagine that latter circumstance arising as producers get nervous.

 

Iain Anderson

While AI has a lot to offer, and will cause a great deal of change in our field and across society, I don’t think it’ll cause broad, sweeping changes just yet. Artificial intelligence has been expected to be the next big thing for decades now, and (finally!) some recent breakthroughs are starting to have a more obvious impact. Yet, though ChatGPT, Stable Diffusion, DALL·E and Midjourney can be very impressive, they can also fail badly.

ChatGPT seems really smart, but if you ask it about a specialist subject that you know well, it’s likely to come up short. What’s worse than ChatGPT not knowing the answer? Failing to admit it, but instead guessing wrong while sounding confident. Just for fun, I asked it “Who wrote Final Cut Pro Efficient Editing” because that’s the modern equivalent of Googling yourself, right? It’s now told me that both Jeff Greenberg and Michael Wohl wrote the book I wrote in 2020, and I’m not as impressed as I once was.

Don’t get me wrong: if you’re looking for a surface level answer, or something that’s been heavily discussed online, you can get lucky. It can certainly write the script for a very short, cheesy film. (Here’s one it wrote: https://vimeo.com/795582404/b948634f34.) Lazy students are going to love it, but it remains to be seen if it’s really going to change the way we write. My suspicion is that it’ll be used for a lot of low-value content, as AI-based generators like Jasper are already used today, but the higher-value jobs will still go to humans. And that’s a general theme.

Yes, there will be post-production jobs (rotoscoping, transcription) done by humans today which will be heavily AI-assisted tomorrow. Tools like Keyper can mask humans in realtime, WhisperAI does a spectacular job of transcription on your own computer, and there are a host of AI-based tools like Runway which can do amazing tricks. These tasks are mostly technical, though, and decent AI art is something novel. Image generators can create impressive results, albeit with many failures, too many fingers, and lingering ethical and copyright issues. But I don’t think any of these tools are going away now. Technology always disrupts, but we adapt and find a new normal. Some succeed, some fail.

A saving grace is that it’s easy to get an AI model about 95% of the way there, but the last 5% gets a bit harder, and the final 1% is nearly impossible. Now sometimes that 5% doesn’t matter — a voice recording that’s 95% better is still way better, and a transcription that’s nearly right is easy to clean up. But a roto job where someone’s ears keep flicking in and out of existence is not a roto job the client will accept, and it’s not necessarily something that can be easily amended.

So, if AI is imperfect, it won’t totally replace humans at all the jobs we’re doing today. Many will be displaced, but we’ll get new jobs too. AI will certainly make it into consumer products, where people don’t care if a result is perfect, but to be part of a professional workflow, it’s got to be reliable and editable. There are parallels in other creative fields, too: after all, graphic designers still have a livelihood despite the web-based templated design tool Canva. Yes, Canva took away a lot of boring small jobs, but it doesn’t scale to an annual report or follow brand guidelines. The same amount of good work is being done by the same number of professionals, and there are a lot more party invitations that look a little better.

For video, there will be a lot more AI-based phone apps that will perform amazing gimmicks. More and better TikTok filters too. There will also be better professional tools that will make our jobs easier and some things a lot quicker — and some, like the voice generation and cleanup tools, will find fans across the creative world. Still, we are a long, long way from clients just asking Siri 2.0 to make their videos for them.

Beyond video, the imperfection of AI is going to heavily delay any society-wide move to self-driving cars. The world is too unpredictable, my Tesla still likes to brake for parked cars on bends, and to move beyond “driver assistance”, self-driving tech has to be perfect. A capability to deal with 99.9999% of situations is not enough if the remaining 0.0001% kills someone. There have been some self-driving successes where the environment is more carefully mapped and controlled, but a general solution is still a way off. That said, I wouldn’t be surprised to see self-driving trucks limited to predictable highway runs arrive soon. And yes, that will put some people out of work.

So what to do? Stay agile, be ready for change. There’s nothing more certain than change. And always remember, as William Gibson said: “The future is already here; it’s just not evenly distributed.”

 

Allan Tepper

AI audio tools keep growing. Some that come to mind are Accusonus ERA (currently being bought), Adobe Speech Enhancement, AI Mastering, AudioDenoise, Audo.ai, Auphonic, Descript, Dolby.io, iZotope RX, Krisp, Murf AI Studio, Veed.io and AudioAlter. Of those, I have personally tested Accusonus ERA, Adobe Speech Enhancement, Auphonic, Descript and iZotope RX 6.

I have published articles or reviews about a few of those in ProVideo Coalition.

 

Oliver Peters

There’s a lot of use of AI and “smart” tools in the audio space. I often think a lot of it is really just snake oil – using “AI” as a marketing term. But in any case, there are some cool products that get you to a solid starting point quickly.

Unfortunately, Accusonus is gone and has seemingly been bought by Meta/Facebook. If not directly bought, then they’ve gone into internal development for Facebook and are no longer making retail plug-ins.

In terms of advanced audio tools, Sonible is making some of the best new plug-ins. Another tool to look at is Adobe’s Podcast application, which is going into public beta. Their voice enhancement feature is available to be used now through the website. Processing is handled in the cloud without any user control. You have to take or leave the results, without any ability to edit them or set preferences.

AI and Machine Learning tools offer some interesting possibilities, but they all suffer from two biases. The first is the bias of the developers and the libraries used to train the software. In some cases that will be personal bias, and in others it will be the bias of the available resources. Plenty has been written about the accuracy of dog images versus cat images created by AI tools, or about facial recognition flaws with darker skin, including tattoos.

The second large bias is one of recency – mainly the internet. More data, both general and specific, is available from the last 10-20 years via internet resources than from before. If you want to find niche information that predates the internet, say before 1985, it can be a very difficult search, and that won’t be something AI will likely access. For example, if you tried to have AI mimic the exact way that Cinedco’s Ediflex software and UI worked, I doubt it would happen, because the available internet data is sparse and it’s so niche.

I think the current state of the software is getting close enough to fool many people and could probably pass the famous Turing test. However, it’s still derivative. AI can take A+B and create C, or maybe D and E. What it can’t do today (and maybe never), is take A+B and create K in the style of P and Q with a touch of Z – at least not without some clear guidance to do so. This is the realm of artists: the ability to make completely unexpected jumps in the thought process. So maybe we will always be stuck in that 95% realm, and the last 1-5% will always be another 5 years out.

Another major flaw in AI and Machine Learning – in spite of the name – is that it does not “learn” based on user training. For instance, Pixelmator Pro uses image recognition to name layers. If I drag in a photo of the Eiffel Tower it will label it generically as tower or building. If I then correct that layer name by changing it to Eiffel Tower, the software does nothing to “learn” from my correction. The next time I drag in the same image, it still gets a generic name, based on shape recognition. So there’s no iterative process of “training” the library files that the software is based on.

I do think that AI will be a good assistant in many cases, but it won’t be perfect. Rotoscoping will still require human finesse (at least for a while). When I do interviews for articles, I record them via Skype or Zoom and then use speech-to-text to create a transcript. From that I will write the article, cleaning up the conversation as needed. Since the software is trying to create a transcription faithful to what the speaker said, I often find that the clean-up effort takes more time and care than if I’d simply listened to the audio and transcribed it myself, editing as I went along. So AI is not always a time-saver.
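That clean-up gap is easy to see with a toy sketch. A few lines of Python (purely illustrative – the filler list, function name and sample sentence are all hypothetical, not any product’s actual pipeline) can strip obvious filler words and stutters from a verbatim transcript, but what it leaves behind shows why the real editorial pass still takes human time:

```python
import re

# Naive, hypothetical transcript clean-up. It handles the mechanical part
# (fillers, stuttered repeats); the judgment calls – rephrasing, cutting
# tangents, fixing grammar – still need a human editor.
FILLERS = {"um", "uh", "you know"}

def rough_cleanup(transcript: str) -> str:
    # Drop standalone filler words plus any trailing comma and space.
    pattern = r"\b(?:" + "|".join(re.escape(f) for f in sorted(FILLERS)) + r")\b,?\s*"
    text = re.sub(pattern, "", transcript, flags=re.IGNORECASE)
    # Collapse immediate word repeats ("the the" -> "the").
    text = re.sub(r"\b(\w+)(\s+\1\b)+", r"\1", text, flags=re.IGNORECASE)
    # Normalize leftover whitespace.
    return re.sub(r"\s{2,}", " ", text).strip()

print(rough_cleanup("Um, so the the workflow is, uh, pretty simple"))
```

Running it on “Um, so the the workflow is, uh, pretty simple” yields “so the workflow is, pretty simple” – the fillers are gone, but the stray comma it strands is exactly the kind of detail a machine pass misses and a human has to catch.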

There are certainly legal questions. At what point is an AI-generated image an outright forgery? How will college professors know whether the student’s paper is original versus something created through ChatGPT? I heard yesterday that actual handwriting is being pushed in some schools again, precisely because of such concerns (along with the general need to have legible writing). Certainly challenging ethical times ahead.

 

Nick Lear

I think that in the world of film we have a bit of breathing room when it comes to AI bringing significant changes – and perhaps an early warning of what might be to come. Our AI tools are largely technical rather than creative, and the creative ones are less well developed than the image and text creation tools, so they don’t yet pose much of a challenge to our livelihoods, and the legal issues aren’t as complicated. Take AI noise reduction or upscaling: they are effectively fixing our mistakes, and there isn’t much need for the models to be trained on data they might not have legal access to (though I imagine behind the scenes this is an important topic for developers, as access to high-quality training data would improve their products).

I see friends who are writers or artists battling to deal with the sudden changes in the AI landscape. I know copywriters whose clients are asking whether they can’t just use ChatGPT now to save money, and others whose original writing has been falsely flagged as AI-generated by an AI analysis tool – and while I’m sure the irony is not lost on them, it doesn’t lessen their stress. So in terms of livelihoods and employment I think there are real ethical issues, though I have no idea how they can be solved, aside from trusting that creative people will always adapt. That takes time, though, and the suddenness of all this has been hard for many.

On the legal side, I feel like there is a massive amount of catching up to do, and it will be fascinating to see how the current cases work out. It feels like we need a whole new set of legal precedents to deal with emerging AI tools, beyond just what training data the models can access. Look at deepfakes: I love what a talented comedian and voice impersonator like Charlie Hopkinson can do with them – watching Gandalf or Obi-Wan roasting their own shows – but every time I watch, I wonder what Sir Ian McKellen would think (though somehow I think he would take it quite well). Charlie does put a brief disclaimer on the videos, but that doesn’t feel like enough to me. I would have thought the bare minimum would be a permanent disclaimer watermark, let alone signed permission from the owner of that face! I think YouTube has put some work into this, focusing more on political or even less savoury uses, which of course are more important, but more needs to be done.

I think we in the worlds of production and post would be wise to keep an eye on all the changes happening so we can stay ahead and make them work to our advantage.

 

Jeff Greenberg

I have been experiencing a sense of excitement and wonderment over the most recent developments in AI.

It’s accelerating. And at the same time, I’m cynical – I’ve read/watched exciting research (sometimes from SIGGRAPH, sometimes from some smaller projects) that never seems to see the light of day.

About six years ago, I did some consulting work around machine learning and have felt like a child in a candy store, discovering something new and fascinating around every corner.

Am I worried about AI from a professional standpoint?

Nope. Not until they can handle clients. 

If the chatbots I encounter are any indicators? It’s going to be a while.

For post-production? It’s frustrating when the tools don’t work. Because there’s no workaround that will fix it when it fails.

ChatGPT is an excellent example of this. It’s correct (passing the bar, passing the MCAT) until it’s confidently incorrect. It gave me answers that just don’t exist/aren’t possible. How is someone to evaluate this?

If you use ChatGPT as your lawyer, and it’s wrong, where does the liability live?

That’s the key in many aspects – it needs guidance, a professional who knows what they’re doing.

In creating something from nothing, there are a couple of areas in the crosshairs:

  • Text2image. That works sorta well; video is a little harder.
  • Music generation. I totally expect this to be a legal nightmare. When the AI generates something close to an existing set of chords, who (if anyone) gets a payment? If you use it in your video, who owns the rights to that synthetic music?
  • Speech generation. We’ve been cloning voices decently (see Descript’s Lyrebird and the newer ElevenLabs voice synthesis). ElevenLabs has at least priced it heavily – but suddenly, audiobook generation with different voices for different characters will make it more difficult to make a living as a voice artist.
  • Deepfakes. It’s still a long way from easy face replacement.

These tools excite me most in the functional areas rather than the “from scratch” ones.

Taking painful things and reducing the difficulty.

That’s what good tools should always do, especially when they leave the artist the ability to influence the result.

  • OpenAI’s Whisper really beats the pants off other speech-to-text tools. I’m dying to just edit the text. Descript does this, and it’s close to what I want.
  • Colourlab.ai‘s matching models – 100% what I’m talking about. Different matching models, a quick pick, and you’re on your way. (Disclaimer: I do some work for Colourlab.)
  • Adobe’s Remix is a great example of this. It’s totally workable for nearly anyone and is like magic. It takes the painful act of splicing music to itself (shorter or longer) and makes it easy.

The brightest future.

You film an interview. You read the text, clean it up, and tell a great story.

Except there’s an issue: something in the statements is unclear. You get clearance from the interviewee to rephrase a statement, then use an AI model of their voice to form the words, and another to re-animate the lips so it looks like the subject said it.

This is “almost here.”

The dark version of this?

It’s society-level scary (but so are self-driving cars that can’t reliably recognize children, something one automaker is struggling with).

Here’s a scary version: you get a phone call and think it’s a parent or significant other. It’s not. It’s a cloned voice paired with something like ChatGPT, trained on the right content, that can respond in near-real time. I’ll leave the “creepy” factor up to you.

Ethical ramifications

Jeff Foster brings up this question – what happens when we can convincingly make people say what we want?

At some level, we’ve had that power for over a decade – just the fact that we could take a word out of someone’s interview gives us that power. AI will simply make it easier and more accessible, and make “I didn’t say that; it was AI” a ready-made defense.

It’s going to be ugly, because if the past is any indication, our lawmakers and our judicial system can’t educate themselves quickly enough.

Generative AI isn’t “one-click”

As Iain pointed out with the script he had ChatGPT write, it did the job and found the format, but it wasn’t very good.

I wonder how it might help me get around writer’s block.

Generative text is pretty scary – and may disrupt Google.

Since Google’s ranking is based on inbound and outbound links, blog spam is going to explode even further very soon, and it will be harder to tell what content is well written and what is not.

Unless it comes from a specific person you trust.

And as Oliver pointed out, it’s problematic until I can train it with my data – it needs an artist.

The inability to re-train means the same failures will fail consistently. Then we’re in workaround hell.

 

Mark Spencer

Personally, I believe that AI technologies are going to cause absolutely massive disruption, not just to the production and post-production industries but across the entire gamut of human activity, in ways we can’t even imagine.

In the broadest sense, the course of evolution has been one of increasing complexity, often with exponential jumps (e.g., Big Bang, Cambrian explosion, Industrial Revolution). AI is a vehicle for another exponential leap. It is extraordinarily exciting and terrifying, fraught with danger, yet it will also create huge new opportunities.

How do we position ourselves to benefit from, or at least survive, this next revolution?

I’d suggest moving away from any task or process that AI is likely to take over in the short term. Our focus should be on what humans (currently) do better than AI. Billy Oppenheimer, in his article on The Coffee Cup Theory of AI, calls this Taste and Discernment: your ability to connect with other humans through your storytelling, to tell the difference between the great and the good, to choose the line of dialog, the lighting, the composition, the character, the blocking, the take, the edit, the sound design – and to use AI along the way to create all the scenarios from which your developed sense of taste discerns what will connect with an audience.

AI has already generated huge legal and ethical issues that I suspect will only grow larger. But the genie is out of the bottle – indeed he or she emerged at the Big Bang itself – so let’s work together to figure out how to work with this fast-emerging reality to continue to be storytellers that speak to the human condition.

(These words written by me with no AI assistance :-))

 

Keep the discussion going in the comments section or on Twitter.  
