Samsung ALoP: a revolutionary smartphone lens structure

Samsung’s ALoP introduces a large-aperture lens that promises low-noise portrait images in night shots, plus a lower-profile camera bump and a slimmer smartphone. The upcoming Galaxy S25 Ultra may be the first to use the technology.

In November 2024, Samsung Electronics won a total of 29 ‘CES Innovation Awards 2025’ ahead of the world’s largest and most influential technology event, CES 2025. One Samsung technology honored with a CES 2025 Innovation Award is All Lenses on Prism (ALoP). This revolutionary structure, which enables smaller telephoto camera modules while capturing bright and clear photos, will be showcased at the Las Vegas Convention Center from January 7-10, 2025.

As mobile phone users demand better image capture from their cameras, smartphone makers have added cameras for each zoom ratio (e.g. wide, ultra-wide, telephoto). Over time, the phone camera array has become quite crowded, with an ever-larger camera bump. That may be about to change thanks to All Lenses on Prism (ALoP), developed by Samsung’s Sensor Solutions Team.

The technology, which was officially announced in November 2024, may debut in the Samsung Galaxy S25 Ultra, the company’s upcoming flagship model. According to rumors, the S25 Ultra will have the same lens set as the S24 Ultra except for the ultrawide camera, which will drop the current 12MP unit in favor of a 50MP sensor, the Samsung ISOCELL S5KJN3, with a 1/1.57″ optical format and 1.0 µm individual pixels. There is also a chance that ALoP will first appear on a rumored Galaxy S25 Slim, a new addition to the family, which would be used to show ALoP’s potential to incorporate a set of large telephoto lenses in a smartphone without incurring the penalty of a huge camera bump.

Limitations of the folded telephoto camera

Telephoto cameras have been a new point of differentiation for smartphone manufacturers because they offer high-magnification capabilities, can compress backgrounds, reduce distortion due to the narrow field of view, and even create a suitable background blur effect, ideal for portraits… but as zoom ratios reach 5x, taller module height and an obtrusive camera bump are unavoidable.

The common solution is a folded telephoto camera structure, which bends the path of light by 90 degrees, like a periscope, to allow longer focal lengths horizontally without extending the thickness of the smartphone vertically. The lenses stand vertically with respect to the plane of the smartphone body, so their diameter determines the height of the camera bump. This made higher-magnification telephoto cameras in smartphones possible… but introduced limitations in terms of light gathering.

In fact, the folded telephoto camera structure limits the improvements that can be made in terms of image brightness. A wider lens diameter is required for brighter images, but moving to a larger telephoto lens and a brighter telephoto camera with a large image sensor increases both the module height and length, to the point where the user would find the resulting bulky camera bump objectionable.

To address these issues, the industry has been looking for a solution that can achieve better results than traditional folded zoom structures. That’s ALoP, a technology that employs a clever optical structure in which the lenses sit horizontally on the prism, remaining in the plane of the smartphone body. With this approach, increasing the effective lens size (entrance pupil diameter, or EPD) by widening the lens diameter yields a brighter image without affecting the camera module’s shoulder height. Moreover, it allows a shorter module length by reducing the space needed for lenses in the folded camera module.

ALoP: features and benefits

Here are the key features and benefits ALoP brings to smartphones:

Brightness. The novel optics design of ALoP accommodates an f/2.58 lens aperture at a focal length of 80mm (a quick numerical sanity check follows this list). Differing from conventional folded camera optics, the lens in this case is placed ahead of the prism. In this way, ALoP can use a large-aperture lens that promises low-noise portrait images in night shots.

Compact size. Thanks to the ALoP architecture, the module length can be shortened by 22% with respect to conventional folded camera optics. More importantly, ALoP achieves an especially low module height because it employs a 40˚-tilted prism reflection surface and a 10˚-tilted sensor assembly. Taken together, these reduced dimensions make for a lower-profile camera bump and a slimmer smartphone.

Aesthetics and ergonomics. Users generally find a thick smartphone camera bump objectionable. It not only makes the smartphone design unappealing but also harder to use when laid on a flat surface. Additionally, the shape of the lenses within the camera bump can be off-putting. In a smartphone using conventional folded camera optics, users see a rectangular prism that is cosmetically somewhat jarring to an otherwise sleek camera appearance. By contrast, in a smartphone adopting ALoP optics, users see only the expected circular lens shapes.
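To put the brightness claim in perspective, here is a quick sanity check using the standard relation f-number = focal length ÷ entrance pupil diameter (EPD). Samsung’s 80mm and f/2.58 figures come from the feature list above; the f/3.4 folded-telephoto comparison value is an assumption for illustration only:

```python
# f-number = focal length / entrance pupil diameter (EPD), so a lower
# f-number at the same focal length means a physically wider lens.
focal_length_mm = 80.0   # ALoP telephoto focal length (per Samsung)
f_alop = 2.58            # ALoP aperture (per Samsung)
f_folded = 3.4           # typical folded-tele aperture, assumed for comparison

epd_alop = focal_length_mm / f_alop       # ~31.0 mm
epd_folded = focal_length_mm / f_folded   # ~23.5 mm

# Light gathered scales with aperture area, i.e. with EPD squared.
gain = (f_folded / f_alop) ** 2           # ~1.74x more light

print(f"ALoP EPD: {epd_alop:.1f} mm vs folded: {epd_folded:.1f} mm")
print(f"Light-gathering advantage: {gain:.2f}x")
```

In a conventional folded module, that roughly 31mm of glass would stand vertically and set the bump height; ALoP’s whole point is that it lies flat on the prism instead.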

According to Younggyu Jeong, from Samsung’s Sensor Solutions Team, “smartphone cameras are rapidly evolving, but they haven’t yet surpassed DSLR capabilities due to size constraints. I believe new technologies will continue to emerge to close that gap, with technologies aimed at improving the image quality of telephoto cameras, which are disadvantaged in terms of form factor, at the forefront.”

“As a business,” he added, “we anticipate the commercialization of various technologies to reduce F/#, increase zoom magnification, and reduce module size. We will continue to combine the differentiated hardware solutions of ISOCELL with AI-based software solutions to expand users’ mobile camera experiences.”

New year’s resolutions for crew

The aftermath of a party, with glitter on the floor and empty wine glasses on the coffee table.

Resolution is a word that gets more airtime than it should in a world where pocket-money cameras have four times the sharpness of classic cinema. In fact, ending that preoccupation with numbers should go on a list of things we’ll try to do in 2025. A new year’s not-resolution, perhaps.

Here are a few others.

Take user-generated content more seriously

If you’re perpetually engaged in senior positions on high-end projects at union rates, it’s easy to overlook changes in the wider industry. Of course, a lot of people are conspicuously not in that position at the moment, and there are a lot of things to blame: the peak and decay of streaming, the hangover of the pandemic, pricey-but-mediocre franchise films and streaming series, and industrial action which, whether we like it or not, certainly kicked an industry when it was down.

The fact that this all happened at the point where user-generated content was ascendant is no coincidence, but certain markets have been able to ignore that reality because YouTube has not so far been capable of funding an Alexa 65 and a set of DNAs. That probably hasn’t changed yet, although some shockingly high-end work is being done. The Dust channel has been putting out user-generated sci-fi for some time, and while much of Dust’s output might not quite satisfy Netflix subscribers, it is naive to assume that the status quo is eternal.

Snobbery is involved, though as a business consideration the rise of user-generated content is a question for the c-suite more than camera crews. Other things, though, are more in the hands of the craftspeople.

 

Young woman sitting in front of a ring light applying makeup.
Here we see the entire production, directorial and post team at work. Yes, when you and your one million buddies can put the wind up Disney using ten-dollar Aliexpress ring lights and iPhones, you are worth taking seriously. By Pexels user George Milton.

Recognise production design

Given film is so much a team sport, the lack of communication between departments is often slightly shocking. Perhaps that’s because it is also a very expensive artform, provoking a nervousness which tends to keep people firmly in lane. A film set is a place where it is often better to keep silent and be thought a fool. In the abstract, most people are keenly aware that there is no good cinematography without good production design, but that’s easily forgotten in the midst of pixel peeping the latest camera release (of which more anon).

Sometimes, production design means months of preparation. Sometimes, it just means picking the right time and place. Still, interdepartmental collaboration is sometimes more competitive than it should be. That’s particularly true on less financially replete productions, where it may be accepted that the show will not compete with blockbusters but that nobody wants that outcome to be their fault. So, camera refuses to unbend for the location manager, or vice versa, and the result is unnecessarily compromised.

We could equally assign a couple of new year’s resolutions to other departments, encouraging them to recognise the need to, say, put the camera somewhere it can see both the actors at once. Ultimately, though, we should admit that too many people put too much importance on the camera, and not enough on what’s in front of it.

Be bold

Even lay audiences have started to notice that a certain proportion of mainstream film and TV has adopted a rather cautious approach to high contrast and saturated colour. Some of the accused productions have been comic book or animation adaptations, which probably ought to be the opposite. What’s even more counterintuitive is that this is invariably the product of digital cinematography, which was long held to be lacking in dynamic range – which is to say, high in contrast.

Grey concrete support pillars under a bridge, in grey mist.
This atmospheric photo by Pexels user Markus Spiske is pretty, but a lot of modern film and TV sort of looks a bit like this even when it isn’t foggy.

Engineers have since given us fifteen-stop cameras, but there seems to be a lasting societal memory of early, less-capable electronic cinematography which makes people afraid of the extremes. It’s at least as likely that fiscal conservatism is leading to artistic conservatism around the sheer cost of nine-figure blockbusters. Nobody ever got in trouble for not crushing the blacks.
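To quantify that: a photographic stop is a doubling of light, so a camera’s dynamic range in stops maps directly onto a scene contrast ratio. A minimal illustration:

```python
# One stop = a doubling of light, so N stops of dynamic range
# correspond to a 2**N : 1 scene contrast ratio.
for stops in (10, 13, 15):
    print(f"{stops} stops ≈ {2 ** stops:,}:1 contrast")
# 10 stops ≈ 1,024:1
# 13 stops ≈ 8,192:1
# 15 stops ≈ 32,768:1
```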

The result is an identifiable lack of punch in movies and TV shows that even determinedly nontechnical people are starting to notice. There’s a whole discussion to have about history, how things once looked, and how they look now, but with modern grading we can have anything. The solution is easy, if the producers will stand it: be not afraid of minimum and maximum densities – unless you’re grading for HDR, in which case absolutely be afraid, but that’s another issue.

Stop pixel peeping

And yes, like a lecturing parent frustrated with a chocolate-smeared child’s perpetual tendency to steal cookies, we do have to talk about that obsession with numeric specifications. It is the camera department’s equivalent of bargain vodka. Everyone knows it’s a bad idea, but it starts off fun and we can stop whenever we like. Soon, though, we realise that cameras are now almost too good and pixel peeping has facilitated a generation which thinks that swear-box words like “cinematic” and “painterly” are objectively measurable. Then it turns out that our attractively-priced metaphorical booze was mostly brake fluid, and people end up spending time counting megabits that should have been spent working out a mutually-beneficial compromise with the location manager.

Everyone knows that good equipment is necessary. Everyone knows it isn’t sufficient. Everyone also knows that pixel peeping is a bad habit and complaining about it almost feels redundant. But if we can make 2025 the year when film students use social media to discuss technique more than they discuss technology, that’ll be a minor victory.

Looking back on 2024 and ahead to 2025 – A PVC Roundtable Discussion


While the hosts of the Alan Smithee podcast already discussed the evolving landscape in media and entertainment as 2024 draws to a close, there’s so much more to say about what happened in 2024 and what 2025 has in store for individual creators and the entire industry. Generative AI for video is everywhere, but how will that pervasiveness impact actual workflows? What were some of the busts in 2024? How did innovations in cameras and lenses make an impact? And what else are we going to see in 2025 beyond the development of AI animation/video tools?

Below is how various PVC writers explored those answers in a conversation that took shape over email. Keep the conversation going in the comments or on LinkedIn.

 

Scott Simmons

2024? Of course, the first thing you think about when recapping 2024 (and looking ahead to 2025) is that it was all about artificial intelligence. Everywhere you look in technology, media creation, and post-production, there is some mention of AI. The more I think about it, though, the more I feel like it was just a transitional year for AI. Generative AI products feel like old hat at this point. We’ve had “AI” built into our editing tools for what feels like a couple of years now. While we have made a lot of useful advancements, I don’t feel like any earth-shattering AI products shipped within any of our editing tools in 2024. Adobe’s Generative Extend in Premiere Pro is probably the most useful and most exciting AI advancement I’ve seen for video editors in a long time. But it’s still in beta, so Generative Extend can’t count until it begins shipping. Jumper, an AI-based video search tool that did ship, is a truly useful third-party tool. Adobe also dropped their “visual search” tool within the Premiere beta, so we know what’s coming next year as well, but it’s far from perfect at the time of this writing, and still in beta. I appreciate an AI tool that can help me search through tons of media, but if that tool returns too many results, I’m yet again flooded with too much information.

The other big AI-based video advancement is text-based generative video coming into its own this year. Some products shipped, albeit at quite a high price for truly usable results. And even more went into preview or beta. 2025 will bring us some big advancements in generative video, and that’s because we’re going to need them. What we saw shipping this year was underwhelming. A few major brands released AI-generated commercial spots or short films, and they were both unimpressive and creepy. I saw a few generative AI short films make the rounds on social media, and all I could do after watching them was yawn. The people who seemed most excited by generative video (and most trumpeting its game-changing status) were a bunch of tech bros and social media hawks who didn’t really have anything to show other than more yelling on social media from their paid and verified accounts or their promoted posts.

Undoubtedly, AI will continue to infiltrate every corner of media creation. And if it can do things like make transcription more accurate or suggest truly usable rough cuts, then I think we can consider it a success. But for every minor workflow improvement that is actually useful in post-production, we’ll see two or three self-proclaimed game-changing technologies that end up just being … meh.

In the meantime, I’ll happily use the very affordable M4 Mac mini in the edit suite or a powerful M4 Mac Studio that speeds up the post-production process overall. We can all be thankful for cloud-based “hard drives,” like LucidLink, that do more for post-production workflow than most AI tools that have been thrown our way. Maybe 2025 will be the year of AI reality over AI hype.

Nikki Cole

While I’m aware that the issues we’ve been facing on the writing/producing/directing side don’t affect many of the tech teams on the surface, it has been rather earth-shattering on our side of things. Everyone fears losing their jobs to AI, and with good reason. I am in negotiations with a European broadcaster who really likes the rather whacky travel series I’m developing but, like so many broadcasters now, simply doesn’t have enough cash to fully commission it. They flat out told me that I would write the first episode, and they would simply feed the rest of the episodes into AI so they wouldn’t have to pay me to write more eps. I almost choked when they said that matter-of-factly, and I responded by saying that this show is comedy and AI can’t write comedy! Their response was a simple shrug of the shoulders. Devastating for me, and with such an obvious lack of integrity on their part, I’m now concerned that they are currently going ahead with that plan and we don’t even have a deal in place. So, from post’s perspective, that is one more project that isn’t being brought into the suite, because I can’t even get it off the ground to shoot it.

As members of various guilds and associations, we are all learning about the magnificent array of tools we now have at our fingertips for making pitches, sizzle reels, visual references, etc. It really is astonishing what I can now do. I’m learning as much as I can, while I just can’t shake the guilt of knowing that I’ll be putting the great graphic designers I used to hire out of work. If budgets were a little better, I would, of course, hire a graphic designer with those skills, but as things stand today, I can’t afford to do that.

It’s definitely a fascinating and perplexing time!

Oliver Peters

AI in various forms gets the press, but in most cases it will continue to be more marketing hype than anything else. Useful? Yes. True AI? Often, no. However, tools like Jumper can be quite useful for many editors, although some aspects, like text search, have already existed for years in PhraseFind (Avid Media Composer).

There are many legal questions surrounding generative AI content creation. Some of these issues may be resolved in 2025, but my gut feeling is that legal claims will only just start rolling for real in this coming year. Vocal talent, image manipulation, written scripts, and music creation will all be areas requiring legal clarification.

On the plus side, many developers – especially video and audio plugin developers – are using algorithmic processes (sometimes based on AI) to combine multiple complex functions into simple one-knob style tools. Musik Hack and Sonible are two audio developers leading the way in this area.

One of the less glitzy developments is the future (or not) of post. Many editors in major centers for film and TV production have reported a lack of gigs for months. Odds are this will continue and not reverse in 2025. The role for (or even need for) the traditional post facility is being challenged. Many editors will need to find ways to reinvent themselves in 2025. As many businesses are enforcing return-to-office policies, will editors find remote work to be less acceptable to directors?

When it comes to NLEs, Adobe Premiere Pro and Avid Media Composer will continue to be the dominant editing tools when collaboration or project compatibility is part of the criteria. Apple Final Cut Pro will remain strong among independent content creators. Blackmagic Design DaVinci Resolve will remain strong in color and finishing/online editorial. It will also be the tool for many in social media as an alternative to Premiere Pro or Final Cut Pro.

The “cloud” will continue to be the big marketing push for many companies. However, for most users and facilities, the internet pipes still make it impractical to work effectively via cloud services with full-resolution media in real time. Of course, full-resolution media is also getting larger, not lighter weight, which is hardly conducive to cloud workflows.

A big bust in this past year has been the Apple Vision Pro. Like previous attempts at immersive, 3D, and 360-degree technologies, there simply is no large, sustainable, mass market use case for it, outside of gaming or special venues. As others have predicted, Apple will likely re-imagine the product into a cheaper, less capable variant.

Another bust is HDR video. HDR tools exist in many modern cameras and even smartphones. HDR is a deliverable for Netflix originals and even optional for YouTube. Yet the vast majority of content that’s created and consumed continues in good old Rec 709. 2025 isn’t going to change that.

2025 will be a year when the rubber meets the road. This is especially true with Adobe, who is adding generative AI for video and color management into Premiere Pro. So far, the results are imperfect. Will it get perfect in 2025? We’ll see.

Iain Anderson

The last twelve months have brought a huge amount of change. Generative AI might have had the headlines, but simple, short clips don’t just magically scale up to a 30-second spot, nor anything longer. One “insane, mind-blowing” short that recently became popular couldn’t even manage consistent clothing for its lead, let alone any emotion, dialogue or plot. Gen AI remains a tool, not a complete solution for anything of value.

On the other hand, assistive AI has certainly grown a little this year. Final Cut Pro added automatic captioning (finally!) and the Magnetic Mask, Premiere Pro has several interesting things in beta, Jumper provides useful visual and text-based media search today, Strada looks set to do the same thing early in the new year, and several other web-based tools offer automatic cutting and organizing of various kinds. But I suspect there’s a larger change coming soon — and it starts with smarter computer-based assistants.

Google Gemini is the first of a new class of voice-based AI assistants which you can ask for help while you use your computer, and a demo showed it (imperfectly) answering questions about DaVinci Resolve’s interface. This has many implications for anyone learning complex software like NLEs, and as I make a chunk of my income from teaching people that, it’s getting personal. Still, training has been on the decline for years. Most people don’t take full courses, but just jump in and hit YouTube when they get stuck. C’est la vie.

While assistant AIs will become popular, AIs will eventually control our computers directly, and coders can get a taste of this today. Very recently, I’ve found ChatGPT helpful for creating a small app for Apple Vision Pro, for writing scripts to control Adobe apps, and also for converting captions into cuts in Final Cut Pro, via CommandPost. Automation is best for small, supervised tasks, but that’s what assistants do.

Early in 2025, an upgraded Siri will be able to directly control any feature that a developer exposes, enabling more complex interactions between apps. As more AIs become able to interpret what they see on our screens, they’ll be able to use all our apps quicker than we can. In video production, the roles of editor and producer will blur a little further, as more people are able to do more tasks without specialist help.

But AI isn’t the whole story here, and in fact I think the biggest threat to video professionals is that today, not as many people need or want our services. High-end production stalled with the pandemic, and many production professionals are still short of work. As streaming ascends (even at a financial loss), broadcast TV is dying worldwide, with flow-on effects for traditional TV advertising. Viewing habits have changed, and will keep changing.

At the lower end, demand for quick, cheap vertical social media video has cut into requests for traditional, well-made landscape video for client websites or YouTube. Ads that look too nice are instantly recognised as such and swiped away, leading to a rise in “authentic” content, with minimal effort expended. It’s hard to make a living as a professional when clients don’t want content that looks “too professional”, and hopefully this particular pendulum swings back around. With luck, enough clients will realise that if everyone does the same thing, nobody stands out.

Personally, the most exciting thing this year for me is the Apple Vision Pro. While it hasn’t become a mainstream product, that was never going to happen at its current high price. Today, it’s an expensive, hard-to-share glimpse into the future, and hopefully the state-of-the-art displays inside become cheaper soon. It’ll be a slow road, and though AR glasses cannot bring the same level of immersion, they could become another popular way to enjoy video.

In 2024, the Apple Vision Pro was the only device to make my jaw drop repeatedly, and most of those moments have come from great 3D video content, in Immersive (180°) or Spatial (in a frame) flavors. Blackmagic’s upcoming URSA Cine Immersive camera promises enough pixels to accurately capture reality — 8160 x 7200 x 2 at 90fps — and that’s something truly novel. While I’m lucky to have an Apple Vision Pro today, I hope all this tech is in reach of everyone in a few years, because it really does open up a whole new frontier for us to explore.
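Those figures imply a staggering raw throughput, which is presumably why a heavily compressing codec like Blackmagic RAW matters at this scale. A rough estimate, assuming a 12-bit raw readout before compression (an assumption for illustration, not a published Blackmagic figure):

```python
# Rough throughput estimate for the URSA Cine Immersive's stated sensor mode.
width, height, eyes, fps = 8160, 7200, 2, 90
bits_per_pixel = 12            # assumed raw bit depth, before any compression

pixels_per_frame = width * height * eyes      # ~117.5 million
pixels_per_second = pixels_per_frame * fps    # ~10.6 billion
raw_gbps = pixels_per_second * bits_per_pixel / 1e9

print(f"{pixels_per_frame / 1e6:.1f} MP per stereo frame")
print(f"{pixels_per_second / 1e9:.2f} Gpx/s")
print(f"~{raw_gbps:.0f} Gbit/s raw before compression")   # ~127 Gbit/s
```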

P.S. If anyone would like me to document the most beautiful places in the world in immersive 3D, let me know?

Allan Tépper

In 2024, we saw more innovation in both audio-only and audio-video switchers/mixers/streamers, and the democratization of 32-bit float audio recording and audio DSP in microphones and mixers. I expect this to continue in 2025. In 2024, both Blackmagic and RØDE revolutionized ENG production with smartphones. I also began my series about ideal digitization and conversion of legacy analog color-under formats, including VHS, S-VHS, 8mm, Hi-8mm and U-Matic. I discussed the responsibility of proper handling of black level (pedestal/setup at 7.5 IRE, or zero IRE) at the critical analog-to-digital conversion moment, and the ideal methods to deinterlace while preserving the original movement (temporal resolution). That includes ideal conversion from 50i to 50p or 59.94i to 59.94p, as well as ideal conversion from non-square pixels to square pixels, upscaling to HD’s or 4K’s vertical resolution with software and hardware, preservation of the original 4:3 aspect ratio (or not), optional cropping of head-switching artifacts, noise reduction and more. All of this will continue in 2025, together with new coverage of bitcoin hardware wallets and associated services.
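As a concrete illustration of the non-square-pixel conversions mentioned above, here is a minimal sketch using the commonly cited Rec. 601 pixel aspect ratios for 4:3 material (illustrative arithmetic, not a description of Tépper’s exact workflow):

```python
# Square-pixel conversion for classic 4:3 SD material (Rec. 601 sampling).
# PAL stores 720 samples per line; NTSC's 4:3 image sits in ~704 active samples.
# Commonly cited 4:3 pixel aspect ratios (PARs): PAL 16:15, NTSC 10:11.
conversions = [
    ("PAL 576i",  720, 576, 16 / 15),   # 720 * 16/15 = 768 -> 768 x 576
    ("NTSC 480i", 704, 480, 10 / 11),   # 704 * 10/11 = 640 -> 640 x 480
]
for name, active_w, h, par in conversions:
    square_w = round(active_w * par)
    print(f"{name}: {active_w}x{h} active -> {square_w}x{h} square-pixel")

# Deinterlacing that preserves temporal resolution turns each field into a
# frame: 50i becomes 50p (and 59.94i becomes 59.94p), keeping the motion.
```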

In 2024, at TecnoTur, we helped many more authors achieve wide distribution of their books, ebooks and audiobooks. We guided them, whether they wanted to use the author’s own voice, a professional voice or an AI voice. We produced audiobooks in Castilian, English and Italian. We also helped them to deliver their audiobooks (with self-distribution from the book’s own website) in M4B format, with end-user navigation of chapters. I expect this to expand even more in 2025.

Brian Hallett

This past year, we saw many innovations in cameras and lenses. For cameras, we are now witnessing the market begin to move toward making medium-format acquisition easier for “everyday filmmakers” rather than just the top tier. Arri announced its new ALEXA 265, a digital 65mm camera designed to be compact and lightweight. The new ALEXA 265 is one-third the size of the original ALEXA 65, while slightly larger than the still-new ALEXA 35. Yet the ALEXA 265 is only available as a rental.

Regarding accessibility for filmmakers, the ALEXA 265 will not be easier to get one’s hands on; that role will be reserved for the Blackmagic URSA CINE 17K 65. The Blackmagic URSA CINE 17K 65 is exactly the kind of camera Blackmagic Design and CEO Grant Petty want to get into the hands of filmmakers worldwide. Blackmagic Design has a long history of bringing high-level features and tools to cameras at inexpensive prices. It is the company that bought DaVinci Resolve and then gave it away for free with camera purchases. They brought raw recording to inexpensive cameras early on in the camera revolution. Now, Blackmagic Design sees 65mm as the next feature, once reserved for the top-tier exclusive club of cinematographers, that it can deliver to everyone at a relatively decent price of $29,995.00, so expect to see the Blackmagic URSA CINE 17K 65 at rental houses sooner rather than later. I also wouldn’t let the 17K resolution bother you too much. Blackmagic RAW is a great codec that is a breeze to edit compared to processor-heavy compressed codecs.

We also saw Nikon purchase RED, but we have not yet seen cross-tech innovation between those companies. In years to come, we will see Nikon add RED tech to its cameras, and vice versa.

Sony delivered the new BURANO, and I’m seeing the camera at rental houses now. More than that, though, I see more owner/operators with the BURANO than with anything else. It appears Sony has a great camera that will serve owner/operators for a long time.

I feel like I saw a ton of new lenses in 2024, from ARRI’s new Ensō Primes to Viltrox’s anamorphic lenses. We see Chinese lenses coming in from every direction, which is good. More competition benefits all of us and keeps prices competitive. Sigma delivered their new 28-45mm f/1.8, the first full-frame f/1.8 maximum-aperture zoom lens. I tested this lens on a Sony mirrorless, and it felt like the kind of lens you can leave on all day and have everything covered. The depth of field was great in every shot. Sigma has delivered a series of lenses for mirrorless E-mount and L-Mount cameras at an astounding pace, from 500mm down to 15mm.

Canon was miserly with its RF mount. To me, Canon is protecting its investment in lens innovation by restricting who can make RF-mount lenses. I wish they wouldn’t do such a thing. It seems counterintuitive to me to block others from making lenses that work on their new cameras. What has happened is that all those other lens makers are making lenses specific to E-mount and L-Mount. In essence, if you are a BURANO shooter, you have more lenses available than a Canon C400 shooter. The story I tell myself is that if I had to buy a camera today, which lenses I could use would be part of that calculus.

On artificial intelligence (AI): we cannot discount how manufacturers use it to innovate more quickly and shorten the timeframe from concept to final product while saving money. As a creative, I use AI and think of it this way: there will be creatives who embrace AI and those who don’t or won’t, and that will be the differentiator in the long run. I already benefit from AI, from initial script generation (which is only a starting point) to caption generation and transcription, to photo editing in Lightroom.

Damien Demolder

The production of high-quality video used to be restricted by the cost of the kit needed and the skills required to operate that equipment. Those two things helped to regulate the number of people in the market. The last year, though, has seen a remarkable acceleration in the downward pricing trend of items that used to cost a lot of money, as well as an increase in the simplification and convenience of their handling. I tend to review equipment at the lower end of the price scale, an area that has seen a number of surprising products in the last twelve months. These days, anamorphic lenses are almost commonplace, hovering around the $1000 price point, and LED lights have simultaneously become cheaper and much more advanced.

Popular camera brands that until recently only dipped their toes into the video market now offer their own Log profiles, encourage their users to record raw footage to external devices and provide full 35mm-frame recording as though it is the expected norm. LUTs can be created in a mobile phone app and uploaded to cameras to be baked into the footage, and care-free 32-bit float audio can be recorded directly to the video soundtrack for a matter of a few hundred dollars and a decent mic. Modern image stabilisation systems, available in very reasonably priced mirrorless cameras, mean we can now walk and film without a Steadicam, and best-quality footage can be streamed to a tiny SSD for long shoots and fast editing. Earlier this year I reviewed a sub-$500 Hollyland wireless video transmitter system that, with no technical set-up, can send 4K video from an HDMI-connected camera to four monitors or recorders – or to your phone, where the footage can be recorded in FHD. I also reviewed the Zhiyun Molus B500 LED light, which provides 500W worth of bi-colour illumination for less than $600, and the market is getting flooded with small, powerful bi- and full-colour LED lights that run on mini V-lock batteries – or their own internal rechargeable batteries.
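That “care-free” label on 32-bit float audio is easy to quantify: the format keeps a fixed-precision window wherever the signal sits, so an overdriven take is not clipped in the file (provided the recorder’s converter stage captured it cleanly). A rough, back-of-the-envelope sketch of the numbers:

```python
import math

def db(ratio: float) -> float:
    """Amplitude ratio expressed in decibels."""
    return 20 * math.log10(ratio)

# Fixed-point 24-bit audio: ~144 dB between full scale and the quantization floor.
print(f"24-bit fixed:        {db(2 ** 24):.1f} dB")    # ~144.5 dB

# 32-bit float has a 24-bit mantissa, so it keeps that same ~144 dB window of
# precision, while an 8-bit exponent slides the window across a vast range,
# roughly 1500 dB for normalized values. Levels "above full scale" are stored
# intact and can simply be pulled down in post.
print(f"Float exponent span: {db(2.0 ** 253):.0f} dB")  # ~1523 dB
```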

Now, a lot of these products aren’t perfect and have limitations, but no sooner have the early adopters complained about the faults in the short-run first batch than the manufacturers have altered the design and fixed the issue to make sure the second phase will meet, and often exceed, expectations. We can now have Lidar AF systems for manual lenses, autofocus anamorphics, cheap gimbals so good you’d think the footage was recorded with a drone – even lighting stands, heavy-duty tripods and rigging gear are getting cheaper and better at the same time.

Of course, all this is great news for low-budget productions, students and those just starting out, but it also means anyone can believe they are a film maker. With the demand for video across social media platforms and websites ever increasing, you’d think that would be great news for the industry, but much of that demand is being eaten up by those with no formal training and some clever kit. Not all the content looks or sounds very good, but often that matters less than keeping to a tiny budget. Those who think they are film makers can easily convince those who can’t imagine what their film could look like that they are.

I expect 2025 will bring us more of this – better, more advanced and easier-to-use kit at lower prices, and more people using it. I didn’t need to consult my crystal ball for this prediction. Every year has brought the same gradual development since I joined the industry 28 years ago, but once again it has taken us to places I really hadn’t expected. I expect to be surprised again.

Nick Lear

I found 2024 to involve a lot of waiting. Waiting for the industry to right itself, waiting for the latest AI tool to come out of beta, waiting for companies to put stability before bells and whistles. That, I fear, may be rather a long wait. I also found I had very mixed feelings about AI – on the one hand, I was excited to see what advances the technology could bring; on the other, I was saddened by the constant signs that profit is put before people – whether in plagiarising artists’ work or the big Hollywood studios wanting the digital rights of actors.

Generative AI impresses me whenever I see it – and I think we have to acknowledge the rate of improvement in the last few years – but I also struggle to see where it can fit in my workflow. I am quite looking forward to using it in pre-production – testing out shots before a shoot or while making development sizzles. To that end, it was great to see OpenAI’s text-to-video tool Sora finally come into the public’s hands this month, albeit not in the UK or Europe. Recently, Google’s Veo 2 has been hyped as being much more realistic, but it’s still in beta and you have to live in the US to get on the waiting list. Adobe’s Firefly is also waitlist-only – so there’s more waiting to be done – yet it could well be that 2025 brings all of these tools into our hands, and we get to see what people will really do with them outside of a select few.

On the PC hardware front, marketing teams went into overdrive this year to sell us on new “AI” chips. Intel tried to convince us that we needed an NPU (neural processing unit) to run machine learning operations when there were marginal gains over using the graphics card we already had. And Microsoft tried to push people in the same direction – requiring specific new hardware to qualify for Copilot+. Both companies are trying to catch up with Apple on battery life, which I’m all for, but I wish they could be more straightforward about how they presented it.

I continued to get a lot out of the machine-learning-based tools, whether it was using a well-trained voice model in ElevenLabs or upscaling photos and video with Topaz’s software. I also loved the improvements that Adobe made to their online version of Enhance Speech, which rescued some bad audio I had to work with. Some of these tools are starting to mature – they can make my life easier and enable me to present better work to my clients, which is all I want at the end of the day.

Jeff Foster

For me, 2024 was met with lots of personal life challenges, which precluded me from the kind of deep-dive involvement in the world of AI that I managed in the previous two years, but I did catch up on some of the more advanced generative AI video/animation tools to explore and demo for the CreativePro Design + AI Summit in early December. I created the entire 40-minute session with generative AI tools, including my walking intro chat and the faux TED Talk presentation, using tools like HeyGen, ElevenLabs, Midjourney and Adobe After Effects. As usual, I did a complete breakdown of my process as a reveal toward the end of my session and will be sharing this process in a PVC exclusive article/video in January 2025.

The rate of development of AI animation/video tools such as Runway, Hailuo and Leonardo, all busy building out their text- and image-to-video capabilities, is astounding. I think we’re going to see a major shift in development in this area in 2025.

I’m also exploring music and audio generative AI tools, including hardware/software solutions, in the coming months, and I expect to see some amazing growth in quality, production workflows and accessibility for a public that is largely made up of non-musicians.

As usual, I’m only exploring the tools and seeing how they can be utilized, but also am concerned for the direction all of this is heading and how it affects content creators and producers in the end. I always take the position that these are tools we can utilize in our workflows (or at least have an understanding of how they work) or choose to ignore them and hope they’ll just go away like other fads… which they won’t.

Michelle DeLateur

Christmas morning is a flurry of impatience; usually an explosion of expectation, surprise, and wrapping-paper scraps. I used to describe NAB as “camera Christmas” to newcomers. But with announcements coming by email and NAB turning primarily into events, meetings, and conversations, that giddy elf feeling I used to have about seeing the new floor models has turned into excitement at seeing familiar faces.

So where has our impatience shifted? It seems we now find ourselves in a waiting game for presents in 2025.

That new Adobe Gen-AI video model? Hop on the waitlist. Hoping to see more content on the Vision Pro, perhaps with the Blackmagic URSA Cine Immersive? Not yet. Excited about Sigma making RF lenses? They started with APS-C first.

Patience is not one of our virtues. With shorter camera shelf lives and expected upgrade times, we assume we will hold onto our gear for less time than ever, and we are always ready for a change. Apple’s releases make our months-old laptops seem slow. A new AI tool comes out nearly every day.

Video editors are scrambling for the few job openings, adding to their skill sets to be ready for positions, or transitioning to short-term jobs outside of video, alongside the anxiety of AI threatening to take over this realm. We rejoiced when transcription was replaced by a robot. We winced when an AI program showed it could make viral-ready cuts.

Just because we are forced to wait does not mean we are forced to be behind. It is cheaper than ever to start a photography journey. Mastering the current tools can make you a faster editor. Teaching yourself and others can help create new stories. While I personally don’t fully believe in the iPhone’s filmmaking abilities, there ARE plenty of tools to turn the thing-that’s-always-on-you into a filmmaking device.

In 2024, we were forced to wait. But we are not good at waiting. That’s the same tenacity and ambition that makes us good at storytelling. It’s only a matter of time. It’s all a matter of time. So go forth and use your time in 2025.

Chris Zwar

For me, 2024 was a case of ‘the more things change, the more they stay the same’. I had a busy and productive year working for a range of clients, some of whom I hadn’t worked with for many years. It’s nice to reconnect with former teams and I found it interesting that briefs, pitches and deliveries hadn’t changed a great deal with time.

The biggest change for me was finally investing in a new home workstation. Since Covid, I have been working from home 100%, but I was using an older computer that was never intended for daily projects. Going through the process of choosing components, ordering them and then assembling a dedicated work machine was very rewarding, and something I should have done sooner. Now that my home office has several machines connected to a NAS with 10-gig ethernet, I have more capacity at home than some of the studios I freelance for – something I would have found inconceivable only a few years ago!

Technically, it seems like the biggest impact AI has had so far has been providing a topic for people to write about. Although AI tools continue to improve, and I use AI-based tools like Topaz and Rotobrush regularly, I’m not aware of AI having had any impact on the creative side of the projects I’ve worked on.

From my perspective as an After Effects specialist, the spread of HDR and ACES has helped After Effects become increasingly accepted as a tool for high-end VFX. The vast majority of feature films and premium TV shows are composited in Nuke pipelines, but with ACES having been built into AE for over a year, I’m now doing regular cleanup, keying and compositing in After Effects on projects that wouldn’t have been available to me before.

HDMI 2.2 will be announced at CES 2025

The announcement of a new HDMI specification, offering higher bandwidth, will be made at CES 2025, and some believe the new NVIDIA RTX 50 series will include it in its specifications.

Details are scarce, but according to the HDMI Forum, the new HDMI specification – which will deliver higher bandwidths and resolutions – will be revealed during a press conference at CES 2025 on January 6, the same day NVIDIA founder and CEO Jensen Huang will deliver his CES keynote, at 6:30 p.m.

CES 2025, which will take place in Las Vegas from January 7-10, 2025, is the stage where NVIDIA will announce its new RTX series, Blackwell. There is some speculation that the new RTX 50 series (starting with the RTX 5080 and RTX 5090) will already include HDMI 2.2 (if that’s the final name for the new HDMI version). At CES 2025, AMD is also introducing its new graphics card, which reports suggest will be named Radeon RX 8000, a GPU that may also use the new HDMI specification.

The announcement from the HDMI Forum indicates that the new version will support higher resolutions, refresh rates and enhanced transmission quality… and will require a new cable. The current HDMI 2.1 transmits data at up to 48Gbps, allowing for refresh rates of up to 120Hz at 4K resolution. It is expected that HDMI 2.2 will compete with DisplayPort 2.1, which can reach 80Gbps.
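For context, the active-pixel payload of a 4K/120 signal can be estimated as follows; 10-bit RGB is assumed here, and blanking intervals plus link-encoding overhead (which push the real link requirement higher) are ignored in this rough sketch:

```python
# Rough active-video payload for 4K at 120 Hz, 10-bit RGB (no chroma subsampling).
width, height, fps = 3840, 2160, 120
bits_per_pixel = 3 * 10          # three 10-bit color channels

payload_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"~{payload_gbps:.1f} Gbit/s of active video")   # ~29.9 Gbit/s

# Blanking and link encoding raise the real figure, which is why 4K/120 at
# higher bit depths starts to crowd HDMI 2.1's 48 Gbit/s ceiling, and why a
# higher-bandwidth HDMI 2.2 would leave room to spare.
```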

Studio Extreme, an immersive experience at CES 2025

Disguise, Nikon and MRMC bring the Studio Extreme immersive activation to CES 2025 to show how easily brands can create everything from social live streams to professional ads in a turnkey virtual production studio.

Studio Extreme will take participants on an immersive journey. As they step into the activation they will be transported to a location just outside Vegas, where they will be tasked to deliver a live news report. But will they be prepared for all the weather conditions that Vegas has to offer?

The immersive experience, which will be on display at CES 2025, is the result of a partnership between Disguise — the company behind the virtual production technology used on commercials for Apple Music and Lenovo, as well as the feature film Daddio, starring Sean Penn — Nikon and its subsidiary MRMC. It aims to demonstrate how easily brands can create everything from social live streams to professional ads in a turnkey virtual production studio.

Nikon chose Studio Pro by Disguise to bring this activation to life. Designed for brands looking to tell powerful stories through immersive content, Studio Pro combines the most advanced virtual production technology with best-in-class creative and technical services, including 24/7 support, training and installation, enabling brands to shoot commercials, social media content, internal training material or important keynote or investor presentations — all from one stage on the same day.


“We’ve helped deliver 400 virtual production studios in more than 100 countries, and know just how transformational they can be for brands,” says Alexandra Coulson, VP of Marketing at Disguise. “Traditionally, however, LED technology required significant upfront training and investment, making it reserved for Hollywood studios. We are changing that. Together with Nikon and MRMC we are showcasing the possibilities at CES 2025. Attendees will experience firsthand how they can take their brand’s story to the next level, producing a versatile offering of content with our comprehensive powerhouse solution, Studio Pro.”

The technology behind the Studio Extreme activation
Visitors who step into the CES activation will be able to experience state-of-the-art virtual production technology including:

  • Studio Pro — a turnkey studio solution by Disguise
  • Studio Bot LT — MRMC’s most compact robotic camera arm solution
  • RED Komodo 6K camera
  • Lighting by Kino Flo
  • L-Acoustics spatial audio

Once they complete their weather report, visitors will receive a personalized video downloadable via a QR code.

Visit the virtual production activation Studio Extreme at the Nikon booth 19504 in LVCC, Central Hall, at CES 2025.

Enhancing On-Set Communication: On Set Headsets’ Product Lineup and LDI 2024

Effective communication is the backbone of any successful film or live production. On Set Headsets has been at the forefront of revolutionizing on-set communication, and their presence at LDI 2024 was no exception. With a lineup designed to cater to the unique demands of filmmakers and live production professionals, On Set Headsets is raising the bar for reliability and functionality. This year, Filmtools proudly joins the effort as an official dealer of On Set Headsets’ products, making these essential tools even more accessible to the production community.

The FilmPro Surveillance Headset: Precision and Clarity

At the heart of On Set Headsets’ offerings is the FilmPro Surveillance Headset, a versatile and discreet communication tool tailored for on-set environments. With its ergonomic design and crystal-clear audio quality, this headset ensures seamless communication without compromising comfort. The FilmPro’s lightweight build and secure fit make it ideal for prolonged use during demanding shoot days, while its durability withstands the rigors of film production.

The FilmPro stands out for its noise-canceling microphone, ensuring messages remain clear even in noisy settings. Whether coordinating with crew members or addressing last-minute changes, the FilmPro Surveillance Headset ensures no detail is lost in translation.

LiveComms Pro: The Ultimate Tool for Live Events

For live production professionals, the LiveComms Pro offers a tailored solution that merges robust design with high-fidelity audio. Featuring a 4-pin XLR connector, this headset seamlessly integrates with professional-grade intercom systems. The LiveComms Pro is built to handle the high-pressure demands of live events, ensuring uninterrupted communication during critical moments.

Its comfortable fit and adjustable components accommodate extended wear, making it a favorite among stage managers, event producers, and broadcast professionals. At LDI 2024, attendees had the opportunity to experience the LiveComms Pro’s superior performance firsthand, solidifying its reputation as a must-have for live production scenarios.

Tubeez Tubes: Simple Innovations with Big Impact

Another standout product in On Set Headsets’ lineup is the Tubeez Tubes, a flexible earpiece solution that prioritizes both hygiene and audio clarity. Designed to be compatible with most surveillance headset systems, Tubeez Tubes provide a comfortable and secure listening experience.

Their interchangeable nature offers an added layer of hygiene, a crucial consideration in environments where equipment is shared among multiple users. With a variety of lengths and styles available, Tubeez Tubes cater to individual preferences while maintaining consistent audio performance. This product exemplifies On Set Headsets’ commitment to blending practicality with user-friendly design.

LDI 2024: A Showcase of Innovation

The LDI (Live Design International) tradeshow has long been a hub for showcasing cutting-edge technologies in live production and entertainment. On Set Headsets’ presence at LDI 2024 underscored their dedication to advancing communication solutions for the industry. Attendees had the chance to explore the full range of On Set Headsets’ products and witness live demonstrations of their capabilities.

From interactive product displays to hands-on testing stations, On Set Headsets engaged with industry professionals, gathering valuable feedback and insights. Their booth was a testament to the brand’s commitment to innovation and user-centric design, further solidifying their position as a leader in communication technology.

Conclusion

On Set Headsets has set a new standard for communication in film and live production. With a product lineup that includes the FilmPro Surveillance Headset, LiveComms Pro, and Tubeez Tubes, the brand offers tailored solutions for every scenario. Their presence at LDI 2024 highlighted their commitment to innovation and user satisfaction, while their partnership with Filmtools ensures these essential tools are readily available to the industry.

Whether you’re a filmmaker coordinating a complex set or a live event producer managing a high-stakes show, On Set Headsets has the tools to keep your communication seamless and efficient. Explore their products today through Filmtools and experience the difference that superior communication technology can make.

Samsung, Pimax and DPVR: three new VR headsets for 2025

CES 2025 will be the stage to discover the new VR headsets coming to the market: the 27-million-pixel Pimax Dream Air, a new “mystery product” from DPVR and, probably, Samsung’s Moohan Android XR headset.

Virtual Reality is about to take a new step forward with the introduction of new VR/MR headsets that promise to attract more people to experience what these technologies have to offer. One of the new headsets, announced as a “mystery product”, comes from DPVR, a company that has not delivered any exciting consumer solutions, despite its promises, so it will be interesting to see what this new announcement will bring. DPVR will be at CES 2025, showing the company’s products at Booth #15945 in the Central Hall at the Las Vegas Convention Center (LVCC).

The second VR headset now announced is the Pimax Dream Air, presented as the world’s smallest full-featured 8K-resolution VR headset. Weighing only 200g (strap included), it offers a 102° horizontal FOV across 27 million pixels (3840 x 3552 per eye at a 90 Hz refresh rate, on micro-OLED panels), and it features head, hand and eye tracking, integrated spatial audio, a DisplayPort connection, and a self-adjusting backstrap.

The Dream Air is a PCVR headset, and, according to Pimax, “borrows a lot of components from the previously announced Crystal Super, including the micro-OLED panels and pancake lenses — but packs this into a small form factor headset, to satisfy different use cases. It breaks with previous Pimax headsets, with a new design language, signalling the small form factor era for Pimax.”

Pimax Dream Air: is it real?

The Pimax Dream Air is a 6DoF PCVR headset with inside-out tracking by default, but Lighthouse-compatible if you have the Steam base stations. Pimax – the company will be at CES 2025 – reveals that the headset can become a standalone solution with the addition of Cobb, a standalone compute puck being developed, powered by Qualcomm’s Snapdragon XR2 chip and a battery.

With a price starting at USD $1,900, the Pimax Dream Air will be available in May 2025, but the company is already taking pre-orders. Given Pimax’s history of announcing products that it does not deliver on time, and the many problems that users have had with Pimax headsets, the Dream Air may be another example of what one user online called “Pimax being Pimax”.

Still, the Pimax Dream Air looks interesting, both in terms of specifications and in its promise to be a full-featured PCVR headset, with a DisplayPort interface for visually uncompressed images from the PC, which continues to be the best way to explore Virtual Reality. Will the Dream Air become reality?

First Android XR headset comes from Samsung

The third VR headset coming in 2025 is much more appealing as it marks the return of Samsung to the VR market, where the company left a good impression with its Windows Mixed Reality headset, Odyssey, released in November 2017, following an announcement in October 2017. The Odyssey was the company’s first VR headset release, designed to work with Microsoft’s Windows Mixed Reality… which is now dead.

Microsoft put an end to its own Windows Mixed Reality, a platform it first promoted in 2016, when the company announced that multiple OEMs would release virtual reality headsets for the Windows Holographic platform. It was officially “killed” in December 2023, when Microsoft announced the deprecation of WMR, with complete removal in Windows 11 version 24H2, expected to arrive in late 2024.

What this means is that owners of any of the 10 VR headsets launched since 2016 (from companies such as Samsung, Lenovo or Acer) – the last of them being the HP Reverb G2 – will own a mere paperweight when they update Windows 11. It’s Microsoft being Microsoft…

With WMR dead, a new “universal” platform is about to be born, and the first VR headset designed for it is Samsung’s Project Moohan. In fact, Samsung Project Moohan is the first Android XR headset to make it to the market. And Google says that more devices are coming for the new operating system built for this next generation of computing, created in partnership with Samsung and Qualcomm. Android XR combines years of investment in AI, AR and VR to bring helpful experiences to headsets and glasses, promising a platform to extend your reality to explore, connect and create in new ways.

Android XR platform supports OpenXR

Google says that the company is “working to create a vibrant ecosystem of developers and device makers for Android XR, building on the foundation that brought Android to billions” and that the preview of the new operating system made available to developers will allow them to start building apps and games for upcoming Android XR devices. Google also revealed that with “Qualcomm partners like Lynx, Sony and XREAL, we are opening a path for the development of a wide array of Android XR devices to meet the diverse needs of people and businesses. And we are continuing to collaborate with Magic Leap on XR technology and future products with AR and AI.”

One important aspect of the new Android XR platform is that it will support OpenXR, a royalty-free, open standard that provides a common set of APIs for developing XR applications that run across a wide range of AR and VR devices. This reduces the time and cost required for developers to adapt solutions to individual XR platforms, while also creating a larger market of easily supported applications for device manufacturers that adopt OpenXR.

The Khronos Group, promoters of the OpenXR standard, said that “most major XR platforms have transitioned to using OpenXR to expose current and future device capabilities. Vendors with conformant OpenXR implementations include Acer, ByteDance, Canon, HTC, Magic Leap, Meta, Microsoft, Sony, XREAL, Qualcomm, Valve, Varjo, and Collabora’s Monado open source runtime. OpenXR is also supported by all the major game and rendering engines, including Autodesk VRED, Blender, Godot, NVIDIA’s Omniverse, StereoKit, Unreal Engine, and Unity.”

According to Google, “with headsets, you can effortlessly switch between being fully immersed in a virtual environment and staying present in the real world. You can fill the space around you with apps and content, and with Gemini, our AI assistant, you can even have conversations about what you’re seeing or control your device. Gemini can understand your intent, helping you plan, research topics and guide you through tasks.”

The company says that it is also “reimagining some of your favorite Google apps for headsets. You can watch YouTube and Google TV on a virtual big screen, or relive your cherished memories with Google Photos in 3D. You’ll be able to explore the world in new ways with Google Maps, soaring above cities and landmarks in Immersive View. And with Chrome, multiple virtual screens will let you multitask with ease. You can even use Circle to Search to quickly find information on whatever’s in front of you, with just a simple gesture.”

Android XR will also support glasses for all-day help in the future. Google notes that the company wants “there to be lots of choices of stylish, comfortable glasses you’ll love to wear every day and that work seamlessly with your other Android devices. Glasses with Android XR will put the power of Gemini one tap away, providing helpful information right when you need it — like directions, translations or message summaries without reaching for your phone. It’s all within your line of sight, or directly in your ear.”

Android XR: an open, unified platform for XR

Android XR is designed to be an open, unified platform for XR headsets and glasses. For users, this means more choice of devices and access to apps they already know and love. For developers, it’s a unified platform with opportunities to build experiences for a wide range of devices using familiar Android tools and frameworks.

One question remains, though: will Android XR become the “universal” platform for Virtual Reality experiences? Or will Google tire of it and add one more name to the list of abandoned projects, which includes Chromecast, YouTube VR, Google Cardboard, Google Daydream, VR180 Creator and Stadia? In fact, Google and Microsoft both have a proven track record of abandoned projects.

Still, with OpenXR supported, Samsung’s Project Moohan is, as the company puts it, “one of our most ambitious endeavors yet”, the first headset designed for Android XR. The name “Moohan”, meaning ‘infinity’ in Korean, connotes Samsung’s belief in delivering unparalleled, immersive experiences within an infinite space. According to Samsung, “equipped with state-of-the-art displays, passthrough capabilities and natural multi-modal input, this headset will be your spatial canvas to explore the world through Google Maps, enjoy a sports match on YouTube or plan trips with the help of Gemini. All these experiences come with lightweight, ergonomically optimized hardware designed to ensure maximum comfort during use.”

Virtual Desktop can be ported to Android XR

“XR has quickly shifted from a distant promise to a tangible reality. We believe it has the potential to unlock new and meaningful ways to interact with the world by truly resonating with your everyday lives, transcending physical boundaries,” said Won-Joon Choi, EVP and Head of R&D, Mobile eXperience Business. “We are excited to collaborate with Google to reshape the future of XR, taking our first step towards it with Project Moohan.”

“We are at an inflection point for the XR, where breakthroughs in multimodal AI enable natural and intuitive ways to use technology in your everyday life”, said Sameer Samat, President of Android Ecosystem, Google. “We’re thrilled to partner with Samsung to build a new ecosystem with Android XR, transforming computing for everyone on next-generation devices like headsets, glasses and beyond.”

First revealed in February 2023, when Google, Samsung and Qualcomm announced that they would co-develop an XR device, the Project Moohan headset continues to be a mystery in terms of specifications. All that is known right now is that it uses a Snapdragon XR2+ Gen 2 processor, a more powerful version of the chip in the Quest 3 and Quest 3S, suggesting it’s a standalone device, so quite different from Pimax’s Dream Air, a native PCVR headset. Still, as Project Moohan is compatible with OpenXR, it should also be easy to use as a wireless PCVR solution, as it will be able to run Virtual Desktop, the most popular software for wireless experiences on headsets such as the Quest, HTC Vive or Pico 4.

Guy Godin, developer of Virtual Desktop, revealed to website UploadVR that bringing his native OpenXR app to Android XR “took only a few hours and the basics just worked out of the box”, adding this: ”Personally I think it’s refreshing to work with a platform that wants to collaborate with developers rather than one who tries to block and copy us. Grateful to have more options for consumers in the near future and I’m very excited to bring the best PC streaming solution to Android XR.”

]]>
https://www.provideocoalition.com/samsung-pimax-and-dpvr-three-new-vr-headsets-for-2025/feed/ 0
Review: the Canon RF-S 7.8mm Spatial Lens on the EOS R7 https://www.provideocoalition.com/review-the-canon-rf-s-7-8mm-spatial-lens-on-the-eos-r7/ https://www.provideocoalition.com/review-the-canon-rf-s-7-8mm-spatial-lens-on-the-eos-r7/#respond Mon, 23 Dec 2024 13:00:03 +0000 https://www.provideocoalition.com/?p=287186 Read More... from Review: the Canon RF-S 7.8mm Spatial Lens on the EOS R7

]]>
Video professionals looking to create 3D content for the Apple Vision Pro, for other VR devices, or for 3D-capable displays, have only a few camera options to choose from. At the high end, syncing two cameras in a beam-splitter rig has been the way to go for some time, but there’s been very little action in the entry- or mid-level 3D camera market until recently. Canon, working alongside Apple, have created three dual lenses, and one is specifically targeted at Spatial creators: the RF-S 7.8mm STM DUAL lens, for the EOS R7 (APS-C) body.

Two lenses, close up in a single lens housing on an R7

From the side, it looks more or less like a normal lens. You can add 58mm ND or other filters if you wish, a manual focus ring on the front works smoothly, and autofocus is supported. From the front, you’ll see two small spherical lenses instead of a single large circle, and on the back of the camera, you’ll see a separate image for the left and right eyes. Inter-pupillary distance is small, at about 11.6mm, but this isn’t necessarily a problem for the narrower field of view of Spatial.

Spatial ≠ Immersive

As a reminder, the term “spatial” implies a narrow field of view, much like traditional filmmaking, but 3D rather than 2D. Spatial does not mean the same thing as Immersive, though! Immersive is also 3D, but uses a much wider 180° field of view. It’s very engaging, but if you’re used to regular 2D filmmaking, shifting to Immersive will have a big impact on how you shoot, edit and deliver your projects. The huge resolutions required also bring their own challenges.

If you do want to target Immersive filmmaking, one of Canon’s other lenses would suit you better. The higher-end RF 5.2mm f/2.8L Dual Fisheye is for full-frame cameras, while the RF-S 3.9mm f/3.5 STM Dual Fisheye suits APS-C crop-sensor cameras. On both, the protruding front elements mean that filters cannot be used, and due to the much wider field of view, a more human-like inter-pupillary distance is used. While I’d like to review a fully Immersive workflow in the future, this time around my focus is on Spatial.

Handling

While it’s been a while since I regularly used a Canon camera, the brand was my introduction to digital filmmaking back in the “EOS Rebel” era. Today, the R7’s interface feels familiar and easy to navigate.  The flipping screen is helpful, the buttons are placed in unique positions to encourage muscle memory, and the two dials allow you to tweak settings by touch alone. It’s a solid mid-range body with dual SD slots, and while the slightly mushy buttons don’t give the same solid tactile response as those on my GH6, it’s perfectly usable. 

Of note is the power switch, which moves from “off”, through “on”, to a “video” icon. That means that on the R7, “on” really means “photos”, because though you can record videos in this mode, you can’t record in the highest quality “4K Fine” mode. If you plan to switch between video and stills capture, you’ll need to master this one, but if you only want to shoot video, just move the switch two notches across. Settings are remembered differently between the two modes, so remember to adjust aperture etc. if you’re regularly switching.

A rear view of the R7, with two circular images in the viewscreen

Dynamic range is good (with 10-bit CLog 3 on offer) and if you shoot in HEIF, stills can use an HDR brightness range too. That’s a neat trick, and I hope more manufacturers embrace HDR stills soon.

Since the minimum focal distance is 15cm, it’s possible to place objects relatively close to the camera, and apparently the strongest effect is between 15cm and 60cm. That said, be sure to check your final image in an Apple Vision Pro, as any objects too close to the camera can be quite uncomfortable to look at. It’s wise to record multiple shots at slightly different distances to make sure you’ve captured a usable shot. 

While autofocus is quick, it’s a little too easy to just miss focus, especially when shooting relatively close subjects at wider apertures. The focusing UI can take a little getting used to, and if the camera sometimes fails to focus, you may need to switch to a different AF mode, or just switch to manual focus. This is easy enough, using a switch on the body or an electronic override, and while the MF mode does have focus peaking, it can’t be activated in AF mode.

The rear viewscreen on the R7, showing two lenses R, then L

Another issue is that as the viewfinder image display is quite small, showing both image circles side by side, you’ll struggle to see what you’re framing and focusing on without an external monitor connected to the micro HDMI port. However, when you do plug in a monitor, the touchscreen deactivates, and (crucially!) it’s no longer possible to zoom in on the image. It’s fair to say that I found accurate focusing far more difficult than I expected. For any critical shots, I’d recommend refocusing and shooting again, just in case, or stopping down.

Composing for 3D

Composing in 3D is a lot like shooting in 2D with any other lens, except for all the weird ways in which it isn’t. Because the image preview is two small circles, it’s hard to visualize exactly what the final image will look like after cropping and processing. If you don’t have a monitor, you’ll want to shoot things a little tighter and a little wider to cover yourself.

To address the focus issue, the camera allows you to swap between the eyes when you zoom in, to check focus on each angle independently, though this is only possible if a monitor is not connected. Should you encounter a situation in which one lens is in focus and the other isn’t, use the “Adjust” switch on the lens to set the focus on the left or right angle alone.

The Adjust switch on the front allows for per-eye focus correction

Importantly, because 3D capture is more like capturing a volume than carefully selecting a 2D image, you’ll be thinking more about where everything you can see sits in depth. And because the 3D effect falls off for objects that are too far away, you’ll spend your time trying to compose a frame with both foreground and background objects.

Some subjects work really well in 3D — people, for example. We’re bumpy, move around a bit, and tend to be close to the camera. Food is good too, and of all the hundreds of 3D clips I’ve shot over the last month or so, food is probably the subject that’s been most successful. The fact that you can get quite close to the camera means that spatial close-ups and near-macro shots are easier here than on the latest iPhone 16 series, but remember that you can’t always tell how close is too close.

Field tests

To put this camera through its paces, I took it out for a few test shoots, and handling (except for focus) was problem-free. Battery life was good, there were no overheating issues, and it performed well.

To compare, I also took along my iPhone 16 Pro Max (using both the native Camera app at 1080p and the Spatial Camera app at UHD) and the 12.5mm ƒ/12 Lumix G 3D lens on my GH6. This is a long-since discontinued lens which I came across recently at a bargain price, and in many ways, it’s similar to the new Canon lens on review here. Two small spherical lenses are embedded in a regular lens body, positioned close together, and both left and right eyes are recorded at once.

There’s a difference, though. While the Canon projects two full circular images on the sensor, the Lumix lens projects larger circles, overlapping in the middle, with some of the left and right edges cropped off. More importantly, because the full sensor data can be recorded, this setup captures a higher resolution (2400px wide per eye, and higher vertically if you want it) than the Canon can.

The Lumix 3D lens uses a lot more of the sensor area to capture its image, at a higher video resolution of 5760×4320

That’s not to say the image is significantly better on the Lumix — the Canon lens is a far more flexible offering. The Lumix 3D lens is softer, with a far more restrictive fixed aperture and 1m minimum focus distance. Since this isn’t a lens that’s widely available, it’s not going to be something I’d recommend in any case, but outdoors, or under strong lighting — sure, it works.

Interpreting the dual images

One slight oddity of dual-lens setups is that the images are not in the order you might expect, but are shown with the right eye on the left and the left eye on the right. Why? Physics. When a lens captures an image, it’s flipped, both horizontally and vertically, and in fact, the same thing happens in your own eyes. Just like your brain does, a camera flips the image before showing it to you, and with a normal lens, the image matches what you see in front of the camera. But as each of the left and right images undergoes this flipping process independently, when the camera flips the image horizontally, the left and right images are swapped too.

While the R-L layout is useful for anyone who prefers “cross-eye” viewing to match up two neighbouring images, it makes life impossible for those who prefer “parallel” viewing. If the images were in a traditional L-R layout, you could potentially connect a monitor and use something similar to Google Cardboard to isolate each eye and make it easier to see a live 3D image. As it is, you’ll probably have to wing it until you get back into the edit bay, and you will have to swap the two eyes back around to a standard L-R format before working with them.
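If you’re curious what that swap actually involves, it’s a simple slice-and-rejoin on each side-by-side frame. Here’s a minimal sketch in Python, assuming Pillow and NumPy are installed; “rl_frame.png” is a hypothetical exported frame, and in practice the per-frame work is done by EOS VR Utility or your NLE rather than by hand:

```python
# Swap a side-by-side R-L frame back to the standard L-R layout.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("rl_frame.png"))   # hypothetical input frame
half = frame.shape[1] // 2                       # width of one eye's image
right_eye, left_eye = frame[:, :half], frame[:, half:]
lr = np.concatenate([left_eye, right_eye], axis=1)  # left eye first
Image.fromarray(lr).save("lr_frame.png")
```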

Processing the files — as best you can

Canon’s EOS VR Utility is designed to process the footage for you, swapping the eyes back around, performing parallax correction, applying LUTs, and so on. It’s not pretty software, but it’s functional, at least if you use the right settings. While you can export to 3D Theater and Spatial formats, Spatial isn’t actually the best choice. The crop is too heavy, the resolution (1180×1180) is too low, and the codec is heavily compressed.

EOS VR Utility, converting video to the 3D Theater format, before choosing a 16:9 crop

Instead, video professionals should export to the 3D Theater format, with a crop to 16:9 (avoid the 8:9 crop, as too much of your image will be lost from the sides). The 3D Theater format performs the necessary processing, but as it converts to the ProRes codec instead of MV-HEVC, most issues from generation loss can be avoided. Resolution will be 3840 across (including both eyes), and though this isn’t “true” resolution, it’s a significant jump up from the native Spatial option. When you import these clips into FCP, set Stereoscopic Conform to Side by Side in the Info panel, and you’re set.

Stereoscopic Conform should be set to Side by Side in FCP

(Note that if space is a concern, 3D Theater can use the H.264 codec instead of ProRes, but when you import these H.264 clips into FCP, they’re incorrectly interpreted as “equirectangular” 360° clips, and you’ll have to set the Projection Mode to Rectangular as well as setting Stereoscopic Conform to Side by Side.)

A third option: if you’d prefer to avoid all this processing, it is possible (though not recommended) to work with the native camera files. After importing to FCP, you’ll first need to set the Stereoscopic Conform metadata to “Side by Side”, then add the clips to a Spatial/Stereo 3D timeline. There, look for the Stereoscopic section in the Video Inspector, check the Swap Eyes box, and set Convergence to around 7 to get started.

Left and Right images can be swapped with the Info button — note the vertical disparity

With this approach, you’ll have to apply cropping, sharpening and LUTs manually, which would be fine, but the deal killer for me is that it’s very challenging to fix any issues with vertical disparity, in which one lens is very slightly higher than the other. That’s the case for this Canon lens, and also my Lumix 3D lens — presumably it’s pretty tricky to line them up perfectly during manufacturing. Although the EOS VR Utility corrects for this, unprocessed shots can be uncomfortable to view. Comparing the original shots with the 3D Theater processed shots, I’ve also noticed a correction for barrel distortion, but to my eye it looks a little heavy handed; the processed shots have a slight pincushion look.

A crop of the left side of an original video on the left, and the same part of the processed video on the right — note the curved lines that should be straight

It’s worth noting that the EOS VR Utility is not free; to process any clips longer than 2 minutes, you’ll need to pay a monthly (US$5) or yearly (US$50) subscription. While that’s not a lot, it’s about 10% of the cost of the lens per year, and some may object to paying a subscription simply to work with their own files. Another issue is that the original time and date are lost (and in fact, sometimes re-ordered) when you convert your videos, though stills do retain the original time information.

Here’s a quick video with several different kinds of shots, comparing sharpness between the Canon and the iPhone. While you can argue that the iPhone is too sharp, the Canon is clearly too soft. It’s not a focus issue, but a limit of the camera and the processing pipeline. If you’re viewing on an Apple Vision Pro, click the title to play in 3D:

Stills are full resolution, but video is not

Unlike the video modes, which are capped at 3840×2160, still images can capture the R7’s full sensor: 6960×4640px. They’re sharp and they look great. Unfortunately, EOS VR Utility can’t directly convert still images into a spatial stills format that works on the Apple Vision Pro, and though it can make side-by-side images with the 3D Theater export option, that export is capped at 3840 pixels across, so you will be throwing away much of the original resolution.

To crop and/or color correct your images, use Final Cut Pro. Import your 3D Theater processed stills into FCP, add them to a UHD spatial timeline, adjust convergence, then use any color correction modules you want to. To set each clip to one frame long, press Command A, then Control-D, then 1, then hit Return. Share your timeline to an Image Sequence made of side-by-side images. For the final conversion to Spatial, use the paid app Spatialify, or the free QooCam EGO spatial video and photo converter.

This lens can certainly capture some impressive images with satisfying depth, but unfortunately the limitations of the pipeline mean that you don’t quite get the same quality when shooting video. Not all pixels are equal, and resolution isn’t everything, but there are limits, and 3D pushes right up against them.

The resolution pipeline

Pixel resolution is one of the main issues when working with exotic formats like 360°, 180° and 3D of all flavors. And as already mentioned, although the 32.5MP sensor does offer a high stills resolution of 6960×4640 pixels, the maximum video resolution that can be captured is the standard UHD 3840×2160, and that’s before the video is processed.

How does this work? The lens projects almost two complete circles across the width of the sensor, leaving the rest blank. But remember, the camera downsamples the sensor’s 6960px width to just over half that: 3840px across. Because two eyes are being recorded, only half of that is left for each eye, so at most we’d have 1920px per eye. The true resolution is probably about 1500px after cropping, but it’s blown back up to 1920px per eye with 3D Theater, or scaled down further to 1180×1180 in Spatial mode.
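To make those numbers concrete, here’s the back-of-envelope arithmetic as a short Python sketch. These are my estimates based on the figures above, not Canon’s official processing numbers:

```python
sensor_w  = 6960   # R7 sensor width in pixels (stills use all of it)
video_w   = 3840   # UHD capture width after in-camera downsampling
per_eye_w = video_w // 2   # two image circles share the video frame
spatial_w = 1180           # per-eye width of the native Spatial export

print(f"per-eye width in video: {per_eye_w}px")                  # 1920
print(f"raw sensor width lost:  {1 - video_w / sensor_w:.0%}")   # ~45%
print(f"Spatial keeps just:     {spatial_w / per_eye_w:.0%} of that")  # ~61%
```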

Here’s how the resolution is lost, in video and stills (all resolutions include both eyes)

While it’s great that the whole sensor is downsampled (not binned) if you record in “Fine” mode, a significant amount of raw pixel data (about 45%) is still lost when recording video. While this is expected behavior for most cameras, I’ve been spoilt by the open gate recording in my GH6, where I can record the full width of the sensor in a 1:1 non-standard format, at 5760×4320. NLEs are way more flexible than they once were, and it’s entirely possible to shoot to odd sizes these days. If the R7 could record the original sensor resolution, the results would be much improved.

Here’s a Spatial video comparison of stills vs video quality, and again, if you’re viewing on an Apple Vision Pro, click the title to play in 3D:

While some 3D video content can appear slightly less sharp than 2D content in the Apple Vision Pro, resolution still matters. I can definitely see the difference between Spatial 1080p content and Spatial UHD content shot on my iPhone, and although the R7’s sensor has high resolution, too much is lost on its way to the edit bay. Spatial video shot on an iPhone is not just sharper, but more detailed than the footage from this lens/body combo. 

Conclusion

The strength of this lens is that it’s connected to a dedicated camera, while its chief weakness is that the camera and its pipeline can’t quite do it justice. For videos, remember not to use the default Spatial export from EOS VR Utility, but to use 3D Theater with a 16:9 crop. Stills look good (though they could look much better) and if you’re into long exposure tricks like light painting, or close-ups of food, you’ll have a ball doing that in 3D. But the workflow is another matter. For video? It’s just not as sharp as I’d like, and that’s mostly because the camera can’t capture enough pixels when recording video.

In the future, I’d love to see a new version of EOS VR Utility. It’s necessary for correcting disparity and swapping eyes, but it shouldn’t distort images or lose time and date information, and it should be able to export Spatial content at a decent resolution. I’d also love to see a firmware update or a new body with even better support for this lens, either cleverly pre-cropping before recording, or by recording at the full sensor resolution. The high native still photo resolution is a tantalising glimpse into what could have been.

So… should you buy this lens? If you want to shoot 3D stills, if you’ve found the iPhone too restrictive, and you’ve been looking for manual controls on a dedicated camera, sure. Of course, it’s an easier decision if you already own an R7, and especially if you can use another app to process the original stills at full resolution. However, as the workflow isn’t straightforward and results aren’t as sharp or detailed as they could be, this won’t be for everyone. Worth a look, but I’d wait for workflow issues to be resolved (or a more capable body) if video is your focus.

RF-S 7.8mm STM DUAL lens $499
Canon EOS R7 $1299

PS. Here’s a sample of light painting in 3D with a Christmas tree:

]]>
https://www.provideocoalition.com/review-the-canon-rf-s-7-8mm-spatial-lens-on-the-eos-r7/feed/ 0
Fantastic fairy dust https://www.provideocoalition.com/fantastic-fairy-dust/ https://www.provideocoalition.com/fantastic-fairy-dust/#respond Sat, 21 Dec 2024 12:01:02 +0000 https://www.provideocoalition.com/?p=287269 Read More... from Fantastic fairy dust

]]>
The BFG character looks over a table as it is filled with food by a purplish cloud of magical dust, in Sainsbury’s festive commercial. Sometimes, fairy dust is not gold. But usually it’s gold.

 

Fire up the particle generators, folks: it’s the time of year when every third commercial features snow and a variation on fairy dust that isn’t usually the product of a visible fairy. Instead, it appears as effects artists do their best to create a visual expression of general-purpose seasonal cheer. We’ll concentrate on the UK, since that is your correspondent’s responsibility, and explore a few glitter-heavy examples that might not be familiar around the world. Your correspondent’s home island is an odd place for festive advertisements, since it rarely snows in December, and it certainly doesn’t snow very much in the late August weeks during which most of these things are actually shot. Almost all of them, though, reach for the buckets of shredded paper – or the controls for Red Giant’s Particular software. Truly, the festive season is all about floating motes. Let us know in the comments if this is a global phenomenon, and whether it applies to other traditional celebrations, too.

Congealed innards

It’s a sign of the economic times that low-cost UK supermarket chain Aldi is doing so well. Despite the company’s cost-controlled approach, though, its yuletide promotional effort gives us two particle effects in the opening shot, setting a high standard for floating glitter in this year’s lineup. Otherwise, Aldi is determinedly traditionalist, giving us a widescreen presentation apparently shot on spherical lenses (the amount of CG integration might have made anamorphics a bit of a millstone). The teal and orange look is a safe choice, especially as it’s so easy to maintain when your protagonist is a carrot.

By comparison, Asda suffers a mild handicap in that its corporate colours are a decidedly antifestive black and green. It’s therefore difficult to depict the logo as a welcoming island of light in a winter scene, although the opening scene is a wonderful example of the slowly-descending crane opener which makes a valiant effort to do just that.

A sleigh careers dangerously down a snowbound street at night, trailing golden glitter.

Perhaps by way of compensation, Asda takes a more ambitious approach to camera equipment. There are some striking similarities between the two narratives, both of which feature CG creatures on a military operation, but Asda’s exists in a cinemascope-style frame shot on true anamorphics. If particle-effect snow is done with sufficient enthusiasm, perhaps as a full 3D render, it can be hard to tell from real, but it seems possible that Asda chose to reinforce its dedication to the traditional by deploying real fake snow, if there is such a thing.

Marks & Spencer shows us a world in which Dawn French invites half the postcode round for a pre-festivities party which must have felt crowded even in the multimillion-pound London townhouse in which it appears to take place. The ninety-second piece (a popular duration, for some reason) also gives us a fairy in person, provoking plenty of work for Particular. Even so, the show leans away from the teal and orange toward a look that’s actually fairly desaturated but for the deliberately chosen spot colours of red and green.

The action opens in a world where people are taught to carefully coordinate not only their clothing but also their purchases – look for the red hat, red flowers and red shopping bag from the very first frame. Look also for the interiors which take place in a room that someone’s painted a dark brownish-purple, though the production designer probably used the term “oxblood”. This is a particularly good example of why movies rarely look like the real world, because in the real world people tend to avoid living in an interior the colour of congealed innards.

Gingerbread cannibalism

Dawn’s ersatz house is oxblood and dark green, which is festive but impractical. It would probably look a bit less cheerful around a scene set during a midsummer ball, but it does serve to remind nascent filmmakers that great cinematography can begin with a can of paint.

For those of us who rarely encounter the opportunity to build a dining room bigger than the average UK apartment, consider Tesco, which takes a determinedly real-world approach to its seasonal promotion. Mostly shot handheld, everything seems amazingly realistic until people (and foxes) start turning into gingerbread. There’s a certain body-horror aspect to a gingerbread boy eating a gingerbread video game, but there’s certainly a preponderance of family values and a standard-issue big, soft, warm top light over a table full of way too much food.

Now this is quality fairy dust. Field turbulence, motion blur, and a strong contribution from a glow and flare plugin, if your narrator is any judge.

There are even particle effects, though they’re used to create a shower not of fairy dust, but gingerbread cookies, candy canes, and other objects evocative of saturnalia. Again, the photography is fairly straightforward – spherical 16:9 – and beyond the cookie people, the production design is restrained to maintain that real-world feeling. Because sub-f/1 lenses exist, it’s dangerous to claim something was shot on a really big chip, but the depth of field here screams large format. The highly mobile car interior at 0’48” would have needed either a camera ending in “mini” or a DSLR on a stick, but of course that doesn’t preclude a sensor the size of a playing card.

Perhaps the most technically interesting of this year’s crop is the commercial produced by Sainsbury’s. Featuring the BFG character from the film of the same name, the company takes a leaf from Me to You’s now-classic stop motion promo and animates its lead character only every other frame for a handmade look.

Not stop motion

The thing is, the BFG, unlike Tatty Teddy, is computer generated, not stop motion. He’s also inserted into live-action scenes. The combination works surprisingly well, especially considering some of that live action has camera movement. Tracking CG animation at 12.5 frames per second into a live-action scene shot at 25 inevitably creates a one-frame disparity in the motion tracking at least every other frame, visible when stepping through frame by frame. Still, careful choice of subject and animation creates a result that clearly embodies the hand-made aesthetic they were aiming for.
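For the curious, the “on twos” effect is easy to approximate: hold each frame for two. Here’s a rough sketch in Python with OpenCV, assuming a hypothetical 25 fps render called “cg_render.mov”; the real commercial animates the character at 12.5 fps before compositing, which this only imitates:

```python
import cv2

cap = cv2.VideoCapture("cg_render.mov")          # hypothetical 25 fps source
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("cg_on_twos.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))

index, held = 0, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % 2 == 0:   # keep every other frame...
        held = frame
    out.write(held)      # ...and hold it for two output frames
    index += 1

cap.release()
out.release()
```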

It’s a rare moment of innovation in a field which often doesn’t attract many new ideas. So’s the cloud of magical vapour that appears in one scene. Fairy dust is inevitably golden in colour, but Sainsbury’s pushes the boat out with a rainbow of hues. The titular Giant, meanwhile, is more or less a celebrity endorsement, and it’s not clear how many of the advertised seasonal treats represent a fair serving for someone who’s thirty-five feet tall.

Finally, there’s upmarket chain Waitrose, which is so famous for producing story-based commercials this time of year that their releases are national news. They hired Matthew Macfadyen. But they didn’t hire anyone with a copy of After Effects to do any particle work, so it’s hard to take seriously.

]]>
https://www.provideocoalition.com/fantastic-fairy-dust/feed/ 0
Stream Deck now controls RØDECaster Video mixer/switcher https://www.provideocoalition.com/stream-deck-now-controls-rodecaster-video-mixer-switcher/ https://www.provideocoalition.com/stream-deck-now-controls-rodecaster-video-mixer-switcher/#respond Fri, 20 Dec 2024 19:15:55 +0000 https://www.provideocoalition.com/?p=287283 Read More... from Stream Deck now controls RØDECaster Video mixer/switcher

]]>

Thanks to a new free plugin for Stream Deck, the popular control device can now control the RØDECaster Video mixer/switcher (reviewed in several articles here). Also, an update to the free RØDE Capture app now allows connecting the live output from an iPhone or iPad to an input source of the RØDECaster Video. When doing that, you can use both the primary and self-portrait («selfie») cameras simultaneously, if desired. To have a Stream Deck control your RØDECaster Video, the RØDECaster Video must be connected to the same local area network as the computer that hosts the Stream Deck. Then you must download the free RØDECaster Video plugin from Elgato’s Stream Deck marketplace. See that and more in the free video from RØDE below:

 

 

Read this article in good Castilian Spanish

Stream Deck ahora controla el mezclador RØDECaster Video

(Re-)Subscribe for upcoming articles, reviews, radio shows, books and seminars/webinars

Stand by for upcoming articles, reviews, books and courses by subscribing to my bulletins.

In English:

In Spanish (castellano):

Most of my current books are at books.AllanTepper.com, and also visit AllanTepper.com and radio.AllanTepper.com.

FTC disclosure

RØDE has not paid for this article. RØDE has sent Allan Tépper units for review. Some of the manufacturers listed above have contracted Tépper and/or TecnoTur LLC to carry out consulting and/or translations/localizations/transcreations. So far, none of the manufacturers listed above is/are sponsors of the TecnoTur, BeyondPodcasting, CapicúaFM or TuSaludSecreta programs, although they are welcome to do so, and some are, may be (or may have been) sponsors of ProVideo Coalition magazine. Some links to third parties listed in this article and/or on this web page may indirectly benefit TecnoTur LLC via affiliate programs. Allan Tépper’s opinions are his own. Allan Tépper is not liable for misuse or misunderstanding of information he shares.

]]>
https://www.provideocoalition.com/stream-deck-now-controls-rodecaster-video-mixer-switcher/feed/ 0