Review: the Canon RF-S 7.8mm Spatial Lens on the EOS R7

Video professionals looking to create 3D content for the Apple Vision Pro, for other VR devices, or for 3D-capable displays, have only a few camera options to choose from. At the high end, syncing two cameras in a beam-splitter rig has been the way to go for some time, but there’s been very little action in the entry- or mid-level 3D camera market until recently. Canon, working alongside Apple, have created three dual lenses, and one is specifically targeted at Spatial creators: the RF-S 7.8mm STM DUAL lens, for the EOS R7 (APS-C) body.

Two lenses, close up in a single lens housing on an R7

From the side, it looks more or less like a normal lens. You can add 58mm ND or other filters if you wish, a manual focus ring on the front works smoothly, and autofocus is supported. From the front, you’ll see two small spherical lenses instead of a single large circle, and on the back of the camera, you’ll see a separate image for the left and right eyes. Inter-pupillary distance is small, at about 11.6mm, but this isn’t necessarily a problem for the narrower field of view of Spatial.

Spatial ≠ Immersive

As a reminder, the term “spatial” implies a narrow field of view, much like traditional filmmaking, but 3D rather than 2D. Spatial does not mean the same thing as Immersive, though! Immersive is also 3D, but uses a much wider 180° field of view. It’s very engaging, but if you’re used to regular 2D filmmaking, shifting to Immersive will have a big impact on how you shoot, edit and deliver your projects. The huge resolutions required also bring their own challenges.

If you do want to target Immersive filmmaking, one of Canon’s other lenses would suit you better. The higher-end RF 5.2mm f/2.8L Dual Fisheye is for full-frame cameras, while the RF-S 3.9mm f/3.5 STM Dual Fisheye suits APS-C crop-sensor cameras. On both of these, the protruding front elements mean that filters cannot be used, and due to the much wider field of view, a more human-like inter-pupillary distance is used. While I’d like to review a fully Immersive workflow in the future, my focus here is on Spatial.

Handling

While it’s been a while since I regularly used a Canon camera, the brand was my introduction to digital filmmaking back in the “EOS Rebel” era. Today, the R7’s interface feels familiar and easy to navigate. The flip-out screen is helpful, the buttons are placed in distinct positions to encourage muscle memory, and the two dials allow you to tweak settings by touch alone. It’s a solid mid-range body with dual SD slots, and while the slightly mushy buttons don’t give the same firm tactile response as those on my GH6, it’s perfectly usable.

Of note is the power switch, which moves from “off”, through “on”, to a “video” icon. That means that on the R7, “on” really means “photos”: though you can record videos in this mode, you can’t record in the highest quality “4K Fine” mode. If you plan to switch between video and stills capture, you’ll need to master this one, but if you only want to shoot video, just move the switch two notches across. Settings are remembered separately for the two modes, so remember to adjust aperture and so on if you’re regularly switching.

A rear view of the R7, with two circular images in the viewscreen

Dynamic range is good (with 10-bit CLog 3 on offer) and if you shoot in HEIF, stills can use an HDR brightness range too. That’s a neat trick, and I hope more manufacturers embrace HDR stills soon.

Since the minimum focal distance is 15cm, it’s possible to place objects relatively close to the camera, and apparently the strongest effect is between 15cm and 60cm. That said, be sure to check your final image in an Apple Vision Pro, as any objects too close to the camera can be quite uncomfortable to look at. It’s wise to record multiple shots at slightly different distances to make sure you’ve captured a usable shot. 

While autofocus is quick, it’s a little too easy to just miss focus, especially when shooting relatively close subjects at wider apertures. The focusing UI can take a little getting used to, and if the camera fails to focus, you may need to try a different AF mode, or just switch to manual focus. That’s easy enough, using a switch on the body or an electronic override, and while the MF mode does have focus peaking, it can’t be activated in AF mode.

The rear viewscreen on the R7, showing two lenses R, then L

Another issue is that the viewfinder image is quite small, showing both image circles side by side, so you’ll struggle to see what you’re framing and focusing on without an external monitor connected to the micro HDMI port. However, when you do plug in a monitor, the touchscreen deactivates, and (crucially!) it’s no longer possible to zoom in on the image. It’s fair to say that I found accurate focusing far more difficult than I expected. For any critical shots, I’d recommend refocusing and shooting again, just in case, or stopping down.

Composing for 3D

Composing in 3D is a lot like shooting in 2D with any other lens, except for all the weird ways in which it isn’t. Because the image preview is two small circles, it’s hard to visualize exactly what the final image will look like after cropping and processing. If you don’t have a monitor, you’ll want to shoot things a little tighter and a little wider to cover yourself.

To address the focus issue, the camera allows you to swap between the eyes when you zoom in, to check focus on each angle independently, though this is only possible if a monitor is not connected. Should you encounter a situation in which one lens is in focus and the other isn’t, use the “Adjust” switch on the lens to set the focus on the left or right angle alone.

The Adjust switch on the front allows for per-eye focus correction

Importantly, because 3D capture is more like capturing a volume than carefully selecting a 2D image, you’ll be thinking more about where everything you can see sits in depth. And because the 3D effect falls off for objects that are too far away, you’ll spend your time trying to compose a frame with both foreground and background objects.

Some subjects work really well in 3D — people, for example. We’re bumpy, move around a bit, and tend to be close to the camera. Food is good too, and of all the hundreds of 3D clips I’ve shot over the last month or so, food is probably the subject that’s been most successful. The fact that you can get quite close to the camera means that spatial close-ups and near-macro shots are easier here than on the latest iPhone 16 series, but remember that you can’t always tell how close is too close.

Field tests

To run this camera through its paces, I took it out for a few test shoots, and handling (except for focus) was problem-free. Battery life was good,  there were no overheating issues, and it performed well.

To compare, I also took along my iPhone 16 Pro Max (using both the native Camera app at 1080p and the Spatial Camera app at UHD) and the 12.5mm ƒ/12 Lumix G 3D lens on my GH6. This is a long-since discontinued lens which I came across recently at a bargain price, and in many ways, it’s similar to the new Canon lens on review here. Two small spherical lenses are embedded in a regular lens body, positioned close together, and both left and right eyes are recorded at once.

There’s a difference, though. While the Canon projects two full circular images on the sensor, the Lumix lens projects larger circles, overlapping in the middle, with some of the left and right edges cropped off. More importantly, because the full sensor data can be recorded, this setup captures a higher resolution (2400px wide per eye, and higher vertically if you want it) than the Canon can.

The Lumix 3D lens uses a lot more of the sensor area to capture its image, at a higher video resolution of 5760×4320

That’s not to say the image is significantly better on the Lumix — the Canon lens is a far more flexible offering. The Lumix 3D lens is softer, with a far more restrictive fixed aperture and 1m minimum focus distance. Since this isn’t a lens that’s widely available, it’s not going to be something I’d recommend in any case, but outdoors, or under strong lighting — sure, it works.

Interpreting the dual images

One slight oddity of dual-lens setups is that the images are not in the order you might expect, but are shown with the right eye on the left and the left eye on the right. Why? Physics. When a lens captures an image, it’s flipped, both horizontally and vertically, and in fact, the same thing happens in your own eyes. Just like your brain does, a camera flips the image before showing it to you, and with a normal lens, the image matches what you see in front of the camera. But here, both lenses share a single sensor, so when the camera flips the combined frame to correct it, the image from the left lens ends up on the right side of the frame, and vice versa.

While the R-L layout is useful for anyone who prefers “cross-eye” viewing to match up two neighbouring images, it makes life impossible for those who prefer “parallel” viewing. If the images were in a traditional L-R layout, you could potentially connect a monitor and use something similar to Google Cardboard to isolate each eye and make it easier to see a live 3D image. As it is, you’ll probably have to wing it until you get back into the edit bay, and you will have to swap the two eyes back around to a standard L-R format before working with them.

Processing the files — as best you can

Canon’s EOS VR Utility is designed to process the footage for you, swapping the eyes back around, performing parallax correction, applying LUTs, and so on. It’s not pretty software, but it’s functional, at least if you use the right settings. While you can export to 3D Theater and Spatial formats, Spatial isn’t actually the best choice. The crop is too heavy, the resolution (1180×1180) is too low, and the codec is heavily compressed.

EOS VR Utility, converting video to the 3D Theater format, before choosing a 16:9 crop

Instead, video professionals should export to the 3D Theater format, with a crop to 16:9. (Avoid the 8:9 crop, as too much of your image will be lost from the sides.) The 3D Theater format performs the necessary processing, but as it converts to the ProRes codec instead of MV-HEVC, most issues from generation loss can be avoided. Resolution will be 3840 across (including both eyes) and though this isn’t “true” resolution, it’s a significant jump up from the native Spatial option. When you import these clips into FCP, set Stereoscopic Conform to Side by Side in the Info panel, and you’re set.

Stereoscopic Conform should be set to Side by Side in FCP

(Note that if space is a concern, 3D Theater can use the H.264 codec instead of ProRes, but when you import these H.264 clips into FCP, they’re incorrectly interpreted as “equirectangular” 360° clips, and you’ll have to set the Projection Mode to Rectangular as well as setting Stereoscopic Conform to Side by Side.)

A third option: if you’d prefer to avoid all this processing, it is possible (though not recommended) to work with the native camera files. After importing to FCP, you’ll first need to set the Stereoscopic Conform metadata to “Side by Side”, then add the clips to a Spatial/Stereo 3D timeline. There, look for the Stereoscopic section in the Video Inspector, check the Swap Eyes box, and set Convergence to around 7 to get started.

Left and Right images can be swapped with the Info button — note the vertical disparity

With this approach, you’ll have to apply cropping, sharpening and LUTs manually, which would be fine, but the deal killer for me is that it’s very challenging to fix any issues with vertical disparity, in which one lens is very slightly higher than the other. That’s the case for this Canon lens, and also my Lumix 3D lens — presumably it’s pretty tricky to line them up perfectly during manufacturing. Although the EOS VR Utility corrects for this, unprocessed shots can be uncomfortable to view. Comparing the original shots with the 3D Theater processed shots, I’ve also noticed a correction for barrel distortion, but to my eye it looks a little heavy-handed; the processed shots have a slight pincushion look.

A crop of the left side of an original video on the left, and the same part of the processed video on the right — note the curved lines that should be straight

It’s worth noting that the EOS VR Utility is not free; to process any clips longer than 2 minutes, you’ll need to pay a monthly (US$5) or yearly (US$50) subscription. While that’s not a lot, it’s about 10% of the cost of the lens per year, and some may object to paying a subscription simply to work with their own files. Another issue is that the original time and date are lost (and in fact, clips are sometimes re-ordered) when you convert your videos, though stills do retain the original time information.

Here’s a quick video with several different kinds of shots, comparing sharpness between the Canon and the iPhone. While you can argue that the iPhone is too sharp, the Canon is clearly too soft. It’s not a focus issue, but a limit of the camera and the processing pipeline. If you’re viewing on an Apple Vision Pro, click the title to play in 3D:

Stills are full resolution, but video is not

Unlike the video modes, capped at 3840×2160, the R7’s full sensor can be captured to still images: 6960×4640px. They’re sharp and they look great. Unfortunately, EOS VR Utility can’t convert still images directly into a spatial stills format that works on the Apple Vision Pro, and though it can make side-by-side images with the 3D Theater export option, that resolution is capped at 3840 pixels across, so you’ll be throwing away much of the original resolution.

To crop and/or color correct your images, use Final Cut Pro. Import your 3D Theater processed stills into FCP, add them to a UHD spatial timeline, adjust convergence, then apply any color correction you want. To set each clip to one frame long, press Command-A to select everything, then Control-D, type 1, and hit Return. Share your timeline to an Image Sequence made of side-by-side images. For the final conversion to Spatial, use the paid app Spatialify, or the free QooCam EGO spatial video and photo converter.

This lens can certainly capture some impressive images with satisfying depth, but unfortunately the limitations of the pipeline mean that you don’t quite get the same quality when shooting video. Not all pixels are equal, and resolution isn’t everything, but there are limits, and 3D pushes right up against them.

The resolution pipeline

Pixel resolution is one of the main issues when working with exotic formats like 360°, 180° and 3D of all flavors. And as already mentioned, although the 32.5MP sensor does offer a high stills resolution of 6960×4640 pixels, the maximum video resolution that can be captured is the standard UHD 3840×2160, and that’s before the video is processed.

How does this work? The lens projects two complete image circles across almost the full width of the sensor, leaving the rest blank. But remember, the camera downsamples the sensor’s 6960px width to just over half that: 3840px across. Because two eyes are being recorded, only half of that is left for each eye, so at most we’d have 1920px per eye. The true resolution is probably about 1500px after cropping, but it’s blown back up to 1920px per eye with 3D Theater, or scaled down further to 1180×1180 in Spatial mode.

Here’s how the resolution is lost, in video and stills (all resolutions include both eyes)
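To make that arithmetic concrete, here’s a rough Python sketch of the pipeline described above. The crop factor used to reach the ~1500px estimate is my own assumption for illustration, not a Canon figure.

# Rough sketch of the R7 + RF-S 7.8mm video resolution pipeline described above.
SENSOR_WIDTH = 6960        # full sensor width in pixels (stills)
VIDEO_WIDTH = 3840         # UHD width the camera records in video mode
SPATIAL_WIDTH = 1180       # per-eye width of EOS VR Utility's Spatial export

per_eye_recorded = VIDEO_WIDTH // 2            # two eyes share the frame: 1920px each
width_lost = 1 - VIDEO_WIDTH / SENSOR_WIDTH    # ~0.45, i.e. ~45% of the sensor width discarded

ASSUMED_CROP = 0.78                            # hypothetical crop factor, not a Canon spec
usable_per_eye = round(per_eye_recorded * ASSUMED_CROP)   # roughly 1500px of "true" detail

print(f"Recorded per eye: {per_eye_recorded}px")
print(f"Sensor width lost in the downsample: {width_lost:.0%}")
print(f"Estimated usable detail per eye: ~{usable_per_eye}px")
print(f"Spatial export per eye: {SPATIAL_WIDTH}px (a further downscale)")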

While it’s great that the whole sensor is downsampled (not binned) if you record in “Fine” mode, a significant amount of raw pixel data (about 45% of the sensor’s width) is still lost when recording video. While this is expected behavior for most cameras, I’ve been spoilt by the open gate recording in my GH6, where I can record the full width of the sensor in a non-standard format with a 1:1 pixel readout, at 5760×4320. NLEs are way more flexible than they once were, and it’s entirely possible to shoot to odd sizes these days. If the R7 could record the original sensor resolution, the results would be much improved.

Here’s a Spatial video comparison of stills vs video quality, and again, if you’re viewing on an Apple Vision Pro, click the title to play in 3D:

While some 3D video content can appear slightly less sharp than 2D content in the Apple Vision Pro, resolution still matters. I can definitely see the difference between Spatial 1080p content and Spatial UHD content shot on my iPhone, and although the R7’s sensor has high resolution, too much is lost on its way to the edit bay. Spatial video shot on an iPhone is not just sharper, but more detailed than the footage from this lens/body combo. 

Conclusion

The strength of this lens is that it’s connected to a dedicated camera, while its chief weakness is that the camera and its pipeline can’t quite do it justice. For videos, remember not to use the default Spatial export from EOS VR Utility, but to use 3D Theater with a 16:9 crop. Stills look good (though they could look much better) and if you’re into long exposure tricks like light painting, or close-ups of food, you’ll have a ball doing that in 3D. But the workflow is clunky, and for video? It’s just not as sharp as I’d like, and that’s mostly because the camera can’t capture enough pixels when recording video.

In the future, I’d love to see a new version of EOS VR Utility. It’s necessary for correcting disparity and swapping eyes, but it shouldn’t distort images or lose time and date information, and it should be able to export Spatial content at a decent resolution. I’d also love to see a firmware update or a new body with even better support for this lens, either cleverly pre-cropping before recording, or by recording at the full sensor resolution. The high native still photo resolution is a tantalising glimpse into what could have been.

So… should you buy this lens? If you want to shoot 3D stills, if you’ve found the iPhone too restrictive, and you’ve been looking for manual controls on a dedicated camera, sure. Of course, it’s an easier decision if you already own an R7, and especially if you can use another app to process the original stills at full resolution. However, as the workflow isn’t straightforward and results aren’t as sharp or detailed as they could be, this won’t be for everyone. Worth a look, but I’d wait for workflow issues to be resolved (or a more capable body) if video is your focus.

RF-S 7.8mm STM DUAL lens $499
Canon EOS R7 $1299

PS. Here’s a sample of light painting in 3D with a Christmas tree:

Quick Look: Blackmagic URSA Cine Immersive

Blackmagic’s URSA Cine Immersive camera has transitioned from an announcement into something you can actually pre-order, with deliveries set to start in Q1 2025. At 8160×7200px per eye, capturing 3D at 90fps with 16 stops of dynamic range, this is hands-down the highest spec 3D Immersive rig ever produced — we’ve never seen this many pixels on a production camera. Of course, not all pixels are equal, but as you increase pixel count and sensor size, it becomes more and more difficult to get good results. Most companies in this league don’t even try.

So why are the specs so important? Let’s recap.

The URSA CINE Immersive, from the front

 

Great Immersive content needs a lot of pixels

I’ve spoken here before about the transcendent experience you can have in the Apple Vision Pro, and since I wrote that, I had it all over again watching Wild Hearts by The Weeknd. Great 2D films can move hearts, but great Immersive films can make your jaw drop, then drop further. There’s a shot in that music video which made me laugh in astonishment — it’s just magic.

While the immersion is very important, a big part of the experience is how good it looks, and when you need to capture a field of view many times wider than a standard lens does, you really need to step way beyond the bounds of UHD. If you’ve ever wondered why you need to capture more than 4K, here’s one great reason. Resolution is one reason why great-looking 360° video is so hard to come by in its native form; the pixels just end up stretched too thinly. Remember, even Canon’s executives believe you need a lot more.

“In order to reproduce video for Vision Pro, you need to have at least 100 megapixels… So at the moment, we can’t cater to that level of a requirement. But what I presume what companies who will be providing images for the Vision Pro will be required to have 100-megapixels with 60 frames per second.”

— Yasuhiko Shiomi, Advisory Director and Unit Executive of the Image Communication Business Operations, in an interview with PetaPixel

Running the maths, an 8K camera has 31.5 megapixels. The URSA Cine Immersive offers 117.5 megapixels with both eyes combined, at a higher frame rate. We’re there.
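For anyone who wants to check those figures, here’s the quick sum in Python; the 100-megapixel threshold is the one from the Canon quote above.

# Pixel counts for the URSA Cine Immersive, per the specs quoted above.
eye_width, eye_height = 8160, 7200

per_eye_mp = eye_width * eye_height / 1_000_000   # about 58.75 MP per eye
both_eyes_mp = 2 * per_eye_mp                     # about 117.5 MP combined

CANON_SUGGESTED_MP = 100                          # from the interview quoted above
print(f"Per eye: {per_eye_mp:.2f} MP, both eyes: {both_eyes_mp:.2f} MP")
print(f"Clears the suggested 100 MP bar: {both_eyes_mp >= CANON_SUGGESTED_MP}")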

So how is the best-looking content on the Apple Vision Pro being made? Apple has the budget to assemble completely custom camera rigs, and while some of their content looks utterly amazing, technical limitations mean that some of their Immersive content looks merely “good”, because you can see the pixels. The excellent Apple Vision Pro app Explore POV uses a Canon rig with their 180° VR lens, upscaling the 8K output to 16K, and I suspect that setup will remain relevant for anyone with a smaller budget or who’s planning to travel extensively. After all, the URSA Cine Immersive probably isn’t something many people would want to lug up a mountain.

Let’s dissect the press release and dive into the key specs.

As seen from the right

Pulling apart all the key points

The sensor delivers 8160 x 7200 resolution per eye with pixel level synchronization and an incredible 16 stops of dynamic range, so cinematographers can shoot 90fps 3D immersive cinema content…

Because Genlock is far from universal, most commonly used mirrorless cameras can’t easily be rigged up in a sub-frame-accurate dual-camera setup — and a beam-splitter just adds to the complications.

…to a single file.

This massively simplifies the post-production process as you don’t need to manage separate files for the left and right eyes, or spend time mashing them together into new ones. In fact:

The new Blackmagic RAW Immersive file format is designed to make it simple and easy to work with immersive video within the whole post production workflow, and includes support for Blackmagic global media sync. Blackmagic RAW files store camera metadata, lens data, white balance, digital slate information and custom LUTs to ensure consistency of image on set and through post production.

In a world where not all cameras even name their files in a predictable, useful way, it’s refreshing to see metadata given the importance it deserves. Immersive is complex enough, and this will make production easier.

The custom lens system is designed for URSA Cine’s large format image sensor with extremely accurate positional data that’s read and stored at time of manufacturing. This immersive lens projection data — which is calibrated and stored on device — then travels through post production in the Blackmagic RAW file itself.

Small differences between two lenses are inevitable, so the fact that the camera itself can calibrate and record those differences is huge. Both the commercially available 3D lenses I’ve worked with have included some level of vertical disparity, and it’s essential to correct for it.

A new version of DaVinci Resolve Studio set to release in Q1 next year includes powerful new features to help filmmakers edit, color grade, and produce Apple Immersive Video shot on the URSA Cine Immersive camera for Apple Vision Pro. Key features include a new immersive video viewer, which will let editors pan, tilt, and roll clips for viewing on 2D monitors or on Apple Vision Pro for an even more immersive editing experience.

An immersive viewer in Resolve is coming soon for 2D screens, but can we use the Apple Vision Pro during the edit? Looking down the page…

Monitoring on Apple Vision Pro from the DaVinci Resolve Studio timeline.

Yes! This is a huge step up, and it’s going to make a massive difference in how Immersive is edited. It’s not 100% clear if it’ll be possible to view the camera feed live in the Apple Vision Pro, but we’ll see. It’s smart to focus on just one eye on the side display, with a toggle between left and right.

A crop of the side view showing the 5″ monitor displaying a single eye at a time

Blackmagic URSA Cine Immersive comes with 8TB of high performance network storage built in, which records directly to the included Blackmagic Media Module, and can be synced to Blackmagic Cloud and DaVinci Resolve media bins in real time. This means customers can capture over 2 hours of Blackmagic RAW in 8K stereoscopic 3D immersive, and editors can work on shots from remote locations worldwide as the shoot is happening.

Hmm… 8TB for over two hours — let’s run some quick rough maths on a napkin (8000/2.25/60/60) to find a data rate of about 1GB/sec, or roughly 8000Mbps. When a camera generates files this big, at data rates this high, it’s probably wise to include a qualified storage system. If you’ve been looking for a clear example of why Thunderbolt 5’s 80Gbps+ speeds aren’t overkill, this is it.
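In Python, that napkin maths looks like this; the 2.25-hour figure is just my rough reading of “over 2 hours”, not a Blackmagic number.

# Back-of-napkin data rate: 8TB of storage over roughly 2.25 hours of recording.
storage_gb = 8000              # 8TB module, in decimal gigabytes
record_hours = 2.25            # rough reading of "over 2 hours"

gb_per_second = storage_gb / (record_hours * 60 * 60)   # about 0.99 GB/sec
megabits_per_second = gb_per_second * 8 * 1000          # about 7,900 Mbps

print(f"~{gb_per_second:.2f} GB/sec, ~{megabits_per_second:,.0f} Mbps")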

Finally, here are the key bullet points:

Blackmagic URSA Cine Immersive Features

  • Dual custom lenses for shooting Apple Immersive Video for Apple Vision Pro.
  • Dual 8160 x 7200 (58.7 Megapixel) sensors for stereoscopic 3D immersive image capture.
  • Massive 16 stops of dynamic range.
  • Lightweight, robust camera body with industry standard connections.
  • Generation 5 Color Science with new film curve.
  • Each sensor supports 90 fps at 8K captured to a single Blackmagic RAW file.
  • Includes high performance Blackmagic Media Module 8TB for recording.
  • High speed Wi-Fi, 10G Ethernet or mobile data for network connections.
  • Includes DaVinci Resolve Studio for post production.

DaVinci Resolve Immersive Features

  • Monitoring on Apple Vision Pro from the DaVinci Resolve Studio timeline.
  • Edit Blackmagic RAW Immersive video shot on Blackmagic URSA Cine Immersive.
  • Immersive video viewer for pan, tilt and roll.
  • Complete post production workflow for immersive with edit, color, audio and graphics.
  • Export and deliver native files for viewing on Apple Vision Pro.

Availability and Price

Blackmagic URSA Cine Immersive is available to pre order now direct from Blackmagic Design offices worldwide for $29,995 (US). Delivery will start in late Q1 2025. URSA Cine Immersive will be available from Blackmagic Design resellers later next year.

At US$30K the URSA Cine Immersive’s price is the same as its non-immersive sibling, the URSA Cine 17K 65, and while that’s a big sticker for regular humans considering a mirrorless purchase, it’s a drop in the bucket on a film set. This will be a rental for most independent shooters, but it’s certainly within budget for production companies. It’s quite a trick to make an Apple Vision Pro look cheap, but in this market, the perspective is more likely to be “the lenses are included for free?”

Seen from the left

Should you jump in?

If you’ve been waiting for the best Immersive camera and you have a capable crew, this is the most capable camera we’ve seen by some margin. I would expect to see new models of the Apple Vision Pro become cheaper and the market for Immersive content to grow; it’s up to you if you want to get in now or wait to see how the market develops.

Either way, I think it’s fair to say that Blackmagic have a significant lead over other camera manufacturers here, and I’d be amazed if anyone else can get near these specs for anywhere near this price any time soon. In the same way that Apple Vision Pro is on the bleeding edge of display tech, the Blackmagic URSA Cine Immersive is on the bleeding edge of camera tech.

After all, the non-Immersive URSA Cine 17K 65 (check out our story here) has a colossal resolution of 17520×8040 on its 65mm sensor, while most other cinema cameras use, at most, an 8K sensor. The renowned Alexa 65 offers 6560×3102px. But Immersive requires more data than feature films do, and full, for-real-life, just-like-being-there reality is harder to capture than the beautiful, flat, 2D unreality of cinema. Time will tell whether this camera can actually transport you somewhere else, but if it can’t, it’s not for lack of trying. 

If you’d like to draw your own conclusions, here’s the detailed press release. With luck, I’ll be able to bring you a hands-on review next year.

From the rear, showing the 5″ monitor on the side

Q&A: Delivering Good Spatial Video

I hear that the new Final Cut Pro 11 supports Spatial video editing! What’s that?

Spatial video is stereoscopic 3D video shot with a “normal” field of view, like most flat 2D videos, and stored in the MV-HEVC format. At its core, though, it’s 3D stereoscopic video, which could be viewed on another head-mounted device like a Meta Quest, or even on a 3D monitor. You can also deliver one angle of a Spatial video to a regular 2D platform, so experimentation with this newer format is relatively low-risk. Spatial is also supported in DaVinci Resolve 19.1.

The author, excited, at Apple Park, in stereoscopic 3D

Does Spatial also mean a fuzzy border?

Not always. Most people watching Spatial will be using an Apple Vision Pro, and if a Spatial video has the correct metadata, it will usually be shown with a fuzzy border. That fuzziness can help avoid Stereo Window Violations, where objects approach the edge of frame and become uncomfortable, so in general, it’s a pretty good idea. However, a fuzzy border is not used for Spatial videos shown on Vimeo, nor is it used in Disney+ when watching 3D movies.

Does Spatial mean the same as Immersive?

Nope. Spatial has a narrow field of view, while Immersive (usually) has a 180° field of view. Immersive is extremely impressive, but it requires the very best cameras, huge resolutions, demanding delivery systems, and a whole different style of filmmaking. Spatial is far easier to handle, so let’s focus there for now.

What can you shoot Spatial videos on?

The simplest workflow is to use an iPhone 15 Pro or Pro Max, or any kind of iPhone 16. In the Camera app, switch to Spatial, make sure you’re in landscape mode, and you’re away. Right now, the maximum resolution is 1080p, and the only frame rate is 30fps. One major limitation is that one of your two angles is taken from a crop of the ultra-wide lens, making that angle softer, and noisier in low light.

Although the 16 Pro and Pro Max shifted to a 48MP ultra-wide sensor, the video quality hasn’t changed. My best guess is that it’s not possible to take a direct 1:1 pixel feed from the center of this sensor, but instead, a binned 4K feed is being captured, and then cropped down. Bummer.

Can third-party apps on an iPhone do any better?

Yes, but with some restrictions. Although some third-party apps do enable 4K Spatial recording, which does look sharper, those apps either don’t enable stabilization at all (Spatialify, SpatialCamera), or the stabilization can differ between the two lenses (Spatial Camera). In a shot with significant movement, the disparity between the two angles can make a shot unwatchable, but to be fair, jitter can do the same.

Clockwise from top, here’s SpatialCamera, Spatialify, and Spatial Camera

It’s important to note that resolution isn’t everything, and a good camera shooting 1080p will look nicer to most viewers than a 4K that’s oversharpened with a constrained bitrate. Right now, all the current iOS options have a regular, sharp, “phone” look. If Log with Spatial recording arrives (as Spatial Camera’s developer has offered) it could make a huge difference in quality.

Are there any good lenses you can use with mirrorless cameras?

Very few. Canon have a new lens for Spatial recording, and a couple more for 180° Immersive recording. I’m planning to take a look at that system soon, and it’s the only option Apple’s actively recommending.

The Canon Spatial lens on the R7 — look for an in-depth review soon

Because the APS-C sensor Canon R7 tops out at 4K across, shared between the left and right angles, the maximum resolution is 1080p Spatial video, though you can expect it to look much better than most iPhone footage. There is more vertical headroom, so you could potentially deliver a taller, squarer image than 16:9 if you wanted to.

I have a Micro-Four-Thirds body — any lenses for that?

Lumix made a 3D lens over a decade ago, but it was only ever intended for stills. If you put some tape over the pins, you can use it to record video with a modern body like a GH6 or a GH7 in 5760×4320 @ 30fps or 5728×3024 @ 59.94fps. If you do go this route, realise that the minimum focus is 60cm with a fixed aperture of ƒ/12, and though this is not a modern lens, it’s not bad if you sharpen it up a little.

The ancient, hard-to-find Lumix 3D lens

Outdoors, you can actually get some decent results, with a resolution of around 2400px across per eye after processing, and you can deliver square output if you want to. Right now, the preferred native square output from FCP 11 is 2200x2220px, which actually works out OK. Here’s some demo footage from this camera. If you’re on an Apple Vision Pro with visionOS 2.2 or later, click the title to open it in 3D. If that fails, click the Vimeo logo to open in Safari, then choose OPEN in the top right:

Are there any other 3D cameras you can recommend?

Not yet. Acer showed the SpatialLabs Eyes Stereo Camera back in June, promising delivery in September, but we haven’t seen it yet. It’s 4K per eye and relatively cheap at $550, but very few people have actually used one. Sample clips look good, though, and it’s trivial to download them and use them in FCP 11 today.

Demo footage from the SpatialLabs Eyes works in FCP 11 today

Though there are a couple of other cameras available, they have significant flaws. If you’re considering combining footage from two separate cameras, know that it’s a tricky process with many potential issues, including sync difficulties, vertical disparity, and general workflow complexity.

Can you shoot Spatial videos just like flat 2D videos?

There are a few limitations. You really should stay level, and if you want to move the camera, you’ll get the best results if you move slowly, as if in a dream. A gimbal (or better, a slider) is an excellent idea, because you may not be able to rely on in-camera stabilization to work correctly, or at all.

One important factor is that you shouldn’t place objects too close to the camera, because they’re difficult for a viewer to focus on. If you’re shooting on an iPhone, this is made worse because the two lenses have different minimum focus distances. The Lumix 3D lens can’t focus closely either.

Where should I place objects in the scene?

Ideally, build a frame with your subject about 1 meter from the camera, with a decent amount of distance between them and the background. Anything deep in the distance won’t look very 3D, and it’s a good idea to place something in the foreground for contrast.

By its nature, 3D video will be closer to reality than 2D video — you’re capturing a volume, not just a flat image, and it’ll take experience to know what works well and what doesn’t on any particular camera. As the distance between a camera’s two lenses increases, the apparent depth effect increases, though if you push it too far, distant objects will look like models. Therefore, you don’t necessarily need or want a camera with a large, human-like inter-ocular distance — work with the sweet spot of your setup.

OK, I’ve got my shots. What’s the best way to review my footage in an Apple Vision Pro?

If you shot everything on your iPhone, and you have iCloud Photos turned on, they’ll sync automatically. Another workflow is to pull all the videos into a folder on your Mac, use File Sharing to share that folder to your local network, then connect to it with the Files app on  your Apple Vision Pro.

What special things do I need to know when importing clips into FCP 11?

When you bring in Spatial video clips from an iPhone, Final Cut Pro will recognize them and put a small icon on their thumbnail, and in the Info panel the “Stereoscopic Conform” field will have been set to “Side by Side”.

Look to the Info Inspector to find Stereoscopic Conform, then choose Side by Side

However, if you’ve already brought Spatial clips into an FCP 10.x library, they will not be tagged as “Side by Side” and you must make sure all these clips are tagged, or the second eye will be ignored and you’ll get a 2D clip.

If you have shot clips on any other kind of camera, you just need to tag them as side by side. If your camera produces two separate video files for left and right eyes, it’s probably a good idea to pre-process those clips to a side-by-side presentation with Compressor first.

How can I see those clips in 3D?

In the Viewer, under its View menu, look for Show Stereoscopic As. You can choose to view just one eye, both of them side by side, to superimpose the two angles, or to use Anaglyph mode with red/cyan glasses. One of the Anaglyph modes is usually the best way to judge the depth position of an object, as you can quickly see a positional difference between the red and cyan.

Using Anaglyph Monochrome mode, you can see cyan on the right edge of near objects and red on the right edge of far objects — Anaglyph Outline works well too

Can’t I watch my video in 3D in my Apple Vision Pro while I’m editing it?

Not yet, and there’s no built-in way to monitor in 3D while you shoot either. Obviously a live link to Apple Vision Pro would be hugely useful, so submit a feature request and it might happen sooner. For now, use red/cyan glasses while editing, and export to a file when you want to preview in your Apple Vision Pro.

Can I control where things sit in 3D?

Yes, using Convergence, in the new Stereoscopic section found in the Video Inspector. This control offsets the two eyes from one another to control their apparent distance from the viewer. A member of the FCP team recommends that, since the iPhone’s two lenses are parallel and do not converge, you should start by setting convergence to approximately 1.5. That sits the video back behind the screen a little (making it more comfortable to watch) while a negative value would place it in front.

As shot, Convergence 0, Anaglyph mode

Ideally, your subject in one shot should have a similar apparent convergence from the subject of the next shot, or you’ll force your viewers to refocus. Note that you can drag on the numbers next to the Convergence sliders to move them a lot further (±10) than the sliders themselves (±3).

Convergence set to 1.5, Anaglyph mode

While Convergence adjusts everything in a shot at the same time, it’s possible to virtually place separate elements in 3D space by using the new Magnetic Mask to separate objects from their shots, and then using different Convergence values.

Does all footage need the same treatment?

Footage from different cameras may need different numbers to create the same apparent look, and if you’re using an odd setup, you may need to use some pretty extreme values here. To deal  with clips shot on my old Lumix 3D lens, for example, I need a convergence value of nearly 20. How can I get there?

Yes, this is a ColorChecker on a wheelie bin, but more importantly, the Lumix 3D lens needs a large convergence adjustment

Because the sliders don’t go far enough, I need to apply convergence of 10, with Swap Eyes checked (because dual-image lenses record the images in “cross-eyed” format) and then create a compound clip. I can then add another 8-10 convergence to the Compound Clip (without swapping eyes) to get me close to where I need to be.

However, if you need a large baseline convergence shift, a better workflow is probably to use adjustment layers. Convergence changes to adjustment layers affect all clips beneath them, allowing me to use two adjustment layers to set a convergence shift for the whole timeline, and then make further individual convergence adjustments on each clip.

Do I really need to worry about convergence?

Yes. Uncontrolled convergence becomes uncomfortable, and you don’t want titles, for example, to clash with other objects in the frame, which will happen if they appear to be at the same position in depth. Also, if anything could be overlaid on your video, it’s probably going to sit at a convergence of 0, so it’s going to look a bit weird if your titles appear to be in front of things that they’re clearly behind.

Finally, while you absolutely can throw things toward the viewer, it’s a party trick. Don’t do it too often, and don’t do it for long.

Can I crop, scale, and transform clips?

You can, but you’ll need to use a free effect to do so. If you activate and then use the built-in Transform controls, you’ll be adjusting the side-by-side double-wide frame. That’s helpful for some technical tasks, such as correcting a vertical disparity between the two eyes, but not helpful if you want to crop both eyes in the same way. Instead, download and install the free Alex4D Transform, which lets you transform any clip, or even rotate it in 3D. Any Motion-made effect will work, but this one’s great.

OK, I’m done with the edit. How can I export it?

Access the Share menu, then choose the new preset for Apple Vision Pro. If you want to send it to your Apple Vision Pro for preview, send it straight to iCloud and then find it in the Files app, or send it anywhere and then AirDrop it. Leave the default metadata options (45° Field of View, 19.2mm Baseline) if you’ve shot on iPhone, and be sure to use 8-bit rather than 10-bit if you’re uploading to Vimeo. (Currently, only 8-bit files are detected as Spatial.)

Vimeo? OK, but what about YouTube?

YouTube has been openly hostile to the Apple Vision Pro. Not only do they still not have a native app, but their legal threats have seen the best existing app (Juno) removed from the App Store. While YouTube does support 3D video, they don’t support native Spatial workflows yet. Instead, export your video as a regular H.264 video — this will give you a Full Side by Side video in a double-wide frame, 7680×2160 or 3840×1080.

Send this file to Handbrake, add “frame-packing=3” in the Additional Options on the Video tab, but don’t change the video dimensions. Start the re-encoding process, then upload the output to YouTube; after the regular 2D versions are processed, the 3D versions will eventually become available. Be patient once again as you wait for the highest resolution 3D to process, and check the video with your red/cyan glasses. Here’s the result:

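As an aside, if you’d rather script that Handbrake step than click through the GUI, here’s a minimal sketch using HandBrakeCLI from Python. It assumes HandBrakeCLI is installed and on your path, that its --encopts option passes the x264 frame-packing flag through (the equivalent of the GUI’s Additional Options box), and that the filenames are placeholders — check the flags against your installed version before relying on it.

# Sketch: re-encode a Full Side by Side export with x264 frame-packing metadata
# for YouTube. Assumes HandBrakeCLI is installed; flag names can vary between
# HandBrake versions, so verify with HandBrakeCLI --help first.
import subprocess

source = "spatial_full_sbs.mp4"              # hypothetical double-wide export from FCP
output = "spatial_full_sbs_youtube.mp4"

subprocess.run([
    "HandBrakeCLI",
    "-i", source,
    "-o", output,
    "--encoder", "x264",
    "--encopts", "frame-packing=3",          # 3 = side-by-side frame packing
    "--width", "7680", "--height", "2160",   # match your export; the frame must not be resized
], check=True)

# Before uploading, confirm the output resolution still matches the source.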
I’ve got an Apple Vision Pro here. Do you have any sample videos for me to check out?

Yes — lots and lots of clips, mostly shot on iPhone with the third-party app Spatial Camera. Some shots are still, most are moving; some edits use transitions, some don’t; most clips are 4K while some (marked) are 1080p; some are close to the camera while others are further away.

None of this is intended as narrative, but it should be useful for anyone planning their shots or considering making a travel video in Spatial. Watch for the intentional mistakes that I’ve left in to show you what not to do! There’s a thumb at the edge of frame that’s not visible with the fuzzy border in the Photos app, but can be seen on Vimeo. Some shots are simply too hard to converge, because the stabilization on the two angles is out of sync. In other shots, the “bobbing” movement from not walking like a ninja can be somewhat unpleasant. But overall, there’s a definite sense of “presence” here that you don’t get from flat 2D video, more like a memory than a snapshot.

The best way to view these on the Apple Vision Pro is to update to the latest visionOS 2.2, which allows you to click the title of a video or click the Vimeo link to open it in 3D. Right now, Vimeo’s Apple Vision Pro app isn’t perfect, and can’t load a folder. (If you’re still on an older version of the OS, navigate to the video’s page in Safari, then choose the “OPEN” link at the top right.)

Hampstead’s autumn/fall foliage (comfortable, but the very first shot moves quickly):

The well-known Hampton Court Palace (you may know it as a shooting location for Bridgerton):

A walk in the Cotswolds, in the English countryside — with some resolution comparisons, plus a rogue thumb in one angle:

Here are a few more, but I don’t want to flood the page with embeds — please check out the folder, though you may need to view individual links in Safari and then choose OPEN to see them in 3D:
https://vimeo.com/user/1116072/folder/22963118

I want to deliver to 2D and 3D. Is that possible?

Yes, you can deliver a single eye from your 3D Spatial video as a flat 2D video. That could be a clean 4K image if you used a third-party iOS app, but it might only be a 1080p image if you’ve used something else. Another option is to dual-shoot. I shot a whole lot of video at the Final Cut Pro Creative Summit on my GH6 (shooting 2D) with my iPhone mounted on top (shooting Spatial 3D).

Here’s a Spatial multicam clip of Michael Cioni, in which the second angle is just a regular 2D flat clip — it works well

These matched shots can be combined into a multicam, edited once, and then you simply need to choose the angle you want to show (2D or 3D) in your final output. However, a major issue with this idea is eyelines, if your subjects ever talk straight to the camera. Short of a beam splitter, there’s no way for interviewees to look into two cameras at the same time.

What’s next?

There hasn’t been a truly new frontier in video for some time, and Spatial is one of the few new things you can explore safely, without compromising your mainstream video outputs. Immersive is great, but it’s a whole separate beast needing a whole new pipeline. Spatial is something you can deliver as an add-on, wowing your clients in a whole new way.

Watch for more camera reviews, editing tips and advanced workflows here over the next few months. 

Two Small Gimbals – The Insta360 Flow Pro and the DJI RS 3 Mini

The Insta360 Flow Pro and the DJI RS 3 Mini are two great small gimbals, though they don’t compete directly with one another, and neither will handle a full-size cinema camera. If you’re looking for something to handle a cine lens paired with a RED, these are not the gimbals for you. But if you’re a solo operator or you work as part of a small team, you could use the Flow Pro with a phone and the RS 3 Mini to balance most mirrorless cameras with most lenses. I’ve used heavier cameras and heavier gimbals before, and unless you’re a gym junkie, weight really does matter when you’re moving around and tracking subjects.

Here, we’ll take a look at why you might want a gimbal in the first place, what these particular gimbals are especially good at, and what their limitations are. None of the equipment here was provided for review; I own it and have used it personally.

Why you might want a gimbal

When you can’t use a tripod, and you want more than a smooth pan or tilt, a gimbal will protect your camera from unintended shakes and jerks, keeping your shot level, and making your camera moves smoother. A stable shot allows the viewer to more easily focus on the content, rather than being distracted by shakes, and the moving shots you can now capture without dolly tracks or sliders can bring life to your edit. No matter which gimbal you use, if you want to move the camera, but you don’t want footage that bobs up and down, learn how to walk like a ninja.

Gimbals are not the only game in town, and there are many other ways to stabilize a shot, including electronic stabilization, lens stabilization, sensor stabilization and of course post-production stabilization. Many action cameras include excellent anti-shake modes, and the iPhone has Action Mode too. But if you don’t want to crop in too far on your sensor, or your camera doesn’t have great stabilization, or you want to use a camera mode that’s not compatible with the full stabilization tech — a gimbal is still a good choice. Another example: if you’re shooting Spatial 3D video or immersive 180° video, you really need to make every shot as stable as possible, or you’ll make people sick. Handheld, even stabilized, isn’t always good enough.

No matter how your hands move, the image remains stable

But it’s important to note that today’s gimbals are not just focused on stabilization. A gimbal can provide remote control over tilting and panning, or can track a subject without human input. Some of them can create 360° panoramas, or timelapses, or hyperlapses, or power your camera for long periods. If you’ve never used one, you might also be surprised how cheap they are today.

Let’s start with one of the latest phone gimbals.

The Insta360 Flow Pro (US$149)

The Insta360 Flow Pro is a phone-only gimbal that packs down small, but extends in two ways, with a selfie-stick-style extension tube and a tabletop tripod. There’s a 1/4-20 screw hole at the bottom if you want to mount it on a regular tripod or monopod, and it doesn’t require any balancing. To connect your phone, fit the spring-loaded mount to it, then use the magnets on the mount’s other side to hold it firmly on the gimbal.

Folded, with the included bag and the optional MagSafe mount

For slightly more money (US$20, iPhone only) you can buy an additional MagSafe mount that replaces the spring-loaded adapter. This mount makes mounting a one-click magnetic affair, and can stay on the back of your iPhone all day. The MagSafe mount absolutely makes setup quicker, and if I’m casually shooting while on a walk or day out, I want to take as little time as possible to be ready to shoot.

Insta360 include an app to make the most of the gimbal, and that’s where you’ll find most of the bells and whistles. It’s the same app you’ll use with other Insta360 products, like the Ace Pro, GO3, or many 360° cameras like the ones I’ve reviewed here before. It provides tips, ideas for videos, AI-based editing, gimbal settings and calibration, special shot modes, and advanced tracking options. For anything fancy, use the app.

The app is capable, works in portrait and landscape, and includes ideas and templates for new videographers

So what’s the difference between the Flow and the Flow Pro? The Pro model includes full 360° rotation, very handy in some situations, and also DockKit compatibility, which is iPhone-only. DockKit sets up quickly, using NFC, and from that point, you can use the Flow Pro for simple person-based tracking while using any third-party video recording app. The front trigger can turn this feature on and off, and a green ring light shows you if tracking is active. This is critical if you want to use any other app to record video, and given how many options a modern iPhone has, this is something more serious video creators will want.

The green light indicates when tracking is active, and works in third-party apps

If I’m recording Spatial video on my iPhone 16 Pro Max, I might want to use the Camera app, or I might want to use Spatial Camera to enable 4K Spatial video with stabilization. But I can’t record in Spatial with the Insta360 app, so if I want any kind of tracking, I’ll need the Flow Pro. Similarly, although the Insta360 app does allow ProRes Log recording, if I want to record in Log without using ProRes, I’ll need to move to an app like Kino or Blackmagic Camera. Or maybe you’d simply prefer to shoot with Final Cut Camera or the official Camera app. Either way, you’ll want the Flow Pro.

The Record button and zoom ring can trigger the native Camera app, but for extra control (such as focus) you’ll need to use the Insta360 app

This gimbal isn’t perfect, though, and one issue is that it doesn’t offer a huge amount of tilt flexibility when placed on a table. If you’re planning on using it to smooth out a shot at eye height, shooting yourself or away from you, it’s best to extend the gimbal vertically, tilt the top part towards you, and then hold the base of the gimbal out to the side. While this definitely increases your tilt range, it also means you have to try a little harder to pull off some kinds of moving shots. Other small folding gimbals like the DJI Osmo Mobile 6 have similar issues, and in general, the phone gimbals that don’t have this limitation are a little larger and heavier.

With the MagSafe mount and a power cable connected

Another issue is that the weight limit (300g) is designed with modern phones in mind, but not with phones loaded with accessories — you’ll be near the limit with an iPhone 16 Pro Max with a case. So, if you want to use the included cold shoe for mounting a microphone, be sure to use a cable that’s flexible enough not to pull on the phone as it moves around. Mind you, if you don’t need external sound, there’s a USB-C port near the top that can power the phone instead.

But what if you want to stabilise a bigger camera, or you want to rig up an iPhone well beyond its original weight? Time for…

The DJI RS 3 Mini (US$279)

With a much larger payload (2000g/2kg/4.4lbs) this gimbal has a very different focus, on stabilization rather than automation. There’s plenty of range in all three axes of motion, though if you try to pull off a long tilt with a long lens, you might hit the tripod itself. But it’s a strong, capable gimbal that can compensate for more significant movement than any phone gimbal could — and which gives more control over the movement, too. With the RS3 Mini, it’s possible to lock the camera direction, no matter how the gimbal is moved — a trick the Flow Pro can only manage if you hold down its trigger.

Two Small Gimbals - The Insta360 Flow Pro and the DJI RS 3 Mini 33
My Panasonic GH6 and Olympus 12-40 are easily stabilised by the RS3 Mini — though I’d prefer the autofocus of a GH7

That stability comes at the cost of flexibility in setup, and in weight. It’s necessary to balance the gimbal whenever you load a new camera, or change from a heavy to a light lens. Once you get used to it, balancing isn’t difficult, and the locking mechanisms on each axis make it easy to lock down all axes except the one you’re trying to balance. When not in use, you may be able to lock the arms and fold the gimbal up pretty small, but (depending on your camera setup) you may not be able to fold it completely. 

Two Small Gimbals - The Insta360 Flow Pro and the DJI RS 3 Mini 34
Each axis of stabilization needs to be adjusted by hand, and this is made much easier with the included locks

The RS3 Mini itself weighs 850g (1.87lbs), which is not insubstantial. When you add your camera and lens setup, and then hold it for half an hour, or a few hours at an event, you may feel it. I use the Panasonic Lumix GH6, which isn’t that light, but paired with lighter lenses I’m able to shoot with this setup pretty comfortably.

That wasn’t the case with my previous “serious” gimbal, the original Ronin S, which at 1.86 kg (4.1 lb) became a serious burden over time. If you’re using heavy full-frame gear that in turn requires a heavier gimbal, hand-holding for a long shoot simply might not be possible. For me, lightness matters, but if you need to support a more serious rig, there are bigger, newer, and more capable gimbals in the same series, including the RS4 and RS4 Pro.

Two Small Gimbals - The Insta360 Flow Pro and the DJI RS 3 Mini 35
Custom modes allow you to stop the gimbal from changing direction — ideal for some shots

DJI’s gimbals don’t just offer stability, but control too. The RS3 Mini’s built-in joystick is comfortable to use, and gives precise control for smooth pans and tilts with a gentle start and finish — you may not need a high-end tripod head for smooth pans any more. Beyond simply supporting a camera, the RS3 Mini can also connect to one directly, either with an included USB-C cable or even (with some Sonys) wirelessly. (Note that even if you connect with a cable, the gimbal doesn’t provide enough power to keep the camera charged.)

The gimbal can then start and stop recording on the camera, trigger autofocus, and if you also mount a phone running the DJI Ronin app, potentially offer automated tracking. But I suspect most serious camera operators want to operate the camera themselves, and are more interested in remote control: starting, stopping, and reframing.

If you have a remote monitor hooked up to your camera, you can place the camera out of reach, and control its movement using either your phone’s gyroscope (moving the phone physically to move the gimbal) or with the joysticks on an Xbox or Playstation controller. If you can’t be next to the camera, this is one of the easiest ways to control it.

Do note that if you plan to mostly shoot with a bare phone, it’s going to be too light to balance, and you’ll need to add weight to be able to use it. If you’re setting up a phone with a cage, plus a mic receiver, SSD and power source, you shouldn’t have too much trouble.

Conclusion

There’s a world of gimbals out there, and these are just two. DJI make phone gimbals too, but none of them support DockKit just yet, and Insta360 make plenty of other products but not large gimbals. Still, most solo professional videographers don’t need to hold several kilograms of weight, and can stay at the lower end of the market. At the end of the day, I can recommend both of these gimbals.

Two Small Gimbals - The Insta360 Flow Pro and the DJI RS 3 Mini 36
With the built-in tripod and the extension tube out, it’s significantly taller (about 50cm) than its folded size (about 16cm)

If your focus is on automation and tracking, or you only need to stabilize a phone, phone gimbals are easier to set up and will help a solo operator capture a variety of shots. To stabilize a dedicated camera, go for a larger, heavier gimbal; what you lose in flexibility you’ll get back in stabilization performance. While the Insta360 Flow Pro isn’t meant for fast, sudden movements, you can smooth out a running shot with most cameras on the RS3 Mini — it’s solid.

Either way, the right gimbal will give you flexibility on set, because you won’t need to set up and repeatedly move a tripod, and will give you flexibility in the edit bay, because you’ll be able to use new kinds of smooth, dynamic shots. At both ends of a production, a gimbal can impress a client without breaking your back or emptying your wallet. If you shoot, take a look.

  • Insta360 Flow Pro US$149
  • DJI RS 3 Mini US$279
Final Cut Pro 11 released at the FCP Creative Summit https://www.provideocoalition.com/fcp-11-released-at-the-fcp-creative-summit/ https://www.provideocoalition.com/fcp-11-released-at-the-fcp-creative-summit/#respond Thu, 14 Nov 2024 03:49:11 +0000 https://www.provideocoalition.com/?p=286178 Read More... from Final Cut Pro 11 released at the FCP Creative Summit

Here at the Final Cut Pro Creative Summit, at the Developer Center just across from Apple Park, we were shown the latest version of Final Cut Pro 11 for Mac in a slick, live, in-person demo. After that, we trekked across to Apple Park itself to check out some live demos and ask some tricky questions. In a first, we were allowed to take pictures outside Apple Park, and it’s as stunning a building as you might expect.

Final Cut Pro 11 released at the FCP Creative Summit 37
Apple Park, up close

Back to FCP. As well as the 3D spatial video editing support promised earlier this year, a number of additional features that editors will love are included, and the most impressive is the Magnetic Mask. Let’s take a tour through the new features in FCP 11. (There were also new releases of Logic, FCP for iPad, and Final Cut Camera. Scott Simmons has also written a great article about these releases — check it out.)

Magnetic Mask

Over the last few years, AI has really stepped up, making tedious or impossible jobs far less arduous, and the new Magnetic Mask is a machine-learning-powered tool that will win many fans. While chromakey remains the simplest solution for removing a person or object from a background, you’re not always in control of the shoot or the client.

A couple of releases back, FCP added the Scene Removal Mask, which combined a difference mask with machine learning smarts to remove a fixed background, and third-party effects like Keyper enabled automatic “person tracking”. These solutions were useful, but sometimes lacked temporal stability, and Keyper could only recognize people.

Similar to the Rotobrush in After Effects and the Magic Mask in DaVinci Resolve, the new Magnetic Mask isolates part of a shot, and is more powerful than previously included options. To use it, drag the Magnetic Mask effect directly onto a person or an object in any shot, and its outline will be automatically recognized. Move the mouse a little to shift the detected outline, and if it’s still not right, you can use brush tools to add or subtract areas. When you’re happy with this frame, press Analyze and it’ll be tracked throughout the shot — but this isn’t a fixed shape like the existing object tracking.

Final Cut Pro 11 released at the FCP Creative Summit 38
You’ll see a different outline when you drag to slightly different spots — move around before releasing the mouse button

Instead, as the object changes from frame to frame, the Magnetic Mask outline changes to match it, similar to the Segment Anything Model previously announced by Meta. A person’s arms moving as they walk? No problem, they’ll be found and selected in every frame. What if they walk behind something, and then walk out again? Occlusion is well handled, so there’s a good chance it’ll work perfectly. What if you need to track multiple people at once? No problem, each selection will be shown in a different color, and selections can even occlude one another.

Final Cut Pro 11 released at the FCP Creative Summit 39
Analysis and the brush controls are found at the top of the viewer

It’s quick, it works well, and it’s the perfect solution when a client asks to isolate any part of a shot. The usual workflow will be to duplicate a shot above itself, silence the top copy, then use Magnetic Mask to isolate part of that top copy. Add whatever effects you need to the top clip and/or the bottom clip, and add any other elements (like titles) between the two clips if you wish.

There are included sliders to tweak the edges, but if you want a full suite of matte tools, you should probably do this in Motion instead — it’s got this new effect and many more effects for final tweaking. In FCP, this is a solid tool that gives good results, and should make it easy to highlight just about anything.

Automatic Captions

This is a feature that people have been requesting for a long time, and it’s finally here. Leveraging the support in macOS Sequoia, it’s now possible to create high-quality captions, offline, for free, very quickly indeed — select any number of clips, right click, then choose Transcribe to Captions. If you needed this already, you’ve probably been using something like MacWhisper, or Descript, or Premiere Pro, or DaVinci Resolve, or Simon Says to create captions and then reimport them, but this is quicker and easier. (Note that this is English-only for now.) How good are the captions? Quality is high, but this model seems less likely than MacWhisper to guess at words it doesn’t quite recognize. If you see an ellipsis (…), that often indicates an omission you should check.
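
If you export an SRT and want a quick way to flag cues that might contain omissions, a few lines of Python can scan for ellipses. This is a rough sketch rather than anything built into FCP: the file name is a placeholder, and it assumes the standard SRT layout of index, timing line, then text.

    # flag_ellipses.py: list SRT cues whose text contains an ellipsis,
    # which often marks a word the transcription model skipped.
    import re

    SRT_FILE = "captions.srt"  # placeholder, point this at your exported file

    with open(SRT_FILE, encoding="utf-8-sig") as f:
        blocks = re.split(r"\n\s*\n", f.read().strip())

    for block in blocks:
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed cues
        index, timing, text = lines[0], lines[1], " ".join(lines[2:])
        if "…" in text or "..." in text:
            print(f"Check cue {index} at {timing}: {text}")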

Final Cut Pro 11 released at the FCP Creative Summit 40
Automatic captions, on the timeline

It’s important to note that these captions are the good kind — closed captions. They’re exported separately as an SRT file, are displayed at the viewer’s preferred size (or hidden entirely), automatically move out of the way when the playback bar appears, aid accessibility and search engines, and can be edited after the video is uploaded if a mistake is found. The bad kind of captions are open captions: burned-in titles, with fixed size and font, that can’t be turned off and that are inaccessible. These are, however, the only way to go if you’re targeting platforms like Instagram and TikTok that don’t support proper closed captions.

If you need to create titles from your closed captions, that’s not yet built in, and though there have been apps that offered this feature (including Captionator and Transcriber) there’s a brand new one that does a great job: captionAnimator, from Intelligent Assistance. Give it a timeline with closed captions and a single title of your choice (set up just the way you like it) and you’ll receive a new timeline full of open captions to match.

Text-based editing is also not yet part of the app, but you can use the Timeline Index to navigate through all the captions, and you can click on any caption to jump to it in the timeline. It’s possible to use scripting to make things a little easier (though a script I posted here earlier wasn’t quite good enough!) and I’ll pursue this further.

Spatial Video Editing

Please excuse my excitement — I’ve had an Apple Vision Pro for a few months now, and although it’s a wonderful device, it’s been a challenge to create my own content for it. Workarounds did exist, but now they’re redundant: FCP 11 can import and edit Spatial clips from recent iPhones (15 Pro, 15 Pro Max, any 16) or the Apple Vision Pro itself, and then export to Spatial using a new destination in the Share command. Distribution is sorted too, thanks to Vimeo’s recent announcement of their Vision Pro app and support for Spatial videos.

If you’re not across the format, Spatial video is 3D stereoscopic video, using the MV-HEVC codec, with metadata to describe the field of view. It’s not 180° VR, but is much more like a traditional 3D movie, and doesn’t require a wholesale change in the way you make your videos. Indeed, you can shoot in 3D, edit in FCP, and then deliver to Spatial and traditional 2D from a single timeline. This should make experimenting with this format risk-free, so if you have clients curious to try it out, why not?

Final Cut Pro 11 released at the FCP Creative Summit 41
View left and right eyes in a double-wide frame if you wish

Under the hood, Spatial clips are treated as double-width clips with the left and right eyes side-by-side at full size. Indeed, if you use another app (such as Blender) to create clips in this FSBS format, you can treat them just like regular Spatial clips by choosing the right Stereoscopic setting in the Info Inspector. That’s happy news, because it means editors can take advantage of existing 3D tools without any further work.
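
FCP handles the double-width housekeeping for you, but if you ever need to sanity-check an FSBS frame in another tool, the format really is just one image with the left eye on the left half and the right eye on the right. A minimal sketch, assuming OpenCV is installed and using a placeholder file name:

    # split_fsbs.py: pull the left and right eyes out of a full side-by-side frame.
    import cv2

    frame = cv2.imread("fsbs_frame.png")  # placeholder: one exported FSBS frame
    if frame is None:
        raise SystemExit("Couldn't read the frame; check the path")

    height, width = frame.shape[:2]
    left_eye = frame[:, : width // 2]     # left half of the double-wide image
    right_eye = frame[:, width // 2 :]    # right half
    cv2.imwrite("left_eye.png", left_eye)
    cv2.imwrite("right_eye.png", right_eye)
    print(f"Each eye is {width // 2}x{height}")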

When working with Spatial clips, a new submenu in the Viewer called “View Stereoscopic As” offers several ways to work. You can choose to see just one eye, both eyes together, or an overlap of the two in Anaglyph, Superimpose or Difference mode. Yes, you can pop on a pair of red/cyan glasses and work with that if you like, though of course it’s not the same as working live on the Vision Pro. Sadly, there’s no direct link to the Vision Pro yet, though I expect it’s in the works.

Final Cut Pro 11 released at the FCP Creative Summit 42
Stereoscopic display options

In the Inspector, you’ll see a new setting group under Video that includes a Convergence adjustment. This moves a clip forwards or backwards in 3D space by adjusting the gap between the two eyes, and this is how you should position titles in 3D space. Negative convergence moves away from the viewer, while positive convergence moves towards the viewer. Be sure to use positive convergence briefly, at low values, and sparingly, because you’ll be asking your viewers to go cross-eyed, and it’s uncomfortable.
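
If you want a feel for why aggressive positive convergence is uncomfortable, standard stereo viewing geometry tells the story. The numbers below (a 65mm eye separation and a two-metre viewing distance) are illustrative assumptions, not anything FCP exposes, but the relationship holds: uncrossed parallax pushes a point behind the screen, crossed parallax (the direction positive convergence pushes towards) pulls it in front, and as parallax approaches your eye separation the perceived depth runs away entirely.

    # parallax_depth.py: where does a point appear, given its on-screen parallax?
    # Standard stereo viewing geometry; all values here are illustrative assumptions.

    EYE_SEPARATION_M = 0.065     # roughly 65mm, a typical adult interocular distance
    VIEWING_DISTANCE_M = 2.0     # assumed distance from viewer to the (virtual) screen

    def perceived_distance(parallax_m: float) -> float:
        """Distance from the viewer to the fused point, in metres.

        parallax_m > 0  -> uncrossed parallax, point appears behind the screen
        parallax_m == 0 -> point sits on the screen plane
        parallax_m < 0  -> crossed parallax, point appears in front of the screen
        """
        if parallax_m >= EYE_SEPARATION_M:
            return float("inf")  # at or beyond eye separation, comfortable fusion falls apart
        return EYE_SEPARATION_M * VIEWING_DISTANCE_M / (EYE_SEPARATION_M - parallax_m)

    for p in (-0.03, -0.01, 0.0, 0.01, 0.03, 0.06):
        print(f"parallax {p * 1000:+.0f}mm -> appears at {perceived_distance(p):.2f}m")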

Right now, it’s also a little tricky to reframe, crop, rotate or scale spatial clips. The Transform controls are disabled by default, and if you activate them, you’ll find they control the virtual double-width clip rather than each eye independently. This control is needed for some technical tasks, but if you want controls that adjust both eyes together, there’s an easy workaround. Create a new effect in Motion that publishes all the Transform properties (Position, Crop, Four-Corner) and now you can adjust these properties for both eyes at once.

Final Cut Pro 11 released at the FCP Creative Summit 43
Exporting is straightforward, though you should include the metadata and use HEVC 8-bit if you plan to use Vimeo’s new Spatial support

In future releases, I’d love to see built-in transform controls which do what most editors need, and also full 3D integration from Motion templates into FCP, but this is OK for now.

With regard to Spatial video, FCP isn’t alone; DaVinci Resolve announced Spatial support earlier this week, and in the future will be supporting 180° VR Immersive 3D for their upcoming URSA Cine Immersive camera too. Still, as impressive as 180° Immersive videos are, simple Spatial is an easier way into 3D for most folks. It’s never been easier for regular humans to work in 3D video, and I can’t wait to see what we make.

Many smaller improvements

Besides the new headline features, new key commands and other minor updates round out the release. Support for 90/100/120fps timelines is very welcome, allowing everyone to create media for today’s high-frame-rate displays. The new Vertical zoom-to-fit command (Option-Shift-Z) complements the existing horizontal zoom-to-fit (Shift-Z) to keep your timeline view organized. Commands already existed to select the clip above or below the currently selected clip (Command-up/down), but it’s now possible to move clips up and down in the stack too (Option-up/down). If you’ve ever wrestled with specific vertical placement in a complex stack of connected clips, you’ll now have a much easier time.

Browser clips can now be hidden, not simply rejected — but why would you want to hide clips rather than delete them? Well, if you’re using multicam or sync workflows, the original source clips would normally remain visible after a multicam clip has been created, and you probably never need to touch those original clips again. Now, they’re hidden by default, leaving the browser display much cleaner. Of course, you can reveal hidden clips if you wish, or not hide them in the first place.

A new set of Modular transitions allow you to smoothly move from one shot to another, pausing mid-transition to show both frames together in a split screen, with plenty of options. While I’d expect most editors to make use of a sensible side-by-side or top/bottom split, there’s a Shapes option there too. If you’ve been missing a star wipe, here’s an option for you.

Final Cut Pro 11 released at the FCP Creative Summit 44
The callout effect, drawing attention to part of the frame

There are also two new effects, Callout and Picture in Picture, which enable new, useful ways to focus attention on part of the frame, and offer on-screen controls and additional sliders to make them very customizable. Note that if you want Callout to occur part-way through a clip, you’ll need to blade the clip up and apply the effect to one segment only.

Final Cut Pro 11 released at the FCP Creative Summit 45
The Picture in Picture effect includes good on-screen controls for square or rectangular frames

However, I do confess to somewhat mixed feelings here, because these effects duplicate some of the features of my own plug-ins! Callout draws attention to a specific part of the frame, much like the Zoom In titles in my funwithstuff Annotator plug-in. Picture in Picture shrinks and crops a clip to a rectangle and adds a border, much like the Rectangles effect in my funwithstuff PiP Kit plug-in.

Final Cut Pro 11 released at the FCP Creative Summit 46
Another way to use Callout

Now, I’m not claiming I’ve been “Sherlocked”; my plug-ins aren’t the only options on the market, and they still offer many more options than the new defaults. Hopefully, if people like these new effects and want additional controls, they might seek out and find mine. But still, if you see emoji-based particles like funwithstuff Emojisplosion in a future FCP release, I’ll be suspicious. 😉

Blackmagic RAW support coming soon

This was announced but it’s not yet available: thanks to the Media Extensions support in macOS Sequoia, Blackmagic will be implementing support for Blackmagic RAW in Final Cut Pro. This is great news, and we’ll be sure to test this when it goes live.

Final Cut Camera 1.1 can now do Log HEVC

If you want the quality of Log but would prefer to avoid external SSDs, you may have chosen to go with Kino or Blackmagic Camera, but now there’s an official Apple-made app that allows Log HEVC recording, with a great zoom control. You can also choose to preview your video with a LUT, and use the new tilt and roll indicators to level your camera in two dimensions.

Final Cut Pro 11 released at the FCP Creative Summit 47
The line tells you about roll, while the dot tells you about tilt

The data rate is low-ish (under 25Mbps for 4K) but I haven’t seen any issues myself. If you need a higher data rate than that, Blackmagic Camera has a ton of options, but Final Cut Camera has a clean UI that will win a lot of fans.

Final Cut Pro for iPad 2.1

Several of the new features on the Mac have come across to the iPad. Magnetic Mask is here, as are the new Modular transitions, the new Picture-in-Picture and Callout effects, and the higher timeline frame rate support. Enhance Light and Color has come across from the previous Mac release too, and a new pinch gesture allows vertical scaling in the timeline. Pencil support has expanded to include additional brush options, so you’ll have more choices for Live Drawing. Remember too that these drawings can be brought back to the Mac version.

Either way, if you enjoy editing in FCP for iPad, or you prefer to use it to manage a multicam shoot and then transfer to Mac, it’s a solid app with genuine utility. It’s great to see its updates continue.

Logic 11.1

With the new release for Mac and iPad, there are several new features. Here’s the full list, but the standout is the inclusion of the Quantec Room Simulator by Wolfgang “Wolf” Buchleitner. The live demo we heard sounded fantastic, and it’s great to see digital audio history being recreated. The update is free, and you can roundtrip between Mac and iPad, so give it a shot.

Conclusion

This is a big release of FCP — a serious set of small improvements, brand new features for everyone, and a whole new dimension for brave souls exploring the new frontier of 3D. Automatic captioning will bring accessible captions to everyone who needs them, and enable text-based editing too. Affordable options now exist for anyone who also needs to hard-code those captions as animated titles.

Editing in 3D for the Apple Vision Pro just got a whole lot easier, and while Spatial isn’t as breathtaking as 180° Immersive, it absolutely makes a tangible difference, and it’s much, much easier to shoot, edit and distribute. While we’re here at the Final Cut Pro Creative Summit, I’ll be talking about 3D, Spatial and Immersive video, and there are many other talks to be had too. Right now, I need to get chatting and prep for my sessions over the next two days. It’s a great conference, and this was a great way to kick off the first day. Cheers!

*The script for blading around captions needs tweaking — look for an update soon.

Review: AI-driven video search with Jumper https://www.provideocoalition.com/review-ai-driven-video-search-with-jumper/ https://www.provideocoalition.com/review-ai-driven-video-search-with-jumper/#comments Wed, 06 Nov 2024 13:13:25 +0000 https://www.provideocoalition.com/?p=285802 Read More... from Review: AI-driven video search with Jumper

AI isn’t taking our jobs just yet — instead, it’s helping us do our jobs more efficiently. Jumper is a tool that can analyze all your media, locally, and then let you search through it in a human-like way. Ask to see all the clips with water, or moss, or beer, or where a particular word is spoken, and it can find them, like magic. What Photos can do on your iPhone, Jumper does for your videos. But let’s take this a step at a time.

Analysis

Jumper currently works with Final Cut Pro or Premiere Pro, with support for DaVinci Resolve and Avid Media Composer in the works, and it supports Windows as well as macOS. I’ve tested on Mac, where there’s an app to manage installation and extensions to integrate with Final Cut Pro and Premiere Pro.

Review: AI-driven video search with Jumper 48
The Jumper installation and management app on Mac

In FCP, you’ll use a workflow extension, available from the fourth icon after the window controls in the top left, and in Premiere Pro, you’ll use an Extension, available from Window > Extensions. In both host apps, Jumper behaves in a similar way, although while Premiere Pro allows you to access any of the clips open in your current projects, you’ll need to drag your library from FCP to the Jumper workflow extension manually to get started. Either way, clips must be analyzed for visual or audio content, so click to the Media tab, choose some or all of your clips, and set Jumper to work.

Review: AI-driven video search with Jumper 49
Here’s how it looks in FCP, during processing of a bunch of clips from last year’s FCP Creative Summit

If you give it enough to do, analysis can take a little while, but on a modern MacBook Pro it’s 15x faster than real-time playback; a decent sized shoot will need you to walk away and let the fans run. When the scan is complete, a moderately sized cache has been built, and this makes all future searches nearly instant.

Review: AI-driven video search with Jumper 50
Here’s a collection of familiar demo clips in Premiere Pro, searching for water

Searching for visuals

What’s it able to look for? You can search for visuals or for speech, and both work well. Jumper can successfully find just about anything in a shot — cars, a ball, water, books — and because it’s a fuzzy search, you don’t have to be too specific in your wording. You can look for objects, or times of day, or specific locations like Times Square.

Review: AI-driven video search with Jumper 51
Here’s that time I was on a big billboard in Times Square, as found by Jumper searching for “Times Square”

The most powerful thing about the visual search is that it’s going to find things you’d never think to categorize, things you might have seen once in a quick pass through all your clips, but might have forgotten to note down. Jumper just does it, finding the right parts of any clips that roughly match.

Review: AI-driven video search with Jumper 52
I’m in London as I write, and searching for “st pauls” did indeed find many images of that cathedral — searching for “borough market” worked too

How do you use what you’ve found? Hover over the clip to reveal four buttons at its lower edge, allowing you to find the clip in the NLE, to mark the specific In and Out on that clip, to add it to your timeline, or to find similar clips. That last option is very handy when a search returns somewhat vague results — find something close to what you want, then find similar clips.

Review: AI-driven video search with Jumper 53
Hover over a clip and then choose what you want Jumper to do

However, it’s important to realize that no matter what you search for, Jumper will probably find something, even if there’s nothing there to find. The first results will be the clips that match your search best, but if you scroll to the end of the results and click “Show More”, you’ll find clips that probably don’t match the search terms well. And that’s OK! If the clip is there, you’ll see it; if it’s not, you’ll be shown something else. As with any search, if you ask sensible questions, you can expect sensible results.
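
Jumper doesn’t document its internals, but this “always returns something, best matches first” behaviour is exactly what you’d expect from embedding-based retrieval, where the query and each piece of media are mapped into a shared vector space and ranked by similarity. Here’s a toy sketch of that ranking step; the vectors are random placeholders standing in for a real image and text embedding model, so only the mechanics are meaningful.

    # rank_by_similarity.py: toy illustration of embedding retrieval.
    # Every item gets a score, so something always comes back, best matches first.
    import numpy as np

    rng = np.random.default_rng(0)
    EMBED_DIM = 512

    # Placeholder embeddings; a real system would get these from a vision/text model.
    clip_embeddings = {f"clip_{i:02d}": rng.normal(size=EMBED_DIM) for i in range(20)}
    query_embedding = rng.normal(size=EMBED_DIM)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(query_embedding, emb) for name, emb in clip_embeddings.items()}

    # Best matches come first; the tail is the "Show More" territory to treat with care.
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]:
        print(f"{name}: {score:+.3f}")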

Searching for speech

While this isn’t a replacement for transcription, Jumper will indeed find words that have been said in your video clips or voice-over audio recordings. You can search for words, parts of words or phrases, and they’ll be highlighted in a detected sentence.

Review: AI-driven video search with Jumper 54
As I was interviewing people at the FCP Creative Summit, plenty of them said “summit”

I’ve tested this against a long podcast recording, checking through a transcript of the podcast and making sure the same phrases were findable in Jumper, and happily, they were. As with visual searches, the first options will be the best, but don’t look too far down the list, as you’ll start finding false positives. Captioning support is already common across NLEs, so I suspect this feature may not be as widely used as the visual search, but it’s still useful, especially for those who would normally only transcribe a finished timeline.

Is this like a local version of Strada?

At this point, you may be thinking that it sounds a little like the upcoming Strada — which we will review when it’s released early in 2025 — and while there are some similarities, there are significant differences too. Firstly, Strada aims to build a selection of AI-based tools in the cloud, not just the analysis which Jumper offers. Second, Strada (at least right now) is explicit in its categorizations, showing you exactly which keywords it’s assigned to specific parts of your clips. Jumper never does that — you must search first, and it shows you the best matches, in a fuzzier, looser way.

Third, and probably most important, Strada runs in the cloud, on cloud-based media, and, at least in its current form, is very much built for team-based workflows. Jumper is a one-person, local solution.

Conclusion

AI-based analysis is very useful indeed, and it’s going to be most useful to those with huge footage collections. If you’re an editor with a large collection of b-roll that you spend a lot of time trawling through, or a documentary maker with weeks’ worth of clips you’ve been meaning to watch and then log, this could be life-changing. Jumper’s going to do a much better job of cataloging your clips, much more quickly than you can.

The fact that Jumper can be purchased outright and then run fully offline (after license activation) will win it fans in some circles. Those same points will give others pause — if you’ve moved to a cloud-based workflow, you’ll have to download those clips and store them on a local drive to be able to analyze them. In that case, Strada might be a better fit, so check out their beta now, and look for our review when it’s released.

For anyone who stores their media locally and works alone, Jumper is worth a serious look. Grab a free trial from getjumper.io.

Pricing for one NLE: $29/month, $149/year, or $249 lifetime (perpetual).

HDMI Input on an iPad https://www.provideocoalition.com/hdmi-input-on-an-ipad/ https://www.provideocoalition.com/hdmi-input-on-an-ipad/#respond Thu, 17 Oct 2024 12:00:34 +0000 https://www.provideocoalition.com/?p=285046 Read More... from HDMI Input on an iPad

Since the iPad moved to USB-C, it’s become possible to connect many more external devices, opening up new possibilities. Because an iPad is thinner and lighter than a regular computer, it can sneak into bags that wouldn’t fit a laptop, and because the battery lasts for a long time, it doesn’t necessarily need to plug into power.

Video professionals have found many ways to make use of iPads already, including taking notes, referring to scripts, and driving teleprompters, but let’s focus here on how HDMI input can make a difference. Note that in 2023 we took a look at Video Assist, an early app which enabled this, but here, we’ll look at a few different apps and a few different ways to make use of them.

Hardware requirements

For this to make sense, you’ll need an iPad that supports USB-C — one of these:

  • iPad Pro (M4) — check out Scott’s review
  • iPad Pro 11-inch (1st, 2nd, 3rd, or 4th generation)
  • iPad Pro 12.9-inch (3rd, 4th, 5th, or 6th generation)
  • iPad Air (M2)
  • iPad Air (4th or 5th generation)
  • iPad mini (6th generation)

I’ll be working with a 3rd-generation 11-inch iPad Pro (M1), and note that the displays on all these iPads vary quite a bit. All the most recent M4-based models have the OLED-based, HDR-ready Ultra Retina XDR display, while the two previous generations of the 12.9-inch iPad Pro used the mini-LED Liquid Retina XDR display instead. Other iPads have conventional LCD panels, so blacks won’t be as black, but it’s unlikely to be a major concern for most of us; these are all good displays.

HDMI Input on an iPad 56
The Elgato HD60 S+ is one of many easily found devices that can bridge HDMI and USB.

As well as an iPad, you’ll need a USB-HDMI dongle-box-converter-device. I’m using an Elgato HD60 S+, but other devices like the Elgato Cam Link or many no-name options will work too. My device can receive 4K @ 60fps, but it does downconvert that signal to 1080p for display, and note that not all devices offer the same encoding quality or reliability. For what it’s worth, I’ve been using this device to deliver multiple-hour-long live camera feeds at events for years now, and it’s always been reliable for me, but it’s definitely not the cheapest option.
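
Most of these capture boxes, the HD60 S+ included, show up on a computer as a standard UVC webcam, which also makes it easy to sanity-check the signal on a laptop before a shoot. A minimal preview sketch, assuming OpenCV is installed and that the capture device sits at index 0; that index is an assumption, so try 1 or 2 if you have other cameras attached.

    # preview_capture.py: quick sanity check of a USB HDMI capture device on a laptop.
    import cv2

    DEVICE_INDEX = 0  # assumption: first video device on this machine

    cap = cv2.VideoCapture(DEVICE_INDEX)
    if not cap.isOpened():
        raise SystemExit("Couldn't open the capture device; check the index and the cable")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("HDMI input", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()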

HDMI Input on an iPad 57
Just some of the many apps that hook into Apple’s HDMI device support.

In terms of apps, you have many choices, including these and more:

  • Orion, free, with in-app purchase to adjust screen settings like contrast. Very cute retro UI.
  • CamX, free, no in-app purchases. Simple, clean UI around the feed, though this can be hidden. Image can be adjusted (brightness, contrast, saturation, zoom, position) and the image can also be mirrored or rotated. Features a videography mode that overlays the rule of thirds grid and a histogram. Also allows recording and screenshots.
  • Elgato Capture, free, no in-app purchases, for Elgato devices. Clean, simple UI, allows recording.
  • MoniCon, free, no in-app purchases. Some controls over the image (brightness, contrast, saturation, vibrance) along with Super Resolution upscaling and video range correction.
  • Dongled, free, no in-app purchases. Open source, no options, works fine.
  • HDMI, free, no in-app purchases. No options, but works fine.
  • Video Assist, US$99. Many options, including recording, video transformations, and recognition of ARRI and RED interfaces.

While Video Assist was an early mover, followed soon after by the free Orion, there’s obviously a lot more competition now, and many free options do include bells and whistles. Download CamX and Orion to start, see if they do what you need, try some other options if you wish, and consider the extra features of Video Assist if you need them. Its integration with RED or ARRI overlays might be perfect for some advanced users, but others may not want to invest US$99.

With an app or a few installed, let’s hook everything together and get testing.

iPad as a field monitor

This is the obvious one. Hook your camera’s HDMI output into the USB-HDMI device, plug that into your iPad, and launch Orion (or another app of your choice). Lag should be minimal, though because the HDMI out in many cameras can be slightly delayed, if you’re not used to pulling focus on a monitor, don’t be surprised if you notice a delay of a few frames.

HDMI Input on an iPad 58
It’s always fun to generate a video loop — this is CamX

While most of us will hook up a dedicated camera, it’s possible to output HDMI from a modern iPhone these days. If you want, you can connect an HDMI output dongle from an iPhone to an HDMI input dongle on an iPad. Note that if you’re planning on doing something like this, consider an app like CamX which gives you control over rotation and zooming.

Compared to most field monitors, an iPad will be thinner, larger, and may have more resolution, and if you’ve always relied on a camera’s built-in screen, the bigger display is a joy to use. While I’ve used a dedicated external monitor on extended shoots for many years, today, I probably wouldn’t buy another. The display on an iPad is clearer, more accurate and more consistent, and if you already own one, cost isn’t a factor.

Useful as this is, it gets better.

Stream live video into an Apple Vision Pro or to any TV

Once you’ve got an iPad displaying full-screen video from any camera, you can simply use the built-in AirPlay screen sharing feature to send this iPad’s screen, live, to any Mac, any Apple TV (and the regular TV it’s connected to) or any Apple Vision Pro running the latest visionOS 2. This is all wireless and free, with no extra subscriptions or hardware. (Just remember — if all this gear is close together for testing purposes, turn the sound on the iPad down to avoid audio feedback.)

HDMI Input on an iPad 59
So this is a screenshot of my view from inside the Apple Vision Pro, looking at my iPad, which is showing my GH6’s output, which is pointing at me

Of course, on a full production set you’d probably use a dedicated box like a Teradek, but if you want to make a client on a smaller job feel included from a different room, or you want to use your Apple Vision Pro while you monitor shots, this is an easy way to test the waters. If you want to share a multicam feed, that’s going to be a little harder, but you can view the output from an ATEM switcher if you want to.

Production could get interesting, then. What about post?

iPad as an extra computer display

Note: We wrote a whole article about this recently.
While Apple’s Sidecar feature already lets you use an iPad as an extended or mirrored display, there are some good reasons why you might want to use an HDMI-enabled iPad instead. One is that you might be editing on a PC rather than a Mac, and an extra monitor can be just what you need on a trip away from home.

Another reason is that in Final Cut Pro, if you want to use the A/V Output support to drive a monitor directly, that monitor has to be connected by a cable, and using a standard resolution. Normally, that excludes an iPad using Sidecar, but an iPad with an HDMI interface connected works just like a regular external HDMI monitor does.

HDMI Input on an iPad 60

On my Mac, I’m able to select from a huge range of regular video resolutions at 60Hz, and I can use it for A/V Output without any problems. This gives me a third screen for just the video in FCP, letting me focus on the Scopes on my main monitor with the Browser taking over my second screen. There was no visible lag, so there’s no problem listening to audio from my laptop while viewing the image on the iPad. Color was very close too — it’s a solid solution. A tip: if you want to use this feature regularly, assign a shortcut to “Toggle A/V Output on/off” in the Command Editor, and save a new workspace to remember your setup.

I also tested this in Premiere Pro, and it worked fine. Though there are no preset workspaces for two- or three-screen use, you can drag any panel to a separate screen, then save a new workspace. Spread everything out if you like.

iPad as a display for a games console?

Although it’s hardly a professional context, if your family’s main TV is being monopolized to watch a musical or sports production that you have no interest in, you might want to escape to a video game. If your shed, tent or basement retreat of choice doesn’t have a suitable TV, take your iPad and HDMI dongle with you. It’ll work fine.

A small USB hub — handy for video tasks on iPad and iPhone

Power can eventually become an issue, because the USB interface does chew through an iPad’s battery, but that doesn’t mean you’ll be tethered to a wall. A small USB hub, like one from SmallRig that I recently picked up for my iPhone, solves the problem neatly. It’s all USB-C, turning one input port into four more: one 10Gbps connection for the HDMI interface, one PD port to connect to a wall charger or large power bank, and two slower USB 2.0 ports for connecting slower devices.

HDMI Input on an iPad 61
This hub includes a cold-shoe-sized clip on the back like the RØDE Wireless GO mics do, and supports high speed data, low speed data, and PD power

This hub is also ideal for anyone who wants to use an iPhone to record to an SSD while also using USB-C audio and external power, and it even slides into a cold shoe or clips onto a cable. Not many USB hubs include both fast data and USB-PD power, so read the specs carefully if you’re looking for one.

It all works really well, and if you haven’t discovered the joy of large USB-PD power banks yet, you’re missing out. I’m finally able to take a single box and a single type of cable to power everything for hours: my laptop, phone, dedicated camera and more.

Conclusion

Opening up the iPad and iPhone with USB-C has had real-world benefits for many video professionals. We’re able to share SSDs and other devices with our regular computers, and because the ecosystem is already built, we get other features like screen sharing and remote screen control for free.

So, if you’ve got a USB-HDMI interface and an iPad already, grab a free app and test out some new workflows. At the very least, you’ll have a new option for a field monitor, or a new screen to play games on, but if you’ve got a Vision Pro, you can now wirelessly integrate it with any camera. We live in interesting times.

Review: iPhone 16 Pro Max https://www.provideocoalition.com/review-iphone-16-pro-max/ https://www.provideocoalition.com/review-iphone-16-pro-max/#comments Fri, 20 Sep 2024 02:40:53 +0000 https://www.provideocoalition.com/?p=284514 Read More... from Review: iPhone 16 Pro Max

Today, here in Brisbane, Australia, I was one of the first regular humans to pick up an iPhone 16 Pro Max, so here’s a detailed early look at how the latest iPhone could impact the working lives of pro video people.

Not every job will end up being shot on a dedicated camera, and as our phones improve, it’s worth understanding how capable they can be. Today, you could use a phone for family snaps, for the occasional b-roll shot, or even as the main camera for a full interview, but if it’s going to work, you need to know its limits. That’s why I’ve been out shooting this morning, and why I have some sample footage and fresh opinions for you.

A quick recap of the new features

We covered the new features of the 16 Pro and Pro Max in an article shortly after launch, so I won’t spend too much time on them here. There’s a new Camera Control button that can be tapped to launch the Camera app or swiped to change camera settings, and it works with third-party apps too.

It’s placed to work in both landscape and portrait, and though it works well today, full support hasn’t arrived yet. In the future, you’ll be able to tap-and-hold this button to activate Apple Intelligence, telling you what you’re looking at, or adding a date from a poster you’re snapping to your calendar.

As an aside, while I am running the latest beta and do have access to Apple Intelligence, I can’t really review the experience — it’s still in beta. However, for me it’s been working very well on my 15 Pro Max over the last few weeks, and when its features become more widely available over the next few months (it’s arriving in stages) I think people are going to really like it. 

Back to pixels, the ultra-wide 0.5x lens has gained a 48MP sensor, increasing the amount of detail available at that focal length to match the main 1x sensor. In theory, this could give enough pixels to enable Spatial Video in 4K, but Apple has only enabled Spatial Photos officially, and you’ll still need to use a third-party app to capture Spatial Video in 4K. Unfortunately, 4K Spatial Video quality hasn’t significantly changed from the 15 Pro Max, though the Camera app interface now combines spatial photos and videos into a new Spatial mode.

Review: iPhone 16 Pro Max 64

In all video modes, you can now pause and restart video recording. While I don’t think I’ll use this much myself, I’m sure some will find this handy. Another surprise addition is Spatial Audio Capture. This uses four microphones to create an immersive soundfield, then allows you to tweak the positioning of audio in the shot to your taste. More on all these Spatial improvements below.

Video processing speed improvements mean that 120fps capture is now possible in 4K — though only on the main 1x lens — and playback speed can be set to 24, 30, 60, or 120fps. As usual, you’ll see the true frame rate when you bring it into an NLE, and of course you can still set the frame rate as you wish there, but it’s terrific to have the equivalent of Adobe’s Interpret Footage command right there in the field, ready for client review.
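
If you want to confirm the true recorded frame rate before a clip goes anywhere near an NLE, ffprobe will tell you. A small wrapper, assuming ffprobe is installed and on your PATH; the clip name is a placeholder.

    # true_frame_rate.py: ask ffprobe what a clip was really recorded at.
    import subprocess

    CLIP = "IMG_0123.MOV"  # placeholder path to a clip from the phone

    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate,avg_frame_rate",
         "-of", "default=noprint_wrappers=1", CLIP],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.strip().splitlines():
        key, value = line.split("=")
        num, den = (int(x) for x in value.split("/"))
        if den:
            print(f"{key}: {num / den:.3f} fps")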

Review: iPhone 16 Pro Max 65
A quick way to change speed for client review in the field

Lastly, benchmarks show the iPhone 16 Pro Max’s A18 Pro is about 15-20% faster than the A17 Pro, and that it’s faster than the M1 on single-core and multi-core benchmarks. While most people don’t really notice processing power in everyday tasks, this is (again) the fastest mobile CPU available, and if your current phone is two or three years old, it’s a solid bump. Both the regular and Pro iPhone models received A18-family chips this time around to provide the processing power and extra RAM needed to drive Apple Intelligence.

Oh, and it can charge much faster than older iPhones too, both wired, and wirelessly. But that’s enough about the phone; we’re here to look at the images it can capture.

Log Video — still great

If your last experience with iPhone video was before the 15 Pro and Pro Max enabled Log recording, you’ve been missing out. When you shoot in Log, you’re not just getting a different gamma curve, but you’re deactivating a lot of the sharpening, tone mapping and other processing that’s normally applied to iPhone video. While you can only use Log with ProRes HQ if you use Apple’s Camera app, using a third-party app like Blackmagic Camera or Kino allows you to use codecs which don’t require as much space, like HEVC. You can bring the Log clips back into a “normal” color space automatically in Final Cut Pro, or in any other NLE with this free LUT. As usual with Log footage, add your corrections before the Log transform for maximum control.

Of course, a dedicated camera definitely brings flexibility that a phone doesn’t, and allows you to shoot in a wider variety of situations. But the phone’s getting really close. How about the newly upgraded ultra-wide angle? Here’s a matched, cropped still, taken from Log video, comparing the 16 Pro Max (on top) with the 15 Pro Max (below):

Review: iPhone 16 Pro Max 66
The new 0.5x ultra-wide angle has a slight sharpness and detail edge for Log video (click to enlarge)

Has the iPhone 16 Pro Max changed the fundamental nature of Log over the 15 Pro Max? No. It’s very much the same, but the ultra-wide angle has improved, bringing good-looking wide-angle video to a wider audience. A bigger change for video is slow motion on the 1x lens — and that’s true for Log recording too.

Slo-mo video — much improved

Slow motion is one area where the iPhone was due for improvement. Though the iPhone has supported 120 or 240fps at 1080p for years now, quality definitely suffered compared to a dedicated action camera like the Insta360 Ace Pro, or a mirrorless camera like the Lumix GH6.

Here’s the iPhone 16 Pro Max’s 120fps 4K, cropped next to the iPhone 15 Pro Max shooting 120fps HD:

Review: iPhone 16 Pro Max 67
There is a focus difference between these two shots, but the dynamic range, clarity and resolution are all better on the 16 Pro Max on the left (click to enlarge)

In terms of dynamic range, the 15 Pro Max can’t shoot slow motion in HDR, while all the other cameras can. The 16 Pro Max can shoot Dolby Vision HDR at 120fps, but not Log — so you will see more sharpening in this mode. Still, the 16 Pro Max is the only device here that can record to ProRes in 120fps to an external SSD — required, as this mode can’t be recorded internally. Be ready for huge files if you really do need 4K ProRes Log at 120fps.
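
How huge is huge? For a given codec and resolution, data rate scales roughly linearly with frame rate, so a quick estimate is easy. The bitrate below is only a placeholder assumption; substitute whatever your recording app reports for your actual settings.

    # prores_storage.py: rough storage estimate for long high-frame-rate recordings.
    BITRATE_MBPS = 1600   # assumed placeholder; replace with the rate your app reports
    MINUTES = 10

    gigabytes = BITRATE_MBPS * 60 * MINUTES / 8 / 1000  # Mbps -> GB (decimal)
    print(f"{MINUTES} minutes at {BITRATE_MBPS} Mbps is about {gigabytes:.0f} GB")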

Increasing frame rate usually comes with compromises in terms of image quality. Some of these issues are down to light limitations (you can’t beat physics) but most high-frame-rate issues are down to how fast a sensor can be read out. This year’s focus on sensor speed means that 120fps footage looks much more like normal speed footage in terms of image quality. If you need to capture casual slow motion footage and an action camera doesn’t suit, this is a great option.

Spatial improvements

While I was hoping that the newly upgraded ultra-wide sensor would enable 4K Spatial Video recording, that hasn’t happened. Some developers of apps that do support 4K Spatial Video recording have updated for the 16 Pro models, but there seems to be very little difference in spatial video quality.

However, we do now have official Spatial Photo support, which is very welcome. Spatial now has its own tab in the Camera app (not just a toggle in the Video tab) from which you can shoot photos or videos, and you can view them on an Apple Vision Pro.

As an iPhone crops in on the ultra-wide 0.5x lens to capture an angle that matches the 1x lens, resolution can be a concern. The left and right angles in a spatial photo are 2688×2016 each, which look good, but are a step down (5.5MP) from a regular 12MP shot. However, a higher-resolution (5712×4284, 24MP) 2D image is also captured alongside the other two angles — great news, because you’re not compromising 2D when you capture in 3D.

Review: iPhone 16 Pro Max 68
Spatial photos include a high-res 2D image alongside lower resolution left and right angles for 3D devices

One feature of visionOS 2 is that it can create 3D Spatial photos from 2D photos, but this algorithm won’t get everything right; the real 3D image is far more convincing. (Here’s a sample, if you have a compatible device.) Interestingly, when you view these images on the Apple Vision Pro, in visionOS 2, it’s not possible to resize a spatial photo except by using the immersive view. However, you can disable the spatial view temporarily, resize the image, and then re-enable spatial display, and it’s whatever size you want.

But there’s more Spatial tech, in audio form! The added microphones do allow spatial audio recording, and if you inspect the clip in Compressor, you’ll discover a new ambisonic track alongside the Stereo track, disabled by default. This is a welcome step towards being able to use surround sound in everyday videos, but as with HDR, it’s taking far too long to adopt this tech more widely, and it’s largely a distribution problem.

Review: iPhone 16 Pro Max 69
Ambisonic audio recording is present on iPhone 16 Pro and Pro Max

Today, it’s easy to enjoy both HDR and Atmos audio served to us by paid streaming providers, or on Apple Music, but we can’t deliver it through YouTube for smaller clients. While it’s possible to work in Surround formats in an NLE, Atmos is harder, and distribution is harder still. Since YouTube doesn’t even support 5.1 Surround playback on most iPhones, Androids, iPads, Macs or PCs — only on TVs and devices connected to them — it’s still tricky to tap into the Spatial Audio support now in almost all Apple devices and many Android devices.

Fingers crossed that Vimeo’s upcoming Spatial Video support includes Spatial Audio support too; we need some competition in this space.

Still image improvements

As video professionals are often tempted into still image shooting as well, photos are worth a quick look. The 48MP sensor on the 0.5x lens gives these stills a big quality boost, and since this lens also supports very close focusing, it enables high-quality macro images too. By default, the 48MP sensor produces 24MP images, but if you prefer, you can record the full 48MP as a ProRAW Max or a HEIF Max image instead. This year, ProRAW has some subtle changes, and you can use JPEG-XL Lossy compression if you’re willing to sacrifice a small amount of quality to save a lot of space. A 48MP ProRAW shot on the 15 Pro Max takes approximately 75MB, but JPEG-XL Lossy ProRAW on the 16 Pro Max takes just 20MB.
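
Scaled across a day of shooting, that per-shot difference adds up quickly. A back-of-the-envelope calculation using the sizes above, with an assumed 100GB of free space purely as an example:

    # proraw_budget.py: how many raw shots fit, using the per-shot sizes quoted above.
    FREE_SPACE_GB = 100       # assumption for the example; adjust for your phone
    CLASSIC_PRORAW_MB = 75    # approximate 48MP ProRAW on the 15 Pro Max
    JPEGXL_LOSSY_MB = 20      # approximate JPEG-XL Lossy ProRAW on the 16 Pro Max

    free_mb = FREE_SPACE_GB * 1000
    print(f"Classic ProRAW: about {free_mb // CLASSIC_PRORAW_MB} shots")
    print(f"JPEG-XL Lossy ProRAW: about {free_mb // JPEGXL_LOSSY_MB} shots")
    print(f"Per-shot saving: {1 - JPEGXL_LOSSY_MB / CLASSIC_PRORAW_MB:.0%}")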

Review: iPhone 16 Pro Max 70
JPEG-XL Lossy compression reduces file size while retaining RAW flexibility

Note that this isn’t regular JPEG XL support, it’s just the type of compression used internally within ProRAW. As discussed in this article about WebP, there are good reasons to look beyond traditional formats if they can slot into our workflows, but for compatibility reasons, we may still need to ultimately share in older formats too. Today, JPEG XL is still not supported by Chrome (nor by many other apps) so we’re not there yet, but Apple’s (partial) support is certainly welcome.

Besides low-level changes, Apple have made their Filters preset system far more flexible, and given it a new name: Photo Styles. Underneath each image, while adjusting it, you’ll now see a control pad that you can drag to affect the tone and color of the image — much more flexible than a single “strength” slider. If you swipe to pick a preset photo style, you will also see a Palette slider underneath the pad, controlling the strength of the effect in addition to tone and color. Different parts of the images (such as faces) will receive unique treatment, so it’s more complex than it first appears.

These choices can be made while you take photos (in the Camera app) or afterwards (in the Photos app), and all choices made before shooting can be changed after shooting.

Review: iPhone 16 Pro Max 71

Photo Styles can produce extreme or subtle results — with two or three ways to tweak, depending on the style chosen

If you shoot in raw and plan to process your images on a desktop, keep working that way. But if you’re just casually snapping, and you don’t need the flexibility of raw on every shot, Photo Styles will help you get a lot closer to the image you wanted, without having to compromise nearly as much. So, if you’re unhappy with the default, somewhat flat look of standard iPhone photos, consider shooting mostly in HEIF, and just dial in the Tone to your taste.

Conclusion

The iPhone 16 Pro Max is a solid step forward for image makers. Camera Control is truly useful, and can be used with third-party apps as well as the native Camera app. The slow motion is more capable, and the device in our pockets is, like it or not, the only camera which many new video creators own. We are not yet at the stage where a phone can replace every professional camera, but every year, a few more features that were unique to dedicated cameras are added to the iPhone too. This year, the Camera Control button makes the iPhone 16 family feel more like a regular camera than any previous model, Photo Styles should help a lot of people’s casual photos better suit their preferences, and JPEG XL Lossy ProRAW will help fans of raw save a ton of space.

Finally, although I prefer the big Max-size screen for a better playback experience, and a bigger battery for longer recording times, there’s a smaller option that’s just as capable. This year, the iPhone 16 Pro has almost exactly the same feature set with a smaller battery and screen, so there’s little compromise if you prefer a smaller phone. 

Recommendations? If you ever plan to use a phone to record video, and you’re fussy about images, you’ll notice the difference that Log brings. If you already have a 15 Pro or Pro Max, you’ll get Apple Intelligence when it’s ready, and you’ve got very similar Log and ProRes video today. Upgrade from a 15 Pro or Pro Max if you want 120fps, Camera Control, a better ultra-wide lens, or Photo Styles.

It’s an easier recommendation if you’ve got an older iPhone or an Android, and you’re not happy with the quality of the photos or videos you’re able to capture. The bottom line: if you’d like to record high quality Log video on a phone, and you’d like access to Apple Intelligence over the next few months, the iPhone 16 Pro or Pro Max is a solid choice.

Do you want to know more?

Several other worthwhile video reviews, with plenty of detail and sample footage, are already online and worth a look.

Finding your perfect travel camera https://www.provideocoalition.com/finding-your-perfect-travel-camera/ https://www.provideocoalition.com/finding-your-perfect-travel-camera/#respond Wed, 04 Sep 2024 13:24:57 +0000 https://www.provideocoalition.com/?p=283611 Read More... from Finding your perfect travel camera

After the recent pandemic-tainted years, the value of time away from home has never been clearer. And for many photo or video professionals, a vacation is often a rare chance to capture some fresh moving or still images, for personal or semi-professional purposes. Whether you’re travelling to take videos or merely grabbing a few incidental shots, most of us will want to capture a few moments in decent quality.

Finding your perfect travel camera 72
The Hooker Track, in New Zealand, taken with a Lumix GH6 (25MP, click for full resolution)

If you’re just heading out for the day or the weekend, it’s easy enough — throw whatever photographic and video gear you might need in the back of a car. But what if you’re going further, for longer? What if you’re going to be traveling with family? Suddenly, you’ll have to get selective, and the camera in your phone, the camera that’s always in your pocket, becomes pretty tempting. I’ll still use a dedicated camera for client work, because I need control, reliability and the best image possible, but for my own images, things are shifting.

Finding your perfect travel camera 73
The Hooker Track, in New Zealand, taken with an iPhone 14 Pro Max (48MP, click for full resolution)

Today, more people than ever are capturing photos and videos, but fewer people than ever are using dedicated cameras to do so. Similarly, more people are consuming photos and videos, but mostly on smaller devices than ever before, where image quality matters less. So what should you do? There are a ton of variables here, including (I’m serious) whether or not you have an Apple Vision Pro. This is not clickbait, and it’s not an ad for any specific kind of hardware — it’s a guide so you can make your own decision. Let’s dig in with some questions.

Why are you shooting anyway?

In our personal lives, a couple of decades ago, if we were taking photos or shooting videos, we were probably the only people in the room doing so. Cameras were rare, camcorders even rarer, and every moment of our lives was simply not documented with anything like the detail or regularity it is today. We had slide nights, and photo books, but otherwise, we just had our real selves.

Because many people are now exposed to so many images, each photo has, to most people, less value than it once did. Of course, a special video of a now-passed family member is irreplaceable, and personal family photos will always have value that grows with time. But viewers have been spoiled. It’s hard to impress your friends with your great shot of a sunrise, because they’ve seen a thousand great sunrise pics this week. The world doesn’t need more great photos — we’re drowning in them.

I hope you want to take great photos, timelapses or video clips simply because you love it; there aren’t enough eyeballs to consume all these images.

What are you willing to carry?

In the past, I've certainly taken more gear than I would today. When the original Blackmagic Cinema Camera was the best video camera I could buy, I lugged it up mountains and elsewhere around Japan, capturing some very nice footage that few other people ever paid attention to.

Seriously, what am I going to do with all this ProRes log footage from 2013?

In 2022, I took my Lumix GH6 to New Zealand, along with a 360° camera and an iPhone 14 Pro Max, to try to capture the amazing landscape as best I could. At the time, the iPhone’s videos couldn’t come close to the monster 6K 4:3 open gate video from the GH6, though the iPhone 15 Pro and Pro Max’s Log videos can get a lot closer if 4K is sufficient. And despite the lower still resolution, the GH6’s still images look better (to me) before processing. The iPhone has the edge in raw resolution, but the default look is still a bit too sharp and contrasty for my tastes.

The iPhone 14 Pro Max's 48MP sensor on the left (a bit sharp at 100%) vs the GH6's 25MP sensor on the right, zoomed in to match (click for full res)

Although I’ve used this footage professionally and I don’t regret carrying “real” cameras, they weren’t light, and I felt the weight. Just managing to fit all the battery-powered equipment in my carry-on luggage was a non-trivial game of Tetris. Is there a lighter camera out there?

What about a compact travel camera?

Unfortunately this market segment has declined sharply, and today's travel cameras are a significant step down from today's pro cameras in terms of sensor size and image quality. As I'm used to the GH6, I don't want to go backwards by using something worse — and if you have a full-frame camera already, your personal quality bar may already be set higher than mine. Action cameras are great for sports and for timelapses, and they're certainly not heavy, but their image quality doesn't surpass a modern phone.

Travel cameras, usually built around a small zoom lens, do offer reach, but don't usually compete with mirrorless cameras on image quality. Matching your current professional lens system makes sense, and I'm lucky to be in the Micro Four-Thirds camp from a weight point of view. However, purchasing a smaller camera body to match my existing lenses would be a large investment for a relatively small weight saving.

Another issue with many cameras sitting just under the “pro” category is that they tend not to be weather sealed, and that can be a problem. On a holiday in Taiwan, I took a camera with a non-weather-sealed kit lens (as I thought the reach would be useful) but in the regular drizzle, I barely got to use the camera at all. Professional bodies and zooms tend to be weather sealed, but not all primes are. Beware.

Finally, then: is your phone good enough?

At video editing conferences over the last several years, I’ve often been the only person to bring a dedicated video camera. To be fair, not all editors are also shooters, but I was surprised to hear from colleagues, even pre-pandemic, that for them, a phone was good enough for family photos. To me, a phone was only good enough for photos viewed on a phone, but sub-par for images viewed on a large screen. Phones have improved a lot, but beware of the default settings, which sometimes remove too much noise and add too much sharpening.

iPhone (100%) on the left, GH6 zoomed in to match on the right (click to enlarge)

The shift of the iPhone 14 Pro Max’s main sensor to 48MP (and the expected upgrade of the 0.5x sensor to 48MP on the iPhone 16 Pro) has absolutely made a difference to how good photos look on a big screen, but detail is not the only reason to carry more than a phone. The core reasons for using a dedicated camera even for holiday snaps remain: changeable and better lenses to let you cope with more extreme situations, swappable batteries and storage, real depth of field, and a cleaner, less processed image. If you can live with the lens selection, everything else can be dealt with.

A few potential solutions

  1. If you can’t live with the quality limitations of the built-in Camera app, use third-party apps (like Halide and Kino) to shoot RAW photos and Log videos, avoiding overprocessing.
  2. To manage storage, take a pair of SSDs, offload your images each night to both drives, and delete when done. With USB-C on board, you don't even need a computer to do this (and if you do travel with a laptop, see the sketch below this list for one way to automate the two-drive copy).
  3. If you find yourself running out of power, pack an external battery. USB-C batteries can recharge most modern devices and make you popular with your travel companions too.
Select your images, Share, Save to Files, then navigate to your connected SSD
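
If you do carry a laptop to offload SD cards from a dedicated camera, the same two-drive idea is easy to script. Here's a minimal sketch, assuming hypothetical mount points (your card and SSDs won't appear at these exact paths, so adjust for your own gear):

```python
# Nightly two-drive offload sketch (hypothetical mount points; adjust for your own gear).
from pathlib import Path
import shutil

CARD = Path("/Volumes/LUMIX/DCIM")          # assumption: where the camera card mounts
BACKUPS = [Path("/Volumes/SSD_A/offload"),  # assumption: two travel SSDs
           Path("/Volumes/SSD_B/offload")]

for src in CARD.rglob("*"):
    if src.is_file():
        rel = src.relative_to(CARD)
        for backup in BACKUPS:
            dest = backup / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            if not dest.exists():           # skip anything already copied on a previous night
                shutil.copy2(src, dest)     # copy2 keeps timestamps for sorting later
```

Only format the card once both drives hold a copy.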

But if you can't give up the image quality of your professional mirrorless body and its super-shallow depth-of-field, don't. Maybe you don't need to buy another body for your current system either — leave your heavy zooms at home, and take a nice prime instead. Zoom with your feet, embrace the constraints, and use both devices for what they're best at.

On my next overseas trip later this year (see you at the Final Cut Pro Creative Summit?), I'll need to shoot professional video footage in low-ish light, and a phone alone won't quite do the job. As a compromise, I'll be taking my Lumix GH6 with just a 15mm f/1.7 Leica prime (943g or 33 ounces), significantly lighter than the same body with my Olympus 12-40mm f/2.8 zoom (1259g or 44 ounces), and taking up a lot less space.

The Lumix GH6 is weather sealed, but this light 15mm f/1.7 prime is not

My GH6 can capture shallow DOF interviews plus clean wide-angle shots in any light if it’s not raining, and my iPhone can handle anything else — panoramas, Spatial Video, ultra-wide angles, selfies, and quick captures when the camera is in a bag.

Why does the Apple Vision Pro matter?

It’s simple — if you don’t have an Apple Vision Pro, you probably don’t have a device which can actually reveal all the pixels in your images or your videos. TVs aren’t going to go past 4K for some time and computer screens have hit the same barriers. Some 6K displays do exist, but they’ve all been more expensive than the Apple Vision Pro — which clearly isn’t cheap — and none of them can present an image anywhere near the apparent size that a headset can. Image quality on a phone is very good these days, but the Apple Vision Pro can show me its flaws, so I’ll still be taking a dedicated camera with me. This will be the case even if the iPhone 16 Pro has a 48MP 0.5x lens (as predicted) because resolution isn’t everything. But every year, as the compromises get smaller, the equation shifts just a little further towards just taking a phone.

Conclusion

Quality matters both more and less than it once did. Because of the quantity of photos we see, fewer people care about quality, but those who do care have better displays than ever to appreciate them. Image quality is still important to many people — including me, and if you're a video professional, likely you too. So don't compromise on the look, but don't break your back if a client isn't paying you to capture their images either. It's still worth taking the best photos you can with whatever device you use, though, because eventually, when you do get an Apple Vision Pro 3 or 4 down the line, you'll really be able to appreciate all those lovely pixels you captured all those years ago.

3D Stereoscopic Video — Fake it or make it? https://www.provideocoalition.com/3d-stereoscopic-video-fake-it-or-make-it/ Sat, 31 Aug 2024 13:05:06 +0000

While the Apple Vision Pro is likely to have a significant impact on the niche area of stereoscopic video production, making stereoscopic video is going to remain difficult for a while. If you're a traditional production or post-production person experienced with regular 2D, what would you do if a client asks you to make something in stereo?

If you're not excited about 3D stereoscopic video, it's probably because you haven't tried an Apple Vision Pro yet. That new dimension really does bring something novel: a sense of presence, of being there, and for some kinds of productions, it's absolutely worth your time to explore. We've had 3D before, but it's never looked this good. For a long time, camera resolution far outstripped what delivery formats could show, but now we're finally at the point where we need cameras to catch up.

Some clients are going to want to push this particular envelope, but if you aren't sure you can handle the demands of a full 3D shoot, and you can't bring in someone who can, don't risk messing it up — shooting 3D is still hard. Even checking you got the shot in the field is hard — you'll need to monitor with a headset or a specialised monitor. And you can't just bodge two cameras together: getting two cine lenses as close as two human eyes is difficult or impossible without some specialised gear. If the cameras are too far apart, it could enhance the 3D effect, but make the whole scene look small. Here's a guide if you want to pursue it.
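
To put a rough number on that miniaturisation effect, a common stereographer's rule of thumb (my addition, not something from the guide above) is that the scene reads as if scaled by the viewer's eye separation divided by the camera interaxial:

```python
# Rule of thumb only (an assumption, not a formula from this article or the linked guide):
# apparent scale ≈ viewer interpupillary distance / camera interaxial distance.
def apparent_scale(viewer_ipd_mm: float, interaxial_mm: float) -> float:
    """Roughly how large the scene reads relative to real life."""
    return viewer_ipd_mm / interaxial_mm

print(apparent_scale(63, 130))  # ~0.48: lenses twice as far apart as your eyes, and the world can feel half size
```

It's only an approximation, but it's a quick sanity check before you decide how far apart the cameras really need to be.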

Before we look at how to fake it, let’s look back at how it’s been done.

Stereoscopic shooting over time

There was a push for 3D back in the 1950s, and since many of those films were shot in black and white, it was relatively easy to use red/blue anaglyph glasses for the 3D effect. Dial M for Murder and It Came From Outer Space are examples of classic 3D movies you can buy today in Apple's TV app. Of course, without computers, they shot 3D for real.

Just a few 3D films above a lake

When 3D movies kicked off again more seriously around the 2009 launch of Avatar (and 3D TVs became a thing for six years or so), there were many more films finished for both 2D and 3D, some shot natively in 3D and some converted in post. Some native 3D productions also explored higher frame rates, including The Hobbit trilogy, Billy Lynn's Long Halftime Walk, and Avatar 1 and 2. But not everyone wants to view cinema at higher frame rates. Notably, Avatar: The Way of Water is the only one of those films actually watchable at home with a high frame rate, and even then only in action sequences, and only through Disney+ on Apple Vision Pro. It's worth a look.

If you’d like to explore the history of 3D film, take a look through this list to see all the 3D options and then, if you’re curious, consult this list to see which ones were shot “for real”.

Faking 3D has gotten a whole lot better

Still, you shouldn’t get too attached to “real” vs “fake”. As you may know already, filmmaking today is not as simple as just pointing a camera and pressing record; hybrid approaches can produce excellent results. Some VFX-heavy movies might be able to deliver excellent 3D while using a single camera on set. Here’s a terrific article to make you an instant expert on the subject.

Gravity is a hybrid; the live action parts were shot in 2D, but as so much of the movie is actually 3D animation, most of the movie was rendered from two separate angles. Many Disney and Marvel films are available in 3D, and a Disney+ subscription is something I’d recommend to most Apple Vision Pro owners. Note that international distribution of 3D films is messy; in the US, Edge of Tomorrow and Ready Player One are in 3D, but here in Australia, neither are. And 3D isn’t always consistent within a film series: Dune is available in 3D, but Dune 2 is frustratingly not — a real shame considering how good the 3D conversion for Dune is.

Some 3D fans have taken matters into their own hands, using conversion tools to create 3D versions of their own content, and even feature films they’ve somehow acquired. The original 2D image becomes one of the two “eyes”, and the other eye is created by analyzing the first image, figuring out where each part of the image should sit in 3D space, and then offsetting each element appropriately to create a 3D image.
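
To make that process concrete, here's a deliberately naive sketch of the general technique (depth-based view synthesis) in Python with NumPy. It's an illustration only, not how any of the apps below actually implement it: shift each pixel sideways by a disparity proportional to its depth, then crudely paper over the holes.

```python
import numpy as np

def synthesise_right_eye(left: np.ndarray, depth: np.ndarray, max_disparity_px: int = 20) -> np.ndarray:
    """left: HxWx3 image; depth: HxW in [0, 1], where 1.0 is nearest to the camera."""
    h, w, _ = left.shape
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (depth * max_disparity_px).astype(int)   # nearer pixels shift further
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = left[y, x]    # no occlusion handling: later pixels simply overwrite
                filled[y, nx] = True
    # Crude hole filling: smear the nearest filled pixel across each gap.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right

# Tiny stand-ins for a decoded frame and an AI-estimated depth map:
frame = np.random.randint(0, 255, (270, 480, 3), dtype=np.uint8)
depth = np.random.rand(270, 480)
right_eye = synthesise_right_eye(frame, depth)
```

Even in this toy version the core problem is visible: the second eye needs pixels the original camera never captured, so real converters lean on depth-ordered occlusion handling and AI inpainting to invent them, and that's exactly where the artifacts discussed below come from.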

These conversion tools used to be rare and limited, creating an effect where flat, depthless 2D characters were simply positioned in 3D space. I've also seen some questionable converted videos on the 3D video sharing app SpatialStation. But, at least some of the time, AI can do a good job of both generating an accurate depth map from a 2D image and filling in the gaps left when elements are shifted slightly between the two images.

A real shot, and the depth map created from it by Depthify — it's a bit fuzzy

On the Apple Vision Pro, the upcoming visionOS 2 includes an automatic 2D-to-3D photo conversion tool, and it’s remarkable how much this fakery can add to the emotional impact of an important photograph. Video is a trickier problem, and output can exhibit some flickering, but since deflickering and temporal smoothing are not new problems, I’d expect this to be solvable. 
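
As a toy illustration of what temporal smoothing means here (a generic approach I'm sketching as an assumption, not what visionOS or any of the apps below actually do), you can blend each frame's depth estimate with the previous smoothed result so depth values don't jitter from frame to frame:

```python
import numpy as np

def smooth_depth(depth_frames, alpha: float = 0.2):
    """depth_frames: iterable of HxW NumPy arrays; lower alpha means stronger smoothing."""
    smoothed = None
    for depth in depth_frames:
        if smoothed is None:
            smoothed = depth.astype(float)
        else:
            # Exponential moving average: mostly keep the previous depth, nudge toward the new frame.
            smoothed = alpha * depth + (1 - alpha) * smoothed
        yield smoothed

# Random stand-ins for ten per-frame depth maps:
steadier = list(smooth_depth(np.random.rand(10, 4, 4)))
```

The trade-off is that smoothed depth lags slightly behind fast motion, which is the usual deflickering compromise.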

As an experiment, I captured several comparison shots of different subjects with both a 3D and a 2D camera, then converted the 2D footage to 3D and compared the results in my Apple Vision Pro. The 3D footage is from my iPhone 15 Pro Max (using the built-in Camera app) and the 2D footage is from a Lumix GH6, recording in 6K.

The iPhone’s depth is great, but native Spatial video quality is not really up to professional standards, especially when viewed through a headset. This may change with the iPhone 16 Pro and Pro Max, due in just a few weeks, but we’ll have to wait and see. Today, unsurprisingly, the 6K GH6 footage is more detailed, so is it worth shooting with a nicer camera and adding depth in post?

Software conversion options

Many apps can perform this conversion, so let’s consider a few:

  • Depthify.ai. Works locally on a Mac (free, slow) or in the cloud (paid). Sadly, this app needs a bit more optimisation, at least on Mac: it processed at less than one frame per second on my M3 Max MacBook Pro. Worse, it wasn't 100% reliable, and sometimes failed to generate a video at all.
  • Spatial Media Toolkit, an inexpensive option available for Mac and Apple Vision Pro. You can test the app for free in 1080p, but higher resolutions or durations over 60 seconds require payment (in-app Pro purchase at US$6/month, US$40/year, or US$60/lifetime). 
  • Owl3D, a more comprehensive option for Mac and Windows that can handle regular or 360° videos. There’s a US$10/month plan with support for up to 4K output, but commercial use or even higher resolution costs US$36/month or US$300/year.

Assessing the results

Spatial Media Toolkit is fast, but the artifacts it introduces around the edges of moving objects are pretty distracting — and in a hand-held shot, there are a lot of moving objects. These flaws are not simply because the depth map was imperfect (though it was) but because the newly generated areas aren’t correct. Even if you only plan to shoot on a tripod, you’ll still see odd halos around people’s heads, and the faster objects move, the worse the results will be. Your content will determine how objectionable these artifacts are, but I’m not sure that the output is reliable or tweakable enough for professional use.

Pedestrians walking past at a normal speed were surrounded by warping and distortion in Spatial Media Toolkit, and this is pretty distracting in motion

On the plus side, this app is about as fast as a slow video filter (taking about 3x a clip's duration to process) and produced 6K Spatial output at a usable data rate (85Mbps from my 200Mbps 6K input). Before you process the whole clip, you'll see a preview (in parallel, cross-eye or "wiggle" formats on the Mac, or in true 3D on the Apple Vision Pro) and can scrub through the clip, which is useful for a quick assessment. Unfortunately, batch processing isn't possible, and there's no file name mapping between input and output, which makes processing a whole shoot's worth of clips pretty tedious. If you're only planning on converting a finished timeline, it's not really a problem.

Owl3D's output has fewer artifacts than Spatial Media Toolkit's, and it also gives you many more settings to tweak, including a variety of depth creation and temporal smoothing options alongside ProRes support. The higher-precision models come at a speed cost, though the results are indeed cleaner, reducing artifacts to a point where a casual viewer may not notice them.

Though 1080p output was pretty speedy, 6K output took much, much longer — an hour to process the same monster 28-second 6K clip. (The developers have let me know that speed is expected to increase a lot with optimisations coming soon.) However, since batch processing is included, running your jobs overnight is an option, and again, if you're only planning to export a finished timeline, processing speed is not such a big deal.

Batch processing in Owl3D is certainly welcome

Both Spatial Media Toolkit and Owl3D can add depth and immersion to a scene, and while Spatial Media Toolkit has a clear advantage on speed and native Vision Pro support, Owl3D wins on quality. However, neither of them can produce perfect results in a complex scene with fine 3D details, and the artifacts — even with slow processing — can sometimes ruin the illusion.

Are the artifacts too much?

If the contents of your frame are relatively simple and clearly defined, you’ll see fewer flaws, but if you film a garden or park, the depth of each tree or bush is likely to be a softly defined blob; each leaf will not have its own distinct position in space. And foliage or not, parts of some objects will be misinterpreted, because the AI can’t segment every object perfectly. In one clip, I saw a car’s mirror placed far behind the car it was attached to, at the same depth layer as cars across the road.

To be fair, these apps are very much still under development, and I would expect speeds and output quality to improve in the future. We are off to a good start, and there are a few more options available if you have a grunty PC — the dearth of legitimate 3D movies on the Quest has meant that users of that platform have had to be more creative. If you want to live on the bleeding edge, here's at least one more solution that's command-line only on the Mac, but potentially faster on a PC with a decent Nvidia GPU.

What if you don’t want to wait for processing at all? Live, on-demand local 2D-to-3D conversion could be yours (if you have a beefy PC and a Quest) with this solution from Steam. UPDATE: There’s also an Apple Vision Pro app called ScreenLit that offers live (but imperfect) 3D conversion.

This space is still ripe with experimental solutions, and the story is far from over.

Conclusion

Your approach to 3D will depend on whether you're planning to deliver exclusively to 3D platforms, or to both 2D and 3D viewers. Shooting natively in 3D is currently far more difficult, so I can't blame anyone considering post conversion as an option for hybrid delivery. Processing times can be high, so you'll have to find a balance between quality and speed that you're happy with. Make sure to review output in a headset before you make that call, though.

Time will tell if any of the 2D-to-3D conversion tools can deliver reliable, consistent results that are good enough to use on a regular basis. On one hand, the shooting experience is far simpler with conversion, and you're not going to compromise the film for 2D viewers at all. But you can still experience artifacts, and it's perhaps a big ask for AI to do a job which many professionals spent a lot of time doing by hand until just recently. As ever, your mileage may vary. But I suspect that for some jobs, shot well, 3D conversion could soon be a viable option that fulfills client expectations with minimal extra effort. Consider it.
