Phil Rhodes – ProVideo Coalition
https://www.provideocoalition.com

New year’s resolutions for crew
https://www.provideocoalition.com/a-victorious-2025/ (Thu, 02 Jan 2025)

The aftermath of a party, with glitter on the floor and empty wine glasses on the coffee table.

Resolution is a word that gets more airtime than it should in a world where pocket-money cameras have four times the sharpness of classic cinema. In fact, ending that preoccupation with numbers should go on a list of things we’ll try to do in 2025. A new year’s not-resolution, perhaps.

Here’s a few others.

Take user-generated content more seriously

If you’re perpetually engaged in senior positions on high-end projects at union rates, it’s easy to overlook changes in the wider industry. Of course, a lot of people are conspicuously not in that position at the moment, and there’s a lot of things to blame: the peak and decay of streaming, the hangover of the pandemic, pricy-but-mediocre franchise films and streaming series, and industrial action which, whether we like it or not, certainly kicked an industry when it was down.

The fact that this all happened at the point where user-generated content was ascendant is no coincidence, but certain markets have been able to ignore that reality because YouTube has not so far been capable of funding an Alexa 65 and a set of DNAs. That probably hasn’t changed yet, although some shockingly high-end work is being done. The Dust channel has been putting out user-generated sci-fi for a while, and while much of Dust’s output might not quite satisfy Netflix subscribers, it is naive to assume that the status quo is eternal.

Snobbery is involved, though as a business consideration the rise of user-generated content is a question for the c-suite more than camera crews. Other things, though, are more in the hands of the craftspeople.

 

Young woman sitting in front of a ring light applying makeup.
Here we see the entire production, directorial and post team at work. Yes, when you and your one million buddies can put the wind up Disney using ten-dollar Aliexpress ring lights and iPhones, you are worth taking seriously. By Pexels user George Milton.

Recognise production design

Given film is so much a team sport, the lack of communication between departments is often slightly shocking. Perhaps that’s because it is also a very expensive artform, provoking a nervousness which tends to keep people firmly in lane. A film set is a place where it is often better to keep silent and be thought a fool. In the abstract, most people are keenly aware that there is no good cinematography without good production design, but that’s easily forgotten in the midst of pixel peeping the latest camera release (of which more anon).

Sometimes, production design means months of preparation. Sometimes, it just means picking the right time and place. Still, interdepartmental collaboration is sometimes more competitive than it should be. That’s particularly true on less financially replete productions, where it may be accepted that the show will not compete with blockbusters but that nobody wants that outcome to be their fault. So, camera refuses to unbend for the location manager, or vice versa, and the result is unnecessarily compromised.

We could equally assign a couple of new year’s resolutions to other departments, encouraging them to recognise the need to, say, put the camera somewhere it can see both the actors at once. Ultimately, though, we should admit that too many people put too much importance on the camera, and not enough on what’s in front of it.

Be bold

Even lay audiences have started to notice that a certain proportion of mainstream film and TV has adopted a rather cautious approach to high contrast and saturated colour. Some of the accused productions have been comic book or animation adaptations, which probably ought to be the opposite. What’s even more counterintuitive is that this is invariably the product of digital cinematography, which was long held to be lacking in dynamic range – which is the same thing as high in contrast.

Grey concrete support pillars under a bridge, in grey mist.
This atmospheric photo by pexels user Markus Spiske is pretty, but a lot of modern film and TV sort of looks a bit like this even when it isn’t foggy.

Engineers have since given us fifteen-stop cameras, but there seems to be a lasting societal memory of early, less-capable electronic cinematography which makes people afraid of the extremes. It’s at least as likely that fiscal conservatism is leading to artistic conservatism around the sheer cost of nine-figure blockbusters. Nobody ever got in trouble for not crushing the blacks.

The result is an identifiable lack of punch in movies and TV shows that even determinedly nontechnical people are starting to notice. There’s a whole discussion to have about history, how things once looked, and how they look now, but with modern grading we can have anything. The solution is easy, if the producers will stand it: be not afraid of minimum and maximum densities – unless you’re grading for HDR, in which case absolutely be afraid, but that’s another issue.

Stop pixel peeping

And yes, like a lecturing parent frustrated with a chocolate-smeared child’s perpetual tendency to steal cookies, we do have to talk about that obsession with numeric specifications. It is the camera department’s equivalent of bargain vodka. Everyone knows it’s a bad idea, but it starts off fun and we can stop whenever we like. Soon, though, we realise that cameras are now almost too good and pixel peeping has facilitated a generation which thinks that swear-box words like “cinematic” and “painterly” are objectively measurable. Then it turns out that our attractively-priced metaphorical booze was mostly brake fluid, and people end up spending time counting megabits that should have been spent working out a mutually-beneficial compromise with the location manager.

Everyone knows that good equipment is necessary. Everyone knows it isn’t sufficient. Everyone also knows that pixel peeping is a bad habit and complaining about it almost feels redundant. But if we can make 2025 the year when film students use social media to discuss technique more than they discuss technology, that’ll be a minor victory.

Fantastic fairy dust
https://www.provideocoalition.com/fantastic-fairy-dust/ (Sat, 21 Dec 2024)

The BFG character looks over a table as it is filled with food by a purplish cloud of magical dust.
Sainsbury’s festive commercial. Sometimes, fairy dust is not gold. But usually it’s gold.

 

Fire up the particle generators, folks: it’s the time of year when every third commercial features snow and a variation on fairy dust that isn’t usually the product of a visible fairy. Instead, it appears wherever effects artists do their best to create a visual expression of general-purpose seasonal cheer. We’ll concentrate on the UK, since that is your correspondent’s responsibility, and explore a few glitter-heavy examples that might not be familiar around the world. Your correspondent’s home island is an odd place for festive advertisements, since it rarely snows in December, and it certainly doesn’t snow very much in the late August weeks during which most of these things are actually shot. Almost all of them, though, reach for the buckets of shredded paper – or the controls for Red Giant’s Particular software. Truly, the festive season is all about floating motes. Let us know in the comments if this is a global phenomenon, and whether it applies to other traditional celebrations, too.

Congealed innards

It’s a sign of the economic times that low-cost UK supermarket chain Aldi is doing so well. Despite the company’s cost-controlled approach, though, its yuletide promotional effort gives us two particle effects in the opening shot, setting a high standard for floating glitter in this year’s lineup. Otherwise, Aldi is determinedly traditionalist, giving us a widescreen presentation apparently shot on spherical lenses (the amount of CG integration might have made anamorphics a bit of a millstone). The teal and orange look is a safe choice, especially as it’s so easy to maintain when your protagonist is a carrot.

By comparison, Asda suffers a mild handicap in that its corporate colours are a decidedly antifestive black and green. It’s therefore difficult to depict the logo as a welcoming island of light in a winter scene, although the opening scene is a wonderful example of the slowly-descending crane opener which makes a valiant effort to do just that.

A sleigh careers dangerously down a snowbound street at night, trailing golden glitter.

Perhaps by way of compensation, Asda takes a more ambitious approach to camera equipment. There are some striking similarities between the two narratives, both of which feature CG creatures on a military operation, but Asda’s exists in a cinemascope-style frame shot on true anamorphics. If particle-effect snow is done with sufficient enthusiasm, perhaps as a full 3D render, it can be hard to tell from real, but it seems possible that Asda chose to reinforce its dedication to the traditional by deploying real fake snow, if there is such a thing.

Marks & Spencer shows us a world in which Dawn French invites half the postcode round for a pre-festivities party which must have felt crowded even in the multimillion-pound London townhouse in which it appears to take place. The ninety-second piece (a popular duration, for some reason) also gives us a fairy in person, provoking plenty of work for Particular. Even so, the show leans away from the teal and orange toward a look that’s actually fairly desaturated but for the deliberately chosen spot colours of red and green.

The action opens in a world where people are taught to carefully coordinate not only their clothing but also their purchases – look for the red hat, red flowers and red shopping bag from the very first frame. Look also for the interiors which take place in a room that someone’s painted a dark brownish-purple, though the production designer probably used the term “oxblood”. This is a particularly good example of why movies rarely look like the real world, because in the real world people tend to avoid living in an interior the colour of congealed innards.

Gingerbread cannibalism

Dawn’s ersatz house is oxblood and dark green, which is festive but impractical. It would probably look a bit less cheerful around a scene set during a midsummer ball, but it does serve to remind nascent filmmakers that great cinematography can begin with a can of paint.

For those of us who rarely encounter the opportunity to build a dining room bigger than the average UK apartment, consider Tesco, which takes a determinedly real-world approach to its seasonal promotion. Mostly shot handheld, everything seems amazingly realistic until people (and foxes) start turning into gingerbread. There’s a certain body-horror aspect to a gingerbread boy eating a gingerbread video game, but there’s certainly a preponderance of family values and a standard-issue big, soft, warm top light over a table full of way too much food.

Fantastic fairy dust 1
Now this is quality fairy dust. Field turbulence, motion blur, and a strong contribution from a glow and flare plugin, if your narrator is any judge.

There are even particle effects, though they’re used to create a shower not of fairy dust, but gingerbread cookies, candy canes, and other objects evocative of saturnalia. Again, the photography is fairly straightforward – spherical 16:9 – and beyond the cookie people, the production design is restrained to maintain that real-world feeling. Because sub-f/1 lenses exist, it’s dangerous to claim something was shot on a really big chip, but the depth of field here screams large format. The highly mobile car interior at 0’48” would have needed either a camera ending in “mini” or a DSLR on a stick, but of course that doesn’t preclude a sensor the size of a playing card.

Perhaps the most technically interesting of this year’s crop is the commercial produced by Sainsbury’s. Featuring the BFG character from the film of the same name, the company takes a leaf from Me to You’s now-classic stop motion promo and animates its lead character only every other frame for a handmade look.

Not stop motion

The thing is, the BFG, unlike Tatty Teddy, is computer generated, not stop motion. He’s also inserted into live-action scenes. The combination works surprisingly well, especially considering some of that live action has camera movement. Tracking CG animation at 12.5 frames per second into a live action scene shot at 25 inevitably creates a one-frame disparity in the motion tracking at least every other frame. It is visible when stepping through frame by frame. Still, careful choice of subject and animation creates a result that clearly embodies the sort of hand-made aesthetic they wanted to seem to be using.
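To put numbers on that cadence mismatch, here is a trivial sketch in plain Python, nothing to do with any actual tracking pipeline: if the character is posed on twos while the plate and the camera track update at 25 frames per second, the pose can only be current on every other frame.

```python
live_action_fps = 25
animation_fps = 12.5   # the character is posed "on twos"

for frame in range(6):
    # The pose shown on each live-action frame is the most recent animation pose.
    pose_frame = int(frame * animation_fps / live_action_fps) * 2
    lag = frame - pose_frame
    print(f"live-action frame {frame}: CG pose from frame {pose_frame} (lag {lag})")
```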

It’s a rare moment of innovation in a field which often doesn’t attract many new ideas. So’s the cloud of magical vapour that appears in one scene. Fairy dust is inevitably golden in colour, but Sainsbury’s pushes the boat out with a rainbow of hues. The titular Giant, meanwhile, is more or less a celebrity endorsement, and it’s not clear how many of the advertised seasonal treats represent a fair serving for someone who’s thirty-five feet tall.

Finally, there’s upmarket chain Waitrose, which is so famous for producing story-based commercials this time of year that their releases are national news. They hired Matthew Macfadyen. But they didn’t hire anyone with a copy of After Effects to do any particle work, so it’s hard to take seriously.

Exploring the extremes of dynamic range
https://www.provideocoalition.com/exploring-extremes-dynamic-range/ (Fri, 22 Nov 2024)

A hand holding a light meter near a woman’s face.
Near enough, right, with modern gear? By Pexels user Tima Miroshnichenko.

 

When camera manufacturers start talking (or at least creatively leaking) about twenty-plus-stop cameras, it might not be too much to wonder whether the end is nigh for the very concept of exposure. That’s a pretty big claim, given it’s often been said that the first responsibility of a cinematographer is to expose properly, and if some inconvenient bit of technology is threatening to come along and assume that responsibility, it’s as well to know about it as early as possible so that everyone can take out unemployment insurance.

OK, that’s hyperbole, but compare what happened to sound. With 32-bit digital audio, the amount of headroom available is so stunningly vast that being a sound recordist on a major motion picture has practically become a work-from-home occupation (for the sake of truth in journalism, this is not true, but irritating soundies is a big part of what keeps the camera department entertained between setups, so we’ll proceed on that basis).

It’s easy to overlook just how enormous 32 bits of dynamic range is. For a long time, 16 bits was considered adequate, and 24 truly pro-grade. Just to make it very mathematically clear, 32 bits is not twice the dynamic range of 16. Every time we add a bit, we double the range of values (we can have all the values we had before with the new bit off, and all the values we had before with the new bit on). 16 bits can represent 65,536 values. 32 bits can represent… well, it’s up in the four point two billion range. 32 bits is 65,536 times the dynamic range of 16 bits, which is really a whole bunch of dynamic range.
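For anyone who wants to check the arithmetic, a couple of lines of Python make the scale obvious. This is a back-of-the-envelope sketch which treats each extra bit as a straight doubling and ignores how real converters actually spend those bits.

```python
values_16 = 2 ** 16   # 65,536 distinct values
values_24 = 2 ** 24   # 16,777,216
values_32 = 2 ** 32   # 4,294,967,296 - the "four point two billion range"

# 32 bits offers 2^(32-16) = 65,536 times as many values as 16 bits.
print(values_32 // values_16)
```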

Louder recordings

That’s enough to record Brian Blessed at his liveliest, and it’s made things like digital radio mics a lot more practical. For the sake of comparison, let’s say we have a competitive modern cinema camera – capable of fourteen stops, probably. That inevitably involves lots of clever mathematics around noise reduction and colour processing, but it’s very useful. The thing is, 21 stops is seven more than that. Seven stops is only the difference between f/2.0 and f/22 on an aperture ring.

Canon Sumire Prime 35mm lens mounted on a Canon C700 camera
Iris rings? In the brave new world, who needs ’em.

There are cameras out there which are diffraction limited above about f/4, but give or take a few NDs for indoors or outdoors, two to eleven is a bigger range of stops than most people actually use, on most productions. Let’s be clear, this is a thought experiment based on a press release claim. Still, a 21-stop camera creates a situation where proper exposure might, at best, require two positions – one marked “lots,” and another marked “less than that.”
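Expressed as contrast ratios rather than stops, the same thought experiment looks like this. It is nothing more than arithmetic on the press-release numbers:

```python
import math

fourteen_stops = 2 ** 14      # roughly 16,000:1
twenty_one_stops = 2 ** 21    # roughly 2,100,000:1

# The seven extra stops multiply the usable brightness range by 2^7 = 128.
print(twenty_one_stops / fourteen_stops)

# Expressed as f-numbers, a seven-stop spread is a ratio of sqrt(2^7), about 11.3,
# between apertures - roughly the distance from f/1.4 to f/16.
print(math.sqrt(2 ** 7))
```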

Of course, that implies we’re willing to use up all of that dynamic range simply to avoid the inconvenience of glancing at a meter and twisting a ring on a lens. Invariably, we won’t do that, if only because there are quite a lot of reasons to set a stop on a lens other than making the picture brighter or darker. Assuming camera technology reaches the same sort of post-scarcity status audio has, we’re likely to use that flexibility to decide exactly how fuzzy we’d like our carefully-curated classic glass to look, or how deep we’d like the focus puller’s frown lines to become.

Traffic shot in extreme defocus to the point where all we can see are circles of defocussed light.
If current trends continue, this is more or less what cinematography will look like. Who says we don’t need f/0.5?

Bigger recordings

There are all kinds of practical problems with this, not least of which is how we might store such enormously capable material. We regularly shoot HDR material in ten-bit when it should really, mathematically, be represented in twelve. Clever things can be done with brightness encoding, although the PQ curve used for current HDR was designed to do about as much as can be done. In theory, it handles brightness levels up to ten thousand nits, which is likely to remain fantastical for monitors but actually isn’t very bright compared to – say – the noonday sun. Whatever happens, we can add more bits, and more flash storage, and bigger editing workstations… and then, of course, someone has to come up with the piece of silicon that can actually capture that much light.
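For the curious, the PQ curve itself is publicly specified as SMPTE ST 2084. A minimal sketch of the encode side shows where familiar brightness levels land and why ten thousand nits is the ceiling; the constants are the published ones, while the ten-bit scaling is purely for illustration.

```python
def pq_encode(nits: float) -> float:
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits to a 0..1 signal."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    y = min(max(nits, 0.0), 10000.0) / 10000.0   # the curve tops out at 10,000 nits
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

# Where some familiar levels fall on a ten-bit scale (0-1023):
for nits in (0.005, 100, 1000, 10000):
    print(nits, round(pq_encode(nits) * 1023))
```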

It’s easy to take comfort in the idea that all these ideas are really quite a lot of R&D away from reality, although that’s a dangerous assumption in 2024. Claim warp drive is a long way off, and some chatbot on its night off will probably pop up with a pithy response containing comprehensive schematics. After all, there was a time when audio people might also have scoffed at the idea that level control might become a paperwork-only issue.

 

A cheat’s manifesto part 5: Disco inferno
https://www.provideocoalition.com/a-cheats-manifesto-part-5-disco-inferno/ (Mon, 07 Oct 2024)

A mirrorball suspended against stars.
What do you mean, it’s a disco ball? Aliens had rotating mirror rigs! By Pexels user Neosiam.

In that heady moment before the slate closes, some people have sometimes felt an undeniable sense of spiritual responsibility to a now century-long legacy of cinematographers. For some people, that feeling leads to a life of sober dedication to the craft, post-nominal letters and little gold statues. For others, it provokes nothing more than a desperate hope that a pocket-money budget will remain safely concealed behind an impenetrable wall of flashing lights.

Our irregular series on gleeful cinematographic dishonesty has examined a few approaches to this sort of thing. Like a wary parent approaching a scowling baby with an unappealing spoonful of broccoli puree, we’ve found ways to distract the audience from the indigestible reality of underfunded production design and cheap locations with tasty morsels of mist, backlight, blur and flare. All of these techniques rely on obscuring the less-fortunate parts of the frame in order to allow a willing viewer’s brain to fill in the gaps. The problem is that quite often there’s an unavoidable need to centre a subject in frame, focus sharply, and occasionally point a light at it.

Our keyword today is occasionally.

Flashing lights

Multiple beams of light focussed on a DJ at a nightclub.
Yes, lights intended for clubs, theatres and other live event work are often very useful. Just watch out for potentially flickery metal halide ballasts. By Pexels user Jacob Morch

The prototypical example of this is Cameron’s Aliens, in which a fight between a woman in a robot suit and a slavering monster risked coming off as comical had it been handled even fractionally less well. That’s hardly an underfunded production, but it was made with comparatively primitive technology. Many things make it work, not least a fantastic robot suit, a tanker truck loaded with methyl cellulose slime, and an Oscar-nominated performance. A large contributor, however, is that it’s all rendered much more acceptable by the sweeping beams of light which drag the eye away from the less-ideal aspects of the image.

Looked at dispassionately, the enormous revolving mirror rigs which motivate this light are barely believable, much as the film takes great care to establish them in previous scenes. Each is an assembly the size of a large trash can and looks to have an HMI in the kilowatt-plus range concealed beneath. They’re half the size of a coastal lighthouse’s lantern, but Cameron depicts them in use aboard a spacecraft that looks to be less than a couple of hundred feet across. A conventional rotating beacon would have been fine. It makes about as much sense as a walking forklift.

Ill-advised liaisons

Of course, the point here isn’t to prevent people falling out of the airlock, it’s to distract us from the fact that the creature is a combination of 80s practical effects at various scales, and that a walking forklift probably wouldn’t be that great at fisticuffs anyway. Either way, the result is something considerably better than many modern attempts at comparable scenes. This sort of thing is the stock in trade of music videos (and nightclubs looking to provoke ill-advised romantic liaisons). Similar things were done with the rotating beacons in the original Alien, the final fight scene in the first Resident Evil, and almost any other situation in which a passing train illuminates a scene.

If we broaden the scope slightly to include other kinds of flashing light, then the trick goes back at least as far as the birth of a famous monster at the dawn of sound, and comes to us via more or less every subsequent horror or sci-fi movie which can justify a lightning storm, malfunctioning electronic device or deranged necromancer. The 2002 film Equilibrium, which succeeds hugely in that it is much slicker and much glossier than it has any right to be for $20 million, builds an entire fight scene from single frames illuminated solely by muzzle flash.

A car in a misty forest at night with visible headlight beams.
Don’t feel like you have to drive the car around, just get a couple of production assistants with mirrors. By Pexels user Faruk Tokluoğlu

Moving lights

The idea of moving light, though, is uniquely powerful because it can leverage the reality of three-dimensional space simply by being panned through it, on a stand. Usually, if we want to see depth, we have to move the camera or the subject sideways, and that means large numbers of expensive people carrying large amounts of expensive equipment from truck to location. A static camera can watch light sweep through a location without either of them doing anything more than pan, and if we do want to get clever, it’s as easy to roll a light along on a dolly as it is a camera. People have regularly put lights on cranes to simulate the relative motion of the sun around a fighter jet, and lighting drones are increasingly common. Has anyone put a light on a gimbal yet? And then there’s time lapse, which can be shot by anyone with a third-hand DSLR and tends to feature sweeping shadows as the sun races across the sky.

This sort of trickery isn’t solely about obscuring regrettable things by not pointing lights at them. It’s also about the distraction of moving shadows and the option to actually animate chiaroscuro, something that could hardly be more appropriate to a moving-image artform that still takes major clues from the old masters. Or, at least, that’s what we’ll say if anyone asks why every scene in a police procedural reveals crucial clues in the sweep of a car’s headlights.

Space to grow
https://www.provideocoalition.com/space-to-grow/ (Mon, 23 Sep 2024)

There’s a reason low-rent independent short films often lack dynamic staging, feeling static and limited in their use of locations, and it’s not lack of space – or at least not lack of space in front of the camera.

It’s hard, sometimes, not to sink into a state of curmudgeonly finger-waggling any time someone more than fifteen years younger than oneself scoffingly claims that a given scene can be lit with pocket-sized lights running on battery power. It’s particularly difficult to take when they’re right; after all, cameras are now routinely capable of shooting at thousands of ISO with acceptable noise, and lighting with better-than-tungsten efficiency is much more affordable than small HMIs ever were. That hundred-watt open-faced LED still can’t light that whole building, you young whippersnapper, but taking everything into account it’s quite possible that the world really is effectively four stops brighter than it was around the turn of the century.

Woman standing in front of a hanging textile light with integrated white LED emitters.
This sort of thing makes for faster setups and easier moves, but it still has to be a certain minimum distance from the scene to control falloff over distance, and that requires space.

Less power, less gear. Same amount of space overall.

The amount of power and gear required has collapsed massively, and we don’t really have to take any particularly unusual measures to achieve it. What hasn’t collapsed is the size of the setup required to create any given scene, because humans are just as big as they ever were and the inverse square law works just the same way it ever did. This means that certain things still can’t be done with an amount of gear we can happily backpack on the subway, and that’ll remain the case regardless of how fast cameras become.

If this seems completely elementary, consider this example, provoked by a real world situation which must, sadly, remain anonymous, directed by someone whose experience began and ended with lights you can pick up with one hand.

Let’s say we’re covering the first half of a dinner conversation, at night. Rake the background with a hard steel green, because night is green this week, and position the big, slightly warm, very soft key just out of frame. Assuming we’ve put a suitably symmetrical-looking actor in an appropriate location, it starts looking like a movie even before we’ve set the backlight. This is the sort of thing that people tend to figure out fairly quickly, and it takes advantage of all the new technologies we’ve been discussing so far. Between f/1.2 lenses and 3200 ISO cameras, we can light that with a mating pair of glow-worms.
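The glow-worm claim survives a quick sanity check. Using the standard incident-meter exposure equation, and assuming the common calibration constant of about 250 lux-seconds (it varies slightly between meters), the illuminance needed on the subject is modest indeed:

```python
def required_lux(f_number: float, shutter_s: float, iso: float, c: float = 250.0) -> float:
    """Incident-light exposure equation: E = C * N^2 / (t * S)."""
    return c * f_number ** 2 / (shutter_s * iso)

# 24 fps with a 180-degree shutter is roughly a 1/48 s exposure.
print(required_lux(f_number=1.2, shutter_s=1 / 48, iso=3200))   # about 5.4 lux on the subject
```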

Close up of an LED video display panel, showing individual red, green and blue subpixels
Even when some of the space we’re showing is part of a video display, it has to be a certain minimum distance away – both so the light falloff is reasonably well controlled, and so it doesn’t look like, well, this.

Sit very still, please.

But anyone can light a closeup. Let’s say we’re interested in showing the talent getting up from the table, though; in that case, things change. If the diffusion is really close, even leaning away from it is likely to betray how nearby it is as exposure plummets precipitously. To fix that, the diffusion has to get bigger and move further back. Turn this into a scene in which our heroes get up from the table and continue their conversation as they leave the building, and it has to get much bigger and move much, much further back.

We’re still free to leverage the huge sensitivity of the camera and the huge efficiency of the light, but we still need giant diffusion. And somewhere to put it, and big stands to put it on, and several equally large flags to control the monster we’ve now created, as it projectile-vomits photons all over the hemisphere. We need a group of people in jeans and check shirts to rig and de-rig it all, food for them to eat, and somewhere for them to park. Saving 75% on power consumption is nice, but it starts to seem positively trivial in the context of merely securing a location with elevators big enough to get everything where it needs to be.

Closeup of an LED textile light with the conductive traces and both tungsten and daylight emitters visible
Things like this are saving at least some space, and they’re tremendously convenient.

In the end, space outside the frame to set up is not a new problem, nor one from which anyone is completely immune. Let’s consider what was probably one of the largest single lights ever used in a feature film, the gargantuan 100kW SoftSun specially built for First Man. Lighting night exteriors – which is not quite what was being done for the lunar surface scenes, but give me some rope here – has always been tricky. No matter how large a light we use, it will always fall off to darkness in the end. The key issue isn’t the distance we want the actors to move; it’s the ratio between that distance, and the distance between the actors and the light. Lights that size exist to maximise the actor-to-light distance and thus minimise visible falloff. The power consumption is one downside. The other is the construction crane and quarter mile of space, which doesn’t change even if we find a magical way to build a SoftSun that can be powered by a hamster in a wheel.
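The ratio argument is easy to put numbers on. A quick inverse-square sketch compares a small source a couple of metres away with a big one a hundred metres away, for an actor who walks three metres:

```python
import math

def falloff_stops(light_distance_m: float, move_m: float) -> float:
    """Exposure change, in stops, when a subject moves move_m metres further
    from a point-ish source that starts light_distance_m away."""
    ratio = ((light_distance_m + move_m) / light_distance_m) ** 2
    return math.log2(ratio)

print(falloff_stops(2, 3))     # close soft key: about 2.6 stops darker
print(falloff_stops(100, 3))   # distant monster light: under a tenth of a stop
```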

That’s not to say modern technology hasn’t given us anything that lets us shrink setups. The flexible light panels we can now rig into diffusion frames are unimaginably more efficient than firing a fresnel into a piece of fabric and hoping some of the photons make it out the other side. They’re also vastly faster, easier and lighter on gear and people. But unfortunately, if we want someone to be able to move around a scene without the exposure varying in a way that only makes sense if the sun is really nearby, a certain amount of size and space is always going to be involved.

Lens good, crop factor bad
https://www.provideocoalition.com/lens-good-crop-factor-bad/ (Mon, 09 Sep 2024)

Closeup of the large sensor on a Fujifilm GFX 100 medium format camera
Fujifilm’s GFX 100. Crop factors under one are possible, but nobody seems to use them, for some reason.

 

Let’s imagine what would happen if you walked into this coffee shop* and asked for a latte that was one point six times larger than a mug of mint tea you bought yesterday from a competing establishment. Even if you were a daily regular, the person behind the counter would be forgiven for frowning slightly, not necessarily because the answer is unclear – you’re a regular, remember – but because it’s such an abstruse way to ask for a venti.

Unfortunately, that’s what we’re doing when we use crop factors in discussion of focal lengths.

Crop factors we have known

The whole thing became popular when Kodak, Agfa, Konica and Fujifilm launched the Advanced Photo System in 1996. It wasn’t a terrible idea – an easier-to-handle, cassette-based film system which even supported some metadata – but it was really too late, quickly being eclipsed by then-emergent digital photography, and discontinued by the mid-2000s. It also seriously confused stills people by altering the field of view rendered by a lens of a particular focal length.

Most people are aware that a system of multiplication factors became popular so that people could quickly estimate what lens to use on an APS camera to create the same results they’d have enjoyed on 35mm. This holds into the modern world, where an APS-C digital stills camera, based broadly on the APS layout, renders an image with a field of view equivalent to that rendered by a lens 1.6 times longer on 35mm. A 50mm lens on an APS-C camera gives us the same view as an 80mm lens would on a 35mm camera – 1.6 times longer.

The mathematics works, and not only for stills. APS-C is very roughly the same sort of size as a Super-35mm film frame (APS-C is 16.7mm tall versus 18.66mm for Super-35mm, though real world cameras which claim those sizes vary somewhat) and thus the multiplication is very roughly similar. The problem is that the multiplication factor we’re using to discuss focal lengths is based on a still photography format that has little or nothing to do with most motion picture equipment and isn’t nearly as familiar to most people as the behaviour of Super-35mm equipment in the first place.
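For completeness, here is the sum everyone is being asked to do in their heads, which is rather the point. The function below is purely illustrative, and the sensor widths are nominal; real chips vary by manufacturer.

```python
def equivalent_focal_length(focal_mm: float, sensor_width_mm: float,
                            reference_width_mm: float = 36.0) -> float:
    """Focal length giving the same horizontal field of view on a reference
    format (full-frame stills, 36mm wide, by default)."""
    crop_factor = reference_width_mm / sensor_width_mm
    return focal_mm * crop_factor

# A 50mm lens on a nominal APS-C stills sensor around 22.5mm wide:
print(equivalent_focal_length(50, 22.5))   # 80mm - the familiar 1.6x
```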

Lens mount on the back of a Canon 24-105mm F4-7.1 IS STM lens.
An image of a fixed size comes out of this aperture. Bigger sensors see more of it. That’s all.

Cropped close

This doesn’t happen a lot on set. Anyone asking for the eighty-divided-by-one-point-six would rate a very confused look from the second assistant. Instead, this sort of terminology is a malady of internet forums and social media, and it leaks into filmmaking by way of recent graduates with more enthusiasm than experience. It’s not even very smart in the stills world, given that quality stills cameras are available in at least four broad categories of sensor size, and many modern photographers grow up on inexpensive APS-C cameras. In situations where experience on full-frame cameras in either the stills or moving-picture world (Alexa LF and the like notwithstanding) may be rare, using full-frame as a field of view reference is practically a cruelty to the beginner.

The final irony of this is that both the motion picture and stills worlds have described 50mm lenses as “normal,” despite the huge difference in framing between full-frame stills and typical motion picture cameras using such a lens. The idea here certainly isn’t that the 50mm lens matches human vision, which has a massively wide overall angle of view and a real world focal length in the single digit millimetres. Rather, on a monitor or cinema screen, the 50 doesn’t show any of the perspective compression of long focal lengths or the yawning perspective of very short focal lengths.

But if it’s a normal lens on both APS-C and full frame, we’re being erroneously informed that it’s a normal lens on vastly different sensors. That’s balderdash, and goes some way to suggest just how much imprecision there is in this sort of focal-length philosophising.
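If we must do mathematics, the more useful sum is the actual field of view, which needs nothing more than the focal length and the width of the chip actually in use; no reference format required. A sketch, using nominal sensor widths:

```python
import math

def horizontal_fov_degrees(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens focused at infinity:
    2 * atan(sensor width / (2 * focal length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# The same 50mm lens on different chips:
print(horizontal_fov_degrees(50, 36.0))   # full-frame stills: about 40 degrees
print(horizontal_fov_degrees(50, 24.9))   # Super-35-ish: about 28 degrees
```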

Canon EOS R camera with lens removed, revealing the sensor
Crop factor one.

In the end, the solution is for people – generally new entrants, for whom we probably all owe some sympathy at this slow time – to actually learn the equipment, so they can sensibly discuss focal length in the knowledge of what the camera is and how it will behave without resorting to middle-school mathematics. Otherwise, what we’re doing is to decide what lens we’d use on a full frame camera (as if we’d used one recently), remember the crop factor as it applies to the camera in use, do the multiplication, and pick the nearest number out of the lens case. That’s insane.

So, let’s ensure there’s at least a virtual swear box everywhere there can possibly be one, and consign the phrase “crop factor” to the pit of ignominy in which it belongs.

* Your correspondent outlined this article in Jimmy’s Coffee on McCall Street in Toronto after having spent nearly forty-five minutes trying to find a coffee shop that wasn’t Tim Horton’s. No offence to Tim, but Jimmy’s oatmeal chocolate chip cookies were well worth the investment of shoe leather.

Bayer best?
https://www.provideocoalition.com/bayer-best/ (Mon, 26 Aug 2024)

Sensor of a Blackmagic Ursa Mini camera, showing spectral colour effects from its tiny features
This is a Blackmagic Ursa Mini’s sensor. The colour is not from the filter array, it’s from interference between the wavelength of the light and the fine pitch of features on the chip

Building colour images from red, green and blue is probably one of the most fundamental concepts of film and TV technology. Most people move quickly on to the slightly more awkward question of why there are three components, and are often told that it’s because we have eyes with red, green and blue-sensitive structures, and we’ve often built cameras which duplicate that approach.

The reason we’re discussing this is, in part, because of an interesting design from Image Algorithmics, which has proposed a sensor which it describes as “engineered like the retina.” That could refer to a lot of things, but here it refers to IA’s choice of filters it calls “long,” “medium” and “short,” diverging from Bryce Bayer’s RGB filters. The design is interesting, because that’s the terminology often used to describe how human eyes really work in practice. There isn’t really red, green and blue; there’s yellowish, greenish and blueish, and none of them are deep colours.

Human colour

A quick glance at the data makes it clear just how unsaturated those sensitivities really are in human eyes. It’d be easy to assume that humans might struggle to see saturated colours in general, and red in particular. The sensitivity curves are so wide that light of that colour might just look like a pale green and an equally powdery and faded yellow, and the yellow and green overlap enormously. In practice, the human visual system detects red by (in effect) subtracting the green from the yellow, a biological implementation of the matrix operations we see in some electronic cameras.
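Here is a toy illustration of that subtraction, with entirely made-up numbers; this is not real colour science, and the matrix is invented for the purpose. The structure, a matrix containing negative coefficients, is the point.

```python
import numpy as np

# Invented, illustrative responses from three broad, overlapping filters
# ("long", "medium", "short") looking at a saturated red patch. With filters
# this pale, nothing in the raw numbers looks obviously red.
long_medium_short = np.array([0.80, 0.55, 0.05])

# An opponent-style matrix of the sort cameras apply: note the negative
# coefficients - "red" is recovered largely as long-minus-medium.
matrix = np.array([
    [ 2.4, -1.6,  0.0],   # R
    [-1.0,  2.0, -0.2],   # G
    [ 0.1, -0.5,  1.5],   # B
])

rgb = np.clip(matrix @ long_medium_short, 0.0, None)
print(rgb)   # the red channel now dominates
```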

When Bayer was doing his work in the 1970s, it might have been possible to build a sensor with long, medium and short-wavelength sensitive filters that match the human eye. What might have been trickier would have been the resulting need for compact and power-frugal electronics capable of turning the output of such a sensor into a usable image. So, Bayer took the direct route, with red, green and blue filters which nicely complemented the red, green and blue output of display devices. Modern Bayer cameras use complex processing, but early examples were often fairly straightforward and mostly worked reasonably well.

With modern processing it works even better, so the question might be what Image Algorithmics expects to gain from the new filter array. The simple answer is that less saturated filters pass more light, potentially enhancing noise, sensitivity, dynamic range, or some combination thereof. Image Algorithmics proposes a sensor with 50% yellow, 37.5% green, and 12.5% blue subpixels, which approximates the scattering of long, medium and short-sensitive cone cells across the human retina.

Image compares the random scatter of colour-sensitive cells in the human retina to the diagonally ordered pattern in Image Algorithmics' filter array design
Image Algorithmics’ colour filter array comparing conventional Bayer filters and the configuration of the eye. Drawing courtesy the company

Existing ideas

This is not entirely new; Sony used an emerald-sensitive pixel (which sort of looks cyan in most schematics) on the Cyber-shot DSC-F828 as early as 2003, while Kodak used cyan, magenta and yellow filters in the Nikon F5-based DCS 620x and 720x around the turn of the millennium. Huawei has made cameras in which the green element of a Bayer matrix is replaced with yellow. The Blackmagic Ursa Mini 12K uses a sensor with red, green, blue and unfiltered photosites, presumably yielding gains which are very relevant to such a densely-packed sensor.

Other approaches have also been explored. Kodak’s cyan, magenta and yellow sensor, using secondary colours, allows fully double the light through the filter layers, though the mathematical processing required often means turning up the saturation quite a bit, which can introduce noise of its own. The differing sensitivity of the sensor to cyan, magenta and yellow light can also offset some of the improvement. IA itself voices caution about Huawei’s red-blue-yellow design, which encounters some odd mathematical issues (which are a bit outside the scope of this article) around using red filters to approximate the human response to red light.
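To see why the saturation has to be turned up, and why noise comes along for the ride, consider the idealised relationship between secondaries and primaries. This is a simplification which assumes perfect complementary filters, which real dyes are not:

```python
import numpy as np

# Idealised complementary filters: cyan = G+B, magenta = R+B, yellow = R+G.
# Recovering the primaries therefore means sums and differences of samples,
# e.g. R = (M + Y - C) / 2.
cmy_to_rgb = 0.5 * np.array([
    [-1,  1,  1],   # R
    [ 1, -1,  1],   # G
    [ 1,  1, -1],   # B
])

cmy_sample = np.array([0.7, 0.9, 0.8])   # some measured (and therefore noisy) values
print(cmy_to_rgb @ cmy_sample)           # [0.5, 0.3, 0.4]

# Every recovered channel mixes noise from all three photosites, which is
# part of the trade-off described above.
```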

Scarves of various bright colours knotted around a rail, hanging in a row
Dyes are a common source of – er – interesting results in digital imaging, since the very deep colours, which don’t exist in nature, can end up reflecting a strange spectrum of light.

The inevitable compromise

Suffice to say that in general, no matter what combination of colours is used, there will be a choice to make, often between brightness noise, colour noise, or sensitivity and dynamic range. For complicated reasons, colour noise is easier to fix than brightness noise, and it’s mainly that idea which has led IA to the green-blue-yellow layout it favours here.

The company suggests that the design should achieve a 4.25dB signal to noise ratio advantage over RGB “in bright light,” and perhaps a bit more than that in lower light. That may not seem astounding, although the company promises us a similar improvement in dynamic range, with a total improvement of more than a stop. Encouraging as that is, we should be clear that this is an idea, without even a demonstration sensor having been made, and it’s clearly some time from a film set near you.
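For scale, and assuming the usual 20·log10 convention for sensor signal-to-noise figures, that 4.25dB works out as follows; treat this as a rough conversion rather than anything the company has published:

```python
import math

def db_to_amplitude_ratio(db: float) -> float:
    return 10 ** (db / 20)   # the usual convention for sensor SNR figures

advantage = db_to_amplitude_ratio(4.25)
print(advantage)                 # about 1.6 times the signal-to-noise ratio
print(math.log2(advantage))      # equivalent to roughly 0.7 of a stop
```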

What really matters is not this particular design; alternative filter arrays have been tried before. Given the overwhelming majority of cameras still use Bayer sensors, we might reasonably conclude that the results of previous experiments have not been overwhelmingly positive. Cinematographers are also a cautious bunch, and anyone proposing anything as (comparatively) outlandish as an LMS sensor might need a strategy to handle the caution as well as the technology itself – but if some sort of alternative to Bayer can be made to work, then it’s hard to object.

Your Newbie And You
https://www.provideocoalition.com/your-newbie-and-you/ (Mon, 12 Aug 2024)

Two men and a woman working as part of a film crew, with one man watching a monitor.
Somehow, people like this are supposed to spring into existence without anyone doing much to ensure that happens.

 

A long time ago, when camera assistants frolicked carefree in a world mercifully bereft of large-format cameras, your narrator recruited a friendly focus puller for a two-day project. In the days before Netflix had booked all the world’s film crew in perpetuity, it was possible to hire day players without eBaying the kids, and it is again now, although the most expensive aspect of most film shoots is still the warm bodies. As such, it was slightly surprising when, mere days before the show, said focus puller attempted to sell the production on an extra member of crew – a young newcomer with minimal credit history but, we were assured, a great attitude.

In the context of an almost-complete preproduction, this felt like reaching the end of a long, disgruntled line for the register and being upsold on today’s special offers before being allowed to pay for a chocolate bar. The production was of very limited scope, and the extra pair of hands felt unnecessary and expensive. Possibly this person would be bored. Possibly someone had the wrong idea about the nature of the show. Most offputting of all was the first assistant’s insistence. Was it really his business to recruit additional people? Well, perhaps, but this was new. It was quite some time – an embarrassing amount of time, looking back – before the truth slammed, appropriately, into focus.

This is what a film industry apprenticeship looks like.

The self-interest of self-employment

We’ve discussed the bleak prospects of new entrant crew in the past, but the fact that it’s even possible to make this sort of mistake should make us consider not only the currently-urgent need for new people, but also how they’re treated. People running regular businesses – rental houses, in this context – will be keenly aware that full-time juniors are thin on the ground, and the idea of overlooking any opportunity to take on reliable new people will seem foreign. On set, though, when filling slots is someone else’s problem and bringing in new people is a job of work nobody is formally required to do, inexperienced people can instinctively seem likely to get underfoot.

As long as recruitment is nobody’s problem, it’s everybody’s problem, but our thesis here is not that every production must take every opportunity to take on new people. One of the trickier aspects is the responsibility of the trainer to ensure that the experience offered is useful to the trainee, especially given low trainee pay. When there’s so little money, there must be something more than the money, especially given the beginner is likely to feel under pressure to accept any job. The whole situation can lead to inexperienced people being abused as underpaid baristas on jobs that might not teach anyone much.

A nicely-made cup of coffee with pretty patterns in the foam.
If your new trainee can already do this, great. If the ability to do this develops while on the job, however, consider whether you’re actually teaching that person anything.

Even with the best intentions, entry-level productions of the kind which provoked this situation often won’t be organised and run on the same basis as a full-scale convention of the white truck owners’ association. On the cheapest shows, the ability to pick up non-technical but nonetheless crucial bits of setiquette might be less than we’d hope. Everyone has to start somewhere, and practicality dictates that most people won’t start on a Nolan megashow, but making a quick comparison of someone’s level of experience and stated goals to the nature of the production is nothing more than due diligence.

A matter of conscience

All of this should hover in the conscience of anyone who’s recruited cheap, inexperienced labour at a rate that’s only justifiable if the experience is actually useful. Because of exactly that issue, creating certified apprenticeships is a regulated process in many parts of the world, and it’s often easier to just take someone on at the low end of conventional employment. The wages are barely higher, and workers are allowed to do more than apprentices, but the moral responsibility to make the experience worthwhile remains in either case.

A young woman on public transport at night
Your trainees probably live further from set than you do. Consider this, especially if they’re already there when you arrive, and when you leave.

Balance the need for crew training with the interests of both the production and the trainee, and it’s certainly possible to take on a youngster without being remembered as a villain. It doesn’t take long for the shine to wear off the gold-paved streets of tinseltown in the small hours of a night shoot as the frost settles on the magliners, especially once we discover that lunch might be a lukewarm Big Mac anyway.

It’s long been fashionable to lay all of this at the feet of producers, who have often faced criticism on the basis of their expectation that crew will perpetually be available without anyone actually putting any work into ensuring that’s the case. True as that is, the people with the contacts to actually bring new players to the game are often not the producers themselves, and we should not overlook hints dropped by people who might be making a good-faith effort to ensure that newcomers get a fair shot.

Bayer chip best
https://www.provideocoalition.com/bayer-chip-best/ (Mon, 29 Jul 2024)

Stained glass window including red, green, blue and uncoloured panes.
“Tribute to Britecell,” as the artist didn’t call it. By Pexels user Igor Starkov.

One of the cutest things about subpixels on camera sensors is the way manufacturers hire teams of magical elves to put the colour filter array on the front, because only elves have small enough fingers to handle such tiny pieces of red, green and blue stained glass. A big advantage of this approach is that the elves are generally allowed, under union rules, to put almost any combination of colours you want on there. People have regularly done so, and advertised the result using the sort of trademarked terminology that’s been approved by a focus group.

There are a number of reasons for doing this. Sony put light-green pixels in the Cyber-Shot DSC-F828 in 2003 with the idea that it’d improve sensitivity (and thus potentially some combination of dynamic range and noise, too) as well as colour gamut. Similar technology was used in the Nikon Coolpix 8700 in 2006, among many others. Samsung calls its white-pixel technology Britecell, and Fujifilm’s current X series cameras use the X-Trans layout, which includes a much larger preponderance of green elements. It’s not the first time the company has done something like that; its Super CCD technology used octagonal subpixels in essentially a 45-degree rotated grid pattern. A related approach was taken for the sensor used in the Sony F65, presumably in pursuit of less aliasing, and the F35 (and the essentially equivalent Panavision Genesis) used vertical RGB stripes.

Silicon wafer under the yellow light of a semiconductor manufacturing plant with rows of chips exposed.
Sensors in the factory; image courtesy David Gilbom of Alternative Vision Corporation. As technology improves, subpixel arrangement may become less relevant.

Not universal

So, the idea that Bayer sensors are universally adopted certainly isn’t true. If there’s a problem with all this, that 45-degree rotated sensor approach is talismanic of it. Yes, that avoids having vertical and horizontal rows of subpixels that will come dangerously close to lining up with all the vertical and horizontal edges which exist in human-made worlds, and thus potentially reduces aliasing. Of course, that does mean it’ll deal less well with edges which happen to be near 45 degrees to the vertical, such as chain link fences and the tombs of ancient Egyptians, though there’s probably a serviceable argument that more sharp details in the average picture are horizontal or vertical than they are diagonal.

Different colour layouts suffer similar issues. Even within the limits of a conventional Bayer filter layout, manufacturers are free to choose exactly which red, which green, and which blue is used. Denser filters potentially let the camera see colour more accurately, but absorb more light. Paler filters provide some combination of reduced noise, higher sensitivity or increased dynamic range, probably at the cost of colour precision. Adding filters of other colours is subject to many of the same engineering compromises. As evidenced by the continuing popularity of Bryce Bayer’s original design, there’s some argument that the situation represents more or less a zero-sum game; the upshot of all this has often been resounding indifference.

Like a lot of things in photography, this stuff works in both directions. We’re used to monitors having combinations of red, green and blue subpixels just like a camera (in fact, invariably in columns, just like an F35). Keen-eyed visitors to the part of this year’s NAB show that deals with interesting new ideas might have noticed at least one multi-primary display technology being demonstrated using a single LED video wall panel incorporating cyan emitters. Issues over colour precision that emerge when using unfiltered (that is, white) subpixels on a camera sensor are vaguely analogous to the issues that emerge when we use white subpixels on a display, although in a camera that’s apparently a good thing. The requirements are slightly different when we’re trying to persuade the human visual system to behave as we’d prefer, though some of the same considerations apply.

View of a Canon C700 from the front without lens, showing the full-frame sensor.
Size conquers all. Especially the focus puller. Image by the author.

Subpixels less relevant

In the end, there’s some reason to believe that all of this could become less and less impactful as fundamental improvements to sensor technology keep on coming; as the sum underlying that zero-sum game becomes larger. We can have 14-stop, 4K, Super-35mm cameras, as Arri has shown. Even more illuminating are the 12K Super-35mm sensors which Blackmagic had engineered for its 12K Ursa. Those photosites are so small that the sensor isn’t quite aiming to work in the same way as all those other sensors. That sort of configuration is diffraction limited at more than about an f/4-5.6 split, and probably resolution limited by at least some lenses in any event. The intent is not that every photosite is valuable resolution-wise. The intent is that a group of several forms a composite colour pixel.
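The numbers are easy to sanity-check. Assuming a Super-35-ish 25mm sensor width, roughly 12,288 photosites across it, and green light, the photosite pitch and the diffraction blur spot compare like this; a rough sketch, not Blackmagic’s published figures:

```python
wavelength_um = 0.55            # green light
sensor_width_mm = 25.0          # roughly Super-35
photosites_across = 12_288      # a "12K" sensor

pitch_um = sensor_width_mm * 1000 / photosites_across
airy_diameter_um = 2.44 * wavelength_um * 5.6    # Airy disc diameter at f/5.6

print(round(pitch_um, 2))           # about 2 microns per photosite
print(round(airy_diameter_um, 2))   # about 7.5 microns of diffraction blur
```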

If the intent was to build a really good 4K camera, and if this represents the future, well, great. All of this is likely to become a less and less relevant consideration as time goes on, in much the same way that sheer resolution has gone the same way. Whether that’ll discourage anyone from patenting another configuration of red, green, blue, and octarine remains to be seen.

How things could work better
https://www.provideocoalition.com/how-things-could-work-better/ (Mon, 15 Jul 2024)

An old-style cathode ray tube TV, covered in spider webs. Workflow for this is straightforward.
Ah, the good old days. No colour, but equally, very little potential for two different people to disagree on what “red” means. Image by Pexels user Rene Asmussen.

This topic was suggested by a conversation with a prominent member of a national cinematography society whose rushes workflow had created results which were, well, very much not as they should have been, provoking awkward politics with principal cast and the production company. The cinematographer and the production must remain nameless, but the problem is something we absolutely can discuss – and probably should discuss more often.

Most of the ways we get pictures from set to camera and through post are good examples of a process that wouldn’t work the way it does, if we designed it from scratch right now. It’s not too much to call the current blizzard of snowflake workflows a storm of kludges. They’ve piled on top of one another during the last twenty years of haphazard improvisation and case-by-case problem solving, ultimately leading to a teetering edifice of technologies and techniques that’s often inconvenient, sometimes downright dangerous, and almost always limits our access to nice things.

Colour and brightness encoding issues, which likely caused the rushes issue which provoked this article, are almost tired examples. The digital transition was not met with universal approbation to begin with, and the idea that digital cinematography would involve several incompatible colour and brightness standards per camera might have put even more people off. The idea that those standards would not be automatically encoded in camera original files, and left up to the technical interpretation of someone else, months down the line, is as absurd as it is common. Creatively, anyone using a show LUT is, in effect, re-engineering the colorimetry of the camera, and it’s shocking that such a fundamental decision might later be revised by other people depending how they feel about it.

Even in the best case, all this requires that the show LUT is designed competently and tested adequately, and in many cases neither of those things is a firm guarantee.
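A tiny illustration of how easily that interpretation goes wrong. The two transfer functions below are generic, made-up log curves, not any manufacturer’s actual formula, but the failure mode is the real one: decode a code value with the wrong curve and every brightness in the frame lands somewhere else.

```python
def decode_log_a(code: float) -> float:
    """Hypothetical log curve A: linear = (2^(10*code) - 1) / (2^10 - 1)."""
    return (2 ** (10 * code) - 1) / (2 ** 10 - 1)

def decode_log_b(code: float) -> float:
    """Hypothetical log curve B: linear = (2^(14*code) - 1) / (2^14 - 1)."""
    return (2 ** (14 * code) - 1) / (2 ** 14 - 1)

code_value = 0.5   # the same mid-grey-ish value from the camera file
print(decode_log_a(code_value))   # about 0.030 in linear light
print(decode_log_b(code_value))   # about 0.008 - roughly two stops darker
```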

Timeline of a nonlinear editing application
And every single one of those clips could have a different LUT on it, all of them wrong. Image by Pexels user Alex Fu.

Workflow anxiety

That’s bad. What’s worse is the sheer number of other, even more exciting ideas that we could make real, but can’t, because our haphazard workflows just won’t permit it. Some stills cameras correct digitally for chromatic aberration and barrel distortion in lenses, for instance. There’s nothing formally preventing cinema cameras from doing the same. With lens manufacturers sweating buckets trying to create lenses with ever more spectacular combinations of focal length, aperture and sharpness, the option to achieve even higher performance, particularly in a smaller, lighter, more cost-effective package, must have occurred to someone.
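None of this is exotic mathematics, either. The simplest barrel-distortion correction is a one-parameter radial model; the coefficient below is invented and real lens profiles carry rather more terms, but it is essentially the remapping stills cameras already apply without anyone noticing.

```python
def radial_correct(x: float, y: float, k1: float = -0.08):
    """One-term radial (Brown-Conrady-style) remapping. Coordinates are
    normalised so the image centre is (0, 0); k1 is purely illustrative."""
    r_squared = x * x + y * y
    scale = 1 + k1 * r_squared
    return x * scale, y * scale

# A point near the corner of frame moves in or out depending on the sign of k1;
# lateral chromatic aberration can be treated the same way, with a slightly
# different scale per colour channel.
print(radial_correct(0.9, 0.6))
```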

But we can’t have that, because it would take a very, very brave cinematographer to work in the confident expectation that either the camera or the post process would follow instructions. It’s a pretty shocking reality. A big manufacturer would not dare design a series of lenses which would be reliant on digital correction, no matter how advantageous the resulting price-performance ratio might be. That manufacturer could very reasonably expect that someone would simply use those lenses without the required correction, then complain about their behaviour (notwithstanding the fact that almost no matter what a lens manufacturer makes, regardless of its characteristics, someone, somewhere, will decide it’s a look.)

Similarly, no cinematographer dare shoot something requiring a gentle, painterly look with fast, sharp, high-performance modern lenses in the expectation of later simulating a more historic image, unless that cinematographer was expecting to post the show in the back bedroom and thus had total control. At risk of seeming to celebrate the obsolescence of someone’s beloved grandmother, it’s perfectly feasible for modern post production gear to simulate a huge range of subtle, complex, attractive optical aberrations which even fairly practised eyes might struggle to discern from even the most overpriced classic glass of 1970s stills photography. In 2024, doing that simulation is not a big deal. It’s barely even VFX. Given the amount of VFX-adjacent things currently done (on the quiet) in grading, lens effects are something we could reasonably fold into final colour process.

And this matters, because cinematographers are sometimes utterly desperate to stamp some sort of look on the current crop of digital cinema cameras, and that sometimes leads to serious concerns. Trying to shoot under lighting of the current zeitgeist (which is to say, not much lighting at all) with lenses designed in the 1970s has caused real world problems. That lens which looked wonderfully characterful at f/5.6 on the test stand might just look smudgy on an available light night exterior. People did not generally shoot available light night exteriors in the far-off decade when that lens was first released, and it wasn’t built to do that.

But, again, nobody would dare shoot one way on the assumption that post will make things look another way. The tools aren’t there (though they easily could be) and there aren’t even particularly well-defined ways to pass that information along, much as it’s been possible to embed ancillary data in computer video files for a lot longer than digital cinematography has been a mainstream force.

An old lens, type Helios-44M
Not everything this lens does can be replicated in post, though quite a lot of it can, if production and post can find a way to agree on exactly what’s to be done.

Universal solution?

The universal solution to all of this requires a lot of standardisation, and sadly, given the Jenga tower of software and hardware we’ve created, it’s hard to imagine the process that might lead to an agreement sufficiently universal to satisfy the nervous cinematographer that instructions are likely to be followed. Perhaps things will converge over decades, but it’s hard to imagine even a subset of these issues being solved in the short term. We live in an age of better communications than have ever existed, and we have access to equipment with far more than the required capability, but due to a vague combination of human factors whole rafts of potential (and actual) capability go unregarded.

It’s this, indirectly, which has pushed the cost of the 24mm f/1.4 Canon FD SSC lens into five figures, it’s this which leads to people’s rushes too often not looking as they should, and it’s this which prevents the film and TV production world from enjoying some of the potential benefits of modern digital imaging that amateur stills people increasingly take for granted. It’s traditional to end a piece like this with a reference to proposed fixes and hope for the future, but that’s difficult on this occasion. All of this is something that’s sort of become normalised, but it’s not great, is it?
