The "AI" versions of the movie stills are darker and "greener" or "bluer" in all cases in this article, which is NOT the case when you watch the movie. It's a mistake on the part of whoever put together the image comparisons.
The culprit here is that the non-AI screenshots are taken from presumably 1080p non-HDR sources, while all the AI screenshots are taken from presumably 4K HDR sources. The "AI" images are all displayed in the completely wrong color space -- the dark+green/blue is exactly what HDR content looks like when played on software that doesn't correctly support decoding and displaying HDR content.
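For anyone curious what "wrong color space" means mechanically, here's a toy numpy sketch (my own illustration, not whatever pipeline the article's screenshots went through). HDR discs use the PQ transfer function and BT.2020 primaries; hand those code values straight to an SDR/sRGB path and you get roughly this:

    import numpy as np

    # Toy sketch: PQ/BT.2020 (HDR) code values shown as if they were
    # gamma-2.2/BT.709 (SDR). Illustrative numbers only.

    def pq_encode(nits):
        # SMPTE ST 2084 inverse EOTF (absolute luminance in nits)
        m1, m2 = 0.1593017578125, 78.84375
        c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
        y = (nits / 10000.0) ** m1
        return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

    # A ~203-nit reference-white pixel, correctly graded for HDR...
    code = pq_encode(203.0)          # ~0.58
    # ...misread by an SDR player as a gamma-2.2 signal:
    print(round(code ** 2.2, 2))     # ~0.30 of SDR peak, i.e. noticeably dark

    # The color cast comes from the primaries: BT.2020 values need this
    # matrix (applied in linear light) to land on the right BT.709 colors,
    # so skipping the conversion shifts hues and flattens saturation.
    bt2020_to_bt709 = np.array([
        [ 1.6605, -0.5876, -0.0728],
        [-0.1246,  1.1329, -0.0083],
        [-0.0182, -0.1006,  1.1187],
    ])
    print(bt2020_to_bt709 @ np.array([1.0, 0.0, 0.0]))

Skip the transfer-function decode and the matrix, and every frame comes out dim and off-tint, which is exactly what the "AI" stills in the article look like.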
It's a shame that the creator of the comparison images doesn't know enough image processing to understand that you can't grab stills from HDR content from a player that doesn't properly support HDR.
On the other hand, the state of HDR support is a mess right now in software players. Playing HDR content in common players like VLC, QuickTime, IINA, and Infuse will give you significantly different results between all of them. So I can't actually blame the creator of the images 100%, because there isn't even a documented, standardized way to compare HDR to non-HDR content side-by-side, as far as I know (hence why each player maps the colors differently).
The upscaled versions also screwed up the camera focus blur by artificially removing it, and unnecessarily took out the film grain. Even leaving the grain and blur aside, the texture of the objects depicted is also getting seriously screwed up, with unrealistic-looking smoothing and weird patches of heightened contrast unrelated to the original scene.
More generally, automatically second guessing the artistic choices of whoever originally color graded the film, for the sake of adding narratively gratuitous features like HDR or extremely high resolution, is a nasty thing to do. There might be moderately more dynamic range in the original physical film than there was in a digital copy; if so, by all means try to capture it in a new digitization while trying to match the visual impression of a theater projection to the extent practical. The "AI" features demonstrated here are incredibly tacky though.
People put in charge of making new versions of beloved artworks need to have good taste and exercise some discretion. Art from different times and places looks different due to changes in both the medium and the culture, and we should enjoy those differences rather than trying to erase them all.
Right. That director almost certainly chose that particular film stock because that particular grain lent the film the vibe they were looking for. Directors always look for ways to implement their artistic vision within the bounds of the medium, but the end result is almost certainly their artistic vision. A lot of that missing information is missing for a reason. Surely someone could use AI to invent things that go in all of the little shadows in Nosferatu, or steady the handheld video camera footage David Lynch was into for a while, but those things serve a purpose.
If people want to AI-enhance movies for their own viewing pleasure, then great. Watch it in reversed color or with a face-swap for all I care. But "improving" the original by inventing detail that the director probably never wanted to show to begin with is a parlor trick devoid of any artistic merit.
In the case of Aliens, James Cameron is explicitly saying he just went with Kodak film that was available at the time, which everyone was using.
I will submit that directors are not generally the right people to call the shots here. A director can never watch their own movie the way the audience does. They always see an unfulfilled vision, a compromise being made here and there because of technology being limited, and some like Cameron or Lucas will make misguided attempts to modernize to make up for it.
This disregards that Aliens will always be a goofy 80s movie and not a movie from the 2020s. Its look should be preserved, be that the film grain, or its color grade. At least, the few people who still buy optical media should be given the choice, as they are the most likely to care.
It contains very little humour; all the cast and the director are taking it seriously. I'll buy Goonies as a goofy film, but Aliens? I remember someone glued to a wall and screaming in agony as a creature tears its way out of her chest. I remember the ratcheting tension of the motion-detectors getting more and more high pitched as nothing is happening but we're expecting it any moment now.
I know there's some don't-think-about-it plot-holes, but goofy?
To me, there is something intrinsically goofy about space marines going out to fight the insectoid aliens, even when it is not as much on the nose as e.g. Starship Troopers. It is also not meant as a negative, I really like the movie for what it is.
> In the case of Aliens, James Cameron is explicitly saying he just went with Kodak film that was available at the time, which everyone was using.
Ok, fair. James Cameron can do whatever he wants with his films. They are his artistic output, and if he feels the technology available at the time didn't properly communicate that, he can certainly work to mitigate that however he wants. I assure you, this is not common among directors. If you invited Ridley Scott and Derek Vanlint to a meeting informing them that Alien was being re-released as an AI "enhanced" movie, I sure as fuck wouldn't want to be in that room.
> I will submit that directors are not generally the right people to call the shots here. A director can never watch his own movie the way the audience does.
Ha. I'm a Tech Artist and do VFX, so I have more visibility than most into how a lot of these decisions get made. I'm not trying to be a jerk, but of all the confidently incorrect declarations about art I've seen in tech fora, this has got to be in the top 5.
To some extent, the misconception is understandable from a consumer vantage point: without spending a lot of time creating images from nothing to have a specific impact on the viewer, you really don't understand how many decisions there are to make about minutiae, and how much it all affects how the work hits.

Prompt-based AI image generators are a good example of this. Their making these decisions is the real boon for non-artists that want to make images, and also why they're non-starters for high-level professional deliverable work. Learning to use art tools to produce clean images is the easy part. The hard part is deciding at an extremely granular level what those images should contain. Since the models amalgamate those decisions from the millions of decisions that artists have already made in different images, that lets end users indefinitely create visually compelling images without having to know why they're compelling at a granular level. But the decisions at that granular level communicate a lot to the viewer, and even with the finer-grained control you can get with prompt manipulation, control nets, inpainting, loras, etc. it's still bumper cars vs driving.

Famous artists are usually famous because their vision isn't just a mix of what other people have done before them-- it's got other frameworks, stimuli, references, connections and influences that are implemented in novel, complex, and very specific ways. Fighting against the amalgamated decisions of every artist that contributed to the model to achieve those results requires so much more effort than just making it from scratch. That's why generative AI in industry tools like Nuke doesn't integrate prompts or generate direct visuals-- it does things like make masks for compositing. A huge win, but 100% not creating images that go on screen. On a much more extreme level, Disney could not have trained a model with the entirety of early animation and pre-1937 non-animated visual art-- even their own films-- and come up with The Old Mill, no matter how much engineering they did.
In a film, almost all of those decisions made by an AI model when generating images are made, or at least finalized, by the director, often in conjunction with the DP and Production Designer. Most of them are obsessed with things like the slope and toe of film, grain, minute differences in very specific color representations, etc. etc. etc. and it affects nearly everything about the film-- set decisions, framing to get more or less of something, what time of day to shoot at, lighting, grading, etc. etc. etc.

Saying directors shouldn't have complete control over the medium on which their movies are captured betrays a complete lack of understanding of making movies, both as an art form and as a practical process. If you want to learn more about how movies are made before you pontificate about the craft of making movies, I recommend looking at StudioBinder's youtube channel for a sub-cliff's-notes level introduction to how it's actually done.
People seem to enjoy that we've been enhancing dead celebrities by digitally digging them up and making them dance for us on stage, so why not? Beyond that, why stop at colorization? Maybe there's other artistic decisions we could enhance. Like we could change the plots of existing movies to reflect our more sophisticated modern experiences. Let's regenerate Mad Men but everyone has smart phones, and make the USO scene in Apocalypse Now an Ariana Grande show.
This is the original sin of this process. They are tampering with the artistic object, it's no longer what the artists intended.
Some might say this is being pedantic, but the quality of the image has a huge impact on the feel of a movie. For some films, such as Blade Runner, the mood and feel created by the (dark, obscured) look of the film is easily half of the impact. Changing that film would be a crime against humanity, yet I imagine it's only a matter of time before it gets regraded so people can see the film "properly".
Most movies ever made were not meant to be watched on home TVs. Playing it on your crappy 50" Samsung with a soundbar is as much of a sacrilege as running it through an AI. I guess you have never done that, right?
So if I have vision issues, I'm also the problem, right?
The first time I saw Blade Runner was on a CRT TV from VHS tape and it was still faithful to the original, which I've seen on the big screen and various digital formats too. Poor reproduction doesn't add something that isn't there in the original, it just degrades the image. That generally can't be controlled and is a lot different to intentionally changing the image.
Yes, that's exactly right, and that's why people still go to the movies when almost everyone has a screen they could watch movies on if they so desired.
Despite the hyperbole of “original sin” or “crime against humanity”, what people are saying is that AI filtered movies are crappy just like watching a good movie in a shitty screen; but I would argue that our social outlook is worse because nobody would say that they’re improving movies by using a shitty screen and shitty audio system, yet we do see that with all the AI hype.
> On the other hand, the state of HDR support is a mess right now in software players. Playing HDR content in common players like VLC, QuickTime, IINA, and Infuse will give you significantly different results between all of them.
This is the main thing that has kept me from adopting HDR in my media library. I'd expect a feature like HDR would be progressive (if that's the right term that I'm going for), i.e., in a non-supported player, it would look exactly like the non-HDR version of the content, and simply adds more dynamic range when in an environment that supports it. Without that, I'm not going to grab a version of a media file that might look worse if I'm playing it in a non-ideal context.
Does anyone know why this isn't the case? Is it not technically possible for some reason I'm not thinking of, or was it really just a fumble of standards design?
The shame is in comparing compressed-to-hell streaming versions to Blu-Ray in the first place, and commenting on how the Blu-Ray is "sharper." ANY Blu-Ray version should be much better than streaming.
The NYT isn't free. I don't respect such shoddy feature-writing.
Exactly. Making a Blu-ray super high quality involves no additional marginal cost. The disc space is there to begin with, so might as well use it.
But every time a movie gets streamed, it costs the streamer twice as much if the file is twice as large. So streaming bandwidth will generally be minimized as much as possible to a "just barely good enough" quality level.
It's not about technology improving or how old Blu-rays are, it's about economics.
Duh, of course. But as bandwidth becomes more available (ie cheaper), it makes more sense for streaming services to stream in higher and higher quality.
Compare eg Youtube videos today to Youtube videos 15 years ago.
> Compare eg Youtube videos today to Youtube videos 15 years ago.
I see no difference. Their 1080p is still the super-crummy 1080p they had 15 years ago.
In fact, if you pay for YouTube Premium you can get a separate "1080p Premium" with a higher bitrate. But you have to pay. Their free version hasn't gotten better at all. If anything, it's gone the other direction -- they've gotten more aggressive about defaulting to 720p or 480p streams when they used to show 1080p by default.
Sure, a tiny handful of videos have 4K versions available, but that's only for a tiny proportion of streams. And YouTube even removes higher resolutions after a video has been published a bit. Which is why you see lots of YouTube VR videos described as 8K but there's no 8K stream anymore. There was when it was uploaded, but YouTube removed it.
So, no. Even if bandwidth gets cheaper, streaming twice the bits is still twice as expensive. Streaming services aren't going to stream more bits -- they're just going to take more profit, as long as it's still minimally good enough for the average viewer.
> Their 1080p is still the super-crummy 1080p they had 15 years ago.
Oh, 1080p was indeed introduced in 2009, i.e. 15 years ago. I thought it was newer. However, just pretend I was talking about the next higher resolutions, or 60fps.
> [...] they're just going to take more profit, as long as it's still minimally good enough for the average viewer.
> Competition will take care of that extra profit.
No it won't. I said "minimally good enough". Competition takes care of things up to "minimally good enough". And then it stops.
That's my point -- bandwidth getting cheaper is not leading streamers to increase the quality of their 1080p. Because not enough people care. Not enough people will switch services. Competition ceases because it's not a significant factor of differentiation.
(For the niche people that do care, that's why YouTube premium exists, but it's niche.)
It's not surprising. The usual quality metrics that video-encoder people use tend to be positively correlated with saturation (and in fairness, this is what people think is better 'quality').
It's not, though -- it's entirely a problem with players. Not just software players but also TVs.
There are lots of TV shows now that are available in 1080p SDR and in 4K HDR. When you play them both on any player or TV, they should have the same brightness in "regular" scenes like two people talking in a room. They're meant to. HDR is only meant to make a handful of elements brighter -- glowing lightsabers, glints of sunlight, explosions. HDR is never meant to make any of the content darker.
Unfortunately, far too many video players and televisions (and projectors) map HDR terribly. The TVs aren't actually meaningfully brighter, but they want to advertise HDR support, so they map the wider HDR brightness range to their limited brightness, so that lightsabers etc. get the full brightness, but everything else is way too dark. This is the disaster that makes everyone think HDR content is too dark.
The correct thing to do would be to display HDR content at full brightness so it matches SDR for normal scenes, and then let things like lightsabers, glints of sunlight, etc. just get blown out to white -- to admit the hardware has brightness limitations and should never have been advertised as HDR in the first place.
So the problem isn't with HDR content. The problem is with HDR players, TVs, and projectors. It's mostly on the hardware side, with manufacturers that want to advertise HDR support when that's a lie, because they're not actually producing hardware that is HDR-level bright enough.
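To make the two strategies concrete, here's a toy sketch with made-up numbers (a 300-nit panel showing content mastered to 1000 nits; not any specific TV's actual algorithm):

    # Toy illustration of the two tone-mapping strategies described above.
    DISPLAY_PEAK = 300.0    # what the panel can actually do (hypothetical)
    HDR_PEAK     = 1000.0   # what the content was mastered for (hypothetical)

    def clip_highlights(nits):
        # Keep normal scenes at their intended brightness; let the
        # lightsaber-style highlights the panel can't reach blow out.
        return min(nits, DISPLAY_PEAK)

    def scale_everything(nits):
        # What many TVs effectively do: squeeze the whole 0..1000 range
        # into 0..300, which makes ordinary scenes look dim.
        return nits * DISPLAY_PEAK / HDR_PEAK

    for scene_nits in (10, 100, 203, 1000):   # shadows, a lit room, reference white, a glint
        print(scene_nits, clip_highlights(scene_nits), scale_everything(scene_nits))

With clipping, the two-people-talking scene looks the same as the SDR version and only the glint is compromised; with scaling, the whole film ends up around a third as bright as intended.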
Also just to piggy back on this, every television I have ever purchased has been calibrated like HOT, SICK ASS from the factory.
Step ONE every time I buy a TV is to spend a half an hour or so using tons of calibration images to get it dialed in to a sensible setting, and it's shocking how much a lot of them need to be changed. Almost every TV comes calibrated to be used as a display unit in the store, which means contrast is rammed through the roof, brightness is way up, they usually have about 15 different kinds of smoothing or frame interpolation going, "denoisers," color filtering for some insane reason, it's fucking ridiculous.
This is honestly why most of the time I buy cheap-ass off-brand TVs, because there's just less bullshit to turn off when the TV has "fewer" features.
It's so true. Honestly I will never understand it.
Even just talking about SDR for the moment, films and TV shows have a lot of work put in so that their color, brightness, sharpness and contrast are just right.
And then modern TV settings apply all sorts of filters that just destroy it.
It's bizarre. I do the same thing you do in terms of calibration and it takes forever. All just to get it to act like a dumb monitor for my Apple TV. I just want it to display the signal and nothing more.
So many people are watching TV and films with weird color, weird contrast, and weird swimmy frame interpolation. And that's before you get to HDR where it's weird dark on top of it all.
It genuinely makes me want there to be either a government regulatory body or else some kind of private-organization seal that guarantees you get a normal-color, normal-contrast, normal-brightness, normal-motion image as filmmakers and TV producers intend. You can have all the other options too, but either make them non-default, or provide a single-click "director's choice high-fidelity (TM)" option to undo all the garbage they add. Or something like that.
> Step ONE every time I buy a TV is to spend a half an hour or so using tons of calibration images to get it dialed in to a sensible setting, and it's shocking how much a lot of them need changed.
Eh. Usually you can flip it to movie + warm(2) + dynamic contrast off + enhancements off (think noise reduction) and you’re 90% there. You can get a smidge better picture with calibration images, but most midrange+ TVs come decently calibrated out of the factory these days. Not really worth the effort unless you go all the way and use a colorimeter.
The problem is finding all of those settings. They're all under different submenus (there's certainly no global "enhancements off" toggle), and even figuring out the "no filter" setting is often non-obvious.
E.g. on my Epson projector, there's a "sharpness" setting that goes from 1 to 10, and IIRC the only way to turn off sharpening is to set it to 3. Because values 1 and 2 wind up applying a blur filter. It's not documented. You have to figure out the "3" value from trial and error with a calibration image.
Similarly you need to figure out whether each of your connected devices is outputting RGB 0-255 or RGB 16-235, and set the toggles for your TV to handle the correct input range.
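For anyone wondering what the range mismatch actually does to the picture, here's a toy version of the arithmetic (8-bit values; my own illustration, not any particular TV's processing):

    # Limited ("video") range puts black at code 16 and white at 235;
    # full ("PC") range uses 0 and 255.

    def full_sent_tv_assumes_limited(v):
        # TV stretches 16..235 to black..white, so anything the device sends
        # below 16 or above 235 is clipped: crushed shadows, blown highlights.
        return round((min(max(v, 16), 235) - 16) * 255 / 219)

    print(full_sent_tv_assumes_limited(0),
          full_sent_tv_assumes_limited(8),
          full_sent_tv_assumes_limited(16))
    # -> 0 0 0: three different shadow values become the same black

    # The opposite mismatch (device sends limited, TV assumes full) leaves
    # black sitting at code 16 and white at 235, so everything looks grey
    # and washed out instead.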
And so forth. The calibration images aren't for figuring out perfect color accuracy with a colorimeter. They're for basic things like displaying smooth gradients from black to white, and are the blacks or whites being cut off? Is the gamma totally off when compared to a checkerboard? Are saturation gradients displaying normally or does it show the image is over-saturated? Is a series of alternating black and white lines displaying as simple lines, or is there a crunchy halo around them (from sharpening) or blurring?
So it's not "eh" at all. In my experience it takes 15 to 30 minutes, as you hunt around the submenus to try to find all the relevant settings, and try to figure out what each setting even means since the documentation is useless. Like, on a power-saving display, should the main color setting be "Natural" or "Cinema" or "Bright Cinema"? You're going to have to experiment.
Where do you get the reference images, how do you display them on the tv, and to what do you compare them? I guess I should just search how to calibrate a tv.
These are my go-tos: https://www.eizo.be/monitor-test/ Don't cover everything and will be useless for HDR, but they're a good starting point IMO for SDR.
> HDR is only meant to make a handful of elements brighter -- glowing lightsabers, glints of sunlight, explosions. HDR is never meant to make any of the content darker.
I have to do some math to check if this matters or not, but HDR is able to make dark scenes much better too. SDR video doesn't have nearly enough bit depth to represent anything dark well (it's not even 8-bit) whereas 10bit HDR does. So I would expect to see more dark scenes in properly displayed HDR.
Also, in dynamic HDR you should be able to make a dark scene by encoding it brighter and using the metadata to darken it, though I don't remember if it actually works in practice or if anything does it.
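Rough numbers, if anyone wants them (my own back-of-envelope; assumes gamma-2.4 SDR at a 100-nit peak and full-range codes, and ignores the narrow video range):

    # How many code values land below 1 nit?
    def pq_oetf(nits):
        # SMPTE ST 2084 inverse EOTF
        m1, m2 = 0.1593017578125, 78.84375
        c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
        y = (nits / 10000.0) ** m1
        return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

    def sdr_oetf(nits, peak=100.0, gamma=2.4):
        # Simple display-referred gamma encoding for SDR
        return (nits / peak) ** (1.0 / gamma)

    print(round(pq_oetf(1.0) * 1023))   # ~154 of 1024 codes below 1 nit (10-bit PQ)
    print(round(sdr_oetf(1.0) * 255))   # ~37 of 256 codes below 1 nit (8-bit SDR)

So 10-bit PQ spends roughly four times as many steps on the darkest nit, which is why well-displayed HDR shadows band noticeably less.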
Isn't the actual issue that SDR content tends to be mapped so that 0 is the darkest colour the TV can display and 1.0 is the brightest colour the TV can display?
When an HDR signal comes along, then of course the brightest element it can display isn't any brighter than 1.0 was in the SDR image. This has the effect of the scene being overall darker in order to give more headroom for brighter elements to be brighter.
But what is the other option? Have SDR content be dimmer than what the HDR monitor would otherwise be capable of displaying? I doubt you will get as many sales as your competition with that approach because the TV that renders SDR as bright as it can will always look better.
(I do understand that there are some caveats relating to the fact that max brightness is different depending on the amount of the screen covered in brightness, but I don't think it's actually relevant to this argument overall)
The real problem is that 400-600 nits is allowed to be called a “HDR-certified” display.
For “real” HDR you want at least 800 nits, ideally 1000. Otherwise you have to do what is stated upthread, latch max brightness to whatever your maximum nits is and make everything else very dark to keep the relative contrast intact.
I'm not an expert on the matter but Apple went with 1.0 for full SDR brightness, so you have to query for the maximum display brightness and explicitly use RGB values > 1.0 for HDR. This particular approach might not be a good idea for TVs but this kind of mapping seems easy to reason about and just makes a lot of sense to me. And since this does limit brightness for non-HDR-aware apps and content people figured out they can cover the screen with a multiplying overlay to boost the brightness up to the max.
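A toy version of that model, just to show the shape of it (function and numbers made up for illustration, not Apple's actual API):

    # "SDR reference white = 1.0, HDR highlights go above it."
    def present(linear_value, headroom):
        # headroom = display peak / SDR white, e.g. 1000 / 200 = 5.0
        # SDR content (<= 1.0) is untouched; HDR highlights use the extra
        # range and only clip where the panel genuinely runs out of nits.
        return min(linear_value, headroom)

    print(present(0.9, 5.0))   # ordinary SDR-range pixel: unchanged
    print(present(8.0, 5.0))   # extreme highlight: clipped at the panel's limit

The nice property is exactly the one described above: non-HDR-aware content just lives in 0..1 and looks the same as it always did.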
I actually think that probably is the correct approach. It seems like studios are releasing their 4K Blu-rays in HDR by default now, and it is absolutely worsening the viewing experience for a large majority of consumers.
Somehow the transition from SDR to HDR has been messed up in a major way, for which I can't think of any similar equivalent. E.g. there was never any "step backwards" in going from 480p to 1080p or 4K, or from stereo to 5.1, or from h.264 to h.265.
But the rollout of HDR has been bungled catastrophically. The content is technically fine but the hardware side has been a disaster.
If I were the movie studios, I would seriously think about releasing 4K only in SDR until television manufacturers start showing HDR at full brightness again, and release software updates to fix it in existing models.
Oh yeah spatial audio (at least what iOS calls "spatial audio") is horrible. I disable it everywhere I can, but it has to be disabled in control center per app. Some apps stop playing audio when you open control center, making spatial audio seemingly impossible to disable there. There's not a single instance where I prefer having the audio sound like it's coming from the device.
However, like HDR, it can be difficult to avoid. My laptop screen supports HDR because the laptop is otherwise exactly what I want and it doesn't come in a non-HDR variant. My earbuds support spatial audio because I elected to get AirPods Pro for their other qualities and they don't come in a variant without it.
"Vote with your wallet" isn't really an option with things like this.
I thought that Spatial Audio is different from “audio coming from the device” — the former is better physical separation of sound, the latter adjusts left/right balance when your head turns. And the latter can be disabled globally for AirPods.
I guess the option I'm thinking of is "Spatialize Audio", not "Spatial Audio". My bad.
Anyway I've listened to music mixed for Apple Music with "Spatial Audio" too and it sounds like crap compared to normal stereo mixes IMO so I guess my opinion stands.
Where do you configure "Spatialize Audio" globally? I can't find anything in the Bluetooth settings (where they've hidden every other AirPods config)
It's one of the weights that you put into your purchasing decision. The phone that has all of my desired characteristics is kind of hard to find, and getting harder.
But, the net effect is still in the right direction, maybe.
It is not qualitatively bad, it is inappropriate, but also you do not get to see what the unprocessed 4k image would have looked like. That film grain looks great at higher resolution.
Static images don't tell the whole story. If motion blur is reduced by AI that could make final video worse since motion blur helps us perceive 24fps movement smoother than it is.
Yeah if the difference is this obvious in a small thumbnail of a heavily compressed image then something probably went wrong. As you said, the colour space doesn't look quite right, though I can't identify exactly which mix-up took place.
I'm also not too sure about comparing it with some unidentified 'streaming' version. That's like comparing a high resolution digital audio file with a phone call.
Selective sharpening does have a tendency to overemphasize clear edges while leaving the 'unclear' parts untouched; this can give a bit of a plasticky effect, especially on skin.
It used to be that (rational) people were largely in agreement that one cannot extract information which is not there. That's why we laugh at the "zoom -> 9 pixels -> enhance -> clear picture -> zoom ->.." trope in movies..
AI does not change this.. It adds stuff that's not there, sure, it adds stuff that might be there, or could have been there, but it literally cannot know, so it just paints in plausibility.. Which is horrible and gross already in principle.
I imagine we're not far from the first conviction based on AI enhanced images.. give the model a blurry CCTV frame and a list of suspects and it will enhance the image until it looks like one of them, and since people are apparently this stupid, someone is going to be locked up for something they didn't do.
Back to movies, just, fuck no! There's no enhancements to make, the movie is _DONE_, it looks like it does, leave it.
> That's why we laugh at the "zoom -> 9 pixels -> enhance -> clear picture -> zoom ->.." trope in movies..
> AI does not change this..
It kind of changes it in some cases. The information might be there, but superhuman pattern recognition might be needed to extract it.
And of course, in case factuality doesn't matter, the missing information can be generated and filled in. This obviously doesn't work when you are looking for a terrorist's license plate or when you want to watch an original performance in a movie.
"Information" in terms of, what does this thing look like -- could maybe be determined from other shots -- yes, sure.
But I think "information" in the context of film here refers to the indexical mark of light upon the image sensor, and in that case no. If it's not recorded, you can't extract it. And whatever you do put there is of little interest to the film buff to whom "image quality" means a more faithful reproduction of the negative that was seen in theaters.
I'm talking in the context of the quoted statement
>"zoom -> 9 pixels -> enhance -> clear picture -> zoom ->.." trope in movies
You can have e.g. a picture of a blurry piece of paper, that humans can't read, but I imagine software might be able to read it (with reasonable accuracy and consistency). The information might be recorded, but hidden.
That’s not really true though. Unless you’re talking about a trivial distortion, like inverting the colors, there’s always some loss of information. In the case of blurry text, we’re still making an assumption that the paper holds some form of human writing, and not literally a blurry pattern. Maybe there’s external context that confirms this. But solely based on the image itself, you can’t know this. It’s basically a hash function; there are multiple possible “source” images of what’s on the paper that may end up looking exactly the same on the blurry/low-res/degraded etc output video. Human readable text is likely the most plausible but it’s not 100%.
You can’t reverse an operation that loses information with absolute certainty unless you are using other factors to constrain the possible inputs.
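A two-pixel version of that point, for concreteness (toy numbers):

    import numpy as np

    # Two different "originals" that reduce to exactly the same low-res pixel,
    # so no enhancer, AI or otherwise, can tell you which one you started with.
    a = np.array([[10, 30], [50, 70]], dtype=float)
    b = np.array([[ 0, 40], [40, 80]], dtype=float)

    print(a.mean(), b.mean())   # both 2x2 blocks average to 40.0 after downsampling

Anything an upscaler draws back into that block is a choice between a and b (and infinitely many others), not a recovery.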
> For example with license plates. You know the possible letters and how they appear when "blurred" so you can zoom-enhance them.
And even STILL, you can't be SURE.
Let's imagine some license-plate system optimized to give the biggest hamming distance for visual recognition (for example, not both O and 0, only one of them; not both I and 1, only one of them; and so on) to make it as good as possible.
Now, you take some blurry picture of a license-plate and ask the AI to figure out which it is. Well, one of the symbols is beyond the threshold of what can be determined, and the AI applies whatever it's learned to conclude (correctly, per the rules) that the only allowed symbol there is 'I'.. Now, thing is, the license-plate was a fake, and the unrecoverable symbol didn't conform to the rules: there was actually a 1 printed there, but the AI says it's an 'I' since that's the only allowed symbol.. It just made up stuff that was plausible..
You cannot extract what's not there. You can guess, you can come up with things that _COULD_ be there, but it makes no difference, it's not there.. It's the same with colorized vintage videos, we can argue that it'd not be wrong to assume this jacket was brown since we have lots of data on that model, but we _CAN_NOT_ know if that particular jacket was indeed, brown, it might have been any other color that made the same impression on the monochrome film. The information is _GONE_.
That's why I said "with reasonable accuracy and consistency". Human can't be SURE either. Nothing is ever SURE if we want to stretch it to absurdum.
My entire point is that computers can be better than people at a given visual recognition task. Therefore we might discover that some information is present in the data even though we previously thought that information was not recorded.
That's literally the entire argument. I'm not sure what you are opposing.
I tend to disagree. Even with superhuman pattern matching, what makes a frame unique is everything in it which does NOT follow the pattern: the way the grain is distributed, the nth-order reflections and shadows.
When you have moving film, you have 24fps from the same scene, a lens and a compression algorithm.
There is a higher chance that pixel a is colour y if, over a span of x frames and despite compression artifacts, that pixel keeps showing value z.
You will also have the chance to track details visible in frames/stills from seconds or even minutes earlier if the actor is wearing the same clothes.
And you can estimate the actor's face across the whole movie and build a consistent model of it.
Nonetheless, besides this type of upscaling, if an AI is trained to upscale based on probability of the real world and the movie is from the real world, it is still more than just random.
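That part is real and doesn't need any invention. A crude sketch of why multiple frames help (assumes perfect alignment, which is the genuinely hard part in practice):

    import numpy as np

    rng = np.random.default_rng(0)
    truth = np.linspace(0, 1, 100)                                    # the underlying detail
    frames = [truth + rng.normal(0, 0.2, 100) for _ in range(24)]     # one second of noisy frames

    single  = np.abs(frames[0] - truth).mean()
    stacked = np.abs(np.mean(frames, axis=0) - truth).mean()
    print(single, stacked)    # stacked error is roughly 1/sqrt(24) of a single frame

That's recovery of information actually present in the footage, which is a very different thing from a model painting in what it thinks grain or skin should look like.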
Btw. there have been plenty of movie makers making movies the way they did because that's the only way they were able to make it. Low light high end camera equipment is expensive.
And if you watch The Matrix without editing on an OLED of 60" and up, it looks off because greenscreen details are more noticeable than they used to be. It's absolutely valid to revisit original material to adjust it to today (at least for me).
It's frustrating that the article doesn't mention this relatively simple, well understood, and very relevant principle. (Unless I just missed it somewhere, someone please correct me if I'm wrong).
There's also the law of large numbers here. When you make hundreds and thousands of guesses, even with really high accuracy, you're going to get some wrong. For a 4K movie at 24 frames a second, you're talking millions of guesses per minute, perhaps billions or more over the length of an entire film. It's inevitable you're going to get weird glitches, artifacts, or just parts that look "off".
It's better to use a light touch for techniques like this rather than just blindly applying it to the entire film.
> AI does not change this.. It adds stuff that's not there, sure, it adds stuff that might be there, or could have been there, but it literally cannot know, so it just paints in plausibility.. Which is horrible and gross already in principle.
I agree. "AI-enhanced" is just the latest badge to put on content in the hope that some suckers will pay more money for the "same" content they already have. There's no striving for a better version here, just a drive for more profit.
Wow, that tweet they link to with a super punched in shot looks really really bad! Hard to believe Cameron thought this looked better than just a normal 4k transfer, yikes. Was really looking forward to a UHD release of The Abyss but now I'm not so sure...
I really hate the result; they apply something like a Gaussian filter followed by deconvolution, which makes people's faces very uncanny: super soft skin with super sharp wrinkles.
The one on the right looks like you ran an edge-directed upscaler on it. Those things have distinct artifacts, and sometimes it looks like all curves turn into snakes. Or it can make new diagonal curves out of random noise.
Not knocking edge-directed upscalers though, they can work in real time and are very good for line-art graphics. You can even inject them into games that have never had that feature before.
The automated HD 'remaster' of Buffy is the prime example of how badly this can go wrong. A great breakdown of the problems is on YouTube here:
https://youtube.com/watch?v=oZWNGq70Oyo
Great video and a travesty. I'm not a huge fan, but I respect that people are. That they ruined the show, and that it's hard to find in its un-ruined form…
> Hard to believe Cameron thought this looked better
I doubt he even looked, he's too busy with his blue monkeys these days. Most likely someone duped him into taking on the AI upscaling, he signed off on it without looking, and the movie studio just shipped the output without QA to save time and money, because it's going to streaming, not cinemas.
I also found the studio audio description track for Aliens very ordinary. It's a film that says "aliens" on the tin, but there are no verbal descriptions for them other than "giant", "towering" and "beast". Not really doing Giger, Winston or Cameron justice there. I'd be surprised if he heard it, or read the script prior to recording.
It seems a little wild to assume the maker of a movie would care less about the movie than a random mob on the internet (instead of maybe just having a different opinion), but the assumption does feel very internet.
I inherited a 34” flat Sony Trinitron from a roommate moving out. He inherited it from the guy who sold him his house. It was two inches shy of the largest CRT Sony made, and it was a damn 200 lbs (91 kg) white elephant.
I eventually foisted it off on some deliverymen for free. They didn’t believe me when I told them the weight. When they finally lifted it, it was gratifying to hear one grunt, “Damn. I guess it really is 200 lbs.”
352x480, not 240. The Abyss, being shot on film, is 24fps. A VHS of The Abyss would achieve 29.97fps by using 3:2 pulldown which is lossless since it duplicates fields to make up the difference.
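For anyone who hasn't seen it spelled out, the pulldown pattern is just this (rough sketch; it ignores the top/bottom field alternation and the 0.1% slowdown from 24 to 23.976 fps):

    # 3:2 pulldown: 4 film frames -> 10 interlaced fields -> 5 video frames,
    # so ~24 fps fits into 30000/1001 (~29.97) fps without discarding anything.
    def pulldown_32(frames):
        fields = []
        for i, f in enumerate(frames):
            fields += [f] * (3 if i % 2 == 0 else 2)   # 3 fields, then 2, repeating
        return fields

    print(pulldown_32(list("ABCD")))   # ['A','A','A','B','B','C','C','C','D','D']

Which is also why it's reversible: drop the duplicated fields and you're back to the original film frames.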
It depends. A large movie - yes. But many movies were made with less of a focus on the cinema and more on the video rental ("direct-to-video") or TV syndication.
TV content like Star Trek: The Next Generation was definitely never meant to be displayed in high definition, and such things can show in props etc.
TNG is famous for being shot on 35mm film. Because of this, it had a higher "definition" than most other television productions and was able to be very well remastered for later releases, with some wonkiness around reshooting/remaking a lot of visual effects that had not been done on film.
Yes, no argument there. Many shows were shot on film and it made a big difference even back in the SDTV days, because electronic cameras just weren't any good. It wasn't only about resolution. There were electronic cameras with great horizontal resolution, but the look was still very "cheap" with blown highlights, muddy colors etc.
Another famous example is Friends, also shot on 35mm film. It has the advantage over Star Trek: The Next Generation that the props in Friends are actual, real life things, so if we now see the fine details of something, we just see a teapot a little better. Meanwhile the prop in Star Trek has a higher risk of looking like the duct-taped PVC pipe it is rather than the sci-fi ray gun it portrays.
Or say, that one actress in the pool scene where you see her from behind nude and, in low definition, nothing could be seen. Once they upscaled it, things were... more visible.
Since the film does not run in theaters indefinitely and at the time VHS was a common format, among others.
Given film reciprocity and resolution related to period produced film stock and processing techniques, VHS and other reduced resolution formats from the same period, look better, to me! than rescans(4k, which super35 barely supports) and up-resing. Again IMHO because I guess people take this otherwise.
I own equipment capable of viewing films and plates from any period of film production. I have processed and looked at found "film" from before 1900. I use quotes because it wasn't the same as today and many varieties of techniques existed to capture images or make moving images.
What do you mean 'again'? You never said that in your original comment.
Given film reciprocity and resolution related to period produced film stock and processing techniques, VHS and other reduced resolution formats from the same period, look better, to me!
What you're describing is just nostalgia, nothing more. That's fine, no need to rationalize it.
Why stop at VHS? Why not copy it a few times back and forth between tapes?
I don't see how these can be compared. They are totally different color schemes. It's possible that if you took the right image and displayed it using the same color scheme as the left image that it would look better than the left image.
My biggest gripe with these AI film enhancements is that they are adding information that was never there. You're no longer watching the original film. You no longer have a sense of how contemporary film equipment worked, what its limitations were, how the director dealt with those limitations, etc.
I don't think that's universally true of all AI enhancement though. Information that is "missing" in one frame might be pulled in from a nearby frame. As others have pointed out, we are in the infancy of video enhancement and the future is not fundamentally limited.
If that takes away from the artistic nature of the film I understand the complaint, but I look forward to seeing this technology applied where the original reel has been damaged. In those cases we are already missing out on what the director intended.
In part, we need more vocabulary to distinguish different techniques. Everything is just "AI" right now, which could mean many different things.
Standard terminology would help us discuss what methods are acceptable for what purposes and what goes too far. And it has to be terminology that the public can understand, so they can make informed decisions.
I also think we already have a bunch of old words for the techniques, like "upscaling" or "statistics". It's "AI" everything now but the old words for the old techniques are waiting to be used again.
If there is a movie which is only shot in 1080p and I have a 4k TV, it seems like there’s three options. One, watch it in the original 1080p with 3/4 of the screen as black border. Two, stretch the image, making it blurry. Three, upscale the image. If you give me the choice, I’m choosing 3 every time.
Sorry if it sounds crass, but I feel the process of shooting the movie is less important than the story it is trying to tell.
Upscaling algorithms vary from the extremely basic to the ML models we see today that straight up replaces or adds new details. Some of the more naive algorithms do indeed just look blurry.
Most people don't care. Photographers had a real great time pointing out that Samsung literally AI replaced the moon, but some Samsung S21 Ultra users were busy bragging how great “their” moon pictures turned out. Let's judge AI enhancements like sound design: Noticeably good if done well, unnoticeable if done satisfactory and noticeably distracting if done poorly. The article shows a case of noticeably distracting, so they're better off with the original version.
It's a fundamentally different concept of photography though, one that becomes more similar to a painting or collage than a captured frame of light. Whatever the merits of one over the other for the purposes of storytelling, it's a bit worrisome when the distinction is lost on people altogether.
I get how a film buff might care, and agree the original version should be available, but isn't there space for people who just want to see the story but experience it with modern levels of image quality? The technical details of the technology of some point in time are definitely interesting to some people, but as, say, the writer or someone else associated with the creative and less technical aspects of a film, I may find the technical limitations make the story less accessible to people used to more modern technologies and quality.
What does "modern levels of image quality" mean in this context?
The article is about AI upscaling "True Lies", which was shot on 35mm film. 35mm provides a very high level of detail -- about equivalent in resolution to a 4k digital picture. We're not talking about getting an old VHS tape to look decent on your TV here.
The differences in quality between 35mm film and 4k digital are really more qualitative than quantitative, such as dynamic range and film grain. But things like lighting and dynamic range are just as much directorial choices as script, story, any other aspect of a film. It's a visual medium, after all.
Is the goal to have all old movies have the same, flatly lit streaming "content" look that's so ubiquitous today?
I think the argument against "isn’t there space for people who just want to see the story but experience it with modern levels of image quality" is that such a space is a-historical -- It's a space for someone that doesn't want to engage with the fact that things were different in the (not even very distant) past, and (at the risk of sounding a bit pretentious) it breeds an intellectually lazy and small-minded culture.
The problem with that is the content is usually shot with a certain definition in mind. If you don't film certain scenes from scratch, they can end up looking weird in higher definition, simply because certain tricks rely on low definition/poor quality, or because you get a mismatch between old VFX and new resolution, for example.
It's a widespread issue with the emulation of old games that have been made for really low resolution/different ratio screens and slow hardware, especially early 3D/2D combinations like Final Fantasy, and those that relied on janky analog video outputs to draw their effects.
For a specific simple example: multiple Star Trek TV series were shot with the assumption that SDTV resolution would hide all the rough edges of props and fake displays. Watch them in (non-remastered) HD and suddenly it's very obvious how much of the set is painted plywood and cardboard.
One somewhat funny example of this is in the first ST:TNG episode "Encounter at Farpoint". In one shot, the captain asks Data a question, and the camera turns to him to show him standing from his seat at the conn and answering. At the bottom of the screen, it's plainly visible (in the new Blu-Ray version) that a patch of extra carpet is under the edge of the seat. It was probably put there to level the seat or something. At the time, this was ignored, because on a standard SDTV screen, the edges are all rounded, so the very edge of the frame isn't normally visible.
Another thing that's plainly obvious in TNG's remastered version is all the black cardboard placed over the display screens in the back of the bridge, to block glare from lights. In SDTV, this wasn't noticeable because the quality was so bad.
Actually I would expect AI upscaling of SDTV in this case to perform better. It would assume semantically that the props were real and would extrapolate them as such.
For anything that's not just "grab a camera and shoot the movie" the format that it is shot in is absolutely taken into account. I don't think you can separate the story from how the image is captured.
'Film buff' responses are common to every major change in technology and society. People highly invested in the old way have an understandably conservative reaction - wait! slow down! what happens to all these old values?! They look for and find flaws, confirming their fears (a confirmation bias) and supporting their argument to slow down.
They are right that some values will be lost; hopefully much more will be gained. The existence of flaws in beta / first generation applications doesn't correlate with future success.
Also, they unknowingly mislead by reasoning with what is also an old sales disinformation technique: List the positive values of Option A, compare them to Option B; B, being a different product, inevitably will differ from A's design and strengths and lose the comparison. The comparison misleads us because it omits B's concept and its strengths that are superior to A's; with a new technology, those strengths aren't even all known - in this case, we can see B's far superior resolution and cleaner image. We also don't know what creative, artistic uses people will come up with - for example, maybe it can be used to blend two very different kinds of films together.
These things happen with political and social issues too. It's just another way of saying the second step in what every innovator experiences: 'first they laugh at you, then they tell you it violates the orthodoxy, then they say they knew it all along'.
I draw the line at edits that consider semiotic meaning. Edits are acceptable if they apply globally (e.g. color correction to compensate for faded negatives), or if they apply locally based on purely geometric considerations (e.g. sharpening based on edge detection), but not if they try to decide what some aspect of the image signifies (e.g. red eye removal, which requires guessing which pixels are supposed to represent an eye). AI makes no distinction between geometric and semiotic meaning, so AI edits are never acceptable.
Easy counterexample: dumb unsharp masking will ruin close-up scenes that are shot for softness and/or have bokeh. ML upscalers can do this too when applied mindlessly. But you can also train an upscaler on the same type of footage, or even on the parts of the same footage available in higher definition. Even if you don't, matching the upscaler with the intent behind the content is your job.
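A minimal version of the "dumb unsharp masking" being described, in case it's not obvious why it fights soft focus (1-D toy sketch, numpy, made-up numbers):

    import numpy as np

    # Unsharp mask: sharpened = original + amount * (original - blurred).
    # It boosts every transition it finds, including ones that were soft on purpose.
    def unsharp(signal, amount=1.5):
        blurred = np.convolve(signal, np.ones(5) / 5, mode="same")   # crude blur
        return signal + amount * (signal - blurred)

    # A deliberately gentle transition, like an out-of-focus edge:
    soft = np.concatenate([np.full(10, 0.2), np.linspace(0.2, 0.8, 10), np.full(10, 0.8)])
    out = unsharp(soft)
    print(round(out[9], 2), round(out[20], 2))   # 0.18 and 0.82: dips and overshoots, i.e. halos

The filter has no idea whether the softness was a lens choice or a transfer flaw; that judgment has to come from whoever applies it, or from what the model was trained to preserve.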
The separation you're talking about is imaginary, the line doesn't exist. Any tool will affect the original meaning if it doesn't match the execution. Remastering is an art regardless of the tool, and it's always an interpretation of the original work. It's fine to like or dislike this interpretation.
Remastering can screw up intent with something as simple as color grading.
But there is a line here. An editor that's using simple tools knows exactly what they're changing, and if they're using simple frame-global tools then they're not introducing anything that wasn't already there.
If you throw an AI at things, it will try to guess what things in the image are, and make detail adjustments based on that.
So that's three categories of edit, easily distinguished: human making frame-global changes, human deliberately changing/adding details, AI changing/adding details in a way that's basically impossible to fully supervise.
It sounds like they accept category 1 in remastering, even though it's not foolproof, and reject 2 and 3.
No, "AI" is absolutely not uncontrollable magic that does something you don't want sometimes. It's not an issue really, you always have arbitrarily granular control of the end result, with ML tools or not. You can train them properly, you can control the application, you can fix the result, you can do anything with it. It's the usual VFX process, and it's not the only tool at your disposal.
The problem is that remasters don't make a lot of money, so instead of a properly controlled faithful representation (or a good rethinking) it's typically a half-assed job with a couple filters run over the entire piece. Another issue is that you now have two possibly conflicting intents - one from the original and another from the remaster. ML hasn't changed anything here, it's always been like that.
Sure, my point is that proper remastering is not just applying a couple ML filters. If you're doing that you should either do this selectively or fix the result by other means, i.e. the same thing you would do with dumb processing. This is a labor intensive VFX work, feasible for a new movie but not feasible for a remaster.
Yes, back in the mid-late 80s Turner Entertainment colorized a huge number of old films in their vaults to show on cable movie channels. It was almost universally panned. It was seen at first as a way to give mediocre old films with known stars a brief revival, but then Turner started to colorize classic, multi-award-winning films like The Asphalt Jungle and the whole idea was dismissed as a meretricious money-grab.
Any art and/or media production executed well enough to be culturally significant rests on an enormous depth of artistic and technical choices that most audiences have zero awareness of—and yet, if you took them all away, you would have nothing left. Every change takes you further from the original artist's vision, and if all you want to do is Consume Slop then that's fine I guess, but the stewards of these works should aim higher.
Well there are movies which were technically well executed with poor stories, and great stories with poor execution. And there were movies which did well in both areas.
For example Tenet. Cool story, poor audio mix. (I don’t buy the explanation that Nolan had any reason other than expediency for this.) If we use “AI” to fix the audio after the fact, that’s a win in my book.
I’m not a film buff or a purist though. I watch movies with subtitles which is certainly not what the director had in mind, but that’s ok.
I agree, most people watch the movie for the story that unfolds. Few are looking at things like framing the subject, the pull of the focus, subtle lighting differences between scenes, they are interested in the story, not the art of filmmaking. The people offended by this are the ones that are crying about the art being taken out of it.
The film grain will have no effect if it's not visible due to image/stream compression, such as when the viewer sees the film on a video streaming service. HDR won't show up for most viewers. Details you need more than 1080p to see won't show for many (most?) viewers ... so I'd dispute your "will have an effect" here.
Good storytelling (and probably blunt spectacle) is the only thing common to all viewers that can win them over. For mainstream media everything else is gloss that may have no effect.
Most people don't even have their sound/brightness/contrast well-adjusted. Some free-to-view services regularly air content with the wrong ratio (and I've seen people happily sit watching the wrong ratio seemingly oblivious to it).
Yes, media nuances can have an effect on the unwitting, but I suspect much doesn't even have opportunity to.
> The film grain will have no effect if it's not visible due to image/stream compression, such as when the viewer sees the film on a video streaming service. HDR won't show up for most viewers. Details you need more than 1080p to see won't show for many (most?) viewers ... so I'd dispute your "will have an effect" here.
You're going too low level, I'm thinking of lighting and colour and intentional blur via adjusting focus.
> Good storytelling (and probably blunt spectacle) is the only thing common to all viewers that can win them over. For mainstream media everything else is gloss that may have no effect.
You really need to reverse spectacle and storytelling in this statement. How else can the box office be dominated by superhero movies that personally I ... just ... can't ... tell ... apart?
> The originals still exist and you’re free to watch those instead.
This is far from certain, unless "you" are willing to engage in piracy. It's often difficult or impossible to legitimately buy (or even rent) the original, unadulterated versions of older films.
I think this kind of AI "enhancement" is where CGI was in the 90s. It might be state of the art tech, but it's still very unrefined, and in ten or twenty years these remasters will look painfully dated and bad.
I dunno – I look at the original Jurassic Park and it still looks pretty amazing to me. Same with Terminator II. In many ways I feel like as directors got more and more capabilities with the tools they became comically overused. I don't think it's the sophistication of the tools, but the way that they're wielded that will make them look dated, or timeless.
I think it's worse than bad CGI. With bad CGI, you can use your imagination and interpret as what it would have looked like if they had unlimited time and budget. You can't do that with bad AI "enhancement", because it's an automation of that same imaginative process. You'd have to somehow mentally reverse the AI processing before imagining a better version, which is much more difficult.
People who knew what they were doing could pull off some timeless art with 90s CGI, and a decade of improvements did not stop people from ruining otherwise good movies with bad CGI either. AI is just another tool that needs to be used correctly.
Maybe I'm old, but do we really watch movies because they are sharper, have more vivid colors, etc., or because of the story?
On the other hand, I would probably pay for an AI that will 'undark' those new super-artistic movies, because some of them have worse lighting than The Blair Witch Project...
Super dark scenes might work on an OLED screen, but every projector I've seen, including those in theatres, can't display real black, and the darkest shades are always a problem. It's not a problem if you have bright parts in the same scene, since the eye will adapt. But if everything is dark it won't look good in a cinema. That affected how movies were shot and lit, and which kinds of scenes were filmed at all. I wonder if some of those newer movies are intended to look better on modern TVs than in a movie theater?
You're not old, people just have a hammer and are looking for a nail.
We'll see so much of this in the next few years, optimizing everything to the point of boring. Perfect pop songs, perfect movies, perfect novels and on it goes.
How many people watched Dune 1 and 2 because of a story that has been around for decades and already had one interesting film interpretation? How many people watched Avatar for the story?
The existence of IMAX should be a hint that there is value in very crisp visuals.
However, this stupid TikTok filter is NOT "very crisp".
I appreciate that the article (eventually) discusses that this is not generative AI—it’s machine learning-enhanced image processing, and it’s not hallucinating and inserting random background characters.
This isn’t really a new debate. This is kinda the “it looks tacky to record a feature film at a high frame rate, Peter Jackson” debate all over again. We have a cultural visual vernacular that guides how we expect certain media to look, and it just feels weird to have a movie from the last century remastered so that it looks like it was shot on the latest digital camera.
Why is Arnold's head slightly larger in the AI version of the first screenshot? The ears do not align.
Whatever the case, my wife and I went to see The Lost Boys the other day and it was grainy and blurry compared to the super-sharp previews and I just wouldn't have it any other way.
Restoration is one thing. "Improving" it by smearing statistical filter juice over it is quite another.
The issue I see with the screenshots from the article is that it changed the contrast and overall feel of the scene.
I think there’s plenty of opportunity to enhance old movies, but it requires some human touch, or a deeper understanding of the movie, to maintain its message.
The light and contrast and color are made a certain way on purpose, usually to support whatever the scene is meant to convey. You can’t just mess with those things just to add detail.
I find it really odd that they’re comparing the streaming version vs the new Blu-ray, rather than previous Blu-ray vs new Blu-ray. (Not to mention the HDR & color space issues mentioned already).
The streaming copy is compressed so heavily that it contains only a fraction of the information. (A great UHD disc can run up to 120 Mbps, compared to 10-20 Mbps on streaming if you're lucky.) So when we're trying to compare the minutiae in each frame, it makes no sense to compare against a copy that already had a lot of information removed via compression. Also, the 4K streaming copy is nearly always worse than the 1080p Blu-ray.
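To put rough numbers on that (120 Mbps for a top-tier UHD disc and 15 Mbps as a generous 4K streaming figure are just ballpark assumptions, and bitrate isn't a perfect proxy for information since codec efficiency differs):

    disc_mbps = 120      # ballpark peak bitrate of a high-end UHD Blu-ray
    stream_mbps = 15     # generous average for a 4K stream
    print(disc_mbps / stream_mbps)   # 8.0 -- roughly 8x as much data on the disc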
> “‘This is not right! You’ve removed all of this stuff! If the negative is scratched, then we should see that scratch.’ People were really hard-core about it.”
Dat me! But I shoot and process film on a homemade camera, sometimes intentionally degrading the film, processing without timing, or using Mountain Dew instead of water.
I recently watched the new Aliens remaster and actually thought it mostly looked very good. It was definitely a bit jarring at first because it doesn’t look like an 80s movie anymore: it’s too crisp. But once I got over the strangeness of it and realized some of my attachment to the original look was nostalgia I liked it. I do wish they’d kept more film grain, it’s a little too cleaned up, but it looks good nevertheless.
From what I’ve read the same team did all these Cameron remasters, and True Lies was the first one so they were still figuring some stuff out and it’s more flawed as a result. Haven’t seen it yet, though. But based on Aliens I’m interested in rewatching The Abyss.
In the age of analog you could adjust the brightness, contrast and color of the image. Those controls are still there, and there are even presets, but they're not used much.
I imagine in ten years you will be moving AI sliders to adjust brightness and contrast of the movie. Want to have everything super sharp? Art house? You can do that.
At the same time, directors may spend more time on the actual story and less on the picture.
The AI upscaled 4K UHD version of Aliens is freakin' amazing.
I know the main complaint with upscaling older movies is that it can remove most of the film grain, which can give people a waxy, mannequin-like look. But they really found a nice balance in Aliens.
I watched the non-UHD a few months ago and just recently saw the 4K version and the difference was pretty stark.
I guess a lot of it just comes down to preference but I love the way it looks.
I think the HD Blu-ray was good and I didn't have any complaints, but the 4K version looks much better to me. I also watched it on a projector with a 120-inch screen, which I think benefits a lot from the increased resolution.
I don't see anything wrong with selling an upscaled film as a new product, if it's done well. Doing a decent upscale isn't trivial, and quality is often improved significantly.
Like anything, not all upscaled re-releases are of worthwhile quality.
Feels like the AI upscaling was either done shitty or the screenshots they used are really bad examples; there's no reason AI upscaling should make all the images darker lol
It's quite strange, because the article text talks about how the "colors are bright and vivid, while blacks are deep and inky" and the problem is that the surface details look off. But then on the screenshots all you can see is a difference in the color grading, and no details of any kind.
Like, the problem with the closeup of Jamie Lee Curtis isn't the skin texture like suggested in the subtitle. It's that she is blue.
It completely ruined the grading and killed the skin tones. There's many other things wrong, but making the actors look like gray clay isn't helping at all.
Didn't you notice that most movies and series in the last 5 years have been dark to the point of not even being able to tell what eye color people have?
It's not enough to ruin all the newly produced stuff; they also need to ruin all the old stuff now...
At some point this tech will become actually good and usable.
But at this stage, nope: almost there, but not quite. They should wait at least a few more years.
"Everyone has 4K" is a dumb argument. Most have 1080p. The sensible upgrade path for PCs is 2K. Legal streaming services only support a few devices, so a 4K device may be stuck at 720p. HiDPI is also, somehow, still an unsolved problem in this day and age? Same thing with HDR, especially since many self-proclaimed HDR TVs aren't even good enough to achieve what any layman would expect from an HDR TV. So HDR certifications became a thing, amazing. But 1080p SDR remains the king for now.
AFAIK the problem with HiDPI is that fractional scaling sucks, so you have to jump from 1x to 2x to get good quality. For 1080p, that means 4K. So you end up with the same screen size as you have now, except with four times the pixels, and a higher electricity bill because your hardware needs to render all those extra pixels. And all of that for some marginal semblance of an improvement in image quality when you look closer at your monitor than you're recommended to if you want to keep your eyesight.
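A quick sanity check on the "four times the pixels" part, just arithmetic (the resolutions are the standard 1080p and 4K UHD figures; nothing here is specific to any particular OS's scaling implementation):

    # 1080p panel at 1x versus the same physical size at 2x integer scaling (4K UHD)
    w1, h1 = 1920, 1080
    w2, h2 = 2 * w1, 2 * h1              # 3840 x 2160
    print((w2 * h2) / (w1 * h1))         # 4.0 -- four times the pixels to render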
I watch AI-enhanced movies and/or shows every day when I use Anime4K, a real-time ML upscaler for anime: https://github.com/bloc97/Anime4K
If you give it shit-tier blurry DVD messes, it turns them into nicer blurry messes.
If you give it the usual bitrate-starved streaming-quality show, it looks almost as nice as a Blu-ray.
If you give it a Blu-ray... it basically comes out looking exactly the same. I'll usually still add its "darken lines" filter since I'm a fan of strong linework.
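For anyone curious how that looks in practice, here's a rough sketch of the moving parts: mpv is launched with a chain of Anime4K GLSL shaders. The shader file names below are taken from the project's install instructions and vary between releases, and "episode.mkv" is obviously a placeholder:

    import subprocess

    # One of the Anime4K shader chains from the project's README; the files live
    # in mpv's ~~/shaders directory and are applied in order during playback.
    shaders = ":".join([
        "~~/shaders/Anime4K_Clamp_Highlights.glsl",
        "~~/shaders/Anime4K_Restore_CNN_VL.glsl",
        "~~/shaders/Anime4K_Upscale_CNN_x2_VL.glsl",
        "~~/shaders/Anime4K_AutoDownscalePre_x2.glsl",
        "~~/shaders/Anime4K_AutoDownscalePre_x4.glsl",
        "~~/shaders/Anime4K_Upscale_CNN_x2_M.glsl",
    ])
    subprocess.run(["mpv", f"--glsl-shaders={shaders}", "episode.mkv"])

In practice you'd put the same glsl-shaders line in mpv.conf or bind it to a key in input.conf rather than wrapping it in Python; this is just to show what's going on.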
When I was a kid I had a classic PlayStation (PS1). It had multiple kinds of video output, one of which was S-Video, which was a lot sharper than composite. The problem was that sharper wasn't better: it gave pixel games hard-edged pixels, the blur and scanlines that blended everything together were gone, and it made the motion of 3D things look clunky and disconnected. In almost every way possible the sharper output made it look worse.
So yeah, making a movie sharper could very likely make it worse.
And even on the PC side, every game drawn for a CRT monitor looks worse on an LCD, because you don't get the free horizontal antialiasing or the tiny black spaces between horizontal lines...
And then some people do masterpiece remasters like that Silent Hill one where they removed the fog because modern systems can render further. It also ruined the atmosphere.
Many comments are missing the point here (although the article doesn't properly explain it either); it's not about resolution, but about fixing imperfections in the filming:
> The recent Cameron restorations were based on new 4K scans of the original negative, none of which needed extensive repair of that kind. [...] The A.I. can artificially refocus an out-of-focus image, as well as make other creative tweaks. “You don’t want to crank the knob all the way because then it’ll look like garbage,” Burdick said. “But if we can make it look a little better, we might as well.”
The only movies which would require upscaling to 4K are those released between roughly the mid-2000s and the mid-2010s, the advent of native digital cinema, but filmed in 2K. Everything before was shot on 35mm film, which can be scanned to 4K with information to spare; everything after is filmed in native digital 4K or more.
Moreover, upscaling which deals only with resolution has absolutely no need of AI. Any TV will decently upscale a non-4K movie in _real time_, and more sophisticated techniques can give basically indistinguishable results. 2017's _Alien: Covenant_ was deliberately finished in 2K but released in 4K through upscaling, and the image looks just great.
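To illustrate how little machinery a resolution-only upscale needs, here is a minimal sketch of a classical (non-AI) Lanczos resample of a single frame using Pillow; "frame_2k.png" is a placeholder still, and real pipelines obviously operate on the video stream rather than exported frames:

    from PIL import Image

    # Classical Lanczos resampling from ~2K to UHD -- the same family of filters
    # that TVs and hardware scalers apply in real time, no learned model involved.
    frame = Image.open("frame_2k.png")                 # placeholder input frame
    uhd = frame.resize((3840, 2160), Image.LANCZOS)    # plain resampling upscale
    uhd.save("frame_uhd.png")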
> The only movies which would require upscaling to 4K are those released between roughly the mid-2000s and the mid-2010s, the advent of native digital cinema, but filmed in 2K. Everything before was shot on 35mm film, which can be scanned to 4K with information to spare; everything after is filmed in native digital 4K or more.
Good to call this out, I think this is something that's really lost on people.
It really blows my mind that George Lucas, for all of his apparent obsessive concern about his films supposedly looking dated, chose to shoot Star Wars Episode II in 1080p, in contrast to Episode I on 35mm film.
I guess 1080p was the big shiny cutting-edge thing back then. 35mm can supposedly be scanned beyond 8K, so you could theoretically consider 4K filming not good enough either.
I think this would be amazing for jank 90s CGI. The movie "Event Horizon" terrified me as a child. When I went to rewatch it as an adult, I could not get past the jank CGI; it pulled me out of my suspension of disbelief.
Park Road Post used it to clean up the sound in the Beatles' Get Back series. I don't know the details, and I haven't seen the original or Get Back, but apparently the sound in the original was terrible. They used 'AI' to do things like pick out George's (or whoever it was) voice from the rest so they could EQ it separately.
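I have no idea what tooling Park Road Post actually used (reportedly an in-house system), but for a rough sense of the technique, here's a sketch using the open-source Demucs separator as a stand-in: it splits a mix into a vocal stem and everything else, which you could then EQ independently. "bootleg_mix.wav" is a placeholder, and the output layout depends on the Demucs model in use:

    import subprocess

    # Two-stem separation: writes vocals.wav and no_vocals.wav under ./separated/<model>/
    subprocess.run(["demucs", "--two-stems=vocals", "bootleg_mix.wav"], check=True)

Demucs is trained on music rather than noisy documentary dialogue, so this only illustrates the stem-separation idea; it's not a claim about their actual pipeline.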
Authentic accents seem to be more important than clarity, for some reason. Clarity is somewhere down there at the bottom of the barrel on the priority list, right next to dead cockroaches. I think it's far more challenging to convey a realistic accent while remaining perfectly understandable to the audience. Like, what's the point of voice acting in narrative media if I can't understand a word you're saying? Anyways, try to watch a dub some time if you're proficient in a foreign language. German dubs tend to prioritize clarity, in my experience.
I wonder what percentage of home movie watchers have a centre speaker that they can turn up the volume on? Or is the thinking more that if you don't have a centre speaker you can turn up the volume on, you're not really into movies and thus not part of our target audience?
And re-packaged and re-sold, and people will re-buy and re-watch/re-consume.
Imagine "Lord of the Rings - the full trilogy in three 8-hour movies, now with new AI-generated content that fills the gaps left from the 'extended' releases. I know people that will definitely renew their HBO subscriptions in order to watch them (if/when they re-release one movie per year)
Agreed. Though today it's just sharpening, tomorrow you'll ask for the plot to be changed or for everyone to be wearing spinny hats. At least there's a shared experience now. If people find this outrageous, just wait until tomorrow. Better to accept the infinite progress unfolding before us than to spend another moment angry or enraged. All I ask for is choice.
> Better to accept the infinite progress unfolding before us than to spend another moment angry or enraged.
Some would say this is infinitely regressive. From the Twitter picture it looks like justified criticism; from the examples in the article, it's hard to tell whether the criticism is merited or if it's reflexive anti-AI-ism.
On second thought, that could be hilarious. It would make all the smokers just look like very thoughtful people constantly bringing their fingers to their lips.