Kim Cattrall’s Infamous Scat Singing Has Been Restored to 4K

Most of us have quietly set down our quarantine hobbies, but internet creator Teigan Reamsbottom is still getting up at 4 a.m. to curate classic camp pop culture video clips. He also strives to improve the quality of those videos, using a dedicated gaming PC he has stuffed with RAM and a suite of machine-learning software designed to preserve the original features of each clip.

In a world where finished movies can be shelved as tax write-offs and physical media grows scarce, that work amounts to a kind of preservation, uncovering pop culture artifacts, even if it means no ’80s or ’90s celebrity is safe.

Exhibit A: from the days of “Sex and the City,” an interview in which Kim Cattrall scat-sings while her then-husband plays upright bass. The clip has long been an internet fixture (it was once the subject of an exhibit at Lower East Side gallery THNK1994). It’s a perfect example of Reamsbottom’s eye for the delightfully unhinged and sometimes cringe-worthy camp of a pre-TikTok world. And in some corners of the internet, its restoration was cause for celebration. Watch the restored version versus the blurry clip we’ve been dealing with for years below:

However, upscaling video — increasing its resolution so a clip holds up on our high-resolution screens — is a challenging and messy process. Capturing and converting the data is labor intensive, and every method leaves artifacts behind that affect the look of the new footage. As machine learning models take human labor and judgment out of the equation, that danger increases.

Professional 4K transfers can appear airbrushed or plastic, an unnecessarily yassified version of something that looked perfect in its original format. (If you’ve made it this far into this article, here’s your PSA to make sure you’ve turned off motion smoothing on your TV.) As Chris Person noted for The Aftermath on the pitfalls of AI video upscaling, “Why transfer tape correctly when we can have a bad guess instead?”

That, in essence, is what AI video software does. It guesses where a person’s face begins and ends, how their hands move, how light and moisture play on their skin. Each individual guess is bad, but it makes so many of them that the result (hopefully) ends up close enough to mostly right. For Reamsbottom, it’s the best solution for clips with no higher-quality source available, and for the kind of short camp moments that don’t necessarily attract the eye of professional restorers.
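That guessing is easiest to see in the classical, pre-machine-learning version of upscaling: bilinear interpolation, where every new pixel is just a weighted blend of the four nearest original pixels. Below is a minimal NumPy sketch of that idea. This is an illustration, not Reamsbottom’s actual Topaz Labs pipeline (which is proprietary); ML upscalers replace this fixed formula with a learned one, but they still invent pixels that were never captured.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by guessing (interpolating) new pixels."""
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Map each output pixel back to fractional coordinates in the source image
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights, shape (new_h, 1)
    wx = (xs - x0)[None, :]  # horizontal blend weights, shape (1, new_w)
    # Each output pixel is a weighted average of its four nearest neighbors
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# A 2x2 "image" blown up to 8x8: 60 of the 64 output pixels are invented
tiny = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
big = bilinear_upscale(tiny, 4)
```

An 8×8 result from a 2×2 source contains 60 pixels that were never captured; a model like the ones Reamsbottom uses makes those guesses with a neural network instead of this formula, which is where both the impressive detail and the scary teeth come from.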

He told IndieWire that it’s a constant process of trial and error, balancing the different AI models available and adjusting sharpness and shadows to steer around the things upscaling programs are most likely to botch. Like teeth.

“Because it’s zooming in on the detail of the person, the AI model will zoom in on the detail of the teeth, and you’ll be able to see dark lines between the individual teeth,” Reamsbottom said. “So, it can be scary. Sometimes it suddenly looks like they have very dark teeth because it outlines each tooth.”

Reamsbottom has to play with the amount of detail, always trying to avoid turning, in his words, the subject of the video into a Pixar character. He must also account for challenges in the images themselves. “I recently did one of Phyllis Diller, which was really hard to upscale because her dress was sequined,” Reamsbottom said. “Her face might look good, but suddenly, the sequins didn’t look good. You really have to play around a lot.”

“Playing around” does not begin to convey the time this requires. Reamsbottom said it can take more than a day to do a single pass on a 30-minute video, even with a dedicated upscaling computer. And machine learning can’t conjure detail out of heavy pixelation.

“You have to have something of at least medium quality to make it really nice,” he said. “Even then it can be iffy, but some of the things I wanted to work with are very low-quality, very pixelated? It is tough. Faces are hard, teeth are hard, or, you know, a nose could go missing.”

Although new Samsung phones (and similar features coming to the iPhone) offer generative “AI” editing that seems to work much faster, representing reality well through deep learning models remains the province of people who work with video professionally, and of those, like Reamsbottom, able to devote considerable time and money to the effort.

The time demand may still be high, but the tools to do this are not expensive; Reamsbottom uses a $300 software suite from Topaz Labs. And when done right, the results can be extremely satisfying.

Reamsbottom is working with an archive of Connie Francis tapes, given to him by a family of fans who recorded a huge amount of footage of the pop singer. “There’s performance footage and personal footage of Connie, and when you see it upscaled, it almost makes you emotional, because it’s like you’re experiencing something for the first time,” Reamsbottom said. “It’s magical when you get the end result and it looks very clear. It’s like you’re there again.”
