GCReacts – NVIDIA DLSS 5

*sighs*

You know, I would have loved if my return from my hiatus was about something like…Pokopia, or Resident Evil 9 (both of which slap, by the way, and you should play both of them). Or even that bonkers idea I had a couple of months ago where I was going to write about season 5 of Stranger Things, including the season finale and the mass hysteria online, not only regarding Conformity Gate but also things that happened in the season itself, largely perpetrated by Bylers (that was a sentence).

But no.

No.

It’s generative AI (genAI) that gets me crawling back. Thanks, NVIDIA.

I’m just going to say this once as a warning: if you’re here thinking I’m going to sing the praises of genAI, I am the last person on the planet who will. I hate it, to put it very plainly. If you can’t handle that, consider this your one and only cue to close this window.

If you’re still here though, let’s talk about what NVIDIA announced today.

DLSS 5

Today, NVIDIA announced the next generation of DLSS, DLSS 5. Now, I could just paraphrase the announcement, but I figure it’s better to let them speak for themselves.

“DLSS 5 introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials. Bridging the divide between rendering and reality, DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.” (Source: NVIDIA)

NVIDIA founder and CEO Jensen Huang was quoted as saying:

“Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again. DLSS 5 is the GPT moment for graphics — blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression.” (Source: NVIDIA)

So what does DLSS 5 do exactly?

“DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame. DLSS 5 runs in real time at up to 4K resolution for smooth, interactive gameplay.

The AI model is trained end to end to understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast — all by analyzing a single frame. DLSS 5 then uses its deep understanding to generate visually precise images that handle complex elements such as subsurface scattering on skin, the delicate sheen of fabric and light-material interactions on hair, all while retaining the structure and semantics of the original scene.

DLSS 5 provides game developers with detailed controls for intensity, color grading and masking, so artists can determine where and how enhancements are applied to maintain each game’s unique aesthetic. Integration is seamless, using the same NVIDIA Streamline framework used by existing DLSS and NVIDIA Reflex technologies.” (Source: NVIDIA)
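If it helps to see the shape of what they’re describing, here’s the pipeline from that quote boiled down to a toy sketch: each frame’s colour buffer plus motion vectors go through an AI pass, and the result gets blended back under the “intensity” and “masking” controls artists are given. To be extremely clear, every function and name below is made up for illustration; none of this is NVIDIA’s actual API.

```python
# Purely illustrative sketch of the per-frame flow NVIDIA describes:
# color + motion vectors -> neural pass -> artist-controlled blend.
# All names are invented for this sketch; this is NOT NVIDIA's API.

def fake_relight_model(color, motion_vectors):
    """Stand-in for the neural pass; a real model would re-light the pixels."""
    return [min(1.0, c * 1.2) for c in color]  # pretend "photoreal lighting"

def apply_enhancement(color, relit, intensity=1.0, mask=None):
    """Blend original and 'enhanced' pixels.

    intensity=0.0 leaves the frame untouched; mask gives per-pixel weights,
    which is roughly what the quoted 'masking' control amounts to.
    """
    out = []
    for i, (orig, new) in enumerate(zip(color, relit)):
        w = intensity * (mask[i] if mask is not None else 1.0)
        out.append((1.0 - w) * orig + w * new)
    return out

frame = [0.2, 0.5, 0.8]  # toy 3-pixel "color buffer"
relit = fake_relight_model(frame, motion_vectors=None)
half = apply_enhancement(frame, relit, intensity=0.5)
```

The only point of the `intensity`/`mask` blend is to show why “artists can determine where and how enhancements are applied”: at intensity 0, or wherever the mask is 0, the original pixels pass through untouched.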

Still with me? Good. If words aren’t exactly your speed, well, you’re in luck, because we’ve got video of what this looks like.

This one is from NVIDIA directly:

And this one is from Digital Foundry. Their video is far more in depth:

Now I’m going to drop one more tweet before I jump into the real meat and potatoes, as they say, of this piece. It’s a tweet from Bryan Catanzaro, the VP of Applied Deep Learning Research at NVIDIA, because I think it’s a good jumping off point.

So What?

GenAI has been a thorn in my side since it was first introduced. At first, it was because I’m an artist, and I was seeing people’s work used without their permission or even their knowledge. Then it was because of the environmental aspects of it, like how much water it consumes and how it pollutes the neighbourhoods forced to live next door to its data centres. Then it was because of how the industry employs people and pays them an unliveable wage, with articles comparing it to modern-day slavery. Then it was because of the threats to personal security. Then it was the threat to children and women, as both were targets of genAI deepfake sexual abuse material. Then it was because of genAI-induced psychosis. Then it was because of the cognitive decline that comes with constant reliance and use. Then it was because of the rising costs of living.

Then.

Then.

Then.

The list goes on and on and on, and I still could list more. I could sit here all day going on and on about the reasons I hate it. And yet it is constantly being shoved down our throats, popping up in places no one wants it.

And now it’s about to pop its ugly head again in a space no one wants.

I don’t care if my games are photorealistic, not at the expense of relying on genAI, which in turn completely changes the artistic vision the developers had when they made the game. NVIDIA can say all they want that it’s just lighting changes and that nothing is happening to the character models.

I’m here to tell you that’s bullshit.

Let’s start with Resident Evil 9 Requiem, one of the games this tech was demoed in.

We can start with Leon because the changes are more minor than they are with Grace. You can notice a few more lines in his brow area, and the lighting changes are drastic, with DLSS 5 capturing realistic lighting better…in theory. But I don’t know if it captures it accurately here. Leon on the left looks well lit for the space he’s currently in. In the shot on the right, it looks like he has a spotlight on him from the side, giving him this ethereal glow. I’m not sure how well this works for a horror game… specifically one that is rather darkly lit at times, so much so that a horse statue scared the living daylights out of so many people.

Like I said though, Leon is far less egregious than the changes to Grace.

This is especially bad to me because it feels so rooted in sexism. Grace is not wearing make-up in the original shot, but suddenly she is in the DLSS 5 ones. Upon closer inspection, I think it’s translating shadow into make-up. Her lips are fuller, her cheeks rounder and redder, her skin tone brighter. Not only that, but her eyes feel more doe-like in comparison to the original, too. She comes off like some glam model versus the FBI analyst she’s supposed to be.

She no longer looks at all like the character the team at Capcom envisioned, let alone the actual model her face is based on.

It feels like the only directive DLSS 5 had (despite me knowing this is not how it works) was “just make her look more pretty”. And I guess it accomplishes that, if that’s the goal. But at what expense? Well, now you’re making the characters look like one of those “I put this female character through genAI to make her look better” posts. I’m not alone in thinking so either.

I really don’t think you want your tech being compared or reduced to a genAI face filter, but hey, I’m not NVIDIA, so what do I know.

I mention it’s rooted in sexism because none of the things I pointed out with Grace show up in Leon’s shots. Leon just looks more realistic; it doesn’t change how he looks too drastically. But for Grace, and many of the other female characters this technology has touched, it does add features that the original model did not have.

Here is a character from Starfield where you can clearly see that the same thing that happened to Grace happened to her too. She is not wearing any visible make-up in the original shot, but now she has lip gloss and bottom eyeliner. It also gave her grey hairs, when the original looks like it could have just been a highlight? I don’t know about this one, because I got really bored in Starfield and fell asleep during a gun fight, so I didn’t finish it. Nevertheless, I do not think that’s supposed to be grey hair. It would have shown up in the original.

I didn’t mention this with Grace and Leon, not because their shots don’t have it, but because this one is a far better example of it. GenAI has such a plasticky feel to it, and you can really notice it in this shot. The sleeves on the puffer jacket, the vest, even her skin. It all just feels so fake. And sure, yeah, you could come back at me with “well, the original looks fake too.” I would agree with you, but I can excuse the original because at least it was never setting out to be 100% photorealistic. There are still some creative liberties being taken there, art direction even, if you will.

But with DLSS 5 turned on, photorealism is the supposed point. And yet I simply feel like I’m looking at a mockery of a person, of something life-like. Is our obsession with photorealism in games this important? Is this really what we should be trying to achieve? And again, at what cost? Is something looking realistic so important to you that you’d toss away artistic vision? Individuality? Creativity?

Are we going to make every character beautiful, perfect, flawless, plastic now? Is that what we truly want in our games? I know I sure as shit don’t. The characters in the games we play should be as unique and varied as we are, not this.

Never this.

And before you come at me saying that it’s only doing things to the lighting and that’s it: bullshit.

I want you to go back to that video of Digital Foundry and watch the Oblivion segment. Look at what it does to the weather, and then watch what it does to one of the characters when they blink.

Don’t believe me still? Great, because here’s the evidence:

I went back and watched the video specifically for this clip. It’s very clear that something is wrong because this is not how we blink. Our eyelids don’t work like that at all. It honestly looks like he has eyeballs on his eyelids? It’s very strange. I’m not the only one who noticed this though.

But yeah, photorealism, am I right?

I’m not the only person who feels this way, either. The internet erupted with anger, first at NVIDIA and then at Digital Foundry, for not pushing back on the tech and essentially just glazing it in an article they released. Not long after, a flood of memes followed. I’ll include only a few, because I feel like I’ve already included enough.

This last one feels pretty accurate because that’s exactly the vibe DLSS 5 Grace now gives off, a bland character in some fake ad for a mobile game.

I’m going to end this on this note: we need to keep pushing back against the use of genAI. It is a cancer on society, and it is seeping into every facet of our lives while eroding the human element of one of the very core aspects of humanity: art.

And we cannot let it. Otherwise, everything will start looking exactly the same: cookie-cutter and plastic, with no individuality at all.

I know I don’t want my video game characters looking like that at all.

Always and forever,


[An angry emoji with the angry anime symbol]

GCReaction

I usually would type something here, but I feel like this is more apt for once:

