I’m going to reply to both of you in one post, since you’re largely making the same point. I’ll address the first reply in full, then respond separately to the parts of the second that aren’t already covered by what I’ve said earlier or here.
Let’s also keep in mind the original quote from Tim that Theorist9 was using to make his point, as it appeared in the article:
“Millet also is unconvinced that the game dev universe has adapted to the unique architecture of the M-series chips quite yet, especially the unified memory pool.”
I think it’s weird to focus on that quote without this part. Anyway.
We all did note that we weren’t sure whether he meant the advantage of extra VRAM for users playing games (e.g. my addendum in the part you quoted, “at least for users”). It’s indeed possible that he meant only the development process, where the extra memory could be useful even on a personal development machine, especially for smaller or solo projects.
Tim is talking about development. Apple has repeatedly talked about that, so I can’t imagine it’s any different now. But it also offers benefits to users of even the lowest-cost Macs, so it works both ways, even if he didn’t specifically spell that out in this particular interview.
Even between minimum and recommended specs, there’s only a reasonable range that can actually be accommodated. For instance, 96 GB of RAM is unlikely to be a viable recommended spec if your install base consists largely of 8 GB users. The minimum is simply too far from the recommended: performance on those minimum machines would be unpleasant if the game’s performance depends on having a large pool of memory.
96 GB is cool, but it’s the high end of an extreme: possible, yes, but not what I was referring to. Additionally, a game doesn’t need to be designed around a large amount of memory for users to benefit. It can simply load more of the game or scene, at higher resolution, etc., into unified memory, and users benefit.
If a game developer wants to go beyond that, it’s also now possible. As I stated, that’s the difference between required vs. recommended. Developers can accommodate minimum specs for basic gameplay whilst unlocking cooler features for higher-end machines, so I’m still unsure what the point here is. Game developers can, and often do, both.
Finally, if your minimum is going to be 24 GB or higher, then your user base is too small to sell to. Assets are incredibly expensive; if you’re developing a game where 96 GB of VRAM can actually be taken advantage of, that’s almost certainly a AAA game, in fact one the likes of which we haven’t seen. Those are incredibly expensive to make and need a huge user pool to sell to. Selling to a fraction of a fraction of a market is not a viable business strategy for AAA gaming. Even though the Mac user base is growing, it’s still small, and not every Mac user is a prospective gamer, or interested in your game in particular. And if you do as you suggest in the latter half, you’re adding further restrictions by eliminating the most popular Mac models by a substantial margin. Suddenly that’s not a lot of people to sell to. Even if you port the game to Windows with a 4090 as the minimum spec, that’s a tiny fraction of people.
The thing here, again, is minimum vs. recommended. You can enable gameplay for a large number of people while unlocking features for those with better specs. Apple’s architecture inherently improves performance on some workloads by letting developers load more into unified memory; by default, the system just works that way. Sometimes a game’s settings can also take explicit advantage of however much memory the system has, letting you specify how much RAM it should use.
But typically the game will just use unified memory, with its benefits, without additional work, so Mac users benefit in a way they didn’t on the previous architecture. That’s an upgrade for users, and scaling beyond the minimum spec can be as simple as a launch-time check (sketched below).
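To make that concrete, here is a minimal Swift sketch of that kind of launch-time check. `hasUnifiedMemory` and `recommendedMaxWorkingSetSize` are real `MTLDevice` properties; the tier names and GB thresholds are purely hypothetical, not anything Apple recommends:

```swift
import Metal

// Hypothetical quality tiers, for illustration only.
enum AssetQuality { case minimum, recommended, ultra }

func chooseQuality(for device: MTLDevice) -> AssetQuality {
    // On Apple silicon the GPU's working set is carved out of the same
    // pool as system RAM, so one query covers "graphics memory".
    guard device.hasUnifiedMemory else { return .minimum }

    let workingSetGB = Double(device.recommendedMaxWorkingSetSize) / 1_073_741_824
    switch workingSetGB {
    case ..<12: return .minimum      // e.g. 8 GB base machines
    case ..<32: return .recommended  // mid-range configurations
    default:    return .ultra        // load higher-res assets and scenes
    }
}

if let device = MTLCreateSystemDefaultDevice() {
    print("Quality tier:", chooseQuality(for: device))
}
```

The same binary serves an 8 GB minimum-spec machine and a 96 GB one; the minimum stays playable while the high end gets the extra features.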
The main point:
You need to keep in mind that the most a MacBook Pro ever had before was 8 GB of graphics memory. Tim explains this is no longer a weakness but now a strength of the Mac, where you can literally get up to 128 GB of graphics memory.
Let’s keep in mind this is all just the beginning. He’s trying to explain why the Mac is a good platform to develop on and for. It already was, and now it’s better, because Apple addressed a weakness imposed by AMD graphics chips.
Again, the previous 8 GB maximum is now a 128 GB maximum on a MacBook. You keep asking how many Macs could realistically use X amount of memory even if it could be used. The 8 GB I just referenced required the AMD 5600M on the top-end 16-inch MacBook Pro, an $800 upgrade, for a total of about $3,200. Entry-level MacBooks couldn’t even get more than 1.5 GB of graphics memory, because of Intel’s really crappy graphics chips.
So you’ve gone from the early 2020 MacBook Air, which couldn’t even launch certain games, to the late 2020 M1 Air, which plays them on Apple silicon. The entry-level MacBook Air went from the most basic, bare-minimum graphics power to matching the dedicated 5300M available in the 2019 16-inch MBP, and from zero dedicated graphics memory to 8 GB, which also beats the 5300M’s entry-level 4 GB. With a $999 2020 MacBook Air, you beat the 2019 16-inch MBP on CPU, matched its $2,399 configuration’s entry dedicated graphics, and beat its $3,200 configuration’s dedicated memory amount.
To even get 24 GB on NVIDIA’s top-of-the-line GPU, you need to spend over $1,600, and that doesn’t include, you know, the rest of the goddamn computer lol. The Mac mini offers 24 GB of graphics memory at $999, and that includes everything.
So the point you’re missing is that unified memory now lets even the most basic, “lowest common denominator” specs compete with the top-of-the-line configurations of the previous MBP.
That’s the baseline going forward, and Apple has obviously not stood still. On top of that, Apple is offering developers unique architectural benefits for future development.
That’s the point Tim and Apple at large are making.
So when I read stuff like this:
In conclusion, it’s hard to think of how end users benefit in gaming from huge VRAM pools. We’re not big enough to influence development for the rest of the market, and engine scalability only goes so far; even among Mac users, those with such large memory pools are a minority.
I just get plain confused. Unified memory, in tandem with Apple’s chip designs, catapulted every base Mac from being good at basic stuff to being good at advanced stuff. The transition to Apple silicon works like this: top-of-the-line capability comes down to the lower end. The MacBook Air outperforms the previous MBP, the MBP outperforms the previous Mac Pro, and the Mac Pro offers performance unheard of on a Mac, pushing the boundaries of personal computing.
Unless I’m building an asset that is being prepped “for the future”, making a giant asset that doesn’t fit on target machines kinda misses the point. It would need to be sized for where it gets used, not simply for where I develop it.
Huh?
The ability to work with large assets straight out of memory used to be a pipe dream for creators of anything, let alone games. I have no idea what you’re trying to argue here; it makes zero sense to me. Developers have already talked about how this benefits them. And even if you ignore developers, consider a theoretical example: working on a CGI model for a sci-fi TV show entirely in memory. You can create and edit details without running out of memory the way you would on a traditional GPU that tops out at 24 GB (see the sketch below).
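For the skeptical, here is a minimal Swift sketch of what that looks like with Metal. `makeBuffer(length:options:)` and `.storageModeShared` are the real API; the 40 GB asset size is hypothetical and assumes a high-memory Apple silicon Mac:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
precondition(device.hasUnifiedMemory, "This sketch assumes Apple silicon")

// Hypothetical 40 GB asset: far beyond any discrete card's 24 GB of
// VRAM, but it fits in a 64 GB+ unified memory pool.
let assetLength = 40 * 1024 * 1024 * 1024
guard let assetBuffer = device.makeBuffer(length: assetLength,
                                          options: .storageModeShared) else {
    fatalError("Asset exceeds this machine's GPU working set")
}

// CPU-side tools edit the asset in place; the GPU reads the very same
// bytes, with no PCIe upload and no second copy in dedicated VRAM.
let floats = assetBuffer.contents()
    .bindMemory(to: Float.self, capacity: assetLength / MemoryLayout<Float>.stride)
floats[0] = 1.0   // tweak a vertex, and the GPU sees it immediately
```

That zero-copy round trip between editing tools and the GPU is exactly the workflow a 24 GB discrete card can’t offer once the asset outgrows VRAM.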
The most basic being that the MacBook Air has 8 GB of memory standard and offers 5300M-level performance standard? And that’s with the 7-core-GPU M1.
In the case of 1, if my target hardware profile is 8-16GB of system memory, then I’m using some amount of that for the CPU side of the game, plus overhead for the OS. So it’s not clear to me that an 8GB low end Nvidia card is utterly trounced in that particular setup. I would very likely be targeting similar asset complexity in each case.
I’m sorry, but isn’t the point of this thread “What does Apple need to do to catch up to NVIDIA?” MacBooks have caught up to NVIDIA in a lot of areas, with some still to improve. Unified memory now lets MacBooks have more than 8 GB of graphics memory. Base Macs now match the memory offered by most dedicated GPUs, with the option of going far beyond the 24 GB offered by the 4090.
I’m really confused here. It feels like I’m explaining Apple silicon to people who simultaneously know about it but have never heard of it. The paradigm has changed here for the Mac and the PC in general: it’s now Windows/Intel/AMD/NVIDIA who are behind, technically and philosophically. And considering I’m reading stuff like this:
www.anandtech.com
“Apple's release of Arm-based system-on-chips for its desktops and laptops three years ago demonstrated that such processors could offer competitive performance and power consumption. On the Windows-based PC front, only Qualcomm has offered Arm-based SoCs for notebooks in recent years, but it looks like it will soon be joined by AMD and NVIDIA, two notable players in the PC space…”
I’m going to presume it’s not just my opinion.
And it’s not like developers have had a problem filling the unified memory of the PS5, making me wonder what you think Apple’s doing differently to Sony?
What?
It seems like you are missing the point I was making here. I was pointing out that the low end cards of the last 3 generations include 8GB of VRAM. So it’s not like an asset that fits on an 8GB Air won’t fit on the most popular discrete GPUs available today. And because that VRAM is dedicated, the GPU has a bit more headroom than an 8GB Mac.
And it seems like you’re missing the point Apple is making. Even the MacBook Air now has 8 GB to work with; it didn’t before. The 13-inch MBP only offered 1.5 GB. The most graphics memory ever offered on a MacBook was 8 GB, and that required a $3,200 machine.
Now you get that level of graphics memory standard on every Mac, plus the ability to work with up to 128 GB on a MacBook, for both development and users’ use.
NVIDIA offers stuff Apple silicon still doesn’t.
Apple silicon offers stuff NVIDIA likely never will.
Apple is trying to explain its platform to developers of every kind, like any business would. I’m genuinely confused about why you guys are confused by this situation.
I’ve enjoyed this thread, and I don’t comment generally. I didn’t rush in and say “zomg Apple zilikon is betters!!! Fuk Nvidia.”
I enjoyed reading what people wrote, and I’ve been reading since the thread started. I only chimed in to explain unified memory’s benefits, and I only did that because I was puzzled by why you guys are confused.
If you guys are trying to claim that unified memory doesn’t offer advantages over the paradigm NVIDIA currently uses, then I’ll just end it here on my part. If NVIDIA offered this kind of thing, I don’t think the replies would be the same.