In fairness, only a handful of platforms have ever really used fat binaries. Apple's the only one I'm aware of that has used them consistently (68K/PPC, PPC/Intel, 32/64-bit, Intel/ARM).
Someone tried to introduce FatELF and had some fun with that, but the end result is that it never took off and multi-arch Linux installs look surprisingly similar to Windows multi-arch installs. Only with better folder names.
When talking about CPU cores on modern dies, doubling the die area of a core isn't going to double the die area of the chip. With all the integration going on at Intel, AMD, Apple, etc., there's a lot more on the die these days than just CPU cores. The CPU cores and cache on a base M3 are what, 1/6th of the die (rough estimate from annotated die shots)? Doubling that adds around 16% to the total die area. But we're all kind of exaggerating the point here. The upshot is that a larger die will increase costs, but that's all part of the balancing act.
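A quick back-of-the-envelope version of that arithmetic, as a Python sketch (the 1/6 figure is the same rough die-shot estimate as above, not a measured value):

```python
# Back-of-the-envelope die-area scaling: double only the CPU cores + cache,
# which are an estimated ~1/6 of a base M3 die (rough figure from annotated
# die shots, not an exact measurement).
cpu_fraction = 1 / 6      # CPU cores + cache share of the total die
growth_factor = 2.0       # hypothetically double that block's area

new_die_area = (1 - cpu_fraction) + cpu_fraction * growth_factor
print(f"Total die grows by ~{(new_die_area - 1) * 100:.1f}%")  # ~16.7%
```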
Yeah, I was just exaggerating to make the point. I agree that it's the total SoC size that matters, not just the CPU, when it comes to cost, and I should add that supposedly the Qualcomm chips are indeed cheaper. However, the other part of the balancing act is that CPUs are expected to take on a number of different workloads with different levels of multithreading, and which workloads they should focus on depends heavily on the device class. This stands in stark contrast with GPUs, where cost and power may limit core counts, but for the workloads themselves more cores will simply always be better. For the CPU, targeting one set of workloads, like heavily multithreaded ones, can be counterproductive for single-threaded and lightly multithreaded tasks. As John Poole said in
his posts about why he changed GB6's multithreaded scoring system: by relying solely on infinitely scalable multithreaded benchmarks, users were getting suckered into buying systems that not only had power they didn't need but, even worse, were actually slower at some of their most common tasks than "lower-tier" systems they could've bought instead. And that's on HEDT systems, never mind processors destined for "thin and lights"! Even if the negative consequences don't come into play, at the very least users of such devices will either not benefit from high multithreaded capabilities or hit heavy throttling and poor battery life when they do try to make use of them. Thus, there is a penalty to be paid for not developing a chip targeted at the right device. This isn't to say that a chip shouldn't be the best it can be, but what's best can be heavily context dependent. This is why I was no more impressed with Qualcomm/MS's marketing claims about multithreaded workloads than I am with ASUS's for the upcoming AMD chips (AMD did not make those claims themselves).
That's more interesting, but they are measuring system draw, which is itself fraught with issues because you are measuring more than just the CPU cores. Is this result because of boost clocks? Because of differences in the graphics feeding the external display? Do Ryzen and Intel have a high base load even when the cores are asleep? That last one is something I have seen before: my i7 Mac mini can get under 10W when idle, yet I had a Ryzen 5600 desktop that drew 30W just sitting at the desktop doing nothing. Because these figures are system measurements, it's harder to make claims about the cores themselves. It's certainly a statement that you can get more battery life from a Qualcomm system in this specific scenario, though, and that Apple systems are consistently good across the board.
This isn't much of an issue here for multiple reasons. 1) The chosen Ryzen laptop has a 15W iGPU and gets the same idle wattage as the MacBook Pro/Qualcomm (all average about 7-9W). 2) Notebookcheck do pretty good quality control to ensure as even a test setup as possible between devices. 3) They subtract idle from load to get rid of differences at idle, so the remainder is load power.
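To make point 3 concrete, a tiny sketch of the subtraction with made-up numbers (these are not Notebookcheck's actual readings):

```python
# Illustrative only: hypothetical wall-power readings, not real review data.
measured_idle_w = 8.0    # system draw sitting idle at the desktop
measured_load_w = 30.0   # system draw during the ST benchmark run

load_power_w = measured_load_w - measured_idle_w   # idle differences drop out
score = 110                                        # hypothetical benchmark score
print(f"Load power: {load_power_w:.1f} W, "
      f"efficiency: {score / load_power_w:.1f} points/W")
```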
Now, I'll admit this doesn't fully eliminate every factor other than the CPU cores. Heck, that's the basis for my previous claims about the discrepancy between the Qualcomm Asus and Qualcomm MS devices, but that was on the order of 10-20% power-delivery inefficiency under load, nothing close to the 2-3x difference we see in the ST bubble chart I made from Notebookcheck's data. While I'd have to dig up a reference, I'm pretty sure this is recapitulated in software measurements from HWInfo and powermetrics. HWInfo is tough to come by for the Qualcomm devices, unfortunately, as I think it was only recently enabled and many reviews don't have it, especially for ST tasks. And naturally software measurements have their own limitations, which is why I'd prefer to have both whenever possible.
That said, these relative power numbers just make sense. All the devices listed are on TSMC N4 or N5P, which are basically identical nodes, but the AMD device is pushing its cores as high as 5.1GHz compared to 3.4-4.0GHz. Now, admittedly, the AMD core won't spend its entire 10-minute CB ST run at 5.1GHz, but the core is going to be running much, much higher clock speeds for longer on the same node. If the cores are indeed bigger, then in terms of dynamic power (roughly C·f·V^2) they're getting hit on every single variable, including the squared voltage. Because they're running their cores so far out on the voltage/frequency curve, it's also true that they could back off clocks and get, say, 10% less performance for half the power (I'm making the numbers up), but they're already 6-20% slower in CB R24. For obvious reasons AMD don't want to lose an additional X% of performance even if their efficiencies would go up dramatically. Heck, that's a major reason why AMD catches up in MT tests: Apple and Qualcomm of course back off clocks too, but AMD backs off much more. Throw in a ~20% boost from SMT and suddenly AMD's efficiencies look a lot better, even if they can't match Qualcomm/ARM for performance (and when they do, their efficiency drops even faster).
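For the curious, here's a toy sketch of that relationship; dynamic power goes roughly as C·f·V^2, and every number below is invented purely to illustrate the shape of the curve:

```python
# Dynamic power scales roughly as P ~ C * f * V^2.
# All figures are invented for illustration, not measurements of any real chip.
def dynamic_power(cap, freq_ghz, volts):
    return cap * freq_ghz * volts ** 2

p_boost  = dynamic_power(cap=1.0, freq_ghz=5.1, volts=1.30)  # far out on the V/f curve
p_backed = dynamic_power(cap=1.0, freq_ghz=4.6, volts=0.95)  # ~10% lower clock, lower voltage

print(f"Backed-off power is ~{p_backed / p_boost:.0%} of boost power")  # roughly half
```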
To repeat a qualifying statement from my bubble chart post: it should also be noted that in GB 6.2, AMD does much, much better relative to its ARM rivals as an average across the GB subtests than it does in CB R24. AMD are almost certainly still drawing a hell of a lot more power than the ARM-based cores to attain those scores (probably even worse relative to CB R24, given the bursty nature of the GB workload means proportionally more time spent at max boost), but at least the performance deficit is gone: they actually beat some Qualcomm models and nearly match the M2 Pro, though the higher-end Qualcomm still beats both in this test. (Just as a side note: in contrast to CB R23, Apple does incredibly well in CB R24, so Maxon fixed whatever that problem was, at least for Apple chips.) So 2-3 fold differences in power efficiency may be on the high end for benchmarks, but there is still a very substantial gap in ST performance and efficiency between the best ARM cores (well, M2, but still) and the currently best x86 cores.
Using the WYSIWYG editor, you can select and set a fixed size on images.
Ah, I thought it was a forum setting. I did notice that on the forums we can create thumbnails, but after testing previews I'm not sure I like that any better as a readability measure, to be honest. I guess I'll play around with the editor to choose a picture size that looks good on both the mobile and desktop versions of this website.
No info on battery life. The Tom's article doesn't mention it, and the Asus website says "Zenbook S 16 has the day-long stamina you need, and more." Whatever that means.
Yeah, no hard info on battery life, but as I linked in my previous post, even AMD's own website says it's a 28W TDP device that can go as high as 54W. No doubt they will have a "whisper" mode, but the HX 370 is fundamentally a different device class than the base M models. It might go into "thin and lights", but I doubt fanless models, if there are any, will be able to exercise all that power... The GPU at 15W is very impressive, even accounting for AMD's use of "double FLOPs", which gives them 11.88 TFLOPs (basically, under certain circumstances with enough ILP, AMD can do two FP32 calculations at the same time; however, that's not always possible, and how likely that is for typical GPU workloads is up for debate. But even accounting for that, the GPU is still nearly 6 TFLOPs base!). It's basically the equivalent of an Mx Pro, both in GPU and CPU.
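A rough sketch of where those TFLOPs numbers come from; the shader count and clock below are my own assumed round figures for this class of iGPU, not numbers from AMD or from this post:

```python
# Peak FP32 throughput: shaders * 2 ops per FMA * clock, doubled again when
# dual-issue can be used. Shader count and clock are assumed, not official specs.
shaders   = 1024   # assumed
clock_ghz = 2.9    # assumed boost clock

base_tflops       = shaders * 2 * clock_ghz / 1000   # ~5.9 TFLOPs
dual_issue_tflops = base_tflops * 2                  # ~11.9 TFLOPs, only when ILP allows

print(f"Base: {base_tflops:.2f} TFLOPs, dual-issue peak: {dual_issue_tflops:.2f} TFLOPs")
```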