Nuvia: don’t hold your breath

Chipsandcheese on the Oryon cores:


I haven't had time to read the whole thing yet but given the title of the thread I thought the following snippet might be amusing!
The article overall seems very good, but there is one part in the CB R24 benchmarking section where I take the opposite view:

SMT helps AMD by giving each Zen 4 core explicit parallelism to work with. However, that’s not enough to counter the Snapdragon X Elite’s higher core count. The Snapdragon X Elite has 12 cores to AMD’s eight, and comes away with a 8.4% performance lead while drawing just 2% more power. From another perspective though, each Zen 4 core punches above its weight. Qualcomm is bringing 50% more cores to the table, and bigger cores too. Snapdragon X Elite should be crushing the competition on paper, but thermal and power restrictions prevent it from pulling away.

This to me is the wrong way around. Yes each Zen Core punches above its weight ... because as chipsandcheese themselves state in the first sentence each core has two threads. Thus comparing Zen 4 to Oryon, it is 8 to 12 cores but 16 to 12 threads. Each Oryon thread thus punches above its weight and for a test like CB R24 that would be, to my mind, the more relevant metric. The other thing I'm not so sure about is the contention that the Qualcomm cores are bigger. They are "bigger" architecturally, as in a wider design, but my impression is that they are the same physical size or smaller in silicon - like with M1 - in comparison to x86 cores fabbed on the same process. Does anyone know what the sizes actually are?

At any rate, I do agree with the last statement that some of the MT results don't quite match up with the paper specs, with various possible explanations as to why.
 
Chipsandcheese on the Oryon cores:


I haven't had time to read the whole thing yet but given the title of the thread I thought the following snippet might be amusing!
Haven’t had a chance to read it, but I’ve seen other commenters say Oryon has 12% lower IPC than Firestorm. Weird.
 
It’s in the hardware subreddit. They didn’t specify how they came to that conclusion. I hoped it might be in the article, but I haven’t read it yet.

Edit. It seems they are comparing SpecInt
I don’t think SpecInt was mentioned in the article. However, clam didn’t disagree and mentioned the page size and clock speed increases as possible culprits. Both are reasonable. I haven’t seen SPEC results for Oryon.
 
The article overall seems very good [...]
Don't know that I agree. It suffers from the usual thing I notice in chipsandcheese: their articles read like the output of young PC enthusiasts who've learned lots more about CPU internals than your average forum poster, but still suffer from numerous blind spots due to their narrow x86-centric background and self-taught approach.

An example: they criticize Oryon for having less robust move elimination than x86 cores. This is a feature which attempts to handle many 'mov' (register-to-register copy) instructions in the front-end of the processor with register renaming tricks rather than issuing them to execution units in the backend.

But... three-operand RISC ISAs like Arm A64 provide the regA = regB OP regC pattern for ordinary ALU instructions, while x86 is two-operand, or regA = regA OP regB. Two-operand ISAs put a lot of pressure on move elimination because whenever you must preserve both inputs to an ALU instruction, you're going to have to copy one of the inputs to another register before doing the computation. When you change over to a three-operand ISA, there are far fewer register-to-register copies since you can simply write to a different register than either of the input registers. You don't have to eliminate a move that wasn't encoded in the instruction stream in the first place!
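
A minimal sketch of the difference, in loose Intel-style and A64-style pseudocode (register names are purely illustrative):

; x86-64, two-operand: to keep both inputs around you need an extra copy
mov  rcx, rax      ; copy rax first, because the add below overwrites its destination
add  rcx, rbx      ; rcx = rax + rbx; rax and rbx both survive
; that mov is exactly what move elimination tries to swallow at rename

; Arm A64, three-operand: just pick a fresh destination register
add  x2, x0, x1    ; x2 = x0 + x1; x0 and x1 both survive, no copy emitted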

chipsandcheese doesn't mention this at all. I can only assume that while they've read up on why move elimination is good in x86 cores, they just don't have the background to understand that it isn't very important in an Arm A64 core.

In the same section they also ding Oryon's front end for not recognizing xor reg, reg or sub reg, reg as idioms for setting reg to 0. This is another x86 special: due to the specifics of x86's variable length instruction encoding, xor reg, reg encodes to fewer instruction bytes than a load-immediate. Therefore, x86 CPU architects recognize such patterns as alternate encodings of "load immediate 0", and encourage their use in programs. In A64, however, mov reg, #0 encodes to 4 bytes, same as all other A64 instructions. There is no point in using this x86 trick, and therefore A64 CPU designers don't bother complicating the front end to avoid issuing these to back-end execution units.
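
Roughly, in the same loose pseudocode as above:

xor  eax, eax      ; x86-64: 2-byte encoding, recognized in the front end as "eax = 0"
mov  eax, 0        ; x86-64: same effect, 5-byte encoding, so compilers prefer the xor
mov  w0, #0        ; A64: 4 bytes like every other instruction, so there's nothing to gain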

p.s. I am not using correct assembly syntax for anything in particular in this post, consider it pseudocode
 
Don't know that I agree. It suffers from the usual thing I notice in chipsandcheese: their articles read like the output of young PC enthusiasts who've learned lots more about CPU internals than your average forum poster, but still suffer from numerous blind spots due to their narrow x86-centric background and self-taught approach.

An example: they criticize Oryon for having less robust move elimination than x86 cores. This is a feature which attempts to handle many 'mov' (register-to-register copy) instructions in the front-end of the processor with register renaming tricks rather than issuing them to execution units in the backend.

But... three-operand RISC ISAs like Arm A64 provide the regA = regB OP regC pattern for ordinary ALU instructions, while x86 is two-operand, or regA = regA OP regB. Two-operand ISAs put a lot of pressure on move elimination because whenever you must preserve both inputs to an ALU instruction, you're going to have to copy one of the inputs to another register before doing the computation. When you change over to a three-operand ISA, there are far fewer register-to-register copies since you can simply write to a different register than either of the input registers. You don't have to eliminate a move that wasn't encoded in the instruction stream in the first place!

chipsandcheese doesn't mention this at all. I can only assume that while they've read up on why move elimination is good in x86 cores, they just don't have the background to understand that it isn't very important in an Arm A64 core.

I did wonder about that but didn't have the background myself to question it further. The only standout flaw in the article I could identify with my own eyes was the cores vs threads statement about CB R24. I'm also not sure about the Oryon cores being "bigger" - they may have meant "wider", but in silicon ... well, the Reddit thread @Jimmyjames linked said there should be die shots soon, and most were predicting, as I would, that overall the CPU part of the Snapdragon SOC is going to be smaller than the analogous AMD CPU.
In the same section they also ding Oryon's front end for not recognizing xor reg, reg or sub reg, reg as idioms for setting reg to 0. This is another x86 special: due to the specifics of x86's variable length instruction encoding, xor reg, reg encodes to fewer instruction bytes than a load-immediate. Therefore, x86 CPU architects recognize such patterns as alternate encodings of "load immediate 0", and encourage their use in programs. In A64, however, mov reg, #0 encodes to 4 bytes, same as all other A64 instructions. There is no point in using this x86 trick, and therefore A64 CPU designers don't bother complicating the front end to avoid issuing these to back-end execution units.

p.s. I am not using correct assembly syntax for anything in particular in this post, consider it pseudocode
Got it
 
You don't have to eliminate a move that wasn't encoded in the instruction stream in the first place!
There is no point in using this x86 trick, and therefore A64 CPU designers don't bother complicating the front end to avoid issuing these to back-end execution units.

I haven't read the article, but these two examples make it sound as if they spent too much time understanding the optimization techniques of x86-64 implementations, but hardly any time on how AArch64 differs from x86-64. Not the best starting point if you are trying to evaluate an AArch64 chip.
 
This to me is the wrong way around. Yes each Zen Core punches above its weight ... because as chipsandcheese themselves state in the first sentence each core has two threads. Thus comparing Zen 4 to Oryon, it is 8 to 12 cores but 16 to 12 threads. Each Oryon thread thus punches above its weight and for a test like CB R24 that would be, to my mind, the more relevant metric.

To me, comparing by threads is not a better way to compare things. SMT doesn’t add compute resources, and at least with Zen 3, the average uplift for multithreaded tasks was something like 22% according to AnandTech. So it seems to me that it is disingenuous to say that we should really be treating Zen like it has double the compute resources it actually has in this comparison. Even if we disable SMT on Zen 4, I expect the per-core performance to be in favor of Zen. It took 50% more cores to get 8% more performance, meaning the uplift from SMT better utilizing the x86 cores isn’t enough to explain the results on its own.
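
As a rough back-of-envelope check (assuming the ~22% Zen 3 SMT uplift carries over to Zen 4 - my assumption, not a measured number):

8 Zen 4 cores with SMT   ~ 8 x 1.22 ~ 9.8 "core-equivalents"
12 Oryon cores           ~ 1.08 x the Zen 4 nT score
per core-equivalent:     1.08 / 12 ~ 0.090  vs  1.00 / 9.8 ~ 0.102

i.e. even after crediting SMT, each Zen 4 core-equivalent is still doing roughly 13% more work than each Oryon core in this test.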

The truth is, Oryon hit 8% higher performance at the cost of 1.9% extra power draw (and I’d expect to see worse single thread on the Oryon). That’s the real take-away here. In terms of perf-per-watt, it is ahead of the 7840HS as tested, just not by much. To be honest, if I was buying a mini PC with one of these, I’d take the 7840HS. Mostly because of the single thread performance.
 
To me, comparing by threads is not a better way to compare things. SMT doesn’t add compute resources, and at least with Zen 3, the average uplift for multithreaded tasks was something like 22% according to AnandTech. So it seems to me that it is disingenuous to say that we should really be treating Zen like it has double the compute resources it actually has in this comparison. Even if we disable SMT on Zen 4, I expect the per-core performance to be in favor of Zen. It took 50% more cores to get 8% more performance, meaning the uplift from SMT better utilizing the x86 cores isn’t enough to explain the results on its own.
Hmmmm ... that's a fair point. The rendering pipelines saw between 15-26% uplift with SMT on, and the older CB R20 they tested was about 18%. It depends on the workload obviously - some saw nearly double, while others saw little or even negative uplift. So while SMT doesn't add compute resources per se, it is also fair to say that the amount of compute resources per core is designed with SMT in mind. A different x86 core designed without SMT might be different. We'll see, since Intel is now pursuing that approach in its consumer chips.

Maybe the fairest way to adjudicate this would be by silicon area? Beyond power, which is obviously related, that's really what we're trying to get at: how much silicon does a chipmaker have to use to get a certain performance level? Because throwing more cores and threads will always be a benefit to highly MT workloads but there is a reason chipmakers and OEMs don't throw Threadripper-class chips into laptops and just rely on DVFS to scale power down when running heavy multithreaded workloads. In theory, they could, but practically ...

The truth is, Oryon hit 8% higher performance at the cost of 1.9% extra power draw (and I’d expect to see worse single thread on the Oryon). That’s the real take-away here. In terms of perf-per-watt, it is ahead of the 7840HS as tested, just not by much. To be honest, if I was buying a mini PC with one of these, I’d take the 7840HS. Mostly because of the single thread performance.

I'm not quite sure I follow this. The single-thread performance of the Oryon is better no?

 
I'm not quite sure I follow this. The single-thread performance of the Oryon is better no?

Did I miss the single-thread benchmarks somewhere? My statement was an assumption based on the MT results and the guess that single core boost would be roughly similar between the two. Is the Snapdragon’s single core boost more effective than on Zen?

EDIT: Ninja’d by your edit. Although that’s now comparing a different (although similar) chip, so will have to poke around. AMD’s single core perf does tend to have a lot of variance depending on the specific laptop chip line you pick, likely due to power limits.
 
Did I miss the single-thread benchmarks somewhere? My statement was an assumption based on the MT results and the guess that single core boost would be roughly similar between the two. Is the Snapdragon’s single core boost more effective than on Zen?

EDIT: Ninja’d by your edit. Although that’s now comparing a different (although similar) chip, so will have to poke around. AMD’s single core perf does tend to have a lot of variance depending on the specific laptop chip line you pick, likely due to power limits.
Yeah sorry about that, I do that all the time. The 8845HS is basically the 7840HS. It's true that lots of OEMs have different power and thermal budgets, but that score in Notebookcheck is one of the higher ones I've seen for the 8845HS/7840HS. Even so, the Oryon beats it handily: in CB R24 it has a 20% higher ST score and over double the perf/watt. In GB 6.2 the difference is less pronounced, only 11% higher overall performance across subtests, and power wasn't tested. In the Reddit thread, and to be fair in the article, clam admitted their power/benchmarking tests are not as exact as those of some dedicated sites.

That said, I did agree with their overall conclusion that MT performance seems to be missing from the Snapdragon chip - though not necessarily due to thermals/power. Also, while there may be some oddities with CB R24 (though again it's only relative to Apple that things look weird here; relative to x86, CB R24 is one of the Snapdragon's better scores), overall these Snapdragon chips seem to be missing a fair chunk of expected MT performance. I still put it down to bandwidth issues - and in some ways the chipsandcheese article backs this up - like with Apple, each individual CPU core can pull down a potentially massive amount of bandwidth, and there are 12 of them. Their L2 bandwidth is very impressive, but relative to the number of cores their L3 capacity and memory bandwidth are less so - at least relative to Apple - and maybe the design needs that for its MT performance on something like CB R24. I'm also not sure what the L2 bandwidth on Apple's design is for the Max, which is the more relevant comparison. The other possibility is the cluster design (poor core-to-core latency between clusters in Snapdragon), but I wouldn't necessarily have thought that would matter as much for a workload like CB R24, where threads are generally independent, and Apple should suffer from that as well.

I do like the idea of trying to normalize by silicon die area ... though that idea has a lot of pitfalls too: you don't always have access to that info, chips using different process nodes won't be easily comparable, and then of course there's the question of what counts as part of the CPU die area. Trying to get a sense of MT performance in the abstract is a good deal harder than ST (which is already a pain)!
 
Yeah sorry about that, I do that all the time. The 8845HS is basically the 7840HS. It's true that lots of OEMs have different power and thermal budgets, but that score in Notebookcheck is one of the higher ones I've seen for the 8845HS/7840HS. Even so, the Oryon beats it handily: in CB R24 it has a 20% higher ST score and over double the perf/watt. In GB 6.2 the difference is less pronounced, only 11% higher, and power wasn't tested. In the Reddit thread, and to be fair in the article, clam admitted their power/benchmarking tests are not as exact as those of some dedicated sites.

What I mean is that AMD's ST varies quite a bit depending on where in the notebook chip lineup you measure. The 7745HX has about 12% more ST than the 7840HS according to Geekbench. But it is in a higher TDP class, so not exactly an even comparison - although AMD's power consumption tends not to go off the rails like Intel's when pushing single core, so it's possible the higher TDP chip could be used in lower power situations like the one I think about (battery powered in the middle of the high desert trying to do astrophotography). And on the Snapdragon side, I'm seeing reports that the results vary noticeably depending on how the OEM set things up, which makes things more complicated:

In general, it feels more like trading blows than a clear win one way or another. Which is fine. It adds more evidence that more companies should be looking at ARM for higher power use cases if they've been ignoring it.

I'm still not sure if my next mini PC would be ARM-based though. As ARM compatibility gets fixed, one of my mini PCs just has its work offloaded to my Mac mini. And the other mini PC is a 15W TDP unit which isn't exactly where Qualcomm is aiming at the moment.

I do like the idea of trying to normalize by silicon die area ... though that idea has a lot of pitfalls too: you don't always have access to that info, and different process nodes aren't easily comparable. Trying to get a sense of MT performance in the abstract is a good deal harder than ST!

How much do we care about how a CPU designer reaches certain goals though? If AMD/Intel can use SMT to hit certain MT performance thresholds with their cores, at a given power consumption, why would the customer care? While people wanting to tear down the architecture for clues might be interested in die area, I have to wonder how much the CPU die area materially impacts the cost or other metrics.

I would prefer that metrics are based in something real world, IMO. But maybe I'm not deep enough in the details to want to try to turn an apple into an orange for the sake of a fine-grained comparison.
 
What I mean is that AMD's ST varies quite a bit depending on where in the notebook chip lineup you measure. The 7745HX has about 12% more ST than the 7840HS according to Geekbench. But it is in a higher TDP class, so not exactly an even comparison - although AMD's power consumption tends not to go off the rails like Intel's when pushing single core, so it's possible the higher TDP chip could be used in lower power situations like the one I think about (battery powered in the middle of the high desert trying to do astrophotography). And on the Snapdragon side, I'm seeing reports that the results vary noticeably depending on how the OEM set things up, which makes things more complicated:


Right. Sure, the 7745HX would match it in performance, but it would undoubtedly use much more power ... the Oryon is already twice as efficient as the 7840HS. And yes, Windows laptops are notoriously reliant on how the OEM sets things up - for x86 too. Anandtech did a deep dive into that and boy did it get nasty fast. Normalizing by power draw is often enough for single threaded cases (and better than clock speed for most purposes), but MT? Tough.

In general, it feels more like trading blows than a clear win one way or another. Which is fine. It adds more evidence that more companies should be looking at ARM for higher power use cases if they've been ignoring it.

Possibly. On the other hand, you can argue that if it isn't a clear win, Qualcomm will struggle more to convince OEMs and users to consider ARM for high power situations, especially until, as you mention later, software compatibility evens out. Qualcomm is also struggling with its accelerators, and when comparing SOCs the whole package matters (pun intended). However, despite that gloom, according to some industry reports - whose accuracy is, well ... there are forumers with strong opinions on that - Qualcomm has sold well so far. But we'll see how long that lasts as Intel and AMD release their latest chips.

I'm still not sure if my next mini PC would be ARM-based though. As ARM compatibility gets fixed, one of my mini PCs just has its work offloaded to my Mac mini. And the other mini PC is a 15W TDP unit which isn't exactly where Qualcomm is aiming at the moment.



How much do we care about how a CPU designer reaches certain goals though? If AMD/Intel can use SMT to hit certain MT performance thresholds with their cores, at a given power consumption, why would the customer care? While people wanting to tear down the architecture for clues might be interested in die area, I have to wonder how much the CPU die area materially impacts the cost or other metrics.

I would prefer that metrics are based in something real world, IMO. But maybe I'm not deep enough in the details to want to try to turn an apple into an orange for the sake of a fine-grained comparison.
That's where I was trying to go with my original reply (which again was edited after you saw it, sorry): "Because throwing more cores and threads will always be a benefit to highly MT workloads but there is a reason chipmakers and OEMs don't throw Threadripper-class chips into laptops and just rely on DVFS to scale power down when running heavy multithreaded workloads. In theory, they could, but practically ...". In terms of how that affects things consumers actually care about, at the moment the only thing we have to go on is the Dell leaks, where they compared Qualcomm to Intel/AMD and found that the Snapdragon chips were significantly cheaper - and I do believe that's in part because they are smaller: smaller dies are cheaper to produce at TSMC. Though we'll see how much of that is due to the CPU when the die shots are released.

You're right in the sense that the consumer doesn't really care how the sausage is made, only that it is reasonably priced and tasty. For the CPU, if they care at all, they care about how much it costs, what the performance is, and what the effect on power/battery life is, often in that order. What I'm trying to get at is more about what these tests tell us about the performance of the design when it is scaled up/down, and whether it can be scaled up or down economically. That can matter for the end cost of the system, and thus the attractiveness of the design for the OEM and customer, and thus the long term business prospects - but I'll admit that's less of what I'm interested in, though I wouldn't say I'm totally disinterested in that aspect either.

How the tech changes as you "grow it" is what's very interesting to me. I remember for instance when the M1 was first announced and the M1 Pro and Max were distant blips on the horizon, people compared the base M1's MT performance to chips that were way outside its weight class but then downclocked to similar power profiles and said, "see? the cores are not that impressive!". That misjudgment is really what I'm trying to avoid doing by comparing like for like as much as possible, because, well, eventually the M1 Pro and Max DID come out. But maybe that's a fool's errand. And yeah, it's more that I'm interested in the inner workings of this stuff: in addition to telling us something about the whys and wherefores of today, it can potentially give insights into how tech is moving forward, and I like thinking about that and talking about it. But that's not the same as believing that the average consumer should care one whit about it.
 
In A64, however, mov reg, #0 encodes to 4 bytes, same as all other A64 instructions. There is no point in using this x86 trick, and therefore A64 CPU designers don't bother complicating the front end to avoid issuing these to back-end execution units.

Moreover, AArch64 reads register 31 as zero (WZR/XZR) in most instruction contexts, so any time you need a zero operand, it is already there.
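
A couple of hedged illustrations of how that gets used (loose A64 pseudocode; WZR/XZR is how register 31 is written when it acts as the zero register):

str  wzr, [x0]       ; store a zero to memory without materializing 0 in a real register
cmp  x1, xzr         ; compare against zero with no immediate needed
orr  x2, xzr, x3     ; this is literally what the A64 "mov x2, x3" alias expands to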
 
Right. Sure, the 7745HX would match it in performance, but it would undoubtedly use much more power ... the Oryon is already twice as efficient as the 7840HS. And yes, Windows laptops are notoriously reliant on how the OEM sets things up - for x86 too. Anandtech did a deep dive into that and boy did it get nasty fast. Normalizing by power draw is often enough for single threaded cases (and better than clock speed for most purposes), but MT? Tough.

Sorry, how are you getting twice as efficient from this? This is a new claim. I suspect you are using the charts from the page you shared, but those were run in a mix of states, and so need to be read carefully.

The most efficient power range of the Snapdragon X Elite chips seems to be 20-30 Watts, where they even slightly beat the Ryzen 7 8845HS at 20 Watts. Apple's M2 Pro is also beaten in this range. However, if you increase the power limits, the Snapdragon X Elite loses efficiency quickly, especially at 45 Watts or more.

I think you are misreading the chart drastically here:

[attached charts: multi-threaded efficiency comparison]


Possibly. On the other hand, you can argue that if it isn't a clear win, Qualcomm will struggle more to convince OEMs and users to consider ARM for high power situations, especially until, as you mention later, software compatibility evens out. Qualcomm is also struggling with its accelerators. According to some industry reports - whose accuracy is, well ... there are forumers with strong opinions on that - Qualcomm has sold well so far. But we'll see how long that lasts as Intel and AMD release their latest chips.

I'm talking about chip companies, not buyers. Chip companies should generally be more aware of Qualcomm's history, and be looking at this more along the lines of: "If they can produce this, can I do better?" I'm thinking of Nvidia and the like in particular.

I remember for instance when the M1 was first announced and the M1 Pro and Max were distant blips on the horizon, people compared the base M1's MT performance to chips that were way outside its weight class but then down clocked to similar power profiles and said, "see? the cores are not that impressive!". That misjudgment is really what I'm trying to avoid doing by comparing like for like as much as possible. But maybe that's a fool's errand.

With this in mind, I can see why you would want to do some math to "account" for SMT or other architectural decisions, but I'd argue there's better ways to account for it. :)
 
Sorry, how are you getting twice as efficient from this? This is a new claim. I suspect you are using the charts from the page you shared, but those were run in a mix of states, and so need to be read carefully.



I think you are misreading the chart drastically here:

[attached charts]
That's multithreaded efficiency and yes they are much, much closer there. Sorry for being unclear, I was referring to ST efficiency when referring to Oryon having greater >2x efficiency compared to AMD. You were saying how "The 7745HX has about 12% more ST than the 7840HS according to Geekbench." Which might match the Snapdragon in terms of performance though the processor here is only a few percent higher than the 8845HS/7840HS recorded in Notebookcheck and still slower than the (higher) Snapdragon. As you said, it really depends on how the OEM has set its power and thermals and even single threaded is a pain. I mean if you check the chart below the X1P-64-100 (108 pts) got massively higher efficiency than the 80-100 (123 pts) and the 78-100 (108 pts) despite being a lower binned processor and the 64 and 78 scoring the same with the same clockspeed!

[attached chart: single-thread efficiency across Snapdragon X bins]


BTW how do you resize your picture? When I look on the phone mine is fine, but on the computer screen mine is absolutely massive and yours looks normal - I just took a screenshot and pasted it in. Edit: actually yours looks small on the phone, huh. Either way, how did you resize?
I'm talking about chip companies, not buyers. Chip companies should generally be more aware of Qualcomm's history, and be looking at this more along the lines of: "If they can produce this, can I do better?" I'm thinking of Nvidia and the like in particular.
Right. Speaking of Nvidia, the rumors are that in the near future Nvidia is going to partner with MediaTek and use off-the-shelf ARM cores to launch a PC/laptop chip. So far, ARM themselves haven't been able to produce an X-series core quite as good as Nuvia/Apple's, but they are slowly getting closer, and yeah, maybe one day Nvidia will custom design a core.
With this in mind, I can see why you would want to do some math to "account" for SMT or other architectural decisions, but I'd argue there's better ways to account for it. :)

Sure. :) That's why I was proposing a mix of power (of course) and silicon area but yeah, still imperfect. As my kids' favorite cartoon character would say, "this is trifficult!"
 
Maybe the fairest way to adjudicate this would be by silicon area? Beyond power, which is obviously related, that's really what we're trying to get at: how much silicon does a chipmaker have to use to get a certain performance level? Because throwing more cores and threads will always be a benefit to highly MT workloads but there is a reason chipmakers and OEMs don't throw Threadripper-class chips into laptops and just rely on DVFS to scale power down when running heavy multithreaded workloads. In theory, they could, but practically ...
I only care about performance per watt. I don’t care about die area. If you can double your performance by using double the die area but you reduce power, I salute you.
 
I only care about performance per watt. I don’t care about die area. If you can double your performance by using double the die area but you reduce power, I salute you.
Ah, but the OEM you are trying to sell your chip to might care if you double the cost of producing the chip - either it becomes lower profit for you or the OEM, or lower volume because the end-user doesn't want to pay that much, even if it means better battery life and quieter fans. I mean, we see this on the PC-laptop side all the time. The Nvidia "MaxQ" GPU designs, despite being the saner power profile for a laptop, are hated by a significant fraction of vocal online end users because they feel they aren't getting a "real" 4070 if they buy a laptop with a "MaxQ" instead of a "Mobile" 4070 - to the point that OEMs go out of their way to obfuscate which one is in a 4070-equipped laptop when they sell it (in fairness, I have some sympathy for users who, you know, want to be told which processor they are actually getting when making an informed decision about buying an incredibly expensive item).
 
Ah, but the OEM you are trying to sell your chip to might care if you double the cost of producing the chip - either it becomes lower profit for you or the OEM, or lower volume because the end-user doesn't want to pay that much, even if it means better battery life and quieter fans. I mean, we see this on the PC-laptop side all the time. The Nvidia "MaxQ" GPU designs, despite being the saner power profile for a laptop, are hated by a significant fraction of vocal online end users because they feel they aren't getting a "real" 4070 if they buy a laptop with a "MaxQ" instead of a "Mobile" 4070 - to the point that OEMs go out of their way to hide which one is in the laptop.
Nope. The OEM I am selling it to has no alternative source.
 