x86 CPUs from AMD & Intel current/future releases.

The other thing about OpenMP that is a little odd these days is the model of making synchronous code parallel, constrained by the design goal that the code should behave fundamentally the same (just on a single thread) if OpenMP is disabled at compile time. The end result is that you will block the calling thread (i.e. main) when using OpenMP, which is a bad situation to be in on Apple platforms. Say hello to the pinwheel of death.
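For anyone who hasn't touched it: a minimal sketch, in C, of what that looks like in practice (the loop body and numbers are just illustrative). The pragma is the whole parallelism story, and main is the thread that blocks:

```c
// Minimal OpenMP sketch (illustrative). Build with e.g. `cc -fopenmp`;
// without the flag the pragma is ignored and the loop runs serially on
// one thread -- the "behaves the same without OpenMP" design constraint.
#include <stdio.h>

int main(void) {
    double sum = 0.0;

    // Fork: worker threads run the loop in parallel, and main joins in.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 100000000; i++) {
        sum += (double)i;
    }
    // Join: main does not get past the implicit barrier at the end of the
    // construct until every thread is done -- in a GUI app, this stall on
    // the main thread is where the pinwheel shows up.

    printf("sum = %f\n", sum);
    return 0;
}
```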
Never used OpenMP, but in theory you could spin up your own background thread that deals with OpenMP and have a simple run loop in the main thread that just goes "Are there new things from the OpenMP multi-threading thread that I should respond to? No? I'll loop back around then" or something. At least I would imagine it'd be possible to make that sort of thing work, but of course this still comes with complexity costs.
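Something like this, presumably? A hypothetical sketch in C, with a pthread plus a C11 atomic flag standing in for a real run loop (all the names here are made up for illustration):

```c
// Hypothetical sketch: confine OpenMP to one background thread and let
// the main thread poll instead of blocking. `openmp_worker` and
// `work_done` are illustrative names, not an established pattern.
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool work_done = false;
static double result;  // written by the worker before the flag flips

static void *openmp_worker(void *arg) {
    (void)arg;
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)  // blocks *this* thread, not main
    for (int i = 0; i < 100000000; i++)
        sum += (double)i;
    result = sum;
    atomic_store(&work_done, true);  // seq_cst by default, so `result` is visible
    return NULL;
}

int main(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, openmp_worker, NULL);

    // Stand-in for a real run loop: stays responsive, just peeks at the flag.
    while (!atomic_load(&work_done)) {
        /* ...service UI events here... */
        usleep(1000);
    }
    pthread_join(worker, NULL);
    printf("result = %f\n", result);
    return 0;
}
```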

Regarding LibDispatch (GCD) on Windows - How does Swift Concurrency work on Windows?
 
Regarding LibDispatch (GCD) on Windows - How does Swift Concurrency work on Windows?

It's possible it got ported while I wasn't looking. Windows was a gleam in some bored engineer's eye last time I paid super-close attention.

I'm honestly surprised Concurrency uses libdispatch. I expected the Swift team to try to limit their dependence on the library with Concurrency, much as they are trying to back slowly away from some of the early "Just port Darwin's libraries to Linux" approach now that you have things like swift-nio for networking that are truly cross-plat but also not Apple's platform APIs.

Never used OpenMP, but in theory you could spin up your own background thread that deals with OpenMP and have a simple run loop in the main thread that just goes "Are there new things from the OpenMP multi-threading thread that I should respond to? No? I'll loop back around then" or something. At least I would imagine it'd be possible to make that sort of thing work, but of course this still comes with complexity costs.

You could in principle, but that's getting into "Yo Dawg" territory to me. So now I'm managing threads so I can let OpenMP manage my threads. :)
 
It's been so long since I even looked at OpenMP that I forgot it relied on compiler directives.
Never used OpenMP, but in theory you could spin up your own background thread that deals with OpenMP and have a simple run loop in the main thread that just goes "Are there new things from the OpenMP multi-threading thread that I should respond to? No? I'll loop back around then" or something. At least I would imagine it'd be possible to make that sort of thing work, but of course this still comes with complexity costs.
You could in principle, but that's getting into "Yo Dawg" territory to me. So now I'm managing threads so I can let OpenMP manage my threads. :)
I started (then dropped) an MSc in high performance computing just a couple of years ago, and they were teaching OpenMP including task creation for asynchronous purposes (so not the simple fork/join thread model). I can't overstate how deranged the decision to base a multithreading API around compiler directives felt at the time, coming from a world where Swift Concurrency already existed. It's the opposite of what you want: the tried and tested parts of the compiler know nothing about the concurrency details of what you're attempting to implement. No type safety, no exploiting existing compiler capabilities to have the compiler detect issues in your code. Just a custom syntax bolted on top of your program, hidden in comments.
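For context, this is roughly what the task model looks like in C, sketched with the classic recursive-Fibonacci example (from memory, so treat the details as illustrative). The point being criticized is visible right in the code: every concurrency decision is an annotation the type system never sees.

```c
// OpenMP tasks, sketched via the classic parallel Fibonacci example.
// The entire concurrent structure lives in pragmas -- delete the
// `taskwait` line and it still compiles fine, it just races on a and b.
#include <stdio.h>

static long fib(long n) {
    if (n < 2) return n;
    long a, b;
    #pragma omp task shared(a)  // child task; `a` must be explicitly shared
    a = fib(n - 1);
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait        // the join point, expressed as a directive
    return a + b;
}

int main(void) {
    long r = 0;
    #pragma omp parallel  // create the thread team...
    #pragma omp single    // ...but have only one thread spawn the root task
    r = fib(20);
    printf("fib(20) = %ld\n", r);
    return 0;
}
```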
 
I'm honestly surprised Concurrency uses libdispatch. I expected the Swift team to try to limit their dependence on the library with Concurrency, much as they are trying to back slowly away from some of the early "Just port Darwin's libraries to Linux" approach now that you have things like swift-nio for networking that are truly cross-plat but also not Apple's platform APIs.
I think it's just step 1. I think their perspective was that it was easier to build Concurrency on top of libdispatch to begin with (pretty sure it had already been ported to Linux before Swift Concurrency?) than to build a new system for it from scratch, and that they'll refactor it out later. Like SwiftUI separating more from UIKit lately. I could frankly see SwiftUI becoming cross-platform one day, mapping to GTK or Qt or something. I don't think it will happen, but I also wouldn't be shocked. Like a 15% chance of it happening or so.
You could in principle, but that's getting into "Yo Dawg" territory to me. So now I'm managing threads so I can let OpenMP manage my threads. :)
I don't know. It does sound like a janky way of doing it, but on the other hand your thread management could be über simple, with just a non-blocking main thread checking for data from the OpenMP workers, so you only manage two effective threads. But I dunno; it's certainly not ideal no matter what.
 
I think it's just step 1. I think their perspective was that it was easier to build Concurrency on top of libdispatch to begin with (pretty sure it had already been ported to Linux before Swift Concurrency?) than to build a new system for it from scratch, and that they'll refactor it out later. Like SwiftUI separating more from UIKit lately. I could frankly see SwiftUI becoming cross-platform one day, mapping to GTK or Qt or something. I don't think it will happen, but I also wouldn't be shocked. Like a 15% chance of it happening or so.

libdispatch was one of the libraries available from day 1 (or at least very early), along with Foundation and CoreFoundation, yes.

Concurrency, according to the Swift devs, only uses libdispatch for managing the thread pool, so the coupling is already quite light. I'm just surprised they went this far without simply cutting the cord at that point. It's a bit like going through the effort of making a sandwich and then deciding you are too tired to eat it. (All this said, Swift does support custom executors already, so implementers can replace libdispatch on different platforms: https://github.com/apple/swift/pull/39518)

SwiftUI the library? No. SwiftUI the DSL? Maybe. I already see people implementing SwiftUI-like libraries for other platforms.

I don't know. It does sound like a janky way of doing it, but on the other hand your thread management could be über simple, with just a non-blocking main thread checking for data from the OpenMP workers, so you only manage two effective threads. But I dunno; it's certainly not ideal no matter what.

And the infrastructure to communicate between them, don't forget that. Another area that tends to be where people get things wrong with multithreading. :)

I started (then dropped) an MSc in high performance computing just a couple of years ago, and they were teaching OpenMP including task creation for asynchronous purposes (so not the simple fork/join thread model). I can't overstate how deranged the decision to base a multithreading API around compiler directives felt at the time, coming from a world where Swift Concurrency already existed. It's the opposite of what you want: the tried and tested parts of the compiler know nothing about the concurrency details of what you're attempting to implement. No type safety, no exploiting existing compiler capabilities to have the compiler detect issues in your code. Just a custom syntax bolted on top of your program, hidden in comments.

It makes sense to me in the context of when it was developed and the goals. But a lot has changed in the last 20+ years in terms of thinking around concurrency.
 
It makes sense to me in the context of when it was developed and the goals. But a lot has changed in the last 20+ years in terms of thinking around concurrency.
Yes. Sorry I didn't make that clear in my post. I see where they were coming from, and it totally made sense (back then) as an easy way to parallelize C/C++ programs for certain applications (i.e. scientific programs) where all you want is for the compiler to execute a loop in parallel. But once they added tasks and more complex stuff, I think they should have realized that they were trying to do too much with an approach that was never designed for that.
 
And the infrastructure to communicate between them, don't forget that. Another area that tends to be where people get things wrong with multithreading. :)
My first thought here was "Well, we can create channels, atomic flags […]". Then I stopped and thought "Yeah, fair play, fairly high complexity" :P
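To make the complexity point concrete: even the simplest "channel" between the OpenMP side and a polling main thread, say a single-producer/single-consumer ring buffer over C11 atomics, already has sharp edges. A hypothetical sketch (all names illustrative); the memory orderings are exactly the part that's easy to get wrong:

```c
// Hypothetical SPSC channel sketch: one OpenMP worker thread pushes
// results, the polling main thread pops them. Illustrative only.
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define CAP 64  // must be a power of two for the index masking below

typedef struct {
    double buf[CAP];
    atomic_size_t head;  // only the producer writes this
    atomic_size_t tail;  // only the consumer writes this
} spsc_channel;

// Producer side (e.g. the OpenMP worker thread).
static bool channel_push(spsc_channel *ch, double v) {
    size_t head = atomic_load_explicit(&ch->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&ch->tail, memory_order_acquire);
    if (head - tail == CAP) return false;  // full; caller retries later
    ch->buf[head & (CAP - 1)] = v;
    // Release: the slot write above must be visible before the new head.
    atomic_store_explicit(&ch->head, head + 1, memory_order_release);
    return true;
}

// Consumer side (e.g. the non-blocking main thread's poll).
static bool channel_pop(spsc_channel *ch, double *out) {
    size_t tail = atomic_load_explicit(&ch->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&ch->head, memory_order_acquire);
    if (head == tail) return false;  // empty; nothing to respond to yet
    *out = ch->buf[tail & (CAP - 1)];
    atomic_store_explicit(&ch->tail, tail + 1, memory_order_release);
    return true;
}
```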
 
A lot has been written about Intel's struggles, but AMD has had their own as well. While AMD's focus on consoles and the datacenter has paid dividends (Lisa Su recently described AMD as a datacenter-first company), their consumer division has lagged behind, growing at a snail's pace of just half a percent a quarter since the introduction of Zen, and even of Zen 3, when they offered a no-compromises better x86 chip than Intel. Part of this can be attributed to AMD's focus on the datacenter/console markets and AMD's small size forcing trade-offs in manufacturing, but the other part, well ... read on:


In summary, OEMs would like to offer more and better AMD products, but AMD just doesn't offer them much help, even on top of the poor supply of chips. The article reinforces something that I think @Artemis brought up in the Qualcomm thread: even a new entrant like Qualcomm, and we've seen the struggles there too, got more design wins and devices out in the wild than AMD, never mind Intel. Again, a lot of this is that AMD has been hampered by the fact that they were too small to take advantage of their better processors and had to prioritize, and the datacenter market was probably the right call, being more lucrative and less fickle. But this inability to execute on the consumer side has reportedly left "billions" on the table. In reference to the "Fall of Intel" thread, this is the most valuable thing that size buys you: time. Intel's market entrenchment allowed them to survive this long, and by survive I mean retain dominant consumer market share (datacenter is more complicated, as Intel has more market share but lower ASPs), against arguably superior products from their competitor(s) - including, indirectly, Apple, whose growth in chip market share is hampered by, you know, not being a chip company and being in its own ecosystem that people have to buy into.
 
A lot has been written about Intel's struggles, but AMD has had their own as well. While AMD's focus on consoles and the datacenter has paid dividends (Lisa Su recently described AMD as a datacenter-first company), their consumer division has lagged behind, growing at a snail's pace of just half a percent a quarter since the introduction of Zen, and even of Zen 3, when they offered a no-compromises better x86 chip than Intel. Part of this can be attributed to AMD's focus on the datacenter/console markets and AMD's small size forcing trade-offs in manufacturing, but the other part, well ... read on:


In summary, OEMs would like to offer more and better AMD products, but AMD just doesn't offer them much help, even on top of the poor supply of chips. The article reinforces something that I think @Artemis brought up in the Qualcomm thread: even a new entrant like Qualcomm, and we've seen the struggles there too, got more design wins and devices out in the wild than AMD, never mind Intel. Again, a lot of this is that AMD has been hampered by the fact that they were too small to take advantage of their better processors and had to prioritize, and the datacenter market was probably the right call, being more lucrative and less fickle. But this inability to execute on the consumer side has reportedly left "billions" on the table. In reference to the "Fall of Intel" thread, this is the most valuable thing that size buys you: time. Intel's market entrenchment allowed them to survive this long, and by survive I mean retain dominant consumer market share (datacenter is more complicated, as Intel has more market share but lower ASPs), against arguably superior products from their competitor(s) - including, indirectly, Apple, whose growth in chip market share is hampered by, you know, not being a chip company and being in its own ecosystem that people have to buy into.

When I was at AMD, we *only* really cared about the datacenter, even though most of our design wins were in consumer. Data center is far more profitable; once you’re in, it’s guaranteed money for a long time, and mass sales. Much more profitable than fighting for customers one-at-a-time.
 
When I was at AMD, we *only* really cared about the datacenter, even though most of our design wins were in consumer. Data center is far more profitable; once you’re in, it’s guaranteed money for a long time, and mass sales. Much more profitable than fighting for customers one-at-a-time.
Oh I get it, as I said above the datacenter focus was probably the right call, being more lucrative and less fickle. It's just unfortunate that AMD, with the better product, couldn't do both. This is where just being big helps.
 
Just came across this article in my feed… interesting naming switch.

“ What's really interesting about these rumors is that the Royal Core project was allegedly started by Jim Keller of AMD Zen fame, according to Notebookcheck.net.”

Why is that interesting? What, exactly, has he done that was a big deal? And how much could he have done on this project that still survives this much later in time?
 
Temporarily dropping HT, but looking at bringing back SMT4 in a few years. We shall see, I guess.
 
“ What's really interesting about these rumors is that the Royal Core project was allegedly started by Jim Keller of AMD Zen fame, according to Notebookcheck.net.”
Why is that interesting? What, exactly, has he done that was a big deal? And how much could he have done on this project that still survives this much later in time?

It seems Keller was only at Intel for roughly two years. For a company like Intel that's probably not enough time to have much of an effect, which might also have been the reason why he left.

Also funny is the following tidbit about Royal/Cobra Core from videocardz.com that is linked in the article:
Goal is to create and execute a new architecture for Intel x86 processors with more power efficiency and high performance compared to the other competitors in the market.

That's just marketing talk without any useful information. Of course you are not planning to be worse than the competition...
 
It’s also not a new “architecture.” At best it’s a new “micro-architecture.” It’s still x64.
 
It’s also not a new “architecture.” At best it’s a new “micro-architecture.” It’s still x64.
Well, I do remember an announcement a while back in which they said they were going to add 16 more registers and d = s + a type ops, because RISC has those things so x86 would benefit from grafting them onto the ISA. Somehow.
 
Well, I do remember an announcement a while back in which they said they were going to add 16 more registers and d = s + a type ops, because RISC has those things so x86 would benefit from grafting them onto the ISA. Somehow.
Unfortunately, the article doesn't suggest any changes of that sort.
 
Unfortunately, the article doesn't suggest any changes of that sort.

I assume this is what is meant:


By the way, the article is just the usual regurgitated nonsense. Rentable units etc. were discussed half a year ago and nobody really knows what they are. It seems to be some sort of architecture that can partition resources depending on the usage scenario (e.g. acting as one super-large OoO core if you need single-threaded performance, or as multiple slower cores if your IPC is low and you need throughput).
 
I assume this is what is meant:


By the way, the article is just the usual regurgitated nonsense. Rentable units etc. were discussed half a year ago and nobody really knows what they are. It seems to be some sort of architecture that can partition resources depending on the usage scenario (e.g. acting as one super-large OoO core if you need single-threaded performance, or as multiple slower cores if your IPC is low and you need throughput).
Any word on when we can get a chip that supports this? Seems like pretty big and nice features, glancing over it.
 
Any word on when we can get a chip that supports this? Seems like pretty big and nice features, glancing over it.

I am not aware of any announcement of an actual product that uses these instructions. At any rate, even if it becomes a reality, it will be at least a decade until these instructions are widely supported.
 