May 7 “Let Loose” Event - new iPads

I agree about this particular scenario - you could push TB3 too far doing audio, but it would take a pretty big job if you're not using a giant (6k) screen.

"Pretty big job" is an understatement, IMO. Audio coming from most professional interfaces is under 1MB/sec per channel. You can fit a lot of channels in USB, let alone TB. And the XDR with DSC (which will be on with an Apple Silicon device) will fit in under 18Gbps. So even if we assume TB isn't full duplex (it is), there's plenty of bandwidth for the audio device and storage to share the bus.
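To make the math concrete, here's a quick back-of-envelope check. The figures are my own ballpark assumptions, not exact specs: 32-bit samples at 192 kHz, a 40 Gbps TB3 link, and ~18 Gbps for the XDR with DSC enabled, as above.

```python
# Back-of-envelope bandwidth budget for audio + XDR on one TB3 link.
# All figures are ballpark assumptions, not exact specifications.
BYTES_PER_SAMPLE = 4        # 32-bit float audio
SAMPLE_RATE_HZ = 192_000    # high even for pro work
CHANNELS = 64               # a large interface or aggregate

per_channel_mb_s = BYTES_PER_SAMPLE * SAMPLE_RATE_HZ / 1e6   # MB/s per channel
audio_gbps = CHANNELS * per_channel_mb_s * 8 / 1000          # total audio, Gbps

TB3_LINK_GBPS = 40          # TB3 link bandwidth
XDR_DSC_GBPS = 18           # display stream with DSC, per the claim above
remaining_gbps = TB3_LINK_GBPS - XDR_DSC_GBPS

print(f"per channel: {per_channel_mb_s:.3f} MB/s")   # ~0.77 MB/s, under 1 MB/s
print(f"{CHANNELS} channels: {audio_gbps:.2f} Gbps") # well under 1 Gbps
print(f"left after display: {remaining_gbps} Gbps")  # plenty for storage too
```

Even 64 channels of 32-bit/192 kHz audio come out to under half a gigabit per second, against ~22 Gbps left over after the display.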

Now, if you use the USB hub built into the XDR, you're going to be limited, because of how the display handles things. But generally if you are doing that, your USB audio interfaces will still perform just fine.

However, TB5 isn't that niche, or it won't be, if the cost isn't extreme (it probably will be, sigh, even the cables). Nearly all PCIe 4/5 x4 NVMe SSDs exceed the bandwidth of TB3/4 by 2-4x. Nobody likes leaving that kind of performance on the table, and more and more workloads today actually demand it.
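A rough sanity check on that "2-4x" figure, using my own ballpark numbers for real-world sequential reads rather than spec maximums:

```python
# Rough ratio check: internal NVMe drives vs. TB3/4 external enclosures.
# Throughput numbers are ballpark real-world sequential reads, not specs.
tb3_tb4_enclosure_mb_s = 2_800   # what TB3/4 NVMe enclosures typically hit
pcie4_x4_ssd_mb_s = 7_000        # typical high-end PCIe 4.0 x4 drive
pcie5_x4_ssd_mb_s = 12_000       # typical PCIe 5.0 x4 drive

print(f"PCIe 4.0 drive vs TB3/4: {pcie4_x4_ssd_mb_s / tb3_tb4_enclosure_mb_s:.1f}x")
print(f"PCIe 5.0 drive vs TB3/4: {pcie5_x4_ssd_mb_s / tb3_tb4_enclosure_mb_s:.1f}x")
```

TB5's 80 Gbps (~10 GB/s raw) would close most of that gap for external storage.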

I have stated that NVMe devices are what are keeping pace with the bandwidth, so I'm well aware of that use case. However, I don't necessarily agree that "more and more workloads demand it". Certain workloads still demand as much bandwidth as they can get for specific I/O, but again we come back to "How many of those are going to be on base model chips?" That's the key piece of context from which my stance originates.

But in general, I've watched certain I/O bottlenecks progressively be alleviated over the last decade, primarily because SSDs and buses like Thunderbolt have made external storage viable in more situations, or because our needs are such that we can use network storage instead and have fewer things attached to our machines. So I'm not sure how you get more workloads demanding more external I/O when so many of them aren't facing the storage demand growth that video and games have faced over the last couple of decades. And in the case of games, they are still catching up to PCIe SSDs in the general case, sadly. For photography, I've found that processing the number of pixels coming from modern sensors in real time is a bigger hurdle than getting them into memory to manipulate.

Sorry, but I feel like I'm going in circles. The context of my statements is seemingly getting lost, so I'm having to argue against points already made (including points I've made or conceded in previous posts). I'm not suggesting "nobody wants TB5" or "it won't trickle down to the base models", but rather that I don't think it's such a strong requirement at the low end. It's more interesting at the higher end, where people are still pushing boundaries, and hard.
 
I didn't even want to say it, because when people talk about this they often sound like they think their use case is the most important in the world, but... for a base-level machine, the obvious big consumer of bandwidth (and CPU cycles, and GPU cycles, and memory, etc.) is gaming. That's probably by far the biggest driver of high-end NVMe purchases. But I think that over the next ten years we're going to see xR become more important and more prevalent, and it may also demand a ton of bandwidth as resolution goes through the roof in the quest for more realism. 3D data may also put a heavy load on I/O.
 
Hmm, no worries man 👍 you’re on track
 