Nvidia stock is taking a "small" beating, allegedly due to the release of DeepSeek AI from China.
Welcome to modern consumer capitalism. Everything on the market is built to ramp up our laziness (in the name of convenience). Electric can openers? Power everything on your car. Keurig, anyone? Can you find anyone out there (besides me) who knows how to use a slide rule or can read a vernier scale? I like my computer and spend way too much time on it, but doing stuff is good for me too. For anyone, really. This stuff is making us all too lazy to think for ourselves.
I can use a slide rule. I even own two.
Have you got a bamboo one?
I don’t think so. I inherited them from my dad, who used them in school. (I was lucky to be born late enough to skate by in college with my HP-28S and -48G.) They seem to be wood, but I can’t tell wood from bamboo.
Saw some discussion of NVIDIA's recently-announced (at CES) Project DIGITS AI-focused PC on MR. Seems interesting (and didn't get any hits when I searched techboards/all threads for "Project DIGITS"), so I'm posting info about it here. Expected release: May 2025.
Basic specs:
"At its heart lies the GB10 Grace Blackwell Superchip, delivering an impressive 1 petaflop of AI performance. The system packs 128GB of unified memory and up to 4TB of high-speed NVMe storage, enabling users to run large language models with up to 200 billion parameters locally."
Source:
Starting price is $3,000, and I gather all models have 128 GB RAM/1 PFLOPS GPU.
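A rough sanity check on the 200-billion-parameter claim against the 128 GB of unified memory (my own back-of-the-envelope arithmetic; the 0.5 bytes/parameter figure assumes ~4-bit quantized weights, which is not stated in the quoted specs):

```python
# Back-of-the-envelope: does a 200B-parameter model fit in 128 GB?
# Assumption (not from the announcement): weights quantized to 4 bits
# (0.5 bytes/parameter); FP16 would need 2 bytes/parameter.

def model_size_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (ignores KV cache and activations)."""
    # params_billion * 1e9 params * bytes / 1e9 bytes-per-GB
    return params_billion * bytes_per_param

print(model_size_gb(200, 0.5))  # 4-bit: 100.0 GB -> fits in 128 GB
print(model_size_gb(200, 2.0))  # FP16:  400.0 GB -> does not fit
```

So the 200B figure only works out with aggressive quantization; at FP16 the same box tops out around a 60B-parameter model.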
Bandwidth was not specified. Here's an estimate that it will have 825 GB/s:
"From the renders shown to the press prior to the Monday night CES keynote at which Nvidia announced the box, the system appeared to feature six LPDDR5x modules. Assuming memory speeds of 8,800 MT/s we'd be looking at around 825GB/s of bandwidth."
Source: https://www.theregister.com/2025/01/07/nvidia_project_digits_mini_pc/
I've seen competing claims saying it will be much lower. But 825 GB/s seems strong, so if that's correct, why didn't NVIDIA include it in their announcement along with all the other specs?
Bandwidth comparison, for context:
M4 Max, upper spec (est $4k for M4 Max Studio with 128 GB RAM/1 TB SSD, if such a machine is offered): 546 GB/s.
5080 desktop GPU (MSRP $1k, street price TBD): 960 GB/s.
5090 desktop GPU (MSRP $2k, street price TBD): 1,792 GB/s.
"Pretty sure that article is out of date and wrong information on several fronts."
I assume you're referring to the article in The Register, not the one on Medium. Yeah, good point about 128 not being divisible by 6. Do you have a prediction for NVIDIA's bandwidth? I haven't tried to estimate it myself.
Yeah, the Register article. The lowest it could be is a 256-bit bus using 4x32GB modules. That would put it around M4 Pro/Strix Halo bandwidth (~270-300GB/s, depending). If they really are using six modules, as @Yoused said, in a 2x32 + 4x16 configuration, then that's a 384-bit bus and it would be similar to the binned M4 Max, about 400GB/s. If using 8 modules, like 8x16, then it would be similar to the full M4 Max, roughly 550GB/s. They could go higher than that too, but it would require many more, smaller modules. I don't remember how small the LPDDR modules go, but eventually you run into issues where the smallest RAM you can offer is pretty large (the full M4 Max starts at 48GB - 8x6GB modules). Then again, they may be planning on offering only a 128GB variant.
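The module-count scenarios above come down to simple arithmetic: peak bandwidth = total bus width (bits) x transfer rate (MT/s) / 8. A quick sketch, assuming each LPDDR5x package contributes a 64-bit channel and runs at 8,800 MT/s (both assumptions, not confirmed specs):

```python
# Rough LPDDR5x bandwidth estimates for the module counts discussed above.
# Assumptions (not confirmed): 64-bit bus per package, 8,800 MT/s.

def bandwidth_gbs(modules: int, bits_per_module: int = 64, mt_s: int = 8800) -> float:
    """Peak bandwidth in GB/s: total bus bits * transfers/s / 8 bits-per-byte."""
    return modules * bits_per_module * mt_s / 8 / 1000

for n in (4, 6, 8):
    print(f"{n} modules ({n * 64}-bit bus): {bandwidth_gbs(n):.1f} GB/s")
# 4 modules (256-bit bus): 281.6 GB/s  (~M4 Pro / Strix Halo territory)
# 6 modules (384-bit bus): 422.4 GB/s  (~binned M4 Max)
# 8 modules (512-bit bus): 563.2 GB/s  (~full M4 Max)
```

Note that a straight six-package, 384-bit read gives ~420 GB/s, well short of The Register's 825 GB/s estimate, which is one reason to be skeptical of that number.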
"If, for some inexplicable reason, you put in two 32s and four 16s, that would be six for 128."
Aye, I was just about to write that. While that would be odd, I can't think of a reason why it wouldn't work.
"So if User A has AI turned off, but sends a message to User B who has it turned on, then AI can see what A sent."
The ability to capture text and images from bitmaps was already well understood long before the "AI" craze started.