This is one problem with AI/ML as an industry rather than as a technology used by industry: there's not much of a moat. The techniques are expensive to develop, but once they're known it's cheap to train new folks in them. It doesn't help that OpenAI is chasing AGI and that's where they want to invest, while DeepSeek is focused on making what's already available today cheaper (which is comparatively easy).
AI is more akin to the mathematical side of 3D graphics. The techniques spread quickly; the bit that slows everything down is building the hardware to run it all on as the models scale in size. The irony here is that Nvidia should actually do well, considering DeepSeek-V3 is still quite a large model. Much like in 3D graphics, Nvidia's moat is in the implementation of its acceleration hardware, not the techniques themselves. I wonder if Nvidia getting slammed is more about the export restrictions (i.e., not being able to sell the best shovels to the folks behind DeepSeek) than about missing some sort of boat here.