Apple introduced a new version of Xcode with support for autonomous coding via transformer models (TMs).
Previous versions supported basic requests to a TM through a chat interface. The new release goes further: TM-driven agents can now write code step by step inside Xcode, with access to different components of Xcode and the OS, so they can carry out partial or even full app development.
Out of the box, without signing in, you get access to older models. Signing in with an account from one of the model providers unlocks newer models, and if you also edit the configuration files, you can unlock the newest ones.
You can drive it entirely through the chat interface, or by uploading images of what you want built.
This is an Apple-led effort, built in coordination with the model companies. What's very cool are the Apple-fied parts of it. For example, when you ask it to change something, it writes the code, then automatically takes screenshots of the simulator and evaluates whether the change actually worked. It's super cool, and very Apple-fied.
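I have no inside knowledge of how Apple wired this loop up, but its shape is easy to sketch. Here's a rough Python sketch of a generate → build → screenshot → judge cycle using the real xcodebuild and simctl tools; ask_model, apply_patch, the scheme name, and the simulator name are placeholders I made up for illustration, not anything Apple ships.

```python
# Hypothetical sketch of a code -> build -> screenshot -> verify loop.
# xcodebuild and "xcrun simctl io booted screenshot" are real Apple CLI calls;
# ask_model(), apply_patch(), the scheme, and the simulator name are
# placeholders for illustration only.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True)

def ask_model(prompt, image_path=None):
    # Placeholder: send the prompt (and optionally a screenshot) to whatever
    # model backend you're using and return its reply as text.
    raise NotImplementedError

def apply_patch(patch):
    # Placeholder: write the model's proposed edit into the project sources.
    raise NotImplementedError

def agent_step(change_request):
    # 1. Ask the model to write the code change and apply it.
    apply_patch(ask_model(f"Write the Swift change for: {change_request}"))

    # 2. Build for the simulator.
    build = run(["xcodebuild", "-scheme", "MyApp",
                 "-destination", "platform=iOS Simulator,name=iPhone 16",
                 "build"])
    if build.returncode != 0:
        return ask_model(f"The build failed:\n{build.stderr}\nFix the code.")

    # 3. Screenshot the booted simulator.
    run(["xcrun", "simctl", "io", "booted", "screenshot", "after.png"])

    # 4. Let the model judge the screenshot against the original request.
    return ask_model(f"Does this screenshot show: {change_request}?",
                     image_path="after.png")
```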
But I created this post to criticize John Gruber, sorry. After a while of writing nothing, the only thing he said was that "it's super-duper interesting that Apple released this instead of at next wwdc," and from the context of his "work," I presume he is implying that Apple is somehow scrambling or behind. But they're simply doing the Apple thing: announce when ready. Gruber released a hysterical rant online about Apple pre-announcing features, and in response Apple took that feedback and course-corrected. Now he's upset that they didn't pre-announce this feature at WWDC, and instead waited, developed it fully, and shipped it 100% done. Users have noticed how well it works. It's very "just works," aside from the usual "it just doesn't work" nature of TMs.
Finally, Apple vertically integrated this entire feature. Because of their work democratizing access to TMs, users can run trillion-parameter models ON DEVICE and in full coordination with Xcode. This means you can use autonomous coding locally. Only Apple can deliver this: the "just works" support of models, unique user-facing features like the code-then-verify feedback loop, the performance to run trillion-parameter coding models locally, and the unified memory and silicon that make it all possible; all with a UI that is beautiful, dimensional, and futuristic -- Liquid Glass!
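To be concrete about the on-device part: MLX's mlx-lm package already lets you load a quantized open-weights model and generate from it locally on Apple silicon. A minimal sketch, assuming you've installed mlx-lm and picked a quantized conversion that fits in your Mac's unified memory (the repo ID below is just an example; swap in whatever coding model your machine can hold):

```python
# Local generation with mlx-lm (pip install mlx-lm) on Apple silicon.
# The repo ID is illustrative; use a quantized conversion that fits
# in your Mac's unified memory.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")

messages = [{"role": "user",
             "content": "Write a SwiftUI view that lists favorite books."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```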
Codex and Opus are the two cloud models they feature, but you can connect Gemini or any other cloud model, plus LOCAL models like K2.5.
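One way to wire up a local model, assuming the provider settings accept an OpenAI-compatible endpoint (which is how most of these integrations work): mlx-lm ships a small server that exposes /v1/chat/completions, so you can run it and point any OpenAI-style client at it. The port and model ID here are just examples:

```python
# Start mlx-lm's OpenAI-compatible server first, e.g.:
#   mlx_lm.server --model mlx-community/Qwen2.5-Coder-32B-Instruct-4bit --port 8080
# Then any OpenAI-style client (or a provider setting that takes a custom
# URL) can talk to http://localhost:8080/v1.
import json, urllib.request

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "messages": [{"role": "user",
                      "content": "Name three SwiftUI layout containers."}],
        "max_tokens": 100,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```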
Watch it in action in Apple's demo as it navigates Apple documentation to properly support the UI
From Rudrank, who is a popular member of the MLX community