A lot of this depends on how well interfaces evolve. Will some LLM like ChatGPT make it possible to interact reliably with an interface without heavy reliance on keyboard, mouse, or touch? If so, I could see wearables taking off, so long as people don’t go mad with everyone chattering at their devices, or waving their hands around in front of themselves trying to use gesture-based inputs.
I think this is inevitable, and if the device is mixed reality it can see your hands anyway, so you can use them to gesture at on-display widgets.
Also, I don’t necessarily think people will be wearing a headset all day every day. It will be part of a set of wearable gear - the watch being the start. And no, wearables won’t be all things to all people, but the majority of people both in and outside of work don’t necessarily need a desktop or a laptop. They need access to reference information, communications, and ways to perform calculations.
None of these things necessarily requires the desktop UI paradigm, and if they can be done “out in the field,” so much the better for people who don’t have huge compute needs.
Things like ChatGPT will certainly help here. They enable things like “Hey XXX, how much money did we spend on safety equipment last year per new hire,” or “Hey XXX, please pull up the repair manual for this truck,” or “Hey XXX, what component am I looking at?”
Huge numbers of people are desk-bound working on reporting for the typical enterprise, and if the data is all accessible in database form, an AI can simply build queries or full reports against it for you on the fly, without the end user needing to know how to build them.
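The “AI builds the query for you” idea is basically text-to-SQL: hand the model the schema plus the user’s question, get back SQL, run it. Here’s a minimal sketch of that loop against SQLite. The table, column names, and the canned model response are all made up for illustration; in a real system the canned string would be replaced by an actual LLM API call.

```python
import sqlite3

# Hypothetical schema the assistant would be given as context.
SCHEMA = "CREATE TABLE purchases (item TEXT, category TEXT, amount REAL, year INTEGER);"

def text_to_sql(question: str, schema: str) -> str:
    """Turn a natural-language question into SQL.

    A real implementation would send `prompt` to an LLM; here a canned
    response stands in for the model's output, purely for illustration.
    """
    prompt = f"Schema:\n{schema}\nQuestion: {question}\nSQL:"
    return ("SELECT SUM(amount) FROM purchases "
            "WHERE category = 'safety equipment' AND year = 2024;")

# Toy in-memory database standing in for the enterprise data store.
conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?, ?)",
    [("hard hats", "safety equipment", 1200.0, 2024),
     ("gloves", "safety equipment", 300.0, 2024),
     ("paper", "office supplies", 50.0, 2024)],
)

sql = text_to_sql("How much did we spend on safety equipment last year?", SCHEMA)
(total,) = conn.execute(sql).fetchone()
print(total)  # 1500.0
```

The end user never sees the SQL; they just ask the question and get the number (or a full report rendered from the result set) back.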