Intel to lay off thousands

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,296
Reaction score
8,454

1) “thousands?” So, enough people at Intel to design a single chip?

2) Apple was the first shoe to drop. x86 is going to be relegated to smaller and smaller portions of the computing market as time marches on. Intel will eventually have to adopt some sort of RISC, but unless and until they get their fabs in order they would be competing against everyone else but without any advantage - their designs are not good, their fabs are not good, and they won’t have instruction set lock-in.

Expect them to do something stupid like invent a new ISA, throw an x86 on the corner of the die, and try to transition everyone to a new type of lock-in.
 

exoticspice1

Site Champ
Posts
298
Reaction score
101
Good thing AMD is doing fine to keep x86 alive and well. x86 is not going anywhere; too many games, business apps, and even huge orgs need it.
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,296
Reaction score
8,454
Good thing AMD is doing fine to keep x86 alive and well. x86 is not going anywhere; too many games, business apps, and even huge orgs need it.

that’s short-term thinking. “Business apps” are, more and more, cloud-based. The ones that aren’t can easily be ported to any other architecture.

as for games, that’s not going to be enough of a market to sustain all those hundreds of thousands of Intel+AMD employees. For every gaming desktop there are dozens of dull Dell boxes sitting on employees’ desks. Those machines aren’t going to be x86 in a decade.
 

Yoused

up
Posts
5,594
Reaction score
8,881
Location
knee deep in the road apples of the 4 horsemen
I still find it baffling that Intel designed the 386 to look exactly like the 8086, with enhancements. Going from 16- to 32-bit was a golden opportunity for them to switch to a new 16-bit op encoding (with 2 bits of the opcode specifically reserved to indicate op length, ditching the prefixing), maybe add some useful registers, ditch the useless segmenting, and keep the ISA nominally compatible with the 8086 (easily translatable). And since it had a built-in PMMU, they could have added a page flag to prevent 32-bit code from accidentally jumping into legacy 16-bit code. I think they would be in a much better place now if they had had a teensy bit of foresight in the mid '80s.
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,296
Reaction score
8,454
I still find it baffling that Intel designed the 386 to look exactly like the 8086, with enhancements. Going from 16- to 32-bit was a golden opportunity for them to switch to a new 16-bit op encoding (with 2 bits of the opcode specifically reserved to indicate op length, ditching the prefixing), maybe add some useful registers, ditch the useless segmenting, and keep the ISA nominally compatible with the 8086 (easily translatable). And since it had a built-in PMMU, they could have added a page flag to prevent 32-bit code from accidentally jumping into legacy 16-bit code. I think they would be in a much better place now if they had had a teensy bit of foresight in the mid '80s.

Semiconductor companies back then were a little too responsive to their customers’ demands. I assume IBM had their hand in it. Not to mention that, all along, Intel’s strategy has been to leverage their monopoly. If they made the 32-bit instruction set too different, some customers and ISVs might have thought about switching to Motorola :).

Though I’d have to think about whether there was some way to do 32-bit cleanly while still being able to run 8086.
 

Spike

Writer, photographer
Site Donor
Posts
504
Reaction score
2,805
Location
Lisbon, Portugal
Main Camera
Fujifilm
that’s short-term thinking. “Business apps” are, more and more, cloud-based. The ones that aren’t can easily be ported to any other architecture.

as for games, that’s not going to be enough of a market to sustain all those hundreds of thousands of Intel+AMD employees. For every gaming desktop there are dozens of dull Dell boxes sitting on employees’ desks. Those machines aren’t going to be x86 in a decade.
Virtually all cloud computers are x86-based. While there are fewer desktops, there will continue to be more cloud computers.
 

Cmaier

Site Master
Staff Member
Site Donor
Posts
5,296
Reaction score
8,454
Virtually all cloud computers are x86-based. While there are fewer desktops, there will continue to be more cloud computers.
and fewer and fewer of those will be x86-based. Software lock-in is a lot easier to solve when the software is centralized. And cloud facilities are particularly sensitive to power usage and heat. At AMD we started focusing on reducing power consumption not because of mobile, but because of the demands of data farms that wanted to fit as much compute as possible into a given facility (where available electricity and cooling capabilities could not be easily upgraded).
 

Yoused

up
Posts
5,594
Reaction score
8,881
Location
knee deep in the road apples of the 4 horsemen
Though I’d have to think about whether there was some way to do 32-bit cleanly while still being able to run 8086.
I would probably do something like
[WW][.Ext.][....op..][R]
  • WW is the op width (number of additional 16-bit words, 0~3)
  • Ext: opcode extension – 11111 = classic 8086-format op (which must conform to WW)
  • op is the opcode and r/m, combined with Ext if Ext is not 11111
  • R is the reg extension, allowing 8 additional registers for one of the arguments
  • only the LOCK prefix is applicable
  • only 4 registers added: the 16-bit segment registers are part of the extended set

This would simplify the decoder while providing lotsa room for ISA expansion. And like I say, the PMMU that they added could have divided code pages.
 