Sounds like a bunch of mixed-up bullshit to me.
RISC vs. CISC died twenty years ago, back when people actually knew what the terms meant. These days the argument is basically a thin veneer over the I-hate/love-Intel thing, mixed up with aesthetic arguments about the orthogonality of instructions that very few of the people discussing the issue even understand, or ever notice when writing code.
Which x86 CPUs are you talking about when you say that verification takes a lot of resources? The CPUs that sell into the markets that care about think-of-a-number-nines uptime? Or the markets that ARM sells to, where a goldfish-style attention span is good enough?
Lastly - PowerPC. WTF? Seriously? Maybe I'm missing something, can you point at some recent design wins for PPC?
Really, have a browse round the realworldtech.com forum if you haven't found it yet. You might learn a few things (in between the flame wars).
well i found this claim: http://arstechnica.com/information-tech ... comments=1
b) Itanium was DELIBERATELY made insanely complicated. Intel started with a technical philosophy akin to RISC (although the details were different), namely "let's move all the complexity associated with superscalar execution from the CPU into the compiler, and simply offer a 'language' by which the compiler can indicate to the CPU the superscalar details". But then some bright MBA decided that a chip that was simple for Intel to design would be easy for anyone else to design, and so one bizarre item after another was added to complexify the chip. The end result was something every bit as complicated to design as an x86 --- but with a thousandth the size of the potential market.
And so no money for a steady stream of annual tick-tock updates (and the MASSIVE pool of design engineers working in parallel to keep the thing competitive). And so IBM (starting with a cleaner slate in the form of POWER, with no demand for weird and pointless complexity) is able to basically remain competitive without untenable design effort.
The relevance is, once again, to Atom. As I've said before, the problem Intel has with Atom is not that the x86 overhead costs a lot of power or a lot of area; it's that it costs so much in terms of design and test manpower. And once again Itanium shows us that, for all Intel's skills, especially in fab, the insane levels of complexity in these chips make it much harder than people imagine to simply crank out a new and improved version. I, for one, find it hard to see how Intel could establish a pipeline manned as aggressively as the desktop pipeline to keep pushing against ARM. But without that level of manpower, all the arguments about how Intel COULD compete with ARM are moot.
The really pathetic thing is that this death by complexity in both cases (Itanium and Atom) was absolutely self-imposed. There was no technical reason for it, and Intel could have got pretty much everything it needed from both products by maintaining an engineering focus that was willing to look forward confidently, rather than backwards in paranoia, and design a much simpler chip. There's an interesting book waiting to be written here about Intel --- the technical side (fabs and architecture), the business side, and how the business side has managed to so warp the engineering side.
basically, coming up with new ARM RISC designs is a lot easier for Samsung, Apple, and Qualcomm than coming up with new Atom designs is for Intel.