Talk:Complex instruction set computer


Present tense?

The tenses are a bit odd here. Maybe they should be unified to present tense unless past tense is actually appropriate? —Preceding unsigned comment added by Qbeep (talk • contribs) 01:26, 9 April 2009 (UTC)[reply]

Date

Is the date ("1994-10-10") at the bottom of the page really necessary? It is the date from the FOLDOC entry that this article is based on, I think. I don't see why it is still relevant. I'm going to remove it, but if it's serving some sort of good purpose, go ahead and put it back. James Foster 09:57, 24 Nov 2004 (UTC)

References?

"For example, on one processor it was discovered that it was possible to improve performance by not using the procedure call instruction but using a sequence of simpler instructions instead."

Exactly what processor is the author referring to? Or did he just make that up? REFERENCES!!!

The processor was probably the VAX, and the call instruction was probably either CALLG or CALLS. I remember this being discussed ages ago in the early days of modern RISC. Unfortunately, I don't have a reference right now - either it's not available to Google or it's buried in the welter of pages that match a search for "vax performance callg calls jsr" - but I suspect a reference can be found. I'll keep looking (or you, or somebody else, might find it). Guy Harris 07:08, 1 February 2006 (UTC)[reply]
I found a similar example with the VAX-11/780 index instruction being slower than an equivalent, simpler instruction sequence. See: http://www.pattosoft.com.au/jason/Articles/HistoryOfComputers/1970s.html (third to last paragraph). This is also mentioned in the book "Computer Organization & Design" by Patterson and Hennessy, 1st Edition (page 350). If you can't find the reference for your original example, perhaps you can use this and link to the webpage I mentioned. Rogerbrent 18:40, 1 February 2006 (UTC)[reply]

I don't know. I added Tanenbaum's Structured Computer Organization, the book I used in my assembly class 2 years back, as the section on pages 58–59 supports most of this article. Flying Bishop (talk) 15:49, 9 May 2009 (UTC)[reply]

CDC 6600 a CISC machine?

This article gives it as an example of a CISC processor, but the CDC 6600 page says "The basis for the 6600 CPU is what we would today refer to as a RISC system, one in which the processor is tuned to do instructions which are comparatively simple and have limited and well defined access to memory." It did, as I remember, have fixed-length instructions, only simple addressing modes, and a load/store architecture, which sounds more RISCy than CISCy; how does it qualify as a CISC processor? Guy Harris 19:33, 19 February 2006 (UTC)[reply]

No. It does not qualify as a CISC processor. I removed the reference.--Wws 18:08, 30 October 2006 (UTC)[reply]

Dave Patterson (and John Hennessy) cite the 6600 as a precursor to modern RISC. See the referenced note paper (Dave said to do this). But other later CDC Cyber series, say 203 and 205, might be considered CISC or even VCISC. 143.232.210.150 (talk) 22:32, 23 March 2012 (UTC)[reply]

Goals and Differences

The nature of RISC is not only that it uses simple instructions but that it keeps the chip simple. CISC makes no effort to simplify the chip, going for speed and capability, and using simplicity only when it is faster.

RICH

In the development of architectures, the term RICH meant Rich Instruction CHip, implying individual machine instructions were potentially extremely powerful, sometimes competing with compiler instructions. In the 1980s, CISC supplanted RICH in the trade press, although I consider RICH a more descriptive (and more clever) acronym than CISC.

This deserves mention within the CISC article, although I don't believe RICH deserves a separate article.

--UnicornTapestry (talk) 23:53, 6 December 2007 (UTC)[reply]

[citation needed] Guy Harris (talk) 00:19, 7 December 2007 (UTC)[reply]

CISC vs RISC

Not all CISCs are microcoded or have "complex" instructions (compared to a Z80, the MIPS's 32-bit divide or any RISC floating-point instructions are extremely complex) and it's not the number of instructions nor the complexity of the implementation or of the instructions themselves that distinguish a CISC from a RISC, but the addressing modes and memory access. CISC is a catch-all term meaning anything that's not a load-store (RISC) architecture. A PDP-10, a PDP-8, an Intel 386, an Intel 4004, a Motorola 68000, a System z mainframe, a Burroughs B5000, a VAX, a Zilog Z80000, and a 6502 all vary wildly in the number, sizes, and formats of instructions, the number, types, and sizes of registers, and the available data types. Some have hardware support for operations like scanning for a substring, arbitrary-precision BCD arithmetic, or computing an arctangent, while others have only 8-bit addition and subtraction. But they are all CISC because they have "load-operate" instructions that read from memory and perform a calculation at the same time. The PDP-8, having only 8 fixed-length instructions and no microcode at all, is a CISC because of how the instructions work (for example, fetching from memory and computing an addition at once), but PowerPC, which has over 230 instructions (more than some VAXes) and complex internals like register renaming and a reorder buffer is a RISC. This Minimal CISC has 8 instructions, but is clearly a CISC because it combines memory access and computation in the same instructions. 76.205.121.173 (talk) 08:51, 30 May 2011 (UTC)[reply]
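The load-operate vs. load/store distinction drawn above can be made concrete with a toy sketch. This is purely illustrative; the mnemonics, register names, and address are invented, not taken from any real ISA in the discussion:

```python
# Toy machine state, to illustrate the load-operate vs. load/store
# distinction. All mnemonics here are invented for illustration.
mem = {0x10: 5}
regs = {"r1": 7, "r2": 7, "tmp": 0}

# CISC-style load-operate: a single instruction both reads memory
# and performs the addition.
def add_from_mem(reg, addr):           # e.g. ADD r1, [addr]
    regs[reg] += mem[addr]

# RISC-style load/store: only LOAD (and STORE) touch memory;
# ADD works purely on registers.
def load(reg, addr):                   # LOAD tmp, [addr]
    regs[reg] = mem[addr]

def add(dst, src1, src2):              # ADD r2, r2, tmp
    regs[dst] = regs[src1] + regs[src2]

add_from_mem("r1", 0x10)               # one CISC instruction...
load("tmp", 0x10)                      # ...vs. a two-instruction
add("r2", "r2", "tmp")                 # RISC sequence, same effect

assert regs["r1"] == regs["r2"] == 12
```

Under this framing, the number of opcodes and the presence of microcode are irrelevant; what matters is whether the first style of instruction exists at all.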

I agree fully, but read the "definition" in the intro (last major edit by myself). I cannot see that it contradicts your points in any way really. However, I find your text well put, almost fit for inclusion in the article already as it stands here. 83.255.33.7 (talk) 00:04, 3 June 2011 (UTC)[reply]

Most of the discussion on this page is about historical machines, but shouldn't there be something about how CISC was succeeded by RISC because of the emphasis on pipelining for efficiency, the failure of compiler writers to generate machine code that actually utilized the more complicated CISC instructions, and that the CISC architectures violated Amdahl's law in terms of the biggest bang for the buck? — Preceding unsigned comment added by 173.66.233.28 (talk) 05:49, 30 December 2013 (UTC)[reply]

Given that the primary instruction set architecture for desktop and laptop personal computers, and two of the significant instruction set architectures for servers, are CISC, I'm not sure it was fully "succeeded" by RISC, although the primary instruction set architecture for smartphones and tablets is RISC, and a lot of embedded computing uses various RISC architectures.
The section "The RISC idea" does mention pipelining; the point about compilers not using some aspects of CISC is mentioned in the "Hardware utilization" section of the reduced instruction set computing article.
Amdahl's law doesn't seem to say anything about bang-for-the-buck; it discusses the speedup available for a particular program from parallelizing it (which applies to CISC or RISC).

Instead of saying that CISC was succeeded, it would have been more accurate to have said that CISC was succeeded by RISC in the development of modern architectures, and that legacy architectures stopped executing CISC instructions directly and started breaking up the CISC instructions into RISC "micro-operations" as part of their execution. The Intel x86 and the IBM System/360 architectures fall in this category. The presence of these legacy architectures in desktops, servers, and mainframes is true, but I think that they are RISC systems that have a preprocessor in order to support CISC legacy code. Conceptually speaking, CISC is not a competitor to RISC for the reasons stated above. Amdahl's Law is very important in pipelining, but its general form says that the maximum expected improvement to an overall system is constrained when only part of the system is improved. Thus devoting logic on a chip to CISC instructions is a poor choice when they are seldom used (due to compilers) and they cannot be pipelined (due to widely varying execution times). Basically I think the section fails to mention that RISC won the CISC/RISC war. — Preceding unsigned comment added by 151.151.16.22 (talk) 19:42, 3 January 2014 (UTC)[reply]

Given that programmers can't write uop code for x86 or z/Architecture, and compilers can't write uop code for x86 or z/Architecture, the "legacy" architectures are still relevant.
As for how RISCy the micro-operations are, note that, with micro-operation fusion, the micro-operations aren't quite as micro; that page speaks of combining a compare instruction and a conditional branch into a single micro-op and of combining the load and add micro-ops of ADD [mem], EAX into a single micro-op. The latter is a bit of a move away from the load-store architecture aspect of RISC.
So the only way in which RISC "won" is that the units of dispatch, scheduling, and execution in modern processors are simpler than some of the instructions in current CISC processors; the units of generated code, however, are still CISC in those processors, even if compilers only use some of the CISCy parts (memory-register and register-memory arithmetic, double-indexing in memory operands, maybe CISCier procedure calls in some cases, maybe decimal and string instructions on z/Architecture or REP/xxx instruction pairs on x86) and ignore the other CISCy parts (which don't get a lot of transistors allocated to them), and even some of the units of dispatch, scheduling, and execution might combine a memory reference and an arithmetic op (micro-operation fusion).
The only RISC ISA that "won", for general-purpose computing, to the extent of displacing competitors or keeping them out in the first place is ARM (not a lot of Atom smartphones or tablets out there); the others lost in the desktop/laptop market (it'll be interesting to see whether ARM comes back there) and are fighting it out with x86-64 and z/Architecture in the server market. The others lost in the desktop/laptop market largely because Intel (and, to a lesser extent, AMD) had the money to throw transistors at decoders that turned x86 instructions into uop sequences; devoting logic on a chip to doing that is a very good choice if it means that you keep PowerPC, MIPS, SPARC, and PA-RISC out of a lucrative market.
Another way to think of it is that the first "C" of "CISC" got split into "the stuff that we need to make go fast, because programmers and compilers use it a lot" and "the stuff that's not used enough, so it just has to work, not go fast", with the former stuff made to "go fast" with techniques such as breaking it into uops, and the latter stuff left around, but with the fraction of the chip used to implement it getting smaller over time. That split is a win for some of the ideas that motivated RISC, but with the "reduction" process not, for example, requiring a load-store architecture. Guy Harris (talk) 22:29, 3 January 2014 (UTC)[reply]
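The "crack CISC instructions into RISC-like micro-operations" idea that runs through this exchange can be sketched in a few lines. The instruction encoding, uop names, and temporary register are all invented for illustration; real decoders are vastly more involved:

```python
# Sketch of cracking a memory-operand CISC instruction into
# simpler micro-ops. Encodings and uop names are invented.
def decode(insn):
    """Split a load-operate instruction into load + add micro-ops."""
    op, dst, src = insn
    if op == "ADD_MEM":                # e.g. ADD EAX, [mem]
        return [("LOAD", "u0", src),   # uop 1: the memory read
                ("ADD", dst, "u0")]    # uop 2: the register add
    return [insn]                      # register-only ops pass through

uops = decode(("ADD_MEM", "eax", 0x20))
assert uops == [("LOAD", "u0", 0x20), ("ADD", "eax", "u0")]
```

Micro-operation fusion, as described above, is the reverse step: an implementation may re-combine such a load uop and add uop back into one unit of scheduling when that is cheaper.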

instructions or operations?

Seems to me that in place of "instruction", the CISC article should generally be using the term "opcode" or "operation specified by an opcode". Not all machines have only one opcode per instruction, and it seems to me that the mere fact that an architecture allows for multiple opcodes per instruction should not force it to be labeled as CISC. In other words, if load, add, and store require separate opcodes, the machine should be labeled as RISC, even if the architecture allows all three of those opcodes to appear together in the same instruction. So I would propose wording like this:

a computer in which a single operation (dictated by a single opcode) can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single operations.

or, how about this more concise statement:

a computer in which a single operation (dictated by a single opcode) can execute a load from memory, an operation on the loaded data, and a store of the result.

Encyclopedant (talk) 18:55, 25 August 2011 (UTC)[reply]

Ok, what are your definitions of "operation", "opcode", and "instruction" here? Is an "opcode" what is specified by the Wikipedia page:
In computer science, an opcode (operation code) is the portion of a machine language instruction that specifies the operation to be performed.
or is it something else? Is an "operation" something specified by an opcode, or is it a low-level operation? Guy Harris (talk) 00:25, 26 August 2011 (UTC)[reply]
See if the above rewrite of my note is clearer Encyclopedant (talk) 07:30, 6 September 2011 (UTC)[reply]
You still speak of instructions with multiple opcodes. Please give an example of this. Guy Harris (talk) 17:33, 6 September 2011 (UTC)[reply]
Are you thinking of VLIW? Guy Harris (talk) 21:36, 7 September 2011 (UTC)[reply]
I am not at liberty to give an example for now. But what if there were a VLIW that has the properties I'm positing? Encyclopedant (talk) 00:27, 15 September 2011 (UTC)[reply]
So presumably if and when this new machine finally comes out, the opcode and instruction (computer science) pages will be updated to take it into account?
"a computer in which a single operation (dictated by a single opcode) can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single operations" uses "operation" in two separate senses, so it's potentially confusing. "a computer in which a single operation (dictated by a single opcode) can execute a load from memory, an operation on the loaded data, and a store of the result" is better, although it seems to imply that, to be a CISC processor, you have to be able to do a load and a store in a single machine operation, and a lot of CISCs don't do that. I might be tempted to just change the opening sentence to say "where single machine operations (specified by a single opcode) can execute several low-level operations ... and/or are capable of multi-step operations or addressing modes within single machine operations", i.e. just replace "instruction" by "machine operation" and, the first time "machine operation" is used, note that a "machine operation" is what's specified by a single opcode. Guy Harris (talk) 18:56, 15 September 2011 (UTC)[reply]

Requested move 19 May 2017

The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review. No further edits should be made to this section.

The result of the move request was: enacted as a technical request mirroring Reduced instruction set computer — Andy W. (talk) 16:27, 24 May 2017 (UTC)[reply]


Complex instruction set computing → Complex instruction set computer – For the same reasons as those presented at Talk:Reduced instruction set computing#Requested move 10 May 2017. 50504F (talk) 04:26, 19 May 2017 (UTC)[reply]


The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.

Many dubious and unsourced claims

This article is mostly personal opinion and original research, rather than information from reliable sources. For example, the article equates RISC with load-store architecture, which doesn't match published RISC definitions. This ahistorical definition then leads to strange conclusions such as the Intel 4004 is a CISC. While it would be nice to have an easy definition where combining arithmetic and memory access = CISC, that's not the case. I could be WP:BOLD and delete all the uncited statements, but there wouldn't be much left, so I'm encouraging people to cite reliable sources. KenShirriff (talk) 02:39, 24 June 2023 (UTC)[reply]

Is RISC currently being used in any technical sense other than "load/store architecture"? (It is often used as a marketing term, as in "CISC: tired; RISC: wired", but that's another matter.)
"Doesn't use microcode" isn't that meaningful a definition of "RISC" any more. The Honeywell 6000 series was hardwired but had some Pretty Complex Instructions, especially in machines with the EIS box, and the IBM System/360 Model 75 was hardwired and implemented the full S/360 instruction set, complete with decimal arithmetic, ED/EDMK, TRT, etc. Furthermore, I'm not sure any current z/Architecture processors have anything that would be considered "microcode" - they have millicode, but that is, as I understand it, closer to PALcode, and apparently also have "i370"/"i390" code, which is somewhere above millicode but still used to implement some architectural features; both of them are subsets of z/Architecture machine code, with millicode being able to execute special chip-dependent instructions to peform certain functions. The Intel 80486 executed some instructions directly in hardware; in the Pentium Pro and later, I have the impression that the microcode engine generates micro-operations that go into the same scheduler as the micro-operations generated by the instruction decoder.
"All instructions take one clock tick" doesn't work any more unless you have not only a one-cycle combinatorial multiplier but a one-cycle combinatorial divider, given that many "RISC" instruction sets have integer multiply and divide and floating-point multiply and divide instructions.
"Fixed-length instructions" may work better, but at least some architectures that are called "RISC" have compressed instructions (Thumb/Thumb-2/T2, whatever they're called in Power ISA, whatever RISC-V calls it).
"No complex instructions" might work, but requires a definition of "complicated", and S/360, for example, may have had ED/EDMK (probably the most complicated instruction, both from the description and from the "is this just an attempt to turn some programming language construct into a single instruction?" notion), but its procedure call instructions BAL/BALR are rather close to the ones most RISC processors have (stuff next instruction PC into a register and jump; leave it up to whoever or whatever generates the code for the called instruction to decide what else to do). "No complex addressing modes" is similar, unless you consider double-indexing, present in both x86 and S/3x0, "too complex for RISC".
And as for "CISC", as the article notes, it was coined retroactively, pretty much meaning "not RISC"; if "RISC" is interpreted sufficiently narrowly, "CISC" would then cover a rather wide range.
So, yes, if there are widely-accepted (with sources to demonstrate the wide acceptance) definitions of RISC and CISC, that'd work, but, absent that, I'm not sure what could be done here. Guy Harris (talk) 05:27, 24 June 2023 (UTC)[reply]
And Patterson and Ditzel's "The Case for the Reduced Instruction Set Computer", which may have been the origin of the "RISC" and "CISC" terms, doesn't appear to offer firm definitions of either term. It offers individual examples of complexity, but no broad definition of "complexity" or reduction of same. (If somebody were to look at VAX and then at post-Advanced Function announcement S/370 (post-Advanced Function so they both have paged MMUs), and don't look at the S/370 I/O instructions, they might well conclude that the latter has less complexity than the former - much simpler procedure call instructions, fewer addressing modes and none that modify registers, and even simpler version of the decimal arithmetic/string processing/"this is for doing COBOL and PL/I PICTUREs" instructions.)
So that paper could be considered a reliable source, but not a source very useful for the goal of clearly defining RISC or CISC.
And the 6th edition of a book John Hennessy co-wrote with another researcher :-) doesn't, as I noted in this edit, use the terms "RISC" or "CISC", so it may be more of a case of "RISC: tired, load-store architecture: wired" and "CISC: tired, non-load-store architecture: wired" now. Guy Harris (talk) 06:36, 24 June 2023 (UTC)[reply]
Yes, that's the point. RISC and CISC were vague terms in the 1980s and 1990s with multiple contradictory definitions. Since then, advances in computer architectures have made the terms less relevant and the definitions mostly meaningless.
As Steve Furber (co-designer of ARM) said in VLSI RISC Architecture and Organization, "A Reduced Instruction Set Computer (RISC) is a member of an ill-defined class of computing machines. The common factor which associates members of the class is that they all have instruction sets which have been optimized more towards implementation efficiency than members of the alternative class of Complex Instruction Set Computers (CISCs), where the optimization is towards the minimization of the semantic gap between the instruction set and one or more high-level languages."
On the other hand, Blaauw and Brooks say in Computer Architecture, "An architecture in which most, if not all, operations can be implemented in a single datapath action and that has few constructs is called a reduced instruction-set computer (RISC). Early examples are STC ZEBRA, DEC PDP8, and first generation microprocessors such as the Intel 8008 and Motorola 6800."
And then you have the extremely quantitative definitions such as Tabak in RISC Architecture:
1. Relatively low number of instructions, desirably less than 100
2. Low number of addressing modes, desirably 1 or 2
3. Low number of instruction formats, desirably 1 or 2, all of the same length
4. Single cycle execution of all instructions
5. Memory access performed by load/store only
6. Relatively large register set, over 32, most operations register-to-register
7. Hardwired control unit (may be microprogrammed as technology develops)
8. Effort to support High Level Language operations
Thus, one has to accept that RISC and CISC never had nice, clean definitions and describe that with a WP:NPOV rather than trying to invent the One True Definition.
KenShirriff (talk) 18:03, 24 June 2023 (UTC)[reply]
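Tabak's checklist lends itself to a quick sketch of how a hypothetical ISA description might be scored against it. The dictionary fields and the example figures are invented for illustration, and criterion 6 ("over 32") is read loosely as "at least 32":

```python
# Score a hypothetical ISA description against Tabak's checklist.
# Field names and example values are invented for illustration.
def tabak_risc_score(isa):
    checks = [
        isa["instructions"] < 100,         # 1. few instructions
        isa["addressing_modes"] <= 2,      # 2. few addressing modes
        isa["instruction_formats"] <= 2,   # 3. few formats
        isa["single_cycle"],               # 4. single-cycle execution
        isa["load_store_only"],            # 5. load/store memory access
        isa["registers"] >= 32,            # 6. large register set (loose reading)
        isa["hardwired_control"],          # 7. hardwired control unit
    ]
    return sum(checks)                     # criteria satisfied, out of 7

risc_like = {"instructions": 64, "addressing_modes": 1,
             "instruction_formats": 3, "single_cycle": True,
             "load_store_only": True, "registers": 32,
             "hardwired_control": True}
assert tabak_risc_score(risc_like) == 6    # misses only criterion 3
```

The exercise mostly shows why such quantitative definitions aged badly: a machine can miss individual criteria (here, the number of formats) while everyone still calls it a RISC.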
BTW, your post on the 960 pointed to this sequence of John Mashey comp.arch posts, which is somewhat relevant here. Mashey seems to lean towards "CISC means 'not RISC'", and thinks talking about many older CPUs as "CISC" or "RISC" isn't a useful exercise.
That's a sequence from 1992, from an era before the "throw a bunch of simple operations into a bucket and run them superscalar and out-of-order - which may involve chopping complex instructions into multiple simple operations" stuff, so it'd be interesting to see what he'd say now.
And the two CISC ISAs he seemed to put in the "relatively simple, as CISCs go" bucket 1) are the two remaining CISCs from that paper, 2) have a ton of money behind them (PC x86 processors and IBM mainframes), and 3) appear now to have implementations in the "throw a bunch of simple operations into a bucket and..." camp. Guy Harris (talk) 22:53, 2 July 2023 (UTC)[reply]

Combine CISC page with RISC?

I've been thinking that it would make sense to merge the CISC page into the RISC page. The problem is that the RISC and CISC pages have a lot of overlap and mostly cover the same history and information, so they are largely redundant (when they aren't contradictory). As WP:OVERLAP says, "Remember, that Wikipedia is not a dictionary; there does not need to be a separate entry for every concept. For example, "flammable" and "non-flammable" can both be explained in an article on flammability."

I'm not saying that CISC is unimportant, of course. But since CISC is essentially defined in opposition to RISC, you can't really discuss one without the other. I think that combining the pages would improve both of them. Comments? KenShirriff (talk) 20:15, 4 December 2023 (UTC)[reply]