Very long instruction word
Very long instruction word (VLIW) refers to instruction set architectures that are designed to exploit instruction-level parallelism (ILP). A VLIW processor allows programs to explicitly specify instructions to execute in parallel, whereas conventional central processing units (CPUs) mostly allow programs to specify instructions to execute in sequence only. VLIW is intended to allow higher performance without the complexity inherent in some other designs.
The traditional means to improve performance in processors include dividing instructions into substeps so the instructions can be executed partly at the same time (termed pipelining), dispatching individual instructions to be executed independently in different parts of the processor (superscalar architectures), and even executing instructions in an order different from the program (out-of-order execution).[1] These methods all complicate hardware (larger circuits, higher cost and energy use) because the processor must make all of the decisions internally for these methods to work.
In contrast, the VLIW method depends on the programs providing all the decisions regarding which instructions to execute simultaneously and how to resolve conflicts. As a practical matter, this means that the compiler (software used to create the final programs) becomes more complex, but the hardware is simpler than in many other means of parallelism.
History
The concept of VLIW architecture, and the term VLIW, were invented by Josh Fisher in his research group at Yale University in the early 1980s.[2] He originally developed trace scheduling as a compiling method for VLIW while a graduate student at New York University. Before VLIW, the notion of prescheduling execution units and instruction-level parallelism in software was well established in the practice of developing horizontal microcode. Before Fisher, the theoretical aspects of what would later be called VLIW were developed by the Soviet computer scientist Mikhail Kartsev[3] based on his 1960s work on the military-oriented M-9 and M-10 computers. His ideas were later developed and published as part of a textbook[4] two years before Fisher's seminal paper, but because of the Iron Curtain, and because Kartsev's work was mostly military-related, it remained largely unknown in the West.
Fisher's innovations involved developing a compiler that could target horizontal microcode from programs written in an ordinary programming language. He realized that to get good performance and target a wide-issue machine, it would be necessary to find parallelism beyond that generally within a basic block. He also developed region scheduling methods to identify parallelism beyond basic blocks. Trace scheduling is such a method, and involves scheduling the most likely path of basic blocks first, inserting compensating code to deal with speculative motions, scheduling the second most likely trace, and so on, until the schedule is complete.
Fisher's second innovation was the notion that the target CPU architecture should be designed to be a reasonable target for a compiler; that the compiler and the architecture for a VLIW processor must be codesigned. This was inspired partly by the difficulty Fisher observed at Yale of compiling for architectures like Floating Point Systems' FPS164, which had a complex instruction set computing (CISC) architecture that separated instruction initiation from the instructions that saved the result, needing very complex scheduling algorithms. Fisher developed a set of principles characterizing a proper VLIW design, such as self-draining pipelines, wide multi-port register files, and memory architectures. These principles made it easier for compilers to emit fast code.
The first VLIW compiler was described in a Ph.D. thesis by John Ellis, supervised by Fisher. The compiler was named Bulldog, after Yale's mascot.[5]
Fisher left Yale in 1984 to found a startup company, Multiflow, along with cofounders John O'Donnell and John Ruttenberg. Multiflow produced the TRACE series of VLIW minisupercomputers, shipping their first machines in 1987. Multiflow's VLIW could issue 28 operations in parallel per instruction. The TRACE system was implemented in a mix of medium-scale integration (MSI), large-scale integration (LSI), and very large-scale integration (VLSI), packaged in cabinets, a technology that became obsolete as it grew more cost-effective to integrate all of the components of a processor (excluding memory) on one chip.
Multiflow was too early to catch the following wave, when chip architectures began to allow multiple-issue CPUs.[clarification needed] The major semiconductor companies recognized the value of Multiflow technology in this context, so the compiler and architecture were subsequently licensed to most of these firms.
Motivation
A processor that executes every instruction one after the other (i.e., a non-pipelined scalar architecture) may use processor resources inefficiently, potentially yielding poor performance. The performance can be improved by executing different substeps of sequential instructions simultaneously (termed pipelining), or even executing multiple instructions entirely simultaneously as in superscalar architectures. Further improvement can be achieved by executing instructions in an order different from that in which they occur in a program, termed out-of-order execution.[1]
These three methods all raise hardware complexity. Before executing any operations in parallel, the processor must verify that the instructions have no interdependencies. For example, if a first instruction's result is used as a second instruction's input, then they cannot execute at the same time and the second instruction cannot execute before the first. Modern out-of-order processors have increased the hardware resources which schedule instructions and determine interdependencies.
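As a minimal sketch, the following C fragment shows the kind of dependence involved; the function and variable names are invented only for the illustration:

/* Read-after-write (RAW) dependence: the second statement consumes the
   result of the first, so the two operations cannot be issued in the
   same cycle no matter how many execution units exist. */
int dependent(int x, int y, int z)
{
    int a = x * y;   /* produces a */
    int b = a + z;   /* consumes a, so it must wait for the line above */
    return b;
}

/* By contrast, no value flows between these two statements, so they
   could occupy two slots of one long instruction word. */
int independent(int x, int y, int w, int z)
{
    int c = x * y;
    int d = w + z;
    return c + d;
}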
In contrast, VLIW executes operations in parallel, based on a fixed schedule, determined when programs are compiled. Since determining the order of execution of operations (including which operations can execute simultaneously) is handled by the compiler, the processor does not need the scheduling hardware that the three methods described above require. Thus, VLIW CPUs offer more computing with less hardware complexity (but greater compiler complexity) than do most superscalar CPUs.[1] This is also complementary to the idea that as many computations as possible should be done before the program is executed, at compile time.
Design
In superscalar designs, the number of execution units is invisible to the instruction set. Each instruction encodes one operation only. For most superscalar designs, the instruction width is 32 bits or fewer.
In contrast, one VLIW instruction encodes multiple operations, at least one operation for each execution unit of a device. For example, if a VLIW device has five execution units, then a VLIW instruction for the device has five operation fields, each field specifying what operation should be done on that corresponding execution unit. To accommodate these operation fields, VLIW instructions are usually at least 64 bits wide, and far wider on some architectures.
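As a rough sketch in C, such an instruction word for a hypothetical five-unit machine might be laid out as below; the slot names and 16-bit slot widths are assumptions made only for the illustration, not a real encoding:

#include <stdint.h>

/* Hypothetical five-slot VLIW instruction word (illustrative only).
   Each slot carries one operation for its execution unit; a slot with
   nothing to do is filled with a NOP encoding.  Real machines use
   varied slot widths and template fields. */
typedef struct {
    uint16_t alu0;    /* integer ALU #0      */
    uint16_t alu1;    /* integer ALU #1      */
    uint16_t fpu;     /* floating-point unit */
    uint16_t mem;     /* load/store unit     */
    uint16_t branch;  /* branch unit         */
} vliw_word;          /* 5 x 16 = 80 bits of operation fields */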
For example, the following is an instruction for the Super Harvard Architecture Single-Chip Computer (SHARC). In one cycle, it does a floating-point multiply, a floating-point add, and two autoincrement loads. All of this fits in one 48-bit instruction:
f12 = f0 * f4, f8 = f8 + f12, f0 = dm(i0, m3), f4 = pm(i8, m9);
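Written out sequentially, the work done by that one instruction corresponds roughly to the C sketch below; note that in the hardware the four operations execute simultaneously and read their inputs as they stood at the start of the cycle, so the add consumes the value of f12 from before the multiply:

/* Approximate C rendering of the 48-bit SHARC instruction above.
   f0..f12 model floating-point registers; i0 and i8 model address
   registers that step by the modifiers m3 and m9 after each access. */
float f0, f4, f8, f12;
const float *i0, *i8;   /* address registers into data and program memory */
int m3, m9;             /* post-increment modifiers */

void sharc_bundle(void)
{
    float old_f12 = f12;   /* parallel semantics: the add sees the old f12 */
    f12 = f0 * f4;         /* floating-point multiply                      */
    f8  = f8 + old_f12;    /* floating-point add                           */
    f0  = *i0;  i0 += m3;  /* post-incrementing load from data memory      */
    f4  = *i8;  i8 += m9;  /* post-incrementing load from program memory   */
}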
Since the earliest days of computer architecture,[6] some CPUs have added several arithmetic logic units (ALUs) to run in parallel. Superscalar CPUs use hardware to decide which operations can run in parallel at runtime, while VLIW CPUs use software (the compiler) to decide which operations can run in parallel in advance. Because the complexity of instruction scheduling is moved into the compiler, complexity of hardware can be reduced substantially.[clarification needed]
A similar problem occurs when the result of a parallelizable instruction is used as input for a branch. Most modern CPUs guess which branch will be taken even before the calculation is complete, so that they can load the instructions for the branch, or (in some architectures) even start to compute them speculatively. If the CPU guesses wrong, all of these instructions and their context need to be flushed and the correct ones loaded, which takes time.
This has led to increasingly complex instruction-dispatch logic that attempts to guess correctly, and the simplicity of the original reduced instruction set computing (RISC) designs has been eroded. VLIW lacks this logic, and thus lacks its energy use, possible design defects, and other negative aspects.
In a VLIW, the compiler uses heuristics or profile information to guess the direction of a branch. This allows it to move and preschedule operations speculatively before the branch is taken, favoring the most likely path it expects through the branch. If the branch takes an unexpected direction, the compiler has already generated compensating code to discard the speculative results and preserve the program's semantics.
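A hand-written, source-level caricature of this transformation is sketched below; a real trace scheduler works on machine operations rather than C statements, and the functions are invented for the example:

/* Original: the store through p happens after the two paths rejoin. */
int original(int a, int b, int likely_flag, int *p)
{
    int r;
    if (likely_flag)   /* profile information says this is usually taken */
        r = a * b;
    else
        r = a + b;
    *p = r;            /* executed on both paths, after the join */
    return r;
}

/* After trace scheduling: the store has been moved up into the likely
   trace so it can be packed alongside the multiply, and a compensating
   copy is inserted on the off-trace path to preserve the program's
   semantics. */
int scheduled(int a, int b, int likely_flag, int *p)
{
    int r;
    if (likely_flag) {
        r = a * b;
        *p = r;        /* scheduled early, on the expected trace */
    } else {
        r = a + b;
        *p = r;        /* compensating code on the rare path     */
    }
    return r;
}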
Vector processor cores (designed for large one-dimensional arrays of data called vectors) can be combined with the VLIW architecture such as in the Fujitsu FR-V microprocessor, further increasing throughput and speed.[citation needed]
Implementations
Cydrome was a company producing VLIW numeric processors using emitter-coupled logic (ECL) integrated circuits in the same timeframe as Multiflow (the late 1980s). This company, like Multiflow, failed after a few years.
One of the licensees of the Multiflow technology is Hewlett-Packard, which Josh Fisher joined after Multiflow's demise. Bob Rau, founder of Cydrome, also joined HP after Cydrome failed. These two would lead computer architecture research at Hewlett-Packard during the 1990s.
Along with the above systems, during the same time (1989–1990), Intel implemented VLIW in the Intel i860, their first 64-bit microprocessor, and the first processor to implement VLIW on one chip.[7] This processor could operate in both simple RISC mode and VLIW mode:
In the early 1990s, Intel introduced the i860 RISC microprocessor. This simple chip had two modes of operation: a scalar mode and a VLIW mode. In the VLIW mode, the processor always fetched two instructions and assumed that one was an integer instruction and the other floating-point.[7]
The i860's VLIW mode was used extensively in embedded digital signal processor (DSP) applications since the application execution and datasets were simple, well ordered and predictable, allowing designers to fully exploit the parallel execution advantages enabled by VLIW. In VLIW mode, the i860 could sustain floating-point performance in the range of 20–40 double-precision MFLOPS, a very high value for its time and for a processor running at 25–50 MHz.
In the 1990s, Hewlett-Packard researched this problem as a side effect of ongoing work on their PA-RISC processor family. They found that the CPU could be greatly simplified by removing the complex dispatch logic from the CPU and placing it in the compiler. Compilers of the day were far more complex than those of the 1980s, so the added complexity in the compiler was considered to be a small cost.
VLIW CPUs are usually made of multiple RISC-like execution units that operate independently. Contemporary VLIWs usually have four to eight main execution units. Compilers generate initial instruction sequences for the VLIW CPU in roughly the same manner as for traditional CPUs, generating a sequence of RISC-like instructions. The compiler analyzes this code for dependence relationships and resource requirements. It then schedules the instructions according to those constraints. In this process, independent instructions can be scheduled in parallel. Because VLIWs typically represent instructions scheduled in parallel with a longer instruction word that incorporates the individual instructions, this results in a much longer opcode (termed very long) to specify what executes on a given cycle.
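As a sketch of the idea, assuming a toy four-slot machine and ignoring latencies, functional-unit classes, and register pressure, a greedy bundler in C might look like the following (all names are invented for the example):

#include <stdbool.h>
#include <stdio.h>

#define SLOTS 4   /* operations per long instruction word (assumed) */

/* One RISC-like operation in the compiler's intermediate sequence. */
typedef struct {
    const char *text;   /* mnemonic, used only for printing */
    int dest;           /* register written, or -1 if none  */
    int src1, src2;     /* registers read, or -1 if unused  */
} op;

/* Conservative dependence test between an earlier and a later operation
   (read-after-write, write-after-write, write-after-read). */
static bool depends_on(const op *later, const op *earlier)
{
    if (earlier->dest != -1 &&
        (later->src1 == earlier->dest ||
         later->src2 == earlier->dest ||
         later->dest == earlier->dest))
        return true;
    if (later->dest != -1 &&
        (later->dest == earlier->src1 ||
         later->dest == earlier->src2))
        return true;
    return false;
}

/* Greedily pack the sequence into words, starting a new word whenever
   the next operation depends on one already placed in the current word
   or the word is full.  Each pair of braces marks one instruction word. */
static void bundle(const op *ops, int n)
{
    int word_start = 0;
    printf("{ ");
    for (int i = 0; i < n; i++) {
        bool conflict = (i - word_start) == SLOTS;
        for (int j = word_start; !conflict && j < i; j++)
            conflict = depends_on(&ops[i], &ops[j]);
        if (conflict) {
            printf("}\n{ ");
            word_start = i;
        }
        printf("%s; ", ops[i].text);
    }
    printf("}\n");
}

int main(void)
{
    op body[] = {
        { "ld r1",          1, -1, -1 },   /* two independent loads   */
        { "ld r2",          2, -1, -1 },
        { "mul r3, r1, r2", 3,  1,  2 },   /* depends on both loads   */
        { "add r4, r4, r3", 4,  4,  3 },   /* depends on the multiply */
    };
    bundle(body, 4);   /* prints three words; only the loads share one */
    return 0;
}

On a real machine, the slots left unused in each word would be filled with NOPs, which is the source of the code bloat discussed under Backward compatibility below.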
Examples of contemporary VLIW CPUs include the TriMedia media processors by NXP (formerly Philips Semiconductors), the Super Harvard Architecture Single-Chip Computer (SHARC) DSP by Analog Devices, the ST200 family by STMicroelectronics based on the Lx architecture (designed in Josh Fisher's HP lab by Paolo Faraboschi), the FR-V from Fujitsu, the BSP15/16[8] from Pixelworks, the CEVA-X DSP from CEVA, the Jazz DSP from Improv Systems, the HiveFlex[9] series from Silicon Hive, and the MPPA Manycore family by Kalray. The Texas Instruments TMS320 DSP line has evolved, in its C6000 family, to look more like a VLIW, in contrast to the earlier C5000 family. These contemporary VLIW CPUs are mainly successful as embedded media processors for consumer electronic devices.
VLIW features have also been added to configurable processor cores for system-on-a-chip (SoC) designs. For example, Tensilica's Xtensa LX2 processor incorporates a technology named Flexible Length Instruction eXtensions (FLIX) that allows multi-operation instructions. The Xtensa C/C++ compiler can freely intermix 32- or 64-bit FLIX instructions with the Xtensa processor's one-operation RISC instructions, which are 16 or 24 bits wide. By packing multiple operations into a wide 32- or 64-bit instruction word and allowing these multi-operation instructions to intermix with shorter RISC instructions, FLIX allows SoC designers to realize VLIW's performance advantages while eliminating the code bloat of early VLIW architectures. The Infineon Carmel DSP is another VLIW processor core intended for SoC. It uses a similar code density improvement method called configurable long instruction word (CLIW).[10]
Outside embedded processing markets, Intel's Itanium IA-64 explicitly parallel instruction computing (EPIC) architecture and the Elbrus 2000 appear as the only examples of widely used VLIW CPU architectures. However, the EPIC architecture is sometimes distinguished from a pure VLIW architecture, since EPIC advocates full instruction predication, rotating register files, and a very long instruction word that can encode non-parallel instruction groups. VLIWs also gained significant consumer penetration in the graphics processing unit (GPU) market, though both Nvidia and AMD have since moved to RISC architectures to improve performance on non-graphics workloads.
ATI Technologies' (ATI) and Advanced Micro Devices' (AMD) TeraScale microarchitecture for graphics processing units (GPUs) is a VLIW microarchitecture.
In December 2015, the first shipment of PCs based on the VLIW Elbrus-4s CPU was made in Russia.[11]
The Neo by REX Computing is a processor consisting of a 2D mesh of VLIW cores aimed at power efficiency.[12]
The Elbrus 2000 (Russian: Эльбрус 2000) and its successors are Russian 512-bit wide VLIW microprocessors developed by Moscow Center of SPARC Technologies (MCST) and fabricated by TSMC.
Backward compatibility
When silicon technology allowed for wider implementations (with more execution units) to be built, programs compiled for an earlier generation would not run on the wider implementations, because the encoding of binary instructions depended on the number of execution units of the machine.
Transmeta addressed this issue by including a binary-to-binary software compiler layer (termed code morphing) in their Crusoe implementation of the x86 architecture. This mechanism was advertised as recompiling, optimizing, and translating x86 opcodes at runtime into the CPU's internal machine code. Thus, the Transmeta chip is internally a VLIW processor, effectively decoupled from the x86 CISC instruction set that it executes.
Intel's Itanium architecture (among others) solved the backward-compatibility problem with a more general mechanism. Within each of the multiple-opcode instructions, a bit field is allocated to denote dependency on the prior VLIW instruction within the program instruction stream. These bits are set at compile time, thus relieving the hardware from calculating this dependency information. Having this dependency information encoded in the instruction stream allows wider implementations to issue multiple non-dependent VLIW instructions in parallel per cycle, while narrower implementations would issue a smaller number of VLIW instructions per cycle.
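A sketch in C of how an implementation might consume such dependency bits is given below; the structure and field names are simplifications invented for the illustration and are not the actual IA-64 bundle format:

#include <stdint.h>

/* Simplified model: each fixed-width instruction word carries one flag
   saying whether it depends on the previous word.  A wide core keeps
   issuing words in the same cycle until it reaches a flagged word or
   exhausts its issue width; a narrow core simply issues fewer words per
   cycle.  The same binary runs unchanged on both. */
typedef struct {
    uint64_t ops;              /* packed operations (contents not modeled) */
    uint8_t  depends_on_prev;  /* 1 = must begin a new issue group         */
} bundle_t;

/* How many consecutive words, starting at index pc, this implementation
   can issue in a single cycle. */
int issue_group_size(const bundle_t *code, int n_words, int pc, int issue_width)
{
    int count = 1;                              /* always issue at least one */
    while (pc + count < n_words &&
           count < issue_width &&
           !code[pc + count].depends_on_prev)   /* stop at a group boundary  */
        count++;
    return count;
}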
Another perceived deficiency of VLIW designs is the code bloat that occurs when one or more execution units have no useful work to do and thus must execute no-operation (NOP) instructions. This occurs when there are dependencies in the code and the instruction pipelines must be allowed to drain before later operations can proceed.
As the number of transistors on a chip has grown, the perceived disadvantages of VLIW have diminished in importance. VLIW architectures are growing in popularity, especially in the embedded system market, where it is possible to customize a processor for an application in a system-on-a-chip.
See also
- No instruction set computing
- One-instruction set computer – Abstract machine that uses only one instruction
- Complex instruction set computer – Processor with instructions capable of multi-step operations
- Explicitly parallel instruction computing – Instruction set architecture
- Minimal instruction set computer – CPU architecture
- Reduced instruction set computer – Processor executing one instruction in minimal clock cycles
- Elbrus (computer) – Line of Soviet and Russian computer systems
- Itanium – Family of 64-bit Intel microprocessors
- Movidius – American computer processor chip design company
- Single instruction, multiple data – Type of parallel processing
- Single instruction, multiple threads – Execution model used in parallel computing
- Transport triggered architecture – Type of computer processor design
References
- ^ a b c "Very Long Instruction Word (VLIW) Architecture". GeeksforGeeks. 2020-12-01. Retrieved 2022-10-14.
- ^ Fisher, Joseph A. (1983). "Very Long Instruction Word architectures and the ELI-512". Proceedings of the 10th annual international symposium on Computer architecture. International Symposium on Computer Architecture. New York, NY, USA: Association for Computing Machinery (ACM). pp. 140–150. doi:10.1145/800046.801649. ISBN 0-89791-101-6.
- ^ Kartsev, Mikhail (1970). "Вопросы построения многопроцессорных вычислительных систем" [Building the multiprocessor computer systems]. Radioelectronic Matters, Electronic Computing Technics (in Russian) (5–6): 3–19.
- ^ Kartsev, Mikhail; Brik, Vladimir (1981). Вычислительные системы и синхронная арифметика [Computing systems and synchronous arithmetic] (in Russian). Moscow: Radio i Svyaz.
- ^ "ACM 1985 Doctoral Dissertation Award". Association for Computing Machinery (ACM). Archived from the original on 2008-04-02. Retrieved 2007-10-15.
For his dissertation Bulldog: A Compiler for VLIW Architecture.
- ^ "Control Data 6400/6500/6600 Computer Systems Reference Manual". 1969-02-21. Archived from the original on 2014-01-02. Retrieved 2013-11-07.
- ^ a b "An Introduction To Very-Long Instruction Word (VLIW) Computer Architecture" (PDF). Philips Semiconductors. Archived from the original (PDF) on 2011-09-29.
- ^ "Pixelworks | BSP15/16". Archived from the original on 1996-12-24. Retrieved 2016-07-28.
- ^ "silicon hive Products". Silicon Hive. Silicon Hive BV. Archived from the original on 2012-01-28. Retrieved 2012-01-28.
- ^ "EEMBC Publishes Benchmark Scores for Infineon Technologies' Carmel - DSP Core and TriCore - TC11IB Microcontroller". eembc.org. Retrieved 2016-07-28.
- ^ "ТАСС". tass.ru. Retrieved 2016-07-28.
- ^ "The Tiny Chip That Could Disrupt Exascale Computing". The Next Platform. Stackhouse Publishing Inc. 12 March 2015. Retrieved 26 April 2021.
External links
- Paper That Introduced VLIWs
- Book on the history of Multiflow Computer, VLIW pioneering company
- ISCA "Best Papers" Retrospective On Paper That Introduced VLIWs Archived 2012-03-10 at the Wayback Machine
- VLIW and Embedded Processing
- FR500 VLIW-architecture High-performance Embedded Microprocessor
- Historical background for EPIC instruction set architectures
- DIS: an Architecture for fast LISP execution. A similar VLIW architecture, with a parallelizing compiler directed toward LISP.