
Talk:Bytecode

From Wikipedia, the free encyclopedia

untitled


Some pretentious people making something of nothing. Bytecode is nonsense (just invented to make your dicks seem bigger). This article is total tosh and needs completely re-writing in terms of hex vs binary and interpreting for virtual machines. Better still, delete it all. —Preceding unsigned comment added by 124.187.137.11 (talk) 12:39, 13 April 2009 (UTC)[reply]

Hey guys, you'd better point out some disadvantages of bytecode if you're going to speak about its advantages!!

This article is, umm, rather incomprehensible to someone who doesn't already know everything about this topic. k.lee

Bytecode may be used as an intermediate code of a compiler, or may be the saved 'tokenized' form used by an interpreter

How can this sentence be related to virtual machine? -- HJH

Bytecode may be used as an intermediate code of a compiler, or may be the saved 'tokenized' form used by an interpreter or a virtual machine

"Byte code", "byte-code", and "bytecode" seem to be fighting it out. Specifically, there is an entry for the Java Bytecode. Anyone have a strong preference as to which the final version should be? Charles Merriam 21:10, 4 February 2006 (UTC)[reply]

Page moved. Eugène van der Pijll 21:14, 24 June 2006 (UTC)[reply]

"The current reference implementation of the Ruby programming language does not use bytecode, however it relies on tree-like structures which resemble intermediate representations used in compilers.". Is it relevant to talk about Ruby not using bytecode in this article? - Philoctet

incorrect use of term bytecode


I believe that this entire article is a misuse of the term bytecode. I have worked near machine level in computer science for many years, and in my experience, bytecode applies specifically to the Java Virtual Machine, whose instruction set does indeed consist of one-byte opcodes. For other programming languages, the correct term for what this article describes is "intermediate language". Visual Basic compiles to an intermediate language, as did Pascal, Smalltalk, and others. These were NEVER to my knowledge called "bytecode."

I think this needs to be fixed, hopefully by the author of this article.

I agree. / HenkeB (talk) 21:40, 13 January 2008 (UTC)[reply]

I disagree. See http://www.google.com/search?hl=en&q=bytecode -java&btnG=Search PuerExMachina (talk) 04:51, 14 January 2008 (UTC)[reply]

Ok, is there a fundamental difference between "bytecode" and other intermediate representations then, as you see it? / HenkeB (talk) 14:47, 14 January 2008 (UTC)[reply]
Yes, an intermediate representation does not have to have its tokens stored in a single byte per token. It's quite possible that a completely different scheme is used. A "bytecode" representation, however, always uses a single byte for a single pseudo opcode token. Mahjongg (talk) 16:43, 14 January 2008 (UTC)[reply]
A quite superficial distinction then, if I understand you correctly? Sounds like, perhaps, this article should be renamed intermediate code with the word bytecode redirecting here (instead of the other way round) - or better - a separate article should be created for intermediate code. However, I had the feeling this Java term had begun to mean just about any representation that is more similar (isomorphic) to ordinary machine code than, say, tree-structured code, "quadruples", or stack code? / HenkeB (talk) 17:47, 14 January 2008 (UTC)[reply]

Smalltalk-80 used the term bytecode as well, and it was always an inconsistent notion. Smalltalk bytecodes do not use a fixed size to encode opcodes, but 4 to 8 bits, and there are instructions which are encoded with two bytes. So, in essence, the term bytecode is usually used to name a VM instruction set which is designed with a hardware instruction set architecture in mind. 2009-08-20 —Preceding unsigned comment added by 134.184.43.183 (talk) 12:09, 20 August 2009 (UTC)[reply]


Well, I will just comment that personally I find both this article and the one on interpreters to be rather vague and misleading.

For example, usually it is not "semantic analysis" (as I understand the term) which produces bytecode, rather, it is more commonly the process of flattening an AST which produces bytecode, with this process driving the remainder of the compiler logic, and often with little or no "semantic analysis" (at least for many dynamic languages, where most of this is left to be figured out at runtime).
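As an illustration of that flattening step, here is a minimal toy sketch (the opcode names and encoding are invented for illustration, not taken from any real compiler): a post-order walk of an expression tree that emits a flat, stack-oriented bytecode sequence.

```python
# A toy illustration of flattening an expression AST into a linear,
# stack-oriented bytecode. Opcode values are made up.
PUSH, ADD, MUL = 0x01, 0x02, 0x03

def flatten(node, out):
    """Post-order walk: children first, then the operator opcode."""
    if isinstance(node, int):          # leaf: a constant
        out += [PUSH, node]
    else:                              # interior node: (op, left, right)
        op, left, right = node
        flatten(left, out)
        flatten(right, out)
        out.append(ADD if op == '+' else MUL)
    return out

# (1 + 2) * 3  as a tree, then as flat bytecode
tree = ('*', ('+', 1, 2), 3)
code = flatten(tree, [])
# code == [PUSH, 1, PUSH, 2, ADD, PUSH, 3, MUL]
```

Note that nothing here resembles semantic analysis: the walk is purely structural, which is the point the comment above makes about many dynamic-language compilers.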

As for Java and Bytecode, I think Java popularized the term, but they by no means own it. Generally, it refers to a byte-centric opcode-based structure, with 1 (or more) bytes for an opcode, and usually any arguments directly following. Usually, it is understood to be interpreted linearly as well (similar to machine code), and often handling control flow via offsets and jumps, rather than being tree or graph structured or using high-level control flow.

Its main property then is usually that of being similar to, but at the same time usually far less complex and bit-twiddly than, machine code (as well as traditionally interpreted or JIT-compiled rather than being directly run on a piece of hardware). —Preceding unsigned comment added by 174.18.204.116 (talk) 18:56, 2 October 2009 (UTC)[reply]
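The properties described above -- a one-byte opcode, arguments directly following it, linear execution, and control flow via jumps rather than tree structure -- can be sketched with a toy interpreter loop. The opcodes and encoding here are invented for illustration, not any real VM's:

```python
# Toy linear bytecode interpreter: one opcode byte, any arguments
# directly following, executed sequentially with explicit jumps.
PUSH, ADD, JNZ, HALT = 0x10, 0x20, 0x30, 0xFF  # made-up opcodes

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:                 # next byte is an immediate value
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == JNZ:                # next byte is an absolute target
            target = code[pc]; pc += 1
            if stack.pop() != 0:
                pc = target
        elif op == HALT:
            return stack

# compute 2 + 3, leaving the result on the stack
prog = [PUSH, 2, PUSH, 3, ADD, HALT]
# run(prog) -> [5]
```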

The focus should probably be on 'byte-oriented', in the sense of simplifying instruction decoding. The opcode is only one of several fields -- it is not a great benefit if the opcode is easy to extract while other fields are complex. I've always thought the instruction encoding used for the EM-1 'machine' was a good example: the opcode is one byte, the escape sequence is one byte, and the address field is one or two bytes. There are a few exceptions where the instruction and argument were encoded into one byte, but this was to speed execution of very common instructions. (See Informatica Report IR-81 (from 1983) by Andrew S. Tanenbaum et al.: Description of a machine architecture for use with block structured languages.) Although the term 'bytecode' is not used by the authors, it has been used in descriptions of the Amsterdam Compiler Kit, of which EM-1 was a central concept. Athulin (talk) 09:46, 10 January 2011 (UTC)[reply]
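To make the byte-oriented decoding point concrete, here is a hypothetical decoder loosely in the spirit of the scheme described above. The field layout and mode values are invented for illustration and are not taken from the actual IR-81 report:

```python
# Sketch of a byte-oriented decoder: one opcode byte, an optional
# escape byte, a one- or two-byte address field, and a range of opcodes
# that packs a very common instruction plus a small argument into a
# single byte. All field boundaries here are invented.
ESCAPE = 0xFF

def decode_one(code, i):
    """Return (opcode, argument, next_index) for the instruction at i."""
    op = code[i]
    if op < 0x80:                      # common instruction, arg packed in
        return op >> 4, op & 0x0F, i + 1
    if op == ESCAPE:                   # escape: real opcode in next byte,
        op = code[i + 1]               # followed by a two-byte address
        addr = code[i + 2] << 8 | code[i + 3]
        return op, addr, i + 4
    return op, code[i + 1], i + 2      # normal: one-byte address follows
```

Every field lands on a byte boundary, so decoding needs no bit twiddling beyond the packed common case -- which is the "byte-oriented for decoding simplicity" property at issue.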

Layman's terms


I have to agree with the above comments about how the article needs to be easier to understand. I've been a part-time developer in various languages for the past 10 years, and I still don't understand what bytecode is, nor has this article helped. I'm not suggesting we compromise and make a 'for dummies' article, but just add a sentence here and there to help clarify.

I also agree with this. Specifically, in the sentence, "Since it is processed by software, [bytecodes are] usually more abstract than machine code". In what sense is the word "abstract" being used? When comparing it to "machine code", do you mean more abstract than binary code or more abstract than assembly language? So, is bytecode higher level compared to one of these or lower level than one of these, or just different syntactically?

The following sentence is also similarly confusing: "Compared to source code (intended to be human-readable), bytecodes are less abstract, more compact, and more computer-centric."

From what I understand, being "more abstract" usually means lower level. So how could bytecode be more human readable (less abstract) but then more abstract than machine code??? From the current description, I interpret the former sentence to mean that bytecode is lower level than assembly, or possibly even lower level than binary, which isn't possible! I am also not familiar with how the word "computer-centric" is generally used when referring to levels of computer code, but I think this needs to be described more simply.

In normal computer terminology, the more abstract the code the further it is removed from the physical implementation on the hardware. Usually more abstract code is therefore easier for a human to understand in everyday concepts instead of machine concepts. -- RTC 18:47, 7 February 2007 (UTC)[reply]

Bytecode execution techniques?


Neither this article nor virtual machine explains any techniques used to execute bytecode. I've sketched some basic thoughts out on a blog entry of mine at KernelTrap, but I don't know how current-day ones operate, whether there's generally optimization, whether instruction ordering counts, etc.
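For what it's worth, one basic execution technique is a dispatch table indexed by the opcode byte, replacing a long if/switch chain with one indirect call per instruction -- roughly what "indirect threaded" interpreters do at the machine level. A toy sketch with invented opcodes, not any particular VM:

```python
# Table-driven bytecode dispatch: each opcode maps to a handler
# function; the main loop just fetches a byte and calls through the
# table. Opcodes and VM layout are invented for illustration.
def op_push(vm):
    vm['stack'].append(vm['code'][vm['pc']]); vm['pc'] += 1

def op_add(vm):
    s = vm['stack']; b, a = s.pop(), s.pop(); s.append(a + b)

def op_halt(vm):
    vm['running'] = False

TABLE = {0x01: op_push, 0x02: op_add, 0x00: op_halt}

def execute(code):
    vm = {'code': code, 'pc': 0, 'stack': [], 'running': True}
    while vm['running']:
        op = vm['code'][vm['pc']]; vm['pc'] += 1
        TABLE[op](vm)                  # one indirect call per instruction
    return vm['stack']

# 4 + 6: push 4, push 6, add, halt
# execute([0x01, 4, 0x01, 6, 0x02, 0x00]) -> [10]
```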

binary requirement


Could a textual language be considered bytecode? Obviously not, but this article lists CIL as an example of bytecode, and many other articles call it "bytecode". The article for CIL even calls it both "human-readable" and "bytecode". CIL example Herorev 21:22, 19 November 2006 (UTC)[reply]

It seems CIL is not in itself bytecode, but can be assembled into bytecode. So CIL itself is not a form of bytecode. The only "human readable bytecode" I can think of is one which just uses the letters of the alphabet, where each letter stands for the mnemonic of one opcode, for example the letter 'G' (47 hex) for 'Goto'. That would be "readable", and would use one byte for each instruction. Mahjongg (talk) 11:57, 14 January 2008 (UTC)[reply]
Another human-readable bytecode: the original wiki mentions "sed scripts don't need to be tokenized; they already are ... All sed commands are one byte long, not including arguments. ... More languages than just sed have this property or a similar one." -- WikiWikiWeb: LittleLanguage. --68.0.124.33 (talk) 01:43, 25 October 2008 (UTC)[reply]
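A tiny sketch of such a "human-readable bytecode": an ASCII string interpreted directly, one letter per instruction, in the spirit of the sed observation above. The command letters here are invented, not sed's actual ones:

```python
# Toy "human-readable bytecode": the program text IS the bytecode,
# one byte (letter) per opcode. Invented command letters.
def run_ascii(program, value=0):
    pc = 0
    while pc < len(program):
        c = program[pc]; pc += 1
        if c == 'I':                   # Increment
            value += 1
        elif c == 'D':                 # Decrement
            value -= 1
        elif c == 'H':                 # Halt
            break
    return value

# three increments, one decrement, halt:
# run_ascii("IIIDH") -> 2
```

No tokenizing pass is needed, which is exactly the property the sed quote points at.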

Old page history


For old page history that used to be at this title, see Talk:Bytecode/old. Graham87 08:33, 5 February 2009 (UTC)[reply]

Merge neologisms


So far as I can tell, “bytecode” appears to be nothing but a euphemism for a slightly lower level Interpreted language. 72.235.213.232 (talk) 03:43, 19 June 2010 (UTC)[reply]

Bytecode vs. machine code


I have suggested that bytecode and machine code be merged on the machine code page's discussion page -- my point being that the two terms are interchangeable in all cases, that there is no way to tell them apart, and that they do not in any way differ from one another. Bytecode languages are just ordinary languages we pretend are not, by implementing them in software rather than hardware. There is no reason why any language cannot be run in hardware as well as software. There are plenty of examples of hardware implementations of bytecode languages (various Java processors) and software implementations of machine code languages (QEMU, Bochs, VMware, etc.). Besides, the bytecode article appears mostly to be a list of example languages that are considered to be bytecode. I would also like to note that it is entirely possible to translate languages which are typically translated into a "bytecode" language into a "machine code" language and vice versa (GCJ, for instance, translates Java source code into native machine code, and I'm certain you can find C compilers that target any particular "bytecode" machine). FrederikHertzum (talk) 13:35, 19 June 2010 (UTC)[reply]

The main difference is that bytecodes are specifically designed to run on systems independent of the native machine code of the system. You are right that the bytecode could be the native machine code, but that is often done to run such bytecode on a platform optimized to run it. Bytecode is NOT the same thing as machine code. Mahjongg (talk) 14:21, 19 June 2010 (UTC)[reply]
From a computer scientist's point of view, there is no difference between a virtual and a real machine, and their instruction sets are clearly equally powerful (given that both are Turing complete). My point is not what is being done, but the technical difference between the two terms, being that there is none. Being designed to be run on top of another, unknown, machine is hardly a good criterion for distinction. FrederikHertzum (talk) 03:41, 21 June 2010 (UTC)[reply]
Keep Separate — Well, the difference is that a "virtual machine" needs a real machine... Also, all computer related articles cannot have an abstract computer "science" point of view; although basic principles are indeed important to clarify, real world usage, practical aspects, and conventions are equally (or more) important in most articles on computing, electronics and science. 83.255.42.68 (talk) 08:03, 26 July 2010 (UTC)[reply]
I'm not sure if this was correct from a computer science perspective. A 'virtual machine' (back in the 1970s or so, when I first got interested in compiler languages) was non-existing hardware. The MIX 'computer' used by Donald Knuth to teach and discuss certain aspects of computing was very much a virtual machine. Today a 'virtual machine' is what then would probably have been called an 'emulated machine'. (See https://en.wikipedia.org/wiki/Stack_machine for several uses of 'virtual machine' for stack-oriented designs.) The link from that article to 'virtual machine', however, is somewhat misleading, as it is not until the term 'process virtual machines' that the intended meaning begins to emerge. (That term is almost certainly a modern term.) Athulin (talk) 17:59, 29 November 2022 (UTC)[reply]
Not in my experience. In the 1970s a virtual machine was usually a machine with the same architecture as the host, running under, e.g., VM/370.
Also in that time frame, P-code was specifically UCSD p-code. I never heard the term byte code until Java. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 08:09, 30 November 2022 (UTC)[reply]
Keep Separate — But this article needs to be refined and its category has to be determined. Is it related to virtual machines, interpreters, or process virtual machines? The page on VMs has to explain the type of code --Melab±1 22:45, 19 June 2010 (UTC)[reply]
Keep Separate The term bytecode has been in use at least since its canonical example, the p-code used by UCSD Pascal (or maybe even earlier), which was one of the contenders for an operating system for the upcoming IBM PC. So at the very least the term has historical significance. Mahjongg (talk) 17:52, 21 June 2010 (UTC)[reply]
This is a little strange. I don't know that anything deserves to be called bytecode more than VAX machine code. (And there are plenty of software implementations of the VAX.) But also, I suspect the reason the VAX went away was that it uses a bytecode. VAX instructions can have from one to at least six operands. Each operand has a byte indicating the address mode for that argument. Depending on the address mode, that byte is followed by an appropriate number of bytes for that mode. An instruction might be one to 60 bytes long. Like JVM code, it is well designed for processing one byte at a time. It is, however, extremely difficult to decode instructions in parallel. Very convenient for the microcoded VAX machines. The important distinction, then, is that it is convenient for processing one byte at a time, based on the state determined by earlier bytes, and especially that there are a large number of different modes depending on those bytes. Gah4 (talk) 09:23, 30 November 2022 (UTC)[reply]
The VAX-11/780 was hardly the first machine with variable length instructions, nor the first with complicated operand specifications. The Bendix G-20, GE 635 and RCA 601 have similar complexities in one way or another. A more modern example is the encoding of opcodes on IBM z/Architecture. The VAX was more influenced by the PDP-11 than by, e.g., UCSD p-code. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:25, 30 November 2022 (UTC)[reply]
Not the first, but maybe the worst. It would have been a lot better for the VAX to put all the address mode bytes immediately after the opcode byte. Until you get to an address mode byte, you don't know how long that address mode is. For S/370, and I believe z/, you know the instruction length from the first byte. For the VAX, it is almost the last byte. Address modes can have a 0, 1, 2, 4, 8, or 16 byte immediate value. For z/, you know quickly where the next instruction starts, and can start decoding it in parallel. Or even that z/ instructions are always an even number of bytes. I suspect the VAX was an example of what Brooks calls the second-system effect. The PDP-11 is nice and simple in many of the ways that the VAX is complicated. It isn't even easy to figure out what the longest VAX instruction is, or which addressing modes are allowed for which operands of which instructions. (I believe you can't write to immediate operands, for example.) Gah4 (talk) 21:03, 30 November 2022 (UTC)[reply]
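The decoding problem described in this thread can be sketched as follows (simplified, with invented mode numbers -- this is not the real VAX encoding): the total instruction length is only known after walking every operand specifier in sequence, because each specifier byte determines how many bytes follow it.

```python
# Sketch of VAX-style sequential decoding. Each operand has a
# specifier byte whose "mode" (invented values here) determines how
# many trailing bytes belong to that operand, so the instruction's
# length cannot be known until the last operand has been walked.
EXTRA = {0: 0, 1: 1, 2: 2, 3: 4}       # trailing bytes per mode

def instruction_length(code, i, n_operands):
    length = 1                          # the opcode byte itself
    for _ in range(n_operands):
        mode = code[i + length] >> 6    # top bits of the specifier byte
        length += 1 + EXTRA[mode]       # specifier + its trailing bytes
    return length
```

Contrast this with an encoding where the first byte fixes the length: there, a decoder can find the start of the next instruction immediately and decode several instructions in parallel, which is the point made above about z/Architecture.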

Bytecode v8


Since Chrome 66, V8 now partially utilizes bytecode. — Preceding unsigned comment added by 119.145.72.152 (talk) 04:17, 20 April 2019 (UTC)[reply]

List of examples


It seems to me like the list of examples will grow indefinitely, as most interpreters use bytecode as an intermediate representation for interpretation, including many of those which can also optionally produce native code or compile to another language (i.e. to C or LLVM). I can myself immediately think of various examples which are missing from the list, but am not sure it would be wise to add them... 76.10.128.192 (talk) 15:13, 5 July 2013 (UTC) It wouldn't.[reply]

Bytecode versus P-code


The article and DAB page for P code treat bytecode and P-code as synonymous. However, I have never seen the term P-code used in the literature for anything but the code generated by UCSD Pascal. Shmuel (Seymour J.) Metz Username:Chatul (talk) 06:39, 31 May 2021 (UTC)[reply]

The UCSD people inherited the term from the Zurich Pascal (or Pascal-P) compiler, which was distributed to interested parties. The P4 version was released in 1976 or 1977. I don't have Barron's book around (Pascal: The language and its implementation), but I think P-code was discussed in it as well. The Pascal-S subset (1975 or so) had a combined compiler/interpreter, based on a P-code. Athulin (talk) 18:16, 6 November 2021 (UTC)[reply]
The term P-code is certainly confusing to me. It sounds very much like pseudocode. I sure do not know what the P in P-code would be. The term P-code used in this context seems to be from a beginner who does not understand what they mean. Sam Tomato (talk) 11:41, 6 May 2022 (UTC)[reply]
It would be very understandable if a term such as pseudo-assembly code were used but I do not know how common that is. Sam Tomato (talk) 11:44, 6 May 2022 (UTC)[reply]

History of?


Bytecode: came here looking for a "history of" section. That is always a good lead-in to understanding anything. I wanted to know who was first to implement it (or to name it 'bytecode'), and about subsequent implementations. Aspidistra9812 (talk) 03:57, 27 July 2021 (UTC)[reply]

I only have a suggestion: look for material coming out of the compilers / interpreters designed for portability starting in the 1960s, usually based on the idea of solving the M*N compiler problem (languages*architectures), which some researchers proposed to solve by defining a universal intermediate language between the M languages and the N computer architectures, making it into an M+N problem. When I hear the term I think of Algol 60 (Randell & Russell: Algol 60 Implementation (1964)), Smalltalk-80 and Pascal, and certainly the Amsterdam Compiler Kit (Andrew Tanenbaum). Tanenbaum's paper on the design of the EM-1 byte code ("Description of a Machine Architecture for Use with Block Structured Languages", Informatica Report IR-81) is worth reading. It doesn't use the term 'byte code', but the instruction format is heavily byte-oriented for processing efficiency, and anyone knowing EM-1 would almost certainly interpret the term as a synonym for intermediate code based on the same design principles. At a stretch, Griswold's book on the portable SNOBOL4 implementation might be relevant, perhaps also Lisp Machine material, and I vaguely remember another author who wrote a lot about portable code (Winter?). Barron's and Pemberton's books on Pascal implementation may also provide clues. Athulin (talk) 09:27, 20 October 2022 (UTC)[reply]
... and interestingly enough I came upon https://en.wikipedia.org/wiki/Virtual_machine#History where the term 'O-code' is used for one such intermediate language. It is possible that the term P-code is a reference back to O-code: they seem to have been used in much the same way. Athulin (talk) 18:02, 29 November 2022 (UTC)[reply]