What was the rationale behind 36 bit computer architectures?
Was there some particular design theory or constraint that made a 36-bit word size attractive for early computers, as opposed to the various power-of-2 word sizes that seem to have won out?
architecture
asked 8 hours ago by Mark Harrison (new contributor); edited 1 hour ago
Related question: retrocomputing.stackexchange.com/questions/1621/…
– snips-n-snails
6 hours ago
Back when people were starting to expect 32-bit integers, your Lisp interpreter could store 32 bits' worth of immediate data and a 4-bit type code in a single machine word. (Don't ask me how I know!)
– Solomon Slow
3 hours ago
4 Answers
Was there some particular design theory or constraint that made a 36 bit word size attractive for early computers?
Besides integer arithmetic, 36-bit words work quite well with two different byte sizes: six and nine. Six bits was what was needed to store characters of the standard code for data transmission at the time: Baudot code, or more exactly ITA2.
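To make the byte split concrete, here is a minimal sketch in Python (a modern int standing in for a machine word; the pack/unpack helpers and the sixbit offset are illustrative assumptions, not any particular machine's instruction set). It shows that six 6-bit characters or four 9-bit bytes fill a 36-bit word exactly:

    # Sketch: a Python int stands in for a 36-bit machine word.
    WORD_BITS = 36

    def pack(fields, width):
        """Pack equal-width fields, first field in the most significant bits."""
        assert WORD_BITS % width == 0 and len(fields) == WORD_BITS // width
        assert all(0 <= f < (1 << width) for f in fields)
        word = 0
        for f in fields:
            word = (word << width) | f
        return word

    def unpack(word, width):
        count = WORD_BITS // width
        mask = (1 << width) - 1
        return [(word >> (width * i)) & mask for i in reversed(range(count))]

    # Six 6-bit characters (here: printable ASCII minus 0x20, as in DEC sixbit) ...
    chars = [ord(c) - 0x20 for c in "RETRO."]
    word = pack(chars, 6)
    assert unpack(word, 6) == chars and word < (1 << WORD_BITS)

    # ... or four 9-bit bytes fill the same word with nothing left over.
    word = pack([0o777, 0o000, 0o123, 0o456], 9)
    assert word < (1 << WORD_BITS)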
As opposed to the various power-of-2 word sizes?
There is no inherent benefit to power-of-two word sizes. Any number will do.
More to the point, there were no 'various power-of-two sizes' in the early (and not-so-early) days. Before the IBM /360 settled on a 32-bit word size, with four 8-bit bytes to a word and two nibbles to a byte, power-of-two word sizes were an extreme exception (I can't come up with any besides SAGE and IBM Stretch). The vast majority used word sizes divisible by 3, not least to allow the use of octal representation. Before the /360 with its 8-bit bytes, octal was as common to computer scientists as hex is today - heck, Unix carries this legacy to the present day, making everyone learn octal at a time when hex is the generally accepted way to display binary data.
Now, the reason Amdahl chose 8-bit bytes is rather simple: it was the most efficient way to store two BCD digits within a byte, and thus within a word as well. Operating in BCD was one main requirement for the /360 design, as it was meant not only to be compatible with, but also to replace, all prior decimal machinery.
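A small illustration of that packed-decimal point (a sketch only, in Python; the helper names are invented here and this is not the /360's actual decimal instruction set): with two BCD digits per 8-bit byte, a 32-bit word holds exactly eight decimal digits.

    # Sketch: packed BCD, two decimal digits per 8-bit byte.
    def to_packed_bcd(n, num_bytes=4):
        """Encode a non-negative integer as packed BCD, two digits per byte."""
        digits = [int(d) for d in str(n).rjust(num_bytes * 2, "0")]
        return bytes((hi << 4) | lo for hi, lo in zip(digits[0::2], digits[1::2]))

    def from_packed_bcd(b):
        return int("".join(f"{byte >> 4}{byte & 0x0F}" for byte in b))

    assert to_packed_bcd(12345678).hex() == "12345678"
    assert from_packed_bcd(bytes.fromhex("00009931")) == 9931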
What seems today like the 'natural' use of powers of two is just a side effect of being able to handle decimal on a binary computer.
Conclusion: as so often in computing, the answer is the IBM /360, and the rest is history :)
answered 5 hours ago by Raffzahn
"There is no inherent benefit of power of two word sizes. " This is the most important part of this answer. Before microprocessors, computers were literally assembled by hand. If you didn't need more bits, you didn't wire them up.
– DrSheldon
7 mins ago
The Wikipedia page on 36-bit gives some reasons (all quoted from that page):
"This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code."
And for characters (a small packing sketch for the "five-seven" layout follows the list):
- six 5.32-bit DEC Radix-50 characters, plus four spare bits
- six 6-bit Fieldata or IBM BCD characters (ubiquitous in early usage)
- six 6-bit ASCII characters, supporting the upper-case unaccented letters, digits, space, and most ASCII punctuation characters. It was used on the PDP-6 and PDP-10 under the name sixbit.
- five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention, called five-seven ASCII)
- four 8-bit characters (7-bit ASCII plus 1 spare bit, or 8-bit EBCDIC), plus four spare bits
- four 9-bit characters (the Multics convention).
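As a rough sketch of the "five-seven" layout above (Python; the helper names are invented here, and the exact bit positions used by real PDP-6/10 software could differ): five 7-bit ASCII characters take 35 bits, leaving one bit of the word unused.

    # Sketch: five 7-bit ASCII characters per 36-bit word, one spare low bit.
    def pack_five_seven(text):
        assert len(text) == 5 and all(ord(c) < 128 for c in text)
        word = 0
        for c in text:
            word = (word << 7) | ord(c)
        return word << 1                      # 35 bits of characters + 1 spare bit

    def unpack_five_seven(word):
        chars = [(word >> (1 + 7 * i)) & 0x7F for i in reversed(range(5))]
        return "".join(map(chr, chars))

    word = pack_five_seven("HELLO")
    assert word < (1 << 36) and unpack_five_seven(word) == "HELLO"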
answered 7 hours ago by Michel Keijzers (new contributor)
36 bit word size attractive
Many sizes have been tried, but fundamentally, this results in a certain precision; from Wikpedia on 36-bit
Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code.
As opposed to the various power-of-2 word sizes?
It is the lack of any requirement to conform to pre-existing specifications: there was no internet, and even simple disc files were not easily shared between computers back in those days.
answered 7 hours ago by Erik Eidt
The key point made by Wikipedia seems to be:
Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator....Computers, as the new competitor, had to match that accuracy....
Many early computers did this by storing decimal digits. But when switching to binary:
Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum).
35 bits is obviously a slightly more awkward size than 36 bits anyway, but there are other reasons to choose 36 if your minimum size is 35 bits.
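A quick check of that arithmetic (assuming one sign bit plus a binary magnitude; the conclusion is the same for one's- or two's-complement representations):

    # Ten decimal digits of magnitude need 34 bits, plus a sign bit: 35 minimum.
    largest_ten_digit = 10**10 - 1              # 9,999,999,999
    magnitude_bits = largest_ten_digit.bit_length()
    assert magnitude_bits == 34
    assert magnitude_bits + 1 == 35             # sign + magnitude fits in 35 bits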
36 bits was on average a bit more efficient when packing characters into a word, especially for the 6-bit character encodings common at the time:
Char size | 35 bit word       | 36 bit word
----------+-------------------+-------------------
6-bit     | 5 + 5 bits unused | 6 + 0 bits unused
7-bit     | 5 + 0 bits unused | 5 + 1 bit unused
8-bit     | 4 + 3 bits unused | 4 + 4 bits unused

If you intend to make smaller computers later, having registers that are exactly divisible by two makes having some level of data interoperability easier, if not perfect. (Numerical data can easily be split into high and low words, and 6-char x 6-bit words can be split into two 3-char words, but packed 7- and 8-bit character data would be splitting parts of characters between words.)
answered 2 hours ago by Curt J. Sampson; edited 1 hour ago