
What is the hex versus octal timeline?


When and why did hexadecimal representation become more common than octal for displaying and printing out multi-bit binary fields?
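The digit-alignment issue behind this question can be made concrete with a short sketch (Python purely for illustration; the value is arbitrary): two hex digits cover each 8-bit byte exactly, while 3-bit octal digits straddle byte boundaries.

```python
word = 0b1011_0110_0011_1101  # an arbitrary 16-bit value

# Two hex digits per byte: digit boundaries coincide with byte boundaries.
print(f"hex:    {word:04X}")   # B63D
# Six octal digits for 16 bits: the top digit covers only one bit,
# and no octal digit lines up with a byte boundary.
print(f"octal:  {word:06o}")   # 133075
print(f"binary: {word:016b}")
```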










Comments:

– dirkt (8 hours ago): I'd say around the time when word sizes of 36, 18 and 12 bits transitioned to the multiple-of-8-bit sizes we have today. That should also explain the "why". :-)

– scruss (7 hours ago): Yup. The 12-bit PDP-8 from the 60s has instructions that fit very nicely into four 3-bit octal fields, but not so well into 8-bit bytes.

– Walter Mitty (6 hours ago): In the world of IBM, the transition was between the 7090/7094 and the 360. Hex was much more suitable for the 360 than octal.

– Walter Mitty (6 hours ago): In DEC, even the PDP-11 culture was still using octal, even though it didn't fit very well. The VAX people used hex.

– fadden (4 hours ago): I was a software intern at CDC back around 1990. I remember complaining about an octal dump to one of the senior engineers, who responded, "what else would you use, hex?" with scorn and disbelief. Having grown up with an Apple II, it was in fact exactly what I wanted. :-)







asked 8 hours ago by hotpaw2










3 Answers


















Answer (score 2):














Addressing the "why" part of the question - from my point of view as an assembly-code programmer on PDP-11 and VAX, the "standard" radix is most usefully chosen to match the instruction layout.



The PDP-11 had 8 registers and 8 operand modes. Its double-operand instruction layout was



1 bit generally byte/word indicator (b)
3 bits opcode (o)
3 bits source mode (s)
3 bits source register (r)
3 bits destination mode (d)
3 bits destination register (R)


making octal the perfect way to express it:



booosssrrrdddRRR


The VAX, on the other hand, had 16 registers and 16 operand modes (though some combinations were used for short literals). A basic operand specifier in the variable-length instruction format was



4 bits mode (m)
4 bits register (r)


thus hex was perfect to express these.



mmmmrrrr


Of course, the larger address space used on VAX gives other advantages to hex: fewer characters in an address. This might have some bearing on "when".
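The double-operand decoding described above can be sketched in a few lines (field names follow this answer's layout; the example word is the classic MOV R1,R2 encoding, and everything here is illustrative, not DEC's own tooling):

```python
# Each 3-bit field of the PDP-11 double-operand layout is one octal
# digit of the instruction word, so an octal listing is self-describing.
def decode_double_operand(word):
    return {
        "byte_op":  (word >> 15) & 0o1,  # b: byte/word indicator
        "opcode":   (word >> 12) & 0o7,  # ooo
        "src_mode": (word >> 9)  & 0o7,  # sss
        "src_reg":  (word >> 6)  & 0o7,  # rrr
        "dst_mode": (word >> 3)  & 0o7,  # ddd
        "dst_reg":   word        & 0o7,  # RRR
    }

# MOV R1,R2 assembles to 010102 (octal): reading the octal digits off
# the listing gives the fields directly.
fields = decode_double_operand(0o010102)
print(fields)  # opcode 1 (MOV), src R1, dst R2, register mode throughout
```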






– another-dave, answered 4 hours ago (edited 2 hours ago)
Answer (score 1):














    Minicomputers and mainframes typically used octal, as many early mainframes had word sizes that were a multiple of 3 bits, and so did some minis. Operators and engineers within those environments became used to this, so even power-of-two word size minicomputers kept using octal.



    Microcomputers, however, almost always had power-of-two word sizes for both address and data buses (or at least, a multiple of four bits), and there was a whole new generation of users who were not mentally locked into the mainframe/mini way of thinking. It was thus natural to start using hexadecimal instead.



    You'll probably find, therefore, that hexadecimal rose to prominence about when microcomputers did, in the mid to late 1970s.






– Chromatix, answered 4 hours ago
Answer (score 1):















When and why

That's quite closely tied to the IBM /360 and its introduction in 1964. The /360 is based on an 8-bit byte, a 32-bit word (16-bit half word) and a 24-bit address. Thus all basic memory items were multiples of 8-bit units, which are best displayed, without any remainder, in hex.

Before that, the sizes of bytes, half words and words were (more often than not) multiples of 3, which works quite fine with octal. After all, for people who grew up with decimal, it's far less mental work to leave out two digits than to learn six more. It seems more natural, doesn't it?

After that, nearly all new designs switched to 8-bit bytes to allow easy data exchange with IBM mainframes. This happened even faster for minicomputers, as they were usually supplementary systems to (/360ish) mainframes.

See also this question about the rationale of 36-bit designs. While not a true duplicate, it's quite related here.
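The byte-alignment argument above can be sketched as follows (Python for illustration; the four byte values are arbitrary): with 8-bit units, a hex dump reads byte-by-byte, while the octal rendering of the same 32-bit word has digits that straddle every byte boundary.

```python
word = bytes([0x47, 0xF0, 0xC0, 0x10])  # four arbitrary byte values

# Per-byte hex: each byte is exactly two digits, so the dump is
# readable byte-by-byte.
print(" ".join(f"{b:02X}" for b in word))        # 47 F0 C0 10

# The same word as a single octal number: eleven digits for 32 bits,
# none of which line up with the byte boundaries shown above.
print(f"{int.from_bytes(word, 'big'):011o}")
```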






– Raffzahn, answered 3 hours ago