
What’s So Special About 32?

It has to do with the way computers think and the way Microsoft tried to organize things.

32, like 16 before it and 64 to follow, is special simply because computers think in powers of 2, as does Windows itself.
Progression of computing power: 8 to 16 to 32 to 64. (Image: askleo.com)
Question: Why is the number 32 used so often in the naming of computer folders and files? For example, System32, Twain32, Regedt32, and so on?

I had to chuckle when I read this question. It’s a perfect example of how things computer folks take for granted are just so much magic to “real” people.

32 is special because it comes after 16 and before 64. (Which I’m sure doesn’t help at all.)


TL;DR:

The transition from 8 to 16 to 32 and beyond

Computers use bits valued at either “1” or “0” to represent all data. For convenience, bits are grouped into eight-bit bytes. Early personal computers processed data one byte at a time, and thus were referred to as eight-bit computers. As technology advanced, computers evolved to process 16 bits, then 32, and now 64 bits at a time. Software and operating systems like Windows had to adapt to these changes. One result of that adaptation is the folder and file naming conventions in Windows, which can be confusing because of how that transition unfolded.

In the beginning was the bit

To refresh, modern digital computers really only know two things: “1” or “0”.

Or put another way, they can know only single things, called bits, that can be either on or off, zero or one.

Seriously, that’s all they know. Every computer program, every digital photo, every music file, every video you watch, even the text you’re looking at right now is nothing more than a collection of bits. A lot of bits, but only bits … 1’s and 0’s. This is the definition of digital.1

I’ve often joked that my job is nothing more than putting a lot of zeros and ones in the right order.

Groups of bits

It’s cumbersome to have to think of everything as a series of 1’s and 0’s, so there are some convenient ways of grouping them.

You’ve probably already heard of a byte, which is a collection of eight bits. If you look at the number of possible different combinations of how those eight bits can be set — each on or off — you’ll find that there are 256 possible combinations: 00000000 through 11111111, or, more conveniently, 0 through 255. You’ll also sometimes see them represented in hexadecimal notation, like 00 through FF — a topic for another day.
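If you want to see that for yourself, here’s a tiny Python sketch of my own — nothing Windows-specific, just an illustration — that prints a few byte values in binary, decimal, and hexadecimal, and confirms the 256-combination count.

    # Illustration: one byte has 2**8 = 256 possible combinations,
    # and the same byte can be written in binary, decimal, or hexadecimal.
    for value in (0, 1, 170, 255):
        print(f"{value:08b} = {value:3d} = 0x{value:02X}")

    print("Combinations in one byte:", 2 ** 8)   # prints 256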

The first popular microprocessors used in personal computers back in the days of the Apple II and the Altair dealt with data one byte at a time. Thus they were called eight-bit computers.

Bits and microprocessors

When the IBM PC was introduced, it used a processor that was effectively a 16-bit processor2, handling data 16 bits, or two bytes, at a time.

Over the years, processors evolved to handle information four bytes at a time and thus were called 32-bit computers. Most modern machines now use 64-bit processors capable of handling data eight bytes at a time.

In case you couldn’t guess from the progression — 8, then 16, 32, and 64 — processors tend to double the width of the data they handle as the technology improves. Is a 128-bit processor on the horizon? They exist, but whether or not they’ll become mainstream is unclear.
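To make the doubling concrete, here’s another small Python sketch (again, just my illustration) listing how many bytes each step works with and the range of unsigned values that fits in each width.

    # Each step doubles the data width: 8, 16, 32, 64 bits at a time.
    for bits in (8, 16, 32, 64):
        print(f"{bits:2d} bits = {bits // 8} byte(s), "
              f"unsigned values 0 through {2 ** bits - 1:,}")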

Bits and software

As the Intel-based processors used in the IBM and compatible machines grew in power and made those leaps from 8 to 16 and so on, software became an issue.

In general, software written to run on a 16-bit processor would not run on a 32-bit machine, at least not without help. That help comes in the form of compatibility modes on the 32-bit processors that allow them to “act like” 16-bit processors. Similarly, 32-bit software wouldn’t run on 64-bit processors without help.

In reality, the cart led the horse. MS-DOS and early versions of Windows were, in fact, 16-bit software, so even 32-bit processors were running in compatibility mode most of the time. Slowly, as 32-bit versions of Windows and other software were written to replace older code, the processor’s new features, like true multitasking, proper virtual memory, and lots of physical memory, could be exploited. A similar progression happened during the transition from 32- to 64-bit software.

Bits and Windows

Windows and other operating systems had the ability to provide that help but needed to keep things organized.

64-bit software needed the operating system to provide 64-bit support, 32-bit software needed 32-bit support, and 16-bit software needed its own 16-bit-specific support.3

Thus things like SYSTEM32 were born. It was the directory to hold the 32-bit versions of Windows components. The old SYSTEM directory was left to hold the older 16-bit versions. Internally and among programmers, though, there were many references to win32, win16, and now win64.

When you see the “32” in an actual file name, it’s often the result of the same evolution. That’s probably the 32-bit version of a program or component.
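If you’re ever curious whether a particular program or component is the 32-bit or 64-bit version, the file itself records that. Here’s a rough Python sketch, assuming a standard Windows executable (the notepad.exe path is only an example), that reads the “machine” field from the file’s PE header.

    import struct

    # Rough sketch: peek at a Windows executable's PE header to see
    # whether it was built as 32-bit or 64-bit code.
    def pe_bitness(path):
        with open(path, "rb") as f:
            f.seek(0x3C)                           # offset of the PE header pointer
            pe_offset = struct.unpack("<I", f.read(4))[0]
            f.seek(pe_offset + 4)                  # skip the "PE\0\0" signature
            machine = struct.unpack("<H", f.read(2))[0]
        return {0x014C: "32-bit (x86)",
                0x8664: "64-bit (x64)",
                0xAA64: "64-bit (ARM64)"}.get(machine, hex(machine))

    print(pe_bitness(r"C:\Windows\System32\notepad.exe"))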

So where’s System64?

Ugh. This is where things get more confusing.

In 64-bit versions of Windows — meaning all current versions — the folder we might expect to be “System64” is called … wait for it … “System32”. Specifically, in Windows 64-bit systems:

  • The folder for 64-bit system software is C:\Windows\System32.
  • The folder for 32-bit system software that runs in 64-bit Windows is C:\Windows\SysWOW64.

“WOW” stands for “Windows On Windows”, or more completely “running Windows 32 on Windows 64”. (In the past, it also meant Windows 16 on Windows 32, and so on.)

Either way, it’s confusing, but it is what it is.
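One way to see this from a program’s point of view is the quick Python sketch below (Windows only, and just an illustration). It reports whether the current process is 32-bit or 64-bit, and whether it’s running under WOW64, where references to System32 are quietly redirected to SysWOW64.

    import os
    import sys

    # Is this process 32-bit or 64-bit?
    process_bits = 64 if sys.maxsize > 2 ** 32 else 32
    print(f"This process is {process_bits}-bit.")

    # Windows sets PROCESSOR_ARCHITEW6432 only for 32-bit processes
    # running under WOW64 on 64-bit Windows.
    if "PROCESSOR_ARCHITEW6432" in os.environ:
        # For such processes, the file system redirector silently maps
        # C:\Windows\System32 to C:\Windows\SysWOW64.
        print("Running under WOW64; 'System32' here actually means SysWOW64.")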

Rules are meant to be bent

What I’ve outlined so far is the ideal, the theory, or the standard layout of files.

More or less.

It’s best thought of as a rule of thumb rather than an absolute law all programmers follow. As we’ve progressed from 16 to 32 to 64, these rules haven’t always been followed exactly, and they don’t apply in all cases. But they do apply in most.

The concepts, at least, remain the same. It’s all about organizing the software according to the “bit-ness” (16, 32, or 64) required to run it properly.

Do this

There’s no real action here, other than an appreciation (I hope) for the incredible complexity involved in creating an operating system that tries to be as compatible with the past as it can be for as long as it can be while continuing to move forward into the future.

I try to clarify that complexity every week in my newsletter, Confident Computing! Less frustration and more confidence, solutions, answers, and tips in your inbox every week.


Footnotes & References

1: As opposed to analog, where items can range between upper and lower bounds. There are analog computers, but they tend to be for special purposes.

2: There’s a strong argument that the 8088 processor used was still partly an 8-bit design: its internal architecture and registers were 16 bits, but it moved data over an 8-bit external bus. Its sibling, the 8086, was 16-bit throughout.

3: 16-bit support was its own special nightmare, and we were all happy to see it fall into history. If you want to see someone who programmed 16-bit software for Windows cringe, just ask them about segmented memory.

15 comments on “What’s So Special About 32?”

  1. To put the ‘bitness’ progression more succinctly, since a byte consists of 8 bits, and computers are organized to use bits, bytes, words, etc., the earliest PC CPUs were considered 8-bit CPUs, then came 16-bit, 32-bit and 64-bit CPUs essentially in that order, based on the size of the data they can process in one cycle (step).

    A bit consists of one element (switch) that is either ‘on’ or ‘off’ (representing ‘one’ or ‘zero’).

    A byte consists of 8 bits.
    A WORD consists of 2 bytes (16 bits).
    A DWORD consists of two WORDs, 4 bytes (32 bits). (double-word)
    A QWORD consists of four WORDs, 8 bytes (64 bits). (quad-word)

    As you can see from all this, a 64-bit CPU is several orders of magnitude more powerful than the early 8-bit CPUs because of the amount of data they can process in a single cycle, and the number of cycles they can execute in a second.

The 8088 CPU in my first IBM-compatible PC ran at about 10MHz (10 million cycles per second) in boost mode (the mode I most commonly used). The CPU in my current laptop PC runs at about 5GHz (5 billion cycles per second), so it is effectively about 4000 times faster than my first IBM-compatible PC (with the comparison between an 8-bit CPU and a 64-bit CPU combined with the speed of their operation taken into account). When I ran the IBM-compatible in ‘normal’ mode (about 4.5MHz), the difference was more than twice as great.

    Note: In this discussion, I’m comparing a single core for my current laptop’s CPU with the 8088 CPU I had in my first IBM-compatible PC because it had only one core. My current laptop PC has 6 cores, so it is roughly a bit less than 24,000 times faster than my first IBM-compatible PC when all six cores are taken into account.

    Isn’t modern technology amazing!

    Ernie (Oldster)

    Reply
  2. I actually ENJOYED reading this! But, Leo, in your list of 8-bit microcomputers, you left out the Commodore-64 and Commodore-128… the latter of which I *STILL HAVE*, and *STILL PROGRAM ON*!!! :) :) :)

    Reply
  3. While I agree that powers of 2 are very important in computer business, in the real world there is only one very, very important number: 42.

    It’s the answer to everything.
    Insiders know what I mean. ;-)

    Reply
It was. I remember that back in 1990 I was working on an 80486-based motherboard (in a team), where the board was running at 50MHz, whereas the CPU itself was internally running at 100MHz. We had to jump through hoops to get it working.

    Reply
  5. In the early days there were also 4-bit processors: Intel 4004 and 4040.
    There have also been 12-bit processors, such as Intersil IM6100 and DEC PDP-8.
Modern processors have enhancements for more throughput besides clock speed and number of cores, such as pipelining, out-of-order instruction execution, etc.

    Reply
