Can a computer run with only a hard disk and no RAM?

I was reading about CPU fetch times and learned that CPUs can access data in RAM much faster than data on a hard disk, and that RAM holds the code and data of the programs currently executing.

That made me wonder: what would happen if we used only a hard disk and no RAM?

At some point this gets into the question of what even counts as “RAM.” There are many CPUs and microcontrollers with enough on-chip memory to run small operating systems with no separate RAM chips attached; in fact, this is relatively common in the world of embedded systems. So, if you’re asking about not having any separate RAM chips attached, then yes, you can do it with many current chips, especially those designed for the embedded world. I’ve done it myself at work. However, since the only real difference between addressable on-chip memory and separate RAM chips is the location (and, obviously, the latency), it’s perfectly reasonable to consider the on-chip memory itself to be RAM. If you count that as RAM, then the number of current, real-world processors that would actually run without RAM is greatly reduced.

If you’re referring to a normal PC, then no, you can’t run it without separate RAM sticks attached, but that’s only because the BIOS is designed not to attempt to boot with no RAM installed (which is, in turn, because all modern PC operating systems require RAM to run; x86 machines also typically don’t let you address the on-chip memory directly, since it’s used solely as cache).

Finally, as Zeiss said, there’s no theoretical reason that you can’t design a computer to run without any RAM at all, aside from a couple of registers. RAM exists solely because it’s cheaper than on-chip memory and much faster than disks. Modern computers have a hierarchy of memories that ranges from large but slow to very fast but small. The normal hierarchy is something like this:

Registers – Very fast (they can be operated on by CPU instructions directly, generally with no additional latency) but very scarce (a 64-bit x86 processor core has only 16 general-purpose registers, for instance, each holding a single 64-bit value). Register files are kept small because registers are very expensive per byte.
CPU Caches – Still very fast (often only a few cycles of latency) and significantly larger than registers, but still much smaller (and much faster) than normal DRAM. CPU cache is also much more expensive per byte than DRAM, which is why it’s typically much smaller. Many CPUs also have a hierarchy within the cache itself: smaller, faster caches (L1 and L2) in addition to larger, slower caches (L3).
DRAM (what most people think of as ‘RAM’) – Much slower than cache (access latency tends to be dozens to hundreds of clock cycles) but much cheaper per byte and, therefore, typically much larger than cache. DRAM is still, however, many times faster than disk access (usually hundreds to thousands of times faster).
Disks – These are, again, much slower than DRAM, but also generally much cheaper per byte and, therefore, much larger. Additionally, disks are usually non-volatile, meaning that they allow data to be saved even after a process terminates (as well as after the computer is restarted).
Note that the entire reason for memory hierarchies is simply economics. There’s no theoretical reason (not within computer science, at least) why we couldn’t have a terabyte of non-volatile registers on a CPU die. The issue is that it would just be insanely difficult and expensive to build. Having hierarchies that range from small amounts of very expensive memory to large amounts of cheap memory allows us to maintain fast speeds with reasonable costs.
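
To make the cache-versus-DRAM step of that hierarchy concrete, here is a minimal pointer-chasing benchmark sketch in C. The working-set sizes (16 KiB vs. 256 MiB), the use of clock_gettime, and every name in it are my own assumptions for a typical Linux machine rather than anything from the answer above; the exact figures will vary by CPU, but the large working set should show noticeably higher per-access latency than the small one.

```c
/*
 * Sketch: compare average access latency for a small, cache-resident
 * working set vs. a large, DRAM-resident one using random pointer chasing.
 * Sizes and timing method are assumptions; compile with: cc -O2 chase.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Build a random cyclic permutation so that each load depends on the
 * result of the previous one (this defeats hardware prefetching). */
static size_t *make_chain(size_t n)
{
    size_t *next = malloc(n * sizeof *next);
    size_t *perm = malloc(n * sizeof *perm);
    for (size_t i = 0; i < n; i++)
        perm[i] = i;
    for (size_t i = n - 1; i > 0; i--) {   /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i];
        perm[i] = perm[j];
        perm[j] = t;
    }
    for (size_t i = 0; i < n; i++)         /* link perm[i] -> perm[i + 1] */
        next[perm[i]] = perm[(i + 1) % n];
    free(perm);
    return next;
}

/* Walk the chain and return the average time per access in nanoseconds. */
static double chase(const size_t *next, size_t steps)
{
    struct timespec t0, t1;
    volatile size_t p = 0;                 /* volatile keeps the loop honest */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++)
        p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void)
{
    /* ~16 KiB fits comfortably in L1; ~256 MiB forces most loads out to DRAM. */
    const size_t small = (16u * 1024) / sizeof(size_t);
    const size_t large = (256u * 1024 * 1024) / sizeof(size_t);
    const size_t steps = 50u * 1000 * 1000;

    size_t *a = make_chain(small);
    size_t *b = make_chain(large);
    printf("cache-resident set: %.2f ns per access\n", chase(a, steps));
    printf("DRAM-resident set:  %.2f ns per access\n", chase(b, steps));
    free(a);
    free(b);
    return 0;
}
```

The random cyclic chain is the important part: because each load’s address depends on the previous load’s result, the prefetcher can’t hide the latency, so the measured time per step approximates the true access latency of whichever level of the hierarchy the working set fits in.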
