[From: cls at truffula.sj.ca.us (Cameron L. Spitzer)]
Read/write memory in computers is implemented using Random Access Memory chips (RAMs). RAMs are also used to store the displayed image in a video board, to buffer frames in a network controller or sectors in a disk controller, etc. RAMs are sold by their size (in bits), word width (how many bits can you access in one cycle), and access time (how fast you can read a location), among other characteristics.
RAMs can be classified into two types: "static" and "dynamic."
In a static RAM, each bit is represented by the state of a circuit with two stable states. Such a "bistable" circuit can be built with four transistors (for maximum density) or six (for highest speed and lowest power). Static RAMs (SRAMs) are available in many configurations. (Almost) all SRAMs have one pin per address line, and all of them are able to store data for as long as power is applied, without any external circuit activity.
In a dynamic RAM (DRAM), each bit is represented by the charge on a *very* small (30-50 femtofarads) capacitor, which is built into a single, specialized transistor. DRAM storage cells take only about a quarter of the silicon area that SRAM cells take, and silicon area translates into cost.
The cells in a DRAM are organized into rows and columns. To access a bit, you first select its row, and then you select its column. Unfortunately, the charge leaks off the capacitor over time, so each cell must be periodically "refreshed" by reading it and writing it back. This happens automatically whenever a row is accessed. After you're finished accessing a row, you have to give the DRAM time to copy the row of bits back to the cells: the "precharge" time.
Because the row and column addresses are not needed at the same time, they share the same pins. This makes the DRAM package smaller and cheaper, but it makes the problem of distributing the signals in the memory array difficult, because the timing becomes so critical. Signal integrity in the memory array is one of the things that differentiate a lousy motherboard from a high quality one.
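To make the multiplexing concrete, here is a hedged sketch of how a memory controller might split a linear cell address into the row and column halves that share the DRAM's address pins. The 10/10 split matches a 1M x1 part; the cell number is arbitrary, and a real controller does this in logic, not software.

    /* Sketch: splitting a cell address into row and column halves
       (assumes a 1M x1 DRAM with 10 row and 10 column address bits). */

    #include <stdio.h>

    #define ROW_BITS 10
    #define COL_BITS 10

    int main(void)
    {
        unsigned long cell = 0x5A3C7;              /* arbitrary cell number */
        unsigned int row = (cell >> COL_BITS) & ((1 << ROW_BITS) - 1);
        unsigned int col = cell & ((1 << COL_BITS) - 1);

        /* The controller drives 'row' onto the address pins and asserts
           RAS, then replaces it with 'col' and asserts CAS. */
        printf("cell %#lx -> row %u, column %u\n", cell, row, col);
        return 0;
    }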
Through the 1970s, RAMs were shipped in tubes, and the board makers soldered them into boards or plugged them into sockets on boards. This became a problem when end-users started installing their own RAMs, because the leads ("pins") were too delicate. Also, the individual dual in-line package (DIP) sockets took up too much board area.
In the early 1980s, DRAM manufacturers began offering DRAMs on tiny circuit boards which snap into special sockets, and by the late '80s these "single in-line memory modules" (SIMMs) had become the most popular DRAM packaging. Board vendors who didn't trust the new SIMM sockets used modules with pins: single in-line pinned packages (SIPPs), which plug into sockets with more traditional pin receptacles.
PC-compatibles store each byte in main memory with an associated check bit, or "parity bit." That's why you add memory in multiples of nine bits. The most common SIMMs present nine bits of data at each cycle (we say they're "nine bits wide") and have thirty contact pads, or "leads." (The leads are commonly called "pins" in the trade, although "pads" is a more appropriate term. SIMMs don't *have* pins!)
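As an illustration of what the ninth bit holds, here is a hedged sketch of a parity computation for one byte. It uses even parity for concreteness; whether a particular PC checks even or odd parity is not specified here.

    /* Sketch: computing a parity check bit for one byte (even parity
       assumed for illustration). */

    #include <stdio.h>

    static unsigned parity_bit(unsigned char byte)
    {
        unsigned ones = 0;
        while (byte) {
            ones += byte & 1;      /* count the 1 bits */
            byte >>= 1;
        }
        return ones & 1;           /* store 1 if the count of 1s is odd */
    }

    int main(void)
    {
        unsigned char b = 0x5A;    /* 0101 1010 has four 1s */
        printf("data %#x -> check bit %u\n", b, parity_bit(b));
        return 0;
    }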
At the high end of the PC market, "36 bit wide" SIMMs with 72 pads are gaining popularity. Because of their wide data path, 36-bit SIMMs give the motherboard designer more configuration options (you can upgrade in smaller chunks) and allow bandwidth-enhancing tricks such as interleaving, which were once reserved for larger machines. Another advantage of 72-lead SIMMs is that four of the leads are used to tell the motherboard how fast the RAMs are, so it can configure itself automatically. (I do not know whether the current crop of motherboards takes advantage of this feature.)
In 1988 and '89, when 1 megabit (1Mb) DRAMs were new, manufacturers had to pack nine RAMs onto a 1 megabyte (1MB) SIMM. Now (1993) 4Mb DRAMs are the most cost-effective size. So a 1MB SIMM can be built with two 4Mb DRAMs (configured 1M x4) plus a 1Mb (x1) for the check-bit.
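The arithmetic behind that three-chip module, written out as a hedged sketch: two 1M x4 DRAMs supply the eight data bits and one 1M x1 supplies the check bit.

    /* Sketch: capacity check for the 3-chip 1MB SIMM described above. */

    #include <stdio.h>

    int main(void)
    {
        unsigned long depth = 1024UL * 1024UL;     /* 1M words deep */
        unsigned data_width = 2 * 4;               /* two 1M x4 parts */
        unsigned check_width = 1 * 1;              /* one 1M x1 part  */

        printf("data: %lu bytes, total width: %u bits\n",
               depth * data_width / 8, data_width + check_width);
        return 0;
    }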
In graphics-capable video boards, the displayed image is almost always stored in DRAMs. Access to this data must be shared between the hardware which continuously copies it to the display device (this process is called "display refresh" or "video refresh") and the CPU. Most boards do it by time-sharing ordinary, single-port DRAMs. But the faster, more expensive boards use specialized DRAMs which are equipped with a second data port whose function is tailored to the display refresh operation. These "Video DRAMs" (VRAMs) have a few extra pins and command a price premium. They nearly double the bandwidth available to the CPU or graphics engine.
(As far as I know, the first dual-ported DRAMs were built by Four-Phase Systems Inc., in 1970, for use in their "IV-70" minicomputers, which had integrated video. The major DRAM vendors started offering VRAMs in about 1983 [Texas Instruments was first], and workstation vendors snapped them up. They made it to the PC trade in the late '80s.)
DRAMs are characterized by the time it takes to read a word, measured from the row address becoming valid to the data coming out. This parameter is called Row Access Time, or tRAC. There are many other timing parameters to a DRAM, but they scale with tRAC remarkably well. tRAC is measured in nanoseconds (ns). A nanosecond is one billionth (10^-9) of a second.
It's so difficult to control the semiconductor fabrication process that the parts don't all come out the same. Instead, their performance varies widely, depending on many factors. A RAM design that would yield 50 ns tRAC parts if the fab were always tuned perfectly instead yields a distribution of parts from 80 ns down to 50 ns. When the plant is new, it may turn out mostly nominal 70 ns parts, which may actually deliver tRAC between 60.1 ns and 70.0 ns, at 70 or 85 degrees Celsius and a 4.5 volt power supply. As it gets tuned up, it may turn out mostly 60 ns parts and a few 50s and 70s. When it wears out, it may get less accurate and start yielding more 70s again.
RAM vendors have to test each part off the line to see how fast it is. An accurate, at-speed DRAM tester can cost several million dollars, and testing can be a quarter of the cost of the parts. The finished parts are not marked until they are tested and their speed is known.
Individual DRAMs are marked with their speed after they are tested. The mark is usually a suffix to the part number, representing tens of nanoseconds. Thus, a 511024-7 on a SIMM is very likely a 70 ns DRAM. (vendor numbering scheme table to be added)
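As a hedged sketch only, here is one way to turn such a suffix into nanoseconds. Vendors differ in their numbering schemes (the table mentioned above is not reproduced here), so treat the mapping as an assumption for illustration, not a rule.

    /* Sketch: interpreting a speed suffix such as "-7" or "-60".
       The single-digit-means-tens convention is an assumption. */

    #include <stdio.h>
    #include <stdlib.h>

    static int suffix_to_ns(const char *suffix)
    {
        int n = atoi(suffix + 1);        /* skip the leading '-' */
        return (n < 10) ? n * 10 : n;    /* "-7" -> 70 ns, "-60" -> 60 ns */
    }

    int main(void)
    {
        printf("511024-7  -> %d ns\n", suffix_to_ns("-7"));
        printf("514256-60 -> %d ns\n", suffix_to_ns("-60"));
        return 0;
    }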
[From: cls at truffula.sj.ca.us (Cameron L. Spitzer)]
There is no reliable formula for deriving the required RAM speed from the clock rate or wait states on the motherboard. Do not buy a motherboard that doesn't come with a manual that clearly specifies what speed SIMMs are required at each clock rate. You can always substitute *faster* SIMMs for the ones that were called out in the manual. If you are investing in a substantial quantity of RAM, consider buying faster than you need on the chance you can keep it when you get a faster CPU.
That said, most 25 MHz and slower motherboards work fine with 80 ns parts, most 33 MHz boards and some 40 MHz boards were designed for 70 ns parts, and some 40 MHz boards and everything faster require 60 ns or faster. Some motherboards allow programming extra wait states to allow for slower parts, but some of these designs do not really relax all the critical timing requirements by doing that. It's much safer to use DRAMs that are fast enough for the no-wait or one-wait cycles at the top end of the motherboard's capabilities.
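Restated as a lookup, the rough guidance above looks like the sketch below. It is only a rule of thumb; the motherboard manual remains the authority.

    /* Sketch: the rule of thumb above as a lookup (guideline only). */

    #include <stdio.h>

    static int rule_of_thumb_ns(int clock_mhz)
    {
        if (clock_mhz <= 25) return 80;   /* most 25 MHz and slower boards */
        if (clock_mhz <= 33) return 70;   /* most 33 MHz, some 40 MHz boards */
        return 60;                        /* some 40 MHz boards and faster */
    }

    int main(void)
    {
        int mhz;
        for (mhz = 20; mhz <= 50; mhz += 5)
            printf("%2d MHz -> %d ns or faster\n", mhz, rule_of_thumb_ns(mhz));
        return 0;
    }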
[From: cls at truffula.sj.ca.us (Cameron L. Spitzer)]
Almost always. But there are exceptions.
You might find the real solution is to use SIMMs one speed faster than the manual calls for, because the particular motherboard design just cuts too many things too close.
[From: uwvax!astroatc!nicmad!madnix!zaphod (Ron Bean)]
All 72-pin SIMMs are 32 bits wide (36 with parity), but double-sided SIMMs have four RAS (Row Address Strobe) lines instead of two. This can be thought of as two single-sided SIMMs wired in parallel. But since there is only one set of data lines, you can only access one "side" at a time.
Usually, 1MB, 4MB, and 16MB 72-pin SIMMs are single-sided, and 2MB, 8MB, and 32MB SIMMs are double-sided. This only refers to how the chips are wired: SIMMs that are electrically "single-sided" may have chips on both sides of the board.
Most 486 motherboards use memory in banks of 32 bits (plus parity), and may treat a double-sided SIMM as "two banks" (see your motherboard's manual for details). Some can take four SIMMs if they're single-sided, but only two if they're double-sided. Others can take four of either type.
Pentium (and some 486) motherboards use pairs of 72-pin SIMMs for 64-bit memory. Since double-sided SIMMs can only access 32 bits at a time, you still need to use them in pairs to make 64 bits.
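The width arithmetic, as a hedged sketch: a 72-pin SIMM presents 32 data bits whether it is single- or double-sided, because the second "side" is a second bank behind the same data lines, not extra width.

    /* Sketch: how many 72-pin SIMMs make one bank on a 32-bit or
       64-bit memory bus. */

    #include <stdio.h>

    int main(void)
    {
        int simm_width = 32;                    /* data bits per 72-pin SIMM */
        int bus_486 = 32, bus_pentium = 64;

        printf("486:     %d SIMM(s) per bank\n", bus_486 / simm_width);
        printf("Pentium: %d SIMM(s) per bank\n", bus_pentium / simm_width);
        return 0;
    }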
The precise method is to identify each chip (by looking it up in that DRAM manufacturer's databook) and count how many of each type there are. However, you can get a good guess just by counting the number of chips.
DRAMs (for PC SIMMs) are either 1 or 4 bits wide. The total bit width is 8 or 9 (for 30-pin SIMMs) and 32 or 36 (for 72-pin SIMMs). DRAMs to hold parity are usually 1 bit wide to allow byte writes. Some examples:
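As a hedged sketch (the chip organizations below are assumed for illustration, not taken from any particular databook), the arithmetic behind the guess looks like this:

    /* Sketch: guessing a SIMM's organization from its chip count. */

    #include <stdio.h>

    struct example {
        const char *simm;
        int data_chips, data_width;        /* e.g. 8 chips, x1 each */
        int parity_chips, parity_width;
    };

    int main(void)
    {
        struct example ex[] = {
            { "30-pin 1MB w/parity", 8, 1, 1, 1 },  /* 9 chips: nine 1M x1          */
            { "30-pin 1MB w/parity", 2, 4, 1, 1 },  /* 3 chips: two 1M x4 + one 1M x1 */
            { "72-pin 4MB w/parity", 8, 4, 4, 1 },  /* 12 chips, all 1M deep        */
        };
        int i;

        for (i = 0; i < 3; i++)
            printf("%s: %d chips, %d data bits + %d check bits wide\n",
                   ex[i].simm,
                   ex[i].data_chips + ex[i].parity_chips,
                   ex[i].data_chips * ex[i].data_width,
                   ex[i].parity_chips * ex[i].parity_width);
        return 0;
    }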
Some new 72-pin SIMMs have two 32 (or 36) bit banks per SIMM and therefore have twice as many chips as a normal SIMM.
It also seems that some cheap SIMMs have begun using "fake" parity: XOR gates that generate the parity bit from the 8 data bits on the fly, rather than storing and recalling the actual parity bit written by the DRAM controller. The only way to tell if you've been taken by one of these fake-parity SIMMs is to look up all of the suspected parts in a DRAM databook.
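A hedged sketch of what such a part does, and why it defeats the purpose of parity: the check bit is regenerated from the data at read time, so it always matches, and a flipped data bit is never caught.

    /* Sketch: a "fake parity" part regenerates the check bit from the
       8 data bits (an XOR tree in logic), so errors go undetected. */

    #include <stdio.h>

    static unsigned xor_tree(unsigned char byte)
    {
        unsigned p = 0;
        while (byte) {
            p ^= byte & 1;      /* XOR of all 8 data bits */
            byte >>= 1;
        }
        return p;
    }

    int main(void)
    {
        unsigned char stored = 0x5A;
        unsigned char corrupted = stored ^ 0x10;   /* one bit flips in the DRAM */

        /* The SIMM presents xor_tree(corrupted), which matches the
           corrupted data by construction, so no parity error is raised. */
        printf("check bit presented: %u (always consistent)\n",
               xor_tree(corrupted));
        return 0;
    }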
Yes, just about all SIMMs are compatible, whether they come from another personal computer, a mainframe, or even a laser printer, though there are a few odd systems out there. There are three significant issues: speed, parity, and number of pins (data width). Speed is obvious: check the rating (e.g. 70 ns) to make sure the SIMMs meet the minimum requirement of your system. Parity either exists or it doesn't, and it can be identified by an extra bit per byte (9 bits instead of 8, or 36 instead of 32). If your system does not require parity, you can still use SIMMs with parity. If, however, your system does require parity, you can't use SIMMs without it; in that case, many PCs have an option to disable the parity requirement via a jumper or BIOS setting; refer to your motherboard manual. The final issue is the number of pins on the SIMM; the two most common are 30 pins (8- or 9-bit SIMMs) and 72 pins (32- or 36-bit SIMMs). The 72-pin SIMM is physically larger, so one type cannot be used in the other's socket. A few motherboards have both types of sockets.
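The three checks above, restated as a hedged sketch. The field names and the parity-disable option are illustrative only; your motherboard manual is the real reference.

    /* Sketch: the speed / parity / pin-count compatibility checks. */

    #include <stdio.h>

    struct simm  { int ns; int has_parity; int pins; };
    struct board { int max_ns; int needs_parity; int can_disable_parity; int pins; };

    static int compatible(struct simm s, struct board b)
    {
        if (s.pins != b.pins) return 0;               /* 30- vs 72-pin       */
        if (s.ns > b.max_ns) return 0;                /* too slow            */
        if (b.needs_parity && !s.has_parity
            && !b.can_disable_parity) return 0;       /* missing check bit   */
        return 1;                                     /* extra parity is fine */
    }

    int main(void)
    {
        struct simm  printer_simm = { 70, 0, 72 };
        struct board pc = { 70, 1, 1, 72 };
        printf("usable: %s\n", compatible(printer_simm, pc) ? "yes" : "no");
        return 0;
    }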