4 Contributors · 11 Replies · 13 Views · 10-Year Discussion Span · Last Post by sanzilla

Hi

The Internet is full of answers on this subject.

Open www.google.com and type "memory management of Windows 2000" into the search box.

You will find info for your project.


HTH

Darren
South Africa


Well, you're right, but those results aren't to the point. I have searched the web but found nothing. I was told that Windows is not freeware, which is why we can't find anything about it on the net.


Hi

OK, I think we need to be more specific about what the project requires from you.

What tasks have been set by the university, and what are the questions or tasks they need you to achieve? The more information we have, the more we can help you.

Darren
South Africa


Hi

This is the thesis I wrote for my degree in Computer Science. It refers to how memory works. It's a large document, so let it download.


Memory Modules
Memory chips in desktop computers originally used a pin configuration called dual inline
package
(DIP). This pin configuration could be soldered into holes on the computer's
motherboard or plugged into a socket that was soldered on the motherboard. This method
worked fine when computers typically operated on a couple of megabytes or less of RAM,
but as the need for memory grew, the number of chips needing space on the motherboard
increased.
The solution was to place the memory chips, along with all of the support components, on a
separate printed circuit board (PCB) that could then be plugged into a special connector
(memory bank) on the motherboard. Most of these chips use a small outline J-lead (SOJ)
pin configuration, but quite a few manufacturers use the thin small outline package (TSOP)
configuration as well. The key difference between these newer pin types and the original DIP
configuration is that SOJ and TSOP chips are surface-mounted to the PCB. In other words,
the pins are soldered directly to the surface of the board, not inserted in holes or sockets.
Memory chips are normally only available as part of a card called a module. You've probably
seen memory listed as 8x32 or 4x16. These numbers represent the number of the chips
multiplied by the capacity of each individual chip, which is measured in megabits (Mb), or
one million bits. Take the result and divide it by eight to get the number of megabytes on that
module. For example, 4x32 means that the module has four 32-megabit chips. Multiply 4 by
32 and you get 128 megabits. Since we know that a byte has 8 bits, we need to divide our
result of 128 by 8. Our result is 16 megabytes!
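That arithmetic can be sketched as a tiny calculation (an illustration in Python, not part of the original thesis):

```python
def module_capacity_mb(num_chips, chip_capacity_mbits):
    """Capacity of a memory module in megabytes (MB).

    num_chips x chip_capacity_mbits gives total megabits (Mb);
    dividing by 8 converts bits to bytes.
    """
    return (num_chips * chip_capacity_mbits) // 8

# A "4x32" module: four 32-megabit chips -> 128 Mb -> 16 MB
print(module_capacity_mb(4, 32))   # 16
```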
The type of board and connector used for RAM in desktop computers has evolved over the
past few years. The first types were proprietary, meaning that different computer
manufacturers developed memory boards that would only work with their specific systems.
Then came SIMM, which stands for single in-line memory module. This memory board
used a 30-pin connector and was about 3.5 x .75 inches in size (about 9 x 2 cm). In most
computers, you had to install SIMMs in pairs of equal capacity and speed. This is because
the data bus is wider than a single SIMM's output. For example, you would install two 8-
megabyte (MB) SIMMs to get 16 megabytes total RAM. Each SIMM could send 8 bits of data
at one time, while the system bus could handle 16 bits at a time. Later SIMM boards, slightly
larger at 4.25 x 1 inch (about 11 x 2.5 cm), used a 72-pin connector for increased bandwidth
and allowed for up to 256 MB of RAM.
From the top: SIMM, DIMM and SODIMM memory modules
As processors grew in speed and bandwidth capability, the industry adopted a new standard
in dual in-line memory module (DIMM). With a whopping 168-pin or 184-pin connector and
a size of 5.4 x 1 inch (about 14 x 2.5 cm), DIMMs range in capacity from 8 MB to 1 GB per
module and can be installed singly instead of in pairs. Most PC memory modules and the
modules for the Mac G5 systems operate at 2.5 volts, while older Mac G4 systems typically
use 3.3 volts. Another standard, Rambus in-line memory module (RIMM), is comparable in
size and pin configuration to DIMM but uses a special memory bus to greatly increase
speed.
Many brands of notebook computers use proprietary memory modules, but several
manufacturers use RAM based on the small outline dual in-line memory module
(SODIMM) configuration. SODIMM cards are small, about 2 x 1 inch (5 x 2.5 cm), and have
144 or 200 pins. Capacity ranges from 16 MB to 1 GB per module. To conserve space, the
Apple iMac desktop computer uses SO-DIMMs instead of the traditional DIMMs. Subnotebook
computers use even smaller DIMMs, known as Micro-DIMMs, which have either
144 pins or 172 pins.
Error Checking
Most memory available today is highly reliable. Most systems simply have the memory
controller check for errors at start-up and rely on that. Memory chips with built-in
error-checking typically use a method known as parity to check for errors. Parity chips have an
extra bit for every 8 bits of data. The way parity works is simple. Let's look at even parity
first.
When the 8 bits in a byte receive data, the chip adds up the total number of 1s. If the total
number of 1s is odd, the parity bit is set to 1. If the total is even, the parity bit is set to 0.
When the data is read back out of the bits, the total is added up again and compared to the
parity bit. If the total is odd and the parity bit is 1, then the data is assumed to be valid and is
sent to the CPU. But if the total is odd and the parity bit is 0, the chip knows that there is an
error somewhere in the 8 bits and dumps the data. Odd parity works the same way, but the
parity bit is set to 1 when the total number of 1s in the byte is even.
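The even-parity scheme just described can be sketched in a few lines of Python (an illustration of the logic, not how the chip is actually wired):

```python
def even_parity_bit(byte):
    """Parity bit under even parity: set to 1 when the byte holds an
    odd number of 1s, so the 9 bits together always have an even count."""
    return bin(byte).count("1") % 2

def parity_ok(byte, parity_bit):
    """True if the byte read back still matches its stored parity bit."""
    return even_parity_bit(byte) == parity_bit

value = 0b10110100                    # four 1s -> parity bit is 0
stored = even_parity_bit(value)
assert parity_ok(value, stored)               # data assumed valid
corrupted = value ^ 0b00000100                # one bit flips in memory
assert not parity_ok(corrupted, stored)       # error detected, data dumped
```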
The problem with parity is that it discovers errors but does nothing to correct them. If a byte
of data does not match its parity bit, then the data is discarded and the system tries again.
Computers in critical positions need a higher level of fault tolerance. High-end servers often
have a form of error-checking known as error-correction code (ECC). Like parity, ECC
uses additional bits to monitor the data in each byte. The difference is that ECC uses several
bits for error checking -- how many depends on the width of the bus -- instead of one. ECC
memory uses a special algorithm not only to detect single bit errors, but actually correct them
as well. ECC memory will also detect instances when more than one bit of data in a byte
fails. Such failures are very rare, and they are not correctable, even with ECC.
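Actual ECC modules use wider codes tied to the bus width, but the idea of correcting (not just detecting) a single flipped bit can be illustrated with the classic Hamming(7,4) code. This is a sketch of the principle, not the exact algorithm ECC memory uses:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Pack 4 data bits into 7, with parity bits at positions 1, 2 and 4."""
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Fix at most one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-indexed position of the error
    if syndrome:
        c[syndrome - 1] ^= 1           # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode(1, 0, 1, 1)
code[4] ^= 1                                    # a single-bit memory fault
assert hamming74_decode(code) == [1, 0, 1, 1]   # corrected, not just detected
```

Flipping any one of the seven bits still decodes to the original data, which is exactly the property parity alone lacks.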
The majority of computers sold today use nonparity memory chips. These chips do not
provide any type of built-in error checking, but instead rely on the memory controller for error
detection.
Common RAM Types
SRAM
Static random access memory
uses multiple transistors, typically four to six, for each
memory cell but doesn't have a capacitor in each cell. It is used primarily for cache.
DRAM
Dynamic random access memory
has memory cells with a paired transistor and capacitor
requiring constant refreshing.
FPM DRAM
Fast page mode dynamic random access memory
was the original form of DRAM. It
waits through the entire process of locating a bit of data by column and row and then reading
the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately
176 MBps.
EDO DRAM
Extended data-out dynamic random access memory
does not wait for all of the
processing of the first bit before continuing to the next one. As soon as the address of the
first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster
than FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.
SDRAM
Synchronous dynamic random access memory
takes advantage of the burst mode
concept to greatly improve performance. It does this by staying on the row containing the
requested bit and moving rapidly through the columns, reading each bit as it goes. The idea
is that most of the time the data needed by the CPU will be in sequence. SDRAM is about
five percent faster than EDO RAM and is the most common form in desktops today.
Maximum transfer rate to L2 cache is approximately 528 MBps.
DDR SDRAM
Double data rate synchronous dynamic RAM
is just like SDRAM except that it has higher
bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is approximately
1,064 MBps (for DDR SDRAM at 133 MHz).
RDRAM
Rambus dynamic random access memory
is a radical departure from the previous DRAM
architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory module
(RIMM)
, which is similar in size and pin configuration to a standard DIMM. What makes
RDRAM so different is its use of a special high-speed data bus called the Rambus channel.
RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 MBps.
Since they operate at such high speeds, they generate much more heat than other types of
chips. To help dissipate the excess heat, Rambus chips are fitted with a heat spreader, which
looks like a long thin wafer. Just as there are smaller versions of DIMMs, there are also
SO-RIMMs, designed for notebook computers.
Credit Card Memory
Credit card memory is a proprietary self-contained DRAM memory module that plugs into a
special slot for use in notebook computers.
PCMCIA Memory Card
Another self-contained DRAM module for notebooks, cards of this type are not proprietary
and should work with any notebook computer whose system bus matches the memory card's
configuration.
CMOS RAM
CMOS RAM is a term for the small amount of memory used by your computer and some
other devices to remember things like hard disk settings. This memory uses a small battery
to provide it with the power it needs to maintain the memory contents.
VRAM
VideoRAM
, also known as multiport dynamic random access memory (MPDRAM), is a
type of RAM used specifically for video adapters or 3-D accelerators. The "multiport" part
comes from the fact that VRAM normally has two independent access ports instead of one,
allowing the CPU and graphics processor to access the RAM simultaneously. VRAM is
located on the graphics card and comes in a variety of formats, many of which are
proprietary. The amount of VRAM is a determining factor in the resolution and colour
depth of the display. VRAM is also used to hold graphics-specific information such as 3-D
geometry data and texture maps. True multiport VRAM tends to be expensive, so today
many graphics cards use SGRAM (synchronous graphics RAM) instead. Performance is
nearly the same, but SGRAM is cheaper.
Maybe you have been thinking about buying a computer, and it has occurred to you that you
might want to buy a laptop version. After all, today's laptops have just as much computing
power as desktops, without taking up as much space. You can take a laptop on the road with
you to do your computing or make presentations. Perhaps you prefer comfortably working on
your couch in front of the TV instead of sitting at a desk. Maybe a laptop is for you.
A Brief History
Alan Kay of the Xerox Palo Alto Research Center originated the idea of a portable computer
in the 1970s. Kay envisioned a notebook-sized, portable computer called the Dynabook that
everyone could own, and that could handle all of the user's informational needs. Kay also
envisioned the Dynabook with wireless network capabilities. Arguably, the first laptop
computer was designed in 1979 by William Moggridge of Grid Systems Corp. It had 340
kilobytes of bubble memory, a die-cast magnesium case and a folding electroluminescent
graphics display screen. In 1983, Gavilan Computer produced a laptop computer with the
following features:
· 64 kilobytes (expandable to 128 kilobytes) of random access memory (RAM)
· Gavilan operating system (also ran MS-DOS)
· 8088 microprocessor
· touchpad mouse
· portable printer
· weighed 9 lb (4 kg) alone or 14 lb (6.4 kg) with printer
The Gavilan computer had a floppy drive that was not compatible with other computers, and
it primarily used its own operating system. The company failed.
In 1984, Apple Computer introduced its Apple IIc model. The Apple IIc was a notebook-sized
computer, but not a true laptop. It had a 65C02 microprocessor, 128 kilobytes of memory, an
internal 5.25-inch floppy drive, two serial ports, a mouse port, modem card, external power
supply, and a folding handle. The computer itself weighed about 10 to 12 lb (about 5 kg), but
the monitor was heavier. The Apple IIc had a 9-inch monochrome monitor or an optional
LCD panel. The combination computer/LCD panel made it a genuinely portable computer,
although you would have to set it up once you reached your destination. The Apple IIc was
aimed at the home and educational markets, and was highly successful for about five years.
Later, in 1986, IBM introduced its IBM PC Convertible. Unlike the Apple IIc, the PC
Convertible was a true laptop computer. Like the Gavilan computer, the PC Convertible used
an 8088 microprocessor, but it had 256 kilobytes of memory, two 3.5-inch (8.9-cm) floppy
drives, an LCD, parallel and serial printer ports and a space for an internal modem. It came
with its own applications software (basic word processing, appointment calendar,
telephone/address book, calculator), weighed 12 lbs (5.4 kg) and sold for $3,500. The PC
Convertible was a success, and ushered in the laptop era. A bit later, Toshiba was
successful with an IBM laptop clone.
Since these early models, many manufacturers have introduced and improved laptop
computers over the years. Today's laptops are much more sophisticated, lighter and closer
to Kay's original vision.
The First Laptop?
By Ian McKay
The following claim is the sort of thing that can get you
into trouble, but only M.A.D. offers you the chance to
verify the news of what I imagine is the first auction
appearance of the Grid Compass Computer 1109 that
Bonhams, which offered it in a "20th Century Design"
sale of June 1, claimed is "the first ever lap-top
computer."
Designed in 1979 by a Briton, William Moggridge, for
Grid Systems Corporation, the Grid Compass was one fifth the weight of any model
equivalent in performance and was used by NASA on the space shuttle program in the
early 1980s.
The sale catalog describes it as a "340K byte bubble memory lap-top computer with die-cast
magnesium case and folding electroluminescent graphics display screen."
Complete with manual, it sold for $800.
When you think about it, it's amazing how many different types of electronic memory you
encounter in daily life. Many of them have become an integral part of our vocabulary:
· RAM
· ROM
· Cache
· Dynamic RAM
· Static RAM
· Flash Memory
· Memory Sticks
· Virtual Memory
· Video memory
· BIOS
You already know that the computer in front of you has memory. What you may not know is
that most of the electronic items you use every day have some form of memory also. Here
are just a few examples of the many items that use memory:
· Cell phones
· PDAs
· Game consoles
· Car radios
· VCRs
· TVs
Each of these devices uses different types of memory in different ways!
In this article, you'll learn why there are so many different types of memory and what all of
the terms mean.
RAM Basics
Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of
transistors and capacitors. In the most common form of computer memory, dynamic
random access memory
(DRAM), a transistor and a capacitor are paired to create a
memory cell, which represents a single bit of data. The capacitor holds the bit of information
-- a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip
read the capacitor or change its state.
A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory
cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the
capacitor's bucket is that it has a leak. In a matter of a few milliseconds a full bucket
becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory
controller
has to come along and recharge all of the capacitors holding a 1 before they
discharge. To do this, the memory controller reads the memory and then writes it right back.
This refresh operation happens automatically thousands of times per second.
The capacitor in a dynamic RAM memory cell is like a leaky bucket.
It needs to be refreshed periodically or it will discharge to 0.
This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be
dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this
refreshing is that it takes time and slows down the memory.
Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows
(wordlines). The intersection of a bitline and wordline constitutes the address of the
memory cell.
Memory is made up of bits arranged in a two-dimensional grid. To write data, a column is
selected and then the rows are charged to store values into the cells of that column.
DRAM works by sending a charge through the appropriate column (CAS) to activate the
transistor at each bit in the column. When writing, the row lines contain the state the
capacitor should take on. When reading, the sense-amplifier determines the level of charge
in the capacitor. If it is more than 50 percent, it reads it as a 1; otherwise it reads it as a 0.
The counter tracks the refresh sequence based on which rows have been accessed in what
order. The length of time necessary to do all this is so short that it is expressed in
nanoseconds (billionths of a second). A memory chip rating of 70ns means that it takes 70
nanoseconds to completely read and recharge each cell.
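The leak-read-refresh cycle can be modeled very roughly (a toy illustration; real DRAM sense amplifiers and timing are far more involved):

```python
# Toy model of a leaky DRAM cell: the capacitor's charge decays, the
# sense amplifier compares it against a 50 percent threshold, and a
# refresh reads the value and writes it back at full strength.
THRESHOLD = 0.5

def read_cell(charge):
    """Sense amplifier: above 50 percent charge reads as 1, otherwise 0."""
    return 1 if charge > THRESHOLD else 0

def refresh(charge):
    """Read the cell, then rewrite it: a 1 goes back to full charge."""
    return 1.0 if read_cell(charge) else 0.0

charge = 1.0                    # a freshly written 1
charge *= 0.7                   # the capacitor leaks between refreshes
assert read_cell(charge) == 1   # 0.7 is still above the threshold
charge = refresh(charge)        # restored to 1.0 before it can decay to 0
```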
Memory cells alone would be worthless without some way to get information in and out of
them. So the memory cells have a whole support infrastructure of other specialized circuits.
These circuits perform functions such as:
· Identifying each row and column (row address select and column address select)
· Keeping track of the refresh sequence (counter)
· Reading and restoring the signal from a cell (sense amplifier)
· Telling a cell whether it should take a charge or not (write enable)
Other functions of the memory controller include a series of tasks that include identifying
the type, speed and amount of memory and checking for errors.
Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds
each bit of memory. A flip-flop for a memory cell takes four or six transistors along with some
wiring, but never has to be refreshed. This makes static RAM significantly faster than
dynamic RAM. However, because it has more parts, a static memory cell takes up a lot more
space on a chip than a dynamic memory cell. Therefore, you get less memory per chip, and
that makes static RAM a lot more expensive.
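The flip-flop idea can be sketched as a cross-coupled NOR latch (a logic-level illustration; a real SRAM cell is built from transistors, not function calls):

```python
def sr_latch(s, r, q=0, nq=1):
    """Settle a cross-coupled NOR latch: s=1 sets q to 1, r=1 resets it,
    and s=r=0 holds the previous state indefinitely -- no refresh
    needed, unlike a DRAM cell."""
    for _ in range(4):              # iterate until the gates settle
        q = int(not (r or nq))
        nq = int(not (s or q))
    return q, nq

q, nq = sr_latch(1, 0)              # set
assert (q, nq) == (1, 0)
q, nq = sr_latch(0, 0, q, nq)       # hold: the bit persists with no refresh
assert (q, nq) == (1, 0)
```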
So static RAM is fast and expensive, and dynamic RAM is less expensive and slower. So
static RAM is used to create the CPU's speed-sensitive cache, while dynamic RAM forms
the larger system RAM space.
How Much Do You Need?
It's been said that you can never have enough money, and the same holds true for RAM,
especially if you do a lot of graphics-intensive work or gaming. Next to the CPU itself, RAM is
the most important factor in computer performance. If you don't have enough, adding RAM
can make more of a difference than getting a new CPU!
If your system responds slowly or accesses the hard drive constantly, then you need to add
more RAM. If you are running Windows XP, Microsoft recommends 128MB as the minimum
RAM requirement. At 64MB, you may experience frequent application problems. For optimal
performance with standard desktop applications, 256MB is recommended. If you are running
Windows 95/98, you need a bare minimum of 32 MB, and your computer will work much
better with 64 MB. Windows NT/2000 needs at least 64 MB, and it will take everything you
can throw at it, so you'll probably want 128 MB or more.
Linux works happily on a system with only 4 MB of RAM. If you plan to add X-Windows or do
much serious work, however, you'll probably want 64 MB. Mac OS X systems should have a
minimum of 128 MB, or for optimal performance, 512 MB.
The amount of RAM listed for each system above is estimated for normal usage -- accessing
the Internet, word processing, standard home/office applications and light entertainment. If
you do computer-aided design (CAD), 3-D modeling/animation or heavy data processing, or
if you are a serious gamer, then you will most likely need more RAM. You may also need
more RAM if your computer acts as a server of some sort.
Another question is how much VRAM you want on your video card. Almost all cards that you
can buy today have at least 16 MB of RAM. This is normally enough to operate in a typical
office environment. You should probably invest in a 32-MB or better graphics card if you
want to do any of the following:
· Play realistic games
· Capture and edit video
· Create 3-D graphics
· Work in a high-resolution, full-color environment
· Design full-color illustrations
When shopping for video cards, remember that your monitor and computer must be capable
of supporting the card you choose.
Read-only memory (ROM), also known as firmware, is an integrated circuit programmed
with specific data when it is manufactured. ROM chips are used not only in computers, but in
most other electronic items as well. In this edition you will learn about the different types of
ROM and how each works. This article is one in a series of articles dealing with computer
memory, including:
· How Computer Memory Works
· How RAM Works
· How Virtual Memory Works
· How Flash Memory Works
· How BIOS Works
Let's start by identifying the different types of ROM.
ROM Types
There are five basic ROM types:
· ROM
· PROM
· EPROM
· EEPROM
· Flash memory
Each type has unique characteristics, which you'll learn about in this article, but they are all
types of memory with two things in common:
· Data stored in these chips is nonvolatile -- it is not lost when power is removed.
· Data stored in these chips is either unchangeable or requires a special operation to
change (unlike RAM, which can be changed as easily as it is read).
This means that removing the power source from the chip will not cause it to lose any data.
ROM at Work
Similar to RAM, ROM chips (Figure 1) contain a grid of columns and rows. But where the
columns and rows intersect, ROM chips are fundamentally different from RAM chips. While
RAM uses transistors to turn on or off access to a capacitor at each intersection, ROM uses
a diode to connect the lines if the value is 1. If the value is 0, then the lines are not
connected at all.
Figure 1. BIOS uses Flash memory, a type of ROM.
A diode normally allows current to flow in only one direction and has a certain threshold,
known as the forward breakover, that determines how much current is required before the
diode will pass it on. In silicon-based items such as processors and memory chips, the
forward breakover voltage is approximately 0.6 volts. By taking advantage of the unique
properties of a diode, a ROM chip can send a charge that is above the forward breakover
down the appropriate column with the selected row grounded to connect at a specific cell. If
a diode is present at that cell, the charge will be conducted through to the ground, and, under
the binary system, the cell will be read as being "on" (a value of 1). The neat part of ROM is
that if the cell's value is 0, there is no diode at that intersection to connect the column and
row. So the charge on the column does not get transferred to the row.
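The diode grid amounts to a fixed lookup table, which a toy model makes plain (illustrative values, not a real chip layout):

```python
# Toy ROM: a 1 marks an intersection where a diode connects the column
# to the row; a 0 marks no connection. The grid is fixed when the
# "chip" is created -- tuples are immutable, like the diode pattern.
ROM_GRID = (
    (1, 0, 1, 1, 0, 1, 0, 0),   # byte stored at address 0
    (0, 1, 1, 0, 1, 0, 0, 1),   # byte stored at address 1
)

def rom_read(address):
    """Select (ground) one row and sense which columns conduct."""
    return ROM_GRID[address]

print(rom_read(0))   # (1, 0, 1, 1, 0, 1, 0, 0)
```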
As you can see, the way a ROM chip works necessitates the programming of perfect and
complete data when the chip is created. You cannot reprogram or rewrite a standard ROM
chip. If it is incorrect, or the data needs to be updated, you have to throw it away and start
over. Creating the original template for a ROM chip is often a laborious process full of trial
and error. But the benefits of ROM chips outweigh the drawbacks. Once the template is
completed, the actual chips can cost as little as a few cents each. They use very little power,
are extremely reliable and, in the case of most small electronic devices, contain all the
necessary programming to control the device. A great example is the small chip in the
singing fish toy. This chip, about the size of your fingernail, contains the 30-second song
clips in ROM and the control codes to synchronize the motors to the music.
PROM
Creating ROM chips totally from scratch is time-consuming and very expensive in small
quantities. For this reason, mainly, developers created a type of ROM known as
programmable read-only memory (PROM). Blank PROM chips can be bought
inexpensively and coded by anyone with a special tool called a programmer.
PROM chips (Figure 2) have a grid of columns and rows just as ordinary ROMs do. The
difference is that every intersection of a column and row in a PROM chip has a fuse
connecting them. A charge sent through a column will pass through the fuse in a cell to a
grounded row indicating a value of 1. Since all the cells have a fuse, the initial (blank) state
of a PROM chip is all 1s. To change the value of a cell to 0, you use a programmer to send a
specific amount of current to the cell. The higher voltage breaks the connection between the
column and row by burning out the fuse. This process is known as burning the PROM.
Figure 2
PROMs can only be programmed once. They are more fragile than ROMs. A jolt of static
electricity can easily cause fuses in the PROM to burn out, changing essential bits from 1 to
0. But blank PROMs are inexpensive and are great for prototyping the data for a ROM before
committing to the costly ROM fabrication process.
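Burning a PROM can be modeled the same way: every cell starts as 1 (an intact fuse), and writing a 0 is one-way (a toy model; a real programmer does this by applying a higher voltage):

```python
class Prom:
    """Toy PROM: every cell starts as 1 (intact fuse); burning a fuse
    is irreversible -- a 0 can never be turned back into a 1."""
    def __init__(self, rows, cols):
        self.grid = [[1] * cols for _ in range(rows)]

    def burn(self, row, col):
        self.grid[row][col] = 0    # blow the fuse at this intersection

    def read(self, row):
        return self.grid[row]

p = Prom(2, 8)                     # a blank PROM is all 1s
p.burn(0, 1)
p.burn(0, 4)
print(p.read(0))   # [1, 0, 1, 1, 0, 1, 1, 1]
```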
EPROM
Working with ROMs and PROMs can be a wasteful business. Even though they are
inexpensive per chip, the cost can add up over time. Erasable programmable read-only
memory
(EPROM) addresses this issue. EPROM chips can be rewritten many times.
Erasing an EPROM requires a special tool that emits a certain frequency of ultraviolet (UV)
light. EPROMs are configured using an EPROM programmer that provides voltage at
specified levels depending on the type of EPROM used.
Once again we have a grid of columns and rows. In an EPROM, the cell at each intersection
has two transistors. The two transistors are separated from each other by a thin oxide layer.
One of the transistors is known as the floating gate and the other as the control gate. The
floating gate's only link to the row (wordline) is through the control gate. As long as this link
is in place, the cell has a value of 1. To change the value to 0 requires a curious process
called Fowler-Nordheim tunneling. Tunneling is used to alter the placement of electrons in
the floating gate. An electrical charge, usually 10 to 13 volts, is applied to the floating gate.
The charge comes from the column (bitline), enters the floating gate and drains to a ground.
This charge causes the floating-gate transistor to act like an electron gun. The excited
electrons are pushed through and trapped on the other side of the thin oxide layer, giving it a
negative charge. These negatively charged electrons act as a barrier between the control
gate and the floating gate. A device called a cell sensor monitors the level of the charge
passing through the floating gate. If the flow through the gate is greater than 50 percent of
the charge, it has a value of 1. When the charge passing through drops below the 50-percent
threshold, the value changes to 0. A blank EPROM has all of the gates fully open, giving
each cell a value of 1.
To rewrite an EPROM, you must erase it first. To erase it, you must supply a level of energy
strong enough to break through the negative electrons blocking the floating gate. In a
standard EPROM, this is best accomplished with UV light at a wavelength of 253.7 nanometers. Because
light at this wavelength will not penetrate most plastics or glasses, each EPROM chip has a
quartz window on top of it. The EPROM must be very close to the eraser's light source,
within an inch or two, to work properly.
An EPROM eraser is not selective; it will erase the entire EPROM. The EPROM must be
removed from the device it is in and placed under the UV light of the EPROM eraser for
several minutes. An EPROM that is left under the light too long can become over-erased. In such a
case, the EPROM's floating gates are charged to the point that they are unable to hold the
electrons at all.
EEPROMs and Flash Memory
Though EPROMs are a big step up from PROMs in terms of reusability, they still require
dedicated equipment and a labor-intensive process to remove and reinstall them each time a
change is necessary. Also, changes cannot be made incrementally to an EPROM; the whole
chip must be erased. Electrically erasable programmable read-only memory (EEPROM)
chips remove the biggest drawbacks of EPROMs.
In EEPROMs:
· The chip does not have to be removed to be rewritten.
· The entire chip does not have to be completely erased to change a specific portion of
it.
· Changing the contents does not require additional dedicated equipment.
Instead of using UV light, you can return the electrons in the cells of an EEPROM to normal
with the localized application of an electric field to each cell. This erases the targeted cells
of the EEPROM, which can then be rewritten. EEPROMs are changed 1 byte at a time,
which makes them versatile but slow. In fact, EEPROM chips are too slow to use in many
products that make quick changes to the data stored on the chip.
Manufacturers responded to this limitation with Flash memory, a type of EEPROM that uses
in-circuit wiring to erase by applying an electrical field to the entire chip or to predetermined
sections of the chip called blocks. Flash memory works much faster than traditional
EEPROMs because it writes data in chunks, usually 512 bytes in size, instead of 1 byte at a
time.
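The difference in write granularity can be put in rough numbers (illustrative, assuming the 512-byte chunk size mentioned above):

```python
# Rewriting 4 KB one byte at a time (EEPROM-style) versus in
# 512-byte chunks (flash-style).
DATA_SIZE = 4096
FLASH_CHUNK = 512

eeprom_writes = DATA_SIZE                 # one write operation per byte
flash_writes = DATA_SIZE // FLASH_CHUNK   # one write operation per chunk

print(eeprom_writes)   # 4096
print(flash_writes)    # 8
```

Fewer, larger operations are why flash can keep up with products that change stored data frequently.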
DB Consulting © 2004 has written the definitive document related to memory and the technology
behind it. Everything you ever wanted to know about memory can be found here.
Select from the following topics:
· What is Memory?
· How Much Memory Do You Need?
· A Closer Look
· How Memory Works
· How Much Memory Is On a Module?
· Different Kinds of Memory
· Other Memory Technologies
· What to Consider When Buying Memory
· How to Install Memory
· Troubleshooting Memory Problems
· More About Kingston
· The Glossary
The Ultimate Memory Guide is also available in Adobe Acrobat (PDF) format, in the following
languages
WHAT IS MEMORY?
INTRODUCTION
These days, no matter how much memory your computer has, it never seems to be quite enough.
Not long ago, it was unheard of for a PC (personal computer) to have more than 1 or 2 MB
(megabytes) of memory. Today, most systems require 128MB to run basic applications. And up
to 512MB or more is needed for optimal performance when using graphical and multimedia
programs.
As an indication of how much things have changed over the past two decades, consider this: in
1981, referring to computer memory, Bill Gates said, "640K (roughly 1/2 of a megabyte) ought to
be enough for anybody."
For some, the memory equation is simple: more is good; less is bad. However, for those who
want to know more, this reference guide contains answers to the most common questions, plus
much, much more.
THE ROLE OF MEMORY IN THE COMPUTER
People in the computer industry commonly use the term "memory" to refer to RAM (Random
Access Memory). A computer uses RAM to hold temporary instructions and data needed to
complete tasks. This enables the computer's CPU (Central Processing Unit) to access
instructions and data stored in memory very quickly.
A good example of this is when the CPU loads an application program - such as a word
processing or page layout program - into memory, thereby allowing the application program to
work as quickly and efficiently as possible. In practical terms, having the program loaded into
memory means that you can get work done more quickly with less time spent waiting for the
computer to perform tasks.
The process begins when you enter a command from your keyboard. The CPU interprets the
command and instructs the hard drive to load the command or program into memory. Once the
data is loaded into memory, the CPU is able to access it much more quickly than if it had to
retrieve it from the hard drive.
This process of putting things the CPU needs in a place where it can get at them more quickly is
similar to placing various electronic files and documents you're using on the computer into a
single file folder or directory. By doing so, you keep all the files you need handy and avoid
searching in several places every time you need them.
THE DIFFERENCE BETWEEN MEMORY AND STORAGE
People often confuse the terms memory and storage, especially when describing the amount
they have of each. The term memory refers to the amount of RAM installed in the computer,
whereas the term storage refers to the capacity of the computer's hard disk. To clarify this
common mix-up, it helps to compare your computer to an office that contains a desk and a file
cabinet.
The file cabinet represents the computer's hard disk, which provides storage for all the files and
information you need in your office. When you come in to work, you take out the files you need
from storage and put them on your desk for easy access while you work on them. The desk is
like memory in the computer: it holds the information and data you need to have handy while
you're working.
Consider the desk-and-file-cabinet metaphor for a moment. Imagine what it would be like if every
time you wanted to look at a document or folder you had to retrieve it from the file drawer. It
would slow you down tremendously, not to mention drive you crazy. With adequate desk space -
our metaphor for memory - you can lay out the documents in use and retrieve information from
them immediately, often with just a glance.
Here's another important difference between memory and storage: the information stored on a
hard disk remains intact even when the computer is turned off. However, any data held in
memory is lost when the computer is turned off. In our desk space metaphor, it's as though any
files left on the desk at closing time will be thrown away.
MEMORY AND PERFORMANCE
Adding more memory to a computer system typically increases its performance. If
there isn't enough room in memory for all the information the CPU needs, the computer has to set
up what's known as a virtual memory file. In so doing, the CPU reserves space on the hard disk
to simulate additional RAM. This process, referred to as "swapping", slows the system down. In
an average computer, it takes the CPU approximately 200ns (nanoseconds) to access RAM
compared to 12,000,000ns to access the hard drive. To put this into perspective, this is
equivalent to what's normally a 3 1/2 minute task taking 4 1/2 months to complete!
[Figure: Access time comparison between RAM and a hard drive.]
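The scale of the RAM-versus-disk gap described above can be checked with a few lines of arithmetic; the 200ns, 12,000,000ns, and "3 1/2 minute task" figures are taken from the text, and the result lands close to the guide's "4 1/2 months":

```python
# Scale the RAM-vs-disk access-time gap up to human time.
ram_ns = 200             # typical RAM access time (from the text)
disk_ns = 12_000_000     # typical hard-drive access time (from the text)

ratio = disk_ns / ram_ns                # how many times slower the disk is
task_minutes = 3.5                      # "a 3 1/2 minute task"
scaled_minutes = task_minutes * ratio
scaled_months = scaled_minutes / 60 / 24 / 30  # rough 30-day months

print(f"disk/RAM ratio: {ratio:,.0f}x")        # 60,000x
print(f"scaled task: {scaled_months:.1f} months")
```

The computed figure comes out just under five months, consistent with the guide's rounded estimate.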
MEMORY UPGRADE ON A PC: LIFE IS GOOD
If you've ever had more memory added to your PC, you probably noticed a performance
improvement right away. With a memory upgrade, applications respond more quickly, Web pages
load faster, and you can have more programs running simultaneously. In short, additional
memory can make using your computer a lot more enjoyable.
MEMORY UPGRADE ON A SERVER: LIFE IS EVEN BETTER
These days, more and more people are using computers in a workgroup and sharing information
over a network. The computers that help distribute information to people on a network are called
servers. And their performance has a huge impact on the performance of the network: if a server
is performing poorly, everyone on the network "feels the pain." So, while a memory upgrade on
an individual PC makes a big difference for the person who uses it, a memory upgrade in a server
has even more far-reaching effects and benefits everyone who accesses the server.
To better understand the benefits of increasing memory on a server, take a look at these results
from an independent study done on Windows NT-based servers.
Application servers are utilized to host a wide range of applications, such as word processing and
spreadsheet programs. By increasing base memory from 64MB to 256MB, Windows NT Server
was able to support five times as many clients before transactions per second dropped.
Web servers are employed to serve up Web pages in response to HTTP requests from users.
Doubling memory can cut response time by more than 50%.
Directory servers are vital to corporate productivity, handling most email and messaging tasks. In
this environment, more memory increases the speed with which a server can access information
from linked databases. Doubling memory increased performance by anywhere from 248% to 3,000%.
How Much Memory Do You Need?
Perhaps you already know what it's like to work on a computer that doesn't have quite enough
memory. You can hear the hard drive operating more frequently and the "hour glass" or "wrist
watch" cursor symbol appears on the screen for longer periods of time. Things can run more
slowly at times, memory errors can occur more frequently, and sometimes you can't launch an
application or a file without first closing or quitting another.
So, how do you determine if you have enough memory, or if you would benefit from more? And if
you do need more, how much more? The fact is, the right amount of memory depends on the
type of system you have, the type of work you're doing, and the software applications you're
using. Because the right amount of memory is likely to be different for a desktop computer than
for a server, we've divided this section into two parts - one for each type of system.
Memory Requirements For A Desktop Computer
If you're using a desktop computer, memory requirements depend on the computer's operating
system and the application software you're using. Today's word processing and spreadsheet
applications require as little as 32MB of memory to run. However, software and operating system
developers continue to extend the capabilities of their products, which usually means greater
memory requirements. Today, developers typically assume a minimum memory configuration of
64MB. Systems used for graphic arts, publishing, and multimedia call for at least 128MB of
memory and it's common for such systems to require 256MB or more for best performance.
The chart on the next page provides basic guidelines to help you decide how much memory is
optimal for your desktop computer. The chart is divided by operating system and by different
kinds of work. Find the operating system you're using on your computer, then look for the
descriptions of work that most closely match the kind of work you do.
DESKTOP MEMORY MAP
WINDOWS® 2000 PROFESSIONAL
Windows 2000 Professional runs software applications faster. Notebook-ready and designed with
the future in mind, Windows 2000 Professional allows users to take advantage of a full range of
features today. Windows 2000 Professional is future-ready and promises to run today's and
tomorrow's applications better.
Baseline: 64MB - 128MB
Optimal: 128MB - 512MB
Administrative & Service
Light (word processing, email, data-entry): 64MB - 96MB
Medium (fax/communications, database administration, spreadsheets; more than 2 applications open at a time): 64MB - 128MB
Heavy (complex documents, accounting, business graphics, presentation software, network connectivity): 96MB - 256MB
Executives & Analysts
Light (proposals, reports, spreadsheets, business graphics, databases, scheduling, presentations): 64MB - 96MB
Medium (complex presentations, sales/market analysis, project management, Internet access): 96MB - 128MB
Heavy (statistical applications, large databases, research/technical analysis, complex presentations, video conferencing): 128MB - 512MB
Engineers & Designers
Light (page layout, 2 - 4 color line drawings, simple image manipulation, simple graphics): 96MB - 128MB
Medium (2D CAD, rendering, multimedia presentations, simple photo-editing, Web development): 128MB - 512MB
Heavy (animation, complex photo-editing, real-time video, 3D CAD, solid modeling, finite element analysis): 256MB - 1GB
WINDOWS® 98
Windows 98 requires 16 - 32MB to run basic applications. Tests show 45 - 65% performance
improvements at 64MB and beyond.
Baseline: 32MB - 64MB
Optimal: 128MB - 256MB
Students
Light (word processing, basic financial management, email and other light Internet use): 32MB - 64MB
Medium (home office applications, games, Internet surfing, downloading images, spreadsheets, presentations): 64MB - 128MB
Heavy (multimedia use such as video, graphics, music, voice recognition, design, complex images): 128MB - 384MB
Home Users
Light (word processing, basic financial management, email and other light Internet use): 32MB - 48MB
Medium (home office applications, games, Internet surfing, downloading images, spreadsheets, presentations): 48MB - 64MB
Heavy (multimedia use such as video, graphics, music, voice recognition, design, complex images): 64MB - 128MB
LINUX®
The Linux operating system is quickly gaining popularity as an alternative to Microsoft Windows.
It includes true multitasking, virtual memory, shared libraries, demand loading, proper memory
management, TCP/IP networking, and other features consistent with Unix-type systems.
Baseline: 48MB - 112MB
Optimal: 112MB - 512MB
Administrative & Service
Light (word processing, email, data-entry): 48MB - 80MB
Medium (fax/communications, database administration, spreadsheets; more than 2 applications open at a time): 48MB - 112MB
Heavy (complex documents, accounting, business graphics, presentation software, network connectivity): 80MB - 240MB
Executives & Analysts
Light (proposals, reports, spreadsheets, business graphics, databases, scheduling, presentations): 48MB - 80MB
Medium (complex presentations, sales/market analysis, project management, Internet access): 80MB - 112MB
Heavy (statistical applications, large databases, research/technical analysis, complex presentations, video conferencing): 112MB - 512MB
Engineers & Designers
Light (page layout, 2 - 4 color line drawings, simple image manipulation, simple graphics): 80MB - 112MB
Medium (2D CAD, rendering, multimedia presentations, simple photo-editing, Web development): 112MB - 512MB
Heavy (animation, complex photo-editing, real-time video, 3D CAD, solid modeling, finite element analysis): 240MB - 1GB
MACINTOSH™ OS
The Macintosh operating system manages memory in substantially different ways than other
systems. Still, System 9.0 users will find that 48MB is a bare minimum. When using PowerMac®
applications with Internet connectivity, plan on a range between 64MB and 128MB as a minimum.
Baseline: 48MB - 64MB
Optimal: 128MB - 512MB
Administrative & Service
Light (word processing, email, data-entry): 48MB - 64MB
Medium (fax/communications, database administration, spreadsheets; more than 2 applications open at a time): 64MB - 96MB
Heavy (complex documents, accounting, business graphics, presentation software, network connectivity): 96MB - 128MB
Executives & Analysts
Light (proposals, reports, spreadsheets, business graphics, databases, scheduling, presentations): 64MB - 256MB
Medium (complex presentations, sales/market analysis, project management, Internet access): 128MB - 1GB
Heavy (statistical applications, large databases, research/technical analysis, complex presentations, video conferencing): 96MB - 128MB
Engineers & Designers
Light (page layout, 2 - 4 color line drawings, simple image manipulation, simple graphics): 128MB - 512MB
Medium (2D CAD, rendering, multimedia presentations, simple photo-editing, Web development): 256MB - 1GB
Heavy (animation, complex photo-editing, real-time video, 3D CAD, solid modeling, finite element analysis): 512MB - 2GB
· Please Note: These figures reflect work done in a typical desktop environment. Higher-end workstation tasks may
require up to 4GB. Naturally, a chart such as this evolves as memory needs and trends change. Over time, developers
of software and operating systems will continue to add features and functionality to their products. This will continue to
drive the demand for more memory. More complex character sets, like Kanji, may require more memory than the
standard Roman-based (English) character sets.
SERVER MEMORY REQUIREMENTS
How can you tell when a server requires more memory? Quite often, the users of the
network are good indicators. If network-related activity such as email, shared
applications, or printing slows down, they'll probably let their Network Administrator know.
Here are a few proactive strategies that can be used to gauge whether or not a server
has sufficient memory:
· Monitor server disk activity. If disk swapping is detected, it is usually a result of
inadequate memory.
· Most servers have a utility that monitors CPU, memory, and disk utilization. Review this
at peak usage times to measure the highest spikes in demand.
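The first bullet above recommends watching for disk swapping. A minimal, hypothetical sketch of such a check, comparing swap-out counters sampled at two points in time (the field name is illustrative; on Linux the real counters might come from /proc/vmstat):

```python
# Flag likely memory pressure by comparing swap-out counters over time.
# The "pages_swapped_out" field name is a placeholder for whatever counter
# the platform's monitoring utility actually exposes.
def swapping_detected(sample_before, sample_after, threshold=0):
    """Return True if pages were swapped out between the two samples."""
    delta = sample_after["pages_swapped_out"] - sample_before["pages_swapped_out"]
    return delta > threshold

before = {"pages_swapped_out": 1_000}
after = {"pages_swapped_out": 1_450}    # sampled again at peak usage
print(swapping_detected(before, after))  # swap activity suggests too little RAM
```

A steadily climbing counter at peak times is the usual signal that the server is short of physical memory.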
Once it's determined that a server does need more memory, there are many factors to consider
when deciding on how much is enough:
What functions does the server perform (application, communication, remote access,
email, Web, file, multimedia, print, database)?
Some servers hold a large amount of information in memory at once, while others
process information sequentially. For example, a typical large database server does a lot
of data processing; with more memory, such a server would likely run much faster
because more of the records it needs for searches and queries could be held in memory -
that is, "at the ready." On the other hand, compared to a database server, a typical file
server can perform efficiently with less memory because its primary job is simply to
transfer information rather than to process it.
What operating system does the server use?
Each server operating system manages memory differently. For example, a network
operating system (NOS)
such as the Novell operating system handles information much
differently than an application-oriented system such as Windows NT. Windows NT's
richer interface requires more memory, while the traditional Novell functions of file and
print serving require less memory.
How many users access the server at one time?
Most servers are designed and configured to support a certain number of users at one
time. Recent tests show that this number is directly proportional to the amount of memory
in the server. As soon as the number of users exceeds maximum capacity, the server
resorts to using hard disk space as virtual memory, and performance drops sharply. In
recent studies with Windows NT, additional memory allowed an application server to
increase by several times the number of users supported while maintaining the same
level of performance.
What kind and how many processors are installed on the server?
Memory and processors affect server performance differently, but they work hand in
hand. Adding memory allows more information to be handled at one time, while adding
processors allows the information to be processed faster. So, if you add processing
power to a system, additional memory will enable the processors to perform at their full
potential.
How critical is the server's response time?
In some servers, such as Web or e-commerce servers, response time directly affects the
customer experience and hence revenue. In these cases, some IT Managers choose to
install more memory than they think they would ever need in order to accommodate
surprise surges in use. Because server configurations involve so many variables, it's
difficult to make precise recommendations with regard to memory. The following chart
shows two server upgrade scenarios.
SERVER MEMORY MAP
WINDOWS® 2000 SERVER
Designed to help businesses of all sizes run better, Windows 2000 Server offers a manageable,
reliable and internet-ready solution for today's growing enterprises. For optimal performance,
consider adding more memory to take advantage of Windows 2000 Server's robust feature set.
Windows 2000 Server is internet-ready and promises to run today's and tomorrow's applications
better.
Baseline: 128MB
Optimal: 256MB - 1GB
Application Server (houses one or more applications to be accessed over a wide user base): 256MB - 4GB
Directory Server (central management of network resources): 128MB - 1GB
Print Server (distributes print jobs to appropriate printers): 128MB - 512MB
Communication Server (manages a variety of communications such as PBX, voicemail, email, and VPN): 512MB - 2GB
Web Server (Internet and intranet solutions): 512MB - 2GB
Database Server (manages simple to complex databases of varying sizes): 256MB - 4GB
LINUX®
Linux is a reliable, cost-effective alternative to traditional UNIX servers. Depending on the
distribution, the Linux server platform features a variety of utilities, applications, and services.
Baseline: 64MB - 128MB
Optimal: 256MB - 1GB
Application Server (houses one or more applications to be accessed over a wide user base): 64MB - 4GB
Directory Server (central management of network resources): 128MB - 1GB
Print Server (distributes print jobs to appropriate printers): 128MB - 512MB
Communication Server (manages a variety of communications such as PBX, voicemail, email, and VPN): 512MB - 2GB
Web Server (Internet and intranet solutions): 512MB - 2GB
Database Server (manages simple to complex databases of varying sizes): 256MB - 4GB
* Please Note: These figures reflect work done in a typical server environment. Higher-end
workstation tasks may require up to 4GB. Naturally, a chart such as this evolves as memory
needs and trends change. Over time, developers of software and operating systems will continue
to add features and functionality to their products. This will continue to drive the demand for more
memory. More complex character sets, like Kanji, may require more memory than the standard
Roman-based (English) character sets.
A CLOSER LOOK
Memory comes in a variety of sizes and shapes. In general, it looks like a flat green stick with little
black cubes on it. Obviously, there's a lot more to memory than that. The illustration below shows
a typical memory module and points out some of its most important features.
WHAT MEMORY LOOKS LIKE
[Figure: A closer look at a 168-pin SDRAM DIMM.]
PCB (PRINTED CIRCUIT BOARD)
The green board that all the memory chips sit on is actually made up of several layers. Each layer
contains traces and circuitry, which facilitate the movement of data. In general, higher quality
memory modules use PCBs with more layers. The more layers a PCB has, the more space there
is between traces, and the more space there is between traces, the lower the chance of noise
interference. This makes the module much more reliable.
DRAM (DYNAMIC RANDOM ACCESS MEMORY)
DRAM is the most common form of RAM. It's called "dynamic" RAM because it can only hold data
for a short period of time and must be refreshed periodically. Most memory chips have black or
chrome coating, or packaging, to protect their circuitry. The following section titled "Chip
Packaging" shows pictures of chips housed in different types of chip packages.
CONTACT FINGERS
The contact fingers, sometimes referred to as "connectors" or "leads," plug into the memory
socket on the system board, enabling information to travel from the system board to the memory
module and back. On some memory modules these leads are plated with tin, while on others
they are plated with gold.
INTERNAL TRACE LAYER
The magnifying glass shows a layer of the PCB stripped away to reveal the traces etched in the
board. Traces are like roads the data travels on. The width and curvature of these traces as well
as the distance between them affect both the speed and the reliability of the overall module.
Experienced designers arrange, or "lay out", the traces to maximize speed and reliability and
minimize interference.
CHIP PACKAGING
The term "chip packaging" refers to the material coating around the actual silicon. Today's most
common packaging is called TSOP (Thin Small Outline Package). Some earlier chip designs
used DIP (Dual In-line Package) packaging and SOJ (Small Outline J-lead). Newer chips, such
as RDRAM, use CSP (Chip Scale Package). Take a look at the different chip packages below, so
you can see how they differ.
DIP (DUAL IN-LINE PACKAGE)
When it was common for memory to be installed directly on the computer's system board, the
DIP-style DRAM package was extremely popular. DIPs are through-hole components, which
means they install in holes extending into the surface of the PCB. They can be soldered in place
or installed in sockets.
SOJ (SMALL OUTLINE J-LEAD)
SOJ packages got their name because the pins coming out of the chip are shaped like the letter
"J". SOJs are surface-mount components - that is, they mount directly onto the surface of the
PCB.
TSOP (THIN SMALL OUTLINE PACKAGE)
TSOP packaging, another surface-mount design, got its name because the package was much
thinner than the SOJ design. TSOPs were first used to make thin credit card modules for
notebook computers.
CSP (CHIP SCALE PACKAGE)
Unlike DIP, SOJ, and TSOP packaging, CSP packaging doesn't use pins to connect the chip to
the board. Instead, electrical connections to the board are through a BGA (Ball Grid Array) on the
underside of the package. RDRAM (Rambus DRAM) chips utilize this type of packaging.
CHIP STACKING
For some higher capacity modules, it is necessary to stack chips on top of one another to fit them
all on the PCB. Chips can be "stacked" either internally or externally. "Externally" stacked chip
arrangements are visible, whereas "internally" stacked chip arrangements are not.
WHERE MEMORY COMES FROM
MAKING THE CHIP
Amazing but true: memory starts out as common beach sand. Sand contains silicon, which is the
primary component in the manufacture of semiconductors, or "chips." Silicon is extracted from
sand, melted, pulled, cut, ground, and polished into silicon wafers. During the chip-making
process, intricate circuit patterns are imprinted on the chips through a variety of techniques. Once
this is complete, the chips are tested and die-cut. The good chips are separated out and proceed
through a stage called "bonding": this process establishes connections between the chip and the
gold or tin leads, or pins. Once the chips are bonded, they're packaged in hermetically sealed
plastic or ceramic casings. After inspection, they're ready for sale.
MAKING THE MEMORY MODULE
This is where memory module manufacturers enter the picture. There are three major
components that make up a memory module: the memory chips, PCB, and other "on-board"
elements such as resistors and capacitors. Design engineers use CAD (computer aided design)
programs to design the PCB. Building a high-quality board requires careful consideration of the
placement and the trace length of every signal line. The basic process of PCB manufacture is
very similar to that of the memory chips. Masking, layering, and etching techniques create copper
traces on the surface of the board. After the PCB is produced, the module is ready for assembly.
Automated systems perform surface-mount and through-hole assembly of the components onto
the PCB. The attachment is made with solder paste, which is then heated and cooled to form a
permanent bond. Modules that pass inspection are packaged and shipped for installation into a
computer.
WHERE MEMORY GOES IN THE COMPUTER
Originally, memory chips were connected directly to the computer's motherboard or system
board. But then space on the board became an issue. The solution was to solder memory chips
to a small modular circuit board - that is, a removable module that inserts into a socket on the
motherboard. This module design was called a SIMM (single in-line memory module), and it
saved a lot of space on the motherboard. For example, a set of four SIMMs might contain a total
of 80 memory chips and take up about 9 square inches of surface area on the motherboard.
Those same 80 chips installed flat on the motherboard would take up more than 21 square inches
on the motherboard.
These days, almost all memory comes in the form of memory modules and is installed in sockets
located on the system motherboard. Memory sockets are easy to spot because they are normally
the only sockets of their size on the board. Because it's critical to a computer's performance for
information to travel quickly between memory and the processor(s), the sockets for memory are
typically located near the CPU.
[Figure: Examples of where memory can be installed.]
MEMORY BANKS AND BANK SCHEMAS
Memory in a computer is usually designed and arranged in memory banks. A memory bank is a
group of sockets or modules that make up one logical unit. So, memory sockets that are
physically arranged in rows may be part of one bank or divided into different banks. Most
computer systems have two or more memory banks - usually called bank A, bank B, and so on.
And each system has rules or conventions on how memory banks should be filled. For example,
some computer systems require all the sockets in one bank to be filled with the same capacity
module. Some computers require the first bank to house the highest capacity modules. If the
configuration rules aren't followed, the computer may not start up or it may not recognize all the
memory in the system.
You can usually find the memory configuration rules specific to your computer system in the
computer's system manual. You can also use what's called a memory configurator. Most
third-party memory manufacturers offer a free memory configurator, available in printed form or
accessible electronically via the Web. Memory configurators allow you to look up your computer
and find the part numbers and special memory configuration rules that apply to your system.
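The bank-filling rules above vary by system, but their flavor can be sketched in a few lines. This is an illustrative check, not any particular vendor's configurator; the two rules encoded are the examples the text gives (same capacity within a bank, largest modules in the first bank):

```python
# Illustrative validation of the example memory-bank rules from the text.
def validate_banks(banks):
    """banks: dict mapping bank name -> list of module capacities in MB."""
    errors = []
    for name, modules in banks.items():
        # Example rule 1: all sockets in one bank hold the same capacity.
        if len(set(modules)) > 1:
            errors.append(f"bank {name}: mixed capacities {modules}")
    # Example rule 2: the first bank houses the highest-capacity modules.
    names = sorted(banks)
    all_caps = [c for mods in banks.values() for c in mods]
    if names and banks[names[0]] and max(banks[names[0]]) < max(all_caps):
        errors.append(f"bank {names[0]} does not hold the largest modules")
    return errors

print(validate_banks({"A": [128, 128], "B": [256, 256]}))
# flags bank A, since the larger 256MB modules sit in bank B
```

A real system's rules come from its manual; this sketch just shows why an unchecked configuration can leave memory unrecognized.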
HOW MEMORY WORKS
Earlier, we talked about how memory holds information in a place where the CPU can get to it
quickly. Let's look at that process in more detail.
HOW MEMORY WORKS WITH THE PROCESSOR
[Figure: Main components of a computer system.]
The CPU is often referred to as the brain of the computer. This is where all the actual computing
is done.
The chipset supports the CPU. It usually contains several "controllers" which govern how
information travels between the processor and other components in the system. Some systems
have more than one chipset.
The memory controller is part of the chipset, and this controller establishes the information flow
between memory and the CPU.
A bus is a data path in a computer, consisting of various parallel wires to which the CPU,
memory, and all input/output devices are connected. The design of the bus, or bus architecture,
determines how much and how fast data can move across the motherboard. There are several
different kinds of buses in a system, depending on what speeds are required for those particular
components. The memory bus runs from the memory controller to the computer's memory
sockets. Newer systems have a memory bus architecture in which a frontside bus (FSB) runs
from the CPU to main memory and a backside bus (BSB) runs from the memory controller to
L2 cache.
MEMORY SPEED
When the CPU needs information from memory, it sends out a request that is managed by the
memory controller. The memory controller sends the request to memory and reports to the CPU
when the information will be available for it to read. This entire cycle - from CPU to memory
controller to memory and back to the CPU - can vary in length according to memory speed as
well as other factors, such as bus speed.
Memory speed is sometimes measured in Megahertz (MHz), or in terms of access time - the
actual time required to deliver data - measured in nanoseconds (ns). Whether measured in
Megahertz or nanoseconds, memory speed indicates how quickly the memory module itself can
deliver on a request once that request is received.
ACCESS TIME (NANOSECONDS)
Access time measures from when the memory module receives a data request to when that data
becomes available. Memory chips and modules used to be marked with access times ranging
from 80ns to 50ns. With access time measurements (that is, measurements in nanoseconds),
lower numbers indicate faster speeds.
In this example, the memory controller requests data from memory and memory reacts to the
request in 70ns. The CPU receives the data in approximately 125ns. So, the total time from when
the CPU first requests information to when it actually receives the information can be up to 195ns
when using a 70ns memory module. This is because it takes time for the memory controller to
manage the information flow, and the information needs to travel from the memory module to the
CPU on the bus.
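The timing example above adds up as follows; the 70ns and 125ns figures are the ones given in the text, and the split between controller overhead and bus travel is not broken out further there:

```python
# End-to-end latency for a 70ns memory module, per the example in the text.
module_access_ns = 70        # time for memory to react to the request
controller_and_bus_ns = 125  # controller management plus travel to the CPU
total_ns = module_access_ns + controller_and_bus_ns
print(total_ns)  # 195ns from CPU request to data arrival
```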
MEGAHERTZ (MHZ)
Beginning with Synchronous DRAM technology, memory chips had the ability to synchronize
themselves with the computer's system clock, making it easier to measure speed in megahertz,
or millions of cycles per second. Because this is the same way speed is measured in the rest of
the system, it makes it easier to compare the speeds of different components and synchronize
their functions. In order to understand speed better, it's important to understand the system clock.
SYSTEM CLOCK
A computer's system clock resides on the motherboard. It sends out a signal to all other
computer components in rhythm, like a metronome. This rhythm is typically drawn as a square
wave. In reality, however, the actual clock signal, when viewed with an oscilloscope, is not
perfectly square: the edges slope rather than turn at right angles.
Each wave in this signal measures one clock cycle. If a system clock runs at 100MHz that
means there are 100 million clock cycles in one second. Every action in the computer is timed by
these clock cycles, and every action takes a certain number of clock cycles to perform. When
processing a memory request, for example, the memory controller can report to the processor
that the data requested will arrive in six clock cycles.
It's possible for the CPU and other devices to run faster or slower than the system clock.
Components of different speeds simply require a multiplication or division factor to synchronize
them. For example, when a 100MHz system clock interacts with a 400MHz CPU, each device
understands that every system clock cycle is equal to four clock cycles on the CPU; they use a
factor of four to synchronize their actions.
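The synchronization factor in the example above is simple division, which also gives the duration of one clock cycle:

```python
# The 100MHz system clock / 400MHz CPU example from the text.
system_clock_mhz = 100
cpu_clock_mhz = 400

factor = cpu_clock_mhz // system_clock_mhz
print(factor)                    # 4 CPU cycles per system clock cycle

cycle_ns = 1000 / system_clock_mhz  # MHz -> nanoseconds per cycle
print(cycle_ns)                     # each 100MHz clock cycle lasts 10ns
```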
Many people assume that the speed of the processor is the speed of the computer. But most of
the time, the system bus and other components run at different speeds.
MAXIMIZING PERFORMANCE
Computer processor speeds have been increasing rapidly over the past several years. Increasing
the speed of the processor increases the overall performance of the computer. However, the
processor is only one part of the computer, and it still relies on other components in a system to
complete functions. Because all the information the CPU will process must be written to or read
from memory, the overall performance of a system is dramatically affected by how fast
information can travel between the CPU and main memory.
So, faster memory technologies contribute a great deal to overall system performance. But
increasing the speed of the memory itself is only part of the solution. The time it takes for
information to travel between memory and the processor is typically longer than the time it takes
for the processor to perform its functions. The technologies and innovations described in this
section are designed to speed up the communication process between memory and the
processor.
CACHE MEMORY
Cache memory is a relatively small amount (normally less than 1MB) of high-speed memory that
resides very close to the CPU. Cache memory is designed to supply the CPU with the most
frequently requested data and instructions. Because retrieving data from cache takes a fraction
of the time that it takes to access it from main memory, having cache memory can save a lot of
time. If the information is not in cache, it still has to be retrieved from main memory, but checking
cache memory takes so little time, it's worth it. This is analogous to checking your refrigerator for
the food you need before running to the store to get it: it's likely that what you need is there; if not,
it only took a moment to check.
The concept behind caching is the "80/20" rule, which states that of all the programs, information,
and data on your computer, about 20% of it is used about 80% of the time. (This 20% data might
include the code required for sending or deleting email, saving a file onto your hard drive, or
simply recognizing which keys you've touched on your keyboard.) Conversely, the remaining 80%
of the data in your system gets used about 20% of the time. Cache memory makes sense
because there's a good chance that the data and instructions the CPU is using now will be
needed again.
HOW CACHE MEMORY WORKS
Cache memory is like a "hot list" of instructions needed by the CPU. The memory controller saves
in cache each instruction the CPU requests; each time the CPU gets an instruction it needs from
cache - called a "cache hit" - that instruction moves to the top of the "hot list." When cache is full
and the CPU calls for a new instruction, the system overwrites the data in cache that hasn't been
used for the longest period of time. This way, the high priority information that's used continuously
stays in cache, while the less frequently used information drops out.
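The "hot list" replacement policy described above is what is usually called least-recently-used (LRU) eviction. Here is a toy model of that policy in Python; real cache hardware works on addresses and cache lines in silicon, so this is only the bookkeeping logic, not a hardware description:

```python
from collections import OrderedDict

# Toy model of the "hot list": on a hit the entry moves to the top;
# when the cache is full, the entry unused for the longest time is
# overwritten.

class ToyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # most recently used entries at the end
        self.hits = self.misses = 0

    def fetch(self, address, main_memory):
        if address in self.data:               # cache hit
            self.hits += 1
            self.data.move_to_end(address)     # move to top of the hot list
        else:                                  # cache miss: go to main memory
            self.misses += 1
            self.data[address] = main_memory[address]
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used
        return self.data[address]

memory = {addr: addr * 10 for addr in range(100)}
cache = ToyCache(capacity=4)
for addr in [1, 2, 1, 3, 1, 4, 1, 5, 1]:       # address 1 stays "hot"
    cache.fetch(addr, memory)
print(cache.hits, cache.misses)                # 4 hits, 5 misses
```

Note how address 1, the frequently used entry, is never evicted even though the cache holds only four entries.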
LEVELS OF CACHE
Today, most cache memory is incorporated into the processor chip itself; however, other
configurations are possible. In some cases, a system may have cache located inside the
processor, just outside the processor on the motherboard, and/or it may have a memory cache
socket near the CPU, which can contain a cache memory module. Whatever the configuration,
any cache memory component is assigned a "level" according to its proximity to the processor.
For example, the cache that is closest to the processor is called Level 1 (L1) Cache, the next
level of cache is numbered L2, then L3, and so on. Computers often have other types of caching
in addition to cache memory. For example, sometimes the system uses main memory as a cache
for the hard drive. While we won't discuss these scenarios here, it's important to note that the
term cache can refer specifically to memory and to other storage technologies as well.
You might wonder: if having cache memory near the processor is so beneficial, why isn't cache
memory used for all of main memory? For one thing, cache memory typically uses a type of
memory chip called SRAM (Static RAM), which is more expensive and requires more space per
megabyte than the DRAM typically used for main memory. Also, while cache memory does
improve overall system performance, it does so only up to a point. The real benefit of cache memory is
in storing the most frequently-used instructions. A larger cache would hold more data, but if that
data isn't needed frequently, there's little benefit to having it near the processor.
It can take as long as 195ns for main memory to
satisfy a memory request from the CPU. External
cache can satisfy a memory request from the CPU in
as little as 45ns.
SYSTEM BOARD LAYOUT
As you've probably figured out, the placement of
memory modules on the system board has a direct
effect on system performance. Because local memory must hold all the information the CPU
needs to process, the speed at which the data can travel between memory and the CPU is critical
to the overall performance of the system. And because exchanges of information between the
CPU and memory are so intricately timed, the distance between the processor and the memory
becomes another critical factor in performance.
INTERLEAVING
The term interleaving refers to a process in which the CPU alternates communication between
two or more memory banks. Interleaving technology is typically used in larger systems such as
servers and workstations. Here's how it works: every time the CPU addresses a memory bank,
the bank needs about one clock cycle to "reset" itself. The CPU can save processing time by
addressing a second bank while the first bank is resetting. Interleaving can also function within
the memory chips themselves to improve performance. For example, the memory cells inside an
SDRAM chip are divided into two independent cell banks, which can be activated
simultaneously. Interleaving between the two cell banks produces a continuous flow of data. This
cuts down the length of the memory cycle and results in faster transfer rates.
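The time saved by alternating banks can be seen in a toy timing model. This sketch uses the simplification from the text (one clock cycle of "reset" after each access); the one-cycle figures are illustrative, not real DRAM timings:

```python
# Toy timing model of bank interleaving: after each access a bank
# needs one clock cycle to "reset" before it can be addressed again.

def cycles_needed(accesses, banks):
    """Count clock cycles for back-to-back memory accesses, stalling
    one cycle whenever the target bank is still resetting."""
    ready_at = [0] * banks                   # cycle when each bank is ready
    clock = 0
    for i in range(accesses):
        bank = i % banks                     # alternate between the banks
        clock = max(clock, ready_at[bank])   # wait if the bank is resetting
        clock += 1                           # the access itself: one cycle
        ready_at[bank] = clock + 1           # bank resets for one more cycle
    return clock

print(cycles_needed(8, banks=1))  # 15 cycles: every access waits on a reset
print(cycles_needed(8, banks=2))  # 8 cycles: each bank resets while the
                                  # CPU addresses the other one
```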
BURSTING
Bursting is another time-saving technology. The purpose of bursting is to provide the CPU with
additional data from memory based on the likelihood that it will be needed. So, instead of the
CPU retrieving information from memory one piece at a time, it grabs a block of information
from several consecutive addresses in memory. This saves time because there's a statistical
likelihood that the next data address the processor will request will be sequential to the previous
one. This way, the CPU gets the instructions it needs without having to send an individual request
for each one. Bursting can work with many different types of memory and can function when
reading or writing data.
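The saving from bursting is simply fewer requests for the same data. In this sketch the burst length of 4 is our assumption for illustration, not a fixed standard:

```python
# Sketch of why bursting saves requests: sequential addresses are
# fetched as one block instead of one request each.

def requests_without_burst(addresses):
    return len(addresses)            # one request per address

def requests_with_burst(addresses, burst_length=4):
    requests = 0
    served_until = -1                # highest address already fetched
    for addr in sorted(addresses):
        if addr > served_until:      # not covered by a previous burst
            requests += 1
            served_until = addr + burst_length - 1
    return requests

sequential = list(range(100, 116))   # 16 consecutive addresses
print(requests_without_burst(sequential))  # 16 individual requests
print(requests_with_burst(sequential))     # 4 bursts cover the same range
```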
Both bursting and pipelining became popular at about the same time that EDO technology
became available. EDO chips that featured these functions were called "Burst EDO" or "Pipeline
Burst EDO" chips.
PIPELINING
Pipelining is a computer processing technique where a task is divided into a series of stages with
some of the work completed at each stage. Through the division of a larger task into smaller,
overlapping tasks, pipelining improves performance beyond what is possible with non-pipelined
processing. Once the flow through a pipeline is started, the execution rate of the
instructions is high, regardless of the number of stages through which they progress.
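The throughput gain from pipelining can be shown with simple arithmetic. In this sketch each stage takes one time unit, an idealization chosen for illustration:

```python
# Toy throughput model for pipelining: a task is split into `stages`
# steps of one time unit each.

def time_non_pipelined(tasks, stages):
    # Each task runs start to finish before the next one begins.
    return tasks * stages

def time_pipelined(tasks, stages):
    # Fill the pipeline once, then one task completes per time unit.
    return stages + (tasks - 1)

print(time_non_pipelined(100, 4))  # 400 time units
print(time_pipelined(100, 4))      # 103 time units
```

Once the pipeline is full, a result emerges every time unit regardless of the number of stages, which is exactly the point made in the paragraph above.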
HOW MUCH MEMORY IS ON A MODULE?
Up to now, we've discussed some of the technical attributes of memory and how memory
functions in a system. What's left are the technical details - the "bits and bytes," as they say. This
section covers the binary numbering system, which forms the basis of computing, and
calculation of a memory module's capacity.
BITS AND BYTES
Computers speak in a "code" called machine language, which uses only two numerals: 0 and 1.
Different combinations of 0s and 1s form what are called binary numbers. These binary
numbers form instructions for the chips and microprocessors that drive computing devices - such
as computers, printers, hard disk drives, and so on. You may have heard the terms "bit" and
"byte." Both of these are units of information that are important to computing. The term bit is short
for "binary digit." As the name suggests, a bit represents a single digit in a binary number; a bit is
the smallest unit of information used in computing and can have a value of either 1 or a 0. A byte
consists of 8 bits. Almost all specifications of your computer's capabilities are represented in
bytes. For example, memory capacity, data-transfer rates, and data-storage capacity are all
measured in bytes or multiples thereof (such as kilobytes, megabytes, or gigabytes).
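A few of these definitions can be checked directly with arithmetic; Python's built-in `bin()` shows the machine-language view of a number:

```python
# Bits and bytes, as arithmetic.

BITS_PER_BYTE = 8

print(bin(5))               # 0b101 - the binary digits of the number 5
print(64 // BITS_PER_BYTE)  # a 64-bit chunk is 8 bytes
print(2 ** BITS_PER_BYTE)   # one byte can hold 256 distinct values
```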
This discussion of bits and bytes becomes very relevant when it comes to computing devices and
components working together. Here, we'll address specifically how bits and bytes form the basis
of measuring memory component performance and interaction with other devices like the CPU.
CPU AND MEMORY REQUIREMENTS
A computer's CPU (central processing unit) processes data in 8-bit chunks. Those chunks, as we
learned in the previous section, are commonly referred to as bytes.
Because a byte is the fundamental unit of processing, the CPU's processing power is often
described in terms of the maximum number of bytes it can process at any given time. For
example, Pentium and PowerPC microprocessors currently are 64-bit CPUs, which means they
can simultaneously process 64 bits, or 8 bytes, at a time.
Each transaction between the CPU and memory is called a bus cycle. The number of data bits a
CPU can transfer during a single bus cycle affects a computer's performance and dictates what
type of memory the computer requires. Most desktop computers today use 168-pin DIMMs, which
support 64-bit data paths. Earlier 72-pin SIMMs supported 32-bit data paths, and were originally
used with 32-bit CPUs. When 32-bit SIMMs were used with 64-bit processors, they had to be
installed in pairs, with each pair of modules making up a memory bank. The CPU communicated
with the bank of memory as one logical unit.
Interestingly, RIMM modules, which are newer than DIMMs, use smaller 16-bit data paths;
however, they transmit information very rapidly, sending several packets of data at a time. RIMM
modules use pipelining technology to send four 16-bit packets at a time to a 64-bit CPU, so
information still gets processed in 64-bit chunks.
CALCULATING THE CAPACITY OF A MODULE
Memory holds the information that the CPU needs to process. The capacities of memory chips and
modules are described in megabits (millions of bits) and megabytes (millions of bytes). When
trying to figure out how much memory you have on a module, there are two important things to
remember:
A module consists of a group of chips. If you add together the capacities of all the chips on the
module, you get the total capacity of the module. Exceptions to this rule are:
· If some of the capacity is being used for another function, such as error checking.
· If some of the capacity is not being used, for example some chips may have extra rows to
be used as back-ups. (This isn't common.)
While chip capacity is usually expressed in megabits, module capacity is expressed in
megabytes. This can get confusing, especially since many people unknowingly use the word "bit"
when they mean "byte" and vice versa. To help make it clear, we'll adopt the following standards
in this book:
When we talk about the amount of memory on a module, we'll use the term "module capacity";
when we are referring to chips, we'll use the term "chip density". Module capacity will be
measured in megabytes (MB) with both letters capital, and chip density will be measured in
megabits (Mbit), and we'll spell out the word "bit" in small letters.
COMPONENT        CAPACITY EXPRESSION   CAPACITY UNITS    EXAMPLE
Chips            Chip Density          Mbit (megabits)   64Mbit
Memory Modules   Module Capacity       MB (megabytes)    64MB
CHIP DENSITY
Each memory chip is a matrix of tiny cells. Each cell holds one bit of information. Memory chips
are often described by how much information they can hold. We call this chip density. You may
have encountered examples of chip densities, such as "64Mbit SDRAM" or "8M by 8". A 64Mbit
chip has 64 million cells and is capable of holding 64 million bits of data. The expression "8M by
8" describes one kind of 64Mbit chip in more detail.
In the memory industry, DRAM chip densities are often described by their cell organization. The
first number in the expression indicates the depth of the chip (in locations) and the second
number indicates the width of the chip (in bits). If you multiply the depth by the width, you get the
density of the chip. Here are some examples:
CURRENT AVAILABLE CHIP TECHNOLOGY
CHIP      CHIP DEPTH IN MILLIONS   CHIP WIDTH   CHIP DENSITY =
          OF LOCATIONS             IN BITS      DEPTH x WIDTH
16Mbit Chips
4Mx4      4                        4            16
1Mx16     1                        16           16
2Mx8      2                        8            16
16Mx1     16                       1            16
64Mbit Chips
4Mx16     4                        16           64
8Mx8      8                        8            64
16Mx4     16                       4            64
128Mbit Chips
8Mx16     8                        16           128
16Mx8     16                       8            128
32Mx4     32                       4            128
256Mbit Chips
32Mx8     32                       8            256
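The depth-times-width rule in the table above is a one-line computation; the function name here is ours, and values are in millions (M), as in the "8M by 8" notation:

```python
# Chip density = depth x width, as in the table above.

def chip_density_mbit(depth_m, width_bits):
    """Density in Mbit of a chip organized as depth_m (millions of
    locations) by width_bits (bits per location)."""
    return depth_m * width_bits

assert chip_density_mbit(8, 8) == 64    # "8M by 8"  -> 64Mbit chip
assert chip_density_mbit(4, 16) == 64   # "4M by 16" -> 64Mbit chip
assert chip_density_mbit(32, 4) == 128  # "32M by 4" -> 128Mbit chip
```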
MODULE CAPACITY
It's easy to calculate the capacity of a memory module if you know the capacities of the chips on
it. If there are eight 64Mbit chips, it's a 512Mbit module. However, because the capacity of a
module is described in megabytes, not megabits, you have to convert bits to bytes. To do this,
divide the number of bits by 8. In the case of the 512Mbit module: 512Mbit ÷ 8 = 64MB.
You may hear standard memory modules in the industry being described as "4Mx32" (that is, "4
Meg by 32") or "16Mx64" ("16 Meg by 64"). In these cases, you can calculate the capacity of the
module exactly as if it were a chip: for example, 4M x 32 = 128Mbit, and 128Mbit ÷ 8 = 16MB.
Here are some additional examples:
STANDARD MODULE TYPES
          STANDARD   MODULE DEPTH   MODULE WIDTH   CAPACITY IN MBITS   CAPACITY IN MB
          MODULE     IN LOCATIONS   IN DATA BITS   = DEPTH x WIDTH     = MBITS/8
72-Pin    1Mx32      1              32             32                  4
          2Mx32      2              32             64                  8
          4Mx32      4              32             128                 16
          8Mx32      8              32             256                 32
          16Mx32     16             32             512                 64
          32Mx32     32             32             1024                128
168-Pin   2Mx64      2              64             128                 16
          4Mx64      4              64             256                 32
          8Mx64      8              64             512                 64
          16Mx64     16             64             1024                128
          32Mx64     32             64             2048                256
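The two-step calculation behind the table above (multiply depth by width to get megabits, divide by 8 to get megabytes) can be sketched like this; the function name is ours:

```python
# Module capacity: depth x width gives megabits; divide by 8 for MB.

def module_capacity_mb(depth_m, width_bits):
    mbits = depth_m * width_bits   # total capacity in megabits
    return mbits // 8              # 8 bits per byte

assert module_capacity_mb(4, 32) == 16    # "4Mx32" 72-pin SIMM  -> 16MB
assert module_capacity_mb(16, 64) == 128  # "16Mx64" 168-pin DIMM -> 128MB

# The same conversion for a module built from chips:
# eight 64Mbit chips -> 512Mbit -> 64MB.
assert (8 * 64) // 8 == 64
```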
As we mentioned earlier, there's only room for a certain number of chips on a PCB. Based on an
industry standard 168-pin DIMM, the largest capacity module manufacturers can make using
64Mbit chips is 128MB; with 128Mbit chips, the largest module possible is 256MB; and with
256Mbit chips, the largest module possible is 512MB.
STACKING
Many large servers and workstations require higher capacity modules in order to reach system
memory capacities of several gigabytes or more. There are two ways to increase the capacity of
a module. Manufacturers can stack chips on top of one another, or they can stack boards.
CHIP STACKING
With chip stacking, two chips are stacked together and occupy the space that one chip would
normally take up. In some cases, the stacking is done internally at the chip manufacturing plant
and can actually appear to be one chip. In other cases the chips are stacked externally. The
example below shows two externally stacked chips.
Example of externally stacked chips.
BOARD STACKING
As you might expect, board stacking involves putting two memory module printed circuit boards
(PCBs) together. With board stacking, a secondary board mounts onto the primary board, which
fits into the memory socket on the system motherboard.
Example of a stacked module.
DIFFERENT KINDS OF MEMORY
Some people like to know a lot about the computer systems they own - or are considering buying
- just because. They're like that. It's what makes them tick. Some people never find out about
their systems and like it that way. Still other people - most of us, in fact - find out about their
systems when they have to - when something goes wrong, or when they want to upgrade it. It's
important to note that making a choice about a computer system - and its memory features - will
affect the experience and satisfaction you derive from the system. This chapter is here to make
you smarter about memory so that you can get more out of the system you're purchasing or
upgrading.
MODULE FORM FACTORS
The easiest way to categorize memory is by form factor. The form factor of any memory module
describes its size and pin configuration. Most computer systems have memory sockets that can
accept only one form factor. Some computer systems are designed with more than one type of
memory socket, allowing a choice between two or more form factors. Such designs are usually a
result of transitional periods in the industry when it's not clear which form factors will gain
predominance or be more available.
SIMMS
The term SIMM stands for Single In-Line Memory Module. With SIMMs, memory chips are
soldered onto a modular printed circuit board (PCB), which inserts into a socket on the system
board.
The first SIMMs transferred 8 bits of data at a time. Later, as CPUs began to read data in 32-bit
chunks, a wider SIMM was developed, which could supply 32 bits of data at a time. The easiest
way to differentiate between these two different kinds of SIMMs was by the number of pins, or
connectors. The earlier modules had 30 pins and the later modules had 72 pins. Thus, they
became commonly referred to as 30-pin SIMMs and 72-pin SIMMs.
Another important difference between 30-pin and 72-pin SIMMs is that 72-pin SIMMs are 3/4 of
an inch (about 1.9 centimeters) longer than the 30-pin SIMMs and have a notch in the lower
middle of the PCB. The graphic below compares the two types of SIMMs and indicates their data
widths.
4-1/4" 72-Pin SIMM
3-1/2" 30-Pin SIMM
Comparison of a 30-pin and a 72-pin SIMM
DIMMS
Dual In-line Memory Modules, or DIMMs, closely resemble SIMMs. Like SIMMs, most DIMMs
install vertically into expansion sockets. The principal difference between the two is that on a
SIMM, pins on opposite sides of the board are "tied together" to form one electrical contact; on a
DIMM, opposing pins remain electrically isolated to form two separate contacts.
168-pin DIMMs transfer 64 bits of data at a time and are typically used in computer configurations
that support a 64-bit or wider memory bus. Some of the physical differences between 168-pin
DIMMs and 72-pin SIMMs include: the length of module, the number of notches on the module,
and the way the module installs in the socket. Another difference is that many 72-pin SIMMs
install at a slight angle, whereas 168-pin DIMMs install straight into the memory socket and
remain completely vertical in relation to the system motherboard. The illustration below compares
a 168-pin DIMM to a 72-pin SIMM.
4-1/4" 72-Pin SIMM
5-1/4" 168-Pin DIMM
Comparison of a 72-pin SIMM and a 168-pin DIMM.
SO DIMMs
A type of memory commonly used in notebook computers is called SO DIMM or Small Outline
DIMM. The principal difference between a SO DIMM and a DIMM is that the SO DIMM, because
it is intended for use in notebook computers, is significantly smaller than the standard DIMM. The
72-pin SO DIMM is 32 bits wide and the 144-pin SO DIMM is 64 bits wide. 144-pin and 200-pin
modules are the most common SO DIMMs today.
2.35" 72-pin SO DIMM 2.66" 144-Pin SO DIMM
Comparison of a 72-pin SO DIMM and a 144-pin SO DIMM.
MicroDIMM
(Micro Dual In-Line Memory Module)
Smaller than an SO DIMM, MicroDIMMs are primarily used in sub-notebook computers.
MicroDIMMs are available in 144-pin SDRAM and 172-pin DDR.
RIMMS AND SO-RIMMS
RIMM is the trademarked name for a Direct Rambus memory module. RIMMs look similar to
DIMMs, but have a different pin count. RIMMs transfer data in 16-bit chunks. The faster access
and transfer speed generates more heat. An aluminum sheath, called a heat spreader, covers
the module to protect the chips from overheating.
A 184-pin Direct Rambus RIMM shown with heat spreaders pulled away.
A SO-RIMM looks similar to an SO DIMM, but it uses Rambus technology.
A 160-pin SO-RIMM module.
FLASH MEMORY
Flash memory is a solid-state, non-volatile, rewritable memory that functions like RAM and a hard
disk drive combined. Flash memory stores bits of electronic data in memory cells, just like DRAM,
but it also works like a hard-disk drive in that when the power is turned off, the data remains in
memory. Because of its high speed, durability, and low voltage requirements, flash memory is
ideal for use in many applications - such as digital cameras, cell phones, printers, handheld
computers, pagers, and audio recorders.
Flash memory is available in many different form factors, including CompactFlash, Secure
Digital, SmartMedia, MultiMedia, and USB memory.
PC CARD AND CREDIT CARD MEMORY
Before SO DIMMs became popular, most notebook memory was developed using proprietary
designs. It is always more cost-effective for a system manufacturer to use standard components,
and at one point, it became popular to use the same "credit card" like packaging for memory that
is used on PC Cards today. Because the modules looked like PC Cards, many people thought
the memory cards were the same as PC Cards, and could fit into PC Card slots. At the time, this
memory was described as "Credit Card Memory" because the form factor was the approximate
size of a credit card. Because of its compact form factor, credit card memory was ideal for
notebook applications where space is limited.
PC Cards use an input/output protocol that used to be referred to as PCMCIA (Personal
Computer Memory Card International Association). This standard is designed for attaching
input/output devices such as network adapters, fax/modems, or hard drives to notebook
computers. Because PC Card memory resembles the types of cards designed for use in a
notebook computer's PC Card slot, some people have mistakenly thought that the memory
modules could be used in the PC Card slot. To date, RAM has not been packaged on a PCMCIA
card because the technology doesn't allow the processor to communicate quickly enough with
memory. Currently, the most common type of memory on PC Card modules is Flash memory.
On the surface, credit card memory does not resemble a typical memory module configuration.
However, on the inside you will find standard TSOP memory chips.
This section presents the most common memory technologies used for main memory. This road
map offers an overview of the evolution of memory.
YEAR INTRODUCED TECHNOLOGY SPEED LIMIT
1987 FPM 50ns
1995 EDO 50ns
1997 PC66 SDRAM 66MHz
1998 PC100 SDRAM 100MHz
1999 RDRAM 800MHz
1999/2000 PC133 SDRAM 133MHz (VCM option)
2000 DDR SDRAM 266MHz
2001 DDR SDRAM 333MHz
2002 DDR SDRAM 434MHz
2003 DDR SDRAM 500MHz
MAJOR CHIP TECHNOLOGIES
It's usually pretty easy to tell memory module form factors apart because of physical differences.
Most module form factors can support various memory technologies, so it's possible for two
modules to appear to be the same when, in fact, they're not. For example, a 168-pin DIMM can
be used for EDO, Synchronous DRAM, or some other type of memory. The only way to tell
precisely what kind of memory a module contains is to interpret the marking on the chips. Each
DRAM chip manufacturer has different markings and part numbers to identify the chip technology.
FAST PAGE MODE (FPM)
At one time, FPM was the most common form of DRAM found in computers. In fact, it was so
common that people simply called it "DRAM," leaving off the "FPM". FPM offered an advantage
over earlier memory technologies because it enabled faster access to data located within the
same row.
EXTENDED DATA OUT (EDO)
In 1995, EDO became the next memory innovation. It was similar to FPM, but with a slight
modification that allowed consecutive memory accesses to occur much faster. This meant the
memory controller could save time by cutting out a few steps in the addressing process. EDO
enabled the CPU to access memory 10 to 15% faster than with FPM.
SYNCHRONOUS DRAM (SDRAM)
In late 1996, SDRAM began to appear in systems. Unlike previous technologies, SDRAM is
designed to synchronize itself with the timing of the CPU. This enables the memory controller to
know the exact clock cycle when the requested data will be ready, so the CPU no longer has to
wait between memory accesses. SDRAM chips also take advantage of interleaving and burst
mode functions, which make memory retrieval even faster. SDRAM modules come in several
different speeds so as to synchronize to the clock speeds of the systems they'll be used in. For
example, PC66 SDRAM runs at 66MHz, PC100 SDRAM runs at 100MHz, PC133 SDRAM runs at
133MHz, and so on. Faster SDRAM speeds such as 200MHz and 266MHz are currently in
development.
DOUBLE DATA RATE SYNCHRONOUS DRAM (DDR SDRAM)
DDR SDRAM is a next-generation SDRAM technology. It allows the memory chip to perform
transactions on both the rising and falling edges of the clock cycle. For example, with DDR
SDRAM, a 100 or 133MHz memory bus clock rate yields an effective data rate of 200MHz or
266MHz.
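The doubling is plain arithmetic, matching the figures in the paragraph above:

```python
# DDR transfers on both the rising and falling clock edges, so the
# effective data rate is twice the memory bus clock.

def ddr_effective_mhz(bus_clock_mhz):
    return bus_clock_mhz * 2

print(ddr_effective_mhz(100))  # 200
print(ddr_effective_mhz(133))  # 266
```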
DIRECT RAMBUS
Direct Rambus is a DRAM architecture and interface standard that challenges traditional main
memory designs. Direct Rambus technology is extraordinarily fast compared to older memory
technologies. It transfers data at speeds up to 800MHz over a narrow 16-bit bus called a Direct
Rambus Channel. This high-speed clock rate is possible due to a feature called "double
clocking," which allows operations to occur on both the rising and falling edges of the clock cycle.
Also, each memory device on an RDRAM module provides up to 1.6 gigabytes per second of
bandwidth - twice the bandwidth available with current 100MHz SDRAM.
In addition to chip technologies designed for use in main memory, there are also specialty
memory technologies that have been developed for specialized applications.


Well, here you have written about how memory works, but what I need is how Windows 2000 manages memory.


assadtarik, what you are asking here is for someone else to do your work for you. This is completely against the forum rules. We are here to help you out, not to do your assignment. Darren here has already given you a whole lot of info, and if you expect someone else to do your complete assignment, sorry, you've come to the wrong place. You have google out there. Just run a search for what you are looking for. Don't expect people here to spoon feed you the exact info. If there are 21 million results for your query, it's your job to sift through them and get the relevant info. No one here will do your work for you. If you need help, tell us what you have done so far and we'll go from there. But you have to show some initiative.


hi everyone,
"We are here to help you out, not to do your assignment"

yep, they cannot do your assignment, but you can download a pre-done assignment, like a pre-compiled binary, easy money! :)

anyway,
To know how Windows 2000 manages memory, you need to know the fundamentals of operating systems. Let me ask you a question: what is the difference between a thread and a process? How are they implemented? How do they run in parallel, and how do they intercommunicate with each other? If you don't have answers to these questions, then

read Modern Operating Systems by Andrew S. Tanenbaum, chapters 1 and 2, the introduction and the chapter on processes and threads. I'm using the second edition; these may change a little in new editions. Then you will know something about what they are and how they work.
Then read the whole memory management chapter. That chapter is not about memory management in Windows 2000; it is general to any operating system. After you know these things, you can read a book like
Inside Microsoft Windows 2000, Third Edition, by David A. Solomon and Mark E. Russinovich.


"You have google out there. Just run a search for what you are looking for."
:) Yes, he was correct! You can google these books and have fun. I cannot mention the places where you really can download them, because the forum would hate me for that! You can google it as goldeagle2005 said, or you know what to do, he he :)


anyway, just forget about it and let me introduce myself to you.
sanzilla Jackcat: sandundhammikaperera@yahoo.com

anyway, friend, what about your project? I also have a project to do. You know I'm very busy these days, therefore I outsource my projects! I just have to find some money anyway.

I mention ebook downloading only as the legal way, and ANY EBOOK GOOGLE SEARCH I MENTION HERE MEANS THE LEGAL WAY TO DO IT ON GOOGLE. ANYWAY, THE READER OF THIS THREAD IS FREE TO TAKE HIS/HER OWN MEANING FROM IT. DISCLAIMER: I NEVER TELL ANYONE HERE TO DOWNLOAD E-BOOKS ILLEGALLY. THE READER IS FULLY RESPONSIBLE FOR WHATEVER HE/SHE DOES WITH GOOGLE, AND DOES IT AT HIS/HER OWN RISK.
