I am a student of EE at COMSATS University. I have an assignment on memory management in Windows 2000, and I need help with it.



The Internet is full of answers on this subject.

Open www.google.com and type "memory management of Windows 2000" into the search box.

I'm sure you will find info for your project.


South Africa

Yup, and if there's something specific you need help with, get back and we'll be glad to help you out.

Well, I can't find anything on the net.


There are 21 million references to what you are looking for, my friend.



Well, you're right, but they are not to the point. I have searched the web and found nothing. I was told that Windows is not freeware, and that's why we can't find anything on the net.


OK, I think we need to be more specific about what the project requires from you.

What tasks has the university set, and what questions do they need you to achieve? The more information we have, the more we can help you.



This is the thesis I wrote for my degree in computer science. It covers how memory works. It's a large document, so let it download.

Memory Modules
Memory chips in desktop computers originally used a pin configuration called dual inline
(DIP). This pin configuration could be soldered into holes on the computer's
motherboard or plugged into a socket that was soldered on the motherboard. This method
worked fine when computers typically operated on a couple of megabytes or less of RAM,
but as the need for memory grew, the number of chips needing space on the motherboard grew with it.
The solution was to place the memory chips, along with all of the support components, on a
separate printed circuit board (PCB) that could then be plugged into a special connector
(memory bank) on the motherboard. Most of these chips use a small outline J-lead (SOJ)
pin configuration, but quite a few manufacturers use the thin small outline package (TSOP)
configuration as well. The key difference between these newer pin types and the original DIP
configuration is that SOJ and TSOP chips are surface-mounted to the PCB. In other words,
the pins are soldered directly to the surface of the board, not inserted in holes or sockets.
Memory chips are normally only available as part of a card called a module. You've probably
seen memory listed as 8x32 or 4x16. These numbers represent the number of chips
multiplied by the capacity of each individual chip, which is measured in megabits (Mb), or
one million bits. Take the result and divide it by eight to get the number of megabytes on that
module. For example, 4x32 means that the module has four 32-megabit chips. Multiply 4 by
32 and you get 128 megabits. Since we know that a byte has 8 bits, we need to divide our
result of 128 by 8. Our result is 16 megabytes!
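The arithmetic above can be sketched in a few lines of Python (the function name is mine, for illustration only):

```python
def module_megabytes(num_chips: int, chip_megabits: int) -> float:
    """Capacity of a memory module in megabytes (MB).

    Each chip's capacity is given in megabits (Mb); eight bits
    make one byte, so divide the total megabits by 8.
    """
    total_megabits = num_chips * chip_megabits
    return total_megabits / 8

# The 4x32 module from the text: four 32-megabit chips.
print(module_megabytes(4, 32))   # 16.0 megabytes
```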
The type of board and connector used for RAM in desktop computers has evolved over the
past few years. The first types were proprietary, meaning that different computer
manufacturers developed memory boards that would only work with their specific systems.
Then came SIMM, which stands for single in-line memory module. This memory board
used a 30-pin connector and was about 3.5 x .75 inches in size (about 9 x 2 cm). In most
computers, you had to install SIMMs in pairs of equal capacity and speed. This is because
the width of the bus is more than a single SIMM. For example, you would install two 8-
megabyte (MB) SIMMs to get 16 megabytes total RAM. Each SIMM could send 8 bits of data
at one time, while the system bus could handle 16 bits at a time. Later SIMM boards, slightly
larger at 4.25 x 1 inch (about 11 x 2.5 cm), used a 72-pin connector for increased bandwidth
and allowed for up to 256 MB of RAM.
From the top: SIMM, DIMM and SODIMM memory modules
As processors grew in speed and bandwidth capability, the industry adopted a new standard
in dual in-line memory module (DIMM). With a whopping 168-pin or 184-pin connector and
a size of 5.4 x 1 inch (about 14 x 2.5 cm), DIMMs range in capacity from 8 MB to 1 GB per
module and can be installed singly instead of in pairs. Most PC memory modules and the
modules for the Mac G5 systems operate at 2.5 volts, while older Mac G4 systems typically
use 3.3 volts. Another standard, Rambus in-line memory module (RIMM), is comparable in
size and pin configuration to DIMM but uses a special memory bus to greatly increase speed.
Many brands of notebook computers use proprietary memory modules, but several
manufacturers use RAM based on the small outline dual in-line memory module
(SODIMM) configuration. SODIMM cards are small, about 2 x 1 inch (5 x 2.5 cm), and have
144 or 200 pins. Capacity ranges from 16 MB to 1 GB per module. To conserve space, the
Apple iMac desktop computer uses SO-DIMMs instead of the traditional DIMMs. Subnotebook
computers use even smaller DIMMs, known as Micro-DIMMs, which have either
144 pins or 172 pins.
Error Checking
Most memory available today is highly reliable. Most systems simply have the memory
controller check for errors at start-up and rely on that. Memory chips with built-in
error checking typically use a method known as parity to check for errors. Parity chips have an
extra bit for every 8 bits of data. The way parity works is simple. Let's look at even parity first.
When the 8 bits in a byte receive data, the chip adds up the total number of 1s. If the total
number of 1s is odd, the parity bit is set to 1. If the total is even, the parity bit is set to 0.
When the data is read back out of the bits, the total is added up again and compared to the
parity bit. If the total is odd and the parity bit is 1, then the data is assumed to be valid and is
sent to the CPU. But if the total is odd and the parity bit is 0, the chip knows that there is an
error somewhere in the 8 bits and dumps the data. Odd parity works the same way, except that
the parity bit is set to 1 when the total number of 1s in the byte is even.
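The parity rule described above is easy to express in code. This Python sketch (my own helper names, not from any library) computes and checks the extra ninth bit:

```python
def parity_bit(byte: int, even: bool = True) -> int:
    """Compute the parity bit for one data byte.

    Even parity: the bit is 1 when the byte holds an odd number of 1s,
    so the nine bits together always carry an even count of 1s.
    Odd parity is simply the inverse.
    """
    ones = bin(byte & 0xFF).count("1")
    bit = ones % 2                  # 1 if the count of 1s is odd
    return bit if even else 1 - bit

def check(byte: int, stored_bit: int, even: bool = True) -> bool:
    """True when the byte read back matches its stored parity bit."""
    return parity_bit(byte, even) == stored_bit

b = 0b10110100                     # four 1s -> even count
bit = parity_bit(b)                # even parity bit is 0
assert check(b, bit)
assert not check(b ^ 0b1000, bit)  # one flipped bit is detected
```

Note that flipping *two* bits leaves the count's parity unchanged, which is exactly the weakness ECC addresses below.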
The problem with parity is that it discovers errors but does nothing to correct them. If a byte
of data does not match its parity bit, then the data are discarded and the system tries again.
Computers in critical positions need a higher level of fault tolerance. High-end servers often
have a form of error-checking known as error-correction code (ECC). Like parity, ECC
uses additional bits to monitor the data in each byte. The difference is that ECC uses several
bits for error checking -- how many depends on the width of the bus -- instead of one. ECC
memory uses a special algorithm not only to detect single bit errors, but actually correct them
as well. ECC memory will also detect instances when more than one bit of data in a byte
fails. Such failures are very rare, and they are not correctable, even with ECC.
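Real ECC memory uses wider SEC-DED codes spread over a full 64-bit word, but the classic Hamming(7,4) code is a minimal sketch of the same idea: extra parity bits whose failure pattern points at the bad bit, so it can be flipped back.

```python
def hamming_encode(nibble: int) -> list:
    """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7).

    Parity bits sit at positions 1, 2 and 4; each covers the positions
    whose binary index includes that power of two.
    """
    d = [(nibble >> i) & 1 for i in range(4)]
    code = [0] * 8                       # index 0 unused; positions 1..7
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code[1:]

def hamming_decode(bits7: list) -> int:
    """Correct a single-bit error (if any) and return the 4 data bits."""
    code = [0] + list(bits7)
    s1 = code[1] ^ code[3] ^ code[5] ^ code[7]
    s2 = code[2] ^ code[3] ^ code[6] ^ code[7]
    s4 = code[4] ^ code[5] ^ code[6] ^ code[7]
    pos = s1 + 2 * s2 + 4 * s4           # 0 means no error
    if pos:
        code[pos] ^= 1                   # flip the bad bit back
    return code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)

cw = hamming_encode(0b1011)
cw[2] ^= 1                               # simulate a memory error
assert hamming_decode(cw) == 0b1011      # the error is corrected
```

Unlike parity, which can only discard the byte, the syndrome here locates the failing bit, so the data survives.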
The majority of computers sold today use nonparity memory chips. These chips do not
provide any type of built-in error checking, but instead rely on the memory controller for error detection.
Common RAM Types
Static random access memory (SRAM)
uses multiple transistors, typically four to six, for each
memory cell but doesn't have a capacitor in each cell. It is used primarily for cache memory.
Dynamic random access memory (DRAM)
has memory cells with a paired transistor and capacitor
requiring constant refreshing.
Fast page mode dynamic random access memory (FPM DRAM)
was the original form of DRAM. It
waits through the entire process of locating a bit of data by column and row and then reading
the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately
176 MBps.
Extended data-out dynamic random access memory (EDO DRAM)
does not wait for all of the
processing of the first bit before continuing to the next one. As soon as the address of the
first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster
than FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.
Synchronous dynamic random access memory (SDRAM)
takes advantage of the burst mode
concept to greatly improve performance. It does this by staying on the row containing the
requested bit and moving rapidly through the columns, reading each bit as it goes. The idea
is that most of the time the data needed by the CPU will be in sequence. SDRAM is about
five percent faster than EDO RAM and is the most common form in desktops today.
Maximum transfer rate to L2 cache is approximately 528 MBps.
Double data rate synchronous dynamic RAM (DDR SDRAM)
is just like SDRAM except that it has higher
bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is approximately
1,064 MBps (for DDR SDRAM at 133 MHz).
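The transfer-rate figures above follow from multiplying the bus width by the transfer rate. A Python sketch (the 66 MHz and 66.5 MHz clocks are assumptions chosen to reproduce the 528 and 1,064 MBps figures quoted above, not figures from the text itself):

```python
def peak_transfer_mbps(clock_mhz: float, bus_bytes: int = 8,
                       transfers_per_clock: int = 1) -> float:
    """Peak transfer rate in MBps over a 64-bit (8-byte) memory bus."""
    return clock_mhz * transfers_per_clock * bus_bytes

# SDRAM moving one 8-byte transfer per 66 MHz clock:
print(peak_transfer_mbps(66))                           # 528.0 MBps
# DDR transfers on both clock edges, doubling the effective rate:
print(peak_transfer_mbps(66.5, transfers_per_clock=2))  # 1064.0 MBps
```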
Rambus dynamic random access memory (RDRAM)
is a radical departure from the previous DRAM
architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory module (RIMM),
which is similar in size and pin configuration to a standard DIMM. What makes
RDRAM so different is its use of a special high-speed data bus called the Rambus channel.
RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 MBps.
Since they operate at such high speeds, they generate much more heat than other types of
chips. To help dissipate the excess heat Rambus chips are fitted with a heat spreader, which
looks like a long thin wafer. Just as there are smaller versions of DIMMs, there are also SO-RIMMs,
designed for notebook computers.
Credit Card Memory
Credit card memory is a proprietary self-contained DRAM memory module that plugs into a
special slot for use in notebook computers.
PCMCIA Memory Card
Another self-contained DRAM module for notebooks, cards of this type are not proprietary
and should work with any notebook computer whose system bus matches the memory card's configuration.
CMOS RAM is a term for the small amount of memory used by your computer and some
other devices to remember things like hard disk settings. This memory uses a small battery
to provide it with the power it needs to maintain the memory contents.
VideoRAM (VRAM), also known as multiport dynamic random access memory (MPDRAM), is a
type of RAM used specifically for video adapters or 3-D accelerators. The "multiport" part
comes from the fact that VRAM normally has two independent access ports instead of one,
allowing the CPU and graphics processor to access the RAM simultaneously. VRAM is
located on the graphics card and comes in a variety of formats, many of which are
proprietary. The amount of VRAM is a determining factor in the resolution and colour
depth of the display. VRAM is also used to hold graphics-specific information such as 3-D
geometry data and texture maps. True multiport VRAM tends to be expensive, so today
many graphics cards use SGRAM (synchronous graphics RAM) instead. Performance is
nearly the same, but SGRAM is cheaper.
Maybe you have been thinking about buying a computer, and it has occurred to you that you
might want to buy a laptop version. After all, today's laptops have just as much computing
power as desktops, without taking up as much space. You can take a laptop on the road with
you to do your computing or make presentations. Perhaps you prefer comfortably working on
your couch in front of the TV instead of sitting at a desk. Maybe a laptop is for you.
A Brief History
Alan Kay of the Xerox Palo Alto Research Center originated the idea of a portable computer
in the 1970s. Kay envisioned a notebook-sized, portable computer called the Dynabook that
everyone could own, and that could handle all of the user's informational needs. Kay also
envisioned the Dynabook with wireless network capabilities. Arguably, the first laptop
computer was designed in 1979 by William Moggridge of Grid Systems Corp. It had 340
kilobytes of bubble memory, a die-cast magnesium case and a folding electroluminescent
graphics display screen. In 1983, Gavilan Computer produced a laptop computer with the
following features:
· 64 kilobytes (expandable to 128 kilobytes) of random access memory (RAM)
· Gavilan operating system (also ran MS-DOS)
· 8088 microprocessor
· touchpad mouse
· portable printer
· weighed 9 lb (4 kg) alone or 14 lb (6.4 kg) with printer
The Gavilan computer had a floppy drive that was not compatible with other computers, and
it primarily used its own operating system. The company failed.
In 1984, Apple Computer introduced its Apple IIc model. The Apple IIc was a notebook-sized
computer, but not a true laptop. It had a 65C02 microprocessor, 128 kilobytes of memory, an
internal 5.25-inch floppy drive, two serial ports, a mouse port, modem card, external power
supply, and a folding handle. The computer itself weighed about 10 to 12 lb (about 5 kg), but
the monitor was heavier. The Apple IIc had a 9-inch monochrome monitor or an optional
LCD panel. The combination computer/ LCD panel made it a genuinely portable computer,
although you would have to set it up once you reached your destination. The Apple IIc was
aimed at the home and educational markets, and was highly successful for about five years.
Later, in 1986, IBM introduced its IBM PC Convertible. Unlike the Apple IIc, the PC
Convertible was a true laptop computer. Like the Gavilan computer, the PC Convertible used
an 8088 microprocessor, but it had 256 kilobytes of memory, two 3.5-inch (8.9-cm) floppy
drives, an LCD, parallel and serial printer ports and a space for an internal modem. It came
with its own applications software (basic word processing, appointment calendar,
telephone/address book, calculator), weighed 12 lbs (5.4 kg) and sold for $3,500. The PC
Convertible was a success, and ushered in the laptop era. A bit later, Toshiba was
successful with an IBM laptop clone.
Since these early models, many manufacturers have introduced and improved laptop
computers over the years. Today's laptops are much more sophisticated, lighter and closer
to Kay's original vision.
The First Laptop?
By Ian McKay
The following claim is the sort of thing that can get you
into trouble, but only M.A.D. offers you the chance to
verify the news of what I imagine is the first auction
appearance of the Grid Compass Computer 1109 that
Bonhams, which offered it in a "20th Century Design"
sale of June 1, claimed is "the first ever lap-top computer."
Designed in 1979 by a Briton, William Moggridge, for
Grid Systems Corporation, the Grid Compass was one fifth the weight of any model
equivalent in performance and was used by NASA on the space shuttle program in the
early 1980s.
The sale catalog describes it as a "340K byte bubble memory lap-top computer with die-cast
magnesium case and folding electroluminescent graphics display screen."
Complete with manual, it sold for $800.
When you think about it, it's amazing how many different types of electronic memory you
encounter in daily life. Many of them have become an integral part of our vocabulary:
· Cache
· Dynamic RAM
· Static RAM
· Flash Memory
· Memory Sticks
· Virtual Memory
· Video memory
You already know that the computer in front of you has memory. What you may not know is
that most of the electronic items you use every day have some form of memory also. Here
are just a few examples of the many items that use memory:
· Cell phones
· PDAs
· Game consoles
· Car radios
· VCRs
· TVs
Each of these devices uses different types of memory in different ways!
In this article, you'll learn why there are so many different types of memory and what all of
the terms mean.
RAM Basics
Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of
transistors and capacitors. In the most common form of computer memory, dynamic
random access memory
(DRAM), a transistor and a capacitor are paired to create a
memory cell, which represents a single bit of data. The capacitor holds the bit of information
-- a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip
read the capacitor or change its state.
A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory
cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the
capacitor's bucket is that it has a leak. In a matter of a few milliseconds a full bucket
becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory
has to come along and recharge all of the capacitors holding a 1 before they
discharge. To do this, the memory controller reads the memory and then writes it right back.
This refresh operation happens automatically thousands of times per second.
The capacitor in a dynamic RAM memory cell is like a leaky bucket.
It needs to be refreshed periodically or it will discharge to 0.
This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be
dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this
refreshing is that it takes time and slows down the memory.
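The leaky-bucket refresh cycle can be sketched as a toy simulation. Everything here is illustrative: the 20%-per-tick leak rate and the 50% sense threshold are invented numbers, not hardware figures (though the 50% threshold matches the sense-amplifier rule described later).

```python
class LeakyCell:
    """Toy model of one DRAM cell: a capacitor whose charge leaks away."""

    def __init__(self):
        self.charge = 0.0              # fraction of full charge, 0.0..1.0

    def write(self, bit: int):
        self.charge = 1.0 if bit else 0.0

    def leak(self):
        self.charge *= 0.8             # some charge escapes every tick

    def read(self) -> int:
        return 1 if self.charge > 0.5 else 0   # sense-amp threshold

def refresh(cell: LeakyCell):
    cell.write(cell.read())            # read the bit, write it right back

cell = LeakyCell()
cell.write(1)
for _ in range(2):                     # refreshed in time: 0.8**2 = 0.64
    cell.leak()
refresh(cell)
assert cell.read() == 1                # the 1 survives

cell.write(1)
for _ in range(4):                     # refreshed too late: 0.8**4 < 0.5
    cell.leak()
refresh(cell)
assert cell.read() == 0                # the 1 has been forgotten
```

The second case is exactly why the refresh must run thousands of times per second: wait too long and the bit silently decays to 0.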
Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows
(wordlines). The intersection of a bitline and wordline constitutes the address of the
memory cell.
Memory is made up of bits arranged in a two-dimensional grid.
In this figure, red cells represent 1s and white cells represent 0s.
In the animation, a column is selected and then rows are charged to write data into the
specific column.
DRAM works by sending a charge through the appropriate column (CAS) to activate the
transistor at each bit in the column. When writing, the row lines contain the state the
capacitor should take on. When reading, the sense-amplifier determines the level of charge
in the capacitor. If it is more than 50 percent, it reads it as a 1; otherwise it reads it as a 0.
The counter tracks the refresh sequence based on which rows have been accessed in what
order. The length of time necessary to do all this is so short that it is expressed in
nanoseconds (billionths of a second). A memory chip rating of 70ns means that it takes 70
nanoseconds to completely read and recharge each cell.
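The row/column addressing and the 50-percent sense rule above can be sketched as follows (the 1,024-column grid width is an illustrative assumption, and the helper names are mine):

```python
def cell_address(addr: int, num_cols: int = 1024):
    """Split a flat cell address into the (row, column) pair decoded by
    the row-address-select and column-address-select circuitry."""
    return divmod(addr, num_cols)      # (wordline, bitline)

def sense(charge_fraction: float) -> int:
    """Sense-amplifier rule from the text: over 50% charge reads as 1."""
    return 1 if charge_fraction > 0.5 else 0

row, col = cell_address(5000)
print(row, col)                        # row 4, column 904
print(sense(0.9), sense(0.2))          # 1 0
```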
Memory cells alone would be worthless without some way to get information in and out of
them. So the memory cells have a whole support infrastructure of other specialized circuits.
These circuits perform functions such as:
· Identifying each row and column (row address select and column address select)
· Keeping track of the refresh sequence (counter)
· Reading and restoring the signal from a cell (sense amplifier)
· Telling a cell whether it should take a charge or not (write enable)
Other functions of the memory controller include a series of tasks that include identifying
the type, speed and amount of memory and checking for errors.
Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds
each bit of memory. A flip-flop for a memory cell takes four or six transistors along with some
wiring, but never has to be refreshed. This makes static RAM significantly faster than
dynamic RAM. However, because it has more parts, a static memory cell takes up a lot more
space on a chip than a dynamic memory cell. Therefore, you get less memory per chip, and
that makes static RAM a lot more expensive.
Static RAM, then, is fast and expensive, while dynamic RAM is less expensive and slower.
Static RAM is therefore used to create the CPU's speed-sensitive cache, while dynamic RAM
forms the larger system RAM space.
How Much Do You Need?
It's been said that you can never have enough money, and the same holds true for RAM,
especially if you do a lot of graphics-intensive work or gaming. Next to the CPU itself, RAM is
the most important factor in computer performance. If you don't have enough, adding RAM
can make more of a difference than getting a new CPU!
If your system responds slowly or accesses the hard drive constantly, then you need to add
more RAM. If you are running Windows XP, Microsoft recommends 128MB as the minimum
RAM requirement. At 64MB, you may experience frequent application problems. For optimal
performance with standard desktop applications, 256MB is recommended. If you are running
Windows 95/98, you need a bare minimum of 32 MB, and your computer will work much
better with 64 MB. Windows NT/2000 needs at least 64 MB, and it will take everything you
can throw at it, so you'll probably want 128 MB or more.
Linux works happily on a system with only 4 MB of RAM. If you plan to add X-Windows or do
much serious work, however, you'll probably want 64 MB. Mac OS X systems should have a
minimum of 128 MB, or for optimal performance, 512 MB.
The amount of RAM listed for each system above is estimated for normal usage -- accessing
the Internet, word processing, standard home/office applications and light entertainment. If
you do computer-aided design (CAD), 3-D modeling/animation or heavy data processing, or
if you are a serious gamer, then you will most likely need more RAM. You may also need
more RAM if your computer acts as a server of some sort.
Another question is how much VRAM you want on your video card. Almost all cards that you
can buy today have at least 16 MB of RAM. This is normally enough to operate in a typical
office environment. You should probably invest in a 32-MB or better graphics card if you
want to do any of the following:
· Play realistic games
· Capture and edit video
· Create 3-D graphics
· Work in a high-resolution, full-color environment
· Design full-color illustrations
When shopping for video cards, remember that your monitor and computer must be capable
of supporting the card you choose.
Read-only memory (ROM), also known as firmware, is an integrated circuit programmed
with specific data when it is manufactured. ROM chips are used not only in computers, but in
most other electronic items as well. In this edition you will learn about the different types of
ROM and how each works. This article is one in a series of articles dealing with computer
memory, including:
· How Computer Memory Works
· How RAM Works
· How Virtual Memory Works
· How Flash Memory Works
· How BIOS Works
Let's start by identifying the different types of ROM.
ROM Types
There are five basic ROM types:
· ROM
· PROM
· EPROM
· EEPROM
· Flash memory
Each type has unique characteristics, which you'll learn about in this article, but they are all
types of memory with two things in common:
· Data stored in these chips is nonvolatile -- it is not lost when power is removed.
· Data stored in these chips is either unchangeable or requires a special operation to
change (unlike RAM, which can be changed as easily as it is read).
This means that removing the power source from the chip will not cause it to lose any data.
ROM at Work
Similar to RAM, ROM chips (Figure 1) contain a grid of columns and rows. But where the
columns and rows intersect, ROM chips are fundamentally different from RAM chips. While
RAM uses transistors to turn on or off access to a capacitor at each intersection, ROM uses
a diode to connect the lines if the value is 1. If the value is 0, then the lines are not
connected at all.
Figure 1. BIOS uses Flash memory, a type of ROM.
A diode normally allows current to flow in only one direction and has a certain threshold,
known as the forward breakover, that determines how much current is required before the
diode will pass it on. In silicon-based items such as processors and memory chips, the
forward breakover voltage is approximately 0.6 volts. By taking advantage of the unique
properties of a diode, a ROM chip can send a charge that is above the forward breakover
down the appropriate column with the selected row grounded to connect at a specific cell. If
a diode is present at that cell, the charge will be conducted through to the ground, and, under
the binary system, the cell will be read as being "on" (a value of 1). The neat part of ROM is
that if the cell's value is 0, there is no diode at that intersection to connect the column and
row. So the charge on the column does not get transferred to the row.
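The diode-or-no-diode grid reduces to a very simple model. This toy sketch (not real hardware, just the logic of the text) makes the key property visible: the contents are fixed when the chip is made, and there is no way to change them afterwards.

```python
class DiodeROM:
    """Toy ROM: a diode at (row, col) hard-wires that cell to 1;
    no diode means the cell reads 0."""

    def __init__(self, diodes):
        self.diodes = set(diodes)          # fixed at "manufacture" time

    def read(self, row: int, col: int) -> int:
        return 1 if (row, col) in self.diodes else 0

rom = DiodeROM({(0, 0), (0, 2), (1, 1)})   # pattern chosen at fab time
assert rom.read(0, 0) == 1                 # diode present: reads 1
assert rom.read(0, 1) == 0                 # no diode: reads 0
# Deliberately no write method: a standard ROM cannot be reprogrammed.
```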
As you can see, the way a ROM chip works necessitates the programming of perfect and
complete data when the chip is created. You cannot reprogram or rewrite a standard ROM
chip. If it is incorrect, or the data needs to be updated, you have to throw it away and start
over. Creating the original template for a ROM chip is often a laborious process full of trial
and error. But the benefits of ROM chips outweigh the drawbacks. Once the template is
completed, the actual chips can cost as little as a few cents each. They use very little power,
are extremely reliable and, in the case of most small electronic devices, contain all the
necessary programming to control the device. A great example is the small chip in the
singing fish toy. This chip, about the size of your fingernail, contains the 30-second song
clips in ROM and the control codes to synchronize the motors to the music.
Creating ROM chips totally from scratch is time-consuming and very expensive in small
quantities. For this reason, mainly, developers created a type of ROM known as
programmable read-only memory (PROM). Blank PROM chips can be bought
inexpensively and coded by anyone with a special tool called a programmer.
PROM chips (Figure 2) have a grid of columns and rows just as ordinary ROMs do. The
difference is that every intersection of a column and row in a PROM chip has a fuse
connecting them. A charge sent through a column will pass through the fuse in a cell to a
grounded row indicating a value of 1. Since all the cells have a fuse, the initial (blank) state
of a PROM chip is all 1s. To change the value of a cell to 0, you use a programmer to send a
specific amount of current to the cell. The higher voltage breaks the connection between the
column and row by burning out the fuse. This process is known as burning the PROM.
Figure 2
PROMs can only be programmed once. They are more fragile than ROMs. A jolt of static
electricity can easily cause fuses in the PROM to burn out, changing essential bits from 1 to
0. But blank PROMs are inexpensive and are great for prototyping the data for a ROM before
committing to the costly ROM fabrication process.
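The fuse behaviour described above is one-way, which a short sketch makes concrete (again a toy model of the logic, not of the electronics):

```python
class PROM:
    """Toy PROM: every cell starts as 1 (intact fuse); burning a fuse
    turns the cell to 0, and a blown fuse cannot be restored."""

    def __init__(self, size: int):
        self.fuses = [1] * size            # blank PROM: all 1s

    def burn(self, cell: int):
        self.fuses[cell] = 0               # overcurrent blows the fuse

    def read(self, cell: int) -> int:
        return self.fuses[cell]

p = PROM(8)
assert p.read(3) == 1                      # blank state is all 1s
p.burn(3)                                  # program the cell...
assert p.read(3) == 0
# ...and only once: nothing here can rebuild a blown fuse,
# which is also why a static jolt can permanently corrupt a PROM.
```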
Working with ROMs and PROMs can be a wasteful business. Even though they are
inexpensive per chip, the cost can add up over time. Erasable programmable read-only
memory (EPROM) addresses this issue. EPROM chips can be rewritten many times.
Erasing an EPROM requires a special tool that emits a certain frequency of ultraviolet (UV)
light. EPROMs are configured using an EPROM programmer that provides voltage at
specified levels depending on the type of EPROM used.
Once again we have a grid of columns and rows. In an EPROM, the cell at each intersection
has two transistors. The two transistors are separated from each other by a thin oxide layer.
One of the transistors is known as the floating gate and the other as the control gate. The
floating gate's only link to the row (wordline) is through the control gate. As long as this link
is in place, the cell has a value of 1. To change the value to 0 requires a curious process
called Fowler-Nordheim tunneling. Tunneling is used to alter the placement of electrons in
the floating gate. An electrical charge, usually 10 to 13 volts, is applied to the floating gate.
The charge comes from the column (bitline), enters the floating gate and drains to a ground.
This charge causes the floating-gate transistor to act like an electron gun. The excited
electrons are pushed through and trapped on the other side of the thin oxide layer, giving it a
negative charge. These negatively charged electrons act as a barrier between the control
gate and the floating gate. A device called a cell sensor monitors the level of the charge
passing through the floating gate. If the flow through the gate is greater than 50 percent of
the charge, it has a value of 1. When the charge passing through drops below the 50-percent
threshold, the value changes to 0. A blank EPROM has all of the gates fully open, giving
each cell a value of 1.
To rewrite an EPROM, you must erase it first. To erase it, you must supply a level of energy
strong enough to break through the negative electrons blocking the floating gate. In a
standard EPROM, this is best accomplished with UV light at a wavelength of 253.7 nanometers. Because
this particular wavelength will not penetrate most plastics or glasses, each EPROM chip has a
quartz window on top of it. The EPROM must be very close to the eraser's light source,
within an inch or two, to work properly.
An EPROM eraser is not selective; it will erase the entire EPROM. The EPROM must be
removed from the device it is in and placed under the UV light of the EPROM eraser for
several minutes. An EPROM that is left under the light too long can become over-erased. In such a
case, the EPROM's floating gates are charged to the point that they are unable to hold the
electrons at all.
EEPROMs and Flash Memory
Though EPROMs are a big step up from PROMs in terms of reusability, they still require
dedicated equipment and a labor-intensive process to remove and reinstall them each time a
change is necessary. Also, changes cannot be made incrementally to an EPROM; the whole
chip must be erased. Electrically erasable programmable read-only memory (EEPROM)
chips remove the biggest drawbacks of EPROMs.
· The chip does not have to be removed to be rewritten.
· The entire chip does not have to be completely erased to change a specific portion of it.
· Changing the contents does not require additional dedicated equipment.
Instead of using UV light, you can return the electrons in the cells of an EEPROM to normal
with the localized application of an electric field to each cell. This erases the targeted cells
of the EEPROM, which can then be rewritten. EEPROMs are changed 1 byte at a time,
which makes them versatile but slow. In fact, EEPROM chips are too slow to use in many
products that make quick changes to the data stored on the chip.
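The cost of the byte-at-a-time constraint is easy to estimate. In this sketch the 5 ms per-byte write time is an assumption for illustration only; real parts vary widely.

```python
def eeprom_rewrite_us(data: bytes, write_time_us: int = 5_000) -> int:
    """Toy model: an EEPROM is rewritten one byte at a time, and each
    byte write is assumed to take 5 ms (5,000 microseconds)."""
    return len(data) * write_time_us       # total time in microseconds

# Rewriting a 512-byte buffer costs 512 separate byte writes:
total = eeprom_rewrite_us(b"\x00" * 512)
print(total / 1_000_000, "seconds")        # 2.56 seconds
```

Under these assumptions a half-kilobyte update takes seconds, which is why byte-wise EEPROM is too slow for products that change data frequently.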
Manufacturers responded to this limitation with Flash memory, a type of EEPROM that uses
in-circuit wiring to erase by applying an electrical field to the entire chip or to predetermined
sections of the chip called blocks. Flash memory works much faster than traditional
EEPROMs because it writes data in chunks, usually 512 bytes in size, instead of 1 byte at a time.
DB Consulting 2004© has written the definitive document related to memory and the technology
behind it. Everything you ever wanted to know about memory can be found here.
Select from the following topics:
· What is Memory?
· How Much Memory Do You Need?
· A Closer Look
· How Memory Works
· How Much Memory Is On a Module?
· Different Kinds of Memory
· Other Memory Technologies
· What to Consider When Buying Memory
· How to Install Memory
· Troubleshooting Memory Problems
· More About Kingston
· The Glossary
The Ultimate Memory Guide is also available in Adobe Acrobat (PDF) format.
These days, no matter how much memory your computer has, it never seems to be quite enough.
Not long ago, it was unheard of for a PC (Personal Computer), to have more than 1 or 2 MB
of memory. Today, most systems require 128MB to run basic applications. And up
to 512MB or more is needed for optimal performance when using graphical and multimedia applications.
As an indication of how much things have changed over the past two decades, consider this: in
1981, referring to computer memory, Bill Gates said, "640K (roughly 1/2 of a megabyte) ought to
be enough for anybody."
For some, the memory equation is simple: more is good; less is bad. However, for those who
want to know more, this reference guide contains answers to the most common questions, plus
much, much more.
People in the computer industry commonly use the term "memory" to refer to RAM (Random
Access Memory). A computer uses RAM to hold temporary instructions and data needed to
complete tasks. This enables the computer's CPU (Central Processing Unit), to access
instructions and data stored in memory very quickly.
A good example of this is when the CPU loads an application program - such as a word
processing or page layout program - into memory, thereby allowing the application program to
work as quickly and efficiently as possible. In practical terms, having the program loaded into
memory means that you can get work done more quickly with less time spent waiting for the
computer to perform tasks.
The process begins when you enter a command from your keyboard. The CPU interprets the
command and instructs the hard drive to load the command or program into memory. Once the
data is loaded into memory, the CPU is able to access it much more quickly than if it had to
retrieve it from the hard drive.
This process of putting things the CPU needs in a place where it can get at them more quickly is
similar to placing various electronic files and documents you're using on the computer into a
single file folder or directory. By doing so, you keep all the files you need handy and avoid
searching in several places every time you need them.
People often confuse the terms memory and storage, especially when describing the amount
they have of each. The term memory refers to the amount of RAM installed in the computer,
whereas the term storage refers to the capacity of the computer's hard disk. To clarify this
common mix-up, it helps to compare your computer to an office that contains a desk and a file
cabinet.
The file cabinet represents the computer's
hard disk, which provides storage for all the
files and information you need in your
office. When you come in to work, you take
out the files you need from storage and put
them on your desk for easy access while
you work on them. The desk is like memory
in the computer: it holds the information and
data you need to have handy while you're working.
Consider the desk-and-file-cabinet metaphor for a moment. Imagine what it would be like if every
time you wanted to look at a document or folder you had to retrieve it from the file drawer. It
would slow you down tremendously, not to mention drive you crazy. With adequate desk space -
our metaphor for memory - you can lay out the documents in use and retrieve information from
them immediately, often with just a glance.
Here's another important difference between memory and storage: the information stored on a
hard disk remains intact even when the computer is turned off. However, any data held in
memory is lost when the computer is turned off. In our desk space metaphor, it's as though any
files left on the desk at closing time will be thrown away.
Adding more memory to a computer system generally increases its performance. If
there isn't enough room in memory for all the information the CPU needs, the computer has to set
up what's known as a virtual memory file. In so doing, the CPU reserves space on the hard disk
to simulate additional RAM. This process, referred to as "swapping", slows the system down. In
an average computer, it takes the CPU approximately 200ns (nanoseconds) to access RAM
compared to 12,000,000ns to access the hard drive. To put this into perspective, this is
equivalent to what's normally a 3 1/2 minute task taking 4 1/2 months to complete!
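The arithmetic behind that comparison is easy to verify. A minimal sketch, using only the access-time figures quoted above (200ns for RAM, 12,000,000ns for the hard drive) and the guide's illustrative 3 1/2 minute task:

```python
# Access-time figures quoted in the guide above
ram_ns = 200            # ~200 ns to access RAM
disk_ns = 12_000_000    # ~12,000,000 ns to access the hard drive

slowdown = disk_ns / ram_ns              # disk is this many times slower
task_minutes = 3.5                       # the guide's "3 1/2 minute task"
scaled_months = task_minutes * slowdown / (60 * 24 * 30)

print(f"Disk access is {slowdown:,.0f}x slower than RAM")
print(f"A {task_minutes}-minute task, slowed by that factor, "
      f"takes about {scaled_months:.1f} months")
```

The ratio works out to 60,000x, and 3.5 minutes scaled by that factor is roughly 4.9 months (using 30-day months), close to the guide's "4 1/2 months" figure.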
Access time comparison between RAM and a hard drive.
If you've ever had more memory added to your PC, you probably noticed a performance
improvement right away. With a memory upgrade, applications respond more quickly, Web pages
load faster, and you can have more programs running simultaneously. In short, additional
memory can make using your computer a lot more enjoyable.
These days, more and more people are using computers in a workgroup and sharing information
over a network. The computers that help distribute information to people on a network are called
servers. And their performance has a huge impact on the performance of the network: if a server
is performing poorly, everyone on the network "feels the pain." So, while a memory upgrade on
an individual PC makes a big difference for the person who uses it, a memory upgrade in a server
has even more far-reaching effects and benefits everyone who accesses the server.
To better understand the benefits of increasing memory on a server, take a look at these results
from an independent study done on Windows NT-based servers.
Application servers are utilized to host a wide range of applications, such as word processing and
spreadsheet programs. By increasing base memory from 64MB to 256MB, Windows NT Server
was able to support five times as many clients before transactions per second dropped.
Web servers are employed to serve up Web pages in response to HTTP requests from users.
Doubling memory can cut response time by more than 50%.
Directory servers are vital to corporate productivity, handling most email and messaging tasks. In
this environment, more memory increases the speed with which a server can access information
from linked databases. In one case, doubling memory increased performance by anywhere from 248% to 3,000%.
How Much Memory Do You Need?
Perhaps you already know what it's like to work on a computer that doesn't have quite enough
memory. You can hear the hard drive operating more frequently and the "hour glass" or "wrist
watch" cursor symbol appears on the screen for longer periods of time. Things can run more
slowly at times, memory errors can occur more frequently, and sometimes you can't launch an
application or a file without first closing or quitting another.
So, how do you determine if you have enough memory, or if you would benefit from more? And if
you do need more, how much more? The fact is, the right amount of memory depends on the
type of system you have, the type of work you're doing, and the software applications you're
using. Because the right amount of memory is likely to be different for a desktop computer than
for a server, we've divided this section into two parts - one for each type of system.
Memory Requirements For A Desktop Computer
If you're using a desktop computer, memory requirements depend on the computer's operating
system and the application software you're using. Today's word processing and spreadsheet
applications require as little as 32MB of memory to run. However, software and operating system
developers continue to extend the capabilities of their products, which usually means greater
memory requirements. Today, developers typically assume a minimum memory configuration of
64MB. Systems used for graphic arts, publishing, and multimedia call for at least 128MB of
memory and it's common for such systems to require 256MB or more for best performance.
The chart on the next page provides basic guidelines to help you decide how much memory is
optimal for your desktop computer. The chart is divided by operating system and by different
kinds of work. Find the operating system you're using on your computer, then look for the
descriptions of work that most closely match the kind of work you do.
Windows 2000 Professional runs software applications faster. Notebook-ready and designed with
the future in mind, Windows 2000 Professional lets users take advantage of a full range of
features today and promises to run both today's and tomorrow's applications well.
Baseline: 64MB - 128MB
Optimal: 128MB - 512MB
Administrative & Service
· Light: Word processing, email, data entry (64MB - 96MB)
· Medium: Fax/communications, database administration, spreadsheets; more than two applications open at a time (64MB - 128MB)
· Heavy: Complex documents, accounting, business graphics, presentation software, network connectivity (96MB - 256MB)
Executives & Analysts
· Light: Proposals, reports, spreadsheets, business graphics, databases, scheduling, presentations (64MB - 96MB)
· Medium: Complex presentations, sales/market analysis, project management, Internet access (96MB - 128MB)
· Heavy: Statistical applications, large databases, research/technical analysis, complex presentations, video conferencing (128MB - )
Engineers & Designers
· Light: Page layout, 2 - 4 color line drawings, simple image manipulation (96MB - 128MB)
· Medium: 2D CAD, rendering, multimedia presentations, simple photo-editing, Web development (128MB - )
· Heavy: Animation, complex photo-editing, real-time video, 3D CAD, solid modeling, finite element analysis (256MB - 1GB)
Windows 98 requires 16 - 32MB to run basic applications. Tests show 45 - 65% performance
improvements at 64MB and beyond.
Baseline: 32MB - 64MB
Optimal: 128MB - 256MB
· Light: Word processing, basic financial management, email and other light Internet use (32MB - 64MB)
· Medium: Home office applications, games, Internet surfing, downloading images, spreadsheets, presentations (64MB - 128MB)
· Heavy: Multimedia use such as video, graphics, music, voice recognition, design (128MB - )
Home Users
· Light: Word processing, basic financial management, email and other light Internet use (32MB - 48MB)
· Medium: Home office applications, games, Internet surfing, downloading images, spreadsheets, presentations (48MB - 64MB)
· Heavy: Multimedia use such as video, graphics, music, voice recognition, design (64MB - 128MB)
The Linux operating system is quickly gaining popularity as an alternative to Microsoft Windows.
It includes true multitasking, virtual memory, shared libraries, demand loading, proper memory
management, TCP/IP networking, and other features consistent with Unix-type systems.
Baseline: 48MB - 112MB
Optimal: 112MB - 512MB
Administrative & Service
· Light: Word processing, email, data entry (48MB - 80MB)
· Medium: Fax/communications, database administration, spreadsheets; more than two applications open at a time (48MB - 112MB)
· Heavy: Complex documents, accounting, business graphics, presentation software, network connectivity (80MB - 240MB)
Executives & Analysts
· Light: Proposals, reports, spreadsheets, business graphics, databases, scheduling, presentations (48MB - 80MB)
· Medium: Complex presentations, sales/market analysis, project management, Internet access (80MB - 112MB)
· Heavy: Statistical applications, large databases, research/technical analysis, complex presentations, video conferencing (112MB - )
Engineers & Designers
· Light: Page layout, 2 - 4 color line drawings, simple image manipulation (80MB - 112MB)
· Medium: 2D CAD, rendering, multimedia presentations, simple photo-editing, Web development (112MB - )
· Heavy: Animation, complex photo-editing, real-time video, 3D CAD, solid modeling, finite element analysis (240MB - 1GB)
The Macintosh operating system manages memory in substantially different ways than other
systems. Still, System 9.0 users will find that 48MB is a bare minimum. When using PowerMac ®
applications with Internet connectivity, plan on a range between 64 and 128MB as a minimum.
Baseline: 48MB - 64MB
Optimal: 128MB - 512MB
Administrative & Service
· Light: Word processing, email, data entry (48MB - 64MB)
· Medium: Fax/communications, database administration, spreadsheets; more than two applications open at a time (64MB - 96MB)
· Heavy: Complex documents, accounting, business graphics, presentation software, network connectivity (96MB - 128MB)
Executives & Analysts
· Light: Proposals, reports, spreadsheets, business graphics, databases, scheduling, presentations (64MB - 256MB)
· Medium: Complex presentations, sales/market analysis, project management, Internet access (128MB - 1GB)
· Heavy: Statistical applications, large databases, research/technical analysis, complex presentations, video conferencing (96MB - 128MB)
Engineers & Designers
· Light: Page layout, 2 - 4 color line drawings, simple image manipulation (128MB - )
· Medium: 2D CAD, rendering, multimedia presentations, simple photo-editing, Web development (256MB - 1GB)
· Heavy: Animation, complex photo-editing, real-time video, 3D CAD, solid modeling, finite element analysis (512MB - 2GB)
· Please Note: These figures reflect work done in a typical desktop environment. Higher-end workstation tasks may
require up to 4GB. Naturally, a chart such as this evolves as memory needs and trends change. Over time, developers
of software and operating systems will continue to add features and functionality to their products. This will continue to
drive the demand for more memory. More complex character sets, like Kanji, may require more memory than the
standard Roman-based (English) character sets.
How can you tell when a server requires more memory? Quite often, the users of the
network are good indicators. If network-related activity such as email, shared
applications, or printing slows down, they'll probably let their Network Administrator know.
Here are a few proactive strategies that can be used to gauge whether or not a server
has sufficient memory:
· Monitor server disk activity. If disk swapping is detected, it is usually a result of
inadequate memory.
· Most servers have a utility that monitors CPU, memory, and disk utilization. Review this
at peak usage times to measure the highest spikes in demand.
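On a Unix-like system, one quick way to apply the first strategy - checking whether disk activity is driven by swapping - is to look at page-fault counts, since major page faults are the ones serviced from disk. A minimal sketch using Python's standard resource module (Unix-only; on Windows NT/2000 you would watch Performance Monitor counters such as Memory\Pages/sec instead):

```python
import resource

# Page-fault statistics for the current process. Major faults had to be
# serviced from disk; a steadily climbing major-fault count is a classic
# sign that the system is short on physical memory and is swapping.
usage = resource.getrusage(resource.RUSAGE_SELF)

print(f"minor page faults (served from RAM):   {usage.ru_minflt}")
print(f"major page faults (required disk I/O): {usage.ru_majflt}")
```

A monitoring script would sample counts like these (or system-wide equivalents from vmstat) at peak usage times and flag a rising major-fault rate, which is the disk-swapping symptom described above.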
Once it's determined that a server does need more memory, there are many factors to consider
when deciding how much is enough:
What functions does the server perform (application, communication, remote access,
email, Web, file, multimedia, print, database)?
Some servers hold a large amount of information in memory at once, while others
process information sequentially. For example, a typical large database server does a lot
of data processing; with more memory, such a server would likely run much faster
because more of the records it needs for searches and queries could be held in memory -
that is, "at the ready." On the other hand, compared to a database server, a typical file
server can perform efficiently with less memory because its primary job is simply to
transfer information rather than to process it.
What operating system does the server use?
Each server operating system manages memory differently. For example, a network
operating system (NOS) such as the Novell operating system handles information much
differently than an application-oriented system such as Windows NT. Windows NT's
richer interface requires more memory, while the traditional Novell functions of file and
print serving require less memory.
How many users access the server at one time?
Most servers are designed and configured to support a certain number of users at one
time. Recent tests show that this number is directly proportional to the amount of memory
in the server. As soon as the number of users exceeds maximum capacity, the server
resorts to using hard disk space as virtual memory, and performance drops sharply. In
recent studies with Windows NT, additional memory allowed an application server to
increase by several times the number of users supported while maintaining the same
level of performance.
What kind and how many processors are installed on the server?
Memory and processors affect server performance differently, but they work hand in
hand. Adding memory allows more information to be handled at one time, while adding
processors allows the information to be processed faster. So, if you add processing
power to a system, additional memory will enable the processors to perform at their full potential.
How critical is the server's response time?
In some servers, such as Web or e-commerce servers, response time directly affects the
customer experience and hence revenue. In these cases, some IT Managers choose to
install more memory than they think they would ever need in order to accommodate
surprise surges in use. Because server configurations involve so many variables, it's
difficult to make precise recommendations with regard to memory. The following chart
shows two server upgrade scenarios.
Designed to help businesses of all sizes run better, Windows 2000 Server offers a manageable,
reliable, Internet-ready solution for today's growing enterprises. For optimal performance,
consider adding more memory to take advantage of Windows 2000 Server's robust feature set.
Baseline: 128MB
Optimal: 256MB - 1GB
· Application Server: houses one or more applications to be accessed by a wide user base (256MB - 4GB)
· Directory Server: central management of network resources (128MB - 1GB)
· Print Server: distributes print jobs to appropriate printers (128MB - 512MB)
· Communication Server: manages a variety of communications such as PBX, voicemail, and email (512MB - 2GB)
· Web Server: Internet and intranet solutions (512MB - 2GB)
· Database Server: manages simple to complex databases of varying sizes (256MB - 4GB)
Linux is a reliable, cost-effective alternative to traditional UNIX servers. Depending on the
distribution, the Linux server platform features a variety of utilities, applications, and services.
Baseline: 64MB - 128MB
Optimal: 256MB - 1GB
· Application Server: houses one or more applications to be accessed by a wide user base (64MB - 4GB)
· Directory Server: central management of network resources (128MB - 1GB)
· Print Server: distributes print jobs to appropriate printers (128MB - 512MB)
· Communication Server: manages a variety of communications such as PBX, voicemail, and email (512MB - 2GB)
· Web Server: Internet and intranet solutions (512MB - 2GB)
· Database Server: manages simple to complex databases of varying sizes (256MB - 4GB)
* Please Note: These figures reflect work done in a typical server environment. Higher-end
workstation tasks may require up to 4GB. Naturally, a chart such as this evolves as memory
needs and trends change. Over time, developers of software and operating systems will continue
to add features and functionality to their products. This will continue to drive the demand for more
memory. More complex character sets, like Kanji, may require more memory than the standard
Roman-based (English) character sets.
Memory comes in a variety of sizes and shapes. In general, it looks like a flat green stick with little
black cubes on it. Obviously, there's a lot more to memory than that. The illustration below shows
a typical memory module and points out some of its most important features.
A closer look at a 168-pin SDRAM DIMM.
The green board that all the memory chips sit on is actually made up of several layers. Each layer
contains traces and circuitry that facilitate the movement of data. In general, higher-quality
memory modules use PCBs with more layers. The more layers a PCB has, the more space there
is between traces, and the more space there is between traces, the lower the chance of noise
interference. This makes the module much more reliable.
DRAM is the most common form of RAM. It's called "dynamic" RAM because it can only hold data
for a short period of time and must be refreshed periodically. Most memory chips have black or
chrome coating, or packaging, to protect their circuitry. The following section titled "Chip
Packaging" shows pictures of chips housed in different types of chip packages.
The contact fingers, sometimes referred to as "connectors" or "leads," plug into the memory
socket on the system board, enabling information to travel from the system board to the memory
module and back. On some memory modules, these leads are plated with tin while on others, the
leads are made of gold.
The magnifying glass shows a layer of the PCB stripped away to reveal the traces etched in the
board. Traces are like roads the data travels on. The width and curvature of these traces as well
as the distance between them affect both the speed and the reliability of the overall module.
Experienced designers arrange, or "lay out", the traces to maximize speed and reliability and
minimize interference.
The term "chip packaging" refers to the material coating around the actual silicon. Today's most
common packaging is called TSOP (Thin Small Outline Package). Some earlier chip designs
used DIP (Dual In-line Package) packaging and SOJ (Small Outline J-lead). Newer chips, such
as RDRAM use CSP (Chip Scale Package). Take a look at the different chip packages below, so
you can see how they differ.
When it was common for memory to be installed directly on the computer's system board, the
DIP-style DRAM package was extremely popular. DIPs are through-hole components, which
means they install in holes extending into the surface of the PCB. They can be soldered in place
or installed in sockets.
SOJ packages got their name because the pins coming out of the chip are shaped like the letter
"J". SOJs are surface-mount components - that is, they mount directly onto the surface of the PCB.
TSOP packaging, another surface-mount design, got its name because the package was much
thinner than the SOJ design. TSOPs were first used to make thin credit card modules for
notebook computers.
Unlike DIP, SOJ, and TSOP packaging, CSP packaging doesn't use pins to connect the chip to
the board. Instead, electrical connections to the board are through a BGA (Ball Grid Array) on the
underside of the package. RDRAM (Rambus DRAM) chips utilize this type of packaging.
For some higher capacity modules, it is necessary to stack chips on top of one another to fit them
all on the PCB. Chips can be "stacked" either internally or externally. "Externally" stacked chip
arrangements are visible, whereas "internally" stacked chip arrangements are not.
Amazing but true: memory starts out as common beach sand. Sand contains silicon, which is the
primary component in the manufacture of semiconductors, or "chips." Silicon is extracted from
sand, melted, pulled, cut, ground, and polished into silicon wafers. During the chip-making
process, intricate circuit patterns are imprinted on the chips through a variety of techniques. Once
this is complete, the chips are tested and die-cut. The good chips are separated out and proceed
through a stage called "bonding": this process establishes connections between the chip and the
gold or tin leads, or pins. Once the chips are bonded, they're packaged in hermetically sealed
plastic or ceramic casings. After inspection, they're ready for sale.
This is where memory module manufacturers enter the picture. There are three major
components that make up a memory module: the memory chips, PCB, and other "on-board"
elements such as resistors and capacitors. Design engineers use CAD (computer aided design)
programs to design the PCB. Building a high-quality board requires careful consideration of the
placement and the trace length of every signal line. The basic process of PCB manufacture is
very similar to that of the memory chips. Masking, layering, and etching techniques create copper
traces on the surface of the board. After the PCB is produced, the module is ready for assembly.
Automated systems perform surface-mount and through-hole assembly of the components onto
the PCB. The attachment is made with solder paste, which is then heated and cooled to form a
permanent bond. Modules that pass inspection are packaged and shipped for installation into a computer.
Originally, memory chips were connected directly to the computer's motherboard or system
board. But then space on the board became an issue. The solution was to solder memory chips
to a small modular circuit board - that is, a removable module that inserts into a socket on the
motherboard. This module design was called a SIMM (single in-line memory module), and it
saved a lot of space on the motherboard. For example, a set of four SIMMs might contain a total
of 80 memory chips and take up about 9 square inches of surface area on the motherboard.
Those same 80 chips installed flat on the motherboard would take up more than 21 square inches
on the motherboard.
These days, almost all memory comes in the form of memory modules and is installed in sockets
located on the system motherboard. Memory sockets are easy to spot because they are normally
the only sockets of their size on the board. Because it's critical to a computer's performance for
information to travel quickly between memory and the processor(s), the sockets for memory are
typically located near the CPU.
Examples of where memory can be installed.
Memory in a computer is usually designed and arranged in memory banks. A memory bank is a
group of sockets or modules that make up one logical unit. So, memory sockets that are
physically arranged in rows may be part of one bank or divided into different banks. Most
computer systems have two or more memory banks - usually called bank A, bank B, and so on.
And each system has rules or conventions on how memory banks should be filled. For example,
some computer systems require all the sockets in one bank to be filled with the same capacity
module. Some computers require the first bank to house the highest capacity modules. If the
configuration rules aren't followed, the computer may not start up or it may not recognize all the
memory in the system.
You can usually find the memory configuration rules specific to your computer system in the
computer's system manual. You can also use what's called a memory configurator. Most third-party
memory manufacturers offer free memory configurators, available in printed form or
accessible electronically via the Web. Memory configurators allow you to look up your computer
and find the part numbers and special memory configuration rules that apply to your system.

Well, here you have written about how memory works, but I need to know how Windows 2000 manages memory.

assadtarik, what you are asking here is for someone else to do your work for you. This is completely against the forum rules. We are here to help you out, not to do your assignment. Darren here has already given you a whole lot of info, and if you expect someone else to do your complete assignment, sorry, you've come to the wrong place. You have google out there. Just run a search for what you are looking for. Don't expect people here to spoon feed you the exact info. If there are 21 million results for your query, it's your job to sift through them and get the relevant info. No one here will do your work for you. If you need help, tell us what you have done so far and we'll go from there. But you have to show some initiative.

hi everyone ,
"We are here to help you out, not to do your assignment"

Yep, they cannot do your assignment, but you can download a pre-done assignment, like a pre-compiled binary. Easy money! :)

Anyway,
to know how Windows 2000 manages memory, you need to know the fundamentals of operating systems. Let me ask you a question: what is the difference between a thread and a process? How are they implemented? How do they run in parallel, and how do they intercommunicate? If you don't have answers to these questions, then

read chapters 1 and 2 (Introduction, and Processes and Threads) of Modern Operating Systems by Andrew S. Tanenbaum. I'm using the second edition; these may have changed a little in newer editions. That will give you a sense of what processes and threads are and how they work.
Then read the whole memory management chapter. That chapter is not about memory management in Windows 2000 specifically; it covers any operating system. Once you know these things, you can read a book like
Inside Microsoft Windows 2000, Third Edition, by David A. Solomon and Mark E. Russinovich.

"You have google out there. Just run a search for what you are looking for."
:) Yes, he was correct! You can Google these books as ebooks, download them, and have fun. The two books I mentioned above are available in pre-compiled binary format - you just download them. I can't mention the places where you can actually download them, because the forum would hate me for it. Google it, as goldeagle2005 said, or you know what to do :)

Anyway, just forget about it and let me introduce myself:
sanzilla Jackcat : sandundhammikaperera@yahoo.com

Anyway, friend, what about your project? I also have a project to do. You know, I'm very busy these days, so I'm outsourcing my project! I just have to find some money, anyway.

