
I came across the terms memory clock, CPU clock, GPU clock, and warp clock on my computer.
Could someone explain these clocks to me, what their main job is, or tell me where on the web I can find good material about them?
Thanks in advance.

4 Contributors · 5 Replies · 6 Views · 12-Year Discussion Span · Last Post by joshuu

Prepare yourself and get comfortable, this is a long one....

By 'clock' they refer to the clock frequency, or 'speed', of the processor in question.
The clock frequency of a chip is considered a measure of its speed or performance.
These speeds vary greatly depending on the chip and its application.
A CPU clock is simply the speed at which the CPU core operates as a whole. Intel processors, for example, currently retail at a maximum speed of 3.6GHz, that is, Gigahertz.

At this point I should mention what Hertz are. Hertz measure frequency, but for processor purposes you can read them as 'times per second', that is, the number of operations or tasks the chip can do in one second. So a CPU running at 3.6 Gigahertz can do 3.6 billion things per second (thousand million).
This isn't entirely accurate, as some operations require 1 clock cycle while others require 8 or more. But I won't go into that for now, and will let you investigate it if such information grabs you.
Other CPU manufacturers such as AMD, Transmeta and VIA use different methods and efficiencies compared to Intel, but they always provide an equivalent Performance Rating, or 'PR', for their chips, usually expressed in MHz (MegaHertz, or million times per second). So a CPU with a performance rating of 2000 would compare to an equivalent 2GHz processor. It's all a bit complicated, and I'd be happy to explain the differences if you really want to know.
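To make the "clocks aren't everything" point concrete, here's a rough sketch (the numbers are purely illustrative): effective instruction throughput depends on both the clock frequency and how many cycles each instruction takes on average.

```python
# Hypothetical illustration: throughput = clock frequency / average
# cycles per instruction (CPI). Two chips at the same clock can differ
# enormously if their CPI differs.

def instructions_per_second(clock_hz, avg_cycles_per_instruction):
    """Clock cycles per second divided by cycles per instruction."""
    return clock_hz / avg_cycles_per_instruction

# A 3.6 GHz CPU averaging 1 cycle per instruction:
fast = instructions_per_second(3.6e9, 1)   # 3.6 billion per second

# The same clock, but averaging 8 cycles per instruction:
slow = instructions_per_second(3.6e9, 8)   # 450 million per second

print(f"{fast:.2e} vs {slow:.2e} instructions per second")
```

This is why a lower-clocked chip with better efficiency can earn a PR number higher than its raw MHz.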

Now on to GPUs, or Graphics Processing Units. These are extremely clever, state-of-the-art processors used specifically for graphics-intensive tasks, anything from high-end 3D rendering engines on graphics-design workstations to common 3D gaming applications.

These processors have two 'clocks'. The first is the GPU's core frequency, or operating speed. The second is the memory clock, the speed of the video RAM on the card. (A related figure you'll see quoted is the RAMDAC speed; the RAMDAC, or Random Access Memory Digital to Analogue Converter, is the part of the card whose sole purpose is to convert digital data into the analogue signal sent to your monitor, and it runs at its own frequency.) The core and memory speeds will be similar but rarely identical; the GPU's core 'clock' will be in the hundreds of MegaHertz, and so will the memory 'clock'.

The memory on these cards is getting ever faster to cope with the efficiency and demands of today's GPUs. The ability to throw around millions of pixels and polygons is all well and good, but the memory holding the textures (the pictures applied to polygon forms to turn a shape into an object, from a box to a gun to a door or wall) has to keep up with what the GPU processes. Otherwise you would end up with boxes and guns in your games with missing or damaged textures, so they wouldn't look like boxes or guns at all.

If you want, have a look around at online stores and see for yourself the CPU frequencies, the latest graphics cards with their two clock frequencies, and the difference they make to the price of the cards.
Notice that graphics cards often come with performance statistics, stating billions of pixels (or texels) per second, fill rates, polygon counts and so on. It's staggering to imagine: if you had to calculate and draw a single scene by hand it would take weeks, yet these things do it at anything up to 90 frames per second, faster in some cases.
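A quick back-of-the-envelope sketch of what those frame rates demand (the resolution and frame rate below are just example figures, not from any particular card):

```python
# Hypothetical example: pixels per second a card must fill to sustain
# a given resolution at a given frame rate.

def pixels_per_second(width, height, fps):
    """Pixels per frame times frames per second."""
    return width * height * fps

# 1024x768 at 90 frames per second:
rate = pixels_per_second(1024, 768, 90)
print(f"{rate:,} pixels per second")  # roughly 70.8 million every second
```

And that's just the final pixels; with overdraw and multiple texture layers, the actual fill-rate requirement is several times higher.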

Second only to mobile phones, graphics processors contain some of the absolute latest technology advances known to man. I'm often staggered that they can be reproduced in such quantities at such low prices.

Here ends Fast-Stuff™ class 101 :)


To be even more specific, GHz is the frequency of the CPU. Frequency is the number of times a wave (or cycle) passes a specific point per second.

Frequency is measured in hertz, and that's where mega- and gigahertz come from.
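The prefixes are just powers of ten, which a two-line sketch makes plain:

```python
# The metric prefixes behind the units: 1 MHz = 1e6 Hz, 1 GHz = 1e9 Hz
# (hertz = cycles per second).

MHZ = 1_000_000
GHZ = 1_000_000_000

cpu_mhz = 3600               # a 3600 MHz CPU...
cpu_hz = cpu_mhz * MHZ       # ...is 3,600,000,000 cycles per second...
print(cpu_hz / GHZ)          # ...which is 3.6 GHz
```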


OK, so the memory clock is simply the memory speed (for reading and writing, I suppose), but what about warp? I mean, what is it, and what does it do?
Is a memory that takes 8 clock cycles per operation, for instance, more powerful than one that takes 2?


Hi,

The more clock cycles an instruction takes, the longer the instruction takes. Computers "read" and do things on the top of the pulse of the clock cycle.

We are talking electricity.

For example, let's say it is a 1-volt system. When the clock trips, the voltage goes from 0 to 1 volt, and the reading is done shortly after the level reaches 1 volt. If the read happens too early, say at 0.85 volts, the circuit is not ready to be read and the data could be garbled. When you hear memory statistics quoting speed in nanoseconds, that is the time it takes the memory to adjust to a change in voltage.

You can also think of clock cycles as reading music. The clock is the "thump" heard on the beat. The instructions are the notes. If the instructions are 1 clock cycle, then that instruction will complete with each thump/note. If the instructions take more clock cycles, it would be like a whole note (one sound, multiple thumps).
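Putting numbers on the music analogy (the clock speed and cycle counts here are just illustrative):

```python
# Sketch: how long one clock cycle lasts, and how long an instruction
# needing several cycles takes overall.

def cycle_time_ns(clock_hz):
    """Duration of one clock period, in nanoseconds."""
    return 1e9 / clock_hz

def instruction_time_ns(clock_hz, cycles):
    """Total time for an instruction taking the given number of cycles."""
    return cycles * cycle_time_ns(clock_hz)

# At 3.6 GHz one cycle lasts about 0.28 ns...
one = cycle_time_ns(3.6e9)
# ...so an 8-cycle instruction takes about 2.2 ns:
eight = instruction_time_ns(3.6e9, 8)
print(f"{one:.3f} ns per cycle, {eight:.3f} ns for an 8-cycle instruction")
```

So to answer the earlier question directly: fewer cycles per operation is better, not worse.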

Christian


In that case, are clock cycles something I should check when purchasing these components?
