I couldn't find it on the net, so: do you guys know if Intel, AMD, or any other company might be producing a mono-core CPU with a higher clock frequency? A mono-core at 10GHz would suit me well.

The word you're looking for is "single-core". Single-core processors are less commonly produced now (in desktops) because multi-core processors almost always provide better performance. In general, the industry has "kind of" decided on lower clock rates for better energy efficiency, with speed coming from improved instructions, multiple cores, and bigger memory caches.

There are certain operations that cannot be parallelized, but in those cases the bottleneck will usually be memory unless the operation is highly cache-optimized. In that case, you might want to look into using FPGAs to implement "better instructions" for the operation, which is sometimes possible for things like cryptography (doing AES in a single clock cycle might be possible, so it's that kind of idea), etc.

If you want to clock up a processor, the biggest problem is heat. You need to make sure it's energy efficient. You should probably still use a multi-core chip (even if you're only using one core), since the operating system will try to balance the load (reducing the heat). Then again, those processors gave up on high clock rates, so I'm not sure. You might be able to do it with an Intel Pentium (I've heard of those being clocked to 6GHz), but you're going to need one heck of a cooling system in place to get anywhere near 10GHz.

Basically, at the point of trying to get 10GHz you're not going for speed; you're going for a high clock rate because it's fun and expensive.

Instinctively I would say Intel might be the better bet here. But if you're using server processors, it's up in the air.

Like Hiroshe, I had never heard of a CPU being clocked above 5 or 6GHz, and even then, that requires liquid cooling, or even liquid nitrogen cooling. It would seem that the world record is 8.8GHz.

Basically, the main reason why CPU speed has been limited in the past 10 years is the physical limitation of heat transfer. A silicon chip and the heat sink on top of it have a certain maximum heat transfer rate, i.e., there is a limit to the amount of heat that could possibly flow out of it, even under ideal conditions (ambient temperature at 0 Kelvin). And transistor technology has pretty much reached its limit too. The energy consumption of a CPU, and thus the heat coming out of it, is proportional to the number of transistor switches per second, i.e., the clock frequency. The higher the frequency, the more heat is generated, and there is no way to reduce that amount of heat with existing transistor technology (or any transistor technology; there would need to be a major technology switch to something other than transistors). And the only way to increase the amount of heat that can flow out of a CPU is by lowering the ambient temperature or using a better coolant than air (such as water, hence liquid cooling), but even then you are still limited by the theoretical ideal limit, and I would assume that even that limit isn't much better than what a liquid nitrogen cooling system achieves.
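To put rough numbers on the heat argument, the dynamic (switching) power of a CMOS chip is approximately P = C·V²·f, and the voltage V usually has to be raised to keep the chip stable at higher frequencies, so power grows faster than linearly with clock speed. A small sketch with purely illustrative numbers (not real chip specifications):

```python
# Dynamic (switching) power of a CMOS chip: P ~ C * V^2 * f.
# Voltage usually has to rise with frequency to keep the chip stable, so
# heat grows faster than linearly with clock rate. All values below are
# illustrative assumptions, not real chip specifications.

def dynamic_power_watts(capacitance_farads, voltage_volts, frequency_hz):
    return capacitance_farads * voltage_volts ** 2 * frequency_hz

base = dynamic_power_watts(1e-9, 1.0, 3e9)   # ~3 W at 3 GHz and 1.0 V
fast = dynamic_power_watts(1e-9, 1.3, 10e9)  # same chip pushed to 10 GHz at 1.3 V
print(base, fast, fast / base)               # over 5x the heat for 3.3x the clock
```

Even with made-up numbers, the shape of the problem is clear: pushing the clock from 3GHz to 10GHz more than quintuples the heat you have to move out of the die.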

Thanks for all the great answers. I am aware of the fact that fewer cores will mean bottlenecking each other. But I will not have just 1 CPU. I plan on buying a motherboard with 4 CPUs and getting them clocked high. So it will become an 8-core monster >:D

These days, a 4-CPU system will have from 8 (more likely 16) cores, up to as many as 32. More cores == slower clock speeds. You can overclock 4-core high-end Intel CPUs to about 5GHz, and possibly more with aggressive liquid cooling.

rubberman So it would end up 4*2 at 5GHz, which would super-technically be something like an 8-core 5GHz, right? And as far as I understand, 2.5GHz on an Intel equals 5GHz on an AMD. So: an 8-core 5GHz, wouldn't that be a powerful computer?

If you're going for speed, then you should probably stick with a couple of multi-core processors. It'll be faster than a couple of single-core processors (though I'm sure you could construct a computation that runs faster on single-core processors, I wouldn't expect to see one occur naturally).

If you don't care as much about energy efficiency, you can overclock and use a better cooling system; however, at that point it still might be cheaper in the long run to not overclock and add another CPU instead (this is how most supercomputers do it).

The CPU isn't the only important part though! If your other parts don't keep up, your programs will bottleneck elsewhere. So you might want to look into buying a server blade instead of a desktop.

Depending on what you need this computation speed for, you might want to look into GPU computing with CUDA/OpenCL, cluster computing, or using a set of FPGAs. I've heard a story about one guy doing research who needed a small supercomputer. He ended up buying a bunch of Raspberry Pis (for their relatively big GPU), and that did the job well, fairly cheaply.

If you want to dump a lot of money on a fast personal computer, just use one or two regular multi-core processors (or more if it's a server blade), overclock it with a nice cooling system (perhaps submerge the system in mineral oil, add lights and bubbles so it looks cool), undervolt it, and maybe get an Nvidia Tesla. Get high-performance RAM, and pay attention to how you set up your hard drives (i.e., use an SSD for booting/installed programs, and an HDD for storing data). Maybe overclock the memory bus. Make sure you use the right tools, and make sure you keep an eye on how the system runs (watch the heat, run it through a few test suites).

I understand 2.5GHz on an Intel equals 5GHz on an AMD

No. 2.5GHz means 2.5 billion clock cycles per second. So 2.5GHz is always 2.5GHz, and it is not a measure of the real-time performance of your programs. For example, you would expect a MIPS processor at 2GHz to be slower than an x86-64 processor at 2GHz, just because of the instruction set. At the same time, you might expect MIPS to be a better choice (depending) for a supercomputer because of the properties of RISC processors.
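To make that concrete: roughly, runtime = instructions / (IPC × frequency), where IPC (average instructions retired per cycle) depends on the architecture and the workload. The IPC values below are made up for illustration, not measured figures for any real chip:

```python
# Clock rate alone doesn't determine runtime:
#   runtime = instructions / (IPC * frequency)
# where IPC (average instructions retired per cycle) depends on the
# architecture and the workload. The IPC values here are made up for
# illustration, not measured figures for real chips.

def runtime_seconds(instructions, ipc, freq_hz):
    return instructions / (ipc * freq_hz)

work = 1e10  # a 10-billion-instruction workload
print(runtime_seconds(work, 1.0, 2e9))  # narrow chip at 2 GHz: 5.0 s
print(runtime_seconds(work, 2.5, 2e9))  # wider chip, same 2 GHz: 2.0 s
```

Same clock, very different runtimes; that is the whole point of the frequency-is-not-performance argument.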

That being said, there are instruction-set differences between AMD and Intel. If your program is compiled to take advantage of a processor's specific capabilities, then you might expect a difference. However, if a program is compiled for generic x86_64, the difference will be harder to see.

For example, newer Intel processors have built-in instructions for AES encryption, so you would expect a program compiled to take advantage of those instructions to run faster (and they do, by a factor of about 4 on my machine).
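As an aside, on Linux you can check whether your CPU advertises those AES instructions by looking for the `aes` flag in /proc/cpuinfo. A Linux-specific sketch (an assumption; other operating systems expose CPU features differently):

```python
# Check whether this CPU advertises the AES-NI instruction set.
# Linux-specific sketch (an assumption): parses the "flags" lines of
# /proc/cpuinfo; other operating systems expose CPU features differently.

def has_cpu_flag(cpuinfo_text, flag):
    """True if `flag` appears on any 'flags' line of /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            if flag in line.split(":", 1)[1].split():
                return True
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("AES-NI supported:", has_cpu_flag(f.read(), "aes"))
    except OSError:
        print("No /proc/cpuinfo here; use your OS's CPU feature tools.")
```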

Now, I think you might be talking about Intel's hyper-threading when you talk about "double the clock rate". Intel's hyper-threading is complicated. It's not really the same as doubling the number of cores. For example, if you're running a chess engine like Stockfish, you would want to use as many threads as you have physical cores, not virtual cores. But if you're running a few unrelated (less intensive) programs at once, you might expect the virtual cores to provide more energy efficiency and a bit more speed.
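As a quick way to see the physical/virtual distinction on your own machine, this sketch counts logical cores with Python's os.cpu_count() and estimates physical cores by parsing /proc/cpuinfo (a Linux-specific assumption; hyper-threaded siblings share the same physical id/core id pair):

```python
import os

# Logical cores (what the OS schedules on) vs physical cores: with
# hyper-threading, os.cpu_count() reports twice the physical count.
# Linux-specific sketch: sibling hyper-threads in /proc/cpuinfo share
# the same (physical id, core id) pair.

def physical_core_count(cpuinfo_text):
    """Count distinct (physical id, core id) pairs in /proc/cpuinfo text."""
    cores = set()
    phys = core = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("physical id"):
            phys = line.split(":", 1)[1].strip()
        elif line.startswith("core id"):
            core = line.split(":", 1)[1].strip()
        elif not line.strip():          # blank line ends one processor entry
            if phys is not None and core is not None:
                cores.add((phys, core))
            phys = core = None
    if phys is not None and core is not None:   # file may not end with a blank
        cores.add((phys, core))
    return len(cores)

if __name__ == "__main__":
    print("logical cores:", os.cpu_count())
    try:
        with open("/proc/cpuinfo") as f:
            print("physical cores:", physical_core_count(f.read()))
    except OSError:
        print("(/proc/cpuinfo not available on this OS)")
```

For something like a chess engine, the physical count is the number of threads you actually want.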

In general, Intel might be faster for consumer computers. AMD is probably more energy-efficient-ish, and is probably easier to overclock. Intels probably don't need to be overclocked as much to get similar performance, but that depends. Also, AMD is cheaper, and you might be able to get an extra processor for the same cost.

There is no straightforward way to compare the speeds of theoretical systems. Even benchmarks of real systems aren't always representative.

Wow, Mike, great explanation; I was about to mention that. Just a quick add-on: the problem with higher frequencies is indeed the heat, but also the connections between the components in the chip. The nanotechnology has reached a limit where higher frequencies and the heat produced would damage the few-nanometre "wires", and thus the chip would break.

Hiroshe, ugh, 2.5GHz doesn't always equal 2.5GHz. An 8-core AMD can be slower than a 4-core Intel. It isn't always equal.

Mind that Intel and AMD CPUs have different architectures. If you want to get to know basic architectures, take a look at MIPS processors.

2.5GHz is always 2.5GHz. GHz means "billions of cycles per second (average)". This number is measured and is absolute. If you have 2 processors clocked at 2.5GHz, then they're both clocked at 2.5GHz, and it's as simple as that. It doesn't matter if they're completely different architectures, or made by different companies. A frequency is a frequency. However, the clock frequency isn't the only factor that dictates the runtime of your programs (and that's what I was trying to make clear in my post). Two processors clocked at the same frequency might have different runtimes because the clock is not the only factor.

For example, you would expect a processor with a bigger cache to have fewer cache misses. Even if the clock rate is lower, the cache size can sometimes be far more important. We do try to write programs to take advantage of cache space (and even design algorithms around the concept; for example, a cache-optimized prime sieve is able to run 10 times faster(ish) than one that's not cache-optimized. So even if the clock rate is halved, the cache can sometimes more than make up for it). Other examples include the instruction set, etc...
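To illustrate the cache-optimized sieve idea, here is a sketch of a segmented Sieve of Eratosthenes: it sieves one small chunk at a time so the working set fits in cache. The 32 KB segment size (a typical L1 data cache) is an assumption, and interpreted Python won't show the hardware effect anywhere near as clearly as optimized native code; the structure is the point.

```python
# A cache-friendly "segmented" Sieve of Eratosthenes: instead of marking
# multiples across one huge array, sieve small segments so the working set
# stays cache-sized. Sketch of the idea; the ~10x figure quoted above came
# from tuned native code, not Python.

def small_primes(limit):
    """Plain sieve: all primes up to and including limit."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytes(len(range(p * p, limit + 1, p)))
    return [i for i in range(2, limit + 1) if is_prime[i]]

def segmented_sieve(n, segment_size=32768):  # 32 KB ~ a typical L1 data cache
    """All primes below n, sieved one cache-sized segment at a time."""
    if n <= 2:
        return []
    limit = int(n ** 0.5)
    base = small_primes(limit)     # enough to mark every composite below n
    primes = list(base)
    low = max(2, limit + 1)
    while low < n:
        high = min(low + segment_size, n)
        seg = bytearray([1]) * (high - low)
        for p in base:
            start = max(p * p, ((low + p - 1) // p) * p)
            for multiple in range(start, high, p):
                seg[multiple - low] = 0
        primes.extend(low + i for i, flag in enumerate(seg) if flag)
        low = high
    return primes
```

The marking loop only ever touches one `segment_size` window at a time, which is exactly what keeps it in cache in the native-code version.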

Having multiple cores means that they are running in parallel. This has nothing to do with the frequency they are clocked at.

So yes, having more cores means that you'll usually have more speed in multi-threaded applications.

Be careful though. Intel processors use hyper-threading, which means that each physical core presents two virtual cores. Two virtual cores aren't usually twice as fast as one physical core. In fact, for certain workloads (like a chess engine, say), you should run programs using the number of physical cores, not virtual cores.

Speed isn't as straightforward as frequency * number of cores.
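One classic reason is Amdahl's law: if only a fraction p of a program can be parallelized, n cores give an overall speedup of 1 / ((1 - p) + p/n), which flattens out no matter how many cores you add. A quick sketch with an illustrative 90%-parallel workload:

```python
# Amdahl's law: with parallel fraction p on n cores, overall speedup is
# 1 / ((1 - p) + p / n). Even a 90%-parallel program tops out below 10x.
# The 0.9 fraction is an illustrative assumption.

def amdahl_speedup(p, n):
    """Overall speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 2, 4, 8, 1000):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Eight cores get you under 5x here, and a thousand cores still can't beat 10x; frequency * cores is an upper bound you rarely reach.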

Also, saying that a 2.5GHz Intel is equivalent to a 5GHz AMD is not usually correct. You can check out benchmarks at Phoronix. This is not representative of the entire Intel line vs the entire AMD line, though.

Well, it's not the only factor, but its effect on performance is "feelable" through the GHz delivered.

You might want to use GFLOPS for a more "feelable" number. Instead of comparing clock frequency, it measures the total number of billions of floating-point operations per second (on average) for the entire system. This is more useful in a scientific context, but it will still get you a more realistic number than comparing clock frequencies.
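For a back-of-the-envelope version, theoretical peak GFLOPS is just cores × clock (GHz) × floating-point operations per cycle per core. The FLOPs-per-cycle value below (8, e.g. one 256-bit SIMD unit) is an illustrative assumption, not the spec of any particular chip:

```python
# Back-of-the-envelope peak throughput:
#   peak GFLOPS = cores * clock_GHz * FLOPs_per_cycle_per_core
# The FLOPs-per-cycle value (8 here, e.g. one 256-bit SIMD unit) is an
# illustrative assumption; real chips vary widely, and sustained numbers
# are always below the theoretical peak.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(4, 2.5, 8))   # 80.0 GFLOPS for a hypothetical 4-core chip
```

Note that the per-cycle term is exactly where two chips at the same GHz diverge, which is why GFLOPS is more "feelable" than clock rate alone.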

Better yet, use some extensive benchmarking suite.

Yea. Especially since there are GFLOPS figures on Intel's/AMD's websites next to the processor names.

Well, it's still the same logic... if a 4.4GHz 8-core from AMD performs worse than a 2.8GHz 6-core Intel, then you can say that 2.8GHz on Intel = 4.4GHz on AMD.

Correct. If you have one particular Intel processor (not all Intel processors) and one particular AMD processor (not all AMD processors), and time a particular computation on both processors, then you can compute the s/GHz for each processor, and you can expect it to scale linearly (thus you can convert the time it takes to run this computation by comparing GHz). This linear relationship between clock rate and execution speed is used in practice. However, the clock speed alone doesn't provide a very accurate picture without the context of a computation and the "s/GHz" for all the processors in question.

I cheated a little when I said it scales linearly: it does if the workload is cache-optimized. But you'll have bottlenecks appearing elsewhere long before 10GHz.
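Reading the "s/GHz" idea as runtime normalized by clock (runtime × clock, i.e. total cycles in billions, which is one way to make the units work out), the extrapolation looks like this. Illustrative numbers only; as noted, the linear scaling only holds while the workload stays CPU-bound and cache-friendly:

```python
# "s/GHz" extrapolation sketch: time a workload once, normalize by clock
# (runtime * GHz ~ total cycles in billions), then predict the runtime at
# a different clock rate. Assumes perfect linear scaling, which only holds
# while the workload stays CPU-bound and cache-friendly.

def normalized_cost(runtime_s, clock_ghz):
    """Workload cost that is (ideally) clock-independent."""
    return runtime_s * clock_ghz

def predicted_runtime(cost, clock_ghz):
    """Invert the normalization for another clock rate."""
    return cost / clock_ghz

cost = normalized_cost(10.0, 2.5)       # measured: 10 s at 2.5 GHz
print(predicted_runtime(cost, 5.0))     # predicted at 5 GHz: 5.0 s
```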

It would be incorrect to say that all computations on all Intel processors will have double the s/GHz of an AMD processor (each processor has its own "s/GHz" for a particular computation, and this is not a function of the brand of processor). This is easy to see in benchmarks.

In practice, when you buy a processor you won't see the "s/GHz" for a large number of different kinds of computations. In fact, when you buy a processor, the only thing you can really compare it to is older iterations of the same processor line (under the assumption that processors get better over time, which usually paints an accurate picture), along with the clock rate, number of cores, and the L1, L2, and L3 cache sizes.

Now, in your situation it doesn't seem applicable to use "s/GHz" (unless you're setting up multiple computers). That is why I'm saying the most accurate way to get a picture of the speed of a processor is to compare benchmarks.

Also, get high-performance RAM, make sure the bus rates are nice and high (and compatible), and get a good graphics card. If you're a gamer, you would probably prefer a top-of-the-line graphics card over a top-of-the-line CPU. If you're a scientist (or doing numerical computation), you'll probably need both to be good.

Correct. If you have one particular Intel processor (not all Intel processors) and one particular AMD processor (not all AMD processors), and time a particular computation on both processors, then you can compute the s/GHz for each processor, and you can expect it to scale linearly (thus you can convert the time it takes to run this computation by comparing GHz).

I don't know about that. Intel seems to be heavily dominating AMD in the benchmarks you provided (with a couple of small exceptions). And that's despite the AMD having a significantly higher clock rate and 2 more logical CPUs.

Correct: in these benchmarks, this Intel processor seems to outperform this AMD on his machine. That does not imply that Intel processors are always better (that would be inductive reasoning based on one test). Intel processors do tend to run better at lower frequencies in my experience, but it's not a rule.

Furthermore, it's even more incorrect to say that "Intel processors always have better s/GHz by a factor of exactly 2." This varies between processor lines. Keep in mind also that AMD tends to design their chips to run at higher clock frequencies (which comes at a cost elsewhere in the CPU), and Intel's are generally designed to have more optimized instructions (which also has a cost).

For reference, AMD seems to provide better performance per watt. When you're dealing with servers or supercomputers (where the bigger costs are electricity and heat control), performance-per-watt seems to slightly outweigh performance-per-time (since you can add more computers to the cluster for more performance fairly cheaply). The better performance-per-watt allows you to overclock it more than an Intel. That being said, Intel works to have more optimized instructions, so it's really a tossup.

So you mean to say: AMD for economic reasons and Intel for speed reasons? 'Cause that's what I understood.

Yes, I would say that it tends to be true for personal computers in this day and age - unless perhaps if you're overclocking.

As you said, the clock isn't the only factor. So overclocking both will get almost the same results.

It's true that the frequency a processor runs at isn't the only factor. Overclocking both may or may not give similar results. Luckily, the clock frequency does linearly correlate with the runtime (up to a point), so you'll need to use that fact to judge for yourself.
