I was on the fence about which server to choose. Here were the options:

  • Dual Xeon 2.8 GHz - 2 GB RAM - 2 x 73 GB 10K SCSI
  • Dual dual-core Opteron - 2 GB RAM - 2 x 160 GB SATA

The question I had was: is the performance gain from the two dual-core Opterons greater than the performance loss from the SATA drives?

It's a good thing Dani was on hand. Since she had just upgraded her DaniWeb servers, she was glad to provide some insight.

It comes down to this: SATA drives put an extra load on the CPU, whereas SCSI drives offload that work to the SCSI controller. So if you are serving pages and running the database on the same server, the SCSI route might be better, freeing up your CPU for Apache.
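
If you want to see that load for yourself, here's a rough Python sketch of my own (not something from Dani; the file name and sizes are arbitrary placeholders). It writes a big file with fsync and reports how much user and system CPU the process burned; a large system-CPU share during heavy I/O suggests the host CPU, not the controller, is doing the work.

```python
import os
import time

# Rough sketch: measure how much CPU time disk writes consume on this box.
# The file path and sizes are arbitrary test values, not from this thread.
PATH = "io_test.bin"
CHUNK = b"\0" * (1024 * 1024)    # 1 MiB per write
TOTAL_MB = 512

wall_start = time.time()
cpu_start = os.times()           # user/system CPU consumed so far

with open(PATH, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())         # force data to the drive, not just the cache

wall = time.time() - wall_start
cpu = os.times()
os.remove(PATH)

print(f"wall time : {wall:.2f} s")
print(f"user CPU  : {cpu.user - cpu_start.user:.2f} s")
print(f"system CPU: {cpu.system - cpu_start.system:.2f} s")
# A high system-CPU share during heavy I/O hints that the host CPU,
# rather than the controller, is servicing the disk -- the effect
# described above for SATA vs. SCSI.
```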

I am still a bit on the fence about what is better, so any more insight from others would be greatly appreciated.

- KUB


While Dani's advice is broadly correct, it really only applies to a RAID setup (whether hardware or software). What you should really be looking at here is how your applications use threading, and which operating system you have installed, to determine which option is best in your scenario.

Example:
On Red Hat, I would definitely recommend the dual Opteron over the dual Xeon for a web server, because of Red Hat's threading support.

Before I make my argument here are some things to consider:
1. Dual Xeon servers typically have two Xeon processors with Hyper-Threading support and a 533 MHz FSB.

The motherboards of the dual Xeons also typically use DDR-333 memory.

2. The dual Opteron servers (typically the newer ones) are dual-core, dual-Opteron machines with either a 533 or 800 MHz FSB.

The motherboards of the dual Opterons typically use DDR-400.

Now, on with what I was saying. While Red Hat does have Hyper-Threading support, when the web server and database server are under major stress, the Hyper-Threading is still 'virtual' rather than physical, which just adds more load to the real processors.

The dual Opterons, on the other hand, have four PHYSICAL cores. They will definitely handle the threading of the httpd and mysql services much better than a virtual (Hyper-Threaded) processor would.
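
If you want to gauge how much real parallelism a box gives you, here's a quick sketch of my own (the work sizes are arbitrary numbers I picked): time a purely CPU-bound task at increasing worker counts. Scaling tends to flatten once you pass the physical core count, which is exactly where Hyper-Threading stops standing in for real cores.

```python
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    # A purely CPU-bound task; no I/O, so speedup reflects real cores.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    WORK = [2_000_000] * 8          # eight equal chunks of work (arbitrary)
    for workers in (1, 2, 4, 8):
        start = time.time()
        with Pool(workers) as pool:
            pool.map(burn, WORK)
        print(f"{workers} worker(s): {time.time() - start:.2f} s")
    # Expect near-linear gains up to the physical core count, then little
    # more: that knee is where 'virtual' Hyper-Threaded siblings stop
    # substituting for physical cores.
```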

Now, the hard disks do play an important role in the server; however, a SATA RAID can perform as well as (or better than) a typical single SCSI drive, giving up very little compared to a SCSI RAID.

I'm not sure whether this helped with your choice or just made it harder, but at least you can now make an informed decision.

Kub never told me it was between an Opteron and a Xeon :) He just asked for my feedback on SCSI vs SATA.

I'd go with the Opterons.

Now, the hard disks do play an important role in the server; however, a SATA RAID can perform as well as (or better than) a typical single SCSI drive, giving up very little compared to a SCSI RAID.

This is not really accurate... SCSI drives have faster spindle speeds and are far more reliable than SATA. SATA drives also run hotter than SCSI drives, much like their PATA predecessors (UltraATA and the like), and I believe that heat plays a part in their unreliability compared to SCSI. For disk-intensive loads, SCSI's TCQ (tagged command queuing) is far superior to SATA's NCQ, and SCSI's support for disconnects on the bus is also at the core of its higher performance. In short, a SATA-based system will show a higher processor load for disk operations than a SCSI-based one.

More importantly, there is the question of RAID controller reliability. SCSI has been doing this for a long time. Do you really want to trust your data and your web site to a relatively new technology? Look at how long Compaq/HP's SmartArray and IBM's ServeRAID technologies have had to mature. I went through some of Dell's PERC I/II growing pains (I still have nightmares about corrupted RAID stripes), and it has only matured into a decent RAID controller in the last few years. SATA RAID does not yet have that kind of track record.

Go to any hardware manufacturer's web site: what do all the low-end servers have in common? Cheap SATA drives. Don't make the mistake of choosing a technology for its price alone. After working with servers for over 16 years and seeing what lasts and what doesn't, I won't run SATA drives in my servers, nor would I recommend them in their present state.

The future? We'll see. For now, I take comfort when I open my server closet and see my Netfinity servers with their SCSI RAID 5 arrays, humming along for years now without a hitch.

Just to mention, I was comparing a SATA RAID combination to a single SCSI drive, not a SCSI RAID. ^^

But I will say that the SATA drives are getting better as time goes on.

I agree with that, since with all RAID levels you are striping the data: the more physical spindles, the better the performance, and the faster the spindles, the faster the disk access. Of course, with UltraATA (PATA) drives that is not true. But the question centered on hardware for a web server, and who would run a single drive of any type in a server? Even OS-based software RAID is better than a single drive.
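
To put rough numbers on the spindle argument, here's a back-of-the-envelope sketch; the per-drive throughput figures are invented for illustration, not measurements of the drives discussed here.

```python
# Rough throughput estimates for striped arrays; per-drive figures are
# invented example numbers, not benchmarks of the drives in this thread.
def raid0_read(drives: int, per_drive_mbps: float) -> float:
    # RAID 0 striping: sequential reads spread across every spindle.
    return drives * per_drive_mbps

def raid5_write(drives: int, per_drive_mbps: float) -> float:
    # Crude RAID 5 model: one drive's worth of capacity goes to parity,
    # and each small write costs read-modify-write work.
    return (drives - 1) * per_drive_mbps / 4  # classic 4-I/O write penalty

print(raid0_read(2, 60.0))    # two SATA spindles striped: ~120 MB/s reads
print(raid0_read(1, 80.0))    # one faster SCSI spindle:    ~80 MB/s
print(raid5_write(4, 60.0))   # RAID 5 writes pay for parity: ~45 MB/s
```

The point being: for streaming loads, two slower spindles striped can beat one faster drive, which is why the SATA-RAID-vs-single-SCSI comparison above holds up.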

SATA is getting better, agreed, as is SAS. I think even in the Ultra320 SCSI world we've reached the limits of parallel-connected drives: Fast-320 (Ultra640) has restrictive cable-length requirements and is not being adopted well, with vendors concentrating on SAS as the future of SCSI. And 128 devices is certainly better than the 15-drive limit of current parallel SCSI.

Good discussion though.


I'm a little confused by the scope of this discussion. We began using Dell PowerEdges (1850, 2850, 2800) in 2005 for a specialized virtual machine application. We selected dual 3.6 GHz Xeons, first with 1 MB cache and later with 2 MB cache, with an 800 MHz FSB, PERC 4/i SCSI RAID, and DRAC 4 remote access. Nothing slower or lesser would do for our application.

When Dell abruptly and without warning withdrew the x8x0 machines in favor of the newer x9x0 line, we were really bent out of shape: we had orders in hand and overnight couldn't get the x8x0 machines from Dell. There is cost and time associated with qualifying a new machine model.

We dealt with it, though, and were pleasantly surprised to find that the x9x0 with a single 3.0 GHz dual-core Xeon, a 1333 MHz FSB, and 667 MHz memory outperformed the earlier 3.6 GHz Xeon by 50%. The newer machines also brought a change from SCSI to SAS/SATA. We chose SAS, and so far it has been good.

I always favor hardware RAID over software RAID. The only scenario in which we would consider software RAID would be mirroring the internal hardware RAID to an external hardware RAID partition, allowing the machine to boot from either one. We have done this on one system, but the jury is still out.

In both generations of PowerEdge we had to go for the maximum specs (although we didn't try the briefly available 3.8 GHz single-core Xeon), because we need the maximum execution rate for a single-threaded virtual machine. We have had very good results with both generations, and I'm glad we didn't have to suffer through Dell's learning curve with earlier generations. We have had three or four SCSI and SAS disk failures in the field, each handled with a replacement and rebuild and no adverse consequences.

Dual-core Xeon brought a larger cache, a faster FSB, and faster memory to the table, at the cost of a slightly slower clock (3.0 GHz vs. 3.6). The net result was very good for single-threaded applications. Quad-core doesn't bring much more to the table, except perhaps a larger cache, and it seriously impairs the core clock rate.

For multithreaded apps, quad-core could be very good because there are four physical processor cores. For single-threaded apps, which should include running a single virtual machine in VMware, for example, performance is determined by how fast one core can run. Virtual machines generally can't parallelize what they are running because they have no idea what the virtualized machine is doing. Multiple VMs in VMware could benefit from quad-core, but a single VM, like our virtual machine, will probably suffer rather than benefit. For us, the top of the heap is presently the highest-clocked dual-core Xeon with the fastest FSB and memory.
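
To put rough numbers on that tradeoff, here's a little Amdahl's-law-style arithmetic. The 3.0 and 3.6 GHz clocks are from my comparison above; the quad-core clock and the parallel fractions are made-up values for illustration.

```python
# Amdahl-style sketch of the dual-core vs. quad-core tradeoff described
# above. The 3.6 and 3.0 GHz clocks follow the post; the 2.4 GHz quad-core
# clock and the parallel fraction "p" are invented example parameters.
def relative_speed(clock_ghz: float, cores: int, p: float) -> float:
    # Throughput relative to a 1 GHz single core: the serial part of the
    # work runs on one core, the parallel part spreads across all cores.
    serial = (1 - p) / clock_ghz
    parallel = p / (clock_ghz * cores)
    return 1.0 / (serial + parallel)

# A single-threaded VM (p = 0) only cares about one core's clock:
print(relative_speed(3.6, 1, 0.0))   # 3.6 GHz single core -> 3.6
print(relative_speed(3.0, 2, 0.0))   # 3.0 GHz dual core   -> 3.0
print(relative_speed(2.4, 4, 0.0))   # slower quad core    -> 2.4

# A highly threaded web/db mix (p = 0.9) flips the ranking:
print(relative_speed(3.0, 2, 0.9))   # dual core -> ~5.45
print(relative_speed(2.4, 4, 0.9))   # quad core -> ~7.38
```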

The only really gnarly aspect of the PowerEdges is the Dell Remote Access Card (DRAC), which has to be the flakiest piece of hardware and software I've ever worked with. In its fourth generation (DRAC 4) it misbehaves badly, often requiring us to pull power from the machine as the only way to fully reset the DRAC after it begins refusing to redirect the console. Fifth-generation experience isn't fully in yet.

We have a fair amount of experience with the 2850, 2800, 2950, and 2900, the PERC 4 and 5 RAID controllers, the DRAC 4 and 5 remote cards, and host-independent SCSI RAID from Infortrend (before they stopped making "canister" RAID controllers). We run only Linux on these boxes and have had far more issues with SUSE and SLES than with the PowerEdges. We remotely install and support these systems in eight countries, receiving and prepping them ourselves, and reshipping them only for U.S. customers.
