We're having issues with database performance, and up till now we've done very little with indexing. We turn on full-text indexes when available, but we still support MSSQL 2000 (though we are phasing it out of our support), and we advise our clients to use the newest version available when possible. Our database model consists of over 200 tables, plus stored procedures for accessing each of those tables. Everyone in my office agrees that we need to update our data model to make it more streamlined, but that is a work in progress, and the boss wants instant gratification here. So we're looking at indexing.

What would be the best way to get the improved performance we need with minimal impact and effort?

Implementing indexes to improve performance is always an option. But to do it well, you need thorough knowledge of the database design and of the flow of data (how the application hits the DB).

We've run the SQL performance tuner on several client installs, but each one comes back with a different set of recommendations. I know we should put the indexes on the hardest-hit tables, but I was told that writing to a table can wreak havoc on an index, and our most-read and most-written tables are largely the same tables.

You might try updating statistics (check BOL for syntax) as an immediate step. One indexing strategy is to create a narrow, non-unique index covering just the columns used by your most-used queries. That gets you much of the retrieval speed without sacrificing performance on updates. Another possibility is to make sure your data files and log files are on different physical spindles (or RAID arrays) to reduce disk contention; that's relatively easy to migrate to with minimal downtime. You might also consider creating multiple filegroups, or even implementing a partitioning scheme.
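As a rough sketch of both suggestions (table and column names here are made up for illustration, not from your schema):

```sql
-- Refresh optimizer statistics on a heavily hit table;
-- see BOL for options such as WITH FULLSCAN.
UPDATE STATISTICS dbo.Orders;

-- A narrow, non-unique nonclustered index on the columns
-- your most-used queries actually filter and join on.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate);
```

Keeping the index narrow is what limits the write penalty: fewer columns means less to maintain on every INSERT/UPDATE.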

Not to burst your bubble (although I am pretty good at that :) ) but 200 tables isn't really that big.

Eh, wasn't much of a bubble. Also, we don't have much control over hardware. The part of our database that is large isn't the number of tables, it's the number of rows per table. For most of our clients the backup file doesn't exceed 500 MB, but our larger clients can hit 3-4 GB or more. We have one table that in some cases has more than 500,000 rows in it, and in several stored procedures it gets joined to tables of equal or greater row counts. Though we do filter this using WHERE clauses, it occasionally takes a REALLY long time to run.
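For reference, the slow pattern is roughly this shape (hypothetical names, not our actual tables):

```sql
-- Two 500,000+ row tables joined, then filtered.
-- Without indexes on the join and WHERE columns,
-- this tends to force full scans of both tables.
SELECT o.OrderID, d.ProductID, d.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d
    ON d.OrderID = o.OrderID
WHERE o.CustomerID = 1234
  AND o.OrderDate >= '2008-01-01';
```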

What I'm looking for, I think, is a one-size-fits-most theory of database optimization.

Okay, how's this: Find the place where you are joining/where-ing (is that a word?) your large tables, and create an index that covers JUST those columns in the appropriate table. You don't have to worry about uniqueness in the index.
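Concretely, if one of your big tables is joined on, say, an OrderID column and filtered on a CustomerID column (placeholder names, substitute whatever your queries actually use), the index would be something like:

```sql
-- Non-unique index covering JUST the join/WHERE columns.
-- No uniqueness required; the optimizer only needs a
-- sorted structure to seek into instead of scanning.
CREATE NONCLUSTERED INDEX IX_BigTable_OrderID_CustomerID
    ON dbo.BigTable (OrderID, CustomerID);
```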

Next, if you have multi-table joins, use ShowPlan (i.e. "Display Estimated Execution Plan" in SSMS) on the actual select statements to locate table scans, and try to construct indexes that will minimize those. Rule of thumb: Index Scan is preferable to Table Scan, Index Seek is preferable to Index Scan.
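If you'd rather see the plan as text (this works back to SQL 2000, as far as I know), you can do something like the following; the query itself is just a placeholder:

```sql
-- Return the estimated plan instead of running the query.
-- Look for "Table Scan" vs "Index Seek" operators in the output.
SET SHOWPLAN_TEXT ON;
GO
SELECT o.OrderID
FROM dbo.Orders AS o    -- placeholder query
WHERE o.CustomerID = 1234;
GO
SET SHOWPLAN_TEXT OFF;
GO
```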

Hope that helps. Bubble re-inflated?

I'm not one for bubbles, unless they smell like bacon, then they make me want breakfast...

Yes, actually, that helps quite a bit. My background is in programming/web development, and I only know enough about SQL to get my job done; each new job has required me to learn more. Back in school, the biggest database (by number of tables) I ever had to deal with was one I designed from the ground up, with only 25 tables. Its usage patterns and size made indexing mostly trivial. I'm always using the w3schools website and MSDN for tips, tricks, and references.

Thanks a lot for the help.

Database relationships, stored procedures, your SELECT queries, and the field lengths in your tables also play a major role in database performance.
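To give one concrete example of the relationship point (table and constraint names invented for illustration): a declared foreign key both enforces integrity and documents the join path between tables.

```sql
-- A declared relationship between two hypothetical tables.
-- Besides guaranteeing that every detail row has a parent order,
-- the trusted constraint tells the optimizer about the join,
-- which it can sometimes use to simplify query plans.
ALTER TABLE dbo.OrderDetails
    ADD CONSTRAINT FK_OrderDetails_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);
```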

Please elaborate on the relationship aspect. We're having a debate in our office about what role relationships play in databases, and whether they're necessary at all.

Thank you very much. That was extremely helpful. I shall return with the results of our interoffice debate. The source of the debate is our conversion, many years ago, from Oracle to SQL Server. My department head, and the person he's worked with the longest, were thrown into MSSQL when Oracle decided to change their licensing policies. The database is a wreck, but we're fixing it little by little, and this is one of many steps toward its future.
