Thursday, March 29, 2012

Getting the most out of indexes (best practices and such)

Hi group,
This is a very general question about indexes and best practices; no DDL will be provided.
There's a fairly well-indexed table with over 30 million records. Over 20 million have been added in the last 2 weeks. I am noticing that queries that used to run really fast take hours now. I can't even begin to contemplate how much time it would take to rebuild the indexes.
Also, out of those 30 million, probably 40% are duplicate records.
My question is: will indexes be useful when there is such a ratio of dups? Should we concentrate on deduping this table and then rebuilding the indexes? Or just rebuild the indexes?
Also, someone suggested creating another data file on a separate physical drive and rebuilding the indexes into that new file. That supposedly will make it much faster. Any truth to that?|||Here is what I would do:
1. Make sure you have a clustered index on the table(s) in question. If not, bite the bullet and add one.
2. Check your fill factor settings so that inserts are not causing page splits. (Start at 70% and monitor page splits. If page splits remain high, keep dropping that number. You probably don't want to go much lower than, say, 50%.)
3. Use DBCC INDEXDEFRAG on a regular schedule (nightly, in my opinion) to keep the table from becoming fragmented. This is an online operation, so the table(s) will be available during the defrag. A rough sketch of all three steps follows this note.
NOTE: The first time you defrag these things, be prepared to watch the paint dry. Also, logging activity will be really high. Make sure you have adequate disk space for the logging and back up your logs very regularly, OR put the database in SIMPLE recovery mode while defragging (I'm not a fan of option 2).
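
A minimal T-SQL sketch of those three steps; the table dbo.BigTable and its ID column are hypothetical stand-ins, so substitute your own table and key:

-- Step 1: add a clustered index (here on a hypothetical ID column),
-- with step 2's fill factor leaving room on each page to absorb inserts.
CREATE CLUSTERED INDEX IX_BigTable_ID
    ON dbo.BigTable (ID)
    WITH FILLFACTOR = 70

-- Watch the page-split counter; if it stays high, rebuild with a lower fill factor.
SELECT cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Access Methods%'
  AND counter_name LIKE 'Page Splits%'

-- Check fragmentation (scan density, logical fragmentation) before and after.
DBCC SHOWCONTIG ('dbo.BigTable')

-- Step 3: online defrag, suitable for a nightly job (0 = current database).
DBCC INDEXDEFRAG (0, 'dbo.BigTable', 'IX_BigTable_ID')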
I would not worry about de-duping unless the duplicate records are unintentional and causing data integrity issues, etc.
The disk-swapping thing is a waste of time in my opinion. You are better off placing this massive table on its own filegroup. Put that filegroup on a separate RAID device with its own controller if you want. But I am not even convinced that this will be necessary (kind of a fine-tuning thing); you've got bigger fish to fry with the basic index maintenance first. If you do go the filegroup route, see the sketch below.
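
A rough sketch of the filegroup approach; the database name, file path, and sizes are hypothetical, and rebuilding the clustered index with DROP_EXISTING is what physically moves the table's data:

ALTER DATABASE MyDB ADD FILEGROUP BigTableFG

ALTER DATABASE MyDB
ADD FILE (
    NAME = BigTableData1,
    FILENAME = 'E:\MSSQL\Data\BigTableData1.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP BigTableFG

-- Rebuild the clustered index on the new filegroup; the table's data follows it.
CREATE CLUSTERED INDEX IX_BigTable_ID
    ON dbo.BigTable (ID)
    WITH DROP_EXISTING
    ON BigTableFG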
Make sure that your database is on a RAID disk (I guess I never asked if it was). Your data and indexes should be on a RAID 1 or RAID 1+0 volume (preferably RAID 1+0). Your logs and tempdb should be on RAID 1 or RAID 1+0 as well (again, preferably RAID 1+0).
This will help for sure. But you might have more work to do to really get this thing smoking.
Greg Jackson
PDX, Oregon|||You can update statistics, de-dupe your data, make sure you're using clustered indexes, non-clustered indexes, and primary keys appropriately (I won't even ask how you ended up with 12 million duplicate records), defragment your data, and use Query Analyzer to get your estimated query execution plans. For starters; a quick sketch of the first two follows below.
Moving non-clustered indexes to their own physical drive will help improve performance, as will moving the transaction log to its own physical drive.
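
A minimal sketch of the statistics refresh and a duplicate-hunting query; dbo.BigTable and the Col1/Col2 duplicate definition are hypothetical:

-- Rebuild the optimizer's statistics from a full scan of the table.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN

-- Size up the duplicates before deciding whether de-duping is worth the effort.
SELECT Col1, Col2, COUNT(*) AS DupCount
FROM dbo.BigTable
GROUP BY Col1, Col2
HAVING COUNT(*) > 1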
"LP" <lp@.a.com> wrote in message
news:eSpK5LmLFHA.3296@.TK2MSFTNGP15.phx.gbl...
> Hi group,
> This is a very general question about indexes and best practices, no DDL
> will be provided.
> There's a fairly indexed table with over 30mill records. Over 20 millions
> have been added in the last 2 ws. I am noticing that queries that used
> to
> run really fast take hours now. I can't even begin to contemplate how much
> time it would take to rebuild indexes.
> Also out of those 30millions probably 40% is duplicate records.
> My question is will indexes be useful when there is such a ration of dups.
> Should we concentrate on deduping this table then rebuilding indexes? Or
> just rebuild indexes?
> Also someone suggested creating another data file moving to another
> physical
> drive and rebuild indexes storing it to the new file on separate physical
> drive. That supposedly will make it much faster. Any truth to that?
|||Indexes are only of any use if the optimizer uses them, so it's a good idea to closely monitor which indexes are being used and which are not (and are hence potentially redundant). Once that has been done, look at the query plan for your scripts; it will show you which indexes your scripts will use, if any. A quick sketch follows.
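
For example, a minimal sketch from a query window; the query itself is a hypothetical placeholder:

-- Return the estimated plan as text instead of running the query.
SET SHOWPLAN_TEXT ON
GO
SELECT Col1, Col2
FROM dbo.BigTable
WHERE Col1 = 42
GO
SET SHOWPLAN_TEXT OFF
GO

A Table Scan operator on a 30-million-row table is exactly what this will flag; an Index Seek means the optimizer is actually using an index.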
Adrian
