Maybe I am just a lot better at this than I thought, but I figure that somewhere there is a mathematical rule being overlooked. When I run dbcc sqlperf(lrustats) on some of my production machines, I sometimes end up with a cache hit ratio (which is defined as a percentage, mind you) that is slightly over 100%:
Statistic             Value
--------------------  ---------
Cache Hit Ratio       100.00898
Cache Flushes         0.0
Free Page Scan (Avg)  0.0
Free Page Scan (Max)  0.0
Min Free Buffers      331.0
Cache Size            4362.0
Free Buffers          9434.0
I suspect some counter somewhere is getting wrapped around its 4-byte limit (a sketch of that arithmetic follows at the end of the thread). Is there any reliable source for getting statistics about SQL Server performance? Users tend to be unreliable and say everything is slow.

|||

I'd say use SQL Query Analyzer and its Estimated Execution Plan feature to start. Then there's always the Current Activity node under Management to look at. Then there's the Profiler. I also suggest checking out the software from BMC. Maybe get a test copy.

|||

Those are good suggestions for tuning individual queries, but I am looking more for server-level statistics. Suppose I have one query that fills my buffer cache with tables A through G. Then after that query, there is another query that brings in tables J through Q. Unless the buffer is big enough to hold them all in RAM, some or all of tables A through G are going to be paged out. That is what I am looking for: the interaction of multiple queries as reflected by the cache hit ratio (and other wait-time statistics as I find them). BMC would also have to depend on the statistics gathered from SQL Server, so I should be able to get the same statistics that BMC is reading. Clear as mud?
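To make the wraparound suspicion concrete, here is a minimal T-SQL sketch. The hits/total counter layout and every value in it are invented for illustration (lrustats' actual internal bookkeeping is not documented here); it only shows that when the larger of two related 4-byte counters wraps before the smaller one, the computed ratio comes out over 100 percent:

    -- Invented cumulative counters, each imagined as a 4-byte slot.
    -- The total-reads counter is always >= the hits counter, so it
    -- reaches the 2^32 ceiling first.
    DECLARE @hits bigint, @total bigint
    SET @hits  = 4294960000    -- true hits: just under 2^32
    SET @total = 4294970000    -- true total reads: just past 2^32

    SELECT
        @hits  % 4294967296 AS stored_hits,   -- 4294960000, not wrapped yet
        @total % 4294967296 AS stored_total,  -- 2704, wrapped back near zero
        100.0 * (@hits % 4294967296)
              / (@total % 4294967296) AS bogus_hit_ratio_pct

A ratio only slightly over 100, like the 100.00898 above, is more consistent with the calculation being done on deltas sampled across a wrap point, but the mechanism is the same.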
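As for a reliable server-level source: on SQL Server 7.0/2000, master.dbo.sysperfinfo exposes the same counters that Performance Monitor reads, which is presumably also what tools like BMC's poll. Here is a sketch of pulling the buffer cache hit ratio from it; the LIKE pattern on object_name is an assumption, since the prefix varies with the instance name:

    -- The raw 'Buffer cache hit ratio' counter is meaningless on its own;
    -- it has to be divided by its companion 'base' counter.
    SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0)
               AS buffer_cache_hit_ratio_pct
    FROM master.dbo.sysperfinfo r
    JOIN master.dbo.sysperfinfo b
      ON  b.object_name  = r.object_name
    WHERE r.object_name  LIKE '%Buffer Manager%'
      AND r.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base'

For the wait-time side, DBCC SQLPERF(WAITSTATS) on SQL Server 2000 dumps cumulative wait statistics per wait type.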