OS: Windows 2008 R2
SQL Server: 2008 R2 SP2
OS Memory: 16 GB
SQL Server Max Memory: 12 GB
Database is in SIMPLE recovery mode.
1. I saw some blocking when executing sp_who2, so I ran Paul Randal's "Tell me where it hurts" wait-stats script.
The results show the LCK_M_U and LCK_M_IX wait types accounting for roughly 90% of the waits.
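For context, a minimal sketch of the kind of rollup that script performs against sys.dm_os_wait_stats (simplified; the full script also filters out a long list of benign wait types):
-- Simplified wait-stats rollup (sketch only; Paul Randal's actual script excludes benign waits)
SELECT wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count,
       CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER() AS NUMERIC(5,2)) AS [pct_of_total]
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;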
2. I ran Glenn Berry's Memory DMV:
-- Good basic information about memory amounts and state
SELECT total_physical_memory_kb, available_physical_memory_kb,
total_page_file_kb, available_page_file_kb,
system_memory_state_desc
FROM sys.dm_os_sys_memory OPTION (RECOMPILE);
-- You want to see "Available physical memory is high"
The result was "Available physical memory is high".
3. I ran Pinal Dave's DMV:
SELECT dm_ws.wait_duration_ms,
dm_ws.wait_type,
dm_es.status,
dm_t.TEXT,
--dm_qp.query_plan,
--dm_ws.session_ID,
--dm_es.cpu_time,
--dm_es.memory_usage,
--dm_es.logical_reads,
--dm_es.total_elapsed_time,
dm_es.program_name,
DB_NAME(dm_r.database_id) DatabaseName,
-- Optional columns
dm_ws.blocking_session_id--,
--dm_r.wait_resource,
--dm_es.login_name,
--dm_r.command,
--dm_r.last_wait_type
FROM sys.dm_os_waiting_tasks dm_ws
INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
WHERE dm_es.is_user_process = 1
order by wait_duration_ms desc
GO
The result showed an UPDATE/INSERT trigger on a 34-million-row table causing the LCK_M_U and LCK_M_IX waits.
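To see what that trigger actually does, its definition can be pulled with something like this (a sketch; 'dbo.BigVendorTable' is a placeholder for the actual vendor table name):
-- Sketch: list triggers on the table and show their definitions (table name is a placeholder)
SELECT tr.name AS trigger_name,
       OBJECT_NAME(tr.parent_id) AS table_name,
       tr.is_disabled,
       OBJECT_DEFINITION(tr.object_id) AS trigger_body
FROM sys.triggers AS tr
WHERE tr.parent_id = OBJECT_ID(N'dbo.BigVendorTable');   -- placeholder table name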
4. I ran Glenn Berry's DMV for Signal Waits:
-- Signal Waits for instance
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2))
AS [%signal (cpu) waits],
CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2))
AS [%resource waits]
FROM sys.dm_os_wait_stats
The result was 2% signal waits, which suggests the instance is not under CPU pressure.
5. DBCC LOGINFO returned 283 rows (VLFs).
On analyzing the log settings, I found that log autogrowth was set to 10%.
I changed the log growth from a percentage to a fixed size in MB and brought the VLF count down to roughly 50.
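A sketch of that change, with placeholder database/log file names and sizes (the usual way to reduce the existing VLF count afterwards is to shrink the log and regrow it in fixed increments):
-- Sketch: switch log autogrowth from 10% to a fixed increment (names and sizes are placeholders)
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 512MB);

-- Shrink and regrow the log in fixed steps to consolidate VLFs (sizes are placeholders)
DBCC SHRINKFILE (N'MyDatabase_log', 1);
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 4096MB);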
6. I ran Glenn Berry's script for IO bottleneck:
-- Calculates average stalls per read, per write, and per total input/output for each database file.
SELECT DB_NAME(fs.database_id) AS [Database Name], mf.physical_name, io_stall_read_ms, num_of_reads,
CAST(io_stall_read_ms/(1.0 + num_of_reads) AS NUMERIC(10,1)) AS [avg_read_stall_ms],io_stall_write_ms,
num_of_writes,CAST(io_stall_write_ms/(1.0+num_of_writes) AS NUMERIC(10,1)) AS [avg_write_stall_ms],
io_stall_read_ms + io_stall_write_ms AS [io_stalls], num_of_reads + num_of_writes AS [total_io],
CAST((io_stall_read_ms + io_stall_write_ms)/(1.0 + num_of_reads + num_of_writes) AS NUMERIC(10,1))
AS [avg_io_stall_ms]
FROM sys.dm_io_virtual_file_stats(null,null) AS fs
INNER JOIN sys.master_files AS mf
ON fs.database_id = mf.database_id
AND fs.[file_id] = mf.[file_id]
ORDER BY avg_io_stall_ms DESC OPTION (RECOMPILE);
-- Helps you determine which database files on the entire instance have the most I/O bottlenecks
-- This can help you decide whether certain LUNs are overloaded and whether you might
-- want to move some files to a different location
The results are listed in the spreadsheet image below. Database1 is the database causing the blocking. I don't know what the numbers in the individual columns mean with regard to an accepted scale of good-to-bad I/O, but since Database1 is first in the list, I am assuming it is causing the biggest I/O bottleneck.
Can I infer from the spreadsheet below that the LUN assigned to drive E:\ is overloaded, and that moving some files off that LUN could potentially ease the I/O bottleneck?
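If moving files does turn out to be the answer, I understand the relocation itself would look roughly like this (a sketch; the logical file name and the target F:\ path are placeholders, and the database stays offline while the file is copied at the OS level):
-- Sketch: take the database offline, copy the file to the new LUN, then repoint SQL Server
USE master;
ALTER DATABASE [Database1] SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- After copying the physical file to the new location:
ALTER DATABASE [Database1]
MODIFY FILE (NAME = N'Database1_data', FILENAME = N'F:\SQLData\Database1.mdf');   -- placeholder name/path

ALTER DATABASE [Database1] SET ONLINE;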
This is how far my knowledge will take me.
I am starting to look into purging rows from the 34-million-row table. However, the purge will be a "future" fix, as the vendor for the app needs to get in the game. Moving the .mdf files to faster storage (such as solid-state drives) is not an option, but moving some files off the LUN assigned to E:\ could be.
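For the purge, the approach I am looking at is deleting in small batches so locks stay below the escalation threshold and the log stays small under SIMPLE recovery; a rough sketch with a placeholder table name, date column, and retention cutoff:
-- Sketch: batched purge (table, column, batch size, and cutoff date are placeholders)
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (4000)                  -- small batch, below the typical lock-escalation threshold
    FROM dbo.BigVendorTable
    WHERE CreatedDate < '20100101';    -- placeholder retention cutoff

    SET @rows = @@ROWCOUNT;

    CHECKPOINT;                        -- SIMPLE recovery: allow the log to truncate between batches
END;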
Any guidance is appreciated.
Thanks in advance.
-Jeelani