Thanks
What is the maximum number of transactions per second that SQL Server supports?
Result to Text : Spaces
Hi, I am running a query with Results to Text:
select db_name(dbid), dbid from sys.sysprocesses
but there are a lot of spaces after the db_name column.
How can I trim them?
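The padding comes from the declared width of the column: DB_NAME() returns nvarchar(128), so Results to Text pads every row out to that width. A minimal sketch of one way around it, assuming a narrower cast is acceptable:

-- Cast the nvarchar(128) result to a narrower type so text output only
-- pads to 32 characters; RTRIM alone does not help because the padding
-- comes from the column width, not from trailing spaces in the value.
select cast(db_name(dbid) as varchar(32)) as dbname, dbid
from sys.sysprocesses;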
Msg 2601, Level 14, State 1, Procedure sp_flush_commit_table, Line 15 Cannot insert duplicate key row in object 'sys.syscommittab' with unique index 'si_xdes_id'. The duplicate key value is (2238926153). The statement has been terminated.
I am using SQL Server 2008 (R1) SP3, and when we run backup operations we get the error below:
Msg 2601, Level 14, State 1, Procedure sp_flush_commit_table, Line 15
Cannot insert duplicate key row in object 'sys.syscommittab' with unique index 'si_xdes_id'. The duplicate key value is (2238926153).
The statement has been terminated.
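As a hedged diagnostic starting point: sys.syscommittab holds change tracking metadata, so it may help to see which databases have change tracking enabled and how cleanup is configured (assuming the sys.change_tracking_databases catalog view is available on this build):

-- Databases with change tracking enabled, plus auto-cleanup and retention settings.
SELECT DB_NAME(database_id) AS database_name,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases;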
Please assist me with your inputs.
Thanks,
Rakesh.
Wrong execution plan getting picked on some occasions
Hi,
The following query sometimes runs efficiently: results come back quickly, and CPU and IO utilization are reasonable. But sometimes the query takes a very long time to execute; at those times I do not observe any blocking either, and CPU and IO are very high while it runs. It seems the engine sometimes picks the wrong execution plan.
This query is fired by a product, so there is no scope to change the query text. Any suggestions for getting the engine to pick the right execution plan?
select i.bpd_instance_id as instanceId,
       i.instance_name as instanceName,
       bpd.name as bpdName,
       istatus.name as instanceStatus,
       t.subject as taskSubject,
       tpriority.name as taskPriority,
       t.due_date as taskDueDate,
       t.attached_form_ref as taskAttachedInfoPathFormRef,
       t.attached_ext_activity_ref as taskAttachedExtActivityRef,
       t.task_id as taskId,
       tstatus.name as taskStatus,
       tuser.user_name as assignedToUser,
       tpriority.ranking as taskPriorityRanking
from msadmin.lsw_task t with (nolock)
inner join msadmin.lsw_bpd_instance i with (nolock)
    on t.bpd_instance_id = i.bpd_instance_id
left join msadmin.lsw_task_status_codes tstatus
    on t.status = tstatus.status_value
left join msadmin.lsw_bpd_status_codes istatus
    on i.execution_status = istatus.status_id
left join msadmin.lsw_priority tpriority
    on t.priority_id = tpriority.priority_id
left join msadmin.lsw_bpd bpd
    on i.cached_bpd_version_id = bpd.version_id
left join msadmin.lsw_usr_xref tuser
    on t.user_id = tuser.user_id
where ( t.status in ('11','12')
        and ( t.user_id = 5909
              or t.task_id in (
                    select t.task_id
                    from msadmin.lsw_task t with (nolock)
                    inner join msadmin.lsw_usr_grp_mem_xref m with (nolock)
                        on t.group_id = m.group_id
                    where m.user_id = 5909
                      and t.user_id = -1 )
              or t.task_id in (
                    select t.task_id
                    from msadmin.lsw_task t with (nolock)
                    inner join msadmin.lsw_grp_grp_mem_exploded_xref x with (nolock)
                        on t.group_id = x.container_group_id
                    inner join msadmin.lsw_usr_grp_mem_xref m with (nolock)
                        on m.group_id = x.group_id
                    where m.user_id = 5909
                      and t.user_id = -1 ) ) )
order by taskDueDate, taskPriorityRanking, instanceId, taskId
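Since the query text cannot be changed at the source, one option sometimes used in this situation is a plan guide that attaches a hint to the exact statement text. The sketch below is only illustrative: the plan guide name is made up, the @stmt shown here is abbreviated and would need to be the full statement text exactly as the product submits it (including its whitespace), and OPTION (RECOMPILE) is just one possible hint.

EXEC sp_create_plan_guide
    @name = N'PG_lsw_task_recompile',                              -- illustrative name
    @stmt = N'select i . bpd_instance_id as instanceId , ...',     -- full statement text, exactly as submitted
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (RECOMPILE)';                                -- forces a fresh plan on every execution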
Trace Issue
Hi,
I closed the trace files, but the trace still starts automatically every day at 12:00 AM.
How can I stop this?
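If the trace keeps coming back at midnight, it may be re-created by an Agent job or a startup procedure, which would also need to be found and disabled; the hedged sketch below only finds and stops a currently running server-side trace:

-- List traces; id 1 is usually the default trace.
SELECT id, path, start_time, is_default
FROM sys.traces;

-- Stop and then delete a specific trace definition
-- (replace 2 with the traceid reported above; purely illustrative).
EXEC sp_trace_setstatus @traceid = 2, @status = 0;  -- stop
EXEC sp_trace_setstatus @traceid = 2, @status = 2;  -- close and delete the definition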
Thanks
Shashikala
Become our FIRST Microsoft TechNet SS DBE Guru of 2014!!
Happy New Year!
Time for a fresh start!
We're looking for the first Gurus of 2014!!
This is your chance to make your mark on the Microsoft developer community.
All you have to do is add an article to TechNet Wiki from your own specialist field. Something that fits into one of the categories listed on the submissions page. Copy in your own blog posts, a forum solution, a white paper, or just something you had to solve for your own day's work today.
Drop us some nifty knowledge, or superb snippets, and become MICROSOFT TECHNOLOGY GURU OF THE MONTH!
This is an official Microsoft TechNet recognition, where people such as yourselves can truly get noticed!
HOW TO WIN
1) Please copy over your Microsoft technical solutions and revelations to TechNet Wiki.
2) Add a link to it on THIS WIKI COMPETITION PAGE (so we know you've contributed)
3) Every month, we will highlight your contributions, and select a "Guru of the Month" in each technology.
If you win, we will sing your praises in blogs and forums, similar to the weekly contributor awards. Once "on our radar" and making your mark, you will probably be interviewed for your greatness, and maybe eventually even invited into other inner TechNet/MSDN circles!
Winning this award in your favoured technology will help us identify the active members in each community.
Feel free to ask any questions below.
More about TechNet Guru Awards
#PEJL
Got any nice code? If you invest time in coding an elegant, novel or impressive answer on MSDN forums, why not copy it over to the one and only TechNet Wiki, for future generations to benefit from! You'll never get archived again!
If you are a member of any user groups, please make sure you list them in the Microsoft User Groups Portal. Microsoft is trying to help promote your groups, and collating them there is the first step.
Runtime parameter values
Hello everybody,
I have a couple of queries running longer than usual (they were not started from SSMS). They are being executed with cached execution plans. I can see the compiled parameter values, but how can I find the runtime values? (Hopefully through some DMF/DMV; no trace or Extended Events session is running.)
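For reference, the compiled values can be read out of the cached plan XML; the runtime values are generally not exposed by any DMV and would need a trace or Extended Events capture. A hedged sketch for pulling the compiled values of currently executing requests:

-- Show the ParameterList (compiled values) from the cached plan of each running request.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT r.session_id,
       pl.query_plan.query('//ParameterList') AS compiled_parameter_values
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS pl
WHERE r.session_id > 50;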
I need help
Collation conflict
OK, I'm running a query and it's all within the same database, on the same server, etc. Looking at the database properties, the collation is set to Latin1_General_BIN, as are the individual tables. So why would I get an error such as this?
Cannot resolve the collation conflict between "Latin1_General_BIN" and "SQL_Latin1_General_CP1_CI_AS"
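Column-level collations can differ from the database default (temp tables, for example, pick up the tempdb/server collation unless told otherwise). A hedged sketch for finding mismatched columns, and for forcing a common collation at the comparison, with hypothetical table and column names:

-- Find character columns whose collation differs from the database default.
SELECT OBJECT_NAME(c.object_id) AS table_name, c.name AS column_name, c.collation_name
FROM sys.columns c
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <> CONVERT(nvarchar(128), DATABASEPROPERTYEX(DB_NAME(), 'Collation'));

-- Force a common collation at the point of comparison (hypothetical tables/columns).
SELECT a.SomeKey
FROM dbo.TableA a
JOIN dbo.TableB b
  ON a.SomeKey COLLATE DATABASE_DEFAULT = b.SomeKey COLLATE DATABASE_DEFAULT;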
How to apply query governor cost limit to a single login.
I have an Active Directory group login (external users) whose query cost I need to limit.
I know we can prevent a query from running if its cost exceeds a threshold with sp_configure 'query governor cost limit', but we don't want ALL queries to be blocked if they are above the threshold. We only want this to apply to our external users, who are all members of the AD group login.
Another option would be to control the individual session with "set query governor cost limit". This would work if there were a way to guarantee that this setting gets set when the user logs in. Is there any way to run a session-level startup script (in Unix we can run .bashrc scripts to set environment variables, for example)? Ideally, a savvy user would not be able to reset this back to 0.
A logon trigger is not an option here, since SET commands are reset to the initial state once the trigger (or stored procedure) exits.
SSMS has an option to set the query governor cost limit (SSMS ==> Options ==> Query Execution ==> SQL Server ==> Advanced). Is there a way to pre-set this value in SSMS, and then disable the user's ability to reset it back to 0? My guess is that SSMS can be preset with registry changes... hoping not to have to go there for individual logins.
Resource Governor is another option that places restrictions, but it is not the same thing.
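For completeness, a hedged Resource Governor sketch that routes members of a hypothetical AD group into a capped workload group. Note this limits resources rather than enforcing a cost limit, so it is only an approximation of what is being asked; the pool, group, function and domain group names are all made up.

-- Hypothetical pool and workload group for the external users.
CREATE RESOURCE POOL ExternalPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP ExternalGroup
    WITH (REQUEST_MAX_CPU_TIME_SEC = 300)
    USING ExternalPool;
GO
-- Classifier function (must be created in master) routes the AD group's members
-- into that workload group; everyone else stays in the default group.
CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF IS_MEMBER(N'DOMAIN\ExternalUsers') = 1   -- hypothetical AD group
        RETURN N'ExternalGroup';
    RETURN N'default';
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;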
Performance monitoring - capture query time with STATISTICS?
I realize I could use perfmon to monitor all kinds of stuff, but was thinking one simple way would be to start capturing how long a query took.
Is there a way to select the client statistics or the length of time a query took to execute?
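A hedged sketch of two simple approaches: SET STATISTICS TIME for a single session, and sys.dm_exec_query_stats for durations of cached statements (times in that DMV are reported in microseconds):

-- Per-session: report parse/compile and execution times in the Messages tab.
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM sys.objects;   -- any query of interest
SET STATISTICS TIME OFF;

-- Server-wide: average and last elapsed time of cached statements.
SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
       qs.last_elapsed_time AS last_elapsed_microsec,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_elapsed_time DESC;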
system spid regularly allocating in tempdb but not deallocating
Early this week we had a system spid that was constantly allocating in tempdb but not deallocating:
SELECT s.session_id, dbu.database_id,
       SUM(dbu.internal_objects_alloc_page_count) internal_objects_alloc_page_count,
       SUM(dbu.internal_objects_dealloc_page_count) internal_objects_dealloc_page_count,
       (SUM(dbu.internal_objects_alloc_page_count) - SUM(dbu.internal_objects_dealloc_page_count)) * 8096 / 1024 kbytes_used_internal
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
LEFT JOIN sys.dm_db_task_space_usage dbu
    ON dbu.session_id = r.session_id AND dbu.request_id = r.request_id
WHERE internal_objects_alloc_page_count > 0
GROUP BY s.session_id, dbu.database_id
HAVING SUM(dbu.internal_objects_alloc_page_count) - SUM(dbu.internal_objects_dealloc_page_count) <> 0
ORDER BY kbytes_used_internal DESC;
The drive space for our tempdb was at 90% full. We restarted the service, and this morning the same symptoms exist. I cannot connect this process to anything, so I will present all the information I have and solicit suggestions for how to get more information. This is a 2008 R2 SP2 CU4 instance.
In the SQL above SPID 25 is the culprit and here is the output:
session_id  database_id  internal_objects_alloc_page_count  internal_objects_dealloc_page_count  kbytes_used_internal
25          2            36960                               0                                    292215
It hasn't grown that much - yet.
An attempt to find out what it is doing:
SELECT DB_NAME(r.database_id) dbname, s.host_name,
       SUBSTRING(t.text, r.statement_start_offset / 2,
                 CASE r.statement_end_offset
                     WHEN -1 THEN LEN(t.text)
                     ELSE r.statement_end_offset / 2
                 END - r.statement_start_offset / 2) executing_text,
       s.host_name, s.program_name, s.login_name, s.login_time, s.last_request_start_time
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id = 25
yields:
dbname  host_name  executing_text  host_name  program_name  login_name  login_time               last_request_start_time
master  NULL       NULL            NULL       NULL          sa          2014-01-22 22:35:44.330  2014-01-22 22:35:44.330
I turned on every event in Profiler for this SPID. That yields these events:
Scan:Started Scan:Stopped
The database for all of them is tempdb, and the read count is always greater than zero. Occasionally, two more events are squeezed in between: Lock:LockReleased, with text data in file:pageno format.
That session isn't in a transaction and isn't holding any locks that are reported by:
SELECT * FROM sys.dm_tran_locks WHERE request_session_id = 25
It is regularly waiting on the waittype: BROKER_EVENTHANDLER.
That is all I know and all the dots I have been able to connect to this point. I know it isn't much, but I would certainly appreciate any 'where to look next' ideas.
thanks
danny
-- dan http://dnhlmssql.blogspot.com/
CREATE CLUSTERED INDEX generates Msg 0, Level 11, State 0, Line 0 in SQL Server 2012
We have a server that was upgraded last year from SQL Server 2005 32-bit to SQL Server 2012 64-bit (both Standard Edition). A new array has been added to the server to allow the creation of a new filegroup that splits the largest and most used tables across the primary and secondary filegroups. I am in the process of starting to move the tables that get assigned to the secondary filegroup. The table uses around 150 GB of disk space. It is a heap, so the clustered index is only used to move the table to the secondary filegroup and is then dropped. The statement where I'm getting the messages is:
CREATE CLUSTERED INDEX [xcTempMove]
ON [dbo].[table_name] (
[PkID] ASC
)WITH ( FILLFACTOR = 90 ) ON [SECONDARY]
The error messages (I am not getting just a single one) are the following:
Msg 0, Level 11, State 0, Line 0
A severe error has occurred on current command. The results, if any, should be discarded.
Msg 0, Level 20, State 0, Line 0
A severe error has occurred on current command. The results, if any, should be discarded.
Event Viewer and SQL logs only show that the SPID was terminated by the host but there is no other information. The session is disconnected.
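One hedged diagnostic worth running before retrying, since Msg 0 severe errors during index builds are sometimes a symptom of corruption or an underlying I/O problem, is a consistency check of the heap (using the placeholder table name from above):

-- Check the heap for allocation/consistency errors before retrying the index build.
DBCC CHECKTABLE ('dbo.table_name') WITH NO_INFOMSGS, ALL_ERRORMSGS;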
I would appreciate any assistance that leads to solving this issue.
Eduardo Olivera
Viewing an encrypted data in MSSQLSERVER
Hi Everyone,
I have a table in my database that contains an encrypted column. I don't want to decrypt the column, but I want to be able to view the encrypted data.
Please, I need help!
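If the column is stored as varbinary (as it typically is with ENCRYPTBYKEY output), a hedged sketch for viewing the raw ciphertext as a hex string on SQL Server 2008 or later, with hypothetical table and column names:

-- Style 1 renders varbinary as a readable 0x... hex string.
SELECT Id, CONVERT(varchar(max), EncryptedCol, 1) AS ciphertext_hex
FROM dbo.MyTable;   -- hypothetical table/column names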
me
SQL Server utilising more memory than allocated
Hi,
The instance's memory target is 49152 MB only.
We are using only one external 3rd-party tool, AVAMAR (EMC), for backups, but the extra ~15 GB of RAM used by SQL Server makes me think something is not OK. Since I am using a 64-bit box, I thought there should be no need to consider locking server pages in memory.
My counters are as below :
Memory usage details for SQL Server instance INGURA11 (10.50.2500.0 - X64, Enterprise Edition 64-bit):

Memory visible to the operating system: 98284.96 MB physical (95.98 GB), 8388607.88 MB virtual
Buffer pool usage at the moment: committed 49152 MB, commit target 49152 MB, visible 49152 MB
Total memory used by the instance (Perfmon): 50331648 KB = 49152 MB (48 GB)
Memory needed as per the current workload: 50331648 KB = 49152 MB (48 GB)
Dynamic memory used for maintaining connections: 7320 KB (7.15 MB)
Dynamic memory used for locks: 431944 KB (421.82 MB)
Dynamic memory used for the dynamic SQL cache: 19184 KB (18.73 MB)
Dynamic memory used for query optimization: 4488 KB (4.38 MB)
Dynamic memory used for hash, sort and create index operations: 10081336 KB (9845.05 MB / 9.61 GB)
Memory consumed by cursors: 204984 KB (200.18 MB)
Pages in the buffer pool (database, free and stolen): 6291456 x 8 KB pages = 50331648 KB (49152 MB)
Data pages in the buffer pool: 6025312 pages = 48202496 KB (47072.75 MB)
Free pages in the buffer pool: 4119 pages = 32952 KB (32.18 MB)
Reserved pages in the buffer pool: 1255281 pages = 10042248 KB (9806.88 MB)
Stolen pages in the buffer pool: 262025 pages = 2096200 KB (2047.07 MB)
Plan cache pages in the buffer pool: 147470 pages = 1179760 KB (1152.11 MB)
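The hash/sort/index and reserved-page numbers above are fairly large, so it may help to see which memory clerks own the memory, including allocations outside the buffer pool. A hedged sketch, assuming the 2008 R2 column names (single_pages_kb / multi_pages_kb) of sys.dm_os_memory_clerks:

-- Top memory clerks by total KB (buffer pool single pages plus
-- multi-page and virtual allocations outside the buffer pool).
SELECT TOP (20)
       [type],
       SUM(single_pages_kb)             AS single_pages_kb,
       SUM(multi_pages_kb)              AS multi_pages_kb,
       SUM(virtual_memory_committed_kb) AS virtual_committed_kb,
       SUM(awe_allocated_kb)            AS awe_allocated_kb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY SUM(single_pages_kb + multi_pages_kb + virtual_memory_committed_kb + awe_allocated_kb) DESC;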
SEQUENCE issuing numbers more than once?
This is all on
Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) Oct 19 2012 13:38:57
Copyright (c) Microsoft Corporation
Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
Running on Server 2008 Standard as a VMWare instance
I have an ETL load process that requires me to merge data from 4 sources. To do this we load data incrementally from each of the 4 instances into a STAGING table, then we load it onwards into the destination table. During the load into the STAGING table we issue a unique RowID value for each loaded row, based on a SEQUENCE attached to the STAGING table, which is built using a script like this:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[STAGING].[Fact_Bert]') AND type in (N'U'))
    DROP TABLE [STAGING].[Fact_Bert]
GO
IF EXISTS (SELECT * FROM sys.sequences WHERE name = N'Bert_SEQ')
DROP SEQUENCE [FACT].[Bert_SEQ]
GO
IF NOT EXISTS (SELECT * FROM sys.sequences WHERE name = N'Bert_SEQ')
BEGIN
CREATE SEQUENCE [FACT].[Bert_SEQ]
AS [int]
START WITH 1
INCREMENT BY 1
MINVALUE -2147483648
MAXVALUE 2147483647
CYCLE
CACHE 100
END
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [STAGING].[Fact_Bert](
[source_system_sk] [int] NOT NULL,
[RowID] [int] NOT NULL,
... other cols for data
CONSTRAINT [PK_STAGING_Bert] PRIMARY KEY CLUSTERED
(
[source_system_sk] ASC,
... other PKEY cols form source system
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [STAGING]
) ON [STAGING]
GO
IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[FACT].[DF_Bert_RowID]') AND type = 'D')
BEGIN
ALTER TABLE [STAGING].[Fact_Bert] ADD DEFAULT (NEXT VALUE FOR [FACT].[Bert_SEQ]) FOR [RowID]
END
GO
So each row in STAGING ends up with a unique RowID, we carry these forwards into the LIVE tables and RowID is used as the PKEY on the LIVE tables.
This should be all well and good.
Every day a SQL Agent job runs a SSIS package that loads data into STAGING (after truncating it) and then moves it into LIVE.
The problem is that for some of the STAGING tables, every time we run the daily load jobs we find that it re-allocates SEQUENCE values. So when we try to move data into the LIVE tables we get a PKEY error. Looking at the data before we run a load, we can see that there are values in the STAGING table that run up to 2700010, but if I look at the sys.sequences catalog view, the current_value for the sequence concerned is 2700000, and
SELECT NEXT VALUE FOR [FACT].[Bert_SEQ]
-- returns 2700008
So when we run the package we end up with values in STAGING that already exist in the LIVE table, and hey presto, a PKEY violation when we try to move the data into LIVE.
I've got a stored procedure that I can run against the table that looks at the sequence and resets it to start at a new value if the highest value actually used is greater than the current value reported by the catalog view (it pads the new start value by 1000). I had to write this because manually figuring it out was taking forever.
The server isn't usually restarted between loads, though I have done that deliberately today between load issues and it seems to make no difference.
The sequences are cached, but the cache size is 100 and the overlaps seem pretty randomly sized, though the more data I load, the bigger the overlap seems to be. This makes me think that something is amiss in the way data is loaded by the OLE DB Destination in the SSIS packages that populate the STAGING tables. I'm not using transactions anywhere, nor am I using SSIS transaction/rollback capability.
I'm going to try switching the sequences to NO CACHE to see if that fixes the issue, though I don't like the perf hit I'm going to take, as we're loading many tens of thousands, running to hundreds of thousands, of rows a day across 30+ tables. I appreciate this isn't exactly big, but it's big enough to be a problem.
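For reference, hedged sketches of the two mitigations mentioned above, using the sequence name from the script; the restart value is purely illustrative:

-- Disable caching (trades some allocation throughput for durable current_value tracking).
ALTER SEQUENCE [FACT].[Bert_SEQ] NO CACHE;

-- Or restart the sequence beyond the highest value actually present in the data.
ALTER SEQUENCE [FACT].[Bert_SEQ] RESTART WITH 2800000;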
Has anyone got any ideas or advice?
many thanks
Steve
BI Addict!
What is better on one server: 6 databases in one instance, or 6 instances with one database each?
Hello,
I need your help to answer my question below:
On one server we need to create 6 databases. I have 2 options to fulfill this requirement.
Option 1: Install SQL Server 2008 as one instance, then create 6 databases.
Option 2: Install SQL Server 2008 as 6 instances, then create one database in each instance.
Which is the better option, and why?
Regards,
Pitou
SQL Full-text filter daemon launcher service
Hi,
What is the use of the SQL Full-text Filter Daemon Launcher service, and if we stop the service, what will happen?
Can you please give me a clear picture of this SQL Full-text Filter Daemon Launcher service?
SS2008 Log Shipping Performance Issues on Secondary Server Restore Job
Hi folks,
I'm having an issue with log shipping. What has taken 2-5 minutes in the past is now taking 20-40 minutes, which is no good really. File sizes have not increased.
I've run the log shipping restore with global trace flags 3004 and 3605 enabled.
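For anyone reproducing this, a hedged reminder of how the global trace flags mentioned above are typically enabled:

-- Enable detailed backup/restore progress messages (3004) and route them to
-- the error log (3605); the -1 makes the flags global.
DBCC TRACEON (3004, 3605, -1);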
The issue appears to be the step "Restore: Transferring data to <db_name>", which is taking about 90% of the process time in the Restore job on the secondary server.
I'm unsure what this element does in the restore, so I am looking for further details, suggestions, or where to look for issues.
I'm providing the log from the trace below,
many thanks
Stuart
2014-01-24 09:56:01.41 spid88 RestoreLog: Database <db_name>
2014-01-24 09:56:01.41 spid88 X-locking database: <db_name>
2014-01-24 09:56:01.41 spid88 Opening backup set
2014-01-24 09:56:01.43 spid88 Restore: Configuration section loaded
2014-01-24 09:56:01.43 spid88 Restore: Backup set is open
2014-01-24 09:56:01.43 spid88 Restore: Planning begins
2014-01-24 09:56:01.44 spid88 Halting FullText crawls on database <db_name>
2014-01-24 09:56:01.44 spid88 Dismounting FullText catalogs
2014-01-24 09:56:01.44 spid88 Restore: Planning complete
2014-01-24 09:56:01.44 spid88 Restore: BeginRestore (offline) on <db_name>
2014-01-24 09:56:01.44 spid88 Restore: Undoing STANDBY for <db_name>
2014-01-24 09:56:03.10 spid88 SnipEndOfLog from LSN: (19960:7863:1)
2014-01-24 09:56:03.10 spid88 FixupLogTail(progress) zeroing F:\<db_name>_Log1.ldf from 0x139b6e00 to 0x139b8000.
2014-01-24 09:56:03.10 spid88 Zeroing F:\<db_name>_Log1.ldf from page 40156 to 40177 (0x139b8000 to 0x139e2000)
2014-01-24 09:56:03.10 spid88 Zeroing completed on F:\<db_name>_Log1.ldf
2014-01-24 09:56:03.10 spid88 Restore: Finished undoing STANDBY for <db_name>
2014-01-24 09:56:03.13 spid88 Restore: PreparingContainers
2014-01-24 09:56:03.13 spid88 Restore: Containers are ready
2014-01-24 09:56:03.13 spid88 Restore: Restoring backup set
2014-01-24 09:56:03.13 spid88 Restore: Transferring data to <db_name>
2014-01-24 10:15:44.16 spid88 Restore: Waiting for log zero on <db_name>
2014-01-24 10:15:44.16 spid88 Restore: LogZero complete
2014-01-24 10:15:44.41 spid88 FileHandleCache: 440 files opened. CacheSize: 14
2014-01-24 10:15:44.41 spid88 Restore: Data transfer complete on <db_name>
2014-01-24 10:15:44.41 spid88 Restore: Backup set restored
2014-01-24 10:15:44.41 spid88 Restore-Redo begins on database <db_name>
2014-01-24 10:15:52.18 spid88 Rollforward complete on database <db_name>
2014-01-24 10:15:52.18 spid88 Restore: Done with fixups
2014-01-24 10:15:52.20 spid88 Transitioning to STANDBY
2014-01-24 10:15:52.33 spid88 Starting up database '<db_name>'.
2014-01-24 10:15:52.34 spid88 The database '<db_name>' is marked RESTORING and is in a state that does not allow recovery to be run.
2014-01-24 10:15:58.25 spid88 FixupLogTail(progress) zeroing F:\<db_name>_Log1.ldf from 0x5b000 to 0x5c000.
2014-01-24 10:15:58.25 spid88 Zeroing F:\<db_name>_Log1.ldf from page 46 to 526 (0x5c000 to 0x41c000)
2014-01-24 10:15:58.27 spid88 Zeroing completed on F:\<db_name>_Log1.ldf
2014-01-24 10:15:59.03 spid88 Recovery is writing a checkpoint in database '<db_name>' (10). This is an informational message only. No user action is required.
2014-01-24 10:16:00.46 spid88 Recovery completed for database <db_name> (database ID 10) in 8 second(s) (analysis 5796 ms, redo 0 ms, undo 546 ms.) This is an informational message only. No user action is required.
2014-01-24 10:16:00.58 spid88 Starting up database '<db_name>'.
2014-01-24 10:16:00.69 spid88 Database <db_name> was started .
2014-01-24 10:16:01.00 spid88 CHECKDB for database '<db_name>' finished without errors on 2014-01-18 17:31:21.020 (local time). This is an informational message only; no user action is required.
2014-01-24 10:16:01.01 spid88 Database is in STANDBY
2014-01-24 10:16:01.01 spid88 Resuming any halted fulltext crawls
2014-01-24 10:16:01.01 spid88 Restore: Writing history records
2014-01-24 10:16:01.01 Backup Log was restored. Database: <db_name>, creation date(time): 2012/07/18(16:20:49), first LSN: 19960:7863:1, last LSN: 19961:712:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'G:\<db_name>_LS\<db_name>_20140124095000.trn'}). This is an informational message. No user action is required.
2014-01-24 10:16:01.01 spid88 Writing backup history records
2014-01-24 10:16:01.40 spid88 Restore: Done with MSDB maintenance
2014-01-24 10:16:01.40 spid88 RestoreLog: Finished
2014-01-24 10:16:01.43 spid88 Setting database option MULTI_USER to ON for database <db_name>.
SQL Server 2005 - High CPU and blockings using Outlook with BCM 3.0
Hello,
We have been using Outlook with BCM 3.0 for a long time now (about 1-2 years). In the background runs a 64-bit SQL Server 2005 with 16 GB of RAM and 4 CPUs. From day one we have experienced major performance problems using BCM on every client (20-25 clients). We already tried several patches, upgrades, and fixes, but nothing helped. We also tried to improve performance on the clients (using SSD hard disks, configuring the "Polling-Interval" registry setting), but nothing helped.
The server still uses a lot of CPU time (30-70%) for that instance, and the clients always seem to be frozen.
I've made a small trace file from the server; it would be great if one of you could analyze it for me, because I have no experience with this.
I've uploaded the files here: http://www.speedyshare.com/files/28846031/MSSMLBIZ.rar
Any help would be really appreciated.