Channel: SQL Server Database Engine forum

dm_os_schedulers returns only 4 VISIBLE ONLINE schedulers!


Hello All

I have read on some forums that SQL Server Standard Edition is limited to the lesser of 4 sockets or 24 cores. Fine so far.

My server has 8 CPUs with 8 cores each, but when I execute this query I get only 4 rows. Can someone explain what is going on?

select scheduler_id,cpu_id, status, is_online from sys.dm_os_schedulers where status='VISIBLE ONLINE'
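For comparison, here is a minimal sketch (an addition, not the original query) that groups all schedulers by status and online flag, so the VISIBLE ONLINE, VISIBLE OFFLINE and HIDDEN counts are easy to compare side by side:

-- group every scheduler by status and online flag
select status, is_online, count(*) as scheduler_count
from sys.dm_os_schedulers
group by status, is_online
order by status;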

Thanks :)


Updates to Windows breaks SQL Server


Hi, ALL,

I have a laptop with Windows 8.1 pre-installed and SQL Server 2005 also pre-installed.

Recently this laptop was updated, and after the update I was not able to run some queries against the system tables in SQL Server. Everything was working prior to the update (no issues at all).

I know that the version of the server is outdated, but the machine came with it pre-installed and I am wondering whether someone could look into the breakage.

I did install SQL Server 2012 some time ago (before the Windows update, not after), but I hesitate to run that version and try those queries.

BTW, and this is kind of off-topic here: after a successful login I am always asked to create a Screen Name. It looks like the system does not recognize that I already have a screen name and should use it.

Thank you for any information you can provide.

Backup Encryption: does it provide security?


Hi all,

I am running SQL Server 2016 Standard and have tested backup encryption before applying it to my production environment.

In my tests, if we lose the backup of the certificate and private key used for encryption, we can back them up again from production without losing the thumbprint.

So here is my question: why do many sources, including Microsoft, emphasize keeping your certificate and private key in a secure location and warn that without the certificate you can't restore the backup?

1. There seems to be nothing to worry about with the certificate and private key: if you lose them, you can back them up again with a new password from the same instance.

Following from point 1, any intruder who gets onto the server and backs up the key would be able to restore all the databases he stole.
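For reference, a minimal sketch of re-exporting a certificate with a new password; the certificate name (TDEBackupCert), file paths and password are placeholders rather than real values:

-- re-export an existing certificate and its private key with a new password
USE master;
BACKUP CERTIFICATE TDEBackupCert
    TO FILE = 'C:\Keys\TDEBackupCert.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\Keys\TDEBackupCert.pvk',
        ENCRYPTION BY PASSWORD = 'N3w-Str0ng-P@ssw0rd!'
    );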

Cost in execution plan sums to more than 100%, somewhere around 140%+


Hi All

I am using Azure SQL. While performance tuning I found that one of the queries generated an execution plan whose operator costs sum to more than 140%.

The query plan is attached. Can you help me understand whether this is a bug that can be fixed or something new in the SQL engine?

Query Plan greater than 100%


Thanks

Saurabh Sinha




Script to return columns from a table...


Good Day,

I hope this message finds all here doing well...

I received this script a while back from the Microsoft forums. I need to make sure it is still valid because I am using it in SQL Server 2014.

I can confirm that it is still valid but my manager is requesting additional confirmation. Any assistance from anyone would be greatly appreciated.

declare @results table
(
    ID varchar(36),
    TableName varchar(250),
    ColumnName varchar(250),
    DataType varchar(250),
    MaxLength varchar(250),
    Longest varchar(250),
    SQLText varchar(250)
)

INSERT INTO @results (ID, TableName, ColumnName, DataType, MaxLength, Longest, SQLText)
SELECT
    NEWID(),
    Object_Name(c.object_id),
    c.name,
    t.name,
    case
        when t.name != 'varchar' then 'NA'
        when c.max_length = -1 then 'Max'
        else CAST(c.max_length as varchar)
    end,
    'NA',
    'SELECT Max(Len(' + c.name + ')) FROM ' + OBJECT_SCHEMA_NAME(c.object_id) + '.' + Object_Name(c.object_id)
FROM
    sys.columns c
INNER JOIN
    -- join on user_type_id rather than system_type_id to avoid duplicate rows
    -- for columns based on user-defined types (e.g. sysname)
    sys.types t ON c.user_type_id = t.user_type_id
WHERE
    c.object_id = OBJECT_ID('table123')

DECLARE @id varchar(36)
DECLARE @sql varchar(250)
declare @receiver table (theCount int)

DECLARE length_cursor CURSOR
    FOR SELECT ID, SQLText FROM @results WHERE MaxLength != 'NA'

OPEN length_cursor

FETCH NEXT FROM length_cursor
INTO @id, @sql

WHILE @@FETCH_STATUS = 0
BEGIN
    -- run the generated "SELECT Max(Len(...))" statement and capture its single value
    INSERT INTO @receiver (theCount)
    exec (@sql)

    UPDATE @results
    SET Longest = (SELECT theCount FROM @receiver)
    WHERE ID = @id

    DELETE FROM @receiver

    FETCH NEXT FROM length_cursor
    INTO @id, @sql
END

CLOSE length_cursor
DEALLOCATE length_cursor

SELECT
    TableName,
    ColumnName,
    DataType,
    [MaxLength],
    Longest
FROM
    @results


A SQL Server MVP or MSFT engineer should be replying soon as well. Hope this helps. Frank Garcia

SQL Management Studio Issue


Hi

Was hoping someone could help with the following.

I have installed SQL Server 2017 along with Management Studio. When I first open Management Studio I get the following error:

Object reference not set to an instance of an object. (Microsoft.SqlServer.Management.SqlStudio)

When I click through and connect to the server I get the following error:

Service 'Microsoft.SqlServer.Management.IRegistrationService' not found (Microsoft.SqlServer.Management.SDK.SqlStudio)

I get the same message each time I try and expand the database branch.

Does anyone know what the issue could be? I have tried a repair of SSMS but the issue remains.

Thanks

Restore database without WITH REPLACE


I have always thought that you need the WITH REPLACE clause to overwrite an existing database on restore. However, I just happened to restore a database over an existing one without WITH REPLACE, and I am not sure whether this is supposed to work.

restore database MAP201808 from disk='e:\map.bak' with 
move 'map' to '...map201808.mdf',
move 'map_log' to '...map201808.ldf';
dbcc checkdb('map201808') with no_infomsgs;

Weird error from the application


Hi, ALL,

I am not sure whether this question belongs in this forum or more in the development one, but hopefully someone will tell me...

I'm trying to develop an application in C++ that connects to the DB, executes some queries, and disconnects. I'm using the ODBC API directly.

Everything compiles and runs without errors. The initial connection is executed and queries are run.

However, when I try to exit the application and free all handles (statement, connection and environment), I get a "Function sequence error" when freeing the last one, the environment handle.

That means some query is still executing and I'm prematurely exiting the routine where that query is running.

Is there a way in C++ (either through the ODBC API or by other means) to find out which query is still executing?
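If it helps, here is a rough T-SQL sketch (run from a separate connection such as SSMS) that lists whatever is still executing for the application's sessions; the program_name value ('MyOdbcApp') is a placeholder for whatever the connection string sets:

-- show the statement text of any request still running for the app's sessions
SELECT r.session_id, r.status, t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id IN (
    SELECT session_id
    FROM sys.dm_exec_sessions
    WHERE program_name = 'MyOdbcApp'
);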

Thank you.

P.S.: As I said, if this post belongs in a different forum, just let me know and I will be happy to move it. ;-)


Resource governor

What's the difference between MAX_CPU_PERCENT and CAP_CPU_PERCENT in Resource Governor?
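For context, a minimal sketch of a resource pool that sets both options (the pool name and percentages are placeholders); MAX_CPU_PERCENT is a soft limit that only applies under CPU contention, while CAP_CPU_PERCENT is a hard ceiling that is never exceeded:

CREATE RESOURCE POOL ReportingPool
WITH (
    MAX_CPU_PERCENT = 50,   -- may be exceeded when the CPU is otherwise idle
    CAP_CPU_PERCENT = 70    -- hard cap, never exceeded
);
ALTER RESOURCE GOVERNOR RECONFIGURE;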

How to run DBCC ShrinkFile efficiently? SS Import and Export Wizard?


Hi experts,
  I have a DB of 14.9 TB (fully page-compressed) and I have deleted 95% of the data.
I want to shrink the DB so I can move it to smaller storage, such as a 500 GB SSD. It seems to take "forever"
to run DBCC SHRINKFILE because SHRINKFILE runs in a single thread and causes blocking locks.
  How do I run DBCC SHRINKFILE efficiently? Could the SQL Server Import and Export Wizard handle a problem like this?
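For reference, one common workaround is to shrink in modest steps rather than in a single huge operation, so each run holds its locks for a shorter period; a minimal sketch, where the logical file name and target sizes are placeholders:

-- shrink the data file step by step; repeat with progressively smaller targets
USE MyBigDb;
DBCC SHRINKFILE (N'MyBigDb_Data', 10000000);  -- target size in MB (~10 TB)
DBCC SHRINKFILE (N'MyBigDb_Data', 5000000);   -- then ~5 TB, and so on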
By the way, here is the hardware spec I use:
DL580 Gen 10:
4 * Intel Xeon Platinum 8180
3TB DRAM
2 * 6.4TB PCIe SSD
1 * 1.6TB PCIe SSD

I use Windows Server 2016 and SQL Server 2012 SP2 CU10.

Any recommendations are welcome and thanks for your help.

SQL Server 2016 memory pressure leads to the plan cache clearing


Two weeks ago we migrated to SQL Server 2016 SP1 CU3 (previously our app was using SQL Server 2014 SP2). Neither the application nor the workload has changed.

Since the migration, every few minutes (up to roughly one hour), the proc cache is being flushed (not entirely, but the majority of plans still go away). If I run:

SELECT count(*) FROM sys.dm_exec_cached_plans

...just after the clearing happens, the number of plans drops to between 100 and 300 and then gradually increases to roughly 2,000; then the clearing usually happens again, and so on. It is worth mentioning that the buffer pool seems to stay intact and only the caches are affected.

The server runs on VMware; it has 128 GB of RAM (SQL Server max server memory is set to 102 GB, min server memory is set to 72 GB). Based on the output from SentryOne, I can see that the buffer pool consumes ~61 GB.

SQL Server memory usage

My SQL Server version is as follows:

Microsoft SQL Server 2016 (SP1-CU3) (KB4019916) - 13.0.4435.0 (X64)

It's a Standard Edition.

Finally, I came across an article by Jonathan Kehayias and decided to check the ring buffers, and boom! It turns out that I have notifications from the resource monitor saying 'low physical memory'. The occurrences of this notification fit perfectly with the proc cache clearing. Now the question is how to interpret these results and how to find the responsible process. As you can see in the query result:

SELECT
    EventTime,
    record.value('(/Record/ResourceMonitor/Notification)[1]', 'varchar(max)') AS [Type],
    record.value('(/Record/ResourceMonitor/IndicatorsProcess)[1]', 'int') AS [IndicatorsProcess],
    record.value('(/Record/ResourceMonitor/IndicatorsSystem)[1]', 'int') AS [IndicatorsSystem],
    record.value('(/Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'bigint') AS [AvailPhysMem,Kb],
    record.value('(/Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS [Avail VAS,Kb]
FROM (
    SELECT
        DATEADD(ss, (-1 * ((cpu_ticks / CONVERT(float, (cpu_ticks / ms_ticks))) - [timestamp]) / 1000), GETDATE()) AS EventTime,
        CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    CROSS JOIN sys.dm_os_sys_info
    WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR'
) AS tab
ORDER BY EventTime DESC;

Ring buffer query results

we can observe the 'low physical memory' flag although the value of available physical memory stays at the same level. Moreover, if I am not mistaken, the above results indicate internal memory pressure (IndicatorsProcess = 2), which is weird to me, as Sentry shows all the time that SQL Server doesn't fully utilize the allocated memory. This is memory usage captured by Sentry for a sample taken at 8 AM:

SQL Server memory usage

All lines are pretty flat. What is also weird to me is that this group of events:

  • RESOURCE_MEMPHYSICAL_LOW

  • RESOURCE_MEM_STEADY

  • RESOURCE_MEMPHYSICAL_HIGH

happens at the same time. So this pressure lasts milliseconds or less (perhaps this is also the reason why Sentry doesn't capture anything, as it collects data far less frequently).

I tried to find the reason behind this internal pressure and checked the top 10 memory clerks (in terms of memory consumption) to see if there are any heavy consumers there:

Memory clerks query results

but to be fair I don't see anything suspicious there.
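For anyone who wants to reproduce that check, a rough sketch of a top-clerk query (not necessarily the exact one used here):

SELECT TOP (10)
    type,
    SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(pages_kb) DESC;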

Another thing worth mentioning is that I haven't tried LPIM (Lock Pages in Memory) yet, as it requires SQL Server to be restarted, but even if it is the solution I would really like to understand why this issue happens. Moreover, please correct me if I am wrong, but as the buffer pool does not seem to be affected by the trimming, I don't really think LPIM is the solution here.

Now I am completely lost and I don't really know what else I should check to find the root cause of the issue. I would really appreciate it if someone could help me solve this puzzle.


Server shutdown event id 19019


Hi,


Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)   Jun 28 2012 08:36:30   Copyright (c) Microsoft Corporation  Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) 

I found that SQL Server 2008 R2 was shut down suddenly today and I could not find any details in the error log. I was able to find an entry in the event log with event ID 19019 and the user shown as N/A. I am not sure how the server got restarted. This server is not a clustered instance.

Log Name:      Application
Source:        MSSQLSERVER
Date:         8/6/2018 11:11:41 PM
Event ID:      19019
Task Category: Server
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      XXXXXXXXXX
Description:


Can anyone explain the root cause of the sudden shutdown?

Thanks,


SSIS - System Table


Can anybody suggest how the tables below get created in a SQL database?

tg_DataCleaningMaintenance_PendingDelete__20180508_000000_6685cb35-4dbc-4f5d-ad62-dec3ccff564e

tg_DataCleaningMaintenance_PendingInsert__20180508_000000_2656765e-d58f-4068-a650-e66feeddb935




ACTIVE_TRANSACTIONS exist after stopping job


Hi Folks,

On one of our SQL Servers there is a database, IN_2018, which is in SIMPLE recovery mode. A job that was doing inserts was taking a long time, so we stopped it with sp_stop_job, because we were not able to stop it from the GUI.

Now, when checking what is running, we saw that the inserts are still showing under a SPID.

I checked ACTIVE_TRANSACTIONS with the query select log_reuse_wait_desc, * from sys.databases where log_reuse_wait_desc = 'ACTIVE_TRANSACTION'. I found that the database IN_2018 still shows ACTIVE_TRANSACTION.
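For completeness, a couple of additional checks can show which session still holds the open transaction; a minimal sketch:

-- oldest active transaction in the database
DBCC OPENTRAN ('IN_2018');

-- sessions that still have an open transaction
SELECT s.session_id, s.login_name, s.program_name, t.transaction_id
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = t.session_id;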

I switched the database to FULL recovery mode and took a full backup; the active transactions are still there.

Then I switched the database back to SIMPLE recovery mode.

My question is: why does ACTIVE_TRANSACTION show all the time, and why does the insert query still show under a SPID while the job history shows the job as stopped?

Is this a bug?

SQL VERSION: Microsoft SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64) 
Oct 20 2015 15:36:27 
Copyright (c) Microsoft Corporation
Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)

OS VERSION: Windows Server 2008R2 Enterprise Edition SP1 64-bit

SQL Server Agent jobs: non-admin users not able to edit/add


Hi, I need to grant access to SQL Server Agent jobs (edit, delete, add) for non-admin users, basically the reporting team. I have already granted SQLAgentOperatorRole, SQLAgentReaderRole and SQLAgentUserRole in msdb.

But the users are still not able to add or edit steps on Agent jobs. The version is 2016R2 CU1 EE.
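For reference, the msdb grants described above would look roughly like this (assuming a database user named ReportUser already exists in msdb; the name is a placeholder):

USE msdb;
ALTER ROLE SQLAgentUserRole     ADD MEMBER ReportUser;
ALTER ROLE SQLAgentReaderRole   ADD MEMBER ReportUser;
ALTER ROLE SQLAgentOperatorRole ADD MEMBER ReportUser;

Worth keeping in mind that these fixed Agent roles let non-sysadmin users edit only the jobs they own, which may explain the behaviour described above.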

 



Page Life Expectancy on NUMA node 0 (1 of 2 NUMA nodes) resets to 0 every minute for a SQL Server 2012 instance

The SQL instance RPT01 is consistently under internal memory pressure on 1 of its 2 NUMA nodes. This is best represented by the PLE perfmon counters:
PLE on NUMA node 0 resets to 0 every 60 seconds
- Intermittently this causes "A significant part of sql server process memory has been paged out. This may result in a performance degradation…."
PLE on NUMA node 1 keeps growing as expected
- No problem

The Cluster Configuration
· 2 Server Node Windows Server 2012 Failover Cluster
· 9 SQL clustered instances (ranging from SQL 2008 R2 - 2016)
· Each Server Node has 512GB of RAM (typically 300GB always available - every instance has Max Memory constrained)
· 2 NUMA nodes with 12 cores/node and HT enabled (total of 48 logical CPUs)

Problem SQL Instance (RPT01)
· SQL Server 2012 Enterprise Edition (11.0.3000.0)
· Min Memory 0, Max Memory 8GB
· Lock Pages in Memory (LPiM)
· 3 user databases (total size less than 100MB)
· Very low usage

The following changes were attempted in troubleshooting (without success):
1. Offlined user databases, stopped SQL Agent (i.e. stopped all known connections to instance)
2. Switched to other node and back again
3. Removed LPiM
4. Increased Max Memory from 8GB to 10GB
5. Cleared the buffer pool & plan cache repeatedly (DBCC DROPCLEANBUFFERS & DBCC FREEPROCCACHE)
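For monitoring the symptom itself, a rough sketch of a per-node PLE query (the LIKE pattern is deliberately loose because named instances prefix the counter object name, e.g. MSSQL$RPT01:Buffer Node):

SELECT [object_name], instance_name AS numa_node, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Node%';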



 

Reg: The target principal name is incorrect. Cannot generate SSPI context. (Microsoft SQL Server, Error: 0)


Hi Experts,

I am not able to connect to my local dev server from my machine.

Please find the attachment and suggest any possible solution for this.

I am using SQL Server 2016 Developer Edition.

Thanks.

avoiding double linked server


Hi experts,

 

I am running SQL Server 2017, and someone suggested using a double linked server to consume data:

 

SQL Server 2017 -> some server -> SQL Express

 

At first glance it does not look like a good idea! (The end server is a SQL Express instance; apparently they update a table there hourly that has around 1,000,000 records.)

 

I wanted to come up with clear reasons not to do it. The only thing I came up with is that "it will be expensive, since each transaction will effectively be executed 3 times, and the data travels across several networks…"

 

What other reasons can I add to avoid doing it? I also wanted to add that it may even be 'illegal', since my SQL Server 2017 is a production server with a paid licence… Is it OK to consume from a SQL Express?

Understanding SQL Server Initial Size/ Autogrowth for SharePoint 2013 Databases


I don't think I fully understand initial size/autogrowth (for both LDF and MDF)

Take note of the database size and space available in the picture(s)

This is a screen shot of one particular Content Database.

I currently have 83133.06 MB of total size. I also have 319.41 MB of free space.

As for the second picture you will notice the initial sizes of the database and the autogrowth setting.

The initial size is 82,212 MB for the MDF and 922 MB for the LDF. The autogrowth is 100 MB for both.

I have a few questions on this.

It looks like I need to change my initial size and my autogrowth numbers, but I don't know what to change them to.

1. What does the database size in the first picture consist of? Is it the initial size of the MDF plus the initial size of the LDF?

2. If I raise the 100 MB autogrowth to, let's say, 500 MB, does the log need to grow by that much as well?

3. How taxing is it if I change the initial size from 82,212 MB (my current initial size) to, let's say, 100,000 MB? Should this be done during the maintenance window, or will current users not even notice if I change it?

So the current plan is to change this database from 82,212 MB to 100,000 MB, and then change the autogrowth from 100 MB (MDF) / 100 MB (LDF) to 500 MB (MDF) / 100 MB (LDF).

4. Should I do this on ALL SharePoint servers, or only on the SharePoint content databases?

5. Should I be changing the initial size of the LDF as well? I'm not sure what should happen with the initial size of the LDF.
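For what it's worth, the size and growth changes described above would be made with ALTER DATABASE ... MODIFY FILE; a minimal sketch, where the database and logical file names (WSS_Content, WSS_Content_log) are placeholders to be checked against sys.database_files first:

ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = N'WSS_Content', SIZE = 100000MB, FILEGROWTH = 500MB);

ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = N'WSS_Content_log', FILEGROWTH = 100MB);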

-Marc


Which column as PK if I know nothing about the table?


Hi experts,

 

I was asked to bring a table from a remote server to our local server and keep it in sync…

 

I accessed the linked server and the table turned out to be a view… No problem: I used the wizard and brought it over entirely, and I have a weekly stored procedure that brings over the records that have a higher UpdateDate than the highest one I have locally…

                                                              

My question is: the wizard brings it over as a heap. Should I create a PK, or any index? How can I do that if I know nothing about the table? Is it OK to pick the first unique column as the PK?

 

P.S.: Users still don't know exactly how they are going to query it, so I can't ask them…
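One small, generic check before picking a "first unique column" as the PK is to verify that it really is unique and never NULL; a sketch, with placeholder table and column names:

SELECT COUNT(*) AS total_rows,
       COUNT(DISTINCT SomeCandidateColumn) AS distinct_values,
       SUM(CASE WHEN SomeCandidateColumn IS NULL THEN 1 ELSE 0 END) AS null_values
FROM dbo.ImportedTable;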
