One of the hard problems in a DBA's career is finding out who touched the data in a table, and when and where they did it. Object or database triggers could answer that, but they consume a lot of resources. Is there a cheaper, safer way to achieve it?
Yes, there is: SQL Server Audit.
Let's get down to business.
(1) DML
sqlserver audit2 <-------- sets up DML auditing at the database level for specific tables
sqlserver audit2 select <-------- queries the DML audit history
sqlserver audit2
use master
go
CREATE SERVER AUDIT [TESTAuditing]
TO FILE
( FILEPATH = 'C:\SQLAuditing\'
,MAXSIZE = 500 MB
,MAX_ROLLOVER_FILES = 2147483647
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 2000
,ON_FAILURE = CONTINUE
)
ALTER SERVER AUDIT [TESTAuditing] WITH(STATE=ON)
Go
use TEST
go
CREATE DATABASE AUDIT SPECIFICATION AuditingTable
FOR SERVER AUDIT TESTAuditing
ADD (UPDATE,DELETE ON TEST.DBO.TEST BY PUBLIC),
ADD (UPDATE,DELETE ON TEST.DBO.TEST2 BY PUBLIC)
-- ON DATABASE::TEST
WITH (STATE = ON)
GO
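To verify that the audit is capturing events, run some DML against the audited tables. A minimal sketch, assuming dbo.TEST has columns named id and val (the real table definitions are not shown here, so adjust the column names to your schema):
use TEST
go
-- hypothetical columns (id, val); replace with the real column names
UPDATE dbo.TEST SET val = 'audited update' WHERE id = 1
DELETE FROM dbo.TEST WHERE id = 2
go
These UPDATE and DELETE statements will then show up in the audit file with their own action_id values.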
sqlserver audit2 select
SELECT CONVERT(datetime, SWITCHOFFSET(CONVERT(datetimeoffset,event_time), DATENAME(TzOffset, SYSDATETIMEOFFSET()))) currentime,
action_id,
succeeded,
session_id,
database_name,
schema_name,
object_name,
statement,
file_name,
transaction_id
FROM fn_get_audit_file('C:\SQLAuditing\TESTAuditing*.*',default,default) where action_id<>'AUSC'
You should then see the UPDATE and DELETE history for the audited tables.
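Since the point of all this is to answer who, when, and where, it usually helps to narrow the result to a single table and a recent time window. A sketch against the same audit file (the column names come straight from fn_get_audit_file; the one-day window is just an example):
SELECT event_time,
       action_id,
       session_server_principal_name,
       object_name,
       statement
FROM fn_get_audit_file('C:\SQLAuditing\TESTAuditing*.*',default,default)
WHERE action_id <> 'AUSC'
  AND object_name = 'TEST'                             -- only the audited table dbo.TEST
  AND event_time >= DATEADD(day, -1, GETUTCDATE())     -- last 24 hours (event_time is stored in UTC)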
(2) DDL
sqlserver audit4 <-------- sets up DDL auditing (schema object changes) at the server level
sqlserver audit4 select <-------- queries the DDL audit history
sqlserver audit4
USE master
go
CREATE SERVER AUDIT auditddl
TO FILE
( FILEPATH = 'C:\SQLAuditing\'
,MAXSIZE = 500 MB
,MAX_ROLLOVER_FILES = 2147483647
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 2000
,ON_FAILURE = CONTINUE
)
go
CREATE SERVER AUDIT SPECIFICATION ddl_audit
FOR SERVER AUDIT auditddl
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON)
go
ALTER SERVER AUDIT auditddl WITH (STATE = ON)
go
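To generate some DDL events to look at, you can create, alter, and drop a throwaway table; a minimal sketch (the table name audit_demo is just an example):
use TEST
go
CREATE TABLE dbo.audit_demo (id int)              -- captured by SCHEMA_OBJECT_CHANGE_GROUP
ALTER TABLE dbo.audit_demo ADD note varchar(50)
DROP TABLE dbo.audit_demo
go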
sqlserver audit4 select
select
CONVERT(datetime, SWITCHOFFSET(CONVERT(datetimeoffset,event_time), DATENAME(TzOffset, SYSDATETIMEOFFSET()))) currentime,
action_id,
session_id,
database_name,
schema_name,
object_name,
statement
FROM fn_get_audit_file('C:\SQLAuditing\auditddl*.*',default,default) where action_id<>'AUSC'
You should then see the DDL history.
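If either query comes back empty, first confirm that the audits and audit specifications are actually enabled. The catalog views make that easy to check:
SELECT name, is_state_enabled FROM sys.server_audits
SELECT name, is_state_enabled FROM sys.server_audit_specifications
go
use TEST
go
SELECT name, is_state_enabled FROM sys.database_audit_specifications
go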
(3) Login information
sqlserver audit3 <-------- sets up auditing of successful logins at the server level
sqlserver audit3 select <-------- queries the successful-login history
sqlserver audit3
USE master
go
CREATE SERVER AUDIT audigt100
TO FILE
( FILEPATH = 'C:\SQLAuditing\'
,MAXSIZE = 500 MB
,MAX_ROLLOVER_FILES = 2147483647
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 2000
,ON_FAILURE = CONTINUE
)
go
CREATE SERVER AUDIT SPECIFICATION del_update_audit
FOR SERVER AUDIT audigt100
ADD (SUCCESSFUL_LOGIN_GROUP)
WITH (STATE = ON)
ALTER SERVER AUDIT audigt100 WITH (STATE = ON)
go
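If you also want to catch failed login attempts, the same specification can carry FAILED_LOGIN_GROUP as well; a minimal sketch (the specification must be OFF while it is being changed):
USE master
go
ALTER SERVER AUDIT SPECIFICATION del_update_audit WITH (STATE = OFF)
ALTER SERVER AUDIT SPECIFICATION del_update_audit ADD (FAILED_LOGIN_GROUP)
ALTER SERVER AUDIT SPECIFICATION del_update_audit WITH (STATE = ON)
go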
sqlserver audit3 select
WITH XMLNAMESPACES ('http://schemas.microsoft.com/sqlserver/2008/sqlaudit_data' AS ns)
SELECT CONVERT(datetime, SWITCHOFFSET(CONVERT(datetimeoffset, audi.event_time), DATENAME(TzOffset, SYSDATETIMEOFFSET()))) currentime,
audi.action_id,
audi.session_id,
T.c.value('(ns:address/text())[1]', 'nvarchar(30)') AS client_address
FROM (SELECT *, cast(additional_information AS xml) AS additional_xml
FROM fn_get_audit_file('C:\SQLAuditing\audigt100*', default, default)) AS audi
CROSS APPLY additional_xml.nodes('/ns:action_info') AS T(c)
go
Now you can see the login history, including the client IP address and the session information.
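When the investigation is finished, the specifications and audits can be switched off and dropped again. A sketch using the DML audit from section (1) as the example:
use TEST
go
ALTER DATABASE AUDIT SPECIFICATION AuditingTable WITH (STATE = OFF)
DROP DATABASE AUDIT SPECIFICATION AuditingTable
go
use master
go
ALTER SERVER AUDIT [TESTAuditing] WITH (STATE = OFF)
DROP SERVER AUDIT [TESTAuditing]
go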
A few notes:
(1) event_time in the audit file is recorded in UTC, so it does not match the local system time on the SQL Server. I use this expression to convert it to local time:
CONVERT(datetime, SWITCHOFFSET(CONVERT(datetimeoffset,event_time), DATENAME(TzOffset, SYSDATETIMEOFFSET()))) currentime
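You can see the conversion working by comparing the server's UTC clock with the local clock; the third column below should match local_now:
SELECT GETUTCDATE() AS utc_now,
       GETDATE()    AS local_now,
       CONVERT(datetime, SWITCHOFFSET(CONVERT(datetimeoffset, GETUTCDATE()),
               DATENAME(TzOffset, SYSDATETIMEOFFSET()))) AS utc_converted_to_local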
(2) We cannot get DDL history if we set up only "Server_principal_group"; it has to be SCHEMA_OBJECT_CHANGE_GROUP:
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
(3) My original approach was the query below, but it only shows real-time process status; the information disappears as soon as the process goes idle or logs out.
select getdate() as [date], c.client_net_address, p.net_address, p.loginame
from sys.sysprocesses p inner join sys.dm_exec_connections c on p.spid = c.session_id
How frequently would I have to run that query to catch every login? That is why I use SUCCESSFUL_LOGIN_GROUP instead of that query.
Thanks
Finally: we can monitor DML, DDL, and login information with SQL Server Audit, and it is probably a better choice than using triggers.