
Thursday, April 21, 2016

Getting All Permissions for Server

We recently had to work out a way to fetch and display all effective permissions granted on our production SQL Server. I had been asked for all permissions at the database level before, but even that tended to be more granular, showing only which roles you belonged to and which explicit permissions you'd been granted. This time we wanted to loop through all logins and get their effective server permissions into a report. It turned out to be easier than I'd expected, especially after my first attempts to dig through the system tables.

We used a script something like the following:

EXECUTE AS LOGIN = 'DomainName\Login';
SELECT 'DomainName\Login' AS Login, * FROM fn_my_permissions(NULL, 'SERVER');
REVERT;

With a little work this can be used within PowerShell or SQL to loop through all logins and capture the information in a table or report.
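Here's a rough sketch of that loop (not our exact script) using a cursor over sys.server_principals. It assumes you're running with sysadmin rights, since EXECUTE AS LOGIN requires impersonation permission, and the temp table name is arbitrary:

-- Capture effective server permissions for every enabled SQL/Windows login.
CREATE TABLE #ServerPerms
(
    LoginName       SYSNAME,
    entity_name     NVARCHAR(128) NULL,
    subentity_name  NVARCHAR(128) NULL,
    permission_name NVARCHAR(128) NULL
);

DECLARE @login SYSNAME, @sql NVARCHAR(MAX);

DECLARE login_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.server_principals
    WHERE type IN ('S', 'U')   -- SQL logins and Windows logins
      AND is_disabled = 0;

OPEN login_cursor;
FETCH NEXT FROM login_cursor INTO @login;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Impersonate each login, capture its effective permissions, then revert.
    SET @sql = N'EXECUTE AS LOGIN = ' + QUOTENAME(@login, '''') + N';'
             + N' SELECT ' + QUOTENAME(@login, '''')
             + N', entity_name, subentity_name, permission_name'
             + N' FROM fn_my_permissions(NULL, ''SERVER'');'
             + N' REVERT;';

    INSERT INTO #ServerPerms
    EXEC (@sql);

    FETCH NEXT FROM login_cursor INTO @login;
END;

CLOSE login_cursor;
DEALLOCATE login_cursor;

SELECT * FROM #ServerPerms ORDER BY LoginName, permission_name;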

 

The function can also be used at the database level by switching to the database in question and substituting DATABASE for SERVER above:

USE AdventureWorks2012;
EXECUTE AS LOGIN = 'DomainName\Login';
SELECT 'DomainName\Login' AS Login, * FROM fn_my_permissions(NULL, 'DATABASE');
REVERT;

For an interesting take on revamping sp_msforeachdb, check out this script from Edward Pollack (t). It might help you if you need to loop through all of your databases to run something like this for your auditors.
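In a pinch, the undocumented sp_MSforeachdb can do that database loop too. A quick sketch (it will throw an error in any database the impersonated login can't access, which is one reason a more robust rewrite like Ed's is worth a look):

EXEC sp_MSforeachdb N'
USE [?];
EXECUTE AS LOGIN = ''DomainName\Login'';
SELECT ''DomainName\Login'' AS Login, DB_NAME() AS DatabaseName, *
FROM fn_my_permissions(NULL, ''DATABASE'');
REVERT;';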

Monday, February 29, 2016

SQL Server Audits and Action_IDs

We were recently redoing our SQL Server Audits and I was reminded again how painful setting the filters can be. Microsoft expects an integer for "action_id", but to actually use them, you need to know what those integers mean.

I came across this blog post by Chris Provolt listing the text versions of the action_ids. That was helpful, especially the quick query to see what's available:

SELECT DISTINCT action_id, name, class_desc, parent_class_desc FROM sys.dm_audit_actions;

However, as you can tell by running this, the action_ids returned are all text values. That doesn't help when trying to set up your SQL Audit filters.

MSDN provides code for a function to translate the text action_id into the expected numeric value.

CREATE FUNCTION dbo.Getint_action_id (@action_id VARCHAR(4))
RETURNS INT
AS
BEGIN
    DECLARE @x INT;

    -- Pack the (upper-cased) characters little-endian, one per byte,
    -- padding any missing positions with a space (ASCII 32).
    SET @x = CONVERT(INT, CONVERT(VARBINARY(1), UPPER(SUBSTRING(@action_id, 1, 1))));

    IF LEN(@action_id) >= 2
        SET @x = CONVERT(INT, CONVERT(VARBINARY(1), UPPER(SUBSTRING(@action_id, 2, 1)))) * POWER(2, 8) + @x;
    ELSE
        SET @x = CONVERT(INT, CONVERT(VARBINARY(1), ' ')) * POWER(2, 8) + @x;

    IF LEN(@action_id) >= 3
        SET @x = CONVERT(INT, CONVERT(VARBINARY(1), UPPER(SUBSTRING(@action_id, 3, 1)))) * POWER(2, 16) + @x;
    ELSE
        SET @x = CONVERT(INT, CONVERT(VARBINARY(1), ' ')) * POWER(2, 16) + @x;

    IF LEN(@action_id) >= 4
        SET @x = CONVERT(INT, CONVERT(VARBINARY(1), UPPER(SUBSTRING(@action_id, 4, 1)))) * POWER(2, 24) + @x;
    ELSE
        SET @x = CONVERT(INT, CONVERT(VARBINARY(1), ' ')) * POWER(2, 24) + @x;

    RETURN @x;
END;
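To see what the function is doing, take 'AL' (ALTER): the packing is little-endian, with 'A' (ASCII 65) in the low byte, 'L' (76) in the next byte, and spaces (32) padding the top two bytes.

-- 65 + (76 * 256) + (32 * 65536) + (32 * 16777216) = 538987585
SELECT dbo.Getint_action_id('AL');  -- returns 538987585, matching ALTER in the list below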

 

Once you create that function, you can use it to generate a list of the integer action_ids, the text action_ids, and the name/description of each. (You can also expand the query to see which actions are applicable to various objects at the server and database level if you so desire.)

SELECT DISTINCT dbo.Getint_action_id(action_id) AS Action_ID_Int,
                action_id, name AS Action_Description
--, class_desc, parent_class_desc
FROM   sys.dm_audit_actions
ORDER  BY action_id;

This will result in the following values (as of SQL 2012):

 

Action_ID_Int action_id NAME
1329873729 ACDO DATABASE_OBJECT_ACCESS_GROUP
542065473 ACO SCHEMA_OBJECT_ACCESS_GROUP
1329742913 ADBO BULK ADMIN
1346651201 ADDP DATABASE_ROLE_MEMBER_CHANGE_GROUP
1347634241 ADSP SERVER_ROLE_MEMBER_CHANGE_GROUP
538987585 AL ALTER
1313033281 ALCN ALTER CONNECTION
1397902401 ALRS ALTER RESOURCES
1397967937 ALSS ALTER SERVER STATE
1414745153 ALST ALTER SETTINGS
1381256257 ALTR ALTER TRACE
1280462913 APRL ADD MEMBER
538989377 AS ACCESS
1129534785 AUSC AUDIT SESSION CHANGED
1179866433 AUSF AUDIT SHUTDOWN ON FAILURE
1213486401 AUTH AUTHENTICATE
538984770 BA BACKUP
541868354 BAL BACKUP LOG
1111773762 BRDB BACKUP_RESTORE_GROUP
1179595331 C2OF TRACE AUDIT C2OFF
1313813059 C2ON TRACE AUDIT C2ON
1196180291 CCLG CHANGE LOGIN CREDENTIAL
1196182851 CMLG CREDENTIAL MAP TO LOGIN
1430343235 CNAU AUDIT_CHANGE_GROUP
538988355 CO CONNECT
538988611 CP CHECKPOINT
538989123 CR CREATE
538976324 D DENY
1179074884 DAGF FAILED_DATABASE_AUTHENTICATION_GROUP
1279738180 DAGL DATABASE_LOGOUT_GROUP
1397178692 DAGS SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
1178681924 DBAF DATABASE AUTHENTICATION FAILED
1396785732 DBAS DATABASE AUTHENTICATION SUCCEEDED
1128481348 DBCC DBCC
1195590212 DBCG DBCC_GROUP
541868612 DBL DATABASE LOGOUT
538987588 DL DELETE
1280462916 DPRL DROP MEMBER
538989124 DR DROP
541284164 DWC DENY WITH CASCADE
538990661 EX EXECUTE
538989638 FT FULLTEXT
541545542 FTG FULLTEXT_GROUP
538976327 G GRANT
1111773767 GRDB DATABASE_PERMISSION_CHANGE_GROUP
1329877575 GRDO DATABASE_OBJECT_PERMISSION_CHANGE_GROUP
542069319 GRO SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP
1330860615 GRSO SERVER_OBJECT_PERMISSION_CHANGE_GROUP
1448301127 GRSV SERVER_PERMISSION_CHANGE_GROUP
541546311 GWG GRANT WITH GRANT
1346653513 IMDP DATABASE_PRINCIPAL_IMPERSONATION_GROUP
542133577 IMP IMPERSONATE
1347636553 IMSP SERVER_PRINCIPAL_IMPERSONATION_GROUP
538988105 IN INSERT
541214540 LGB BROKER LOGIN
1195525964 LGBG BROKER_LOGIN_GROUP
1094993740 LGDA DISABLE
1111770956 LGDB CHANGE DEFAULT DATABASE
1095059276 LGEA ENABLE
1279674188 LGFL FAILED_LOGIN_GROUP
1179207500 LGIF LOGIN FAILED
1397311308 LGIS LOGIN SUCCEEDED
1196181324 LGLG CHANGE DEFAULT LANGUAGE
541935436 LGM DATABASE MIRRORING LOGIN
1196246860 LGMG DATABASE_MIRRORING_LOGIN_GROUP
1296975692 LGNM NAME CHANGE
542066508 LGO LOGOUT
1146308428 LGSD SUCCESSFUL_LOGIN_GROUP
538988364 LO LOGOUT_GROUP
1111772749 MNDB DATABASE_CHANGE_GROUP
1329876557 MNDO DATABASE_OBJECT_CHANGE_GROUP
1346653773 MNDP DATABASE_PRINCIPAL_CHANGE_GROUP
542068301 MNO SCHEMA_OBJECT_CHANGE_GROUP
1330859597 MNSO SERVER_OBJECT_CHANGE_GROUP
1347636813 MNSP SERVER_PRINCIPAL_CHANGE_GROUP
1196182862 NMLG NO CREDENTIAL MAP TO LOGIN
538988623 OP OPEN
1111773263 OPDB DATABASE_OPERATION_GROUP
1448300623 OPSV SERVER_OPERATION_GROUP
1380013904 PWAR APPLICATION_ROLE_CHANGE_PASSWORD_GROUP
541284176 PWC CHANGE PASSWORD
1195595600 PWCG LOGIN_CHANGE_PASSWORD_GROUP
1396922192 PWCS CHANGE OWN PASSWORD
1480939344 PWEX PASSWORD EXPIRATION
1129142096 PWMC MUST CHANGE PASSWORD
1280333648 PWPL PASSWORD POLICY
542267216 PWR RESET PASSWORD
1397905232 PWRS RESET OWN PASSWORD
542463824 PWU UNLOCK ACCOUNT
538976338 R REVOKE
538985298 RC RECEIVE
538986066 RF REFERENCES
538989394 RS RESTORE
541284178 RWC REVOKE WITH CASCADE
541546322 RWG REVOKE WITH GRANT
538987603 SL SELECT
538988115 SN SEND
1313624147 SPLN SHOW PLAN
1448301651 STSV SERVER_STATE_CHANGE_GROUP
1313953107 SUQN SUBSCRIBE QUERY NOTIFICATION
1313035859 SVCN SERVER CONTINUE
1146115667 SVPD SERVER PAUSED
1146312275 SVSD SERVER SHUTDOWN
1381193299 SVSR SERVER STARTED
1095975252 TASA TRACE AUDIT START
1347633492 TASP TRACE AUDIT STOP
538988372 TO TAKE OWNERSHIP
1111773012 TODB DATABASE_OWNERSHIP_CHANGE_GROUP
1329876820 TODO DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP
542068564 TOO SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP
1330859860 TOSO SERVER_OBJECT_OWNERSHIP_CHANGE_GROUP
1195594324 TRCG TRACE_CHANGE_GROUP
542069332 TRO TRANSFER
1346847573 UCGP USER_CHANGE_PASSWORD_GROUP
1195459669 UDAG USER_DEFINED_AUDIT_GROUP
1430340693 UDAU USER DEFINED AUDIT
538988629 UP UPDATE
1178686293 USAF CHANGE USERS LOGIN AUTO
1196184405 USLG CHANGE USERS LOGIN
1129599829 USTC COPY PASSWORD
1414743126 VDST VIEW DATABASE STATE
1414746966 VSST VIEW SERVER STATE
1413699414 VWCT VIEW CHANGETRACKING
538984792 XA EXTERNAL ACCESS ASSEMBLY
538989912 XU UNSAFE ASSEMBLY

Friday, November 7, 2014

SSAS Tabular and Excel PowerPivot

I don’t know why it took me so long to make the connection between PowerPivot models and SSAS Tabular models, but in the last couple of days I came across several blogs and videos tying the two together. With that in mind, I took the plunge into PowerPivot and built a quick model off of some SQL queries to test out a concept.

I started with the basics of mocking up a fact table. I stripped out decorator text, “renamed” a couple of columns in the query, filtered the data so I wouldn’t pull back a crazy amount onto my local workstation, verified the counts and the look of the data in a SQL query, and added it to the model. Excel loaded 1.2M rows quickly and didn’t blink.

After that, I knew I needed a couple of Dimension-like tables. I wrote queries for those in a similar fashion and added the data again. Each addition went smoothly. I was able to add the proper relationships quickly to tie everything together.

To finalize the model, I created a couple of hierarchies to make querying easier. After messing around with these several times in SSAS Multi-Dimensional, I was shocked at how easy it was to create a hierarchy for dates and even certain domain-specific entries in our data: Right-click, create new hierarchy, drag and drop columns. Done.

After the model was finished, I dropped the relevant fact data into a Pivot Table and could easily analyze the data in all sorts of interesting ways. I had worked with OLAP for quite some time to come up with a similar concept. With PowerPivot, I wrote a couple of queries, linked the tables, and had something workable within an hour. As an added benefit, I know I can scale that up to an SSAS server without too much trouble and have a similar experience.

Hat tip to the folks @ www.powerpivotpro.com and www.sqlbi.com for the many articles and pointers to get me started and over that initial hump.

Monday, September 23, 2013

SSIS, MySQL, and invalid Dates–Take #2

A while back I posted an article on handling dates in SELECT statements from MySQL. The idea was that I wanted a quick way to generate SELECT statements for MySQL that would automatically set any invalid date to NULL (an invalid date being one such as “0000-00-00” or “1973-02-00”). After some more trial and error, I realized that the original script was incomplete. I’ve created an updated script that should better handle those edge cases.

The script is largely the same as the former, but I’ve added handling for invalid years, months, or days. We don’t want to redirect the rows for further error handling; we want to treat the dates that won’t successfully import as NULL. This could be done several ways within SSIS, but handling the transform within the SELECT statement saves us from writing a separate transform for each date column in each table. (I suspect that BIML could be my friend for this task, but I’m not quite ready to tackle it with BIML.)

SELECT CONCAT(CASE
                WHEN ordinal_position = 1 THEN 'SELECT '
                ELSE ', '
              END
            , CASE data_type
                WHEN 'date'
                  THEN CONCAT('CASE WHEN ', column_name, ' = ''0000-00-00'' THEN NULL '
                            , 'WHEN ', column_name, ' LIKE ''%0000%'' THEN NULL '
                            , 'WHEN ', column_name, ' LIKE ''%-00%'' THEN NULL '
                            , 'ELSE ', column_name, ' END AS ', column_name)
                WHEN 'timestamp'
                  THEN CONCAT('CASE WHEN ', column_name, ' = ''0000-00-00'' THEN NULL '
                            , 'WHEN ', column_name, ' LIKE ''%0000%'' THEN NULL '
                            , 'WHEN ', column_name, ' LIKE ''%-00%'' THEN NULL '
                            , 'ELSE ', column_name, ' END AS ', column_name)
                WHEN 'time'
                  THEN CONCAT('CAST(', column_name, ' AS CHAR(8)) AS ', column_name)
                ELSE column_name
              END
            , CASE
                WHEN ordinal_position <> maxord THEN ''
                ELSE CONCAT('     FROM MySchema.', c.table_name, ';')
              END) AS Select_Column_Name
FROM   information_schema.columns AS c
       JOIN (SELECT table_name,
                    MAX(ordinal_position) AS MaxOrd
             FROM   information_schema.columns
             WHERE  table_schema = 'MySchema'
             GROUP  BY table_name) AS t1
         ON c.table_name = t1.table_name
WHERE  c.table_schema = 'MySchema'
ORDER  BY c.table_name, ordinal_position
LIMIT  0, 50000;
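To make the output concrete: for a hypothetical MySchema.Orders table with columns order_id (int), order_date (date), and updated_at (timestamp), the generated rows assemble into something like this:

SELECT order_id
, CASE WHEN order_date = '0000-00-00' THEN NULL WHEN order_date LIKE '%0000%' THEN NULL WHEN order_date LIKE '%-00%' THEN NULL ELSE order_date END AS order_date
, CASE WHEN updated_at = '0000-00-00' THEN NULL WHEN updated_at LIKE '%0000%' THEN NULL WHEN updated_at LIKE '%-00%' THEN NULL ELSE updated_at END AS updated_at     FROM MySchema.Orders;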

Tuesday, May 14, 2013

SSDT: Setting Different Permissions per Environment

One of the areas that SSDT doesn't address adequately is permissions. Many shops have different users, logins, and permissions set up across their environments, but SSDT takes a "one size fits all" approach and assumes that you will have the exact same configuration in each environment. While that should be the case for your schema, handling permissions well in SSDT requires some interesting and somewhat involved techniques.
Attribution - I'm borrowing heavily from Jamie Thomson's ( blog | twitter ) post
A strategy for managing security for different environments using the Database Development Tools in Visual Studio 2010
Jamie addressed this problem for the older DB Projects (Datadude, VSTSDB, DBPro, etc.). He came up with an excellent, if somewhat complex, solution to effectively handle pushing different permissions to different environments. This is my attempt to update this and distill it into a "how to" for SSDT and SQL Projects, but I freely admit that this is based on Jamie's excellent work.
In general, the steps we'll want to take are:
  1. Generate a set of SQL Logins that will be used on the server.
    1. These should be run on the servers or local box already - we will not use these within the SQL Project itself though you can if you wish.
  2. Generate the files for users/roles/permissions for each database project and environment.
  3. Add the files to your SSDT SQL Project
  4. Create and set a variable for your environments in your SSDT SQL Project.
  5. Adjust your Security Wrapper file to handle your environments.
  6. Adjust your post-deploy script to call the Security Wrapper.
First, let's look at what happens when you import Permissions by default into an SSDT SQL Project.
[screenshot: the Security folder after a default import of permissions]
As you can see from the screenshot, we have a "Permissions" file containing granted permissions, a "RoleMemberships" file that adds users to various roles, a "Sales" file for the Sales schema, and three files for each user - one for its schema, one for its login, and one for its user. Confusing, but not necessarily a problem - until you need a release where User1 does not and will not exist on the server, or where User2 belongs to a different domain than your production servers and cannot (and should not) be created there because there's no trust relationship.
Let’s remove all of those files for objects other than Schemas from the project before we start. You’ll probably have to look at each file to verify before removing it from the project if you’re not sure which file is which. If you accidentally remove the files for the schemas, you’ll get errors and need to re-add a “Schema” to this security folder for each schema that is used.
 
Generate SQL Logins
This Microsoft KB article details the sp_help_revlogin stored procedure. Create this on your source server, run it, and use the output to re-create those logins elsewhere. This scripts out all logins with their original SIDs and password hashes, but no server permissions or database users.
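Running it is a one-liner once the procedure exists; the KB also documents an optional @login_name parameter if you only need a single login:

EXEC sp_help_revlogin;
-- or, for a single login:
EXEC sp_help_revlogin @login_name = 'DomainName\Login';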
 
Generate the Security Files through PowerShell
First, download the PowerShell_CreatePermissions.Zip file. Extract that to a folder on your hard drive.
Note - the script assumes that you will use Windows authentication to connect to your SQL Server. The script will need to be adjusted if you need to use SQL Authentication.
Edit the "GeneratePermissions.ps1" file. Replace "DB1", "DB2", etc., with whatever database names you'll be using. Also uncomment any of the extra lines you'll need in the file. (There is likely a better way to do this in PowerShell, and I'm more than open to suggestions on how to build up this array in a more concise manner.)
The assumption is made that the databasename == the projectname. If that's not the case, adjust your DB/Project names accordingly.
Open a PowerShell prompt in that folder after editing/saving the file.
Run the file using something like this commandline:
.\GeneratePermissions.ps1 -SQLInstance "localhost" -Environment DEV
"localhost" should be replaced with your servername
DEV in this case indicates that this will be tied to a variable value of "DEV" when the project is published.
I'd recommend running this for every environment at one time if you can. It will make adding the files to the project much easier.
If you run for several environments, you'll see a folder structure something like the following for each project (in this case for Adventureworks2012):
[screenshot: generated folder structure for the AdventureWorks2012 project]
Inside your "SecurityAdditions" folder, you'll see the following files and folders:
[screenshot: files and folders inside the SecurityAdditions folder]
Each of these files will be used to add permissions for each environment. In this case, we have permissions set for our Development, QA, Local, Production, and Staging environments.
Now, let's add the "SecurityAdditionsWrapper.sql" file from the files we extracted earlier into your "SecurityAdditions" folder. Feel free to edit it as needed to match the environment(s) you want to use for your database project. By default, it's set to look for the above files plus a "Default" file to be a catch-all. Adjust that so all files/environments match the files you created and save it.
 
Add the files to your SSDT SQL Project
Copy your newly created folder(s) to the appropriate project. Open each project, and you will notice that the files still don't show up. Click the project file. You should see a menu item that's now enabled to "show all files".
[screenshot: the "Show All Files" option in Solution Explorer]
You'll see a bunch of files that aren't part of the project under a "Scripts" folder:
[screenshot: files under the "Scripts" folder not yet included in the project]
We want to add them to the project. Right-click the "Scripts" folder and select the option to "Include in Project". Click the Project file again and turn off the option to show all files.
  • We need to set all file properties to "Not in Build" or there will be errors when trying to build the project.
Unfortunately, we can't select just the folder to exclude from the build; we need to select the files. You can Ctrl+click to select multiple files. Select all of the files, open the Properties tab, and change the option from "Build" to "None".
[screenshot: file properties with the Build Action changed to "None"]
Create and set your Variable to change environments
Open the Project Properties window
[screenshot: the Project Properties window]
We want to add a new SQLCMD variable called "DeployType". Set its default value to "Local" for now, or to whatever environment you have a set of security scripts for.
[screenshot: the SQLCMD variables grid with the new DeployType variable]
 
Adjust your Security Wrapper files
The SecurityAdditionsWrapper.sql file in the code set I've created comes with settings for "Dev", "QA", "Staging", "Production", "Local", and "Default". It's likely that you will not use all of these in your environment. One drawback to using SQLCMD to stitch these files together is that if a referenced file doesn't exist, the script will start throwing errors. Edit the SecurityAdditionsWrapper.sql file to remove any of the configurations you will not use, or change the names to match the environment values you used earlier.
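I won't reproduce the download here, but the shape of the wrapper is roughly this (a sketch with hypothetical file names, not the actual contents): every environment's file gets included via :r, and each included file guards its own statements on the DeployType variable, so only the matching environment's permissions actually execute.

-- SecurityAdditionsWrapper.sql (sketch): :r includes are resolved when the
-- script is assembled, which is why every referenced file must exist.
:r .\Dev\SecurityAdditionsDev.sql
:r .\QA\SecurityAdditionsQA.sql
:r .\Production\SecurityAdditionsProduction.sql
:r .\Default\SecurityAdditionsDefault.sql

-- Inside SecurityAdditionsDev.sql (sketch):
IF '$(DeployType)' = 'Dev'
BEGIN
    PRINT 'Applying Dev users, roles, and permissions.';
    -- CREATE USER / ALTER ROLE ... ADD MEMBER / GRANT statements here
END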
 
Adjust your Post-Deploy script
Finally, we need to open your Post-Deploy script.
Add the following to your script:
:r .\SecurityAdditions\SecurityAdditionsWrapper.SQL
This will include your SecurityAdditionsWrapper file as part of your post-deploy process. See my earlier post on selectively running pre/post deploy scripts if you want finer control.

SSDT: Trick to Not Run Pre/Post SQL on Publish

We’ve run across the need to avoid running certain scripts when publishing a brand new copy of a database from a SQL or DB Project from time to time. Often, those pre/post deploy scripts are designed to be run only against an existing system and don’t make sense for a brand-new, empty database.

We’ve worked around this by using a SQLCMD variable called DeployType and setting that variable to “New” for any build operation that will be used only to create the initial database. The code snippet for our Post-Deploy script looks something like the following:

-- Place scripts here if they should only run for existing/working DB
IF ( '$(DeployType)' <> 'New' )
  BEGIN --Run scripts
      PRINT 'Deploying scripts.'
  END

--Security - runs last
:r .\SecurityAdditions\SecurityAdditionsWrapper.sql

 

The above script will look at the DeployType variable. If it’s set to “New”, it will ignore all of your deploy scripts in that block. If it’s set to any other environment, it will run the appropriate scripts if any exist. You can use this to control which environments will use certain scripts as well.
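For example, a hypothetical block like this would seed sample data everywhere except Production, while still skipping brand-new databases:

-- Only load sample data in non-Production environments.
IF ( '$(DeployType)' NOT IN ('New', 'Production') )
  BEGIN
      PRINT 'Loading sample data for the $(DeployType) environment.';
      -- INSERT statements for test/sample data go here
  END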

Wednesday, January 23, 2013

SSDT, Publishing, and Referenced Databases

Had this catch me today in a series of databases that all work together. In order to resolve all of the cross-database dependencies, I've added database references to the project. That takes care of all of the "Database.dbo.Object" references outside of the database. The problem is that three-part references to the current database itself will also produce errors unless a reference is defined.

In my haste to get the projects working, I added a database reference to the current project. I would build my project and reference the generated *.dacpac files when publishing. The problem is that every time I built my project, the referenced dacpac file would overwrite the generated dacpac file.

Results from this problem - I noticed that my "DeployType" variable wasn't referenced anywhere, which struck me as odd because I use it in my Pre and Post Deploy scripts. I also started noticing that my dacpac file in my build folder was out of date, even after a successful build.


Workaround - do what I have done elsewhere: replace all instances of CurrentDatabaseName.dbo. with just dbo. in all SQL files in the project. This also includes the various forms a self-reference can take:
Databasename.dbo.
Databasename..
[Databasename].[dbo].
etc.

Once I replaced all of those references and rebuilt the project successfully, my publish actions started working again. My Deploy scripts were included. I didn't get mysterious warnings about my variables not existing.

There may be a different way to work around self-referenced database objects within SSDT, but until I come across that, the workaround is to remove them from the code and not to just add a reference to the current database through a dacpac file.

Wednesday, November 14, 2012

SQL Server: Problems with corrupt files or assemblies

Yesterday I found and fixed a bunch of hard drive issues. One of those issues resulted in my “Microsoft.AnalysisServices” assembly for SQL Server 2012 becoming corrupt. That in turn resulted in VS2010 throwing errors when I tried to use SSDT or do just about anything else. My exact error message was:

Could not load file or assembly 'Microsoft.AnalysisServices, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The module was expected to contain an assembly manifest.

 

There was an MS help link as well, but it basically took me to a page that thanked me for letting them know they needed to work on their documentation. It was a little less than helpful.

 

I tried several things to fix the problem.

  1. Re-apply SP1.
  2. Uninstall just Analysis Services.
  3. Repair SQL 2012
  4. Uninstall SQL 2012  (This mostly worked except I still had issues with SSAS)
  5. Reinstall SQL 2012  (This worked, but still had issues with Analysis Services)

All of the above resulted in the same error message at some point and a non-working install of SSAS 2012.

 

I finally came across an article that talked about fixing the Global Assembly Cache. While not everything applied directly, it did get me started.

  1. I tried copying the gacutil files as mentioned in the article. This didn't work; the program ran but didn't do anything. I had to run gacutil.exe from its actual location, which can be found under “c:\Program Files (x86)\Microsoft SDKs\Windows\”. You'll need to choose the appropriate version for your OS as well as the appropriate 32-bit or 64-bit build. In my case, it was Windows 8, and my full path to gacutil.exe was “C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\x64”
  2. I had to find my physical Microsoft.AnalysisServices.dll file, found under “C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012\x64”
  3. After that, I was able to run gacutil -if “C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012\x64\Microsoft.AnalysisServices.dll”

After running that command, I uninstalled and re-installed SSAS under SQL 2012 using the standard add/remove features and everything is now working. I don’t know if anyone will encounter a similar problem with a bad assembly or assembly manifest, but hopefully this will help someone.

Monday, November 12, 2012

SSDT: Tips, Tricks, and Gotchas

I wanted to add a short post to make sure I highlight some things that will trip people up or otherwise cause issues.

 

  • When setting trigger order for triggers on a table, you could run into an issue with publishing your database. The project will build successfully, but throw a "Parameter cannot be null" error for parameter "Key". This is a bug in the product as of at least SQL 2012 CU2; it has been filed internally with Microsoft and has a workaround: do not try to set the trigger order in the table definition, but rather put that code in a post-deploy script.
    • This is supposed to be fixed in a future release of SSDT so your experience may vary.
  • Importing encrypted objects may fail. If you want to store encrypted objects in your project, you'll likely need the code in order to create them in the project.
  • Unlike the Schema Compare option in VS 2010, there is no way to set the default Schema Compare options nor a way to filter out "skipped" objects in the schema compare.
    • I’d love some ideas on how better to handle this. You may be able to save the compare within the project for future re-use, but I had little success with this in VS 2010 Projects so have been reluctant to try that route again.
  • If your database uses FILESTREAM, you will not be able to debug using the default user instance that SSDT provides. You will need to point your debugging instance to a SQL Server install that supports FILESTREAM. See this forum post for more details.
    • Set this in the Project Properties and point to an actual instance of SQL Server that supports the feature(s) you want to use.
  • SQL Projects will substitute CONVERT for CAST in computed columns at the time of this writing. They also push default constraints with extra parentheses around the default value. If you clean these up from the target server, be aware that on your next Publish action, you could end up with extra work being done to make your target database use those parentheses or the CONVERT function.
    • To work around this, do a SQL Schema Compare after a release to find any areas where the schema on the server differs from that in the project. For instance, you may see DEFAULT (getdate()) on your server but DEFAULT getdate() in your project. Add the parentheses to your project to avoid unnecessary changes (see the sketch after this list).
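As a concrete illustration of that last point, with hypothetical column definitions:

-- What you might write in the project:
[OrderDate] DATETIME NOT NULL DEFAULT getdate(),
[LineTotal] AS CAST(Qty * Price AS MONEY)

-- What the server ends up with after a publish:
[OrderDate] DATETIME NOT NULL DEFAULT (getdate()),
[LineTotal] AS CONVERT(MONEY, Qty * Price)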

 

Do you have any tips to share? Add them to the comments below.

Friday, November 9, 2012

SSDT: SQL Project Snapshots

SSDT allows for snapshots to be taken of the project at any point. Just right-click the project name and select the option to "Snapshot Project".
[screenshot: the "Snapshot Project" context-menu option]
This builds a dacpac file into a folder within the project called "Snapshots" with a default name format of Projectname_yyyymmdd_hh-mi-ss.dacpac.
This file contains a build of the project at the time the snapshot was taken, including all objects and scripts.
 
Uses (by no means a complete list)
  • Save a specific version of your project to use for release
  • Save this version of the project before making changes to the underlying project
  • Use as a source for schema compare
  • See the differences between snapshots and/or the current project through schema compare
  • Baseline your project
  • Roll back to this state
    • Schema Compare with the snapshot as the source and the Project as the target
    • Import into a new project with this as the source.
    • Import into current project, but be aware that this could easily produce a lot of duplicates

Thursday, November 8, 2012

SSDT: Publishing Your Project

Build the project

In order to publish your project, it must first build successfully.

Start by building your project. Right-click the project and select "Build".

[screenshot: the "Build" option on the project context menu]

If the build is successful, the project can be published.

 

You may want to create a folder within your project to store saved Publish Profiles. These can be used later to easily publish the project to your servers.

[screenshot: a folder in the project for saved Publish Profiles]

 

Creating Publish Profiles

Right-click the project and select Publish. This will bring up the Publish Database dialog.

  • Choosing to publish or opening a saved publish profile will initiate a build of the project.

[screenshot: the Publish Database dialog]

Choose your target database, set your advanced options (similar to Schema Compare options), and choose the "Save Profile As" option to save this to a location within your project. Selecting the "Add profile to project" option will create the publish profile in the root of the project. You may wish to either move the file to a folder storing all of your publish profiles or, if you saved it without adding to the project, show all files of the project so you can include the file in the project.

[screenshot: saving the publish profile to the project]

 

Some options you may want to consider:

  • "Always re-create database" - this will re-create the database. Any data in the database will be lost.
  • "Block incremental deployment if data loss might occur" - If there are any changes that could result in the publish action failing because of data loss, this option will stop the script from running.
  • "DROP objects in target but not in project" - This will remove anything in the database that doesn't exist in the project. Useful if you want consistency, but you may want to ensure this isn't checked if there could be objects in the database that were created, but didn't make it to the project.

Under the "Advanced Deployment Options"

  • Allow Incompatible Platform - Useful if you may publish to a different version of SQL Server than the one specified in the project
  • Include transactional scripts - Will run the entire update operation as a transaction. If any one part fails, the transaction will roll back. If you have cross-database dependencies, selecting this option could result in no changes being published if you're publishing to a new server. For a new publication, you may want to de-select this option to ensure a successful deploy of what can be published.
  • Script state checks - This option will ensure that the publish action will only work on the specified server and database.
  • Verify deployment - Checks the database and project before publishing to try to ensure there are no changes that will cause problems with the publication such as missing data for a foreign key.

 

Using Publish Profiles

Once you've set up your publish profiles, you can easily use these to push changes to that server and database without needing to specify additional parameters. The easiest way to use them is to double-click the Publish Profile within the project and choose to either "Generate Script" or "Publish".

Generate Script will generate a script for you to use to update the target at a later time (run in SQLCMD mode).

Publish will immediately attempt to push the changes to the target.

You can also use these at a later point to push changes through the SQLPackage.exe command line.

 

SQLPackage

To publish your package through a command line we use something like the following:

sqlpackage /a:publish /sf:.\sql\Local\Adventureworks2008.dacpac /pr:.\Publish\Local.publish.xml

The above will:

  • Use the "Publish" Action
  • Use the Source File named Adventureworks2008.dacpac, built in the sql\Local folder
  • Use the publish profile named "Local.publish.xml" (defined to push to the local SQL Server)

You may want to add SQLPackage.exe to your path. By default it is installed in:

C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin

You can override quite a few of the default settings through various command line arguments. This includes source, target, and variables. You can find a full list of the command line arguments at the SQL Package reference online.
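For instance, to override the DeployType SQLCMD variable from the earlier posts at publish time (reusing the hypothetical paths from above):

sqlpackage /a:publish /sf:.\sql\Local\Adventureworks2008.dacpac /pr:.\Publish\Local.publish.xml /v:DeployType=New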

 

Jenkins Automation for CI

We use Jenkins at my current workplace and set up a Jenkins job to do the following (With thanks to Matthew Sneeden for the assistance.):

  • Get the latest from our mainline repository
  • Build each SQLProj file.
    • Building the SLN file will result in also attempting to publish the database
    • msbuild .\Adventureworks.sqlproj /t:build /p:Configuration="Local"
      • This assumes that msbuild.exe is in your path.
    • Configuration is mostly to control the location of the dacpac file generated.
  • Run SQLPackage w/ a specified Publish Profile for the appropriate environment and using the newly built dacpac as the source.

We are currently investigating how we can use Snapshot files to better control releases to our UAT and Production environments. This series will be updated when that information is available.

Wednesday, November 7, 2012

SSDT: Errors and Warnings

SSDT includes an Errors and Warnings window that is well worth your attention. Ideally, your project should have no errors or warnings.

[screenshot: the Errors and Warnings window]

 

However, sometimes coding errors slip into your project, or you get warnings that an object can't be found because it exists in another database. Sometimes a warning might appear because an object is missing completely, in this project or another one. In these cases, it's well worth checking this window to find out where you may have some issues.

Warnings are not necessarily a problem. SSDT will bring possible issues to your attention, but warnings will not stop a project from building and publishing unless you have set the option to treat warnings as errors or unless there really is an underlying problem that causes an issue during the publish phase.

For example, if I modify the Person.Address table to add a new column in the code, but forget to add a comma, I'll get an error something like this.
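The offending code might look something like this (a hypothetical, cut-down version of the table definition):

CREATE TABLE [Person].[Address]
(
    [AddressID]    INT IDENTITY (1, 1) NOT NULL,
    [AddressLine1] NVARCHAR (60) NOT NULL   -- missing comma here...
    [NewColumn]    INT NULL                 -- ...makes this line a syntax error
);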

[screenshot: the resulting error in the Errors and Warnings window]

If you double-click on the line, the editor should open the appropriate file and take you pretty close to your problematic line. Correct the problem, save the file, and move on to the next error.

 

Some common warnings/errors

  • 4151 - Unresolved database reference. This is often caused when one of the objects in a database references another. This can often be resolved by creating a database reference.
  • 71562 - Unresolved database reference warning.
  • 71502 - Another unresolved database reference warning.

How to globally suppress certain warnings

  • Right-click the root of the project and select Properties
  • Click on the "Build" tab
  • Enter the numeric portion of the codes, separated by commas. Remove the "SQL" prefix and any leading zeroes when entering the code.

[screenshot: suppressed warning codes on the Build tab]

Tuesday, November 6, 2012

SSDT: Updating a Project Manually

Sometimes it's necessary to modify the project manually. The change might require specific tweaking to include just a couple of new lines in a stored procedure or function. It might be just adding a column to a table. Maybe you know exactly what change needs to be made and would rather just edit it manually instead of comparing or importing. Whatever the reason, updating your project manually can be done without too much trouble.
If you know the name and location of the file you want to edit, just go straight to it and right-click it.
[screenshot: the context menu for a table file]
You have two ways to edit the file: using "View Code" will bring up a T-SQL script. Edit it as you would any other T-SQL script and save. Remember that SSDT defines all base objects using CREATE statements.
If you choose to View Designer, you'll see a new screen combining a design view with a T-SQL editor.
[screenshot: the designer view with its combined design and T-SQL panes]
 
Here you can choose to edit within either window. The code will be kept in sync across the panels. You can right-click any of the Keys, Indexes, Triggers, Constraints in the upper window and choose to add a new one. You'll get a shell of the script to create a new object tied to this table. Modify its code to match what you want to do and save the file.
  • This is different behavior from the older Database Projects. Those would create separate files for each object by default. In SSDT, the scripts are all put together in one script unless you upgraded from a DB Project.
  • The only place SSDT supports "GO" to break up the batches is within these create scripts. You cannot use GO in a post or pre deploy script.
  • If you highlight a column in the table, you can give it a description in the Properties window. This is an easy way to propagate object/column descriptions into your SQL Server.
  • You can select the table's properties in the Properties window dropdown to edit its properties/description.
[screenshot: the Properties window for a table and its columns]
 
 
SQL Server Object Explorer
If you hit the F5 key inside of a project to debug the project, SSDT will build a local database, often hosted in an instance called (localdb)\Databasename. This will run the database project in a debug session. If you then open the SQL Server Object Explorer view, you can edit the project and underlying scripts.
This works in most instances where you’re using basic SQL Server functionality. If you’re using any more advanced features such as Filestream, you’ll want to change the location for this debug instance. You can change these in the project properties.
[screenshot: SQL Server Object Explorer and the debug instance settings]

Double-clicking the HumanResources.EmployeePayHistory table above brings up the editor for the underlying table in the project.
[screenshot: the HumanResources.EmployeePayHistory table opened in the editor]

Monday, November 5, 2012

SSDT: Updating a Project by Importing Scripts

Sometimes your developers will work on new SQL Objects and give you scripts to alter or create objects. SQL Projects support importing those scripts into your project. Start by choosing the Import Script option.
[screenshot: the "Import Script" menu option]
Find your script or scripts.
[screenshot: selecting the script file(s) to import]
Select your options for import:
[screenshot: the import options dialog]
 
If you see the following text in your log, be sure to check this file to see if something was missed on import. You'll need to make these changes manually, if applicable. In a lot of cases, the statements that aren't understood tend to be "GO" statements.
“In the script that you provided to the import operation, one or more statements were not fully understood. These statements were moved to the ScriptsIgnoredOnImport.sql file. Review the file contents for additional information.”
I've also found that "ALTER TABLE" statements are not well understood by SSDT within imported scripts. If you get several scripts that include these and they aren't understood, you can either compare the physical database to the project or update the project manually. (This has been acknowledged by Microsoft as “working as designed”, even if we might wish that the import could apply the change to the object's script instead.)

Friday, November 2, 2012

SSDT: Updating by Using Schema Compare

Sometimes changes are made to a shared database that need to be brought into your SQL Project and there are no saved change scripts for those changes. Other times you may just want to see what will change when you publish a project. SSDT has included a Schema Compare option for SQL Projects, dacpac files, and databases.
If you are using VS2010, there are two options for SQL and Data.
[screenshot: the SQL and Data menus in VS2010]
  • The "Data" menu is used for the older DB Projects within VS2010. There's a useful data compare option in there, but the schema compare will not work for SQL 2012 or SQL Projects.
  • The "SQL" menu contains the Schema Compare item with a sub-item to do a new Schema Comparison
[screenshot: the Schema Compare menu item]
To update your project from a shared database, start a "New Schema Comparison". You'll see a screen something like this:
[screenshot: a new Schema Comparison window]
Setting your source/target is pretty straightforward and each will produce a screen something like the following:
[screenshot: selecting a schema compare source or target]
You can choose to compare against an open project, a live database, or a Data-tier Application File (dacpac). In our case, we are going to select a live database as the source and our project as the target.
Once selected, you may want to change the options to exclude certain object types or ignore certain settings such as file placement. Unlike in VS2010, the options cannot be set to some default at this time. When you've set the options to your liking, click the "Compare" button.
[screenshot: schema compare options and the Compare button]
If there are any differences, you'll see a list of them here. The comparison window shows where there are differences between the source and target. This can be really helpful to know whether or not to include this change in the update.
To exclude a change, clear the checkbox next to it in the upper section of the window. A green + indicates that this object will be created. A red - indicates that the object will be dropped. The blue pencil indicates that the object will be modified. At this time there is no way to hide any unchecked items that show in the compare.
Once you're satisfied with your selection, click the "Update" button to push those changes into your project.

Thursday, November 1, 2012

SSDT: Pre and Post Deploy Scripts

At some point, you will likely need to release changes that contain more than just schema changes. You may need to update data in a table, remove data that could cause a problem, or insert some data to support a code release. To support these changes, SSDT provides support for Pre-Deploy and Post-Deploy scripts.
These scripts are not created by default in a new SSDT project. To organize your scripts, you may want to create a folder structure similar to the following.
[screenshot: a folder structure for pre- and post-deploy scripts]
If you right-click the Post or Pre Deploy folders, you can choose to add a Script item.
[screenshot: adding a Script item to the folder]
You will then get a choice of which type of script to include. Choose Pre or Post deployment, as appropriate, and name the file accordingly.
You can only have one Pre-Deployment script and one Post-Deployment script per project as part of the build! All other included scripts must be set to “Not in build” or you will likely see errors.
[screenshot: choosing the script type]
If you look at the properties of these scripts, you will see that the Build Action is set to PreDeploy or PostDeploy. Opening the Post-Deploy script will show this default text.
/*
Post-Deployment Script Template
--------------------------------------------------------------------------------------
 This file contains SQL statements that will be appended to the build script.
 Use SQLCMD syntax to include a file in the post-deployment script.
 Example:      :r .\myfile.sql
 Use SQLCMD syntax to reference a variable in the post-deployment script.
 Example:      :setvar TableName MyTable
               SELECT * FROM [$(TableName)]
--------------------------------------------------------------------------------------
*/


 
Using Pre and Post Deploy Scripts
These files are interpreted as SQLCMD scripts by SSDT. You can use most valid SQLCMD syntax within the script file to include other scripts and set variables. At the time of this writing, you cannot use the :ON ERROR IGNORE command within a script. This will hopefully be addressed in a future release of SSDT.
Pre-Deploy scripts will always run before the schema changes are run. If you alter the database in a way that changes the schema, you may encounter errors with the release. E.g., if you make schema changes to the Person.Person table, but drop that table in a Pre-Deploy script, your publish action will likely fail.
Likewise, Post-Deploy scripts will always run after the publish action. This makes them a great place to insert new data, make small adjustments to the system, or perhaps implement custom security and permissions. The caveat to using post-deploy scripts to make changes is that the changes need to be repeatable. If you write an insert statement for a new lookup value, that same insert will run next time unless you check for the existing value first.
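For example, a repeatable lookup insert might look something like this (using a hypothetical dbo.OrderStatus table):

-- Safe to re-run: only inserts the row if it isn't already there.
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'BACKORDER')
  BEGIN
      INSERT INTO dbo.OrderStatus (StatusCode, StatusName)
      VALUES ('BACKORDER', 'Back-ordered');
  END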
To create a new script to run in pre or post deploy:

  • Right-click the appropriate folder (Pre-Deploy, Post-Deploy, other)
  • Choose to add a "script" and select the "Script (Not in build)" option
  • Give it a name, preferably one without spaces as it will make it easier to run the pre or post deploy script.
  • Add your code changes. You may want to give it some descriptive comments and sometimes a Print statement can be helpful if you want to see progress as you run a deploy script manually.

    • Make sure that you can re-run this script without causing errors! Check for data that may already exist, use outer joins, wrap your code in Try/Catch - whatever you need to do to make sure that you can have this run again if necessary.
  • After saving the script, edit your pre or post-deploy script and add a new line in a non-commented area to run a script called "MyScript":

:r .\MyScript.sql

    • This assumes that your script is in the same folder as your pre or post-deploy scripts. Adjust the relative paths as needed for the script you created.
  • Save your deploy script.

The next time you publish your project, it will pick up this script and include it in your change set. If you choose to create a script on publish, you can see the full text of your scripts included in the pre or post-deploy section of your change script.
 
Cleanup of Pre and Post Deploy Scripts
There are several options for cleanup, but one of the best suggestions I've seen has been to generate project snapshots and then remove the script references from your pre/post deploy scripts and the script files themselves from the project. They will still be saved in the snapshot, but will not be in your project anymore.  You may be able to manage this well through your Version Control System, but snapshots do have some advantages.
Pros:

  • Your project will remain somewhat clean.
  • Script files will be saved with their appropriate snapshot, ideally tied to a particular release.
  • Less concern about whether a script is replayed, because it has been removed.
  • Good for scripts that have a larger impact and should only be run one time.

Cons:

  • It's a manual process and requires some attention to detail.
  • You need to look in the snapshots to see what scripts were used.
  • It may require that you have a more formalized release process.
  • You may need to publish several snapshots to bring the database up to a current version.