Using a system function as a computed column within a table-valued function…

Wouldn’t it be nice if SQL Server allowed this functionality? For instance, if I did the below:-
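The original code image has not survived, so the following is a hypothetical reconstruction: a multi-statement TVF whose return table declares a computed column that calls a system function (the function name, sys.fn_cdc_map_lsn_to_time, and all identifiers are assumptions based on the surrounding CDC posts). SQL Server rejects this form of declaration.

```sql
-- Hypothetical sketch: a computed column calling a system function
-- inside a TVF's return table definition, which SQL Server rejects
CREATE FUNCTION auth.GetAccountChanges ( @Lsn BINARY(10) )
RETURNS @Result TABLE
    (
      [__$start_lsn] BINARY(10) ,
      -- system function in a computed column: not allowed here
      ChangeTime AS sys.fn_cdc_map_lsn_to_time([__$start_lsn])
    )
AS
BEGIN
    RETURN;
END
```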


I get the above error. I know the code can be easily rectified by doing:-

computed column in a table-valued function - correct
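The corrected code image is also missing; a minimal sketch of the workaround (same hypothetical identifiers as above) drops the computed column and calls the system function at insert time instead:

```sql
-- Hypothetical sketch of the workaround: the system function is
-- called in every INSERT into the return table instead of being
-- declared once as a computed column
CREATE FUNCTION auth.GetAccountChanges ( @Lsn BINARY(10) )
RETURNS @Result TABLE
    (
      [__$start_lsn] BINARY(10) ,
      ChangeTime DATETIME
    )
AS
BEGIN
    INSERT  INTO @Result ( [__$start_lsn], ChangeTime )
    SELECT  @Lsn ,
            sys.fn_cdc_map_lsn_to_time(@Lsn);
    RETURN;
END
```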

It just makes the code read badly and perform worse, as you would need to add this system function call everywhere you insert into the return table. One line instead of five makes more sense to me!



CDC changes column-by-column instead of row-by-row – can you improve it?

I’m writing a table-valued function that returns a CDC recordset; instead of returning changed data row-by-row, I need to output it column-by-column.

Say, for example, I have a table called auth.Account with CDC enabled on the columns Email and Password, e.g.:

EXEC sys.sp_cdc_enable_table @source_schema = N'auth',
    @source_name = N'Account',
    @role_name = NULL,
    @captured_column_list = N'ID, Email, Password',
    @supports_net_changes = 1;

Right, now that CDC is enabled, I need my recordset to look like this:-

ID, CDC_OPERATION, LSN, CurrentEmail, PreviousEmail, CurrentPassword, PreviousPassword

This is my solution so far. Yep, I know it's a bit messy and could be improved; any ideas on how I could improve it? Thanks, AJF

-- =========================================
-- Author:      Andrew Fenna
-- Create date: 2013-08-15
-- Description: Get Changed Auth Details TVF
-- =========================================
ALTER FUNCTION [auth].[ChangedAuthDetails] ( @FromLsn BINARY(10) )
RETURNS @Changes TABLE
    (
      [__$operation] CHAR(1) ,
      [__$start_lsn] BINARY(10) ,
      UserID INT ,
      CurrentEmail NVARCHAR(255) ,
      PreviousEmail NVARCHAR(255) ,
      CurrentPassword NVARCHAR(32) ,
      PreviousPassword NVARCHAR(32)
    )
AS
BEGIN
    DECLARE @ToLsn BINARY(10);

    IF ( @FromLsn IS NULL )
        SELECT @FromLsn = sys.fn_cdc_get_min_lsn(N'auth_account');
    SELECT @FromLsn = sys.fn_cdc_increment_lsn(@FromLsn);

    SELECT @ToLsn = sys.fn_cdc_get_max_lsn();

    IF ( @FromLsn = sys.fn_cdc_increment_lsn(@ToLsn) )
        RETURN;

    -- Query for change data: updates (operations 3 and 4), pivoted so the
    -- old and new values appear side by side on a single row
    INSERT  INTO @Changes
            SELECT  'U' AS [__$operation] ,
                    A.[__$start_lsn] ,
                    A.UserID ,
                    A.CurrentEmail ,
                    A.PreviousEmail ,
                    B.CurrentPassword ,
                    B.PreviousPassword
            FROM    ( SELECT    Email.[__$start_lsn] ,
                                Email.ID AS [UserID] ,
                                Email.[3] AS [PreviousEmail] ,
                                Email.[4] AS [CurrentEmail]
                      FROM      ( SELECT    [__$start_lsn] ,
                                            ID ,
                                            Email ,
                                            [__$operation]
                                  FROM      cdc.fn_cdc_get_all_changes_auth_account(@FromLsn,
                                                    @ToLsn, 'all update old')
                                  WHERE     [__$operation] IN ( 3, 4 )
                                ) AS src PIVOT
                                ( MAX(Email) FOR [__$operation] IN ( [3], [4] ) ) AS Email
                    ) A
                    CROSS APPLY ( SELECT    [Password].[__$start_lsn] ,
                                            [Password].[3] AS [PreviousPassword] ,
                                            [Password].[4] AS [CurrentPassword]
                                  FROM      ( SELECT    [__$start_lsn] ,
                                                        ID ,
                                                        [Password] ,
                                                        [__$operation]
                                              FROM      cdc.fn_cdc_get_all_changes_auth_account(@FromLsn,
                                                                @ToLsn, 'all update old')
                                              WHERE     [__$operation] IN ( 3, 4 )
                                            ) AS src PIVOT
                                            ( MAX([Password]) FOR [__$operation] IN ( [3], [4] ) ) AS [Password]
                                ) B
            WHERE   A.[__$start_lsn] = B.[__$start_lsn]
            -- Deletes (1) and inserts (2): one row each, old or new values only
            UNION ALL
            SELECT  CASE [__$operation]
                      WHEN 1 THEN 'D'
                      WHEN 2 THEN 'I'
                    END AS [__$operation] ,
                    [__$start_lsn] ,
                    ID AS [UserID] ,
                    CASE [__$operation]
                      WHEN 2 THEN Email
                    END AS [CurrentEmail] ,
                    CASE [__$operation]
                      WHEN 1 THEN Email
                    END AS [PreviousEmail] ,
                    CASE [__$operation]
                      WHEN 2 THEN [Password]
                    END AS [CurrentPassword] ,
                    CASE [__$operation]
                      WHEN 1 THEN [Password]
                    END AS [PreviousPassword]
            FROM    cdc.fn_cdc_get_all_changes_auth_account(@FromLsn, @ToLsn,
                            'all update old')
            WHERE   [__$operation] < 3;

    RETURN;
END
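A quick way to exercise the function (a sketch; passing NULL falls back to the capture instance's minimum LSN, per the IF branch above):

```sql
-- Return all captured Email/Password changes since the minimum LSN
SELECT  UserID , [__$operation] , [__$start_lsn] ,
        CurrentEmail , PreviousEmail , CurrentPassword , PreviousPassword
FROM    auth.ChangedAuthDetails(NULL);
```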

Restore SQL Server databases remotely using my SQL CLR

Has anyone ever thought of pushing a SQL Server database backup out remotely? (SQL native and/or RedGate backups.) My idea came from having to schedule SQL Server Agent jobs on the target servers to run a RedGate restore command to refresh our Development, QA and CI environments with a cleansed RedGate backup from our Production environment. This became a chore: after each restore I had to apply logins, users and permissions, which were different on each environment, and apply in-flight patches, which also differed per environment, as each server is at a different development stage.

So I came up with the idea of pushing the database backups out remotely, with a single SQL Server Agent job per environment configured on the source server. The steps that the job must perform:-

  1. Restrict Access
  2. Restore the database
  3. Setup logins
  4. Setup users
  5. Setup user roles
  6. Grant access
  7. Run release scripts
  8. Archive events after 30 days

Knowing the process, I began to write the schema. I identified that I needed to store some static data for this to work correctly:-

  1. Restore specific configuration table restore.Configuration
  2. Database Instance configuration table infrastructure.DatabaseInstance
  3. Database configuration table infrastructure.Database
  4. Database Logins table permissions.Login
  5. Database Users table permissions.User
  6. Database User Permissions table permissions.UserPermission
  7. Database User Role Membership  table permissions.UserRoleMembership
  8. Some type lookups tables permissions.DatabaseRole and permissions.PermissionType

See this database diagram:-


I also wanted to log every event and every piece of SQL that the SQL CLR executed, as I knew it would be very hard to debug the system once the solution went live. I also know that some DBAs have issues with SQL CLRs because they can’t see what is going on inside them the way they can with regular SQL stored procedures. So if things went wrong, which they usually do, they couldn’t see what it was doing, and knowing that, they probably wouldn’t accept it into live. So I decided to:-

  1. Create logging.Event to store every event in the system.
  2. Create logging.EventLog – the log.
  3. Create logging.EventType – the log types (Error, Information, Success, Warning)
  4. Create logging.EventLog_Archive – the archived log
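A minimal sketch of what that logging schema might look like; the table names come from the list above, but every column name and type here is an assumption:

```sql
-- The log types: Error, Information, Success, Warning
CREATE TABLE logging.EventType
    (
      EventTypeId INT NOT NULL PRIMARY KEY ,
      Name NVARCHAR(50) NOT NULL
    );

-- Every event in the system
CREATE TABLE logging.[Event]
    (
      EventId INT IDENTITY(1, 1) NOT NULL PRIMARY KEY ,
      Name NVARCHAR(255) NOT NULL
    );

-- The log itself, including the SQL the CLR executed
CREATE TABLE logging.EventLog
    (
      EventLogId BIGINT IDENTITY(1, 1) NOT NULL PRIMARY KEY ,
      EventId INT NOT NULL
        FOREIGN KEY REFERENCES logging.[Event] ( EventId ) ,
      EventTypeId INT NOT NULL
        FOREIGN KEY REFERENCES logging.EventType ( EventTypeId ) ,
      ExecutedSql NVARCHAR(MAX) NULL ,
      LoggedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

-- logging.EventLog_Archive mirrors logging.EventLog and receives
-- rows older than 30 days
```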

See this database diagram :-


Right, once the schema was completed, I started to develop the SQL CLRs that process the events. All development was carried out following strict Test-Driven Development guidelines using Visual Studio 2012 integrated with SSDT (for a download of the source code, see the end of this article). The SQL CLRs, written in C#, are compatible with SQL Server 2008 and later; however, the .NET Framework used for the SQL CLRs is only compatible with version 3.5 (for now!).

To deploy the database, either:-

  1. Follow this guide to publish the dacpac with SqlPackage.exe.
  2. Run the deployment script attached in SQLCMD mode (DBRestore.publish.sql).
  3. Open the solution with Visual Studio 2012 integrated with SSDT and publish from there.

Once deployed, we need to add the configuration. I have provided template scripts under the folder source/Database/Scripts. Complete the SqlCmdVariables in all the scripts and execute them in SQLCMD mode, in order, against the source restore server.

The source code is available here on Git at

SQL Training – Introduction to SQL Server Data Tools


This document will provide a basic understanding of SQL Server Data Tools (SSDT) and its integration into Visual Studio 2012 Professional. The document will discuss the product's positioning, heritage and aims, and the meaning behind declarative database development. It will give you a basic tour of Visual Studio 2012 Professional with integrated SSDT, and we'll explore the new features of the IDE. This session is intended to be a hands-on workshop, and at the end I'll be taking questions and answers. However, if you feel you are not clear about any subject, please ask me to go through it again. Please feel free to leave me feedback at the end of the session.


To take part in the demonstrations, a number of tools are required:-

  1. Microsoft Visual Studio 2012 Professional.
  2. SQL Server Data Tools.

SSDT Positioning

SQL Server Data Tools is an integrated environment for SQL Server database developers. The product is an ideal solution for designing your databases, as it has a built-in, low-overhead version of the SQL Server platform called 'localdb' which is integrated deeply within Visual Studio.


SSDT is being marketed as a new offering within SQL Server, but in fact, the reality is slightly different. The features that comprise SSDT are the latest evolution of a product line that has existed in various incarnations since Visual Studio 2005. Those incarnations include Visual Studio Team System for Database Professionals and Visual Studio 2010 SQL Server Database Projects, plus some more colloquial names, such as “Data Dude,” “TSData,” and “DB Pro.”

Another side note: I have often seen or read remarks that suggest some folks mistakenly believe that SSDT and its predecessors rely on Microsoft’s application lifecycle management product, Team Foundation Server (TFS). That is not the case; you don’t need to be using TFS in order to use SSDT.

Aims of SSDT

SSDT is an attempt to bring similar tools and consistency to SQL Server development to what .NET developers have had for years. Features such as IntelliSense, code refactoring, code navigation, find all references, sandboxed development and deployment, MSBuild support, and ease of deployment will be familiar to most .NET developers. However, the majority of these features are, in my experience, alien to many SQL Server developers who have become adept at toiling in SQL Server Management Studio (SSMS). SSDT aims to change that.

Declarative Database Development

In addition to bringing the above-mentioned new features, SSDT pushes a new development paradigm: the notion of declarative database development. Put simply, declarative database development means that you define within SSDT what your schema looks like. Then, when you deploy (or in SSDT parlance, publish) your SSDT project to a target database, SSDT will determine what it needs to do to make your target look like the schema that you’ve defined.

To help you understand this concept, let’s look at an example. Imagine we have the following table defined in our SSDT project:

CREATE TABLE [dbo].[Product]
(
        [ProductId] INT NOT NULL PRIMARY KEY
,       [ProductName] NVARCHAR(30) NOT NULL
)

We decide we want to make a couple of changes. We want a new column to store a category, and we want to assign a name to the primary key:

CREATE TABLE [dbo].[Product]
(
        [ProductId] INT NOT NULL
,       [ProductName] NVARCHAR(30) NOT NULL
,       [Category] NVARCHAR(30) NULL --New column
,       CONSTRAINT [PK_dboProduct] PRIMARY KEY ([ProductId]) --Named PK
)

After we publish the project, SSDT generates a script to effect the required changes.
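The generated change script did not survive the page transfer; under the assumption that the original primary key was system-named, it might look roughly like this (the exact output varies by SSDT version):

```sql
-- Sketch of the kind of script SSDT generates for the change above
ALTER TABLE [dbo].[Product]
    ADD [Category] NVARCHAR(30) NULL;
GO
-- The system-named primary key is dropped and re-created with the
-- chosen name (the actual system-generated name will differ)
ALTER TABLE [dbo].[Product]
    DROP CONSTRAINT [PK__Product__B40CC6ED12345678];
GO
ALTER TABLE [dbo].[Product]
    ADD CONSTRAINT [PK_dboProduct] PRIMARY KEY ([ProductId]);
GO
```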

The script makes the appropriate changes to the deployed table while respecting the presence of any data that might be stored within it; it is fairly complex, to be sure, but that’s not the point. In fact, under normal circumstances, you as the DBA or database developer would never even see this script because SSDT generates and executes it transparently during a publish operation; However, you do have the option for SSDT to just generate the script and not execute it.

This example of adding a column and naming a constraint is simplistic, but it illustrates the concept of declarative database development. You don’t have to define how you update your database schema from an old state to a new state: You simply tell SSDT what state you want the database schema to be, and SSDT will take care of it for you. Imagine a database schema with many changes, many of which will be more complex than simply adding a NULLABLE column as above and you can perhaps grasp the inherent value of this approach.

You can initiate a publish operation either by using a command-line tool called SqlPackage.exe or from inside the Visual Studio shell of SSDT. If you use the latter approach, you have available a useful addition not present in previous SSDT versions, the Data Tools Operations window, which provides a handy overview of your publish operations.


The Data Tools Operations window provides these features:

  1. an at-a-glance indication of the success or failure of the publish operation
  2. an error message where appropriate
  3. the amount of time taken to complete the publish operation

In my opinion, declarative database development is the killer feature of SSDT; put more simply, it is the reason we should be using SSDT. This isn't just a developer crush, either. I have recently been speaking to DevOps professionals in my social network and at SQLBits about their projects. At first they were wary of letting a tool govern their database deployments, but after carrying out numerous deployments to their production environment using this technique over the last couple of months, they've been won over and are now mandating that all development projects move to using SSDT. When Jamie Thompson asked for a quote for his article, the head of the DevOps team, Joe Pollock, had this to say:

“Having been used to database releases being packaged as a set of scripts, variously doing create and alter DDL statements, suddenly being handed an SSDT release was an uncomfortable change. Letting the release itself determine the schema changes required based on the target database? This required a lot of trust in the process, even though historically the hand-crafted scripts were prone to mistakes, were labor-intensive to run and required before and after compares to have confidence in them.

“But this much simpler method of defining the schema once using create statements and then trusting SSDT to do what it needs to on the target makes everybody’s job easier. Harnessing the power of this tool has reduced the complexity of managing releases, both in packaging and deploying them, and I am now a huge fan of this method.”

It takes a leap of faith to start trusting SSDT in this manner, but with the proper precautions in place (see my tips a bit later in the article), it can be a leap worth taking.


The localdb is at the heart of SSDT; under the hood it is similar to SQL Server Express and runs a full version of sqlserver.exe, although throttled by the number of CPUs and by resource limits. There are quite a few limitations: you cannot upgrade the instance, there is no management surface, and sqlserver.exe does not run as a service. It is not similar to SQL Server Compact, which is comparatively featureless (no stored procedures or functions) and is actually a DLL running inside a process, not visible to Task Manager. localdb is awakened when SQL Server Native Client requests a connection from within Visual Studio; it doesn't stay online forever, shutting down after a period of time, and you can configure where it creates the SQL files required to run. localdb does not support table partitioning or data compression at the moment; however, there are not many features that it does not support. You can configure SSDT to use a full version of SQL Server (e.g. the Developer edition) if your project requires unsupported features; we will explain how to do this later in this article.

Lab #1 – Creating your 1st SSDT Solution

Within this training lab we will learn how to create a SSDT database solution with Visual Studio 2012.

Create a SSDT Project

  1. Launch Visual Studio 2012 Professional.
  2. Select File\New\Project.
  3. From the Templates, select SQL Server.
  4. Then select SQL Server Database Project.
  5. Give the project a suitable name; this is typically the database name you’re going to deploy.
  6. Give the solution a suitable name; this is the overall solution that will contain the database and its tests.
  7. Select OK to create the solution.

Create a SSDT Test Project

  1. Select File\Add\New Project.
  2. Select .NET Framework 4.5
  3. From the Templates, select Test.
  4. Then select Unit Test Project.
  5. Give the project a suitable name; this is typically called <DBProjName>-Testing.
  6. Select OK to add the test project to the solution.

NB. We are going to be creating a TDD database project; therefore please delete the “UnitTest1.cs” under the test project.

Adding a new SQL Server Unit Test Item

  1. To add a SQL Server Unit Test item, right-click the test project and select Add\New Item…
  2. From the Visual C# Items select SQL Server, and then select the SQL Server Unit Test item.
  3. Give the item a meaningful name and select Add.
  4. You should see the SQL Server Test Configuration, where you can select the database connection that you are going to unit test against; this is the environment your database has been, or is going to be, deployed to. For the sake of this document please select New Connection.
  5. Enter the server name: (localdb)\Projects
  6. Enter the database name: SQLTrainingSSDT-I
  7. Select OK and OK again to close the SQL Server Test Configuration.

Tour of SSDT within Visual Studio 2012 Professional


Refactoring

Refactoring code can generally be described as altering how a code module executes without changing what it actually does. SSDT provides support for such changes.

Renaming an object (such as a table) or moving it to a different schema will, ordinarily, break any code modules (e.g., stored procedures, views, triggers, user-defined functions) that reference that object. This is an area where SSDT refactoring is intended to help. Using SSDT refactoring will not only effect the rename/move operation, but it will also alter any referencing code modules accordingly.

Such refactoring operations are commonplace in other development tools. However the very nature of a database means that SSDT refactoring is a little different. Not only does SSDT change the object and referencing code, it also records the change in a file called the refactor log, which exists as an artefact of the project, as shown in Figure 2.

Consider the example of a column being renamed. The Publish operation needs to know that it should issue a call to sp_rename rather than causing data loss by dropping the column with the old name and creating a new column with the new name. This is the purpose of the refactor log: to record the fact that a rename has taken place.
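For a recorded column rename, the publish step therefore emits something like the following sketch (the new column name 'Name' is hypothetical; sp_rename's arguments are object, new name, and object type):

```sql
-- Because the rename is in the refactor log, publish issues sp_rename
-- instead of drop-and-recreate, preserving the column's data
EXEC sp_rename 'dbo.Product.ProductName', 'Name', 'COLUMN';
```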

Go to definition. 

This is a code-navigation feature that has long existed in Visual Studio for .NET language development and is a welcome addition to SSDT. Simply placing your cursor on an object (say, a table) referenced in a code module and pressing F12 will take you to the DDL that defines that object.

Find all references. 

Find all references, another code navigation feature, shows you all code modules in which an object is referenced. In some ways, this is the inverse of the F12 code navigation that I mentioned earlier — indeed, it is accessed by placing your cursor on the object in question and pressing Shift+F12.

IntelliSense

IntelliSense is a feature that helps you to write your code by making suggestions as to what you might type next. For example, if you type the name of a schema and press the period key (.), IntelliSense will present a list of tables in that schema from which you can choose, as shown in Figure 3.

It’s worth noting that if you’re used to other languages where IntelliSense is implemented well (such as the .NET family of languages), your expectations for this feature will most likely not be met — T-SQL IntelliSense is not a patch on .NET IntelliSense. I have spoken to a person who writes code parsers, and he claimed that parsing T-SQL code is significantly more difficult than more modern languages such as C# — thus it’s more difficult to write an IntelliSense engine that works flawlessly. Nonetheless, I am of the opinion that IntelliSense is still a useful feature. (I find a competing IntelliSense product from Red Gate Software called SQL Prompt to be better than Microsoft’s offering.)

SQL Server Object Explorer (SSOX)

SQL Server Object Explorer is complementary to Visual Studio’s Solution Explorer pane because it provides a logical view of the objects that are defined within the project in Solution Explorer. What does that mean exactly? Perhaps a better way of describing it is that Solution Explorer shows a collection of files (like Windows Explorer), whereas SQL Server Object Explorer shows a collection of database objects, like SQL Server Management Studio (SSMS).

Above we see the DDL script for a table, [dbo].[Product], open and in full view. That same script, Product.sql, is in Solution Explorer; and over on the left we see that SQL Server Object Explorer has a logical view of that same table and also its columns and primary key.

SQL Server Object Explorer can also be used to launch the refactoring operations that we discussed earlier. You cannot do so from Solution Explorer.

SQL Server Database Unit Test Designer

The SQL Server Database Unit Test Designer is the tool that enables you to write tests in T-SQL against the previously built and deployed database.

If you work in an agile TDD environment, this will be the first point of contact in your solution. In short, you write your test to obtain a failure, then deploy your code to make the test pass. This is the tool that will assist you; later in this document we will discuss how to do this.

The SQL Server Database Unit Test Designer has a number of Test Conditions; they are:-

Data Checksum

Fails if the checksum of the result set returned from the Transact-SQL script does not match the expected checksum.

Empty ResultSet

Fails if the result set returned from the Transact-SQL script is not empty.

Execution Time

Fails if the Transact-SQL test script takes longer than expected to execute. The default execution time is 30 seconds.

The execution time applies to the test script test only, not to the pre-test script or the post-test script.

Expected Schema

Fails if the columns and data types of the result set do not match those specified for the test condition. You must specify a schema through the properties of the test condition. 


Inconclusive

Always produces a test with a result of Inconclusive. This is the default condition added to every test. This test condition is included to indicate that test verification has not been implemented. Delete this test condition from your test after you have added other test conditions.

Not Empty ResultSet

Fails if the result set is empty. You can use this test condition or the EmptyResultSet with the Transact-SQL RAISERROR statement in your test script to test whether an update worked correctly. For example, you can save pre-update values, run the update, compare post-update values, and raise an error if you do not get the expected results.
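The pattern just described might look like the following sketch (table, column, and values are assumptions, reusing the dbo.Product example from earlier):

```sql
-- Save the pre-update value, run the update, compare the post-update
-- value, and raise an error if the update had no effect
DECLARE @Before NVARCHAR(30) ,
        @After NVARCHAR(30);

SELECT  @Before = ProductName FROM dbo.Product WHERE ProductId = 1;

UPDATE  dbo.Product SET ProductName = N'Widget' WHERE ProductId = 1;

SELECT  @After = ProductName FROM dbo.Product WHERE ProductId = 1;

IF ( @Before = @After )
    RAISERROR (N'Update had no effect', 16, 1);
```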

Row Count

Fails if the result set does not contain the expected number of rows.

Scalar Value

Fails if a particular value in the result set does not equal the specified value. The default Expected value is null.

SQL Server Database Table Designer

SSDT features a new table designer that provides both a graphical and a scripted representation of a table, as shown below:-

What sets this table designer apart from other similar designers you might have seen is that you can edit the table definition either by using the GUI or by editing the script directly.

Lab #2 – Putting it all together the TDD way

Within this training lab we will learn how to build databases using the TDD methodology. We will build, deploy and test our database and take a closer look at some of the new features and tools within SSDT.


As a shopkeeper, I would like to record the products that I sell. I would also like the ability to retrieve the products that I sell.


  1. We must create a table to store the Product and its name, cost and quantity.
  2. We must create a stored procedure to insert the Products.
  3. We must create a stored procedure to retrieve the Products.

Development Steps

  1. Create the SSDT solution as in Lab #1.
  2. If you have an empty SQL Server Unit Test item, then please rename it to ProductTableUnitTest.cs.
  3. If you need to create a new SQL Server Unit Test item, please follow the steps in “Adding a new SQL Server Unit Test Item” above, giving your file the name ProductTableUnitTest.

NB. For information on the SQL Server Database Unit Test Designer, please see section ‘SQL Server Database Unit Test Designer’

  1. First, we need to put the dbo.Product table into a state that is going to be the same for all tests. From the test script drop-down select (common scripts):-
  2. Select the ‘Click here to create’ link as below:-
  3. Enter the below SQL to remove all the rows from the table and reset the primary key:-
  4. Next, switch to the Test Cleanup:-
  5. Select the ‘Click here to create’ link as below:-
  6. Enter the below SQL to return the table to its original state after the tests have run:-
  7. Now switch back to the SqlTest1 script:-
  8. We need to enter some data into the dbo.Product table to test. Select the ‘Pre-test’ option from the test type drop-down box.
  9. Select the ‘Click here to create’ link as below:-
  10. Enter the below SQL to insert the values into the dbo.Product table:-
  11. We need to write our test. Select the ‘Test’ option from the test type drop-down box.
  12. Select the ‘Click here to create’ link as below:-
  13. Enter the below SQL to return one result set with a single value to be tested:-
  14. Remove the inconclusive test condition:-
  15. Add a new scalar value test condition, and set the expected value = 1:-
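The SQL entered in the steps above was shown in screenshots that have not survived; a minimal sketch might look like the following (the Cost and Quantity columns, the identity key, and the inserted values are all assumptions based on the lab requirements):

```sql
-- Test initialize (common scripts): empty the table and reseed the key
DELETE FROM dbo.Product;
DBCC CHECKIDENT ('dbo.Product', RESEED, 0);

-- Pre-test: insert a known row to test against
INSERT  INTO dbo.Product ( ProductName, Cost, Quantity )
VALUES  ( N'Widget', 1.99, 10 );

-- Test: return a single result set with a single scalar value
SELECT  COUNT(*) FROM dbo.Product WHERE ProductName = N'Widget';
```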

NB. For a detailed explanation of each test condition please see section ‘SQL Server Database Unit Test Designer’

  1. Consider renaming your SQL Test item to something more meaningful:-
  2. Open the Test Explorer from the Test\Windows\ menu. You should see a window open as below:-
  3. Build the SSDT Test project, and you will notice that the newly created test appears in the list below:-
  4. If you run the test, you will notice that it fails; this is because the object Product does not exist:-
  5. Our first task is to make the test pass; therefore we must create the dbo.Product table. There are two main ways to create a new table: you can add a new table to your SSDT project either from Solution Explorer or from SQL Server Object Explorer (SSOX). I naturally prefer to work within SSOX, as it is a natural fit for SQL developers – the layout is similar to SSMS, and when a new object is created, it is automatically added to the project. We’ll do that now: right-click Tables and select Add New Table… from within SSOX:-


NB. For information on the SQL Server Object Explorer, please see section ‘SQL Server Object Explorer (SSOX)’

  1. Give your table the name Product, and select Add:-
  2. The new Table Designer will be displayed, and if you look underneath your project in Solution Explorer you will see the new file created, Product.sql:-
  3. As mentioned above in section ‘SQL Server Database Table Designer’, “you can edit the table definition by either using the GUI or by editing the script directly”; whatever changes you make in either the T-SQL pane or the GUI designer are reflected in both. Also, what sets this table designer apart from previous versions is that if you were to change the name of the table or a column, the referencing objects within the SSDT project update seamlessly. So if you had a stored procedure that referenced a table and you changed the table’s name from within the table designer, SSDT would automatically refactor the stored procedure to update the table name (and columns, if they were changed as well). This is a nice feature of SSDT, as it reinforces the declarative database development approach mentioned above.
  4. We are ready to deploy and retest this database. Hit F5; this will automatically build and deploy the database to the LocalDB instance (localdb)\Projects, as mentioned above:-
  5. You will notice the build and deploy messages:-
  6. What this does is check the project for any errors; if none exist, the project will build, and if the build is successful, the project is deployed. If you hit F5 again, the project will only deploy, as no changes have been made to the project. Also, on the filesystem under ‘bin\Debug’, a deployment script and the dacpac file have been created; the dacpac is essentially an offline representation of the database and all the objects in it. Under the covers it’s a zip file and can be opened in Explorer if you rename the file to

The deploy works by essentially creating a dacpac from both the source (SSDT project) and the destination (LocalDB) and comparing the two; a decision is made on how to make the destination the same as the source, and this operation is done offline, in memory. The comparison also takes the refactor log into consideration, so that when objects are renamed within the project they are also renamed in the destination; this occurs before any other changes are applied. If this operation did not occur, the renamed objects would be left behind in the destination and SSDT would not know about them. See below:-

  1. We are now ready to test. Go back to the Test Explorer, right-click on the failed test and select ‘Run selected tests’; you will notice that the test now passes:-

NB. The idea is that we can run this test over and over again and it should always pass; we should never change the test to make the test pass.

  1. Our next requirement is to create a stored procedure to insert the Product; we must add a test to the SSDT Test project first, before we code the change. To do this, select the add-test button in the Test Designer:-
  2. Give the test a meaningful name, and select OK:-
  3. Enter the below SQL:-
  4. Remove the inconclusive test condition:-
  5. Add two scalar value test conditions, and set the expected value = 1 in both:-

NB. For a detailed explanation of each test condition please see section ‘SQL Server Database Unit Test Designer’


  1. Build the SSDT Test project and you will notice the new test appear in the Test Explorer:-
  2. If you Run All tests, you will notice that dbo_InsertProduct_Test fails, due to the missing dbo.InsertProduct stored procedure:-
  3. Our task is to make this test pass, so from within SSOX find the Stored Procedures node, right-click and select ‘Add New Stored Procedure…’:-
  4. Enter the name of the stored procedure, and select Add:-
  5. This will create the below:-

NB. This is a template stored procedure snippet, which is configurable; however, for the purposes of this document that is out of scope.

NB. The item will be created in SSOX:-

NB. The item will also be created in Solution Explorer:-


  1. Change the SQL to the below:-
  2. Hit F5 to build and deploy the SSDT project, then re-run all the tests; you should notice that all the tests pass:-
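The SQL for dbo.InsertProduct was shown in a screenshot that has not survived; a hypothetical sketch, with parameter names and columns assumed from the lab requirements, might be:

```sql
-- Sketch of the insert procedure the lab creates to make the test pass
CREATE PROCEDURE dbo.InsertProduct
    @ProductName NVARCHAR(30) ,
    @Cost MONEY ,
    @Quantity INT
AS
BEGIN
    INSERT  INTO dbo.Product ( ProductName, Cost, Quantity )
    VALUES  ( @ProductName, @Cost, @Quantity );

    RETURN 0;
END
```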

Lab #3 – Practical – Create a Stored Procedure using TDD

  1. Please follow the steps in Lab #2 to create a stored procedure using TDD for the 3rd requirement:-
    1. We must create a stored procedure to retrieve the Products.
    2. Ensure that a test is written before you attempt to write your code.
    3. You must make use of 4 or more Test Conditions.
    4. All the tests must pass, including the tests from Lab #2.

Questions & Answers

Please provide me with feedback. Thank you.

Information in this document has been sourced from an article published by Jamie Thompson.