
Continuous Integration with Visual Studio 2010 Database Projects, Hudson & MSBuild (Part 3 of 3)

Introduction
This post is a continuation of the series on implementing a continuous integration environment for Visual Studio 2010 database projects (.dbproj) using Hudson, MSBuild and Subversion. Links to the previous two posts:

Part 1: Agile Data warehouse Planning & Implementation with Hudson, MSBuild, Subversion and Visual Studio Database Projects.
Part 2: Managing Database Code for Continuous Integration

Part 1 was an overview of agile development practices for database\data warehouse development. Part 2 explained in detail how to set up the database project, add the code to change control (Subversion) and prepare artifacts for CI. Part 3 uses the deployable database script (.sql) created by the Visual Studio deployment process (from Part 2) to set up the CI environment for the database project.

Hudson can be used with NAnt or MSBuild to build and deploy database projects. I will perform a walkthrough of setting up the AdventureWorks database project (created in Part 2) on Hudson using NAnt\MSBuild and Subversion.

Installing Hudson
Hudson is a free continuous integration server used mostly by the dark side (a.k.a. the Java community). I have worked with CruiseControl.NET and Bamboo, but my personal favorite is Hudson for its management features, plugin extensibility and clean UI. Hudson is available for download
here. (Java is a prerequisite for installing Hudson.)

Install Hudson by running the command line statement (from the directory where Hudson.war was downloaded to):

java -jar hudson.war

By default Hudson runs on port 8080. If this port is taken by some other service, use the ‘--httpPort’ switch to change the port on which Hudson runs.

Ex: java -jar hudson.war --httpPort=7777

Hudson can also be configured to run as a Windows service. The documentation already does a good job of explaining the steps for configuring Hudson to run as a Windows service, so I will just redirect you to the right place.

Next, find and add the MSBuild and NAnt plugins from Manage Hudson –> Manage Plugins. The ‘Install’ button is at the very end of the page.


Create a project for the build process

Navigate to http://localhost:8080 (or the port on which you have Hudson running). Start by creating a free-style software project for our AdventureWorks sample. In this demo we will create an hourly development build of the AdventureWorks DB project. Click OK once done (ignore all the settings on the next page; just hit Save for now).


Adding the build step & Polling source control for auto updates
Hudson works with build files that execute against different targets, rather like a service broker coordinating different services to run different kinds of jobs. A target is a set of instructions grouped under a common task umbrella: for example, a ‘Debug’ target that builds the project with configurations specific to development environments, and a ‘Deploy’ target that rolls out the distributable product.

You could use either MSBuild or NAnt to build and deploy the project.

Difference between using MSBuild & NAnt with database projects: With MSBuild, the database project file AdventureWorks.dbproj works as the build file, and you can make changes or customize it in the VS IDE itself, which is really helpful. With NAnt and database projects, you have to create a build file (.build) manually or use a free NAnt build file editor; either way, you would have to include the MSBuild task in the NAnt build file to build and deploy the database project. (Tasks are specific XML tags that are passed as instructions for a particular unit of work in a build file.) The reason MSBuild is preferred for a .dbproj database project vs. a .dbp project type lies in the way the deployable database file is created (see the previous blog post).

For example, if a new table script is added to the database project and you intend to include it in the deployment script (AdventureWorks.sql), you need to rebuild the database project (Right click –> Rebuild). Only this action recreates the deployable AdventureWorks.sql with all the database objects, including the newly added table. And to do this from the command line you need the MSBuild command. So if you already have a provision for running the MSBuild task from Hudson directly, why would you use NAnt? The answer: in this case, since we are ‘only’ deploying the database project, MSBuild alone is fine. But if we had a range of project types built on different technologies, a Java project for instance, then using NAnt would make sense. It all depends on the composition of your build steps\events.
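To make the command-line rebuild concrete, here is a sketch of what that invocation might look like. The project name and configuration come from this walkthrough; treat the exact switches as assumptions to verify against your install. The command is composed as a string for illustration; in practice you run it directly (or let Hudson's MSBuild build step do so):

```shell
# Sketch: command-line equivalent of right-click -> Rebuild on the .dbproj.
# Assumption: msbuild.exe (shipped with the .NET Framework) is on PATH.
PROJECT="AdventureWorks.dbproj"
CONFIG="Debug"
REBUILD_CMD="msbuild $PROJECT /t:Rebuild /p:Configuration=$CONFIG"
echo "$REBUILD_CMD"
# Running this regenerates sql\debug\AdventureWorks.sql with the new table included.
```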

Before you can access the MSBuild tasks in NAnt you need to install the NAntContrib task libraries and include them in your NAnt build file. Since all of this sounds a bit complex, let’s just use MSBuild for now and follow the steps below to add the build steps:

1. Click on the AdventureWorks-Dev-CI-Build project and click on Configure.

2. Assuming SVN is set up correctly and the database project is in source control (if not, set up the repository from here and then access it from here), update the Subversion repository URL as follows.

[Screenshot: source control setup]

3. Save and go back to the dashboard to test the changes. Hit the ‘Schedule a build’ button (the Play button) to start the project.


The second column, ‘W’ (Weather Report), should turn sunny if everything was set up correctly.


Note: Click on Build History –> Console Output. It should display all the files from the AdventureWorks project being checked out to a workspace location. In the event of a build failure, this is the place to check for the error.

4. Make sure that the database project builds and deploys without errors from VS IDE.

5. Add build step: Go back to the project configuration and add an MSBuild step.


6. To add a trigger that polls source control for changes, follow this final configuration step or skip it.

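For reference, the schedule field behind this polling trigger takes cron-style syntax (the exact field labels vary by Hudson version, so treat this as a sketch). A schedule that polls Subversion every 15 minutes would look like:

```
# MIN HOUR DOM MONTH DOW
*/15 * * * *
```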

7. Finally, save and go to dashboard. Start the build and verify the output.

This completes the setup of a database project with continuous integration using Hudson, Subversion and MSBuild (I know I promised using NAnt, but trust me – in our case it just would have been a bit more painful to go over. Helpful NAnt tutorial).

Summary
This was a very basic, barebones CI setup. Once you start delving into configurations, parameters and different environments, it gets more complex and interesting. The database project is the core component of this whole CI setup process. The rest of the pieces (the CI server, source control, build tools and issue-tracking systems) are interchangeable; depending on the environment and other influencing factors you may decide to go with one product set versus another. This was just one way of getting started with CI on a database project with Visual Studio. There are numerous CI server products available as open source. A comparison is provided
here. Hope this helped. Thank you and keep reading.


Managing Database Code for Continuous Integration (Part 2 of 3)

Introduction

URL to Part 1: Agile Data warehouse Planning & Implementation with Hudson, MSBuild, Subversion and Visual Studio Database Projects (Part 1 of 3)

Part 2 of this three-part series on Continuous Database Integration covers creating, managing and provisioning a database project for a continuous integration environment. I will be using Visual Studio 2010 to create a database project (.dbproj) and Subversion (open source) for source control. Visual Studio is used mainly for the management of SQL scripts and (most importantly) for the deployment file it generates, but you can do without it and have a complete CDBI system set up in an open source environment. Most of the tools selected in this article to set up CI are free.
If your environment extensively uses Team Foundation Server (instead of Subversion) and Application Lifecycle Management (ALM), you should look into setting up continuous integration with those tools before starting to look for free ones. If you are an open source shop (two thumbs up) or want to try standing up a CI environment on your own, read on.

This article focuses on the SQL Server database project type in Visual Studio 2010. This project type has ‘build’ and ‘deploy’ capabilities (along with a bunch of other features), which are the pivotal components of the CI setup. If you do not want to use VS 2010 database projects and instead host your database scripts in a file-system-based folder structure, you need to manually create the re-runnable\deployable script (an OSQL or SQLCMD command script that executes all your deployable .sql scripts). Make sure you test it against a local instance of your database to emulate the build and deploy features. I once created a custom C# console application that looked into specific folders (Tables, Stored Procedures etc.) and created a command line batch script with error handling embedded in it (a custom deploy file generator was needed because my project involved both Oracle and SQL Server and I was working with the Visual Studio 2005 .dbp project type). With VS 2010 you get all these benefits plus database testing and refactoring, and it plainly makes managing a database project much simpler (.dbproj project type).
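As a sketch of that "deploy file generator" idea (the real tool was a C# console app; the folder names and file names below are hypothetical, chosen just to show the shape of it): scan the object folders in dependency order and emit one SQLCMD script that includes every .sql file and stops on the first error.

```shell
#!/bin/sh
# Create a tiny sample layout for demonstration (hypothetical names):
mkdir -p Tables "Stored Procedures"
printf 'CREATE TABLE Demo(Id int);\n' > Tables/Demo.table.sql
printf 'CREATE PROCEDURE GetDemo AS SELECT 1;\n' > "Stored Procedures/GetDemo.proc.sql"

# Generate one deployable SQLCMD script that includes every object script.
OUT=deploy_all.sql
{
  echo ":on error exit"                    # SQLCMD directive: abort on first error
  for dir in Tables "Stored Procedures"; do
    for f in "$dir"/*.sql; do
      [ -e "$f" ] && echo ":r \"$f\""      # SQLCMD include directive
    done
  done
} > "$OUT"
cat "$OUT"
```

The generated deploy_all.sql can then be run with `sqlcmd -i deploy_all.sql` against a local instance to emulate build-and-deploy.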

Difference between .dbp and .dbproj: http://blogs.msdn.com/b/gertd/archive/2009/03/25/dbproj-vs-dbp.aspx

Before we begin on the CI setup of the database project, download the following tools to set up the environment:
1.    Subversion – for source control of the database project. This is by far the best and most comfortable source control system I have worked with (sorry, TFS). After installing Subversion and creating a repository, make a note of the repository location; you will need it to link your database project to the repository.

2.    Plugin for Subversion integration with Visual studio IDE (either one of the two)
a. Visual SVN (Free to try, $49 per license)
b. Ankh SVN (Free)

Visual SVN vs. Ankh: I would recommend Visual SVN for a database-centric shop that has heavy-duty database development, SSIS, Reporting and\or SSAS solutions. I have had problems getting the Ankh SVN plugin to work correctly with these project types: it does not recognize them from the IDE, and you end up managing them from Windows Explorer instead of using commit\revert operations from the IDE. Visual SVN is much simpler to use and works perfectly with all the project types a database developer needs to work with. Yes, it does come with a price tag, but at $49 a license it is dirt cheap. This was when I was working with Visual Studio 2005 integration; things may have changed with Ankh since then, so try it out and see what works best for your scenario.

3.    SQL Server SSMS Tools Pack: This is more of a helper plugin than a requirement. It helps you generate seed data, save custom snippets as hotkeys, generate CRUD code and more. Once you start using it you will want to get more. Download it here.

Database Project Setup
Once the prerequisite software is installed, open Visual Studio 2010 and create a new SQL Server 2008 project.

[Screenshot: Database Project Type]

For demo purposes I am creating (reverse engineering, by importing an existing database during the project setup wizard) a database project for the AdventureWorks SQL Server 2008 database. [If the AdventureWorks sample database is not installed on your database server, it can be downloaded from CodePlex.] Complete the project setup by following the necessary steps as per your database configuration. Leaving them at their default settings is also fine for the moment.

Once the project setup wizard completes, the solution explorer should resemble the fig. below. Right click on the project name “AdventureWorks” and select Properties to bring up the project settings. Click on the ‘Build’ option to view the location of the deployment script. Select the ‘Deploy’ tab on the left to view the deployment settings.

[Screenshot: AdventureWorks Deployment Options]

Now that we have the project ready, right click the project and select ‘Build’. The status bar should go from ‘Build Started’ to ‘Build Succeeded’. After the build succeeds, deploy the project by right clicking the project and selecting ‘Deploy’. This creates a deployment script named ‘AdventureWorks.sql’ (in Visual Studio\Projects\YourProjectFolder\sql\debug). For deployment, two options are available (for now leave it at its default setting):
1.    Create a deployment script (default)
2.    Create a deployment script and run it against a database.

[Screenshot: Location of AdventureWorks.sql deployment script]

The deployment script is the most important artifact for a successful CI setup. It is a compilation of all the database objects belonging to your database project (including seed scripts, security etc.). After the first deployment, whenever a change is made to the database project, a script with the same name is generated that includes the changes.
The next step is to add your project to Subversion source control. Right click on the project and select ‘Add solution to subversion’. Select the repository path and add the project. Finally, right click, add the files, and commit\check-in the solution. The project is now ready to be shared by anyone who has the setup listed earlier.
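For reference, the same import can be done outside the IDE with the Subversion command-line client. A sketch, with a placeholder repository URL (the commands are composed as strings for illustration):

```shell
# Placeholder repository URL; substitute your own.
REPO="file:///C:/svnrepo/AdventureWorks/trunk"
# Import the project folder, then check out a working copy linked to the repository.
IMPORT_CMD="svn import AdventureWorks $REPO -m \"Initial import\""
CHECKOUT_CMD="svn checkout $REPO AdventureWorks"
echo "$IMPORT_CMD"
echo "$CHECKOUT_CMD"
```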

Preparing artifacts for CI
An isolated SQL Server database instance needs to be provisioned for continuous build and deploy of database scripts (tear down and reinstall). This database should not be accessible to developers for development or testing purposes. The sole reason for its existence is to test the continuous deployment of the database, on either code commits or regular intervals of time. It also serves as a sanity check of your end product at any point in time.

A Visual Studio 2010 database project produces the tear-down and install script (tear-down = recreate database) when you right click and select Deploy. But in a continuous integration environment we would like this file created automatically on every build. This can be implemented using the VSDBCMD command. You can use VSDBCMD either to just create the re-runnable deployable script (with the /dd:- command line option) or to create and run the deployable script (with the /dd:+ option). If you plan to use it just for creating the script, the script then needs to be executed by either ‘sqlcmd’ or ‘OSQL’ in each environment separately.
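To make the two modes concrete, here is a sketch of the pair of invocations (composed as strings for illustration; verify the option spellings with VSDBCMD /? on your machine):

```shell
# Assumption: the manifest name follows the project, as in this walkthrough.
MANIFEST="AdventureWorks.deploymanifest"
# Script only: generate AdventureWorks.sql, do not touch the server.
SCRIPT_ONLY="vsdbcmd /a:Deploy /dd:- /manifest:$MANIFEST"
# Generate the script and execute it against the connection in the manifest.
AND_DEPLOY="vsdbcmd /a:Deploy /dd:+ /manifest:$MANIFEST"
echo "$SCRIPT_ONLY"
echo "$AND_DEPLOY"
```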

Ideally, I would prefer using VSDBCMD just to create the deployment script and then hand the script over to the DBA, specifying the parameters (documenting them in an implementation plan for the database). DBAs are more familiar with sqlcmd\OSQL than with VSDBCMD; moreover, using VSDBCMD to execute the deployment script requires a bunch of assemblies (dll files) to be copied onto the database server, and I am not sure how today's production DBA will accept that change. Thinking like a developer: sure, VSDBCMD is cool and you should definitely use it in QA and production environments. But in the real world, DBAs run the show. By creating the deployable file in development and then running the same file in QA and production using sqlcmd, you standardize your deployments and make them simpler and worry free. (Did I mention that VSDBCMD also requires a registry change if Visual Studio is not installed on the machine, i.e. the database server?)
It is not always a smooth ride, though; enter the obstacle: the hardcoded variables in the deployment file.

Hardcoded variables in the Visual Studio deployment file: The Visual Studio deploy process creates three default parameters in the deployment file: DatabaseName, DefaultDataPath and DefaultLogPath. The ability to edit\override them is what makes the sqlcmd vs. VSDBCMD discussion interesting.
The main advantage of sqlcmd over VSDBCMD is the ability to pass variables as parameters to the deployment script from the command line. This is a big advantage, as the VS DB project hardcodes the database name and the data and log file paths (mdf and ldf) in the deployment script (AdventureWorks.sql; see the setvar commands below), and although there is a way to get around it, it is painful.

:setvar DatabaseName "AdventureWorks"
:setvar DefaultDataPath "C:\Program Files\…\DATA\"
:setvar DefaultLogPath "C:\Program Files\…\DATA\"

Note: The above variables can be suppressed by editing the project deployment configuration. (This option can also be applied at runtime via command line params.)


At this point you have two options with sqlcmd: manually change the DatabaseName, DefaultDataPath and DefaultLogPath variables in the deployment file, or change the variables on the command line with sqlcmd using the “-v” flag.

Ex: sqlcmd -S <server> -d master -v DatabaseName="NewAdventureWorks" DefaultDataPath="C:\Data\" DefaultLogPath="C:\Data\"

If you decide to go with VSDBCMD for both creating the deployment file and deploying it to the database server, a workaround is required to make this work. Complete the following workaround steps (skip both steps if you are going to use sqlcmd for QA & production deployments):
1.    Override the DatabaseName at runtime with the TargetDatabase command line option. Ex: /p:TargetDatabase="NewAdventureWorks". This overrides the :setvar DatabaseName "AdventureWorks" line, deploying to "NewAdventureWorks" instead.
2.    Overriding the file path variables: Let’s get something straight first: the variables DefaultDataPath and DefaultLogPath cannot be overridden. Microsoft has received requests for this and is planning to allow overriding in the next release of database projects. For now we will have to make do with a workaround.
a. Right click on project ‘AdventureWorks’ and select ‘Deploy’. Edit the Sql command variables file by clicking the Edit button.


b. Add two additional variables ‘myDataPath’ and ‘myLogPath’ as shown below.


c.    In the database project, navigate to Schema Objects\Storage\Files and change the data file path variable in AdventureWorks_Data.sqlfile.sql and the log file path variable in AdventureWorks_Log.sqlfile.sql to reference the newly created command variables:
–    Rename $(DefaultDataPath) to $(myDataPath)
–    Rename $(DefaultLogPath) to $(myLogPath)


d.    Right click and build the project. Navigate to .\AdventureWorks\sql\debug (the output location of your project) and open AdventureWorks_Database.sqlcmdvars with Notepad. The new variables will be available to change here.


As you can observe from steps 1 & 2 above, the workaround for using VSDBCMD can be a bit painful. One other important thing to keep in mind is that VSDBCMD does not execute pre-prepared deployment files; this is also an item the MS team is considering changing in the next iteration. To create a deployment package, VSDBCMD needs the necessary assemblies, build files, manifest, sqlcommandvars file etc. to prepare the end product (the deployment file) and run it. sqlcmd, on the other hand, easily runs pre-prepared deployment files (like AdventureWorks.sql).

Creating the build package (build files) & executing the deployment output
For now, I am going to demonstrate creating the deployable file with VSDBCMD (minus steps 1 & 2 above) and deploying it to different environments with sqlcmd instead of VSDBCMD.
The workflow of the continuous builds and deployments that we are trying to emulate is:

a. Clear existing deployable file: In the \sql\debug folder, delete the file AdventureWorks.sql.

b. Build the project: For now, just right click and select ‘Build’. I will use MSBuild to perform this task in the next article. Behind the scenes, when you right click and build, Visual Studio uses MSBuild to create the build files in the \sql\debug folder.

c. Generate the deployable file AdventureWorks.sql: Use the VSDBCMD command line tool on the integration machine with the database manifest file (AdventureWorks.deploymanifest) to generate the deployment file ‘AdventureWorks.sql’.

Before trying out this step, make sure VSDBCMD is installed on your integration machine.
–    If Visual Studio is not already installed, follow the instructions here to download and install VSDBCMD.
–    If Visual Studio is already installed (VSDBCMD is located in “C:\Program Files\Microsoft Visual Studio 10.0\VSTSDB\Deploy”), reference it by adding that folder to your PATH environment variable (instead of copying the files).

Execute the following command in the \sql\debug folder from a command prompt:

VSDBCMD /dd:- /a:Deploy /manifest:AdventureWorks.deploymanifest

The /dd:- option ensures that the deployment script ‘AdventureWorks.sql’ is generated but not executed against the database server.

d.    The output of the step above (AdventureWorks.sql) is executed against the integration database instance. Execute AdventureWorks.sql using ‘sqlcmd’ to test the deployment.

sqlcmd -S <server> -d <database> -E -i <input file> -o <output file> -v DatabaseName="<database name>" DefaultDataPath="<data path>" DefaultLogPath="<log path>"

Ex: sqlcmd -S myServer -d master -E -i C:\AdventureWorks.sql -o C:\LogOutput.txt -v DatabaseName="NewAdventureWorks" DefaultDataPath="C:\Data\" DefaultLogPath="C:\Data\"

Note: Variables set with the -v parameter override the hardcoded variables set in the deployment script.

Summary

This completes our preparation of the build items needed to set up the database project for continuous integration. Not to worry: the steps above build the working knowledge of how the CI product (Hudson) will orchestrate them on the server for us in the next part of this series. As for the choice between using VSDBCMD for build only or for both build and deploy, it depends on your environment. If you are flexible about the changes you have to accommodate to get VSDBCMD working in your environment, then go for it; otherwise just use it for build purposes on the build server to create the deployable product, and use sqlcmd going into the next environment phases (QA and production).
The next part of the series deals with orchestrating steps a through d above in a repetitive manner, based on either code commit\check-in or regular intervals of time. This will be set up using free tools: Hudson and NAnt\MSBuild. The next article will explain in detail setting up Hudson as a CI server for database projects and configuring MSBuild\NAnt tasks for the actual implementation. I will also take some time to discuss a proactive vs. a reactive CI setup and how that affects development. That’s all I have for now. Thanks for reading.

Agile Data warehouse Planning & Implementation with Hudson, MSBuild, Subversion and Visual Studio Database Projects (Part 1 of 3)

Overview

The notion of managing data warehouse projects with continuous integration and open source technologies is an uncommon practice, or I guess just unpopular, in IT shops dealing with database code, SSIS and SSAS projects (from my experience). Excuses\opinions differed from company to company:

• “It doesn’t apply to database code projects”
• “What is Continuous Integration and how does it apply to data warehouse projects?”
• “Here at Acme Inc. change control is done by our architect\DBA who uses a tool called ‘AcmeErwin’ or AcmeVisio to generate code, so we don’t need the additional bells and whistles”
• “Automating testing & deployment for database projects, SSIS packages is not possible”
• “Is it worth the effort?”
• “We are special, we do things differently & we don’t like you.” – kidding about this one.

In this article I will try to justify the use of CI on data warehouse projects and address the concerns above. The subject matter is geared towards planning and implementing data warehouse projects with agile development practices, along the lines of iterative feature\perspective driven development (Perspective = Subject Area = Star Schema). The article begins with an introduction to agile development practices, reviews evolutionary database design, defines continuous integration in the context of database development, compares the viewpoints of the waterfall and JAD methodologies to Agile, and demonstrates the coupling of the Kimball approach with Agile to establish a framework for planning long term project milestones composed of short term visible deliverables for a data mart\warehouse project. A detailed walkthrough of setting up a sample database project with these technologies (VS database projects for managing code, Hudson for continuous database integration, MSBuild for configuring builds, Subversion for source control and OSQL for executing command line SQL) is included for demonstration.

Due to the verbose nature of my take on this, I am writing this article as a 3 part series. Trust me; the next two parts are going to be hands-on cool stuff.
Part 1 – An introduction to agile data warehouse planning & development and an introduction to Continuous Database Integration (CDBI).
Part 2 – Create the database project with Visual Studio database projects & Subversion.
Part 3 – Prepare the build machine\environment with Hudson and MSBuild.

Introduction
After shifting gears between different approaches to database development at various client sites and compiling the lessons learnt, I am close to a standardized methodology for database development and management. One can apply this approach to projects regardless of size and complexity, owing to its proven success.

Before starting the introduction to continuous integration and Agile, let me take a step back and share a lesson learnt working with the waterfall model. While working on a new data warehouse project that adopted the waterfall SDLC approach for database project planning and implementation, over time the implementation did not follow the estimated plan. Sure, it was only an estimate, but you don’t want these estimates changing forever. Here the implementation was almost always off track compared to the initial plan. This was not visible during the initial planning phase, but over time the mismatch became more evident during the development cycle. The mismatch was due to ‘change’: changes in requirements, or changes caused by external factors. When you approach a data warehouse development methodology, it is either the Kimball approach or the Inmon approach; my project started off with waterfall + Kimball. But because requirements from the business changed too frequently, waterfall was proving to be a showstopper. The reaction to changes and the turnaround time required by the development team were slowing down the project timeline. The requirements were changing, and this project was already a mammoth effort, with more than ten subject areas with conformed dimensions forming a data mart.

The old school approach to starting a new database project (with the waterfall cycle) begins with initial requirements, then the logical model, and then the physical model. Usually in this approach you have an ‘architect’ or a ‘data modeler’ or an application DBA on the project who owns the schema and is responsible for making changes from inception to maturity. This methodology is almost perfect, and everyone starts posting those ER diagrams on their walls and showing off, until wham! Requirements start changing, and for every change you need to go back, change the specification, the schema and then the actual code behind, and of course allow time for testing. As the frequency of these changes goes up, this catapults the delivery dates and changes your project plan. In this approach, the turnaround time for delivering an end product with the identified changes is just not feasible. I am not debating that using a waterfall approach determines success or failure; I am trying to juxtapose the effort involved (basically showing you the time spent in the spiral turnover of the waterfall model, then comparing it with an agile approach). This is a classic example of how the traditional waterfall approach hinders the planning and implementation of your project.

This called for a change in the development approach: one that could react quickly to any change affecting the timeline of deliverables. The new approach adopted was agile development practice + the Kimball method, which resulted in a successful implementation of a large scale data mart for a health care company. By now you should have an idea of what I am trying to sell here.

What is Continuous Integration?
Continuous integration is all about automating the activities involved in releasing a feature\component of software so that the process can be repeated with little manual intervention, thus improving the quality of the product being built. This set of repeatable steps typically involves running builds (compiling source code), unit testing, integration testing, static code analysis, deploying code, analyzing code metrics (quality of code, frequency of errors) etc.
Continuous integration for database development is the ability to build a database project (the set of files that make up your database) in a repeatable manner such that the repeatable action mimics the deployment of your database code. Depending on the database development structure, database projects can be set up to run scheduled builds, automate unit testing, start jobs and deploy code to different environments, or stage it for deployment, reducing the human error that a continuous and monotonous process can bring.
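Pulling that definition together, a single database CI cycle might be sketched as follows (a stub that only prints the steps; server names and paths are placeholders, not a prescription, and a CI server would run the real commands in this order):

```shell
# Hypothetical one-pass CI cycle for a database project.
# The stub only prints each command; replace echo with real execution in practice.
run() { echo "would run: $*"; }

run svn update .                                                        # 1. get latest code
run msbuild AdventureWorks.dbproj /t:Rebuild /p:Configuration=Debug     # 2. rebuild -> AdventureWorks.sql
run sqlcmd -S buildserver -d master -E -i sql/debug/AdventureWorks.sql  # 3. tear down & redeploy
```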

The CDBI (Continuous Database Integration) Environment
The best part about the tools I am going to use to set up the continuous integration environment is that they are all FREE. Almost all, that is, except Visual Studio Team Edition for Database Professionals, which I would highly recommend as a tool for developing and managing code when working with database projects. The other tools I used for continuous integration are Hudson, Subversion and NAnt. Yes, freeware used mostly by the other (Java) community, but after applying them side by side with MS technologies, they proved to be a good mix.

All of the above opinions, when drilled down, point to the concept of ‘done’, or of ‘a deliverable’. The granularity of the deliverable is pivotal to incremental software development. It is just a matter of perspective: for incremental software development the granularity of your deliverable is much smaller, and the visibility much clearer and more concise, compared to a deliverable on the waterfall track. On continuous sprints you know for sure what needs to be delivered by the next sprint\iteration. At this point you have a definition of ‘done’. This is the most important thing when getting into agile development practices: the concept of done. [TechEd Thanks]
The more granular the tasks, the easier they become to control and complete. Once you start slacking on a few, they tend to pile up, and when that happens on a larger scale it fogs the plan even worse. Once you step into the shoes of a project planner, and also of a lead, this gap becomes more evident and clear.

This is where CI helps in meeting deadlines: showing progress of work in regular sprints, where the previous sprint’s progress is evaluated (to validate the concept of done) and the requirements for the next sprint are defined.

Summary
To summarize, continuous integration in a DB environment is all about developing your database code in sprints (of two weeks or more, your choice), by feature or perspective. Ex: a feature in the AdventureWorks database would be the HR module or the Sales module; an example in a data warehouse environment could be the Inventory star schema. It is these short sprints (regular intervals of feature completion or deliverables, usually 2 weeks) of clear, quantifiable requirements (the definition of ‘done’) that help gauge the status of work. Once the developers adapt to this rapid SDLC, visibility into the progress of the work goes up, resulting in accountability and ownership of work, a more cohesive team, increased productivity (I can bet on this one) and, most important of all, a quality product delivered in chunks that form the big picture: a collection of perspectives (star schemas) that plug together to form a data warehouse. That is all I have for now; more to follow in parts 2 and 3 on setting up the CI environment with a database project, using Subversion as the source control system and MSBuild\NAnt for creating build files.

Thanks for reading & stay tuned ….