Back To Basics: Managing Databases

5 06 2010

At a new client’s site the other day, I observed that the more companies I work for, the deeper my knowledge of effective work practices becomes. In other words, over time you see things that work well, and things that don’t. I’m talking about simple practices that, when applied to teams, result in higher quality and/or more efficiently built software.

Some companies I’ve worked for have demonstrated mature ITIL-based change control processes, with dedicated change control teams and separate environments for development and testing. Sadly, over time those environments have become so out of sync with production that they provide absolutely no indication of what will happen in the real production environment. And management wonders why production deploys are so unsuccessful, having spent gazillions on ITIL training and manuals. A simple problem with a simple solution, but one that is rarely practiced in the real world.

Another problem I see regularly is the lack of an effective way of managing database change across a development team. Here I present a technique so primitive and proven that it doesn’t require the database tooling that ships with the likes of Visual Studio Team System (VSTS) 2008 Database Edition or Redgate SQL Compare. If a team has such tools at their disposal then great, use them; they can make life easier. But I am dumbfounded by the number of environments I see where database change is completely unmanaged.

Yes, databases and their associated artifacts (functions, triggers, message broker queues and so on) should be managed and versioned. Again, a simple problem with a simple solution, but one that tends to be practiced poorly in the real world.

In the source repository that the team uses, create a directory hierarchy that implies some sort of sequence (e.g. prefix each directory with a numeral). Start off by scripting the database and its requirements, such as filegroup options and collation settings. Remove any code-generated guff, to keep the scripts as clean and readable as possible. Then move on to tables, and then the objects that work with the tables: foreign keys, triggers, procedures, functions. An example structure could look like this:

– “01 Database”
– “02 Tables”
– “03 Foreign Keys”
– “04 Triggers”
– “05 Stored Procedures”
– “06 Functions”
– “07 Queues”
– …
– “10 Data”

Over the lifecycle of the project this structure should be completely populated with the necessary artifacts to build the target database from scratch. No restoration of backups needed.
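
As a sketch of what lives in these directories, hand-trimmed scripts might look like the following (the database, table and column names are made up for illustration):

-- 01 Database\CreateDatabase.sql
CREATE DATABASE Foo COLLATE Latin1_General_CI_AS
GO

-- 02 Tables\Customer.sql
CREATE TABLE dbo.Customer
(
    CustomerId INT IDENTITY(1, 1) NOT NULL,
    Name NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
)
GO

-- 02 Tables\CustomerOrder.sql
CREATE TABLE dbo.CustomerOrder
(
    OrderId INT IDENTITY(1, 1) NOT NULL,
    CustomerId INT NOT NULL,
    CONSTRAINT PK_CustomerOrder PRIMARY KEY CLUSTERED (OrderId)
)
GO

-- 03 Foreign Keys\FK_CustomerOrder_Customer.sql
ALTER TABLE dbo.CustomerOrder
    ADD CONSTRAINT FK_CustomerOrder_Customer
    FOREIGN KEY (CustomerId) REFERENCES dbo.Customer (CustomerId)
GO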

Because the number of scripts contained in a single directory can become overwhelming over time, a copy of the batch script “all.bat” below can be placed in each directory; it enumerates and concatenates every “.sql” file in its containing directory to produce one large SQL script, “_all.sql”. Running in 150 stored procedure scripts then becomes as simple as running the “_all.sql” in the stored procedure directory.

@echo off

if exist "_all.sql" del /F /Q "_all.sql"

rem Concatenate every .sql file (in name order) into _all.sql,
rem skipping the output file itself.
for /f "delims=" %%a in ('dir /b /o:n *.sql') do (
    if /i not "%%a"=="_all.sql" type "%%a" >> "_all.sql"
)
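
Taking the idea one step further, the numeric prefixes mean a full from-scratch build can be driven by a tiny script at the root of the hierarchy. A minimal sketch, assuming sqlcmd, a local default instance with Windows authentication, and that each script targets the right database (e.g. via a USE statement):

@echo off
rem Walk the numbered directories in name order, running each one's _all.sql.
for /f "delims=" %%d in ('dir /b /a:d /o:n') do (
    if exist "%%d\_all.sql" sqlcmd -S . -E -i "%%d\_all.sql"
)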

When it comes to scripting the data (lookup data and sample data should be versioned too), I find it hard to pass up the simplicity of the sp_generate_inserts gem I found a few years ago. It’s basically a stored procedure that gets created in your master database (and is therefore resolvable from any database on the same instance), and it provides a rich set of options for scripting your data (e.g. EXECUTE sp_generate_inserts footable, @ommit_identity=1).
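
The output is plain INSERT statements, ready to be dropped into the “10 Data” directory. For a hypothetical footable lookup table it would look something along these lines (column names invented for illustration):

INSERT INTO [footable] ([Code], [Description]) VALUES ('P', 'Pending')
INSERT INTO [footable] ([Code], [Description]) VALUES ('A', 'Approved')
INSERT INTO [footable] ([Code], [Description]) VALUES ('R', 'Rejected')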





Subversion Repository

17 04 2010

I’ve been working on a number of personal projects lately and need reliable, fast and possibly multi-user source control. There are many options available, but for me VS.NET integration is a must. Without focusing too much on my rationale for choosing SVN, there are two fairly mature and rich VS.NET providers: AnkhSVN and VisualSVN. I use VisualSVN and it rocks.

What follows is the fastest path to getting a repository up and running over the native SVN protocol (which by default listens on port 3690).

  1. Download pre-built binaries from CollabNet (I had TortoiseSVN compatibility issues with other distributions such as SlikSVN).
  2. Create the repository: svnadmin create "c:\svn\repository"
  3. Edit conf/svnserve.conf. Uncomment the lines anon-access = read, auth-access = write and password-db = passwd (see the snippet after this list).
  4. Edit conf/passwd. Register users and their passwords here.
  5. Register the Windows service (daemon): sc create svnserver binpath= "\"C:\Program Files (x86)\CollabNet\Subversion Server\svnserve.exe\" --service -r c:\svn\repository" displayname= "Subversion" depend= Tcpip start= auto
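
For reference, the relevant parts of the two conf files end up looking like this (the user name and password are placeholders):

conf/svnserve.conf:

[general]
anon-access = read
auth-access = write
password-db = passwd

conf/passwd:

[users]
ben = secret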

That’s it! Just connect to svn://localhost using TortoiseSVN and/or the other VS.NET providers. Remember to open up port 3690 to make the repository available over a network.

I find setting up the TTB (tags/trunk/branches) style structure initially pays off downstream, when activity like tagging or branching starts taking place.
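
The whole skeleton can be created in a single commit, something like:

svn mkdir -m "Initial TTB structure" svn://localhost/trunk svn://localhost/tags svn://localhost/branches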





Azure BizSpark Camp

12 04 2010

Last month I was invited to attend the Azure BizSpark Camp in Sydney. Given minimal detail about the event, 7 small Australian startups pitched business concepts to Microsoft, and were selected to compete for the prize of 5G’s of cold hard cash, a sturdy glass trophy that could do some serious damage in a fight, and potential exposure to some real concrete venture capital investors (“VC’s” for short if you’re the hip businessy type) – i.e. people willing to throw cash at your business concept with the intent of making more money in return.

The teams started to arrive at Microsoft’s Australian headquarters at North Ryde, Sydney around 8:30AM on Saturday morning. I knew I was at the right place when I saw a 20-strong group of pasty-looking nerds all armed (laptops) for a big weekend ahead.

Soon after registering, Chris Auld (an enthusiastic yellow-croc-wearing kiwi) from Intergen smashed the ice by delivering a condensed presentation on Microsoft Azure and the general principles behind scalable cloud architectures. Chris tossed out a few gems throughout his presentation, including:

– frequent use of the word “choice”
– XNA text books
– an Excel-based Azure cost forecasting calculator
– a colorful debate on the evils of the GPL
– some YSLOW techniques to reduce HTTP pressure, such as CSS sprites
– the shift in thinking that the cloud promotes, such as the movement away from traditional transactional relational storage to “NoSQL” highly replicated non-transactional storage models
– the renewed importance of good multi-threaded code (e.g. Parallel.For) with the likes of Azure worker roles
– a thought-stimulating project called lokad-cloud, which pitches itself as a .NET O/C (object to cloud) mapper for Windows Azure

…and much more!

After a solid and inspiring 3 hour effort, Chris wrapped up his presentation, and Catherine Eibner from Microsoft ran us through the broad agenda for the remainder of the weekend. The way I translated Catherine’s instructions was roughly as follows: “now you guys are Azure experts, go and build something Azure’y and awesome. By the way, you have a hard deadline of 4:00PM tomorrow evening, at which point you will be given 5 minutes to deliver a business pitch to a panel of 5 judges. Now get the hell out of here bitches!”. It quickly dawned on me that I wouldn’t be seeing much of my girlfriend that weekend.

My friend and I quickly found a vacant meeting room, a white board, and set up shop.

24 hours (2 worker roles, 1 web role and some Azure Blob and Table storage) later…

We’d built out the first working prototype of a concept known as eyeknowit, running on a local Azure development fabric. Think of photo streams combined with information. From this overarching idea stems a number of downstream ideas, like being able to acquire imagery tagged with particular things (e.g. the need for a particular type of photo for a newspaper), or being able to specialise the photo stream to a particular expertise (e.g. I have an interest in flowers).

I guess the judges thought the idea could gain some traction. We won.

In summary, at zero cost (other than time, sleep and daylight) to attendees, we got some “choice” training from Chris Auld, geeked out on Azure for a whole weekend, met some good people, and took some cash and slightly inflated egos away.

http://blog.bartjellema.com/2010/04/11/azure-bizspark-camp-sydney/

http://blogs.msdn.com/b/ausdev/archive/2010/04/12/bizspark-camp-australia.aspx





C# 3.0 Language Extensions

25 12 2008

I put together a little snippet to remind myself of some of the beautiful language extensions that shipped with C# 3.0.

namespace CSharpLanguageExperiment
{
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Program
    {
        static void Main(string[] args)
        {
            // Object Initialiser
            Person bill = new Person { FirstName = "Bill", LastName = "Gates", Age = 40 };

            // Type Inference
            var ben = new Person { FirstName = "Ben", LastName = "Simmonds", Age = 25 };

            // Anonymous Types
            var john = new { FirstName = "John", LastName = "Smith", Age = 18 };

            // Anonymous Delegate
            Func<string, bool> filter1 = delegate(string name)
            {
                return name.Length > 4;
            };
            filter1("hel");

            // Lambda Expression
            Func<string, bool> filter2 = x => x.Length > 4;
            filter2("foobar");

            // Extension Method
            ben.GetData();

            // Queries
            List<Person> people = new List<Person>();
            people.Add(bill);
            people.Add(ben);
            Func<Person, bool> filter = x => x.Age > 30;
            IEnumerable<Person> exp = people.Where(filter);

            foreach (Person person in exp)
            {
                Console.WriteLine(person.GetData());
            }
        }
    }

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }

        public override string ToString()
        {
            return String.Format("{0} {1}",
                FirstName,
                LastName);
        }
    }

    // Extension Method
    public static class PersonExtension
    {
        public static string GetData(this Person person)
        {
            return String.Format("Name: {0} {1} Age: {2}",
                person.FirstName,
                person.LastName,
                person.Age);
        }
    }
}





70-235 Certified

21 12 2008

A few weeks ago I sat the 70-235 exam. Coming from a real-world perspective, I wasn’t overly impressed with the exam. To me it felt mind-numbingly robotic and very focused on the 2006 R1 feature set (e.g. BAM and BRE). However, the exam successfully probed some interesting areas within BizTalk, encouraging me to learn things about BizTalk that I would not otherwise get exposure to.

Patrick Wellink’s post “How to prepare for the BizTalk 2006 Exam” concisely summarised how to prepare for this exam. At first I sensed some sarcasm in Patrick’s post, but now, having successfully passed the exam, I find it to be scarily accurate.

  1. Create a nice BizTalk solution
  2. Use party resolution
  3. Use correlation
  4. Send a message to a web-service that requires authentication
  5. Use a business rule with a database fact
  6. Use business rules that assert and retract
  7. Make sure you deploy the solution
  8. Run messages through it
  9. See what happens if the orchestrations are not started
  10.   See what happens if the rules are not deployed
  11.   Remember the name of each tool you use (e.g. orchestration view in VS.NET)
  12.   Know the stages of the pipelines
  13.   Finally create a BAM View in Excel
  14.   Deploy the BAM View
  15.   Deploy the tracking profile
  16.   See if it works through the BAM Portal
  17.   Now modify the BAM view
  18.   Update the BAM view
  19.   Do some BAM with continuations
  20.   Know (memorise) your persistence points




XHTML Formatted Messages

21 12 2008

BizTalk messages are very XML centric. A while ago there was a requirement to produce a neatly formatted XHTML report, which was destined to be emailed.

At the time I stumbled across a customised version of the XslTransform pipeline component that ships with the BizTalk SDK. It demonstrates how to apply an XSLT transformation in the pipeline. The customised version I was playing around with pulled its XSLT from a SQL data store.

Regardless of where or how the transformation is done, we needed to produce an XHTML document as a result. The thing with XHTML is that it is just that: it’s XML. An XSD schema can be produced from a well-formed piece of XHTML. Therefore it is possible to create a strong message type (e.g. FooReport.xsd) which can then be pub/sub’ed with BizTalk.

High level steps:

  1. Generate the document schema using xsd.exe (see the example after this list).
  2. Write the necessary transformation using BizTalk mapper, using the above as the destination schema.
  3. Depending on the client that will be picking up and rendering the XHTML (e.g. Outlook, Firefox), it is usually necessary to set the MIME type so the client knows to treat the content as XHTML. To do this from BizTalk, set the ContentType context property:
    errorReport(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
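
For step 1, feeding xsd.exe a well-formed sample of the report (the file name here is hypothetical) will infer a schema for you:

xsd.exe FooReport.xml /outputdir:.

The generated FooReport.xsd can then be added to the BizTalk project as the destination schema for step 2.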




BizTalk 2006 Install Problems on WinXP

21 12 2008

Today while doing some vanilla BizTalk 2006 R2 installs, I discovered the installer was choking with:

Error 5003.Regsvcs failed for assembly C:\Program
Files\Microsoft BizTalk Server 2006\Microsoft.BizTalk.Deployment.dll.
Return code 1.


Awesome. Other cases of a malfunctioning regsvcs executable have been reported. For reasons that remain unknown to me, regsvcs.exe stopped functioning following the installation of SP1 for VS.NET 2005 and .NET 2.0. regsvcs.exe appeared to be intact, but invoking any of its functionality (including listing its command line help) would return nothing, hence the return code 1 problem.

Solution: repair the .NET Framework 2.0 binaries. I grabbed a fresh copy of the service pack and reinstalled it. It is important that regsvcs.exe functions correctly; you can test it after reinstalling the service pack by running "regsvcs.exe /?". If it lists out the help, you’re good to go.