Archive for April, 2008

Manage your transaction chaos with Spring Framework

Spring is a popular application framework for Java. It has also been ported to .NET, where it is called Spring.NET.

The framework is huge, but what I want to focus on is the part that deals with transaction management.

If you have ever worked with transactions (say, SqlTransaction), you know how messy it can get: you have to keep your connection open and pass a transaction around to many different functions that are making updates or queries. Then you make one small change, forget to use the transaction object, and you’ve caused a deadlock!

How can you fix this? By writing a generic transaction/connection object? Well, that’s one way… but how about using the Spring.NET framework to handle the transactions?

Spring.NET lets you specify when you need a transaction: you begin and end the transaction explicitly, but the actual database calls never take a connection string or transaction argument. They automatically enlist in the transaction in progress if there is one; otherwise they use a regular connection and disconnect once the query completes.
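The reason no connection string appears at the call site is dependency injection: the data-access helper and the transaction manager are both wired to the same database provider in the XML configuration. Here is a rough sketch of what that wiring might look like; the object names, provider name, and connection string below are my own assumptions for illustration, so check the Spring.NET documentation for the exact syntax of your version:

```xml
<objects xmlns="http://www.springframework.net"
         xmlns:db="http://www.springframework.net/database">

  <!-- hypothetical provider definition; adjust for your server -->
  <db:provider id="DbProvider"
               provider="SqlServer-2.0"
               connectionString="Data Source=(local);Initial Catalog=Test;Integrated Security=SSPI"/>

  <!-- the "adoOperations" helper used in the test below -->
  <object id="adoOperations" type="Spring.Data.Core.AdoTemplate, Spring.Data">
    <property name="DbProvider" ref="DbProvider"/>
  </object>

  <!-- the transaction manager shares the same provider -->
  <object id="transactionManager"
          type="Spring.Data.Core.AdoPlatformTransactionManager, Spring.Data">
    <property name="DbProvider" ref="DbProvider"/>
  </object>
</objects>
```

Because the helper and the transaction manager share one DbProvider, any query executed while a transaction is open automatically runs on that transaction’s connection.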

Here is a code snippet demonstrating programmatic transaction management, taken from the Spring.NET source code (yes, it’s open source!):

 

[Test]
public void ExecuteTransactionManager()
{
    // define how the transaction should propagate
    DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    def.PropagationBehavior = TransactionPropagation.Required;

    // begin the transaction
    ITransactionStatus status = transactionManager.GetTransaction(def);

    int count = 0;
    try
    {
        // note: no connection or transaction is passed here
        count = (int)adoOperations.ExecuteScalar(
            CommandType.Text, "SELECT COUNT(*) FROM TestObjects");

        // other AdoCommands can be executed within the same tx
    }
    catch (Exception)
    {
        transactionManager.Rollback(status);
        throw; // rethrow without resetting the stack trace
    }
    transactionManager.Commit(status);

    // the test table happens to contain two rows
    Assert.AreEqual(2, count);
}

As you can see from this code, adoOperations.ExecuteScalar is not passed any connection string. The same goes for ExecuteDataSet and so on.

This saves you a lot of headache: you just have to make sure you use the Spring data-access objects. The easiest way to implement this on an ASP.NET site is to make these objects static and initialize them in Application_Start. Just an idea, but it should work.

If you want to do more digging, Spring.NET has tons of stuff; it mostly focuses on dependency injection and loose coupling between your code and the data layer. I really like it, and I recommend you give it a shot.

It has proven itself in Java, and I think the framework is going to prove itself in .NET as well. At the very least, the source code is a prime example of a clean, well-designed object-oriented application, with full unit tests, sample code, XML documentation, thorough use of interfaces and inheritance, and even ORM (object-relational mapping) using NHibernate (again, a popular Java framework that has come to .NET).

Update: I originally called this "declarative transaction management"; it is actually "programmatic transaction management". – FIXED

Unit test your life!

If you are not unit testing your code, chances are you are not unit testing in your life.
 
If you aren’t unit testing, START now! At the very least, do some "manual" unit testing of your code. How? Try running your code on a very basic case. Then try a slightly more complicated case. Then another, and another. If you are smart, you are saving these cases using a testing framework like NUnit. If not, at least you can be confident, when your manager comes by, that you tried it comprehensively and it’s not going to crash while you are showing it to him, or even worse, in a demo to the team or to your big boss.
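If you have never written one, an NUnit test is just an ordinary method with an attribute on it. A minimal sketch of the "start basic, add complexity" approach (OrderParser and its behavior are hypothetical names I made up for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderParserTests
{
    [Test]
    public void EmptyInput_ReturnsZeroItems()
    {
        // start with the most basic case...
        Assert.AreEqual(0, OrderParser.Parse("").Count);
    }

    [Test]
    public void SingleItem_IsParsed()
    {
        // ...then add a slightly more complicated one, and so on
        Assert.AreEqual(1, OrderParser.Parse("widget,1").Count);
    }
}
```

Each case you try by hand becomes a test you can re-run forever, instead of a one-off experiment you have to trust your memory about.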
 
I recently ran into some problems in my life which I managed to solve amazingly well by doing “unit testing”…
 
First problem, my DVD burner was going awfully slow. I had some complex and messy setup including an external IDE card, two burners, two hard drives, and all I know is that at some point in time something went wrong and it started going really slow. What I don’t know, is how it happened.

Second problem, I was doing some video encoding/rendering, and for some reason it was doing something bizarre and the application VirtualDub kept looping over and over and would never end encoding the file. Again, I don’t know what happened.

How did I solve these problems?

UNIT TESTING!!

 
For the first problem, the DVD drive: I removed everything from my PC and set up a very basic system with one hard drive, one burner, etc. When I found it still wasn’t working, a quick check online resolved the issue, which was incorrect DMA settings. The system was pushing all the data through the CPU (PIO mode) instead of transferring it directly via DMA, which caused a massive slowdown. Once this worked, I quickly put my system back together, checking each case along the way (hard drive on the same IDE channel as the burner, on a separate IDE channel, and so on).

With the encoding problem I was again very confused, but by unit testing the situation I was able to resolve it. How? I tried encoding on a different machine, reinstalled the software, etc., and it was still having problems.

Finally I started from scratch: I removed the batch encoding, removed the DivX processing, and so on, and made each test pass. Once a test passed, I added another level of complexity, until I finally figured out that VirtualDub was looping infinitely because I had the "segment AVI file" option enabled. I don’t know why that was the problem, but by unit testing, I was able to resolve it.

Lesson to learn? Unit testing (if you can call it that) can really help you solve such issues. Start from the base case, and slowly work back towards what you need. After each case, write down the results.

Microsoft felt strongly enough about unit testing that Visual Studio 2008 has it built in (wahoo!). It also integrates nicely with NUnit or MbUnit (don’t ask me how, though).
 

Moving from SourceSafe to Subversion

This is an overview of some of the changes you will encounter when going from SourceSafe to Subversion.
Subversion is modeled on CVS, whereas SourceSafe is modeled on…. well… nothing (just a joke).
 
In SourceSafe, you work in a shared code base or line (known as a “Project” in SourceSafe) which you “share” into another line.
 
So you have a line that you work on, and when you want to start adding new features or changing the functionality of certain pages, you share the entire line (aka "Project", represented by a small green folder icon).

If our first project was called Development1.0, you can share this into a new project called Development1.1

At this point they are identical twins. In fact, they both point to the same item, much like a pointer: both reference the same data. If you change one of the files in Development1.0, it is instantly updated in Development1.1. So at this stage the shared copy doesn’t buy you anything, since the two are the same.
However, if you add a new page to either Development1.0 or Development1.1, it will not appear in the other project; it will only exist in the one you put it in.

Now, if you want to "branch" some of these files so that you can make changes to them in Development1.1, you select them and click "Branch files" (the icon is two arrows coming from a file).
This will effectively split them, so they are distinct independent copies.  All changes to either of the files will be maintained independently of the other file.

However, since all the other files are still shared, you might need to branch any dependent files as well. This is good in some ways: because of the shared workspace, others will see your changes, and integration happens sooner rather than later. However, your work can also break other people’s pages. If your page depends on some shared library or business object that is going to be modified, this can get quite messy!

As well, you might want to maintain a link going forward but not backward: you might want a file in Development1.1 to be shared into Development1.2 and Development1.3, but not back into Development1.0.
 
Keep in mind that no matter what tool you use, the more parallel development you try to perform, the messier and more difficult your job is going to be. AVOID HAVING TOO MUCH PARALLEL DEVELOPMENT – TRY TO CLOSE OLD BRANCHES AS SOON AS POSSIBLE. This will save you much headache and tons of time otherwise wasted managing lines and branches 🙂
 
The first recommendation when working with SourceSafe is to implement a continuous integration server, namely CruiseControl.NET, which will at the very least build automatically at regular intervals and apply a label each time, so that it is possible to revert to a previous version. This is super important, especially if you release your code to customers, because you may need to dig up an old piece of code and find out why it behaves the way it does.
 
If you do not label after every build or set of check-ins, it is nearly impossible to get back to a previous version, unless you are doing the very silly job of copying every old build into a folder and maintaining it by hand.
With labels you can roll back individual files, but an individual file might have depended on an older version of, say, a business object, so you may end up rolling back several files manually, which is very difficult.
 
So CruiseControl is basically a fancy batch file. It’s not too hard to set up at all; it might take you a few days, but once it’s done, it’s well worth it!
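To make the labelling idea concrete, here is a rough sketch of what a ccnet.config project for this setup might look like. Every name, path, and value below is a placeholder of mine, so check the CruiseControl.NET documentation for the exact elements your version supports:

```xml
<cruisecontrol>
  <project name="Development1.0">
    <!-- poll SourceSafe for changes (paths and credentials are placeholders) -->
    <sourcecontrol type="vss">
      <executable>C:\Program Files\Microsoft Visual SourceSafe\ss.exe</executable>
      <project>$/Development1.0</project>
      <username>builduser</username>
    </sourcecontrol>

    <!-- check for modifications every 60 seconds -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>

    <!-- stamp every build with an incrementing label, e.g. 1.0.37 -->
    <labeller type="defaultlabeller">
      <prefix>1.0.</prefix>
    </labeller>

    <!-- build the solution with Visual Studio -->
    <tasks>
      <devenv>
        <solutionfile>C:\builds\Development1.0\MySolution.sln</solutionfile>
        <configuration>Release</configuration>
      </devenv>
    </tasks>
  </project>
</cruisecontrol>
```

The labeller is the key piece here: it is what lets you map any shipped build back to an exact set of file versions later.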
 
As well, the names are different. For example, in Subversion you say "commit" instead of "check-in". This takes a bit of getting used to.
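For reference, here is a rough mapping of day-to-day operations, from memory, with SourceSafe on the left and the closest Subversion command on the right (the file names are just examples):

```
Get Latest Version   ->  svn update
Check Out            ->  (just edit the file; no command needed)
Check In             ->  svn commit -m "message"
Add Files            ->  svn add MyPage.aspx
Undo Checkout        ->  svn revert MyPage.aspx
Show History         ->  svn log MyPage.aspx
Show Differences     ->  svn diff MyPage.aspx
```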

Also, with Subversion there is by default no concept of "check out", which is quite scary for some people: anything you want to edit, you’ve got it! If you never turned off exclusive checkouts in SourceSafe, you are used to the idea that while you are working on a file, nobody else can. If you turn that feature off, you can have multiple checkouts, and the first person to check in their code wins; the rest have to merge the files manually, or depend on SourceSafe’s not-so-great merging tool.
 
With Subversion, unless there is a conflict (i.e. two people edited the same line of code), you will find that it automatically merges the changes in a smart way, which is very helpful and saves you a lot of time. You don’t even have to worry about checking out files. Keep in mind, though, that you should still check in your work as soon as possible; otherwise you may find the file has changed dramatically and you will have to merge the conflicts by hand. (A machine has no way of knowing which change was "right": in a conflict, you may have to drop your change, drop the other person’s change, or keep both.)
 
TortoiseSVN is the best way to get started with Subversion: it’s user-friendly, requires no database server (Subversion stores its data on the file system), and needs very little setup.

It’s not enough just to switch to Subversion; you also need to know some SCM best practices, or you will still fail.
 
It is very possible to continue working “sourcesafe style” in Subversion without realizing it and suffering the same problems.

Copy A Database Diagram To Another Database

For some reason SQL Server doesn’t have an easy way to script out database diagrams, unlike stored procedures, functions, etc.

Here is how you can move (or copy) a database diagram in SQL Server 2005:

use Old_Database
go

-- copy your database diagrams into a temporary table
-- (run all of this in a single session/query window: the temp
-- table disappears when the connection closes)
select * into #tempsysdiagrams from sysdiagrams

use New_Database
go

-- copy just the diagram you want into the new database
insert into sysdiagrams ([name], principal_id, version, definition)
select [name], principal_id, version, definition
from #tempsysdiagrams
where [name] = 'Name_of_your_Diagram'

That’s it, so easy.
