Executing Scripts during EF Code First Migrations

September 20th, 2012

I’ve spent the last two days converting a training application from Model First to Code First and enabling Migrations.  I knew I wanted to use the Migrations functionality on a new project that I’m starting in a few weeks, and converting an existing application seemed the best way to really get my hands dirty.  I must say, I love the possibilities that this new feature set opens up, especially for working within the development team.  No more “Oh, yeah, I updated the database last night, I forgot to check in the script, I’ll send it right away.”  The development code will just update itself.  Glorious.

One issue I ran into is the ability to execute an arbitrary script as part of a migration.  I wanted to do this so that the initial configuration would also create the ASP.NET membership provider objects and the SQL Server ELMAH objects that are used behind the scenes.  One less thing to think about for a new developer or when pushing into a new Dev/QA environment.

After much googling, it seemed that there is no good way to accomplish this task.  The migrations code exposes a Sql() command that will execute one SQL statement, and you can do the same thing in the Configuration.Seed function by accessing the context directly.  Either way, it’s not as clean as just handing an entire script over to SQL Server for execution.

I had a fully executable script, so instead of trying to parse it out manually, as almost every post I found suggested, I worked around it by using this code:
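The workaround looks roughly like this (the resource names and the GetStatements helper are illustrative; adjust them to match your own embedded scripts):

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity.Migrations;
using System.IO;
using System.Reflection;

public partial class InitialCreate : DbMigration
{
    public override void Up()
    {
        // ...generated schema code...

        // Execute each embedded script one statement at a time.
        foreach (var statement in GetStatements("MyModel.Scripts.aspnet_membership.sql"))
            Sql(statement);
        foreach (var statement in GetStatements("MyModel.Scripts.elmah.sql"))
            Sql(statement);
    }

    // Reads an embedded resource script and splits it on the "GO" batch
    // separator. Splitting on "GO\r\n" (with the newline) is deliberate:
    // the aspnet scripts contain GOTO statements that a plain "GO" split
    // would mangle.
    private static IEnumerable<string> GetStatements(string resourceName)
    {
        using (var stream = Assembly.GetExecutingAssembly()
                                    .GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd()
                         .Split(new[] { "GO\r\n" }, StringSplitOptions.RemoveEmptyEntries);
        }
    }
}
```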

I embedded the two scripts into my model assembly and then split them apart to execute each statement one at a time.   I had to split on “GO\r\n” because the aspnet objects actually use GOTO in them.

It feels a little hackish, but it gets the job done and simplifies both the initial creation and the ongoing maintenance of these objects.   Which brings us back to the moment a new developer starts:

Dev: “How do I set up the database?”
Everyone: “Run the app.”



Categories: Coding

How to Solve Issues Managing Users on Projects after Upgrade to TFS 2012

September 12th, 2012

This past weekend we upgraded our server to TFS 2012. After we finished the upgrade, I noticed that I was getting errors while trying to manage permissions on a few of our projects. This was odd: my user account is a TFS Admin, so there shouldn’t be anything I’m not allowed to do, and I hadn’t heard of any large security changes delivered in 2012.


After some investigation I found that the only projects I was having issues managing were the projects that had existed since the early days of our server. Our server has existed since 2009, and this makes this its second major version upgrade of TFS. As with any permissions issue in TFS, you can use TFSSecurity to gather extensive information on the security configuration of almost any object in TFS.

Using that tool, I was able to generate this table:

Project created in 2008, upgraded to 2010 and now to 2012 (after following the advice of a forum poster about running a TFSSecurity script to add the ManageMembership permission to this project)

[+] Read [Project]\Contributors
[+] ManageMembership [Project]\Contributors
[+] Read [Project]\Readers
[+] Read [Project]\Build Services
[+] ManageMembership [Project]\Build Services
[+] Read [Project]\Project Administrators
[+] Write [Project]\Project Administrators
[+] Delete [Project]\Project Administrators
[+] ManageMembership [Project]\Project Administrators

Project Created in 2010 and upgraded to 2012

[+] Read [Project]\Builders
[+] Read [Project]\Project Administrators
[+] Write [Project]\Project Administrators
[+] Delete [Project]\Project Administrators
[+] ManageMembership [Project]\Project Administrators
[+] Write [Project]\Build Editors
[+] Delete [Project]\Build Editors
[+] ManageMembership [Project]\Build Editors
[+] Read [Project]\Contributors
[+] Read [Project]\Readers
[+] Read [PC]\Project Collection Valid Users
[+] Read [PC]\Project Collection Service Accounts
[+] Write [PC]\Project Collection Service Accounts
[+] Delete [PC]\Project Collection Service Accounts
[+] ManageMembership [PC]\Project Collection Service Accounts
[+] Read [PC]\Project Collection Administrators
[+] Write [PC]\Project Collection Administrators
[+] Delete [PC]\Project Collection Administrators
[+] ManageMembership [PC]\Project Collection Administrators

Project Created in 2012

[+] Read [Project]\Readers
[+] Read [PC]\Project Collection Build Service Accounts
[+] Read [Project]\Build Administrators
[+] Read [PC]\Project Collection TestService Accounts
[+] Read [Project]\Contributors
[+] Read [Project]\Project Administrators
[+] Write [Project]\Project Administrators
[+] Delete [Project]\Project Administrators
[+] ManageMembership [Project]\Project Administrators
[+] Read [PC]\Project Collection Valid Users
[+] Read [PC]\Project Collection Service Accounts
[+] Write [PC]\Project Collection Service Accounts
[+] Delete [PC]\Project Collection Service Accounts
[+] ManageMembership [PC]\Project Collection Service Accounts
[+] Read [PC]\Project Collection Administrators
[+] Write [PC]\Project Collection Administrators
[+] Delete [PC]\Project Collection Administrators
[+] ManageMembership [PC]\Project Collection Administrators


For all intents and purposes, the projects created in 2010 and 2012 are identical from a permissions standpoint.  The project created in 2008, however, was missing several of the key ACL entries shown above.  To be honest, I’m not sure whether they were there and the upgrade removed them, or if something else happened; what I do know is that before the upgrade I was able to edit these projects, and after it I was not.

As I mentioned previously, the TFSSecurity program is very powerful: not only can it give you security information, it can also set it, irrespective of what the UI will allow you to do.  This is how I fixed my issue:

First, you need to get the GUID of the project whose permissions you need to adjust.  I do not know of a way to get it from the 2012 tools, so the easiest route is VS2010: open Team Explorer, right-click the project in question, and select Properties.  The GUID will be in the field whose value looks like this:

Once you have the GUID of your project, you need to get the SID of your project collection administrators group.
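TFSSecurity can dump the group, SID included, with something along these lines (the exact identity specifier for /imx may differ on your version; check tfssecurity /? if it complains):

```
tfssecurity /imx "Project Collection Administrators" /collection:http://yourserver:8080/tfs/DefaultCollection
```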

Now that we have both of those pieces of information, we execute the following command:
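Reconstructed from the breakdown below, the command has this shape (substitute your project GUID, the group SID, and your collection URL):

```
tfssecurity /a+ Identity "vstfs:///Classification/TeamProject/<GUID>" ManageMembership sid:<SID> ALLOW /collection:http://yourserver:8080/tfs/DefaultCollection
```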

Piecing this command apart looks something like this (full documentation can be found here):

  • /a+, adds a permission (/a just shows permissions)
  • Identity, the security namespace we’re modifying (here, the Identity namespace)
  • vstfs:///Classification/TeamProject/*GUID*, the specific object we’re modifying security on; in this case, a project
  • ManageMembership, the specific permission in the security namespace that we’re modifying
  • SID, the user or group we’re performing the security action on
  • ALLOW, we’re allowing them to perform the action
  • /collection, the specific TFS project collection that the action will be run against

Voila, you should now be able to manage groups on the project again.  I ran the same command twice to see if it would cause an issue, and the second run exited without doing anything.  Given that test, I believe it’s safe to say that writing a script to run this against every project in your TFS instance, especially if you have a lot of old projects, would be a better idea than doing them all by hand.

If you’re experiencing an issue similar to this, hopefully this solves your problem!


P.S. You can see my original forum post here.
(This post updated for more explanation of the command and to fix a display problem with my original example)

Categories: Troubleshooting

Unit testing async Functions in VS2012

August 20th, 2012

In an attempt to get a jump start on writing Windows Store apps (is that what we’re calling them now?), I created a simple library targeting WinRT with some unit tests over the weekend. During that process, I got stumped for a good hour on how to properly write a unit test for an async function.

Let me walk you through this:

First, this is the code that I started with:
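It was a simple async function in the WinRT library, along these lines (a simplified stand-in for the real code):

```csharp
using System.Threading.Tasks;

public sealed class DataProcessor
{
    // The function under test: async, and originally returning void.
    public async void ProcessAsync()
    {
        await Task.Delay(100);   // stand-in for real async work
        // ...process the results...
    }
}
```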

and I eventually found that you create your unit test like this:
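The test shape looks roughly like this (ProcessAsync stands in for whatever function you are testing):

```csharp
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestPlatform.UnitTestFramework;

[TestClass]
public class DataProcessorTests
{
    [TestMethod]
    public async Task ProcessAsync_CompletesWithoutError()   // async, returns Task
    {
        var processor = new DataProcessor();
        await processor.ProcessAsync();   // compile error while ProcessAsync returns void
    }
}
```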

Note that the test itself is async and returns a Task. This is a requirement for the unit test. If the unit test returns void, you will not get a compile error, but the unit test will not show up in the Unit Test Explorer window.

The above code, however, doesn’t compile for a different reason.

The logic behind this is that you can only await something that returns a Task, because of the way the framework works. The first hits on Google said to do this:
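That is, wrap the call (again with an illustrative ProcessAsync as the function under test):

```csharp
[TestMethod]
public async Task ProcessAsync_CompletesWithoutError()
{
    var processor = new DataProcessor();
    // Compiles, but as described next, the test returns as soon as the
    // called function hits its first await.
    await Task.Run(() => processor.ProcessAsync());
}
```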

Using the static function Task.Run() against the function I wanted to execute would allow the test to wait on said function. Here’s where I ran into a peculiar issue (one which I, of course, cannot reproduce now that I’m trying to blog about it): it would compile and run, but when it hit the first await in the called function, the unit test would immediately return. Thus my test would always pass, because no exception was thrown, no data was ever processed after the await, etc.

What I eventually discovered was that instead of wrapping the void function with Task.Run, I should make the function return a Task instead:
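In other words, change the signature (illustrative names as before):

```csharp
// Returning Task instead of void lets callers, and tests, await completion.
public async Task ProcessAsync()
{
    await Task.Delay(100);   // stand-in for real async work
    // ...process the results...
}
```

With that change, `await processor.ProcessAsync();` in the test compiles directly, with no Task.Run wrapper needed.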

After that update, my tests started waiting like they were supposed to.

There is a good follow-up discussion on Stack Overflow about returning void vs. returning a Task, and what the purpose of even allowing an async void function is.

Categories: Coding

Activation Issue with Windows 8 Pro Upgrade

August 18th, 2012

I did an upgrade to Windows 8 Pro last night from Windows 7 Ultimate (both from my MSDN account). On the activation screen I was getting an error similar to “file or description cannot be found”, both in the UI and as a popup when I clicked the Activate button. Eventually I found out that I needed to change the license key; however, the button that the normal instructions said to use to change it was not there.

You can change your key from an elevated command prompt by executing the following command:
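The Xs below stand in for your own 25-character product key:

```
slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
```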

When I went back into the activation screen, the UI still showed the same error; however, when I clicked the Activate button it went through, and now everything looks and is working as intended.

Hopefully this helps some of you avoid the headache of trying to track down the root of this problem. I, unfortunately, did not take a screenshot; if my laptop has the same problem, I’ll make sure to take one and update the post.

Categories: Troubleshooting

Entity Framework Objects with Proxies, MVC4, and Telerik Grids.

August 8th, 2012

I’m currently working on a side project that uses MVC4 and EF5. Some of the EF objects have bi-directional navigation properties on them due to how they’ll end up being rendered.

I was trying to wire up a Telerik MVC grid for data entry and received this error on the grid select: “A circular reference was detected while serializing an object of type ‘System.Data.Entity.DynamicProxies.CourseType’”.

After some googling, the easiest answer I found was to disable proxy generation. That was fine, as I wasn’t using lazy loading. Once I did that, I started getting this exception: “The RelationshipManager object could not be serialized. This type of object cannot be serialized when the RelationshipManager belongs to an entity object that does not implement IEntityWithRelationships.” So the easy answer didn’t work.

At first I tried to set up the JSON serialization settings of the global formatter, since MVC4 uses JSON.Net by default.
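That attempt looked roughly like this:

```csharp
using System.Web.Http;
using Newtonsoft.Json;

// Tell JSON.Net to skip reference loops for the global formatter.
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings
    .ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
```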

That did not solve the problem.

I then found a code sample on Telerik’s site that accomplished the goal, but it seemed overly complex (it may have just been the only way to solve it in MVC3). Based on that code sample, and using JSON.Net, this is what I came up with as a final solution:
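A sketch of that solution: a custom JsonResult that serializes with JSON.Net instead of the built-in JavaScriptSerializer, ignoring reference loops (the class name and details here are illustrative, simplified from the Telerik sample):

```csharp
using System.Web.Mvc;
using Newtonsoft.Json;

public class JsonNetResult : JsonResult
{
    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = string.IsNullOrEmpty(ContentType)
            ? "application/json"
            : ContentType;

        if (Data != null)
        {
            // Ignore, rather than throw on, circular references.
            var settings = new JsonSerializerSettings
            {
                ReferenceLoopHandling = ReferenceLoopHandling.Ignore
            };
            response.Write(JsonConvert.SerializeObject(Data, settings));
        }
    }
}
```

The grid actions (or a Json() override in a base controller) then return a JsonNetResult instead of the stock JsonResult.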

Update: For this to work, Proxy generation still needs to be disabled. I didn’t make that clear in the original text.

Categories: Coding

WCF Proxies using Dependency Injection

July 31st, 2012

Last time I noted that we use dependency injection for a lot of things on my project, and that one of the big pieces is the use of DI for our WCF services. In this post I’ll cover what the benefits of this approach are and how we accomplished the task.

The benefits of writing our WCF service layer this way are the following:

1) It abstracts away the fact that it’s a service call. To the calling code, it’s just a DI interface.
2) It enables easy mocking for unit testing (via the DI container).
3) It lets us easily swap out what could be a service call or a local call with the opposite. This makes local integration testing and debugging easier.
4) It allows us to modify the interface without touching a bunch of projects. Before we did this, it took a while to right-click and refresh the auto-generated service reference every time an interface was updated.
5) It lets us configure at runtime which services are in the container and what their settings are.
a) This includes interface, URL, message size, graph size, etc.
b) One of our requirements was a centralized configuration system, not the web.config on 20 servers.

The con is that you aren’t using the auto-generated service reference, which means all of the niceties of the auto-generated async methods do not exist. We were handling async ourselves because of long-running IO anyway, so this was not a big deterrent for us.

The first step was creating a wrapper class that would properly dispose of the WCF channel. You cannot just have the interface implement IDisposable and put it inside a using block, as that will call the remote dispose method, and not the local one.
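The shape of that wrapper is roughly this (the real version is in the gist linked at the end of the post; names here are illustrative):

```csharp
using System;
using System.ServiceModel;

public static class ServiceProxyHelper
{
    // Runs an action against the service interface, then disposes the
    // underlying channel safely: Close() on success, Abort() on failure.
    public static void Use<TService>(TService proxy, Action<TService> action)
    {
        var channel = proxy as ICommunicationObject;
        try
        {
            action(proxy);
            if (channel != null)
                channel.Close();    // it's a real WCF channel
        }
        catch
        {
            if (channel != null)
                channel.Abort();    // never Close() a faulted channel
            throw;
        }
        // If proxy wasn't an ICommunicationObject, it was a local
        // implementation and there is nothing to clean up.
    }
}
```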


As you can see, this function chooses the appropriate action depending on whether the class is a WCF proxy or just a local interface.

The next step was to create some way to get the configuration data from a remote location. In our case that is a service (or a DB call); however, that remote location could be the web.config, a separate config file, whatever you come up with.
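The settings class ended up looking something like this (all property names are illustrative, not our exact schema):

```csharp
using System;

public class ServiceConfiguration
{
    // The original four settings.
    public Type ContractType { get; set; }      // the DI interface to register
    public string Url { get; set; }             // endpoint address
    public string BindingType { get; set; }     // e.g. "basicHttpBinding"
    public bool IsLocal { get; set; }           // local implementation instead of a channel?

    // Added as the project progressed.
    public long MaxReceivedMessageSize { get; set; }
    public int MaxItemsInObjectGraph { get; set; }
    public TimeSpan SendTimeout { get; set; }
    public TimeSpan ReceiveTimeout { get; set; }
}
```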


We originally started with just the first four properties; however, as we progressed through the project, we needed to adjust the rest of the settings as well.

After we had defined the settings we wanted to store, it was time to set up the dependency injection container. For the most part, it’s a simple wrapper around the Unity container. The important setup is done in the registration of the interface.

Unlike the normal use of DI, where we just tie a type to an interface, we want to tie an interface to a function call. We accomplish this in Unity through the InjectionFactory class. The function CreateService is then called every time we make a Resolve<> call on the Unity container for one of those registered interfaces.
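Stripped down, the registration looks like this (container is the wrapped UnityContainer; the settings object and its property names are illustrative):

```csharp
using Microsoft.Practices.Unity;

// Tie each configured interface to a factory function rather than to a
// concrete type. CreateService runs on every Resolve<> of that interface.
foreach (var config in serviceConfigurations)
{
    var captured = config;   // avoid the closure-over-loop-variable trap
    container.RegisterType(captured.ContractType,
        new InjectionFactory(c => CreateService(captured)));
}
```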


This function uses reflection to new up a ChannelFactory<TInterface> and call CreateChannel, which creates a WCF channel with the configured binding and endpoint.
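In sketch form (config is the settings object described above; the real code also applies the rest of the configured binding values):

```csharp
using System;
using System.ServiceModel;

private static object CreateService(ServiceConfiguration config)
{
    // Build ChannelFactory<TContract> for the configured interface type.
    var factoryType = typeof(ChannelFactory<>).MakeGenericType(config.ContractType);

    var binding = new BasicHttpBinding
    {
        MaxReceivedMessageSize = config.MaxReceivedMessageSize,
        SendTimeout = config.SendTimeout
    };

    var factory = Activator.CreateInstance(
        factoryType, binding, new EndpointAddress(config.Url));

    // CreateChannel() returns a proxy implementing the contract interface.
    return factoryType.GetMethod("CreateChannel", Type.EmptyTypes)
                      .Invoke(factory, null);
}
```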


After some additional maintenance plumbing and creating some overrides for mockability, the end result is a singleton wrapper that can be used anywhere in your codebase to make WCF calls, such as this:
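A call site then looks something like this (the container wrapper, service, and method names are invented for the example):

```csharp
// Resolve the interface and invoke it; the container decides whether this
// is a WCF channel or a local implementation.
var service = ServiceContainer.Instance.Resolve<IOrderService>();
ServiceProxyHelper.Use(service, s => s.SubmitOrder(order));
```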


The full code can be found here:  https://gist.github.com/3222717
How have you approached this problem in your own architectures?



Categories: Coding

Using Attributes to Register Dependency Injection Interfaces

June 27th, 2012

We use dependency injection to isolate various layers of our infrastructure for testing purposes.  As we moved past 20-30 classes in the layers, it became non-trivial to maintain the default set of interface registrations.  There used to be a large file that would set up all the interfaces with their types (thus knowing everything about all interfaces and implementations, which sort of defeated the purpose of DI).   We solved this problem by creating a custom attribute for the registration.  We scan all of the assemblies for this attribute on application startup and use the information in the attribute to tie everything together.
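In rough form, the attribute and the startup scan look like this (names and details are simplified stand-ins for our real code):

```csharp
using System;
using Microsoft.Practices.Unity;

// Marker attribute placed on each implementation class, e.g.
// [RegisterDependency(typeof(IUserRepository))]
[AttributeUsage(AttributeTargets.Class)]
public class RegisterDependencyAttribute : Attribute
{
    public Type ContractType { get; private set; }

    public RegisterDependencyAttribute(Type contractType)
    {
        ContractType = contractType;
    }
}

public static class RegistrationScanner
{
    // At application startup: scan the loaded assemblies and wire up
    // everything tagged with the attribute.
    public static void RegisterAll(IUnityContainer container)
    {
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            foreach (var type in assembly.GetTypes())
            {
                var attr = (RegisterDependencyAttribute)Attribute.GetCustomAttribute(
                    type, typeof(RegisterDependencyAttribute));
                if (attr != null)
                    container.RegisterType(attr.ContractType, type);
            }
        }
    }
}
```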



In the next blog post, I’ll show how we take this concept to the next level with our WCF endpoints to avoid having service references in 100 different projects.

Categories: Coding

Shim Example – Not ShimDateTime

June 12th, 2012

I spent some time today writing unit tests using the new Fakes framework in VS2012.  It took me a little bit to figure out exactly what was going on, and I didn’t find many examples to help along the way (other than the usual ShimDateTime example).  Either my google-fu is off, or there just isn’t much out there yet, so I figured I would post a quick example to help get you started.

First step: to create a Fake of a system assembly (or any assembly, for that matter), right-click the reference in the unit test project’s references and click “Add Fakes Assembly”.  That will generate a Fakes assembly and add it to your project.  Once that is created, all namespaces in the original assembly will have a corresponding .Fakes namespace (e.g. System.IO.Fakes).

In this example, I have a function that cleans up some temporary data that my application creates.  I want to assert that the function does not throw exceptions if, for some reason, the temp folders cannot be deleted.

To start using the Shim objects, I need to define my ShimsContext:
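The context is just a using block around the body of the test (test name invented for the example):

```csharp
[TestMethod]
public void Cleanup_DoesNotThrow_WhenDeleteFails()
{
    using (ShimsContext.Create())   // scopes any shims to this block
    {
        // ...shim setup and the code under test go here...
    }
}
```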


This context keeps the shim scoped to this unit test or a subset of the unit test.  If the context is not defined, you’ll get an error when running the test.  I’m very glad that Microsoft set up the Shims in this fashion.  The reason is that Shims are AppDomain-wide and, left unchecked, could change how all of your unit tests execute.  That would lead to a lot of weird issues that would be hard to track down as you’re executing your unit test suite.   How would you like it if your unit test threw IO exceptions when run on the build server, but that same test, run in isolation on your machine, passed perfectly?  I’d be pulling my hair out after 10 minutes, most likely.


Now that I’ve wrapped my unit test in a ShimsContext, I can get to work on the meat of the test.  We’ve already created the Fakes assembly for System, so it’s a matter of getting the Shim object for DirectoryInfo and making it return what I want it to; in this case, throwing an exception.
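Inside the ShimsContext, that looks like this (CleanUpTempFolders is a stand-in for the real function under test):

```csharp
using System.IO;
using System.IO.Fakes;   // generated by "Add Fakes Assembly" on System

// Every DirectoryInfo.Delete(bool) call in this context now throws.
ShimDirectoryInfo.AllInstances.DeleteBoolean = (instance, recursive) =>
{
    throw new IOException("simulated delete failure");
};

// Act: the cleanup routine should swallow the exception rather than rethrow.
new TempDataCleaner().CleanUpTempFolders();
```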


I use the shim class for DirectoryInfo and set the AllInstances.DeleteBoolean property to my Action.  One thing to note: it appears that the naming convention is FunctionArg0Arg1 on the Shim properties.  Make sure you are setting the action for the specific function override that you’re using, and not the first one you find; for DirectoryInfo.Delete, there was both a Delete and a DeleteBoolean.  The AllInstances property lets me state that any instance of this class follows this rule, not just a specific instance.   From there, setting the override is a simple lambda expression to define the Action: (instance, parameters) => { function; }.  I can’t think of an easier way to express my goal as a developer than that.

If you have not looked into the new testing features of VS2012, I suggest you watch this Tech-Ed presentation by Peter Provost.  It’s a great primer on the new features.   I have been very impressed by the Fakes framework so far, and the Shim feature just knocks it out of the park.  All in all, once you get the basic syntax of Fakes down, it will become your new best friend while writing unit tests.



Categories: Coding

Building WiX MSIs in TFS Preview

June 11th, 2012

I spent the weekend getting one of our more complicated projects building on the hosted build servers of TFS Preview.  The last step was integrating our WiX deployment packages.   I upgraded to WiX 3.6 RC before I started, just in case.

After creating a demo solution with a simple WiX project, this is the build output that I received:


Ok, that makes sense: WiX isn’t installed on the hosted build server.  Since we cannot install WiX on the build server, we must upload everything we need into source control.  There are two main components here: the WiX binaries and the WiX MSBuild targets files.   I have a folder in source called “BuildStuff” where I put everything that I’m going to reference as part of a build.  I created two sub-folders, WiXBin and WiXTarget.


I copied all of the files from C:\Program Files (x86)\WiX Toolset v3.6\bin into the WiXBin folder, and all of the target files from  C:\Program Files (x86)\MSBuild\Microsoft\WiX\v3.x into the WiXTarget folder.

Now we have to tell MSBuild where to find our new files.  On the Process tab of the build definition is a setting called “MSBuild Arguments”, which allows us to pass overrides directly to MSBuild.  In this case, I passed the following (this should be all one line; I wrapped it so you could read it):


/p:WiXTargetsPath=c:\a\src\BuildStuff\WiXTarget\wix.targets; WixTasksPath=c:\a\src\BuildStuff\WiXBin\WiXTasks.dll; WixToolPath=c:\a\src\BuildStuff\WixBin; WixExtDir=c:\a\src\BuildStuff\WixBin


Given that we have more than one MSI project and they reside at different levels in the folder structure, a relative path to these files didn’t seem like the best long-term solution.  At least for now (no telling if MS will change it), the source directory for a project being built is C:\a\src.

Now that we have MSBuild set up correctly to build our WiX project file, let’s run the build again:

If you’ve set up a new build server before, you’ve probably seen this error.  It’s basically telling you that the account running the build doesn’t have administrator permissions.  Since we’re running in a hosted environment, this is not something we can change; our only option is to disable ICE validation.

By opening the WiX project settings and going to Tool Settings, we can set WiX to suppress the ICE validation step.



After that setting gets checked in, we re-run the build, and we see the MSI in our drop location.


Having the ICE validation steps in the build is a nice-to-have, in that they ensure the MSI is consistent right away; however, those validations can be run locally by an infrastructure resource using smoke.exe before the MSI is deployed into your QA environment.



Categories: Coding

Using XUnit with Team Build 2010

June 7th, 2012

My team is making the switch from MSTest to XUnit for our unit testing framework.  As a result of this change, I needed to enable our TFS build to run XUnit and publish the results.  Searching around, I found a couple of old posts like this one; however, all of them left me with a decent amount of work to do.  I wanted to create something that I could easily distribute to the rest of my teams and that would be simple to integrate.

The first step was to create a workflow task to execute XUnit against my test DLLs.  My goal was for the task to work as closely to the MSTest task as possible.  That meant passing in a file spec such as **\*tests*.dll.  To make it easy on the consumer, we’re also not going to assume a path, but have that as an argument as well.


Now that we’ve created the arguments that we need, we can start dropping in the activities.  The first activity is the TFS Build “Find Matching Files” activity, which uses our file spec to find all of the unit test DLLs that were built.   Next we confirm that we have items in our file collection, issuing a warning (not shown) if we couldn’t find any.  There is no TFS Build 2010 XUnit activity from the XUnit project, so we’re going to invoke the command-line runner.  That runner can only run one DLL at a time, which means the next step must be to loop over the test DLLs with the ForEach activity.


Now that we have our individual test DLLs processing, the next step is to invoke XUnit against each of those files.  We do this with an InvokeProcess activity.    The file name argument is the path to the XUnit console runner, and the arguments passed are “<path> /silent /nunit results.xml”.
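For each test DLL, then, the InvokeProcess call ends up running something like this (paths are illustrative; the number suffix on the results file comes from the counter described below):

```
xunit.console.exe C:\a\bin\MyProject.Tests.dll /silent /nunit results1.xml
```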


The next step is to publish the results to TFS.  For this I created a coded build activity that mimics the functionality of the NUnit publish task from the Community Build Extensions.


The catch with the way the publish works is that the result file must be different for each test DLL run, so we add a counter and append it to the results XML name as we go through the files.   The full activity can be downloaded and built from GitHub.  Now that we have an xUnit build activity, it’s just a matter of dropping it into a full build workflow.  This is the easy part, as we’re just removing the MSTest portions entirely and replacing them with the XUnit activity we just finished writing.

And voila, we have XUnit tests running in a TFS 2010 build process and publishing their results.  Hopefully this helps you get XUnit integrated into your own process.



Full Source for XUnit activity on GitHub

Categories: Coding