
Archive for the ‘Coding’ Category

Executing Scripts during EF Code First Migrations

September 20th, 2012

I’ve spent the last two days converting a training application from Model First to Code First and enabling Migrations.  I knew I wanted to use the Migrations functionality on a new project that I’m starting in a few weeks, and converting an existing application seemed the best way to really get my hands dirty.  I must say, I love the possibilities that this new feature set opens up, especially for day-to-day work within the development team.  No more “Oh, yeah, I updated the database last night, I forgot to check in the script, I’ll send it right away.”  The development code will just update itself.  Glorious.

One issue I ran into was executing an arbitrary script as part of a migration.  I wanted to do this so that the initial configuration would also create the ASP.NET membership provider objects and the SQL Server ELMAH objects that are used behind the scenes.  One less thing to think about for a new developer or when pushing into a new Dev/QA environment.

After much googling, it seemed that there was no good way to accomplish this task.  The migrations code exposes a Sql() method that will execute one SQL statement, and you can do the same thing in the Configuration.Seed method by accessing the context directly.  Either way, it’s not as clean as just handing an entire script over to SQL Server for execution.

I had a fully executable script, so instead of trying to parse it out manually, as almost every post I found said to do, I worked around it by using this code:
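Roughly, it looks like this.  This is a sketch rather than my exact code: the migration class, helper name, and resource names are illustrative, and the same splitting works just as well from Configuration.Seed via context.Database.ExecuteSqlCommand.

// Requires: System, System.IO, System.Reflection, System.Data.Entity.Migrations
public partial class InitialCreate : DbMigration
{
    public override void Up()
    {
        // ...the generated migration code...

        // Resource names are illustrative; these are the scripts embedded in the model assembly.
        RunEmbeddedScript("MyModel.Scripts.aspnet_membership.sql");
        RunEmbeddedScript("MyModel.Scripts.elmah.sql");
    }

    private void RunEmbeddedScript(string resourceName)
    {
        var assembly = Assembly.GetExecutingAssembly();
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            // Split on "GO\r\n" rather than a bare "GO" so GOTO statements survive intact.
            var batches = reader.ReadToEnd()
                .Split(new[] { "GO\r\n" }, StringSplitOptions.RemoveEmptyEntries);

            foreach (var batch in batches)
            {
                if (!string.IsNullOrWhiteSpace(batch))
                {
                    Sql(batch);   // DbMigration.Sql executes each batch against the database
                }
            }
        }
    }
}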

I embedded the two scripts into my model assembly and then just split them apart to execute each batch one at a time.   I had to split on “GO\r\n” because the aspnet scripts actually use GOTO statements, and splitting on a bare “GO” would break them.

It feels a little hackish, but it gets the job done and simplifies the initial creation, as well as the maintenance, of these objects.   This gets us back to the ideal conversation when a new developer starts.

Dev: “How do I set up the database?”
Everyone: “Run the App”

Glorious.

 


Unit testing async Functions in VS2012

August 20th, 2012

In an attempt to get a jump start on writing Windows Store apps (is that what we’re calling them now?), I created a simple library targeting WinRT with some unit tests over the weekend. During that process, I got stumped for a good hour on how to properly write a unit test for an async function.

Let me walk you through this:

First: This is the code that I started with.
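Something along these lines, sketched with illustrative class and method names; the key detail is that the method was declared async void:

using System.Collections.Generic;
using System.Threading.Tasks;

public class FeedLoader
{
    public List<string> Items { get; private set; }

    public FeedLoader()
    {
        Items = new List<string>();
    }

    // Declared async void; this is what causes the trouble described below.
    public async void LoadFeed()
    {
        await Task.Delay(100);   // stand-in for the real async work
        Items.Add("loaded");
    }
}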

and I eventually found that you create your unit test like this:
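Sketched against the same illustrative types, using the test attributes from a Windows Store unit test project:

[TestMethod]
public async Task LoadFeed_PopulatesItems()
{
    var loader = new FeedLoader();
    await loader.LoadFeed();   // this await is the line that won't compile (see below)
    Assert.AreEqual(1, loader.Items.Count);
}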

Note that the test itself is async and returns a Task. This is a requirement for the unit test. If the unit test returns void, you will not get a compile error, but the unit test will not show up in the Unit Test Explorer window.

The above code, however, doesn’t compile for a different reason.

The logic behind this is that you can only await something that returns a Task (or Task<T>). The first hits on Google said to do this:
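That is, wrap the call in Task.Run so that there is a Task to await (same illustrative types as above):

[TestMethod]
public async Task LoadFeed_PopulatesItems()
{
    var loader = new FeedLoader();
    await Task.Run(() => loader.LoadFeed());   // compiles, but doesn't actually wait for the inner awaits
    Assert.AreEqual(1, loader.Items.Count);
}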

Using the static function Task.Run() against the function I wanted to execute would allow the test to wait on said function. Here’s where I ran into a peculiar issue (one which I, of course, cannot reproduce now that I’m trying to blog about it). It would compile and run, but when I hit the first await in the called function, the unit test would immediately return. Thus my test would always pass, because no exception was thrown, no data was ever processed after the await, etc.

What I eventually discovered was that instead of wrapping the void function with Task.Run, I should make the function return Task instead:
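In sketch form, the library method becomes Task-returning and the test awaits it directly:

// In the library: return Task instead of void.
public async Task LoadFeedAsync()
{
    await Task.Delay(100);   // stand-in for the real async work
    Items.Add("loaded");
}

// In the test: no Task.Run wrapper needed.
[TestMethod]
public async Task LoadFeedAsync_PopulatesItems()
{
    var loader = new FeedLoader();
    await loader.LoadFeedAsync();
    Assert.AreEqual(1, loader.Items.Count);
}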

After that update, my tests started waiting like they were supposed to.

Bonus:
There is a good follow-up discussion on Stack Overflow about returning void vs. returning a Task, and what the purpose of even allowing an async void function is.


Entity Framework Objects with Proxies, MVC4, and Telerik Grids.

August 8th, 2012

I’m currently working on a side project that is using MVC4 and EF5. Some of the EF objects have bi-directional navigation on them due to how they’ll end up being rendered.

I was trying to wire up a Telerik MVC grid for data entry and received this error on the grid select: “A circular reference was detected while serializing an object of type ‘System.Data.Entity.DynamicProxies.CourseType’.”

After some googling, the easiest answer I found was to disable proxy generation. This was fine as I wasn’t using lazy loading. Once I did that, I started getting this exception: “The RelationshipManager object could not be serialized. This type of object cannot be serialized when the RelationshipManager belongs to an entity object that does not implement IEntityWithRelationships.”. The easy answer didn’t work.

At first I tried to set up the JSON serialization settings of the global formatter, since MVC4 uses JSON.NET by default.
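The change was along these lines (a sketch; the exact settings I tried may have differed):

// In the Web API configuration at application startup.
var settings = GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings;
settings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;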

That did not solve the problem.

I then found a code sample on Telerik’s site that accomplished the goal, but it seemed overly complex (it may have just been the only way to solve it in MVC3). Based on that code sample, and using JSON.NET, this is what I came up with as a final solution:
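The general shape of it is a custom ActionResult that serializes the grid data with JSON.NET so reference loops can be ignored. This is a sketch of the idea rather than the exact code from the Telerik sample, and the class name is illustrative:

using System.Web.Mvc;
using Newtonsoft.Json;

public class JsonNetResult : ActionResult
{
    private readonly object _data;

    public JsonNetResult(object data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/json";

        var settings = new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        };
        response.Write(JsonConvert.SerializeObject(_data, settings));
    }
}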

Update: For this to work, Proxy generation still needs to be disabled. I didn’t make that clear in the original text.


WCF Proxies using Dependency Injection

July 31st, 2012

Last time I noted that we use Dependency Injection for a lot of things on my project, and that one of the big pieces is the use of DI for our WCF services. In this post I’ll cover the benefits of this approach and how we accomplished the task.

The benefits of writing our WCF service layer this way are the following:

1) It abstracts away the fact that it’s a service call. To the calling code, it’s just a DI interface.
2) It enables easy mocking for unit testing (thanks to the DI container).
3) It lets us swap a service call for a local call, and vice versa, which makes local integration testing and debugging easier.
4) It allows us to modify the interface without touching a bunch of projects. Before we did this, it took a while to right-click and refresh the auto-generated service reference every time an interface was updated.
5) It lets us configure at runtime which services are in the container and what their settings are.
a) This includes the interface, URL, message size, graph size, etc.
b) One of our requirements was a centralized configuration system, not the web.config on 20 servers.

The con is that you aren’t using the auto-generated service reference, which means all of the niceties of the auto-generated async methods do not exist. We were handling async ourselves because of long-running IO anyway, so this was not a big deterrent for us.
The first step was creating a wrapper class that would properly dispose of the WCF channel. You cannot just have the interface implement IDisposable and put it inside a using block, as that will call the remote dispose method, and not the local one.
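A sketch of the shape of that wrapper follows; the real class in the gist linked below has more plumbing, and the names here are illustrative:

using System;
using System.ServiceModel;

public static class ServiceDisposer
{
    public static void DisposeService(object service)
    {
        var channel = service as ICommunicationObject;
        if (channel != null)
        {
            // WCF proxy: Close the channel if it's healthy, Abort it if it has faulted.
            if (channel.State == CommunicationState.Faulted)
                channel.Abort();
            else
                channel.Close();
        }
        else
        {
            // Local implementation resolved from the container: just Dispose it if possible.
            var disposable = service as IDisposable;
            if (disposable != null)
                disposable.Dispose();
        }
    }
}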

 

As you can see, this function chooses the appropriate action depending on whether the class is a WCF proxy or just a local interface.

The next step was to create some way to get the configuration data from a remote location. In our case that is a service (or a DB call); however, that remote location could be the web.config, a separate config file, or whatever you come up with.
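The settings object ended up looking something like this (a sketch; the property names are illustrative and the grouping just mirrors how it grew):

using System;

public class ServiceEndpointSettings
{
    // What we started with:
    public string ContractType { get; set; }          // assembly-qualified interface name
    public string Address { get; set; }               // endpoint URL
    public string BindingType { get; set; }           // e.g. "basicHttpBinding"
    public bool UseLocalImplementation { get; set; }  // swap the proxy for a local class

    // Added as the project progressed:
    public long MaxReceivedMessageSize { get; set; }
    public int MaxItemsInObjectGraph { get; set; }
    public TimeSpan SendTimeout { get; set; }
    public TimeSpan ReceiveTimeout { get; set; }
}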

 

We originally started with just the first 4 properties; however, as we progressed through the project, we needed to adjust the rest of the settings as well.
After we had defined the settings we wanted to store, it was time to set up the dependency injection container. For the most part, it’s a simple wrapper around the Unity container. The important setup is done in the registration of the interface.

Unlike the normal use of DI, where we just tie a type to an interface, we want to tie an interface to a function call. We accomplish this in Unity through the InjectionFactory class. The function CreateService is then called every time we make a Resolve<> call on the Unity container for one of those registered interfaces.
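The registration loop is roughly this, assuming Unity and the settings sketch above:

using System;
using System.Collections.Generic;
using Microsoft.Practices.Unity;

// Inside our container wrapper (sketch):
public static void RegisterServices(IUnityContainer container,
                                    IEnumerable<ServiceEndpointSettings> endpointSettings)
{
    foreach (var settings in endpointSettings)
    {
        var contract = Type.GetType(settings.ContractType);

        // Tie the interface to a factory call rather than a concrete type:
        // every Resolve on this contract invokes CreateService.
        container.RegisterType(
            contract,
            new InjectionFactory(c => CreateService(contract, settings)));
    }
}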

 

This function uses reflection to new up a ChannelFactory<TInterface> and calls CreateChannel, which creates a WCF channel with the configured binding and endpoint.
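In sketch form, with error handling and most of the binding configuration trimmed out:

// Inside the same container wrapper; requires System, System.ServiceModel.
private static object CreateService(Type contract, ServiceEndpointSettings settings)
{
    var binding = new BasicHttpBinding
    {
        MaxReceivedMessageSize = settings.MaxReceivedMessageSize,
        SendTimeout = settings.SendTimeout
    };
    var address = new EndpointAddress(settings.Address);

    // ChannelFactory<TContract> has to be built via reflection because the
    // contract type is only known at runtime.
    var factoryType = typeof(ChannelFactory<>).MakeGenericType(contract);
    var factory = Activator.CreateInstance(factoryType, binding, address);

    return factoryType
        .GetMethod("CreateChannel", Type.EmptyTypes)
        .Invoke(factory, null);
}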

 

After some additional maintenance plumbing and creating some overrides for mockability, the end result is a singleton wrapper that can be used anywhere in your codebase to make WCF calls such as this:
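Usage ends up looking something like this; ServiceProxyFactory and IOrderService are illustrative names:

// Resolve the interface from the singleton wrapper, call it like any local object,
// and let the disposal helper close the channel (or Dispose the local instance).
var service = ServiceProxyFactory.Instance.Resolve<IOrderService>();
try
{
    var orders = service.GetOpenOrders(42);
    // ...work with the results...
}
finally
{
    ServiceDisposer.DisposeService(service);
}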

 

The full code can be found here:  https://gist.github.com/3222717
How have you approached this problem in your own architectures?

 

 


Using Attributes to Register Dependency Injection Interfaces

June 27th, 2012

We use dependency injection to isolate the various layers of our infrastructure for testing purposes.  As we moved past 20-30 classes in those layers, it became non-trivial to maintain the default set of interface registrations.  There used to be one large file that set up all the interfaces with their types (thus knowing everything about every interface and implementation, which sort of defeated the purpose of DI).   We solved this problem by creating a custom attribute for the registration.  We scan all of the assemblies for this attribute on application startup and use the information in the attribute to tie everything together.
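The pieces look roughly like this. The attribute name, its shape, and the Unity calls are illustrative of the approach rather than our exact code:

using System;
using Microsoft.Practices.Unity;

// The marker attribute: each implementation declares which interface it satisfies.
// Example usage:  [RegisterInContainer(typeof(ICustomerRepository))]
//                 public class CustomerRepository : ICustomerRepository { ... }
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public class RegisterInContainerAttribute : Attribute
{
    public Type InterfaceType { get; private set; }

    public RegisterInContainerAttribute(Type interfaceType)
    {
        InterfaceType = interfaceType;
    }
}

public static class ContainerBootstrapper
{
    // Called once at application startup.
    public static void RegisterFromAttributes(IUnityContainer container)
    {
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            foreach (var type in assembly.GetTypes())
            {
                foreach (RegisterInContainerAttribute attribute in
                         type.GetCustomAttributes(typeof(RegisterInContainerAttribute), false))
                {
                    container.RegisterType(attribute.InterfaceType, type);
                }
            }
        }
    }
}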

 

 

In the next blog post, I’ll show how we take this concept to the next level with our WCF endpoints to avoid having service references in 100 different projects.


Shim Example – Not ShimDateTime

June 12th, 2012

I spent some time today writing unit tests using the new Fakes framework in VS2012.  It took me a little bit to figure out exactly what was going on, and I didn’t find many examples to help along the way (other than the usual ShimDateTime example).  Either my google-fu is off, or there just isn’t much out there yet, so I figured I would post a quick example to help get you started.

First step:   To create a Fake of a system assembly (or any assembly, for that matter), right-click on the reference in the unit test project’s references and click “Add Fakes Assembly”.  That will generate a Fakes assembly and add it to your project.  Once that is created, all namespaces in the original assembly will have a corresponding .Fakes namespace (e.g. System.IO.Fakes).

In this example, I have a function that cleans up some temporary data that my application creates.  I want to assert that the function does not throw exceptions if, for some reason, the temp folders cannot be deleted.

To start using the Shim objects, I need to define my ShimsContext.
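Defining the context is just a using block; ShimsContext lives in Microsoft.QualityTools.Testing.Fakes:

using (ShimsContext.Create())
{
    // Any shims configured in here only apply until the context is disposed.
}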

 

This context keeps the shim scoped to this unit test or a subset of the unit test.  If the context is not defined, you’ll get an error when running the test.  I’m very glad that Microsoft set up Shims in this fashion.  The reason is that Shims are AppDomain-wide, and left unchecked, they could change how all of your unit tests execute.  That would lead to a lot of weird issues that would be hard to track down as you’re executing your unit test suite.   How would you like it if your unit test threw IO exceptions when run on the build server, but that same isolated test ran perfectly on your machine?  I’d be pulling my hair out after 10 minutes, most likely.

 

Now that I’ve wrapped my unit test in a ShimsContext, I can get to work on the meat of the test.  We’ve already created the Fakes assembly for System, so it’s just a matter of getting the Shim object for DirectoryInfo and making it do what I want, which in this case is throwing an exception.
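Putting it together looks something like this; TempDataCleaner and the test name are illustrative stand-ins for my actual class under test:

using System.IO;
using System.IO.Fakes;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
public void CleanupTempData_DoesNotThrow_WhenFoldersCannotBeDeleted()
{
    using (ShimsContext.Create())
    {
        // Every DirectoryInfo.Delete(bool) call inside this context now throws.
        ShimDirectoryInfo.AllInstances.DeleteBoolean =
            (instance, recursive) => { throw new IOException("simulated delete failure"); };

        var cleaner = new TempDataCleaner();
        cleaner.CleanupTempData();   // should swallow the exception rather than rethrow it
    }
}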

 

I use the shim class for DirectoryInfo and set the AllInstances.DeleteBoolean property to my Action.  One thing to note: it appears that the naming convention is FunctionArg0Arg1 on the Shim properties.  Make sure you are setting the action for the specific function overload that you’re using, and not the first one you find.  For DirectoryInfo.Delete, there was both a Delete and a DeleteBoolean.   The AllInstances property allows me to state that any instance of the class follows this rule, not just a specific instance.   From there, setting the override is a simple lambda expression to define the Action: (instance, parameters) => { function; }.  I can’t think of an easier way to express my goal as a developer than that.

If you have not looked into the new testing features of VS2012, I would suggest that you watch this Tech-Ed presentation by Peter Provost.  It’s a great primer on the new features.   I have been very impressed by the Fakes framework so far and the Shim feature just knocks it out of the park.  All in all, once you get the basic syntax of Fakes, it will become your new best friend while writing unit tests.

 

 


Building WiX MSIs in TFS Preview

June 11th, 2012

I spent the weekend getting one of our more complicated projects up and building in the hosted build servers of TFS Preview.  The last step was integrating our WiX deployment packages.   I upgraded to 3.6 RC before I started, just in case.

After creating a demo solution with a simple WiX project, this is the build output that I received:

 

OK, that makes sense.  WiX isn’t installed on the hosted build server.  Since we cannot install WiX on the build server, we must upload everything we need into source control.  There are two main components here:  the WiX binaries and the WiX MSBuild targets files.   I have a folder in source called “BuildStuff” where I put everything that I’m going to reference as part of a build.  I created two sub-folders, WiXBin and WiXTarget.

 

I copied all of the files from C:\Program Files (x86)\WiX Toolset v3.6\bin into the WiXBin folder, and all of the target files from  C:\Program Files (x86)\MSBuild\Microsoft\WiX\v3.x into the WiXTarget folder.

Now we have to tell MSBuild where to find our new files.  On the Process tab of the build definition is a setting called “MSBuild Arguments”.  This allows us to pass overrides directly to MSBuild.  In this case, I passed the following (this should be all one line; I wrapped it so you could read it):

 

/p:WiXTargetsPath=c:\a\src\BuildStuff\WiXTarget\wix.targets; WixTasksPath=c:\a\src\BuildStuff\WiXBin\WiXTasks.dll; WixToolPath=c:\a\src\BuildStuff\WixBin; WixExtDir=c:\a\src\BuildStuff\WixBin

 

Given that we have more than one MSI project and they reside at different levels in the folder structure, a relative path to these files didn’t seem like the best long-term solution.  At least for now (no telling if MS will change it), the source directory for a project being built is C:\a\src.

Now that we have MSBuild set up correctly to build our WiX project file, let’s run the build again:

If you’ve set up a new build server before, you’ve probably seen this error.  It’s the error that basically tells you that the account running the build doesn’t have administrator permissions.  Since we’re running in a hosted environment, this is not something that we can change.  Our only option is to disable ICE validation.

By opening up the WiX project settings and going to Tool Settings, we can set WiX to suppress the ICE validation step.

 

 

After that setting gets checked in, we re-run the build, and we see the MSI in our drop location.

 

ICE validation steps in the build are a nice-to-have in that they ensure that the MSI is consistent right away; however, those validations can be run locally by an infrastructure resource using smoke.exe before the MSI is deployed into your QA environment.

 

 


Using XUnit with Team Build 2010

June 7th, 2012

My team is making the switch from MSTest to XUnit for our unit testing framework.  As a result of this change, I needed to enable our TFS build to run XUnit and publish the results.  Searching around, I found a couple of old posts like this one; however, all of them left me with a decent amount of work to do.  I wanted to create something that I could easily distribute to the rest of my teams and that would be simple to integrate.

The first step was to create a workflow task to execute XUnit against my test DLLs.  My goal was for the task to work as closely to the MSTest task as possible.  That meant passing in a file spec such as **\*tests*.dll.  To make it easy on the consumer, we’re also not going to assume a path, but have that as an argument as well.

 

Now that we’ve created the arguments that we need, we can start dropping in the activities.  The first activity is the TFS Build Find Matching Files task.  This will use our file spec to find all of the unit test DLLs that were built.   Next we confirm that we have items in our file collection, or issue a warning (not shown) that we couldn’t find any items.  There is no TFS Build 2010 XUnit activity from the XUnit project, so we’re going to be invoking the command-line runner.  This runner can only run one DLL at a time, which means the next step must be to loop over the test DLLs with the ForEach activity.

 

Now that we have our individual test DLLs processing, the next step is to invoke XUnit against each of those files.  We do this with an InvokeProcess activity.    The file name comes from one of our arguments: the path to the XUnit console runner.  The arguments that are passed are “<path> /silent /nunit results.xml”.

 

The next step is to publish the results to TFS.  For this I created a coded build activity that mimicked the functionality of the NUnit publish task from the community build extensions.
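The skeleton of the activity is along these lines; this is a sketch only (the class name is illustrative), and the actual publishing logic is in the GitHub source linked below:

using System.Activities;
using Microsoft.TeamFoundation.Build.Client;

// The BuildActivity attribute lets TFS Build 2010 load the activity on the build agent.
[BuildActivity(HostEnvironmentOption.All)]
public sealed class PublishXUnitTestResults : CodeActivity
{
    public InArgument<string> ResultFile { get; set; }
    public InArgument<IBuildDetail> BuildDetail { get; set; }
    public InArgument<string> Platform { get; set; }
    public InArgument<string> Flavor { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        var resultFile = context.GetValue(ResultFile);
        var buildDetail = context.GetValue(BuildDetail);

        // ...read the NUnit-format XML that xunit.console produced for resultFile and
        // publish the run against buildDetail, mirroring the community NUnit publish task...
    }
}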

 

The catch with the way the publish works is that the result file must be different for each test DLL run.  For this, we add a counter to append to the result XML file name as we go through the files.   The full activity can be downloaded and built from GitHub.  Now that we have an xUnit build activity, it’s just a matter of dropping it into a full build workflow.  This is the easy part, as we’re just removing the MSTest portions entirely and replacing them with the XUnit activity we just finished writing.

And voila, we have XUnit tests running in a TFS 2010 build process and publishing their results.  Hopefully this helps you get XUnit integrated into your own process.

 

References:

Full Source for XUnit activity on GitHub


TFS History Reporting with Powershell

June 2nd, 2012

A client asked us a few years ago if there was a way to generate a TFS report that would give them a breakdown of check-ins and the specific files changed on each day.  The reasoning was that they stored their SSIS reports for their financial system in TFS and their auditors wanted the information for their audit report.  As the TFS guy, I was tasked with coming up with the solution.

I couldn’t figure out a way to use the TFS warehouse to give them all of the information that they were looking for.  I was about to start writing a console app to pull the data from the TFS API when I remembered that the TFS Power Tools come with PowerShell extensions.   These PowerShell extensions give you the same functionality as the TFS object model, but with the power of PowerShell piping.

The first report was a simple select of changeset items, output to a file.

Get-TfsItemHistory "$/PROJECTNAME" -Recurse -Version "D1/1/10~D12/31/10" | Sort CreationDate | Select ChangeSetId,Committer,Comment,CreationDate |  Format-Table ChangeSetId,CreationDate,Committer,Comment -Auto -Wrap  | out-file "full.txt"

The Get-TfsItemHistory command pulls back all of the changesets from 1/1 to 12/31 and pipes them to the Sort command.  The data is sorted and then piped to a Select command, which pulls back the 4 fields we care about displaying.  Now that the data has been filtered to those columns, it’s piped to the Format-Table command to render and then sent to an output file.   The output looks like this:

ChangesetId CreationDate Committer Comment
———– ———— ——— ——-
21719 1/5/2010 10:43:32 AM edwin.frey  I should really put better changeset commands
21722 1/5/2010 10:59:32 AM edwin.frey  Changeset description #2

The second report is a little more complex: check-ins by name and date.

Get-TfsItemHistory "$/PROJECTSOURCEROOT" -Recurse -Version "D1/1/10~D12/31/10" | Sort CreationDate | Select ChangeSetId,Committer,Comment,@{n='CreationDate';e={$_.CreationDate.ToShortDateString()}} | Group CreationDate | Foreach { "$($_.Name) - Total Checkins: $($_.Count)";$_.Group | group Committer | sort @{e={$_.Count};asc=$false},Name | Format-Table Count,Name -auto } | out-file groupedByDate.txt

Let’s walk through what this script is doing:

Get-TfsItemHistory "$/PROJECTSOURCEROOT" -Recurse -Version "D1/1/10~D12/31/10" | Sort CreationDate 

will invoke the TFS PowerShell cmdlet and pull back the changeset history on that source root, recursively, between the first and the last day of 2010.  The output of the command is then piped into the Sort command to sort the results by date.

Select ChangeSetId,Committer,Comment,@{n='CreationDate';e={$_.CreationDate.ToShortDateString()}} | Group CreationDate

This command will select the specific fields we care about, format the date,  and then group on the creation date.  We are grouping by the date because we want the CreationDate to be the header of our output.

Foreach { "$($_.Name) - Total Checkins: $($_.Count)";$_.Group | group Committer | sort @{e={$_.Count};asc=$false},Name | Format-Table Count,Name -auto }

The foreach is over the grouped history from the previous Group on CreationDate.  Inside of that foreach, we have 4 commands acting on the input.  The first command outputs the name of the group, which in this case is the CreationDate, along with the total check-in count.  The group data is then grouped again by Committer, piped into a Sort command that sorts by count, and then the name of each Committer is returned.  The final command is the Format-Table command, which specifies that the output should be in table format, with Count and Name as the columns.

When this report is run, you end up with a text file like this:

1/5/2010 – Total Checkins: 3

Count Name
—– —-
2 Name1
1 Name2

1/6/2010 – Total Checkins: 2

Count Name
—– —-
2 Name1

 

The final report was basically outputting every detail about a changeset into a text file.

Get-TfsItemHistory "$/PROJECTNAME" -Recurse -Version "D1/1/10~D12/31/10" | Sort CreationDate | % {"`nChangeSetId - $($_.ChangeSetId), $($_.Committer), $($_.CreationDate) `n"; $(Get-TfsChangeset $_.ChangeSetID).Changes | select-tfsitem | select path | Format-Table -Wrap -Auto} | out-file "fullWithPaths.txt"

This script takes a while to run, because for every changeset it pulls back all of the files modified in that changeset.  Inside of the ForEach, we use a second cmdlet, Get-TfsChangeset, to get the specifics for a changeset (specifically the items changed), and we select the path of those items.  The output looks like this:

ChangeSetId – 21719, edfrey, 01/05/2010 10:43:32

Path
—-
$/report1.rpt
$/report2.rpt

The source for these scripts is also available on GitHub.

 

What problems have you solved with the TFS powershell cmdlets?

 


Entity Framework Performance Tip – Did your object really change?

May 29th, 2012

During the performance testing of one of our batch jobs, we discovered that we were always sending objects back to the database, whether they had actually changed or not.  When you’re processing 100,000 objects, this can be quite the performance hit.  What I discovered is that almost half of the objects hadn’t changed, but their entity state was set to Modified.  After some additional debugging, I narrowed it down to this code in the EF designer.cs:


set
{
OnNameChanging(value);
ReportPropertyChanging("Name");
_Name = StructuralObject.SetValidValue(value, true);
ReportPropertyChanged("Name");
OnNameChanged();
}

The setter of every property was marking the object as Modified, whether the value actually changed or not!

I could have solved the problem in the batch job by checking the values before I set them, but I opted to solve it globally (we have about 30 batch jobs) by modifying the EF generated code instead.  After all, the object should be able to correctly keep track of its own state.   This first required me to switch to using a T4 template.

Switching to T4 Templates

 

The first step is to remove the default classes generated by Entity Framework by removing the EntityModelCodeGenerator from the Edmx file.  On the left is the before window, and the right is after.

Edmx Property comparison

 

The next step is to add the T4 template to your project.  Right-click on your project -> Add New Item -> select ADO.NET EntityObject Generator.

Add New T4 Template Dialog

 

With the new T4 template in place, make sure that its Custom Tool property is set to “TextTemplatingFileGenerator“.  Right-click the template and select Run Custom Tool to generate the EF code.

 

Modifying the template*

 

Now that we’re using the T4 template to generate our entity code, we can go in and modify the way property setters work.  The first place to look is the setter generation for primitive types.   Look for the function “WritePrimitiveTypeProperty”.   Once you find it, you’ll see the template code for generating the primitive setter.

 

Code snippet of the T4 template for generating the primitive setter

In the default template, the primary key fields had logic to only mark them as changed if the value was different.  In the updated template, you’ll notice that I just commented out the check to see if the property was the primary key.  That way the logic applies to all primitive fields.

The second change is to update the function “WriteComplexTypeProperty“.  This change is more straightforward, as there was no pre-existing logic in the setter.  The final result is this:

Complex Property Setter in T4 Template
This code can be copied from Gist

We’ve now modified the template to make our objects a little smarter about marking themselves as modified.  Right-click on the T4 template file in Solution Explorer and run the custom tool again, and voila!


set
{
if (_Name != value)
{
OnNameChanging(value);
ReportPropertyChanging("Name");
_Name = StructuralObject.SetValidValue(value, true);
ReportPropertyChanged("Name");
OnNameChanged();
}
}

In future blog posts I’ll talk about the other changes we’ve made to the default behavior that might be useful to others.

What are the ways you wish EF operated differently that could be accomplished by modifying the T4 templates?

 

*These steps are specific to EF4.  I have not upgraded to EF5 yet, and have not gone through the exercise to modify the updated T4 template.
