Articles from Development

PowerShell Tip: Creating GUIDs and Code Generation

By Michael Flanakin @ 11:30 AM :: 5020 Views :: Development, PowerShell


I should start off by saying this isn't really about hard-core code generation. It's more about simplifying some of the more repetitive tasks we tend to do during development. In this instance, I needed to get 7 GUIDs to use in some test code. Sure, I could've used the Create GUID tool in Visual Studio, but what fun is that? Besides, I hate manual tasks, and that tool would have forced me down a ~24-step process. No thanks. I'll stick to the 3 steps PowerShell can give me.

First things first: how do we create a GUID in PowerShell? I'm going to fall back to .NET for this one.


If you run this, you'll get a blank line. What's up with that!? Admittedly, I'm not 100% sure why this is happening, but I have a pretty good guess. GUIDs are surrounded by curly braces ({}), which PowerShell interprets as a script block. Putting 2 and 2 together, I'm assuming PowerShell thinks this is a script to run. Easy fix.
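The inline snippets didn't survive in this archive, but based on the description, they were presumably along these lines (a reconstruction, not the original code):

```powershell
# The first attempt -- this is what the post says printed a blank line:
[Guid]::NewGuid()

# The easy fix: ask for the string explicitly
[Guid]::NewGuid().ToString()
```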


That's it. The only other thing I had to do was add the rest of my formatting to create the GUID in code and slap that in a loop.

foreach ($i in 1..7) { 'new Guid("{0}")' -f [Guid]::NewGuid() }

You'll notice I opted to use the PowerShell shortcut syntax for string formatting. If you missed it, I talked about it as well as the static-method-calling syntax last week.

This tiny exercise reminded me of a pretty advanced Excel spreadsheet I created years ago because I was sick of typing the same bit of code over and over again. Since then, other tools, like GhostDoc and Resharper, have augmented my developer experience, enabling a much higher level of productivity than I had back then. Nonetheless, there is still room for improvement. This makes me wonder how much PowerShell could do for me with respect to code generation.

Bug When Changing Platform (32- vs 64-bit) in Visual Studio

By Michael Flanakin @ 9:43 AM :: 1673 Views :: .NET, Development


Someone I work with came to me recently and showed me an interesting bug. In Visual Studio, you can force a project to be built as 32- or 64-bit by going to the project properties Build tab and specifying the target platform. He did this and then proceeded to build the app. This put all the binaries in a \bin\x86 directory. WTF!? I tried it myself and -- not that I doubted him -- I got the same results. The build directory still had the previous value of \bin\Debug, so I found this odd. I changed the build directory to \__bin\Debug and guess what... that's exactly where it went. I thought this was odd, but remembered the IE7 bug on Windows Server 2003 I mentioned a few months back. I changed it back to \bin\Debug and everything worked like a charm.

It looks like changing the target platform changes the build location. To get around that, change the build directory to something else, save the properties, and change it back. Annoying, but at least you only have to do it once. I went ahead and added this to Connect.

Visual Studio Web Project Fails to Open with COMException

By Michael Flanakin @ 7:46 AM :: 1565 Views :: .NET, Development


When opening a solution with a web project in Visual Studio, you receive the following error in a popup dialog:



Apparently, this is an issue with IIS configuration. I'm not quite sure why we get such a useless error message, tho. Very annoying. If you're not sure you're seeing this with a web project, load the solution and, when the error pops up, look at the status bar. You should see a "loading" message with the path of the problematic solution.

Workaround


  1. Ignore the errors and let the solution load
  2. In the Solution Explorer, right-click on the project that failed to load, click Edit <project file>
  3. Scroll down to the bottom of the file and look for <UseIIS>True</UseIIS> (located at \Project\ProjectExtensions\VisualStudio\FlavorProperties\WebProjectProperties\UseIIS)
  4. Replace True with False
  5. Save and close the project file
  6. In the Solution Explorer, right-click on the project, click Reload Project
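For reference, the fragment from step 3 sits near the bottom of the project file and looks roughly like this (attributes trimmed; only the element nesting is taken from the path above):

```xml
<ProjectExtensions>
  <VisualStudio>
    <FlavorProperties GUID="...">
      <WebProjectProperties>
        <UseIIS>False</UseIIS>
        <!-- other web project settings -->
      </WebProjectProperties>
    </FlavorProperties>
  </VisualStudio>
</ProjectExtensions>
```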

More Information

  • Applies to: Visual Studio 2005, 2008

Resharper Nightly Builds: Game On!

By Michael Flanakin @ 7:09 PM :: 2203 Views :: .NET, Development, Tools/Utilities

Resharper 4.0

I'm a month and a half late, but the Resharper nightly builds are back! I guess I stopped checking after not seeing any movement for a while. I'm glad to see some activity, tho. This is the most beneficial add-in to Visual Studio I've seen, especially as a productivity geek. What I've been most surprised about is the overall quality of the nightly builds over the past 6 months. Simply outstanding. If you're asking yourself whether or not to give it a shot, I say go for it. You're likely to run into minor issues, but if my experiences are indicative of how well they manage their day-to-day development, this is a team that runs a very tight ship. I always grab the latest build and try to update a few times a week, depending on what I'm in the middle of. If you're not quite as confident as I am, grab a "works here" release. I'm sure you'll see how great this tool is in relatively short order. An absolute must-have for all code-focused developers.

Programmatic User Login (aka Win32 LogonUser)

By Michael Flanakin @ 6:04 AM :: 6089 Views :: .NET, Development


Nothing new here. I just wanted to save this code snippet because it's popped up a few times in the past year and I have to go find it over and over. At least this will make it a little easier for me. This is by no means an authoritative reference -- it's simply what I have used. If you know of something I'm missing, please let me know.

First, you'll need to import the Win32 LogonUser() function:

using System.Runtime.InteropServices;

[DllImport("advapi32.dll", SetLastError=true)]
public static extern bool LogonUser(
    string lpszUsername,
    string lpszDomain,
    string lpszPassword,
    int dwLogonType,
    int dwLogonProvider,
    out IntPtr phToken);

The only question you probably have is about the logon type and provider parameters. The only provider I know of is the default, 0, which uses "negotiate" (Kerberos, then NTLM) on Windows XP/Server 2003 and later machines; Windows 2000 defaults to NTLM. If you don't know the difference, let me know and I'll explain it in more detail. Here's a list of logon types:

  • Interactive (2) -- Intended for interactive use (duh) with something like terminal server or executing a remote shell. This type caches credentials for disconnected operations.
  • Network (3) -- Intended for high-performance servers that authenticate plain-text passwords.
  • Batch (4) -- Intended for batch servers, which act on behalf of the user without his/her direct intervention. Typically used to process many plain-text auth attempts at a time.
  • Service (5) -- Intended for service accounts, which have the "service" privilege enabled -- don't ask me what that is because I don't know.
  • Unlock (7) -- Intended for GINA DLLs (whatever that is) that will interactively use the machine. This type includes some auditing.
  • Clear Text (8) -- Intended for double-hop impersonation scenarios where credentials will be sent to the target server to allow it to also impersonate the user. As I understand it, this is what IIS "Basic" authentication uses. To perform a double-hop, you'll actually have to do a few other things. I won't get into that here, but let me know if that'd be of interest.
  • New Credentials (9) -- Clones current credentials and uses new credentials for outbound connections. Supposedly, this doesn't work with the default provider -- it requires the WINNT50 provider, whatever that is.

The following is a list of the supported providers. I don't know anything about the non-default ones, but figured I'd list them for completeness.

  • Default (0) -- "Negotiate" (Kerberos, then NTLM) for Windows XP/Server 2003 and later; NTLM for Windows 2000
  • NT 3.5 (1)
  • NT 4.0 (2)
  • NT 5.0 (3) -- Required for the "new credentials" logon type.
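If you'd rather not sprinkle magic numbers through your LogonUser() calls, the values above can be captured as constants in the same class as the P/Invoke declaration (names follow the winbase.h conventions):

```csharp
// Logon types (dwLogonType), per the list above.
public const int LOGON32_LOGON_INTERACTIVE       = 2;
public const int LOGON32_LOGON_NETWORK           = 3;
public const int LOGON32_LOGON_BATCH             = 4;
public const int LOGON32_LOGON_SERVICE           = 5;
public const int LOGON32_LOGON_UNLOCK            = 7;
public const int LOGON32_LOGON_NETWORK_CLEARTEXT = 8;
public const int LOGON32_LOGON_NEW_CREDENTIALS   = 9;

// Logon providers (dwLogonProvider).
public const int LOGON32_PROVIDER_DEFAULT = 0;
public const int LOGON32_PROVIDER_WINNT35 = 1;
public const int LOGON32_PROVIDER_WINNT40 = 2;
public const int LOGON32_PROVIDER_WINNT50 = 3;
```

With these in place, a call reads as LogonUser(userName, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token) instead of 2 and 0.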

Next, well... just use it. As you can see, the last parameter in the LogonUser() function is an out parameter for a token which represents the user. This is key. All you need to do is initialize a WindowsIdentity instance with this token and you're well on your way.

using System.Security.Principal;

/// <summary>
/// Creates a new <see cref="WindowsIdentity"/> using the specified credentials.
/// </summary>
/// <remarks>
/// This method assumes an interactive logon for simplicity.
/// </remarks>

public static WindowsIdentity GetIdentity(string domain, string userName, string password)
{
    IntPtr token;
    bool success = LogonUser(userName, domain, password, 2, 0, out token);
    return success ? new WindowsIdentity(token) : null;
}

Pretty simple. Of course, we still aren't there, yet. Now that you have the identity, you most likely want to impersonate it. Luckily, this is a simple 2-liner... well, technically two 1-liners. I should also say that, if you want to do impersonation with an already-obtained WindowsIdentity (and you don't have a password), you'll start here.

WindowsImpersonationContext context = GetIdentity("mydomain", "me", "mypassword").Impersonate();
// do work as the impersonated user
context.Undo();

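One caveat the snippets above gloss over: the token handle from LogonUser() is never closed, and the impersonation should be undone even if an exception is thrown. A hedged sketch of a tidier version (CloseHandle is an extra kernel32 import not shown in the original post; SafeImpersonation and RunAs are made-up names):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

static class SafeImpersonation
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(string lpszUsername, string lpszDomain,
        string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);

    // Needed to release the token handle LogonUser() hands back.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    public static void RunAs(string domain, string userName, string password, Action work)
    {
        IntPtr token;
        if (!LogonUser(userName, domain, password, 2 /* interactive */, 0, out token))
            throw new InvalidOperationException("Logon failed: " + Marshal.GetLastWin32Error());

        try
        {
            WindowsImpersonationContext context = new WindowsIdentity(token).Impersonate();
            try
            {
                work(); // do work as the impersonated user
            }
            finally
            {
                context.Undo(); // revert to the original identity
            }
        }
        finally
        {
            CloseHandle(token); // release the logon token
        }
    }
}
```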
That's it. Enjoy!

MSTest Helper: Create Private Accessor

By Michael Flanakin @ 2:03 PM :: 7405 Views :: .NET, Development

I just found something very useful in Visual Studio 2008 a few days ago. I've been using MSTest for about 2 years now, and one thing I liked initially was the wizard that would generate test stubs for you. I liked that it gave you somewhere to start. After using it more and more, I began to hate it, tho. I guess the problem is I'm anal about how my code looks and I end up changing everything. So, I started generating the classes myself. The only problem with this approach is the private member accessor the wizard generates is no longer there. Not desirable, but there was an easy fix: run thru the wizard quickly and delete the generated methods. At least, that was the routine until a few days ago. In VS08, all you need to do is open the file of the desired class, right-click the background, select the Create Private Accessor menu item, and then pick the test project to add it to. VS has so many menu items, it's easy to overlook the really useful ones, so I figured this one was worth sharing. Hopefully, Sara Ford is listening.

Creating a private accessor with MSTest in VS 2008

"To Be [null], or Not To Be [null]," says the String

By Michael Flanakin @ 3:46 PM :: 1231 Views :: .NET, Development


David Kean shared his thoughts on string handling and I have to say I completely disagree. He states that you should always return String.Empty instead of null. I hate this. An empty string and what equates to "no value," by definition, mean two different things. David, on the other hand, contends that the two should always be treated as the same. To be clear, I'm not saying the two should never be handled the same. On the contrary, I believe that's the norm... and rightfully so. Even so, there are still cases where they need to be handled differently. Besides, the benefits you get from returning an empty string are immediately invalidated when considering consistency principles.

I'd even go as far as to say, returning empty strings promotes bad practices. Why? Because the developer then treats what is returned as a non-null value. While this isn't a problem, it typically goes further. The same developer will start assuming all methods that return strings act the same way. Let's not forget what they say about people who make assumptions... For this reason, I believe it's a good practice to always use String.IsNullOrEmpty() to validate string values, no matter what you think/know is returned.
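In code, that defensive habit looks something like this (GetMiddleName() is a made-up example method, not from David's post or mine):

```csharp
using System;

class Demo
{
    // Could just as easily return "" -- the call site shouldn't care.
    static string GetMiddleName()
    {
        return null;
    }

    static void Main()
    {
        string middleName = GetMiddleName();

        // Treat null and "" the same at the call site,
        // no matter what the API claims to return.
        if (string.IsNullOrEmpty(middleName))
        {
            middleName = "(none)";
        }

        Console.WriteLine(middleName);
    }
}
```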

I could list a number of what-if scenarios depicting why this is so important, but I won't. It should be pretty clear. Let's face it, we've all fallen victim to a null reference exception, which is essentially what happens when you assume a variable has a value. Clarify your assumptions with good exception handling.

Lastly, one other reason I like null over empty strings is because it uses less memory. Yes, this is very trivial, but it's true. The bottom line is that, whether you return null or empty string, you have to treat it as if it were null. For this reason alone, I believe it's cleaner to simply use null. Using empty strings is a hack, in my mind, unless it really means something different than "no value."


Thoughts on SharePoint Development

By Michael Flanakin @ 7:19 AM :: 1638 Views :: .NET, Development, Patterns & Practices, SharePoint


It's been entirely too long. I've had a new project take over my life for the past few months. I'm trying to get back into things and catch up with the blogs I read, but it's sometimes hard. This is my first big SharePoint-based project and I have to say it's been "an experience." If you're a developer and you haven't heard about how much SharePoint has been taking over, you've probably been living/working in a cave. It's amazing. I've learned a lot, with respect to SharePoint, and I can sum it all up with this: SharePoint is the new VB. I don't say this from the drag-n-drop point of view that made VB so easy to use, but from the ease in which you can get something done. To put it another way, SharePoint is to hobbyist web "developers" what VB was to hobbyist client-side "developers." I say "developers" for a reason. I've been very clear about my thoughts on hobbyists, so I'm not going to get into that.

Perhaps the best way to put it is to liken SharePoint "development" to the creation of a Rube Goldberg device. And, if you want to throw in AJAX features and a standard user experience that looks/feels nothing like SharePoint... well, let's just say you have your work cut out for you. As if portal development wasn't odd enough with respect to deployment. All I can deduce is that SharePoint was built for tech savvy users (not even power users) and not developers... definitely not developers.

As most who know me already know, I've been a fan of DotNetNuke (DNN) for quite a while and even worked on the dev team for a few of the modules. I feel lucky to have gone thru that because DNN gave me a nice perspective to some key issues around portal development. It's also let me experience the difference between portals built for developers instead of for users. This is the key difference between SharePoint and DNN. I know there's been a lot of talk in the past about differences, but it all comes down to that. SharePoint has the polish necessary to make your portal much more user-friendly and feature-complete, but DNN has the dev backing to make it a breeze. I'm really hard-pressed to pick a favorite, that's for sure. I've been in talks with a few key people within the DNN and Microsoft teams to get more involved with DNN in an official capacity, but I think I'm going to back off of that. Between the two portals, I see SharePoint having the most promise for the future. DNN is fantastic and I'll continue to recommend it where it makes sense, but SharePoint is much more powerful. The big hole in SharePoint is the developer experience. I see this as a huge opportunity.

More and more, I hear about how we need to focus on specific technologies. I'm not the type to whole-heartedly dig in to a single technology from top-to-bottom, but I do see value in filling this gap. The training I continue to see around SharePoint is all about small-time hacks. We need an enterprise solution. Some true patterns and practices for enterprise SharePoint development. Given the Rube-Goldberg-ian nature of SharePoint, this may be hard, but someone has to do it. Of course, this is all going to depend on the project opportunities I have. For now, it's just a bag of thoughts I've thrown together, but I'm hoping I'll have a chance to sort them out and pull it all together in the future.

There's so much more I could say about SharePoint, DNN, and the other things I've mentioned here, but I've babbled on long enough.

One Use for Extension Methods

By Michael Flanakin @ 5:44 PM :: 1380 Views :: .NET, Development


I like extension methods. Some people suggest you avoid the feature, which I believe is understandable, but the more I think about it, the more I want to use it. Don't get me wrong, I'm not using them all over the place, but when they make sense, they can make your code much cleaner.

As you may know, I'm working on custom code analysis rules to enforce coding standards. To make the rules testable, I decided to front the code analysis types with my own object model. To do this, I created a factory to create my types -- CodeItemFactory.Create(Member). While creating a new rule, I wanted to see if the current member had a sibling member with a specified name and then return that. At first, I started to add a new method to the CodeItem class, but then I realized I'd have to reference the CodeItemFactory class. While I'm in the same assembly, I didn't like the idea of putting this logic into the domain object and creating a circular dependency. Plus, since the code analysis framework isn't "officially supported," there are no promises on what will be available in the future. Keeping my integration code separate is just a good idea. So, I ultimately decided to create an extension method to do what I needed -- CodeItemExtension.GetSibling(this CodeItem, string). This enabled me to have a clean code implementation and keep my purist ideals of separating integration code -- codeItem.GetSibling(memberName).
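For reference, the shape of that extension method looks like the sketch below. The signature comes from the post; the body is my guess at what it might do, and DeclaringType, Members, and Name are made-up CodeItem members for illustration:

```csharp
public static class CodeItemExtension
{
    // The "this" modifier on the first parameter is what makes this an
    // extension method, so callers can write codeItem.GetSibling(name).
    public static CodeItem GetSibling(this CodeItem item, string memberName)
    {
        // Hypothetical body: walk back to the declaring type and look
        // for another member with the given name.
        foreach (CodeItem sibling in item.DeclaringType.Members)
        {
            if (sibling.Name == memberName)
                return sibling;
        }

        return null;
    }
}
```

The nice part of the design is that the domain object stays free of any reference to CodeItemFactory; the integration logic lives entirely in the extension class.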

There are plenty of other reasons to love extension methods, but I just wanted to share this one because I feel like it's a very nice solution that keeps life simple.

Testing Custom Code Analysis Rules

By Michael Flanakin @ 3:55 AM :: 2100 Views :: .NET, Development, Microsoft, Predictions, Tools/Utilities


Over the years, I've been asked to put together coding standards again and again. The nice thing about this is that it enables me to pull out the old docs and touch them up a little. A year or two ago, I heard something that made a lot of sense: developers never really read coding standards and, even if they do, they don't usually adopt them. Let's face it, if you don't adopt a standard as your own, you're not going to use it. The only way to ensure the standard is applied is to catch the problem before it gets checked in. I tried a VS add-in that attempted to do this as you type; it wasn't quite as extensive as I wanted, but I grabbed onto the concept. For the past year, I've been wanting to start this and have finally decided to do it.

As I sat down and started to investigate writing custom code analysis rules, I asked myself how I was going to validate them. After hacking away at one approach after another, I started to realize I wasn't going to get very far. Apparently, with the latest releases of Visual Studio and FxCop, there's no way to create the objects used to represent code. After talking to the product team, the official position seems to be that, since custom rules aren't "officially supported," they're not going to support their testability. I'm not sure who made this decision, but I think it's a bad one. Of course, I say this without knowing their plans. Well, not completely, anyway.

It's not all bad news, however. It turns out there are hopes to start officially supporting custom code analysis rules in the next major release, Visual Studio 10. Nothing's being promised at this point; it's just something the team would like to deliver. I should also say that the upcoming Rosario release isn't the major release I'm referring to. I'm expecting Rosario to be a 9.1 release that will probably hit the streets in early 2009. That's a guess, tho. If that's true, the VS 10 release probably wouldn't be until 2011. All I can really say about it is that it'll be a very exciting release. I can't wait to get my hands on a beta. Speaking of which, some of the goals they have for the product will make beta testing much, much easier... I'm talking about a hugely evolutionary change, if not revolutionary, considering where the product is today. That's all I can really say, tho.

Back to the point: since there's no real testability of the code analysis framework, I decided to create my own object model. The part I'm missing, obviously, is the factory logic that converts code analysis types to my types. I'm hesitant about this approach, but it's working so far. Hopefully, I'll have something to deliver soon. I keep bouncing around, tho, so at this point, I want to deliver a release with only naming conventions. That release is mostly complete; I just need to get approval for a distribution mechanism. If I don't get that soon, I'll just release it on my site.