Articles from
October 2007
Looks like Lenovo is releasing a consumer laptop. While it doesn't look as nice as the ThinkPad, I'm excited to see Lenovo joining the game. They didn't address all my keyboard concerns, but it is nice to see some change. The stupid mouse "nub" is gone and the Esc key seems to be in the right place, but, unfortunately, they still have the damn Fn key out of whack... which is, of course, the most annoying issue of all. On the positive side, it has an integrated camera, what sounds like a nice sound system, and a 6-in-1 card reader. Nothing revolutionary, but they are at least nice features. Hopefully, we'll see these improve over time. I'm really hoping to see some change in the look and feel of the laptop. It's about time for some out-of-the-box thinking. Then again, maybe they should focus on tablet PCs first, since I'll be getting one of those in June.
Looks like C++ will be seeing an upgrade in the coming years. Seeing as tho I don't use C++, I was about to brush this aside. Then, I started to get curious about the changes. In what has been dubbed C++0x, there will be changes to remove errors and inconsistencies, add garbage collection, and add support for concurrency and multithreading. While the first two sound like big wins for the C++ community, the last one triggered something. Anyone can tell you the only constant is change, but looking at where hardware is moving and today's limitations in programming, it's become apparent that current approaches to multithreading need to be revised. Of course, this leaves us with the all-too-often-asked question, "how can we improve this?" Honestly, the change in C++ is just about promoting features from system-specific libraries to the core language, so there's nothing revolutionary going on. I want more. We now have Linq; I want Lint -- language-integrated threading. I don't really know what this means, yet, but the idea just sounds right. I want the compiler and/or run-time to make some decisions about how to manage threaded code blocks or method calls. Attributes are a gimme to identify methods to thread, but I feel like there's so much more we can do. Forcing someone to create a method to contain threaded code may not be all too bad of an idea. I guess what I'm thinking of would include a thread{} block with some sort of eventing mechanism. Perhaps something like thread (ThreadArgs t = new ThreadArgs()) {...}. Like standard event handlers, the ThreadArgs class would be extended for special scenarios. The key to this class would be its built-in eventing structure and thread processing instructions. The framework would know how to manage the thread based on a ThreadHandler<ThreadArgs> instance. Factory and provider patterns jump out at me here, but I'd have to put more thought into it to determine which would be the most beneficial.
The heart of the idea is that threading patterns would be programmed into handlers or providers, which would provide an extensible way to enable .NET to manage threads without forcing the developer to wrap his/her head around how it will work.
As nice as something like this may sound in my head, I don't think the full capabilities are coming out in my explanation. The real value lies within the ability to describe how a thread should execute in the ThreadArgs and the robustness of the handler to be able to execute that. This is still only half of the equation, tho. The compiler needs to be smart enough to detect common pitfalls, like deadlocks. This is obviously the harder problem to attack. Tooling also comes into play, when looking at how to avoid these problems.
All this is pretty half-baked, tho. In truth, someone a lot more threading savvy than I needs to look at this. I have no doubts that is going on, tho. I'd be interested in seeing what comes out of it all. I'm betting we'll see improvements in .NET 4.0. Not that any such release has been discussed, but I suspect we'll start hearing feature news by mid-2008 and see a release in 2009 with the next OS release. Again, I know nothing. I'm merely making wild speculations.
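To make the speculation a little more concrete, here's a rough sketch, in today's C#, of what the imagined ThreadArgs/handler pairing might look like. To be clear, every name here -- ThreadArgs, ThreadBody<T>, ThreadHandler, the Completed event -- is hypothetical; this is just my half-baked idea expressed as code, not any announced API.

```csharp
using System;
using System.Threading;

// Hypothetical state/instruction bag passed into a threaded block.
// Like EventArgs for event handlers, it would be extended for special scenarios.
public class ThreadArgs
{
    // A thread processing instruction the framework could act on.
    public ThreadPriority Priority = ThreadPriority.Normal;

    // Built-in eventing structure: raised when the block finishes.
    public event EventHandler Completed;

    internal void OnCompleted()
    {
        EventHandler handler = Completed;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

// The delegate a thread{} block might compile down to.
public delegate void ThreadBody<T>(T args) where T : ThreadArgs;

// Hypothetical handler that knows how to run a body given its args.
// A factory/provider could hand back different implementations
// (thread pool, dedicated thread, and so on).
public static class ThreadHandler
{
    public static void Run<T>(T args, ThreadBody<T> body) where T : ThreadArgs
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            body(args);
            args.OnCompleted();
        });
    }
}
```

In place of the imagined thread (ThreadArgs t = new ThreadArgs()) {...} syntax, you'd call ThreadHandler.Run(new ThreadArgs(), delegate(ThreadArgs t) { ... }) today. The point isn't the plumbing; it's that the handler, not the developer, decides how the block actually executes.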
Dvorak, in his typical sensationalist nature, questioned the code name for the next version of Windows: Windows "7". He claims it should be Windows "19". Apparently, he doesn't know much about versioning. I expected more from him... well, maybe not. He's all about getting attention and taking every opportunity to point a finger. That's ok, tho. I have no problem with that, but I do have to point out when he makes an idiotic comment like this one. I checked out Wikipedia and updated it to include version numbers in the table of releases for Windows. For good measure, here's a list of the named releases and their version numbers...
- Windows 95 (v4.0)
- Windows 98 (v4.10)
- Windows 2000 (v5.0)
- Windows Me (v4.90)
- Windows XP (v5.1)
- Windows Server 2003 (v5.2)
- Windows Vista (v6.0)
- Windows Server 2008 (v6.0)
- Windows "7" (v7.0)
Personally, I hope they stick with "7". I kind of doubt it'll happen, but I love the name.
Shelving was a heavily touted feature for TFS when it was first released in early 2006. Microsoft seemed to try to sell it as a new concept, but I argued it was simply an adjustment to an old concept: branching. I will say it's probably a good thing Microsoft put so much into selling the idea. Without it, I don't think most developers would know about such a capability. Then again, there seem to be a lot of developers who still don't know about it. Anyway, back to my point... After playing with shelving in TFS, I'm getting mildly annoyed with it. I guess the reason I say that is because I want it to be treated more like a branch. I believe in the concept of committing logical changesets, meaning I make small changes and commit them individually. Perhaps I take this to an extreme, but I want each change to be tracked independent of any others. When I shelve code, it's usually a sizable change. I'd like to be able to shelve the first change and then incrementally update the shelf with my changes as I go along. I'd also like the ability for others to commit to my shelves, which speaks to the collaborative nature of shelves. This is all a given when you use Svn shelving (aka branching); I just wish TFS were up to it.
Scott Sehlhorst does a good job of discussing the difference between use cases and scenarios. I think this is important for those following the Microsoft Solutions Framework (MSF), which uses scenarios. Personally, I think scenarios are too specific and don't offer the amount of flexibility you need to make your requirements easy to manage. One of the nice things about use cases is you can make them as fine- or coarse-grained as you want. To sum up what Scott says, a scenario is a single run thru a use case. So, while a use case may have several alternate flows, only one of those flows, from end-to-end, would make up a scenario. Pretty simple, but I feel it's an important fact to acknowledge.
There's no telling how true this really is, but there's rumored to be a new user interface for Windows Mobile. The post says it sits on top of WM6, which makes a little bit of sense, since WM6 was released not too long ago, but typically, UI enhancements constitute a major version change. Either way, I'm intrigued by the rumor. I will say there's been talk of a completely new user experience which isn't based on the "start" menu concept in the next version of Windows, but I don't know if this qualifies -- besides, this is just Windows Mobile. There's still a start menu, but it is a different concept than what we have now. I imagine Microsoft is looking for some new ideas and this is perhaps a prototype of one of those. I kind of doubt we'll see something too close to this interface, but I'm sure all WM users will be eager to see some change. There hasn't been a major change in the mobile user experience since its initial release six or seven years ago.
A small big deal was made about Steve Ballmer saying Google reads its users' email. When I read this, I thought, "You've gotta be kidding me! When does he stop!?" Ballmer's made a number of comments that go against company policy, which I think is funny. He's a very controversial CEO and a number of people think the company would be better off without him because of this. I'm not going to go into that, tho. What I do want to go into is his comment. He did, in fact, say, "...they read your mail and we don't," but I think it was taken out of context. Specifically, Ballmer was referring to targeted ads. Ads on Gmail are targeted based on the content of your email, while Hotmail's aren't. What this means is that Google reads thru your email to determine what ads to display. Of course, we have no idea what, if anything, Google is doing with this content beyond creating targeted ads. Google's seen a barrage of publicity about this in the past and I don't expect that to lessen anytime soon. This is perhaps why Microsoft has chosen to avoid the hoopla and opt for a more secure practice... at least, a logical barrier between Microsoft and your email content. Personally, I don't care about the targeted ads. In fact, I think, if you're going to show me ads, you might as well show me an ad I'm likely to click on, but privacy Nazis are probably loving it. All I care about is that it's not stored and/or used without my consent. All that aside, tho, I guess it was nice to see Ballmer actually making a truly factual statement that wasn't really about slamming the competition. The discussion was about ads and his statement about how Google generates ads was completely within reason. Heck, watch it for yourself.
There was a comment in the video about Visual Studio on Mac/Linux. He claimed that there was no plan now, but that it would be evaluated if/when there was a market. Just to chime in, I see it coming. Let Silverlight grow over the next two years and I believe we'll start seeing full cross-platform .NET. After that, we'll see more Microsoft apps that have been at least partially built on .NET be transitioned. The future of Microsoft is .NET. I can already see the day that Windows is built on .NET. Maybe I'm crazy, tho. I'm sure all the C++ guys think I am.
As I started the process of upgrading my Oracle client from 10.2.0.2 to 10.2.0.3 a few weeks ago, the pain and aggravation it caused reminded me of how much Oracle must really hate its users. Seriously, can you think of any other reason Oracle hasn't improved its user experience in the past 10 years!? Should it really take over an hour to make such a minor upgrade? I've installed, tested, and uninstalled apps in less time. It's not just about the time, tho. The fact I had to manually replace files, install assemblies, and modify registry settings is what really takes the cake. Don't even get me started on the pain of accidentally installing Oracle into two separate locations, which is a very common problem that causes immeasurable aggravation.
With all this, you can imagine how much I had to laugh when I saw Oracle's name on the World Usability Day poster. Oracle and usability? You gotta be kidding me. I don't think so. While we're looking at the names, I also found it amusing that the top two tech companies known for usability aren't there: Apple and Microsoft.

The long-awaited Windows Live Maps has now seen its first major update... and when I say "major," I mean major! The #1 complaint about having two input boxes has been resolved; there's now only one. That's not it, tho; there's a new layout, which I think looks pretty nice. I really like the minimal navigation and menu controls. I could do without the space that's taken up by the search type (business, people, collections, locations) and description, but at least this is a start. I also don't like how much space is taken up by the search results on the left, but I know people have complained about that being too small, so I'm probably the minority here. There's a bunch more, but I won't get into it all. Let me just say I'm glad to see this release and I was surprised to see so much change. I guess I was only expecting a small change.


Speaking of change, tho, I do have to say the 1-click directions are very nice. The idea is, you right-click a place on the map, select 1-click directions, and you get directions to that location from the North, East, South, and West. From there, you decide which direction you're coming from and you get the proverbial "last mile" of directions. Very interesting. I've never seen this before, but I know I'll find myself using it in the future. There's also a link in search results to get these directions without right-clicking on a random place on the map.

I've always shied away from laptops. Laptops, in general, aren't extensible enough and tend to be too expensive. Over the past few years, tho, as I have become more and more mobile, laptops have become a necessary evil. The worst thing about laptops is the keyboard you're stuck with... and I do mean stuck with. If you had some level of flexibility to switch out keyboards, that'd be a different story, tho. Heck, now that I think about it, with a little reverse-engineering, someone could make some money replacing standard laptop keyboards. I imagine most don't question their laptop keyboards much, but as a touch-typist who tries to wring every bit of productivity out of the system as possible, I want... no, I need my keys to be in a standard location. Honestly, when I look into buying a laptop, the keyboard is the first thing I look at. If you don't have a keyboard that at least closely resembles the standard layout, I take my money elsewhere. What do I look for? Perhaps the first thing is the Insert/Delete/Home/End/Page Up/Page Down buttons. I want the 2x3, horizontal layout. Most vendors get dropped out here. Next, I look at the arrow keys, which must be in the inverted "T" formation. From what I've seen, most vendors who pass the previous test pass this one, too. From there, I glance over the other standard keys like Ctrl, Fn, Win, Alt on the left and Alt, Context, Ctrl on the right of the space bar. I can live without the context menu button being there, but it is the "standard" location. These are the main things I look for and, believe it or not, most laptop vendors fail to meet them all.
I don't know why laptop vendors insist on placing keys in random places. It's almost as if they just shove the QWERTY keyboard on a canvas and toss the rest of the keys on to see where they fall. Perhaps the best vendor I've seen is Dell. HP does a pretty good job, but not as good as Dell. On the other hand, HP has been using extended keyboards with a full number pad. I always get annoyed when I see a laptop -- like my 17" Dell Inspiron from 2004 -- that has plenty of extra room on either side of the keyboard, but no number pad. When you see a laptop with a number pad, you know the vendor is putting more thought into its users. The other thing I like about HP is the button to disable the mouse touchpad. When I've mentioned this to people in the past, they talk of a software disabler, but I have yet to find one; either way, a button is nice. I've pretty much dismissed all other vendors (especially Toshiba *grumble, grumble*)... well, that was until I got a hold of my Lenovo. People told me how "solid" these laptops were and I always wondered what they really meant by that. Since I've tried various other laptops already, I figured I'd give it a shot. Let's just say I was sold. Lenovos are missing some of the consumer conveniences of other vendors' laptops, but if you can get past that, Lenovos can be summed up in that one word: solid. Unfortunately, they're not all that and a bag of chips, tho.
When it comes to Lenovo laptops, I have four complaints. Let me start with the small one: the touchpad buttons are too low, which makes it awkward to use when the computer is in your lap. If the stupid trackpoint buttons weren't so huge, it wouldn't be a big deal. I've always hated those annoying mouse "nubs" and it irks me that it degrades my experience. The second is another minor annoyance; a nicety that was added to enhance users' web browsing experience: Back/Forward buttons on either side of the up arrow. My annoyance is that I've hit these keys several times when I wanted to use the arrows. This can be very annoying when you lose a lot of data (e.g. a blog post). As if that wasn't enough, the capability already exists with the use of one additional finger about 4" away (Alt+Left Arrow). Adding buttons with trivial benefits like this annoys me; especially when there are obvious negative effects. I wish they would've opted for a smaller button that wasn't as easy to accidentally push, like one the shape and size of the volume buttons. My third complaint is the Esc key, which is above the F1 key instead of to the left of it. I keep hitting F1, which makes the system hesitate while it brings up the help. This derails my productivity, like the Back/Forward buttons. Speaking of derailing productivity, this last one boggles my mind: the left Ctrl and Fn keys are switched. This is the first time I've seen something this stupid. What really gets me is how a vendor with such a quality laptop can miss something this obvious. Most people seem to think it's ok; that you'll just get used to it. I'm sorry, but I refuse to accept this. Of the 7 people I know who have a Lenovo, all of them say this is their #1 complaint. Another handful of people complained about this when I sent an email internally polling for a workaround. Unfortunately, the Keyboard Customizer Lenovo offers doesn't cover this.
This is obviously a common problem, tho, and Lenovo isn't the only one to blame. The broader topic of keyboard standardization came up in Hanselminutes a while back. Some of my concerns were voiced there. Perhaps there's a need for a true standard. I don't see anyone pushing that, tho, so I'm not sure where to go for a global resolution. For now, I guess we're left with our voices.
I've used about half a dozen or so bug and issue tracking systems over the years and, with that, have come to favor specific setups. The main configuration I'm referring to includes work item status, resolution, priority, and severity. When I first dug into TFS, this was something I looked at and didn't like all that much. With the ability to customize work item templates, tho, I wasn't too concerned. Given that not too many people who work with TFS seem to have much experience with other, more mature bug tracking systems, I figured I'd share my thoughts.
Work item status in TFS is limited to active, resolved, and closed, by default. Without some sort of reason or sub-state, this just isn't enough. I want to say sub-states are probably how most people get around this, but I'd argue that full states are better. This is obviously an individual preference, tho, and only really comes into play when you look at reporting. The statuses I prefer are as follows: unconfirmed, new, assigned, in progress, awaiting info, reopened, resolved, verified, and closed. As with the rest of what I'll cover, anyone with Bugzilla experience will recognize some of these values. Bugs are created as "unconfirmed" and are moved to the "new" status after being triaged. By default, TFS includes a triage property, but I feel this is better covered by status. Again, just my opinion and it comes more into play with reporting. I prefer bugs to remain unassigned by default and then have the status changed to "assigned" when it's actually been assigned to someone. "In progress" and "awaiting info" are gimmes. Once completed, the bug should change from "in progress" to "resolved." When this happens, a resolution needs to be specified to indicate what actually happened. Resolutions are pretty standard: fixed, invalid, won't fix, duplicate, unable to reproduce. I'm thinking there are more resolutions, but this is all I can think of off the top of my head. Once tested, the bug is changed to the "verified" state and ultimately to "closed" once it has been deployed.
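For what it's worth, the workflow above is easy enough to pin down in code. Here's a quick sketch of the states, resolutions, and transitions I just described -- the names are mine, borrowed from the Bugzilla-style flow, not anything TFS ships with, and the transition table is just my reading of the workflow, not a standard.

```csharp
using System;
using System.Collections.Generic;

// Statuses from the Bugzilla-style flow described above.
public enum BugStatus
{
    Unconfirmed, New, Assigned, InProgress, AwaitingInfo,
    Reopened, Resolved, Verified, Closed
}

// Standard resolutions, recorded when a bug moves to Resolved.
public enum Resolution
{
    Fixed, Invalid, WontFix, Duplicate, UnableToReproduce
}

public static class BugWorkflow
{
    // Legal moves between statuses, per the flow above.
    static readonly Dictionary<BugStatus, BugStatus[]> transitions =
        new Dictionary<BugStatus, BugStatus[]>();

    static BugWorkflow()
    {
        transitions[BugStatus.Unconfirmed] = new BugStatus[] { BugStatus.New };               // triaged
        transitions[BugStatus.New]         = new BugStatus[] { BugStatus.Assigned };
        transitions[BugStatus.Assigned]    = new BugStatus[] { BugStatus.InProgress };
        transitions[BugStatus.InProgress]  = new BugStatus[] { BugStatus.AwaitingInfo, BugStatus.Resolved };
        transitions[BugStatus.AwaitingInfo] = new BugStatus[] { BugStatus.InProgress };
        transitions[BugStatus.Resolved]    = new BugStatus[] { BugStatus.Verified, BugStatus.Reopened };
        transitions[BugStatus.Reopened]    = new BugStatus[] { BugStatus.Assigned };
        transitions[BugStatus.Verified]    = new BugStatus[] { BugStatus.Closed };            // deployed
        transitions[BugStatus.Closed]      = new BugStatus[0];
    }

    public static bool CanMove(BugStatus from, BugStatus to)
    {
        return Array.IndexOf(transitions[from], to) >= 0;
    }
}
```

Customizing the TFS work item template would boil down to declaring these states and transitions in the work item type XML; the code above is just the same idea in a form you can reason about.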
Priority and severity go hand-in-hand. All too often, I see teams trying to calculate what bugs should be in a certain release by priority alone. While this is a great start, I don't think it's the only factor. Let's just start with priorities, tho. Personally, I like to have at least 5 priorities because it gives you a bit more flexibility than 3 or 4, which seems to be fairly common. That's just a start, tho. I had a conversation with a co-worker, Allan Askar, last week about putting some meaning behind these numbers. We haven't put this to use, but I'm pretty confident that it should make prioritizations across the team much clearer. The key here is that priority is based on the iteration a work item is assigned to. If not assigned to a specific iteration, the work item's priority indicates when the work item should be implemented: 1's in 2 weeks, 2's in a month, 3's in 2 months, 4's in 4 months, and 5's in 6 months. The idea is every work item would be assigned to the backlog and given a priority based on its importance at that time. During release planning, every item on the backlog would be evaluated to update its priority and determine whether it should be included in the upcoming release. If an item is included in the release, then priority loses its time-based meaning and picks up a relative priority that simply indicates what order it should be worked on, as you would typically expect. In smaller chunks, we felt the abstract nature of relative priority was adequate; however, a larger group, like the backlog, needs more meaning, and this should work much better. I guess we'll see how that works for us. Obviously, any time frames could work. We just chose some that seem to be somewhat typical on our current project.
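The time-based mapping is simple enough to write down. A tiny sketch, using the timeframes we happened to pick (any would work), of turning a backlog item's priority into a target date:

```csharp
using System;

public static class BacklogPriority
{
    // Backlog-only mapping: 1 = 2 weeks, 2 = 1 month, 3 = 2 months,
    // 4 = 4 months, 5 = 6 months. Once an item lands in a release,
    // priority goes back to being a simple relative ordering.
    public static DateTime TargetDate(int priority, DateTime assignedOn)
    {
        switch (priority)
        {
            case 1: return assignedOn.AddDays(14);
            case 2: return assignedOn.AddMonths(1);
            case 3: return assignedOn.AddMonths(2);
            case 4: return assignedOn.AddMonths(4);
            case 5: return assignedOn.AddMonths(6);
            default: throw new ArgumentOutOfRangeException("priority");
        }
    }
}
```

So a priority-1 item assigned to the backlog on October 1st should be implemented by October 15th; a priority-5 item can wait until April.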
Lastly, severity is what brought me into this whole thing. Originally, severity was all about workarounds (on this project). I never felt comfortable with this, so I suggested the following: critical, major, normal, minor, trivial, and enhancement. My guess is that most people are fine with this until they hit "enhancement." To most, a bug is a bug and an enhancement is something completely different. I disagree. I see "bugs" as code-changing tasks that require testing. This includes simple refactoring changes. Why? Because they have to be regression tested to ensure everything works as expected. If this is skipped, you're risking breaking the app. Seriously, how many times have you saved something without checking it only to find out it either doesn't build or simply breaks logic? Yeah, I thought so. If that's not enough of a reason, then I'll stick to something everyone likes: simplicity. Bugs and enhancements are the same for the most part. Separating them just overcomplicates reporting and tracking.
I should probably modify the default bug and task templates to add some of these things in. Severity probably only works with bugs, but the status and priority concepts are arguably the same.
In the past, I've put a lot of effort into defining all, or at least the majority, of a given system's requirements up-front. The reason for this is that I've worked in fairly process-heavy environments. While I continue to see the value of this effort, it simply isn't feasible in most environments. Actually, I have to ask: is it feasible in any environment? Most developers find the requirements definition process a pain and question its value; however, experienced engineers recognize the value and strive to find a way to work it into the software engineering process. Unfortunately, this doesn't happen enough. For most small systems, this may not pose an issue; however, once these systems grow to a significant team size or fall victim to enough attrition, new team members start to feel the pain of this missing documentation. New team members aren't the only ones who feel the pain of limited documentation. As anyone who's witnessed one of the many failed contracting efforts in the software or construction industry can attest, faulty requirements definition can be the death of any project. For this reason, I hope more developers will find the value of requirements and seek to develop them for their own projects. What is the right way, tho? How should we solidify the abstract thoughts that form the projects we engage in? Tyner Blain does a good job of describing an agile approach to requirements definition. I hope more take this to heart. Due to a change in scenery over the past year, agility has been more important than process. The change in viewpoint has been interesting and has ultimately driven me to the same opinion Tyner Blain espouses: a full definition is desired, but it's easier to swallow one bite at a time.
About a month ago, I commented on Sun's controlling nature with respect to Open Office. We saw it before with Java and we're seeing it again. I find it quite amusing to see this company scrape by on community-focused ideals, hiding behind the guise of openness. Now, the community behind Open Office seems to be setting its sights on IBM. IBM is a mammoth and supporting open source initiatives is about all it can do to stay "fresh," in my mind. IBM's efforts with Eclipse did a lot for the company and could do a lot here, but it's definitely an uphill battle. Despite their positive relationship over the years, Sun won't give up its control easily. Hell, it took a CEO change to loosen grips on Java. Either way, I don't see Sun's actions killing the open source project. I do see its relevance dropping. To be honest, I don't think there is much relevance at this point besides a jump in attention due to the ODF vs. Open XML debate. The real competition seems to be more in the connected world, rather than on the desktop. There's been a love/hate relationship with the ribbon UI of Office 2007, but it's the first real innovation we've seen in a long time. While I don't expect such a drastic improvement in the next release, I'm interested in seeing what's next. We have Office dominating the productivity application market, Open Office making minor steps with the document format debate, and Google quickly putting some skin in the game with their online offering. There's much to see in the coming years. The question is whether or not it's too late for Open Office.
Karl Seguin talks about extension methods, which are part of the next release of both C# and VB; specifically, he suggests avoiding them. He quotes the C# documentation by stating...
"Extension methods are less discoverable and more limited in functionality than instance methods. For those reasons, it is recommended that extension methods be used sparingly and only in situations where instance methods are not feasible or possible."
This is pretty much the impression I've had about them since I first saw them. I love the idea of having them, but all it really gives us is a new way to call our existing functionality. When you think about it this way, it seems pretty useless; especially when you consider the fact that it comes with some baggage. I can see developers getting confused when they see an extension method used in one project, but can't seem to find the same method when on a different project. The problem here is that simply looking at a code snippet doesn't clue you in on the fact that a method is an extension method. For this, you'll have to at least hover over the method and glean that knowledge out of the tooltip or IntelliSense. Based on this, I'd tend to agree with Karl and say extension methods should be avoided; then again...
Karl claims he can't think of a potential use of extension methods, but I have two that I've thought about a few times. The first, which was the very first thing that hit me after being introduced to extension methods, is extending .NET. If you can't imagine that, let me introduce you to the universal problem that is the string. .NET 2.0 brought us String.IsNullOrEmpty(), but that's not enough. Why not? Because I need something that also checks for whitespace. Essentially, what I'd like is a String.IsBlank() method. Pretty simple, but very useful.
public static class StringUtil
{
    public static bool IsBlank(this string value)
    {
        return (value == null || value.Trim().Length == 0);
    }
}
Which would you rather see: StringUtil.IsBlank(username) or username.IsBlank()? Personally, I like the latter. The alternative is extending the class in question, but for something like String, which is so ubiquitous, that's not a serious option.
Want another example? I'll give you two, both of which relate to domain objects. Domain objects, as opposed to business objects, hold state and only state. You don't see any actionable methods on a good domain object. Typically, these objects are used to pass state between different systems or even throughout a single system. Business logic doesn't need to be everywhere, so a pure, state-based object is often the answer. Of course, some people like methods off of their state-based objects. Extension methods can give us the best of both worlds -- the simplicity of a condensed logical structure and the flexibility of a layered architecture. The first example I'll show is with validation.
public interface IForm { ... }
public class BasicForm : IForm { ... }
public class AdvancedForm : IForm { ... }
public static class FormValidator
{
    public static bool IsValid(this BasicForm form) { ... }
    public static bool IsValid(this AdvancedForm form) { ... }

    // Needed so IsValid() also resolves when all you hold is an IForm reference;
    // it would check the runtime type and delegate to the overload above.
    public static bool IsValid(this IForm form) { ... }
}
As you can probably gather, this will add an IsValid() method to the two concrete IForm implementations. Sure, we could simply call FormValidator.IsValid(), but that's just not as clean. Still not seeing it? Let me add something else to the story...
public static class FormDelegate
{
    public static void Create(IForm form) { ... }
    public static void Update(IForm form) { ... }

    public static void Save(this IForm form)
    {
        if (form.IsNew)
            Create(form);
        else
            Update(form);
    }
}
If you're not seeing the value of separating logic from state, you're just not going to get this at all. I could go into my reasoning behind some of the decisions I made in these implementations, but I'll save those for another day. For now, let me just finish up my code sample with the code you'll actually be using. Oh, and by the way, I prefer create/read/update/delete (CRUD) method names, but "save" comes with a graceful simplicity. I go back and forth, but seeing as tho this is about simplicity, "save" makes more sense.
IForm form = FormFactory.Create(...);
...
if (form.IsValid())
    form.Save();
This pretty much saves you from having to use the more verbose...
if (FormValidator.IsValid(form)) FormDelegate.Save(form);
And that's it. I love the idea... as if you couldn't tell. I do think the practice should be somewhat limited, however. And I don't know whether or not I'd do this in my projects. I merely thought about the ability to do this when I read Karl's post. I'd be interested in seeing how others might take advantage of the feature.
Tim Barcz talks about options when it comes to implementing search for a custom site. He suggests two answers: Google and custom-built. I'd recommend two more, both of which I've used and been very happy with.
From what I've seen, Google is not quite what I'm looking for when I think about integrating search into my site. It's not bad, but it doesn't give me the feel I'm looking for. Admittedly, I haven't played with it. I'm simply going off of what I've seen around the web. I want something wholly integrated into my web application, not just Google with a logo. Ok, I understand there's more than just that, but I have yet to see a Google search inserted into a site; every implementation I've seen has been the other way around. On the other hand, there are ways to do this with a bit more work... but I'm lazy.
The second option is just plain crazy. Sure, if you've got the time, go for it. Who does, tho? Even if you do have the time, who says you'll implement something completely bug-free? Yeah, right. For this, I have one suggestion that gives you search and a host of other capabilities without limiting your ability to create great .NET sites: DotNetNuke (DNN). DNN is an open source portal framework or content management system, depending on who you ask. It's absolutely wonderful. That's what I use. While I'll probably get some flak on this comment, think about it as SharePoint-light. DNN is a little rough around the edges and I don't think I'd want to claim the vast majority of the code I've seen, but it is a very good foundation with an excellent extensibility story. Since I'm mentioning it, tho, SharePoint would also be an option; however, I'm not convinced it's the best story for anyone looking for a website. It'll do what you need it to do and then some, but it might be overkill. SharePoint is much more polished and provides a host of features DNN couldn't touch, but the developer experience isn't all it's cracked up to be. I'm going to stay hopeful for the next release, tho. But, I digress...
Lastly, I have to mention my favorite: Live Search Box. I love this one because it's totally non-intrusive. Try it for yourself. The search box I have at the top-right of every page is Live Search. A nice AJAX-y popup shows your results without intruding on your look and feel. In a sense, it adds to it. I love it! As if that wasn't enough, I was able to be as lazy as I wanted. It simply takes adding a little JavaScript, and you're done. Like I said, I love it.
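For the curious, the mechanics behind a widget like this are pretty simple. The sketch below is purely illustrative and not the actual Live Search Box code; it just shows the core idea of site-scoped search by building a Live Search URL restricted to one domain with the site: operator. The function name and example domain are made up for the illustration.

```javascript
// Illustrative sketch only -- not the real Live Search Box widget.
// Builds a Live Search URL whose results are scoped to a single domain
// using the site: operator. search.live.com was the Live Search
// endpoint at the time of writing.
function buildSiteSearchUrl(query, domain) {
    var scopedQuery = query + " site:" + domain;
    return "http://search.live.com/results.aspx?q=" +
        encodeURIComponent(scopedQuery);
}

// Example: search a hypothetical site for "PowerShell"
var url = buildSiteSearchUrl("PowerShell", "example.com");
```

A real widget would then fetch and render those results in a popup layer instead of navigating away, which is what makes the experience feel integrated rather than "Google with a logo."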
By Michael Flanakin
@ 12:49 PM
:: LSU
Well, looks like, after a week of questioning LSU's position in the Associated Press rankings, it's become unanimous across each of the major college football ranking systems: LSU is #1. This past game was an important one for the team. LSU came out strong this year, killing the competition in 4 of its first 5 games, which included a game that was expected to be the best non-conference match this year -- apparently, someone forgot to tell Virginia Tech that. All in all, these seemed fairly easy games for the Tigers. This is perhaps why I was worried. The team has a history of getting a little too cocky. Heck, the first half of the season is always the easiest, so this obviously kick-starts their adrenaline to a dangerous high. With Florida coming off a bad in-conference loss to Auburn that cost them 5 spots, I knew the Gators would bring it to this match in a fiery attempt to prove their worth. This just spelled disaster to me. I wasn't too far off, either. Florida came out very strong in the first half, keeping ahead and making onlookers question the "#1 defense." Don't get me wrong, I think LSU did a great job, but this wasn't what I was used to seeing. In a turn of events, LSU followed up with an atypically strong second half. LSU is typically a team that comes on strong and dies off in the second half. Not this game. While the team didn't dominate like I would have expected a #1 team to dominate, they did pull out the stops they needed to. After watching the game, I told myself there was no way LSU would remain at the top of the polls... well, until I heard the news...
USC lost!!! ...and to Stanford!? Thank you! The one-point loss wasn't a drastic one, but it didn't need to be. I've questioned USC for years and am glad to see someone lay it to them. Ok, maybe that's a bit too strong, but that single point did cost them 8 spots, dropping them to #10. That's a game I should probably go back and watch. The huge upset wasn't enough to get Stanford ranked with their now 2-3 record, but they do have bragging rights for a year. Hopefully the team can keep it up to prove it wasn't a fluke. Nonetheless, I tip my hat to thee.
Someone briefly mentioned Java Specification Request (JSR) 168 to me a little over a month ago. As most would, I asked what the heck it was about. I know what JSRs are, but I don't make a habit of knowing each one. JSR 168 turns out to be all about portal applications and, specifically, calls out a Java-specific way to implement Web Services for Remote Portlets (WSRP). Any time I'm asked about integrating Java and .NET, two things come to mind -- and, no, one is not replace the Java with .NET... although that is a good idea. Those things are JNBridge and Mainsoft. I don't know much about these tools besides their existence and high-level goals. After talking to Simon Guest a month and a half ago about user experience, he mentioned how JNBridge works. I'm going to liken it to how Visual Studio allows us to easily consume web services. If I understood it correctly, JNBridge creates a proxy class on the target platform that hooks into their system, which wraps the original code, be it .NET or Java. I'm not sure how Mainsoft does the job, but I wouldn't be surprised if it's a somewhat similar method.
Of course, anyone who paid attention to the fourth sentence above will notice I said JSR 168 is about web services, so you might ask why one would need to integrate Java and .NET at a component level. I'm going to chalk this one up to a mild case of stupidity. I say mild because there is some logic here, but not enough. Being the brilliant person that he is, an "architect" at a client's site determined that web services were too slow to accomplish what they needed. At first, I started to accept that. Then, I thought about how web services can be streamlined and asked what numbers they had to back up that claim. Apparently, there aren't, and never have been, any benchmark tests. People: If you're going to claim something is too slow, at least have some numbers to prove it. Later, I found out JNBridge was mentioned to this person before, but was shrugged off. I don't know if it's the presence of Microsoft that made him change his tone, but he was very accepting of the idea. To me, this guy is one of those zealots we run into occasionally. They always have something hateful to say about the competition, but rarely add to the conversation. In this case, he was (and still is) trying to push Microsoft solutions out of the conversation. I find that funny because... well, let me just say Microsoft has brought a lot of value to the client in the past year. We're not alone, by any means -- we work with some really good... and, as with any project, some not-so-good people. I guess one of the key differentiators is our extensive training mantra along with our connections and resources back in Redmond and abroad.
As if the subject of this post doesn't clue you in on my initial reaction to finding out about this, I was quite surprised by the fact that .NET is now open source under the Microsoft Reference License (Ms-RL). I'd like to see open source zealots' reactions to this one. I'm sure they'll say it's all in a move to push Windows licenses... however that might work. On a lighter side, I'm curious what the Mono folks will do with it. As I understand the license, I don't believe they can simply run with the code. Then again, it looks like they are utilizing some key components of the framework.
On the other hand, Miguel de Icaza claims the license isn't an "open source" license; however, I'd argue this. By definition, open source is about access to source code. This doesn't mean you can use it for whatever you see fit or even contribute to it. That's why there are so many different types of open source licenses. Miguel's idea of open source seems to be more about open use, open contribution, or perhaps solely on whether or not a license has been Open Source Initiative (OSI) approved. Ms-PL hasn't, but that discussion is underway. After reading a little bit of Miguel's thoughts and opinions, it seems to be more about open contribution. I'll have to stand by the position that this isn't and should never be the most important tenet of being "open source." I will agree that it is key to the greatest level of openness. Honestly, tho, I don't think I'd accept anything from everyone if I managed an open source project. There's just too much crappy code out there. Miguel's last comment in the aforementioned post indicates he wants it all, tho.
No matter Miguel's thoughts on what it means to be "open," I think everyone will agree this is a fantastic move for the community. I love having source to look at. This is why Reflector has been so popular. As Miguel mentioned, I've made use of the Mono source many times when I wanted a peek into .NET. This isn't the same, but it's been close enough. I think more people will be interested in what Microsoft has written than in Miguel and company's work... not to devalue their effort.
Lately, I seem to be doing more and more work in the console. I started creating batch files to automate some manual tasks when I was an assistant network administrator over a group of Windows 3.11 machines. Since then, I've returned to the console off and on, but never really enough to be too concerned with how it looked or felt. Then comes PowerShell. The first time I opened PowerShell, I was in awe -- not because of what I could do, but how it looked. I liked the larger, more colorful screen. I also fell in love with quick edit mode, which allows you to left-click to select and right-click to copy/paste. This is huge, especially when you're writing SQL. Since I came across these things, I've wanted all of my console apps to look and feel the same way. As if that wasn't enough, I get to these apps by using Win+R, then cmd or powershell. If you haven't tried that, PowerShell launched this way opens in a traditional black/gray window. Obviously, you can just click an icon, but that slows me down. If you don't know me, I'm fairly big on productivity... or, at least doing things faster.
After automating more and more tasks -- namely, deploying software to a clustered environment -- I eventually got sick of looking at the black/gray and wanted my blue/white. Lo and behold, the registry comes to the rescue. Console window settings are located at HKCU\Console. These are the default settings; overrides are stored in sub-keys created for the different times you've modified the properties of a console window. If you don't know about these properties, open a console application, click the system menu icon at the top-left of the window, and select Properties. Feel free to change it to your heart's content. I'd suggest modifying one to look exactly how you want and then saving it. This will get saved to the registry, where you can then replace your defaults. To do that, browse to the customized key, right-click on the key, and select Export. Just save this to the desktop for simplicity. Next, open the .reg file with a text editor -- my favorite is Notepad2 -- and change the path to [HKEY_CURRENT_USER\Console], leaving off the sub-key name you exported it from. After this, save and close the file and simply double-click it to import the settings to the new location. If you're concerned about what this might do, simply export the existing console settings first so you can restore anything you decide to undo. Here are my new defaults: ConsoleStyle.reg.
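For illustration, an edited export might look something like the following. The value names are real console settings, but the specific values are examples rather than the contents of my ConsoleStyle.reg. ScreenColors packs the background color into the high nibble and the foreground into the low nibble, so 0x1f gives bright white text on a blue background; QuickEdit=1 turns on the left-click select/right-click paste behavior I mentioned; FontSize stores the character height in the high word, so 0x000c0000 is a 12-point font.

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Console]
"ScreenColors"=dword:0000001f
"QuickEdit"=dword:00000001
"FaceName"="Lucida Console"
"FontSize"=dword:000c0000
```

Note the path is the bare [HKEY_CURRENT_USER\Console] with the exported sub-key name removed, which is what makes these the new defaults for every console window.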
NOTE: Open all .reg files in a text editor before executing, as they may be hazardous to your system!
It's all pretty simple and, of course, nothing new. My only hope is that we get a better overall experience for console apps. I'm thinking of a tabbed environment with real intellisense, not just tab completion. I could probably come up with a dozen more features I'd like to see, but essentially, it's all about making the console a more attractive, approachable system. Currently, newbies open it and say, "What now?"