Are You Ready for Live ID?

Communism doesn't work because people like to own stuff. - Frank Zappa

Microsoft has rolled out its Live ID Web Authentication SDK and from some of the forum posts I've seen, developers are already misunderstanding what they really have to work with. You see comments like "hey, but it won't give me their email", or "Useless!", etc. Wrong!

Windows Live ID is the authentication system used by Microsoft’s web sites and services - for example, Hotmail and Live.com. Windows Live ID is the evolution of the original Passport service -- the name change ostensibly to focus on Windows Live services and its ability to federate in a multi-property, multi-service identity world. If you don't believe this is big, let me give you one number: currently, more than 380 million users have credentials that work with Windows Live ID. D00d, that -- is big!

Oh, and by the way - it's not just for the web. There is also a Windows Live ID Client SDK.

Mostly the offering has been driven by MS identity guru Kim Cameron, with the idea that there is a “metasystem” that utilizes WS-Trust to translate tokens, so that all identity systems can interact with each other. This is all web standards -- isn't that the direction everything is supposed to be going? I'm not so sure that Google and Yahoo, with their own authentication schemes and APIs, are getting this...

Frankly, I think it is deplorable that Microsoft took so long to "get it" - and to finally do this. However, it now appears that Microsoft has indeed "got it" -- and that the commercial competitors have actually gone backwards in that they "do not".

The most important aspect of this rollout is that Windows Live ID will support WS-Trust, WS-Federation, CardSpace and ADFS (Active Directory Federation Services). This means that Windows Live ID can interact with other identity implementations and that you can integrate your corporate Active Directory environment with Windows Live ID. In other words, what was once a closed system (Passport) has now been transformed into an open, standards-based, transparent system.

When you put the sample IFRAME-based Live ID sign-in "control" on a page and a Live ID user signs in, the AppID information you stored when you registered your property (site) with Live ID tells it to redirect to the page you've designated. At this point you can capture the unique UserId string for that user, which will always be the same for your application. Now this requires a bit of thinking "outside the box" for most developers.

Normally, you would have a new user create a membership - with their username, email, password, etc. Your membership provider would store this information, enabling the user to come back and log in at your site. You would probably email them an "activation" link so that you don't get spammers. You would authenticate them via the username and password they provide. With Live ID, this is reversed just a bit. A new user signs into your site with their Live ID and -- THEY ARE ALREADY AUTHENTICATED -- you just don't have their user information yet. So, what you do is look up their UserId (the Live ID identifier described above) and, if they aren't in your system, you take them immediately to your sign-up form where they can enter their name, email, preferences, etc. For standard ASP.NET Membership provider authentication, the password you would use would be the same as the username: the unique user ID string returned from Live ID (it's a GUID).
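To make the flow concrete, here is a rough sketch of what your designated return page might do. Treat this strictly as an illustration: the WindowsLiveLogin helper class ships with the Web Auth SDK samples, but the exact method and property names I use here, the page names, and the MemberExists helper are my own assumptions, not the SDK's literal API.

```csharp
using System;
using System.Web;

// Hypothetical return-URL page that your registered AppID points at.
public partial class LiveIdHandler : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The SDK's helper class validates the posted token for you
        // (names here are illustrative -- check the SDK sample code).
        WindowsLiveLogin wll = new WindowsLiveLogin(true);
        WindowsLiveLogin.User user = wll.ProcessLogin(Request.Form);
        if (user != null)
        {
            // user.Id is the opaque UserId string, unique to your app.
            if (!MemberExists(user.Id))
                Response.Redirect("~/SignUp.aspx?uid=" + user.Id);
            else
                Response.Redirect("~/Default.aspx");
        }
    }

    private bool MemberExists(string liveId)
    {
        // Stub: look up liveId in your own Users table.
        return false;
    }
}
```

The point is simply that authentication has already happened by the time this page runs; all you are deciding is whether this UserId is already registered with you.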

This is then stored in the database using the UserId you got from the Live ID sign-in as the "lookup" to get the member info back out when they revisit. Since this information is stored in a cookie, they can have a persistent, seamless login using the all-familiar Live ID mechanism. If your user doesn't want to provide the required Membership ("registration") information, they may still be able to authenticate with Live ID, but it is up to you whether you want them to have full "User" privileges.

This is very easy to do with the standard ASP.NET Membership Provider implementation by writing a custom Membership Provider class and using the UserId string to do the ValidateUser work. You don't really need a password, since your user is already "legit" from Live ID. In fact, you don't even need to use the membership provider - you can just do a database lookup from your own non-opaque Users table.
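A minimal sketch of what that custom provider could look like follows. The class and helper names are hypothetical; the only interesting part is that ValidateUser reduces to a lookup, since Live ID has already done the authenticating.

```csharp
using System.Web.Security;

// Sketch of a custom Membership provider keyed on the Live ID UserId.
// Only ValidateUser is shown; the remaining abstract MembershipProvider
// members would be implemented (or stubbed) as your app requires.
public class LiveIdMembershipProvider : MembershipProvider
{
    public override bool ValidateUser(string username, string password)
    {
        // By convention (see above), username and password are both the
        // Live ID UserId GUID string. The user is already "legit" --
        // we just confirm they have registered with our site.
        return UserStore.Lookup(username) != null;  // UserStore is a stub
    }

    // ... other required overrides omitted for brevity ...
}
```

Or, as the text notes, skip the provider entirely and just query your own Users table with the UserId string.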

What's the downside?

The downside is that you've delegated your authentication process to an outside mechanism that you don't control. What if it is very slow (it is known to be so, occasionally)? What if it simply goes down for an extended period? I've had trouble accessing my Live SkyDrive folders account for several days - all I get is an error page. Not good. Maybe the best compromise is to use Live ID as I've described, but also allow for your regular sign-in mechanism as a "fail safe". Again, not very difficult to implement.

I humbly predict that as soon as developers start to "get it" with Live ID and CardSpace integration (already available in Beta) you are going to see some major, major changes in how web sites authenticate their users. There's already a Mozilla CardSpace extension for Firefox, and whether you understand it or not, CardSpace is going to be very big. It is already supported with OpenID. Combine it with Live ID and I say you've got a real win-win for everyone.

The Live ID SDK has downloadable sample code in ASP.NET (C#), Java, Perl, PHP, Python, and Ruby. Right now, the documentation's terms of service say you can have 1 million authentications without any fees.

N.B. I've now added an article with downloadable solution code that implements all this here.


Dr. Dotnetsky's All time Grammar Pet Peeves

Is your server "loosing" Session State? Is your "Web Sight" not getting enough traffic? Why you should pay attention to your grammar, since it's a direct reflection of your professionalism on the web. The worst offenders, explained in plain English, with examples.



SEO: Does Yahoo Get It With Site Indexing? NOT!

Not only is there no God, but try getting a plumber on weekends. - Woody Allen

After a lot of pain, contacts and fixing, I'm not sure that they do. I am not going to name names or point fingers, but really, Yahoo seems to have completely lost it in the search engine wars. You put up a new website, create a sitemap, submit it to Google, Yahoo, and MSN ("live.com"), and you do everything right, including adding the new accepted "sitemap" directive to your robots file.

Google jumps right on it. Within a day or so, virtually every entry in your sitemap is going to get indexed. If you ping them via the various RPC Ping server addresses and you have engineered the ability to update and use their Webmaster tools, the googlebot doesn't seem to have any trouble at all indexing your "stuff". That's coolio with me, because over a range of websites, I've found that the Googly Bear is responsible for 90% of my search engine traffic to the site.

And - depending on what kind of a site you've got - that could represent up to 90% of your total traffic! We've got real chemistry here, I felt it!

Live.com is trying, but in Google Analytics it rarely shows up any higher than 5th place among the major search engines in terms of the percentage of total pageviews it is responsible for on my sites.

But Yahoo? D00D, they SUCK -- Totally and completely! Usually, depending on the site, they can only be found down in 7th or 8th place. I know of several instances (one of which I have been personally involved in) of webmasters going back and forth, back and forth -- with the Yahoo people, getting reports like "Well, our engineers made a mistake, and we will fix it right away" -- and yet the results are that the number of pages reported being indexed -- GOES DOWN!

SEO expert Mike Valentine claims that Google drives 74% of the traffic to a range of business web sites he manages. He claims it is because Google delivers more relevant results and isn't afraid for searchers to click on results and leave Google. Valentine says that searchers want to find what they are looking for, and easily see through transparent attempts to sell stuff to them and keep them from leaving.

But that still doesn't explain why Yahoo does such a poor job of indexing legitimate, unique content on many sites.

I don't know what these people are smoking, but either I don't understand their business model - or -- THEY SUCK.

Here is an example of real 30-day traffic (pageviews) generated solely from the major search engines, ranked by "Who's on First":

google 1,294,146
live 1,904
aol 1,816
search 1,608 (proprietary)
ask 1,509
msn 976
yahoo 93

Yikes! What am I talking, Greek? It's as plain as day, folks!

"We're working on it" just doesn't cut it with me, Pal -- not after a whole month of pain and frustration. Hey Jerry? -- you want my business? You gotta learn to index my site, man.

Yahoo? How About DMOZ?

Corruption? How about this....

Has DMOZ become corrupt? Wouldn't surprise me in the least. Read all the comments on the linked post, and weep. Bastards!

Ultimate Developer Tool List and WSSF Woes

I first met Scott Hanselman at the bus stop outside the W Hotel in Seattle. A group of devs including Rob Howard, Scott and I were waiting to go to the Mother Ship to have our brains crammed full of Geek Food. This guy is a ball of fire; I'm not sure if he ever sleeps (Scott Guthrie is another one of those). N.B. - Hanselman is walking to fight Diabetes and you can help!

Anyway, Scott has come out with the 2007 version of his "Ultimate Developer and Power Users Tool List". Once again he features my little UrlKicker tray tool that will band-aid broken copied URLs together and take you there, search a number of providers with a search phrase copied to the clipboard, and even offers a little "notes" facility that lets you make notes and view them.


One of the problems you may encounter when using GAT packages and framework-generation wizards like the WSSF (Web Services Software Factory) is that the wizards don't always allow you to do "custom stuff" and mix it in with their view of the world. For example, if you use the Service Factory Data Access option and want to generate repository classes for a custom update that runs a stored proc you have added yourself, there is already a default UpdateOne guy in there, and he tells you "can't do it 'cause there already is one". Yes, I KNOW there is one, and I want it to stay there, but this is ANOTHER ONE and I want that one too! Sorry, bud, no can do. So, to maintain "best practices purity", at least for now, what I did was to Get the guy I wanted, update the fields on the object, and then call the Save method. That's not one but TWO database calls.
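In code, the Get-then-Save workaround looks something like the following. The repository and method names are hypothetical stand-ins for whatever the Service Factory generated for your entity; the point is only the two round trips.

```csharp
// Hypothetical generated repository for a "Customer" entity.
CustomerRepository repo = new CustomerRepository("MyConnectionString");

// Call #1: fetch the object the factory's wizard knows how to get.
Customer cust = repo.GetCustomerById(customerId);

// Apply the change your custom stored proc would have made in one shot.
cust.Status = "Preferred";

// Call #2: persist via the generated Save/UpdateOne method.
repo.SaveCustomer(cust);
```

Two database calls where one stored proc would have done -- "purity" has a price.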

Where is Dr. Dotnetsky?

Our eggheadcafe.com denizen and prognosticator, Dexter Dotnetsky, appears to have disappeared from the GeekOSphere. I'm making every effort to get back in touch with Dexter, and hopefully I'll be able to get him to resurrect more of his endless, pointless, useless geekisms. In the meantime, you might want to bone up on your grammar with his "all-time pet peeves" list.


Not so Random Coin Toss: Mersenne Twister

“The generation of random numbers is too important to be left to chance.” — Robert R. Coveyou

Often when confronted with difficult decisions, I resort to a simple coin toss to "get objective". But, flipping a coin may not be the fairest way to settle disputes. About 13 years ago, statistician Persi Diaconis started to wonder if the outcome of a coin flip really is just a matter of chance. He had Harvard University engineers build him a mechanical coin flipper. Diaconis, now at Stanford University, found that if a coin is launched exactly the same way, it lands exactly the same way.

The randomness in a coin toss, it appears, is introduced by us sloppy humans. Each human-generated flip has a different height and speed, and is caught at a different angle, giving different outcomes.

But using high speed cameras and equations, Diaconis and colleagues have now found that even though humans are largely unpredictable coin flippers, there's still a bias built in: If a coin starts out heads, it ends up heads when caught more often than it does tails.

Pseudorandom number generators have a much better probability distribution:

Marsaglia and Mersenne are at 0.5000 -- in other words, a perfect random distribution of heads vs tails.

The Mersenne twister is a pseudorandom number generator developed in 1997 by Makoto Matsumoto and Takuji Nishimura that is based on a matrix linear recurrence over a finite binary field F2. It provides for fast generation of very high quality pseudorandom numbers, having been designed specifically to rectify many of the flaws found in older algorithms.

Its name derives from the fact that period length is chosen to be a Mersenne prime. There are at least two common variants of the algorithm, differing only in the size of the Mersenne primes used. The newer and more commonly used one is the Mersenne Twister MT19937, with 32-bit word length. There is also a variant with 64-bit word length, MT19937-64, which generates a different sequence.

Being a big proponent of "not reinventing the wheel" I set out to see if I could find somebody's C# implementation. It didn't take long to find Dave Loeser's article at codeproject.com, with a feature-complete implementation of the latest algorithm. A little monkeying with the initialization and seeding, and I had my coin flipper all done.

My test implementation has a web page with a button that runs the "flip" through 1 million iterations and chooses the last one as the outcome value. This is then used to show either heads or tails on a nice image of an old Indian Head Penny. It also shows you the statistics for each set of 1 million "coin tosses". You can view the live sample page here, and you can download the sample code and play with it if you like! This algorithm is fast -- it will do the 1 million iterations in about 135 milliseconds -- the page displays the elapsed time using the Stopwatch class.
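For the curious, here is a compact, textbook MT19937 core and a flip loop like the one described above. To be clear, this is my own sketch of the standard Matsumoto/Nishimura algorithm, not Dave Loeser's class -- his implementation has a different (and more complete) API.

```csharp
using System;

// Compact textbook MT19937 (32-bit Mersenne Twister) -- sketch only.
public class MersenneTwister
{
    const int N = 624, M = 397;
    readonly uint[] mt = new uint[N];
    int mti;

    public MersenneTwister(uint seed)
    {
        mt[0] = seed;
        for (mti = 1; mti < N; mti++)   // standard Knuth-style seeding
            mt[mti] = (uint)(1812433253u * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + (uint)mti);
    }

    public uint Next()
    {
        if (mti >= N)                   // regenerate the whole state block
        {
            for (int k = 0; k < N; k++)
            {
                uint t = (mt[k] & 0x80000000u) | (mt[(k + 1) % N] & 0x7fffffffu);
                mt[k] = mt[(k + M) % N] ^ (t >> 1) ^ ((t & 1u) * 0x9908b0dfu);
            }
            mti = 0;
        }
        uint y = mt[mti++];             // tempering improves equidistribution
        y ^= y >> 11;
        y ^= (y << 7) & 0x9d2c5680u;
        y ^= (y << 15) & 0xefc60000u;
        return y ^ (y >> 18);
    }
}

class CoinFlipDemo
{
    static void Main()
    {
        // One million "flips"; the last draw's low bit decides the toss.
        var rng = new MersenneTwister((uint)Environment.TickCount);
        bool heads = false;
        for (int i = 0; i < 1000000; i++)
            heads = (rng.Next() & 1u) == 1u;
        Console.WriteLine(heads ? "Heads" : "Tails");
    }
}
```

The `(t & 1u) * 0x9908b0dfu` trick is just a branch-free way of XOR-ing in the "magic" constant when the low bit is set.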


Developer Evangelists Unite!

Last Thursday, August 23, 2007, I had the pleasure of getting paid time off from my employer to attend a local MSDN Developer Evangelist event in Orlando, hosted by Russ Fustino, our Florida MS Developer Evangelist. I was both pleased and excited to see a real turnout of new faces I'd never seen before, as well as good friends and several other MVPs who made the trip, either as attendees or presenters.

Russ Fustino surprised me. Although Russ and his MS cohort Joe Healy and I have become friends because of our almost constant interaction at various developer events (Code Camps, MVP Summits, PDCs, Tech-Eds and even local developer user groups like our OneTug Orlando .NET User Group) I had never seen Russ do a full presentation. Bottom line, Russ is an expert presenter. This guy did four hours of "stuff" -- all by himself -- and I gave him an 8 out of 10.

I've done some presentations at public user group meetings, and I intend to do more in the coming months. I can tell you that it is not easy. There is a lot of stuff that has to come together, not just technology (laptops, VPC images, presentation screens and "the unknown") - but you also have to be flexible enough to adjust to real-time developer questions and other kinds of "curveballs". Russ did an excellent job, and he communicated some excellent basic concepts about LINQ, WCF, and Silverlight - all in one powerhouse presentation. My only regret is that I had to head back to the Bromberg Ranch right after the close and was unable to attend the Pub Club afterward.

So this is an endorsement of the entire Microsoft Developer Evangelist cadre and effort - Microsoft brings out an amazing array of new products and technologies -- and they truly understand that they need to bring this home to us, the local level developer community, with these MSDN and related events. If you are a developer, whether new or experienced, you can make an investment in yourself by getting involved in your local .NET Developer Group, attending or presenting at local Code Camps, and developing relationships with your local Microsoft Developer Evangelist.

They gave us some nice presents -- one of the most valuable of which was a DVD with all kinds of very current content related to the presentation. Nice!


Visual Studio 2005: Project / Properties Display Errors

The quickest way of ending a war is to lose it. -- George Orwell

I'm posting this fix because it is extremely difficult to find on the web. In certain situations, the underlying COM DLLs or type libraries that "help" the Visual Studio IDE perform its various functions get unregistered, causing any number of different symptoms. This can happen, for example, if you've installed an Orcas Beta (which does its own set of "stuff"), then uninstalled it or done a repair or upgrade on Visual Studio 2005 on the same machine.

Most of these COM server "helper" DLLs and TLBs can be found here:

C:\Program Files\Microsoft Visual Studio 8\Common7\IDE

You need to re-register the ones that are COM servers. Of course, you cannot easily tell which are and which are not, but running REGSVR32.EXE on all of them cannot hurt anything. What I did to make this easy was first to create a list of all of them. You can do this from a DOS window with the command:

dir /b *.dll >list.txt

This will create a bare directory listing and redirect it to the list.txt text file.

N.B. Thanks to the anonymous commenter who added the /b directive to make it better!

Now you have a raw list of filenames, one on each row. You can then bring this into a text editor such as EditPlus, and using the Replace function, you can replace the "Beginning of Line" with "regsvr32.exe ". Save the resultant set of commands as "reggie.bat" and you can then double click on it.

reggie.bat will attempt to register each and every DLL in the list. If one doesn't register, you get an error dialog you can dismiss, and if one registers, you get a success dialog that you can also dismiss. By semi-automating the process this way, you can take care of the whole deal in less than a minute.
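If you'd rather not do the text-editor step, a few throwaway lines of C# can build reggie.bat for you. This is just a convenience sketch; the path is the one from the example above, and you'd adjust it for your own install.

```csharp
using System.IO;

// Generate reggie.bat: one regsvr32 line per DLL in the IDE folder.
class MakeReggie
{
    static void Main()
    {
        string dir = @"C:\Program Files\Microsoft Visual Studio 8\Common7\IDE";
        using (StreamWriter sw = new StreamWriter("reggie.bat"))
        {
            foreach (string dll in Directory.GetFiles(dir, "*.dll"))
                sw.WriteLine("regsvr32.exe \"" + dll + "\"");
        }
    }
}
```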

Fire up Visual Studio 2005, choose Project - Properties, and -- voila! it's fixed.

Here is a sample list:

regsvr32.exe cmddef.dll
regsvr32.exe compluslm.dll
regsvr32.exe CrystalDecisions.VSDesigner.dll
regsvr32.exe csformatui.dll
regsvr32.exe custsat.dll
regsvr32.exe dbghelp.dll
regsvr32.exe DevCfg.dll
regsvr32.exe Dip.dll
regsvr32.exe ExtWizrd.dll
regsvr32.exe ExtWizrd7.dll
regsvr32.exe IndigoTemplateGenerator.dll
regsvr32.exe Microsoft.CompactFramework.Design.Data.dll
regsvr32.exe Microsoft.Data.ConnectionUI.Dialog.dll
regsvr32.exe Microsoft.Data.ConnectionUI.dll
regsvr32.exe Microsoft.SqlServerCe.Client.dll
regsvr32.exe Microsoft.VisualBasic.UpgradeEngineInterface.Dll
regsvr32.exe Microsoft.VisualBasic.UpgradeSnippet.Dll
regsvr32.exe Microsoft.VisualBasic.UpgradeWizard.Dll
regsvr32.exe Microsoft.VisualJ.UpgradeEngineInterface.dll
regsvr32.exe Microsoft.VisualJava.UpgradeWizard.DLL
regsvr32.exe Microsoft.VisualJSharp.PropertyPages.dll
regsvr32.exe Microsoft.VisualStudio.ConfigurationUI.dll
regsvr32.exe Microsoft.VisualStudio.Converters.Interop.dll
regsvr32.exe Microsoft.VisualStudio.CSharp.Options.dll
regsvr32.exe Microsoft.VisualStudio.CSharp.Services.Language.dll
regsvr32.exe Microsoft.VisualStudio.Data.dll
regsvr32.exe Microsoft.VisualStudio.Data.Interop.dll
regsvr32.exe Microsoft.VisualStudio.DataTools.dll
regsvr32.exe Microsoft.VisualStudio.DataTools.Interop.dll
regsvr32.exe Microsoft.VisualStudio.Debugger.dll
regsvr32.exe Microsoft.VisualStudio.DeployWizard.Dll
regsvr32.exe Microsoft.VisualStudio.ExportTemplate.dll
regsvr32.exe Microsoft.VisualStudio.HostingProcess.Utilities.dll
regsvr32.exe Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll
regsvr32.exe Microsoft.VisualStudio.ImportProjectFolderWizard.Dll
regsvr32.exe Microsoft.VisualStudio.JSharp.Options.dll
regsvr32.exe Microsoft.VisualStudio.ObjectTestBench.dll
regsvr32.exe Microsoft.VisualStudio.ProjectConverters.DLL
regsvr32.exe Microsoft.VisualStudio.QualityTools.Wizard.TestProjectWizards.dll
regsvr32.exe Microsoft.VisualStudio.TemplateWizard.dll
regsvr32.exe Microsoft.VisualStudio.ToolBoxControlInstaller.dll
regsvr32.exe Microsoft.VisualStudio.VC.DLL
regsvr32.exe Microsoft.VisualStudio.vspBatchParser.dll
regsvr32.exe Microsoft.VisualStudio.vspConnectionInfo.dll
regsvr32.exe Microsoft.VisualStudio.VSPEnumerator.dll
regsvr32.exe Microsoft.VisualStudio.vspGridControl.dll
regsvr32.exe Microsoft.VisualStudio.vspManagementUI.dll
regsvr32.exe Microsoft.VisualStudio.vspRegSvrEnum.dll
regsvr32.exe Microsoft.VisualStudio.vspServiceBrokerEnum.dll
regsvr32.exe Microsoft.VisualStudio.vspSmo.dll
regsvr32.exe Microsoft.VisualStudio.vspSmoEnum.dll
regsvr32.exe Microsoft.VisualStudio.vspSqlEnum.dll
regsvr32.exe Microsoft.VisualStudio.vspSqlTDiagM.dll
regsvr32.exe Microsoft.VisualStudio.vspWmiEnum.dll
regsvr32.exe Microsoft.VisualStudio.Web.Application.dll
regsvr32.exe Microsoft.VisualStudio.Web.dll
regsvr32.exe Microsoft.VisualStudio.XmlDesigner.dll
regsvr32.exe Microsoft.VSDesigner.Management.dll
regsvr32.exe Microsoft.WebPublisher.dll
regsvr32.exe Microsoft.WizardFramework.dll
regsvr32.exe Microsoft.WizardFrameworkVS.dll
regsvr32.exe msdis150.dll
regsvr32.exe msenc80.dll
regsvr32.exe msenv.dll
regsvr32.exe msenvmnu.dll
regsvr32.exe msobj80.dll
regsvr32.exe mspdb80.dll
regsvr32.exe mspdbcore.dll
regsvr32.exe msvb7.dll
regsvr32.exe ProjectAggregator.dll
regsvr32.exe ProjectAggregator2.dll
regsvr32.exe ProjWiz.dll
regsvr32.exe RequiredPermissions.dll
regsvr32.exe sqlceca30.dll
regsvr32.exe sqlcecompact30.dll
regsvr32.exe sqlceer30en.dll
regsvr32.exe sqlceme30.dll
regsvr32.exe sqlceoledb30.dll
regsvr32.exe sqlceqp30.dll
regsvr32.exe sqlcese30.dll
regsvr32.exe srcsrv.dll
regsvr32.exe symsrv.dll
regsvr32.exe System.Data.SqlServerCe.dll
regsvr32.exe vb7to8DL.dll
regsvr32.exe VJSFormatUI.dll
regsvr32.exe VsAssert.dll
regsvr32.exe VSConvertersPackage.dll
regsvr32.exe vslog.dll
regsvr32.exe vsmacros.dll
regsvr32.exe VSPolicy.dll
regsvr32.exe vssln.dll
regsvr32.exe vstlbinf.dll
regsvr32.exe VsWizard.dll
regsvr32.exe WinFxBrowserApplicationTemplateWizard.dll
regsvr32.exe WinFxCustomControlTemplateWizard.dll


SEO Friendly Paging with ASP.NET 2.0 Data Controls

ASP.NET has lots of "out of the box" features that make the display of data easy, including pageable GridViews and DataGrids with little or no code. But the stock paging mechanism uses JavaScript to cause the postback, and the URL of the new "page" doesn't change. This is not "SEO friendly", because the Googlebot won't index each page. Here's a fix in my recent article on eggheadcafe.com.



ASP.NET: REGEX Parse the RSS / ATOM Feed Url from a Page

2 is not equal to 3, not even for large values of 2. - Grabel's Law

I've been scraping again, I confess. Just can't resist it. One of the things I've run into when grabbing a bunch of web pages in a threadpool callback is how to determine whether the page sports the autodiscovery tags (i.e., there is a feed for the site).

Here is one way to do this with a little bit of REGEX:

using System.Text.RegularExpressions;

namespace WebLogsSearcher
{
    public static class Matcher
    {
        // Returns the href of the first RSS/ATOM autodiscovery <link> tag found.
        public static string Parse(string htmldata)
        {
            // Matches each <link ...> tag as a whole.
            Regex linkregex = new Regex(
                @"<link\s*(?:(?:\b(\w|-)+\b\s*(?:=\s*(?:""[^""]*""|'[^']*'|[^""'<> ]+)\s*)?)*)/?\s*>",
                RegexOptions.IgnoreCase | RegexOptions.ExplicitCapture);

            // Matches each name=value attribute pair within a tag.
            Regex sublinkregex = new Regex(
                @"(?<name>\b(\w|-)+\b)\s*=\s*(""(?<value>[^""]*)""|'(?<value>[^']*)'|(?<value>[^""'<> ]+)\s*)+",
                RegexOptions.IgnoreCase | RegexOptions.ExplicitCapture);

            string url = "";
            foreach (Match linkmatch in linkregex.Matches(htmldata))
            {
                bool ok = false;
                foreach (Match sublinkmatch in sublinkregex.Matches(linkmatch.Value))
                {
                    if ("type" == sublinkmatch.Groups["name"].Value.ToLower()
                        && (sublinkmatch.Groups["value"].Value == "application/atom+xml"
                            || sublinkmatch.Groups["value"].Value == "application/rss+xml"))
                    {
                        ok = true;
                    }
                    if ("href" == sublinkmatch.Groups["name"].Value.ToLower() && ok)
                    {
                        url = sublinkmatch.Groups["value"].Value;
                    }
                }
            }
            return url;
        }
    }
}


Analysis: Should You Publish Full or Partial RSS Feeds?

"After silence, that which comes nearest to expressing the inexpressible is music." -- Aldous Huxley

This is an argument that comes up frequently among those who author content of any kind and publish RSS feeds of same: which is better - to publish the full content in the description field, or just a summary with, say, a "read more" link that forces interested readers to go to the site to read all of it?

I've done some research on the subject, as well as having some actual data of my own with which to make comparisons, and I am of the opinion that publishing the full content is the best way to go.

There are basically two concerns that webmasters have that target this subject:

1. Since with a full feed RSS subscribers get to read the full blog content inside their newsreader, they would not visit the actual site - meaning lower pageviews would impact advertising revenue negatively.

2. A more ominous threat is from blog plagiarists and "Made For Adsense" sites who would steal the content for their own sites, and put their own advertising around it.

Digital Inspiration is one of several sites that offers actual comparison data on this. They had both of the above concerns, but switched to full content feeds anyway as an experiment. Surprisingly, this added more than 1000 new feed subscribers in less than a month. Moreover, the revenue generated from full feeds in the month after switching was more than the combined revenue of previous months. More -- not fewer -- ad impressions were generated -- translating into an increase in revenue. Defies logic perhaps, but not human behavior.

Other benefits of switching to full content RSS feeds included the fact that more readers started participating in the discussion (comments, etc.). That means people are staying on site longer.

My own analysis of FeedBurner stats for two different blogs or sites whose content I control indicates clearly that the one which publishes the full feed has a much higher subscriber-to-pageview ratio than the one that publishes only a partial RSS content feed. There is certainly the possibility that other factors are coming into play, but at this point the difference in the two numbers is so dramatic, I can only ascribe it to the fact that one publishes full RSS and the other only partial. Good metrics don't lie - no matter what you may think should be happening.

Some other things to think about on this subject:

  • When you publish your full feed you are making your content user friendly. When you don’t you are making me, the reader, work harder.
  • I know you want me to visit your website so you can get more ad impressions and ad clicks. If I find what you write compelling, I will visit your site to read the comments or forums or post a comment myself. People want to be enticed to your site by good content, not to be dragged there.
  • Certainly you may be worried about scrapers and people using your feed in an unscrupulous manner. But that does not mean that your readers should have to be the ones to pay the price via inconvenience. After all, they are the ones who want to read your content for the right reasons -- so don't make it harder for them.

FeedBurner, in their official blog, states, "We've seen no evidence that excerpts on their own drive higher clickthroughs." With over 866,000 feeds to draw statistics from, they certainly have the numbers to prove it.

One blogger summarized the whole scenario: "Truncated RSS feeds are like foreplay without sex. Damn frustrating."

There are also different kinds of protections you can employ to help prevent feed thievery: copyright notices linking back to the source, as well as some plugins for different kinds of blog authoring platforms. And Copyscape has some useful tools to help find and prevent blog plagiarism.

All in all, the "loss of revenue" concern has been proven invalid - actually the reverse -- more revenue -- is likely to occur. Regarding content thievery, plagiarists are going to steal content, period. Full-content RSS feeds may make it a bit easier for them to do so, but the benefits still outweigh the risks in most cases. That's my take on it so far.

Beautiful Code

Andy Oram, an editor at O'Reilly, sent me a review copy of "Beautiful Code" and I sent him a comment on it. Very interesting book. Basically what Andy and his co-editor did was to interview top programmers in a broad range of disciplines and platforms and get them to do a "chapter" on what they consider "beautiful code". The result is fascinating, and it provides some real insight into how top programmers think. You can read the comment quote and more about the book on his blog here. Incidentally, from an "SEO" perspective, this is the real definition of a "reciprocal link" in my book -- it's human edited, and meaningful - unlike mindless automated linking schemes that often do more harm than good!


ASP.NET: Loss of Session / Cookies with Frames

Imagine if every Thursday your shoes exploded if you tied them the usual way. This happens to us all the time with computers, and nobody thinks of complaining. -- Jef Raskin

Recently we had a forum question on eggheadcafe.com where the user indicated they were losing Session because of the use of FRAMESET or an IFRAME.

If you implement a FRAMESET where frames point to other Web sites on the networks of your partners or inside your network, but you use different top-level domain names, you may notice in Internet Explorer 6 that any cookies you try to set in those frames are lost. This is most frequently experienced as a loss of session state in an Active Server Pages (ASP) or ASP.NET Web application. You try to access a variable in the Session object that you expect to exist, and it is null. You can also see this problem in a FRAMEs context if your Web pages alternate between the use of Domain Name System (DNS) names and the use of Internet Protocol (IP) addresses.

The fix is very simple. Starting with Internet Explorer 6, support for the Platform for Privacy Preferences (P3P) Project was introduced. The P3P standard notes that if a FRAMESET or a parent window references another site inside a FRAME or inside a child window, the child site is considered third-party content. Internet Explorer, which uses the default privacy setting of Medium, silently rejects cookies sent from third-party sites. Consequently, a large percentage of your visitors may end up having an unhappy experience on your site.

You can add a P3P compact policy header to your child content, and you can declare that no malicious actions are performed with the data of the user. If Internet Explorer detects a satisfactory policy, then Internet Explorer permits the cookie to be set. Most developers that have hosted sites don't have the ability to access IIS and set the required header.

An easy fix is to add the header in Global.asax:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Response.AddHeader("p3p",
        "CP=\"IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT\"");
}


A simple compact policy that fulfills the needed criteria follows:

P3P: CP="CAO PSA OUR"

The above code sample shows that your site provides you access to your own contact information (CAO), that any analyzed data is only "pseudo-analyzed", which means that the data is connected to your online persona and not to your physical identity (PSA), and that your data is not supplied to any outside agencies for those agencies to use (OUR). This is sufficient to get Internet Explorer (and some other browsers) to allow the Session cookie, as well as other cookies.

This page provides more information about the P3P privacy policy for Internet Explorer.

To summarize: unsatisfactory cookies are those whose policy contains a token from both columns in the table below, where the purpose/recipient token does not carry the optional "i" (opt-in) or "o" (opt-out) attribute. As an example, a cookie with a compact policy that contains the tokens PHY and OTR is an unsatisfactory cookie, whereas a cookie with a compact policy that contains PHY and OTRo is acceptable.

Column 1 (data categories):

  • PHY - Physical location
  • ONL - Online location
  • GOV - Government ID
  • FIN - Financial information

Column 2 (purposes/recipients):

  • SAM - Same policies
  • OTR - Other recipients
  • UNR - Unknown purposes
  • PUB - Publicly available
  • IVA - Individual Analysis
  • IVD - Individual Decision
  • CON - Contact Information
  • TEL - Telephone Promotion
  • OTP - Other Purposes
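The two-column rule can be sketched in code. This is my own illustration of the rule exactly as stated above, not Internet Explorer's actual implementation; it assumes the compact policy is a space-separated token string:

```python
# Illustrative sketch of IE6's "unsatisfactory cookie" rule (not MS's code):
# a policy is unsatisfactory if it contains a token from BOTH columns, and
# the column-2 token is bare (no "i" opt-in or "o" opt-out suffix).
COLUMN1 = {"PHY", "ONL", "GOV", "FIN"}
COLUMN2 = {"SAM", "OTR", "UNR", "PUB", "IVA", "IVD", "CON", "TEL", "OTP"}

def is_unsatisfactory(compact_policy: str) -> bool:
    tokens = compact_policy.split()
    has_col1 = any(t in COLUMN1 for t in tokens)
    # Exact match means suffixed forms like "OTRo" or "IVAi" don't count.
    has_bare_col2 = any(t in COLUMN2 for t in tokens)
    return has_col1 and has_bare_col2
```

So `is_unsatisfactory("PHY OTR")` is true, while `"PHY OTRo"` and the simple `"CAO PSA OUR"` policy from earlier both pass.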

If you've been having Session or other cookie issues and you have any kind of frames on your pages, try this fix. Even if it does not solve the problem, it cannot hurt.

NOTE: This fix will NOT fix the security issue of denied access to content in an IFRAME or FRAME whose content is loaded from a different domain. That is by design, and I've railed about how hypocritical it is, since you can have a client-side script tag whose src attribute points anywhere at all, and browsers will happily slurp it up. That's why so many developers have turned to dynamically written script tags, using JSON to marshal their results back into the browser DOM in the page.


Windows Vista "Pre-SP1" Performance and Reliability Update

There's stuff out now that apparently has punched MS's Hotbutton enough times so that they are releasing the fixes now -- before Service Pack 1 comes out. We got some real chemistry here -- I felt it!

The info and download page for this is here.

This isn't a very long list of "Fixes", but the ones I list here (assuming they work!) will make me feel better:

  • When you copy or move a large file, the "estimated time remaining" takes a long time to be calculated and displayed. Zzzzzzzzzzzzzzzzzzz.... say what?
  • After you resume the computer from hibernation, it takes a long time to display the logon screen.
  • After you resume the computer from hibernation, the computer loses its default gateway address.
  • Poor memory management performance occurs.

You gotta just love that last one, "Poor memory management performance occurs" - man, if that isn't the biggest catch-all for whatever Vista ailment you've got! Did you know that over 50% of Windows Vista users over 40 suffer from "poor memory management performance"? Yup - it's a medical fact!

Sigh. Keep 'em coming, guys. And don't forget to check the back seat first.

Microsoft LINQ, WCF and Silverlight Event in Orlando Area August 23

I'll be there! http://ittyurl.net/33r6.ashx

And Let me tell you something else Department...

You know, while I am on this kick, I might as well continue venting the old spleen...

I installed Windows Server 2008 x64 over Windows Server 2003 x64 and it handled the upgrade beautifully. That much I give you credit for. However, I've got a Radeon 9250 graphics card -- about the most common card on the planet -- and what did I get? "VGA". Give me a break, folks! Even Windows Vista Ultimate 32-bit gave me the Microsoft Radeon 9250 driver. Go to 64-bit -- you get Horseshit! I run Maya 8.5 64-bit, and what do I get for it in the graphics department? BUPKIS!

Go figure.


Windows Live Folders debut

Looks like Microsoft has opened up its Windows Live Folders offering here. Nothing earth-shaking, but it has a nice clean user interface and offers 500MB of free storage. You can store documents, music, etc. in various folders; send links to friends; and make a folder public, private, or shared only with certain people. You can also control read or contribute settings on a folder for others. Standard HTTPInputFile control uploads, no fancy ActiveX or other junk to bomb-out your browser.

Here is an example link to a file I have in one of my "public" folders:


BTW -- they have an IFRAME snippet that "embeds" a Live Drive (yep - they've changed the name already) item in your page. You want to be very careful about using these - I had one in this post and then every time I'd load this UnBlog in my browser, the little booger would take control and redirect everybody to the Live Drive page. DOH!

I've been using Yahoo Briefcase for a number of years, but this looks like a good replacement and is more flexible. Yahoo is still in the dark ages with this stuff, they want you to pay for extra space and features, and the ability to share files. I don't mind looking at ads as long as they don't interfere with the features I need. So, folders people? Give me the option to automate transferring my "Stuff" from Yahoo's crappy Briefcase to Live Folders and I'll happily look at your ads instead of theirs...

BTW, they offer a "Help us improve" feedback thing. I've already offered a few suggestions:

1) Offer a Details View with datetime and filesize, just like in Windows Explorer Views.

2) Let me search for my uploaded files in the existing "folder" or in all folders. Doods! This ain't rocket science -- how 'bout it?


WSSF: Software Factories and You (or "Help, I've Fallen...")

"If it really were 1985 and you were writing Windows, you wouldn't even be doing it in C ... Windows itself was written in 8086 Assembly Language" -- Charles Petzold

First - what's a Software Factory?

In software engineering and enterprise software architecture, a software factory is defined as a software organization structured such that software projects are built in discrete "work centers". These generally represent, or specialize in, certain software disciplines such as architecture, design, construction, integration, test, maintenance, packaging, release, etc. Much like a true manufacturing facility, software factories require clearly defined product creation and management processes. By utilizing the same fundamentals as industrial manufacturing, a true Software Factory can achieve a superior level of application assembly even when assembling new or horizontal solutions. This can provide benefits in terms of economies of scale, geographic distribution, load leveling, and rigorous product and process control. Software factories have gained recent popularity as a cost-efficient way to reduce the time it takes to develop, create and/or construct software solutions.

Although the term "software factory" is used by Microsoft in association with the .NET Framework, "Software Factories" are much broader in use and application. For .NET developers, Microsoft's Guidance Packages that represent Software Factories provide Wizard-like steps and stages that help the developer design and produce the Service, Implementation, Data, Business Logic and other layers, including the generation of CRUD stored procedures against a database schema. In sum, you get a lot of code generation and logical construction of the framework, but you still need to flesh it out for a complete application.

Are Software Factories for Code Generation, or something else?

As a co-worker remarked to me quite prophetically the other day, the main purpose of these SFs is not to generate code for you (although they certainly do plenty of that) -- their main purpose is to enforce best-practices coding and architecture standards. So there is a "buy-in" concept here, where developers (and IT departments) need to decide that they will embrace the best that Patterns & Practices have to offer and "run with it". When all the developers in a group are in concert on the best-practices architecture and coding standards they use on a project, there can be enormous gains in scale and quality on new projects. If two different groups are making brake pads and they do it the same way, you know that when you need a brake pad, it will fit. It's that simple.

What does a Software Factory do for me?

The Web Service Software Factory (WSSF) (or "Service Factory" for short) provides guidance that addresses many of the challenges associated with building ASP.NET (ASMX) and Windows Communication Foundation (WCF) Web services and the components of a distributed application. These include:

  • Designing ASMX and WCF messages and service interfaces.
  • Applying exception shielding and exception handling.
  • Designing business entities in the domain model.
  • Translating messages to and from business entities.
  • Designing, building, and invoking the data access layer.
  • Validating the conformance of service implementation, configuration, and security using code analysis.
  • Planning for the migration to WCF.
  • Applying security to WCF services.
  • Applying message validation.

How much flexibility do I have?

What happens with WSSF is not "set in stone" - depending on the size and complexity of your project, you can eliminate some stuff or combine it down into fewer component projects. But the bottom line is that 95% of projects use a data store (database), and you need it to be "schema complete" before you start. Or, as a co-worker explains it, "The Database is God, and the Meaning of Life can be found in the constraints of Referential Integrity".

Personally, I used to think Life was a fountain, but I have walked through the Valley of DLL Hell long enough to fear no DLL, so I would not disagree at this point.

Future versions of the WSSF (an early drop of v3.0 is already out) will have a "memory" of what you did so that if you change your Database, it's easy to regenerate the code. But right now, no dice on that score.

When you start a WSSF Solution the Guidance Package creates the following project structure:

  • Service Interface
  • Business Logic
  • Resource Access

In addition you get a Client project and a Host project, along with optional TFS Tests. Then, you assign project "responsibilities" (which projects do what part) and you can begin to generate the various implementations and skeleton code for your DataAccess, DataContracts, BusinessEntities, ServiceContracts and so on. Everything is context-menu driven. For example, you could right-click on your BusinessEntities project, choose "Service Factory (Data Access)", and then either "Create Business Entities From Database" or "Create Data Repository Classes".

The best way to get started with WSSF is to download the latest bits and install them, and then go through one of the Hands-On Labs that are provided (for WCF services, that would be the Coho Winery bit). Hint: this stuff is NOT a no-brainer: it takes serious study. But it's the Way of the Wizard, and it's not going away, either. So, get with the program, programmer!

Extra Bonus - Fiddler 2.0!

Fiddler has been indispensable to me in debugging, capturing and even manipulating HTTP traffic. Now comes Fiddler 2.0! Snag a copy. You'll thank me later!

And One More: ".NET Command prompt here"

Copy all the lines below and save them from Notepad or another text editor with a ".reg" extension; then you can double-click the .reg file to add a right-click context menu item that opens a .NET-enabled DOS prompt on any folder in Windows Explorer:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\.net cmd]
@=".NET Command Prompt Here"

[HKEY_CLASSES_ROOT\Directory\shell\.net cmd\command]
@=hex(2):63,00,6d,00,64,00,20,00,2f,00,6b,00,20,00,22,00,43,00,3a,00,5c,00,50,\
  00,72,00,6f,00,67,00,72,00,61,00,6d,00,20,00,46,00,69,00,6c,00,65,00,73,00,\
  5c,00,4d,00,69,00,63,00,72,00,6f,00,73,00,6f,00,66,00,74,00,20,00,56,00,69,\
  00,73,00,75,00,61,00,6c,00,20,00,53,00,74,00,75,00,64,00,69,00,6f,00,20,00,\
  38,00,5c,00,43,00,6f,00,6d,00,6d,00,6f,00,6e,00,37,00,5c,00,54,00,6f,00,6f,\
  00,6c,00,73,00,5c,00,76,00,73,00,76,00,61,00,72,00,73,00,33,00,32,00,2e,00,\
  62,00,61,00,74,00,22,00,00,00
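If you're curious what that hex(2) value actually says: it's a REG_EXPAND_SZ stored as UTF-16LE bytes with a double-NUL terminator, and it decodes to cmd /k "C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\vsvars32.bat". Here's a little throwaway decoder (my own sketch, not part of the .reg trick itself) you can use on any hex(2) value from a .reg file:

```python
# Sketch: decode a .reg hex(2) value (REG_EXPAND_SZ) back to its string.
def decode_reg_expand_sz(hex_value: str) -> str:
    # hex(2) is a comma-separated list of UTF-16LE byte values.
    data = bytes(int(b, 16) for b in hex_value.split(",") if b.strip())
    # Decode and strip the trailing NUL terminator(s).
    return data.decode("utf-16-le").rstrip("\x00")

print(decode_reg_expand_sz("63,00,6d,00,64,00,00,00"))  # prints "cmd"
```

Feed it the full value from the .reg file above (with the backslash continuations and whitespace removed) and you get the vsvars32.bat command line back.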