Smartphone and Vista: "Look Ma, No USB!"

Recently I posted about how the Bluetooth connectivity to the new Windows Vista Device Center had been disabled by Microsoft in response to security concerns by corporate customers. That was partially inaccurate.

You can connect your SmartPhone to Windows Vista with Bluetooth (assuming of course you have a Bluetooth radio USB dongle on your desktop PC). The process is a bit weird, but hopefully this explanation will help:

1) First you have to enable Bluetooth on your device and make it "Discoverable".
2) On your desktop machine, in Control Panel, in "Bluetooth Devices" you need to add your device. The following pics show a successful add:

Now all you need to do is go into the Device Center main screen, click on your device, and it will say "Waiting for device to connect". You may also need to go into the Bluetooth setup on your device and make it connect. A passkey is recommended, and once you fill that in on your device, you should see "Connected" on your desktop.

At that point, you can sync, sync media files, or just browse the filesystem on your device and copy files back and forth as you would in Windows Explorer, all using the features of the Windows Mobile Device Center.

I wish there were clearer instructions on how to do this that came with the product, but so far I haven't found them.


Da Dum, Da Dum, Saddam

"At 6:10 a.m., the trapdoor swung open. He seemed to fall a good distance, but he died swiftly. After just a minute, he was not moving. His eyes still were open but he was dead.
His body stayed hanging on the rope for another nine minutes as those in attendance broke out in prayer, praising the Prophet, at the death of a dictator."

There are surely two schools of thought about the above:

1) America toppled the Dictator and empowered his people to regain their destiny, and the Iraqi people did what they consider proper justice at their own will.

2) It was the Americans' fault; America has no business in Iraq; it was America that executed Saddam.

You will often also find the number 2 people in the cadre of folks who have something in their radical, left-wing psyches that leads them to believe that the U.S. Government was somehow involved in the 9/11 disaster, in some evil conspiratorial way.

So, now it's over. Like Hitler, Tojo, Stalin. Good vs Evil. At any rate, it certainly should be cathartic for the Iraqi people, many of whom have suffered dearly at the hands of this maniac.

Where do we go from here? Personally, I have no doubt that Saddam was one of the most evil dictators of the 20th century (and a portion of the 21st), and deserved to be hanged. Banishment would have been more humane, but we must remember that in that part of the world, punishment by death is more the rule than the exception.

-- And, it was their decision.

We still have 40,000 troops in Japan and 30,000 in Korea after 60 and 50 some-odd years respectively. The U.S. of A. is going to have a presence in Iraq for a very long time. Get over it.

The American Thinker article that one commenter astutely points out sums it up:

"It is interesting to note that those complaining about Saddam's death sentence do so in the safety knowing that they will probably never have to live in the culture he helped create, nor will they ever have Saddam Hussein as a neighbor. If Saddam were allowed to live, Iraqis who suffered under his regime would not have those same assurances."


Enterprise Library 3.0 Dec '06 CTP, About 2007...

The Enterprise Library 3.0 December '06 CTP is out at CodePlex. Also, in the Community Extensions there are a number of cool add-ons and extensions that users have authored.

Even if you do not use the full EntLib in your work, this is really good "best practices" code to study, and I highly recommend it. It's a good way to get into patterns-based, building-block OOP coding.

About 2007...

2007 starts with a MONDAY and also ends on a MONDAY...
That gives it 53 Mondays - the most of any day of the week - and just the usual 52 Saturdays and Sundays.
So much for "the least working year of your life" - 2007 actually has an extra working day!
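Since the claim is easy to check, here is a throwaway C# sketch that counts how often each day of the week occurs in a given year:

```csharp
using System;

public static class YearStats
{
    // Counts occurrences of the given day of the week within one calendar year
    public static int CountDay(int year, DayOfWeek day)
    {
        int count = 0;
        for (DateTime d = new DateTime(year, 1, 1); d.Year == year; d = d.AddDays(1))
        {
            if (d.DayOfWeek == day)
                count++;
        }
        return count;
    }
}
```

For 2007, CountDay reports 53 Mondays and 52 of every other day - the year is bookended by Mondays, but the weekend count is strictly ordinary.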


OpenID - Ready for Prime Time?

(Subtitle: "Reality.sys not found. Universe halted.")

I saw a video on OpenID by Simon Willison, checked out the main site, and did manage to find a server and client implementation for ".NET" - but it was ported from a Mono project, there was no source code, and the original was written in Boo.

There seems to be a lot of activity around OpenID; one could possibly make the case that it represents what Microsoft hoped to do with Passport (oh, wait, I think it's "LiveID" now), or at least what everybody else hoped to do with it.

The concept is pretty simple: you get a URI that uniquely identifies you and is difficult to spoof, and it allows you to do single sign-on (or at least use the same credentials mechanism) at multiple sites.
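For the curious, under the original OpenID spec that identity URI resolves to an ordinary HTML page whose head advertises your identity server via link elements - a minimal sketch, with hypothetical URLs:

```html
<!-- Identity page served at http://example.com/johndoe -->
<html>
<head>
  <!-- The OpenID server that vouches for this identity -->
  <link rel="openid.server" href="http://openid.example.com/server" />
  <!-- Optional delegation to an identity hosted elsewhere -->
  <link rel="openid.delegate" href="http://johndoe.myopenid.com/" />
</head>
<body>John Doe's home page</body>
</html>
```

A consumer site fetches this page, finds the server endpoint, and redirects you there to authenticate.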

I have no particular problem with Boo - I could use SharpDevelop, which supports it - but my real question is: why isn't this authored in an industry-standard certified CLI language such as C#? It just seems like so much stuff was cobbled together; I think at this point the best thing to do is wait.

I like the concept, and I'd implement it on sites I develop, but not without a lot more infrastructure support and less FUD.

A recent survey of the Fortune 1000 websites by Port80 Software shows that Microsoft's Internet Information Services (IIS) is being used by 54.9 percent of the companies, and the ASP.NET platform is being run by 48.4 percent. You can dispute the figures as biased, but the bottom line is: if you want to gain broad acceptance of a platform or standard, you have to embrace the entire market, not just the Penguinistas. This, to me, represents a gaping hole in whatever marketing strategy the OpenID crowd may have.

This is one of the trouble areas you can find with open-source initiatives -- you can get a kind of fragmentation where individual efforts go off in different directions, and although the intentions are noble, there may not be a central authority tying everything together, so you run the risk of having a real mess on your hands. I really hope they get it sorted out. I am a big proponent of open-source software and platforms. It's just that open-source still requires leadership and proper management to succeed.

Just my $0.02 .


Boom! Boom! 5:30 PM

If you live within an hour or so of the Kennedy Space Center, this unmistakable loud double-boom will really wake you up. I think geeks who live in Central Florida are just so much more tuned in to NASA, space exploration, and what technology is all about.

I completely forgot about the Shuttle trip - I've been so engrossed in my work. But that unmistakable sound blast woke me, and within half a second, I knew that they were coming home.

Flipped on the TV and watched a perfect landing, and also noted that my stress level, even though perhaps subliminal, just went down a notch for not having to worry about the space program - at least for now.

Think about it. We are sending people up into space to do science and follow our human destiny, and meanwhile back here on earth, brother is killing brother in a mindless universe of hatred.

If this doesn't represent the two most polarized opposites of the human species, I don't know what does.


IttyUrl.net in Beta

My latest creation, IttyUrl.net, takes the "Short Url" concept farther, and is oriented toward developers.

1) Turns long Urls into "Ittified" short Urls just like the dozens of similar sites do.
2) Automatically spiders the target page, returning the url type (Feed or Page), Title, and up to 200 TAGWORDS on the page, along with any custom tagwords you provide, and indexes all.
3) Easy "bookmarklet" you can drag to the Toolbar or add to Favorites enables you to "Ittify" any page while viewing it.
4) RSS Feeds of your IttyUrls, or most recent site-wide IttyUrls.
5) Search by tags or Title keywords.
6) Tag Cloud feature.
7) Complete WebService API
8) Neat little script you can put on any page of your blog or web site (neutered here, remove the + signs):

Everything on the site is free! Comments, suggestions and criticism are welcome!

Try it! http://www.ittyurl.net


Windows Update and IIS Metabase Corruption?

I let the most recent Windows updates install on Server 2003 x64 this morning, and the first thing I noticed on reboot was that familiar but despised message box: "One or more services failed to start, have a looky at the Event Viewer", etc.

Now, I can't be sure if it was the update, I'm just mentioning this in case other poor souls want to corroborate, in addition to the fact that I know of at least one other situation where this may have occurred.

At any rate, the IIS Admin service would not start. A bit of investigation revealed that the IIS Metabase, "Metabase.xml", which resides in %windows%\system32\inetsrv\, was corrupted.

If you have enabled backups of your IIS metabase, there are three ways to restore a previous backup. You can do it from the IIS Admin snap-in (the preferred method). However, if (as in this case) the Admin service won't start, you are pretty much S.O.L. on that one. Another way is to restore your backed-up System State, but in many cases that would bring back so much additional "Bad Baggage" that you really want to avoid it.

The third way is to look in the History folder, where you'll see a bunch of files with names like "MetaBase_0000003275_0000000000.xml". All you need to do is take the latest one (assuming it's not a backup of the corrupted file, of course), copy it into the inetsrv folder, and rename it to "Metabase.xml".

Presto! You're done, and the IIS Admin service should start. Of course, you may have to reconstruct some VRoots and stuff, but it's quick and pretty much failsafe.
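If you would rather script the copy than do it by hand, here is a rough C# sketch of the same idea (the paths shown in the comments are the standard locations, but verify them on your own box - and remember the newest history file could itself be the corrupted version):

```csharp
using System;
using System.IO;

public static class MetabaseRestore
{
    // Copies the newest metabase history file over Metabase.xml.
    // Typical values: historyDir = @"C:\Windows\System32\inetsrv\History",
    //                 inetsrvDir = @"C:\Windows\System32\inetsrv"
    public static string RestoreLatest(string historyDir, string inetsrvDir)
    {
        string[] files = Directory.GetFiles(historyDir, "MetaBase_*.xml");
        if (files.Length == 0)
            throw new FileNotFoundException("No metabase history files found in " + historyDir);

        // The file names embed zero-padded version numbers, so a plain
        // string sort puts the most recent backup last
        Array.Sort(files);
        string latest = files[files.Length - 1];

        File.Copy(latest, Path.Combine(inetsrvDir, "Metabase.xml"), true);
        return latest;
    }
}
```

Stop the IIS Admin service (if it is running) before overwriting the file, then start it again afterward.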

IIS automatically maintains a record of metabase changes in history files that are saved to disk. You can also configure the number of metabase history files to save. By using the metabase history feature, you can revert the metabase through any number of changes to restore a particular configuration, or to see what has changed between revisions.

The History folder stores versioned copies of the MetaBase.xml and MBSchema.xml files. These copies can only be viewed by members of the Administrators group. The location of the History folder is systemroot\System32\Inetsrv\History.

When the in-memory metabase is written to disk, IIS checks to determine whether the number of history file pairs that are contained in the History folder exceeds the value of the MaxHistoryFiles property. If the number of history file pairs exceeds the value of the MaxHistoryFiles property, the oldest history file pair is deleted.

Now! The question becomes, how did my Metabase xml file, which has never been corrupted before, ever, get corrupted like this?


Why does Forms Authentication Fail When Migrating from ASP.NET 1.1 To 2.0?

The <machineKey> element in the Web.config file is used to control tamper-proofing and encryption of ViewState, forms authentication tickets, and role cookies.

ViewState is signed and tamper-proof by default. You can request encryption for pages that contain sensitive items in their ViewState by using the ViewStateEncryptionMode attribute. Forms authentication and role cookies are also signed and encrypted by default. You do not need to modify the default settings under normal usage scenarios, except for a few situations that developers should be aware of:

If your application is in a Web farm or if you need to share authentication tickets across applications, you need to manually generate encryption and hashing keys and specify them in the <machineKey> element, and NOT use the "autogenerate" default.

If you migrate an application from ASP.NET 1.1 to ASP.NET 2.0 and use hashed passwords, the key material used to generate your hashes WILL CHANGE. Again, the solution is to specify encryption and decryption keys in your <machineKey> element and keep them the same in your migrated application.

Judging from the number of "what's wrong" posts around this subject, it appears that Microsoft hasn't made this clear enough. There are a couple of KB's on the subject, but the problem is - most developers read KB's AFTER they have a problem, not as a "preventative measure". You give me 100 developers who have installed Visual Studio 2005, and I will show you at least 95 developers who never read the "Readme" file that accompanies the distribution! It's just human nature to RTFM as a last resort.

For ViewState, a hashed message authentication code (HMAC) is generated from the ViewState content and the hash is compared on subsequent requests.
The validation attribute of the <machineKey> controls this, and indicates which hashing algorithm to use. It defaults to SHA1, which uses the HMACSHA1 algorithm. Valid choices for hashing include SHA1 or MD5, although SHA1 is preferable because it produces a larger hash and is considered cryptographically stronger than MD5. The validationKey attribute of <machineKey> is used in conjunction with the ViewState content to produce the HMAC. If your application is installed in a Web farm, you need to change the validationKey from AutoGenerate,IsolateApps to a specific manually generated key value.

Here is an example element with manually-provided keys:
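(The bracketed values below are placeholders for illustration - substitute keys you generate yourself:)

```xml
<machineKey
    validationKey="[128-character hex string, e.g. from GetKey(128)]"
    decryptionKey="[64-character hex string, e.g. from GetKey(64)]"
    validation="SHA1"
    decryption="AES" />
```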


You can make yourself a nice little class to generate fresh keys, like so:

using System;
using System.Text;
using System.Security.Cryptography;

public static class GenerateKey {

  // Returns a random hex string of the requested length (in characters)
  public static string GetKey(int keyLength) {
    int len = 128;        // default key length
    if (keyLength > 0)
      len = keyLength;
    // Each byte becomes two hex characters
    byte[] buff = new byte[len / 2];
    RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
    rng.GetBytes(buff);   // fill with cryptographically strong random bytes
    StringBuilder sb = new StringBuilder(len);
    for (int i = 0; i < buff.Length; i++)
      sb.Append(string.Format("{0:X2}", buff[i]));
    return sb.ToString();
  }
}

Pass in the desired key length as the parameter to the GetKey method; passing zero results in the default length of 128 hex characters.


The Case of the Incredible Multiplying Email Alias

(-- or "How I Learned to be a Complete Idiot and Send Out Spam Via the 'Me Too' Effect...")

"A man does not exist until he is drunk." -- Ernest Hemingway

From the ReallyReallyDumb Department:

This one "takes the cake" for absolute stupidity! I wouldn't even know about this except for the fact that I once started work on a book for this publisher, which deal I eventually got out of, but they've never taken my email address off their "Authors" list.

This afternoon about 2 PM, I get about 20 emails from the "contacts@....." address of this publisher, who shall remain unnamed (you probably know who they are anyway). Apparently, some complete idiot set up this address to relay any mail sent to it to every author in the list (or maybe they didn't even know, which in my book still qualifies for "idiot" status), and then they decided to "test it" - even asking other people to "send it an email every 30 minutes" - Good God! So somebody sends a mail with this alias in the CC list, right? BOOM! 200 people get a copy of it, and what do they do? You got it - they hit the reply button, which in many cases includes the alias in their CC list, and instead of 200 emails, now you have 800 more. Not to mention all the bounces from people who are "Lo Babayit" (Nobody Home) or whose mail servers send the bounce notifications BACK TO THE ALIAS -- which, of course, replicates the entire process, ad nauseam.

Well, this started about 1 PM, and it's finally starting to die down about 5 PM. I've seen this a couple of times before, where the "Me too!" effect kicks in, creating absolute recursive havoc with news and mail servers (not to mention any poor slob who happens to have gotten their tail caught in the loop, like me).

N.B. -- Oops, spoke too soon. It's Friday morning, and there are another 60 or so from their own Postmaster address. Boy, did they create a mess!

Go figure. Lawn Chair Larry from Los Angeles wasn't even this dumb - at least he took along some sandwiches, a six pack of beer and a BB gun!


ASP.NET 2.0 vs PHP -- or PHP.NET?

I finished up my Multisearch Windows Vista Sidebar Gadget, which lets you choose from multiple search providers and get your RSS search results in your Sidebar gadget. I chose the Google Blog Search provider, searched on ASP.NET, and came up with a post on Digg.com whose target is:

http://www.modernlifeisrubbish.co.uk/article/why-not-dot-net - where this fellow basically trashes ASP.NET in favor of PHP. Most of his reasoning in favor of PHP is really just personal preference; the reasons given are mostly inaccurate or biased by a lack of knowledge about ASP.NET and the .NET platform.

There was one comment on the Digg posting, however, that I found revealing:

"You should check out Phalanger.


It integrates PHP with ASP.NET, pre-compiling your PHP into MSIL, the same way eAccelerator and others pre-compile scripts. The difference is, it's done by ASP.NET, and your scripts run on IIS6. With your scripts running on ASP.NET, they're running managed which protects you from a great deal of security issues.

As for performance, I suggest you give it a try, because our tests show PHP running under ASP.NET under IIS6 under Windows 2003 completely destroys the same hardware running PHP with eAccelerator under Apache Linux 2.6 or FreeBSD. Sounds hard to believe, but we're a shop full of linux/unix guys and all our stuff is currently on Apache and we gave ASP.NET a chance at running our PHP and the results were stunning.

Phalanger gives you the ability to use the .NET framework in your PHP scripts, and to use the code behind model of ASP.NET, but you don't have to use that stuff if you don't want to! It is 100% compatible with existing classic style PHP scripts, and PHP modules. (N.B. - it is - tried it, and it was a 100% total "no brainer")

I'm no microsoft fanboi, but if you're a real developer you shouldn't be a unix fanboi either, you should test things out and see what works best for you. I highly recommend you test Phalanger with your existing PHP codebase."

Well! I had already looked at Phalanger, but I didn't realize it was that far along, so I downloaded and installed it (including the Visual Studio 2005 IDE Integration support).

Let me tell you something: PHP is extremely popular, but it's still an interpreted language, like VBScript with classic ASP. When you combine the ease of use of PHP with compiled .NET Framework support, you have got a winning combo. I tried a couple of PHP web apps, and I was duly impressed. There is a checkbox in the installer that basically asks if you want to support compiling classic PHP scripts; I checked it, and it works - right out of the box!

The included PHP.NET compiler will output standard .NET class library assemblies that can be used by any .NET application. Think about it - this is HUGE!

What does Phalanger do?

My extract from the features section of the help PDF:

Phalanger enables developers to painlessly deploy and run existing PHP code on an ASP.NET web server and develop cross-platform extensions to such code taking profit from the best from both sides.

Compatible with PHP 5.1 as well as with proposed features from the upcoming PHP 6, the object model in Phalanger enables you to combine PHP objects with .NET ones. It is possible to use and extend any .NET class in PHP, and also to consume classes written in PHP from an arbitrary .NET language.

From another point of view, Phalanger provides .NET programmers with the giant amount of practical PHP functions and data structures - many of them reimplemented in the managed environment of the .NET Framework. The whole library of PHP functions and classes (including those implemented in the PHP extensions) is accessible to a .NET programmer together with type information.

The compilation of PHP scripts gives yet more power to the existing PHP web applications inside the Phalanger environment. All the static (run-time immutable) code in the scripts gets parsed and compiled only once and all following accesses to a page benefit from the unleashed execution of the native compilation of the script. Yet the usage of Phalanger is not limited to web applications. The compiler supports output of standalone executables or dynamic link libraries enabling you to create managed PHP console or windows applications, or library modules reusable from any other .NET Framework application.

Personally, I look at this from the standpoint that there is a huge codebase of high quality PHP stuff that I may wish to use without having to run PHP in interpreted (or even "eAccelerator") mode. With Phalanger, I can run this great codebase in ASP.NET with no hiccups at all and get the enormous perf boost that the ASP.NET model provides. And, I can host it in IIS just like any ASP.NET application.

If this isn't PHP.NET, I don't know what is.

In addition I will leave you with this snippet from Scott Guthrie's ("Mr. ASP.NET") blog from March (paraphrased for simplicity):

"Myspace had (in March) 65 million registered subscribers, and were registering 260,000 new users each day. According to the Media Metrix report (an independent analyst firm) MySpace.com had more page views in February than all of the MSN and Google sites combined. Umm!

They re-built and re-deployed their site on ASP.NET 2.0 shortly after it was shipped last year. Some statistics:

MySpace.com is now processing 1.5 Billion page views per day
MySpace.com handles 2.3 million concurrent users during the day
MySpace.com's average server CPU utilization went from 85% to 27% after moving (from another technology) to ASP.NET 2.0

The top-6 domains in terms of page-views in February according to Media Metrix were: 1) Yahoo, 2) MySpace, 3) MSN, 4) Ebay, 5) Google, and 6) Hotmail.

4 of the top 6 sites (MySpace, MSN, Ebay and Hotmail) run on IIS and Windows"

You PHP folks? Read up on it d00ds, WAKE UP and put that convenient "anti-Microsoft" stance aside for a bit, and open your mind. You can have your PHPCake and eat it, too... It's not just about scripting: it's about performance too.

It ain't null until I SAY it's null!

The fiasco around System.DBNull, "null", and databases kind of reminds me of the "Hanes Lady" commercial where she is pulling the elastic of the briefs (now that one was Marketing 101 exemplified -- how many TV ads do you really remember like that one?).

The typical forum or newsgroup post goes:

"When I insert a blank value into a SQL Server database for a DateTime column, my database is inserting 1/1/1900 even if I assign Null to the variable in my application."

When you are inserting data into a database, the ADO.NET data providers and your database may distinguish between a null object and an uninitialized value for a specific data type. In this case, inserting a null into a DateTime column causes the database to seed the field with the default initialized value - 1/1/1900. What you really want is to tell the database that the field in question should remain uninitialized. For that there is the System.DBNull class, and you use its Value property, e.g. "System.DBNull.Value".

To insert a row into your database, and maintain the uninitialized state of the DateTime fields you use code like this:

// Assumes conn is an open SqlConnection
SqlCommand cmd = new SqlCommand();
cmd.Connection = conn;
cmd.CommandText = "INSERT INTO USERS (Name, RegisterDate, CancelDate) VALUES (@Name, @RegisterDate, @CancelDate)";
cmd.Parameters.AddWithValue("@Name", "FeeFiFoFum");
cmd.Parameters.AddWithValue("@RegisterDate", DateTime.Now);
// Use System.DBNull.Value to keep the CancelDate field uninitialized
cmd.Parameters.AddWithValue("@CancelDate", System.DBNull.Value);
cmd.ExecuteNonQuery();


I've seen a number of approaches to this, but one engineered by Adam Anderson is clearly the best. In .NET 2.0, with generics, we can have one function for all the data types:

public static class CastDBNull {
  // Returns the value cast to T, or defaultValue when the column holds DBNull
  public static T To<T>(object value, T defaultValue) {
    return (value != DBNull.Value) ? (T)value : defaultValue;
  }
}

To use this:

// Passing a string default value makes T resolve to string; you could pass
// either String.Empty or (string)null, depending on what you want for the default:
string s = CastDBNull.To(dr[0], String.Empty);
// Now with the same class and method, an int default makes T resolve to int:
int i = CastDBNull.To(dr[0], 0);

Some programmers prefer to use nullable types to handle DBNull, the reasoning being that using null to represent DBNull is better than using a "magic number" such as 0 to indicate DBNull.

However, there are times when you can't use nullable types, because you need to know the difference between having "no data" and null data. If you select a field from a row matching certain criteria, there are three possible outcomes: no matching row at all (your variable stays uninitialized/null), a matching row where the field's value is DBNull, or a matching row with a real value. With three distinct kinds of results, nullable types are difficult to "make fit".
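To make the three states concrete, here is a tiny illustrative helper (the FieldState enum and Classify method are my own names for this sketch, not a framework API):

```csharp
using System;

public enum FieldState { NoRow, DbNull, HasValue }

public static class DbNullStates
{
    // Distinguishes "no matching row at all" from "row found, field is DBNull"
    // from "row found, field has a real value" - three outcomes that a single
    // nullable type collapses into two.
    public static FieldState Classify(bool rowFound, object fieldValue)
    {
        if (!rowFound)
            return FieldState.NoRow;
        return (fieldValue == DBNull.Value) ? FieldState.DbNull : FieldState.HasValue;
    }
}
```

Classify(false, null) yields NoRow, Classify(true, DBNull.Value) yields DbNull, and Classify(true, someDate) yields HasValue - a DateTime? would have to represent two of those states with the same null.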


Getting Dugg: An exercise in audience understanding

This past weekend I finished putting together my "programmers" version of the Myers-Briggs (MBTI) test for online consumption. Actually, the "real" MBTI can only be administered by a licensed practitioner, and it's trademarked. However, over the years, Keirsey and a number of others have refined their own well-researched versions of this test, and those are not trademarked.

Consequently, with a little study, and common sense on figuring out how the test is actually scored, it is possible to put together a highly accurate version of the MBTI. So I posted this, along with a nice chart that links to the Wikipedia page for each of the 16 personality types, as well as 16 detail pages with data accumulated from a number of sources, over on eggheadcafe.com on Sunday afternoon.

I also submitted it to Digg, mostly because I had a "hunch" that it would fit pretty well with the Digg geek herd mentality.

Well! Within 5 minutes, it already had 10 Diggs, and as of this morning (Monday) it had some 1400 Diggs and had made the front page, and had some 300 comments, most of which were favorable. I've never gotten any of my "Stuff" dugg more than six or seven times, so this was an epiphany of sorts.

They say it's the title. Of course "10 Best ways...", "Amazing..." and similar buzz phrases can get you dugg, but it really takes the herd mentality to make it to the Digg front page (and if you do, you'd better have your web server and your pages running lean and mean, or you won't be there long).

My catchy title starts with "Are you a programmer?", and it was posted to the Programming section. I guess that combination of title and the natural curiosity of being able to take a free personality test online must have hit the right "DiggNerve".

Anyway, Analytics reported some 27,000 page views just for Sunday - a day when most people are watching football, and they certainly aren't at work.

On Monday, Dec 11, the "article page" had garnered 55,687 page views for the day and was responsible for approximately 56 percent of the site-wide Adsense revenue for Monday - all from the single 300X250 ad that appears at the top of the article. But the residual effect should be good as well, since many of the visitors were brand new and will come back repeatedly to visit our site for more good content. The only real downside is that the ISP bitched about the extra bandwidth!


Standards, Schmandards! - OpenXml vs Open Document Format

Microsoft's Office OpenXML has been approved as an Ecma standard and will now also be submitted for consideration as an ISO international standard. Ecma International announced the approval of the new standard on Dec. 6 following a meeting of its general assembly. Ecma will also begin the fast track process for adoption of the Office OpenXML formats as an ISO international standard in January 2007.

The technical committee, which includes representatives from Apple, Barclays Capital, BP, The British Library, Essilor, Microsoft, NextPage, Novell, Statoil, Toshiba and the U.S. Library of Congress, also boasts the membership of Intel, which recently joined.

Naturally, criticism of the new OpenXML standard was quick, particularly from those who support the competing OpenDocument Format, which has already been approved as an ISO standard. For example:

Bob Sutor, the vice president of Open Source and Standards at IBM, said in a blog posting that IBM "voted no today in ECMA on approval for Microsoft's OpenXML spec. I think we have made it clear in the last few months why we think the OpenDocument Format ISO standard is vastly superior to the OpenXML spec," he said.

In actual fact, IBM's "no" vote was THE ONLY "no" vote.

But Ecma clearly disagrees with their view, saying in a statement that an increasing number of organizations around the world are interested in achieving document processing interoperability and creating digital archives using open formats.

"The Office OpenXML (OpenXML) formats provide an international open standard for word-processing documents, presentations and spreadsheets that can be freely implemented across multiple applications and platforms, both today and in the future," it said.

Vendors, including Corel, Microsoft and Novell, have already announced implementations of the OpenXML standard in their applications, such as WordPerfect, Open Office and Microsoft Office 2007.

Wikipedia has a nice comparison of the two, er, "standards", along with two sets of "Advantages of XXX" lists here.

What do I think? I think it's almost over, and Microsoft won. One thing I have learned in my short happy career as a programmer and software architect is that just because something is open-source and doesn't have the name Microsoft in it anywhere doesn't always mean it's the best thing for the broadest population of users. If you are Microsoft, they are gonna bash you even when you try to do the right thing and support standards that will work for the "greatest good". In particular, I find Mr. Sutor's pronouncements somewhat two-faced. Microsoft voted "yes" for ODF at ISO, and offered no resistance. But Sutor / IBM freely admit that it will take 3 revisions just to get ODF to be OpenXML compatible. But don't take my word for it - read the Wikipedia comparisons and come to your own conclusions.

Oh, well... the nice thing about standards is that you get to pick the one you like the best...

Ready for Your Spanish-American War Tax Refund?

Er, "yippee" -- the IRS is going to return money collected from our phone bills that was supposed to pay for the Spanish-American War.

The Federal Excise Tax, which was enacted in 1898, amounts to about $3 per month on, say, an average $100/month phone bill. Heavy phone users might pay $100 or more per year. Yep, this was actually to pay for the Spanish-American War, and we've all been paying for "it" ever since 1898.

Fortunately, once this ludicrous tax started getting some legs in the press, no one could really defend it and the tax has indeed finally come to an end. We're even being offered refunds:

You are to claim the refund on the 2006 tax form that you file in 2007.

You can opt for a standard refund of $30 (if you have one exemption), $40 (if you have two), $50 (if you have three) or $60 (if you have more). This option requires no documentation from you.

If you have (or want to go through the trouble of procuring) your telephone bill statements from March 2003 to July 2006, you can get a refund based on amounts you were actually charged. In most cases, this can amount to a lot more than the standard refund -- perhaps as much as $100 to $300 for many of us. You need to fill out IRS Form 8913 for this.

Let's see, I'll get out my "Fawlty Math" calculator. 2006 minus 1898 is 108 years, times $36 a year, times 200 million Americans.... equals a $50 refund. Yep, works out PERFECTLY!

Folks, this completely, utterly idiotic example is just the tip of the iceberg. I have never liked US taxes. I've never gotten along well with Uncle Sam, and I don't like him any better now. Trust me, he can make your life miserable if you aren't careful.

The reason we have this incredible waste is that Americans put their elected representatives (Republican, Democrat, and Lieberman) into Congress and then go promptly to sleep, completely oblivious to the greedy abuse of power that ensues. Congress taxes, Congress spends, and it spends more than it gets, but who cares! It's kinda like:

"And it's 1, 2, 3 what are we taxin' for?
Who cares, I don't give a damn
Next stop is la-la land."

What happened to the "Contract With America"? The Republicans conveniently forgot about it, and that's why the stupid morons got booted out in the last election. No, I think the Iraq war midterm election thing was just a cover for plain bad governance. The American People are smart enough to realize we are going to be in Iraq for a very long time. Look, we still have 40,000 troops in Japan, and 30,000 in Korea after 60 and 50 years respectively. But, wars as a percentage of GDP are small compared to the reallyreallydumb stuff that Congress spends money on.

Democrats want things to be "more fair" by increasing the percentage of tax the rich pay, and that's total BS. If I make $2 million a year and you make $200 thousand, and both our tax rates are 20 percent, then I ALREADY pay ten times as much in taxes as you do, and there is no reason to increase my tax rate to 40%. Close the loopholes the rich use, and you can even lower all tax rates across the board.

What we NEED to do is lower the tax rates for EVERYBODY, WAKE UP, and stop letting Congress throw money down the fyookin' TOILET! Taxes SUCK. We don't need them, at least not the way it's set up now.

Maybe we could get the Chairman of the Federal Lubrication Board, Alan Greasepan, to fix things up?


More Windows Vista Goodness for Developers

There is a page on MSDN with details of, and links to:

Visual Studio on Windows Vista FAQ

Visual Studio on Windows Vista Issue List - normal permissions

Visual Studio on Windows Vista Issue List - Elevated Permissions

Visual Studio.Net 2003 on Windows Vista

Also, you can use the Visual Studio and .NET Framework bug reporting site both to submit issues, and to look for issues already submitted, some of which may have fixes or workarounds.

PC World has a "Windows Vista FAQ" they call the "Ultimate Guide". It's a bit more consumer-ish, but worth a look.

And, "Windows Vista Security News" has some worthwhile stuff to look at.

In Other News...

Victoria's Secret, in an unusual environmentally sensitive move, announced they will be cutting down on the amount of paper used in their racy catalogs. They did not specify how this would be accomplished. Hmmm... skinnier models, maybe?


What's in a [domain] Name?

Domain names are - well - important as the lingua franca of the internet, so a quick review of some selected top-level domains may be appropriate.

The domain you choose has more ramifications than just search engine performance. The problem with strange TLDs is that:

  • They can confuse visitors.
  • They are almost always more difficult to remember than .com, unless they spell something or sound like a word or phrase.
  • They can tend to make your organization or site appear less reputable than it actually is.

Here are some choices, and my comments:

.com
The ubiquitous "everything bagel", .com is the TLD you want - assuming, of course, the one you want isn't already taken! Some of the hardest .com domains to find are "short" ones. Try to find a .com domain like "whiz". You can't. Even the .net versions are already taken!

.net / .org
These two other non-country-specific TLDs are good second choices, if you can get one. But they lack the familiarity of a .com, and for some sites that's unacceptable. However, for certain applications or audiences a good easy-to-remember .net or .org address can be cool.

.us
This TLD was made popular by sites like del.icio.us. The .us TLD is great for those targeting a primarily US-based market. But unless, like del.icio.us, the full site url actually spells a word, it probably has little appeal, because it simply won't roll off the tongue as "familiar".

.biz, .info
Early adoption by spammers and other less reputable operators have tarnished the .biz and .info domains.

.name
This TLD is intended for individual use, but .name has never really caught on, and so it just doesn't "cut it". I could never imagine a "peterbromberg.name". Could you? It doesn't even sound like "The Internet".

The others

There are hundreds of other country-specific and industry-specific domains available, but most lack the recognition required, so for a global site it's usually safer to stick to a generic TLD. A few country codes have gained credence in niche areas, like the Federated States of Micronesia (.fm) for music sites, Tuvalu (.tv) for TV sites, and Tonga (.to) as in "kickme.to".

Stick to .com if you can. The .net TLD is a good second choice. If you're interested in the official list, here it is.


Windows Vista Defrag? NOT!

One of the so-called "nice" new features of Windows Vista is the "rebuilt" defrag engine. Problem is, I don't like it. Why? I like to SEE what's being defragged, and I like to SEE a visual representation of what my filesystem looks like.

The main reason for this is that I can choose different defrag methods (such as with O&O Defrag) and get better file ordering. Also, when I get to see what's happening, it helps me to identify files I know I don't need and I can delete them, and do a follow-up defrag.

Unfortunately, O&O doesn't have anything out for Vista yet (yes I know you can Orca the MSI, but I ain't doing that!). Diskeeper isn't ready for Vista either.

Frankly, I don't know what these people have been doing all this time. They knew Windows Vista was coming out, and the defrag API has been readily available for them to get their products ready. What, were they waiting for Godot?

At any rate, Raxco, which makes PerfectDisk, has a free 150 day trial of their Vista-compatible beta, and it works great. Also, Auslogic has a free defrag that is Vista compatible, but it doesn't offer control of defrag method or offer boot-time defragmentation.

Suggestion: Delete your Paging file before defragging (requires a reboot, and don't forget to press the "SET" button). You can then do the boot-time defrag, which takes care of your MFT and System files, and have a nice fast hard drive. Then, go back to Control Panel / System and restore your Paging file.

Always run the Disk Cleanup Wizard before defragging, and get rid of unwanted .tmp and .bak files.

Happy Vista-ing!


HTTP referer spoofing, cookies, User Agent strings

There was a post on one of the groups recently by a developer who was making a number of WebRequests for various pages, claiming that one of them would strangely fail.

Yet he stated that if he pasted the same URL into his browser, the page came up just fine.

There are several things that could come into play here with various sites:

1) Many sites will reject a request whose User Agent string doesn't match one of a particular set (here are some samples, if your memory is rusty), so you can set the UserAgent property on the WebRequest. I've even seen some wise-asses who detect Internet Exploder and give you a nasty message about how immoral you are, and how you should go download Firefox to become a real person, and how dare you try to view my site with... etc.

(Listen, Pal: I already figured out you are a Webtard, so I'm not going to bother to fire up my copy of Firefox to see your page, which I already know is worthless. Besides, your little shenanigan is currently restricting you to only about 15% of your potential audience. Go take a good course in Marketing.)
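As a quick sketch of the User Agent point above (the URL and the agent string here are just placeholders - substitute whatever the target site expects), setting the property on an HttpWebRequest looks like this:

```csharp
using System;
using System.Net;

class UserAgentDemo
{
    static void Main()
    {
        // Hypothetical target URL -- substitute the page you actually need.
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://www.example.com/");

        // Present a mainstream-browser User Agent string; some sites reject
        // requests whose agent string they don't recognize.
        req.UserAgent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)";

        Console.WriteLine(req.UserAgent);
    }
}
```

The request will now carry that User-Agent header when you call GetResponse(), so the server sees what looks like an ordinary IE7 browser.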

2) Many times a site is looking for a cookie. Perhaps they set it when you first visit, and subsequent pages look for it. So if you make a request for a "Subsequent page" without the required cookie, you get "Bupkis". Some cookie container code:

            CookieContainer myContainer = new CookieContainer();

            // following line adds a cookie to the container, which will be sent for all urls on the domain myDomainstr
            myContainer.Add(new Cookie("name", "value", "/", myDomainstr));

            HttpWebRequest request1 = (HttpWebRequest)WebRequest.Create(httpUrlString);
            request1.CookieContainer = myContainer; // use this same container for all requests
            HttpWebResponse response1 = (HttpWebResponse)request1.GetResponse(); // you can check cookies on response1.Cookies

            // next request coming --
            // all cookies received on request1 are automatically included in this request from the same CookieContainer
            HttpWebRequest request2 = (HttpWebRequest)WebRequest.Create(httpUrlString2);
            request2.CookieContainer = myContainer;
            HttpWebResponse response2 = (HttpWebResponse)request2.GetResponse();

3) Another common issue is redirects. Here are a couple of settings you can use:

webrequest.AllowAutoRedirect = [true|false];
webrequest.MaximumAutomaticRedirections = 30;

You can also capture the redirect url (set AllowAutoRedirect to false first, or the response you examine will already be the final page):

        public virtual string GetRedirectURL(HttpWebResponse webresponse, ref string Cookie)
        {
            string uri = "";
            WebHeaderCollection headers = webresponse.Headers;

            if ((webresponse.StatusCode == HttpStatusCode.Found) ||
                (webresponse.StatusCode == HttpStatusCode.Redirect) ||
                (webresponse.StatusCode == HttpStatusCode.Moved) ||
                (webresponse.StatusCode == HttpStatusCode.MovedPermanently))
            {
                // Get the redirected uri from the Location header
                uri = headers["Location"];
                uri = uri.Trim();
            }

            // Check for any cookies
            if (headers["Set-Cookie"] != null)
            {
                Cookie = headers["Set-Cookie"];
            }

            return uri;
        } // End method

4) Another common technique (this one is real popular with "those" sites) is to check the HTTP Referer, which is available in ASP.NET via the Request.UrlReferrer property. They do this as a sort of "poor man's authentication": the idea is that you got in at some "gateway" page with your credentials, so they figure you could only be requesting one of their pages from within the site you "got into", and they look for one or more known referers. Here's some code to handle this in a WebRequest:

        public string GetUrl(string url, string referer)
        {
            // assumes a fully-qualified "http://" url
            HttpWebResponse webResp = null;
            HttpWebRequest HTTPGetRequest = null;
            StreamReader sr = null;
            string myString = String.Empty;

            HTTPGetRequest = (HttpWebRequest)WebRequest.Create(url);
            HTTPGetRequest.KeepAlive = false;
            HTTPGetRequest.Referer = referer; // spoof the referer here

            webResp = (HttpWebResponse)HTTPGetRequest.GetResponse();
            sr = new StreamReader(webResp.GetResponseStream(), Encoding.UTF8);
            myString = sr.ReadToEnd();
            sr.Close();
            webResp.Close();

            return myString;
        }


Happy Spoofing!

Inspector Rae and the case of the Incredible Shrinking DIV

This one popped up on the asp.net newsgroup, and I think it's good for a chuckle!