New Bridges, a New Internet, and Software Projects

There’s no question we need new bridges and road infrastructure in the United States. But how about a new Internet?

There is a massive project to redesign and rebuild the Internet from scratch that is inching along with $12 million in National Science Foundation funding and donations of up to 40 gigabits of network capacity by the Internet2 organization and National LambdaRail. That's enough bandwidth to run 30 high-quality movies into your house -- simultaneously! Construction on "GENI" could start in about five years and cost $350 million - if Congress approves the funding.

You could consider emailing your Senator or Congressperson to tell them you think it’s a good idea.

On Software Projects

Have you ever been involved in a software development project that is simply, well -- dysfunctional?  I have, and it is not fun, especially when time starts to run out and then people start looking for somebody to blame. There are certain ingredients that I've learned need to be in every software project for it to succeed. I call it “RPARL” – Requirements, Plan, Architecture, Resources, Leadership:

  1. A complete description of the Requirements of the project.
  2. A project Plan (yep - forget about those index cards, guys) with a realistic timeline.
  3. UML / Sequence diagrams for Architecture - or at least, "some form" of architecture (to build a building, you need blueprints, remember?).
  4. Resources (people) who understand what they are working on and what their part is in it.
  5. Leadership. A project manager who understands the effort and is not in a "State of Denial". Somebody who knows how to steer the ship!

Oh, there is more, but the above are the main ones I can think of.

The software development process is a science; it deals with facts, figures and arithmetic. If you see that the ingredients aren’t there, get out if you possibly can. I’ve been developing software on my own or in groups of developers for 15+ years now, and it never ceases to amaze me how many companies – both big and small – can’t seem to “get” the importance of these ingredients for success in their software projects.


Ittyurl.net now sports nearly 450 Silverlight Links!


OMG, Silverlight! Asynchronous is Evil! (or, Call me back when you got it)

Now we sit through Shakespeare in order to recognize the quotations. - Orson Welles

I just have to shake my head at this absolutely moronic thread on the Silverlight Forums promoting a "petition" to bring back synchronous webrequests in Silverlight. Really, it has all the elements of the old VB6 flame wars…

N.B. 8/18/2008: It looks like the moderators finally took the thread down; it was so full of hate posts, name-calling and ad-hominem attacks, it's about time!

They just don't understand: Have you ever had your browser freeze up because a page request (or even a subrequest within the page, such as one for an ad or an image) doesn't come back right away? Your browser turns white, your whole damned desktop is frozen, and you may need to kill IEXPLORE.EXE from Task Manager just to free up your system (in rare cases you may actually have to shut down and reboot). This is what happens when a developer who doesn't know how to write asynchronous code issues a blocking, sync method call and then “something bad” happens. Remember - HTTP is NOT a reliable protocol!

Since Silverlight's UI runs on a single thread and almost all network code takes time to return (if it ever does return), it would be unacceptable for the Silverlight plug-in to just block waiting on an HTTP or Socket call -- thus blocking the UI of the host (your web browser). So the Silverlight team decided to use only the asynchronous model for all network-related calls – whether WCF or WebService proxy calls, WebRequest, or WebClient. They simply implemented the same NPAPI plug-in architecture that all plug-ins must use. It’s sad, because a couple of “holier than thou” posters on that forum thread were quoting my statement from the paragraph above and telling me that I was stupid! Not stupid at all – the whole purpose of the plug-in architecture permitting only async requests is so that plug-ins cannot lock up the browser!

The bottom line is this: if Microsoft were to allow every petition-signing Tom, Dick and Harry blowhard / hotshot developer to make synchronous calls because they are too lazy and crybaby to learn how to do it better, there would DEFINITELY be a lot of very unhappy people with frozen browsers out there in SilverLand -- and guess who would get blamed? Microsoft!

I see that many are trying tricks like using ManualResetEvents and other threading primitives to try and “simulate” synchronous behavior. My advice? Don’t even bother to try – it’s so much easier and more professional to just learn how to write -- and THINK -- async. Your typical .NET developer is just so used to making a synchronous method call without ever taking the time to think about what will happen if it doesn’t come back -- it’s the Zen of the sound of a single hand clapping…

Have a look at the WebClient or HttpWebRequest class and see: there is no way to synchronously download content. You can write code with LINQ and inline delegates or lambdas that “looks like” it’s synchronous, but it isn’t. You can try to use ManualResetEvents and other threading primitives to “trick” Silverlight into faking a sync method call – but none of it will work. Of course, the majority of developers with 3 or even 5 years of experience with .NET have never even written an asynchronous method call -- and now they’ll have no choice but to learn how. And that -- is a good thing. Silverlight needs to be truly cross-browser. In order to do that, it must implement the standard NPAPI plug-in architecture, which dictates that ONLY async methods can be used. At least -- for now. Work with what you've got - don't flame.
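To make the "think async" point concrete, here's a minimal sketch of the callback pattern -- in Python rather than C#, with `fetch_async` standing in for a real network call (it's a hypothetical helper, not any real API). The idea is the same as Silverlight's model: the call returns immediately, and every line of code that depends on the response lives inside the completion handler.

```python
import threading

def fetch_async(url, callback):
    """Start a 'download' on a background thread and hand the result to
    `callback` when it arrives; the calling (UI) thread never blocks.
    The fake payload below stands in for a real network round trip."""
    def worker():
        result = "payload from " + url   # pretend this took 500 ms
        callback(result)
    threading.Thread(target=worker).start()

# "Do it in the callback": the handler owns all response-dependent work.
done = threading.Event()
received = []

def on_completed(body):
    received.append(body)
    done.set()

fetch_async("http://example.com/data", on_completed)
# The caller is free to keep servicing the UI right here...
done.wait(timeout=5)
print(received[0])
```

Notice there is no line after the call site that reads the response directly -- that mental shift, not any particular class, is what the forum petitioners are resisting.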

Thanks to the Silverlight dev team for doing developers a big favor. I’m for “doing it in the callback”.


Silverlight: Recent Updates Developers Should Know About

If you got the original Silverlight 2 Beta 2 developer bits which became available around 9PM on the very last Friday of Tech-Ed 2008, you may have noticed that you just recently got a refresh in the form of KB955011 from Windows Update.

It turns out that this is supposedly included in a newer version of the silverlight_chainer.exe consolidated installer, which was freshly updated on 7/11/2008. That is to say, the version up there now is NEWER than what you got right after Tech-Ed.

I suspect there are some additional fixes in the Tools portion of this, and so I recommend downloading it and reinstalling.  In fact, when I did so, I was pointed to a .vbs script whose header comments are self-descriptive:

' Installation verification script for Microsoft Visual Studio 2008
' This script checks for KB949325 and reinstalls all Advertised features
' At no time should there be Advertised features for Microsoft Visual Studio 2008
' Any features in an Advertised state are an [sic] indications of a patching error

Apparently they feel that if any features of Visual Studio 2008 are in an “Advertised” state, this is an indication of a patching error, and this script fixes it.

Then, you should be able to reinstall the updated Silverlight_Chainer.exe (dated 7/11/2008) and be assured that you have the latest and greatest of everything Silverlight-ey in the world.


I figured I would try out the free Silverlight Streaming service, so here's a quick Silverlight Streaming video I took of last year's Rolex 24 race at Daytona with my Audiovox Smartphone camera, combined with some photos and some background music near the end:



Guy Burstein has a nice 1 page tutorial on how to use the Silverlight Streaming service - it is super-simple!


IttyUrl.Net now has over 500 user-contributed, social-tagged Silverlight links!


The Lessons of History – and Moral Hazard

"Olim habeas eorum pecuniam, numquam eam reddis: prima regula quaesitus"
(Once you have their money, you never give it back: the 1st rule of acquisition)

The U.S. Savings and Loan crisis of the 1980s and 1990s was the failure of 747 savings and loan associations (S&L's) in the United States. The ultimate cost of the crisis is estimated to have totaled around USD$160.1 billion, about $124.6 billion of which was directly paid for by the U.S. government -- that is to say, by you and me, the U.S. taxpayers, either directly or through charges on our savings and loan accounts. This contributed in a major way to the large budget deficits of the early 1990s. The resulting taxpayer bailout ended up being even larger than it would have been because moral hazard and adverse-selection incentives compounded the system’s losses.

Moral hazard is the prospect that a party insulated from risk may behave differently from the way it would behave if it were fully exposed to the risk. It arises because an individual or institution does not bear the full consequences of its actions, and therefore tends to act less carefully than it otherwise would, leaving another party to bear some responsibility for the consequences. This is why many Republicans in Congress are very reluctant to give the Fed the kind of "blank-check" freedom to step in and open the Discount Window with "free money" for Fannie, Freddie and every other Tom, Dick and Harry financial firm that's gotten its ass caught in the wringer because they don't understand financial risk (and because the Fed, which was supposed to be watching them, was asleep at the switch!).

With the Federal Government bailout of Bear Stearns and now, imminently, of Fannie and Freddie; with huge losses in banks like Washington Mutual; with BofA having no choice but to swallow Countrywide whole, and IndyMac going down the crapper with an FDIC bailout because of its ridiculous exposure to sub-prime debt instruments -- we can see that the lessons of history haven’t made much of an imprint either on the financial community or on the Government that is supposed to be monitoring it. I think Alan Greenspan must have retired in the nick of time!

Let's take a quick math review on how much this stuff is actually costing us: The Federal Reserve printed $29 billion in new money and gave it to JP Morgan Chase to finance the Bear Stearns buyout.

According to the Federal Reserve, in February 2008 the M2 money supply was $7.57 trillion. Divide $29 billion by $7,570 billion and you get 0.38%: all outstanding dollars lost 0.38% of their value when the Federal Reserve printed the $29 billion in new money. Actually, even that understates it, because the US has a fractional reserve banking system: $29 billion of new base money can expand to roughly $290 billion of new credit. By that measure, all outstanding dollars lost about 3.8% of their value when the Fed printed new money to bail out Bear Stearns. We all pay; it's just a hidden tax in the form of lower buying power of all our dollars. Is it any wonder the U.S. Dollar is down the crapper on the world market?
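The arithmetic above, as a quick sanity check you can run yourself (the 10x multiplier is this post's fractional-reserve assumption, not an official Fed figure):

```python
# Back-of-the-envelope dilution from the Bear Stearns bailout.
new_money = 29e9          # new money printed for the Bear Stearns deal
m2_supply = 7.57e12       # M2 money supply, February 2008 (per the Fed)

direct_dilution = new_money / m2_supply
print(f"direct dilution: {direct_dilution:.2%}")

# Assuming (as argued above) a roughly 10x fractional-reserve multiplier:
multiplier = 10
effective_dilution = direct_dilution * multiplier
print(f"with 10x multiplier: {effective_dilution:.1%}")
```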

The following custom chart clearly illustrates that inflation (red line, as measured by the Consumer Price Index) closely tracks the broader M2 Money Supply (blue line):

M2 vs CPI

It’s gonna take a while for all this to wash out – at least two or three more years. Heads will continue to roll; the guilty will continue to go unpunished. You and I will continue to pay for it -- because it is WE who are asleep at the switch and do not hold our elected representatives accountable!  And – twenty five years from now, when it all happens again because everybody is asleep at the switch once again?

Heh-- I’ll post about it then.


Satire Takes an Uncertain Turn

I heard about the New Yorker's cover cartoon about Barack Obama and his wife Michelle in the Oval Office; there was an NPR interview I listened to:


Editor David Remnick says, more or less, that "This is a satire, we are making fun of all the rumors". The NPR interviewer hardly challenged this guy at all.

Look, I grew up in New York City. I started reading the New Yorker when I was about 14 years old. I learned a lot about cartoons and satire. I learned that the New Yorker was a sophisticated, challenging magazine that made me stretch, and think. I can tell you without equivocation that this cartoon cover sinks the New Yorker into the journalistic toilet. It SUCKS! This guy Remnick is a complete, tasteless MORON; he has no viable concept of what constitutes the long tradition of highbrow New Yorker-style satirical cartooning, and HE NEEDS TO GO BYE-BYE.

I find this highly offensive, and so should you. You put this CRAP ON THE COVER of your fine magazine? Shame on you, you TASTELESS MORON!  Remnick needs to take early retirement in a BIG HURRY.


Protect Your Ass Redux -- Redux!

Quis custodiet ipsos custodes?
Who shall keep watch over the guardians?

Some may  think I write about this subject (example) too often, but frankly, I don't think I write about it often enough:

Your server (or workstation) machine is important. Having it not boot up properly or operate normally  can often mean serious loss of income. That's real hard-earned dollars that you CANNOT GET BACK.  So why is it that so many Admins don't have a reliable backup and recovery strategy? Maybe we just think "it can't happen to me". Or maybe we're just plain stubborn and dumb!

The single most important ingredient of a recovery strategy is the ability to restore a known good Registry.  Registry corruption often occurs when the machine is shutting down as OS changes are being written to the Registry. It can also occur if there is a network (TCP)  glitch or a power glitch. The bottom line is this:


Registry corruption and its companion demon, system file corruption, can occur AT ANY TIME. Just because you've been lucky so far doesn't mean you are in the clear. Unfortunately, the typical configuration of a Windows Server 2003 or Windows Server 2008 machine does not make recovery of the Registry  easy, and if you don't have the tools you need to get "back to first base", your ass is TOAST!

Windows NT-based operating systems automatically create a backup of each Registry hive (.BAK) in the %Windir%\System32\config folder. Any file can be restored from the Recovery Console. The Recovery Console can be invoked by booting off the original OS CD-ROM or DVD, or (if you have installed it) from the OS Boot menu.

On Windows NT-based systems, the Last Known Good Configuration option in the startup menu relinks the HKLM\SYSTEM\CurrentControlSet key, which stores hardware and device driver information. However, these options are not always sufficient to completely recover a system. Often you may have a machine that either will not boot at all, or, if it does, major portions of the operating system are rendered unusable because of a corrupted Registry. In these cases, you really want a "PE" (pre-installation) subset of the OS you can boot into. Bart's PE is the answer:


Although Windows Backup can back up the Registry, it only does so with a lot of extra baggage and is not fun to use at all.  Needless to say, if you cannot boot into a GUI or your GUI environment is unusable due to Registry corruption, you may not be able to recover with Windows Backup at all. 

If you use Backup to back up your system state, it also copies your existing registry hive files to the %SystemRoot%\Repair folder. This means you can restore your corrupted or missing registry hives by logging on with the Recovery Console and copying the files from %SystemRoot%\Repair to %SystemRoot%\System32\config.

A much better and easier-to-use solution doesn't come from Microsoft -- it is Lars Hederer's ERUNT. ERUNT, in its default installation, will automatically save a complete copy of your Registry when the machine is successfully booted, in a folder called C:\Windows\ERDNT\Autobackup. These backups are created in subfolders with the current date, and ERUNT also places a recovery executable, ERDNT.EXE, which can be used to restore that registry backup. Double-click on it, reboot, and you're all fixed. You can also run it from a Recovery Console prompt or a Bart PE command prompt. ERUNT runs on all Windows OSes, both 32- and 64-bit, and it's saved my butt more times than I care to admit.

If you do not have ERUNT installed, you can look for backup files in the \config folder as mentioned above. In Vista and Server 2008 there is a \RegBack folder under the main \config folder. Look at the file dates carefully to determine which hives are the most recent. In any case, you need to be able to gain access to these folders, and the only way to do it is through either the Recovery Console or a utility CD such as Bart's PE - unless, of course, you are able to boot into Safe Mode.
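"Look at the file dates carefully" is the kind of chore a few lines of script can do for you. Here's a hypothetical Python helper (not part of any Microsoft tooling -- just a sketch of the idea, assuming you can reach the drive from a PE environment or Safe Mode) that sorts the files in a \config or \RegBack folder newest-first:

```python
import os

def newest_backups(folder):
    """Return the files in `folder` sorted newest-first by modification
    time, so you can see at a glance which hive backups are the most
    recent candidates for a restore."""
    names = [os.path.join(folder, n) for n in os.listdir(folder)]
    files = [p for p in names if os.path.isfile(p)]
    return sorted(files, key=os.path.getmtime, reverse=True)
```

Point it at something like `r"C:\Windows\System32\config\RegBack"` and restore from the top of the list.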

To replace bad system files (instead of doing a full installation with the (R)epair option) you can try running SFC /scannow from a command prompt. You'll need your original media, unless you've altered the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup key so that the SourcePath and other relevant entries point to a folder on your hard drive containing the setup files from the media.

You should also be backing up the IIS Metabase and your machine.config files for ASP.NET. See the previous post linked at the beginning for how.

In sum: ERUNT to autobackup the Registry. Recovery Console to get at the hard drive when you cannot boot. Regular backups of the IIS Metabase, and .NET machine.config files. Those are all the ingredients you need.

Now, having repeated myself for the fifth time, let me ask you this question: DO YOU HAVE a reliable server OS recovery strategy, and ARE YOU SURE IT WORKS?

And so I leave you with this little gem:

Natasha: Boris, you got plan?
Boris:   Beh-heh-heh! Plan? Of course I got plan! Dey don’t ever vork, but I got one!

Happy computing.


HOWTO: Delay autostart of a program with batch file

If you shoot at mimes, should you use a silencer?  - Steven Wright

This is kind of an interesting little exercise that came from our eggheadcafe.com forums and I thought I’d write it up for posterity:

A user has some sort of logging program (not a service) that depends on SQL Server being up and running in order to do its work. Problem is, the little proggie is starting its work before the SQL Server service is up at bootup. So the user is asking how you can delay the start of the proggie.

Being my helpful self, I advised him to write a .NET console app, have the app sleep the main thread for the specified time, and then use Process.Start to execute the real program.

User responds that he doesn’t know what Sleep means; if only there was a way to do this with a batch file.

Well, not to be daunted, I went out and did a bit of research. Turns out that there isn’t any native DOS command to “sleep” a batch file. The only way you could do it is to use something from one of the Windows Resource Kits, and that’s a real pain.  But I did find a way:

If you ping a nonexistent address with a number of repetitions specified and a timeout, and redirect the output into NUL, ping will happily ping away, waiting for the specified time period, quit, and then the next line of your batch file will execute. Example:

PING -n 1 -w 10000 >NUL

Save the above as “delay.bat” and double-click on it. Cool, eh? Just as a refresher, -n is the count, and -w is the timeout in milliseconds:

Usage: ping [-t] [-a] [-n count] [-l size] [-f] [-i TTL] [-v TOS]
            [-r count] [-s count] [[-j host-list] | [-k host-list]] 
            [-w timeout] [-R] [-S srcaddr] [-4] [-6] target_name


    -t             Ping the specified host until stopped. 
                   To see statistics and continue - type Control-Break; 
                   To stop - type Control-C.

    -a             Resolve addresses to hostnames. 
    -n count       Number of echo requests to send. 
    -l size        Send buffer size. 
    -f             Set Don't Fragment flag in packet (IPv4-only). 
    -i TTL         Time To Live. 
    -v TOS         Type Of Service (IPv4-only). 
    -r count       Record route for count hops (IPv4-only). 
    -s count       Timestamp for count hops (IPv4-only). 
    -j host-list   Loose source route along host-list (IPv4-only). 
    -k host-list   Strict source route along host-list (IPv4-only). 
    -w timeout     Timeout in milliseconds to wait for each reply. 
    -R             Use routing header to test reverse route also (IPv6-only). 
    -S srcaddr     Source address to use. 
    -4             Force using IPv4. 
    -6             Force using IPv6.


N.B. One user commented that you can do all this with Task Scheduler. I took a look, and at least in Windows Vista, TS has become much more feature-rich; indeed you can do the above and a lot more just by setting up a task.
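For what it's worth, the console-app approach I originally suggested (sleep the main thread, then launch the real program) is only a few lines in any language. Here's a hypothetical Python sketch of that same idea -- the command and delay are whatever your situation calls for:

```python
import subprocess
import sys
import time

def delayed_start(command, delay_seconds):
    """Sleep for `delay_seconds`, then launch `command` -- the same idea
    as Thread.Sleep followed by Process.Start in a .NET console app.
    `command` is an argument list, e.g. [r"C:\Tools\logger.exe"]
    (hypothetical path)."""
    time.sleep(delay_seconds)
    return subprocess.Popen(command)

# Example: wait, then run a trivial child process in place of the logger.
proc = delayed_start([sys.executable, "-c", "print('started')"], 1)
proc.wait()
```

Drop a shortcut to a script like this into the Startup folder and the "proggie" gets its head start on SQL Server without any ping trickery.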

Are online Video Tutorials really that useful?

I’ve noticed lately that more and more developers, gurus, etc. are turning to online videos instead of written articles. Personally, I don’t usually find videos that useful, unless the subject is important and the video is the only thing available. Most video tutorials aren’t well produced, they are linear in nature (unlike an article, where you can easily jump from one place to another), and you certainly cannot copy and paste code samples from them!

I much prefer written articles with images and sample code, and a downloadable solution zip file. What do you think?

In Other News....

This Google search, Silverlight Patent Suit -- is already bringing up over 72,000 results... So, what else is new? More Mindless Patent Extortion!

Regarding Twitter, I see that some people I follow have already started migrating to FriendFeed, which automatically picks up your "stuff" from a number of different services. There's a .NET API, and I may duplicate my IttyUrl.net links, all of which are automatically posted to Twitter, on FriendFeed as a backup move in the near future. See my post on "The Social API we really need". Twitter is a great concept, BUT!

Russell Beattie explains:

"The lesson from Twitter is that microblogs aren't Content Management Systems at all, but are instead Messaging systems, and have to be architected as such. SMTP or EDI are our models here, not publishing or blogs.

Here's how a microblog system has to work to scale: All the messages created by users have to go into a Queue when they're created, and an external process then has to go through one by one and figure out which messages go into which subscriber's message list. As the system grows and more messages are created, the messages may arrive in your "inbox" slower, but they will still arrive. This type of system can be easily broken up into dedicated servers and multiple processes can handle different parts of the read/write process, and the individual user message lists can be more easily cached - as once a page is created that contains messages, it doesn't change."

It's funny, because I actually built a system just like Beattie describes, for a former employer. The main difference was that it was for SIP VOIP messages instead of Twitters, but the concept is the same. Messages were sent over the wire and a collector process deposited them into MSMQ. A second process continually read out each message from the queue and handled the routing and addressing (think, "which users this message should be shown at, and how"). The thing processed 2,000 messages a second - and these message were BIG - not the measly 140 character twits you get at Twitter. It's easy to scale out such a system where there can be multiple destination queues and multiple processing services. The endpoint simply round-robins the messages into different queues on different machines on the processing farm.

I don't know what the real bottlenecks are at Twitter, but if you look at services like Live.com for Messenger (390 Million accounts, supposedly) they don't go down very often, and when they do, it's usually not for very long.