I see so much of this on forums and newsgroups that I almost feel compelled to say "something". It seems that somewhere along the way, new .NET developers read (or are told) that the DataReader is the "most efficient way" to get data out of a database. What happens next is that they take this 100% literally and get themselves into all kinds of trouble - blowing up connection pools, trying to do things that cannot be done, and so on. The DataReader becomes their de facto, idée fixe method for getting data, no matter what.

Yes! The DataReader IS the fastest way to get data out of a database. However, that speed comes with a price you need to evaluate. DataReaders hold the connection open. You cannot have a method that just "returns a DataReader" - leaving you to blithely do whatever you want -- unless CommandBehavior.CloseConnection was used in the initial SQL call and you are prepared to call the Close method on your reader.

In addition, DataReaders offer only ONE-TIME, firehose-style, forward-only access through the result set. You cannot "reset" a DataReader; you cannot "go backwards". One time through, and that's it, pal! The ONLY other thing you can do is switch to the next result set in the reader, if there is one. And if you do not explicitly call the Close method on the reader (or the connection), preferably in a finally block, your app will come to a screeching halt when the 100 connections in the pool are gone. PLONK!
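The safe pattern described above - CommandBehavior.CloseConnection in the initial call, plus a guaranteed Close in a finally block - can be sketched like this (the connection string, query, and column types are illustrative placeholders, not from the original post):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ReaderDemo
{
    static void PrintCustomers(string connString)
    {
        SqlConnection conn = new SqlConnection(connString);
        SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn);
        conn.Open();

        SqlDataReader reader = null;
        try
        {
            // CloseConnection ties the connection's lifetime to the reader:
            // closing the reader also closes (returns) the connection.
            reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
            while (reader.Read())
            {
                Console.WriteLine("{0}: {1}",
                    reader.GetString(0), reader.GetString(1));
            }
        }
        finally
        {
            // Always close in a finally block, or the pooled connection leaks.
            if (reader != null)
                reader.Close();
        }
    }
}
```

In C# 2.0 you can replace the try/finally with a using block over the reader, since SqlDataReader implements IDisposable and Dispose calls Close for you.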

By contrast, the TableAdapter (in .NET 2.0) has a Fill method that uses a DataReader under the hood to fill the DataTable, automatically closes the connection for you, and now you've got a DataTable that you CAN do all those cool things with. If you are going to want to sort, filter, page, choose a row, or do any of a dozen other things, then FORGET about using DataReaders!

I should also add, based on one of the comments, that the TableAdapter is a control and comes with a lot of overhead. If you just want a DataTable programmatically, you can use the DataAdapter's Fill method (which also uses DataReader "under the hood") and get your table very efficiently. If you use the Patterns and Practices Group's SqlHelper class ("Microsoft.ApplicationBlocks.Data") you don't even have to worry about connections, because it cleans up after itself automatically.
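Filling a DataTable programmatically with a plain DataAdapter, as described above, might look like this (the connection string and query are assumed for illustration):

```csharp
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static DataTable GetCustomers(string connString)
    {
        // Fill opens and closes the connection for us - no cleanup code needed.
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", connString))
        {
            DataTable table = new DataTable("Customers");
            adapter.Fill(table);   // uses a DataReader under the hood
            return table;          // connection is already back in the pool
        }
    }
}
```

The returned DataTable can then be sorted, filtered, bound to a grid, and so on, with no connection held open.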

In my opinion, the slight additional overhead of filling a DataTable with the contents of the temporary DataReader is so small as to be inconsequential. This does not even address the fact that any good developer is usually able to cache most database data, another useful feature that is overlooked a good part of the time. I have finally trained myself to ask, every time I get data out of a database, "Can I cache this? If so, what kind of caching do I want to use?".
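As a sketch of the "can I cache this?" habit in an ASP.NET app, one might keep the filled DataTable in the application cache with a short expiration (the cache key, timeout, and helper method are assumptions, not from the post):

```csharp
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

static class CustomerCache
{
    public static DataTable GetCustomers(string connString)
    {
        // Look in the ASP.NET cache first; hit the database only on a miss.
        DataTable table = HttpRuntime.Cache["Customers"] as DataTable;
        if (table == null)
        {
            table = LoadCustomersFromDb(connString);
            HttpRuntime.Cache.Insert("Customers", table, null,
                DateTime.Now.AddMinutes(5),   // absolute expiration (assumed)
                Cache.NoSlidingExpiration);
        }
        return table;
    }

    // Hypothetical loader - e.g. a DataAdapter.Fill call as shown earlier.
    static DataTable LoadCustomersFromDb(string connString)
    {
        using (System.Data.SqlClient.SqlDataAdapter adapter =
            new System.Data.SqlClient.SqlDataAdapter(
                "SELECT CustomerID, CompanyName FROM Customers", connString))
        {
            DataTable t = new DataTable("Customers");
            adapter.Fill(t);
            return t;
        }
    }
}
```

What kind of caching is appropriate (absolute vs. sliding expiration, a SqlCacheDependency, or none at all) depends on how volatile the data is.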

In most cases, unless they know exactly what they are doing and why they need a DataReader, developers will be better served by FORGETTING ABOUT DATAREADERS and using DataTables and DataSets.

Just my two cents.


  1. Anonymous 5:03 PM

    This kind of blanket advice is quite disturbing. Select a method that is appropriate for the problem. Should we throw out XmlReader too in favour of XmlDocument?

  2. Anonymous 6:16 PM

    IMO, it's good advice. The only thing I would add is consider the command's ExecuteScalar method when retrieving a single value from the database. And if you do use the datareader, use the CommandBehavior.CloseConnection option and always immediately read the data and then close the datareader.

    A comment about the TableAdapter. The .Net TableAdapter ultimately uses the Data Provider namespace's DataAdapter. Using the Fill method of the DataAdapter (for example, SqlDataAdapter) is equivalent to using the TableAdapter's Fill method.

  3. RE: "Blanket advice": You and I know how to use DataReaders, but many n00bs do not, and they aren't receiving good advice on how to.
    The purpose of the "blanket advice" is to attract attention. Obviously, it must have worked :-)

  4. Anonymous 10:19 PM

    Yes, let's all forget about data readers and instead use Data Tables!

    Then the hardware vendors can sell computers with 60 gigs of RAM instead of just 2!!!

    Whoohoo! What a wonderful idea!

    How about instead people actually read the documentation and use it intelligently and use DataReaders whenever possible and make sure they close the connection when they're done.

    Why? Because a data reader only loads into memory the current record of information, NOT THE WHOLE THING. Thus, unless you're going to manipulate the data (all the data), or more likely bind it to a grid of some sort for the end user to manipulate and then save the changes, you should be using a DataReader.

    This is exactly how bloated, crappy code that requires 500 GHz computers and a billion gigs of RAM is written... *sigh*

  5. Anonymous 5:04 AM

    TableAdapters = Bloated code? I think not!

    This reminds me of something Peter mentioned many moons ago about not using catch-all exception handling; the gist of the article was that if you wanted to avoid having code throw exceptions then just write good code in the first place.

    Sure, datareaders have a lower overhead than a tableadapter, but then you have to write a whole lot of plumbing code to make sure they clean themselves up properly, in addition to which there is no support for strong typing unless you write more plumbing code. On the other hand, a tableadapter will give you strong typing (and Intellisense) and robust data handling, and VS will autogenerate it for you.

    Of course you could just not bother writing the extra code but then your application is more likely to fail - simple as that. The fact is that most servers these days are more than capable of handling the slight extra load incurred by the use of tableadapters (client PCs are hardly going to struggle). With a robust architecture scalability doesn't even come into it.

  6. Anonymous 11:16 AM

    I think Peter’s point is more about connections than the datareader or tableadapter. The datareader requires an open connection, and if you are using multiple datareaders at the same time, you have multiple connections open. And if you fail to close a connection, it may be some time before the GC returns it to the available pool. The tableadapter opens and closes the connection automatically, eliminating the possibility of having multiple connections open. Using a datareader does not require a lot of plumbing code, and the layer reading the reader does not need to “see” the connection if you use the CommandBehavior.CloseConnection option. You do lose the Intellisense that a typed dataset provides, but you also have the series of .Get methods (.GetInt32, .GetDecimal, etc.), which provide the same key functionality as a typed dataset: the compiler knows the data type, and it eliminates boxing.

    I am not a big fan of the TableAdapter, simply because it creates a number of methods that I will never use, a symptom of almost all wizards. Creating a tableadapter for a simple table with a few columns generates 6,000 lines of code, almost all of it not needed. I am a big fan of typed datasets, not only for the Intellisense but, more importantly, because the compiler can generate more efficient code since it knows the data type. I use the wizard to create the typed dataset, then delete the generated tableadapter code.

    And as Peter states, if you know what you are doing, there is nothing wrong with the datareader.

  7. The most disturbing thing in these posts is that everyone thinks handling DataReaders is some sort of difficult task. I think the point should be made that if you can use a DataReader in a using statement, then use a DataReader (in a using statement) - if you would have to pass around a DataReader object, then think twice. If Microsoft implements IDisposable, take notice! There's probably a good reason.

