On Developer Wisdom

Don't you wish there was a knob on the TV to turn up the intelligence? There's one marked 'Brightness,' but it doesn't work. - Gallagher

Wisdom. The “Wisdom of the Ages” -- wisdom is an ideal that has been celebrated since antiquity as the knowledge needed to live a good life. What that means exactly depends on the particular school or tradition claiming to foster it, but in general these schools have emphasized some combination of knowledge, understanding, experience, discretion, and intuition, along with the capacity to apply those qualities well in finding solutions to problems. These concepts, as one might guess, apply equally well to software developers.

For a software developer, however, wisdom is a much narrower concept. It comes from learning from one’s mistakes, from studying what others have done, and from learning accepted, proven patterns of good software design. Above all, it comes from being willing to change, to endure the pain of mastering something new, and then to refactor those old patterns and techniques as newer and better ones are discovered.

Wisdom can mean learning Lean Programming, which teaches us to eliminate wasted effort by focusing on vertical slices of design that satisfy only the needs of the business requirements, instead of building out entire layers of an application in advance. It can also come from learning to employ Agile development methodologies, most of which promote iterative development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project. And let's not forget TDD (Test-Driven Development), an integral part of Agile practice.
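
As a tiny illustration of the TDD rhythm (write a failing test first, then just enough code to make it pass, then refactor), here is a minimal sketch using NUnit. The OrderCalculator class and its GetTotal method are hypothetical names invented for this example, not part of any framework mentioned above:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class OrderCalculatorTests
{
    // Step 1: the test is written before OrderCalculator exists, and it fails.
    [Test]
    public void GetTotal_SumsTheLinePrices()
    {
        var calculator = new OrderCalculator();
        Assert.AreEqual(12.5m, calculator.GetTotal(new[] { 10m, 2.5m }));
    }
}

// Step 2: only enough production code is written to make the test pass;
// the test then acts as a safety net for later refactoring.
public class OrderCalculator
{
    public decimal GetTotal(IEnumerable<decimal> linePrices)
    {
        decimal total = 0m;
        foreach (var price in linePrices)
            total += price;
        return total;
    }
}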

I believe we, as .NET developers who were primarily taught a data-centric approach to enterprise application development back in the “Windows DNA” days, are having a collective epiphany about more open-source, domain-driven approaches to design. You can see this in the kinds of frameworks now being embraced by mainstream .NET developers: NHibernate and other open-source ORMs, ActiveRecord, StructureMap, the Spark MVC view engine (as well as ASP.NET MVC), and others. These frameworks haven’t been lost on the Microsofties, either. I distinctly remember Scott Guthrie uttering the word “NHibernate” during his MIX 09 keynote presentation. And recent issues of MSDN Magazine have featured articles by such notables as Jeremy Miller, a C# MVP who espouses Fluent NHibernate, StructureMap, and similar frameworks. Do you read this stuff? I do, and when MSDN Magazine agrees to publish this kind of content, I take notice. It’s not just that I’ve already been following it; it’s that a major publication read by any .NET developer with an above-room-temperature IQ has just said, “We’re cool with this, and we want you to know about it”.

The concept of designing one’s POCO (Plain Old CLR Object) classes first, and creating the database persistence layer from that domain only after the domain is feature-complete, is still quite foreign to many developers who otherwise envision themselves as “mature” or “senior level”.
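
To make that concrete, here is a minimal “POCO first” sketch, assuming Fluent NHibernate is the mapping tool; the Customer entity and CustomerMap class are illustrative names only, not anything the frameworks prescribe:

using FluentNHibernate.Mapping;

// The domain class comes first: plain C#, no database attributes, no generated base class.
public class Customer
{
    public virtual int Id { get; set; }        // virtual members let NHibernate create runtime proxies
    public virtual string Name { get; set; }
}

// The mapping is written afterwards, deriving persistence details from the domain
// rather than deriving the domain from a pre-built schema.
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);
    }
}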

I still see developers who insist on creating an entire database schema for their application before they’ve created even the beginning of their domain model. Often they dismiss “top down” development as being inappropriate for their particular “modality” – even in the face of overwhelming evidence that it is the business domain model and logic that should be dictating the eventual persistence schema, not the other way around.
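
For what it’s worth, NHibernate can even generate the persistence schema from the mappings, which is one concrete way the domain ends up dictating the database rather than the reverse. A rough sketch, assuming classic hbm.xml mappings registered through hibernate.cfg.xml (Fluent NHibernate exposes the same capability through its own configuration API):

using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public static class SchemaGenerator
{
    public static void EmitDdl()
    {
        // Reads hibernate.cfg.xml and the mappings it references.
        var configuration = new Configuration().Configure();

        // true: write the generated DDL script to the console;
        // false: don't actually execute it against a database yet.
        new SchemaExport(configuration).Create(true, false);
    }
}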

I am by no means saying that developers who start by designing their persistence schema and then build up from it are wrong. What I am saying is that if this is the only way they can (or will) develop an application, something may very well be wrong. You can show them how to do it the new way, but if they refuse to be open to a new concept, you should move on. Let somebody else be the evangelist!

Developers are learning how to simplify software development by applying the DRY (Don’t Repeat Yourself) principle using various frameworks and tools, most of which are readily available for the .NET platform and many of which arrive as ports from the Java space, which has been around a bit longer than .NET. Isn’t it interesting that as .NET has matured, you don’t see the old Java-versus-.NET flame wars anymore? The Java guys have given us .NET kiddies quite a bit to chew on.

One of the most difficult challenges to obtaining “wisdom” in the developer space is the natural tendency of developers to avoid (or simply be ignorant of) lateral-thinking techniques and to be reluctant to “do things differently”. It is easy for a developer who has a technique or a tool they’ve engineered themselves to unwittingly lock themselves into a restrictive programming paradigm, simply because they are unwilling to accept that better tools may now be available to them. I call this phenomenon “coveting thy code”. The learning curve to master a new framework or concept is often dismissed with the thought “I just don’t have the time”, or “nah - my ‘thing’ is better than that thing”.

One strategy I have found helpful is to make an honest assessment of which frameworks and tools I believe are truly important to my development career, and to focus only on those; doing so frees up a lot of extra time to get the job done. I used to jump at every CTP and beta of this, that, and the other thing. Now I don’t; I stay focused on a handful of core technologies. You won’t see me messing with Azure Services, for example, because it’s not “baked” yet, and frankly, I don’t really need it just yet anyway. In fact, ASP.NET MVC 1.0 only recently popped out of the oven as “done”, and I didn’t even begin with it until it had already reached the RC1 level. Is that “wisdom”? I say it is.

Sometimes I think of myself as an “old dog who’s learning new tricks”. Yes, it’s hard. But I wouldn’t have it any other way. What about you?

Comments

  1. Ben Eaton, 4:45 PM

    While I agree that we should be constantly evolving our development knowledge (and learning wholly new skills to boot), I'm still not sold on the idea that the persistence layer should be dictated by the domain model.

    A key concept from the early days of relational database design was that there exist both a logical and a conceptual model. Having spent most of my time before the last 18 months as a .NET application developer, and the last 18 months as a BI consultant, I can really see the benefit of applying this model. I now see every day how not applying best practice in application design, particularly around databases, can cause months of extra effort in the BI and performance layer.

    I completely agree that the application should be modelled around the domain model and by implication be the guiding model of the system. However it is a rare case that the application layer should directly resemble the persistence layer. Despite the doom-mongering of the cult of the OR database, I can assure you that the relational model is far from dead.

    The amount of effort required to convert an OLTP system into a data warehouse should not be excessive - this is best supported by an additional abstraction layer between the service layer and the database.

  2. @Ben
    Actually, it is possible we are in complete agreement, since tools like NHibernate allow exactly these kinds of mappings between the domain and the persistence layer to be defined explicitly.

  3. Well said. I honestly believe that more and more .NET developers are coming to this way of thinking. It's exciting times.

