Thursday, July 14, 2011

Gold-plating or the Curse of the Architect

No solution should ever be engineered to be so technically complex, or genericised to the nth degree, that it becomes virtually impossible to redevelop, extend and maintain. Your years of technical experience may have made things that once seemed complex now seem easy, but the same is not true of those in your team, who are likely to have much less experience than you. The same applies to the process and method you must implement, which extends across gathering and documenting requirements, designing the software, developing it, building it, testing it, deploying it, maintaining it and so on, and making sure it all integrates seamlessly to deliver what is required on time and on budget. If only a select few, or no-one at all, “gets it”, then you’ll fall behind the moment you start. Communication is the one key attribute you have to master; being able to communicate what must be done in clear, simple language that is easily understood by all is a fundamental skill for an architect.

Complexity will bind you

Create too many staging gates or too many cumbersome and lengthy review and QA cycles; fail to clearly specify the deliverables, who owns them and how they align to the methodology and project plan; or enforce a tightly coupled and rigid developer environment with no automation of quality and far too much room for creative interpretation, and things will fall apart. Nowhere is the need for clarity more pressing than in offshored software development. The method, the process and the standards must be so well defined, and so translatable from the architecture and the requirements right down to the lines of code, that the concept of the “code factory” can actually be realised. But more on that in another article.

Consistency will save you

You must make sure the solution is designed and broken down into components that can be easily understood by designers and developers, so that they ultimately become reusable, testable and maintainable. Make sure every single artefact is produced in a consistent fashion. There is no shame in creating more components within a solution if it improves the overall simplicity and consistency of the design and development process. In fact it may end up being quicker to produce than the alternatives, because a simple and efficient process, once engrained in the minds of those following it, becomes innate, repeatable, measurable and predictable. Make aspects of the solution, or the process that produces it, do too many things and it will grow out of control quickly, because you will lose track of where and how things are being done. If consistency is inherent in everything you do, changing things is simple. A highly modularised design is easier to modify and extend than one which is tightly coupled, cumbersome and inconsistent from one software layer to the next. We’ve all heard of the importance of architectural patterns, and no doubt we’ve all read the work of Erich Gamma and his co-authors; one of the principles that underpins this way of thinking is consistency.

A saying I picked up early in my career as a junior developer, from a highly skilled if somewhat socially inept architect, is one I have never forgotten: “I don’t care if you make mistakes; all I care about is that if you do make them, you make them consistently. Consistent mistakes we can fix; inconsistent ones we cannot.”

Minimalism will break you

There is a perception amongst many architects and developers that being as minimalist as possible, packing as much complexity as they can into the fewest artefacts, is somehow conducive to creating a highly elegant and functional application. It isn’t. Unless you are blessed with a team of people as smart as yourself it will not work, because fundamentally all IT projects are produced by humans, and humans all think differently. Know your team’s capabilities, know the expectations of the client, and create processes, standards and a solution that meet these requirements in a simple and consistent fashion, and you will be successful. Your worst enemy is always yourself: over-think, over-engineer or over-complicate it for your own ego’s sake and it will fail. You can sometimes get away with it on a small project (under $500,000 AUD), but you won’t on anything at or beyond $1M AUD.

Owning the failures and sharing the success = respect

Back yourself and your judgement. Be confident in your decisions and people will buy in to what you are selling; be cagey, un-cooperative and aloof and those below you will lose faith in the directions you set. There is no shame in being wrong or not knowing all the answers. Be accountable for your mistakes, learn to accept you are not always right, and you will be amazed at how well things turn out. Don’t be afraid to stick your neck out and take responsibility when you fail. Because you will fail. What matters most is the way you handle and respond to it. Start pointing fingers, shouting and blaming others and you will lose respect. Own the response to fix the problem, commit yourself and always tell the truth, even when it hurts, and you’ll be respected.

How to sum it up? Why quote a luminary of course

I am both a victim and a perpetrator of the behaviour described in this quote from Frederick Brooks. Bookmark it and remember it, to keep yourself grounded:
An architect’s first work is apt to be spare and clean. He knows he doesn’t know what he’s doing, so he does it carefully and with great restraint.
As he designs the first work, frill after frill and embellishment after embellishment occur to him. These get stored away to be used “next time.” Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.
This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.
The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one. The result, as Ovid says, is a “big pile.”

Monday, July 4, 2011

Common Information Model, Canonical Schema, whatever you call it, just do it. Always.

Whatever name you apply to it, for any software being developed, be it a custom ground-up build or a piece of integration middleware, one of the first and foremost tasks of any designer is to model the data the system is going to use and the structures and relationships that compose it.

Before you start creating your sequence and activity diagrams, survey the domain of the problem you are trying to solve. Look at all the unique pieces and groupings of data that will be used throughout the software layers and interfaces, how the host systems categorise and organise relationships, how the business requirements reference and refer to it, and so on, and use that information to create the model. You’ll get a lot of this from the use-cases being constructed (if they are thorough enough) and also from system interface specifications, database structures and screen layouts. If you’re lucky the client may have already done this task on a previous project, and you may be able to leverage the work already done; sometimes you can find evidence of it within the enterprise architecture, although more often than not it will be very high-level and difficult to leverage without a lot of decomposition.

Generating the model is pretty straightforward. You can use any modelling tool that is available, but try to use one that can generate code from a class diagram, so that updates are easy to maintain. My preferred tool of choice is Enterprise Architect by Sparx Systems, not just because it is Australian, but because it is simple to use, cheap and very, very powerful. Another option is to define the model in XML Schema; middleware tools such as BizTalk Server adopt this approach when defining data schemas.
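To make the idea concrete, here is a minimal sketch of the kind of class model such a tool might round-trip between diagram and code. The entity names (Customer, Address) and fields are purely illustrative, not from any real project:

```python
from dataclasses import dataclass, field

# Hypothetical entities of the kind a modelling tool might generate
# from a class diagram; the names and attributes are illustrative only.
@dataclass
class Address:
    street: str
    city: str
    postcode: str

@dataclass
class Customer:
    customer_id: int
    name: str
    # One-to-many association: a customer owns zero or more addresses.
    addresses: list[Address] = field(default_factory=list)

acme = Customer(customer_id=1, name="Acme")
acme.addresses.append(Address("1 George St", "Sydney", "2000"))
```

Keeping the model as plain, behaviour-free classes like this is what makes regenerating it from the diagram (or vice versa) low-risk.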

The level of modularity you build into the model is important, and should take into consideration how the model can be extended and reused both within the project you are working on and in potential future ones. One of my preferred methods for breaking a model down is to apply a common database design theory: normalisation. Once you have created the first draft of the data model, start normalising it so that it becomes more modularised and hence more extendable and reusable. How far you break it down depends on what is appropriate for the system being built, but at a minimum get it to second or third normal form and leave it there.
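The normalisation step can be sketched in a few lines. Assume a hypothetical flat order record in which customer details are repeated on every row; factoring the repeated attributes into their own entity, referenced by key, is the move toward second/third normal form described above:

```python
# A flat, denormalised record set: customer details repeat on every order.
flat_orders = [
    {"order_id": 1, "customer_name": "Acme", "customer_city": "Sydney", "item": "Widget"},
    {"order_id": 2, "customer_name": "Acme", "customer_city": "Sydney", "item": "Sprocket"},
]

# Normalising: pull the repeated customer attributes out into their own
# entity and have each order reference the customer by key instead.
customers = {}
orders = []
for rec in flat_orders:
    key = rec["customer_name"]
    customers.setdefault(key, {"name": rec["customer_name"], "city": rec["customer_city"]})
    orders.append({"order_id": rec["order_id"], "customer": key, "item": rec["item"]})
```

After the split there is one customer entity and two orders pointing at it, so a change to the customer's city is made in exactly one place, which is precisely the reusability and extendability benefit being argued for.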

Once the model is defined, its usage should be permitted only within the layer it has been created for. A Business Logic Layer model should not be used in the Presentation Layer, nor in the Data Layer (these should model their own data accordingly); in an integration solution the concept of internal and external schemas should be adhered to, and the principle is the same. Exposing any of the model’s entities within service interfaces should be forbidden, because the flow-on impact of a change to a model object would not be contained within the layer itself but would also impact the services that expose it. For these reasons, all requests should be translated to and from the data model within the services that expose the interfaces. The following diagram illustrates this concept in more detail.

Encapsulated Data Model
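The rule above, never letting internal model entities leak across a service interface, amounts to a translation step at the boundary. A minimal sketch follows; the names (CustomerEntity, CustomerDto, get_customer) are hypothetical, and the entity is constructed inline as a stand-in for a real repository call:

```python
from dataclasses import dataclass

@dataclass
class CustomerEntity:
    # Internal Business Logic Layer model.
    customer_id: int
    name: str
    credit_limit: float  # internal detail, never shown to external consumers

@dataclass
class CustomerDto:
    # External contract exposed by the service interface.
    id: int
    display_name: str

def to_dto(entity: CustomerEntity) -> CustomerDto:
    # Translation at the service boundary: internal model changes
    # (renaming credit_limit, adding fields) never ripple into the
    # published interface, only this mapping function changes.
    return CustomerDto(id=entity.customer_id, display_name=entity.name)

def get_customer(customer_id: int) -> CustomerDto:
    entity = CustomerEntity(customer_id, "Acme", 10000.0)  # stand-in for a repository lookup
    return to_dto(entity)
```

The cost is one small mapping function per exposed entity; the payoff is that the internal model can be versioned and refactored freely, which is exactly the containment argument made above.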

Now I bet some of you read that last paragraph and thought it was a load of crap. If you didn’t, good. If you did, consider this question: why did the major database vendors incorporate stored procedures into their platforms to control access to the data held within tables, as an alternative to making direct table-access calls from code? Not sure? Because direct access was a bad idea 20 years ago and it still is now. With that in mind, let us ponder another point: it is both accepted fact and considered best practice within the IT industry that every logical layer of a software system should have a boundary of controlled entry points; that these entry points must not be bound to the data structures and logic within, to avoid exposing data and logic (sometimes a security issue); and that the entry points should be able to be versioned and extended without impacting the functionality underneath. Sound familiar? This is one of the principles that govern the implementation of service-based systems, also known as being part of a broader SOA implementation. What I have described above simply follows the same pattern. Yes, you could avoid it on small applications where the code base is small, but if you don’t do it on enterprise-scale applications with large development and design teams you’ll be screwed, so why not follow the same pattern and make it a habit? At times it may be a bit more work, but I believe the trade-offs are worth it.