Bringing Agility to Architecture, and Architecture to Agility


Agile Architect is run by Andrew Johnston of Questa Computing Ltd.


www.andrewj.com

 

Agile Architect Articles

If you want to be kept informed about new content, please subscribe to my RSS feed

 

My First Android App: Stash-It!

After a couple of months of busy early morning and late night programming, my first Android app has finally been released. Please meet Stash-It!

Stash-It! responds to an odd side-effect of the difference between the iOS and Android security models. On the iPad, there are a large number of applications which offer an “all in one” approach to managing a group of related content. These are a bit frustrating if you want to share files transparently and seamlessly between applications, but there are times when you want to manage a group of files securely, and then the iOS approach is great.

Android is the other way around. The more open file system and component model encourages the use of specialist applications which do one job well, but it can be a challenge to keep related files of different types together, and to hide them if you don’t want private client files or the like turning up unannounced in your gallery of family photos!

Stash-It! tries to plug this gap, by providing an “all in one” private file manager, tabbed browser and downloader for Android. You can get all these functions independently in other apps, but Stash-It! is the only one which brings them together in one place. It’s the ideal place to keep content you want safe from prying eyes: financial and banking records, health research, client documents. I suspect a few will even use it for a porn stash, but that’s not its only use! :)

There are built-in viewers for most common image and movie formats, plus PDF and web files, so you don’t have to move these outside the application to view them. However, when you do need to use an external application, Stash-It! has a full suite of import and export functions to move your files or open them with other applications.

It took a while to design the security model. Stash-It! encrypts the names of files so that they can’t be read, and won’t be visible to the tablet’s gallery and similar applications, but the content of your files is untouched, so there’s little risk of losing data. Hopefully this strikes a sensible balance between privacy and risk.
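
To make that concrete, here’s a minimal sketch of name-only encryption in Java – purely illustrative, with made-up class and method names, and not Stash-It!’s actual code. The file’s bytes are left exactly as they are on disk; only the visible name is transformed into something unreadable (and un-indexable by gallery apps).

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical sketch: hide a file by encrypting only its NAME, leaving the
    // content bytes untouched, so a failed decrypt can never corrupt the data itself.
    public class FilenameVault {

        private final SecretKeySpec key;

        public FilenameVault(byte[] rawKey) {
            // AES-128 key; a real app would fetch this from a secure keystore.
            this.key = new SecretKeySpec(rawKey, "AES");
        }

        public String hideName(String plainName) throws Exception {
            // ECB keeps the mapping deterministic (the same name always hides the same way);
            // a production design would want a stronger scheme.
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] enc = cipher.doFinal(plainName.getBytes(StandardCharsets.UTF_8));
            // URL-safe Base64 so the result is still a legal filename.
            return Base64.getUrlEncoder().withoutPadding().encodeToString(enc);
        }

        public String revealName(String hiddenName) throws Exception {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] dec = cipher.doFinal(Base64.getUrlDecoder().decode(hiddenName));
            return new String(dec, StandardCharsets.UTF_8);
        }
    }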

Even if you’re not too worried about privacy, Stash-It! is a great place to collect material related to a particular project, with all your different file types and web research in one place. You can bookmark web links, but also positions in video files or PDF documents. Web pages can be saved intact for reference or offline reading. Again, you can do a lot of these things in separate apps, but I believe Stash-It! is the first one to bring all these functions together where you might want them.

I’ve got a lot of ideas in the pipeline to improve it further, but it’s now time to test the market and see whether I’ve spotted a gap which needed plugging or not.

Take a look and let me know what you think!

 

Here’s the Google Play Page. You can also read the helpfile.

What Do I Mean by “Agile Architecture”?

A little while back I was approached by EITA Global, a global provider of on-line training, and we have now agreed that I should present for them a webinar entitled “Agile Architects, and Agile Architecture”. The current plan is for this to run on 8th April. I’ll keep you all posted with any changes.

As part of my preparation, I decided to do a literature scan to see how this topic may have moved on since the last time I did some significant work on it, a couple of years ago. I have to say that based on my initial research I’m not that impressed… I don’t know whether to be flattered or slightly perturbed that AgileArchitect.org comes up squarely at the top of a Google search. There are a few decent web articles around, although most are several years old and I’d seen them before. The Google search also turns up several dead links.

Amazon turned up a couple of loosely-related books, and the most obvious candidate appeared to be “Lean Architecture: for Agile Software Development” by James O. Coplien and Gertrud Bjørnvig. I’ve now read a couple of chapters, but my first impression is not very favourable. I may be rushing to judgement, in which case I’ll apologise later, but the book seems to somehow equate “architecture” with “code structure” with “project structure”, which isn’t right at all, missing a number of the most important dimensions of any true architecture.

This led me to ask myself a very basic question. “What do I mean by ‘Agile Architecture’?”. In Coplien and Bjørnvig’s book they seem to answer “an architecture which facilitates agile development”. That may be one definition, but it isn’t mine.

I think the confusion arises from the difference between “agile” applied to a process (e.g. software development), and applied to a product. In the former case, the Agile Manifesto undoubtedly applies. In the latter, I’m not so sure. I think that for a product, and especially its architecture, the primary meaning of “agile” must be “able to respond to change”. The larger the change which can be handled quickly and cheaply, the more agile the architecture. An architecture which has been built in a beautifully run agile project but which needs new code the first time a business rule changes is fragile, not agile. The system which can absorb major changes in the business rules without a single line of code is genuinely agile. The integration architecture which allows multi-million pound system A to be upgraded with no impact on adjacent multi-million pound system B, or which allows the company to be restructured just by re-configuring its services, is the most agile of all.
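
To pin the distinction down, here’s a trivial sketch of my own (not taken from any of the sources above, and with hypothetical names): if the business rules live in data, then adding or changing a rule is a configuration edit, not a code change – which is the sense of “agile” I mean for a product.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch: discount bands held as data rather than code.
    // Changing a threshold or rate means editing configuration (or a database row),
    // not shipping a new build.
    public class DiscountRules {

        // In practice these rows would be loaded from a file, database or rules service.
        record Band(double minOrderValue, double discountRate) {}

        private final List<Band> bands;

        public DiscountRules(List<Band> bands) {
            // Highest threshold first, so the first match wins.
            this.bands = new ArrayList<>(bands);
            this.bands.sort(Comparator.comparingDouble(Band::minOrderValue).reversed());
        }

        public double discountFor(double orderValue) {
            for (Band b : bands) {
                if (orderValue >= b.minOrderValue()) {
                    return orderValue * b.discountRate();
                }
            }
            return 0.0;
        }

        public static void main(String[] args) {
            DiscountRules rules = new DiscountRules(List.of(
                    new Band(1000.0, 0.10),   // new bands arrive as data, not as code
                    new Band(500.0, 0.05)));
            System.out.println(rules.discountFor(1200.0)); // prints 120.0
        }
    }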

I’m slightly worried that “agile” may have become a “reserved word”, and this “architecture in the large” definition may run counter to accepted practice. Is that right, or am I reading too much into a few examples?

Break Compatibility, Lose Loyalty

For almost 20 years I have been a fan of, and borderline apologist for, Microsoft. One of the main reasons was their focus on software usability, backed up by a visible intention to preserve backwards compatibility wherever possible. While each new release of Windows, Office, IE and Visual Studio brought new features, these were by and large an extension to rather than a replacement for that which already worked. When a compatibility break was absolutely necessary, such as with the transition to VB.NET, it was well signposted and the option to parallel run the old version well supported.

Sometime around 2007-8, maybe by coincidence just when Bill Gates retired, this all went to hell in a handcart, and since then I’ve been cursing new Microsoft software versions as much as praising them. Each release has brought frustrations, and in many cases they have been sufficiently severe to drive me to adopt a competitor’s product, or at least a third party add-on.

XP SP2 broke the WMA format so that it is incompatible with most third-party players: my car was new in 2008, but I still have to rip CDs for it using an XP SP1 virtual machine. Vista broke the reliable and flexible ntbackup. It took a bit of effort to get it working again, and it’s still part of my (more complex) backup strategy, but the “heavy lifting” is now done by Acronis rather than Windows.

The disruptive user interface and file format changes of Office 2007 have been widely discussed elsewhere. Suffice to say that I never used Office 2007, and run Office 2010 only with a third party add-on which restores the old menus. The compatibility-breaking changes to follow up flags in Outlook 2010 are extremely annoying, but as yet insufficient to drive me to an alternative product.

The same is not true of the changes to Virtual Machine support in Windows 7. Before that move, I used Microsoft’s own Virtual PC extensively. However, the loss of compatibility, features and reliability was so severe that I now only use and recommend VMWare Workstation/Player for this purpose. You can read about my experiences here.

The latest problem, and what has prompted this blog, is the appalling state of Internet Explorer 9. I have been a faithful user of IE since V1, and have lived, fairly happily, with its limitations through to IE8. However, since “upgrading” to IE9 I have become completely disillusioned, because it just isn’t reliable enough. Here is a sample of the things which just don’t work properly:

  • Downloading dynamically-generated PDF files, such as bills from BT,
  • MasterCard SecureCard authentication. This one’s a real pain if you’re at the end of a long online purchase, and you find your main credit card won’t work,
  • The combined address / Google search bar. If I type in a valid www… address, I expect the browser to at least attempt to use it, not do a search!
  • Printing. Some long text pages, especially from typepad blogs, get mashed with the main font/character set replaced by something unreadable,
  • Rendering some web sites readably at all. Some of the worst offenders, ironically, are Microsoft’s own “support” forums.

By direct contrast, Google Chrome seems to do a decent job of all the above. I am hereby announcing my intention to make it my primary browser whenever I have a choice.

I’m now really scared about Windows 8, with its so-far half-hearted changes to the desktop. What will that wreck?

Now in fairness, Microsoft are not the only, or maybe even the worst, offenders in this space. For example, Bibble/Corel have just pushed through a change to their AfterShot Pro software which no-one wanted and which breaks a plugin I’ve written, although I suspect that in that community I have some influence to say “the new version is broken, don’t use it.”

I really don’t understand Microsoft’s behaviour here. Are all these compatibility wrecks conscious decisions? If so, do the conquest sales related to cool new features really outweigh the loss of loyalty from existing users? If not, have they just got lazy and complacent? Who knows?

Tyranny of the Colour Blind

Or Have Microsoft Lost Their Mojo?

I like colour. I see in colour, dream in colour and have a rich colour vocabulary which drives much of my photographic style (see Seeing in Black and White). It’s also an important part of how I work – colour can be a powerful “dimension” in the visualisation of information. The human eye and brain are remarkably good at processing and using colour signals, whether it’s a highlighted line of text on screen, or a flashing blue light in traffic.

Now I acknowledge that this isn’t universal. As a designer you have to cater for a significant proportion of users (about 8% of males) who have poorer colour vision, and especially in mobile systems there will be times when ambient lighting conditions reduce effective colour saturation to a point where it doesn’t work. The traditional way to deal with this is to combine colour with another signal, such as shape – green tick vs red cross, for example. Then each user can use the signal which works best for them.
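
As a trivial illustration of the principle (my own sketch, not any vendor’s guideline): give every status both a colour and a glyph, so that either cue on its own is enough to carry the meaning.

    // Hypothetical sketch: each status carries a colour AND an independent glyph,
    // so the signal survives poor lighting or reduced colour vision.
    public enum Status {
        OK("green", "✓"),
        WARNING("amber", "!"),
        ERROR("red", "✗");

        private final String colour;
        private final String glyph;

        Status(String colour, String glyph) {
            this.colour = colour;
            this.glyph = glyph;
        }

        // A renderer can use either cue, or both – neither is the sole carrier of meaning.
        public String colour() { return colour; }
        public String glyph()  { return glyph; }
    }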

Microsoft used to get this. Their software was frequently a model of usability, and exploited colour, shape and shading to both guide the user, and allow the user to better manage their data. Icons could be rapidly located by colour as much as by detail. Data items of a particular status would “leap out” from a forest of those without the status marking. Office 2003 introduced follow-up flags for both OneNote and Outlook, which proved to be a great way to identify and retrieve key items in large lists. These supported both colour and shape or text as “identifying dimensions”.

Then sometime in the late noughties, Microsoft lost their way. Office 2010 has abandoned colour as a navigational tool. Tools, icons and the dividers between sections of the screen are all subtle shades or pale pastels, making them very difficult to distinguish visually, particularly in poor lighting conditions. Icons are no longer clearly distinguishable. However, the worst regression is in respect of Outlook’s follow-up flags, which now actively disable the use of colour via a tyrannically imposed colour scheme consisting of “multiple shades of puce”, rendering them completely useless for their original purpose.

This rant had been brewing for some time as I try to get to grips with Office 2010 and its inexplicable abandonment of many well-established user interface standards at the cost of enormous frustration for long-standing users. What tipped me over the edge was the announcement last week of Microsoft’s new Windows logo. Gone are the cheerful primary colours, and the careful shading which made later versions pop out of the screen with real depth. In their place is a plain white cross on a muddy blue background. Useless!

Now I suppose there might be people who think that this reduced colour palette is somehow “cool” or “elegant”. They’re probably the same group who think that it’s appropriate to model fashion on anorexic teenagers rather than real women. In both cases they’ve clearly lost track of who their real customers are, who has to get real utility from their work.

I’m not against change, and I accept that high-resolution graphics allows more subtle designs than we were previously used to. However, this rush to abandon colour in user interfaces and branding robs us of an important dimension. We absolutely do have to make sure that designs are also usable for users and in conditions where colour may not work, but we must not throw away or disable powerful tools which have real value to the majority of us. Microsoft should know better.

Ten Ways to Make Your iPad Work Effectively With Windows

If you’re one of those people who uses loads of Apple products, and is thinking of proposing Steve Jobs for canonisation, then you may be happy with how your iPad works, but if you’re trying to make it work effectively in a Windows-based environment you may have found shortcomings with the “out of the box” solutions.

It is perfectly possible to make the iPad play nicely as part of a professional Windows-based environment, but you do have to be prepared to grab the bull by the horns, dump most of the built-in apps (which are almost all pretty useless), and take control of both file management and communications via partner applications on the PC. This article presents some of my hard-won tips and recommendations on how to do this and get productive work out of the iPad’s great hardware.

Enterprise Architecture Conference 2011 Day 3

Well the third day of EAC 2011 came and went. My talk went well. Despite the last minute scheduling change I got a decent audience, and once in front of real listeners managed to find my style and pace again. They seemed to appreciate it, but as none of the inveterate tweeters was in attendance I’ll have to wait for the feedback analysis to be sure.

This morning’s keynote was excellent; it’s just a shame that I had to leave early to set up for my own talk. It could have been subtitled “why ‘cloud’ means people trying to sell you stuff”, and was the most balanced discussion I have yet heard on cloud computing. The most interesting observation is that individual component reliability is very much subservient to scalability and “elasticity”, which has major implications for more critical applications.

The rest of the day’s presentations were a mixed bunch. Some were too academic, others very light on real content. The one exception was Mike Rosen talking about SOA case studies, which included both real successes and failures, and should be the yardstick for anyone looking to move to SOA.

One thing I have learned from this conference is a (arguably the) real purpose for Twitter. It’s a great way for a group engaged in a joint activity like this to have a shared background conversation. In many ways it’s the electronic reincarnation of the DeMarco/Lister red and green voting card system, but with wider and longer reach. It’s not without problems: it can be a distraction, some users can dominate with high volume, low value tweets and retweets, and Twitter’s search and the available clients (certainly on the iPad) are not optimised for hashtag-based operation. However, these are minor complaints.

The iPad makes a superb conference tool, and I was amazed by the number of them in use, for making notes, reviewing slides, and tweeting. Interestingly I think this trend will drive a move to standardise on PDF-format material: slides published this way worked very well, but some available only in PowerPoint format weren’t viewable.

My congratulations and thanks to the conference chairs and the IRM team for an excellent event. Time to start thinking about a topic for the next one…

- Posted using BlogPress from my iPad

Location: Falcon Rd, Wandsworth, United Kingdom

No Plan B

I don’t think the reason why the British travel infrastructure copes so badly with problems is actually down to a fundamental lack of capability or investment. The real problem is that the operators lack sufficient planning, and/or imagination, and/or flexibility to shift their services to alternative patterns better matched to changing circumstances. The only “plan B” seems to be “run what’s left of plan A and apologise”.

Take, for example, South West Trains, who run commuter services to the South West of London. There are two main lines out from Waterloo via Guildford and Woking, but also a number of parallel minor lines, like the secondary line to Guildford which runs past my house.

When North Surrey got a foot of snow for the first time in 30 years in February 2009, it was clear that no trains were going to run on any of these lines for a couple of days, but only a relatively short stretch of the lines was blocked. It was still possible, for example, to get from Surbiton (about 10 miles nearer to London than my home) to Waterloo.

I had to attend a course in London, and the roads were becoming passable, so I dug the car out and drove to Surbiton. It rapidly became clear that everyone else had had the same idea. How had SWT reacted? By running the same four commuter services an hour from Surbiton. These were, of course, enormously overcrowded and slow. What about the other trains which would, for example, have usually been running the express services carrying the rest of the traffic? These were nowhere to be seen, presumably sat in a siding near Waterloo. Would it have been beyond the wit of man to press some of these into use as additional shuttle services to carry the excess traffic from those stations which were accessible? Apparently so.

Last night, I got caught again. I got to Waterloo at 10:30 pm to see a blank indicator board. The cause of the trouble was signalling problems, in turn due to cable theft at Woking. Now I don’t blame the rail companies for that, and I hope the perpetrators are found, hung, drawn and transported to South Georgia, but I do think the train companies’ response is inadequate.

True to form, they had reverted to “what’s left of plan A”, running a tiny number of overcrowded and delayed services under manual signalling procedures. Now theoretically my line should not have been affected. Not only should I have been able to get home, but my line is perfectly capable of carrying some additional “relief” traffic, as it does when there is planned engineering work on the main lines. (About once a month the 8 commuter services per hour are joined by about 20 express and freight services, and when planned that seems to work fine.) With a bit of ingenuity you could even alert taxi drivers at the intermediate stops to the sudden need for their services, at profitable late night rates.

Is that what happened? I should coco. Instead not even the regular services to my home station appeared to be running. I ended up on one of the overcrowded trains to Surbiton, and finished my day with a £40 cab ride.

Why is this so difficult for the train companies to get right? In both of these cases there was no fundamental problem with the remaining infrastructure or rolling stock. In both cases they even have a model for the alternative schedule. For last night it’s in a file marked “Saturday service with engineering work at Woking”. Staff flexibility might be the problem, but that must be resolvable, maybe via higher overtime rates?

There’s also an architectural lesson here. I design computer systems and networks. My clients run national power networks. In both cases the customers expect those systems and networks to be resilient, and to cope with growing demand without wholesale replacement. It’s not always possible to justify dedicated “DR” capacity, so you have to get inventive with alternative configurations of the capacity you do have, and then run tests and introduce clever asset monitoring and management practices to make sure those configurations can be used safely.

If we can do it, why can’t the transport operators?

- Posted using BlogPress from my iPad

Location: Cobham, United Kingdom

Enterprise Architecture Conference

Halfway through, and this is shaping up to be the best EAC I have attended for a while.

I was umming and aahing about whether to attend yesterday’s seminar sessions, and couldn’t make up my mind which to join. In the end I made up my mind about the morning session while having a cup of coffee on the way, when I recognised one of the speakers, Lawrence Helm, as having given an excellent presentation a couple of years ago on NASA’s knowledge management problems. This time he and his colleague Robert Stauffer were talking about NASA’s adoption of Capability Modelling, and how they have put it to use supporting some very high level decisions about NASA’s future shape.

This was another stimulating session, and really benefitted from the extra space afforded by the half-day format. Lawrence and Robert actually ran out of time, which was probably a testament to the depth of the material and the discussions it engendered.

The principle of relating capabilities to strategic objectives was not new to me, although the NASA examples certainly were. What did surprise me was the level of detail required for capability definitions in that environment. For example, the launch capabilities relate specifically to certain target longitudes and temperature ranges, and could not be moved to a location outside those ranges (for example Kourou or Baikonur) without re-engineering the rocket platforms.

The afternoon session was also a bit random, as I got confused between Mike Rosen’s half-day seminar and his separate one-hour talk for which I had the slides. Not a problem: the half-day session on case study methods was very educational. The example of how Wells Fargo created a federated model to integrate their various systems under a common customer model was interesting, and plays nicely into my EAI talk tomorrow. Like a good sermon, I didn’t learn much new, but I felt thoroughly validated that Wells Fargo did what I would have recommended, and succeeded with it. We had a very robust discussion on the importance of stable service interfaces, so hopefully that will drum up some support for my talk.

You get a very good class of attendee at these sessions. Alec Sharp joined the NASA session, and John Zachman joined the afternoon session, although he didn’t participate much.

Thursday’s highlights have probably been the two keynotes: this morning on how different companies have developed different strategies to come through and out of the recession, and this afternoon on “how to think like a CEO” and get your messages across to senior managers. However, there was also an excellent talk this morning by David Tollow on how EA feeds management and planning of long term outsourcing deals, from the supplier’s viewpoint. Very relevant to many of us in the current day and age.

Just to make things interesting, Sally has asked me to swap slots with someone else tomorrow, so my talk which was carefully trimmed to the constraints of the last slot on Friday will now be at 10 am. This may or may not be a good thing.

Wish me luck!

- Posted using BlogPress from my iPad

Location: Portman Towers, Paddington, United Kingdom

Practical Enterprise Integration

I’m speaking at the IRM Enterprise Architecture Conference 2011, in London next week. My topic is “Practical Enterprise Integration: Realising the Benefits of a Strong Canonical Architecture”. In the paper I discuss the evolution of an EAI environment at National Grid, and how over time some key decisions on the underlying architecture have delivered significant benefits.

I’ve just uploaded the slides to my website. You can download them here.

Unfortunately, I’ve been put into a real “graveyard slot”, right at the end of business on the Friday afternoon. And I thought Sally, the conference chair, was my friend! If you are attending, I would very much appreciate your support. If not, I’d appreciate your thoughts on this topic, as always.

The Wrong Orientation?

In an odd confluence, multiple streams of activity have come together to convince me that current IS thinking may be suffering from a bad dose of “the wrong orientation”. My work on data modelling at National Grid, an excellent course on business process modelling, my frustrations with the iPad software architecture and finally some exposure to Oracle’s latest SOA tools have all convinced me that we need to get back to designing systems with an “object-action orientation”, rather than the regressive move back to the alternative.

 

A$$hole Driven Development and Other Anti-Patterns

During a project management meeting today, I was driven to look for a reference to “Document Driven Development”, a great anti-pattern developed a few years ago by the Agile crowd, in order to emphasise the importance of working solutions, not documents, as the goal of IT projects. I was in for a few surprises…

Oddly, although the wonderful “Waterfall 2006” web site still exists, I couldn’t find DDD on it. So I checked with Google and found a couple of references to non-ironic (as far as I can tell) papers on the subject. Yes, some people seem to think that document-driven development is a good idea! Now I might be prepared to concede this for applications where documents are themselves the key business objects (some legal processes, for example), but as far as I can see this isn’t what those papers were referring to. If that’s the case, they really haven’t understood…

What I did find, however, was a wonderful blog post from a few years ago with the excellent title “Asshole Driven Development”, in which Scott Berkun has collected a wide variety of development and project management anti-patterns. It takes a while to read through all the comments, but doing so is quite rewarding, if mainly as a form of therapy. At least you know you’re not alone.

The list is pretty comprehensive, but despite over 300 contributions, I couldn’t see my own bête noire. A lot of large corporate organisations now seem to follow a governance methodology I term IAKOM (the “It’s A Knock Out Method”), known on the continent as la Méthode Jeux Sans Frontières (MJSF). Those of a certain age will remember a series of hilarious television games in which relatively simple tasks (such as carrying a bucket of water) were rendered impossible by the imposition of progressive handicaps and obstacles (such as carrying the bucket up a greased slope against a rubber bungee while wearing clown shoes and being pelted with wet sponges).

Some IT governance is like that. Just when you think you might have a fair run at doing something, a new governance hurdle or document check is inserted into the process. It wouldn’t be so bad if it all made sense, but sometimes it feels almost capricious. Some organisations are more enlightened than others, but as a general industry trend it’s inescapable.

I don’t know what the answer is. If you do, let me know!

Red Roof Reflections

As my “photographic eye” develops, I find I’m noticing much more readily the colour of light, and how it can be modified by things both inside and outside the scene. This shot of St. Nicholas Abbey on Barbados is an interesting case in point. The porch has a flat roof, and that flat roof is obviously painted red to match the railings and guttering. We can’t see it directly, but its effects are very dramatic. I’ve boosted the colour saturation slightly to make this work on the web, but only slightly – the pink glow was immediately obvious as we looked back to take this shot.

St. Nicholas Abbey, despite its name, was never an Abbey, but a plantation house. It’s recently been revived, and sits at the hub of a busy farming and rum distilling business. It’s also an interesting example of the challenges of architectural re-use. It was built from a set of plans developed and used for a similar manor house in England. These were faithfully followed, including all the fireplaces and chimneys. In nearly 400 years Barbados has never had a day cold enough for any of the fires to be lit!

In Agile development, the mantra is that you don’t build features you don’t need, but I’ve rarely seen a discussion on what to do if those features come “free with the design”. Re-using an established design has significant benefits, particularly if the architectural effort comes at a significant cost (which was obviously the driver for the decision about St. Nicholas Abbey). Tailoring that design to omit features you don’t need will have a cost, and a risk that by doing so you break some other capability. For example, chimneys tend to be built very strongly, and often have an important structural role in a building. On the other hand, building features which won’t be used is also costly. If you can’t find exactly the right design pattern, you will have an interesting decision – whether to change it, or whether to follow it regardless.

Beauty is Only Skin Deep

I’m currently reading a book called “Beautiful Architecture”. This has at its core the concept that some software structures are inherently elegant, things of beauty as well as great function, like many of our greatest buildings.

The trouble is that for every St. Paul’s there must be a Bletchley Park – an architectural mish-mash which, while possibly important, successful or even revered, is inherently inelegant, or even downright ugly.

My analysis is that behind the glossy facade, the iPad software architecture has to be the best current example of “Ugly Architecture”.

In many ways it’s strongly reminiscent of PCs in the days of DOS, or maybe Windows 3.0, before the emergence of strong component-based architectures and unifying design standards in Windows 95 and NT.

The fundamental problem is the application-centric model, in which each application is a stand-alone combination of code and data, with very few shared services or components. This naturally leads each application developer to “do their own thing”, implementing separate, widely varying solutions for communications, document storage, printing support and so on. Apart from a token “open in another app…” supported by some applications, there’s effectively no cross-application linking, leading to massive duplication of functionality and data, and some significant functional limitations, for example the inability to directly open a URL embedded in a document.

Each application has its own data area, which may or may not interact with iTunes, with a PC or website via FTP, with websites via WebDAV, or with various cloud storage services. Data which should arguably be generally visible just isn’t – you can upload video files to the photos area, but they won’t be visible in the videos list. To test a variety of editors with a document you need to deliver a different copy of the document to each app.

Each application supports different models for document exchange, and different cloud stores, so a user potentially has to have multiple separate cloud accounts. While “public” cloud storage may be fine for individuals’ personal data (although individuals may still have valid security and privacy concerns), it is a real concern if used for corporate information. In corporate contexts, connectivity, security, copyright, access rights, service levels, data protection and privacy obligations, regulatory and legal constraints may all be compromised or complicated by cloud use, and become significant issues.

There’s also an interesting security implication to this which you don’t often see discussed. Because there’s no accessible file system, and no extensibility model for the application filing model, there’s nowhere for anti-virus solutions to run, and as of today iPhones and iPads are effectively unprotected devices. There are probably numerous iPads in the wild acting as festering reservoirs of infected documents. Those who are security conscious can’t be happy about this, and I know that many corporate security departments are making moves to ban connectivity to corporate services for that reason.

Even if an application interacts with the host PC more directly, you get multiple copies of documents, typically the original, a copy in iTunes and one on the device, with no mechanism to synchronise them or compare version information. Apple’s own applications such as Pages are even worse, with a completely separate iTunes space from their own “My Documents” spaces, and an additional copy step in each direction. This is a version control and management nightmare!

Why could the iPad not support a simple shared filing area with proper two-way synchronisation to the host PC, as the Pocket PC has had from day 1?

The communications architecture is a similar mess. The only application which can communicate with the host PC over USB is iTunes, but iTunes can’t use WiFi. All other apps have to use WiFi, but there’s no real shared communications infrastructure, so the result is another diverse and fragmented “roll your own” free-for-all. The most obvious way for a companion device to talk to its host PC, Bluetooth, isn’t supported at all!

The WiFi only design works fine in the confines of, say, a small home office. Elsewhere it’s problematic at best. Paid WiFi (e.g. in a hotel) is typically limited to a single device, so you’ll end up paying twice if you want to connect both devices. Corporate WiFi systems are typically similar, and you may not be allowed to connect the iPad directly. Even if you do get connectivity, these networks are often set up to prevent routing between devices, as a security measure, so that’s that, then.

The alternative is to set up either the PC or iPad as a hot-spot itself. On the iPad, this is only possible on jailbroken devices. On the PC, it can be complicated and opens up potential security issues. Neither is ideal.

Apple’s policies effectively put software development back in the Stone Age, in the particular sense that “monolithic” means “single lump of rock”. Each application has to be “stand alone”, implementing many things which should arguably be shared. For example, each file management application implements its own storage management dialogs, its own comms model, its own browser, its own PDF and Word file viewers, each with their own subset of functionality, dialogs and gesture support, and so forth. There simply doesn’t seem to be any real concept of shared components or companion applications. Let’s be clear: I’m not criticising the application developers for trying their best to provide a comprehensive solution – my criticism is directed squarely at the crass architecture through which Apple force such an approach.

Even those applications which implement the “open in another app…” capability to open documents in other viewers suffer two common problems: you frequently have to open the document natively before you can send it elsewhere, and the act of doing so usually creates yet another copy of the document to manage separately! :(

Ironically, where there are shared components they impose significant constraints and limitations. The most obvious is the keyboard. Essentially there’s only one way to get text directly into any application, and that’s to use the on-screen keyboard configured exactly as the application developer decides. It’s “my way or the highway”. This is a dramatic contrast with the Microsoft world, where even a humble 2003-era Pocket PC supports not only a variety of built-in and third-party on-screen keyboards, but also handwriting recognition, character recognition (like the Palm Pilot), Swype, and even limited voice recognition. Importantly, these are all user-selectable at any time text input is required. On the iPad you can buy an app with a different keyboard layout, or dictation capability, but you have to cut and paste the raw text into your target application and typically reformat it to suit. This is simply primitive.

What makes all this worse is that the iPad application approval/delivery model makes it unlikely that anyone will innovate a better solution. No approved application can have legal access to another app’s or central iTunes data. Without approval, you won’t appear in the App Store or run on non-jailbroken devices, so Apple simply impose their will, whether good or bad.

OK. I am starting to love my iPad, but the software architect within me is incredibly frustrated. This great hardware is hamstrung by a clumsy, unimaginative, software architecture and oppressive centralist control by those who worship according to The Book of Jobs. It could be so much better.

</rant>

Some Good News

I’ve just had a bit of excellent news – my submission for the 2011 Enterprise Architecture Conference in London has been accepted. The provisional title is “Practical Enterprise Integration – Realising the Benefits of a Strong Canonical Architecture” and I’m going to tell the story of the evolution and benefits of a strong Enterprise Integration Architecture at National Grid with which I’ve been closely involved over several years.

Interestingly, a very similar submission last year didn’t make the cut. Whether the change is due to an increase in the quality of my submission, or a decrease in that of the competition, only time will tell… :)

The Half Arsed Agile Manifesto

If you’re wondering why agile methods don’t work in your organisation, look no further than this. Very funny, but scarily accurate.

In the words of the Tao of Lao-Tsu, “If you want to control something, you must first let it go free”. Big organisations just don’t get this, because too many people want to hold onto their favourite control mechanism. Agility is a culture, and trying to do it inside a culture of formal control is very difficult. Occasionally you can succeed in spite of the controllers, and that’s a real achievement.

21st Century Schizoid Man

My good friend and sometime manager, Mike Rawlins, has just started a new blog ruminating on leadership. In his first post, he discusses the question of how to decide what to do, to “do the right thing”.

Now I’m not sure whether his guidance on decision making process is generic, or whether that process depends on your organisational position and role in determining “the right thing”.  I don’t know whether the key difference in our perspectives is between leadership as a manager versus leadership as an influencer, or the difference between managerial and technical leadership, or the difference between synthesising solutions and deciding which to adopt, but Mike’s article portrays a very different perspective to mine.

Mike portrays as central the ability to focus on key issues, and to exclude those which are “not relevant”.

In my experience as an architect and technical leader, I spend a lot of time understanding and analysing the different forces on a problem. These design forces may be technical, or human: financial, commercial or political. The challenge is to find a solution which best balances all the design forces, which if possible satisfies the requirements of all stakeholders. It is usually wrong and ultimately counter-productive to simply ignore some of the stakeholders or requirements as “less important” – any stakeholder (and by stakeholders I mean all those involved, not just senior managers) can derail a project if not happy.

Where design forces are either aligned or orthogonal, there is usually a “sweet spot” which strikes an acceptable balance. The problem effectively becomes one of performing a multi-dimensional linear analysis, and then articulating the solution.

However, sometimes the forces act in direct opposition. A good example, currently personally relevant, is system security, where requirements for broad, easy access directly conflict with those for high security. In these cases the architect has to draw heavily on his skills in diplomacy, and invest a lot of time understanding stakeholder positions. One common problem is “requirements” expressed as solutions, which usually hide an underlying concern which, once understood, can be met in many ways.

In cases of diametrically opposed requirements, there are usually three options:

  1. Compromise – find an intermediate position acceptable to both. This may work, but it may be unacceptable to both, or it may fatally compromise the architecture.
  2. Allow one requirement to dominate. This has to be a senior level business decision. As an architect, you then have to be sensitive to whether the outcome is genuinely accepted and viable, or whether suppressing the other requirements will cause the solution to fail.
  3. Reformulate the problem to remove or reduce the conflict. In the security example the architect may come up with a cunning partitioning of the system which allows access to different elements under different security rules.

Of course, you can’t resolve all the problems at once – that way lies madness. An architect uses techniques like layered or modular structures, and multiple views of the architecture to “separate concerns”. These are powerful tools to manage the problem’s complexity.

It’s also important to remember that the architecture, and its resolution of the various design forces (i.e. how it meets various stakeholder needs) have to be communicated to many who are not technical experts. The technical leader must take much of this responsibility. I have had great success with single-topic briefing papers, which describe aspects like security in business terms, and which are short and focused enough to encourage the readers to also consider their concerns separately.

One area where I do agree with Mike is the need to listen to the voice inside, and carry decisions through with integrity. For an architect, the question is whether the architecture is elegant, and will deliver an adequately efficient, reliable and flexible solution. If your internal answer to this is not an honest “yes”, you need to understand why not, and decide whether you and your users can live with the compromises.

And finally, the architect must protect the integrity of the solution against the slings and arrows of outrageous projects. Monitor in particular those design aspects which reflect compromises between design forces, because they will inevitably come under renewed pressure over time. You have to not only do the right thing, but ensure it is done right.

Non-Sequitur

About the weird title: Mike is attempting to create his blog based largely on 1970s Prog Rock references. As a tribute to such an excellent idea, I feel compelled to join in (at least on this occasion)!

A Parable

In 2008, Bibble 4, supported by its community of plug-ins, was probably the best RAW convertor available – it was certainly my favourite. Then in late 2008 Bibble abandoned support for it, although Bibble 5 was not yet available (for another year, as it turned out). I wrote a short parable to express my frustration at this inexplicable decision.

Although mainly about photography, I think there are lessons for anyone contemplating a disruptive, rewrite-from-scratch change to a software product – beware!

A Shortage of Analysts?

I’ve just spent two days at the 2008 Enterprise Architecture Conference in London. It was a very high quality event, with a range of speakers covering topics from pragmatic analysis techniques to how to manage knowledge through the life of NASA’s Mars programme, which spans more than any single working lifetime.

Overall there was much less focus on technology (read SOA and modelling tools) this year, and a vigorous and renewed focus on business alignment and business architecture, which, if we can deliver, potentially places architecture where it should be, as the business’s agent.

But there’s a problem. Good business analysis is fundamental to this, yet several delegates bemoaned the current lack of good business analysts. User organisations often struggle to articulate and abstract their needs, and this feeds into all downstream processes. Modelled requirements are an increasing rarity, poorly substituted by imprecise verbal statements in Word or PowerPoint.

The problem is, of course, not unique to analysts, and may have a common cause with the equal lack of architects. Senior architects and analysts both tend to have several big birthdays under their belts, and many learned their trade as developers, gaining both practical method skills and the experience of turning ideas into working code. (The majority of exceptions have other “making it work” experience, such as building networks or running data centres.)

But in the current world of ERP packages and large-scale outsourcing, many organisations no longer build anything themselves. The live classroom has been thrown away.

I have worked with a number of good, keen young analysts, but most work for large supplier companies who still have both well-funded training programmes and the breadth of work to build experience and a broad skill set. These guys and girls can do a good job, but at the risk of higher costs and potential conflicts of interest.

We already know that this may reduce organisations’ ability to ensure the right solution to their needs, or to assure its quality. Recent observations suggest that organisations who forgo getting their hands dirty in IT will also find it increasingly difficult to create a clear, concise and structured statement of those needs themselves.

Paradigm Shift – Clear Memory Now!

I’ve been musing lately on why we in IT insist on forgetting so much valuable knowledge. I don’t know whether it’s because of our youth-obsessed culture and our focus on the newest and best, because of our tendency to prioritise on-the-job over traditional learning, or whether there’s simply too much in the “architect’s book of knowledge” (ABOK), and we all have to focus on the new to keep up.

I explore two very different examples: the value of understanding RS232 in this 3G+ world, and some recent discussions on service reliability, both of which can be resolved using some quite old knowledge…. (Read More…)

The Tevye Scale of Approval

The accept/reject assessments of the Sarbanes-Oxley world are far too binary, as they don’t allow an architect to record his true feelings about a piece of work. I have therefore decided that in future I will record my assessments using what I have named the “Tevye Scale of Approval”.

Enterprise Architecture Conference 2006 – My Paper

I’ve just spent three enjoyable days at the 2006 Enterprise Architecture Conference in London. IRM did their usual excellent job of making it run like clockwork, and my good friend Sally Bean helped them develop an interesting and varied programme. To my mind the best speakers were Jeff Scott, and Chris Wilson of BP. Another encouraging sign was the presence of a great many International delegates.

I presented a paper on Agile Architecture. If you regularly read my work you’ll recognise many of the ideas, but I’ve managed to bring them all together for the first time. You can download my slides and script here.

What was very interesting was how the thrust of the material has changed from a few years ago. No-one was claiming that a given framework, process or toolset can solve EA problems. At the risk of being uncharitable I thought John Zachman’s ideas sounded very tired, and there was almost no mention of alternative frameworks such as TOGAF. I may have self-selected by not attending any vendor sessions, but there was also no promotion of tools or technology. A common view was that EA, SOA and many supporting concepts are currently entering the trough of the hype cycle.

Instead the focus was largely on people-related problems and approaches. The labels varied, but several speakers introduced ideas familiar to agile architects. Maybe we’re doing something right after all.

You Need Architects…

Just in case you haven’t already seen it….

Why you need architects, in song and dance.

Enjoy!

Best Practices in Test Automation

I am looking, for one of my clients, into how costs can be reduced, or quality increased, by increasing the extent to which testing is automated.

As a first step, I am trying to develop a comprehensive list of test automation “best practices”, grouped roughly by life-cycle (or iteration) stage. I’m trying to find practices which are broadly independent of specific methods and technologies, although obviously tool support may vary depending on the chosen technology.

This article is my first draft of such a list.

I’d welcome suggestions from my readers if you think there are any omissions (or if you substantially disagree with anything I’ve included).

Thanks

Andrew

The Agile Architect at EAC 2006

If anyone is interested in hearing more about my views on architecture, and how agile methods apply to the work of the architect, please sign up for the 2006 Enterprise Architecture Conference in London in June.

I’m presenting a paper entitled “The Agile Architect”. This focuses on both how agile projects can have a strong architecture, and how architects can learn and benefit from agile approaches. I take a rather different approach to some recent papers with a similar title (e.g. at this week’s otherwise excellent Microsoft Architecture Insight conference), which suggest that agile projects can “do away with the architect”.

I look forward to seeing you there.

Andrew

Who Are the Architects?

There’s a perennial discussion in architecture forums like the WWISA about the role of the architect, and the discussion regularly degenerates into a debate between the broad and narrow views of what the architect does.

But I’m not sure that’s the key question. I think the right question is “Who are the architects?”

Somehow, a number of tasks must be discharged, but how varies from project to project. In the last year I’ve had a modest building project which tells an interesting story about how different people contribute to “the architecture”. Read more here…

An Agile Architecture War Story

I don’t really believe in a common architectural process. As the author of a successful project management book, and recent articles on data architecture methods, I probably shouldn’t say this, but to paraphrase a famous quote, “When I hear ‘process’, I reach for my gun!”

This is a story of a project I worked upon which followed an informal, agile process, but delivered a successful architecture. Hopefully it serves to support my assertion that agile can have an architecture, but needs an agile architect.

Modelling Data Mapping – A Challenge

Almost all integration projects contain one or more transformations (sometimes called “mappings”) between two different structures holding equivalent data (for example the order tables in the database, and the order XML message). We know how to model the individual static data structures in various ways, but the most common approach is to represent each by a UML class model, and there are established conventions for how to do this for different data sources.

However, UML doesn’t help when it comes to the transformations themselves, and typically the detail has to be captured either in code, or a proprietary format. Most good integration tools provide some sort of “visual mapping tool”, where the developer drags and drops to create links between representations of the two structures, usually imported directly from their physical schemas. Here’s an example using SeeBeyond. Altova provide a good stand-alone data mapping tool called Mapforce – here’s an example showing it in use. The problem is that these tools work directly with the physical structure, and don’t export the mapping information in a reusable format, so that information is completely disconnected from the UML analysis or design models.

I have experimented with trying to represent mapping information in a UML model, but so far without much success. The best solution I’ve found so far is to use some sort of “pseudo code” (it could be OCL, pseudo-Java, pseudo-VB or anything similar). For example, we could easily annotate the model with code fragments such as:

Database.order_table.order_no = Message.Header.OrderNo

(where each element refers to a UML Package.Class.Attribute combination).

The problems are that it’s not clear where to put this annotation, most UML modelling tools won’t help generate it, and there’s no graphical representation. Ultimately, writing pseudo-code like this is probably not much better than abandoning the model and moving straight to using your integration tool.
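
One halfway house I have toyed with – again just a sketch with made-up class names, not a recommendation of any particular tool – is to hold the mapping itself as plain data, expressed in the same Package.Class.Attribute terms as the UML models, and then generate the pseudo-code annotations (or documentation) from it.

    import java.util.List;

    // Hypothetical sketch: the mapping captured as data rather than buried in a tool.
    // Each rule pairs a source path with a target path (both in Package.Class.Attribute
    // form, matching the UML models), plus an optional transformation note.
    public class MappingSpec {

        public record Rule(String sourcePath, String targetPath, String transform) {}

        private final List<Rule> rules;

        public MappingSpec(List<Rule> rules) {
            this.rules = List.copyOf(rules);
        }

        // Emit the rules as pseudo-code of the kind shown above, which could be pasted
        // into model notes or a generated design document.
        public String asPseudoCode() {
            StringBuilder sb = new StringBuilder();
            for (Rule r : rules) {
                sb.append(r.targetPath()).append(" = ").append(r.sourcePath());
                if (!r.transform().isEmpty()) {
                    sb.append("   // ").append(r.transform());
                }
                sb.append(System.lineSeparator());
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            MappingSpec spec = new MappingSpec(List.of(
                    new Rule("Message.Header.OrderNo", "Database.order_table.order_no", ""),
                    new Rule("Message.Header.OrderDate", "Database.order_table.order_date",
                             "convert ISO 8601 string to DATE")));
            System.out.print(spec.asPseudoCode());
        }
    }

It doesn’t solve the tool-support or graphical-representation problems, but at least the mapping is held once, in model terms, rather than only inside a proprietary mapper.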

My question is: does anybody know a better way? Has anybody found a good way of representing mapping information in UML? And if so, is there any good tool support?

If you know, please send me a message.

Metropolis – Where Do You Want To Live Today?

There’s been a lot of talk in recent years about a “city planning” metaphor for Enterprise Architecture development. Pat Helland’s article “Metropolis” in the Microsoft Architecture Journal is a very good example (see my post on this for some key quotes).

While the metaphor might still be valid, some people are beginning to question how far it should be taken. Helland’s article, like others before it, implies that “good” EA looks rather like a medium-sized modern American town, complete with relatively standard services, civic buildings and commercial venues. In an answer to the original “Metropolis” article Richard Veryard and Philip Boxer have published “Metropolis and SOA Governance” which challenges several of Helland’s assumptions.

I think that maybe we should extend the metaphor by thinking about cities, or Enterprise Architectures, as very diverse entities. What sort of “city” do you live in? To what extent is it planned? What is the vision, and do the citizens share in it? Does the EA resemble a nice neat midwest town, a dark, brooding Gotham City, a glass and steel Utopia, a federation of small towns with lots of empty space between them, a medieval walled town, or a wartime mid-European ghetto?

And the metaphor can be taken further. Do you want to promote “infill development”, closing up functional gaps, or do you want to keep clear separation between the various zones? Do you want the shared services to be clearly visible, as they are in modern, purpose-built towns, or hidden beneath a facade which looks much older or simpler? Do you expect to eventually knock down and rebuild older “legacy” zones, or do you want to preserve them with the minimum of change (a common requirement for our valuable historic buildings)? Do you want to accommodate the small hardware shop (read the small bespoke system) as well as the new DIY superstore (the ERP package)?

Finally, remember that it is extremely rare for a city to be truly planned and designed from scratch. You usually start with something established. Even if the city has been flattened by a bomb, you’ll have to observe land rights (this is what stopped Christopher Wren and Charles II realising their grand design after the Fire of London). This is equally true of Enterprise Architectures.

The city planning metaphor is a powerful one, but its true power may come if we use it to explore problems as well as utopian ideals.

Review – Enterprise Integration Patterns

I’ve just posted my review of Gregor Hohpe and Bobby Woolf’s excellent book on Enterprise Integration using messaging, “Enterprise Integration Patterns”. Overall it’s an excellent book, and will probably become a “bible” for those involved in the high-level design of integration solutions. To find out more, please read my review.

Metropolis – a Metaphor for IT Maturity

I’ve just read an excellent paper by Pat Helland of Microsoft, in which he likens the development of cities and manufacturing in the 19th century to the development of systems and business models now. His conclusion – IT at the moment is about at the same stage as America in the 1880s, when they were just starting to turn the Wild West into an industrialised nation! 

Three short quotes from Helland’s conclusions bear repeating directly. On heterogeneity he says:   

Remember that heterogeneity happens. Unless you have a very simple application portfolio, shared services will not be achieved by trying to put all of your applications on one version of one platform. Even if you could, the next merger would change that! Rather, you have to design for interoperability and integration across platforms. This is the force that is driving the industry-wide work in service-oriented architectures.

He extends the popular “city planning” metaphor to IT investment:   

IT investment is a balance of funding the sacred, protecting historic monuments, and allocating spending between infrastructure and business opportunity. Striking this balance is a key facet in effective governance, and in realizing the potential of IT in your organization.

And finally, those who seek to maintain control of their enterprise architecture through heavy governance would be well advised to note:

You have to maintain a light hand. It is counterproductive to try to dictate what happens in every structure in town, what color shirts are made, and how much is charged for soap. You have to embrace the semi-autonomous approach to governance that is characteristic of our cities, and allow the process owners to optimize and achieve efficiencies with as few constraints as possible.

Cirrus Minor – A New Architecture Site

Arnon Rotem-Gal-Oz has set up an interesting new site/blog dedicated to software architecture. Of particular note, he’s trying to put some detail on the architecture “process”, which is often neglected as a single box on the development process picture. His approach goes by the name SPAMMED: catchy, but it might cause the odd problem with email filters :)

Domain-Specific Languages

There seems to be quite a lot of activity on the “Domain Specific Language” front at the moment. Martin Fowler published “Language Workbenches: The Killer-App for Domain Specific Languages?”, in which he concludes that the common programming pattern of setting up repeating data structures, via either very similar lines of code or an external configuration file, is actually a DSL. He also republished a paper by Dave Thomas entitled “Design to Accommodate Change” on the related topic of table-driven programming.
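
To make that concrete, here is a minimal sketch of the kind of repetitive set-up code Martin has in mind. The RecordSpec class and its methods are my own invention, but the shape of the calling code is the point: a run of near-identical calls describing a fixed-width record layout reads like a little declarative language.

    import java.util.ArrayList;
    import java.util.List;

    // A hypothetical specification class, invented purely for illustration.
    public final class RecordSpec {

        private final List<String> fieldNames = new ArrayList<String>();
        private final List<Integer> fieldLengths = new ArrayList<Integer>();

        // Each call adds one fixed-width field to the record layout.
        public RecordSpec field(String name, int length) {
            fieldNames.add(name);
            fieldLengths.add(length);
            return this;
        }

        // The run of similar calls below is effectively a small declarative language.
        public static RecordSpec customerRecord() {
            return new RecordSpec()
                    .field("customerId", 8)
                    .field("surname", 20)
                    .field("forename", 20)
                    .field("postcode", 8);
        }
    }

Exactly the same information could equally live in an external XML or properties file; either way you have a small special-purpose language, which is Fowler’s point.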

However, Martin’s essay goes beyond common programming and data techniques to look at the development of specialist tools which he calls “Language Workbenches”. I’m not completely convinced that we need these in the world of XML and XSD. If you have a defined schema for your XML-based DSL (and aren’t all the many *ML languages just different DSLs?) then any schema-sensitive editor will provide you with good design and editing support. The leading IDEs (e.g. Visual Studio) all have such a tool built into their core capabilities. Surely we now have a sufficiently sophisticated set of XML-based tools and standards that we have an opportunity to exploit synergies rather than re-inventing the wheel?

The Fear Premium

In an interesting echo of my last piece (Why Software Isn’t Like Building Construction), Scott Ambler has analysed bureaucratic processes as a response to management fear about what can go wrong in software development. His conclusion is that these processes only give the illusion of addressing the underlying fear. His article is well worth reading.

Why Software Isn’t Like Building Construction

Many software development and management methods are founded on a basic assumption – that constructing software is rather like building a bridge or a house. Once we’ve “done the design”, actually generating the software ought to be a completely predictable, relatively low-skilled process. However, four decades of failure to achieve this vision might suggest that we should revisit the assumption.

In a paper entitled “The New Methodology” Martin Fowler, the guru of object-oriented development, suggests a couple of reasons why this might be.

My article answers Martin’s, suggesting a couple of other considerations and asking whether we have to completely abandon the physical construction analogy as a result.

Application Development Strategies

I recently attended a day of the Butler Group “Application Development Strategies” Symposium. I’ve just posted a short report on some of the more interesting discussions and presentations.

Almost all of the presentations shared a reminder that we still have a “software crisis” – the vast majority of software projects fail to deliver to their original targets and estimates. The presentations suggested three independent, but not exclusive, approaches to try and resolve the problem:

  • Adopting better, more agile processes to address fundamental weaknesses in “waterfall” processes,
  • Adopting better tools and techniques to improve development productivity and the integration of the application life-cycle,
  • Enforcing a stronger “enterprise architecture” framework for development.

This last one was surprising, with several papers echoing my view that a strong architecture is essential if agile development is to succeed on a large scale or in complex and critical applications.

There was also surprising agreement on things which won’t solve the problem:

  • No-one was promising a technical or product “silver bullet”. This includes web services!
  • No-one was suggesting that we should just “try harder” with old-fashioned tools and processes.
  • There’s no “one size fits all” solution. For example it’s a mistake to force a formal, high-ceremony process onto small business systems developments.
  • Excessive technical standardisation is also not the answer. The drawbacks include “lowest common denominator” technical solutions and inflated costs where the standard solution is “overkill”.

Read my report for more details.

Growing a Language

I’ve just read a wonderful paper by Guy L Steele, “Growing a Language”. He argues strongly that programming languages must be “small”, but able to grow. Such a language will have a relatively simple structure, syntactic rules, and a small core vocabulary. However it must also be able to “grow”, integrating new data types and functions, in the form of user code accessed in exactly the same way as core functions.
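
As a toy illustration of the principle (my example, not Steele’s): the Complex class below is ordinary user code, yet client code calls it in exactly the same style as a "core" library type such as BigInteger.

    import java.math.BigInteger;

    // A user-defined numeric type, deliberately written in the same style as the
    // core library's arbitrary-precision numbers. Example only.
    public final class Complex {

        private final double re;
        private final double im;

        public Complex(double re, double im) {
            this.re = re;
            this.im = im;
        }

        public Complex add(Complex other) {
            return new Complex(re + other.re, im + other.im);
        }

        public Complex multiply(Complex other) {
            return new Complex(re * other.re - im * other.im,
                               re * other.im + im * other.re);
        }

        @Override
        public String toString() {
            return re + "+" + im + "i";
        }

        public static void main(String[] args) {
            // The calling convention is identical whether the type is core or home-grown.
            BigInteger big = BigInteger.valueOf(6).multiply(BigInteger.valueOf(7));
            Complex grown = new Complex(1, 2).multiply(new Complex(3, 4));
            System.out.println(big + " and " + grown);
        }
    }

Of course, in Java the home-grown type still cannot share the operator syntax that int and double enjoy, which is exactly the kind of extensibility Steele argues the language should be able to grow.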

Steele’s argument is highlighted by the paper’s unique style – he uses a small but growing language himself. He writes using only words of one syllable, gradually adding other words he has defined in these terms.

The paper works at many levels. As well as the fascinating intellectual exercise in style, it makes a strong case for:

  • simple but extensible programming languages,
  • improving the extensibility of Java, rather than its core vocabulary,
  • an agile community process for developing languages, rather than up-front design of great monoliths,
  • the communication power of simple language and words.

Steele exhorts us to think about extensibility mechanisms – if we get these right then the core functionality can always grow. And by example, he encourages us to use simple, direct language and benefit from its discipline. On both accounts I agree wholeheartedly.

Enterprise Integration Patterns

Integration, like other design activities, can benefit from sharing ideas and proven strategies in the form of patterns. An excellent starting point is Gregor Hohpe and Bobby Woolf’s Enterprise Integration Patterns website and book.

In my recent work I’ve discovered a few patterns of my own, and I’ve started a page to document them.

First up is a pattern entitled “Change Indicator”. You may have a legacy system with an EAI adaptor, or a similar source of messages, which reports on the current state of key business objects. The messages will typically tell you when something has changed, but not necessarily what has changed – and the latter may be important to downstream systems. This pattern shows how to use EAI elements to add this information in a way which is totally transparent to the legacy system and existing users of the EAI scheme.
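
As a flavour of the mechanism (a sketch only; the pattern page describes it properly in terms of standard EAI building blocks, and the names below are invented): an enricher sitting in the message flow caches the last state seen for each business key, and stamps the outgoing message with the names of the fields that have changed. The legacy system and its existing consumers are untouched.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of a change-indicator enricher. All class and field names invented.
    public final class ChangeIndicatorEnricher {

        // Last state seen, keyed by business key (e.g. customer number).
        private final Map<String, Map<String, String>> lastSeen =
                new HashMap<String, Map<String, String>>();

        /** Returns the names of fields whose values differ from the previous message
            for the same key, and remembers the new state for next time. */
        public List<String> changedFields(String businessKey, Map<String, String> current) {
            Map<String, String> previous = lastSeen.get(businessKey);
            List<String> changed = new ArrayList<String>();
            if (previous != null) {
                for (Map.Entry<String, String> field : current.entrySet()) {
                    String before = previous.get(field.getKey());
                    if (before == null || !before.equals(field.getValue())) {
                        changed.add(field.getKey());
                    }
                }
            }
            lastSeen.put(businessKey, new HashMap<String, String>(current));
            return changed;  // an EAI enricher would add this list to the message header
        }
    }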

I Told You So

In Extensibility Points, Gregor Hohpe describes architecture as a guessing game, trying to understand future changes to a business, and designing a system to cater for those changes. He’s written a good article on types of extensibility and where they can be applied.

But Gregor’s article doesn’t really touch on understanding the dynamics of business change, and how these should dictate the form of architectural extensibility. Neither does he discuss the problem of being an architect who understands what flexibility is required, but being unable to get either the customers or the suppliers to agree. I seem to spend a lot of my time saying “I told you so” when a problem I predicted occurs – this is frustrating, even if I’m being proven right.

If you want to understand more about the dynamics of change, you might look at my previous postings on the subject:

Architects – Masters of Order and Unorder

Business Flexibility

Strategies for Flexibility

Web Service Challenges

In a recent article, Gregor Hohpe asks “Is SOA Like Drunk Driving?” In our attempts to address the shortcomings of component-based development have we “swung too far” and introduced new problems?

One recent experience suggests to me that we may well be at risk of this. Read about my first experience of Web Service development and judge for yourself: have we thrown the baby out with the bath-water?

Enterprise Architecture Design and the Integrated Architecture Framework

I’ve recently been catching up on the Microsoft Architecture Journal. This is an occasional MS publication, which can be downloaded from the .NET Architecture Center. It’s got a lot of good articles, with a software architecture and process focus.

Of particular interest, there’s an article in the first edition by Cap Gemini on “Enterprise Architecture Design and the Integrated Architecture Framework”. Obviously every consultancy has their own architecture framework, and they all share some features, but what makes this one a bit different is the strong focus on the contextual (business context), conceptual (vision) and logical (solution-independent capability) aspects, with technical details relegated to the bottom, physical layer. It also has one of the best discussions I’ve seen on the relationship and differences between enterprise and project architectures.

Business Flexibility

Inspired by an article on Richard Veryard’s SOAPBox Blog, this piece considers the business equivalents of expansion capabilities like the spare slots in a desktop PC.

Architects – Masters of Order and Unorder?

Do you work in an ordered environment, where things follow rules in a nice, predictable way? Or does your environment exhibit “unorder”, with characteristics of complex or chaotic behaviour? If the latter, you need to read this paper to understand how your domain may differ from others, and the appropriate techniques to use as an architect, analyst or designer.

Domain-Specific Modelling Techniques

Sometimes formalised modelling techniques like UML are not the best solution. Microsoft have recently started talking about “Domain-Specific Languages”, specialised modelling techniques for specific tasks. My Domain-Specific Modelling Techniques page outlines some useful DSLs I have developed or discovered.

Architecture, Design and Engineering

Is IT architecture about the “hidden bits of plumbing”, or should it focus on the “visible design” delivered to users, developers and other stakeholders? Read this paper, and consider your view.

Architects Anonymous

Is Architecture an Addiction?

This rather tongue-in-cheek piece was inspired by a “group therapy” session at the IRM/Meta Group Enterprise Architecture Conference 2004.

Modeling XML Applications with UML

Like many web-related technologies XML and its many derivatives have evolved much more quickly than the support from traditional modelling and development tools. As a result many developers creating XML-based applications are doing so with the crudest of tools, and find it very difficult to either exchange ideas with more traditional developers, or to benefit from the strengths of more powerful tools and modelling approaches. This book sets out to address that issue, and it does an excellent job.

At the same time, the book provides a valuable introduction to a range of XML and e-Business technologies for those more familiar with traditional approaches. I found it answered a lot of questions I had about XML which had not been addressed by reading more typical “how to” books, so this book bridges the divide both ways.

Modelling an Enterprise Data Architecture

Unlike the simplistic models in books and training courses, a real enterprise has a very complicated data architecture. Most of the data will be held in large legacy or package systems, for which the details of data structure may be unknown. Other data will be held in spreadsheets and personal databases (such as Microsoft Access), and may be invisible to the IT department or senior business data administrators. Some key data may reside in external systems maintained by service providers or business partners. To manage this you need powerful, simple, but effective models of the data structure from an enterprise viewpoint – a set of models known as the “Enterprise Data Architecture.”

This article, co-written by Richard Wiggins and originally published in the Rational Edge in February 2003, describes a new approach, based on UML, which meets the real requirements of modelling the Enterprise Data Architecture.

Combining Risk Factors

In “Waltzing with Bears” Tom DeMarco and Tim Lister introduce the very useful concept of the “Uncertainty Diagram”, the probability distribution for project metrics such as delivery date, expenditure or benefit delivery. This is used, for example, to assess the likelihood of delay from a given risk.

However, they rely entirely on Monte-Carlo simulation. I believe that where the curve is defined by, or can be approximated by, a few discrete points, a relatively simple analytical solution can then be used in place of simulation.
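
For a flavour of what I mean (a sketch only, with invented figures, and assuming the risk factors are independent): if each risk’s impact is expressed as a handful of discrete points, delay in weeks against probability, then the combined delay distribution is a straightforward convolution of the two.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the analytical alternative to simulation, assuming the risks are
    // independent. The figures in main() are invented for illustration.
    public final class CombineRisks {

        /** Convolves two discrete delay distributions (delay in weeks -> probability). */
        public static Map<Integer, Double> combine(Map<Integer, Double> a, Map<Integer, Double> b) {
            Map<Integer, Double> result = new HashMap<Integer, Double>();
            for (Map.Entry<Integer, Double> ea : a.entrySet()) {
                for (Map.Entry<Integer, Double> eb : b.entrySet()) {
                    int totalDelay = ea.getKey() + eb.getKey();
                    double p = ea.getValue() * eb.getValue();
                    Double existing = result.get(totalDelay);
                    result.put(totalDelay, existing == null ? p : existing + p);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            Map<Integer, Double> riskA = new HashMap<Integer, Double>();
            riskA.put(0, 0.6);   // 60% chance risk A costs nothing
            riskA.put(4, 0.4);   // 40% chance it costs four weeks
            Map<Integer, Double> riskB = new HashMap<Integer, Double>();
            riskB.put(0, 0.7);
            riskB.put(2, 0.3);
            // e.g. P(total delay = 4 weeks) = 0.4 * 0.7 = 0.28
            System.out.println(combine(riskA, riskB));
        }
    }

With the figures above the result is a 42% chance of no delay, 18% of two weeks, 28% of four weeks and 12% of six weeks, with no simulation required.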

Modelling Change in Enterprise IT

One of the big problems in a strategic or “enterprise architecture” view of IT is how to model the change in an enterprise’s IT portfolio over time. Most established modelling techniques deal with an essentially static view of the system landscape, supplemented by some modelling of the dynamics within systems. These are very poor tools if you are trying to understand how the complete set of systems, technologies and capabilities changes over time. This item discusses two simple techniques which address this problem.

A Square Peg, and Only Round Holes

Musings on whether an architect can fit neatly anywhere in a traditional IT organisation hierarchy.

The Changing Role of An IT Architect

An IT architect (for want of a generic term which isn’t already terribly overloaded) takes on many different roles over time. This article discusses some of those roles, and introduces a model for the different architectural interventions in a typical project life-cycle.

Characteristics of a Software Architect

My musings on what makes a good software architect.

Technical Reference Architecture

“Technical Reference Architecture for Component Based Development and Enterprise Application Integration” by Tim Barrett of ComCor IT Solutions BV is an excellent summary of the characteristics of a strong, flexible, layered and component-based architecture, and the different classes which comprise one. (Adobe Acrobat Format)

The Tao of the Architect

Philippe Kruchten of Rational has taken the Tao of Lao-Tsu, and created a modern translation focussed on the values and attitudes which an architect should adopt. He says many things which are also said on this site, but much more beautifully.

Articles on Agile Modeling and Architecture

Here’s a list of useful articles by Scott Ambler on either the Agile Modeling or Agile Data web sites. A lot of that material is useful, but I’ve picked out the articles most relevant to the Agile Architect:

Expressive Systems

This is an excellent and very important book. In the mid-1990s we were really starting to understand what made computer systems usable, and how people’s interactions with their computer systems could be tuned to make a more rewarding experience. The user would reach the goal of their computer use, and be helped to solve any problems on the way, without any of the common frustrations and mistakes.

Then the Internet happened, and suddenly we were put back 10 years. Everyone wanted "web-based" systems, which had some advantages but one big disadvantage: although such systems were superficially modern, with pretty colours and pictures, their models for the human-computer interface were mainly out of the dark ages of mainframe computing.

Richard’s book puts us back on track. He reminds us that we should be building software for people to use, and that if we get that wrong, most other considerations are secondary. He also has a clear vision of how systems should work, exposing the underlying object model to the users, so that they can directly manipulate a representation of their problem or task.

He starts by exploring the shortcomings of existing systems, and asking how a better interface would improve matters. He then develops a clear understanding of the characteristics of his "expressive" systems, as well as explaining why these are very different from capabilities like end-user programming. Unusually, this is explained as a "business case" which business managers as well as IT specialists can understand, which helps us see why this should matter to senior stakeholders, and not just be something to keep the users and "techies" happy.

The next section of the book explains basic concepts of object-based and component-based systems, and explores why these are the basis for systems which put the user in control, in a problem-solving role.

Finally, the book discusses a range of design principles for expressive systems. Some of these are quite radical, such as "Don’t start by asking your customers what they want". A recurring theme is that IT may have to move beyond the limits of traditional requirements, which limit systems to very narrow definitions of both the problems and the system’s relationships to the business and the users.

This is a very readable book, designed to be read through in just a few sessions. Even a single read-through may leave you inspired, but the book is rich in both ideas and examples, and hopefully you’ll go back again and again to get more value from it.

Don’t expect a tutorial on the details of interface design. For this, read Alan Cooper’s "About Face" or one of Bruce Tognazzini’s books. Instead, use this book as a source of both inspiration and justification when you want to do more than traditional analysis and current fashions allow.

The book is beautifully presented, with most topics handled in a single two-page spread which makes it very easy to dip into. It’s richly illustrated, although some of the material originally from CSC’s "Foundation" research programme is not so strong. I’m proud to be a contributor in my own right, my ConQuest system making an appearance on page 63.

If you can get hold of it, the original paperback version is spectacular value for money. I haven’t seen the new hardback edition yet, but I expect it to be equally good.

I heartily recommend this book.

Strategies for Flexibility

Organisations need to protect and maximise the value of their IT assets. To protect against threats from business and technological change, systems need to be flexible: able to change to support new functions, new workloads and new working environments. Flexibility does not happen by accident – it is usually the result of planning, forward thinking and adopting strategies known to enhance and encourage it.

This paper (in Adobe Acrobat Format), originally published by the CBDi Forum, presents some of those strategies.

 

Coming Soon

The following articles related specifically to the Agile Architect are currently under development:

  • A War Story About Agile Architecture
  • Architecture, Design and Engineering
  • Balancing Goals and Constraints
  • Modelling Change: Scenario Analysis and Change Cases


© Questa Computing Ltd. 1999-2005