Agile Architecture — the rise of messy, inconsistent and emergent architecture

Houston, we have a problem…

The ship is sinking (the agile ship, that is) as many companies continue to approach architecture the way they did 10+ years ago, with enterprise architecture aimed at maximising reuse and consistency and, ultimately, reducing operating costs. This has detracted from their ability to be nimble and responsive. Over the decades many of these companies have built monolithic “enterprise solutions” which are widespread and shared by many teams across the organisation. Great from a $$ point of view, but for speed-to-market and agility it does nothing but leave teams with their hands tied, bound by a proliferation of inter-dependencies.

As companies “go agile” to deal with this agility problem, they restructure themselves, often on the advice of agile coaches, into end-to-end feature teams. After running the new structure for a couple of months, issues surface: managers and executives start to wonder where they went wrong and why these end-to-end feature teams aren’t working for them. Although there are numerous reasons these issues arise, I’ve often found that teams are constrained by the organisation’s technical stack. Much like a kid trying to jam a square peg into a round hole, the organisation is trying to make feature teams work with a traditional architecture.

Trying to get end-to-end feature teams to work with a traditional architecture

These executives and managers start to reminisce about the good old project-driven days and component teams. Perhaps things would be better if they went back? In many cases, I wonder if they are correct!

Many places do exactly that: they degrade back to component and project-style teams, or somewhere in between, and apply band-aid solutions such as increased dependency management, big-room planning events, scrum-of-scrums-style meetings and giant dependency boards. But much like taking Panadol when you have a cold, the bitterness of such cheap pills is but a temporary remedy.

I’ve often wondered how we even get ourselves into these situations in the first place. Is it tunnel vision, blindly plowing ahead with team structure and the “must implement feature teams 101” handbook? We wouldn’t ask a team to release more often without the automated testing and deployment pipeline to support it, so why do we do this with structure?

There is a strong need for thought leadership like Adrian’s in this space.

Often when I suggest radical ideas like deliberately having messy, duplicated and inconsistent architecture to enable autonomous teams, I’m met with crickets: blank faces staring back. You can see them trying to process what I just said, and how fundamentally it goes against everything they’ve been taught to date on architecture and management 101. They just can’t wrap their heads around it. “Duplicate? Inconsistent? Are you insane!”

Management trying to understand why anyone would want duplication and inconsistency

So why would anyone ever want to have inconsistent, duplicated and messy architecture?

The problem with monoliths

Take, for example, a typical enterprise architecture. There are very few components and it looks simple: components carefully picked to serve many applications, each diverse enough to allow for wide reuse and coverage.

However, there is a hidden cost to all this, one which was not so visible in the old project- and component-team days. Yes, the more we spread a component across multiple teams, the lower the inventory cost (here I’m counting lines of code and the other assets required for our apps to run as inventory). What we are not seeing is that this inadvertently increases the cost of change.

This is because once the component is shared across multiple teams, there is a snowball effect every time one team wants to change it. I’ve seen this all too often: one team makes a small change and boom! It breaks half a dozen other teams’ applications. Soon after, we are forced to conduct large coordinated testing cycles to ensure that none of the dependent teams are impacted.
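To make the snowball concrete, here is a minimal hypothetical sketch (the function and team names are invented, not from any real codebase): one team changes the signature of a shared component to suit its own needs, and every other consumer written against the old signature breaks at once.

```python
# Hypothetical shared "enterprise" component, used by many teams.
# Team A changes the signature to add a field it needs...
def format_customer(name: str, tier: str) -> str:
    """Shared formatter; was format_customer(name) until Team A's change."""
    return f"{name} [{tier}]"

# ...and every other team's call site, written against the old
# one-argument signature, now fails.
def broken_dependent_teams():
    failures = []
    for team in ["billing", "search", "checkout", "emails", "reports"]:
        try:
            format_customer("Ada")  # old one-argument call
        except TypeError:
            failures.append(team)   # each dependent app now breaks
    return failures
```

One small, locally reasonable change, and five other teams are pulled into a coordinated testing and fixing cycle before anyone can ship.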

Changing one component which spans across multiple teams has a snowball effect.

I’m not advocating against having a single source of truth or a single point of change; I agree that both lower the cost of change, but they are not the full picture. Yes, they lower the cost of change at the code level, but from a whole-of-organisation point of view they often lead to coordination and alignment costs far greater than the code change itself. This cost is not so visible, and certainly not as tangible as the physical code change; perhaps that’s why it so often goes unnoticed. Maybe I should reframe cost of change as the increase in “time to value”, as Adrian put it at the AWS Summit. After all, that’s the goal: not how quickly we can make a change, but how fast that change can be delivered to a customer and provide them value.

Think about every time you’ve had to align teams, build consensus or hold a meeting to agree a consistent approach. This is all an increase in the cost of change, ultimately delaying value realisation. I doubt any of your customers care how many meetings it took to build alignment or consensus on a particular approach.
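By contrast, the deliberately “messy” alternative trades a little duplicated inventory for local, independent change. A minimal sketch, again with invented names: each team owns its own copy of the formatting logic, so either team can change and ship without coordinating with the other.

```python
# Hypothetical alternative: each team deliberately owns its own copy
# of the formatting logic, duplicated rather than shared.

def team_a_format_customer(name: str, tier: str) -> str:
    # Team A added a tier field for its own needs -- a purely local change.
    return f"{name} [{tier}]"

def team_b_format_customer(name: str) -> str:
    # Team B's copy is untouched and keeps working as-is.
    return name.upper()

# The duplicated lines cost a little extra "inventory", but the cost
# of change stays local: no cross-team regression cycle and no
# alignment meetings before either team can release.
```

The code is less DRY, but the time to value for each team is no longer hostage to everyone else’s release schedule.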

Consistent architecture or fast autonomous teams — you can’t have both.

In fact, many of the leading tech companies we know today identified this exact problem with monoliths. The likes of Netflix, Amazon and eBay all started with monolithic architectures and over time evolved away from them.
