Separation of concerns

Best-of-breed – part 3

In this series (part 1 / part 2), I have been writing about the merits of best-of-breed, as the middle ground between monoliths and microservices. So far, the subject has been limited to just the scale of the software – how many functions should one unit handle. There is another facet to this, however: which functions should be collected together into one solution.

This problem is referred to as the separation of concerns. Or, more accurately, you should consider the separation of concerns when you face the problem of dividing up all of your requirements into units of software.

If you are not familiar with the term, separation of concerns is the process of extracting functionality relating to different domains into isolated units – as a programmer that could be different modules, functions or classes; as an architect that would be different components of the whole system.

To illustrate this concept and explore why you might want to separate concerns, let’s go through an example. Consider a website that requires you to log in to access its content. This system has at least two concerns: managing content and managing users. Now, if this website has a single database to store all data then it has potentially mixed concerns. How could this manifest as a problem? Well, what if the Asian and European markets are both very important for this site and performance is critical? In that case, the GDPR mandates that the users are stored within the EU but the latency of database calls from Asia to the EU could be a problem. The crux of the issue is that there are conflicting requirements upon the database. One concern (user management) requires that the data is never stored outside of the EU and another concern (content management) requires that the data is replicated between Europe and Asia. Obviously this is an over-simplification but hopefully it illustrates the point.

In that example, the resolution to the conflict is straightforward: use two databases – one for users and one for content. That change exemplifies a separation of concerns in a very literal way. Of course, that could cascade into the application tier or the architecture. For the latter, that would mean having one standalone solution for content management and a separate piece of software for user management.
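At the code level, the same separation can be sketched as two isolated modules, each free to satisfy its own storage requirements. This is a hypothetical sketch (the class and region names are invented), not a real implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch: each concern owns its own storage configuration,
# so conflicting requirements (EU-only vs globally replicated) never clash.

@dataclass
class Database:
    region: str
    replicated: bool

class UserRepository:
    """User management: data must stay inside the EU."""
    def __init__(self):
        self.db = Database(region="eu-west-1", replicated=False)

class ContentRepository:
    """Content management: data is replicated for low latency worldwide."""
    def __init__(self):
        self.db = Database(region="eu-west-1", replicated=True)

users = UserRepository()
content = ContentRepository()
print(users.db.replicated, content.db.replicated)  # False True
```

Because neither repository knows about the other, each can change its storage strategy without a conflict ever arising.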

So, given you have taken the decision to use best-of-breed solutions – not monoliths – the question of where to draw the boundaries between these solutions naturally follows. To answer that question you need to weigh two factors:

  • the benefits of proximity, control and intimate knowledge; vs
  • the risk of conflicts arising between two functions in the same black-box

To illustrate the benefits of proximity, consider Identity and Access Management (IAM). These two families of functions are very commonly bundled together because the decision about access is intrinsically based upon the subject’s identity (“you can access this but they can not”). An access management tool can exist independently with an integration to an external identity store but then the management may be more convoluted, latency will slow the system down and, if the integration fails, the whole system will cease to function.

From an architect’s standpoint, the task is not just to cluster functionality into concerns but also to identify functions that should not live together, because of the risks that coupling them exposes your organization to. Unfortunately, analysing the risk of a conflict arising is akin to predicting the future. There are a few indicators you can consider, however… Is the function (not system) mission critical? Is the function heavily regulated, and therefore subject to unpredictable change? Is the function intrinsically linked to external systems? Is the function intrinsically linked to client-side technologies, browsers or devices? Essentially, will one function need to be moved and/or changed (with some degree of urgency) independently of the rest of the system?

Finally, the vendors you are assessing will have their own opinions about the correct separation of concerns and should be able to articulate them clearly.

For a true best-of-breed system the same vendors will regularly work with each other in a mutually beneficial and dependent ecosystem. The hallmark of best-of-breed is that the solution sells by word of mouth. That network effect will naturally cause the best software and, crucially, the most sensible integrations to bubble to the top: “We really liked X and it worked great with Y but, to be honest, the integration with Z was a mistake”.

Really, that final point – the Darwinian emergence of a best-of-breed ecosystem – is the crux of this series. It may be a challenging and multifaceted problem to architect a complex solution but a combination of foresight and market-wisdom can be used to mitigate the risks you take on.

In conclusion, the size and situation of your business affect whether you should be using smaller or larger solutions; monoliths are almost never beneficial; and, finally, careful planning – not just for the architectural patterns but also of where the split lies between various concerns in the system – will help you identify best-of-breed solutions that are a good match to your needs.

How big should your software be?

Best-of-breed – part 2

In the previous post, I talked about the extremes of a spectrum of enterprise software architectures: from large monoliths down to microservices. tl;dr Monoliths are potentially simple solutions but inflexible; microservices are very agile but come with architectural complexity. At the end of the post, I touched upon the question of whether simplicity and flexibility were of equal value and, hence, if the trade-off was linear.

It seems fairly likely that this model is not correct – there will be some Goldilocks zone in the spectrum, where the flexibility is enough and the complexity manageable. Of course, that sweet-spot will not be universal: the size of the internal technology team, how specialized the outcome is, the market and a host of other factors will come into play.

In this post I am going to explore a framework for modelling the payoff and take some best guesses at how that applies to some example scenarios.

A generalized framework for modelling payoffs

The examples that I will explore in this post are imaginary; they are designed to illustrate that there is no one-size-fits-all solution, nor a paint-by-numbers exercise to find the solution for you. That said, there are aspects that I think are generally applicable and that should be considered:

  1. What is your objective and how are you measuring it? OKRs can work well for this.
  2. What are the attributes of a project that will influence the objective(s)?
  3. For the decision you are trying to make, how do those attributes play out between the extreme cases?

While you should know point one and be able to intuit point two, the final one is tricky and requires work. To effectively describe the landscape, you are likely to need to produce a model in a spreadsheet and put some numbers in. You can guess the numbers to start but it’s worth finding contacts in similar organizations who are friendly enough to share their experiences and help you refine your model.
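To make this concrete, the spreadsheet model can be sketched in a few lines of code; every weight and score below is an invented placeholder that you would replace with your own numbers, guessed at first and then refined:

```python
# Hypothetical payoff model: score each architecture option against
# weighted attributes. All weights and scores are invented placeholders.

options = {
    "monolith":      {"simplicity": 9, "flexibility": 2},
    "best_of_breed": {"simplicity": 6, "flexibility": 7},
    "microservices": {"simplicity": 2, "flexibility": 9},
}

# Weights come from your objectives (point one above).
weights = {"simplicity": 0.4, "flexibility": 0.6}

def payoff(scores: dict) -> float:
    """Weighted sum of an option's attribute scores."""
    return sum(weights[attr] * score for attr, score in scores.items())

ranked = sorted(options, key=lambda o: payoff(options[o]), reverse=True)
print(ranked[0])  # the highest payoff, under these guessed numbers
```

The point is not the arithmetic, which is trivial, but that writing the model down forces you to name the attributes and commit to weights you can then argue about with those friendly contacts.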

In the following examples, keep the 3 points above in mind and consider them in the context of each case.

Greenfield startup – limited resources, both in time and budget

In this case, the biggest constraint may be budget. Where you do have budget to spend, you need to avoid one choice closing off other low cost options elsewhere in the stack. The trick will be to find an inter-operable set of software where no single piece blows the budget. You probably won’t want to pay for engineers where you don’t get differentiating value, either, so the integrations need to be simple or out-of-the-box.

Objective: Build a functional system within budget
Key result: Project total cost of ownership less than X
Influential attributes: Licence cost and team size

Here you can see the need to avoid expensive solutions – which will often rule out large enterprise suites – and, equally, you will not want the overhead of a committed team. If you can, buy best-of-breed where it differentiates you and open source or consumer-grade where it doesn’t. You will also want to plan your team size very carefully; while the steps in the illustration above are purely figurative, as a small company the impact of each FTE will be significant.

Well funded new venture – trying to scale rapidly

If you have taken investment to scale faster than the competition, the cost of delay far outweighs financial outlay (at least within reason). You’ll need to get the commodity stuff out the way quickly then focus on your differentiator. Bear in mind – you might need to adjust course or even pivot, so don’t get locked in!

Objective 1: Get out an MVP and gather usage data
Key result 1: Project delivered in less than X days
Objective 2: Test-learn-iterate to achieve product-market fit
Key result 2: Deliver second iteration in less than Y days
Influential attributes: Initial delivery time and time to change some functionality

Some kind of SOA will allow you to adapt to changes as you try to achieve PMF: the big hump on the left of the “Time to pivot” wave indicates the risk of getting a single, inflexible solution for everything. The finance, in this case, is opening up the width of the Goldilocks zone: with enough budget and motivation, you could go all the way to true microservices.

Established company – fighting to get off a crippling legacy stack

The organizational overhead of a move away from a legacy stack can be paralysing. It’s a tough pill but you’re going to have to swallow it.

The OKR is much harder to define in this situation. It could be any one of many. For the purpose of illustration I will choose one possible OKR:

Objective: Accurately track customer data throughout the enterprise
Key result: Reconcile Web Analytics with Single Customer View with an error rate of less than 0.01%
Influential attributes: SCV de-duping accuracy, data integration reliability

A solid architectural strategy will be integral to success and without it the project risks spiralling into chaos. Plan your architecture wisely – hire an FTE to own it, if you don’t have someone already. Something well structured like an event-bus pattern is a good place to start, followed by a multi-stage migration plan that minimizes risk.
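For illustration, the event-bus pattern can be sketched in a few lines; this in-process toy (all names hypothetical) shows the decoupling the pattern buys you during a migration:

```python
from collections import defaultdict

# Minimal in-process sketch of the event-bus pattern: producers publish
# named events and never know who consumes them, so systems can be swapped
# in and out of the migration without touching each other.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("customer.updated", received.append)  # e.g. the SCV listens
bus.publish("customer.updated", {"id": 42, "email": "a@example.com"})
print(received)
```

In a real migration the bus would be a durable broker rather than an in-memory object, but the principle is the same: the legacy system and its replacement can both subscribe to the same events while you cut over stage by stage.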

There are going to be a lot of requirements in a project like this as, if nothing else, it’s difficult to get agreement to drop BAU functionality in a large organization. That means no single monolith will cover everything. You might consider a suite, for some areas of the business, but make sure the vendor is a safe pair of hands.

Failure here is needing to do a large re-arch again. What you build now should last for at least 3 years and the architecture should be designed to live a lot longer than that.

As called out in part 1, the monoliths are spread too thinly to realistically achieve a best-of-breed status for any of the functions you need but, at the other end of the spectrum, if you self-build everything you will never catch up with the attention-to-detail and resilience that a vendor will have finessed over years and multiple clients.

The bump on the right hand side of the “Integration reliability” curve is for true microservices, and is debatable. I believe that there are enough engineers out there who are enthusiastic about microservices that the right team will be likely to produce a more reliable network of services because they will spend a lot of time thinking about them. Microservices inherently make you consider integrations, whereas that is less of an emphasis if you have non-mission-critical integrations between a small number of self-contained solutions.


To conclude the second post in this series, I want to reiterate the sentiment that there is no right size of software. Small services work well for some companies, large monoliths have their place too. The prevailing truths are:

  • With a little analysis you can identify which style of software will minimize the risk for you, as a buyer
  • The specialists near the middle of the spectrum will have the best quality solutions, as long as their market is large enough and healthy enough to support their growth – these are the best-of-breed solutions

If you have taken the decision to look to the middle of the spectrum – avoiding both atomic microservices and all-encompassing monoliths – the next question is how do you decide which functions live together. In the final post in this series I will focus on that question and explore how a separation of concerns can help keep systems operating effectively over time.

The monolith to microservice spectrum

Best-of-breed software part 1

This is the first in a three part series, covering:

  • The monolith to microservice spectrum (part 1)
  • How big should your software be? (part 2)
  • Separation of concerns (part 3)

Enterprise software buyers have a really hard job. Aside from the specifics of meeting requirements, there’s no escaping the difficulty of the underlying architectural decisions an organization needs to make in order to buy well.

If the technology systems of your company are simple – with few users and minimal integration – you may be able to buy something that just does the job. But, most often, that’s not a strategy that can scale with your business or the changing technology landscape. Usually, there needs to be some consideration of architecture even before shortlisting a vendor.

The decision regularly comes down to buying a monolithic solution or a point solution. That is, you can buy a monolith (or suite) that handles the majority of the functional requirements of your business in one solution; or you can buy many solutions that each fulfil one function. Of course there are some solutions that fall between the two, which will be the subject of the next post in this series.

Now, on the face of it, a solution that meets all of your needs in one tool is the obvious winner, versus one that only meets a single need, but – as we all know – it’s not that simple.

Personally, in order to bring clarity to a difficult decision, I like to consider the polar extremes: a solution that literally does everything one’s business needs versus a microservice that only handles the slimmest, most atomic function.

Take the example of an online newspaper (as an industry I know well). They need solutions to, at least:

  • Author content
  • Publish content to the web
  • Manage registered user data
  • Authenticate users
  • Hide content behind a paywall
  • Manage recurring subscriptions
  • Curate newsletters
  • Manage email lists
  • Send emails

(In fact, the list is much, much longer than that but they would definitely need to do those things.)

First, let’s consider the monolith. For a green-field publisher this might be an attractive option. One vendor (one throat to choke), one set of training for their users, one bill. And, hopefully, the whole business is supported by one integrated solution.

There are three particular drawbacks to the monolithic approach.

One is that the implementation project is likely to be huge, which constitutes a substantial gamble: if the project goes wrong the cost could be existentially damaging, particularly for a new business.

The second problem is vendor lock-in. Assuming the project is a success, what happens a year or two down the line if the publisher wants to change the way they send emails? Maybe they were getting a lot of bounces; maybe the reporting is inaccurate; whatever the reason, the solution is not cutting it and a better one is now desirable. The issue at this point is that the publisher is paying for the email sending functionality of the monolith, whether they are using it or not.

Finally, there is a more insidious problem of development and maintenance. You should be cognisant of the fact that the vendor of a monolith is competing with several best-of-breed and point solutions, across different functions. If they offer a CMS (authoring & publishing content), an IDM (user data and auth’) and a paywall (access control & subscriptions) then there would be an alternative architecture with 3 separate best-of-breed vendors or 6 point solutions; so the monolith’s development team (and other teams) would need to be of the same order as 3 best-of-breed vendors combined. If they’re not, how can they be investing in their product competitively?

So, what about microservices?

The benefits of microservices are well documented elsewhere but, to summarize: because each service is modular and focused, it can be built and maintained more easily and quickly. The service only has to worry about one thing and has a strict interface between it and other services so:

  • you can rebuild or refactor them often, without worrying about the rest of your stack;
  • you can use different technologies and programming languages for each service, picking the most appropriate for each, and;
  • when you release, you are testing an isolated scope.

All of that is predicated upon the architecture being responsible for the interactions between the services. A development team can safely work on just, say, Content Authoring because there is a strict interface (what comes in and out of the service must not change) and there is some kind of transport between them, such as an event bus or HTTP calls.
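A minimal sketch of such a strict interface, using hypothetical names: the contract is defined once, and the team behind it is free to rewrite the internals as long as what comes in and out does not change.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch: the contract (what goes in and out) is fixed;
# the implementation behind it can be rebuilt or replaced at will.

@dataclass
class Article:
    title: str
    body: str

class ContentAuthoring(Protocol):
    """The strict interface other services depend on."""
    def create_draft(self, title: str, body: str) -> Article: ...

class InMemoryAuthoring:
    """One possible implementation; could be swapped for an HTTP client
    talking to a remote service without callers noticing."""
    def create_draft(self, title: str, body: str) -> Article:
        return Article(title=title, body=body)

service: ContentAuthoring = InMemoryAuthoring()
draft = service.create_draft("Hello", "First draft")
print(draft.title)
```

The transport between services, whether an event bus or HTTP, sits behind that same contract, which is what lets a team work on Content Authoring in isolation.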

The real challenge with microservices is that architecture – the glue between the services. Each service is less complicated but the entire system is a whole lot more complex. Some services might be built in house, some bought; in that case you need to abstract the APIs of the bought systems to fit in with your interfaces, which means an API gateway, of some kind. The services you build yourself need to be containerized, load-balanced and auto-scaled. You need to monitor performance at either the interface or the network level to identify bottlenecks… It gets complicated.

In some cases, another down-side to a microservices architecture is performance. This one is arguable and depends a lot on specifics, but more network requests between the services in an interconnected web means more latency. That should be considered too.

So, from this point of view we can see there is a trade-off between simplicity and flexibility. Monoliths are simple solutions, in many senses, but really undermine organizational flexibility; microservices are flexible but come with an architectural complexity that should not be underestimated.

If these two traits – simplicity and flexibility – are equally valued and dissipate linearly from one extreme to the other then there is no outright advantage for any particular point on the spectrum. You would just choose the solution you liked. That, however, is probably not the case.

The value of each trait is absolutely dependent upon your team and existing systems. If you don’t employ any solutions architects then the value of simplicity is massively inflated. If you have requirements so unusual that they can only be built bespoke, then flexibility may be more important for you. In part two of this series I will have a look at some example cases and explore a framework for answering the question “how big should your software be”.

In practice, most enterprises will have some blend of solutions from across this spectrum. Extremes are generally risky and solutions that fall somewhere between microservices and monoliths may be the pragmatic choice. In part three of this series I am going to explore which functions should be collected into best-of-breed solutions and why.

About me

I’m an entrepreneur and technologist. I’m passionate about building SaaS products that provide real value, solving hard problems, but are easy to pick up and scale massively.

I’m the technical co-founder of a venture-backed start-up, Zephr. We have built the world’s most exciting CDN, which delivers dynamic content to billions of our customers’ visitors, executing live, real-time decisions for every page.