January 25, 2024
Decision Nodes

Code sharing

Saurabh Dashora, Software Architect

Saurabh is a Software Architect with over 13 years of experience working on large-scale distributed systems in banking, autonomous driving, and retail. He is also a passionate technical writer and publishes the weekly System Design Codex newsletter.

Sharing code between multiple services can turn into a topic of contention in a project team.

The larger the service footprint, the greater the intensity of the debate over how to share functionality between the various services.

On one side of the spectrum, you have developers suggesting that DRY (Don’t Repeat Yourself) is the right way to go.

On the other side are proponents of the “share nothing” philosophy.

Often, neither side can offer a definitive answer, because typical enterprise applications present so many different scenarios. It’s hard to provide a generalized solution that caters to every possible need.

This article covers four different approaches to sharing code, exploring the trade-offs of each.

Code Replication

This is by far the simplest approach to sharing code between two independent services.

In this approach, you just copy shared code into each service.


Though it might seem like an ugly hack now, this technique was actually quite popular in the early days of microservices architecture. It was popularized by the concept of bounded contexts, which drove the whole movement towards a “share-nothing architecture”.

However, over the years, this approach has largely fallen out of favor.

Rolling out updates is difficult

Why is this approach considered problematic?

Imagine finding a bug in the shared or duplicated code or the need to make an important change to this piece of code.

You will need to update every service containing the replicated code. No matter how hard you try, you’d probably miss some services, resulting in issues. You’d also have to test all of those services thoroughly.
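The drift this causes is easy to demonstrate. In the hypothetical sketch below, the same helper was copied into two services; a bug fix lands in one copy but not the other, and the two services silently disagree (all names are invented for illustration):

```python
# orders_service.py -- the copy that received the bug fix
def normalize_email_orders(email: str) -> str:
    """Lowercase and strip whitespace (fix: strip() was added)."""
    return email.strip().lower()

# billing_service.py -- the stale copy that missed the fix
def normalize_email_billing(email: str) -> str:
    return email.lower()

raw = "  Alice@Example.COM "
print(normalize_email_orders(raw))   # "alice@example.com"
print(normalize_email_billing(raw))  # "  alice@example.com " -- services now disagree
```

Nothing at build time flags the stale copy; the inconsistency only shows up as a production issue.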

For most new projects, you should avoid this technique. However, you may still find it in existing applications, where you’ll need to deal with it appropriately.

Shared Library

Using a shared library is the most common technique for reusing code across multiple services.

A shared library is an external artifact such as a JAR file, DLL, or NPM package that contains the common functionality.

The idea is that you include this shared library in any service that needs it and make use of the functionality it provides.
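As a minimal sketch, assume the shared helper lives in a single module that would, in practice, be published as a versioned package and declared as a dependency by each service (the function and service names here are invented):

```python
# --- shared_validation (the shared library) ---
def normalize_email(email: str) -> str:
    """Single, canonical implementation used by every consumer."""
    return email.strip().lower()

# --- two services, both consuming the one implementation ---
def register_order(email: str) -> dict:
    return {"service": "orders", "email": normalize_email(email)}

def create_invoice(email: str) -> dict:
    return {"service": "billing", "email": normalize_email(email)}

print(register_order(" Bob@Example.com "))
print(create_invoice(" Bob@Example.com "))
# Both services agree; a bug fix in normalize_email reaches every
# consumer the next time each service rebuilds against the new version.
```

The key difference from replication is that the fix exists in exactly one place, and each service picks it up through a normal dependency upgrade.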


Advantages and Concerns

The main advantage of this approach is that the shared library gets bound to the service at compile time, making it easier to spot issues during development and testing.

One concern with this approach is the raw size of the shared library. However, the bigger concerns are testing and version control, which create trade-offs around the granularity of the shared library.

Dependency Management vs Change Control

To decide on this approach, you need to balance out the twin forces of dependency management and change control.

If a shared library grows too big and is used by multiple services, every service is impacted whenever the library changes, even if the change has nothing to do with a particular service. Impact here means rebuilding, retesting, and redeploying the service. Local development also becomes harder, since frequent build-and-publish steps are required when you modify the shared library and a service at the same time.

On the other hand, making the shared libraries very small eases change control, but you can end up with a very complicated dependency matrix.

Generally speaking, it’s better to avoid large, coarse-grained shared libraries. Good versioning strategies can also help manage the scope of change in a shared library.

Shared Service

The main alternative to the shared library approach is the shared service approach.

In this strategy, you extract all the common functionality into a shared service and deploy it as a standalone process.


Independent Deployment

With this technique, you avoid compile-time code reuse by placing the common functionality into a separate service with its own deployment path.

The approach is a great fit for environments with multiple heterogeneous languages and platforms. Changes to the shared service can also be rolled out more quickly than changes to a shared library.

Potential Trade-Offs

There are a few important trade-offs with this approach:

  • Change Risk - A faulty change to the shared service can potentially bring down every service that depends on it, because the change surfaces only at runtime, not at compile time.
  • Performance - In the shared service approach, every service may need to make an inter-service call over the network. This means that performance can be impacted by the overall network latency.
  • Scalability - The shared service must scale along with the services that depend on it.
  • Local Development - With shared services, local development can be quite difficult if the environment is hard to replicate on your machine. You have to coordinate with the service’s various consumers and go through multiple build-and-deploy activities.
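The runtime coupling behind these trade-offs can be sketched as follows. A local HTTP stub stands in for the shared service; the endpoint path, payload shape, and all names are assumptions for illustration, not a real API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubSharedService(BaseHTTPRequestHandler):
    """Stand-in for a separately deployed shared service."""
    def do_GET(self):
        # Normalize the email that arrives as the last path segment.
        body = json.dumps({"email": self.path.split("/")[-1].lower()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubSharedService)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def normalize_email(email: str, timeout: float = 2.0) -> str:
    # Every call crosses the network: latency, availability, and faulty
    # deployments of the shared service now hit this consumer at runtime.
    url = f"http://127.0.0.1:{port}/normalize/{email}"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)["email"]

result = normalize_email("Carol@Example.COM")
print(result)  # carol@example.com
server.shutdown()
```

Note that the consumer only discovers a broken or unreachable shared service when the call fails, which is exactly the change risk described above.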

Sidecars

An application typically consists of two types of functionalities:

  • Domain
  • Operational

For domain functionality, we usually want loose coupling and high cohesion.

However, operational functionality such as logging, monitoring, authentication, and circuit breakers does much better with a high-coupling implementation.

You don’t want each service team to reinvent the wheel for operational functionalities. Also, there is often a need for standardized solutions across the organization or the project.

To share operational functionalities across multiple services, the Sidecar pattern is a great bet.


Implementing a Sidecar

In this setup, every service includes the Sidecar component that takes care of the operational functionalities. There are multiple ways to implement a Sidecar:

  • You can use container orchestration tools like Kubernetes. While defining a Kubernetes Pod, the specification includes two containers - the main application container and the sidecar container. They share the same network namespace and can communicate through localhost.
  • Service mesh frameworks such as Istio or Linkerd also provide a way to implement the Sidecar pattern by injecting a proxy with each service instance.
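A minimal Kubernetes pod definition along these lines might look like the sketch below, where the sidecar ships the application’s logs. Image names, container names, and paths are placeholders, not a real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  containers:
    - name: app                   # main application container
      image: example/orders-service:1.0.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper           # sidecar handling the operational concern
      image: example/log-shipper:1.0.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                # shared between the two containers
```

Because both containers live in the same pod, they share the network namespace (so they can also talk over localhost) and, here, a shared volume, while the application container stays free of log-shipping logic.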

Using Hexagonal Architecture

The Sidecar pattern uses the concept of hexagonal architecture to decouple the domain logic from the technical or infrastructure logic.

Hexagonal architecture is a software design approach that emphasizes a clear separation between the core application domain logic and the external components such as logging, authentication, monitoring and so on.
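The separation can be sketched in a few lines of Python: the domain core depends only on a port (an interface), and operational adapters plug in from outside, which is exactly the seam a sidecar exploits. All names here are invented for illustration:

```python
from typing import Protocol

class Logger(Protocol):           # port: what the domain needs, nothing more
    def info(self, message: str) -> None: ...

class ConsoleLogger:              # adapter: one interchangeable implementation
    def info(self, message: str) -> None:
        print(f"INFO {message}")

def place_order(order_id: str, logger: Logger) -> str:
    # Domain logic knows only the port, not how logging is implemented.
    logger.info(f"order {order_id} placed")
    return f"confirmed:{order_id}"

result = place_order("A-42", ConsoleLogger())
print(result)  # confirmed:A-42
```

Swapping ConsoleLogger for an adapter that forwards to a sidecar changes nothing in the domain code, which is the point of the pattern.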

Of course, it’s important to ensure that we don’t end up sharing domain functionality through the sidecar.

Potential Risks with Sidecar Pattern

The main risk with a Sidecar is that it may grow too large or complex over time.

A couple of important problems that can show up because of this are as follows:

  • Maintenance becomes time-consuming due to the higher complexity
  • A large sidecar can result in greater resource consumption and create resource contention issues within the containerized environment

TL;DR

Each code sharing approach has its pros and cons along with trade-offs to consider.

Code replication is the easiest way to share code, but it can create a lot of problems later when it comes to maintaining the shared code.

Some big issues with code replication are:

  • Testing becomes a big pain as each impacted service has to be thoroughly tested
  • Figuring out the impact footprint is not easy, and there’s a high chance of missing critical services, resulting in production issues.
  • Older services might still use this approach; it’s a good idea to treat it as technical debt and replace it with some other mechanism.

As your service footprint matures, it’s a good idea to choose between the shared library and the shared service approach.

Some points to consider:

  • Shared libraries are bound to the service during compile-time. This makes it easy to discover potential issues during development.
  • However, with shared libraries you need to make a trade-off between change management and dependency management.
  • On the other hand, shared services are great for heterogeneous environments where shared libraries may not be possible.
  • However, shared services are bound to the other services at runtime. This opens them up to change risk, performance, and scalability concerns.

Lastly, there is another way of sharing code and functionality between services using the Sidecar pattern.

  • The Sidecar pattern is a great way to share cross-cutting concerns across multiple services.
  • You can use it for various functionalities such as logging, monitoring and security.
  • These concerns do well with tight coupling across services, since you want to provide consistency across the service footprint.