Decomposition of Monoliths

Break apart monolithic systems along domain boundaries while maintaining consistency.

TL;DR

Break apart monolithic systems along domain boundaries while maintaining consistency. Rather than risky big-bang replacements, these patterns enable incremental, low-risk transitions that keep systems running while you modernize. They require discipline and clear governance but dramatically reduce modernization risk.

Learning Objectives

  • Understand the pattern and when to apply it
  • Learn how to avoid common modernization pitfalls
  • Apply risk mitigation techniques for major changes
  • Plan and execute incremental transitions safely
  • Manage team and organizational change during modernization

Motivating Scenario

You have a legacy system that is becoming a bottleneck. Rewriting it would take a year and risk breaking critical functionality. Instead, you incrementally replace it with new services while keeping the old system running, gradually shifting traffic and functionality. Six months later, the legacy system handles 10 percent of traffic and serves as a fallback. Eventually, you can retire it completely. This pattern turns a risky all-or-nothing gamble into a managed, incremental transition.

Core Concepts

Migration Risk

Major system changes carry existential risk: downtime impacts revenue, data corruption destroys trust, performance regression loses customers. These patterns manage that risk through incremental change and careful rollback planning.

Incremental Transition

Rather than "old system" then "new system", these patterns create "old plus new coexisting" then "new plus old as fallback" then "new only". This gives you multiple checkpoints to verify things are working.

tip

The key insight: You do not need to be perfect on day one. You just need to be good enough to carry traffic safely, with a fallback if something goes wrong.

Dual-Write and Data Consistency

When migrating data, you typically need both systems to have current data for a period. Dual writes keep both systems in sync, backfill catches up old data, and CDC (Change Data Capture) handles streaming updates.
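A minimal sketch of the dual-write-plus-backfill idea, using plain dictionaries as stand-ins for the old and new data stores (the class name and structure are illustrative, not a prescribed implementation):

```python
class DualWriter:
    """Writes to the old store first (it stays the source of truth),
    then to the new store. A failure on the new side is recorded for
    later backfill rather than failing the caller's request."""

    def __init__(self, old_store, new_store):
        self.old_store = old_store
        self.new_store = new_store
        self.failed_keys = []  # candidates for the next backfill pass

    def write(self, key, value):
        self.old_store[key] = value  # old system remains authoritative
        try:
            self.new_store[key] = value
        except Exception:
            self.failed_keys.append(key)

    def backfill(self):
        """Copy any records the new store is missing or has stale."""
        for key, value in self.old_store.items():
            if self.new_store.get(key) != value:
                self.new_store[key] = value
        self.failed_keys.clear()
```

The same shape applies whether the "stores" are databases, caches, or message logs; in production, CDC often replaces the hand-rolled backfill loop.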

Abstraction Layers

Rather than replacing a system wholesale, you wrap it with an abstraction (facade or anti-corruption layer). The abstraction routes traffic gradually: initially 100 percent to old, then 99 percent old / 1 percent new, then 50/50, then eventually 100 percent new.
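The gradual routing can be sketched in a few lines. Hashing a stable request or user id (rather than rolling a random number per request) keeps each caller consistently on the same path, which makes divergences much easier to debug; the function name and percentage mechanics here are illustrative assumptions:

```python
import hashlib

def route_to_new(request_id: str, new_percent: int) -> bool:
    """Return True if this request should go to the new system.

    The id is hashed into one of 100 buckets; buckets below
    new_percent route to the new system, so raising the percentage
    only ever adds callers to the new path (it never reshuffles them).
    """
    bucket = int(hashlib.md5(request_id.encode()).hexdigest(), 16) % 100
    return bucket < new_percent
```

Moving from 1 percent to 50 percent is then a single configuration change, and rollback is setting `new_percent` back to 0.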

Practical Example

Phase 1: Preparation (Weeks 1-4)
New service deployed but offline
Load balancer configured to route traffic
Both systems connected to same database
Risk: Low (new service not receiving traffic)

Phase 2: Canary (Weeks 5-6)
Traffic Distribution: 1 percent to new, 99 percent to old
Monitoring: Compare response times, error rates
Verify data consistency
Rollback: Instant (revert to 100 percent old)
Risk: Very low (affects 1 percent of traffic)

Phase 3: Ramp (Weeks 7-12)
Gradual Increase: 5 percent, 10 percent, 25 percent, 50 percent
Continuous comparison with old system
Alert on any divergence
Rollback: Revert traffic distribution instantly
Risk: Medium (affects majority by end)

Phase 4: Retirement (Weeks 13+)
New service handles all traffic independently
Old system monitored but not actively used
Archive old system after stable period
Extract data and lessons learned
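The ramp-and-rollback logic that runs through Phases 2 and 3 can be sketched as a small controller. The schedule steps and the error threshold below are illustrative assumptions, not recommended values:

```python
class RampController:
    """Walks traffic through a ramp schedule, reverting to the old
    system instantly if the observed error rate diverges."""

    SCHEDULE = [1, 5, 10, 25, 50, 100]  # percent routed to new service

    def __init__(self, error_threshold=0.01):
        self.step = 0
        self.error_threshold = error_threshold
        self.rolled_back = False

    @property
    def new_percent(self):
        return 0 if self.rolled_back else self.SCHEDULE[self.step]

    def report(self, error_rate):
        """Advance the ramp when healthy; roll back to 0% otherwise."""
        if error_rate > self.error_threshold:
            self.rolled_back = True
        elif self.step < len(self.SCHEDULE) - 1:
            self.step += 1
```

In practice the `report` input would come from your monitoring system's comparison of old and new (error rates, latency, data divergence), evaluated once per ramp interval.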

Key Success Factors

  1. Clear success criteria: You know what "done" looks like
  2. Incremental approach: Big changes done in small, verifiable steps
  3. Comprehensive testing: Automated tests at multiple levels
  4. Constant verification: Continuous comparison between old and new
  5. Fast rollback: Can switch back to old system in minutes
  6. Team alignment: Everyone understands the approach and timeline
  7. Transparent communication: Stakeholders understand progress and risks

Pitfalls to Avoid

  • ❌ All-or-nothing thinking: "Just rewrite it" instead of incremental migration
  • ❌ Ignoring data consistency: Assuming old and new data will magically sync
  • ❌ No fallback plan: If the new system fails, you are stuck
  • ❌ Invisible progress: Weeks of work with no deployed functionality
  • ❌ Parallel maintenance: Maintaining old and new systems forever
  • ❌ Rushing to cleanup: Retiring the old system before the new one is truly stable
  • ❌ Team turnover: Key knowledge not documented

Related Patterns

  • Strangler Fig: Wrap and gradually replace legacy systems
  • Feature Flags: Control which code path executes at runtime
  • Blue-Green Deployment: Switch entire systems at once
  • Canary Releases: Route small percentage to new version

Checklist: Before Modernization

  • Clear business case: Why modernize? What is the benefit?
  • Phased approach defined: How will you migrate incrementally?
  • Success criteria explicit: What does "done" look like?
  • Risk mitigation planned: What could go wrong? How will you recover?
  • Testing strategy defined: How will you verify correctness?
  • Monitoring in place: Can you detect problems in new system?
  • Team capacity sufficient: Do you have capacity to run and maintain both old and new systems during the transition?
  • Communication plan ready: How will you keep stakeholders informed?

Domain Identification Process

Step 1: Map Current Domains

def identify_domains_in_monolith(codebase_root):
    """Analyze codebase to identify domain boundaries."""
    import os
    from collections import defaultdict

    packages = defaultdict(list)

    # Scan package structure
    for root, dirs, files in os.walk(codebase_root):
        for file in files:
            if file.endswith('.py'):
                # Extract top-level package
                relative_path = os.path.relpath(root, codebase_root)
                parts = relative_path.split(os.sep)
                if parts[0] != '.':
                    domain = parts[0]
                    packages[domain].append(os.path.join(root, file))

    return dict(packages)

# Example output:
# {
#     'users': ['users/models.py', 'users/views.py', 'users/serializers.py'],
#     'orders': ['orders/models.py', 'orders/views.py', 'orders/serializers.py'],
#     'payments': ['payments/models.py', 'payments/views.py', 'payments/tasks.py'],
#     'shipping': ['shipping/models.py', 'shipping/views.py']
# }

Step 2: Analyze Dependencies

def analyze_cross_domain_calls(codebase):
    """Find which domains depend on which other domains."""
    import ast
    import os

    dependencies = {}

    for root, dirs, files in os.walk(codebase):
        for file in files:
            if file.endswith('.py'):
                file_path = os.path.join(root, file)
                # extract_domain() is a helper (defined elsewhere) that maps
                # a file path to its top-level package name
                domain = extract_domain(file_path)

                with open(file_path) as f:
                    try:
                        tree = ast.parse(f.read())
                        for node in ast.walk(tree):
                            if isinstance(node, ast.ImportFrom) and node.module:
                                imported_domain = node.module.split('.')[0]
                                if imported_domain not in (domain, '__future__'):
                                    dependencies.setdefault(domain, set())
                                    dependencies[domain].add(imported_domain)
                    except SyntaxError:
                        pass  # skip files that do not parse

    return {k: list(v) for k, v in dependencies.items()}

# Example output:
# {
#     'orders': ['users', 'payments', 'shipping'],
#     'payments': ['users'],
#     'shipping': ['orders'],
#     'users': []
# }

Step 3: Design Anti-Corruption Layers

# Monolith: tight coupling
class OrderService:
    def create_order(self, user_id, items):
        user = User.objects.get(id=user_id)  # Direct model access
        payment = Payment.create(user.billing_address)  # Tight coupling
        return Order.create(user, items, payment)

# With anti-corruption layer during migration
class OrderServiceAdapter:
    """Adapter that will become the inter-service API."""
    def create_order(self, user_id, items):
        # Initially calls monolith; later calls separate services
        user = self.user_service.get_user(user_id)  # Through adapter
        payment = self.payment_service.create_payment(user.billing_address)
        return Order.create(user, items, payment)

Migration Checklist

Pre-Migration

  • Identify all domains with clear boundaries
  • Map all cross-domain dependencies
  • Assess data migration complexity
  • Design new service APIs
  • Implement anti-corruption layers
  • Set up separate databases/data stores for new services
  • Create API gateway to route requests

During Phased Migration

  • Deploy new service (offline)
  • Implement dual writes (monolith and new service)
  • Run backfill job for historical data
  • Shadow traffic: route copy of requests to new service
  • Verify new service produces identical results
  • Gradually increase traffic percentage (5%, 10%, 25%, 50%, 100%)
  • Monitor metrics: latency, error rate, data divergence
  • Maintain rollback capability throughout
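The shadow-traffic step in the list above can be sketched as a small wrapper: the caller is always served by the old system, while a copy of the request is replayed against the new one and any divergence is recorded. `serve_old` and `serve_new` are hypothetical callables standing in for the two systems:

```python
def shadow_compare(request, serve_old, serve_new, divergences):
    """Serve from the old system; replay against the new system and
    record divergences without affecting the caller."""
    old_response = serve_old(request)
    try:
        new_response = serve_new(request)
        if new_response != old_response:
            divergences.append((request, old_response, new_response))
    except Exception as exc:
        divergences.append((request, old_response, repr(exc)))
    return old_response  # the caller always gets the old system's answer
```

A divergence log that stays empty over a representative traffic sample is the evidence you need before raising the live traffic percentage.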

Post-Migration

  • Monitor new service for stability
  • Collect performance metrics
  • Document lessons learned
  • Schedule monolith domain decommissioning
  • Archive monolith code/data for legal/compliance
  • Update documentation and runbooks

Self-Check

  1. Can you identify domain boundaries clearly? If unclear, probably premature to decompose. Invest in domain modeling first.
  2. Do you have a way to verify new services match old behavior? Without this, you'll have silent bugs.
  3. Can you roll back if new service fails? Routing should be easy to revert.
  4. Does every team understand the new service ownership? Ownership unclear = disaster.
  5. What's your data consistency strategy? Eventual consistency OK? Or need strong consistency?

Warning signs you're moving too fast:

  • "We'll deal with the data migration later"
  • "Services will figure out how to talk to each other"
  • "Monitoring can come after we launch"
  • Only one person understands the architecture

Takeaway

Modernization is not about technology—it is about managing risk while delivering business value. The best migrations are the ones teams do not notice: they happen gradually, safely, with multiple checkpoints. Time is your friend in modernization; rushing increases risk without proportional gain.

Next Steps

  1. Define the scope: What exactly are you modernizing?
  2. Identify the risks: What could go wrong? How will you mitigate?
  3. Plan in phases: How can you migrate incrementally?
  4. Design verification: How will you prove old and new are equivalent?
  5. Prepare rollback: How will you revert if something goes wrong?

References

  1. Martin Fowler, Strangler Fig Application
  2. Sam Newman, Building Microservices
  3. Michael Nygard, Release It!