Part 5 of 8 in Living Architecture
How It Works: Deriving Architecture from Reality
From Concept to Concrete
In the last post, we introduced the idea of a "living architecture"—a dynamic model of your system that is continuously updated from reality. This might sound abstract, but the implementation is quite concrete. It relies on tapping into the rich streams of data that our modern software development practices already generate.
The core principle is this: architecture should be a byproduct of building and running the system, not a separate activity.
A living architecture is created by a pipeline that ingests, normalizes, and enriches signals from multiple sources. Let's look at the primary ones.
1. Code: The Structural Skeleton
The most fundamental source of truth is the code itself. Static analysis of your source code repositories can reveal:
- Components: What services, libraries, and functions exist.
- Dependencies: How they call and rely on each other.
- APIs: The contracts that define how components interact.
By analyzing every check-in, the living architecture model can understand the intended structure of the system and, crucially, how that structure evolves over time.
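To make this concrete, here is a minimal sketch of the idea for a Python codebase: walking a repository's source tree and extracting a module-level dependency graph from import statements. Real pipelines would use language-appropriate parsers and resolve packages properly; the function name and repo layout here are illustrative assumptions.

```python
# Sketch: derive a module-level dependency graph via static analysis.
# Assumes a flat repo of Python files; production tooling would handle
# packages, multiple languages, and service boundaries.
import ast
from pathlib import Path

def module_dependencies(repo_root: str) -> dict[str, set[str]]:
    """Map each module to the set of top-level modules it imports."""
    deps: dict[str, set[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        imports: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module.split(".")[0])
        deps[path.stem] = imports
    return deps
```

Running this on every commit, and diffing the resulting graphs, is what lets the model track how the intended structure drifts over time.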
2. CI/CD Pipeline: The Deployment Reality
Code alone doesn't tell the whole story. The Continuous Integration and Continuous Delivery (CI/CD) pipeline knows how that code is packaged and deployed. By instrumenting the deployment process, we can understand:
- Artifacts: How code is bundled into deployable units (e.g., containers, serverless functions).
- Infrastructure: Where these artifacts are deployed (e.g., which Kubernetes cluster, which cloud region).
- Configuration: The environment variables and settings that govern the runtime behavior.
This provides the physical manifestation of the logical architecture defined in the code.
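One lightweight way to instrument the pipeline is to have each deploy step emit a structured event that the architecture model ingests. The sketch below shows what such an event might look like; the field names are an illustrative assumption, not a standard schema.

```python
# Sketch: a CI/CD step that records the deployment reality as a
# structured event. The schema is hypothetical; the point is capturing
# artifact, infrastructure, and configuration in one place.
import json
from datetime import datetime, timezone

def deployment_event(service: str, image: str, cluster: str,
                     region: str, config: dict[str, str]) -> str:
    """Serialize one deployment as an event for the architecture model."""
    event = {
        "service": service,
        "artifact": image,  # the deployable unit, e.g. a container image
        "infrastructure": {"cluster": cluster, "region": region},
        "config": config,   # env vars / settings governing runtime behavior
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event)
```

Joined against the static dependency graph, these events tell you not just what a component is, but where it currently runs and how it is configured.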
3. Observability and Telemetry: The Runtime Behavior
This is where the living architecture truly comes alive. The logs, metrics, and traces from your observability platform reveal what is actually happening in production. This telemetry can answer questions that code and deployment pipelines cannot:
- Traffic Flow: Which services are communicating with each other right now?
- Performance: Where are the latency bottlenecks?
- Dependencies: Are there unexpected connections between services that weren't declared in the code?
By analyzing this stream of logs and telemetry, we can see the system's emergent, real-world behavior, including interactions that no one designed or documented.
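The most direct way to recover that behavior is from distributed traces: every span that crosses a service boundary is evidence of a real dependency. The sketch below reconstructs a caller-to-callee graph from spans using the common parent/child model (as in OpenTelemetry); the exact span fields are an assumption for illustration.

```python
# Sketch: reconstruct the observed service-to-service call graph from
# trace spans. Span shape (span_id / parent_id / service) is assumed.
from collections import defaultdict

def call_graph(spans: list[dict]) -> dict[str, set[str]]:
    """Return caller -> set of callees actually observed in the traces."""
    by_id = {span["span_id"]: span for span in spans}
    edges: dict[str, set[str]] = defaultdict(set)
    for span in spans:
        parent = by_id.get(span.get("parent_id"))
        # Only spans that cross a service boundary create an edge.
        if parent and parent["service"] != span["service"]:
            edges[parent["service"]].add(span["service"])
    return dict(edges)
```

Comparing this observed graph with the declared dependencies from static analysis is precisely how undeclared connections surface.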
4. Incidents and Operational Data: The Points of Failure
Finally, the history of incidents and other operational data provides a map of the system's weaknesses. By analyzing data from tools like PagerDuty or incident management systems, we can identify:
- Fragility: Which services are most often the source of outages?
- Cascading Failures: How do problems in one part of the system affect others?
- Cost: By integrating with cloud billing data, we can attach a real-dollar cost to every component and service.
This data adds a layer of operational reality, highlighting the parts of the architecture that carry the most risk and cost.
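A simple version of this layer is a report that joins incident counts with cost per service. The record shapes below are hypothetical stand-ins for what a PagerDuty export and a cloud billing report might provide.

```python
# Sketch: rank services by operational risk, joining incident history
# (which services fail most) with monthly cost. Input shapes are assumed.
from collections import Counter

def risk_report(incidents: list[dict],
                monthly_cost: dict[str, float]) -> list[tuple]:
    """Return (service, incident_count, monthly_cost), worst first."""
    counts = Counter(inc["service"] for inc in incidents)
    return sorted(
        ((svc, n, monthly_cost.get(svc, 0.0)) for svc, n in counts.items()),
        key=lambda row: row[1],
        reverse=True,
    )
```

Even this crude ranking makes the riskiest, most expensive corners of the architecture visible at a glance.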
By synthesizing these four streams of data—code, deployments, telemetry, and operational data—we can construct a high-fidelity, multi-faceted model of our system that is always current. This is the foundation of the living architecture.
In the next post, we'll explore the most exciting part: what it feels like to interact with this living model and ask it questions.
All posts in this series:
1. The Acceleration Trap: Why Architecture Can't Keep Up
2. The Mirage of Documentation
3. Management is Flying Blind
4. A Better Way: The Living Architecture
5. How It Works: Deriving Architecture from Reality
6. The Payoff: Asking Questions of Your Architecture
7. The Visual Payoff: Always-Accurate Diagrams
8. The Future: From Living Architecture to Sentient Systems