Claiming Architectural Reality, Part I — ADRs That Find You at Change-Time

This article opens a three-part series on Claiming Architectural Reality — an effort to make architectural knowledge living, contextual, and actionable.

  • Part I introduces the Architecture Decision Record Scanner pre‑commit hook — a simple way to keep architectural decisions visible and accessible inside the codebase.
  • Part II explores pairing ADRs with architecture fitness tests to ensure architectural integrity through CI/CD.
  • Part III focuses on AI‑assisted discoverability and validation to surface relevant ADRs and detect drift early.

Together, these parts outline a path from static documentation toward architecture that reacts, informs, and evolves with the system.

→ Jump to the Practical Application


Introduction

Every long‑lived software system faces a growing gap between code reality and architectural reality. Code evolves daily through commits, refactors, and merges, while architecture often lingers in forgotten documents or outdated diagrams. Over time, this divergence creates friction — developers start re‑deciding what was already decided, reconstructing architectural intent from memory or chat logs.

This disconnect is not new. Since the 1990s, teams have tried to bridge it through architecture documentation with goals like:

  • knowledge preservation and continuity
  • communication and alignment
  • onboarding and consistency
  • maintenance and evolution

These goals remain valid, but early documentation standards were often too heavy and detached from day‑to‑day development. That’s why Architecture Decision Records (ADRs) gained traction: they promised a lightweight, version‑controlled way to capture architectural decisions alongside code.

A typical definition summarises their purpose:

“An Architecture Decision Record (ADR) is a lightweight document that records a significant architectural decision, including the context, the problem it solves, the options considered, the chosen solution, and its consequences. By maintaining a collection of ADRs, teams create a decision log that provides a shared understanding of the project’s history for new and existing members, helps in consistency and future troubleshooting, and facilitates asynchronous collaboration through version control systems like Git.”

Despite this simplicity, ADRs can still decay into forgotten text files — unless they are actively used. The rest of this series explores how to change that.

→ Jump to the Philosophy and Practice


The promise of ADRs — and their common failure

ADRs are praised for being simple and easy to maintain, but in most teams, they eventually lose their connection to real work. The documents exist — often well-written — yet they stop influencing daily decisions. They become historical artifacts instead of living guides.

This disconnect happens for predictable reasons:

  • ADRs live far from the developer’s workflow.
  • They require deliberate effort to find and read.
  • Searching for “which ADR explains this pattern?” often feels slower than just re‑deciding.

When this happens, architectural documentation fails its purpose. ADRs should participate in the development loop — shaping choices, providing reasoning anchors, and keeping architectural intent visible when change occurs.

The question becomes clear:

How do we turn ADRs from passive documentation into active, living context?

The next section outlines the mindset and practical direction that guided our team toward an answer.


The philosophy behind answering the main question

Instead of creating yet another documentation process, our focus shifted to embedding architecture awareness directly into the developer workflow. The idea was to make architectural reasoning available at the same moment code decisions are made.

Architecture documentation doesn’t have to sit in a separate place or demand special attention. It can quietly surface when it’s most relevant — during a commit, a code review, or a CI pipeline run.

By moving from enforcement to integration, we aim to:

  • Reduce cognitive load by surfacing ADRs automatically rather than requiring manual search.
  • Encourage re‑use of reasoning instead of re‑inventing past solutions.
  • Maintain architectural integrity by connecting reasoning to code changes.
  • Keep decisions alive as evolving artifacts linked to real system evolution.

The principle guiding this is simple:

Architectural documentation should not require searching — it should find you when you need it.

This mindset became the foundation for our experiment: using a lightweight pre‑commit hook to bring ADRs closer to everyday work.


Translating philosophy into practice

Turning this principle into something tangible required starting small — solving the simplest piece of the problem first: visibility. We didn’t aim to automate architectural enforcement or validation right away; instead, we wanted to ensure that architectural reasoning could always be found where work happens.

This led to the creation of the ADR Scanner pre‑commit hook, a lightweight script that connects architectural documentation to the developer’s daily workflow.

The idea was straightforward:

  1. Keep the architectural decision records always visible and up‑to‑date inside the repository.
  2. Eliminate manual maintenance by automatically generating an index of ADRs.
  3. Help developers rediscover context when they make changes — without leaving their editor or terminal.

By introducing a minimal automation layer, we could close the gap between architectural intent and implementation. The next section walks through how the hook works in practice and what it enables for development teams.

Introducing the ADR Scanner pre‑commit hook

To bridge the gap between architectural intent and code, a simple automation was built: the ADR Scanner pre‑commit hook. It runs locally in developers’ environments and keeps architecture decision records visible without requiring additional effort.

The hook’s repository: https://github.com/entrofi/pre-commit-hooks

How it works

  • Scans the ADR directory recursively for Markdown files.
  • Extracts each file’s title and optional Decision section.
  • Generates a Markdown index containing links to all ADRs, updating sections between defined markers in target files.
  • Can update multiple target files in one run (for example, both README.md and a dedicated ADR index).
  • When run in summary mode, prints short decision summaries directly to the terminal, providing quick architectural context at commit time.
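The mechanics behind steps like these are simple enough to sketch. The snippet below is a minimal Python illustration of the marker-replacement idea, not the hook’s actual implementation; the function names are hypothetical, while the marker strings match the configuration shown later.

```python
import re
from pathlib import Path

def first_heading(md_text: str) -> str:
    """Return the first Markdown heading as the ADR title (illustrative helper)."""
    for line in md_text.splitlines():
        if line.startswith("#"):
            return line.lstrip("#").strip()
    return "Untitled"

def build_index(adr_dir: Path) -> str:
    """Build a Markdown bullet list linking to every ADR under adr_dir."""
    return "\n".join(
        f"- [{first_heading(md.read_text())}]({md.as_posix()})"
        for md in sorted(adr_dir.rglob("*.md"))
    )

def inject(target_text: str, index: str,
           start: str = "<!--adrlist-->", end: str = "<!--adrliststop-->") -> str:
    """Replace whatever currently sits between the markers with the fresh index."""
    pattern = re.compile(re.escape(start) + r".*?" + re.escape(end), re.DOTALL)
    return pattern.sub(f"{start}\n{index}\n{end}", target_text)
```

Because the regeneration is idempotent, running it on every commit keeps the index in sync without any manual bookkeeping.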

Why it matters

This simple automation ensures that architectural documentation remains discoverable and synchronized with code changes. Developers see decisions evolve alongside implementation, helping them reconnect with reasoning instead of rediscovering it.

Example configuration

repos:
  - repo: https://github.com/entrofi/pre-commit-hooks
    rev: v0.0.1
    hooks:
      - id: adr-scanner
        args:
          - "--src-dir=docs/architecture/architecture_decision_records"
          - "--target-file=docs/architecture/architecture_decision_records/index.md"
          - "--target-file=docs/index.md"
          - "--marker-start=<!--adrlist-->"
          - "--marker-end=<!--adrliststop-->"
          - "--group-by=subdir"
          - "--group-depth=1"
          - "--group-heading-level=2"
          - "--group-title-case=title"
          - "--exclude=index.md"
          - "--exclude=adr_template.md"
          - "--include-decision-summary"
      - id: adr-scanner
        name: adr-scanner (summaries only)
        stages: [ manual ]
        args:
          - "--src-dir=docs/architecture/architecture_decision_records"
          - "--summarise"
          - "--decision-summary-lines=all"

Example output

$ pre-commit run adr-scanner --all-files --hook-stage manual
adr-scanner (summaries only).............................................Failed
- hook id: adr-scanner
- exit code: 3

Decision summaries (all decisions):
- ✅ Platform Buyer Management – Exposure of the Service/Domain Level Data Objects from Network Level Components (REST) – ADR #01
  _Disallow exposure of **any** service-layer objects._

  REST APIs must always define **dedicated network-level DTOs** or representations.

- ✅ Platform Buyer Management – Module Package Structure – ADR #02
  Adopt **Option 1** — a **strict per-module root package structure** under a shared global namespace.

The hook doesn’t enforce architecture — it makes it easier to stay aware of it.

Benefits in daily workflow

Adopting the ADR Scanner hook has subtle but meaningful effects on how teams work with architecture:

  • Reduced context loss – decisions stay visible and relevant even as team members change or projects evolve.
  • Smoother onboarding – new developers can see why the system looks the way it does without deep dives into historical documentation.
  • Fewer repeated discussions – when reasoning is accessible, teams revisit decisions less often and focus on improving them.
  • Lightweight maintenance – automation replaces the manual upkeep of architectural indexes.

By embedding awareness rather than enforcing compliance, the hook gently shifts team habits toward continuously maintaining architectural context.


Limitations and next steps

Like any focused tool, the ADR Scanner hook solves a specific part of the problem — visibility. It doesn’t yet provide cross-repo awareness, decision validation, or AI-assisted discovery.

Future iterations will explore:

  • Pairing ADRs with architecture fitness tests to validate conformance in CI/CD.
  • Introducing assistive commit-time suggestions to connect code changes with relevant ADRs.
  • Leveraging semantic search and embeddings for smarter ADR discovery.

Each of these extensions aims to strengthen the same goal: making architectural reasoning findable, verifiable, and always in context.


Closing reflection — architecture as a living dialogue

Architecture documentation should not be a record of the past but a dialogue with the present. The ADR Scanner pre-commit hook is a small concrete step toward that — transforming ADRs from static files into part of the everyday rhythm of software development.

Ultimately, architecture isn’t only about diagrams or documents. It’s about the shared mental model that keeps a system coherent as it grows.

Terraform Basics Refresher

Quick Terraform refresher: what Terraform is, core concepts, dependency management, execution model, files, and best‑practice workflow for multi‑env IaC.

1. What Terraform Is

Terraform is an Infrastructure as Code (IaC) tool by HashiCorp for defining, provisioning, and managing cloud infrastructure declaratively via HCL (HashiCorp Configuration Language).

Terraform manages infrastructure lifecycle — create, update, destroy — by comparing your desired configuration to the actual state.


2. Core Concepts

a. Providers

Plugins that enable Terraform to interact with cloud or service APIs.

provider "aws" {
  region = "eu-central-1"
}

b. Resources

Define infrastructure components (e.g., EC2 instances, S3 buckets).

resource "aws_instance" "app_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"
}

c. Data Sources

Read-only access to existing data or resources.

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]
}

d. Variables

Enable parameterization and flexibility.

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

Use as var.instance_type.

e. Outputs

Expose values after provisioning.

output "instance_ip" {
  value = aws_instance.app_server.public_ip
}

f. State

Tracks deployed infrastructure and relationships. Enables Terraform to detect drift and manage updates efficiently. Stored locally or remotely (terraform.tfstate).
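Since the state file is plain JSON, it is easy to inspect programmatically. Here is a toy sketch against a trimmed, hypothetical state payload (the exact schema varies by Terraform version), listing resource addresses roughly the way `terraform state list` renders them:

```python
import json

# A trimmed, hypothetical terraform.tfstate payload (schema varies by version).
state_json = """
{
  "version": 4,
  "resources": [
    {"type": "aws_instance", "name": "app_server", "mode": "managed"},
    {"type": "aws_ami", "name": "ubuntu", "mode": "data"}
  ]
}
"""

state = json.loads(state_json)

# Data sources get a "data." prefix in their address; managed resources don't.
addresses = [
    (f"data.{r['type']}.{r['name']}" if r["mode"] == "data"
     else f"{r['type']}.{r['name']}")
    for r in state["resources"]
]
print(addresses)
```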

g. Modules

Reusable configuration units — each folder with .tf files is a module.

module "network" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

h. Backends

Define where state is stored (local, S3, GCS, Terraform Cloud, etc.).

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "envs/prod/terraform.tfstate"
    region = "eu-central-1"
  }
}


3. Terraform Workflow

Step  Command              Purpose
1     terraform init       Initialize project, download providers/modules
2     terraform validate   Validate syntax and configuration
3     terraform plan       Preview intended changes
4     terraform apply      Apply desired changes
5     terraform destroy    Remove all resources
6     terraform fmt        Format code
7     terraform show       Display current state

4. Dependency Management

Terraform builds a dependency graph automatically based on references. You can also specify explicit dependencies:

resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  depends_on    = [aws_vpc.main]
}
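Conceptually, Terraform turns those references into a directed graph and applies resources in dependency order. A toy Python illustration of that ordering (not Terraform internals; the resource addresses mirror the HCL above):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each resource maps to the set of resources it depends on,
# mirroring the implicit (ami) and explicit (depends_on) references above.
deps = {
    "aws_vpc.main": set(),
    "data.aws_ami.ubuntu": set(),
    "aws_instance.app": {"data.aws_ami.ubuntu", "aws_vpc.main"},
}

# static_order() yields dependencies before their dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)
```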


5. Execution Model

  1. Read Configuration – Parse .tf files.
  2. Refresh State – Sync with real infrastructure.
  3. Plan & Apply – Execute changes to match desired state.
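The plan step is essentially a diff between desired configuration and actual state. A toy illustration of that comparison (Terraform’s real planner is far more involved; the resource names here are made up):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Classify resources into create/update/delete, like a tiny planner."""
    return {
        "create": sorted(desired.keys() - actual.keys()),
        "delete": sorted(actual.keys() - desired.keys()),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

actual = {"aws_instance.app": {"instance_type": "t3.micro"}}
desired = {
    "aws_instance.app": {"instance_type": "t3.small"},  # attribute changed
    "aws_s3_bucket.logs": {"bucket": "logs"},           # new resource
}
print(plan(desired, actual))
```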

6. Common Files and Structure

main.tf          # resources
variables.tf     # variable definitions
outputs.tf       # outputs
provider.tf      # providers and backend
terraform.tfvars # variable values



JPA-based Spring Boot with Docker Database that contains snapshot data

Spring Boot JPA with dockerized snapshot data

This example demonstrates a Spring Data JPA-based Spring Boot setup backed by Docker database images. The images contain initial data to ease local development or to quickly spin up any staging environment.

The core dependencies of the example are as follows:

  • Spring Boot 2.5.0
  • Spring 5.3.7
  • Hibernate 5.4.31.Final
  • PostgreSQL driver 42.2.20
  • MySQL connector 8.0.25 (Alternative Database Option)

We are going to follow the listed steps throughout this example:

  1. Introduce PostgreSQL database as the default database to the application
  2. Create and run a PostgreSQL docker image backed by initial data
  3. Add entities and repositories to the application
  4. Test the initial setup
  5. Introduce MySQL database as a secondary database option to the application
  6. Create and run a MySQL docker image backed by initial data
Continue reading “JPA-based Spring Boot with Docker Database that contains snapshot data”

Java Local Variable Type Inference

Java 10 – Local Variable Type Inference

Introduction

Once written, a code block is read far more often than it is modified. Given this fact, it’s important to keep it as readable as possible for future readers. To help developers with this, the Java team introduced a feature called “Local Variable Type Inference” in Java 10.

The feature was introduced with JEP 286. Let’s take a look at an excerpt from the goals section of the JEP.

Goals
We seek to improve the developer experience by reducing the ceremony associated with writing Java code, while maintaining Java’s commitment to static type safety, by allowing developers to elide the often-unnecessary manifest declaration of local variable types

JEP 286: Local-Variable Type Inference
Continue reading “Java Local Variable Type Inference”

Coupling Metrics – Afferent and Efferent Coupling

How to help your code base to stand the test of time using fan-in and fan-out metrics? A short revisit of afferent and efferent coupling metrics.

There are different definitions of coupling types in software development, and each has a different perspective. One concept is shared among them, though: coupling in software is about the dependency relationships between modules. This leads us to the generic definition of coupling: “Coupling is the degree of interdependence between software modules …” [1]. Anyone who has dealt with coupling has likely heard the widely known advice that low coupling and high cohesion between software modules are crucial for well-structured, reliable, easy-to-change software. However, how can we know that our software design has the right level of coupling? To answer this, we will first revisit the afferent and efferent coupling concepts in this article. Secondly, we will explain the instability index, which relates these metrics to each other.

Afferent and Efferent Coupling as the Metrics of Coupling

A software quantum (a module, a class, a component) is considered more stable if it is loosely coupled to the other quanta in the system, because it will remain undisturbed by changes introduced to the others. The afferent and efferent coupling metrics, initially defined by Robert C. Martin in his books Agile Software Development and Clean Architecture, help us understand the probability that a change in one software module will force a change in others. Similarly, they guide us in seeing how likely a module is to be affected when an error occurs elsewhere in the system.
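The metrics themselves are easy to compute from a module dependency graph. Below is a small sketch using Martin’s instability formula I = Ce / (Ca + Ce); the module names are hypothetical:

```python
def coupling_metrics(deps: dict[str, set[str]]) -> dict[str, dict]:
    """Compute efferent (Ce), afferent (Ca) coupling and instability per module."""
    modules = set(deps) | {d for ds in deps.values() for d in ds}
    ce = {m: len(deps.get(m, set())) for m in modules}          # outgoing deps
    ca = {m: sum(1 for ds in deps.values() if m in ds) for m in modules}  # incoming
    return {
        m: {
            "Ce": ce[m],
            "Ca": ca[m],
            # I = Ce / (Ca + Ce); a module with no outgoing deps is maximally stable.
            "I": ce[m] / (ca[m] + ce[m]) if (ca[m] + ce[m]) else 0.0,
        }
        for m in modules
    }

# Hypothetical graph: web and batch both depend on core.
metrics = coupling_metrics({"web": {"core"}, "batch": {"core"}, "core": set()})
print(metrics["core"])
```

Here core is depended upon but depends on nothing, so its instability is 0.0 (stable), while web sits at 1.0 (unstable, free to change).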

Continue reading “Coupling Metrics – Afferent and Efferent Coupling”

GitHub’s Scientist as a helper for large refactorings

Using GitHub Scientist for large refactorings in Java

GitHub’s Scientist is a tool for creating fitness functions for critical-path refactorings in a project. It relies on the idea that, for a large enough system, behavioral or data complexity makes it hard to refactor the critical paths with tests alone. If we can run the new path and the old path in parallel in production without affecting current behavior, and compare the results, then we can decide the best moment to switch to the new path with more confidence.
I created a simple example to demonstrate the application of the concept on a Java project.
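The pattern itself is simple to sketch. The snippet below is a language-agnostic illustration written in Python for brevity (not Scientist’s actual API; the names are illustrative): the control path always wins, while the candidate is observed and compared.

```python
import time

def experiment(control, candidate, publish):
    """Run old and new code paths, compare them, and always return the old result."""
    t0 = time.perf_counter()
    control_result = control()
    t1 = time.perf_counter()
    candidate_result, candidate_error = None, None
    try:
        candidate_result = candidate()
    except Exception as exc:  # the candidate must never break production
        candidate_error = exc
    publish({
        "matched": candidate_error is None and control_result == candidate_result,
        "control_ms": (t1 - t0) * 1000,
        "candidate_error": candidate_error,
    })
    return control_result  # callers only ever see the legacy behavior

observations = []
result = experiment(
    control=lambda: sorted([3, 1, 2]),     # legacy path
    candidate=lambda: sorted([3, 1, 2]),   # refactored path under trial
    publish=observations.append,
)
```

Once the published observations show a sustained match rate, switching over to the candidate path becomes a low-risk decision.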

Prologue

Some software companies don’t pay enough attention to the overall quality of their codebase. We might even say that this is a common pattern in the software business. The rationale for such behavior is often the claim that “fast delivery to market” matters far more than the code quality aspects of the product, and that what counts for the business is simply whether the functionality is there.

During the early stages of a project, this claim might have the (false) appearance of being true: your codebase has not grown that large yet, and you are delivering to your customers with “unbelievable” velocity. Since this is the case, there seems to be no point in caring about this technical nonsense. As time goes by, however, this approach causes technical debt to pile up. It slowly starts to cripple your ability to maneuver in the market, makes your code harder to change, and degrades your developers’ motivation.

Continue reading “Github’s Scientist as a helper to do large refactorings”

CI/CD as Code Part IV – Stateless Jenkins Docker Container: Jobs as Code – Advanced

Stateless Jenkins Container with Docker

In the previous article of this example series, we created a stateless Jenkins Docker container that can be initialized solely by scripts. In that example, we also implemented a seed job for the jobDsl plugin, which was later used to create a simple automated Jenkins job defined inline. Now we are ready to shape our stateless Jenkins container to meet a more advanced requirement: advanced jobs as code.

We will extend the previous seedJob implementation to create more complex jobs programmatically. Our extended seedJob will poll a job definition repository and gather the information needed to create new jobs for other remote repositories.

Continue reading “CI/CD as Code Part IV – Stateless Jenkins Docker Container: Jobs as Code – Advanced”

CI/CD as Code Part III – Stateless Jenkins Docker Container – Jobs as Code

Stateless Jenkins Container with Docker

The purpose of this sample series is to create a simple set of examples showcasing CI/CD as Code, using Jenkins. The main goal is to create a stateless Jenkins Docker container setup that can be bootstrapped from a set of configuration files and scripts, so that many problems related to infrastructure maintenance and other operational issues are reduced.


The previous article of the series summarizes the steps to install and set up Jenkins programmatically. To see how it’s done, please visit CI/CD as Code Part II – Installing and Setting up Jenkins Programmatically

First Steps to “job as code”

Check out this example from here

We are going to introduce the necessary plugins to achieve the goal of programmatic job descriptions in this step and configure them accordingly via programmatic means.

Continue reading “CI/CD as Code Part III – Stateless Jenkins Docker Container – Jobs as Code”

CI/CD as Code Part II – Installing and Setting up Jenkins Programmatically

Stateless Jenkins Container with Docker: Installing and setting up Jenkins Programmatically

The purpose of this sample series is to create a simple set of examples, which showcases CI/CD as Code, using Jenkins. The primary goal is to create a stateless CI/CD setup, which can be bootstrapped from a set of configuration files and scripts so that many problems related to maintenance of the infrastructure and other operational issues are reduced.

We created a containerized Jenkins instance in the previous part of this example series, and we did most of the installation and configuration work via the user interface. In contrast, our target in this step is to automate most of that manual work so that we are one step closer to a stateless Jenkins Docker container.

Check out the example code from here

In summary, this basic setup will handle the following on behalf of an actual human being:

  1. Creation of the user(s) on Jenkins programmatically.
  2. Installation of basic plugins to get up and running with Jenkins.
Continue reading “CI/CD as Code Part II – Installing and Setting up Jenkins Programmatically”

CI/CD as Code Part I – Introduce a stateful Jenkins Docker Container

Stateless Jenkins Container with Docker

The purpose of this sample series is to create a simple set of examples, which showcases CI/CD as Code, using Jenkins. The primary goal is to create a stateless CI/CD setup, which can be bootstrapped from a set of configuration files and scripts so that many problems related to maintenance of the infrastructure and other operational issues are reduced.

We are going to follow a step-by-step approach in this series. You can reach each step using the links below.

Continue reading “CI/CD as Code Part I – Introduce a stateful Jenkins Docker Container”