The Pain Lies Beneath Modern Software Engineering

One of the younger engineers on my team was diving into Domain-Driven Design. He had been studying some of the well-known DDD books, while I mentored him and highlighted the importance of its strategic side — especially Ubiquitous Language.

Years earlier, when I first began exploring Domain-Driven Design myself, Ubiquitous Language was also the concept I struggled with most. It felt too vague and hard to grasp, and I couldn’t quite see its purpose — mostly because I wasn’t even aware of the problem it was meant to solve. Yet it was a problem I had been drowning in for years. I never imagined that developing a shared language could spark a quiet revolution in software development. Looking back, it seems so simple, almost obvious — maybe even a bit trivial, right?

And that was almost exactly the reaction I got from the young engineer on my team. After spending some time on it, he came to me and said: “Well, this feels obvious. Of course we need to speak a common language with all the stakeholders — and we already do. Is that it?” This was another “aha!” moment for me. The new generation has grown up in a world where all these principles, practices, and disciplines are just there by default — part of the landscape. How do we show them why they matter, and the struggles that gave birth to them? Or should we? What does it change?

In fact, what truly changes is how we embrace, comprehend, and apply these principles. Just as knowing our country’s history helps us understand why certain rules, principles, and customs exist, knowing the backstory of these practices helps us truly grasp the realities beneath them. It moves us beyond simply following instructions “because that’s how it’s done.” Instead, we embrace these principles as our own, understand the costs of not following them, carry their lessons forward, and even defend them when shortcuts tempt teams to cut corners. That deeper comprehension is what makes these practices permanent — and transforms us from passive practitioners into advocates who know why they matter.

Let’s ground this with a few real-world examples. What were the real pains hidden beneath these principles? To find out, we need to rewind to the days before practices like Ubiquitous Language, CI/CD, or Git — and see why they became essential.

Before Ubiquitous Language

Picture this simple business case: Sometimes a customer calls with a payment issue or a fraud suspicion, and we need to hold the order in place so it can’t be shipped, but it must stay visible for our support team until the problem is resolved.

  • Business stakeholder: “We need to be able to freeze an order.”
  • Middle translator (analyst/PM): “So they mean pause the process — I’ll tell the developer that we need to disable the workflow.”
  • Developer: “Okay, I’ll mark the order as inactive in the database.”

Now here’s the problem:

  • To the business, “freeze” meant temporarily stop processing but still keep the order visible to customer service.
  • To the translator, it meant pause the workflow until further notice.
  • To the developer, it meant make the order disappear from the system.

The case was A, the business said B, it was translated into C, and the developer implemented D. Nobody was entirely wrong, but healthy communication never happened. Each side used different terms for the same idea — and the result was misalignment.

The deeper issue was that the development team didn’t truly comprehend the business they were building for. They knew the mechanics of implementation but not the meaning or purpose behind it — nor what was happening on the customer side.

That lack of shared understanding created friction everywhere: endless clarifications, misaligned features, bugs caused by misinterpretations, and delivery cycles slowed by constant back-and-forth.

That was daily reality for many teams. Ubiquitous Language solved this by eliminating translation. That’s all: eliminating translation! When business and developers co-create and use the same terms — words that live both in conversations and in the code — developers finally gain a direct understanding of the business itself. Software becomes not just functional, but a faithful reflection of the domain it serves.
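To make this concrete, here is a minimal sketch in Python of what “words that live in the code” looks like. The names are invented for illustration (the conversation above never reached code); the point is that the business term “freeze” appears verbatim in the model, with exactly the meaning the business gave it:

```python
from enum import Enum


class OrderStatus(Enum):
    OPEN = "open"
    FROZEN = "frozen"   # the business term, straight from the conversation
    SHIPPED = "shipped"


class FrozenOrderError(Exception):
    """Raised when an operation is not allowed on a frozen order."""


class Order:
    def __init__(self, order_id: str) -> None:
        self.order_id = order_id
        self.status = OrderStatus.OPEN

    def freeze(self) -> None:
        # "Freeze" means: stop processing, but the order stays visible
        # to customer service. It is NOT deactivated or hidden.
        self.status = OrderStatus.FROZEN

    def ship(self) -> None:
        if self.status is OrderStatus.FROZEN:
            raise FrozenOrderError(f"Order {self.order_id} is frozen")
        self.status = OrderStatus.SHIPPED


order = Order("A-1001")
order.freeze()
print(order.status.name)  # FROZEN: still in the system, just not shippable
```

Had the developer instead written `set_inactive()`, the mistranslation from the story would be baked into the codebase. With `freeze()`, a stakeholder reading the code, or a developer reading the requirement, sees the same word with the same meaning.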

Before Iterative & Incremental Development

For decades, software projects were guided by the Waterfall model. The lifecycle looked neat on paper:

  1. Requirements → capture everything up front.
  2. Analysis → refine requirements, study feasibility, model the problem.
  3. Design → create detailed architecture and system design.
  4. Implementation (Coding) → build the entire system in one go.
  5. Testing/Verification → validate at the very end.
  6. Deployment/Delivery → release the “finished” system all at once.
  7. Maintenance → patch issues and apply fixes after delivery.

Picture this:

  • Client: “We need a Learning Management System for our organization. It should allow employees to enroll in trainings, track their progress, manage promotions via assigned exams, and give managers detailed reports.”
  • Analyst: “We’ll capture every detail in a requirements document — all the possible training flows, user roles, reports, and integrations. This will be our blueprint.”
  • Development team: “We’ll design and code everything according to the spec, and deliver in 10 months.”

This was exactly how one of my past projects failed. The client was a well-known, reputable organization, and the project cost them close to a million dollars. The software company I worked for delivered the system roughly within the promised timeframe. We followed the analysis and design documents to the letter.

But once the organization’s employees started using it, the complaints poured in: “This isn’t user-friendly.” “That’s not how we do things.” “We didn’t mean it this way.” “This doesn’t work for us.” Support tickets and change requests kept coming for months.

For six months, the cycle of fixes and back-and-forth continued. In the end, the system was abandoned. The users refused to adopt it — and, to be fair, they had good reason. The software simply didn’t reflect their reality.

No one saw working software until the very end. By the time the product was delivered, the organization’s needs had shifted, regulations had changed, and many of the original requirements no longer mattered. Worse, the real misunderstandings only surfaced once people actually started using the system — after months of effort and nearly a million dollars invested.

This painful pattern was common in the Waterfall era and gave rise to Iterative & Incremental Development, first explored in the late 1970s: deliver in smaller pieces, validate early, and adapt. Barry Boehm’s Spiral Model (1986) shaped this thinking, which later evolved into Scrum in the 1990s and the broader Agile movement in the 2000s.

Before Git

Not long ago, version control was primitive — or in many teams, nonexistent. Projects were passed around like this:

  • Developer A: “I’ll zip the project folder and email it to you.”
  • Developer B: “Got it. I’ll make my changes and put it in the shared folder as project_final_v2.zip.”
  • Developer C: “Wait! I just added a small part. I’ll send it to you both as project_final_final.zip.”

This was exactly the case in my first professional experience. We often had to plan carefully who would work on what, just to avoid stepping on each other’s toes. If two people needed to touch the same class or file, one had to wait until the other finished. It created delays, coordination overhead, and plenty of stress. And when a bug appeared, figuring out who changed what — and when — was a nightmare.

Git changed everything. With distributed version control, every developer had a full copy of the repository. Branching and merging became safe and fast, and collaboration scaled in ways that were impossible before. What once felt like chaos turned into a reliable foundation for teamwork.

To someone who has always used GitHub or GitLab, branching and pull requests might feel routine. But for those who lived through the era of project_final_v2.zip, Git was nothing short of a revolution.

Before CI/CD

This one doesn’t go that far back. Releases used to be big events for teams.

Code was developed for weeks — sometimes even longer — with little or no integration. Then, just before release day, everyone piled their changes together. That’s when the real nightmare began.

Checklists were read aloud step by step. Everyone was asked whether their scripts were ready or whether anything was missing. One missed step could mean hours of downtime. Merge conflicts stacked up, tests broke unpredictably, and nobody knew if the system would even run once deployed.

I remember “hell week” being a common phrase. Teams would sometimes spend several days locked in meeting rooms, untangling broken builds and chasing last-minute bugs. Deployments were risky, stressful, and exhausting. The only certainty was that production would behave differently than expected!

Continuous Integration and Continuous Delivery (CI/CD) was another revolution. Every commit could now be built, tested, and deployed automatically. Integration became routine, not a crisis. Developers got feedback within minutes instead of weeks, and deployments turned into safe, predictable events.

To someone who has only known Jenkins pipelines, GitHub Actions, or GitLab CI, automated builds may feel like background noise. But for those who lived through the pain of “hell week,” CI/CD was the difference between sleepless release weeks and true continuous flow.

Before Containerization

As I write this, I realize it’s been years since I last heard the phrase “It works on my machine.” Back then, it wasn’t a joke — it was the routine for every test and deployment. It even became such a cliché that people printed it on t-shirts.

A developer would ship code that ran flawlessly on their laptop, only for QA to watch it crash in test. Ops would then deploy the same code to production, where it failed again — this time because the server had a slightly different configuration, OS patch, or library version. Meanwhile, new hires could spend days (sometimes weeks) just trying to set up their environment to match the rest of the team.

A missing dependency, a misconfigured variable, or even a minor version difference in a library could cause hours of overhead. Instead of solving business problems, teams wasted enormous energy chasing environmental mismatches.

By packaging code together with its dependencies, configuration, and runtime environment, containers guaranteed consistency: if it worked on a developer’s machine, it worked everywhere. From laptops to test servers to production, the environment was the same.
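The idea is easiest to see in a container image definition. Below is a minimal, hypothetical Dockerfile for a small Python service (the file names and version are invented for illustration): the OS layer, the dependency versions, and the start command all travel with the code, so every environment builds the same thing:

```dockerfile
# Pin the base image so every environment starts from the same OS + runtime
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first, so this layer is cached
# unless requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The same start command runs on a laptop, in CI, and in production
CMD ["python", "app.py"]
```

Whether this image runs on a developer’s machine, a test server, or production, it carries its own environment with it — which is precisely what killed “it works on my machine.”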

For engineers who started with Docker and Kubernetes already in place, spinning up a container might feel routine. But for those who lived through the chaos of environment drift, containerization will always be remembered as a true revolution in software engineering 🙂 

Of course, it’s not possible to cover every modern software engineering practice here — and certainly not every story of pain that gave birth to them. Each one has its own history: long nights, failed projects, and lessons learned the hard way.

What matters is not memorizing every detail, but realizing that behind every principle lies a story. A story of why it exists, what problem it solved, and what chaos it prevented.

So here’s my encouragement to the new generation: don’t just accept practices like CI/CD, automated testing, or Domain-Driven Design as “the way things are done.” Go deeper. Explore the history. Ask why they came to be. The more you understand the pain that shaped these practices, the better prepared you’ll be to embrace them fully — and to apply them with purpose rather than by habit.

Modern software engineering isn’t just a toolbox. It’s a collection of scars, experiences, and lessons passed down. And it’s up to each generation to keep those lessons alive.

Thanks for reading 🙂