Overengineering in Onion/Hexagonal Architectures

Clean Architecture, Onion Architecture, and Hexagonal Architecture (aka Ports-and-Adapters) have become the norm for backend system design today. Influential voices have promoted these architectures without stressing enough that they are complex, exhaustive blueprints meant to be simplified to the actual problem at hand. Applying these architectures ‘by the book’ can lead to overengineering that wastes development effort, adds risk, and can impede a deep strategic refactoring on a different axis tomorrow. The purpose of this article is to point out common places where these software architectures can be simplified, and to explain the tradeoffs involved.

Throughout the history of software architectures, many influencers have stressed the importance of keeping the peripheral Integration concerns away from the central Domain area that handles the main complexity addressed by the application. This culminated with the rise of Domain-Driven Design that emphasized keeping your Domain supple, constantly looking for ways to deepen it, distill it and refactor it to better ways of modeling your problem.

The 3 styles of architecture above can be referred to as “Concentric Architectures”, as they organize code not in layers but in rings, concentric around the Domain. Note: in this article, the terms layer and ring are used interchangeably.

The pragmatic variations in this article might help you simplify the design of your application, but each requires the entire team to understand code smells and design values (cohesion, coupling, DRY, OOP, keep core logic in an agnostic domain). Any move you plan to take, start small and constantly weigh simplifications vs heterogeneity introduced in the code base.

If you prefer to watch a video, the first half of this video covers some of these points.


Useless Interfaces

Throughout my career, I’ve heard different variations of the ideas below (all are wrong):

  • Domain Entities must be used only through the interfaces they implement
  • Every Layer must expose and consume only interfaces. (in Layered Architecture)
  • Application Layer must implement Input Port interfaces. (in Hexagonal Architecture)

Besides the superfluous interface files that have to be kept in sync with changes in the signatures of their implementation, navigating code in such a project can be both mysterious (is there another implementation?) as well as frustrating (ctrl-click on a method might throw you into an interface instead of the method definition).

But why create more interfaces in the first place?

The excessive use of interfaces may originate from a very old Design Principle: “Depend on Abstractions, rather than Implementations“. The first level of understanding of this principle suggests using more interfaces and abstract classes to allow swapping in a different implementation via polymorphism at runtime. The second level of understanding is much more powerful but has nothing to do with the interface keyword: it speaks about breaking down the problem into subproblems (abstractions) on top of which to build a simpler, cleaner solution (eg the TCP/IP stack). But let’s focus on the former for this discussion.

Using interfaces allows swapping in a different implementation at runtime:

  • alternative implementations of the same contract (aka Strategy Design Pattern)
  • chains of “Filters” (aka Chain of Responsibility Pattern), eg web/security filters
  • test Fakes (almost eradicated by modern mocking frameworks): MyRepoFake implements IMyRepo
  • enriched variations of implementations (Decorator Pattern): Collections.unmodifiableList()
  • proxies to concrete implementations (Note: JVM-based languages do NOT require interfaces to proxy concrete classes)

All of the above are ways to increase the flexibility of your design using polymorphism. However, it is a mistake to introduce these patterns before they are actually used. In other words, be wary of thoughts like “Let’s introduce an interface there just in case“, as this is the very definition of the Speculative Generality code smell. Designing in advance perhaps made more sense decades ago when editing code required a lot of effort. But tools have evolved enormously since then. Using the refactoring tools of a modern IDE (eg IntelliJ), you can extract an interface from an existing class and replace the class references with the new interface in the entire codebase, in a matter of seconds with almost no risk. Having such a tool should encourage us to extract interfaces only when really needed.

Question any interface with a single implementation, in the same module.

As a matter of fact, the only reason to tolerate an interface with a single implementation is when the interface resides in a different compilation unit than its implementation:

  1. In a library used by your clients, eg my-api-client.jar
  2. Interface in an inner ring, implementation in an outer ring = Dependency Inversion Principle

The Dependency Inversion Principle (a hallmark of any concentric architecture) allows the Inner Ring (eg Domain) to call methods in an Outer Ring (eg Infrastructure), but without being coupled to its implementation. For example, a Service inside the Domain Ring can retrieve information from another system by calling methods of an interface declared in the Domain Ring but implemented in the outer Infrastructure Ring. In other words, the Domain is able to call Infrastructure without seeing the actual method implementation invoked. The direction of the call (Domain→Infra) and the direction of the code dependency (Infra→Domain) are inverted, hence the name, “Dependency Inversion”.
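To make the inversion concrete, here is a minimal framework-free sketch; all class and method names (ExchangeRateProvider, PricingService, etc.) are illustrative, not from any specific codebase:

```java
// Inner ring (Domain): owns the interface -- no infrastructure imports here.
interface ExchangeRateProvider {              // the "port", declared in the Domain
    double rateFor(String currency);
}

class PricingService {                        // Domain Service
    private final ExchangeRateProvider rates;
    PricingService(ExchangeRateProvider rates) { this.rates = rates; }

    double priceInCurrency(double basePrice, String currency) {
        // The Domain calls outward, but only through the interface it owns
        return basePrice * rates.rateFor(currency);
    }
}

// Outer ring (Infrastructure): depends on the Domain, never the other way around.
class HardcodedRateAdapter implements ExchangeRateProvider {
    public double rateFor(String currency) {
        return "EUR".equals(currency) ? 0.9 : 1.0; // stand-in for a real API call
    }
}
```

The compile-time dependency arrow (HardcodedRateAdapter → ExchangeRateProvider) points inward, while the runtime call goes outward: that inversion is the whole trick.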

Here’s one more (tragi-comic) argument in favor of interfaces that I once heard:
– Victor, we need interfaces to clarify the public contract of our classes!
– But what’s wrong with looking at the class structure for public methods? I replied
– Yeah… But there are over 50 public methods in that class, so we like to have them listed in a separate file, grouped nicely.
– me: (speechless)…
– Oh, and btw, the implementation class has over 2000 lines of code. 😱
I think it’s obvious that the useless interface was NOT their real problem…

In conclusion:

An interface deserves to exist if and only if:

  1. it has more than one implementation in the project, or
  2. it is used to implement Dependency Inversion to protect an Inner Ring, or
  3. it is packaged in a client library

If an interface does not match any of the criteria above, consider destroying it (eg by using the “Inline” refactoring in IntelliJ).

But let’s come back to the opening quotes:

  • Interfaces for Domain Entities – are wrong! None of the 3 reasons above apply:
    1. Having alternative implementations of Entities is absurd
    2. No code is more precious than your Domain (Dep Inversion against Domain?!)
    3. Exposing Entities in your API is a very dangerous move
  • Input Port interfaces in Hexagonal Architecture – are wrong!
    1. They have a single implementation (the Application itself)
    2. The API controller does NOT need to be protected (it is an outer ring)
    3. If you expose the Input Ports interfaces directly, that’s no longer the classic Hexagonal Architecture.

Strict Layers

The proponents of this approach state that:

  • “Each layer/ring should only call the immediate next one”

For this argument, let’s assume these 4 layers/rings:

  1. Controller
  2. Application Service
  3. Domain Service
  4. Repository

If we enforce Strict Layers, Controller(1) should only talk to Application Service(2) → Domain Service(3) → Repository(4). In other words, Application Service(2) is NOT allowed to talk directly to the Repository(4); the call must always go through a Domain Service(3). Taking this decision leads to boilerplate methods like this:

class CustomerService {
  public Customer findById(Long id) {
    return customerRepo.findById(id);
  }
}

The code above is almost a textbook example of a Code Smell called Middle Man, as this method represents “indirection without abstraction” – it does not add any new semantic (abstraction) to the method to which it delegates (customerRepo.findById). The method above does not improve code clarity, instead, it just adds one extra ‘hop’ in the call chain.

The (few) proponents of Strict Layers today argue that this rule requires taking fewer decisions (desirable for mid-junior teams), and reduces the coupling of the upper layers. However, one key role of the Application Service is to orchestrate the use case, so high coupling to the parts involved is usually expected. To put it another way, it is a design goal of the Application Service to host orchestration logic to allow lower-level components to become less coupled to one another.

The alternative to Strict Layers is called “Relaxed Layers“, and it allows skipping layers as long as the calls go in the same direction. For example, an Application Service(2) is free to call a Repository(4) directly, while at other times it might call through a Domain Service(3) first, if there’s any logic to push into the DS. This leads to less boilerplate but does require a habit of continuous refactoring: extracting logic that grows complex into Domain Services.
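A sketch of Relaxed Layers, with hypothetical names (OrderRepo, OrderCancellationService, etc.): the Application Service skips the Domain Service for a trivial read, but goes through it where there is an actual business rule to host:

```java
import java.util.*;

// (4) Repository (in-memory stand-in for illustration)
class OrderRepo {
    private final Map<Long, String> orders = new HashMap<>(Map.of(1L, "NEW"));
    String findStatus(long id) { return orders.get(id); }
    void save(long id, String status) { orders.put(id, status); }
}

// (3) Domain Service: exists only because there IS domain logic to host
class OrderCancellationService {
    private final OrderRepo repo;
    OrderCancellationService(OrderRepo repo) { this.repo = repo; }
    void cancel(long id) {
        if (!"NEW".equals(repo.findStatus(id)))
            throw new IllegalStateException("Only NEW orders can be cancelled");
        repo.save(id, "CANCELLED");
    }
}

// (2) Application Service
class OrderApplicationService {
    private final OrderRepo repo;
    private final OrderCancellationService cancellation;
    OrderApplicationService(OrderRepo repo, OrderCancellationService c) {
        this.repo = repo; this.cancellation = c;
    }
    // Trivial read: skip layer (3) and call the repo directly -- no Middle Man
    String getStatus(long id) { return repo.findStatus(id); }
    // Use case with a domain rule: route through the Domain Service
    void cancelOrder(long id) { cancellation.cancel(id); }
}
```

Note that no delegating one-liner Domain Service method was needed for the read path.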

One-liner REST Controller Methods

For more than a decade, we all agreed that:

  • The responsibility of the REST Controller layer is to handle the HTTP concerns and then delegate all logic to the Application Service

Indeed, that made perfect sense 10-20 years ago. When generating HTML webpages on the server side (think .jsp + Struts2), quite a lot of ceremony was required in the Controller. But today, if they talk over HTTP, our apps and microservices usually expose only a REST API, pushing all the screen logic into a front-end/mobile app. Furthermore, the frameworks we use today (eg Spring) have evolved so much that, if used carefully, they can reduce the responsibility of the controller to just a series of annotations.

The HTTP-related responsibility of a REST Controller today can be summarized as:

  • Map HTTP requests to methods – via annotations (@GetMapping)
  • Authorize user action – via annotations (@Secured, @PreAuthorize)
  • Validate the request payload – via annotations (@Validated)
  • Read/Write HTTP request/response headers – best done via a web Filter
  • Set the Response status code – best done in a global exception handler (@RestControllerAdvice)
  • Handle file upload/download – not fancy anymore, but the only really ugly HTTP-related thing that’s left

Note: The examples above are from Spring Framework (the most widely used Java framework), but equivalents exist for almost all of these features in other Java frameworks and in other languages with mature web stacks.

Therefore, unless you are uploading some files or doing some other HTTP kung-fu (WHY?!), there should be no more HTTP-related logic in your REST Controllers – the framework extinguished it. If we assume the data conversion (Dto ↔ Domain) does NOT happen in the Controller, but in the next layer, for example in the Application Layer, then the methods of a REST Controller will be one-liners, delegating each call to a method with a similar name in the next layer:

class WhiteController {
  public WhiteDto getWhite() {
    return whiteService.getWhite();
  }
}

Oh no! It’s again the “Middle Man” code smell we saw before – boilerplate code.

If what you read above matches your setup, you can consider merging your controller with the next layer (eg Application Service).

Merge Controllers with Application Services, when developing a REST API.

Yes, I mean annotating as HTTP endpoints (@GetMapping..) the methods in the first layer holding logic, be it Application Service or just “Service” (in our example, in WhiteService).

< awkward silence >

I know, it goes against some very old habits we followed until just a while ago. Probably the more years you’ve spent in the field, the more strange this decision feels. But times have changed. In a system exposing a REST API, things can (and should) be simplified more.
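Concretely, the merge can be sketched as below. To keep the sketch compilable without Spring on the classpath, the two annotations are declared as local stand-ins; in a real project they come from org.springframework.web.bind.annotation, and WhiteService/WhiteDto are illustrative names:

```java
import java.lang.annotation.*;

// Stand-ins for Spring's annotations, so this sketch compiles without Spring.
@Retention(RetentionPolicy.RUNTIME) @interface RestController {}
@Retention(RetentionPolicy.RUNTIME) @interface GetMapping { String value(); }

record WhiteDto(String name) {}

// The (Application) Service itself is exposed as the REST endpoint --
// there is no separate one-liner WhiteController in front of it.
@RestController
class WhiteService {
    @GetMapping("/whites/default")
    public WhiteDto getWhite() {
        // application logic lives here directly
        return new WhiteDto("default-white");
    }
}
```

The Middle Man controller disappears; the HTTP mapping becomes pure metadata on the first layer that holds logic.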

In the workshops I run, I’ve often encountered architectures that allowed the REST Controller to contain logic: mapping, more elaborate validations, orchestration logic, and even bits of business logic. This is an equivalent solution, in which the Application Service was technically merged ‘up’ with the Controller in front of it. Both are equally good: (a) having a Controller do bits of application logic or (b) exposing an Application Service as a REST API.

⚠️ There is one trap though: don’t accumulate complex business rules in the REST API component. Instead, constantly look for cohesive bits of domain logic to move to the Domain Model (eg in an Entity) or into a Domain Service.

One last note: if you are heavily annotating your REST endpoint methods for documentation purposes (OpenAPI stuff), you could consider extracting an interface and moving all the REST annotations to it. The Application Service will then implement that interface and take over that metadata.

Mock-ful Tests

Unit Testing is king. Professional developers thoroughly unit-test their code. Therefore:

  • Each layer should be tested by mocking the layer below

When the architecture mandates Strict Layers or allows One-liner REST Controller Methods (even if data conversion is performed in the Controllers), a rigorous team will ask the obvious question: should those trivial one-liner methods be unit-tested? How? Testing such methods with mocks leads to test code 5× larger than the code under test. Worse, it feels useless – what are the chances of a bug occurring inside such a method?

People might get frustrated (“Testing sucks!🤬”) or worse, relax their testing strictness (“Let’s not test THIS ONE🤫”).

In reality, what you are facing is honest feedback from your tests – your system is over-engineered. A seasoned TDD practitioner will quickly pick up the idea, but others will have a hard time accepting it: “when testing is hard, the production design can be improved“.

For completeness, I’d like to add one more suggestion to the classic statement above:

When testing is hard, the production design can be improved, or you’re testing too fine-grained.

If your integration tests fully cover that method, do you really need to unit-test it in isolation? Read about Honeycomb Testing for microservices and explore the idea that Unit Testing is just a necessary evil. When testing microservices, test from the outside in: start with Integration Tests, then add Unit Tests to cover corner cases.

Separate Application DTOs vs REST DTOs

When studying Clean Architecture by Uncle Bob, many people get the impression that the data structures exposed by the REST endpoints should be different from the objects exposed by the Application ring (let’s call the latter “ApplicationDto”). That is, an additional set of classes, converted from one into the other. Most engineers quickly realize the huge price of that decision and give up. Others, however, insist, and every field they later add has to be added to three data structures: the DTO sent/received as JSON, the ApplicationDto object, and the Domain Entity that persists the data.

Don’t do this! Instead:

Propagate REST API DTOs into the Application layer.

Yes, the Application Service would have a harder time working with API models, but that’s one more reason to keep it light, stripped of heavy domain complexity.

But is there a valid reason to have an ApplicationDto separated from the REST API DTO?

Yes, there is: when the same use case is exposed over two or more channels, for example REST, Kafka, gRPC, SOAP, RMI, RSocket, server-side HTML (eg Vaadin), etc. For those use cases, it makes sense to have your Application Service speak its own data structures (language) via its public methods. Then, the REST Controller will convert them to/from REST API DTOs, while (eg) the gRPC endpoint will convert to/from its own protobuf objects. When faced with such scenarios, a pragmatic engineer will probably apply this technique only for the few use cases that require it.
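A sketch of the multi-channel case, with illustrative names: the Application Service speaks its own DTO, and each channel adapter converts at the edge:

```java
// The Application speaks its own language
record CustomerInfo(String name, String email) {}

class CustomerApplicationService {
    CustomerInfo getCustomer(long id) {
        return new CustomerInfo("Jane", "jane@example.com"); // stand-in data
    }
}

// REST edge: converts to the REST API's own DTO
record CustomerRestDto(String displayName) {}

class CustomerRestAdapter {
    private final CustomerApplicationService app;
    CustomerRestAdapter(CustomerApplicationService app) { this.app = app; }
    CustomerRestDto getCustomer(long id) {
        CustomerInfo info = app.getCustomer(id);
        return new CustomerRestDto(info.name() + " <" + info.email() + ">");
    }
}
// a gRPC or Kafka edge would do its own, analogous conversion
```

With a single channel, this extra mapping layer is pure overhead; with two or more, it keeps the channel formats from leaking into the use-case logic.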

Separate Persistence from Domain Model

We finally reach one of the most heated debates around Clean/Onion Architecture:

Should I allow Persistence concerns to pollute my Domain Model?

In other words, should I annotate with @Entity my Domain Model and let my ORM framework save/fetch instances of my Domain Model objects?

If you answer NO to the questions above, then you need to:

  • create a copy of all your Domain Entities outside of the Domain, eg CustomerModel (Domain) vs CustomerEntity (Persistence)
  • create interfaces for repositories in the Domain that only retrieve/save Domain Entities, eg customerRepo.save(CustomerModel)
  • implement the repo interfaces in the infrastructure by converting Domain Entities to/from ORM Entities

In short: pain! bugs! frustration!
It’s one of the most expensive decisions to take, as it effectively increases the code 4 times or more for CRUD operations.
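The duplication looks roughly like this (illustrative names): every new field must now be threaded through two classes plus two mapping methods:

```java
// Domain model: free of persistence annotations
record CustomerModel(Long id, String name) {}

// Persistence model: in a real project this class carries @Entity, @Id, etc.
class CustomerEntity {
    Long id;
    String name;
}

// The conversion boilerplate that every single field must pass through
class CustomerMapper {
    static CustomerEntity toEntity(CustomerModel m) {
        CustomerEntity e = new CustomerEntity();
        e.id = m.id();
        e.name = m.name();
        return e;
    }
    static CustomerModel toModel(CustomerEntity e) {
        return new CustomerModel(e.id, e.name);
    }
}
```

Multiply this by every Entity and every CRUD use case, and the 4× code estimate becomes plausible.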

I’ve met teams that took this path and paid the price above; save for a few exceptions, all of them regretted the decision 1-2 years later.

But what makes respected tech leaders and architects take such an expensive decision?

What is the danger of using an ORM?

Using a framework as powerful as an ORM never comes for free.

Here are the main pitfalls of using an ORM:

  1. Magic Features: auto-flush dirty changes, write-behind, lazy loading, transaction propagation, PK assignment, merge(), orphanRemoval, …
  2. Non-obvious Performance Issues that might hurt you later: N+1 Queries, lazy loading, fetching useless amounts of data, naive OOP modeling,..
  3. Database-centric Modeling Mindset: thinking in terms of tables and Foreign Keys makes it harder to think about your Domain at higher levels of abstraction, finding better ways to remodel it (Domain Distillation)
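To illustrate pitfall 2, here is a framework-free sketch of the N+1 pattern: a fake data-access class counts the statements a real ORM would issue when a lazy association is touched inside a loop (all names are hypothetical):

```java
import java.util.*;

// Fake data access that counts "SQL statements", mimicking what an ORM
// does when a lazy association is loaded one row at a time.
class FakeOrm {
    int queryCount = 0;
    List<Long> findAllOrderIds() { queryCount++; return List.of(1L, 2L, 3L); }
    String loadCustomerOfOrder(long orderId) { queryCount++; return "cust-" + orderId; }
    Map<Long, String> loadAllCustomersOfOrders(List<Long> ids) { // JOIN FETCH-style
        queryCount++;
        Map<Long, String> result = new HashMap<>();
        for (long id : ids) result.put(id, "cust-" + id);
        return result;
    }
}

class Report {
    // N+1: 1 query for the orders + 1 extra query per order for its customer
    static int naive(FakeOrm orm) {
        for (long id : orm.findAllOrderIds()) orm.loadCustomerOfOrder(id);
        return orm.queryCount; // 4 statements for 3 orders; 101 for 100 orders
    }
    // Fixed: fetch everything in 2 statements, regardless of N
    static int batched(FakeOrm orm) {
        orm.loadAllCustomersOfOrders(orm.findAllOrderIds());
        return orm.queryCount;
    }
}
```

With a real ORM the naive version looks innocent in code; only the SQL log (or a profiler) reveals the statement count growing with the data.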

It’s not prudent to just ignore the points above. After all, they concern the deepest, most precious part of your application: the Domain Model.

So here’s what you can do about them:

1) Magic Features: Learn or Avoid. The amount of surprise in the audience during my JPA workshops has always concerned me – people driving a Ferrari without a driver’s license. After all, Hibernate/JPA’s features are far more complex than the ones you find in Spring Framework. But somehow, everyone assumes they know JPA. Until magic strikes you with a dark bug.

You can also block some of the magic features, for example, detaching entities from Repo and not keeping transactions open, but such approaches usually incur a performance impact and/or unpleasant corner cases. So it’s better to make sure your entire team masters JPA.

2) Monitor Performance early: there are commercial tools available to help a less experienced team detect typical performance issues associated with ORM usage, but the easiest way to prevent them (and learn at the same time) is to keep an eye on the generated SQL statements with tools like Glowroot, p6spy, or any other way to log the executed JDBC statements.
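As a starting point, assuming a Spring Boot application with JPA/Hibernate, the generated SQL can be surfaced with a couple of standard properties (p6spy and Glowroot need their own setup, not shown here):

```properties
# Log every SQL statement Hibernate executes:
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
# Alternatively, via the logging framework:
logging.level.org.hibernate.SQL=DEBUG
```

Watching this log while developing a feature is often enough to catch an N+1 or an oversized fetch before it reaches production.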

3) Model your Domain in an Object-Oriented way. Even if you are using incremental migration scripts (and you should!), always be on the lookout for model refactoring ideas, and try them out by letting your ORM generate the schema while exploring. Object-Oriented reasoning can provide far deeper Domain insight than thinking in terms of tables, columns and foreign keys.

To summarize, my experience showed me that it’s cheaper and easier for a team in the long run to learn ORM than to run away from it. So allow ORM on your Domain but learn it!

Application Layer decoupled from Infrastructure Layer

In the original Onion Architecture, the Application Layer is decoupled from the Infrastructure Layer, aiming to keep ApplicationService logic separated from 3rd party APIs. Every time we want to get data from a 3rd party API in the Application Layer, we need to create new data objects plus a new interface in the Application Layer, and then implement it in the Infrastructure (the Dependency Inversion Principle we mentioned earlier). That’s quite a high price for decoupling.

But since most of our business rules are implemented in the Domain layer, the Application layer is left to orchestrate the use-cases, without doing much logic itself. Therefore, the risk of manipulating 3rd party DTOs in the Application Layer is tolerable, compared with the cost of decoupling.

Merge Application with Infrastructure Layer.

= “Pragmatic Onion” 😏

Merging the Application with Infrastructure Layer allows free access to 3rd party API DTOs from ApplicationServices.

This way, you end up with only 2 layers/rings: Application (including Infrastructure) and Domain.

The risk: if core business rules are not pushed into the Domain Layer, but kept in the Application Layer, their implementation might be polluted and become dependent on 3rd party DTOs and APIs. So the Application Services must be stripped of business rules even more rigorously.

There is an interesting corner case here. If you lack persistent data (if you don’t store much data), then you don’t have an obvious Domain Model to map to/from the 3rd party data. In such systems, if there is serious logic to implement on top of those 3rd party objects, at some point it might be worth creating your own data structures to which to map the external DTOs, so that you have control over the structures you write your complex logic with.

Not Enough Domain Complexity

Like any tool, concentric architectures are not suitable for every software project. If the domain complexity of your problem is fairly low (CRUD-like), or if the challenge of your application is NOT in the complexity of its business rules, then the Onion/Hexagonal/Ports-Adapters/Clean Architecture might not be the best choice, and you might be better off with vertical slicing, an anemic model, CQRS, or another type of architecture.


In this article, we looked at the following sources of overengineering, waste, and bugs when applying the Concentric Architectures (Onion/Hexagonal/Clean):

Useless Interfaces => remove them unless ≥2 implementations or Dependency Inversion

Strict Layers => Relaxed Layers = allow calls to skip layers while going in the same direction

One-liner REST Controllers => merge them with the next layer, but keep their complexity under control

Mock-ful Tests => Collapse layers or Test a larger chunk

Separate Application ↔ Controller DTOs => use a single set of objects unless there are multiple exposure channels

Separate Persistence from Domain Model => don’t! Use a single model, but learn the ORM magic and performance pitfalls.

Decouple Application from Infrastructure => merge the Application and Infrastructure Layers, but push business rules even harder into the Domain

PS: I am constantly debating these topics with different teams during my workshops. It’s my duty to keep this content as accurate as possible. So please help me refine the content with YOUR experience in the comments below. Thank you!

Comments


  1. Manos Tzagkarakis says:

As with every architecture there are trade-offs. Your proposals have trade-offs as well (you propose architecture-related decisions after all). Team size and maturity affect those kinds of decisions because of broken window theory.
    An example:
    A single controller calling the repo for a GET. Then the same kind of GET needs a filter(s) via query params which need some kind of validation. My guess is that the experience of the developer will determine where those changes are implemented and how and what will he/she test via unit or integration tests (if at all).

    And all I am saying is that there are trade offs and you present them as silver bullets. Taking shortcuts yes, but consciously.

    1. You are right to put the focus more on expanding behavior than contracting (simplifying) it.

      To work out your example: basic validation can rapidly be implemented with annotations on the search criteria data structure (that you can unmarshal from your query params), while more elaborate validation I would implement right in the controller method, until it grows *big*, then extract it. Indeed it’s a matter of experience and personal taste, but I believe there are also lots of established guidelines around how to grow the complexity.

      Validation in particular poses more problems when it needs to be enforced across the entire app (in the Domain Model), for Commands rather than Queries.

      But in the end, it’s about avoiding repetition of logic (DRY principle) and keeping a decent amount of complexity in each component (SRP principle).

      Indeed, my position is to start small and grow the design continuously, iff needed. And yes, this calls for an above-average level of maturity in the team.

  2. Matthias Perktold says:

    Great article!

I’m curious about “Separate Persistence from Domain Model”. I totally agree that having a separate model of JPA entities besides your actual domain model is a bad idea. On the other hand, the concentric architectures do suggest pushing persistence out to the infrastructure layer, and I think that really brings some advantages.

    So I always interpreted this like you cannot access JPA API like EntityManager from inside your domain, but you can use JPA annotations to annotate your domain entities.
If you want to use/test the domain in isolation, you can ignore the annotations, they don’t do anything by themselves. And you can have the repository interface as part of the domain, and implement it in the infrastructure layer.

    What do you think of such a hybrid approach?

In order to avoid creating delegating methods for findById or save in that implementation in infrastructure, you could have the Spring Data JPA interface in the domain, extending a “Custom” interface that is implemented OUTSIDE of the domain. In other words, @Query annotations and CRUD methods will be part of the Domain interface, but the dynamic queries that you need to manually implement (with code, IFs, …) can be moved to infrastructure.

      Pro: all the code in the Domain can be unit tested without a DB up => you could impose higher coverage targets
      But I usually demand 100% mutation coverage anyway for any dynamic query we write.

      Cons: there are some dynamic queries that include bits of domain knowledge, but they are moved out of the Domain.
      A fix for the cons would be using Spring Specifications to implement ‘predicates’ to combine in the Domain to get the stuff done.

It probably also depends on the way you write those queries. If it’s something readable like concatenating JPQL fragments(❤️) or QueryDSL, probably that code is not ugly enough to throw out of the Domain. If it’s complex queries using the Criteria API (🤢), that code probably needs to be hidden away somewhere (in Infra).

      Interesting to see Spring Framework already provides support out of the box for such arch decisions.

      1. Hi Victor,

        I caught this part in your comment:
        > you could have the spring Data JPA interface in the domain, extending a “Custom” interface that is implemented OUTSIDE of the domain.

        As far as I can tell, you’re not allowed to declare the “Impl” class which implements the “Custom” interface in a different package other than the interface itself

        Or I’m maybe misunderstanding something here?

You’re right. Spring Data JPA has this awkward limitation. It only looks for the “Impl” classes under the package (or subpackages) where you defined the Repo interface. That’s a pity, since the only option left is to define a separate interface (not extending JpaRepository) with only the methods we need to manually implement, and implement this interface in infra.

          However, today I believe I’d question this – is using Entity Manager such a terrible thing to have in your Domain?
          Removing JPA from a large project becomes extremely risky/ virtually impossible anyway – perhaps I’d choose my battles, and accept to live with JPA next to me. After all, the queries we write are traversing our own Domain Model.

  3. It is a good sign to see that other people walked the same path.
    You mention many painful aspects and I like your approach overall.

    I will dare to mention a few differences compared to what I got:
    1. One-liner REST Controller Methods: I still prefer not to pollute the App layer with HTTP-specific annotations. Then there is the fact that often the internal contract might differ a bit from the external contract (e.g. the REST API). And last but not least, I seem to prefer a fluent, non-flat internal API, while the HTTP (or other) part with its annotations fits better into a flat structure.
    2. Separate Application DTOs vs REST DTOs: that’s easy, if they are the same use the App DTOs. If there is a difference for whatever reason, just convert from one to the other.
    3. Separate Persistence from Domain Model: having a separate persistence model is very often an overhead. The way I prefer to address this issue is to simply have a typeless resultset which gets transformed from/to the domain model structure by either a dedicated transformer or by an automatic one (for trivial and easy models). Thus I have just one source of truth and in addition, I tell the ORM what to do and not vice versa.
    4. Application Layer decoupled from Infrastructure Layer: the example was somehow not convincing. I’d rather have Domain & App together and Infra separate. But the whole separation topic is quite hard to solve in a clean way.

    So once again, excellent topic and a must read for every developer trying to follow Onion/Hexagonal-like Architectures.

    1. First of all, thank you for your comment.

      I am sure that good reasons have led you to your conclusions, but I’d still love to debate them a bit:

      1) There are 3 ideas
      – If you believe the annotations clutter your application code, you can always extract an interface (implemented by application) and move that metadata in the interface. I’ve seen that done in several projects, especially when they had to also document their APIs (OpenAPI docs)
      – Why would the Internal API differ from the External API? Assuming the application exposes its behavior over a single channel (eg REST), why would the two contracts be different? To me, there should be a good reason to justify the extra mapping effort imposed on the other endpoints (if you make it an architectural guideline).
      – The magic is probably in the Fluent API that you mention, exposed by the Application layer. But writing a fluent API requires careful effort, so what’s the benefit? I come to suspect that in the Controller you have significant logic built on top of that fluent API. In fact, if we distill it a bit, we might be looking at two levels of abstraction: one is the internal API, on top of which the controller adds more logic.
      Btw: do you have a Domain Service layer also?

      2. Can you give me a brief example of such a reason?

      3. If you map a typeless result set into your entities, I wouldn’t say you are using an ORM anymore, but maybe just a data mapper (iBatis style). ORM, if embraced, always tends to surround you. So it might be more fair to say you discarded the ORM workflow altogether. That is also an option for highly complex domains, legacy databases, or tough performance requirements. Although in general, I’m not a big fan of this approach, tbh.
      Or maybe I misunderstood?

      4. I see. So you consider the external APIs more damaging to your code than your OWN API (that you expose). Indeed, any ‘separation’ has a high cost and leads to boilerplate, so perhaps in the end it’s about what you want NOT to care about.
      In my practice, I found it more useful to separate the ‘standard’ responsibilities of an endpoint, like validation, transaction, mapping, orchestration (in the Application), from the deep, weird business rules (in the Domain).

      But, as always, it depends on what you have in front of you.
      Thank you

      1. Thank you for replying to my comment. Every discussion is welcome, as it helps us improve our work, so let me answer your comments/questions.

        1. I would avoid putting the annotations in an interface and implementing it in the application, because the dependency should point in exactly the opposite direction. The delivery mechanism should depend on the application and not the other way round. The annotations are bound to one of the possible delivery mechanisms, and the app is not supposed to know about it.
        The internal API might differ for many reasons. A simple example is that some functionalities are available only through a console command or are triggered by an event bus.
        As for the controllers, I have no logic on top of the API, just calls and translation from/to the HTTP language, mostly done via annotations.
        2. Contrary to your suggestion to “Propagate REST API DTOs into the Application layer”, in my comment I meant that the App is the boss. I know that in practice our App layer does what our REST API promises, but this is more of a coincidence. It is not easy for me to give an example of separate REST vs App DTOs since I try to avoid duplication, but let’s say the App layer DTO delivers some values that should be removed from the REST API response for whatever reason (e.g. security).
        3. It might be influenced by the programming language I am using but, trust me, the way the data is structured closely matches the domain model structure, while at the same time avoiding boilerplate. The most important detail here is to never let the relational model enforce anything on the domain, as some ORMs tend to do.
        4. They might indeed be more damaging, since this is the place where typeless, unsafe, not-yet-validated data structures and values meet our safe, well-typed code. On the other side, we have the well-known challenge of domain purity vs completeness vs performance, and this is where the closer collaboration between Domain and App provides benefits.

        Thanks once again!
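        To make the point about separate App vs REST DTOs concrete, here is a minimal Java sketch (all class and field names are hypothetical) of an App-layer result that carries a value the REST response deliberately drops:

        ```java
        // Application-layer result: the complete contract, owned by the App layer.
        class CustomerResult {
            final String name;
            final String riskScore; // sensitive: must not leak over REST

            CustomerResult(String name, String riskScore) {
                this.name = name;
                this.riskScore = riskScore;
            }
        }

        // REST DTO: a deliberately narrower external contract.
        class CustomerResponse {
            final String name;

            CustomerResponse(String name) { this.name = name; }

            // The mapping lives in the adapter (controller), not in the App layer.
            static CustomerResponse from(CustomerResult result) {
                return new CustomerResponse(result.name); // riskScore intentionally dropped
            }
        }

        public class DtoMappingSketch {
            public static void main(String[] args) {
                CustomerResult appResult = new CustomerResult("Alice", "HIGH");
                System.out.println(CustomerResponse.from(appResult).name); // prints Alice
            }
        }
        ```

        The cost is the extra mapping; the benefit is that the external contract can evolve (or hide fields) without touching the App layer.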

        1. 1a. The moment I would see the application (useful) code being polluted by the delivery mechanism, I’d create a Controller to move the garbage there. Fun fact: just 2 days ago I had an example in which an OpenAPI-generated interface was NOT clean -> a controller made perfect sense there.

          1b. Agree, as soon as some functionality has to be exposed over 2 channels (REST API + event bus), its implementation should not be coupled to either of them.

          2. I see, but then why are those fields in the App DTO? If we’re talking about ‘null-ing’ some fields for GDPR/security visibility issues, I saw that done with annotations placed on DTO fields, based on which a transverse filter/aspect cleared those fields for less-powerful user profiles. So, imho, security should generally be implemented as a transverse concern, for safety reasons. But, ofc, it depends.

          3. Too often I’ve seen developers aligning the Domain Model to their DB (wrong!) because they did not use the more advanced ORM features (the classic example is @Embeddable). The domain should be Object-Oriented and clean.

          4. 👌

          Thank you!
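          The annotation-driven redaction mentioned in point 2 could be sketched in plain Java with reflection; the annotation name and DTO below are hypothetical, standing in for what a real transverse filter/aspect would do:

          ```java
          import java.lang.annotation.*;
          import java.lang.reflect.Field;

          // Hypothetical marker for fields that less-powerful profiles must not see.
          @Retention(RetentionPolicy.RUNTIME)
          @Target(ElementType.FIELD)
          @interface Redacted {}

          class AccountDto {
              String owner = "Alice";
              @Redacted String iban = "RO49AAAA1B31007593840000";
          }

          public class RedactionFilter {
              // A transverse filter/aspect would invoke this right before serialization.
              static void redactFor(Object dto, boolean privilegedUser) {
                  if (privilegedUser) return;
                  for (Field f : dto.getClass().getDeclaredFields()) {
                      if (f.isAnnotationPresent(Redacted.class)) {
                          try {
                              f.setAccessible(true);
                              f.set(dto, null); // clear the sensitive value for this profile
                          } catch (IllegalAccessException e) {
                              throw new RuntimeException(e);
                          }
                      }
                  }
              }

              public static void main(String[] args) {
                  AccountDto dto = new AccountDto();
                  redactFor(dto, false);
                  System.out.println(dto.owner + " / " + dto.iban); // prints Alice / null
              }
          }
          ```

          The point is that no endpoint has to remember to clear the field; the cross-cutting filter does it for every DTO that carries the marker.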

  4. Aleksandar says:

    Thank you for creating this article, it’s good to know potential simplifications.
    Couple of comments:
    1. Wouldn’t the “Interface segregation principle” justify creating interfaces as well? I like using interface segregation with CQS.
    2. Regarding vertical slicing as an alternative to Onion arch (when domain complexity is low), there’s still an option to use vertical slicing with simplified Onion arch inside each slice.
    3. Middle man methods may be useful, let’s say in Clean architecture. Unlike Onion, in Clean arch Use cases have a mixed role of both Application and Domain services – they do orchestration and may contain parts of business logic. But they also describe the intention, the use cases. So, even if they’re sometimes just wrappers around repository, it makes sense to have them as they describe the implemented feature. Plus, they can protect Controllers from repository changes needed for other controllers.

    1. Thank you for sharing your thoughts!

      1. The Interface segregation principle technically means interfaces should be more specific to their clients. But the word “interface” there does not necessarily mean a java/c#/.. “interface” type. It just means a cohesive set of public methods. Creating interfaces in front of classes without polymorphism, client-jar separation, or dependency inversion IMO amounts to waste.
      However, I might be missing something: what do you mean by ISP + CQS? Could you elaborate a bit on your practice, please?

      2. You mean little onions, one in each slice? Yes and no.
      Yes, when the slices are large enough, you get a “Modulith”. There, one could use the onion to protect the domain of each ‘submodule’ both against the outside world and against sibling modules. There are many tricks there; it’s half a day of debates on strategies and techniques. But YES.
      No, when the slices are super thin, like “features”, because then these features don’t contain enough complexity, and onion-ing them would just be overengineering. The way I see it, the purpose of the onion is to protect a complex domain that typically has a lot of entry points and does a lot of state transitions.

      3. If the method names in Use-Cases reveal an intent that adds semantic value over the names of the repos, then they start to bring real value. Middle Man happens when there is ‘indirection without abstraction’ -> when the method hop (indirection) tells you nothing new (it has the same name as the method it delegates to).
      Now, regarding protecting Controllers… I see your point, but why would you want to protect a controller? That’s a peripheral component, usually not doing much, that I couldn’t say I care much about.
      Let me twist this around: I agree with you, but then, why not make that use case become a Controller too (assuming that only means adding several annotations here and there)?

      Thank you for your thoughts, looking forward to your answer.
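      The ‘indirection without abstraction’ distinction from point 3 might look like this in code (a minimal sketch, with hypothetical names and an in-memory repository standing in for a real one):

      ```java
      import java.util.HashMap;
      import java.util.Map;

      class Order {
          final long id;
          String status = "NEW";
          Order(long id) { this.id = id; }
      }

      class OrderRepository {
          private final Map<Long, Order> store = new HashMap<>();
          Order findById(long id) { return store.get(id); }
          void save(Order order) { store.put(order.id, order); }
      }

      class OrderUseCases {
          private final OrderRepository repo;
          OrderUseCases(OrderRepository repo) { this.repo = repo; }

          // Middle Man smell: indirection without abstraction -- the hop has
          // the same name and the same semantics as the call it wraps.
          Order findById(long id) {
              return repo.findById(id);
          }

          // Adds semantic value: the name states an intent ("cancel") and the
          // method enforces a business rule the repository knows nothing about.
          void cancelOrder(long id) {
              Order order = repo.findById(id);
              if ("SHIPPED".equals(order.status))
                  throw new IllegalStateException("Shipped orders cannot be cancelled");
              order.status = "CANCELLED";
              repo.save(order);
          }
      }
      ```

      The first method is pure delegation; the second one earns its place by naming a use case and guarding an invariant.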

      1. Aleksandar says:

        Victor, thank you for replying!
        1. Here’s one example. A feature wants to expose its APIs to other features. The mechanism for exposing the APIs is described with one interface, let’s say an interface with a method “registerAPI”. The mechanism for using the APIs is exposed through another interface with a method “getAPI”. A single Impl class located in the shared kernel implements both interfaces, as they make a cohesive whole. Now, the feature that exposes its APIs uses that Impl class, but only through the “registerAPI” interface. That’s the only thing that feature needs. At the same time, the other feature, which uses the API, uses the same Impl class, but through the other interface, “getAPI”. That’s the interface segregation principle in my mind, as these interfaces hide unneeded public parts of the implementation. Both features use only the parts of the class implementation they need.
        2. Yes, I agree, we don’t want such slices to be super thin, as that defeats the purpose. I actually heard a different term that describes this -> “Modular monolith” instead of “Modulith”. Here MiniService is mentioned as well; it all depends on the level of granularity: https://medium.com/att-israel/will-modular-monolith-replace-microservices-architecture-a8356674e2ea
        3. Well, I would like to protect the controller for the sake of SRP 🙂 If the repository changes for some other purpose, should the controllers have to change as well?
        Regarding your twist: if the UseCase becomes a Controller, then we are deleting the entire Clean arch Adapters layer. UseCases deal only with the internal model, while controllers adapt internal models to REST ones and vice versa. And please note, REST DTOs are propagated in the Adapters (or Application) layer here 🙂
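        Aleksandar’s point 1 above (the registerAPI/getAPI pair) could be sketched roughly as follows; the interface and method names come from his description, while the Runnable payload is a placeholder for whatever the real API type would be:

        ```java
        import java.util.HashMap;
        import java.util.Map;

        // Interface seen by the feature that publishes an API.
        interface ApiPublisher {
            void registerAPI(String name, Runnable api);
        }

        // Interface seen by the feature that consumes an API.
        interface ApiLookup {
            Runnable getAPI(String name);
        }

        // Single implementation in the shared kernel; each feature depends
        // only on the narrow interface it actually needs (ISP).
        class ApiRegistry implements ApiPublisher, ApiLookup {
            private final Map<String, Runnable> apis = new HashMap<>();
            public void registerAPI(String name, Runnable api) { apis.put(name, api); }
            public Runnable getAPI(String name) { return apis.get(name); }
        }
        ```

        The publishing feature receives the registry typed as ApiPublisher, the consuming feature as ApiLookup, so neither can accidentally use the other half of the implementation.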

  5. Highly controversial article. I like it.
    In my experience, it’s much easier to start by going “by the book”. I don’t like useless interfaces, I don’t like “middle man” abstractions, but it’s much easier to follow these approaches.
    All this complexity introduces quite a bit of code – yes. But all this code is super dull: it’s a bunch of mappers, with little to no logic, which is extremely easy to test. Create an entity, shove it into the mapper, and assert the domain model – it doesn’t get any easier than that. On the other hand, learning and understanding(!) ORM magic is not an easy task. Yes, it’s the right thing to do, but it’s just not feasible to ask that of every developer. And we all know what might happen when half the code constantly mutates entities left, right, and center while struggling with transactions, “LazyInitializationException”, etc.

    Overall, I would say I will keep this article in mind and re-read it once or twice. But pushing it to a wider audience – not really. Someone might take it as a justification for bad practices: “Why do we need architecture? Let’s just use controller->service->repository and be done with it.”

    1. I must say some high-tech companies doing microservices do exactly that: start very simple. If we split by domain into small-enough pieces (microservices or modules), there is less focus on layers and rings. On the other end, we have The Majestic Monolith, which indeed requires a much more careful design.

  6. Thank you for the article, I will bookmark it for future reference.
    Two things caught my eye:
    1. “test Fakes (almost eradicated by modern mocking frameworks): MyRepoFake implements IMyRepo”
    I think that this approach is not really necessary anymore. That is, sure, I can have Mockito but I feel it is cumbersome in some situations.
    Hiding IO behind interfaces (so just hexagonal architecture, really) is very handy in my case, where I just create a fake repo with a hashmap.
    Not only can I check exactly what is saved there, but I also don’t need to wire up many dependencies in save/fetch methods: because of this I can
    test the whole flow of my business logic and be quite sure it is safe.

    2. “In other words, should I annotate with @Entity my Domain Model and let my ORM framework save/fetch instances of my Domain Model objects?”
    Your answer was no, but I’d challenge that a bit: in this article https://victorrentea.ro/blog/control-your-data-structures/ you argued that people should avoid using foreign DTOs in their core logic
    because you have no control over the fields. I’d say (though it is far-fetched) that the same principle applies here: you are the one creating the entity DTO, and despite that,
    there is a lot of additional magic you have no control over (or might not know about). As some people have already pointed out, I’d also say there is very rarely a developer
    who has a firm grasp of ORMs; usually the knowledge stays on the surface of dirty checking, caching or something similar (me included).

    People tend to use libraries as core logic, thereby integrating them into the business logic of the application, and I’d like to challenge that: things like ORMs should be used
    more as a plugin than as a foundation. Sure, you add more structures to the code and create duplicate classes, but in exchange you hold a tool that is more precise
    (like having the same data in different contexts in DDD).
    TLDR: a single point of entry for reading/writing the data.
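    The hashmap-backed fake described in point 1 might look like this; the port interface and names are hypothetical, but the pattern is just an in-memory implementation of the repository port:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // The port: the only thing business logic depends on.
    interface CustomerRepo {
        void save(long id, String name);
        Optional<String> findById(long id);
    }

    // Hand-written fake: a HashMap behind the port interface.
    // No mocking framework, no wiring -- stored state can be inspected directly.
    class InMemoryCustomerRepo implements CustomerRepo {
        final Map<Long, String> store = new HashMap<>();
        public void save(long id, String name) { store.put(id, name); }
        public Optional<String> findById(long id) { return Optional.ofNullable(store.get(id)); }
    }
    ```

    A test can run an entire business flow against this fake and then assert directly on `store`, instead of stubbing each call with a mocking framework.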
