Software Development Fundamentals, Part 2: Layered Architecture

This is part of a series of introductory guidelines for software development. It’s a continuation of the previous post about Dependency Injection.

One of the primary reasons to adopt Dependency Injection is that it is impossible to achieve a good layered architecture without it. Many developers argue that an IoC container is only beneficial if you are doing test-driven development. I disagree. I can’t quite see how you can build any layered architecture at all without IoC.

First, let’s take a moment to define layered architecture. There is a common misconception about layered architecture, largely caused by the misleading diagrams in most textbooks and articles. This is how layered architecture is often described in textbooks.


I think that’s rather deceptive. Unfortunately, developers often take this diagram too literally: each layer depends on the layers below it, so the presentation layer depends on the business layer, and both depend on the data-access layer.

There is a huge flaw in that diagram. The data-access layer should be at the outermost part of the application, and the business layer should be at the core! But in this diagram, the data-access layer is the foundation of the whole application structure, the critical layer. Any change in the data-access layer affects all the other layers of the application.

Firstly, this architecture shows an incongruity. A data-access layer, in reality, can never be implemented without depending on business details. E.g. DbUserRepository needs to depend on User. So what often happens is that developers introduce a circular dependency between the business layer and the data-access layer. When a circular dependency exists between layers, it’s no longer a layered architecture: the linearity is broken.

Secondly, and more importantly, this architecture tightly couples the whole application to the infrastructure layer. Databases, web services, configuration: they are all infrastructure. The structure could not stand without the infrastructure. So in this approach, developers build the system by writing the infrastructure plumbing first (e.g. designing database tables, drawing ERD diagrams, configuring environments), then writing the business code to fill the gaps left by the infrastructural bits and pieces.

It’s a bit upside-down and not quite the best approach. If a business layer couples itself to infrastructure concerns, it’s doing way too much. The business layer should know close to nothing about infrastructure; it should be at the very core of the system. Infrastructure is only plumbing to support the business layer, not the other way around. Infrastructure details are likely to change frequently, so we definitely do not want to be tightly coupled to them.

The business layer is where the real meat of your application lives, and you want it clean of any infrastructure concerns. Development effort starts with designing your domain code, not data access. You want to be able to write the business code right away without setting up, or even thinking about, the necessary plumbing. One of the guidelines in the previous post is that classes within the business layer should be POCO. All classes in the business layer should describe purely domain logic WITHOUT any reference to infrastructure classes like data access, UI, or configuration. Once the domain layer is established, we can start implementing the infrastructure plumbing to support it. We get all our domain models baked properly first, before we start thinking about designing the database tables to persist them.

So this is a more accurate picture of layered architecture.


Infrastructure sits at the top of the structure. The domain layer is now the core layer of the application: it has no dependency on any other layer. From a code perspective, the Domain project does not reference any other project, nor any persistence library. Databases, just like other infrastructure (UI, web services), are external to the application, not its centre.

Thus, there is no such thing as a “database application”. The application might use a database as its storage, but only because some external infrastructure code in the neighbourhood implements the interfaces. The application itself is fully decoupled from the database, the file system, etc. This is the primary premise behind layered architecture.
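To make the dependency direction concrete, here is a minimal sketch. The class and interface names follow the “resend PIN” example from the previous post; the exact member signatures are assumptions for illustration:

```csharp
using System;

// Domain project: pure POCO, references nothing outside itself.
public class User
{
    public string EmailAddress { get; set; }
    public string PIN { get; set; }
}

// The contract is owned by the domain layer; it says nothing about SQL.
public interface IUserRepository
{
    User GetUser(string username);
}

// Infrastructure project: references the Domain project (never the
// reverse) and implements the contract with a concrete technology.
public class DbUserRepository : IUserRepository
{
    public User GetUser(string username)
    {
        // an NHibernate or ADO.NET query would live here
        throw new NotImplementedException();
    }
}
```

The Domain project compiles without the Infrastructure project even existing; the Infrastructure project cannot compile without the Domain project. That asymmetry is the whole point.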

Alistair Cockburn formalizes this concept as Hexagonal Architecture, and Jeffrey Palermo as Onion Architecture, but both are really just formalized vocabularies for the old, well-known layered architecture wisdom.

Onion Architecture

To describe the concept, let’s switch the diagrams to onion rings. Jeffrey Palermo depicts a bad layered architecture as follows:


In an onion diagram, all couplings point toward the centre. The fundamental rule is that code can only depend on layers more central, never on layers further out from the core. This way, you can completely tear off and replace the skin of the onion without affecting the core, but you can’t take away the core without breaking all the outer layers.

In the diagram above, the infrastructure sits at the core of the onion. It becomes an irreplaceable part of the system: the structure cannot stand without the infrastructure layer at its core, and the business layer is doing too much by embracing the infrastructure.

This is the better architecture model:


UI and infrastructure sit right at the edge, the skin of the application. They are the replaceable parts of the system. You can take the infrastructure layer away completely and the entire structure stays intact.


Take a look at the diagram again.


If you followed the previous post, we were building “resend PIN to email” functionality. AuthenticationService is in the domain layer, at the core of the application, and it knows nothing about the SMTP client or the database (or even where User information is stored). Notice that DbUserRepository and SmtpService are in the outermost layer of the application: they depend inward on the interfaces defined in the core (IUserRepository, ICommunicationService) and implement them. Dependency Injection is the key.

public class AuthenticationService : IAuthenticationService
{
	IUserRepository userRepository;
	ICommunicationService communicationService;

	public AuthenticationService(IUserRepository userRepository, ICommunicationService communicationService)
	{
		this.userRepository = userRepository;
		this.communicationService = communicationService;
	}

	public void SendForgottenPassword(string username)
	{
		User user = userRepository.GetUser(username);
		if (user != null)
			communicationService.SendEmail(user.EmailAddress, String.Format("Your PIN is {0}", user.PIN));
	}
}

At runtime, the IoC container will resolve the infrastructure classes that implement the service interfaces (IUserRepository, ICommunicationService) and pass them into the AuthenticationService constructor.
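In a composition root, that wiring might look like the following sketch, using Castle Windsor’s explicit fluent registration API (the component and interface names are taken from the example above):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();

// Map each service interface to its infrastructure implementation.
container.Register(
    Component.For<IUserRepository>().ImplementedBy<DbUserRepository>(),
    Component.For<ICommunicationService>().ImplementedBy<SmtpService>(),
    Component.For<IAuthenticationService>().ImplementedBy<AuthenticationService>());

// Windsor sees that AuthenticationService wants an IUserRepository and an
// ICommunicationService, and injects the registered implementations.
var authentication = container.Resolve<IAuthenticationService>();
authentication.SendForgottenPassword("jsmith");
```

Swapping the SMTP implementation for anything else is then a one-line change in the registration, with no change to the domain code.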

Project Structure

My typical project structure looks like this in Visual Studio:


I separate each layer into its own project. One of the benefits is that this physically enforces linear dependencies: each layer can only depend on the layer below it. If a developer introduces a circular dependency between the Domain layer and Data-Access, Visual Studio (or Java’s Eclipse) won’t even compile. Here is how these projects work together within the architecture:


Every component depends only on the components in the layer below it. As you can see, the only project that references System.Data and the NHibernate library is Sheep.Infrastructure.Data. Likewise, the only project with a System.Web.Services reference is Sheep.Infrastructure.BackEnds. And none of these infrastructure projects is referenced by any other part of the application (and this is enforced by Visual Studio). Therefore, they can be taken away and replaced entirely without affecting the application core. On the flip side, the whole infrastructure layer is tightly coupled to the core layer.

The Domain layer is POCO. It does not depend on any other project, and it holds no reference to any technology-specific DLL, not even to the IoC framework. It contains just pure business-logic code. In order to be executed, this assembly has to be hosted either within a system that can provide the infrastructure implementations, or within a test system that provides mock dependencies. Hence, notice that we have two versions of the top-level skin in the structure: Infrastructure and Test.
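As a sketch of what the Test skin looks like, here the domain class from before is exercised with hand-rolled fakes. The fake classes and the NUnit-style test are illustrative assumptions, not part of the project:

```csharp
using NUnit.Framework;

// Fakes living in the Test project; they satisfy the domain's contracts.
class InMemoryUserRepository : IUserRepository
{
    public User StoredUser;
    public User GetUser(string username) { return StoredUser; }
}

class FakeCommunicationService : ICommunicationService
{
    public string LastRecipient, LastMessage;
    public void SendEmail(string address, string message)
    {
        LastRecipient = address;
        LastMessage = message;
    }
}

[TestFixture]
public class AuthenticationServiceTests
{
    [Test]
    public void SendForgottenPassword_emails_the_users_PIN()
    {
        var repository = new InMemoryUserRepository
        {
            StoredUser = new User { EmailAddress = "jo@example.com", PIN = "1234" }
        };
        var emailer = new FakeCommunicationService();
        var service = new AuthenticationService(repository, emailer);

        service.SendForgottenPassword("jo");

        // No database, no SMTP server: we just interrogate the fakes.
        Assert.AreEqual("jo@example.com", emailer.LastRecipient);
        Assert.AreEqual("Your PIN is 1234", emailer.LastMessage);
    }
}
```

The whole domain assembly runs under the test host with zero infrastructure present.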

So what’s the benefit?

  1. The domain layer is now clean of infrastructure concerns. Developers write business code in human-readable language, not a messy pile of SQL statements/NHibernate, socket programming, XML documents, or other alien languages
  2. You can write domain code straight from the start, without system considerations or waiting for infrastructure implementations. I.e. you don’t need a database at the early stage of development, which is invaluable because you eliminate the overhead of keeping your database schemas in sync with domain models that are still shaky and change every few minutes.
  3. The application core can be deployed for any client or any infrastructure, as long as they provide implementations for the service interfaces required by the domain layer.

Basically, this all comes down to the Single Responsibility Principle (each class should have only one reason to change). In the classic diagram, being tightly coupled to the data-access layer, the domain layer needs to change because of changes in infrastructure details. Whereas in this layered architecture, you can completely tear the infrastructure layer off the skin and replace it with an arbitrary implementation without laying a finger on any domain class at the core. Thus there is only one reason the classes in the domain layer need to change: when the business logic changes.

Having said that, this architecture is not necessarily the best approach for all kinds of applications. For simple forms-over-data applications, an Active-Record approach is probably better: it couples the whole application directly to data-access concerns, but it allows a rapid pace of data-driven development. However, for most non-trivial business applications, loose coupling is a crucial aspect of maintainability.

This whole post is actually an introduction to our next discussion about Domain Driven Design in some later post.
To be continued in part 3, Object Relational Mapping

Software Development Fundamentals, Part I


I’m going to start a post (or perhaps a series thereof) to discuss some basic practices and fundamental concepts in building software applications. There are certainly alternative methodologies and architectural styles for building a software application, but the ones covered here are the ones I find most essential, taken together as the “default architecture” for typical LoB applications. This is the architectural template I use liberally to start any LoB application development, and while it isn’t written in stone, you will need a very strong reason to deviate from it.

I’ll discuss several common architectural patterns, starting with the most essential one. Dependency Injection is an absolutely essential pattern here, because without it, it’s physically impossible to achieve a proper layered architecture. More about Onion Architecture further down this post.

Dependency Injection

Application code composed of hugely entangled components is extremely hard to maintain. So this is our first rule: C#/Java’s new keyword is a bitch. You have probably heard plenty of times that new is evil, and are perhaps wondering what our problem is with this seemingly humble, innocent word.

To explain it, let’s start writing some code. Suppose we’re writing an over-simplified service to send a forgotten PIN to the customer’s email.

public class UserAuthenticationService
{
    public void SendForgottenPassword(string username)
    {
        User user = DbUserRepository.GetUser(username);
        if (user != null)
            SmtpService.SendEmail(user.EmailAddress, "PIN Reminder",
                String.Format("Hello, your PIN is {0}", user.PIN));
    }
}

This class has direct dependencies on DbUserRepository and SmtpService. This kind of dependency line from one component to another is what is often referred to as spaghetti code. There are three problems with this code:

  1. This code cannot be worked on in isolation. In order to execute and understand this component, you need to bring together a fully configured SMTP server and an up-to-date database storing user accounts. If all the components in your application are tied together like this, it’s quickly going to cause you serious problems. You constantly need to work with your application as a whole, because all the components are entangled: to change one component, you need to change the others. Developers’ brains aren’t scalable. As the application grows, you can’t hold all the components of your system in your head. You need to be able to work on a small component of the system without the presence of the other components. You need to decouple every part of your system into tiny workable units.
  2. Since this class needs other components to function properly, you cannot unit-test it. Unit-testing requires that your class can be readily executed and interrogated as a single unit. Decoupling and dependency injection are the most important keys to unit-testing. More detail about unit-testing in the next post.
  3. This code violates the natural structure of application layering. SendForgottenPassword is a business concern, and it needs to be in the core (bottom) layer of the application. DbUserRepository and SmtpService, on the other hand, are infrastructure concerns, and the infrastructure layer lives at the outermost skin of the application. What we have here is the wrong direction of dependency. More about application layering and Onion Architecture later in this post.

DbUserRepository and SmtpService are implemented as static classes. Here is our second rule: the static class is an anti-pattern. Not always, but it’s a good rule of thumb.

Let’s refactor that a bit using the “code against interface, not implementation” principle.

public class UserAuthenticationService
{
    IUserRepository userRepository = new DbUserRepository();
    ICommunicationService communicationService = new SmtpService();

    public void SendForgottenPassword(string username)
    {
        User user = userRepository.GetUser(username);
        if (user != null)
            communicationService.Send(user.EmailAddress, String.Format("Your PIN is {0}", user.PIN));
    }
}

This is better. Now our method is decoupled from any concrete dependency. Instead, we set a contract. The interfaces (IUserRepository, ICommunicationService) are the contract: they merely define what operations we need performed. We declare that we need to send an email to somebody. At the other end, some class needs to satisfy the contract with an implementation; in this case, SmtpService provides it. Thanks to contract-based programming with interfaces, there is no longer any compile-time coupling between our method and SmtpService.
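For completeness, the contracts themselves are nothing more than plain interface declarations. A sketch, with signatures assumed from the calls above:

```csharp
// Owned by the business layer; no hint of SQL or SMTP anywhere.
public interface IUserRepository
{
    User GetUser(string username);
}

public interface ICommunicationService
{
    void Send(string address, string message);
}

// One possible implementation, living at the edge of the application.
public class SmtpService : ICommunicationService
{
    public void Send(string address, string message)
    {
        // open an SMTP connection and transmit the message here
    }
}
```

Note that the interfaces say nothing about *how* the message gets delivered; that decision belongs entirely to the implementing class.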

There is one problem, though: the class still instantiates its concrete dependencies with the new keyword. This is where Inversion of Control comes into play. Generally speaking, there are two types of Inversion of Control (IoC): Service Locator and Dependency Injection. Most IoC containers support both patterns.

Let’s now refactor this code to use Service-Locator pattern.

public class UserAuthenticationService
{
    IUserRepository userRepository = (IUserRepository) ServiceLocator.GetInstance("UserRepository");
    ICommunicationService communicationService = (ICommunicationService) ServiceLocator.GetInstance("CommunicationService");

    public void SendForgottenPassword(string username)
    {
        User user = userRepository.GetUser(username);
        if (user != null)
            communicationService.SendEmail(user.EmailAddress, String.Format("Your PIN is {0}", user.PIN));
    }
}

A little bit better. The ServiceLocator provides the instances of DbUserRepository and SmtpService based on some kind of IoC configuration, e.g. XML. Now the class is almost clean of dependencies; it has low coupling. However, this is still not good enough.

  1. This class actually has tight runtime coupling. In configuration, you tie the “UserRepository” alias to the DbUserRepository class. You can’t easily create an instance of UserAuthenticationService in the absence of a concrete database implementation, or create one that uses a different communication method (e.g. SMS instead of email).
  2. This code is hard to use, mostly because it still has one dependency: the ServiceLocator class itself. You can now only use this class within a running, full-blown IoC container; you can’t easily poke around it in isolation. In order to use this class, you need a well-configured service locator.

Let’s refactor this further into dependency injection.

public class UserAuthenticationService
{
    IUserRepository userRepository;
    ICommunicationService communicationService;

    public UserAuthenticationService(IUserRepository userRepository, ICommunicationService communicationService)
    {
        this.userRepository = userRepository;
        this.communicationService = communicationService;
    }

    public void SendForgottenPassword(string username)
    {
        User user = userRepository.GetUser(username);
        if (user != null)
            communicationService.SendEmail(user.EmailAddress, String.Format("Your PIN is {0}", user.PIN));
    }
}

All the dependencies are now injected through the constructor, hence constructor injection.

This is much better. The class is now free of any dependency: it’s not concerned with the database implementation, the email infrastructure, or a service-locator framework. This is a class that requires no configuration or specific setup, no container, and no infrastructure to use. It’s just a plain class. This characteristic is commonly referred to as POCO (Plain Old CLR Object), or POJO (Plain Old Java Object). So this is our third rule: POCO/POJO classes are good. Always strive for plain domain classes that are completely clean of infrastructure concerns.

In the Onion Architecture discussed later, SmtpService and DbUserRepository are usually located in a separate DLL, the Infrastructure DLL, right at the outermost skin of the application. UserAuthenticationService no longer has any reference to that Infrastructure DLL; it only defines the interface contracts. At runtime, the infrastructure layer wires all the dependencies into the service.

 authenticationService = new UserAuthenticationService(dbUserRepository, emailService); 

Now you can easily substitute the communication method with an SMS service, or swap the user repository with an in-memory cache or a fetch to an LDAP server. UserAuthenticationService knows nothing about those implementation details; it assumes nothing about the application infrastructure. Consequently, you can also swap those dependencies with fake objects, which makes unit testing possible.
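For instance, switching to SMS is just another implementation of the same contract. A sketch; SmsService and ISmsGateway are hypothetical names, not part of the original code:

```csharp
using System;

// Lives in the infrastructure layer; the domain never references it.
public class SmsService : ICommunicationService
{
    readonly ISmsGateway gateway;   // hypothetical SMS provider API

    public SmsService(ISmsGateway gateway)
    {
        this.gateway = gateway;
    }

    // Same contract method the domain calls; the domain neither knows nor
    // cares that the message now goes out as a text instead of an email.
    public void SendEmail(string address, string message)
    {
        gateway.Transmit(LookupPhoneNumber(address), message);
    }

    string LookupPhoneNumber(string emailAddress)
    {
        // resolve the user's phone number here
        throw new NotImplementedException();
    }
}
```

UserAuthenticationService is then constructed with `new SmsService(gateway)` instead of `new SmtpService()`, and nothing else changes.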

Managing Side Effect

With injected dependencies, we’re now very explicit about all the dependencies that UserAuthenticationService requires. It’s clear to client code that methods in UserAuthenticationService will have side effects on the UserRepository and the EmailService, and on them only. Client code can accurately predict what impact this code can have on the system. It is very important that a class ONLY causes side effects on objects explicitly provided to it. UserAuthenticationService can never touch the product catalogue or the UI screen, or display a popup dialog, because we never give it access to them.

That’s why our previous rule was that the static class is an anti-pattern: it’s an open invitation for every single object in the whole application to access it directly and silently cause side effects without following the proper flow of layering. E.g. if you declare your EmailService as a static object, then it’s hard to tell which code is responsible for sending welcome emails, and you would never guess that it’s apparently your database-connection object impolitely doing more than it’s trusted with, by accessing the EmailService statically. More about managing side effects in the Command Query Separation principle.


Unfortunately, there is one problem with this approach. (It seems like there are always problems with anything I propose, so get used to it.) Now, every time we need to use UserAuthenticationService, we need to pass all of its dependencies to the constructor. It’s tedious and violates the notion of abstraction. And if the dependencies have further dependencies, we end up with a long chain of constructions. E.g.:

service = new UserAuthenticationService(
    new DbUserRepository(new DbConnection()),
    new SmtpService(new SmtpClient(new TcpConnection())));

It’s smelly. You only care about sending a forgotten password; you don’t want to care about the plumbing that makes it happen.

This is where an IoC container comes in very handy. I usually use Castle Windsor, for no particular reason except that I’ve used it for long enough, but there are many equally popular options like Ninject, Autofac, StructureMap, Spring.Net, and Unity.
An IoC container lets you just write decoupled classes; at runtime, the container automatically wires all those individual classes together for you.

Zero Friction Development

A typical misguided complaint about using an IoC container is that it holds up the pace of development, particularly due to its superfluous configuration. Not true. That was in the old days when XML still ruled the planet.

The term Rapid Application Development (RAD) gained a bad reputation thanks to the draggy-droppy Visual Studio designers in the early days of .Net, and Zero Friction Development is now the new jargon for RAD. Basically, when you start an application project, you don’t want to spend a lot of time setting up configuration, writing bunches of classes, initializing several APIs, bringing up an application server, and writing zillions of lines of boilerplate. You just want to start writing code right away and get going with a running application immediately. One thing I love about .Net culture over Java (no flame war, guys) is its resemblance to RoR culture in terms of its appreciation for Zero Friction Development. We steal RoR’s wisdom of convention-over-configuration.

Everyone hates XML. It sucks. Luckily, you need none of it to set up your IoC container. In Windsor, a few lines along these lines are enough to specify a convention that configures all the dependencies in the application project (the snippet was truncated here, so this is a sketch; the exact fluent API varies between Windsor versions):

        container.Register(
            Classes.FromAssembly(typeof (Infrastructure.Data.Repository<>).Assembly)
                .Where(type => type.Namespace.StartsWith("Sheep.Infrastructure"))
                .WithService.DefaultInterfaces());

This code registers all the implementation classes in the Data assembly and, by convention, uses any interface within the Sheep.Service namespace as the service interface. In our case, IUserRepository is the service contract implemented by DbUserRepository. There are many other ways you can configure your conventions, but I won’t delve too deep here (you can always check the Windsor website for more detail). The point is, this approach allows rapid development.

We simply start writing decoupled classes right away, declaring their required dependencies as constructor arguments, and that’s all. Your class knows nothing about the implementations at all. Windsor does the rest of the work for you, locating all the implementations from the various DLLs at runtime to fill in all your dependencies. We end up with a rapid and loosely coupled development model.

To be continued in part 2..

This ramble about IoC went on longer than I expected, and I haven’t touched anything about architecture. But didn’t I promise I was going to discuss Onion Architecture? Well, here’s the fourth rule: don’t trust what you read in a blog post 😉 People lie on the Internet. It turns out I won’t do it in this post; it’s long enough to be its own post, so I’ll make a second one. Meanwhile, I’ll appreciate any comments.

Just Hard Code Everything

We often forget why we extracted certain kinds of application logic out into some sort of configuration XML or database in the first place.

Recently I worked on a system that has a dynamic screen flow for a sales process. This is very common functionality in CRM applications. Basically, in the sales process, the user is presented with a series of questions (with sets of possible answers as dropdowns/radio-buttons or free text), and the flow of the questions varies depending on how the user answered the previous questions, based on certain business rules. These end-to-end flows and rules are called a script.

To accommodate this, the developers built a mechanism to store these rules and flows in a set of database tables. Everyone would be terrified if I suggested “just hardcode all of them!” The typical reaction is that these business rules are volatile, subject to change every once in a while. We need XML configuration, or to master it in the database, or to build a DSL; hardcoding just sounds insanely terrifying. They wanted some mechanism that allows easy changes.

So we had this implemented as a set of database tables like these (among others):

  • SCRIPTS, storing several different scripts (each representing an end-to-end flow for a specific use-case)
  • STEPS, representing each step (question/calculation) within a flow
  • SCRIPT_STEPS, assigning a step to a script. A few of its columns:
    • STEP_ID
  • STEP_NAVIGATIONS, governing the flow between the steps of a script. A few of its self-explanatory columns:
    • TO_STEP_ID
  • WIDGETS, specifying the set of widgets to be presented for each step (not every step has a UI presentation). A few of its columns:
    • WIDGET_TYPE (textbox, textlabel, dropdown, button, etc)
    • LABEL
    • VALUES
  • STEP_WIDGETS, finally, linking several widgets to each step. A few of its columns:
    • STEP_ID

We put this configuration in the database to allow easy change and restructuring. But look at it: does it actually make anything easy?

Reconfiguring the scripts is achieved by writing a series of INSERT and UPDATE SQL statements, changing these obscure columns and those flags in trial-and-error fashion. Each step/widget/property/navigation is represented by numeric IDs and flags. Even the simplest change to the business flow (e.g. adding a new step) takes careful examination of various tables, changing/adding data in numerous tables, and some obscure data-manipulation SQL over several tables full of magic numbers and flags to get all the stars aligned perfectly. (I always need at least a pencil, a piece of paper, and strong determination just to make the slightest change to the navigation flow.)
It feels like programming on punch cards. And no unit test is possible to make sure we’re doing it right.

Everyone seems to forget: why did we build all this again?
We often get trapped by the elusive goal of building a configurable system, with the premise of allowing runtime reconfiguration of the business logic. But in real practice, any change to business logic is a serious matter requiring a proper release cycle. A runtime change to the business flow while users are still actively working would jeopardize system integrity. Most change-requests to the business logic need code changes anyway. And most importantly, all this configuration is CACHED at runtime! So we never really had any capability of making immediate changes to the script configuration in the database in the first place.

What’s the purpose of having a rule engine?
The main purpose of a configurable rule engine is NOT so that we can change it at runtime. It’s to provide developers with a Domain Specific Language to express the rules and business flow, translating a business flow-chart into a working application, and perhaps to allow the domain expert to read or verify the logic.
That is a good reason to take application logic out of the code and put it into some sort of “readable” configuration: some kind of XML structure, a DSL file, or a database-backed authoring interface.

But this DB-based configuration we have right now doesn’t bring that value. We extracted the logic into configurable database tables only to make it harder to express, almost impossible to read, and terribly tedious to change. It leaves us with nothing.
If you’re not providing a DSL or an authoring interface, you are better off hardcoding it. C# code is still way more readable, and easier to write and change, than a bunch of numeric database columns and tables.

Sometimes we need to step back a little, forget about neat configuration, and just hardcode everything!

// ---- Step Definition ----
var askIfLinkedToHandset = script.CreateStep<bool>()
	.Ask("Do you want to link this product to a handset")
	.RadioButton("Yes", true)
	.RadioButton("No", false);

var askHandsetSerial = script.CreateStep<string>()
	.Ask("Please enter handset IMEI")
	.Validate(x => x.Answer.Length == 20, "IMEI needs to be 20 digits");

var calculateFinance = script.CreateStep()
	.Execute(data => data.FinanceAmount = CalculateHandsetFinance(askHandsetSerial.Answer));

var askOutrightAmount = script.CreateStep<Decimal>()
	.Ask(data => String.Format(
		"Finance amount for the handset is {0}. How much do you wish to pay outright?",
		data.FinanceAmount));

var askBonusChoice = script.CreateStep<BonusOption>()
	.Ask("Please choose your preferred bonus option");

// ---- Navigation flow ----
askIfLinkedToHandset
	.When(x => x.Answer == true).GoTo(askHandsetSerial);

calculateFinance
	.When(x => x.Data.FinanceAmount == 0).GoTo(askBonusChoice)
	.When(x => x.Data.FinanceAmount < x.Data.MinimumOutright).GoTo(fullOutrightPayment);

This is much easier to read! More importantly, it’s easier to change! It takes little effort to modify existing logic/calculations, radically swap over some navigation flows, or add a bunch of new steps. We are dealing with concrete objects in OO fashion, and the navigation flow is defined very clearly. It’s no longer a set of obscure numbers and flags scattered across many normalized database tables. Should domain experts so choose, they would rather read this source code than the previous ultra-complicated DB config.

“But aren’t RDBMS-based configurations better at accommodating future changes in business logic?” They do remove the compilation step, but what’s so difficult about compiling this single .cs file? Is that really harder than writing a database-migration SQL script?

Hardcoded logic like this is also much better under version control: we can track changes to the logic in the CVS/SVN history. Alas, what I have right now with our sophisticated DB configuration is just a series of long SQL update statements that keep being added over time as the business flow evolves. It’s almost impossible to make out what business rule changed in each SQL update statement.

Finally, this piece of code is really easy to test. I can execute the navigation tree, easily poke it with various combinations of data/answers, and observe the flow of the steps.
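Such a test could be as simple as the following sketch; the `Start`/`Submit` runner API is hypothetical, layered on top of the hardcoded script above:

```csharp
using NUnit.Framework;

[TestFixture]
public class HandsetScriptTests
{
    [Test]
    public void Zero_finance_amount_skips_straight_to_bonus_choice()
    {
        var script = BuildHandsetScript();   // the hardcoded flow above
        var run = script.Start();

        // Walk the flow by answering each question in turn.
        run.Submit(askIfLinkedToHandset, true);
        run.Submit(askHandsetSerial, "12345678901234567890");

        // FinanceAmount came out as 0, so the flow should skip the
        // outright-payment question entirely.
        Assert.AreEqual(askBonusChoice, run.CurrentStep);
    }
}
```

No database rows, no magic flags: the whole flow is exercised in memory.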

Dealing with Long Running Processes

This question came up recently on a .Net list. In many typical web applications, some requests may take a long time to process. In this case, the user can ask the application to generate an Excel report, which takes 5-10 minutes. Handling this with the ordinary synchronous model gives a bad user experience: the user is left unserved, staring at a blank screen in his browser while the request is being processed. This poor responsiveness can be improved by processing the request asynchronously. But is that the best solution?

Asynchronous processing might alleviate the problem, but it has a fundamental flaw.
When we open a new thread to perform the request asynchronously, that thread is occupied for as long as the request is being processed. So if we configure a maximum of 25 threads in the pool, we can now only serve 24 other incoming requests. And when we eventually have 25 users requesting Excel reports, we’re back to an unresponsive system: the application ceases to serve any user at all. Indeed, it’s worse than its synchronous counterpart: it’s unresponsive not only to the very few who actually request Excel reports, but also to all the ordinary users who only access simple pages.

A better solution? It depends on the size of the operation, but if it takes more than a few seconds, the best solution might be messaging. Typically, I would use a messaging solution like NServiceBus or MassTransit.

Let’s compare this to the way restaurants work. The wait-staff (threads) who take requests from the guests simply stick a piece of paper to the kitchen, ordering a delicious “Excel-report” dinner, and then leave.
Behind the curtain, it’s then up to the chefs to push their arses around the kitchen and perform all the necessary gymnastics to deliver the meal. The wait-staff don’t care; they just leave the paper and return to the front to serve the next customer. Meanwhile, the chefs can cook at their own pace. Even when the chefs are slow, they do not affect the performance of the wait-staff serving the customers.

In contrast, asynchronous processing (without messaging) is like having 25 generic staff who serve in both roles: wait-staff as well as chefs. Upon receiving a meal order, one staff member summons another to go to the kitchen and cook in the background, while the first stays at the front keeping the customer waiting. However, after 25 requests, you run out of staff. Everyone is busy cooking in the kitchen, and no one serves the customers, including those who are only asking for menus and bills.
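Stripped of any particular bus library, the kitchen above is just a queue with a few dedicated worker threads draining it. A minimal in-process sketch using only the BCL (a real deployment would use a durable queue through NServiceBus, MassTransit, or MSMQ so orders survive a restart):

```csharp
using System.Collections.Concurrent;

public class ReportOrder
{
    public string UserId;
}

public class ReportKitchen
{
    readonly BlockingCollection<ReportOrder> orders = new BlockingCollection<ReportOrder>();

    // Called by a web thread (the wait-staff): sticks the paper to the
    // kitchen and returns immediately, freeing the thread for other guests.
    public void PlaceOrder(ReportOrder order)
    {
        orders.Add(order);
    }

    // One or two 'chef' threads run this loop at their own pace;
    // GetConsumingEnumerable blocks while the queue is empty.
    public void CookLoop()
    {
        foreach (var order in orders.GetConsumingEnumerable())
        {
            GenerateExcelReport(order);   // the 5-10 minute job
        }
    }

    void GenerateExcelReport(ReportOrder order)
    {
        // produce the report and notify the user (e.g. email a link)
    }
}
```

However slow the chefs get, `PlaceOrder` stays O(1), so the web thread pool is never exhausted by report requests.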