Here is my take on the relationship between functional, imperative, actor, and service-oriented programming, and a short description of a hybrid approach that combines them all, giving each an explicit role and level within the system (the computation).

There's a natural progression between them, and a great deal of efficiency when they are combined so that each does the part it does best. I call it "opportunistic programming" (working title). If you're aware of existing ideas, publications, etc. along the same lines, please share them with me. As usual, I probably didn't discover anything new.

Whenever you can, it's best to express computation declaratively, using functional programming. It should be the default mode of operation, abandoned only when it is not practical. It has many advantages over alternative approaches and very few downsides (when used opportunistically). The goal here is to express as much logic as possible as pure, mathematical computation that is easy to reason about, prove, and test.
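
To make that concrete, here's a rough sketch in Rust (the domain and names are made up for illustration): the logic is just pure functions over plain data, so it's trivial to test and reason about.

```rust
// A minimal sketch of the "FP mode": pure functions over plain data.
// The domain (order pricing) and all names are made up for illustration.

struct LineItem {
    unit_price_cents: u64,
    quantity: u64,
}

/// Pure: the result depends only on the arguments, so it is trivial
/// to reason about, unit-test, and reuse.
fn total_cents(items: &[LineItem]) -> u64 {
    items
        .iter()
        .map(|i| i.unit_price_cents * i.quantity)
        .sum()
}

/// Also pure: applies a percentage discount, returning a new value
/// instead of mutating anything.
fn apply_discount(total: u64, percent: u64) -> u64 {
    total - total * percent / 100
}

fn main() {
    let items = [
        LineItem { unit_price_cents: 250, quantity: 4 },
        LineItem { unit_price_cents: 1000, quantity: 1 },
    ];
    let total = apply_discount(total_cents(&items), 10);
    println!("total: {} cents", total); // total: 1800 cents
}
```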

When your code deals with external side effects, or when things like computational performance matter, you have to give up the luxury of the FP mode and switch to imperative code. It is a lower-level, harder-to-use mode, but it is closer to the reality of how computers work, so it gives you more control. You should still aim to write as much as possible in FP mode, and only wrap the FP core logic in an imperative shell that coordinates data sharing, mutation, and side effects where needed. Depending on the problem and the requirements the ratio will vary, but generally, imperative code should be isolated and kept to the necessary minimum. The goal here is to explain to the machine exactly how to compute something efficiently and/or to take control of the ordering of events.
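
One way to picture the split (again a Rust sketch with made-up names): the pure core does the computing, and a thin imperative shell owns the mutable state, the I/O, and the ordering.

```rust
use std::io::{self, BufRead, Write};

/// Pure core: parse a line into a number, if possible.
fn parse_reading(line: &str) -> Option<f64> {
    line.trim().parse().ok()
}

/// Pure core: fold a new reading into a running average.
/// Returns a new (count, average) pair instead of mutating state.
fn update_average(count: u64, avg: f64, reading: f64) -> (u64, f64) {
    let new_count = count + 1;
    (new_count, avg + (reading - avg) / new_count as f64)
}

fn main() -> io::Result<()> {
    // Imperative shell: owns the mutable state and the side effects
    // (reading stdin, writing stdout), and decides the ordering.
    let stdin = io::stdin();
    let mut out = io::stdout();
    let (mut count, mut avg) = (0u64, 0.0f64);

    for line in stdin.lock().lines() {
        if let Some(reading) = parse_reading(&line?) {
            (count, avg) = update_average(count, avg, reading);
            writeln!(out, "running average: {avg}")?;
        }
    }
    Ok(())
}
```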

As your computation (program) grows, it will become apparent that it can be split into parts that don't require "data coherency", that is, parts that have no reason to share data (even for performance) and for which it is natural to communicate entirely through message passing (immutable copies of data), typically over in-memory message queues of some kind. That's (kind of) the actor model. The goal here is to encapsulate and decompose the system along its most natural borders. The size of the actors depends entirely on the problem. Some programs can be composed of many tiny actors, a single function each. Some will be hard to decompose at all, or will have complex and large (code-wise) actors. It is worthwhile to consciously consider design possibilities that allow finer granularity in this layer.
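
A rough sketch of what I mean by in-process actors, here using Rust's std channels and one thread per actor (the "counter" actor is made up): the actor owns its state privately, and the only way in or out is the message queue.

```rust
use std::sync::mpsc;
use std::thread;

// Messages are immutable, owned copies; the only way in or out of
// the actor is through the channel.
enum Msg {
    Add(i64),
    Get(mpsc::Sender<i64>),
    Shutdown,
}

/// A tiny "counter" actor: owns its state privately and processes
/// messages one at a time, so no locks are needed.
fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0i64;
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
                Msg::Shutdown => break,
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(40)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    println!("total = {}", reply_rx.recv().unwrap()); // total = 42

    counter.send(Msg::Shutdown).unwrap();
}
```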

When operational needs (availability, scalability, etc.) demand it, the actors from the previous paragraph are natural candidates to be moved to different machines, potentially in many copies, and become "services". The cost and the additional work lie in handling network latency, unreliable communication, and potential data loss. The goal here is to adapt the computation to the realities of hardware: limited capacity and imperfect availability.
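
A toy sketch of the step from actor to service (std-only Rust, a made-up line protocol, per-connection state, no retries or persistence; a real service would need all of that): the message-handling logic stays the same kind of function, but now the messages cross a network boundary, which is where latency, disconnects, and data loss have to be handled.

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::{TcpListener, TcpStream};

/// The same kind of "handle one message" logic an in-process actor
/// would run; here the message arrives over the network instead.
fn handle_line(state: &mut i64, line: &str) -> String {
    match line.trim().split_once(' ') {
        Some(("ADD", n)) => {
            if let Ok(n) = n.parse::<i64>() {
                *state += n;
            }
            format!("OK {state}\n")
        }
        _ if line.trim() == "GET" => format!("OK {state}\n"),
        _ => "ERR unknown command\n".to_string(),
    }
}

fn serve(stream: TcpStream) -> std::io::Result<()> {
    // Toy simplification: state lives per connection; a real service
    // would keep it behind the same kind of in-process actor.
    let mut state = 0i64;
    let mut writer = stream.try_clone()?;
    let reader = BufReader::new(stream);
    for line in reader.lines() {
        // Unlike in-process channels, every step here can fail or stall:
        // the connection may drop, the peer may be slow, data may be lost.
        let reply = handle_line(&mut state, &line?);
        writer.write_all(reply.as_bytes())?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7000")?;
    for stream in listener.incoming() {
        serve(stream?)?; // one connection at a time, to keep the sketch short
    }
    Ok(())
}
```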

That's it. Some side-comments:

  1. It's a shame that FP is still not the default school of mainstream programming. FP is really easy and natural when applied opportunistically, and it generally leads to both better runtime performance and better developer productivity.
  2. My main problem with OOP (and possibly the actor model) is granularity. Encapsulation is costly, which is why encapsulating every single "object" is a bad idea. The right granularity for OOP is the module/library/component level, and for actors it is along the problem-dependent natural lines where sharing data is no longer required anyway. Within functional and imperative code I recommend a data-oriented approach instead of the typical OOP approach (see the sketch after this list).
  3. This model easily handles the problem of converting a "monolith" into a microservices-based system. The "encapsulation and decomposition" level is just "microservices, but without the extra work (yet)".
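
Regarding side-comment 2, here is a small sketch (in Rust, with a made-up domain) of what I mean by a data-oriented approach: keep the data as plain, transparent structs in flat collections and write functions over them, instead of hiding every field behind per-object methods.

```rust
// Data-oriented sketch: plain, transparent data in flat collections,
// with functions over them, rather than one encapsulated object per
// entity. The domain and names are made up.

struct Particle {
    position: f32,
    velocity: f32,
}

/// Operates on the whole collection at once; the data layout is visible,
/// which keeps the code simple and friendly to the CPU cache.
fn step(particles: &mut [Particle], dt: f32) {
    for p in particles.iter_mut() {
        p.position += p.velocity * dt;
    }
}

fn average_position(particles: &[Particle]) -> f32 {
    let sum: f32 = particles.iter().map(|p| p.position).sum();
    sum / particles.len() as f32
}

fn main() {
    let mut particles = vec![
        Particle { position: 0.0, velocity: 1.0 },
        Particle { position: 10.0, velocity: -1.0 },
    ];
    step(&mut particles, 0.5);
    println!("avg = {}", average_position(&particles)); // prints "avg = 5"
}
```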

#software #oop