Ed Toro

Teaching Functional Programming: Two Big Picture Approaches

Functional Programming (FP) has been around as long as, if not longer than, Object-Oriented Programming (OOP). But it has only (relatively) recently been gaining popularity, particularly in the JavaScript community. Why?

I went to MIT in the early 00's. Structure and Interpretation of Computer Programs (SICP - sick-pee) was my textbook. So my first formally-taught programming language was functional. Then I worked in industry for over a decade and hardly ever thought about FP. Now I'm shocked to learn that the textbook from college I don't remember very well anymore is considered the "functional programming bible".

Don't get me wrong. It's a good textbook. I'm sure it made me a better programmer. But FP wasn't something I applied very often in my Java/ActionScript/PHP/Python/Ruby/JavaScript career. OOP patterns dominated.

Then I taught at Wyncode Academy for four years and found myself trying to explain some FP concepts to newcomers. In a world dominated by OOP, it's hard to explain FP. It's so different.

After learning OOP, why is FP so much harder?

Related questions: Why has it taken so long for FP to catch on? Why aren't I talking about techniques for learning OOP in an FP-dominated world?

We in the coding community need to grapple with why the OOP->FP transition is so hard to teach. Evangelizing FP like a religion repeats the same mistakes that caused FP to languish in the industry for so long.

Many introductions to FP are missing something. It's not just an alternative programming style. It's a new way of thinking. When introducing something big and new to my students, I try to ease them into it. These same tricks may also work with more experienced programmers from OOP backgrounds.

One of the techniques I used at Wyncode to get a running start into a hard concept is storytelling. If I can get my students to understand the context - the big picture - I find it easier to later explain the technical details.

So here are two big-picture strategies for introducing FP - particularly to an OOP audience.

Big Picture 1: History

Sometimes it's good to start from the beginning: How does a computer work?

The most common (popular? easy-to-understand?) model of computing is the Turing Machine. The state that FP programmers complain about is staring us right in the face in a Turing Machine. An algorithm for operating this machine represents transitions between different states, e.g. from some boxes being on/off (1 or 0) to some other boxes being on/off.
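To make the model concrete, here's a toy sketch of my own in Ruby - the tape contents, rules, and names here are made up for illustration, not taken from SICP or Turing's paper. A single-state machine walks right along the tape, flipping 0s and 1s until it reads a blank.

# A toy Turing Machine: one state, walks right, flips 0<->1, halts on blank
tape  = [1, 0, 1, 1, nil]   # nil stands in for the blank symbol
head  = 0
state = :flip

while state != :halt
  case [state, tape[head]]
  when [:flip, 0]   then tape[head] = 1; head += 1
  when [:flip, 1]   then tape[head] = 0; head += 1
  when [:flip, nil] then state = :halt
  end
end

p tape # => [0, 1, 0, 0, nil]

All the state FP programmers complain about is right there: the tape contents, the head position, and the machine's current state all mutate on every step.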

If we try to imagine two Turing Machines operating on the same section of tape at the same time, we can begin to understand why "shared state" and concurrency in OOP are hard problems. But that's a post for another time.

The Turing Machine is a universal machine. It can be used to solve every solvable (effectively calculable) math and logic problem. This simple collection of operations - move left, move right, write a dot, read a dot, erase a dot - is enough (given enough time and resources) to tackle every math problem in the universe. That's what Alan Turing proved in 1936.

In many ways, a Turing Machine is how a computer "works".

But this is also how a computer works.

full adder circuit
A full adder circuit

This is a circuit for addition. It's the kind of component found inside the CPU of a computer.

This is not a Turing Machine. It's not universal. It's just addition. It can't (easily) be "reprogrammed".

There's also no Turing-machine-like "state". Apply voltage to the inputs corresponding to the numbers-to-add and detect voltages in the outputs corresponding to the sum. As soon as the voltage is shut off, the answer goes away. There's no "tape" sitting around to read or manipulate. Two circuits can't operate on the same logic gates simultaneously. (I don't think they can, but I'm sure someone will comment to prove me wrong.)

This circuit is also fast. While a classic Turing Machine flips 1s and 0s back-and-forth on some medium, this circuit operates at the speed of electricity through a wire. There are no moving parts.

A circuit is a different model of computation. Each of the logic gates (AND, OR, NAND, NOR, XOR, etc.) is a pure function. They accept inputs and produce outputs with no side-effects. If all we have is the ability to create and combine these "functions", we can also solve every solvable math problem in the universe. That's what Alonzo Church proved, also in 1936.
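As a rough sketch of my own (XOR, AND, and HALF_ADDER are names I'm inventing here, in the same Ruby used later in this post), each gate can be modeled as a pure lambda, and a half adder is just two gates combined - no tape, no stored state.

# Logic gates as pure functions: outputs depend only on inputs
XOR = -> (a, b) { a ^ b }
AND = -> (a, b) { a & b }

# A half adder is just gates wired together - no state anywhere
HALF_ADDER = -> (a, b) { { sum: XOR[a, b], carry: AND[a, b] } }

p HALF_ADDER[1, 1] # sum 0, carry 1
p HALF_ADDER[1, 0] # sum 1, carry 0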

So we've got two different models of computing: the Turing Machine's little boxes of 0s and 1s (objects) and Alonzo Church's lambda calculus built out of logic gates (functions). Which one is correct?

For a time there was a debate about whether an abstract Turing Machine could solve the same set of math problems as lambda calculus (and vice versa). Eventually they were proven to be equivalent.

Being equivalent means that they're equally powerful. Any algorithm that can be written for a Turing Machine can also be written using functions. So any program that can be written in Turing Machine software can also be represented in circuitry hardware.

What does it mean to "program in hardware"?

We can see "hardware programming" embodied in Application-specific Integrated Circuits (ASICs). Circuits can be created that are "programmed" to do one thing very quickly, like mine Bitcoin or play chess.

Since the proposal of the Church-Turing Thesis, we've had two programming options. Hardware is faster and software is slower. Make a mistake in software? Just hit the delete key and try again. Make a mistake in hardware? It's time to grab a soldering iron. It's a classic engineering design trade-off.

So let's say we have an algorithm written in an OOP style that we'd like to convert into an ASIC. It's probably a good strategy to rewrite the program in an FP style so it better maps to the circuit diagram's domain. Most programming languages are flexible enough to do that, but some are better at it than others.

# Elixir pipes
"1" |> String.to_integer() |> Kernel.*(2) # returns 2

Many FP-oriented languages tend to look like circuits. Specifically, the "pipe operators" in Unix, Elixir, F#, JavaScript (maybe someday), and others make code look like a circuit diagram: inputs go in on the left, flow through a number of "gates" (pipes), and come out as the final output on the right. It's probably not a coincidence that the pipe operator used by some languages (|>) looks like a logic gate.

NOT logic gate
The NOT gate
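For what it's worth, the Ruby used in the rest of this post can approximate the same left-to-right style with Object#then (available since Ruby 2.6) - a rough analogue of the Elixir example above, not a real pipe operator.

# Pipeline-ish Ruby: data flows left-to-right through pure steps
result = "1"
  .then { |s| Integer(s) } # like String.to_integer
  .then { |n| n * 2 }      # like Kernel.*(2)

p result # => 2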

Putting my coding instructor hat back on, a good "big picture" way of introducing FP is to start by talking about how circuits work, how they can be "programmed", and how we can model circuit diagrams in code.

Big Picture 2: Philosophy

I picked up a Philosophy minor with my CS degree, so one of the things I'm fascinated by is the intersection between those two fields of study. I find talking about the overlap helpful when teaching new coders, particularly those with Humanities instead of STEM backgrounds.

A philosophically important concept in FP is "functional equivalence".

Perhaps the best example demonstrating this equivalence is Tom Stuart's great article "Programming From Nothing".

Stuart demonstrates how a program (specifically the ubiquitous FizzBuzz) can be written entirely out of functions. I'm not going to repeat that entire exercise here, but I am going to borrow his explanation of how numbers can be represented entirely with functions (the Church encoding).

Start by defining the concept of zero as a function that accepts a function argument and calls it zero times (it just hands the function back).

# Ruby
ZERO = -> (func) { 
  # does nothing
  func
}

Similarly, we can define all the natural numbers as functions that accept a function argument and call it n times.

ONE = -> (func) {
  # calls it once
  # same as "func.call()"
  func[]
  func
}

TWO = -> (func) {
  # calls it twice
  func[]
  func[]
  func
}

To test these "function-numbers", pass them a test function.

HELLO = ->() { puts "hello" }

# same as "ZERO.call(HELLO)"
ZERO[HELLO] # nothing displayed
ONE[HELLO]  # one "hello" displayed
TWO[HELLO]  # "hello" twice

This functional-numeric representation can be hard to play around with and debug.

p ZERO
# outputs #<Proc:0x000055d195ae57b0@(repl):3 (lambda)>

So, to make them easier to work with, we can define a method that converts these function-numbers into the object-numbers we're used to.

# convert number function into number object
def to_integer(func)
  # count how many times counter is called
  n = 0
  counter = ->() { n += 1 }
  func[counter]
  n
end

p to_integer(ZERO) # 0
p to_integer(ONE)  # 1
p to_integer(TWO)  # 2

This converter creates a counting function and passes it to the numeric function. The ZERO function will call it zero times, the ONE function will call it one time, etc. We keep track of how many times the counter has been called to get the result.

Given these function-number definitions, we can implement addition.

ADD = -> (func1, func2) {
  -> (f) { func1[func2[f]] }
}

sum = ADD[ZERO, ZERO]
p to_integer(sum) # 0

sum = ADD[ZERO, ONE]
p to_integer(sum) # 1

sum = ADD[ONE, ONE]
p to_integer(sum) # 2

If TWO calls a function twice, then ADD[TWO, TWO] will return a function-number that calls its argument four times (the function-number FOUR).
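We can check that directly (FOUR is just a name I'm introducing here, not one from Stuart's article).

FOUR = ADD[TWO, TWO]
p to_integer(FOUR)          # 4
p to_integer(ADD[TWO, ONE]) # 3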

It's a mind-bending exercise. When I get to the end of "Programming From Nothing", I get the sense that this is an interesting product of the clever application of a fundamental computer science concept, but not something I could use in my day job.

And that's exactly the sense that I (and I suspect many others) have about FP in general - it's clever, but doesn't seem very useful. That feeling of unnecessary complexity is exactly the problem we need to solve if we hope to make FP techniques more popular.

So a better place to start teaching FP than Church numerals is The Matrix.

In that 1999 sci-fi film, the reality perceived by most humans is actually a simulation called "The Matrix". A few months ago, Elon Musk suggested that this "simulation hypothesis" may be true, sparking weeks of "Philosophy 101"-level media coverage on the topic.

What does The Matrix have to do with FP?

The metaphysical debate, to which the "simulation hypothesis" is but one response, is very old and mind-numbingly complicated at times. So my attempt to summarize it won't do it justice. But the big idea is that we have no proof that the world around us is real. Maybe there are actual objects in the world, or maybe we're just brains in jars.

So there are at least two contradictory theories of what, for example, the number one is. Is it a thing (a noun, an object) that we can interact with (touch and feel)? Or is it an action (a verb, a function), something that acts on the world, but isn't embodied?

The functional-one is a simulation of the number one. It's functionally equivalent to the object-one, meaning it does everything the object-one can do. For example, we can do arithmetic with it.

But it's not really "there" in the way that objects in OOP are "there". It's a Matrix simulation. It doesn't have inherent attributes - it isn't x, it just does x.

To pick a less abstract example, is the chair you're sitting in real or just forces pressing against your body? A "chair" may be a chair-object that exists in the real world or a chair-function: a (hopefully comfortable) force pushing against you with no underlying objective basis.

red delicious apple
A red delicious apple

Consider color. Is a red delicious apple really red (adjective describing a noun) or does it act red (verb)? Is color an inherent attribute of a real underlying apple-object or just an action that an apple-function is programmed to do when light shines on it? Is the apple real or just a simulation?

# A "real" apple
class Apple
  attr_reader :color
  def initialize
    @color = "ruby red"
  end
end

p Apple.new.color # "ruby red"
# A "simulated" apple
APPLE = -> (applied) {
  return "ruby red" if applied == "light"
}

p APPLE["light"] # "ruby red"

The difficulty of this philosophical concept is a good metaphor for why FP is so hard to teach in an OOP-dominated world. To help students understand, start by opening up their minds to the possibility of a world made up solely of "functions". Start with that big picture concept, then transition towards FP models of the world: how they differ from OOP representations yet maintain equivalent results. Ask an experienced OOP developer to consider rewriting a class into its functional equivalent.
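Here's a hypothetical version of that exercise (my own minimal sketch, not from the article - Counter and INCREMENT are invented names): a tiny stateful class next to a pure-function equivalent that returns new values instead of mutating anything.

# OOP: a counter object that mutates its own state
class Counter
  attr_reader :count

  def initialize(count = 0)
    @count = count
  end

  def increment
    @count += 1
  end
end

c = Counter.new
c.increment
c.increment
p c.count # 2

# FP: "incrementing" is a pure function that returns a new value
INCREMENT = -> (count) { count + 1 }

p INCREMENT[INCREMENT[0]] # 2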

Conclusion

Transitioning from OOP into FP can be hard. It's not just a different programming style. It's an alternative model of the world. And the better we are at easing students into that paradigm shift, the easier it'll be to avoid another half-century of ignoring this useful tool in the coder's toolbox.

Edits
Writing is just as debuggable as code. So I've decided to clarify that I'm presenting teaching strategies for introducing FP to OOP-minded programmers. FP programming itself isn't hard. It's the paradigm shift that needs support.

Top comments (31)

Kasey Speakman

Thanks for the article. I like where it is going, and I’d love to see more.

One thing though. I do not believe FP is hard - at least not harder than OO. I think changing paradigms is what is hard. I’ve brought fresh devs into FP and I never had to explain SOLID or inheritance gotchas or the law of Demeter or many other things which are considered normal in OO. These just aren’t things that trip up FP programmers. In training them I also never got into type theory. I just focused on pure functions. The reuse you get from some type-based things is a nice discovery along the way. However, when you already know how to solve a lot of problems in one paradigm, changing to the other means having to relearn, which is hard. I think it would be equally difficult switching from lots of experience in FP to OO, but not many of those people exist to tell us.

Marc de Ruyter

Similar to going from SQL to NoSQL databases, or changing from synchronous (PHP) to asynchronous (Node.js).
An old dog can learn new tricks, but it takes a lot of re-training.

Ed Toro

You make a good point about the advantage of being first (First-mover advantage? First-learned? I don't know if there's a name for it.)

I get a sense from FP evangelism that, if not for some historical accident, the tables would be turned. FP would be the dominant paradigm and I would instead feel compelled to write an article about teaching OOP. But I don't know enough about the history of this debate to say one way or the other.

I'm inclined to believe that OOP dominated (and still dominates) because of something inherently more desirable about it rather than a coin toss that always landed in OOP's favor at engineering meetings over the past decades. But I admit I could be wrong.

The "mistake" I hint at in this article is that, upon first learning (or "first-learning", if I may coin a new term) FP (like I did with Scheme), to later see it ignored in the industry is disheartening to a new, job-seeking graduate. Throwing a bunch of FP juniors into the marketplace to see them struggle to find work isn't going to solve the problem, no matter how superior FP may seem to some.

Instead we need to acknowledge that OOP has "won" and try to figure out why and what that means for the future of coding education.

As a bootcamp instructor I like to start with OOP because it's more relevant to the job search, then introduce FP concepts gradually because I believe it's a good skill for the future. To do the reverse, given my local market conditions, would be a disservice to my students' tuition money. So this article represents a teaching strategy for that style of curriculum.

Kasey Speakman

Thanks for your generous response.

I tend to agree about starting learners with OOP. Outside of work, I have a youth mentee, and I am starting with imperative/OOP too. Mainly because that's really the only youth learning material available. And that state reflects the broader market for developers. However, internally at our company, I rather feel like I am making our devs all the more valuable by teaching them FP plus experience running FP in production. Because FP work tends to pay pretty well. At least, according to SO salary surveys.

I haven't tracked the history of FP either, but I feel like OO won (like most things that win in the market) because it had compelling products sooner. Smalltalk is a commonly cited example, and is generally considered a great experience. Interestingly, I see some parallels between Smalltalk and functional methods, which were lost in later incarnations of OO. For example, there was no if in Smalltalk. Instead, you tell the boolean object: ifTrue, run these statements. And your other objects with multiple data cases tended to be modeled this way too. Seems structurally similar to category objects and their operations. Of course, it was still imperative and it still mixed state and behavior together.

I also meant to mention. I agree that FP is generally done a disservice by people being religious about it. OO is and will continue to be a perfectly valid way to solve problems. Follow your bliss! But I like to share my FP experiences in case someone is interested. My particular journey into FP was because, despite my best efforts, I was dismayed at the long term quality of products I created. FP was one of many things I tried. FP doesn't fix the quality problem on its own, but it plays a helpful part for me. In that FP makes pure functions easier to write. And pure functions make refactoring significantly less risky.

mkoennecke

I would like to chime in that the difficult bit is the change in paradigm. I started my career with Fortran 77 and I still remember learning OOP. It was hard to get my head around that and the first two OO programs I wrote were crap. Working but crap.

The next paradigm shift came with UI code being event based.

Sudeep Prasad

Are they really two different things? Sometimes a function needs state, so we can combine/hide/abstract it in an object... Mostly we can avoid coupling it to state, so it can remain a pure function. Maybe I'm too focused on the implementation... Am I missing something here?

Kasey Speakman

Yes, the characteristics and design of one versus the other are drastically different and do not work well together. Objects lead you to create an object graph (object containing references to other objects) with mutable state distributed all throughout the graph, and that is what constitutes a program. State is shifting sand -- it is always moving underneath you due to mutations (from external method calls on the object). FP uses immutable state where possible and programs look like a tree of functions with data flowing between them. You can examine/modify an individual pure function in that tree in complete isolation -- you don't have to know or care what the rest of the program is doing because its outputs only depend on its inputs. But to get that benefit, you have to design and solve problems slightly differently than OO programmers' muscle memory expects.

Sudeep Prasad

I understand what you mean. I'd been stuck in the 'smeared state in objects' rut for a while. I realized that I was just bucketing shared state without really thinking about the decomposition goals. I would hesitate to call my mess object-oriented. It felt more like 'class-oriented random-partitioning'.
These days I recognize object-orientation as a caching mechanism, like when UI widgets need to keep local state to animate/re-draw themselves without a round-trip to the server (like StatefulWidget in Dart). Relationships are more like constraints. Where caching is not needed, a pure function that can be tested in isolation is the way to go!

Kasey Speakman

P.S. I love the apple example. In fact, the red delicious apple is not actually red. Rather it reflects the red spectrum of light into our eyeballs so we perceive red when looking at it. And the same goes for any item with color, plus white reflects most of the light spectrum and black absorbs most of it. I find that deeply interesting.

Related: Vantablack

Ed Toro

To expand on this a bit, one point in favor of FP's model is that, in various scientific senses, the world may actually be functional.

The apple isn't necessarily red. It just reflects red when you shine wide spectrum light on it. It'll appear as a different color with other spectrums of light (or none at all). So "being red" can be modeled as a function whose outputs depend on its inputs.

If you think that's a superior model of the concept of color, you may be a functionalist. (That sounds like the beginning of a joke/meme.)

Anton

It becomes like 2d. Interesting.

Anton

I guess I gravitated towards FP because I've read that it will help you have fewer bugs and code that is easier to refactor and maintain.

Tell me, in your life, when you used buggy software, what did you think?

I thought "who designed that piece of shit" and got a little emotional. But then I would go to the internet to find a solution.

So when I discovered that there is a way to write programs and have fewer bugs, that was a green light for me.

Ed Toro

I don't have an opinion on whether FP generates fewer bugs than OOP. I need to see an example written in each style side-by-side to decide which one I prefer.

Anton

Then I guess you need an example with the same language? Because I personally cannot give you an example in the same language.

Also, I won't give you an example from my experience because I'm a novice. But I've been watching some Elm courses where the instructor gave some examples and said that the equivalent JS code will actually end up biting you in the ass.

I also want to say that a lot of the things that end up biting you in JS come from the lack of types, whereas Elm has a strong type system. You probably need an example that demonstrates exactly the difference between the two styles rather than the difference between having a strong type system and a weak one.

Avalander

It's worth saying that those examples are hand picked to show the strengths of Elm.

I've used both, Javascript and Elm (a lot more js than Elm, to be honest) and in many cases Elm's types helped me a lot, but there have been instances where I had to fight Elm to do things that would have been straightforward in Javascript.

Anton

Ed, I just found a talk by Richard Feldman about null and how it propagates into the code. I'm currently watching it and I'm on minute 7.

I will provide the link. But I have to say, that I don't know if this is the example that you want to see. But please do watch this talk even just a little bit, because I'm interested in your opinion on this.

Dan Fockler

OOP seems like it's much more intuitive to the way people interact with the world. There are objects and they interact and you manipulate them. Whereas FP is like how a scientist sees the world. You have an input and you can transform the input(s) and route it to the next place, just like your example of circuitry.

Learning physics feels the same way as learning FP coming from OOP. You have this intuition about the world that may be useful for being a human, but it's all wrong to actually describe mathematically what's happening. So you have to rebuild your intuition with the new concepts you've learned.

Ed Toro

I wish I had used the words "intuitive" and "intuition" somewhere in my post. 😞

Although I'd probably shy away from describing an OOP worldview as "wrong" (not to allege that you are describing it that way). As engineers we just want to use the best tool for the job. Sometimes simple-but-incomplete representations are useful, in physics and in coding.

Phillip Smith

I love that the code examples were used to back up the philosophical ideas presented and not used as the sole crux for this piece. Thank you.

"Do not try and bend the spoon, that's impossible. Instead, only try to realize the truth...there is no spoon. Then you will see it is not the spoon that bends, it is only yourself." Spoon Boy, The Matrix

stereobooster

The story around Turing/Church is not quite correct. Turing and Church both tried to solve the same problem (the halting problem, or Entscheidungsproblem). This problem comes from a list of problems compiled by a German mathematician (Hilbert's problems). They arrived at the same conclusion at around the same time, but Church's paper was published a little earlier, and Turing later added a short proof to his original paper, which proved the equivalence of Turing Machines and lambda calculus. The original Turing machines didn't use 0/1; they used a special alphabet (Gödel numbering). Gödel also solved one of the problems from Hilbert's list (1931, Gödel's incompleteness theorems).

If you want to know the actual story from Leibniz to Turing, I recommend watching this video.

Some fun facts:

  • Gödel "invented" computational complexity theory - he asked von Neumann in one of his letters whether it is possible to prove P = NP.
  • there is a modern field of science which studies machines that are able to do more than the original Turing Machines (from 1936). It is called Hypercomputation. Turing himself doubted that his original machines were the limit of computation, so in later works he proposed a new TM with Oracles
  • Hilbert, I guess, is best known for the Hilbert Curve, which is very important in quantum physics
John Kazer

Very interesting thank you.
I find myself writing JavaScript web apps in a sort of hybrid functional-OOP approach. What works best to represent the issue at hand.
I don't know if that makes me weird or just not very good at either?!

Nando

I'm not a big expert, but reading your discussion on "object vs function" reminded me of something I've read about quantum mechanics. It seems that nothing really exists, but that everything is a kind of process instead ...
All of us are used to thinking in terms of "hard objects" in our normal lives too, so it's difficult to conceive of reality as "all is a process" (it echoes Panta Rhei - 'everything flows' in Greek).
So I think it's our materialistic frame of mind that prefers an Object paradigm to a Function paradigm ... maybe ;)
At the end of the day, this heated discussion might be better resolved by a big laugh, as some Zen master did at the end of his life :)
Last but not least ... please excuse my bad English: I'm an old Italian who didn't study it at school.
Have nice days

Ed Toro

lobste.rs/s/sjkxch/teaching_functi...

Here is some reaction to this post from another community. I share it to both acknowledge that they're correct and to compare that community with this one.

DrBearhands

I must agree with much that is said there. However, your post seems to have gathered a lot of attention, so you're doing something right, much more than myself.

To clarify, I believe teaching should consist of teasing (cultivating interest for it) and after that dumping knowledge. Most teachers fail at the first part. I personally lost interest in a few subjects for this very reason, later discovering they were really cool.

This post was a success in 'teasing', the numbers clearly show that.

Pablo Barría Urenda

I will preface the following opinion by saying that I feel like I have never really "gotten" OOP, which may mean that I haven't, or it may mean that I have and was simply underwhelmed by it. So please, feel free to enlighten me on how I got it wrong:

Having clarified that, I must say that the way I understand it, while FP is a way of structuring processes, OOP is a way of structuring code, so the two paradigms operate at different levels, and their frequent opposition is more a result of companies staking their worth on buzzword branding than anything having to do with the paradigms themselves.

OOP was the ultimate programming buzzword for so long, that my realization that it is mostly about how one organizes a program's code, and deciding which parts of it can call which other parts, was something of a buzz kill. I feel OOP has little bearing on how things are done, mostly on where they are kept, while FP, which can be traced back to Lambda Calculus which is itself a model of computation, is much more involved in the how of things.

Alex Mateescu

Well, I had some thoughts around the matter as well (take a look here if you have time on your hands: dev.to/alexm77/how-i-relearned-oop...).
I have also found that (for reasons I cannot explain) OOP makes it easier to get to grips with projects that go beyond "hello world" examples. Yet, in the grand scheme of things, expressing business logic in functional terms is simpler more often than not.
As for why FP hasn't taken off till now, my theory is that the compilers/interpreters are inherently more CPU-intensive and until recently we just didn't have enough CPU horsepower to make these things tractable.

Franco Traversaro

Great article and I really enjoyed reading it, but I still don't get how FP can be "better" than OOP when working on a "normal" web project (there are a lot of "" here, as you see). The part about hardware programming is interesting, and I see the advantages of an FP approach, but I stand by my opinion that FP smells like an elaborate academic argument. Surely I'm wrong, but I can't see it as better 😟

Ed Toro

I don't mean to give the impression that FP is better. I'm more interested in presenting it as a valid alternative. It is possible to build a "normal" web project using it. But I'm not arguing we should.

Once we accept the possibility that we can, I hope curiosity will compel us to try to figure out how. There are plenty of web frameworks for functional-styled languages to explore.

If you find yourself exploring one of those frameworks, it may violate your intuition. And that will feel "wrong", like a bad code smell. And we're trained to "trust our guts" on those kinds of things.

Before you go on that journey, try some of the thought experiments presented in this article to open yourself up to the possibility that your intuition may be, instead of protecting you from making a mistake, just hiding something useful.

Since most languages are multi-paradigm, you can begin to explore these features in your own code without making a huge commitment - even if you just replace some for-loops with forEach, map, filter, or reduce in some deep down method somewhere. FP isn't all-or-nothing.

Mihail Malo

Turing Machine

I understand OOP
I understand FP

But that... thing. It scares me.

Sebastijan Grabar

Wow, what a mind blowing comparison between OOP and FP. It almost seems like a conspiracy theory when you're reading it, but it is such a creative explanation. I'm out of words!

Ondrej

One of the best introduction articles to FP I have read, Ed. Thanks for it, bookmarking & will share! Keep on writing.