Little Functional Programming Lexicon

by Elise Huard on February 21, 2014

This guest post is by Elise Huard, a freelance software engineer. She has 15 years of software under her belt and is keen on providing experienced advice as well as coding help. She programmed in Ruby for six years before turning to Clojure and Haskell, and enjoys exploring the world of functional programming. She lives in Berlin, Germany with her family.

With Clojure, Scala and Haskell on the scene, functional programming is getting a lot of attention. I’m going to explain some terms related to functional programming, to help you understand and – who knows – nod intelligently during random discussions you read or overhear.

This is meant to be a little “Don’t panic” lexicon, not going incredibly in-depth but trying to describe the terms in as simple and friendly a way as possible. To know more, I invite you to read up on the concepts, but I hope this’ll get you started.


Closures

A closure is a function that “stores” its surrounding scope. An example in JavaScript to make this clearer:

  function multiplier(factor) {
    return function(otherFactor) {
      return factor * otherFactor;
    };
  }

The function multiplier returns another function, which will multiply any given number by the argument factor.

The returned function “closes” over factor – that is, it retains the factor variable even though execution has left the scope of the multiplier function. Every number fed to the returned function will be multiplied by factor.

In this example we closed over an argument of the surrounding scope, but it could be any other variable in the enclosing function’s scope.


Currying

You have a function that takes several arguments – currying allows you to supply one or more of these arguments and get back a new function which takes the remaining ones. Supplying only some of the arguments is called partial application.

In some programming languages, this is a very easy operation. In Haskell:

  plusThree = (+3)
  plusThree 5 -- 8

In some others, it’s a little more work syntactically, but still possible. In JavaScript:

  function plus(x) {
    return function(y) {
      return x + y;
    };
  }

  var plusThree = plus(3);
  plusThree(5); // 8
Higher Order Functions

In functional languages, functions are first-class citizens. You can use them more or less as you would any other type of value (I say more or less because establishing the equality of two functions is not possible in general – you cannot compare two functions as you would values of most other types).

Higher-order functions build on this concept, and either:

  • take one or more functions as arguments – examples are map and filter, or
  • return a function (like some of the earlier examples under currying and closures) which can then be used in later operations.
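A quick Haskell sketch of both flavors (the name multiplier is mine, picking up the closure example above):

```haskell
-- taking functions as arguments: map and filter
doubled = map (* 2) [1, 2, 3]       -- [2,4,6]
evens   = filter even [1, 2, 3, 4]  -- [2,4]

-- returning a function: multiplier 3 yields a "times three" function
multiplier factor = \other -> factor * other
twelve = multiplier 3 4             -- 12
```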

Hindley-Milner Type System

The Hindley-Milner type system is a type system for the lambda calculus that comes with a fast type-inference algorithm. It’s called Hindley-Milner because it was described independently, first by Roger Hindley and later by Robin Milner.

Type inference means you don’t have to specify the type of every single variable or function (as you must in Java or C), because the compiler infers the types for you – which saves a lot of typing and makes the code nicer to read.

The H-M type system is used in Haskell and ML-family languages (like OCaml).
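As a small illustration (the name addPair is my own), GHC will infer the most general type without any annotation:

```haskell
-- no type signature given; GHC infers addPair :: Num a => a -> a -> a
addPair x y = x + y

-- the inferred type is polymorphic, so both uses type-check
intSum    = addPair 2 3       -- 5
doubleSum = addPair 2.5 0.5   -- 3.0
```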


Homoiconicity

Homoiconicity means that the abstract syntax tree has the same structure as the program itself: going from one to the other is a straightforward conversion.

Another property of homoiconic languages is that a program’s representation is itself a data structure in the language. An example in Clojure, where every expression is also a list (which is why it’s a Lisp-like language; LISP = LISt Processing):

  (defn addTwo [x] (+ x 2))
  (addTwo 5)

The advantage of homoiconicity is that code = data: you can manipulate your program as if it were data, since it effectively is a data structure already. Homoiconicity is used in Lisp-like languages to enable powerful macros – anything Lisp can do to data structures, Lisp macros can do to Lisp code. Move over, Ruby DSL metaprogramming!


Immutability

Strictly speaking, immutability is not a property of functional programming, though it is a corollary of purity. If your functions cannot change the state of the program, variables are immutable. Adding an element to a list, say, takes a list as an argument and returns another list – the same as the first, but with the element added.
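A sketch of the list example in Haskell:

```haskell
original = [2, 3]

-- "adding" an element builds a new list; original is untouched
extended = 1 : original   -- [1,2,3]

-- under the hood, extended shares original's cells instead of copying them
stillTheSame = original   -- still [2,3]
```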

This may seem like a dreadful waste of memory (especially when the data structures/objects are large), but functional programming languages often apply optimizations under the hood that reuse the existing data structures.

Immutability is considered an advantage when writing concurrent programs: there is no danger that the data you’re currently working on will be changed underneath you, since it’s immutable. Reconciliation strategies then work out which function’s output wins.


Impurity

Impure programming languages allow side effects in the code without flagging them with loud syntactic klaxons.

The difference between a pure and an impure programming language is that a pure language makes it very explicit when a function has side effects, and which kind, so that it’s impossible to confuse side-effecting functions with pure ones.

Popular impure functional programming languages include Clojure and other Lisps, and OCaml – with Scala and JavaScript in a category of their own, since they implement both the functional and the object-oriented paradigm.


Lambda

The Greek letter λ, used (in the functional programming context) to refer to:

  1. the lambda calculus
  2. an anonymous function – a function which is used immediately and doesn’t need a name for further reference, for instance one passed as an argument to a higher-order function (like filter, map, etc.).

Lambda calculus

λ-calculus, Wikipedia says, is a formal system for expressing computation based on function abstraction and application, using variable binding and substitution. The lambda calculus is a universal model of computation: anything a Turing machine can do can be expressed in lambda calculus.

Should you know the details of lambda calculus to do functional programming?

No – unless you’re really interested in the mathematical underpinnings of functional programming, have some time, and aren’t afraid of reading with pen and paper at hand, scribbling mathematical formulas.


Lazy Evaluation

Lazy evaluation (not lazy as in lying on the couch) is, strictly speaking, independent of functional programming. It means that an expression is only evaluated when its resulting value is used or displayed – so you could have lazy evaluation in imperative languages too.

I mention it in this little lexicon because lazy evaluation was actually introduced for the lambda calculus, and Clojure and Haskell (especially the latter) have plenty of lazily evaluated functions in their standard libraries.


Advantages of lazy evaluation include:

  • being able to create infinite lists or series, of which you only evaluate elements as you need them
  • only evaluating the part of a conditional structure that needs to be evaluated (in Ruby, if – else is effectively lazily evaluated)
  • sometimes, performance gains from avoiding unnecessary calculations
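The infinite-list point, sketched in Haskell:

```haskell
-- an infinite list: elements are only computed when demanded
naturals = [0 ..]

firstFive = take 5 naturals                 -- [0,1,2,3,4]

-- mapping over the infinite list is fine too, as long as we
-- only demand finitely many results
firstSquares = take 3 (map (^ 2) naturals)  -- [0,1,4]
```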

Monads (et al.)

There are numerous texts and blog posts about what monads are, some of them crystal clear and some of them slightly obfuscating the concepts. Here’s the thing, though: unless you’re using Haskell or a similarly statically typed, pure functional language, you don’t really need to know what they are.

In short:

  1. monads allow people to bundle side effects into a pure, typed language (the IO monad, the State monad, etc.). They have a type, which indicates which kind of side effect they’re used for.
  2. monads also have a number of mathematical properties and associated functions. Those functions are designed to let you daisy-chain monad-handling functions, or to adapt ordinary functions to handle monads.
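A minimal Haskell sketch using the Maybe monad, which models the “this computation may fail” effect (half is an illustrative name of mine); >>= is the daisy-chaining function mentioned above:

```haskell
-- half succeeds on even numbers and fails (Nothing) on odd ones
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- >>= chains monad-handling functions; a Nothing stops the chain
chained = Just 8 >>= half >>= half   -- Just 2
stopped = Just 7 >>= half >>= half   -- Nothing
```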

Other terms you might hear: Monoid, Functor, Applicative … these too are only really necessary in the context of Haskell and company. Learning about them when you’re working in other kinds of languages might be interesting in the abstract, but is not essential to your everyday programming.


Purity

Purity can be used in two contexts: a pure programming language, and a pure function.

Purity for a functional programming language means that side effects have to be explicitly indicated, as is the case in Haskell (both in the function signature, usually by returning monads, and in the code).
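A small Haskell sketch of what “explicitly indicated” looks like in a signature (greet and greetWorld are illustrative names of mine):

```haskell
-- pure: no IO in the type, so no side effects are possible here
greet :: String -> String
greet name = "Hello, " ++ name

-- side-effecting: the IO in the type makes the effect explicit
greetWorld :: IO ()
greetWorld = putStrLn (greet "world")
```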

A pure function has no ‘leaks’: its only inputs are its arguments, and its only output is its return value(s). This brings us to referential transparency.

Referential Transparency

This is a property of pure functions: if you apply a function to a given set of inputs, it will always return the same output. This is due to the absence of side effects (see Side Effects for an explanation), which means no hidden parameters can change anything about the execution.

As an example of a function that isn’t referentially transparent, consider one that uses a random number in its result. The random number generation (a side effect) changes the result every time the function is run, so the function is not referentially transparent.

Why is this useful?

Well, the referentially transparent parts of a program are dependable and easy to test. You only have to test against the sets of expected arguments, without setting up any other state that could influence the result, and the function will reliably crank out the same output every time you run it.
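In code, with an illustrative square function:

```haskell
-- referentially transparent: the same input always yields the same
-- output, so any call site can be replaced by its value
square :: Int -> Int
square x = x * x

nine = square 3   -- always 9, no setup or hidden state required
```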

Side Effects

Side effects are everything that changes the state of the world – that means the state of your program (outside the scope of the current function), the standard output of your terminal, a file, or database contents.

Let’s be clear: a program has to have side effects (if only displaying a result in the terminal), otherwise there is very little point to it. To put it more strongly: a program’s sole reason for existence is to have some desired side effects, like migrating a database, showing a web page, or calculating some statistics and showing them to you.

I hope this was helpful and will give you some terms to be going on with! Welcome to the wonderful world of functional programming, I wish you all a pleasant journey!

Feel free to ask questions and give feedback in the comments section of this post. Thanks!


Posted by Elise Huard


Sarit Seal February 25, 2014 at 11:23 pm


Good article. It will help if you can include a paragraph for what is functional programming, purely functional, benefits of this approach before you jump into the cooler stuff.



Elise February 26, 2014 at 7:29 pm

Hi Sarit,
Thanks for your comment. This is not meant as an introduction to functional programming, but a little lexicon so that you can look up a particular term you overhear … I agree that as an introduction it would need a more structured approach.


Pat Shaughnessy February 26, 2014 at 7:00 pm

Thanks a lot for this, Elise. Not only did you define these common terms, but you explained the fundamental ideas behind functional programming along the way. Superb!


Pedro Assumpcao February 26, 2014 at 8:30 pm

Thanks, very useful!


Nirmalya February 27, 2014 at 8:22 pm

Very useful! Since the time I have begun to learn Functional (way of) Programming – after doing almost two decades of Imperative Programming for commercial use and hence a lot to unlearn – I am desperately trying to come to terms with all these, ahem, terms! A quick reckoner like this helps. Thanks. Also, after reading a number of blogs to understand what a Monad is and ending up being sufficiently confused, I have decided not to break my head over it any more. I consider a Monad to be a type which can hold more than one type of data and have operations defined to use it effectively, most possibly in a chain of operations on the same type. Hopefully, the real understanding of a Monad will permeate my mind some day! :-)

Many thanks again for this write-up.


Martin DeMello March 1, 2014 at 6:57 am

You can be idempotent without being pure – for instance, consider “rm -rf /”, which produces the same end state no matter how many times in succession it’s run.


Damian Esteban March 4, 2014 at 10:49 pm

This was a very helpful write-up. Thank you.


Wesley Kerfoot March 5, 2014 at 10:11 am

Hi, great article. I would also add that closures only care about the free variables of the closed function, i.e. ‘factor’ in your example. A naive compiler might just pair the function with the entire environment, which could be quite costly in terms of memory, but actually it only needs to care about the free variables. Also it might be instructive to point out that a program with no free variables is indistinguishable from a program with no closures, since all of the variables get bound at the call site (via the parameters or a let, or whatever other binding constructs the language has).

Again, very nice article, I often see people misunderstand a lot of these terms but you’ve done a good job at explaining them!


Levi March 5, 2014 at 11:15 am

Hi! This was a pretty good list of terms, and your definitions were mostly clear and to the point. There are just a couple of things I wanted to point out:

First, idempotence is actually a property of a function that, when applied more than once, yields the same value as its first application. In other words, the function “f” is idempotent if “f(f(x)) == f(x)”. For example, imagine the interface to a networked speaker. An “increment volume by x” command would not be idempotent, while a “set volume to x” command would be. Either could be expressed as a pure function if they returned the new state, but only the second could be applied multiple times without changing the value beyond where it was after the first application.

The other comment I have is regarding your treatment of Monoid, Monad, Functor, Applicative, et al. I agree that it’s not particularly important for someone who is just hearing the definitions of the other terms to understand them, but to suggest that they are only relevant to pure, statically typed languages is not correct.

Monads arise in programming more often than you might think. Promises in Javascript (depending on the particular formulation of them) form a monadic structure. The C++ proposal for promises does as well. Haskell just happens to have syntactic sugar to make using them a lot cleaner than in other languages.

Functors are a generalization of things like containers. The ‘map’ operation on lists comes from its functor structure; all functors have a similar operation that lets you apply functions evenly over their “contents”. So if you understand mapping a function over a list, you mostly understand functors. And Applicative is just a particular kind of functor that allows you to map a curried function over multiple functor values, one per argument the function takes. Monads are a kind of functor as well (lists can be monads in a couple of different ways, but polymorphic data types are functors in a way that’s unique per type).

Finally, Monoids are an incredibly useful concept that generalizes things that have an associative binary operator. It encompasses adding integers, multiplying integers, concatenating lists, and all sorts of other things. The mathematical properties of monoids play an important role in Etsy’s statsd, which allow them to efficiently gather a lot of statistical information about visitors to their site.

Anyway, as I said, there’s no need to introduce them in depth in an introductory functional programming lexicon, but you will also not serve your readers well by making them out to be irrelevant or useless concepts.


Elise March 5, 2014 at 2:17 pm

Hi Levi,
thanks for your comment.
1. idempotence: you are right, what I meant was referential transparency, so mix-up on my part.
2. allow me to disagree. Maybe when you learn about them, there’s an a-ha moment because you see the abstract groups of concepts, but they don’t fundamentally add to the everyday programming when you’re not doing Haskell. I’d posit you can be a very good programmer and spend your career not knowing those concepts.


Levi March 6, 2014 at 1:23 pm

I understand your point that you could be a happy and productive programmer without knowing the terms, but that is not the same as the concepts being unimportant. Mostly I wanted to point out that they are not tied to static types or Haskell in any fundamental way; Haskell just happens to be the vehicle through which the terms are coming into the lexicon from their native homes in abstract algebra and category theory.

Your sentence “using them in other languages is a purely academic exercise” is just plain wrong. You use monoids whenever you sum numbers or concatenate strings. You use functors whenever you apply functions over polymorphic data types as in ‘map’. You use monads when you chain promises together. Most languages statically capture the types involved, but that’s not essential to writing programs that take advantage of those patterns of program structure, as everyone who has programmed in a dynamically typed language knows.

Again, I want to emphasize that I *agree* that there is no need to try to explain them to any depth in an introductory lexicon. What I disagree with is dismissing them as “purely academic” and only usable in statically typed functional languages. That’s not a matter of opinion, it’s just incorrect. There’s no need to learn the terms immediately, but they are so general as to be applicable to any language that supports some basic functional programming features, or in the case of monoids, any programming language at all. They form a sort of “pattern language” for reasoning about programs. Yeah, you can get by without knowing them, but the same could be said of “homoiconicity”. Plenty of productive programmers spend their entire careers without touching a Lisp! Yet the concept is valuable, and ought not be dismissed as irrelevant.


Elise Huard March 6, 2014 at 4:11 pm

I see your point. Rephrased the end of the paragraph to be more nuanced.

