Top functional-programming Questions

Asked By: anderstornvig ( 458 )

For a few days I've tried to wrap my head around the functional programming paradigm in Haskell. I've done this by reading tutorials and watching screencasts, but nothing really seems to stick. Now, when learning various imperative/OO languages (like C, Java, PHP), exercises have been a good way for me to learn. But since I don't really know what Haskell is capable of, and because there are many new concepts to utilize, I haven't known where to start.

So, how did you learn Haskell? What made you really "break the ice"? Also, any good ideas for beginning exercises?

Answered By: David Miani ( 805)

I'm going to order this guide by the level of skill you have in Haskell, going from absolute beginner right up to expert. Note that this process will take many months (years?), so it is rather long.

Absolute Beginner

Firstly, Haskell is capable of anything, given enough skill. It is very fast (behind only C and C++ in my experience), and can be used for anything from simulations to servers, GUIs and web applications.

However, some problems are easier for a beginner to write in Haskell than others. Mathematical problems and list-processing programs are good candidates, as they only require the most basic Haskell knowledge to write.

Firstly, a good guide to the very basics of Haskell is the first 6 chapters of Learn You a Haskell. While reading it, it is a very good idea to also be solving simple problems with what you know.

A good list of problems to try is the Haskell 99 problems page. These start off very basic and get more difficult as you go on. Doing a lot of them is very good practice, as they let you exercise your skills in recursion and higher-order functions. I would recommend skipping any problems that require randomness, as that is a bit more difficult in Haskell.
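
For example, here is a minimal sketch in the spirit of the first few problems on that page (my own illustration, not taken from the list): find the last element of a list, once with explicit recursion and once with a fold, which exercises exactly the two skills mentioned above.

myLast :: [a] -> a
myLast [x]      = x
myLast (_ : xs) = myLast xs
myLast []       = error "myLast: empty list"

-- the same thing with a higher-order function instead of explicit recursion
myLast' :: [a] -> a
myLast' = foldr1 (\_ acc -> acc)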

Once you have done a few of those, you could move on to doing a few of the Project Euler problems. These are sorted by how many people have completed them, which is a fairly good indication of difficulty. These test your logic and Haskell more than the previous problems, but you should still be able to do the first few. A big advantage Haskell has with these problems is that its Integers aren't limited in size. To complete some of them, it will be useful to have read chapters 7 and 8 of Learn You a Haskell as well.
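
To see what that advantage means in practice (a tiny example of my own, not from any particular Euler problem): Integer is arbitrary precision, so results that would overflow a 64-bit machine word elsewhere are exact in Haskell.

factorial :: Integer -> Integer
factorial n = product [1 .. n]

main :: IO ()
main = print (factorial 100)   -- prints all 158 digits, no overflow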

Beginner

After that you should have a fairly good handle on recursion and higher-order functions, so it would be a good time to start doing some more real-world problems. A very good place to start is Real World Haskell (an online book; you can also purchase a hard copy). I found the first few chapters introduced too much too quickly for someone who has never done functional programming or used recursion before. However, with the practice you will have had from doing the previous problems, you should find it perfectly understandable.

Working through the problems in the book is a great way of learning how to manage abstractions and build reusable components in Haskell. This is vital for people used to object-oriented (OO) programming, as the usual OO abstraction mechanism (classes) doesn't appear in Haskell (Haskell has type classes, but they are very different from OO classes and more like OO interfaces). I don't think it is a good idea to skip chapters, as each introduces a lot of new ideas that are used in later chapters.
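
To illustrate that last point (a small sketch, not from the book): a type class declares a set of operations, and any type can later be made an instance, much like implementing an OO interface.

class Describable a where
    describe :: a -> String

data Animal = Cat | Dog

instance Describable Animal where
    describe Cat = "a cat"
    describe Dog = "a dog"

instance Describable Bool where
    describe True  = "yes"
    describe False = "no"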

After a while you will get to chapter 14, the dreaded monads chapter (dum dum dummmm). Almost everyone who learns Haskell has trouble understanding monads, due to how abstract the concept is. I can't think of any concept in another language that is as abstract as monads are in functional programming. Monads allow many ideas (such as IO operations, computations that might fail, parsing, ...) to be unified under one idea. So don't feel discouraged if, after reading the monads chapter, you don't really understand them. I found it useful to read many different explanations of monads; each one gives a new perspective on the problem. Here is a very good list of monad tutorials. I highly recommend All About Monads, but the others are also good.
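
As a tiny taste of one of the ideas monads unify (a sketch of my own, not from the chapter), here is "computations that might fail" with the Maybe monad: each lookup may return Nothing, and the do-block threads the possible failure through for us.

type Directory = [(String, String)]

bossPhone :: String -> Directory -> Directory -> Maybe String
bossPhone name bosses phones = do
    boss  <- lookup name bosses    -- Nothing here aborts the whole block
    phone <- lookup boss phones
    return phone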

Also, it takes a while for the concepts to truly sink in. This comes through use, but also through time. I find that sometimes sleeping on a problem helps more than anything else! Eventually the idea will click, and you will wonder why you struggled to understand a concept that in reality is incredibly simple. It is awesome when this happens, and when it does, you might find Haskell to be your favorite imperative programming language :)

To make sure that you understand Haskell's type system, you should try to solve the 20 intermediate haskell exercises. Those exercises use fun function names like "furry" and "banana", and help you build a good understanding of some basic functional programming concepts if you don't have it already. It's a nice way to spend an evening with a pile of paper covered in arrows, unicorns, sausages and furry bananas.

Intermediate

Once you understand monads, I think you have made the transition from a beginner Haskell programmer to an intermediate Haskeller. So where to go from here? The first thing I would recommend (if you haven't already learnt them while learning monads) is the various types of monads, such as Reader, Writer and State. Again, Real World Haskell and All About Monads give great coverage of this. To complete your monad training, learning about monad transformers is a must. These let you combine different types of monads (such as a Reader and a State monad) into one. This may seem useless to begin with, but after using them for a while you will wonder how you lived without them.
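
For a concrete feel of what that looks like (a minimal sketch, assuming the mtl package), here is a Reader for configuration layered over State for a counter via a transformer:

import Control.Monad.Reader
import Control.Monad.State

type App a = ReaderT Int (State Int) a   -- config is the step size, state is the counter

step :: App ()
step = do
    increment <- ask                     -- read the configured step size
    lift (modify (+ increment))          -- update the counter in the State layer

runApp :: Int -> Int -> ((), Int)
runApp increment start = runState (runReaderT (step >> step) increment) start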

Now you can finish the Real World Haskell book if you want. Skipping chapters now doesn't really matter, as long as you have monads down pat. Just choose what you are interested in.

With the knowledge you have now, you should be able to use most of the packages on Cabal (well, the documented ones at least...), as well as most of the libraries that come with Haskell. A list of interesting libraries to try:

  • Parsec: for parsing programs and text. Much better than using regexps. Excellent documentation; there is also a Real World Haskell chapter on it.

  • Quickcheck: A very cool testing library. You write a predicate that should always be true (e.g. length (reverse lst) == length lst). You then pass the predicate to quickCheck, and it will generate a lot of random values (in this case lists) and test that the predicate holds for all of them (see the sketch just after this list). See also the online manual.

  • HUnit: Unit testing in haskell.

  • gtk2hs: The most popular GUI framework for Haskell; lets you write GTK applications in Haskell.

  • happstack: A web development framework for Haskell. Instead of a database it uses a native Haskell data store. Pretty good docs (another framework would be Turbinado; I haven't tried it yet though).
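
To make the QuickCheck point above concrete, here is roughly what such a property looks like (a minimal sketch, assuming the Test.QuickCheck module from the QuickCheck package):

import Test.QuickCheck

prop_reverseLength :: [Int] -> Bool
prop_reverseLength lst = length (reverse lst) == length lst

main :: IO ()
main = quickCheck prop_reverseLength   -- generates 100 random lists and checks the property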

Also, there are many concepts (like the Monad concept) that you should eventually learn. This will be easier than learning Monads the first time, as your brain will be used to dealing with the level of abstraction involved. A very good overview for learning about these high level concepts and how they fit together is the Typeclassopedia.

  • Applicative: An interface like Monads, but less powerful. Every Monad is Applicative, but not vice versa. This is useful, as there are some types that are Applicative but are not Monads. Also, code written using the Applicative functions is often more composable than the equivalent code using the Monad functions. See Functors, Applicative Functors and Monoids from the Learn You a Haskell guide.

  • Foldable, Traversable: Typeclasses that abstract many of the operations on lists, so that the same functions can be applied to other container types. See also the haskell wiki explanation.

  • Monoid: A Monoid is a type that has a zero (or mempty) value and an operation that joins two Monoids together, such that mappend x mempty = x. Many types are Monoids, such as numbers, with mempty = 0 and mappend being addition. This is useful in many situations (a small sketch follows this list).

  • Arrows: Arrows are a way of representing computations that take an input and return an output. A function is the most basic type of arrow, but there are many other types. The library also has many very useful functions for manipulating arrows - they are very useful even if only used with plain old haskell functions.

  • Arrays: the various mutable/immutable arrays in haskell.

  • ST Monad: lets you write code with a mutable state that runs very quickly, while still remaining pure outside the monad. See the link for more details.

  • FRP: Functional Reactive Programming, a new, experimental way of writing code that handles events, triggers, inputs and outputs (such as a gui). I don't know much about this though.
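
As a quick illustration of the Monoid bullet above (a small sketch of my own), Data.Monoid already provides instances you can play with; numbers under addition and lists under (++) both satisfy the Monoid laws:

import Data.Monoid (Sum(..))

total :: Int
total = getSum (mconcat [Sum 1, Sum 2, Sum 3])   -- 6

joined :: [Int]
joined = mconcat [[1, 2], [3], [4, 5]]           -- [1,2,3,4,5]

-- in both cases, mappend x mempty == x and mappend is associative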

There are also a lot of new language features you should have a look at. I'll just list them; you can find lots of info about them from Google, the Haskell wikibook, the haskellwiki.org site and the GHC documentation.

  • Multiparameter type classes/functional dependencies
  • Type families
  • Existentially quantified types
  • Phantom types
  • GADTS
  • others...

A lot of Haskell is based around category theory, so you may want to look into that. Unfortunately I don't know any good links for learning category theory for beginners.

Finally, you will want to learn more about the various Haskell tools. These include:

  • ghc (and all its features)
  • cabal: the haskell package system
  • darcs: a distributed version control system written in haskell, very popular for haskell programs.
  • haddock: a haskell automatic documentation generator

While learning all these new libraries and concepts, it is very useful to be writing a moderate-sized project in Haskell. It can be anything (e.g. a small game, data analyser, website, compiler). Working on it will allow you to apply many of the things you are now learning. You'll stay at this level for ages (this is where I'm at).

Expert

It will take you years to get to this stage (hello from 2009!), but from here I'm guessing you start writing PhD papers and new GHC extensions, and coming up with new abstractions.

Getting Help

Finally, while at any stage of learning, there are multiple places for getting information. These are:

  • the #haskell irc channel
  • the mailing lists. These are worth signing up for just to read the discussions that take place - some are very interesting.
  • other places listed on the haskell.org home page

Conclusion

Well, this turned out longer than I expected... Anyway, I think it is a very good idea to become proficient in Haskell. It takes a long time, but that is mainly because you are learning a completely new way of thinking by doing so. It is not like learning Ruby after learning Java, but more like learning Java after learning C. Also, I am finding that my object-oriented programming skills have improved as a result of learning Haskell, as I am seeing many new ways of abstracting ideas.

Since I started learning F# and OCaml last year, I've read a huge number of articles which insist that design patterns (especially in Java) are workarounds for the missing features in imperative languages. One article I found makes a fairly strong claim:

Most people I've met have read the Design Patterns book by the Gang of Four. Any self respecting programmer will tell you that the book is language agnostic and the patterns apply to software engineering in general, regardless of which language you use. This is a noble claim. Unfortunately it is far removed from the truth.

Functional languages are extremely expressive. In a functional language one does not need design patterns because the language is likely so high level, you end up programming in concepts that eliminate design patterns all together.

The main features of functional programming include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.

Additionally, in functional languages which support OOP (such as F# and OCaml), it seems obvious to me that programmers using these languages would use the same design patterns found available to every other OOP language. In fact, right now I use F# and OCaml everyday, and there are no striking differences between the patterns I use in these languages vs the patterns I use when I write in Java.

Is there any truth to the claim that functional programming eliminates the need for OOP design patterns? If so, could you post or link to an example of a typical OOP design pattern and its functional equivalent?

Answered By: jalf ( 376)

The blog post you quoted overstates its claim a bit. FP doesn't eliminate the need for design patterns. The term "design patterns" just isn't widely used to describe the same thing in FP languages. But they exist. Functional languages have plenty of best practice rules of the form "when you encounter problem X, use code that looks like Y", which is basically what a design pattern is.

However, it's correct that most OOP-specific design patterns are pretty much irrelevant in functional languages.

I don't think it should be particularly controversial to say that design patterns in general only exist to patch up shortcomings in the language. And if another language can solve the same problem trivially, that other language won't have need of a design pattern for it. Users of that language may not even be aware that the problem exists, because, well, it's not a problem in that language.

The main features of functional programming include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.

What is the command pattern, if not an approximation of first-class functions? :) In an FP language, you'd simply pass a function as the argument to another function. In an OOP language, you have to wrap up the function in a class, which you can instantiate and then pass that object to the other function. The effect is the same, but in OOP it's called a design pattern, and it takes a whole lot more code. And what is the abstract factory pattern, if not currying? Pass parameters to a function a bit at a time, to configure what kind of value it spits out when you finally call it.
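
To make that concrete (a small Haskell sketch, not from the original answer): the "command" is just a function you pass around, and the "factory" is just partial application.

-- command pattern: pass the action as an ordinary value
runTwice :: (Int -> Int) -> Int -> Int
runTwice command x = command (command x)

-- abstract factory: configure a function a bit at a time
makeGreeter :: String -> (String -> String)
makeGreeter greeting = \name -> greeting ++ ", " ++ name ++ "!"

main :: IO ()
main = do
    print (runTwice (+ 3) 10)             -- 16
    putStrLn (makeGreeter "Hello" "FP")   -- Hello, FP!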

So yes, several GoF design patterns are rendered redundant in FP languages, because more powerful and easier to use alternatives exist.

But of course there are still design patterns which are not solved by FP languages. What is the FP equivalent of a singleton? (Disregarding for a moment that singletons are generally a terrible pattern to use)

And it works both ways too. As I said, FP has its design patterns too, people just don't usually think of them as such.

But you may have run across monads. What are they, if not a design pattern for "dealing with global state"? That's a problem that's so simple in OOP languages that no equivalent design pattern exists there.

We don't need a design pattern for "increment a static variable", or "read from that socket", because it's just what you do.

In (pure) functional languages, side effects and mutable state are impossible, unless you work around it with the monad "design pattern", or any of the other methods for allowing the same thing.

Additionally, in functional languages which support OOP (such as F# and OCaml), it seems obvious to me that programmers using these languages would use the same design patterns found available to every other OOP language. In fact, right now I use F# and OCaml everyday, and there are no striking differences between the patterns I use in these languages vs the patterns I use when I write in Java.

Perhaps because you're still thinking imperatively? A lot of people, after dealing with imperative languages all their lives, have a hard time giving up on that habit when they try a functional language. (I've seen some pretty funny attempts at F#, where literally every function was just a string of 'let' statements, basically as if you'd taken a C program, and replaced all semicolons with 'let'. :))

But another possibility might be that you just haven't realized that you're solving problems trivially which would require design patterns in an OOP language.

When you use currying, or pass a function as an argument to another, stop and think about how you'd do that in an OOP language.

Is there any truth to the claim that functional programming eliminates the need for OOP design patterns?

Yep. :) When you work in a FP language, you no longer need the OOP-specific design patterns. But you still need some general design patterns, like MVC or other non-OOP specific stuff, and you need a couple of new FP-specific "design patterns" instead. All languages have their shortcomings, and design patterns are usually how we work around them.

Anyway, you may find it interesting to try your hand at "cleaner" FP languages, like ML (my personal favorite, at least for learning purposes), or Haskell, where you don't have the OOP crutch to fall back on when you're faced with something new.

Edit: As expected, a few people objected to my definition of design patterns as "patching up shortcomings in a language", so here's my justification: As already said, most design patterns are specific to one programming paradigm, or sometimes even one specific language. Often, they solve problems that only exist in that paradigm (see monads for FP, or abstract factories for OOP). Why doesn't the abstract factory pattern exist in FP? Because the problem it tries to solve does not exist there. So, if a problem exists in OOP languages which does not exist in FP languages, then clearly that is a shortcoming of OOP languages. The problem can be solved, but your language does not solve it for you; it requires a bunch of boilerplate code from you to work around it. Ideally, we'd like our programming language to magically make all problems go away. Any problem that is still there is in principle a shortcoming of the language. ;)

I have to admit that I don't know much about functional programming. I've read about it here and there, and so came to know that in functional programming a function returns the same output for the same input, no matter how many times the function is called. It's exactly like a mathematical function, which evaluates to the same output for the same values of the input parameters involved in the function expression.

For example, consider this:

f(x,y) = x*x + y; //it is a mathematical function

No matter how many times you use f(10,4), its value will always be 104. As such, wherever you've written f(10,4), you can replace it with 104 without altering the value of the whole expression. This property is referred to as the referential transparency of an expression.

As Wikipedia says (link),

Conversely, in functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) both times.

So my question is: can a time function (which returns the current time) exist in functional programming?

  • If yes, then how can it exist? Does it not violate the principle of functional programming? It particularly violates referential transparency, which is one of the properties of functional programming (if I understand it correctly).

  • Or if no, then how can one know the current time in functional programming?

Answered By: Carsten König ( 190)

Yes and no.

Different FP languages solve this in different ways.

In Haskell (a very pure one), all this stuff has to happen in something called the IO monad - see here. You can think of it as getting another input (and output) into your function (the world-state), or, more simply, as a place where "impureness" like getting the changing time happens.
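
For instance (a minimal sketch, assuming the standard time package), asking for the current time in Haskell is an IO action rather than a pure function:

import Data.Time.Clock (getCurrentTime)

printTime :: IO ()
printTime = do
    now <- getCurrentTime   -- an action performed in IO, not a pure value
    print now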

Other languages like F# just have some impureness built in and so you can have a function that returns different values for the same input - just like normal imperative languages.

As Jeffrey Burka mentioned in his comment: Here is the nice intro to the IO Monad straight from the HaskellWiki.

I've read the Wikipedia article on reactive programming. I've also read the small article on functional reactive programming. The descriptions are quite abstract.

What does functional reactive programming (FRP) mean in practice? What does reactive programming (as opposed to non-reactive programming?) consist of? My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.

Answered By: Conal ( 247)

If you want to get a feel for FRP, you could start with the old Fran tutorial from 1998, which has animated illustrations. For papers, start with Functional Reactive Animation and then follow up on links on the publications link on my home page and the FRP link on the Haskell wiki.

Personally, I like to think about what FRP means, rather than how it might be implemented. So I don't describe FRP in representation/implementation terms as Thomas K does in another answer (graphs, nodes, edges, firing, execution, etc). There are many possible implementation styles, but no implementation says what FRP is.

I do resonate with Laurence G's simple description that FRP is about "datatypes that represent a value 'over time' ". Conventional imperative programming captures these dynamic values only indirectly, through state and mutations. The complete history (past, present, future) has no first class representation. Moreover, only discretely evolving values can be (indirectly) captured, since the imperative paradigm is temporally discrete. In contrast, FRP captures these evolving values directly and has no difficulty with continuously evolving values.

FRP is also unusual in that it is concurrent without running afoul of the theoretical & pragmatic rats' nest that plagues imperative concurrency. Semantically, FRP's concurrency is fine-grained, determinate, and continuous. (I'm talking about meaning, not implementation. An implementation may or may not involve concurrency or parallelism.) Semantic determinacy is very important for reasoning, both rigorous and informal. While concurrency adds enormous complexity to imperative programming, due to nondeterministic interleaving, it is effortless in FRP.

So, what is FRP? You could have invented it yourself. Start with these ideas:

  • Dynamic/evolving values (i.e., values "over time") are first class values in themselves. You can define them and combine them, pass them into & out of functions. I called these things "behaviors".

  • Behaviors are built up out of a few primitives, like constant (static) behaviors and time (like a clock), and then with sequential and parallel combination. n behaviors are combined by applying an n-ary function (on static values), "point-wise", i.e., continuously over time.

  • To account for discrete phenomena, have another type (family) of "events", each of which has a stream (finite or infinite) of occurrences. Each occurrence has an associated time and value.

  • To come up with the compositional vocabulary out of which all behaviors and events can be built, play with some examples. Keep deconstructing into pieces that are more general/simple.

  • So that you know you're on solid ground, give the whole model a compositional foundation, using the technique of denotational semantics, which just means that (a) each type has a corresponding simple & precise mathematical type of "meanings", and (b) each primitive and operator has a simple & precise meaning as a function of the meanings of the constituents. Never, ever mix implementation considerations into your exploration process. If this description is gibberish to you, consult (a) Denotational design with type class morphisms, (b) Push-pull functional reactive programming (ignoring the implementation pieces in the latter), and (c) the Denotational Semantics Haskell wikibooks page. Beware that denotational semantics has two parts, from its two founders Christopher Strachey and Dana Scott: the easier & more useful Strachey part and the harder and less useful (for design) Scott part.

If you stick with these principles, I expect you'll get something more-or-less in the spirit of FRP.

Where did I get these principles? In software design, I always ask the same question: "what does it mean?". Denotational semantics gave me a precise framework for this question, and one that fits my aesthetics (unlike operational or axiomatic semantics, which leaves me unsatisfied). So I asked myself what is behavior? I soon realized that the temporally discrete nature of imperative computation is an accommodation to a particular style of machine, rather than a natural description of behavior itself. The simplest precise description of behavior I can think of is simply "function of time", so that's my model. Delightfully, this model handles continuous, deterministic concurrency with ease and grace.
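
In code, the core of that model is almost nothing (a sketch meant only to convey the meaning, not an efficient implementation):

type Time = Double
type Behavior a = Time -> a              -- a value "over time" is just a function of time

time :: Behavior Time
time = \t -> t                           -- the clock itself

constantB :: a -> Behavior a
constantB x = \_ -> x                    -- a static value viewed over time

lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f ba bb = \t -> f (ba t) (bb t)    -- point-wise, continuous combination

wobble :: Behavior Double
wobble = lift2 (+) (constantB 1) (sin . time)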

It's been quite a challenge to implement this model correctly and efficiently, but that's another story.

Does anyone know what is the worst possible asymptotic slowdown that can happen when programming purely functionally as opposed to imperatively (i.e. allowing side-effects)?

Clarification from comment by itowlson: is there any problem for which the best known non-destructive algorithm is asymptotically worse than the best known destructive algorithm, and if so by how much?

Answered By: Brian Campbell ( 379)

According to Pippenger [1996], when comparing a Lisp system that is purely functional (and has strict evaluation semantics, not lazy) to one that can mutate data, an algorithm written for the impure Lisp that runs in O(n) can be translated to an algorithm in the pure Lisp that runs in O(n log n) time (based on work by Ben-Amram and Galil [1992] about simulating random access memory using only pointers). Pippenger also establishes that there are algorithms for which that is the best you can do; there are problems which are O(n) in the impure system which are Ω(n log n) in the pure system.

There are a few caveats to be made about this paper. The most significant is that it does not address lazy functional languages, such as Haskell. Bird, Jones and De Moor [1997] demonstrate that the problem constructed by Pippenger can be solved in a lazy functional language in O(n) time, but they do not establish (and as far as I know, no one has) whether or not a lazy functional language can solve all problems in the same asymptotic running time as a language with mutation.

The problem Pippenger constructed, which requires Ω(n log n), is specifically designed to achieve this result and is not necessarily representative of practical, real-world problems. There are a few restrictions on the problem that are a bit unexpected, but necessary for the proof to work; in particular, the problem requires that results are computed on-line, without being able to access future input, and that the input consists of a sequence of atoms from an unbounded set of possible atoms, rather than a fixed-size set. And the paper only establishes (lower bound) results for an impure algorithm of linear running time; for problems that require a greater running time, it is possible that the extra O(log n) factor seen in the linear problem may be "absorbed" by the extra operations necessary for algorithms with greater running times. These clarifications and open questions are explored briefly by Ben-Amram [1996].

In practice, many algorithms can be implemented in a pure functional language at the same efficiency as in a language with mutable data structures. For a good reference on techniques to use for implementing purely functional data structures efficiently, see Chris Okasaki's "Purely Functional Data Structures" [Okasaki 1998] (which is an expanded version of his thesis [Okasaki 1996]).

Anyone who needs to implement algorithms on purely-functional data structures should read Okasaki. You can always get at worst an O(log n) slowdown per operation by simulating mutable memory with a balanced binary tree, but in many cases you can do considerably better than that, and Okasaki describes many useful techniques, from amortized techniques to real-time ones that do the amortized work incrementally. Purely functional data structures can be a bit difficult to work with and analyze, but they provide many benefits like referential transparency that are helpful in compiler optimization, in parallel and distributed computing, and in implementation of features like versioning, undo, and rollback.
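
As a taste of the style (a standard textbook structure, not code quoted from Okasaki), here is the classic two-list FIFO queue: pushes go onto the back list, and the cost of the occasional reverse is amortized over the operations that built that list up.

data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

push :: a -> Queue a -> Queue a
push x (Queue front back) = check (Queue front (x : back))

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue [] _)         = Nothing
pop (Queue (x : f) back) = Just (x, check (Queue f back))

-- keep the front list non-empty unless the whole queue is empty
check :: Queue a -> Queue a
check (Queue [] back) = Queue (reverse back) []
check q               = q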

Note also that all of this discusses only asymptotic running times. Many techniques for implementing purely functional data structures give you a certain amount of constant factor slowdown, due to extra bookkeeping necessary for them to work, and implementation details of the language in question. The benefits of purely functional data structures may outweigh these constant factor slowdowns, so you will generally need to make trade-offs based on the problem in question.


Having briefly looked at Haskell recently I wondered whether anybody could give a brief, succinct, practical explanation as to what a monad essentially is? I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail, so could somebody here help me?

Answered By: JacquesB ( 304)

First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder which is a bit more descriptive of what they are actually useful for.

You ask for practical examples:

Example 1: List comprehension:

[x*2 | x<-[1..10], odd x]

This expression returns the doubles of all odd numbers in the range from 1 to 10. Very useful!

Example 2: Input/Output:

do
   putStrLn "What is your name?"
   name <- getLine
   putStrLn ("Welcome, " ++ name ++ "!")

Both examples use monads, a.k.a. computation builders. The common theme is that the monad chains operations in some specific, useful way. In the list comprehension, the operations are chained such that if an operation returns a list, then the following operations are performed on every item in the list. The IO monad, on the other hand, performs the operations sequentially, but passes a "hidden variable" along, which represents "the state of the world" and allows us to write IO code in a pure functional manner.

It turns out the pattern of chaining operations is quite useful, and is used for lots of different things in Haskell.

Another example is exceptions: using the Error monad, operations are chained such that they are performed sequentially, except that if an error is thrown, the rest of the chain is abandoned.

Both the list-comprehension syntax and the do-notation are syntactic sugar for chaining operations using the >>= operator. A monad is basically just a type that supports the >>= operator.

Example 3: A parser

This is a very simple parser which parses either a quoted string or a number:

-- assumed import and result type, so the fragment compiles as written
import Text.ParserCombinators.Parsec

data Value = StringValue String | NumberValue Integer

parseExpr = parseString <|> parseNumber

parseString = do
    char '"'
    x <- many (noneOf "\"")
    char '"'
    return (StringValue x)

parseNumber = do
    num <- many1 digit
    return (NumberValue (read num))

The operations char, digit, etc. are pretty simple; they either match or they don't. The magic is the monad which manages the control flow: the operations are performed sequentially until a match fails, in which case the monad backtracks to the latest <|> and tries the next option. Again, a way of chaining operations with some additional, useful semantics.

Example 4: Asynchronous programming

The above examples are in Haskell, but it turns out F# also supports monads. This example is stolen from Don Syme:

let AsyncHttp(url:string) =
    async {  let req = WebRequest.Create(url)
             let! rsp = req.GetResponseAsync()
             use stream = rsp.GetResponseStream()
             use reader = new System.IO.StreamReader(stream)
             return reader.ReadToEnd() }

This method fetches a web page. The punch line is the use of GetResponseAsync - it actually waits for the response on a separate thread, while the main thread returns from the function. The last three lines are executed on the spawned thread when the response has been received.

In most other languages you would have to explicitly create a separate function for the lines that handle the response. The async monad is able to "split" the block on its own and postpone the execution of the latter half. (The async {} syntax indicates that the control flow in the block is defined by the async monad)

How they work

So how can a monad do all these fancy control-flow things? What actually happens in a do-block (or a computation expression, as they are called in F#) is that every operation (basically every line) is wrapped in a separate anonymous function. These functions are then combined using the bind operator (spelled >>= in Haskell). Since the bind operation combines functions, it can execute them as it sees fit: sequentially, multiple times, in reverse, discard some, execute some on a separate thread when it feels like it, and so on.

As an example, this is the expanded version of the IO-code from example 2:

putStrLn "What is your name?"
>>= (\_ -> getLine)
>>= (\name -> putStrLn ("Welcome, " ++ name ++ "!"))

This is uglier, but it's also more obvious what is actually going on. The >>= operator is the magic ingredient: it takes a value (on the left side) and combines it with a function (on the right side) to produce a new value. This new value is then taken by the next >>= operator and again combined with a function to produce a new value. >>= can be viewed as a mini-evaluator.

Note that >>= is overloaded for different types, so every monad has its own implementation of >>=. (All the operations in the chain have to be of the type of the same monad though, otherwise the >>= operator won't work.)

The simplest possible implementation of >>= just takes the value on the left, applies it to the function on the right and returns the result, but as said before, what makes the whole pattern useful is when there is something extra going on in the monad's implementation of >>=.
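
In Haskell, that "simplest possible implementation" is the Identity monad; writing it out (a minimal sketch, using the modern Functor/Applicative/Monad hierarchy) makes it clear there is no magic in >>= itself, only in what a particular monad adds to it.

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
    fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
    pure = Identity
    Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
    Identity x >>= f = f x   -- just apply the function to the value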

There is some additional cleverness in how the values are passed from one operation to the next, but this requires a deeper explanation of the Haskell type system.

Summing up

In Haskell terms, a monad is a parameterized type which is an instance of the Monad type class, which defines >>= along with a few other operators. In layman's terms, a monad is just a type for which the >>= operation is defined.

In itself >>= is just a cumbersome way of chaining functions, but with the presence of the do-notation, which hides the "plumbing", the monadic operations turn out to be a very nice and useful abstraction, useful in many places in the language, and useful for creating your own mini-languages within the language.

Why are monads hard?

For many Haskell learners, monads are an obstacle they hit like a brick wall. It's not that monads themselves are complex, but that the implementation relies on many other advanced Haskell features like parameterized types, type classes, and so on. The problem is that Haskell IO is based on monads, and IO is probably one of the first things you want to understand when learning a new language - after all, it's not much fun to create programs which don't produce any output. I have no immediate solution for this chicken-and-egg problem, except treating IO like "magic happens here" until you have enough experience with other parts of the language. Sorry.

Whilst starting to learn lisp, I've come across the term tail-recursive. What does it mean?

Answered By: Lorin Hochstein ( 231)

Tail recursion is well-described in previous answers, but I think an example in action would help to illustrate the concept.

Consider a simple function that adds the first N integers. (e.g. sum(5) = 1 + 2 + 3 + 4 + 5 = 15).

Here is a simple Python implementation that uses recursion:

def recsum(x):
    if x == 1:
        return x
    else:
        return x + recsum(x - 1)

If you called recsum(5), this is what the Python interpreter would evaluate.

recsum(5)
5 + recsum(4)
5 + (4 + recsum(3))
5 + (4 + (3 + recsum(2)))
5 + (4 + (3 + (2 + recsum(1))))
5 + (4 + (3 + (2 + 1)))
15

Note how every recursive call has to complete before the Python interpreter begins to actually do the work of calculating the sum.

Here's a tail-recursive version of the same function:

def tailrecsum(x, running_total=0):
  if x == 0:
    return running_total
  else:
    return tailrecsum(x - 1, running_total + x)

Here's the sequence of events that would occur if you called tailrecsum(5), (which would effectively be tailrecsum(5, 0), because of the default second argument).

tailrecsum(5, 0)
tailrecsum(4, 5)
tailrecsum(3, 9)
tailrecsum(2, 12)
tailrecsum(1, 14)
tailrecsum(0, 15)
15

In the tail-recursive case, with each evaluation of the recursive call, the running_total is updated.

Note: As mentioned in the comments, Python doesn't have built-in support for optimizing away tail calls, so there's no advantage to doing this in Python. However, you can use a decorator to achieve the optimization.

What is a good way to design/structure large functional programs, especially in Haskell?

I've been through a bunch of the tutorials (Write Yourself a Scheme being my favorite, with Real World Haskell a close second) - but most of the programs are relatively small, and single-purpose. Additionally, I don't consider some of them to be particularly elegant (for example, the vast lookup tables in WYAS).

I'm now wanting to write larger programs, with more moving parts - acquiring data from a variety of different sources, cleaning it, processing it in various ways, displaying it in user interfaces, persisting it, communicating over networks, etc. How could one best structure such code to be legible, maintainable, and adaptable to changing requirements?

There is quite a large literature addressing these questions for large object-oriented imperative programs. Ideas like MVC, design patterns, etc. are decent prescriptions for realizing broad goals like separation of concerns and reusability in an OO style. Additionally, newer imperative languages lend themselves to a 'design as you grow' style of refactoring to which, in my novice opinion, Haskell appears less well-suited.

Is there an equivalent literature for Haskell? How is the zoo of exotic control structures available in functional programming (monads, arrows, applicative, etc.) best employed for this purpose? What best practices could you recommend?

Thanks!

EDIT (this is a follow-up to Don Stewart's answer):

@dons mentioned: "Monads capture key architectural designs in types."

I guess my question is: how should one think about key architectural designs in a pure functional language?

Consider the example of several data streams, and several processing steps. I can write modular parsers for the data streams to a set of data structures, and I can implement each processing step as a pure function. The processing steps required for one piece of data will depend on its value and others'. Some of the steps should be followed by side-effects like GUI updates or database queries.

What's the 'Right' way to tie the data and the parsing steps in a nice way? One could write a big function which does the right thing for the various data types. Or one could use a monad to keep track of what's been processed so far and have each processing step get whatever it needs next from the monad state. Or one could write largely separate programs and send messages around (I don't much like this option).

The slides he linked have a Things we Need bullet: "Idioms for mapping design onto types/functions/classes/monads". What are the idioms? :)

Answered By: Don Stewart ( 203)

I talk a bit about this in Engineering Large Projects in Haskell and in the Design and Implementation of XMonad. Engineering in the large is about managing complexity. The primary code-structuring mechanisms in Haskell for managing complexity are:

The type system

  • Use the type system to enforce abstractions, simplifying interactions.
  • Enforce key invariants via types
    • (e.g. that certain values cannot escape some scope)
    • That certain code does no IO, does not touch the disk
  • Enforce safety: checked exceptions (Maybe/Either); avoid mixing concepts (Word, Int, Address); a small sketch follows this list
  • Good data structures (like zippers) can make some classes of testing needless, as they rule out e.g. out of bounds errors statically.
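
A small sketch of the "avoid mixing concepts" point (my own illustration): newtype wrappers cost nothing at runtime but stop you from confusing two plain numbers, because the compiler rejects the mix-up.

newtype Address = Address Int deriving (Eq, Show)
newtype Offset  = Offset  Int deriving (Eq, Show)

advance :: Address -> Offset -> Address
advance (Address a) (Offset o) = Address (a + o)

-- advance (Offset 4) (Address 100)   -- rejected at compile time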

The profiler

  • Provide objective evidence of your program's heap and time profiles.
  • Heap profiling, in particular, is the best way to ensure no unnecessary memory use.

Purity

  • Reduce complexity dramatically by removing state. Purely functional code scales, because it is compositional. All you need is the type to determine how to use some code -- it won't mysteriously break when you change some other part of the program.
  • Use lots of "model/view/controller" style programming: parse external data as soon as possible into purely functional data structures, operate on those structures, then once all work is done, render/flush/serialize out. Keeps most of your code pure

Testing

  • QuickCheck + Haskell Code Coverage, to ensure you are testing the things you can't check with types.
  • GHC's +RTS options (e.g. +RTS -s) are great for seeing if you're spending too much time doing GC.
  • QuickCheck can also help you identify clean, orthogonal APIs for your modules. If the properties of your code are difficult to state, they're probably too complex. Keep refactoring until you have a clean set of properties that can test your code, that compose well. Then the code is probably well designed too.

Monads for Structuring

  • Monads capture key architectural designs in types (this code accesses hardware, this code is a single-user session, etc.)
  • E.g. the X monad in xmonad, captures precisely the design for what state is visible to what components of the system.

Type classes and existential types

  • Use type classes to provide abstraction: hide implementations behind polymorphic interfaces.

Concurrency and parallelism

  • Sneak par into your program to beat the competition with easy, composable parallelism.
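
A tiny sketch of what "sneaking par in" can look like (assuming the parallel package's Control.Parallel):

import Control.Parallel (par, pseq)

-- evaluate the two halves in parallel, then combine them
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs
    b = sum ys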

Refactor

  • You can refactor in Haskell a lot. The types ensure your large scale changes will be safe, if you're using types wisely. This will help your codebase scale. Make sure that your refactorings will cause type errors until complete.

Use the FFI wisely

  • The FFI makes it easier to play with foreign code, but that foreign code can be dangerous.
  • Be very careful in assumptions about the shape of data returned.

Meta programming

  • A bit of Template Haskell or generics can remove boilerplate.

Packaging and distribution

  • Use Cabal. Don't roll your own build system.
  • Use Haddock for good API docs
  • Tools like graphmod can show your module structures.
  • Rely on the Haskell Platform versions of libraries and tools, if at all possible. It is a stable base.

Warnings

  • Use -Wall to keep your code clean of smells. You might also look at Agda, Isabelle or Catch for more assurance. For lint-like checking, see the great hlint, which will suggest improvements.

With all these tools you can keep a handle on complexity, removing as many interactions between components as possible. Ideally, you have a very large base of pure code, which is really easy to maintain, since it is compositional. That's not always possible, but it is worth aiming for.

In general: decompose the logical units of your system into the smallest referentially transparent components possible, then implement them in modules. Global or local environments for sets of components (or inside components) might be mapped to monads. Use algebraic data types to describe core data structures. Share those definitions widely.

In terms that an OOP programmer would understand (without any functional programming background), what is a monad?

What problem does it solve and what are the most common places it's used?

EDIT:

To clarify the kind of understanding I was looking for, let's say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads to the OOP app?

Answered By: Eric Lippert ( 245)

In terms that an OOP programmer would understand (without any functional programming background), what is a monad?

A monad is an "amplifier" of types that obeys certain rules and which has certain operations provided.

First, what is an "amplifier of types"? By that I mean some system which lets you take a type and turn it into a more special type. For example, in C# consider Nullable<T>. This is an amplifier of types. It lets you take a type, say int, and add a new capability to that type, namely, that now it can be null when it couldn't before.

As a second example, consider IEnumerable<T>. It is an amplifier of types. It lets you take a type, say, string, and add a new capability to that type, namely, that you can now make a sequence of strings out of any number of single strings.

What are the "certain rules"? Briefly, that there is a sensible way for functions on the underlying type to work on the amplified type such that they follow the normal rules of functional composition. For example, if you have a function on integers, say

int M(int x) { return x + N(x * 2); }

then the corresponding function on Nullable<int> can make all the operators and calls in there work together "in the same way" that they did before.

(That is incredibly vague and imprecise; you asked for an explanation that didn't assume anything about knowledge of functional composition.)

What are the "operations"?

  1. That there is a way to take a value of an unamplified type and turn it into a value of the amplified type.
  2. That there is a way to transform operations on the unamplified type into operations on the amplified type that obeys the rules of functional composition mentioned before
  3. That there is usually a way to get the unamplified type back out of the amplified type. (This last point isn't strictly necessary for a monad but it is frequently the case that such an operation exists.)

Again, take Nullable<T> as an example. You can turn an int into a Nullable<int> with the constructor. The C# compiler takes care of most nullable "lifting" for you, but if it didn't, the lifting transformation is straightforward: an operation, say,

int M(int x) { whatever }

is transformed into

Nullable<int> M(Nullable<int> x) 
{ 
    if (x == null) 
        return null; 
    else 
        return new Nullable<int>(whatever);
}

And turning a nullable int back into an int is done with the Value property.

It's the function transformation that is the key bit. Notice how the actual semantics of the nullable operation -- that an operation on a null propagates the null -- is captured in the transformation. We can generalize this. Suppose you have a function from int to int, like our original M. You can easily make that into a function that takes an int and returns a Nullable<int> because you can just run the result through the nullable constructor. Now suppose you have this higher-order method:

Nullable<T> Bind<T>(Nullable<T> amplified, Func<T, Nullable<T>> func) where T : struct
{
    if (amplified == null) 
        return null;
    else
        return func(amplified.Value);
}

See what you can do with that? Any method that takes an int and returns an int, or takes an int and returns a nullable int can now have the nullable semantics applied to it.

Furthermore: suppose you have two methods

Nullable<int> X(int q) { ... }
Nullable<int> Y(int r) { ... }

and you want to compose them:

Nullable<int> Z(int s) { return X(Y(s)); }

That is, Z is the composition of X and Y. But you cannot do that because X takes an int, and Y returns a nullable int. But since you have the "bind" operation, you can make this work:

Nullable<int> Z(int s) { return Bind(Y(s), X); }

The bind operation on a monad is what makes composition of functions on amplified types work. The "rules" I handwaved about above are that the monad preserves the rules of normal function composition; that composing with identity functions results in the original function, that composition is associative, and so on.

In C#, "Bind" is called "SelectMany". Take a look at how it works on the sequence monad. We need to have three things: turn a value into a sequence, turn a sequence into a value, and bind operations on sequences. Those operations are:

IEnumerable<T> MakeSequence<T>(T item)
{
    yield return item;
}
T Single<T>(IEnumerable<T> sequence)
{
    // let's just take the first one
    foreach(T item in sequence) return item;
    throw new InvalidOperationException("empty sequence");
}
IEnumerable<T> SelectMany<T>(IEnumerable<T> seq, Func<T, IEnumerable<T>> func)
{
    foreach(T item in seq)
        foreach(T result in func(item))
            yield return result;            
}

The nullable monad rule was "to combine two functions that produce nullables together, check to see if the inner one results in null; if it does, produce null, if it does not, then call the outer one with the result". That's the desired semantics of nullable. The sequence monad rule is "to combine two functions that produce sequences together, apply the outer function to every element produced by the inner function, and then concatenate all the resulting sequences together". The fundamental semantics of the monads are captured in the Bind/SelectMany methods; this is the method that tells you what the monad really means.

We can do even better. Suppose you have a sequence of ints, and a method that takes ints and results in sequences of strings. We could generalize the binding operation to allow composition of functions that take and return different amplified types, so long as the inputs of one match the outputs of the other:

IEnumerable<U> SelectMany<T,U>(IEnumerable<T> seq, Func<T, IEnumerable<U>> func)
{
    foreach(T item in seq)
        foreach(U result in func(item))
            yield return result;            
}

So now we can say "amplify this bunch of individual integers into a sequence of integers. Transform this particular integer into a bunch of strings, amplified to a sequence of strings. Now put both operations together: amplify this bunch of integers into the concatenation of all the sequences of strings." Monads allow you to compose your amplifications.

What problem does it solve and what are the most common places it's used?

That's rather like asking "what problems does the singleton pattern solve?", but I'll give it a shot.

Monads are typically used to solve problems like:

  • I need to make new capabilities for this type and still combine old functions on this type to use the new capabilities.
  • I need to capture a bunch of operations on types and represent those operations as composable objects, building up larger and larger compositions until I have just the right series of operations represented, and then I need to start getting results out of the thing
  • I need to represent side-effecting operations cleanly in a language that hates side effects

(Note that these are basically three ways of saying the same thing.)

C# uses monads in its design. As already mentioned, the nullable pattern is highly akin to the "maybe monad". LINQ is entirely built out of monads; the "SelectMany" method is what does the semantic work of composition of operations. (Erik Meijer is fond of pointing out that every LINQ function could actually be implemented by SelectMany; everything else is just a convenience.)

To clarify the kind of understanding I was looking for, let's say you were converting an FP application that had monads into an OOP application. What would you do to port the responsibilities of the monads into the OOP app?

Most OOP languages do not have a rich enough type system to represent the monad pattern itself directly; you need a type system that supports types that are higher types than generic types. So I wouldn't try to do that. Rather, I would implement generic types that represent each monad, and implement methods that represent the three operations you need: turning a value into an amplified value, turning an amplified value into a value, and transforming a function on unamplified values into a function on amplified values.

A good place to start is how we implemented LINQ in C#. Study the SelectMany method; it is the key to understanding how the sequence monad works in C#. It is a very simple method, but very powerful!

For a more in-depth and theoretically sound explanation of monads in C#, I highly recommend my colleague Wes Dyer's article on the subject. This article is what explained monads to me when they finally "clicked" for me.

http://blogs.msdn.com/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx

I've recently caught the FP bug (trying to learn Haskell), and I've been really impressed with what I've seen so far (first-class functions, lazy evaluation, and all the other goodies). I'm no expert yet, but I've already begun to find it easier to reason "functionally" than imperatively for basic algorithms (and I'm having trouble going back where I have to).

The one area where current FP seems to fall flat, however, is GUI programming. The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style. I haven't used F#, but my understanding is that it does something similar using OOP with .NET classes. Obviously, there's a good reason for this--current GUI programming is all about IO and side effects, so purely functional programming isn't possible with most current frameworks.

My question is, is it possible to have a functional approach to GUI programming? I'm having trouble imagining what this would look like in practice. Does anyone know of any frameworks, experimental or otherwise, that try this sort of thing (or even any frameworks that are designed from the ground up for a functional language)? Or is the solution to just use a hybrid approach, with OOP for the GUI parts and FP for the logic? (I'm just asking out of curiosity--I'd love to think that FP is "the future," but GUI programming seems like a pretty large hole to fill.)

Answered By: Don Stewart ( 74)

The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style

That's not really the "Haskell approach" -- that's just how you bind to imperative GUI toolkits most directly -- via an imperative interface. Haskell just happens to have fairly prominent bindings.

There are several moderately mature, or more experimental purely functional/declarative approaches to GUIs, mostly in Haskell, and primarily using functional reactive programming.

One example, for those of you not familiar with Haskell, is Flapjax (http://www.flapjax-lang.org/), an implementation of functional reactive programming on top of JavaScript.