I remember when I first learned of the existence of Haskell. It was around the time that Apple first released Swift. Excited by the prospect of learning an entirely new language and participating in an entirely new language community, I dove right in ― and immediately slammed into the hard concrete of Optionals. I was suddenly confronted with implications of type safety I had never thought about before. What should happen if you try to access a value that does not exist? Should the compiler give you an error? A warning? Should your program compile but throw an exception at runtime? Is it the responsibility of the programmer to handle errors, or of the compiler to refuse to emit unsafe code? One question that had not occurred to me was whether the designers of a language should do something syntactically to prevent errors. In Swift, optionals were the proposed solution to the conundrum of handling values that might not exist. When accessing a value from a dictionary, for example, Swift does not return the value for a given key but an optional value that encodes, as part of the value’s type, the possibility that the value does not actually exist. To access the raw value, you have to either forcibly “unwrap” it or use an if let expression to test for the possibility of nil:
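A minimal sketch of both approaches (the dictionary and its keys are hypothetical):

```swift
let ages = ["Alice": 30]

// Force unwrapping: crashes at runtime if the key is absent.
let aliceAge = ages["Alice"]!

// if let: the value is unwrapped only if it actually exists.
if let bobAge = ages["Bob"] {
    print("Bob is \(bobAge)")
} else {
    print("No age recorded for Bob")
}
```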
Words can hardly express, for the newcomer to Swift, just how annoying this was. In earlier versions of the language, you could rapidly end up with if let “pyramids of doom” if you had multiple optionals to unwrap. This problem was mitigated somewhat in Swift 1.2 and finally addressed head-on with the addition of the guard statement in Swift 2.0. But by that point, I had already moved away from Swift. Wondering where concepts like optionals came from, I soon discovered something called “functional programming” and a language called “Haskell” that everybody said was “hard.” Since I am always interested in getting closer to the source of ideas, and rather stubborn about learning things people claim are difficult, I thought it could only enhance my understanding of Swift if I learned Haskell, too. What I did not expect was to be drawn into the FP world so completely that I lost interest in Swift entirely.
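To make the contrast concrete, here is a sketch (the function and its inputs are invented for illustration):

```swift
// The nested style that produced "pyramids of doom":
func fullName(first: String?, last: String?) -> String? {
    if let first = first {
        if let last = last {
            return "\(first) \(last)"
        }
    }
    return nil
}

// With guard (Swift 2+), failure cases exit early and the
// happy path stays unindented:
func fullNameGuarded(first: String?, last: String?) -> String? {
    guard let first = first, let last = last else { return nil }
    return "\(first) \(last)"
}
```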
It’s not that Swift isn’t an interesting language. I definitely prefer it to Objective-C, which I never managed to learn in depth (I blame my early exposure to C++). It’s just that after encountering Haskell and functional programming in its natural habitat, I found Swift to be much less exciting than I had thought it would, could, or should be. The way I see it, Swift is trying too hard to have it both ways: it combines the type safety of a language like Haskell with the syntactic pizzazz of dynamic scripting languages such as Python and Ruby. This latter quality, I believe, is what leads people to refer, incongruously, to certain programming languages as “fun.” As far as language evolution is concerned, Swift is also just another iteration of the ALGOL/C family. Will it mimic the tendency of C++ to pack in as many features as possible, whether or not they contradict one another? Sure, it supports Unicode, so you can assign values to unicorns and piles of poop, but it does not otherwise seem like a significant advancement to me.
Learning Haskell, on the other hand, was like entering a portal to another universe, one with entirely different laws of physics. Suddenly, I couldn’t reassign variables (what?); or create objects to represent my data (What?); or even perform basic I/O operations, like printing to the screen, in an obvious and straightforward manner (WHAT?). What I could do, however, was write the most elegant, concise, and correct code of my life. Sure, it all looks like algebraic alphabet soup at first. But once you get past the general lack of curly braces and parentheses, once you learn to think in terms of “why” instead of “how” to do things, once you accept the infelicity of languages that model the operations of the machine instead of the operations of problem solving, you begin to embrace the virtues of things like functional purity, composition, and referential transparency and cease to fear the abstractions represented by abstruse terms like functor and monad.
This function works by taking its parameters ― a function followed by all the arguments that function expects ― and recursively applying the function to each argument until the function is either fully applied or it runs out of arguments. In the former case, partial simply calls the function and returns whatever value the fully applied function evaluates to. In the latter case, partial returns a new function partially applied to the given arguments and ready to accept further arguments until it is fully applied.
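The description above can be sketched in JavaScript along these lines (the name partial follows the text; the implementation details are my assumptions):

```javascript
// A curry-style helper: if enough arguments have been supplied,
// call the function; otherwise return a new function that collects
// further arguments until the original is fully applied.
function partial(fn, ...args) {
  if (args.length >= fn.length) {
    return fn(...args);
  }
  return (...rest) => partial(fn, ...args, ...rest);
}

const add3 = (a, b, c) => a + b + c;
partial(add3, 1)(2)(3); // 6
partial(add3, 1, 2, 3); // 6
```

Note that fn.length reports the function’s declared arity, which is what makes the “fully applied” test possible without any type information.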
The $ function works like the Haskell composition operator (.):
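A minimal sketch of such a composition helper (the implementation is assumed):

```javascript
// Right-to-left composition in the spirit of Haskell's (.):
// $(f, g)(x) evaluates to f(g(x)).
const $ = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

const increment = n => n + 1;
const double = n => n * 2;
$(double, increment)(3); // double(increment(3)) === 8
```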
The first line of each function is called a “function signature.” The signature expresses the types of the function’s parameters and return value. Each arrow denotes a function from the type on its left to the type on its right. For the sake of simplicity, you can regard the final type as the actual return type of the function and the others as argument types. For example, && takes two arguments, a boolean and another boolean, and returns a boolean. That part should be straightforward. What is actually happening, however, is that && takes a single boolean argument and returns a new function that is partially applied to that argument. This new function also takes a single boolean argument. When it is applied to that argument, the function as a whole is fully applied, and the result reduces to a boolean value, True or False. The body of the function performs pattern matching to determine this final value. If the first argument is True, the final value will be the same as the value of x. If the first argument is False, however, then it doesn’t matter what the second argument is, because the function will always evaluate to False in that case. The underscore indicates that the second value does not matter. The patterns in the other functions ought to be self-explanatory along these lines. No need for extra syntax like if let expressions and guard keywords here!
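For reference, this is essentially how the Prelude defines (&&):

```haskell
(&&) :: Bool -> Bool -> Bool
True  && x = x
False && _ = False
```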
The important parts of these functions, two lines in each one, are identical to their Haskell counterparts. The bulk of them, however, is given over to type checking (which I arguably could have left out of these examples) and the affordance for partial application. In the future, I may want to replace the rather clunky type checking with something invisible, built using the new Proxy and Symbol APIs. I may even be able to do the same for partial application, trapping function calls with Proxy and doing all the Haskell-y stuff behind the scenes. In the meantime, I think it’s salutary to see both how these things can be implemented with less complicated code and the challenges of adapting a strongly typed paradigm to a weakly typed language.
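As a rough sketch of the shape being described (the name and, the runtime type check, and the arity test are all assumptions, not the author’s exact code):

```javascript
// A Haskell-style 'and' in JavaScript: runtime type checks stand in
// for a static type system, and the arity test affords partial
// application.
function and(...args) {
  args.forEach(a => {
    if (typeof a !== "boolean") {
      throw new TypeError("and expects boolean arguments");
    }
  });
  if (args.length < 2) {
    return (...rest) => and(...args, ...rest);
  }
  const [x, y] = args;
  // The two lines that mirror the Haskell patterns:
  if (x === true) return y;
  return false;
}

and(true, false); // false
and(true)(true);  // true (partially applied, then completed)
```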
For my final two examples, I will show off partial application and function composition together. Haskell defines two functions, even and odd, in terms of one another:
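In the Prelude, even is defined arithmetically and odd is defined in terms of even by composing it with not:

```haskell
even :: Integral a => a -> Bool
even n = n `rem` 2 == 0

odd :: Integral a => a -> Bool
odd = not . even
```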