Top Terminology Questions


I always thought Java was pass-by-reference; however, I've seen a couple of blog posts (e.g. this blog) that claim it's not. I don't think I understand the distinction they're making.

Could someone explain it please?

Answered By: erlando ( 604)

Java is always pass-by-value. The hard part to understand is that Java passes objects as references, and those references are themselves passed by value.

It goes like this:

public void foo(Dog d) {
  d.name.equals("Max"); // true
  d = new Dog("Fifi");
  d.name.equals("Fifi"); // true
}

Dog aDog = new Dog("Max");
foo(aDog);
aDog.name.equals("Max"); // true

In this example aDog.name will still be "Max" after the call. Reassigning d inside the function does not affect aDog, because the object reference was passed by value: d holds only a copy of that reference.

Likewise:

public void foo(Dog d) {
  d.name.equals("Max"); // true
  d.setName("Fifi");
}

Dog aDog = new Dog("Max");
foo(aDog);
aDog.name.equals("Fifi"); // true
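
In this second example the name does change: foo mutates the one Dog object that both d and aDog refer to, rather than reassigning d. To make both snippets runnable, here is the minimal Dog class they assume (a sketch; the original answer does not show it):

class Dog {
    String name;

    Dog(String name) {
        this.name = name;
    }

    // Mutates the name of this Dog; callers holding a reference to the
    // same object will see the change.
    void setName(String name) {
        this.name = name;
    }
}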

There have been several questions already posted with specific questions about dependency injection, such as when to use it and which frameworks exist for it. However, here's the newbie question:

What is dependency injection and when/why should or shouldn't it be used?

Edit: While external links for follow-up reading are always appreciated, I'd like to encourage people to write as complete an answer here as possible, so that SO itself can be a good source to learn from. I believe this is the intent of the site.

Answered By: wds ( 153)

Basically, instead of having your objects create a dependency themselves or ask a factory object to make one for them, you pass the needed dependencies into the constructor and make it somebody else's problem (an object further up the dependency graph, or a dependency injector that builds the dependency graph). A dependency, as I'm using the term here, is any other object the current object needs to hold a reference to.

One of the major advantages of dependency injection is that it can make testing much easier. Suppose you have an object which in its constructor does something like:

public SomeClass() {
    myObject = Factory.getObject();
}

This can be troublesome when all you want to do is run some unit tests on SomeClass, especially if myObject is something that does complex disk or network access. So now you're looking at mocking myObject but also somehow intercepting the factory call. Hard. Instead, pass the object in as an argument to the constructor. Now you've moved the problem elsewhere, but testing becomes a lot easier: just make a dummy myObject and pass that in. The constructor would now look a bit like:

public SomeClass(MyClass myObject) {
    this.myObject = myObject;
}
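
To see why this helps, here is a compressed sketch of how a test might now look (the names FakeMyClass and testSomething are made up for illustration, and it assumes MyClass can be subclassed):

// Hypothetical test double: stands in for the real MyClass, which might
// otherwise hit the disk or the network.
class FakeMyClass extends MyClass {
    // override the expensive methods with trivial in-memory versions here
}

// The test simply injects the fake through the constructor.
class SomeClassTest {
    void testSomething() {
        SomeClass underTest = new SomeClass(new FakeMyClass());
        // exercise underTest and assert on the results; no real I/O involved
    }
}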

Most people can probably work out the other problems that arise when not using dependency injection while testing (like classes that do too much work in their constructors, etc.). Most of this is stuff I picked up on the Google Testing Blog, to be perfectly honest...

I've read the Wikipedia article on reactive programming. I've also read the small article on functional reactive programming. The descriptions are quite abstract.

What does functional reactive programming (FRP) mean in practice? What does reactive programming (as opposed to non-reactive programming?) consist of? My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.

Answered By: Conal ( 247)

If you want to get a feel for FRP, you could start with the old Fran tutorial from 1998, which has animated illustrations. For papers, start with Functional Reactive Animation and then follow up on links on the publications link on my home page and the FRP link on the Haskell wiki.

Personally, I like to think about what FRP means, rather than how it might be implemented. So I don't describe FRP in representation/implementation terms as Thomas K does in another answer (graphs, nodes, edges, firing, execution, etc). There are many possible implementation styles, but no implementation says what FRP is.

I do resonate with Laurence G's simple description that FRP is about "datatypes that represent a value 'over time' ". Conventional imperative programming captures these dynamic values only indirectly, through state and mutations. The complete history (past, present, future) has no first class representation. Moreover, only discretely evolving values can be (indirectly) captured, since the imperative paradigm is temporally discrete. In contrast, FRP captures these evolving values directly and has no difficulty with continuously evolving values.

FRP is also unusual in that it is concurrent without running afoul of the theoretical & pragmatic rats' nest that plagues imperative concurrency. Semantically, FRP's concurrency is fine-grained, determinate, and continuous. (I'm talking about meaning, not implementation. An implementation may or may not involve concurrency or parallelism.) Semantic determinacy is very important for reasoning, both rigorous and informal. While concurrency adds enormous complexity to imperative programming, due to nondeterministic interleaving, it is effortless in FRP.

So, what is FRP? You could have invented it yourself. Start with these ideas:

  • Dynamic/evolving values (i.e., values "over time") are first class values in themselves. You can define them and combine them, pass them into & out of functions. I called these things "behaviors".

  • Behaviors are built up out of a few primitives, like constant (static) behaviors and time (like a clock), and then with sequential and parallel combination. n behaviors are combined by applying an n-ary function (on static values), "point-wise", i.e., continuously over time.

  • To account for discrete phenomena, have another type (family) of "events", each of which has a stream (finite or infinite) of occurrences. Each occurrence has an associated time and value.

  • To come up with the compositional vocabulary out of which all behaviors and events can be built, play with some examples. Keep deconstructing into pieces that are more general/simple.

  • So that you know you're on solid ground, give the whole model a compositional foundation, using the technique of denotational semantics, which just means that (a) each type has a corresponding simple & precise mathematical type of "meanings", and (b) each primitive and operator has a simple & precise meaning as a function of the meanings of the constituents. Never, ever mix implementation considerations into your exploration process. If this description is gibberish to you, consult (a) Denotational design with type class morphisms, (b) Push-pull functional reactive programming (ignoring the implementation pieces in the latter), and (c) the Denotational Semantics Haskell wikibooks page. Beware that denotational semantics has two parts, from its two founders Christopher Strachey and Dana Scott: the easier & more useful Strachey part and the harder and less useful (for design) Scott part.

If you stick with these principles, I expect you'll get something more-or-less in the spirit of FRP.

Where did I get these principles? In software design, I always ask the same question: "what does it mean?". Denotational semantics gave me a precise framework for this question, and one that fits my aesthetics (unlike operational or axiomatic semantics, which leave me unsatisfied). So I asked myself: what is a behavior? I soon realized that the temporally discrete nature of imperative computation is an accommodation to a particular style of machine, rather than a natural description of behavior itself. The simplest precise description of behavior I can think of is simply "function of time", so that's my model. Delightfully, this model handles continuous, deterministic concurrency with ease and grace.
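
As a very rough illustration of that model in Java (hypothetical types, not from the answer, which is framed denotationally rather than in any particular language): a behavior is just a function of time, and an event is a stream of timestamped occurrences.

import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: a behavior is a value defined at every point in time.
interface Behavior<A> extends Function<Double, A> { }

// Hypothetical sketch: one occurrence of an event, i.e. a time paired with a value.
record Occurrence<A>(double time, A value) { }

class FrpSketch {
    // A constant behavior: the same value at every time.
    static <A> Behavior<A> constant(A a) { return t -> a; }

    // The clock: a behavior whose value at time t is t itself.
    static Behavior<Double> clock() { return t -> t; }

    // An event is a (possibly infinite) stream of occurrences;
    // a finite List stands in for that stream here.
    static List<Occurrence<String>> clicks =
            List.of(new Occurrence<>(0.5, "click"), new Occurrence<>(1.2, "click"));
}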

It's been quite a challenge to implement this model correctly and efficiently, but that's another story.

Having briefly looked at Haskell recently I wondered whether anybody could give a brief, succinct, practical explanation as to what a monad essentially is? I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail, so could somebody here help me?

Answered By: JacquesB ( 304)

First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder, which is a bit more descriptive of what they are actually useful for.

You ask for practical examples:

Example 1: List comprehension:

[x*2 | x<-[1..10], odd x]

This expression returns the doubles of all odd numbers in the range from 1 to 10. Very useful!

Example 2: Input/Output:

do
   putStrLn "What is your name?"
   name <- getLine
   putStrLn ("Welcome, " ++ name ++ "!")

Both examples use monads, a.k.a. computation builders. The common theme is that the monad chains operations in some specific, useful way. In the list comprehension, the operations are chained such that if an operation returns a list, then the following operations are performed on every item in the list. The IO monad, on the other hand, performs the operations sequentially, but passes a "hidden variable" along, which represents "the state of the world" and allows us to write IO code in a pure functional manner.

It turns out the pattern of chaining operations is quite useful, and is used for lots of different things in Haskell.

Another example is exceptions: using the Error monad, operations are chained such that they are performed sequentially, except if an error is thrown, in which case the rest of the chain is abandoned.

Both the list-comprehension syntax and the do-notation are syntactic sugar for chaining operations using the >>= operator. A monad is basically just a type that supports the >>= operator.
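
If you come from Java rather than Haskell, Optional.flatMap plays a roughly analogous chaining role for a Maybe-like monad: each step runs only if the previous one actually produced a value. (A loose analogy, not part of the original answer.)

import java.util.Optional;

class OptionalChain {
    // Parse a string into a number, or produce "no value" on failure.
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // flatMap chains the steps; if parse returns empty, the doubling is skipped.
        Optional<Integer> doubled = parse("21").flatMap(n -> Optional.of(n * 2));
        System.out.println(doubled); // prints: Optional[42]
    }
}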

Example 3: A parser

This is a very simple parser which parses either a quoted string or a number:

parseExpr = parseString <|> parseNumber

parseString = do
        char '"'
        x <- many (noneOf "\"")
        char '"'
        return (StringValue x)

parseNumber = do
    num <- many1 digit
    return (NumberValue (read num))

The operations char, digit, etc. are pretty simple; they either match or don't match. The magic is the monad, which manages the control flow: the operations are performed sequentially until a match fails, in which case the monad backtracks to the latest <|> and tries the next option. Again, a way of chaining operations with some additional, useful semantics.

Example 4: Asynchronous programming

The above examples are in Haskell, but it turns out F# also supports monads. This example is stolen from Don Syme:

let AsyncHttp(url:string) =
    async {  let req = WebRequest.Create(url)
             let! rsp = req.GetResponseAsync()
             use stream = rsp.GetResponseStream()
             use reader = new System.IO.StreamReader(stream)
             return reader.ReadToEnd() }

This method fetches a web page. The punch line is the use of GetResponseAsync: it actually waits for the response on a separate thread, while the main thread returns from the function. The last three lines are executed on the spawned thread when the response has been received.

In most other languages you would have to explicitly create a separate function for the lines that handle the response. The async monad is able to "split" the block on its own and postpone the execution of the latter half. (The async {} syntax indicates that the control flow in the block is defined by the async monad.)
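
For contrast, here is roughly what that explicit version looks like in Java using java.net.http (a hedged sketch, not from the original answer): the code that handles the response has to be written out as a separate callback.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

class AsyncHttpJava {
    static CompletableFuture<String> fetch(String url) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        // The continuation is an explicit callback: everything that should run
        // after the response arrives goes into the thenApply lambda.
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                     .thenApply(HttpResponse::body);
    }
}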

How they work

So how can a monad do all these fancy control-flow things? What actually happens in a do-block (or a computation expression, as they are called in F#) is that every operation (basically every line) is wrapped in a separate anonymous function. These functions are then combined using the bind operator (spelled >>= in Haskell). Since the bind operation combines functions, it can execute them as it sees fit: sequentially, multiple times, in reverse, discarding some, executing some on a separate thread when it feels like it, and so on.

As an example, this is the expanded version of the IO-code from example 2:

putStrLn "What is your name?"
>>= (\_ -> getLine)
>>= (\name -> putStrLn ("Welcome, " ++ name ++ "!"))

This is uglier, but it's also more obvious what is actually going on. The >>= operator is the magic ingredient: it takes a value (on the left side) and combines it with a function (on the right side) to produce a new value. This new value is then taken by the next >>= operator and again combined with a function to produce a new value. >>= can be viewed as a mini-evaluator.

Note that >>= is overloaded for different types, so every monad has its own implementation of >>=. (All the operations in the chain have to belong to the same monad, though, otherwise the >>= operator won't work.)

The simplest possible implementation of >>= just takes the value on the left, applies the function on the right to it, and returns the result; but as said before, what makes the whole pattern useful is when there is something extra going on in the monad's implementation of >>=.

There is some additional cleverness in how the values are passed from one operation to the next, but this requires a deeper explanation of the Haskell type system.

Summing up

In Haskell terms, a monad is a parameterized type which is an instance of the Monad type class, which defines >>= along with a few other operators. In layman's terms, a monad is just a type for which the >>= operation is defined.

In itself >>= is just a cumbersome way of chaining functions, but with the do-notation hiding the "plumbing", monadic operations turn out to be a very nice abstraction, useful in many places in the language, and useful for creating your own mini-languages within the language.

Why are monads hard?

For many Haskell learners, monads are an obstacle they hit like a brick wall. It's not that monads themselves are complex, but that the implementation relies on many other advanced Haskell features like parameterized types, type classes, and so on. The problem is that Haskell IO is based on monads, and IO is probably one of the first things you want to understand when learning a new language - after all, it's not much fun to create programs that don't produce any output. I have no immediate solution for this chicken-and-egg problem, except treating IO like "magic happens here" until you have enough experience with other parts of the language. Sorry.

Asked By: Brad Leach ( 230 )

I am crafting an application and cannot decide whether to use the terms Login/out or Logon/off. Is there a more correct option between these two? Or should I use something else entirely (like "Sign on/off")?

In terms of usability, as long as I am consistent it probably doesn't matter which terms I choose, but I did wonder about the origins of the terms - and whether one or another makes more grammatical sense. I also care deeply about the application I am creating, and want to take the time to investigate all aspects of its user experience.

Answered By: Adam Liss ( 235)

Since you're looking for correctness:

login, logout, logon, and logoff are all nouns:

"Please enter your login credentials."
"I see three logons but only two logoffs from this user."

The corresponding verbs are each two words:

"Please log in to see your reputation."
"You must log off and talk to a human."


Update: according to dictionary.com, the various definitions of login are all nouns and involve gaining access to a computer or computer service. Interestingly, logon redirects to login as an exact equivalent. Have the definitions evolved?

For a person without a comp-sci background, what is a lambda in the world of Computer Science?

Answered By: mk. ( 205)

Lambda comes from the Lambda Calculus and refers to anonymous functions in programming.

Why is this cool? It allows you to write quick, throwaway functions without naming them. It also provides a nice way to write closures. With that power you can do things like this.

Python

def adder(x):
    return lambda y: x + y
add5 = adder(5)
add5(1)
6

JavaScript

var adder = function (x) {
    return function (y) {
        return x + y;
    };
};
add5 = adder(5);
add5(1) == 6

Scheme

(define adder
    (lambda (x)
        (lambda (y)
           (+ x y))))
(define add5
    (adder 5))
(add5 1)
6

As you can see from the snippet of Python and JavaScript, the function adder takes in an argument x, and returns an anonymous function, or lambda, that takes another argument y. That anonymous function allows you to create functions from functions. This is a simple example, but it should convey the power lambdas and closures have.
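
The same pattern works in Java as well, since a lambda there also closes over the enclosing variable (a sketch, not part of the original answer):

import java.util.function.IntUnaryOperator;

class LambdaAdder {
    // adder returns an anonymous function (a lambda) that closes over x.
    static IntUnaryOperator adder(int x) {
        return y -> x + y;
    }

    public static void main(String[] args) {
        IntUnaryOperator add5 = adder(5);
        System.out.println(add5.applyAsInt(1)); // prints: 6
    }
}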

I am a long-time AppleScript user and new shell scripter who wants to learn a more general scripting language like JavaScript or Python for performance reasons.

I am having trouble getting my head around concepts like object orientation, classes and instantiation.

If someone could point me to a pithy explanation of methods vs. functions it might help me get over the "hump". The explanations I found using google are just barely over my head.

Thanks.

Answered By: Andrew Edgecombe ( 163)

A function is a piece of code that is called by name. It can be passed data to operate on (i.e., the parameters) and can optionally return data (the return value).

All data that is passed to a function is explicitly passed.

A method is a piece of code, also called by name, that is associated with an object. In most respects it is identical to a function, except for two key differences:

  1. It is implicitly passed the object for which it was called
  2. It is able to operate on data that is contained within the class (remembering that an object is an instance of a class - the class is the definition, the object is an instance of that data)

(this is a simplified explanation, ignoring issues of scope etc.)
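
To make the distinction concrete, here is a small Java sketch (hypothetical class and names, not part of the original answer): the function receives everything through its parameters, while the method is implicitly passed the object it was called on and can read that object's data.

class Circle {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    // A "function" in the sense above: all data arrives explicitly as parameters.
    static double area(double radius) {
        return Math.PI * radius * radius;
    }

    // A method: it is implicitly passed the Circle it was called on (this)
    // and operates on the data held by that instance.
    double area() {
        return Math.PI * this.radius * this.radius;
    }
}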