Top Java Questions


Here is a piece of C++ code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x:

#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
  • Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
  • With the sorted data, the code runs in 1.93 seconds.

Initially I thought this might be just a language or compiler anomaly. So I tried it in Java:

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

with a similar but less extreme result.


My first thought was that sorting brings the data into cache, but my next thought was how silly that is because the array was just generated.

What is going on? Why is a sorted array faster than an unsorted array? The code is summing up some independent terms; the order should not matter.

Answered By: Mysticial ( 6432)

You are the victim of branch prediction fail.


What is Branch Prediction?

Consider a railroad junction:

Image by Mecanismo, from Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Entroncamento_do_Transpraia.JPG

Now for the sake of argument, suppose this is back in the 1800s - before long distance or radio communication.

You are the operator of a junction and you hear a train coming. You have no idea which way it will go. You stop the train to ask the captain which direction he wants. And then you set the switch appropriately.

Trains are heavy and have a lot of momentum. So they take forever to start up and slow down.

Is there a better way? You guess which direction the train will go!

  • If you guessed right, it continues on.
  • If you guessed wrong, the captain will stop, back up, and yell at you to flip the switch. Then it can restart down the other path.

If you guess right every time, the train will never have to stop.
If you guess wrong too often, the train will spend a lot of time stopping, backing up, and restarting.


Consider an if-statement: at the processor level, it is a branch instruction.

You are a processor and you see a branch. You have no idea which way it will go. What do you do? You halt execution and wait until the previous instructions are complete. Then you continue down the correct path.

Modern processors are complicated and have long pipelines. So they take forever to "warm up" and "slow down".

Is there a better way? You guess which direction the branch will go!

  • If you guessed right, you continue executing.
  • If you guessed wrong, you need to flush the pipeline and roll back to the branch. Then you can restart down the other path.

If you guess right every time, the execution will never have to stop.
If you guess wrong too often, you spend a lot of time stalling, rolling back, and restarting.


This is branch prediction. I admit it's not the best analogy since the train could just signal the direction with a flag. But in computers, the processor doesn't know which direction a branch will go until the last moment.

So how would you strategically guess to minimize the number of times that the train must back up and go down the other path? You look at the past history! If the train goes left 99% of the time, then you guess left. If it alternates, then you alternate your guesses. If it goes one way every third time, you guess the same way...

In other words, you try to identify a pattern and follow it. This is more or less how branch predictors work.

Most applications have well-behaved branches. So modern branch predictors will typically achieve >90% hit rates. But when faced with unpredictable branches with no recognizable patterns, branch predictors are virtually useless.

Further Reading: http://en.wikipedia.org/wiki/Branch_predictor


As hinted above, the culprit is this if-statement:

if (data[c] >= 128)
    sum += data[c];

Notice that the data is evenly distributed between 0 and 255. When the data is sorted, roughly the first half of the iterations will not enter the if-statement. After that, they will all enter the if-statement.

This is very friendly to the branch predictor since the branch consecutively goes the same direction many times. Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction.

Quick visualization:

T = branch taken
N = branch not taken

data[] = 0, 1, 2, 3, 4, ... 126, 127, 128, 129, 130, ... 250, 251, 252, ...
branch = N  N  N  N  N  ...   N    N    T    T    T  ...   T    T    T  ...

       = NNNNNNNNNNNN ... NNNNNNNTTTTTTTTT ... TTTTTTTTTT  (easy to predict)

However, when the data is completely random, the branch predictor is rendered useless because it can't predict random data. Thus there will probably be around 50% misprediction (no better than random guessing).

data[] = 226, 185, 125, 158, 198, 144, 217, 79, 202, 118,  14, 150, 177, 182, 133, ...
branch =   T,   T,   N,   T,   T,   T,   T,  N,   T,   N,   N,   T,   T,   T,   N  ...

       = TTNTTTTNTNNTTTN ...   (completely random - hard to predict)
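
To make the saturating-counter remark above concrete, here is a sketch (in Java, to match the question's second listing) that simulates a 2-bit saturating counter over both branch sequences. Real CPU predictors are far more sophisticated; this only illustrates why the sorted sequence is nearly free of mispredictions while the random one misses about half the time.

import java.util.Arrays;
import java.util.Random;

public class PredictorDemo
{
    // A 2-bit saturating counter: states 0-1 predict "not taken",
    // states 2-3 predict "taken".
    static double missRate(int[] data)
    {
        int state = 2;   // start weakly predicting "taken"
        int misses = 0;
        for (int v : data)
        {
            boolean taken = v >= 128;        // the branch from the hot loop
            boolean predicted = state >= 2;
            if (predicted != taken)
                misses++;
            // Nudge the counter toward the observed outcome, saturating at 0 and 3.
            state = taken ? Math.min(3, state + 1) : Math.max(0, state - 1);
        }
        return (double) misses / data.length;
    }

    public static void main(String[] args)
    {
        Random rnd = new Random(0);
        int[] data = new int[32768];
        for (int i = 0; i < data.length; i++)
            data[i] = rnd.nextInt(256);

        System.out.printf("random miss rate: %.3f%n", missRate(data));  // ~0.5
        Arrays.sort(data);
        System.out.printf("sorted miss rate: %.3f%n", missRate(data));  // ~0.0
    }
}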

So what can be done?

If the compiler isn't able to optimize the branch into a conditional move, you can try some hacks if you are willing to sacrifice readability for performance.

Replace:

if (data[c] >= 128)
    sum += data[c];

with:

int t = (data[c] - 128) >> 31;
sum += ~t & data[c];

This eliminates the branch and replaces it with some bitwise operations.

(Note that this hack is not strictly equivalent to the original if-statement. But in this case, it's valid for all the input values of data[].)
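
The same trick carries over to Java unchanged, since >> in Java is defined as the arithmetic (sign-extending) shift:

// t is -1 (all bits set) when data[c] < 128, otherwise 0
int t = (data[c] - 128) >> 31;
sum += ~t & data[c];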

Benchmarks: Core i7 920 @ 3.5 GHz

C++ - Visual Studio 2010 - x64 Release

//  Branch - Random
seconds = 11.777

//  Branch - Sorted
seconds = 2.352

//  Branchless - Random
seconds = 2.564

//  Branchless - Sorted
seconds = 2.587

Java - Netbeans 7.1.1 JDK 7 - x64

//  Branch - Random
seconds = 10.93293813

//  Branch - Sorted
seconds = 5.643797077

//  Branchless - Random
seconds = 3.113581453

//  Branchless - Sorted
seconds = 3.186068823

Observations:

  • With the Branch: There is a huge difference between the sorted and unsorted data.
  • With the Hack: There is no difference between sorted and unsorted data.
  • In the C++ case, the hack is actually a tad slower than with the branch when the data is sorted.

A general rule of thumb is to avoid data-dependent branching in critical loops (such as in this example).


Update:

  • GCC 4.6.1 with -O3 or -ftree-vectorize on x64 is able to generate a conditional move. So there is no difference between the sorted and unsorted data - both are fast.

  • VC++ 2010 is unable to generate conditional moves for this branch even under /Ox.

  • Intel Compiler 11 does something miraculous. It interchanges the two loops, thereby hoisting the unpredictable branch to the outer loop. So not only is it immune to the mispredictions, it is also twice as fast as whatever VC++ and GCC can generate! In other words, ICC took advantage of the test loop to defeat the benchmark...

  • If you give the Intel Compiler the branchless code, it just out-right vectorizes it... and is just as fast as with the branch (with the loop interchange).

This goes to show that even mature modern compilers can vary wildly in their ability to optimize code...

1394
Freewind

If I run the following program, which parses two date strings referencing times one second apart and compares them:

public static void main(String[] args) throws ParseException {
    SimpleDateFormat sf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");  
    String str3 = "1927-12-31 23:54:07";  
    String str4 = "1927-12-31 23:54:08";  
    Date sDt3 = sf.parse(str3);  
    Date sDt4 = sf.parse(str4);  
    long ld3 = sDt3.getTime() /1000;  
    long ld4 = sDt4.getTime() /1000; 
    System.out.println(ld3);  
    System.out.println(ld4);  
    System.out.println(ld4-ld3);
}

The output is:

-1325491905
-1325491552
353

Why is ld4-ld3 not 1 (as I would expect from the one-second difference in the times), but 353?

If I change the dates to times one second later:

String str3 = "1927-12-31 23:54:08";  
String str4 = "1927-12-31 23:54:09";  

Then ld4-ld3 will be 1.


UPDATE

Java version:

java version "1.6.0_22"
Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
Dynamic Code Evolution Client VM (build 0.2-b02-internal, 19.0-b04-internal, mixed mode)

Timezone(TimeZone.getDefault()):

sun.util.calendar.ZoneInfo[id="Asia/Shanghai",
offset=28800000,dstSavings=0,
useDaylight=false,
transitions=19,
lastRule=null]

Locale(Locale.getDefault()): zh_CN
Answered By: Jon Skeet ( 3070)

It's a time zone change on December 31st in Shanghai.

See this page for details of 1927 in Shanghai. Basically at midnight at the end of 1927, the clocks went back 5 minutes and 52 seconds. So "1927-12-31 23:54:08" actually happened twice, and it looks like Java is parsing it as the later possible instant for that local date/time - hence the difference.

Just another episode in the often weird and wonderful world of time zones.
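
One way to confirm that the gap is a zone transition rather than a parsing bug (a quick sketch, not part of the original answer): parse the same strings in UTC, which has no historical transitions, and the difference comes out as exactly one second.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class UtcCheck {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat sf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        sf.setTimeZone(TimeZone.getTimeZone("UTC"));  // no local-time transitions
        long ld3 = sf.parse("1927-12-31 23:54:07").getTime() / 1000;
        long ld4 = sf.parse("1927-12-31 23:54:08").getTime() / 1000;
        System.out.println(ld4 - ld3);  // 1
    }
}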

EDIT: Stop press! History changes...

The original question would no longer demonstrate quite the same behaviour, if rebuilt with version 2013a of TZDB. In 2013a, the result would be 358 seconds, with a transition time of 23:54:03 instead of 23:54:08.

I only noticed this because I'm collecting questions like this in Noda Time, in the form of unit tests... The test has now been changed, but it just goes to show - not even historical data is safe.

I work with Java all day long. The idiom (code snippet) I use most often while programming in Java is testing object != null before I use an object. This is to avoid a NullPointerException. I find the code very ugly, and it becomes unreadable.

Is there a good alternative to avoid this code snippet?

Update: Pan, I was not clear in my question. I want to address the necessity of testing every object before you access one of its fields or methods. For example:

...
if (someobject != null)
{
    someobject.doCalc();
}
...

in this case I want to avoid a NullPointerException, and I don't know for certain whether the object is null or not. So my code gets splattered with these tests.

Nevertheless thanks a lot for your answers, I got a bunch of new insight.

Answered By: cletus ( 600)

This to me sounds like a reasonably common problem that junior to intermediate developers tend to face at some point: they either don't know or don't trust the contracts they are participating in and defensively overcheck for nulls. Additionally, when writing their own code, they tend to rely on returning nulls to indicate something thus requiring the caller to check for nulls.

To put this another way, there are two instances where null checking comes up:

  1. Where null is a valid response in terms of the contract; and

  2. Where it isn't a valid response.

(2) is easy. Either use assert statements (assertions) or allow failure (for example, NullPointerException). Assertions are a highly-underused Java feature that was added in 1.4. The syntax is:

assert <condition>

or

assert <condition> : <object>

where <object>'s toString() output will be included in the error.

An assert statement throws an Error (AssertionError) if the condition is not true. By default, Java ignores assertions. You can enable assertions by passing the option -ea to the JVM. You can enable and disable assertions for individual classes and packages. This means that you can validate code with the assertions while developing and testing, and disable them in a production environment, although my testing has shown next to no performance impact from assertions.

Not using assertions in this case is OK because the code will just fail, which is what will happen if you use assertions. The only difference is that with assertions it might happen sooner, in a more-meaningful way and possibly with extra information, which may help you to figure out why it happened if you weren't expecting it.

(1) is a little harder. If you have no control over the code you're calling then you're stuck. If null is a valid response, you have to check for it.

If it's code that you do control, however (and this is often the case), then it's a different story. Avoid using nulls as a response. With methods that return collections, it's easy: return empty collections (or arrays) instead of nulls pretty much all the time.
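
For instance (a minimal sketch; findMatches and doSearch are illustrative names, not from the original answer):

import java.util.Collections;
import java.util.List;

// Callers can iterate over the result unconditionally; no null check needed.
public List<String> findMatches(String query) {
    if (query == null || query.isEmpty()) {
        return Collections.emptyList();  // never null
    }
    return doSearch(query);  // hypothetical method performing the real search
}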

With non-collections it might be harder. Consider this as an example: if you have these interfaces:

public interface Action {
  void doSomething();
}

public interface Parser {
  Action findAction(String userInput);
}

where Parser takes raw user input and finds something to do, perhaps if you're implementing a command line interface for something. Now you might make the contract that it returns null if there's no appropriate action. That leads to the null checking you're talking about.

An alternative solution is to never return null and instead do something like this:

public class MyParser implements Parser {
  private static Action DO_NOTHING = new Action() {
    public void doSomething() { /* do nothing */ }
  };

  public Action findAction(String userInput) {
    Action action = lookup(userInput);  // hypothetical lookup; may yield null
    if (action == null) {  // we can't find any actions
      return DO_NOTHING;
    }
    return action;
  }
}

Compare:

Parser parser = ParserFactory.getParser();
if (parser == null) {
  // now what?
  // this would be an example of where null isn't (or shouldn't be) a valid response
}
Action action = parser.findAction(someInput);
if (action == null) {
  // do nothing
} else {
  action.doSomething();
}

to

ParserFactory.getParser().findAction(someInput).doSomething();

which is a much better design because it leads to more concise code.

702
Honza Brabec

Until today I thought that for example:

i += j;

is just a shortcut for:

i = i + j;

But what if we try this:

int i = 5;
long j = 8;

Then i = i + j; will not compile but i += j; will compile fine.

Does it mean that in fact i += j; is a shortcut for something like this i = (type of i) (i + j)?

I've tried googling for it but couldn't find anything relevant.

Answered By: Lukas Eder ( 504)

As always with these questions, the JLS holds the answer. In this case §15.26.2 Compound Assignment Operators. An extract:

A compound assignment expression of the form E1 op= E2 is equivalent to E1 = (T)((E1) op (E2)), where T is the type of E1, except that E1 is evaluated only once.

And an example:

For example, the following code is correct:

short x = 3;
x += 4.6;

and results in x having the value 7 because it is equivalent to:

short x = 3;
x = (short)(x + 4.6);

In other words, your assumption is correct.
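
Applied to the example from the question, the implicit cast looks like this:

int i = 5;
long j = 8;

i += j;                 // compiles: equivalent to i = (int)(i + j);
// i = i + j;           // would not compile: i + j is of type long
System.out.println(i);  // 13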

620
Mat Banik

I just had an interview, and I was asked to create a memory leak with Java. Needless to say, I felt pretty dumb having no clue how to even start creating one.

What would an example be?

Answered By: Daniel Pryden ( 478)

Here's a good way to create a true memory leak (objects inaccessible by running code but still stored in memory) in pure Java:

  1. The application creates a long-running thread (or use a thread pool to leak even faster).
  2. The thread loads a class via an (optionally custom) ClassLoader.
  3. The class allocates a large chunk of memory (e.g. new byte[1000000]), stores a strong reference to it in a static field, and then stores a reference to itself in a ThreadLocal. Allocating the extra memory is optional (leaking the Class instance is enough), but it will make the leak work that much faster.
  4. The thread clears all references to the custom class or the ClassLoader it was loaded from.
  5. Repeat.

This works because the ThreadLocal keeps a reference to the object, which keeps a reference to its Class, which in turn keeps a reference to its ClassLoader. The ClassLoader, in turn, keeps a reference to all the Classes it has loaded. It gets worse because in many JVM implementations Classes and ClassLoaders are allocated straight into permgen and are never GC'd at all.
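
A condensed sketch of steps 1-4 (the class and member names are illustrative, not from the original answer; a real reproduction also needs the custom ClassLoader and the long-lived thread described above):

// Illustrative only: in the full leak, this class is loaded via a custom
// ClassLoader, pin() runs on a long-lived (e.g. pool) thread, and all direct
// references to the class and its loader are then dropped.
public class LeakyClass {
    // Step 3: a large chunk of memory pinned by a static field...
    private static final byte[] CHUNK = new byte[1000000];

    // ...plus a self-reference in a ThreadLocal. The thread keeps the value
    // alive, the value keeps its Class alive, the Class keeps its ClassLoader
    // alive, and the ClassLoader keeps every class it loaded alive.
    private static final ThreadLocal<LeakyClass> SELF = new ThreadLocal<LeakyClass>();

    public static void pin() {
        SELF.set(new LeakyClass());
    }
}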

A variation on this pattern is why application containers (like Tomcat) can leak memory like a sieve if you frequently redeploy applications that happen to use ThreadLocals in any way. (Since the application container uses Threads as described, and each time you redeploy the application a new ClassLoader is used.)

614
dmanxiii

What is the difference between a HashMap and a Hashtable in Java?

Which is more efficient for non-threaded applications?

Answered By: Josh Brown ( 645)

There are several differences between HashMap and Hashtable in Java:

  1. Hashtable is synchronized, whereas HashMap is not. This makes HashMap better for non-threaded applications, as unsynchronized Objects typically perform better than synchronized ones.
  2. Hashtable does not allow null keys or values. HashMap allows one null key and any number of null values.
  3. One of HashMap's subclasses is LinkedHashMap, so in the event that you'd want predictable iteration order (which is insertion order by default), you could easily swap out the HashMap for a LinkedHashMap. This wouldn't be as easy if you were using Hashtable.

Since synchronization is not an issue for you, I'd recommend HashMap.
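
A quick illustration of difference 2 (a sketch):

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

Map<String, String> hashMap = new HashMap<String, String>();
hashMap.put(null, "value");  // fine: one null key is allowed
hashMap.put("key", null);    // fine: null values are allowed

Map<String, String> hashtable = new Hashtable<String, String>();
hashtable.put("key", null);  // throws NullPointerException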

I always thought Java was pass-by-reference; however I've seen a couple of blog posts (e.g. this blog) that claim it's not. I don't think I understand the distinction they're making.

Could someone explain it please?

Answered By: erlando ( 604)

Java is always pass-by-value. The difficult thing to understand is that Java passes object references, and those references are themselves passed by value.

It goes like this:

public void foo(Dog d) {
  d.name.equals("Max"); // true
  d = new Dog("Fifi");
  d.name.equals("Fifi"); // true
}

Dog aDog = new Dog("Max");
foo(aDog);
aDog.name.equals("Max"); // true

In this example aDog.name will still be "Max". "d" is not overwritten in the function as the object reference is passed by value.

Likewise:

public void foo(Dog d) {
  d.name.equals("Max"); // true
  d.setName("Fifi");
}

Dog aDog = new Dog("Max");
foo(aDog);
aDog.name.equals("Fifi"); // true
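
For completeness, a minimal Dog class that makes the snippets above compile (a sketch; the original answer assumes such a class exists):

class Dog {
    String name;

    Dog(String name) { this.name = name; }

    void setName(String name) { this.name = name; }
}
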
605
Johnny Maelstrom

If you have a java.io.InputStream object, how should you process that object and produce a String?


Suppose I have an InputStream that contains text data, and I want to convert this to a String (for example, so I can write the contents of the stream to a log file).

What is the easiest way to take the InputStream and convert it to a String?

public String convertStreamToString(InputStream is) { 
    // ???
}
Answered By: Pavel Repin ( 614)

Here's a way using only the standard Java library (note that the stream is not closed; YMMV).

public static String convertStreamToString(java.io.InputStream is) {
    java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A");
    return s.hasNext() ? s.next() : "";
}

I learned this trick from the "Stupid Scanner tricks" article. It works because Scanner iterates over tokens in the stream, and here we separate tokens using the "beginning of the input" boundary (\A), giving us exactly one token for the entire contents of the stream.

Note, if you need to be specific about the input stream's encoding, you can pass a second argument to the Scanner constructor indicating which charset to use (e.g. "UTF-8").
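
For example:

java.util.Scanner s = new java.util.Scanner(is, "UTF-8").useDelimiter("\\A");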

Hat tip goes also to Jacob, who once pointed me to the said article.

EDITED: Thanks to a suggestion from Patrick, made the function more robust when handling an empty input stream. One more edit: nixed try/catch, Patrick's way is more laconic.

602
Ahamed

In Swing, the password field has a getPassword() (returns char[]) method instead of the usual getText() (returns String) method. Similarly, I have come across a suggestion not to use Strings to handle passwords. Why does String pose a threat to security when it comes to passwords?

It feels inconvenient to use char[].

Answered By: Jon Skeet ( 836)

Strings are immutable. That means once you've created the string, if another process can dump memory, there's no way (aside from reflection) you can get rid of the data before GC kicks in.

With an array, you can explicitly wipe the data after you're done with it: you can overwrite the array with anything you like, and the password won't be present anywhere in the system, even before garbage collection.

So yes, this is a security concern - but even using char[] only reduces the window of opportunity for an attacker, and it's only for this specific type of attack.

EDIT: As noted in comments, it's possible that arrays being moved by the garbage collector will leave stray copies of the data in memory. I believe this is implementation-specific - the GC may clear all memory as it goes, to avoid this sort of thing. Even if it does, there's still the time during which the char[] contains the actual characters as an attack window.
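
A sketch of the wiping idiom described above (passwordField stands for a javax.swing.JPasswordField; authenticate is a hypothetical consumer of the password):

char[] password = passwordField.getPassword();
try {
    authenticate(password);                 // hypothetical use of the password
} finally {
    java.util.Arrays.fill(password, '\0');  // explicitly wipe the characters
}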

568
kunj2aan

I am learning GoF Java Design Patterns and I want to see some real life examples of them. Can you guys point to some good usage of these Design Patterns, preferably in Java's core libraries?

Thank you!

Answered By: BalusC ( 1309)

You can find an overview of a lot of design patterns in Wikipedia. It also mentions which patterns are covered by the GoF. I'll sum them up here and try to assign as many pattern implementations as possible, found in both the Java SE and Java EE APIs.


Creational patterns

Abstract factory (recognizable by creational methods returning the factory itself, which in turn can be used to create another abstract/interface type)

Builder (recognizable by creational methods returning the instance itself)

Factory method (recognizable by creational methods returning an implementation of an abstract/interface type)

Prototype (recognizable by creational methods returning a different instance of itself with the same properties)

Singleton (recognizable by creational methods returning the same instance (usually of itself) every time)


Structural patterns

Adapter (recognizable by creational methods taking an instance of a different abstract/interface type and returning an implementation of its own/another abstract/interface type which decorates/overrides the given instance)

Bridge (recognizable by creational methods taking an instance of a different abstract/interface type and returning an implementation of its own abstract/interface type which delegates/uses the given instance)

  • None comes to mind yet. A fictive example would be new LinkedHashMap(LinkedHashSet<K>, List<V>) which returns an unmodifiable linked map which doesn't clone the items, but uses them. The java.util.Collections#newSetFromMap() and singletonXXX() methods, however, come close.

Composite (recognizable by behavioral methods taking an instance of the same abstract/interface type into a tree structure)

Decorator (recognizable by creational methods taking an instance of the same abstract/interface type and adding additional behaviour)
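
A familiar java.io instance of this rule (a sketch; the file name is illustrative):

// BufferedInputStream takes an instance of the same abstract type
// (java.io.InputStream) and adds buffering behaviour on top of it.
InputStream in = new BufferedInputStream(new FileInputStream("data.bin"));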

Facade (recognizable by behavioral methods which internally use instances of different independent abstract/interface types)

Flyweight (recognizable by creational methods returning a cached instance, a bit like the "multiton" idea)

Proxy (recognizable by creational methods which return an implementation of the given abstract/interface type which in turn delegates/uses a different implementation of that type)

The Wikipedia example is IMHO a bit poor; lazy loading has nothing to do with the proxy pattern at all.


Behavioral patterns

Chain of responsibility (recognizable by behavioral methods which (indirectly) invoke the same method in another implementation of the same abstract/interface type in a queue)

Command (recognizable by behavioral methods in an abstract/interface type which invoke a method in an implementation of a different abstract/interface type which has been encapsulated by the command implementation during its creation)

Interpreter (recognizable by behavioral methods returning a structurally different instance/type of the given instance/type; note that parsing/formatting is not part of the pattern; determining the pattern and how to apply it is)

Iterator (recognizable by behavioral methods sequentially returning instances of a different type from a queue)

Mediator (recognizable by behavioral methods taking an instance of a different abstract/interface type (usually using the command pattern) which delegates/uses the given instance)

Memento (recognizable by behavioral methods which internally change the state of the whole instance)

Observer (or Publish/Subscribe) (recognizable by behavioral methods which invoke a method on an instance of another abstract/interface type, depending on the instance's own state)

State (recognizable by behavioral methods which change their behaviour depending on the instance's state, which can be controlled externally)

Strategy (recognizable by behavioral methods in an abstract/interface type which invoke a method in an implementation of a different abstract/interface type which has been passed in as a method argument to the strategy implementation)

Template method (recognizable by behavioral methods which already have a "default" behaviour defined by an abstract type)

Visitor (recognizable by two different abstract/interface types which each define methods that take the other abstract/interface type; the one calls the method of the other, and the other executes the desired strategy on it)