List of Tags

- javascript (210)
- c# (130)
- git (119)
- java (110)
- jquery (99)
- python (94)
- c++ (87)
- .net (85)
- android (71)
- html (58)
- c (51)
- iphone (50)
- css (48)
- version-control (44)
- objective-c (41)
- php (41)
- string (41)
- ios (32)
- performance (31)
- language-agnostic (30)
- sql (30)
- algorithm (29)
- database (27)
- vim (25)
- c++-faq (24)
- asp.net (23)
- ruby (23)
- arrays (23)
- eclipse (22)
- mysql (22)
- sql-server (22)
- xcode (20)
- svn (20)
- windows (19)
- json (17)
- ruby-on-rails (17)
- cocoa-touch (17)
- asp.net-mvc (17)
- security (16)
- url (16)
- bash (16)
- ajax (15)
- ide (14)
- operators (14)
- node.js (13)
- coding-style (13)
- unit-testing (13)
- http (13)
- command-line (13)
- enums (12)
- gui (12)
- functional-programming (12)
- regex (12)
- visual-studio (12)
- cocoa (12)
- shell (12)
- list (11)
- branch (11)
- datetime (11)
- hidden-features (11)
- tsql (11)
- unix (11)
- optimization (11)
- linq (10)
- design-patterns (10)
- xml (10)
- random (10)
- scala (10)
- merge (10)
- math (10)
- oop (10)
- editor (10)
- sorting (10)
- dom (9)
- rest (9)
- polls (9)
- sqlite (9)
- browser (9)
- html5 (9)
- function (8)
- dvcs (8)
- lambda (8)
- github (8)
- syntax (8)
- memory-management (8)
- image (8)
- xcode4 (8)
- books (8)
- linux (8)
- wpf (8)
- vi (8)
- multithreading (8)
- osx (8)
- collections (7)
- generics (7)
- constructor (7)
- git-branch (7)
- haskell (7)
- pointers (7)
- open-source (7)
- terminology (7)
- javascript-events (7)
- tips-and-tricks (7)
- cross-browser (6)
- mercurial (6)
- entity-framework (6)
- forms (6)
- database-design (6)
- django (6)
- console (6)
- casting (6)
- user-interface (6)
- ipad (6)
- iframe (6)
- r (6)
- .net-4.0 (6)
- file (6)
- design (6)
- search (6)
- language-features (6)
- date (6)
- vb.net (6)
- dictionary (6)
- email (5)
- git-commit (5)
- gitignore (5)
- layout (5)
- google-chrome (5)
- closures (5)
- private (5)
- reference (5)
- data-structures (5)
- google (5)
- theory (5)
- big-o (5)
- singleton (5)
- exception (5)
- passwords (5)
- interface (5)
- programming-languages (5)
- soap (5)
- serialization (5)
- resources (5)
- initialization (5)
- variables (5)
- asynchronous (5)
- grep (5)
- productivity (5)
- floating-point (5)
- join (5)

I am doing some numerical optimization on a scientific application. One thing I noticed is that GCC will optimize the call `pow(a,2)` by compiling it into `a*a`, but the call `pow(a,6)` is not optimized and will actually call the library function `pow`, which greatly slows down performance. (In contrast, the Intel C++ Compiler, executable `icc`, will eliminate the library call for `pow(a,6)`.)

What I am curious about is that when I replaced `pow(a,6)` with `a*a*a*a*a*a` using GCC 4.5.1 and the options `-O3 -lm -funroll-loops -msse4`, it uses 5 `mulsd` instructions:

```
movapd %xmm14, %xmm13
mulsd %xmm14, %xmm13
mulsd %xmm14, %xmm13
mulsd %xmm14, %xmm13
mulsd %xmm14, %xmm13
mulsd %xmm14, %xmm13
```

while if I write `(a*a*a)*(a*a*a)`, it will produce

```
movapd %xmm14, %xmm13
mulsd %xmm14, %xmm13
mulsd %xmm14, %xmm13
mulsd %xmm13, %xmm13
```

which reduces the number of multiply instructions to 3. `icc` has similar behavior.

Why do compilers not recognize this optimization trick?
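The hand-written grouping `(a*a*a)*(a*a*a)` is an instance of exponentiation by squaring. As a purely illustrative sketch of the general technique (this is not what GCC emits), in Python:

```python
def pow_by_squaring(a, n):
    """Compute a**n for a non-negative integer n using O(log n) multiplies."""
    result = 1.0
    while n > 0:
        if n & 1:          # odd exponent: fold one factor into the result
            result *= a
        a *= a             # square the base
        n >>= 1
    return result

assert pow_by_squaring(2.0, 6) == 64.0
assert pow_by_squaring(3.0, 5) == 243.0
```

Note that, as the answer below explains, applying this regrouping to floating-point values can change the rounded result, which is why compilers won't do it silently.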

Answered By: Lambdageek (1069)

Because floating-point math is not associative. The way you group the operands in a floating-point multiplication affects the numerical accuracy of the answer.

As a result, most compilers are very conservative about reordering floating-point calculations unless they can be sure that the answer will stay the same, or unless you tell them you don't care about numerical accuracy, for example with GCC's `-ffast-math` option.
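The non-associativity is easy to see in any IEEE-754 language. A quick Python sketch (Python's `float` is an IEEE-754 double):

```python
# Floating-point addition is not associative: regrouping changes the rounding.
lhs = (0.1 + 0.2) + 0.3
rhs = 0.1 + (0.2 + 0.3)
assert lhs != rhs      # the two groupings produce different doubles
assert rhs == 0.6      # this grouping happens to round to exactly 0.6
```

Since the compiler cannot know which grouping you consider "correct", it preserves the one you wrote.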

Why does this bit of code,

```
const float x[16] = { 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8,
                      1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6 };
const float z[16] = { 1.123, 1.234, 1.345, 156.467, 1.578, 1.689, 1.790, 1.812,
                      1.923, 2.034, 2.145, 2.256, 2.367, 2.478, 2.589, 2.690 };
float y[16];
for (int i = 0; i < 16; i++)
{
    y[i] = x[i];
}

for (int j = 0; j < 9000000; j++)
{
    for (int i = 0; i < 16; i++)
    {
        y[i] *= x[i];
        y[i] /= z[i];
        y[i] = y[i] + 0.1f; // <--
        y[i] = y[i] - 0.1f; // <--
    }
}
```

run more than 10 times faster than the following bit (identical except where noted)?

```
const float x[16] = { 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8,
                      1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6 };
const float z[16] = { 1.123, 1.234, 1.345, 156.467, 1.578, 1.689, 1.790, 1.812,
                      1.923, 2.034, 2.145, 2.256, 2.367, 2.478, 2.589, 2.690 };
float y[16];
for (int i = 0; i < 16; i++)
{
    y[i] = x[i];
}

for (int j = 0; j < 9000000; j++)
{
    for (int i = 0; i < 16; i++)
    {
        y[i] *= x[i];
        y[i] /= z[i];
        y[i] = y[i] + 0; // <--
        y[i] = y[i] - 0; // <--
    }
}
```

when compiling with Visual Studio 2010 SP1. (I haven't tested with other compilers.)

Answered By: Mysticial (814)

**Welcome to the world of denormalized floating-point!** They can wreak havoc on performance!!!

Denormal (or subnormal) numbers are kind of a hack to get some extra values very close to zero out of the floating-point representation. Operations on denormalized floating-point can be **tens to hundreds of times slower** than on normalized floating-point. This is because many processors can't handle them directly and must trap and resolve them using microcode.
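You can poke at the subnormal range directly from Python, whose `float` is an IEEE-754 double (a quick sketch; `sys.float_info.min` is the smallest positive *normal* value):

```python
import sys

smallest_normal = sys.float_info.min   # 2**-1022, smallest positive *normal* double
subnormal = smallest_normal / 1024     # below the normal range -> stored as a subnormal
assert 0.0 < subnormal < smallest_normal

smallest_sub = 2.0 ** -1074            # the smallest positive subnormal double
assert smallest_sub > 0.0
assert smallest_sub / 2 == 0.0         # halving it underflows to exactly zero
```

Subnormals trade precision for that extra range between zero and `sys.float_info.min`, which is exactly the region the values in the `0` version of the loop drift into.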

If you print out the numbers after 10,000 iterations, you will see that they have converged to different values depending on whether `0` or `0.1` is used.

Here's the test code compiled on x64:

```
#include <cstdlib>
#include <iostream>
#include <omp.h>
using namespace std;

int main() {
    double start = omp_get_wtime();

    const float x[16] = {1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,2.0,2.1,2.2,2.3,2.4,2.5,2.6};
    const float z[16] = {1.123,1.234,1.345,156.467,1.578,1.689,1.790,1.812,1.923,2.034,2.145,2.256,2.367,2.478,2.589,2.690};
    float y[16];
    for (int i = 0; i < 16; i++)
    {
        y[i] = x[i];
    }
    for (int j = 0; j < 9000000; j++)
    {
        for (int i = 0; i < 16; i++)
        {
            y[i] *= x[i];
            y[i] /= z[i];
#ifdef FLOATING
            y[i] = y[i] + 0.1f;
            y[i] = y[i] - 0.1f;
#else
            y[i] = y[i] + 0;
            y[i] = y[i] - 0;
#endif
            if (j > 10000)
                cout << y[i] << " ";
        }
        if (j > 10000)
            cout << endl;
    }

    double end = omp_get_wtime();
    cout << end - start << endl;

    system("pause");
    return 0;
}
```

**Output:**

```
#define FLOATING
1.78814e-007 1.3411e-007 1.04308e-007 0 7.45058e-008 6.70552e-008 6.70552e-008 5.58794e-007 3.05474e-007 2.16067e-007 1.71363e-007 1.49012e-007 1.2666e-007 1.11759e-007 1.04308e-007 1.04308e-007
1.78814e-007 1.3411e-007 1.04308e-007 0 7.45058e-008 6.70552e-008 6.70552e-008 5.58794e-007 3.05474e-007 2.16067e-007 1.71363e-007 1.49012e-007 1.2666e-007 1.11759e-007 1.04308e-007 1.04308e-007
//#define FLOATING
6.30584e-044 3.92364e-044 3.08286e-044 0 1.82169e-044 1.54143e-044 2.10195e-044 2.46842e-029 7.56701e-044 4.06377e-044 3.92364e-044 3.22299e-044 3.08286e-044 2.66247e-044 2.66247e-044 2.24208e-044
6.30584e-044 3.92364e-044 3.08286e-044 0 1.82169e-044 1.54143e-044 2.10195e-044 2.45208e-029 7.56701e-044 4.06377e-044 3.92364e-044 3.22299e-044 3.08286e-044 2.66247e-044 2.66247e-044 2.24208e-044
```

Note how in the second run the numbers are very close to zero.

Denormalized numbers are generally rare and thus most processors don't try to handle them efficiently.

To demonstrate that this has everything to do with denormalized numbers, if we **flush denormals to zero** by adding this to the start of the code:

```
_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
```

then the version with `0` is no longer 10x slower and actually becomes faster. (This requires that the code be compiled with SSE enabled.)

This means that rather than using these weird lower precision almost-zero values, we just round to zero instead.

**Timings: Core i7 920 @ 3.5 GHz:**

```
// Don't flush denormals to zero.
0.1f: 0.564067
0 : 26.7669
// Flush denormals to zero.
0.1f: 0.587117
0 : 0.341406
```

In the end, this really has nothing to do with whether it's an integer or floating-point. The `0` or `0.1f` is converted/stored into a register outside of both loops, so that has no effect on performance.

In the following program you can see that every value slightly less than x.5 is rounded down, except for 0.49999999999999994, which rounds up to 1.

```
for (int i = 10; i >= 0; i--) {
    long l = Double.doubleToLongBits(i + 0.5);
    double x;
    do {
        x = Double.longBitsToDouble(l);
        System.out.println(x + " rounded is " + Math.round(x));
        l--;
    } while (Math.round(x) > i);
}
```

prints

```
10.5 rounded is 11
10.499999999999998 rounded is 10
9.5 rounded is 10
9.499999999999998 rounded is 9
8.5 rounded is 9
8.499999999999998 rounded is 8
7.5 rounded is 8
7.499999999999999 rounded is 7
6.5 rounded is 7
6.499999999999999 rounded is 6
5.5 rounded is 6
5.499999999999999 rounded is 5
4.5 rounded is 5
4.499999999999999 rounded is 4
3.5 rounded is 4
3.4999999999999996 rounded is 3
2.5 rounded is 3
2.4999999999999996 rounded is 2
1.5 rounded is 2
1.4999999999999998 rounded is 1
0.5 rounded is 1
0.49999999999999994 rounded is 1
0.4999999999999999 rounded is 0
```

I am using Java 6 update 31.

Answered By: Oli Charlesworth (299)

According to the Java 6 docs, `round(x)` is implemented as `floor(x+0.5)`.¹ But 0.5 + 0.49999999999999994 is exactly 1 in double precision:

```
static void print(double d) {
    System.out.printf("%016x\n", Double.doubleToLongBits(d));
}

public static void main(String args[]) {
    double a = 0.5;
    double b = 0.49999999999999994;

    print(a);
    print(b);
    print(a + b);
    print(1.0);
}
```

gives:

```
3fe0000000000000
3fdfffffffffffff
3ff0000000000000
3ff0000000000000
```

This is because 0.49999999999999994 has a smaller exponent than 0.5, so when they're added, its mantissa is shifted, and the ULP gets bigger.
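The same arithmetic is easy to verify outside Java. A Python sketch (Python floats are IEEE-754 doubles too):

```python
import math

a = 0.5
b = 0.49999999999999994       # the largest double strictly less than 0.5
assert b < a
assert a + b == 1.0           # the exact sum 1 - 2**-54 rounds up to exactly 1.0
assert math.floor(a + b) == 1 # so a floor(x + 0.5) implementation "rounds" b up to 1
```

The exact sum, 1 - 2⁻⁵⁴, is halfway between the two nearest doubles, and round-to-nearest-even resolves the tie toward 1.0.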

¹ At least, this is the definition given in the Java 6 docs. It's not given in the Java 7 docs, which may explain why people are seeing different behaviour when they run in Java 7. UPDATE: According to Simon Nickerson's answer, it's a known bug, so this almost certainly explains the difference in the docs and the observed behaviour between versions.

This should be simple - in Python, how can I parse a numeric string like "545.2222" to its corresponding float value, 545.2222, or "31" to an integer, 31?

EDIT: I just wanted to know how to parse a float string to a float, and (separately) an int string to an int. Sorry for the confusing phrasing/original examples on my part.

At any rate, the answer from Harley is what I needed.

Answered By: Harley Holcombe (261)

```
>>> a = "545.2222"
>>> float(a)
545.22220000000004
>>> int(float(a))
545
```
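A common defensive variant (my sketch, not part of the original answer) tries `int` first and falls back to `float`, so `"31"` stays an integer:

```python
def parse_number(s):
    """Parse a numeric string: return an int if possible, otherwise a float."""
    try:
        return int(s)
    except ValueError:
        return float(s)

assert parse_number("31") == 31            # stays an int
assert parse_number("545.2222") == 545.2222  # falls back to float
```

Note that a bad input such as `"abc"` still raises `ValueError`, which is usually what you want rather than a silent default.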

Here is the example with comments:

```
class Program
{
    // first version of the structure
    public struct D1
    {
        public double d;
        public int f;
    }

    // during some changes in the code we got D2 from D1:
    // the type of field f became double while it was int before
    public struct D2
    {
        public double d;
        public double f;
    }

    static void Main(string[] args)
    {
        // Scenario with the first version
        D1 a = new D1();
        D1 b = new D1();
        a.f = b.f = 1;
        a.d = 0.0;
        b.d = -0.0;
        bool r1 = a.Equals(b); // gives true, all is ok

        // The same scenario with the new one
        D2 c = new D2();
        D2 d = new D2();
        c.f = d.f = 1;
        c.d = 0.0;
        d.d = -0.0;
        bool r2 = c.Equals(d); // false! this is not the expected result
    }
}
```

So, what do you think about this?

Answered By: SLaks (243)

The bug is in the following two lines of `System.ValueType`: (I stepped into the reference source)

```
if (CanCompareBits(this))
    return FastEqualsCheck(thisObj, obj);
```

(Both methods are `[MethodImpl(MethodImplOptions.InternalCall)]`.)

When all of the fields are 8 bytes wide, `CanCompareBits` mistakenly returns true, resulting in a bitwise comparison of two different, but semantically identical, values.

When at least one field is not 8 bytes wide, `CanCompareBits` returns false, and the code proceeds to use reflection to loop over the fields and call `Equals` for each value, which correctly treats `-0.0` as equal to `0.0`.

Here is the source for `CanCompareBits` from the SSCLI:

```
FCIMPL1(FC_BOOL_RET, ValueTypeHelper::CanCompareBits, Object* obj)
{
    WRAPPER_CONTRACT;
    STATIC_CONTRACT_SO_TOLERANT;
    _ASSERTE(obj != NULL);
    MethodTable* mt = obj->GetMethodTable();
    FC_RETURN_BOOL(!mt->ContainsPointers() && !mt->IsNotTightlyPacked());
}
FCIMPLEND
```
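The crux is that `-0.0` and `0.0` compare equal numerically but have different bit patterns, which is exactly what a bitwise comparison gets wrong. A Python sketch using `struct` to expose the raw bits of a double:

```python
import struct

def bits(x):
    """Raw IEEE-754 bit pattern of a double, as a hex string."""
    return struct.pack('>d', x).hex()

assert -0.0 == 0.0                 # semantically equal: what Equals should report
assert bits(-0.0) != bits(0.0)     # bitwise different: what a memcmp-style check sees
assert bits(0.0) == '0000000000000000'
assert bits(-0.0) == '8000000000000000'  # only the sign bit differs
```

Any "fast path" that compares value types byte-for-byte has to special-case values like this (NaN is the mirror-image problem: identical bits, but `NaN != NaN`).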