Top math Questions


I can't get my head around this: which is more random?

rand()

OR

rand() * rand()

I'm finding it a real brain teaser; could you help me out?

EDIT:

Intuitively I know that the mathematical answer will be that they are equally random, but I can't help but think that if you "run the random number algorithm" twice when you multiply the two together you'll create something more random than just doing it once.

Answered By: belisarius ( 1272)

Just a clarification

Although the previous answers are right whenever you try to spot the randomness of a pseudo-random variable or its multiplication, you should be aware that while Random() is usually uniformly distributed, Random() * Random() is not.

Example

This is a uniform random distribution sample simulated through a pseudo-random variable:

Histogram of Random()

        BarChart[BinCounts[RandomReal[{0, 1}, 50000], 0.01]]

While this is the distribution you get after multiplying two random variables:

Histogram of Random() * Random()

        BarChart[BinCounts[Table[RandomReal[] * RandomReal[], {50000}], 0.01]]

So, both are “random”, but their distribution is very different.
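
For reference (standard probability, not part of the original answer): if X and Y are independent and uniform on (0, 1), then P(XY <= z) = z - z ln z, so the product has density

    f_{XY}(z) = -\ln z, \qquad 0 < z \le 1,

which is why the product's histogram piles up near 0 and thins out toward 1.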

Another example

While 2 * Random() is uniformly distributed:

Histogram of 2 * Random()

        BarChart[BinCounts[2 * RandomReal[{0, 1}, 50000], 0.01]]

Random() + Random() is not!

Histogram of Random() + Random()

        BarChart[BinCounts[Table[RandomReal[] + RandomReal[], {50000}], 0.01]]
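
For reference (again standard probability, not from the original answer): the sum of two independent U(0, 1) variables has the triangular density

    f_{X+Y}(s) = \begin{cases} s & 0 \le s \le 1 \\ 2 - s & 1 < s \le 2 \end{cases}

which is the tent shape the histogram above displays.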

The Central Limit Theorem

The Central Limit Theorem states that the sum of n independent uniform random variables approaches a normal distribution as n increases.

With just four terms you get:

Histogram of Random() + Random() + Random() + Random()

        BarChart[BinCounts[
          Table[RandomReal[] + RandomReal[] + RandomReal[] + RandomReal[], {50000}],
          0.01]]

And here you can see the road from a uniform to a normal distribution by adding up 1, 2, 4, 6, 10 and 20 uniformly distributed random variables:

Histogram of different numbers of random variables added

Edit

A few credits

Thanks to Thomas Ahle for pointing out in the comments that the probability distributions shown in the last two images are known as the Irwin-Hall distribution

Thanks to Heike for her wonderful torn[] function

511
Hrvoje Prgeša

A question I got on my last interview:

Design a function f, such that:

f(f(n)) == -n

Where n is a 32-bit signed integer; you can't use complex number arithmetic.

If you can't design such a function for the whole range of numbers, design it for the largest range possible.

Any ideas?

Answered By: viraptor ( 291)

You didn't say what kind of language they expected... Here's a solution for a statically typed language (Haskell). It basically messes with the two most significant bits:

import Data.Bits (complementBit, testBit)

f :: Int -> Int
f x | testBit x 30 /= testBit x 31 = negate $ complementBit x 30
    | otherwise = complementBit x 30

It's much easier in a dynamic language (Python). Just check if the argument is a number X and return a lambda that returns -X:

def f(x):
   if isinstance(x,int):
      return (lambda: -x)
   else:
      return x()
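
A quick sanity check of the Python version (not from the original answer; it assumes the definition of f above is in scope):

# f(n) returns a closure, and applying f to that closure calls it, yielding -n
assert f(f(5)) == -5
assert f(f(-7)) == 7
assert f(f(0)) == 0
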
412
c, math
Aashish

This question has been asked in an Oracle interview.

How would you divide a number by 3 without using *, /, +, -, %, operators?

The number may be signed or unsigned.

Answered By: Coodey ( 391)

There is a simple function I found here. But it's using the + operator, so you have to add the values with the bit-operators:

// replaces the + operator
int add(int x, int y) {
    int a, b;
    do {
        a = x & y;
        b = x ^ y;
        x = a << 1;
        y = b;
    } while (a);
    return b;
}

int divideby3 (int num) {
    int sum = 0;
    while (num > 3) {
        sum = add(num >> 2, sum);
        num = add(num >> 2, num & 3);
    }
    if (num == 3)
        sum = add(sum, 1);
    return sum; 
}

As Jim commented, this works because of the following (a quick Python sanity check appears after the list):

  • n = 4 * a + b
  • n / 3 = a + (a + b) / 3
  • So sum += a, n = a + b, and iterate
  • When a == 0 (n < 4), sum += floor(n / 3); i.e. 1, if n == 3, else 0
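
For reference, here is a direct Python translation of the routine above (not from the original answer; non-negative inputs only, since Python integers don't wrap like C ints):

def add(x, y):
    # replaces the + operator with the same XOR/carry scheme as the C version
    while x:
        x, y = (x & y) << 1, x ^ y
    return y

def divide_by_3(num):
    total = 0
    while num > 3:
        total = add(num >> 2, total)
        num = add(num >> 2, num & 3)
    if num == 3:
        total = add(total, 1)
    return total

assert all(divide_by_3(n) == n // 3 for n in range(10000))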

I'm looking for the fastest way to determine if a long value is a perfect square (i.e. its square root is another integer). I've done it the easy way, by using the built-in Math.sqrt() function, but I'm wondering if there is a way to do it faster by restricting yourself to an integer-only domain. Maintaining a lookup table is impractical (since there are about 2^31.5 integers whose square is less than 2^63).

Here is the very simple and straightforward way I'm doing it now:

public final static boolean isPerfectSquare(long n)
{
  if (n < 0)
    return false;

  long tst = (long)(Math.sqrt(n) + 0.5);
  return tst*tst == n;
}

Notes: I'm using this function in many Project Euler problems. So no one else will ever have to maintain this code. And this kind of micro-optimization could actually make a difference, since part of the challenge is to do every algorithm in less than a minute, and this function will need to be called millions of times in some problems.


Update 2: A new solution posted by A. Rex has proven to be even faster. In a run over the first 1 billion integers, the solution only required 34% of the time that the original solution used. While the John Carmack hack is a little better for small values of n, the benefit compared to this solution is pretty small.

Here is the A. Rex solution, converted to Java:

private final static boolean isPerfectSquare(long n)
{
  // Quickfail
  if( n < 0 || ((n&2) != 0) || ((n & 7) == 5) || ((n & 11) == 8) )
    return false;
  if( n == 0 )
    return true;

  // Check mod 255 = 3 * 5 * 17, for fun
  long y = n;
  y = (y & 0xffffffffL) + (y >> 32);
  y = (y & 0xffffL) + (y >> 16);
  y = (y & 0xffL) + ((y >> 8) & 0xffL) + (y >> 16);
  if( bad255[(int)y] )
      return false;

  // Divide out powers of 4 using binary search
  if((n & 0xffffffffL) == 0)
      n >>= 32;
  if((n & 0xffffL) == 0)
      n >>= 16;
  if((n & 0xffL) == 0)
      n >>= 8;
  if((n & 0xfL) == 0)
      n >>= 4;
  if((n & 0x3L) == 0)
      n >>= 2;

  if((n & 0x7L) != 1)
      return false;

  // Compute sqrt using something like Hensel's lemma
  long r, t, z;
  r = start[(int)((n >> 3) & 0x3ffL)];
  do {
    z = n - r * r;
    if( z == 0 )
      return true;
    if( z < 0 )
      return false;
    t = z & (-z);
    r += (z & t) >> 1;
    if( r > (t  >> 1) )
    r = t - r;
  } while( t <= (1L << 33) );
  return false;
}

private static boolean[] bad255 =
{
   false,false,true ,true ,false,true ,true ,true ,true ,false,true ,true ,true ,
   true ,true ,false,false,true ,true ,false,true ,false,true ,true ,true ,false,
   true ,true ,true ,true ,false,true ,true ,true ,false,true ,false,true ,true ,
   true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,false,true ,false,
   true ,true ,true ,false,true ,true ,true ,true ,false,true ,true ,true ,false,
   true ,false,true ,true ,false,false,true ,true ,true ,true ,true ,false,true ,
   true ,true ,true ,false,true ,true ,false,false,true ,true ,true ,true ,true ,
   true ,true ,true ,false,true ,true ,true ,true ,true ,false,true ,true ,true ,
   true ,true ,false,true ,true ,true ,true ,false,true ,true ,true ,false,true ,
   true ,true ,true ,false,false,true ,true ,true ,true ,true ,true ,true ,true ,
   true ,true ,true ,true ,true ,false,false,true ,true ,true ,true ,true ,true ,
   true ,false,false,true ,true ,true ,true ,true ,false,true ,true ,false,true ,
   true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,false,true ,true ,
   false,true ,false,true ,true ,false,true ,true ,true ,true ,true ,true ,true ,
   true ,true ,true ,true ,false,true ,true ,false,true ,true ,true ,true ,true ,
   false,false,true ,true ,true ,true ,true ,true ,true ,false,false,true ,true ,
   true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,false,false,
   true ,true ,true ,true ,false,true ,true ,true ,false,true ,true ,true ,true ,
   false,true ,true ,true ,true ,true ,false,true ,true ,true ,true ,true ,false,
   true ,true ,true ,true ,true ,true ,true ,true ,false,false,true ,true ,false,
   true ,true ,true ,true ,false,true ,true ,true ,true ,true ,false,false,true ,
   true ,false,true ,false,true ,true ,true ,false,true ,true ,true ,true ,false,
   true ,true ,true ,false,true ,false,true ,true ,true ,true ,true ,true ,true ,
   true ,true ,true ,true ,true ,false,true ,false,true ,true ,true ,false,true ,
   true ,true ,true ,false,true ,true ,true ,false,true ,false,true ,true ,false,
   false,true ,true ,true ,true ,true ,false,true ,true ,true ,true ,false,true ,
   true ,false,false,true ,true ,true ,true ,true ,true ,true ,true ,false,true ,
   true ,true ,true ,true ,false,true ,true ,true ,true ,true ,false,true ,true ,
   true ,true ,false,true ,true ,true ,false,true ,true ,true ,true ,false,false,
   true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,
   false,false,true ,true ,true ,true ,true ,true ,true ,false,false,true ,true ,
   true ,true ,true ,false,true ,true ,false,true ,true ,true ,true ,true ,true ,
   true ,true ,true ,true ,true ,false,true ,true ,false,true ,false,true ,true ,
   false,true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,true ,false,
   true ,true ,false,true ,true ,true ,true ,true ,false,false,true ,true ,true ,
   true ,true ,true ,true ,false,false,true ,true ,true ,true ,true ,true ,true ,
   true ,true ,true ,true ,true ,true ,false,false,true ,true ,true ,true ,false,
   true ,true ,true ,false,true ,true ,true ,true ,false,true ,true ,true ,true ,
   true ,false,true ,true ,true ,true ,true ,false,true ,true ,true ,true ,true ,
   true ,true ,true ,false,false
};

private static int[] start =
{
  1,3,1769,5,1937,1741,7,1451,479,157,9,91,945,659,1817,11,
  1983,707,1321,1211,1071,13,1479,405,415,1501,1609,741,15,339,1703,203,
  129,1411,873,1669,17,1715,1145,1835,351,1251,887,1573,975,19,1127,395,
  1855,1981,425,453,1105,653,327,21,287,93,713,1691,1935,301,551,587,
  257,1277,23,763,1903,1075,1799,1877,223,1437,1783,859,1201,621,25,779,
  1727,573,471,1979,815,1293,825,363,159,1315,183,27,241,941,601,971,
  385,131,919,901,273,435,647,1493,95,29,1417,805,719,1261,1177,1163,
  1599,835,1367,315,1361,1933,1977,747,31,1373,1079,1637,1679,1581,1753,1355,
  513,1539,1815,1531,1647,205,505,1109,33,1379,521,1627,1457,1901,1767,1547,
  1471,1853,1833,1349,559,1523,967,1131,97,35,1975,795,497,1875,1191,1739,
  641,1149,1385,133,529,845,1657,725,161,1309,375,37,463,1555,615,1931,
  1343,445,937,1083,1617,883,185,1515,225,1443,1225,869,1423,1235,39,1973,
  769,259,489,1797,1391,1485,1287,341,289,99,1271,1701,1713,915,537,1781,
  1215,963,41,581,303,243,1337,1899,353,1245,329,1563,753,595,1113,1589,
  897,1667,407,635,785,1971,135,43,417,1507,1929,731,207,275,1689,1397,
  1087,1725,855,1851,1873,397,1607,1813,481,163,567,101,1167,45,1831,1205,
  1025,1021,1303,1029,1135,1331,1017,427,545,1181,1033,933,1969,365,1255,1013,
  959,317,1751,187,47,1037,455,1429,609,1571,1463,1765,1009,685,679,821,
  1153,387,1897,1403,1041,691,1927,811,673,227,137,1499,49,1005,103,629,
  831,1091,1449,1477,1967,1677,697,1045,737,1117,1737,667,911,1325,473,437,
  1281,1795,1001,261,879,51,775,1195,801,1635,759,165,1871,1645,1049,245,
  703,1597,553,955,209,1779,1849,661,865,291,841,997,1265,1965,1625,53,
  1409,893,105,1925,1297,589,377,1579,929,1053,1655,1829,305,1811,1895,139,
  575,189,343,709,1711,1139,1095,277,993,1699,55,1435,655,1491,1319,331,
  1537,515,791,507,623,1229,1529,1963,1057,355,1545,603,1615,1171,743,523,
  447,1219,1239,1723,465,499,57,107,1121,989,951,229,1521,851,167,715,
  1665,1923,1687,1157,1553,1869,1415,1749,1185,1763,649,1061,561,531,409,907,
  319,1469,1961,59,1455,141,1209,491,1249,419,1847,1893,399,211,985,1099,
  1793,765,1513,1275,367,1587,263,1365,1313,925,247,1371,1359,109,1561,1291,
  191,61,1065,1605,721,781,1735,875,1377,1827,1353,539,1777,429,1959,1483,
  1921,643,617,389,1809,947,889,981,1441,483,1143,293,817,749,1383,1675,
  63,1347,169,827,1199,1421,583,1259,1505,861,457,1125,143,1069,807,1867,
  2047,2045,279,2043,111,307,2041,597,1569,1891,2039,1957,1103,1389,231,2037,
  65,1341,727,837,977,2035,569,1643,1633,547,439,1307,2033,1709,345,1845,
  1919,637,1175,379,2031,333,903,213,1697,797,1161,475,1073,2029,921,1653,
  193,67,1623,1595,943,1395,1721,2027,1761,1955,1335,357,113,1747,1497,1461,
  1791,771,2025,1285,145,973,249,171,1825,611,265,1189,847,1427,2023,1269,
  321,1475,1577,69,1233,755,1223,1685,1889,733,1865,2021,1807,1107,1447,1077,
  1663,1917,1129,1147,1775,1613,1401,555,1953,2019,631,1243,1329,787,871,885,
  449,1213,681,1733,687,115,71,1301,2017,675,969,411,369,467,295,693,
  1535,509,233,517,401,1843,1543,939,2015,669,1527,421,591,147,281,501,
  577,195,215,699,1489,525,1081,917,1951,2013,73,1253,1551,173,857,309,
  1407,899,663,1915,1519,1203,391,1323,1887,739,1673,2011,1585,493,1433,117,
  705,1603,1111,965,431,1165,1863,533,1823,605,823,1179,625,813,2009,75,
  1279,1789,1559,251,657,563,761,1707,1759,1949,777,347,335,1133,1511,267,
  833,1085,2007,1467,1745,1805,711,149,1695,803,1719,485,1295,1453,935,459,
  1151,381,1641,1413,1263,77,1913,2005,1631,541,119,1317,1841,1773,359,651,
  961,323,1193,197,175,1651,441,235,1567,1885,1481,1947,881,2003,217,843,
  1023,1027,745,1019,913,717,1031,1621,1503,867,1015,1115,79,1683,793,1035,
  1089,1731,297,1861,2001,1011,1593,619,1439,477,585,283,1039,1363,1369,1227,
  895,1661,151,645,1007,1357,121,1237,1375,1821,1911,549,1999,1043,1945,1419,
  1217,957,599,571,81,371,1351,1003,1311,931,311,1381,1137,723,1575,1611,
  767,253,1047,1787,1169,1997,1273,853,1247,413,1289,1883,177,403,999,1803,
  1345,451,1495,1093,1839,269,199,1387,1183,1757,1207,1051,783,83,423,1995,
  639,1155,1943,123,751,1459,1671,469,1119,995,393,219,1743,237,153,1909,
  1473,1859,1705,1339,337,909,953,1771,1055,349,1993,613,1393,557,729,1717,
  511,1533,1257,1541,1425,819,519,85,991,1693,503,1445,433,877,1305,1525,
  1601,829,809,325,1583,1549,1991,1941,927,1059,1097,1819,527,1197,1881,1333,
  383,125,361,891,495,179,633,299,863,285,1399,987,1487,1517,1639,1141,
  1729,579,87,1989,593,1907,839,1557,799,1629,201,155,1649,1837,1063,949,
  255,1283,535,773,1681,461,1785,683,735,1123,1801,677,689,1939,487,757,
  1857,1987,983,443,1327,1267,313,1173,671,221,695,1509,271,1619,89,565,
  127,1405,1431,1659,239,1101,1159,1067,607,1565,905,1755,1231,1299,665,373,
  1985,701,1879,1221,849,627,1465,789,543,1187,1591,923,1905,979,1241,181
};

Update: I've tried the different solutions presented below.

  • After exhaustive testing, I found that adding 0.5 to the result of Math.sqrt() is not necessary, at least not on my machine.
  • The John Carmack hack was faster, but it gave incorrect results starting at n=410881. However, as suggested by BobbyShaftoe, we can use the Carmack hack for n < 410881.
  • Newton's method was a good bit slower than Math.sqrt(). This is probably because Math.sqrt() uses something similar to Newton's Method, but implemented in the hardware so it's much faster than in Java. Also, Newton's Method still required use of doubles.
  • A modified Newton's method, which used a few tricks so that only integer math was involved, required some hacks to avoid overflow (I want this function to work with all positive 64-bit signed integers), and it was still slower than Math.sqrt().
  • Binary chop was even slower. This makes sense because the binary chop will on average require 16 passes to find the square root of a 64-bit number.

The one suggestion which did show improvements was made by John D. Cook. You can observe that the last hex digit (i.e. the last 4 bits) of a perfect square must be 0, 1, 4, or 9. This means that 75% of numbers can be immediately eliminated as possible squares. Implementing this solution resulted in about a 50% reduction in runtime.

Working from John's suggestion, I investigated properties of the last n bits of a perfect square. By analyzing the last 6 bits, I found that only 12 out of 64 values are possible for the last 6 bits. This means 81% of values can be eliminated without using any math. Implementing this solution gave an additional 8% reduction in runtime (compared to my original algorithm). Analyzing more than 6 bits results in a list of possible ending bits which is too large to be practical.
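
For reference, the 12 admissible 6-bit endings can be reproduced in a couple of lines (not from the original post):

residues = sorted({(k * k) % 64 for k in range(64)})   # the last 6 bits of n are n % 64
print(len(residues))  # 12
print(residues)       # [0, 1, 4, 9, 16, 17, 25, 33, 36, 41, 49, 57]

These are exactly the hex cases (0x00 through 0x39) in the switch statement below.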

Here is the code that I have used, which runs in 42% of the time required by the original algorithm (based on a run over the first 100 million integers). For values of n less than 410881, it runs in only 29% of the time required by the original algorithm.

private final static boolean isPerfectSquare(long n)
{
  if (n < 0)
    return false;

  switch((int)(n & 0x3F))
  {
  case 0x00: case 0x01: case 0x04: case 0x09: case 0x10: case 0x11:
  case 0x19: case 0x21: case 0x24: case 0x29: case 0x31: case 0x39:
    long sqrt;
    if(n < 410881L)
    {
      //John Carmack hack, converted to Java.
      // See: http://www.codemaestro.com/reviews/9
      int i;
      float x2, y;

      x2 = n * 0.5F;
      y  = n;
      i  = Float.floatToRawIntBits(y);
      i  = 0x5f3759df - ( i >> 1 );
      y  = Float.intBitsToFloat(i);
      y  = y * ( 1.5F - ( x2 * y * y ) );

      sqrt = (long)(1.0F/y);
    }
    else
    {
      //Carmack hack gives incorrect answer for n >= 410881.
      sqrt = (long)Math.sqrt(n);
    }
    return sqrt*sqrt == n;

  default:
    return false;
  }
}

Notes:

  • According to John's tests, using or statements is faster in C++ than using a switch, but in Java and C# there appears to be no difference between or and switch.
  • I also tried making a lookup table (a private static array of 64 boolean values). Then, instead of a switch or an or statement, I would just write if(lookup[(int)(n & 0x3F)]) { test } else return false;. To my surprise, this was (just slightly) slower, presumably because array bounds are checked in Java.
Answered By: A. Rex ( 208)

I figured out a method that works ~35% faster than your 6bits+Carmack+sqrt code, at least with my CPU (x86) and programming language (C/C++). Your results may vary, especially because I don't know how the Java factor will play out.

My approach is threefold:

  1. First, filter out obvious answers. This includes negative numbers and looking at the last 4 bits. (I found looking at the last six didn't help.) I also answer yes for 0. (In reading the code below, note that my input is int64 x.)
    if( x < 0 || (x&2) || ((x & 7) == 5) || ((x & 11) == 8) )
        return false;
    if( x == 0 )
        return true;
  2. Next, check if it's a square modulo 255 = 3 * 5 * 17. Because that's a product of three distinct primes, only about 1/8 of the residues mod 255 are squares. However, in my experience, calling the modulo operator (%) costs more than the benefit one gets, so I use bit tricks involving 255 = 2^8-1 to compute the residue. (For better or worse, I am not using the trick of reading individual bytes out of a word, only bitwise-and and shifts.) A quick sanity check of this folding appears after the list.
    int64 y = x;
    y = (y & 4294967295LL) + (y >> 32); 
    y = (y & 65535) + (y >> 16);
    y = (y & 255) + ((y >> 8) & 255) + (y >> 16);
    // At this point, y is between 0 and 511.  More code can reduce it farther.
    
    To actually check if the residue is a square, I look up the answer in a precomputed table.
    if( bad255[y] )
        return false;
    // However, I just use a table of size 512
    
  3. Finally, try to compute the square root using a method similar to Hensel's lemma. (I don't think it's applicable directly, but it works with some modifications.) Before doing that, I divide out all powers of 4 with a binary search:
    if((x & 4294967295LL) == 0)
        x >>= 32;
    if((x & 65535) == 0)
        x >>= 16;
    if((x & 255) == 0)
        x >>= 8;
    if((x & 15) == 0)
        x >>= 4;
    if((x & 3) == 0)
        x >>= 2;
    At this point, for our number to be a square, it must be 1 mod 8.
    if((x & 7) != 1)
        return false;
    The basic structure of Hensel's lemma is the following. (Note: untested code; if it doesn't work, try t=2 or 8.)
    int64 t = 4, r = 1;
    t <<= 1; r += ((x - r * r) & t) >> 1;
    t <<= 1; r += ((x - r * r) & t) >> 1;
    t <<= 1; r += ((x - r * r) & t) >> 1;
    // Repeat until t is 2^33 or so.  Use a loop if you want.
    The idea is that at each iteration, you add one bit onto r, the "current" square root of x; each square root is accurate modulo a larger and larger power of 2, namely t/2. At the end, r and t/2-r will be square roots of x modulo t/2. (Note that if r is a square root of x, then so is -r. This is true even modulo numbers, but beware, modulo some numbers, things can have even more than 2 square roots; notably, this includes powers of 2.) Because our actual square root is less than 2^32, at that point we can actually just check if r or t/2-r are real square roots. In my actual code, I use the following modified loop:
    int64 r, t, z;
    r = start[(x >> 3) & 1023];
    do {
        z = x - r * r;
        if( z == 0 )
            return true;
        if( z < 0 )
            return false;
        t = z & (-z);
        r += (z & t) >> 1;
        if( r > (t >> 1) )
            r = t - r;
    } while( t <= (1LL << 33) );
    The speedup here is obtained in three ways: precomputed start value (equivalent to ~10 iterations of the loop), earlier exit of the loop, and skipping some t values. For the last part, I look at z = x - r * r, and set t to be the largest power of 2 dividing z with a bit trick. This allows me to skip t values that wouldn't have affected the value of r anyway. The precomputed start value in my case picks out the "smallest positive" square root modulo 8192.
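
For reference, a quick check of the byte-folding in step 2 (not from the original answer): each step preserves the value mod 255, because 2^8, 2^16 and 2^32 are all congruent to 1 mod 255, and the result always fits the 512-entry table.

import random

def fold255(y):
    # same reduction steps as in the answer above
    y = (y & 0xFFFFFFFF) + (y >> 32)
    y = (y & 0xFFFF) + (y >> 16)
    y = (y & 0xFF) + ((y >> 8) & 0xFF) + (y >> 16)
    return y

for _ in range(100000):
    x = random.getrandbits(63)          # a non-negative 63-bit test value
    assert fold255(x) % 255 == x % 255  # the congruence is preserved
    assert 0 <= fold255(x) < 512        # small enough to index bad255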

Even if this code doesn't work faster for you, I hope you enjoy some of the ideas it contains. Complete, tested code follows, including the precomputed tables.

typedef signed long long int int64;

int start[1024] =
{1,3,1769,5,1937,1741,7,1451,479,157,9,91,945,659,1817,11,
1983,707,1321,1211,1071,13,1479,405,415,1501,1609,741,15,339,1703,203,
129,1411,873,1669,17,1715,1145,1835,351,1251,887,1573,975,19,1127,395,
1855,1981,425,453,1105,653,327,21,287,93,713,1691,1935,301,551,587,
257,1277,23,763,1903,1075,1799,1877,223,1437,1783,859,1201,621,25,779,
1727,573,471,1979,815,1293,825,363,159,1315,183,27,241,941,601,971,
385,131,919,901,273,435,647,1493,95,29,1417,805,719,1261,1177,1163,
1599,835,1367,315,1361,1933,1977,747,31,1373,1079,1637,1679,1581,1753,1355,
513,1539,1815,1531,1647,205,505,1109,33,1379,521,1627,1457,1901,1767,1547,
1471,1853,1833,1349,559,1523,967,1131,97,35,1975,795,497,1875,1191,1739,
641,1149,1385,133,529,845,1657,725,161,1309,375,37,463,1555,615,1931,
1343,445,937,1083,1617,883,185,1515,225,1443,1225,869,1423,1235,39,1973,
769,259,489,1797,1391,1485,1287,341,289,99,1271,1701,1713,915,537,1781,
1215,963,41,581,303,243,1337,1899,353,1245,329,1563,753,595,1113,1589,
897,1667,407,635,785,1971,135,43,417,1507,1929,731,207,275,1689,1397,
1087,1725,855,1851,1873,397,1607,1813,481,163,567,101,1167,45,1831,1205,
1025,1021,1303,1029,1135,1331,1017,427,545,1181,1033,933,1969,365,1255,1013,
959,317,1751,187,47,1037,455,1429,609,1571,1463,1765,1009,685,679,821,
1153,387,1897,1403,1041,691,1927,811,673,227,137,1499,49,1005,103,629,
831,1091,1449,1477,1967,1677,697,1045,737,1117,1737,667,911,1325,473,437,
1281,1795,1001,261,879,51,775,1195,801,1635,759,165,1871,1645,1049,245,
703,1597,553,955,209,1779,1849,661,865,291,841,997,1265,1965,1625,53,
1409,893,105,1925,1297,589,377,1579,929,1053,1655,1829,305,1811,1895,139,
575,189,343,709,1711,1139,1095,277,993,1699,55,1435,655,1491,1319,331,
1537,515,791,507,623,1229,1529,1963,1057,355,1545,603,1615,1171,743,523,
447,1219,1239,1723,465,499,57,107,1121,989,951,229,1521,851,167,715,
1665,1923,1687,1157,1553,1869,1415,1749,1185,1763,649,1061,561,531,409,907,
319,1469,1961,59,1455,141,1209,491,1249,419,1847,1893,399,211,985,1099,
1793,765,1513,1275,367,1587,263,1365,1313,925,247,1371,1359,109,1561,1291,
191,61,1065,1605,721,781,1735,875,1377,1827,1353,539,1777,429,1959,1483,
1921,643,617,389,1809,947,889,981,1441,483,1143,293,817,749,1383,1675,
63,1347,169,827,1199,1421,583,1259,1505,861,457,1125,143,1069,807,1867,
2047,2045,279,2043,111,307,2041,597,1569,1891,2039,1957,1103,1389,231,2037,
65,1341,727,837,977,2035,569,1643,1633,547,439,1307,2033,1709,345,1845,
1919,637,1175,379,2031,333,903,213,1697,797,1161,475,1073,2029,921,1653,
193,67,1623,1595,943,1395,1721,2027,1761,1955,1335,357,113,1747,1497,1461,
1791,771,2025,1285,145,973,249,171,1825,611,265,1189,847,1427,2023,1269,
321,1475,1577,69,1233,755,1223,1685,1889,733,1865,2021,1807,1107,1447,1077,
1663,1917,1129,1147,1775,1613,1401,555,1953,2019,631,1243,1329,787,871,885,
449,1213,681,1733,687,115,71,1301,2017,675,969,411,369,467,295,693,
1535,509,233,517,401,1843,1543,939,2015,669,1527,421,591,147,281,501,
577,195,215,699,1489,525,1081,917,1951,2013,73,1253,1551,173,857,309,
1407,899,663,1915,1519,1203,391,1323,1887,739,1673,2011,1585,493,1433,117,
705,1603,1111,965,431,1165,1863,533,1823,605,823,1179,625,813,2009,75,
1279,1789,1559,251,657,563,761,1707,1759,1949,777,347,335,1133,1511,267,
833,1085,2007,1467,1745,1805,711,149,1695,803,1719,485,1295,1453,935,459,
1151,381,1641,1413,1263,77,1913,2005,1631,541,119,1317,1841,1773,359,651,
961,323,1193,197,175,1651,441,235,1567,1885,1481,1947,881,2003,217,843,
1023,1027,745,1019,913,717,1031,1621,1503,867,1015,1115,79,1683,793,1035,
1089,1731,297,1861,2001,1011,1593,619,1439,477,585,283,1039,1363,1369,1227,
895,1661,151,645,1007,1357,121,1237,1375,1821,1911,549,1999,1043,1945,1419,
1217,957,599,571,81,371,1351,1003,1311,931,311,1381,1137,723,1575,1611,
767,253,1047,1787,1169,1997,1273,853,1247,413,1289,1883,177,403,999,1803,
1345,451,1495,1093,1839,269,199,1387,1183,1757,1207,1051,783,83,423,1995,
639,1155,1943,123,751,1459,1671,469,1119,995,393,219,1743,237,153,1909,
1473,1859,1705,1339,337,909,953,1771,1055,349,1993,613,1393,557,729,1717,
511,1533,1257,1541,1425,819,519,85,991,1693,503,1445,433,877,1305,1525,
1601,829,809,325,1583,1549,1991,1941,927,1059,1097,1819,527,1197,1881,1333,
383,125,361,891,495,179,633,299,863,285,1399,987,1487,1517,1639,1141,
1729,579,87,1989,593,1907,839,1557,799,1629,201,155,1649,1837,1063,949,
255,1283,535,773,1681,461,1785,683,735,1123,1801,677,689,1939,487,757,
1857,1987,983,443,1327,1267,313,1173,671,221,695,1509,271,1619,89,565,
127,1405,1431,1659,239,1101,1159,1067,607,1565,905,1755,1231,1299,665,373,
1985,701,1879,1221,849,627,1465,789,543,1187,1591,923,1905,979,1241,181};

bool bad255[512] =
{0,0,1,1,0,1,1,1,1,0,1,1,1,1,1,0,0,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,
 1,1,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,1,1,
 0,1,0,1,1,0,0,1,1,1,1,1,0,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,0,1,
 1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,1,1,1,0,0,1,1,1,1,1,1,
 1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,0,1,1,0,1,1,1,1,1,
 1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,
 1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,
 1,0,1,1,1,0,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,1,
 0,0,1,1,0,1,1,1,1,0,1,1,1,1,1,0,0,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,
 1,1,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,1,1,
 0,1,0,1,1,0,0,1,1,1,1,1,0,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,0,1,
 1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,1,1,1,0,0,1,1,1,1,1,1,
 1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,0,1,1,0,1,1,1,1,1,
 1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,
 1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,
 1,0,1,1,1,0,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,1,
 0,0};

inline bool square( int64 x ) {
    // Quickfail
    if( x < 0 || (x&2) || ((x & 7) == 5) || ((x & 11) == 8) )
        return false;
    if( x == 0 )
        return true;

    // Check mod 255 = 3 * 5 * 17, for fun
    int64 y = x;
    y = (y & 4294967295LL) + (y >> 32);
    y = (y & 65535) + (y >> 16);
    y = (y & 255) + ((y >> 8) & 255) + (y >> 16);
    if( bad255[y] )
        return false;

    // Divide out powers of 4 using binary search
    if((x & 4294967295LL) == 0)
        x >>= 32;
    if((x & 65535) == 0)
        x >>= 16;
    if((x & 255) == 0)
        x >>= 8;
    if((x & 15) == 0)
        x >>= 4;
    if((x & 3) == 0)
        x >>= 2;

    if((x & 7) != 1)
        return false;

    // Compute sqrt using something like Hensel's lemma
    int64 r, t, z;
    r = start[(x >> 3) & 1023];
    do {
        z = x - r * r;
        if( z == 0 )
            return true;
        if( z < 0 )
            return false;
        t = z & (-z);
        r += (z & t) >> 1;
        if( r > (t  >> 1) )
            r = t - r;
    } while( t <= (1LL << 33) );

    return false;
}
345
Ishan Sharma

I was trying various methods to implement a program that gives the digits of pi sequentially. I tried the Taylor series method, but it proved to converge extremely slowly (when I compared my result with the online values after some time). Anyway, I am trying better algorithms.

So, while writing the program I got stuck on a problem, as with all algorithms: How do I know that the n digits that I've calculated are accurate?

Answered By: Mysticial ( 963)

Since I'm the current world record holder for the most digits of pi, I'll add my two cents:

Unless you're actually setting a new world record, the common practice is just to verify the computed digits against the known values. So that's simple enough.

In fact, I have a webpage that lists snippets of digits for the purpose of verifying computations against them: http://www.numberworld.org/digits/Pi/


But when you get into world-record territory, there's nothing to compare against.

Historically, the standard approach for verifying that computed digits are correct is to recompute the digits using a second algorithm. So if either computation goes bad, the digits at the end won't match.

This typically more than doubles the amount of time needed (since the second algorithm is usually slower). But it's the only way to verify the computed digits once you've wandered into the uncharted territory of never-before-computed digits and a new world record.


Back in the days when supercomputers were setting the records, two different AGM algorithms were commonly used: the Gauss-Legendre (Brent-Salamin) algorithm and the Borwein algorithm.

These are both O(N log(N)^2) algorithms that were fairly easy to implement.

However, nowadays, things are a bit different. In the last three world records, instead of performing two computations, we performed only one computation using the fastest known formula (Chudnovsky Formula):

[Image: the Chudnovsky formula]
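
For reference, the standard statement of the Chudnovsky formula is:

    \frac{1}{\pi} = 12 \sum_{k=0}^{\infty} \frac{(-1)^k\,(6k)!\,(13591409 + 545140134k)}{(3k)!\,(k!)^3\,640320^{3k + 3/2}}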

This algorithm is much harder to implement, but it is a lot faster than the AGM algorithms.

Then we verify the binary digits using the BBP formulas for digit extraction.

[Image: the BBP digit-extraction formula]
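
For reference, the original BBP formula is:

    \pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)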

This formula allows you to compute arbitrary binary digits without computing all the digits before them, so it is used to verify the last few computed binary digits. It is therefore much faster than a full computation.

The advantage of this is:

  1. Only one expensive computation is needed.

The disadvantage is:

  1. An implementation of the Bailey–Borwein–Plouffe (BBP) formula is needed.
  2. An additional step is needed to verify the radix conversion from binary to decimal.

I've glossed over some details of why verifying the last few digits implies that all the digits are correct. But it is easy to see this since any computation error will propagate to the last digits.


Now this last step (verifying the conversion) is actually fairly important. One of the previous world record holders actually called us out on this because, initially, I didn't give a sufficient description of how it worked.

So I've pulled this snippet from my blog:

N = # of decimal digits desired
p = 64-bit prime number

[Formula image from the blog post]

Compute A using base 10 arithmetic and B using binary arithmetic.

[Formula image from the blog post]

If A = B, then with "extremely high probability", the conversion is correct.


For further reading, see my blog post Pi - 5 Trillion Digits.

258
polygenelubricants

I had an interesting job interview experience a while back. The question started really easy:

Q1: We have a bag containing numbers 1, 2, 3, …, 100. Each number appears exactly once, so there are 100 numbers. Now one number is randomly picked out of the bag. Find the missing number.

I've heard this interview question before, of course, so I very quickly answered along the lines of:

A1: Well, the sum of the numbers 1 + 2 + 3 + … + N is (N+1)(N/2) (see Wikipedia: sum of arithmetic series). For N = 100, the sum is 5050.

Thus, if all numbers are present in the bag, the sum will be exactly 5050. Since one number is missing, the sum will be less than this, and the difference is that number. So we can find that missing number in O(N) time and O(1) space.

At this point I thought I had done well, but all of a sudden the question took an unexpected turn:

Q2: That is correct, but now how would you do this if TWO numbers are missing?

I had never seen/heard/considered this variation before, so I panicked and couldn't answer the question. The interviewer insisted on knowing my thought process, so I mentioned that perhaps we can get more information by comparing against the expected product, or perhaps doing a second pass after having gathered some information from the first pass, etc, but I really was just shooting in the dark rather than actually having a clear path to the solution.

The interviewer did try to encourage me by saying that having a second equation is indeed one way to solve the problem. At this point I was kind of upset (for not knowing the answer beforehand), and asked if this is a general (read: "useful") programming technique, or if it's just a trick/gotcha answer.

The interviewer's answer surprised me: you can generalize the technique to find 3 missing numbers. In fact, you can generalize it to find k missing numbers.

Qk: If exactly k numbers are missing from the bag, how would you find them efficiently?

This was a few months ago, and I still couldn't figure out what this technique is. Obviously there's an Ω(N) time lower bound since we must scan all the numbers at least once, but the interviewer insisted that the TIME and SPACE complexity of the solving technique (minus the O(N) time input scan) is defined in terms of k, not N.

So the question here is simple:

  • How would you solve Q2?
  • How would you solve Q3?
  • How would you solve Qk?

Clarifications

  • Generally there are N numbers from 1..N, not just 1..100.
  • I'm not looking for the obvious set-based solution, e.g. using a bit set, encoding the presence/absence of each number by the value of a designated bit, and therefore using O(N) bits of additional space. We can't afford any additional space proportional to N.
  • I'm also not looking for the obvious sort-first approach. This and the set-based approach are worth mentioning in an interview (they are easy to implement, and depending on N, can be very practical). I'm looking for the Holy Grail solution (which may or may not be practical to implement, but has the desired asymptotic characteristics nevertheless).

So again, of course you must scan the input in O(N), but you can only capture a small amount of information (defined in terms of k, not N), and must then find the k missing numbers somehow.

Answered By: sdcvvc ( 139)

Here's a summary of Dimitris Andreou's link.

Remember the sums of the i-th powers, for i = 1, 2, ..., k. This reduces the problem to solving the system of equations

a_1 + a_2 + ... + a_k = b_1

a_1^2 + a_2^2 + ... + a_k^2 = b_2

...

a_1^k + a_2^k + ... + a_k^k = b_k

Using Newton's identities, knowing the b_i allows one to compute

c_1 = a_1 + a_2 + ... + a_k

c_2 = a_1 a_2 + a_1 a_3 + ... + a_{k-1} a_k

...

c_k = a_1 a_2 ... a_k

If you expand the polynomial (x - a_1)...(x - a_k), its coefficients will be exactly c_1, ..., c_k (up to sign) - see Viète's formulas. Since every polynomial factors uniquely (the ring of polynomials is a Euclidean domain), this means the a_i are uniquely determined, up to permutation.

This completes the proof that remembering the power sums is enough to recover the numbers. For constant k, this is a good approach.

However, when k is varying, the direct approach of computing c_1, ..., c_k is prohibitively expensive, since e.g. c_k is the product of all missing numbers, which has magnitude roughly n!/(n-k)!. To overcome this, perform the computations in the field Z_q, where q is a prime such that n <= q < 2n (one exists by Bertrand's postulate). The proof doesn't need to be changed, since the formulas still hold and factorization of polynomials is still unique. You also need an algorithm for factorization over finite fields, for example Berlekamp's or Cantor-Zassenhaus.

High level pseudocode for constant k:

  • Compute the i-th powers of the given numbers
  • Subtract to get the sums of the i-th powers of the unknown numbers. Call these sums b_i.
  • Use Newton's identities to compute the coefficients from the b_i; call them c_i. Basically, c_1 = b_1 and c_2 = (c_1 b_1 - b_2)/2; see Wikipedia for the exact formulas
  • Factor the polynomial x^k - c_1 x^(k-1) + c_2 x^(k-2) - ... ± c_k.
  • The roots of the polynomial are the needed numbers a_1, ..., a_k.

For varying k, find a prime n <= q < 2n using e.g. Miller-Rabin, and perform the steps with all numbers reduced modulo q.

As Heinrich Apfelmus commented, instead of a prime q you can use q = 2^⌈log n⌉ and perform the arithmetic in a finite field.
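
As a concrete illustration of the recipe for k = 2 (not from the original answer; names are illustrative): b_1 and b_2 are the sum and the sum of squares of the two missing numbers, c_1 = b_1, c_2 = (b_1^2 - b_2)/2, and the missing numbers are the roots of x^2 - c_1 x + c_2.

import math

def find_two_missing(present, n):
    # 'present' holds the n-2 numbers still in the bag, drawn from 1..n
    b1 = n * (n + 1) // 2 - sum(present)                                # a1 + a2
    b2 = n * (n + 1) * (2 * n + 1) // 6 - sum(x * x for x in present)   # a1^2 + a2^2
    c1, c2 = b1, (b1 * b1 - b2) // 2                                    # Newton's identities for k = 2
    d = math.isqrt(c1 * c1 - 4 * c2)                                    # the discriminant is a perfect square here
    return (c1 - d) // 2, (c1 + d) // 2

print(find_two_missing([x for x in range(1, 101) if x not in (17, 84)], 100))  # (17, 84)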

189
configurator

Today I needed a simple algorithm for checking if a number is a power of 2.

The algorithm needs to be:

  1. Simple
  2. Correct for any ulong value.

I came up with this simple algorithm:

private bool IsPowerOfTwo(ulong number)
{
    if (number == 0)
        return false;

    for (ulong power = 1; power > 0; power = power << 1)
    {
        // this loop generates the powers of 2 by shifting; after the last
        // shift the value wraps to 0 (from binary 1000...0000 to
        // 0000...0000) and the loop condition fails, so the loop ends

        if (power == number)
            return true;
        if (power > number)
            return false;
    }
    return false;
}

But then I thought, how about checking whether log2(x) is an exact whole number? But when I checked for 2^63 + 1, Math.Log returned exactly 63 because of rounding. So I checked whether 2 to the power of 63 equals the original number, and it does, because the calculation is done in doubles rather than in exact numbers:

private bool IsPowerOfTwo_2(ulong number)
{
    double log = Math.Log(number, 2);
    double pow = Math.Pow(2, Math.Round(log));
    return pow == number;
}

So this incorrectly returned true for 9223372036854775809 (which is 2^63 + 1, not a power of two).

Does anyone have any suggestion for a better algorithm?

Answered By: Greg Hewgill ( 451)

There's a simple trick for this problem:

bool IsPowerOfTwo(ulong x)
{
    return (x & (x - 1)) == 0;
}

For completeness, zero is not a power of two. If you want to take that edge case into account, here's how:

bool IsPowerOfTwo(ulong x)
{
    return (x != 0) && ((x & (x - 1)) == 0);
}

Explanation

First and foremost, here is the definition of the bitwise binary & operator, from MSDN:

Binary & operators are predefined for the integral types and bool. For integral types, & computes the logical bitwise AND of its operands. For bool operands, & computes the logical AND of its operands; that is, the result is true if and only if both its operands are true.

Now let's take a look at how this all plays out:

The function returns a boolean (true / false) and accepts one incoming parameter of type unsigned long (x, in this case). Let us, for the sake of simplicity, assume that someone has passed the value 4 and called the function like so:

bool b = IsPowerOfTwo(4)

Now we replace each occurrence of x with 4:

return (4 != 0) && ((4 & (4-1)) == 0);

Well we already know that 4 != 0 evals to true, so far so good. But what about:

((4 & (4-1)) == 0)

This translates to this of course:

((4 & 3) == 0)

But what exactly is 4&3?

The binary representation of 4 is 100 and the binary representation of 3 is 011 (remember, the & operator works on the binary representations of these numbers). So we have:

100 = 4
011 = 3

Imagine these values being stacked up, much like elementary addition. The & operator says that if both bits are equal to 1 then the result is 1; otherwise it is 0. So 1 & 1 = 1, 1 & 0 = 0, 0 & 0 = 0, and 0 & 1 = 0. So we do the math:

100
011
----
000

The result is simply 0. So we go back and look at what our return statement now translates to:

return (4 != 0) && ((4 & 3) == 0);

Which translates now to:

return true && (0 == 0);
return true && true;

We all know that true && true is simply true, and this shows that for our example, 4 is a power of 2.
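
For reference, a brute-force check of the trick over a small range (not from the original answer):

def is_power_of_two(x):
    return x != 0 and (x & (x - 1)) == 0

powers = {1 << k for k in range(17)}   # 1, 2, 4, ..., 65536
assert all(is_power_of_two(x) == (x in powers) for x in range((1 << 16) + 1))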

Given two date ranges, what is the simplest or most efficient way to determine whether the two date ranges overlap?

As an example, suppose we have ranges denoted by DateTime variables StartDate1 to EndDate1 and StartDate2 to EndDate2.

Answered By: Charles Bretana ( 312)

Let ConditionA Mean DateRange A Completely After DateRange B (True if StartA > EndB)

Let ConditionB Mean DateRange A Completely Before DateRange B (True if EndA < StartB)

Then Overlap exists if Neither A Nor B is true ( If one range is neither completely after the other, nor completely before the other, then they must overlap)

Now De Morgan's law says that:

Not (A Or B) <=> Not A And Not B

Which means the ranges overlap exactly when (StartA <= EndB) and (EndA >= StartB)

NOTE: This includes conditions where the edges overlap exactly. If you wish to exclude that, change the >= operators to >, and <= to <
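
A direct translation of that condition into code might look like this (a sketch, not from the original answer; names are illustrative):

from datetime import date

def ranges_overlap(start_a, end_a, start_b, end_b):
    # Overlap exists when neither range is completely before or after the other.
    # Touching endpoints count as overlap; use strict < and > to exclude them.
    return start_a <= end_b and end_a >= start_b

print(ranges_overlap(date(2024, 1, 1), date(2024, 1, 31),
                     date(2024, 1, 15), date(2024, 2, 15)))  # True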

What is the highest integer value a Number can hold in JavaScript without losing precision? Is this defined by the language? Is there a defined maximum? Is it different in different browsers?

Answered By: Jimmy ( 153)

+/- 9007199254740992

ECMA Section 8.5 - Numbers

Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0).

They are 64-bit floating point values; the largest exact integral value is 2^53, or 9007199254740992.

Note that the bitwise operators and shift operators operate on 32-bit ints.


Test it out!

var x = 9007199254740992;
var y = -x;
x == x + 1; // true !
y == y - 1; // also true !
// Arithmetic operators work, but bitwise/shifts only operate on int32:
x / 2;      // 4503599627370496
x >> 1;     // 0
x | 1;      // 1
151
c#, math
Karsten

I want to ensure that a division of integers is always rounded up if necessary. Is there a better way than this? There is a lot of casting going on. :-)

(int)Math.Ceiling((double)myInt1 / myInt2)
Answered By: Eric Lippert ( 542)

UPDATE: This question was the subject of my blog in January 2013. Thanks for the great question!


Getting integer arithmetic correct is hard. As has been demonstrated amply thus far, the moment you try to do a "clever" trick, odds are good that you've made a mistake. And when a flaw is found, changing the code to fix the flaw without considering whether the fix breaks something else is not a good problem-solving technique. So far we've had I think five different incorrect integer arithmetic solutions to this completely not-particularly-difficult problem posted.

The right way to approach integer arithmetic problems -- that is, the way that increases the likelihood of getting the answer right the first time -- is to approach the problem carefully, solve it one step at a time, and use good engineering principles in doing so.

Start by reading the specification for what you're trying to replace. The specification for integer division clearly states:

  1. The division rounds the result towards zero

  2. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs

  3. If the left operand is the smallest representable int and the right operand is –1, an overflow occurs. [...] it is implementation-defined as to whether [an ArithmeticException] is thrown or the overflow goes unreported with the resulting value being that of the left operand.

  4. If the value of the right operand is zero, a System.DivideByZeroException is thrown.

What we want is an integer division function which computes the quotient but rounds the result always upwards, not always towards zero.

So write a specification for that function. Our function int DivRoundUp(int dividend, int divisor) must have behaviour defined for every possible input. That undefined behaviour is deeply worrying, so let's eliminate it. We'll say that our operation has this specification:

  1. operation throws if divisor is zero

  2. operation throws if dividend is int.minval and divisor is -1

  3. if there is no remainder -- division is 'even' -- then the return value is the integral quotient

  4. Otherwise it returns the smallest integer that is greater than the quotient, that is, it always rounds up.

Now we have a specification, so we know we can come up with a testable design. Suppose we add an additional design criterion that the problem be solved solely with integer arithmetic, rather than computing the quotient as a double, since the "double" solution has been explicitly rejected in the problem statement.

So what must we compute? Clearly, to meet our spec while remaining solely in integer arithmetic, we need to know three facts. First, what was the integer quotient? Second, was the division free of remainder? And third, if not, was the integer quotient computed by rounding up or down?

Now that we have a specification and a design, we can start writing code.

public static int DivRoundUp(int dividend, int divisor)
{
  if (divisor == 0 ) throw ...
  if (divisor == -1 && dividend == Int32.MinValue) throw ...
  int roundedTowardsZeroQuotient = dividend / divisor;
  bool dividedEvenly = (dividend % divisor) == 0;
  if (dividedEvenly) 
    return roundedTowardsZeroQuotient;

  // At this point we know that divisor was not zero 
  // (because we would have thrown) and we know that 
  // dividend was not zero (because there would have been no remainder)
  // Therefore both are non-zero.  Either they are of the same sign, 
  // or opposite signs. If they're of opposite sign then we rounded 
  // UP towards zero so we're done. If they're of the same sign then 
  // we rounded DOWN towards zero, so we need to add one.

  bool wasRoundedDown = ((divisor > 0) == (dividend > 0));
  if (wasRoundedDown) 
    return roundedTowardsZeroQuotient + 1;
  else
    return roundedTowardsZeroQuotient;
}

Is this clever? No. Beautiful? No. Short? No. Correct according to the specification? I believe so, but I have not fully tested it. It looks pretty good though.

We're professionals here; use good engineering practices. Research your tools, specify the desired behaviour, consider error cases first, and write the code to emphasize its obvious correctness. And when you find a bug, consider whether your algorithm is deeply flawed to begin with before you just randomly start swapping the directions of comparisons around and break stuff that already works.
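
For comparison (not part of the answer above): in a language with unbounded integers and floor division, such as Python, the same always-round-up behaviour can be expressed by negating a floored division of the negated dividend.

def div_round_up(dividend, divisor):
    # Python's // floors toward negative infinity, so -(-a // b) rounds the
    # true quotient up. Division by zero raises ZeroDivisionError, and there
    # is no Int32.MinValue overflow case because Python ints are unbounded.
    return -(-dividend // divisor)

assert div_round_up(7, 2) == 4
assert div_round_up(-7, 2) == -3
assert div_round_up(8, 2) == 4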