Top Theory Questions

415
James Hall

So I'm using an app that stores images heavily in the DB. What's your take on this? I'm more inclined to store the location in the filesystem than to store the image directly in the DB.

What do you think are the pros/cons?

Answered By: Mark Harrison ( 352)

I'm in charge of some applications that manage many TB of images. We've found storing file paths in the database to be best; a minimal sketch of that approach follows the list below.

There are a couple of issues:

  • database storage is usually more expensive than file system storage
  • you can super-accelerate file system access with standard off the shelf products
    • for example, many web servers use the operating system's sendfile() system call to asynchronously send a file directly from the file system to the network interface. Images stored in a database don't benefit from this optimization.
  • things like web servers, etc, need no special coding or processing to access images in the file system
  • databases win out where transactional integrity between the image and its metadata is important.
    • it is more complex to manage integrity between db metadata and file system data
    • it is difficult (within the context of a web application) to guarantee data has been flushed to disk on the filesystem
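
To make the file-path approach concrete, here is a minimal sketch (assuming Python with SQLite; the table, paths, and column names are made up for illustration): the image bytes go to the filesystem, and only the path plus the metadata go into the database. Note that the file write and the commit are not atomic, which is exactly the integrity caveat in the last two bullets.

import os
import sqlite3

IMAGE_ROOT = "/var/app/images"   # hypothetical storage root

conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS images (
                    id     INTEGER PRIMARY KEY,
                    path   TEXT NOT NULL,
                    width  INTEGER,
                    height INTEGER)""")

def save_image(image_id, data, width, height):
    # The image bytes live on the filesystem...
    path = os.path.join(IMAGE_ROOT, "%d.jpg" % image_id)
    with open(path, "wb") as f:
        f.write(data)
    # ...while the database only stores the path and the metadata.
    conn.execute("INSERT INTO images (id, path, width, height) VALUES (?, ?, ?, ?)",
                 (image_id, path, width, height))
    conn.commit()
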
198
Jason Baker

I'm asking more about what this means to my code. I understand the concepts mathematically; I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow just because there are more items. And an O(n) operation would mean that you perform a set of operations on each element. Could somebody fill in the blanks here?

  • Like what exactly would an O(n^2) operation do?
  • And what the heck does it mean if an operation is O(n log(n))?
  • And does somebody have to smoke crack to write an O(x!)?
Answered By: Don Neufeld ( 191)

One way of thinking about it is this:

O(N^2) means for every element, you're doing something with every other element, such as comparing them. Bubble sort is an example of this.
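
As an illustration, here is a minimal bubble sort sketch in Python (the function name is just for the example): the two nested loops over the input are what give it its O(N^2) character.

def bubble_sort(items):
    n = len(items)
    for i in range(n):                  # one pass per element
        for j in range(n - 1 - i):      # compare against the remaining elements
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items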

O(N log N) means for every element, you're doing something that only needs to look at log N of the elements. This is usually because you know something about the elements that lets you make an efficient choice. Most efficient sorts are an example of this, such as merge sort.
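
For example, a small merge sort sketch (illustrative only): the list is halved about log N times, and each level does O(N) work merging, giving O(N log N) overall.

def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # halving gives the log N levels
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # O(N) merge work per level
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]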

O(N!) means do something for all possible permutations of the N elements. Traveling salesman is an example of this, where there are N! ways to visit the nodes, and the brute force solution is to look at the total cost of every possible permutation to find the optimal one.
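
A brute-force traveling-salesman sketch (the names and the distance matrix are illustrative) makes the N! growth visible: itertools.permutations enumerates every ordering of the cities.

from itertools import permutations

def brute_force_tsp(dist):
    # dist[i][j] is the distance from city i to city j
    cities = range(len(dist))
    best_route, best_cost = None, float("inf")
    for route in permutations(cities):              # N! candidate routes
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        cost += dist[route[-1]][route[0]]           # return to the start
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost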

179
Shalmanese

Are there any O(1/n) algorithms?

Or anything else which is less than O(1)?

Answered By: Konrad Rudolph ( 186)

This question isn't as stupid as it might seem. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of the Big O notation:

f(x) ∈ O(g(x)) as x → ∞ if and only if there exist constants C > 0 and x₀ such that |f(x)| ≤ C·|g(x)| for all x > x₀.

Now you can easily set g(x) = 1/x … it's obvious that the above definition still holds for some f.

For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

from time import sleep

def get_faster(lst):
    # sleep for a time inversely proportional to the input size
    how_long = (1 / len(lst)) * 100000
    sleep(how_long)

Clearly, this function spends less time as the input size grows … at least until some limit enforced by the hardware (the precision of the numbers, the minimum time that sleep can wait, the time to process arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).

But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
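
To see where that behaviour comes from, here is a minimal sketch of Horspool's search in Python (illustrative, not the original formulation): a longer pattern allows bigger skips through the text, so the expected number of character comparisons per text position drops.

def horspool_search(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Bad-character table: how far we may safely shift when a window fails.
    # Longer patterns generally produce larger shifts.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    pos = 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            return pos                          # match found
        pos += shift.get(text[pos + m - 1], m)  # skip ahead
    return -1                                   # no match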

A y-combinator is a comp-sci concept from the "functional" side of things. Most programmers don't know much at all about it, if they've even heard of it.

What is a y-combinator? How does it work? What is it good for? Is it useful in procedural languages?

Answered By: Nicholas Mancuso ( 85)

If you're ready for a long read, Mike Vanier has a great explanation. Long story short, it allows you to implement recursion in a language that doesn't necessarily support it natively.
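
To give a feel for that, here is a minimal sketch in Python of the applicative-order variant (often called the Z combinator); the names Y and fact are just for the example. It lets an anonymous function call itself without ever being bound to a name.

# Applicative-order Y (Z) combinator
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion without a named recursive function:
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120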

For a person without a comp-sci background, what is a lambda in the world of Computer Science?

Answered By: mk. ( 205)

Lambda comes from the Lambda Calculus and refers to anonymous functions in programming.

Why is this cool? It allows you to write quick throw away functions without naming them. It also provides a nice way to write closures. With that power you can do things like this.

Python

def adder(x):
    return lambda y: x + y

add5 = adder(5)
add5(1)  # 6

JavaScript

var adder = function (x) {
    return function (y) {
        return x + y;
    };
};
var add5 = adder(5);
add5(1); // 6

Scheme

(define adder
  (lambda (x)
    (lambda (y)
      (+ x y))))

(define add5
  (adder 5))

(add5 1) ; => 6

As you can see from the Python and JavaScript snippets, the function adder takes an argument x and returns an anonymous function, or lambda, that takes another argument y. That anonymous function lets you create functions from functions. This is a simple example, but it should convey the power that lambdas and closures have.