Top memory-management Questions

1548
mattshane

Programming language books usually explain that value types are created on the stack, and reference types are created on the heap, without really explaining what these two things are. With my only programming experience being in high-level languages, I haven't read a clear explanation of this. I mean, I understand what a stack is, but where and what are they (relative to the physical memory of a real computer)?

  • To what extent are they controlled by the OS or language runtime?
  • What is their scope?
  • What determines the size of each of them?
  • What makes one faster?
Answered By: Jeff Hill ( 1017)

The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.

The heap is memory set aside for dynamic allocation. Unlike the stack, there's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there's typically only one heap for the application (although it isn't uncommon to have multiple heaps for different types of allocation).
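
As a minimal sketch of the difference (C++ here for concreteness; the function and variable names are illustrative only):

#include <memory>

void example()
{
    int local = 42;              // stack: reserved in example()'s frame,
                                 // released automatically when it returns
    int *dynamic = new int(42);  // heap: lives until explicitly freed
    delete dynamic;              // forgetting this line would leak the block

    // A smart pointer ties a heap block's lifetime to a stack variable,
    // so the release becomes automatic again (C++11 and later).
    std::unique_ptr<int> managed(new int(42));
}   // 'local' and 'managed' are released here, in LIFO order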

To answer your questions directly:


To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.


What is their scope?

The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.


What determines the size of each of them?

The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).


What makes one faster?

The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or free. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor's cache, making it very fast.

Here is the extract from the program in question. The matrix img[][] has the size SIZE×SIZE and is initialized with:

img[j][i] = 2 * j + i

Then you make a matrix res[][], where each field is set to the average of the 3×3 neighborhood around the corresponding field in the img matrix. The border is left at 0 for simplicity.

for(i=1;i<SIZE-1;i++) 
    for(j=1;j<SIZE-1;j++) {
        res[j][i]=0;
        for(k=-1;k<2;k++) 
            for(l=-1;l<2;l++) 
                res[j][i] += img[j+l][i+k];
        res[j][i] /= 9;
    }

That's all there is to the program. For completeness' sake, here is what comes before; no code comes after. As you can see, it's just initialization.

#define SIZE 8192
float img[SIZE][SIZE]; // input image
float res[SIZE][SIZE]; //result of mean filter
int i,j,k,l;
for(i=0;i<SIZE;i++) 
    for(j=0;j<SIZE;j++) 
        img[j][i] = (2*j+i)%8196;

Basically, this program is slow when SIZE is a multiple of 2048. For example, the execution times:

SIZE = 8191: 3.44 secs
SIZE = 8192: 7.20 secs
SIZE = 8193: 3.18 secs

The compiler is GCC. From what I know, this is because of memory management, but I don't really know too much about that subject, which is why I'm asking here.

Also how to fix this would be nice, but if someone could explain these execution times I'd already be happy enough.

I already know of malloc/free, but the problem is not the amount of memory used, merely the execution time, so I don't know how that would help.

Answered By: Mysticial ( 600)

The difference is caused by the same super-alignment issue discussed in several related questions.

But that's only because there's one other problem with the code.

Starting from the original loop:

for(i=1;i<SIZE-1;i++) 
    for(j=1;j<SIZE-1;j++) {
        res[j][i]=0;
        for(k=-1;k<2;k++) 
            for(l=-1;l<2;l++) 
                res[j][i] += img[j+l][i+k];
        res[j][i] /= 9;
    }

First notice that the two inner loops are trivial. They can be unrolled as follows:

for(i=1;i<SIZE-1;i++) {
    for(j=1;j<SIZE-1;j++) {
        res[j][i]=0;
        res[j][i] += img[j-1][i-1];
        res[j][i] += img[j  ][i-1];
        res[j][i] += img[j+1][i-1];
        res[j][i] += img[j-1][i  ];
        res[j][i] += img[j  ][i  ];
        res[j][i] += img[j+1][i  ];
        res[j][i] += img[j-1][i+1];
        res[j][i] += img[j  ][i+1];
        res[j][i] += img[j+1][i+1];
        res[j][i] /= 9;
    }
}

That leaves the two outer loops we're interested in.

Now we can see the problem is the same in this question: Why does the order of the loops affect performance when iterating over a 2D array?

You are iterating the matrix column-wise instead of row-wise.
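
To see why that matters, recall that C stores two-dimensional arrays in row-major order. A sketch of the address arithmetic, reusing the question's names:

#include <stddef.h>

#define SIZE 8192
float img[SIZE][SIZE];

/* img[j][i] lives (j*SIZE + i) floats from the start of the array. When the
 * inner loop increments j, consecutive accesses are SIZE*sizeof(float) =
 * 32768 bytes apart and each lands on a fresh cache line; when the inner
 * loop increments i, they are 4 bytes apart and stay on the same line. */
float *address_of(size_t j, size_t i)
{
    return &img[0][0] + (j * (size_t)SIZE + i);  /* same element as &img[j][i] */
}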


To solve this problem, you should interchange the two loops.

for(j=1;j<SIZE-1;j++) {
    for(i=1;i<SIZE-1;i++) {
        res[j][i]=0;
        res[j][i] += img[j-1][i-1];
        res[j][i] += img[j  ][i-1];
        res[j][i] += img[j+1][i-1];
        res[j][i] += img[j-1][i  ];
        res[j][i] += img[j  ][i  ];
        res[j][i] += img[j+1][i  ];
        res[j][i] += img[j-1][i+1];
        res[j][i] += img[j  ][i+1];
        res[j][i] += img[j+1][i+1];
        res[j][i] /= 9;
    }
}

This eliminates the non-sequential access completely, so you no longer get random slow-downs at large powers of two.


Core i7 920 @ 3.5 GHz

Original code:

8191: 1.499 seconds
8192: 2.122 seconds
8193: 1.582 seconds

Interchanged Outer-Loops:

8191: 0.376 seconds
8192: 0.357 seconds
8193: 0.351 seconds

I stumbled upon the Stack Overflow question Memory leak with std::string when using std::list<std::string>, and one of the comments says this:

Stop using new so much. I can't see any reason you used new anywhere you did. You can create objects by value in C++ and it's one of the huge advantages to using the language. You do not have to allocate everything on the heap. Stop thinking like a Java programmer.

I'm not really sure what he means by that. Could anyone clarify why objects should be created by value in C++ as often as possible, and what difference it makes internally? (Or if I misinterpreted the answer, feel free to clarify what was meant.)

Answered By: André Caron ( 386)

There are two widely used memory allocation techniques: automatic allocation and dynamic allocation. Commonly, there is a corresponding region of memory for each: the stack and the heap.

Stack

The stack always allocates memory in a sequential fashion. It can do so because it requires you to release the memory in the reverse order (Last-In, First-Out: LIFO). This is the memory allocation technique for local variables in many programming languages. It is very, very fast because it requires minimal bookkeeping and the next address to allocate is implicit.

In C++, this is called automatic storage because the storage is claimed automatically at the end of scope. As soon as execution of the current code block (delimited by {}) is completed, memory for all variables in that block is automatically reclaimed. This is also the moment when destructors are invoked to clean up resources.
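
As a tiny sketch of that (the Noisy type is made up for illustration):

#include <cstdio>

struct Noisy {
    ~Noisy() { std::puts("destroyed"); }
};

void scope_demo()
{
    {
        Noisy n;              // automatic storage: alive until the closing brace
    }                         // n's destructor runs here...
    std::puts("after block"); // ...before this line executes
}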

Heap

The heap allows for a more flexible memory allocation mode. Bookkeeping is more complex and allocation is slower. Because there is no implicit release point, you must release the memory manually, using delete or delete[] (free in C). However, the absence of an implicit release point is the key to the heap's flexibility.

Reasons to use dynamic allocation

Even if using the heap is slower and potentially leads to memory leaks or memory fragmentation, there are perfectly good use cases for dynamic allocation, as it's less limited.

Two key reasons to use dynamic allocation:

  • You don't know how much memory you need at compile time. For instance, when reading a text file into a string, you usually don't know the file's size, so you can't decide how much memory to allocate until you run the program.

  • You want to allocate memory which will persist after leaving the current block. For instance, you may want to write a function string readfile(string path) that returns the contents of a file. In this case, even if the stack could hold the entire file contents, you could not return from the function and keep the allocated memory block. A sketch of such a function follows this list.
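
A rough sketch of readfile() for that second point (error handling omitted, and the whole file is assumed to fit in memory):

#include <fstream>
#include <sstream>
#include <string>

// The returned string's buffer is heap-allocated, so the data outlives
// readfile()'s stack frame, yet the caller never touches new/delete.
std::string readfile(const std::string& path)
{
    std::ifstream in(path);
    std::ostringstream contents;
    contents << in.rdbuf();
    return contents.str();
}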

Why dynamic allocation is often unnecessary

In C++, there's a neat construct called a destructor. This mechanism allows you to manage resources by aligning the lifetime of the resource with the lifetime of a variable. It is commonly used to "wrap" resources into another object. std::string is a perfect example. This snippet:

int main ( int argc, char* argv[] )
{
    std::string program(argv[0]);
}

actually allocates a variable amount of memory. The std::string object allocates memory on the heap and releases it in its destructor. In this case, you did not need to manually manage any resources, yet you still got the benefits of dynamic memory allocation.

In particular, it implies that in this snippet:

int main ( int argc, char* argv[] )
{
    std::string * program = new std::string(argv[0]);
    delete program;
}

there is unneeded dynamic memory allocation. The program requires more typing (!) and introduces the risk of forgetting to deallocate the memory. It does this with no apparent benefit.

Why you should use automatic storage as often as possible

Basically, the last paragraph sums it up. Using automatic storage as often as possible makes your programs:

  • faster to type;
  • faster when run;
  • less prone to memory/resource leaks.

Bonus points

In the referenced question, there are additional concerns. In particular, the following class:

class Line {
public:
    Line();
    ~Line();
    std::string* mString;
};

Line::Line() {
    mString = new std::string("foo_bar");
}

Line::~Line() {
    //mString->clear(); // should not be necessary
    delete mString;
}

Is actually a lot more risky to use than the following one:

class Line {
public:
    Line();
    std::string mString;
};

Line::Line() {
    mString = "foo_bar";
    // note: there is a cleaner way to write this.
}
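
The "cleaner way" the comment alludes to is presumably a member initializer list, which constructs mString directly instead of default-constructing it and then assigning:

Line::Line() : mString("foo_bar") {}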

The reason is that std::string properly defines a copy constructor. Consider the following program:

int main ()
{
    Line l1;
    Line l2 = l1;
}

Using the original version, this program will likely crash, as it uses delete on the same string twice. Using the modified version, each Line instance will own its own string instance, each with its own memory and both will be released at the end of the program.
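
If the pointer member really were required, the conventional repair (a sketch, not the original answer's code) is to follow the rule of three and deep-copy in the copy constructor and copy assignment operator:

#include <string>

class Line {
public:
    Line() : mString(new std::string("foo_bar")) {}
    Line(const Line& other) : mString(new std::string(*other.mString)) {}
    Line& operator=(const Line& other) {
        *mString = *other.mString;  // reuse our own allocation; safe on self-assignment
        return *this;
    }
    ~Line() { delete mString; }
private:
    std::string* mString;
};

With this version, Line l2 = l1; copies the string contents rather than the pointer, so each destructor deletes its own allocation.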

Other notes

Extensive use of RAII is considered a best practice in C++ because of all the reasons above. However, there is an additional benefit which is not immediately obvious. Basically, it's better than the sum of its parts. The whole mechanism composes. It scales.

If you use the Line class as a building block:

 class Table
 {
      Line borders[4];
 };

Then

 int main ()
 {
     Table table;
 }

allocates 4 std::string instances, 4 Line instances, 1 Table instance, and all the strings' contents, and everything is freed automagically.

268
Andrea Baccega

I would like to know how I can find the memory used by my Android application, programmatically.

I hope there is a way to do it. Plus, I would like to understand how to get the free memory of the phone, too.

Answered By: hackbod ( 487)

Note that memory usage on modern operating systems like Linux is an extremely complicated and difficult-to-understand area. In fact, the chances of you actually correctly interpreting whatever numbers you get are extremely low. (Pretty much every time I look at memory usage numbers with other engineers, there is a long discussion about what they actually mean that results in only a vague conclusion.)

The first thing to do is probably to read the last part of this article, which discusses how memory is managed on Android:

http://android-developers.blogspot.com/2010/02/service-api-changes-starting-with.html

Now ActivityManager.getMemoryInfo() is our highest-level API for looking at overall memory usage. This is mostly there to help an application gauge how close the system is coming to having no more memory for background processes, and thus needing to start killing needed processes like services. For pure Java applications, this should be of little use, since the Java heap limit is there in part to keep one app from stressing the system to that point.

Going lower-level, you can use the Debug API to get raw kernel-level information about memory usage: http://developer.android.com/intl/de/reference/android/os/Debug.html#getMemoryInfo(android.os.Debug.MemoryInfo)

Note starting with 2.0 there is also an API, ActivityManager.getProcessMemoryInfo, to get this information about another process: http://developer.android.com/intl/de/reference/android/app/ActivityManager.html#getProcessMemoryInfo(int[])

This returns a low-level MemoryInfo structure with all of this data:

    /** The proportional set size for dalvik. */
    public int dalvikPss;
    /** The private dirty pages used by dalvik. */
    public int dalvikPrivateDirty;
    /** The shared dirty pages used by dalvik. */
    public int dalvikSharedDirty;

    /** The proportional set size for the native heap. */
    public int nativePss;
    /** The private dirty pages used by the native heap. */
    public int nativePrivateDirty;
    /** The shared dirty pages used by the native heap. */
    public int nativeSharedDirty;

    /** The proportional set size for everything else. */
    public int otherPss;
    /** The private dirty pages used by everything else. */
    public int otherPrivateDirty;
    /** The shared dirty pages used by everything else. */
    public int otherSharedDirty;

But as to what the difference is between "Pss", "PrivateDirty", and "SharedDirty"... well now the fun begins.

A lot of memory in Android (and Linux systems in general) is actually shared across multiple processes. So how much memory a process uses is really not clear. Add on top of that paging out to disk (let alone swap, which we don't use on Android) and it is even less clear.

Thus if you were to take all of the physical RAM actually mapped into each process, and add up all of the processes, you would probably end up with a number much greater than the actual total RAM.

The Pss number is a metric the kernel computes that takes into account memory sharing -- basically each page of RAM in a process is scaled by a ratio of the number of other processes also using that page. This way you can (in theory) add up the pss across all processes to see the total RAM they are using, and compare pss between processes to get a rough idea of their relative weight.
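
For example, under this scheme a 4 KB page mapped by four processes contributes 4 KB / 4 = 1 KB to each of their Pss values, while a page mapped by only one process counts fully (4 KB) against that process alone; summing Pss across all processes therefore counts each shared page exactly once.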

The other interesting metric here is PrivateDirty, which is basically the amount of RAM inside the process that cannot be paged to disk (it is not backed by the same data on disk) and is not shared with any other processes. Another way to look at this is as the RAM that will become available to the system when that process goes away (and will probably quickly be subsumed into caches and other uses).

That is pretty much it for the SDK APIs. However, there is more you can do as a developer with your device.

Using adb, there is a lot of information you can get about the memory use of a running system. A common one is the command "adb shell dumpsys meminfo", which will spit out a bunch of information about the memory use of each Java process, containing the above info as well as a variety of other things. You can also tack on the name or pid of a single process; for example, "adb shell dumpsys meminfo system" shows the system process:

** MEMINFO in pid 890 [system] **
                    native   dalvik    other    total
            size:    10940     7047      N/A    17987
       allocated:     8943     5516      N/A    14459
            free:      336     1531      N/A     1867
           (Pss):     4585     9282    11916    25783
  (shared dirty):     2184     3596      916     6696
    (priv dirty):     4504     5956     7456    17916

 Objects
           Views:      149        ViewRoots:        4
     AppContexts:       13       Activities:        0
          Assets:        4    AssetManagers:        4
   Local Binders:      141    Proxy Binders:      158
Death Recipients:       49
 OpenSSL Sockets:        0

 SQL
            heap:      205          dbFiles:        0
       numPagers:        0   inactivePageKB:        0
    activePageKB:        0

The top section is the main one: "size" is the total size in address space of a particular heap, "allocated" is the KB of actual allocations that heap thinks it has, "free" is the remaining KB the heap has free for additional allocations, and "pss" and "priv dirty" are the same as discussed before, specific to pages associated with each of the heaps.

If you just want to look at memory usage across all processes, you can use the command "adb shell procrank". Output of this on the same system looks like:

  PID      Vss      Rss      Pss      Uss  cmdline
  890   84456K   48668K   25850K   21284K  system_server
 1231   50748K   39088K   17587K   13792K  com.android.launcher2
  947   34488K   28528K   10834K    9308K  com.android.wallpaper
  987   26964K   26956K    8751K    7308K  com.google.process.gapps
  954   24300K   24296K    6249K    4824K  com.android.phone
  948   23020K   23016K    5864K    4748K  com.android.inputmethod.latin
  888   25728K   25724K    5774K    3668K  zygote
  977   24100K   24096K    5667K    4340K  android.process.acore
...
   59     336K     332K      99K      92K  /system/bin/installd
   60     396K     392K      93K      84K  /system/bin/keystore
   51     280K     276K      74K      68K  /system/bin/servicemanager
   54     256K     252K      69K      64K  /system/bin/debuggerd

Here the Vss and Rss columns are basically noise (these are the straightforward address space and RAM usage of a process, where if you add up the RAM usage across processes you get a ridiculously large number).

Pss is as we've seen before, and Uss is Priv Dirty.

Interesting thing to note here: Pss and Uss are slightly (or more than slightly) different from what we saw in meminfo. Why is that? Well, procrank uses a different kernel mechanism to collect its data than meminfo does, and they give slightly different results. Why is that? Honestly, I haven't a clue. I believe procrank may be the more accurate one... but really, this just reinforces the point: "take any memory info you get with a grain of salt; often a very large grain."

Finally, there is the command "adb shell cat /proc/meminfo", which gives a summary of the overall memory usage of the system. There is a lot of data here, and only the first few numbers are worth discussing (the remaining ones are understood by few people, and my questions to those few people about them often result in conflicting explanations):

MemTotal:         395144 kB
MemFree:          184936 kB
Buffers:             880 kB
Cached:            84104 kB
SwapCached:            0 kB

MemTotal is the total amount of memory available to the kernel and user space (often less than the actual physical RAM of the device, since some of that RAM is needed for the radio, DMA buffers, etc).

MemFree is the amount of RAM that is not being used at all. The number you see here is very high; typically on an Android system this would be only a few MB, since we try to use available memory to keep processes running.

Cached is the RAM being used for filesystem caches and other such things. Typical systems will need to have 20MB or so for this to avoid getting into bad paging states; the Android out-of-memory killer is tuned for a particular system to make sure that background processes are killed before they consume so much of the cached RAM that such paging results.

Possible Duplicate:
Returning the address of local or temporary variable

I have the following code.

#include <iostream>
using namespace std;

int * foo()
{
    int a = 5;
    return &a;
}

int main()
{
    int* p = foo();
    cout << *p;
    *p = 8;
    cout << *p;
}

And the code just runs, with no runtime exceptions!

The output was 5 8

How can it be? Isn't the memory of a local variable inaccessible outside its function?

Answered By: Eric Lippert ( 2532)

How can it be? Isn't the memory of a local variable inaccessible outside its function?

You rent a hotel room. You put a book in the top drawer of the bedside table and go to sleep. You check out the next morning, but "forget" to give back your key. You steal the key!

A week later, you return to the hotel, do not check in, sneak into your old room with your stolen key, and look in the drawer. Your book is still there. Astonishing!

How can that be? Isn't the contents of a hotel room drawer inaccessible if you haven't rented the room?

Well, obviously that scenario can happen in the real world no problem. There is no mysterious force that causes your book to disappear when you are no longer authorized to be in the room. Nor is there a mysterious force that prevents you from entering a room with a stolen key.

The hotel management is not required to remove your book. You didn't make a contract with them that said that if you leave stuff behind, they'll shred it for you. If you illegally re-enter your room with a stolen key to get it back, the hotel security staff is not required to catch you sneaking in. You didn't make a contract with them that said "if I try to sneak back into my room later, you are required to stop me." Rather, you signed a contract with them that said "I promise not to sneak back into my room later", a contract which you broke.

In this situation anything can happen. The book can be there -- you got lucky. Someone else's book can be there and yours could be in the hotel's furnace. Someone could be there right when you come in, tearing your book to pieces. The hotel could have removed the table and book entirely and replaced it with a wardrobe. The entire hotel could be just about to be torn down and replaced with a football stadium, and you are going to die in an explosion while you are sneaking around.

You don't know what is going to happen; when you checked out of the hotel and stole a key to illegally use later, you gave up the right to live in a predictable, safe world because you chose to break the rules of the system.

C++ is not a safe language. It will cheerfully allow you to break the rules of the system. If you try to do something illegal and foolish like going back into a room you're not authorized to be in and rummaging through a desk that might not even be there anymore, C++ is not going to stop you. Safer languages than C++ solve this problem by restricting your power -- by having much stricter control over keys, for example.
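
(For reference, two common ways to make the original snippet well-defined; a sketch, with the variant names made up for illustration:)

#include <iostream>
#include <memory>

// Option 1: return by value -- the int is copied out before foo's frame dies.
int foo_by_value() { int a = 5; return a; }

// Option 2: allocate on the heap so the object outlives the call;
// std::unique_ptr (C++11) frees it automatically when the caller is done.
std::unique_ptr<int> foo_on_heap() { return std::unique_ptr<int>(new int(5)); }

int main()
{
    std::cout << foo_by_value() << "\n";
    std::unique_ptr<int> p = foo_on_heap();
    std::cout << *p << "\n";   // prints 5; no stolen hotel keys involved
}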

UPDATE

Holy goodness, this answer is getting a lot of attention. (I'm not sure why -- I considered it to be just a "fun" little analogy, but whatever.)

I thought it might be germane to update this a bit with a few more technical thoughts.

Compilers are in the business of generating code which manages the storage of the data manipulated by that program. There are lots of different ways of generating code to manage memory, but over time two basic techniques have become entrenched.

The first is to have some sort of "long lived" storage area where the "lifetime" of each byte in the storage -- that is, the period of time when it is validly associated with some program variable -- cannot be easily predicted ahead of time. The compiler generates calls into a "heap manager" that knows how to dynamically allocate storage when it is needed and reclaim it when it is no longer needed.

The second is to have some sort of "short lived" storage area where the lifetime of each byte in the storage is well known, and, in particular, lifetimes of storages follow a "nesting" pattern. That is, the allocation of the longest-lived of the short-lived variables strictly overlaps the allocations of shorter-lived variables that come after it.

Local variables follow the latter pattern; when a method is entered, its local variables come alive. When that method calls another method, the new method's local variables come alive. They'll be dead before the first method's local variables are dead. The relative order of the beginnings and endings of lifetimes of storages associated with local variables can be worked out ahead of time.

For this reason, local variables are usually generated as storage on a "stack" data structure, because a stack has the property that the first thing pushed on it is going to be the last thing popped off.

It's like the hotel decides to only rent out rooms sequentially, and you can't check out until everyone with a room number higher than you has checked out.

So let's think about the stack. In many operating systems you get one stack per thread and the stack is allocated to be a certain fixed size. When you call a method, stuff is pushed onto the stack. If you then pass a pointer to the stack back out of your method, as the original poster does here, that's just a pointer to the middle of some entirely valid million-byte memory block. In our analogy, you check out of the hotel; when you do, you just checked out of the highest-numbered occupied room. If no one else checks in after you, and you go back to your room illegally, all your stuff is guaranteed to still be there in this particular hotel.

We use stacks for temporary stores because they are really cheap and easy. An implementation of C++ is not required to use a stack for storage of locals; it could use the heap. It doesn't, because that would make the program slower.

An implementation of C++ is not required to leave the garbage you left on the stack untouched so that you can come back for it later illegally; it is perfectly legal for the compiler to generate code that turns back to zero everything in the "room" that you just vacated. It doesn't because again, that would be expensive.

An implementation of C++ is not required to ensure that when the stack logically shrinks, the addresses that used to be valid are still mapped into memory. The implementation is allowed to tell the operating system "we're done using this page of stack now. Until I say otherwise, issue an exception that destroys the process if anyone touches the previously-valid stack page". Again, implementations do not actually do that because it is slow and unnecessary.

Instead, implementations let you make mistakes and get away with it. Most of the time. Until one day something truly awful goes wrong and the process explodes.

This is problematic. There are a lot of rules and it is very easy to break them accidentally. I certainly have many times. And worse, the problem often only surfaces when memory is detected to be corrupt billions of nanoseconds after the corruption happened, when it is very hard to figure out who messed it up.

More memory-safe languages solve this problem by restricting your power. In "normal" C# there simply is no way to take the address of a local and return it or store it for later. You can take the address of a local, but the language is cleverly designed so that it is impossible to use it after the lifetime of the local ends. In order to take the address of a local and pass it back, you have to put the compiler in a special "unsafe" mode, and put the word "unsafe" in your program, to call attention to the fact that you are probably doing something dangerous that could be breaking the rules.

For further reading:

237
Anurag Uniyal

I want to know the memory usage of my Python application, and specifically which code blocks/portions or objects are consuming the most memory. A Google search shows that a commercial option is Python Memory Validator.

Open-source options are PySizer and Heapy.

I haven't tried any of them, so I wanted to know which one is best, considering:

  1. It gives the most details.

  2. It requires the fewest (or no) changes to my code.

Answered By: Torsten Marek ( 111)

Heapy is quite simple to use. At some point in your code, you have to write the following:

from guppy import hpy
h = hpy()
print h.heap()

This gives you some output like this:

Partition of a set of 132527 objects. Total size = 8301532 bytes.
 Index  Count   %     Size   %  Cumulative   %  Kind (class / dict of class)
     0  35144  27  2140412  26     2140412  26  str
     1  38397  29  1309020  16     3449432  42  tuple
     2    530   0   739856   9     4189288  50  dict (no owner)

You can also find out from where objects are referenced and get statistics about that, but somehow the docs on that are a bit sparse.

There is a graphical browser as well, written in Tk.

161
JimDaniel

I just finished a test as part of a job interview, and one question stumped me, even with Google for reference. I'd like to see what the Stack Overflow crew can do with it:

The “memset_16aligned” function requires a 16byte aligned pointer passed to it, or it will crash.

a) How would you allocate 1024 bytes of memory, and align it to a 16-byte boundary?
b) Free the memory after memset_16aligned has executed.

{
    void *mem;
    void *ptr;

    // answer a) here

    memset_16aligned(ptr, 0, 1024);

    // answer b) here
}
Answered By: Jonathan Leffler ( 284)
{
    void *mem = malloc(1024+16);
    void *ptr = ((char *)mem+16) & ~ 0x0F;
    memset_16aligned(ptr, 0, 1024);
    free(mem);
}

Edited: Explanation as requested.

The first step is to allocate enough spare space, just in case. Since the memory must be 16-byte aligned (meaning that the leading byte address needs to be a multiple of 16), adding 16 extra bytes guarantees that we have enough space. Somewhere in the first 16 bytes, there is a 16-byte aligned pointer. (Note that malloc() is supposed to return a pointer that is sufficiently well aligned for any purpose. However, the meaning of 'any' is primarily for things like basic types - long, double, long double, long long. When you are doing more specialized things, like playing with graphics systems, they can need more stringent alignment than the rest of the system - hence questions and answers like this.)

The next step is to convert the void pointer to a char pointer; GCC notwithstanding, you are not supposed to do pointer arithmetic on void pointers (and GCC has warning options to tell you when you abuse it). Then add 16 to the start pointer. Suppose malloc() returned you an impossibly badly aligned pointer: 0x800001. Adding the 16 gives 0x800011. Now I want to round down to the 16-byte boundary - so I want to reset the last 4 bits to 0. 0x0F has the last 4 bits set to one; therefore, ~ 0x0F has all bits set to one except the last four. Anding that with 0x800011 gives 0x800010. You can iterate over the other offsets and see that the same arithmetic works.

The last step, free(), is easy: you always, and only, pass to free() a value that one of malloc(), calloc() or realloc() returned to you - anything else is a disaster. You correctly provided mem to hold that value - thank you. free() releases it.

Finally, if you know about the internals of your system's malloc package, you could guess that it might well return 16-byte aligned data (or it might be 8-byte aligned). If it was 16-byte aligned, then you'd not need to dink with the values. However, this is dodgy and non-portable -- other malloc packages have different minimum alignments, and therefore assuming one thing when it does something different would lead to core dumps. Within broad limits, this solution is portable.

Someone else mentioned posix_memalign() as another way to get the aligned memory; that isn't available everywhere, but could often be implemented using this as a basis. Note that it was convenient that the alignment was a power of 2; other alignments are messier.
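
Where posix_memalign() is available, the whole exercise collapses to something like this (a sketch; the alignment argument must be a power of two and a multiple of sizeof(void *)):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    void *ptr = NULL;
    if (posix_memalign(&ptr, 16, 1024) == 0) {
        memset(ptr, 0, 1024);  /* stand-in for memset_16aligned(ptr, 0, 1024) */
        free(ptr);             /* memory from posix_memalign() is freed normally */
    }
    return 0;
}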

One more comment - this code does not check that the allocation succeeded.

Amendment: Windows Programmer pointed out that you can't do bit-mask operations on pointers, and, indeed, GCC (3.4.6 and 4.3.1 tested) complains about it. So an amended version of the basic code, converted into a main program, follows. I've also taken the liberty of adding just 15 instead of 16, as has been pointed out. I'm using uintptr_t since C99 has been around long enough to be accessible on most platforms. If it wasn't for the use of PRIXPTR in the printf() statements, it would be sufficient to #include <stdint.h> instead of #include <inttypes.h>.

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

/* Stub for testing: verifies the alignment contract without writing memory. */
static void memset_16aligned(void *space, char byte, size_t nbytes)
{
    assert((nbytes & 0x0F) == 0);
    assert(((uintptr_t)space & 0x0F) == 0);
}

int main(void)
{
    void *mem = malloc(1024+15);
    void *ptr = (void *)(((uintptr_t)mem+15) & ~ 0x0F);
    printf("0x%08" PRIXPTR ", 0x%08" PRIXPTR "\n", (uintptr_t)mem, (uintptr_t)ptr);
    memset_16aligned(ptr, 0, 1024);
    free(mem);
    return(0);
}

And here is a marginally more generalized version, which will work for sizes which are a power of 2:

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

/* Stub for testing: verifies the alignment contract without writing memory. */
static void memset_16aligned(void *space, char byte, size_t nbytes)
{
    assert((nbytes & 0x0F) == 0);
    assert(((uintptr_t)space & 0x0F) == 0);
}

static void test_mask(size_t align)
{
    uintptr_t mask = ~(uintptr_t)(align - 1);
    void *mem = malloc(1024+align-1);
    void *ptr = (void *)(((uintptr_t)mem+align-1) & mask);
    assert((align & (align - 1)) == 0);  /* align must be a power of two */
    printf("0x%08" PRIXPTR ", 0x%08" PRIXPTR "\n", (uintptr_t)mem, (uintptr_t)ptr);
    memset_16aligned(ptr, 0, 1024);
    free(mem);
}

int main(void)
{
    test_mask(16);
    test_mask(32);
    test_mask(64);
    test_mask(128);
    return(0);
}

To convert test_mask() into a general purpose allocation function, the single return value from the allocator would have to encode the release address, as several people have indicated in their answers.
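
One common way to encode the release address (a sketch, not from the original answer) is to stash the pointer malloc() returned just before the block handed out, and read it back in a matching free function. This assumes align is a power of two no smaller than sizeof(void *):

#include <stdint.h>
#include <stdlib.h>

void *aligned_malloc(size_t size, size_t align)
{
    /* Room for the payload, worst-case alignment slack, and the saved pointer. */
    void *raw = malloc(size + align - 1 + sizeof(void *));
    if (raw == NULL)
        return NULL;
    uintptr_t start = (uintptr_t)raw + sizeof(void *);
    void *aligned = (void *)((start + align - 1) & ~(uintptr_t)(align - 1));
    ((void **)aligned)[-1] = raw;       /* remember what to pass to free() */
    return aligned;
}

void aligned_free(void *aligned)
{
    if (aligned != NULL)
        free(((void **)aligned)[-1]);   /* recover the original pointer */
}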

[Added Uri commented: Maybe I am having [a] reading comprehension problem this morning, but if the interview question specifically says "How would you allocate 1024 bytes of memory" and you clearly allocate more than that, wouldn't that be an automatic failure from the interviewer?

My response won't fit into a 300-character comment...
It depends, I suppose. I think most people (including me) took the question to mean "How would you allocate a space in which 1024 bytes of data can be stored, with a base address that is a multiple of 16 bytes?" If the interviewer really meant "How can you allocate 1024 bytes (only) and have it 16-byte aligned?", then the options are more limited.

  • Clearly, one possibility is to allocate 1024 bytes and then give that address the 'alignment treatment'; the problem with that approach is that the actual available space cannot be properly determined (the usable space is between 1008 and 1024 bytes, but there was no mechanism available to specify which size), which renders it less than useful.
  • Another possibility is that you are expected to write a full memory allocator and ensure that the 1024-byte block you return is appropriately aligned. If that is the case, you probably end up doing an operation fairly similar to what the proposed solution did, but you hide it inside the allocator.

However, if the interviewer expected either of those responses, I'd expect them to recognize that this solution answers a closely related question, and then to reframe their question to point the conversation in the correct direction. (Further, if the interviewer got really stroppy, then I wouldn't want the job; if the answer to an insufficiently precise requirement is shot down in flames without correction, then the interviewer is not someone for whom it is safe to work.)
]

157
kedar kamthe

When you create an instance of a class with the new operator, memory gets allocated on the heap. When you create an instance of a struct with the new operator, where does the memory get allocated: on the heap, or on the stack?

Answered By: Jon Skeet ( 207)

Okay, let's see if I can make this any clearer.

Firstly, Ash is right: the question is not about where value type variables are allocated. That's a different question - and one to which the answer isn't just "on the stack". It's more complicated than that (and made even more complicated by C# 2). I have an article on the topic and will expand on it if requested, but let's deal with just the new operator.

Secondly, all of this really depends on what level you're talking about. I'm looking at what the compiler does with the source code, in terms of the IL it creates. It's more than possible that the JIT compiler will do clever things in terms of optimising away quite a lot of "logical" allocation.

Thirdly, I'm ignoring generics, mostly because I don't actually know the answer, and partly because it would complicate things too much.

Finally, all of this is just with the current implementation. The C# spec doesn't specify much of this - it's effectively an implementation detail. There are those who believe that managed code developers really shouldn't care. I'm not sure I'd go that far, but it's worth imagining a world where in fact all local variables live on the heap - which would still conform with the spec.


There are two different situations with the new operator on value types: you can either call a parameterless constructor (e.g. new Guid()) or a parameterful constructor (e.g. new Guid(someString)). These generate significantly different IL. To understand why, you need to compare the C# and CLI specs: according to C#, all value types have a parameterless constructor. According to the CLI spec, no value types have parameterless constructors. (Fetch the constructors of a value type with reflection some time - you won't find a parameterless one.)

It makes sense for C# to treat the "initialize a value with zeroes" as a constructor, because it keeps the language consistent - you can think of new(...) as always calling a constructor. It makes sense for the CLI to think of it differently, as there's no real code to call - and certainly no type-specific code.

It also makes a difference what you're going to do with the value after you've initialized it. The IL used for

Guid localVariable = new Guid(someString);

is different to the IL used for:

myInstanceOrStaticVariable = new Guid(someString);

In addition, if the value is used as an intermediate value, e.g. an argument to a method call, things are slightly different again. To show all these differences, here's a short test program. It doesn't show the difference between static variables and instance variables: the IL would differ between stfld and stsfld, but that's all.

using System;

public class Test
{
    static Guid field;

    static void Main() {}
    static void MethodTakingGuid(Guid guid) {}


    static void ParameterisedCtorAssignToField()
    {
        field = new Guid("");
    }

    static void ParameterisedCtorAssignToLocal()
    {
        Guid local = new Guid("");
        // Force the value to be used
        local.ToString();
    }

    static void ParameterisedCtorCallMethod()
    {
        MethodTakingGuid(new Guid(""));
    }

    static void ParameterlessCtorAssignToField()
    {
        field = new Guid();
    }

    static void ParameterlessCtorAssignToLocal()
    {
        Guid local = new Guid();
        // Force the value to be used
        local.ToString();
    }

    static void ParameterlessCtorCallMethod()
    {
        MethodTakingGuid(new Guid());
    }
}

Here's the IL for the class, excluding irrelevant bits (such as nops):

.class public auto ansi beforefieldinit Test extends [mscorlib]System.Object    
{
    // Removed Test's constructor, Main, and MethodTakingGuid.

    .method private hidebysig static void ParameterisedCtorAssignToField() cil managed
    {
        .maxstack 8
        L_0001: ldstr ""
        L_0006: newobj instance void [mscorlib]System.Guid::.ctor(string)
        L_000b: stsfld valuetype [mscorlib]System.Guid Test::field
        L_0010: ret     
    }

    .method private hidebysig static void ParameterisedCtorAssignToLocal() cil managed
    {
        .maxstack 2
        .locals init ([0] valuetype [mscorlib]System.Guid guid)    
        L_0001: ldloca.s guid    
        L_0003: ldstr ""    
        L_0008: call instance void [mscorlib]System.Guid::.ctor(string)    
        // Removed ToString() call
        L_001c: ret
    }

    .method private hidebysig static void ParameterisedCtorCallMethod() cil  managed    
    {   
        .maxstack 8
        L_0001: ldstr ""
        L_0006: newobj instance void [mscorlib]System.Guid::.ctor(string)
        L_000b: call void Test::MethodTakingGuid(valuetype [mscorlib]System.Guid)
        L_0011: ret     
    }

    .method private hidebysig static void ParameterlessCtorAssignToField() cil managed
    {
        .maxstack 8
        L_0001: ldsflda valuetype [mscorlib]System.Guid Test::field
        L_0006: initobj [mscorlib]System.Guid
        L_000c: ret 
    }

    .method private hidebysig static void ParameterlessCtorAssignToLocal() cil managed
    {
        .maxstack 1
        .locals init ([0] valuetype [mscorlib]System.Guid guid)
        L_0001: ldloca.s guid
        L_0003: initobj [mscorlib]System.Guid
        // Removed ToString() call
        L_0017: ret 
    }

    .method private hidebysig static void ParameterlessCtorCallMethod() cil managed
    {
        .maxstack 1
        .locals init ([0] valuetype [mscorlib]System.Guid guid)    
        L_0001: ldloca.s guid
        L_0003: initobj [mscorlib]System.Guid
        L_0009: ldloc.0 
        L_000a: call void Test::MethodTakingGuid(valuetype [mscorlib]System.Guid)
        L_0010: ret 
    }

    .field private static valuetype [mscorlib]System.Guid field
}

As you can see, there are lots of different instructions used for calling the constructor:

  • newobj: Allocates the value on the stack, calls a parameterised constructor. Used for intermediate values, e.g. for assignment to a field or use as a method argument.
  • call instance: Uses an already-allocated storage location (whether on the stack or not). This is used in the code above for assigning to a local variable. If the same local variable is assigned a value several times using several new calls, it just initializes the data over the top of the old value - it doesn't allocate more stack space each time.
  • initobj: Uses an already-allocated storage location and just wipes the data. This is used for all our parameterless constructor calls, including those which assign to a local variable. For the method call, an intermediate local variable is effectively introduced, and its value wiped by initobj.

I hope this shows how complicated the topic is, while shining a bit of light on it at the same time. In some conceptual senses, every call to new allocates space on the stack - but as we've seen, that isn't what really happens even at the IL level. I'd like to highlight one particular case. Take this method:

void HowManyStackAllocations()
{
    Guid guid = new Guid();
    // [...] Use guid
    guid = new Guid(someBytes);
    // [...] Use guid
    guid = new Guid(someString);
    // [...] Use guid
}

That "logically" has 4 stack allocations - one for the variable, and one for each of the three new calls - but in fact the stack is only allocated once, and then the same storage location is reused.

EDIT: Just to be clear, this is only true in some cases... in particular, the value of guid won't be visible if the Guid constructor throws an exception, which is why the C# compiler is able to reuse the same stack slot. See Eric Lippert's blog post on value type construction for more details and a case where it doesn't apply.

I've learned a lot in writing this answer - please ask for clarification if any of it is unclear!