Single objects still limited to 2 GB in size in CLR 4.0?

It’s worse than that: your process space, when you’re working with 32-bit .NET, is much smaller than the theoretical limit. In my experience, 32-bit .NET apps tend to start throwing out-of-memory errors somewhere around 1.2–1.4 GB of memory usage (some people say they can get to 1.6 GB, but I’ve never seen that). Of course, this isn’t a problem on 64-bit systems.

That being said, a single 2 GB array of reference types, even on a 64-bit system, holds a huge number of objects. Even with 8-byte references, you can allocate an array of 268,435,456 object references, each of which can itself be very large (up to 2 GB, more if it uses nested objects). That’s more memory than most applications will ever really need.

One of the members of the CLR team blogged about this, with some options for working around these limitations. On a 64-bit system, something like his BigArray&lt;T&gt; would be a viable way to allocate any number of objects into one logical array, far beyond the 2 GB single-object limit. P/Invoke can also let you allocate memory blocks larger than that.
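For illustration only, here is a minimal sketch of the chunking idea (not the blog post’s actual BigArray&lt;T&gt; code): a single logical index space is split across many smaller managed arrays, so no individual CLR object ever approaches the 2 GB limit. The ChunkedArray&lt;T&gt; name and the 1M-element chunk size are just choices made for this example.

```csharp
using System;

// Illustrative chunked array: one logical index space backed by many
// small T[] chunks, so no single CLR object exceeds the 2 GB limit.
public sealed class ChunkedArray<T>
{
    private const int ChunkSize = 1 << 20;   // 1,048,576 elements per chunk (arbitrary)

    private readonly T[][] chunks;
    private readonly long length;

    public ChunkedArray(long length)
    {
        if (length < 0)
            throw new ArgumentOutOfRangeException("length");

        this.length = length;
        long chunkCount = (length + ChunkSize - 1) / ChunkSize;
        chunks = new T[chunkCount][];

        for (long i = 0; i < chunkCount; i++)
        {
            // The last chunk may be smaller than ChunkSize.
            long remaining = length - i * ChunkSize;
            chunks[i] = new T[Math.Min(remaining, (long)ChunkSize)];
        }
    }

    public long Length { get { return length; } }

    public T this[long index]
    {
        get { return chunks[index / ChunkSize][index % ChunkSize]; }
        set { chunks[index / ChunkSize][index % ChunkSize] = value; }
    }
}
```

On a 64-bit machine, new ChunkedArray&lt;byte&gt;(5L * 1024 * 1024 * 1024) would then give you roughly 5 GB of byte storage addressed by a single long index, even though each underlying byte[] is only 1 MB.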


Edit: I should have mentioned this as well: I don’t believe this behavior has changed at all for .NET 4. It has been unchanged since the beginning of .NET.


Edit: .NET 4.5 adds an option, on x64 only, to explicitly allow objects to be larger than 2 GB by setting gcAllowVeryLargeObjects in the app.config.
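For reference, the switch goes under the runtime element of the application’s config file; something along these lines (the gcAllowVeryLargeObjects element name is the documented one, the surrounding boilerplate is just a typical app.config):

```xml
<configuration>
  <runtime>
    <!-- .NET 4.5+, 64-bit processes only: allow individual objects (arrays) larger than 2 GB -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```

Note that even with this enabled, the number of elements in an array is still capped at just under 2^31 per dimension, so the extra headroom mainly helps arrays of larger value types.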
