How does Bitmap allocation work on Oreo, and how can I investigate Bitmaps’ memory?

Looks like your app was killed by the Linux OOM killer. Game developers and other people who actively use native memory see that happen all the time.

Enabling kernel overcommit together with lifting heap-based restrictions on Bitmap allocation may result in the picture you see. You can read a bit about overcommit here.

Personally I would love to see an OS API for learning about app deaths, but I won’t be holding my breath.


  1. What’s the correct way to get the maximum native memory the app is allowed to use, print it in the logs, and use it to decide on a maximum?

Pick some arbitrary value (say, a quarter of the heap size) and stick with it. If you get a call to onTrimMemory (which is directly tied to the OOM killer and native memory pressure), try to reduce your consumption.
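A minimal sketch of that approach, assuming an Application subclass and an LruCache; the class name and the quarter-of-heap ratio are illustrative, not mandated by anything:

```java
import android.app.Application;
import android.content.ComponentCallbacks2;
import android.graphics.Bitmap;
import android.util.LruCache;

public class CacheDemoApplication extends Application {

    private LruCache<String, Bitmap> bitmapCache;

    @Override
    public void onCreate() {
        super.onCreate();
        // Size the cache relative to the Java heap limit, e.g. a quarter of it.
        long quarterOfHeap = Runtime.getRuntime().maxMemory() / 4;
        int cacheSizeKb = (int) (quarterOfHeap / 1024);
        bitmapCache = new LruCache<String, Bitmap>(cacheSizeKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Count the full byte size of each Bitmap, in kilobytes.
                return value.getAllocationByteCount() / 1024;
            }
        };
    }

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        // The system is under memory pressure: shrink or drop the cache.
        if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
            bitmapCache.evictAll();
        } else if (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
            bitmapCache.trimToSize(bitmapCache.size() / 2);
        }
    }
}
```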

  2. I guess this new behavior might affect some caching libraries, right? That’s because they might depend on the heap memory size instead.

It does not matter: the Android heap size is always smaller than total physical memory. Any caching library that used the heap size as a guideline should continue to work either way.

  3. How could it be that I could create so many bitmaps, each of size 20,000×20,000?

Magic.

I assume that the current version of Android Oreo allows memory overcommit: untouched memory isn’t actually requested from hardware, so you can have as much of it as allowed by the OS addressable memory limit (a bit less than 2 gigabytes on x86, several terabytes on x64). All virtual memory consists of pages (usually 4 KB each). When you try to use a page, it is paged in. If the kernel does not have enough physical memory to map a page for your process, the app will receive a signal, killing it. In practice the app will be killed by the Linux OOM killer well before that happens.
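If you want to see the difference between virtual and physical memory for yourself, a rough sketch (the helper name is made up) is to read /proc/self/status: VmSize is the virtual address space of the process, VmRSS is what is actually backed by physical pages right now.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public final class ProcStatus {

    // Print the virtual (VmSize) and resident (VmRSS) memory of the current
    // process, as reported by the Linux kernel.
    public static void printVmSizeAndRss() throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/self/status"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("VmSize:") || line.startsWith("VmRSS:")) {
                    System.out.println(line.trim());
                }
            }
        }
    }
}
```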

  4. How could the native memory functions return such huge values in the case of bitmap creation, yet more reasonable ones when I decoded bitmaps? What do they mean?

  5. What’s with the weird profiler graph in the Bitmap creation case? It barely rises in memory usage, and yet it eventually reaches a point where it can’t create any more of them (after a lot of items have been inserted).

The profiler graph shows heap memory usage. If the bitmaps do not count towards the heap, that graph naturally won’t show them.

Native memory functions appear to work as (originally) intended: they correctly track virtual allocations, but do not realize how much physical memory the kernel reserves for each virtual allocation (that is opaque to user space).
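Here is a rough sketch of the counters involved (the class name and log tag are made up): Debug.getNativeHeapAllocatedSize() reports what the native allocator handed out, i.e. virtual allocations, while Debug.getPss() is closer to the physical memory the kernel has actually committed for the process.

```java
import android.os.Debug;
import android.util.Log;

public final class MemorySnapshot {

    private static final String TAG = "MemorySnapshot";

    public static void log() {
        Runtime runtime = Runtime.getRuntime();
        long javaHeapUsed = runtime.totalMemory() - runtime.freeMemory();
        long javaHeapMax = runtime.maxMemory();

        // What the native allocator thinks it handed out (virtual allocations).
        long nativeAllocated = Debug.getNativeHeapAllocatedSize();
        long nativeTotal = Debug.getNativeHeapSize();

        Log.d(TAG, "Java heap used:   " + (javaHeapUsed / 1024) + " KB of "
                + (javaHeapMax / 1024) + " KB");
        Log.d(TAG, "Native allocated: " + (nativeAllocated / 1024) + " KB of "
                + (nativeTotal / 1024) + " KB reported by the allocator");
        // PSS is much closer to real physical usage, but obtaining it is slow.
        Log.d(TAG, "PSS:              " + Debug.getPss() + " KB");
    }
}
```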

Also, as opposed to when I decode bitmaps, I do get a crash here (including a dialog), but it’s not an OOM. Instead, it’s… an NPE!

You haven’t used any of those pages, so they are not mapped to physical memory, hence the OOM killer does not kill you (yet). The allocation might have failed because you have run out of virtual memory, which is more harmless than running out of physical memory, or because of hitting some other kind of memory limit (such as a cgroups-based one), which is even more harmless.

  6. …Can Crashlytics detect it? Is there a way to be informed of such a thing, whether by users or during development at the office?

The OOM killer destroys your app with SIGKILL (same as when your process is terminated after going into the background), so your process cannot react to it. It is theoretically possible to observe process death from a child process, but the exact reason may be hard to learn. See Who “Killed” my process and why?. A well-written library may be able to periodically check memory usage and make an educated guess. An extremely well-written library may be able to detect memory allocations by hooking into the native malloc function (for example, by hot-patching the application import table or something like that).
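A sketch of what that “educated guess” approach could look like, assuming a periodic check via ActivityManager (the class name, log tag and interval are made up):

```java
import android.app.ActivityManager;
import android.content.Context;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public final class MemoryPressureWatcher {

    private static final String TAG = "MemoryPressureWatcher";
    private static final long INTERVAL_MS = 10_000;

    private final ActivityManager activityManager;
    private final Handler handler = new Handler(Looper.getMainLooper());

    public MemoryPressureWatcher(Context context) {
        activityManager = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    }

    public void start() {
        handler.post(this::sample);
    }

    private void sample() {
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        activityManager.getMemoryInfo(info);
        if (info.lowMemory) {
            // Persist this breadcrumb somewhere durable (file, crash-reporting log);
            // if the process is killed soon after, the next launch can report it.
            Log.w(TAG, "Low memory: " + info.availMem + " of " + info.totalMem + " bytes available");
        }
        handler.postDelayed(this::sample, INTERVAL_MS);
    }
}
```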


To better demonstrate how virtual memory management works, let’s imagine allocating 1,000 Bitmaps of 1 GB each, then changing a single pixel in each of them. The OS does not initially allocate physical memory for those Bitmaps, so they take around 0 bytes of physical memory in total. After you touch a single four-byte RGBA pixel of a Bitmap, the kernel will allocate a single page for storing that pixel.

The OS does not know anything about Java objects and Bitmaps; it simply views all process memory as a contiguous list of pages.

The commonly used memory page size is 4 KB. After touching 1,000 pixels, one in each 1 GB Bitmap, you will still use less than 4 MB of real memory.
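Here is that thought experiment translated into a runnable sketch, with sizes scaled down so it can actually execute on a 64-bit device. The exact numbers you observe will depend on the device, the Android version and how Bitmap initializes its pixel buffer, so treat the output as an observation, not a spec.

```java
import android.graphics.Bitmap;
import android.os.Debug;
import android.util.Log;

import java.util.ArrayList;
import java.util.List;

public final class OvercommitExperiment {

    private static final String TAG = "OvercommitExperiment";

    public static void run() {
        List<Bitmap> bitmaps = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            // Each ARGB_8888 bitmap is 10_000 * 10_000 * 4 bytes, roughly 400 MB
            // of virtual address space.
            Bitmap bitmap = Bitmap.createBitmap(10_000, 10_000, Bitmap.Config.ARGB_8888);
            // Touch a single pixel: this forces the kernel to back one 4 KB page
            // of the pixel buffer with physical memory.
            bitmap.setPixel(0, 0, 0xFF00FF00);
            bitmaps.add(bitmap);

            Log.d(TAG, "Bitmaps: " + bitmaps.size()
                    + ", native allocated: " + (Debug.getNativeHeapAllocatedSize() >> 20) + " MB"
                    + ", PSS: " + (Debug.getPss() >> 10) + " MB");
        }
    }
}
```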
