How expensive is RTTI?

Regardless of compiler, you can always save on runtime if you can afford to do

if (typeid(a) == typeid(b)) {
  B* ba = static_cast<B*>(&a);
  // ... use ba ...
}

instead of

B* ba = dynamic_cast<B*>(&a);
if (ba) {
  // ... use ba ...
}

The former involves only a single comparison of std::type_info; the latter necessarily involves traversing an inheritance tree plus comparisons. Note that the two aren't quite equivalent, either: the typeid test succeeds only on an exact dynamic-type match, while dynamic_cast also succeeds when a's dynamic type is merely derived from B.
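
For concreteness, here is a self-contained version of both patterns; the A/B hierarchy and the hello() member are just placeholders for whatever your real code does:

#include <typeinfo>

struct A { virtual ~A() {} };
struct B : A { void hello() {} };

void fast_path(A& a, const B& b) {
  // Succeeds only when a and b have exactly the same dynamic type;
  // the static_cast is then safe because that type is at least a B.
  if (typeid(a) == typeid(b)) {
    B* ba = static_cast<B*>(&a);
    ba->hello();
  }
}

void slow_path(A& a) {
  // Also succeeds when a's dynamic type is merely derived from B.
  B* ba = dynamic_cast<B*>(&a);
  if (ba) {
    ba->hello();
  }
}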

Past that, as everyone says, the resource usage is implementation-specific.

I agree with the other comments that the submitter should avoid RTTI for design reasons. However, there are good reasons to use RTTI (boost::any, for example). With that in mind, it’s useful to know its actual resource usage in common implementations.

I recently did a bunch of research into RTTI in GCC.

tl;dr: RTTI in GCC uses negligible space, and typeid(a) == typeid(b) is very fast on many platforms (Linux, BSD, and maybe some embedded platforms, but not mingw32). If you know you’ll always be on a blessed platform, RTTI is very close to free.

Gritty details:

GCC prefers to use a particular “vendor-neutral” C++ ABI[1], and always uses this ABI for Linux and BSD targets[2]. For platforms that support this ABI and also weak linkage, typeid() returns a consistent and unique object for each type, even across dynamic linking boundaries. You can test &typeid(a) == &typeid(b), or just rely on the fact that the portable test typeid(a) == typeid(b) actually just compares a pointer internally.
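
As a minimal sanity check, assuming one of those blessed platforms (the Base/Derived names here are just placeholders):

#include <cassert>
#include <typeinfo>

struct Base { virtual ~Base() {} };
struct Derived : Base {};

int main() {
  Derived d1, d2;
  Base& a = d1;
  Base& b = d2;
  // Non-portable, but the cheapest possible test: pointer identity.
  assert(&typeid(a) == &typeid(b));
  // Portable; on this ABI it compiles down to the same pointer comparison.
  assert(typeid(a) == typeid(b));
  return 0;
}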

In GCC’s preferred ABI, a class vtable always holds a pointer to a per-type RTTI structure, though that pointer might never be used. So a typeid() call itself should only cost as much as any other vtable lookup (the same as calling a virtual member function), and RTTI support shouldn’t add any per-object space overhead.
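
To make the layout concrete, here is a deliberately non-portable sketch of what typeid(obj) amounts to under this ABI (undefined behavior as standard C++, shown purely to illustrate where the data lives): the Itanium ABI places the type_info pointer in the vtable slot just before the first virtual-function entry.

#include <typeinfo>

struct Base { virtual ~Base() {} };

const std::type_info& manual_typeid(const Base& obj) {
  // The object's first word is its vtable pointer...
  void* const* vtable = *reinterpret_cast<void* const* const*>(&obj);
  // ...and the per-type RTTI structure lives at vtable slot -1.
  return *static_cast<const std::type_info*>(vtable[-1]);
}

On a conforming platform, manual_typeid(x) should return a reference to the same object as typeid(x); the real typeid just does this one extra indirection for you.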

From what I can make out, the RTTI structures used by GCC (these are all the subclasses of std::type_info) only hold a few bytes for each type, aside from the name. It isn’t clear to me whether the names are present in the output code even with -fno-rtti. Either way, the change in size of the compiled binary should reflect the change in runtime memory usage.

A quick experiment (using GCC 4.4.3 on Ubuntu 10.04 64-bit) shows that -fno-rtti actually increases the binary size of a simple test program by a few hundred bytes. This happens consistently across combinations of -g and -O3. I’m not sure why the size would increase; one possibility is that GCC’s STL code behaves differently without RTTI (since exceptions won’t work).
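
The original test program isn’t shown, so here is a hypothetical stand-in of the sort used for such a measurement (note it must avoid typeid and dynamic_cast, which -fno-rtti forbids):

// Build both ways and compare, e.g.:
//   g++ -O3 size_test.cpp -o with_rtti
//   g++ -O3 -fno-rtti size_test.cpp -o without_rtti
//   size with_rtti without_rtti
#include <cstddef>
#include <iostream>
#include <vector>

struct Shape { virtual ~Shape() {} virtual double area() const = 0; };
struct Circle : Shape { double area() const { return 3.14159; } };

int main() {
  std::vector<Shape*> shapes;
  shapes.push_back(new Circle);
  for (std::size_t i = 0; i < shapes.size(); ++i)
    std::cout << shapes[i]->area() << '\n';
  for (std::size_t i = 0; i < shapes.size(); ++i)
    delete shapes[i];
  return 0;
}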

[1] Known as the Itanium C++ ABI, documented at http://www.codesourcery.com/public/cxx-abi/abi.html. The naming is horribly confusing: “Itanium” refers to the original development architecture, but the ABI specification works on lots of architectures, including i686/x86_64. Comments in GCC’s internal source and STL code refer to the Itanium ABI as the “new” ABI, in contrast to the “old” one used before it. Worse, the “new”/Itanium ABI covers all the versions selectable through -fabi-version; the “old” ABI predates this versioning. GCC adopted the Itanium/versioned/“new” ABI in version 3.0; the “old” ABI was used in 2.95 and earlier, if I am reading their changelogs correctly.

[2] I couldn’t find any resource listing std::type_info object stability by platform. For the compilers I had access to, I used the following: echo "#include <typeinfo>" | gcc -E -dM -x c++ -c - | grep GXX_MERGED_TYPEINFO_NAMES. This macro controls the behavior of operator== for std::type_info in GCC’s STL, as of GCC 3.0. I did find that mingw32-gcc obeys the Windows C++ ABI, where std::type_info objects aren’t unique for a type across DLLs; there, typeid(a) == typeid(b) calls strcmp under the covers. I speculate that on single-program embedded targets like AVR, where there is no dynamic linking at all, std::type_info objects are always stable.
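
For illustration, a rough paraphrase of what that macro selects between (not the verbatim libstdc++ source):

#include <cstring>
#include <typeinfo>

bool type_info_equal(const std::type_info& a, const std::type_info& b) {
#if __GXX_MERGED_TYPEINFO_NAMES
  // Merged type_info objects: identity comparison suffices.
  return &a == &b;
#else
  // No merging (e.g. mingw32): fall back to comparing the names.
  return std::strcmp(a.name(), b.name()) == 0;
#endif
}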
