How does having a dynamic variable affect performance?

I’ve read that dynamic makes the compiler run again, but what does it actually do? Does it have to recompile the whole method in which the dynamic value is used as a parameter, or only the lines with dynamic behavior/context?

Here’s the deal.

For every expression in your program that is of dynamic type, the compiler emits code that generates a single “dynamic call site object” that represents the operation. So, for example, if you have:

class C
{
    void M()
    {
        dynamic d1 = whatever;
        dynamic d2 = d1.Foo();
    }
}

then the compiler will generate code that is morally like this. (The actual code is quite a bit more complex; this is simplified for presentation purposes.)

class C
{
    static DynamicCallSite FooCallSite;
    void M()
    {
        object d1 = whatever;
        object d2;
        if (FooCallSite == null) FooCallSite = new DynamicCallSite();
        d2 = FooCallSite.DoInvocation("Foo", d1);
    }
}

See how this works so far? We generate the call site once, no matter how many times you call M. The call site lives forever after you generate it once. The call site is an object that represents “there’s going to be a dynamic call to Foo here”.
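
For the curious, the real generated code is built on CallSite&lt;T&gt; from the DLR together with a binder from Microsoft.CSharp.RuntimeBinder. Here is a hand-written approximation of that shape, not the compiler's actual output; the field name and the way whatever arrives as a parameter are my own simplifications:

using System;
using System.Runtime.CompilerServices;
using Microsoft.CSharp.RuntimeBinder;

class C
{
    // One static call site per dynamic operation; created lazily, then reused forever.
    static CallSite<Func<CallSite, object, object>> fooCallSite;

    void M(object whatever)
    {
        object d1 = whatever;

        if (fooCallSite == null)
        {
            // The binder describes the operation: "invoke a member named Foo, no type arguments".
            fooCallSite = CallSite<Func<CallSite, object, object>>.Create(
                Binder.InvokeMember(
                    CSharpBinderFlags.None,
                    "Foo",
                    null,            // no generic type arguments
                    typeof(C),       // context used for accessibility checks
                    new[] { CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) }));
        }

        // Target is the cached delegate; a cache miss falls back to the binder.
        object d2 = fooCallSite.Target(fooCallSite, d1);
    }
}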

OK, so now that you’ve got the call site, how does the invocation work?

The call site is part of the Dynamic Language Runtime (DLR). The DLR says “hmm, someone is attempting to do a dynamic invocation of a method Foo on this here object. Do I know anything about that? No. Then I’d better find out.”

The DLR then interrogates the object in d1 to see if it is anything special. Maybe it is a legacy COM object, or an IronPython object, or an IronRuby object, or an IE DOM object. If it is not any of those then it must be an ordinary C# object.
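
To make “anything special” concrete: an object can supply its own binding logic by deriving from DynamicObject (or implementing IDynamicMetaObjectProvider). A minimal sketch, with a made-up Bag class just to show the shape:

using System;
using System.Dynamic;

// A dynamic object that handles its own member invocations, so the DLR
// never needs to fall through to the C# runtime binder for it.
class Bag : DynamicObject
{
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        result = "you called " + binder.Name;
        return true; // we handled the invocation ourselves
    }
}

class Demo
{
    static void Main()
    {
        dynamic d = new Bag();
        Console.WriteLine(d.Foo()); // prints "you called Foo"
    }
}

When the object is none of those special things and is just an ordinary C# object, the DLR hands the problem to the C# runtime binder instead.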

This is the point where the compiler starts up again. There’s no need for a lexer or parser, so the DLR starts up a special version of the C# compiler that just has the metadata analyzer, the semantic analyzer for expressions, and an emitter that emits Expression Trees instead of IL.

The metadata analyzer uses Reflection to determine the type of the object in d1, and then passes that to the semantic analyzer to ask what happens when such an object is invoked on method Foo. The overload resolution analyzer figures that out, and then builds an Expression Tree — just as if you’d called Foo in an expression tree lambda — that represents that call.

The C# compiler then passes that expression tree back to the DLR along with a cache policy. The policy is usually “the second time you see an object of this type, you can re-use this expression tree rather than calling me back again”. The DLR then calls Compile on the expression tree, which invokes the expression-tree-to-IL compiler and spits out a block of dynamically-generated IL in a delegate.
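
If you want to poke at that machinery directly, you can build a call as an expression tree and compile it yourself. The target here (string.ToUpper) is just a stand-in picked for illustration, not what the binder would produce for Foo:

using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // A parameter typed as object, like the value flowing through the call site.
        ParameterExpression arg = Expression.Parameter(typeof(object), "d1");

        // "Cast to the runtime type we discovered, then call the resolved method."
        Expression body = Expression.Call(
            Expression.Convert(arg, typeof(string)),
            typeof(string).GetMethod("ToUpper", Type.EmptyTypes));

        // Compile() runs the expression-tree-to-IL compiler and hands back a delegate.
        Func<object, object> invoke =
            Expression.Lambda<Func<object, object>>(body, arg).Compile();

        Console.WriteLine(invoke("hello")); // HELLO
    }
}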

The DLR then caches this delegate in a cache associated with the call site object.

Then it invokes the delegate, and the Foo call happens.

The second time you call M, we already have a call site. The DLR interrogates the object again, and if the object is the same type as it was last time, it fetches the delegate out of the cache and invokes it. If the object is of a different type then the cache misses, and the whole process starts over again; we do semantic analysis of the call and store the result in the cache.
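
Here is a tiny demo of that caching behavior; the CacheDemo and Describe names are mine, and ToString is used only because every object has one to call:

using System;

class CacheDemo
{
    static void Describe(dynamic d)
    {
        // The dynamic call to ToString gets one call site, created on first use
        // and shared by every later call to Describe.
        object text = d.ToString();
        Console.WriteLine(text);
    }

    static void Main()
    {
        Describe("hello");  // first call: runtime analysis of string.ToString(), compiled and cached
        Describe("world");  // same runtime type (string): cache hit, the stored delegate runs directly
        Describe(42);       // different runtime type: cache miss, int.ToString() is analyzed and cached too
    }
}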

This happens for every expression that involves dynamic. So for example if you have:

int x = d1.Foo() + d2;

then there are three dynamic call sites. One for the dynamic call to Foo, one for the dynamic addition, and one for the dynamic conversion from dynamic to int. Each one has its own runtime analysis and its own cache of analysis results.
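
Spelled out in the simplified style of the sketch above, that one line expands into something like this; the extra sites and the DoAddition/DoConversion helpers are illustrative inventions, not real APIs:

object temp1 = FooCallSite.DoInvocation("Foo", d1);   // dynamic call to Foo on d1
object temp2 = AddCallSite.DoAddition(temp1, d2);     // dynamic operator + on the result and d2
int x = ConvertToIntCallSite.DoConversion(temp2);     // dynamic conversion of the sum to int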

Make sense?
