What is the purpose of a marker interface?

This is a bit of a tangent based on the response by “Mitch Wheat”.

Anytime I see people cite the framework design guidelines, I like to mention that:

You should ignore the framework design guidelines most of the time.

This isn’t because of any issue with the framework design guidelines. I think the .NET Framework is a fantastic class library, and a lot of that fantasticness flows from the framework design guidelines.

However, the design guidelines do not apply to most code written by most programmers. Their purpose is to enable the creation of a large framework that is used by millions of developers, not to make library writing more efficient.

A lot of the suggestions in them can guide you to do things that:

  1. May not be the most straightforward way of implementing something
  2. May result in extra code duplication
  3. May have extra runtime overhead

The .NET Framework is big, really big. It’s so big that it would be absolutely unreasonable to assume that anyone has detailed knowledge about every aspect of it. In fact, it’s much safer to assume that most programmers frequently encounter portions of the framework they have never used before.

In that case, the primary goals of an API designer are to:

  1. Keep things consistent with the rest of the framework
  2. Eliminate unneeded complexity in the API surface area

The framework design guidelines push developers to create code that accomplishes those goals.

That means doing things like avoiding deep layers of inheritance (even if it means duplicating code), pushing exception-throwing code out to public “entry points” rather than into shared helpers (so that stack traces make more sense in the debugger), and a lot of other similar things.
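To make the “entry points” idea concrete, here is a minimal sketch of the pattern. The type and method names are hypothetical; the point is that the public method does all the argument validation, so a bad call fails with a stack trace pointing straight at the entry point rather than at a validation helper buried several frames down:

```csharp
using System;

public class OrderService
{
    // Public entry point: all argument validation happens here, so an
    // invalid call surfaces with this method at the top of the trace.
    public void Submit(string orderId)
    {
        if (orderId == null)
            throw new ArgumentNullException(nameof(orderId));
        if (orderId.Length == 0)
            throw new ArgumentException("Order id must not be empty.", nameof(orderId));

        SubmitCore(orderId);
    }

    // Private worker: assumes its arguments are already valid, so it
    // never shows up as the apparent source of an argument exception.
    private void SubmitCore(string orderId)
    {
        // ... do the actual work ...
    }
}
```

Note the trade: every entry point repeats its own checks (more code, some duplication) in exchange for friendlier debugging, which is exactly the kind of bargain the guidelines make.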

The primary reason that those guidelines suggest using attributes instead of marker interfaces is that removing the marker interfaces makes the inheritance structure of the class library much more approachable. A class diagram with 30 types and 6 layers of inheritance hierarchy is very daunting compared to one with 15 types and 2 layers of hierarchy.
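As an illustration of the two styles (the `IImmutable` and `ImmutableAttribute` names are made up for this example):

```csharp
using System;

// Marker-interface style: the marker appears in the type's inheritance
// list, adding a node and an edge to every class diagram it touches.
public interface IImmutable { }

public class Money : IImmutable
{
    public decimal Amount { get; }
    public Money(decimal amount) => Amount = amount;
}

// Attribute style, which the guidelines prefer: the same fact is
// recorded as metadata, and the inheritance hierarchy stays flat.
[AttributeUsage(AttributeTargets.Class, Inherited = false)]
public sealed class ImmutableAttribute : Attribute { }

[Immutable]
public class Price
{
    public decimal Amount { get; }
    public Price(decimal amount) => Amount = amount;
}
```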

If there really are millions of developers using your APIs, or your code base is really big (say, over 100K LOC), then following those guidelines can help a lot.

If 5 million developers spend 15 minutes learning an API rather than 60 minutes, that is 45 minutes saved per developer, or about 3.75 million hours in total: roughly 428 years of round-the-clock time. That’s a lot of time.

Most projects, however, don’t involve millions of developers or 100K+ LOC. In a typical project, with say 4 developers and around 50K LOC, the set of assumptions is a lot different. The developers on the team will have a much better understanding of how the code works. That means it makes a lot more sense to optimize for producing high-quality code quickly, and for reducing the number of bugs and the effort needed to make changes.

Spending a week developing code that is consistent with the .NET Framework, versus 8 hours writing code that is easy to change and has fewer bugs, can result in:

  1. Late projects
  2. Lower bonuses
  3. Increased bug counts
  4. More time spent at the office, and less time on the beach drinking margaritas.

Without 4,999,999 other developers to absorb the costs, it usually isn’t worth it.

For example, testing for a marker interface comes down to a single “is” expression, and results in less code than looking for attributes does.
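A minimal sketch of that difference, again using hypothetical `IImmutable` and `ImmutableAttribute` types:

```csharp
using System;
using System.Reflection;

public interface IImmutable { }

[AttributeUsage(AttributeTargets.Class)]
public sealed class ImmutableAttribute : Attribute { }

[Immutable]
public class Money : IImmutable { }

public static class MarkerChecks
{
    // Marker interface: a single "is" expression, evaluated by the
    // runtime with no reflection involved.
    public static bool ViaInterface(object obj) => obj is IImmutable;

    // Attribute: a reflection lookup -- more code at every call site
    // and noticeably more runtime overhead.
    public static bool ViaAttribute(object obj) =>
        obj.GetType().GetCustomAttribute<ImmutableAttribute>() != null;
}
```

Both checks return true for a `Money` instance here, but the interface test is one statically-checked expression, while the attribute test needs a reflection call.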

So my advice is:

  1. Follow the framework design guidelines religiously if you are developing class libraries (or UI widgets) meant for widespread consumption.
  2. Consider adopting some of them if you have over 100K LOC in your project.
  3. Otherwise ignore them completely.
