These concepts matter in memory analysis because an object may itself be small, yet hold references to other, much larger objects; by doing so it prevents the garbage collector from freeing that extra memory.
You can see the dominators in a page using the Dominators view in the Memory tool.
During garbage collection, the runtime traverses the graph, starting at the root, and marks every object it finds. Any objects it doesn't find are unreachable and can be deallocated.
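This mark phase can be sketched as a simple graph traversal. The heap below is a made-up example (the object names and the adjacency-list representation are illustrative, not how any real runtime stores its heap):

```python
# Hypothetical heap: each object maps to the objects it references.
heap = {
    "root":   ["a", "b"],
    "a":      ["c"],
    "b":      [],
    "c":      [],
    "orphan": ["c"],  # references c, but is itself unreachable from root
}

def mark(heap, root):
    """Mark every object reachable from the root (depth-first)."""
    marked, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if obj in marked:
            continue
        marked.add(obj)
        stack.extend(heap[obj])
    return marked

live = mark(heap, "root")
garbage = set(heap) - live  # everything the traversal never found
```

Note that `orphan` is collected even though it holds a reference to `c`: what matters is whether an object is referenced *from the root*, not whether it references anything itself.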
So when an object becomes unreachable (for example, because it is only referenced by a single local variable that goes out of scope), any objects it references also become unreachable, as long as no other reachable objects reference them:
Conversely, this means that objects are kept alive as long as some other reachable object is holding a reference to them.
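You can observe this cascade directly in Python, whose collector behaves the same way in this respect. The `Node` class and variable names below are invented for the example; `weakref` lets us watch an object without keeping it alive:

```python
import gc
import weakref

class Node:
    """A minimal object that can hold a reference to one child."""
    def __init__(self, child=None):
        self.child = child

leaf = Node()                 # the "extra" object
holder = Node(child=leaf)     # the only other thing referencing leaf
watcher = weakref.ref(leaf)   # observe leaf without keeping it alive

del leaf                      # drop the direct reference...
gc.collect()
assert watcher() is not None  # ...but holder.child still keeps it alive

del holder                    # now the whole chain is unreachable
gc.collect()
assert watcher() is None      # leaf was collected along with its holder
```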
This gives rise to a distinction between two ways to look at the size of an object:
- shallow size: the size of the object itself
- retained size: the size of the object itself, plus the size of other objects that are kept alive by this object
Often, objects will have a small shallow size but a much larger retained size, through the references they contain to other objects. Retained size is an important concept in analyzing memory usage, because it answers the question "if this object ceases to exist, what's the total amount of memory freed?".
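One way to make retained size concrete is to compute it as "everything reachable now, minus everything that would still be reachable if this object vanished". The heap, object names, and byte sizes below are all hypothetical:

```python
# Hypothetical heap graph and per-object (shallow) sizes in bytes.
GRAPH = {
    "root":   ["widget", "shared"],
    "widget": ["bigbuf", "shared"],
    "shared": [],
    "bigbuf": [],
}
SIZES = {"root": 32, "widget": 16, "shared": 512, "bigbuf": 4096}

def reachable(graph, start, removed=None):
    """All nodes reachable from start, optionally pretending one node is gone."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node == removed:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return seen

def retained_size(graph, sizes, root, node):
    """Total memory freed if `node` ceased to exist."""
    freed = reachable(graph, root) - reachable(graph, root, removed=node)
    return sum(sizes[n] for n in freed)

shallow = SIZES["widget"]                                  # 16 bytes
retained = retained_size(GRAPH, SIZES, "root", "widget")   # 4112 bytes
```

Here `widget` has a shallow size of only 16 bytes, but a retained size of 4112: freeing it would also free `bigbuf`, while `shared` survives because `root` still references it.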
A related concept is that of the dominator. Node B is said to dominate node A if every path from the root to A passes through B:
If any of node A's dominators are freed, then node A itself becomes eligible for garbage collection.
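The definition can be checked by brute force: a node B dominates A exactly when removing B from the graph leaves A unreachable from the root. Production tools use much faster algorithms, but this sketch (with an invented diamond-shaped heap) captures the definition:

```python
def reachable(graph, root, removed=None):
    """All nodes reachable from root, optionally pretending one node is gone."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node in seen or node == removed:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return seen

def dominators(graph, root, node):
    """Every node whose removal would make `node` unreachable from the root."""
    return {d for d in reachable(graph, root) - {node}
            if node not in reachable(graph, root, removed=d)}

# Diamond-shaped heap: B and C both reference A.
GRAPH = {"root": ["B", "C"], "B": ["A"], "C": ["A"], "A": []}

doms = dominators(GRAPH, "root", "A")  # only "root" dominates A
```

In this graph neither `B` nor `C` dominates `A`, since removing either one still leaves a path to `A` through the other; only `root` does.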
One slight subtlety here is that if an object A is referenced by two otherwise unrelated objects B and C, then neither of them is A's dominator, because you could remove either B or C from the graph and A would still be retained by the other referrer. Instead, the immediate dominator of A is their first common ancestor: