- Default mods, who are either the creator of the community or those assigned by that person
- Elected mods, who are chosen by the community. Elections go both ways: a mod can be elected or impeached. For example, a default mod can be impeached through the election system, which renders that mod a non-mod for you
- Mods you've personally chosen. Choosing someone to be a mod for you is your vote in the election.
So the system isn't 'enforcing' mods you haven't chosen onto you as a result of the elections. Elections make the decision only if you haven't made a decision for that mod either way — your own decision is ironclad (for you, in your personal, local view), since nothing can override your personal vote for or against somebody. The more you vote in elections, the more you shape your own view of the universe.
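The precedence described above can be sketched in a few lines. This is a hypothetical illustration, not Aether's actual code or API — the names `is_mod_for_user`, `Vote`, and the parameters are invented for clarity:

```python
from enum import Enum

class Vote(Enum):
    NONE = 0        # user hasn't voted on this mod
    APPROVE = 1     # user's personal 'elect' vote
    DISAPPROVE = 2  # user's personal 'impeach' vote

def is_mod_for_user(personal_vote, community_elected, is_default_mod):
    """Resolve whether someone acts as a mod in this user's local view.

    Precedence (a sketch of the rules described above):
    1. A personal vote is ironclad and overrides everything else.
    2. Otherwise, a community election verdict decides — it can
       impeach even a default mod.
    3. Otherwise, default mods stand.
    """
    if personal_vote == Vote.APPROVE:
        return True
    if personal_vote == Vote.DISAPPROVE:
        return False
    if community_elected is not None:  # election has reached a verdict
        return community_elected
    return is_default_mod

# A default mod impeached by election is not a mod for you...
assert is_mod_for_user(Vote.NONE, False, True) is False
# ...unless your personal vote says otherwise.
assert is_mod_for_user(Vote.APPROVE, False, True) is True
```

The key property is that the election result is only consulted when the first two checks fall through — it never overrides a local decision.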
The article is long, but what I can see there that is not implemented in Aether is the transitive property of trust: instead of a binary vote, he seems to be advocating for a 0-to-100 trust score, plus the idea that the trust of the people you trust means something to you. (Let me know if I got this wrong.)
This is great in theory — and it was actually considered for implementation at one point. The issue isn't that it doesn't make sense; it's that it is quite literally impossible to implement at this scale, because it makes almost every trust decision made by anyone on the network, at any point in time, affect almost all other entities. That leaves you with an almost entirely 'dirty' graph that you have to traverse in its entirety and recompile.
This can be done on a centralised service, since there is one graph to compile and everyone submits to it. In Aether, however, we try to keep the graph compilation on the user's end, both because it's a P2P network and because custom graphs compiled on the client are what allow votes to modify the graph structure itself. That sort of gradual outflow of trust across a social graph, making show-or-hide decisions for every single piece of content, is an intense amount of computation to redo for every new modification to the trust gradient.
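To make the cost concrete, here is a minimal sketch of naive transitive-trust propagation — purely illustrative, not Aether's algorithm or the article's exact proposal. Trust flows from you (`source`) outward along weighted edges, damped at each hop, onto a 0-to-100 scale:

```python
def propagate_trust(edges, source, damping=0.5, iterations=10):
    """Naive transitive-trust propagation (illustrative sketch only).

    edges:  {truster: {trustee: weight between 0 and 1}}
    source: the local user, who trusts themselves fully (100).
    Each node's score is the damped sum of trust flowing in from
    the people who trust it, iterated until (hopefully) stable.
    """
    trust = {source: 100.0}
    for _ in range(iterations):
        nxt = {source: 100.0}
        for truster, outs in edges.items():
            t = trust.get(truster, 0.0)
            if t == 0.0:
                continue
            for trustee, weight in outs.items():
                if trustee == source:
                    continue
                nxt[trustee] = nxt.get(trustee, 0.0) + damping * t * weight
        # clamp to the 0..100 scale
        trust = {node: min(score, 100.0) for node, score in nxt.items()}
    return trust

# alice trusts bob (0.8); bob trusts carol (1.0) — trust is transitive:
edges = {"alice": {"bob": 0.8}, "bob": {"carol": 1.0}}
scores = propagate_trust(edges, "alice")
```

Notice that the whole edge map is re-traversed on every iteration: change any single edge weight and every score downstream of it is stale, so the entire computation has to be redone. On a client compiling its own personal graph for every piece of content, that recompilation cost is exactly the problem described above.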