He tried to get people to use it for a while, but since it was just a less-functional and empty Reddit, nobody was very interested. Eventually some of the users/subreddits banned from Reddit started using it since they had been kicked off real Reddit, and the developer ended up welcoming them while justifying it as "free speech". I think he mostly just seemed happy that some people actually wanted to use his site.
It's all been downhill from there, and the original creator even abandoned the site a few years ago and handed it over to someone else.
A relatively small group "migrated" for a few days, but didn't stay. Here's Whoaverse one month after the up/down counts were removed from Reddit (notice that they added visible vote counts, which they didn't have before): http://web.archive.org/web/20140718134533/http://whoaverse.c...
Other than the stickied site announcement, almost all of the posts only have a handful of votes and only a few have any comments.
The banned users were the first group that actually stuck around on Whoaverse/Voat, because they didn't have the option of just going back to Reddit.
Take, for instance, Ruqqus, another site created as a free-speech Reddit alternative. Its front page consistently features horrifying content: viciously racist posts, anti-Semitic memes, unironic pro-Nazi/pro-genocide discussion, and generally terrible material. This is likely because it is exactly this content that is being "censored" on Reddit, not harmless free-speech advocates silenced by a big company.
Can anyone actually tell me what valuable discussion is being censored on Twitter, Reddit, etc? Banning this type of content is mandatory if you want a platform that is safe and available for trans people, Jews, gay people, women, etc.
People shouldn't have to tolerate people @ing them with slurs, be exposed to "reasoned" arguments for their extermination, or memes dehumanizing them for the sake of "free speech."
E.g. a large subset of Fediverse (Mastodon etc.) communities are communities that avoid Twitter because they don't feel Twitter is doing enough to keep them safe. And instances have varying policies about how they handle other instances with different moderation standards.
Where does the line between "free speech" and moderation exist?
Free speech advocates who aren't convinced by the "just go to another platform" argument in a discussion about Reddit or Twitter censorship likely won't find the argument any more persuasive just because you redefine what counts as a platform.
I may not have done a good job of illustrating it in the previous post, but the example was mainly focused on platforms that bill themselves as anti-censorship alternatives to Reddit. Censorship and free speech on social media are incredibly complex topics, and the development of an endless stream of tiny, far-right echo chambers doesn't seem to capture the spirit of the "town square" that free speech advocates say they are looking for.