I can't answer your question as asked, but I thought I would give you some information. Here is my setup.
I wrote a system that runs on my home server. Every 20 minutes it logs into my email account and downloads everything it hasn't previously seen. Then an allowlist and a blocklist perform an initial classification.
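The fetch step could look something like this (a minimal sketch using Python's standard `imaplib`; the host, credentials, and the idea of tracking seen UIDs in a set are my assumptions, not details from the original setup):

```python
import imaplib

def new_uids(server_uids, seen_uids):
    """Return the UIDs on the server that haven't been processed on a prior run."""
    return sorted(set(server_uids) - set(seen_uids), key=int)

def fetch_new_messages(host, user, password, seen_uids):
    """Log in over IMAP and download every message not previously seen.

    host/user/password are hypothetical parameters for illustration.
    Returns a dict mapping UID -> raw RFC 822 message bytes.
    """
    messages = {}
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.uid("SEARCH", None, "ALL")
        server_uids = data[0].split()
        for uid in new_uids(server_uids, seen_uids):
            _, msg_data = imap.uid("FETCH", uid, "(RFC822)")
            messages[uid] = msg_data[0][1]  # raw message bytes
    return messages
```

Persisting `seen_uids` between runs (a file, a small database) is what makes the 20-minute polling loop idempotent.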
Everything else has a Bayesian Spam Filter run on it.
Anything from the blocklist, or scoring ≥98% likely to be spam, is deleted from the server. Everything scoring <98% is put in the grey bucket.
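The routing logic above is simple enough to sketch in a few lines (the function name, the sender-based matching, and the score-as-probability interface are my assumptions; lists could just as well match on domains or whole addresses):

```python
def classify(sender, spam_probability, allowlist, blocklist, threshold=0.98):
    """Route one message: lists take priority, then the Bayesian score.

    Returns one of "read", "delete", or "grey".
    """
    if sender in blocklist:
        return "delete"      # blocklisted: removed from the server, never seen
    if sender in allowlist:
        return "read"        # allowlisted: always read
    # Everything else is judged by the Bayesian filter's score.
    if spam_probability >= threshold:
        return "delete"      # >= 98% likely spam
    return "grey"            # lands in the grey bucket for skimming
```

Checking the blocklist before the allowlist is a design choice; with disjoint lists the order doesn't matter.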
I never see things on the blocklist.
I read things on the allowlist.
I skim things in the grey bucket ... it's rare that anything is spam, but those that are go in the block-bucket. Everything else goes in the allow-bucket.
Then I use the block-bucket and allow-bucket to update the filter's settings, and anything that ended up in the block-bucket is deleted from the server.
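That retraining step is the heart of a Bayesian filter: fold the hand-sorted buckets back into per-word counts and use them to score future mail. A minimal sketch (word-count naive Bayes with add-one smoothing; the class and method names are mine, and a real filter would also handle headers, persistence, and rare-word damping):

```python
import math
import re
from collections import Counter

class BayesFilter:
    """A tiny word-count Bayesian spam filter, retrained from the two buckets."""

    def __init__(self):
        self.spam_counts = Counter()
        self.ham_counts = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    @staticmethod
    def tokens(text):
        return re.findall(r"[a-z0-9']+", text.lower())

    def update(self, allow_bucket, block_bucket):
        """Fold the manually sorted buckets back into the filter's counts."""
        for text in allow_bucket:
            self.ham_counts.update(self.tokens(text))
            self.ham_msgs += 1
        for text in block_bucket:
            self.spam_counts.update(self.tokens(text))
            self.spam_msgs += 1

    def spam_probability(self, text):
        """Naive Bayes in log space; returns an estimate of P(spam | text)."""
        if not self.spam_msgs or not self.ham_msgs:
            return 0.5  # no training data yet: stay neutral
        total = self.spam_msgs + self.ham_msgs
        log_spam = math.log(self.spam_msgs / total)
        log_ham = math.log(self.ham_msgs / total)
        # Add-one smoothing so unseen words don't zero out a class.
        spam_total = sum(self.spam_counts.values()) + len(self.spam_counts)
        ham_total = sum(self.ham_counts.values()) + len(self.ham_counts)
        for w in self.tokens(text):
            log_spam += math.log((self.spam_counts[w] + 1) / spam_total)
            log_ham += math.log((self.ham_counts[w] + 1) / ham_total)
        # Convert the log odds back to a probability.
        return 1 / (1 + math.exp(log_ham - log_spam))
```

Because `update` only ever adds to the counters, each pass over the buckets incrementally sharpens the filter, which is why the grey bucket shrinks over time.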
It wasn't much work to set up, very little spam gets through, and it's bespoke, so no one else sees my email. The downside is that I can't benefit from other people's classification work, and they can't benefit from mine, but it's solid, secure, and effective.
(Some specific details omitted for brevity and clarity of exposition)