The HitL fine-tuning angle is exactly right. The labeled dataset you're building (good/bad/stylistically-wrong memory events) is probably worth more than the compaction itself. Coherence preferences are surprisingly personal — what reads as "not correct based on my style" is hard to spec without examples.
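A labeled record for that dataset can stay very small — something like the sketch below, where the field names and label vocabulary are my assumptions, not anything from your setup:

```python
from dataclasses import dataclass

@dataclass
class LabeledMemoryEvent:
    event: str      # the memory write exactly as the agent produced it
    label: str      # "good" | "bad" | "style_mismatch" (assumed label set)
    note: str = ""  # optional free-text reason from the human reviewer

# The notes end up doing a lot of work: they capture the personal
# coherence preferences that are hard to spec without examples.
examples = [
    LabeledMemoryEvent("User prefers terse answers.", "good"),
    LabeledMemoryEvent("User asked about X once.", "bad",
                       "one-off question, not a stable preference"),
]
```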
The loop-pruning maps really cleanly to the contradiction detection in our setup. A model circling the same state N times is often doing so because it stored an inconclusive result with the same confidence as a resolved one, so the two look identical at recall time. Tagging memory entries with a status (open, resolved, or contradicted) before they go in cuts a lot of that.
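A minimal sketch of that status tagging, with names and the ranking rule assumed for illustration: recall surfaces the status alongside the content and prefers resolved entries, so an open result can't masquerade as a settled one:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    RESOLVED = "resolved"
    CONTRADICTED = "contradicted"

@dataclass
class MemoryEntry:
    content: str
    status: Status
    confidence: float

def recall(entries, query_terms):
    """Return matching entries, resolved ones first.

    Sorting on (is_resolved, confidence) means a high-confidence
    open entry still ranks below a resolved one, which is what
    breaks the 'circle the same state N times' loop.
    """
    hits = [e for e in entries if any(t in e.content for t in query_terms)]
    return sorted(
        hits,
        key=lambda e: (e.status is Status.RESOLVED, e.confidence),
        reverse=True,
    )
```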
On the autonomy question: we ended up treating certainty as continuous rather than binary. Low-certainty memories stay soft; high-certainty ones get promoted. Automatic compaction only operates on the low end, and higher-certainty entries are off-limits without an explicit override. That lets you keep the autonomy without the coherence risk. The failure mode shifts from "deleted something important" to "kept something stale too long," which feels more recoverable.
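For concreteness, here's roughly how the tiers and the compaction guard fit together — the thresholds and names are placeholders, not our actual values:

```python
from dataclasses import dataclass

PROMOTE_THRESHOLD = 0.8  # assumed cutoffs; tune per system
COMPACT_THRESHOLD = 0.4

@dataclass
class Memory:
    text: str
    certainty: float  # continuous, not binary

def tier(m: Memory) -> str:
    """Soft below the compaction line, promoted above the promote line."""
    if m.certainty < COMPACT_THRESHOLD:
        return "soft"
    if m.certainty >= PROMOTE_THRESHOLD:
        return "promoted"
    return "standard"

def auto_compact(memories, override=False):
    """Automatic compaction only touches the soft tier.

    Everything above the compaction threshold survives unless the
    caller explicitly overrides, so the worst case is keeping a
    stale soft entry too long, not deleting a promoted one.
    """
    if override:
        return []
    return [m for m in memories if tier(m) != "soft"]
```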
Would be curious what your pruning signal looks like at the turn level — are you scoring relevance per-turn retroactively, or flagging at write time?