> Being either risk seeking or risk averse looks like a flaw too, though perhaps less severe than cyclic preferences.
Yes, I don't want to deny that this view is appealing. However, even if you use probabilistic representations of degrees of belief, you still need to deal with ignorance and conflicting evidence in one way or another. Convex sets of probabilities can be shown to represent many of the alternative approaches I've mentioned. There is also Jøsang's "subjective logic", which is surprisingly nice despite its silly name. Check it out; maybe my only gripe with it is that he mostly seems to re-brand many prior ideas, but the framework is interesting.
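To make the ignorance-vs-conflict distinction concrete, here's a quick Python sketch of a binomial opinion in Jøsang's sense (belief, disbelief, uncertainty, base rate, summing appropriately); the class name and the example numbers are my own, and this is only a minimal illustration, not his full calculus:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # evidential mass for the proposition
    disbelief: float    # evidential mass against it
    uncertainty: float  # mass committed to neither (ignorance)
    base_rate: float = 0.5  # prior weight in the absence of evidence

    def __post_init__(self):
        # belief + disbelief + uncertainty must sum to 1
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9

    def projected_probability(self):
        # Collapse to a single number: uncertain mass is
        # apportioned according to the base rate.
        return self.belief + self.base_rate * self.uncertainty

# Total ignorance and maximal conflict both collapse to 0.5...
ignorant = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0)
conflicted = Opinion(belief=0.5, disbelief=0.5, uncertainty=0.0)
# ...but the representations differ, which a single probability can't show.
```

The point of the sketch is just that a lone probability of 0.5 erases exactly the distinction we were talking about, while the triple keeps it.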
> Not even acknowledging the piece of data might be a good approximation in some cases, but in general it seems quite foolish. You don't just ignore a piece of evidence, you explain why it doesn't change your mind.
I agree with you, but at the same time we know from qualitative belief revision theory that there are many, many ways of dealing with conflicting evidence. We can rule out some of them, e.g. discarding all previous beliefs in order to accommodate the new evidence, but among the many less obviously flawed methods a choice still needs to be made. The probabilistic setting doesn't help much in that area; if anything, it makes it harder to see what's going on. As I've said, the problem is underdetermined.
> Then this book comes up, and provide justifications for my intuitions that were even stronger than I anticipated.
I'm definitely going to read it! However, I might already be tainted by other books on the subject and by philosophical discussions. I really do think that a belief representation ought not to be closed under negation, i.e., I have strong Dempster-Shafer intuitions, and that some way of distinguishing ignorance from doubt is needed.
> Call it confirmation bias, but when I read that book, I already subscribed to probability theory as the correct way to think. I had for a long time. The intuition of probabilities being degrees of beliefs, I had for as long as I can remember.
Kudos to you for having such strong intuitions; it makes life easier. Maybe I'd be willing to buy into them for probabilities, but that wouldn't help me because of similar problems on the evaluative side, on which most of my work focuses. There we have thought experiments like Spectrum Cases (Temkin, Rachels): suppose A gives you extremely high pleasure for a month, B gives you slightly less pleasure than A (barely noticeable) for 3 months, C gives you slightly less pleasure than B (barely noticeable) for 9 months, and so on. Some people (not all) have the intuition that B is better than A, C is better than B, and so forth, until at some point, say Z, they would judge that A is better than Z, making "better than" cyclic. These thought experiments come in all varieties; they can also be made about well-being and other notions of goodness, and can be made as realistic as one wishes. Most people who want to keep "better than" transitive introduce some notion of significance, which is lexicographic decision making in disguise (significant value attributes always outrank insignificant ones). But okay, you were talking about probabilities only and already acknowledged that the evaluative component could be discontinuous. (More precisely, in this case the Archimedean axiom fails.) It's just that even if graded belief is purely probabilistic, these kinds of preferences will complicate making decisions on the basis of your beliefs.
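The cycle falls out of a very simple comparison rule, by the way. Here's a toy Python sketch (all numbers and the just-noticeable-difference threshold are invented for illustration, not from Temkin or Rachels): prefer higher pleasure when the gap is perceptible, otherwise prefer longer duration.

```python
# Options A, B, C, ...: pleasure drops by a barely noticeable 1 unit
# per step while duration triples (1, 3, 9, ... months, in days here).
options = [(100 - i, 30 * 3**i) for i in range(6)]  # (pleasure, duration)
JND = 2  # just-noticeable difference in pleasure (assumed threshold)

def better(x, y):
    """Judge whether option x is better than option y."""
    pleasure_x, dur_x = x
    pleasure_y, dur_y = y
    if abs(pleasure_x - pleasure_y) <= JND:
        return dur_x > dur_y        # imperceptible quality gap: duration decides
    return pleasure_x > pleasure_y  # perceptible gap: quality decides

# Each adjacent step looks like an improvement...
assert all(better(options[i + 1], options[i]) for i in range(5))
# ...yet the first option beats the last: "better than" has cycled.
assert better(options[0], options[-1])
```

Nothing in the rule is obviously crazy on its own; the intransitivity only shows up when you chain the pairwise judgments together, which is exactly what makes these cases hard.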
I agree about the tractability, too. Since you are a probabilist about graded belief, though, that already makes your life much easier than mine. Couldn't you just say that any heuristic is permissible in certain circumstances as a shortcut that, under those circumstances, is conducive to adequate probability approximations?
I didn't want to insinuate that there is anything wrong with being a probabilist; in the end it's a matter of intuitions. I merely wanted to point out that there are some fairly well-known authors who are not probabilists about graded belief in the narrow sense, e.g. Bouyssou, Fishburn, Vincke, and Pirlot in decision making, people like Halpern, Dubois, Prade, Spohn and their students in A.I., and of course almost everybody in mathematical psychology, such as Luce and Tversky. But as I've said, most of their generalizations can be represented by more complicated probability structures such as sets of probabilities.
Anyway, it was nice chatting with you!