People who have a great deal of trust in science usually also have a great deal of trust in statistics and probability theory, because they are incredibly useful tools in that and many other contexts. It does, however, become problematic when probability theory turns into a theory of everything. It seems that as long as you don’t explicitly declare it the one true theory, you can stay objective and swap your theory of everything for something else later. This is a trap!
This is how it goes: Bayes’ theorem (and probability theory in general) might not be the one true model, but it probably is, because I think so, and my mind must be Bayesian, because I think that Bayes’ theorem is probably the one true model, because… Gödel what? As long as you think that probability theory has the highest probability of all models, you are stuck. Your only way out is one that is irrational within this framework: probably stumbling on something that just feels clearly wrong.
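The loop can even be sketched numerically. Here is a toy illustration of my own (the numbers are made up, not from any study): if your sources are pre-filtered so that the evidence you see always favours the hypothesis "Bayesianism is the one true model", then repeatedly applying Bayes’ theorem can only push your posterior toward certainty, no matter where you start.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a binary hypothesis H given evidence E."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1 - prior)
    return numerator / marginal

# Start genuinely undecided about H = "Bayesianism is the one true model".
posterior = 0.5

# Self-selected sources only ever supply evidence that fits H,
# so every update has a likelihood ratio > 1 in H's favour.
for _ in range(10):
    posterior = bayes_update(posterior, p_e_given_h=0.9, p_e_given_not_h=0.5)

print(posterior)  # climbs toward 1 with every filtered observation
```

The math itself is flawless; the problem is entirely in where the stream of evidence comes from, which is exactly why no amount of correct updating gets you out.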
The annoying thing is not so much the unprovability; the annoying thing is that it blinds you to everything that doesn’t conform to your theory. It is not a confirmation bias, it is a confirmation loop: your sources are self-selecting for the ones using Bayesian reasoning. Confirmation bias can be corrected for; this cannot.
This is also why confirmation bias is a bad argument against someone else’s reasoning, by the way. It is a tool for finding one in your own head, but in others it is indistinguishable from simply using the kind of sources they consider good. And you need some kind of selection, or you’ll wind up reading flat-earth arguments all day. But if you’re in a confirmation loop, you end up discarding entire research fields together with their expert opinions, because they don’t use statistics as an argument.
So I’m afraid we’re stuck with just regular ways of trying to be reasonably right.
I have a lot of very interconnected thoughts in my mind about this (as well as a giant dumpster fire of drafts in Notion), so hopefully I’ll be posting some more about this soon. For now, if you’re interested in reading more about the limits of formal methods like this, you’ll probably like https://metarationality.com/how-to-think. I’m really impressed with the discussion and examples there, at least.