Good ⟺ Truth

The human mind is an interesting object. It has fascinated humanity ever since we learned how to write—and probably long before. Again and again, people say that humans are especially smart (or moral, or creative) and thus distinct from animals. The very same rhetoric is now being used for artificial intelligences. Humans are said to be truly intelligent, while machines merely “imitate.” Whatever that means. I am certain that such claims about artificial intelligence will age just as poorly as those that once tried to set us apart from animals.

We are different from animals, and we are different from machines. But ants, or anything else you care to name, are just as different in their own way. Of course, the discussion about what sets us apart is immensely important, because only through it can we arrive at the conclusion that we are not so different after all.

I find especially tedious the modern argument that AIs make “mistakes,” and that this shows they don’t “understand” a subject. But we humans also make mistakes, even when we do understand something. We developed mechanisms to deal with those mistakes: tests, proofs, courts, debate. Evolution itself handed us no such mechanism; it never selected us by the criterion of “truth.”

But let’s assume there were such a selection. Do AI critics really believe that AIs could not take advantage of it? I think people demand perfection from AIs. Yet all we really need is to build in a protocol for error correction, perhaps inspired by the human world. And none of this is new. In software, the benchmark of quality is the absence of bugs, and AIs can already write tests and fix the errors those tests expose. In mathematics, there is formal proof verification, and AIs can simply make use of it. For fuzzier subjects, the process might be more dialogical: multiple AIs could debate and converge on a consensus.
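To make the software case concrete, here is a toy sketch in Python. The candidate functions and test cases are invented for illustration; the only point is that correctness is decided by an external check, a selection process, rather than by whether anyone “understood” the problem.

```python
# Toy sketch: candidates are not trusted, they are *selected* by an
# external check (here: a handful of unit tests). All names below are
# hypothetical stand-ins for whatever an AI might propose.

def buggy_abs(x):
    return x  # a flawed candidate: wrong for negative x

def fixed_abs(x):
    return -x if x < 0 else x  # a corrected candidate

def passes_tests(fn):
    """External selection: the protocol decides, not the author."""
    cases = [(3, 3), (-4, 4), (0, 0)]
    return all(fn(a) == expected for a, expected in cases)

# Iterate over proposals until one survives the selection.
candidates = [buggy_abs, fixed_abs]
survivor = next(fn for fn in candidates if passes_tests(fn))
print(survivor.__name__)  # -> fixed_abs: selected by tests, not by insight
```

An AI in such a loop would simply regenerate candidates until the tests pass, which is exactly the error-correction protocol sketched above.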

All of this is only a prelude to another thought. AIs are accused of not really “understanding” things, of merely stringing letters together. But what does “understanding” even mean? What happens inside me when I understand something? I have some causal intuitions about what must or cannot happen, but those intuitions are faulty, even in areas where I’m an expert.

My thesis is this:

In the search for truth, there is nothing but heuristics.

Because the true selection criterion in the universe is existence, not truth. We need intuitions, or mechanisms to generate thoughts, that help us survive. Truth only matters insofar as it helps us survive.

But let’s not forget: we do not possess truth either. We establish it through processes or protocols—maybe an SMT solver, a court, or whatever. No individual has “the truth.” It is always an external process.
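To make this concrete, here is a minimal sketch using the Python bindings of the Z3 SMT solver (assuming the z3-solver package is installed; the claim being checked is an arbitrary example of mine). What matters is that the verdict is produced by a procedure, not held by a person.

```python
# Minimal sketch, assuming z3-solver is installed (pip install z3-solver).
from z3 import Int, Solver, sat

x = Int("x")
s = Solver()
# Claim: some positive integer x satisfies x^2 = 2x.
s.add(x * x == 2 * x, x > 0)

# The verdict comes from the protocol, not from anyone's intuition.
if s.check() == sat:
    print("established:", s.model())  # e.g. x = 2
else:
    print("refuted")
```

A court, peer review, or a debate between several AIs plays the same structural role: truth as the output of a process.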

It seems that for evolution, or for God, truth is secondary. And the point goes deeper: we do not have truth, and we should embrace that fact as something good. Sometimes the true things are not the ones that help us. I believe it can even be good to believe things that are false. False, but good. Good in the sense of helpful.

False but useful things include whole domains such as “faith” and “hope.” Faith is conviction about something unseen (something not measurable). Hope is the same conviction, projected into the future. Faith and hope are deeply religious concepts, and perhaps among religion’s great achievements.

I believe we once again need similar processes, ones that can settle what to hold true in matters of “faith” and “hope.” Religion once provided this; today we have neither faith nor hope. I believe this is a personal, societal, and evolutionary deficiency, and it urgently needs to be addressed.

However, this would mean deliberately replacing truth with pragmatism. So where does reason still have its place? Precisely in that decision: choosing which untrue things we should believe. That choice should be made reasonably.
