Sunday, February 21, 2021

Knowing what is true and spotting lies

There is a lot of research being done on popular themes like Artificial Intelligence, among other fields. We see a constant daily stream of publications with new findings. The volume is overwhelming.

Not all publications are that great, however. It seems that anyone can publish almost anything without putting in the effort required to verify that the findings are based on real effects. There also seems to be a disregard for existing research, either because authors do not know enough about the subject or because they choose to reference only the works that fit their desired findings. This seems to be a particular problem when the subject of the study is multidisciplinary: using the language of one field can hide issues with the findings behind difficult terminology.

Reviewing multidisciplinary research must be quite hard, as no one can be expected to be an expert in every area of many subjects. Recognising bad research publications is not easy either, but there are a few things you can do. First and most importantly, everyone needs to practice skepticism when reading. Learn what bias looks like. If things look too clean and too good to be true, a warning should sound that there may be a problem. And when serious money or other high-value stakes are involved, we need to be on the lookout for conflicts of interest.

The problem is made worse by the tendency of people to uncritically repeat findings with a catchy point, regardless of the truth of those findings. Once a problematic finding is out in this way, it takes a lot of effort to refute it.

The book "Calling Bullshit: The Art of Skepticism in a Data-Driven World" by Carl T. Bergstrom and Jevin West gives a very thorough overview of the problem, and I highly recommend that anyone read it.

Fooling oneself is the easiest thing to do, and by doing so we easily fool others.