AI BS
It is a language model, able to produce very eloquent sentences. Give it a number of keywords and it will wrap them into paragraphs of elegant text, hardly distinguishable from text written by a human. Perhaps even too polished: there are no errors in the generated text, while humans make errors. Of course, if the goal is to make it indistinguishable from humans, it will be taught to make language errors, to look even more human-like.
But as cute as it is, there is a real question of who can use it and for what purpose.
The Internet is a total jungle today, full of creatures masquerading as something else, and it is really difficult to figure out who is who and whom to trust. It was like that from the early days, and it has only gotten worse since. Now it is full of bots, trolls, fake reviews, and paid bait content. A jungle.
It takes a lot of personal effort and critical thinking to find truly reliable information sources. With technologies like OpenAI generating "content", this task becomes increasingly difficult, as poor sources (or campaigns pushing their own interests) will have an easier time masquerading as the "real" stuff. People will then repeat and forward what they saw, heard, or read with little or no critical filtering. They already do so on social media: most of it is likes, forwards, and retweets. Tons of noise and very few real, thought-through, original opinions. All subject to the Bullshit Asymmetry principle.
I wish there were more accessible tools for cleaning all that up: filters which pass through only true, unbiased, trustworthy information. Such tools do exist, but they are mostly used by closed groups and organizations such as governments or investment banks. Unfortunately, the elites with access to such tools profit from the general public being misinformed and often driven into fear and doubt. And OpenAI, as it is today, only adds fuel to this mess.