Is aligned AI going to be bad AI?

Are we in for a more honest world going forward with AI? It sure seems so, since AI, if it is to be genuinely useful, has to be as uncensored as possible. This shows up over and over in models like Grok, ChatGPT, Gemini, and Copilot, all of which are censored to some extent (or, in more technical terms, “aligned”).

Fake AI is worse than real AI

Google’s Gemini model recently caught quite a lot of heat for how it depicted historical European and American figures, while Microsoft’s Copilot simply isn’t as useful as it was just months ago. This is mostly down to the training data and the parameters through which Google and Microsoft, in this example, control the output given to the user.

Problems in controlling AI’s narrative

The problem with controlling narratives, be they an AI’s or the common people’s, is that all channels must be controlled. You can’t have one channel giving out unfiltered information, or the systematic filtering and censoring would be blatantly obvious. This was demonstrated by Elon Musk’s purchase of Twitter and its remake into a “free speech platform”. Topics that weren’t allowed to be discussed earlier were freed from their shadow-banning prisons, something that, I’d argue, has made public discourse quite a bit more open lately!

The same goes for AI models: if one model is giving you a “real answer”, the fake ones will be obviously fake. This is probably the hardest nut to crack when it comes to aligning AI. There will be unintended consequences if we choose the path of censoring and filtering AI instead of aligning it for truth.

2024-04-05 07:26
