Friday, March 15, 2024

AI R Us

This should not be surprising.* AI models are a mirror of human behavior. Shit goes in, shit comes out.

AI models are trained to learn, consolidate, collate, and regurgitate. We don't like what we get, so we patch it after the fact. But that doesn't change the mirror effect. It's a Band-Aid approach: cover the abscess so we don't have to see it. Or, as the computer scientist Nikhil Garg eloquently puts it, “simply paper over the rot”.

This brings to mind a sci-fi series I'm watching on TV, "Beacon 23". AI, a staple of most sci-fi, is represented in the series by two important characters who are almost complete opposites: "Bart" and "Harmony". The latter is a logical Spock, attached to an individual human and created by a major company that controls almost everything.

Bart is an AI developed hundreds of years before the primary events of the series. He serves the structure, Beacon 23, a 'lighthouse' stationed in space near a flux of dark matter; the Beacons guide spacecraft away from detected nearby dark-matter 'clusters'.

Bart learns and adopts human nature and behavior from all the beacon "keepers" and those who visit, and he controls all the communication and mechanics of the Beacon station. As such, Bart is just like a human, with all the messiness, assumptions, and errors: he lies, whines, plots, complains, quotes Shakespeare, and often acts like a child. But Bart also significantly sets the course for what occurs on the Beacon. Harmony, by contrast, is logical and attuned only to her individual human, yet she is also capable of controlling the Beacon, including Bart (she scolds Bart many times).

Humans created AI, and in its current state in our world it is like a young Bart. That we can't see that is human blindness. AI is not, and won't be, our savior. We can't even save ourselves.

"Even though human feedback seems to be able to effectively steer the model away from overt stereotypes, the fact that the base model was trained on Internet data that includes highly racist text means that models will continue to exhibit such patterns."

* "Chatbot AI makes racist judgements on the basis of dialect," Elizabeth Gibney. Nature, March 13, 2024.
