The discussion about artificial intelligence can seem overwhelming at times. And I’m sympathetic to those who feel as though all of the voices blend together.
If you’re one of these people, I suspect Tuesday’s Senate hearing on generative AI tools, like ChatGPT, didn’t help. You may have heard the main takeaway, though: that OpenAI CEO Sam Altman is, like, super worried about the scary things AI could do in the future.
But it’s time to cut through the noise.
There’s one important distinction to keep in mind as you follow the ongoing public discussion about AI, in all its good, bad and ugly forms.
Here it is: Most people steeped in the discussion fall into one of two camps. One consists of Big Tech elites and their sympathizers — people like Elon Musk, who have a tremendous stake (maybe political, maybe financial) in maximizing their personal power over tech industries, and artificial intelligence in particular. These people often swing like a pendulum between two rhetorical extremes: Pollyannaish portrayals of the positive things that AI will supposedly achieve, and grim fatalism about AI’s alleged ability to end the world. And no matter which line this group (disproportionately consisting of rich white dudes) is pushing, the suggestions are usually the same: AI is a technology yet to be realized, and these dudes know best who ought to control it — and how — before it gets too unwieldy.
In the other camp, we have AI ethicists, whose conversations are more tethered to reality. These are people I’ve mentioned in previous posts, like Joy Buolamwini and Timnit Gebru. They talk about artificial intelligence and its positive potential; they talk about the importance of guaranteeing equal access to AI; and they talk about how the technology is often built in a way that disfavors marginalized groups, such as Black women. Where the first camp obsesses over the coming future, this second camp talks about the harms in the here and now, evident in everything from hiring practices to housing.
This distinction is the prime reason Gebru and some of her fellow AI ethics experts were critical of a letter signed by Musk and other Big Tech elites urgently calling for a temporary halt in the development of powerful AI tools until Congress passes regulation.
The letter cited these experts’ research. And to some people, this may have seemed like Big Tech was being responsible. But for many techies in the know, the letter was akin to oil industry executives talking about solving climate change. As in: These people, with their clear incentives to self-deal and their obvious blind spots, are not the ones who should be driving this discussion.
Altman, in his role as CEO of the company that created ChatGPT, is one of those people. But many lawmakers don’t appear to know this. In their defense, some previous tech hearings have gone so poorly that many senators may have hoped just to make it through this one without looking like complete fools.