Tech Titans Consider Putting AI on a Leash

Hello all. I recently read an interesting article in The Economist about how large companies are reacting to the institutionalization of AI; this is a short summary of that article. Hope you enjoy!

In the past few years, the artificial intelligence transformation has begun, and its vast potential to revolutionize a range of industries is widely acknowledged within the industry itself. But AI is being integrated into more and more parts of our daily lives, and with that has come a surge of concern over the ethics of AI and what some see as its inevitable risks. Interestingly, it is not just policymakers and researchers who are pushing for more regulation of AI; several big tech companies have taken an interest in the issue, too, and have even begun to support regulatory efforts.

With the recent increase in AI applications, we can’t help but ask how far we’re willing to let AI handle important social tasks. In just the past few years, two types of AI—facial recognition and predictive AI—have been shopped around as great new tools for law enforcement. Their implementation has raised serious questions about bias, false positives, and the morality of using AI in life-and-death situations. Overall, we’re left with an uncomfortable gray area: can AI really step into vital social roles and perform them well enough to earn our trust?

Creating an Unlevel Playing Field

Major technology companies have fallen head over heels for artificial intelligence. Unlike human workers, AI doesn’t need breaks, sleep, or anything else that cuts into workplace productivity. Instead, you can find tech giants like Apple, Microsoft, and Google searching high and low for the next big thing in AI to ensure they come out on top. With this scavenger hunt, however, comes a worrying issue: the patenting of AI technologies. IBM currently holds the most AI-related patents, with over 16,000; Intel, Samsung, and Microsoft follow close behind. As companies rush to stake their claims in the AI gold rush, they are creating an unfair, uncompetitive landscape for other businesses.

We must look forward and consider how we might regulate AI in the not-so-distant future. OpenAI executive Sam Altman took this concern to Congress on May 16, telling lawmakers, in essence: “Look, folks, AI is powerful, it’s full of faults, and it can do a lot of harm if we don’t get it under control. And, by the way, even the big tech companies with vested interests in pushing innovation forward are nervous about this whole situation.” Running with this, of course, is Senate Majority Leader Chuck Schumer, who is calling for legislative action to establish “regulatory guard rails” for artificial intelligence.

Preventing More Stringent Regulations

Public sentiment regarding the power and influence of big tech has turned sharply critical. Several well-known incidents haven’t helped: Zillow’s disaster using AI to predict house prices, the failure of an automated system to report tens of thousands of COVID cases in the UK, and Microsoft’s chatbot that turned racist because, as we learned, training data really matters. In supporting regulation now, these companies are acting, essentially, as an author in The Atlantic put it, “to avoid being the bad guys who do nothing to help control something that, if uncontrolled, could really mess things up.”

As AI continues to advance, many questions demand answers. How can we ensure that AI can carry out enormous responsibilities, like law enforcement? What will be the human cost of AI? What will be its impact on our businesses? While we can only wait to see how all of this plays out, I believe the current moment offers a real opportunity: big tech and politicians around the globe are reaching a consensus on how to regulate this new technology, and so far, at least, there have been no life-threatening incidents involving AI. To my mind, there are two possible outcomes. In one, a rather frightening scenario, AI could be used as a tool of totalitarian social control. So, what about the other outcome?

References:

https://hbr.org/2023/05/who-is-going-to-regulate-ai

https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html
