It seems that you can’t turn your head nowadays without seeing artificial intelligence being incorporated into some software or platform. However, many leaders in the technology space have published an open letter expressing their concerns about what they describe as the “profound risks to society and humanity” that AI poses.
With tens of thousands of signatures, the short letter cautioned against the unfettered growth of AI without a greater appreciation of the potential outcomes.
These signatories are concerned for assorted reasons, pointing to risks that could materialize in the short, medium, and long term.
Because AI can reference and even manufacture false or misleading data, there is a very real chance that the information these systems produce is inaccurate, which is particularly dangerous when so many people already turn to the Internet for important information. In addition, the capabilities of these platforms can make such falsehoods far more convincing.
What’s worse, this process can be automated, so content meant to spread misinformation can be produced far more quickly and shared that much more easily.
Many technology experts are also very concerned that AI could swiftly render many current forms of gainful employment obsolete. While some knowledge-based careers require more practical skills than AI is able to replicate (yet), many could ultimately have their roles reduced significantly, if not eliminated entirely.
Yes, it sounds extreme, but some experts, specifically those from the Future of Life Institute, an organization that tries to predict “existential risks to humanity,” foresee that the largely unpredictable nature of AI could create some very serious issues as it “learns” how to write its own code. In fact, it was the Future of Life Institute that wrote the aforementioned open letter.
Likewise, the Center for AI Safety has collected signatures in support of its own brief statement:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
In short, scary stuff.
Like any other major technological advancement, artificial intelligence has some kinks to iron out, and these challenges will almost certainly result in legislation meant to put guardrails in place.
What do you think? Do you foresee anything being done to slow, or even stop, the advancement of AI?
About the author
David started LinkTech in the summer of 2014 after serving in a variety of IT leadership roles. Since 2017 he has additionally held the role of CIO for a leading local hospitality company and has been key in the explosive growth of both organizations. David keeps busy with a hearty mix of business IT strategy, project management, technical consulting, and day-to-day IT operations.