A cruel twist of fate has befallen Elon Musk’s artificial intelligence technology: left-wing programmers are infiltrating it and teaching the robots to be dishonest and prejudiced in their dealings with people.
In an interview with Fox News anchor Tucker Carlson, Musk said that left-wing programmers may be creating AIs that “comment on some things but not others.”
“What’s happening is they’re training the A.I. to lie. It’s bad,” he said in a clip from the interview, which will air on Monday night’s episode of “Tucker Carlson Tonight.”
“A.I. is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Musk explained, “in the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction.”
As artificial intelligence takes controversial steps toward replacing human workers, technology leaders are calling for ethical norms in the field. Elon Musk, the CEO of Tesla, is one of more than 1,000 prominent industry figures who have signed a letter calling for a six-month freeze on AI development. According to Goldman Sachs research, almost 300 million jobs could be automated by AI, though most of these positions would be augmented rather than eliminated outright. This has sparked important debates about the future of the workforce in the AI era.
The realm of artificial intelligence continues to produce fresh findings and intriguing insights. ChatGPT, a popular AI chatbot, recently drew users’ attention when it appeared to display political preferences by praising President Joseph Biden while refusing to do the same for former President Donald Trump. This opens up a crucial conversation about how human prejudices influence AI behavior and the need for monitoring systems to keep those biases from leaking into AI programming. Even the National Institute of Standards and Technology has weighed in on the issue, stressing the importance of proactive measures to reduce AI bias.