In this article, we delve into the aftermath of the AI boom, examining the surge in public distrust, implementation hurdles, and the sobering realities faced by industries as they grapple with rushed adoption and ethical controversies.
The AI boom has been quite the ride so far, hasn't it?
Remember the buzz, the excitement, and the promises of a smarter, more efficient world when ChatGPT exploded into the public consciousness in late 2022 and early 2023?
Well, the party's over, and now we're waking up with a bit of a hangover.
It's time to face the realities of AI: overload, fatigue, concerns about data security, and growing public distrust have taken some of the lustre off the technology that was supposed to change everything.
None of this is to say that we should abandon AI; it has, after all, had a major impact on how society does things. It's automated a lot of repetitive tasks, it can analyse huge amounts of data at speed, and it's giving beleaguered security teams a much-needed helping hand in the fight against escalating and evolving cyber threats.
But implementing AI needs a clear strategy, a focus on ethical practices, and assurances that it's truly adding value to our businesses and society.
Generative AI isn't going anywhere, but now, sober and nursing a pounding headache, we're taking a much more honest look at it - without the beer goggles.
Let's tackle the big issue first: the growing public distrust in AI. A recent report revealed that over half of Americans believe AI companies prioritise profits over ethics and safety.
And a survey conducted by the UK government shows that nearly half of Britons are worried about AI stealing their jobs.
We'll get into the hard numbers behind that later, but it's worth pointing out that this scepticism isn't coming out of nowhere; it's driven by real incidents and controversies that have eroded trust in the industry.
Take OpenAI, for example. They were recently accused of copying Scarlett Johansson's voice for their AI models, a claim they denied. This peculiar case of life imitating art - Johansson voiced the AI character Samantha in the movie Her - was certainly not helped by the fact that it subsequently emerged that OpenAI had approached the actress multiple times, only to be refused.
And while they have since dropped the offending voice, called "Sky", this controversy was a major blow to their reputation, highlighting concerns over intellectual property and ethical boundaries.
On top of that, their Head of Alignment, Jan Leike, left the company, saying that safety had taken a back seat to flashy new products. This was further compounded by the news that Leike's team was disbanded after his departure.
And it wasn't just OpenAI's safety team that was defanged.
Other companies, including Google, Amazon, Microsoft, X and Meta, have also restructured or scaled back their AI safety efforts - with Meta already having plenty of skeletons in its closet when it comes to public trust, including the only recently settled Cambridge Analytica scandal.
Then there's Slack's recent change to its data policy, under which user data is used to train AI models unless users opt out. This default setting raised ethical concerns around privacy, transparency, and user consent. Such practices feed the growing mistrust, as users feel they have little control over their own data.
These kinds of stories feed into the public's fear that AI companies, or those leveraging AI, are far more interested in profits than in doing what's right.
The term "AI ethics-washing" is becoming commonâcompanies make grand statements about ethics without following through in meaningful ways.
People are becoming increasingly wary of whether these companies are genuinely committed to ethical practices or just paying lip service to avoid scrutiny. And the numbers don't paint a pretty picture.
A survey from the Markkula Center for Applied Ethics showed that 68% of Americans are worried about AIâs impact on humanity, and 86% think AI companies should be regulated.
And according to the "Public attitudes to data and AI: Tracker survey" from GOV.UK, 31% of respondents believe AI will have a negative impact on fairness in society, while 45% fear AI will lead to job losses.
AI's reputation as a relentless, job-stealing juggernaut isn't helped by high-profile news of companies like Buzzfeed, Dukaan, and IBM laying off staff and replacing them with AI tools.
Metomic's 2024 CISO survey further revealed that the apprehension isn't limited to the general public.
Adding to our AI hangover is the phenomenon of AI overload and fatigue. Since late 2022, we've seen businesses rushing to adopt AI, hoping to stay ahead of the curve.
But as the dust settles, many are realising that AI isnât the miracle cure they thought it would be. Instead, we're dealing with the practical challenges of AI implementation.
We recently attended an insightful talk at the Infosecurity Europe conference: "Wading through AI Overload", moderated by David Savage.
During the discussion, Forrester Senior Analyst Tope Olufon, Seidea CEO Stephanie Itimi, and Generative AI and Deepfake expert Henry Ajder shared their expertise.
Among the key points highlighted by the panellists were the challenges of navigating AI overload and the importance of strategic AI implementation.
The sobering truth is this: AI is not a one-size-fits-all solution. It requires careful planning, ongoing management, and sometimes, a lot more work than expected.
It's no wonder we're experiencing AI fatigue as the initial excitement wears off and the hard work begins.
So, what's the real state of AI in the business world today? It's a bit like cleaning up after a particularly wild party. The excitement has faded, and now we need to take a realistic look at the situation.
AI has been an incredible disruptor and a positive force in many ways, transforming industries and driving innovation.
However, we need to be realistic about what AI can and can't do. Blindly implementing AI for the sake of it is not the answer. The party might be over, but generative AI is clearly sticking around.
Taking the right steps to implement it where relevant, and in a responsible and ethical manner, should ensure that it'll be welcome at the next technology all-night rager.
Generative AI is here to stay, and your employees are going to use it despite the risks. Book a personalised demo or get in touch with our team today to learn how Metomic can keep sensitive data safe while using AI tools.