Blog
August 28, 2024

The Party's Over, Back to Work! The Hangover After the AI Party

In this article, we delve into the aftermath of the AI boom, examining the surge of public distrust, implementation hurdles, and the sobering realities faced by industries as they grapple with rushed adoption and ethical controversies.

The AI boom has been quite the ride so far, hasn’t it?

Remember the buzz, the excitement, and the promises of a smarter, more efficient world when ChatGPT exploded into the public consciousness in late 2022 and early 2023?

Well, the party's over, and now we’re waking up with a bit of a hangover.

It's time to face the realities of AI: overload, fatigue, concerns about data security, and growing public distrust have taken some of the lustre off the technology that was supposed to change everything.

None of this is to say that we should abandon AI; it has, after all, had a major impact on how society operates. It's automated a lot of repetitive tasks, it can analyse huge amounts of data at speed, and it's giving beleaguered security teams a much-needed helping hand in the fight against escalating and evolving cyber threats.

But implementing AI needs a clear strategy, a focus on ethical practices, and assurance that it's truly adding value to our businesses and society.

Generative AI isn’t going anywhere, but now, waking up sober and with a pounding headache, we’re taking a much more honest look at it - without the beer goggles.

Public distrust in AI: The elephant in the room

Let's tackle the big issue first: the growing public distrust in AI. A recent report revealed that over half of Americans believe AI companies prioritise profits over ethics and safety.

And a survey conducted by the UK government shows that nearly half of Britons are worried about AI stealing their jobs.

We'll get into the hard numbers later, but this scepticism isn't coming out of nowhere; it's driven by real incidents and controversies that have eroded trust in the industry.

The OpenAI controversy

Take OpenAI, for example. They were recently accused of copying Scarlett Johansson's voice for their AI models, a claim they denied. This peculiar case of life imitating art - Johansson voiced the AI character Samantha in the movie Her - was certainly not helped by subsequent revelations that OpenAI had approached the actress multiple times, only to be refused.

And while they have since dropped the offending voice, known as "Sky", the controversy was a major blow to their reputation, highlighting concerns over intellectual property and ethical boundaries.

On top of that, their Head of Alignment, Jan Leike, left the company, saying that safety had taken a backseat to shiny new products. This was compounded by the news that Leike's team was disbanded after his departure.

And it wasn’t just OpenAI’s safety team that was defanged.

Other companies, including Google, Amazon, Microsoft, X and Meta, also restructured or scaled back their AI safety efforts - and Meta already has plenty of skeletons in its closet where public trust is concerned, including the only recently settled Cambridge Analytica scandal.

Then there's Slack's recent change to its data policy, under which user data is used to train AI models unless users opt out. This default setting raised ethical concerns around privacy, transparency, and user consent. Such practices feed the growing mistrust, as users feel they have little control over their own data.

These kinds of stories feed into the public’s fear that AI companies, or those leveraging AI, are far more interested in profits than in doing what’s right.

The numbers don’t lie

The term "AI ethics-washing" is becoming common—companies make grand statements about ethics without following through in meaningful ways.

People are becoming increasingly wary of whether these companies are genuinely committed to ethical practices or just paying lip service to avoid scrutiny. And the numbers don’t paint a pretty picture.

A survey from the Markkula Center for Applied Ethics showed that 68% of Americans are worried about AI’s impact on humanity, and 86% think AI companies should be regulated.

And according to the "Public attitudes to data and AI: Tracker survey" from GOV.UK, 31% of respondents believe AI will have a negative impact on fairness in society, while 45% fear AI will lead to job losses.

AI's reputation as a relentless, job-stealing juggernaut isn't helped by high-profile news of companies like BuzzFeed, Dukaan, and IBM laying off staff and replacing them with AI tools.

Metomic’s 2024 CISO survey further revealed that the apprehension isn't limited to the general public:

  • Two-thirds of CISOs and IT security leaders are most concerned about generative AI creating security breaches.
  • More than half worry about employees uploading sensitive business data to large language models, risking exposure of confidential information (see the sketch after this list).
  • Four-fifths plan to use AI tools to combat AI-based security threats.
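
That second concern is concrete enough to sketch. Below is a minimal, hypothetical example in Python of the kind of guardrail a security team might put in front of an LLM integration: a crude redaction pass that swaps obvious identifiers for placeholder tokens before a prompt ever leaves the business. The patterns, names, and example values are illustrative assumptions, not a production DLP engine.

```python
import re

# Hypothetical patterns - a real deployment would lean on a proper data
# classification engine, but these catch the most obvious identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # National Insurance number
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # crude card-number match
}

def redact(prompt: str) -> str:
    """Replace likely sensitive values with placeholder tokens before the
    prompt is sent to any third-party LLM endpoint."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    risky = "Summarise: customer jane.doe@example.com paid with 4111 1111 1111 1111."
    print(redact(risky))
    # Summarise: customer [EMAIL REDACTED] paid with [CARD REDACTED].
```

Even a rough filter like this changes the default from "everything goes to the model" to "nothing sensitive goes unless it slips past the patterns", which is a far better failure mode.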

AI Overload and Fatigue

Adding to our AI hangover is the phenomenon of AI overload and fatigue. Since late 2022, we’ve seen businesses rushing to adopt AI, hoping to stay ahead of the curve.

But as the dust settles, many are realising that AI isn’t the miracle cure they thought it would be. Instead, we're dealing with the practical challenges of AI implementation.

Insights from Infosec Europe

We recently attended an insightful talk called “Wading through AI Overload” at the Infosecurity Europe conference, moderated by David Savage.

During the discussion, Forrester senior analyst Tope Olufon, Seidea CEO Stephanie Itimi, and generative AI and deepfake expert Henry Ajder shared their expertise.

Among the key points highlighted by the panellists were the challenges of navigating AI overload and the importance of strategic AI implementation:

  1. Nail down specific use cases: AI works best when you have a clear, specific use case. It’s not a magic wand for all problems. Think of it like a tool in a toolbox—it’s great for some jobs, but not for everything.
  2. Protect your data: Especially for small businesses, data protection is crucial. Where is your data being stored? Is it secure? Does it comply with industry regulations? These are questions every business should be asking.
  3. ROI isn’t always obvious: Many companies aren’t seeing the big returns they expected from AI. The hype doesn’t always match the reality, and it’s important to measure actual benefits.
  4. Know your data: Understanding what data you’re storing and what AI vendors can see is essential. This awareness helps mitigate risks and ensures you’re making informed decisions (see the sketch after this list).
  5. Upskill your team: Implementing AI often means your team needs new skills. It’s not just about having the technology but knowing how to use it safely and effectively.
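
As a toy illustration of that fourth point, here is a hypothetical first-pass data inventory in Python: walk a shared folder, run a couple of crude detectors, and at least learn which files hold the kind of data an AI vendor might end up seeing. The paths, patterns, and narrow scope are assumptions made for the example only.

```python
import re
from pathlib import Path

# Hypothetical detectors - real classification goes far beyond regexes, but
# even a crude pass answers "what sensitive data are we actually storing?"
DETECTORS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
}

def inventory(root: str) -> dict[str, list[str]]:
    """Map each detector name to the files under `root` that trip it."""
    findings: dict[str, list[str]] = {name: [] for name in DETECTORS}
    for path in Path(root).rglob("*.txt"):  # scope kept deliberately narrow
        text = path.read_text(errors="ignore")
        for name, pattern in DETECTORS.items():
            if pattern.search(text):
                findings[name].append(str(path))
    return findings

# Usage, with a made-up path: inventory("/srv/shared-drive")
# -> {"email": ["/srv/shared-drive/leads.txt", ...], "uk_phone": [...]}
```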

The sobering truth is this: AI is not a one-size-fits-all solution. It requires careful planning, ongoing management, and sometimes, a lot more work than expected.

It’s no wonder we’re experiencing AI fatigue as the initial excitement wears off and the hard work begins.

The reality of the AI landscape

So, what’s the real state of AI in the business world today? It's a bit like cleaning up after a particularly wild party. The excitement has faded, and now we need to take a realistic look at the situation.

  1. Trust issues: Just like the mess left behind, there are significant trust issues that need addressing. Companies must prioritise transparency and ethical practices to rebuild public confidence.
  2. Data protection: Think of the scattered debris—data protection is a major concern, and businesses must ensure they handle data responsibly and securely.
  3. Fatigue and burnout: Everyone's tired after the party. The initial AI excitement has led to fatigue, and now companies need to manage expectations and focus on sustainable, realistic applications of AI.
  4. Real ROI: Assessing the cost of the party is essential. Businesses need to evaluate the real return on investment from their AI initiatives; simply slapping AI onto something doesn't guarantee profit (see the sketch after this list).
  5. Skill gaps: Imagine trying to clean up without the right tools. Companies need to upskill their teams to handle AI effectively, ensuring they understand the technology and its implications.
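
To put rough numbers on that fourth point, here is a back-of-the-envelope sketch with entirely made-up figures - the arithmetic, not the numbers, is the point.

```python
def ai_roi(annual_benefit: float, annual_cost: float) -> float:
    """Simple ROI: (benefit - cost) / cost, as a fraction of spend."""
    return (annual_benefit - annual_cost) / annual_cost

# Illustrative numbers only: a tool that saves 500 analyst-hours a year at
# £60/hour, set against £40k in licences, integration, and training.
benefit = 500 * 60   # £30,000 of time saved
cost = 40_000        # £40,000 total cost of ownership
print(f"ROI: {ai_roi(benefit, cost):.0%}")  # ROI: -25%, a net loss
```

On those assumptions the initiative loses money; the sums only turn positive when the benefit is real, measured, and larger than the full cost of ownership.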

Conclusion

AI has been an incredible disruptor and a positive force in many ways, transforming industries and driving innovation.

However, we need to be realistic about what AI can and can’t do. Blindly implementing AI for the sake of it is not the answer. The party might be over, but generative AI is clearly sticking around.

Taking the right steps to implement it where relevant, and in a responsible and ethical manner, should ensure that it'll be welcome at the next all-night technology rager.

Generative AI is here to stay, and your employees are going to use it despite the risks. Take a virtual platform tour to see how Metomic can keep sensitive data safe while using AI tools.
