Blog
October 21, 2024

What Are The Security Risks of Microsoft 365 Co-Pilot?

With access to sensitive data stored across customers’ Microsoft ecosystems, what security risks does Co-Pilot pose? We sat down with our VP of Engineering, Artem Tabalin, to find out.


Key points:

  • This new AI tool from Microsoft is designed to assist users with everyday tasks like creating documents, summarising content, analysing data, or writing code. It is tightly integrated with Microsoft 365 applications and can leverage users' data to provide better personalised answers.
  • Co-pilot is powered by a combination of advanced AI technologies, including generative AI models, Microsoft Graph, and Microsoft Azure.
  • Microsoft Co-pilot adheres to existing privacy and security commitments and complies with various regulations like GDPR and CCPA. It uses encryption and data anonymisation techniques to protect user data.
  • While Co-pilot offers many benefits, there are also potential security risks such as data leakage and unauthorised access. Organisations need to be aware of these risks and implement appropriate measures to mitigate them.
  • Metomic can help organisations discover where sensitive data is stored across your Microsoft ecosystem and implement automated redaction rules to minimise the risk of a data breach. Get in touch to see how Metomic can support your data security policy.

In November 2023, Microsoft launched its new AI tool for Enterprise users - Microsoft 365 Co-pilot. With its announcement, Microsoft said, ‘It combines the power of large language models (LLMs) with your data in the Microsoft Graph and the Microsoft 365 apps to turn your words into the most powerful productivity tool on the planet.’ 

Embedded in the ‘Microsoft 365 apps you use every day — Word, Excel, PowerPoint, Outlook, Teams and more,’ the tool can produce first drafts of blog posts (it didn’t write this one, we promise!), create beautiful presentations, and analyse trends to create data visualisations. 

However, with access to sensitive data stored across customers’ Microsoft ecosystems, what security risks does Co-pilot pose? 

We sat down with our VP of Engineering, Artem Tabalin, to find out. 

Understanding Microsoft 365 Co-pilot

What is Microsoft Co-pilot and where can you use it?

Microsoft Co-pilot is an AI-powered tool that assists users with everyday tasks like creating documents, summarising content, analysing data, or writing code. It’s similar to ChatGPT, but the key difference is its tight integration with Microsoft 365 applications and the wider Microsoft ecosystem, which means Co-pilot can leverage users’ data, such as documents, emails, and calendars, to provide better personalised answers.

What underlying technologies power Microsoft Co-pilot, and how does it integrate with existing Microsoft 365 services?

Microsoft Co-pilot is powered by a combination of advanced AI technologies, which include:

  • Generative AI models - allow it to produce human-like text and creative content.
  • Microsoft Graph - aggregates users’ data, such as documents, emails, and calendars, from various Microsoft services.
  • Microsoft Azure - provides the necessary compute resources and infrastructure.

The tool can be used outside of the Microsoft ecosystem, but its key differentiator is how seamlessly Co-pilot integrates with other Microsoft services. That integration enables a wide range of capabilities, from real-time recommendations while working on documents, emails, or presentations in Office 365 to converting a Word document into a PowerPoint presentation with key points and visuals.
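
To make the Microsoft Graph piece a little more concrete, here is a minimal sketch of how an application can read a user’s recent files through the Graph REST API. Co-pilot’s internals aren’t public, but this is the kind of organisational data Graph exposes. The sketch assumes you have already obtained an OAuth 2.0 access token (for example via MSAL) with the Files.Read scope; the token value is a placeholder.

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def list_recent_files(access_token: str) -> list[dict]:
    """Return the signed-in user's recently used files via Microsoft Graph.

    Assumes `access_token` was issued for the Files.Read scope (e.g. via MSAL).
    """
    response = requests.get(
        f"{GRAPH_BASE}/me/drive/recent",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

# Illustrative usage with a placeholder token:
# for item in list_recent_files("<ACCESS_TOKEN>"):
#     print(item["name"], item.get("lastModifiedDateTime"))
```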

Data Handling and Privacy

How does Microsoft Co-pilot process and analyse data provided to it? Are there any privacy-preserving techniques or encryption mechanisms employed during data transmission and storage?

Microsoft Co-pilot adheres to the existing privacy and security commitments for Microsoft 365 customers. Users’ data is not used to train the underlying machine learning models, which means organisational data doesn’t influence those models. Data is encrypted both in transit and at rest, which significantly reduces the risk of unauthorised access.

Co-pilot complies with the GDPR (General Data Protection Regulation) and the California Consumer Privacy Act (CCPA), so users’ data is processed, stored, and protected in accordance with recognised legal standards. It also complies with the European Union (EU) Data Boundary, making sure that EU customers’ data doesn’t leave EU boundaries.

With the use of machine learning models and AI algorithms, how does Microsoft Co-pilot ensure data privacy and confidentiality, especially when dealing with sensitive or proprietary information?

First, Co-pilot follows existing data permissions and policies set up for an organisation, which means users will only see responses based on data they personally have access to, and the data won’t leak between users and groups. Second, Microsoft uses data anonymisation techniques to remove Personally Identifiable Information (PII) from the data used for training its AI models, also ensuring that only the minimum necessary data is processed. Finally, users are provided with robust privacy controls that allow them to manage data (including the ability to delete or export it for personal use), adjust privacy settings, and opt out of certain data processing activities.
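
Microsoft hasn’t published the details of its anonymisation pipeline, but the basic idea of stripping PII before data is used elsewhere can be illustrated with a simple, hypothetical regex-based redaction pass like the one below. Real systems use far more sophisticated detection; the patterns here are placeholders for illustration only.

```python
import re

# Hypothetical, illustrative patterns only - real PII detection is much broader.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with a labelled placeholder, e.g. [REDACTED:EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> "Contact [REDACTED:EMAIL], SSN [REDACTED:SSN]."
```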

Is MS Co-Pilot safe?

What are the potential security risks and implications of using Microsoft 365 Co-pilot within enterprise environments, particularly in terms of data leakage, unauthorised access, or compliance with regulatory standards?

There are indeed implications to be aware of. Imagine a user has access to some sensitive information in the organisation, say a spreadsheet with everyone’s salary information. Because Co-pilot follows the existing data permissions and policies, if the user has access to the spreadsheet, Co-pilot has access to it too. This can lead to that sensitive information being exposed, as AI models might include sensitive data in their outputs.

Integrating Co-pilot requires careful access controls management to make sure that only authorised users can leverage its capabilities, especially when dealing with sensitive data.
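
One practical way to review that exposure is to audit who a file is actually shared with. The hedged sketch below lists the permissions on a single OneDrive/SharePoint item via Microsoft Graph and flags broad sharing links; it assumes an access token with the Files.Read.All scope and a known item ID, both of which are placeholders here.

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def list_item_permissions(access_token: str, item_id: str) -> list[dict]:
    """List sharing permissions on a drive item the caller can read.

    Assumes `access_token` carries the Files.Read.All scope and `item_id`
    is a real driveItem ID - both are placeholders in this sketch.
    """
    response = requests.get(
        f"{GRAPH_BASE}/me/drive/items/{item_id}/permissions",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

# Flag anything shared via an anonymous or organisation-wide link:
# for perm in list_item_permissions("<ACCESS_TOKEN>", "<ITEM_ID>"):
#     link = perm.get("link", {})
#     if link.get("scope") in ("anonymous", "organization"):
#         print("Broad sharing link:", link.get("scope"), perm.get("roles"))
```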

As for compliance, it often requires detailed auditing and reporting capabilities, which can be challenging when AI models process data in opaque ways. That's why it’s critical to ensure that Co-pilot's operations meet regulatory standards.

Are there any known vulnerabilities or attack vectors associated with Microsoft Co-pilot that organisations should be aware of, and how does Microsoft address these concerns through continuous monitoring and updates?

The first vulnerability is potential data leakage caused by incorrect access controls: if a user has been granted access to sensitive information they shouldn’t have, Co-pilot can access that data on their behalf, which can lead to unexpected exposure.

Another attack vector, shared by all AI-powered solutions, is the model itself: it can be susceptible to attacks designed to manipulate its behaviour or to extract information from it, such as model inversion attacks.

Co-pilot integrates with Microsoft 365 services, which means any vulnerabilities in those services and their integrations could also potentially be exploited.

How does Microsoft Co-pilot handle sensitive data from a compliance standpoint, such as GDPR or HIPAA regulations? Are there specific features or controls available to help organisations maintain regulatory compliance while leveraging this technology?

Microsoft offers Data Processing Agreements that outline how data is processed on behalf of customers, ensuring compliance with GDPR requirements. Customers are able to choose data residency options, ensuring that data is stored in specific geographic locations to comply with regulations.

For healthcare organisations concerned with HIPAA compliance, Microsoft provides an option to enter into a Business Associate Agreement, which specifies how Protected Health Information (PHI) is handled in compliance with HIPAA.

As mentioned, organisations can configure access controls and permissions, which are respected by Co-pilot, to ensure that only authorised personnel can access sensitive data. In addition, data is encrypted both in transit and at rest to minimise the risks for data processed and generated by Co-pilot.

In the event of a security incident or data breach involving Microsoft Co-pilot, what measures are in place to facilitate incident response and mitigate potential damage to organisational assets and reputation?

As part of a broader security and privacy framework, Microsoft has protocols designed to facilitate incident response and mitigate potential damage, and these apply across all products and services, including Co-pilot. Monitoring and advanced threat detection technologies are in place across all cloud services, Co-pilot included.

Every incident is investigated to understand its scope and impact, identifying how the breach occurred and which data or systems were affected. All affected customers are then notified and given the details of the nature of the breach and the measures taken in response.

Staying secure while using Large Language Models (LLMs) 

How can companies remain secure while still using LLMs such as Microsoft Co-pilot?

Here are a few general recommendations to minimise risks and protect your data:

  1. Define sensitive data - know what kinds of sensitive data your company holds (health information, credit card data, PII like SSNs, etc.) and classify them.
  2. Discover where the data lives - identify the locations of the sensitive data you need to protect (see the sketch after this list).
  3. Figure out sharing policies - what restrictions you have in place around information sharing within and outside of the organisation, and how they apply to the sensitive data.
  4. Review access controls - see which users and groups have access to the sensitive data, and make sure there is a change management process in place.
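
As a toy illustration of steps 1 and 2, the hedged sketch below walks a local folder of exported documents and flags files containing simple sensitive-data patterns. A real discovery exercise would cover SharePoint, OneDrive, Teams, and mailboxes and use far richer classifiers; the folder path and patterns here are placeholders.

```python
import re
from pathlib import Path

# Illustrative patterns only - real classifiers cover many more data types.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_folder(root: str) -> dict[str, list[str]]:
    """Return a mapping of file path -> list of sensitive-data types found."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.txt"):  # placeholder: plain-text files only
        text = path.read_text(errors="ignore")
        hits = [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings

# Example usage with a placeholder path:
# for file, labels in scan_folder("./exported_documents").items():
#     print(file, "->", ", ".join(labels))
```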

The best approach to securing your sensitive information is to use a Data Loss Prevention (DLP) solution like Metomic, which allows organisations to discover sensitive data across Microsoft 365 services and set up automated rules that govern information sharing and minimise the risk of a data breach.

Closing thoughts

While Microsoft Co-pilot can be a powerful tool when it comes to productivity, the security risks are apparent, particularly when it comes to sensitive data.

Using a modern DLP tool can help you identify where sensitive data is stored across your Microsoft ecosystem and minimise your exposure with automated redaction rules in place.

Metomic can help organisations use SaaS, AI and Cloud tools while keeping their team secure. To find out more about how Metomic can support your data security policy, book a personalised demo or get in touch with one of our team.
