Each month, we’ll be bringing you a product update to share our newest features, team members, and more.
This month, we’re talking all about GitHub integrations, detection counts, and our new Head of Product.
Let’s get stuck in:
We’re excited to say we’ve added GitHub to our ever-growing list of integrations.
You can now scan code in GitHub for hundreds of types of secrets, using our Rules feature to receive Slack alerts for urgent discoveries so you can take immediate action.
No more worrying about the fraudulent use of accidentally committed secrets: you'll now be able to find them before PRs are merged.
Listening to customer feedback, we’ve updated our filters to include detection counts.
Previously, users could see what data had been detected but couldn't filter by how often. To quantify the severity of a risk, it's important to understand how frequently a classifier has been detected.
Having the ability to filter by detection counts means you can now quickly find the risks that matter to you.
We recently welcomed on board our new Head of Product, Sheree Buller Lim!
As Head of Product, Sheree will help connect the dots between Engineering, Sales, and Marketing, and will be responsible for shaping the future of the product. She's already been speaking to our customers to learn how we can help them find the sensitive data risks that matter in their SaaS apps and enable automatic prevention.
She says, “One of the key reasons I joined Metomic is because we're solving a problem that is only going to get bigger for both tech startups and enterprises. I had so many great conversations with the team before joining, and I knew as soon as I had a product demo that this was a product I wanted to be building. Everyone has been extremely welcoming, and I'm super excited to be leading the Product team!”
Expect to see some great product developments coming in the near future.
This month, we’ve been talking a lot about building the human firewall. A recent news article discussing employees sharing sensitive information with ChatGPT shows just how much education is needed around privacy and security.
Here’s what Sheree had to say on the issue:
"We are already seeing the innovation that AI can offer, but we need to consider the wider picture of data privacy, and that even those with good intentions can breach data privacy laws.
In light of the downtime that ChatGPT had due to some users being able to see the titles of other users' conversation history, rapidly-growing software companies need to ensure they are prioritising data security. It's too common a story that data security is not enough of a priority, until it goes wrong."
What do you think? Let us know.
See you next month for another update.