By Michael Hodson, Security Architect and AI Strategist at Digital Resilience
The problems of privacy discussed in this blog series are largely enabled by modern technology. The proliferation of internet technologies, such as social media, eCommerce, and online banking, has multiplied both the types of information that can be collected and the ways of collecting it. Moreover, modern technology has vastly increased the volume of data we can collect and the speed at which we can analyse it. Big Data and Machine Learning combine to allow companies to analyse the data of millions of people in a fraction of the time it would have taken teams of thousands. It is through modern technology that organisations like Facebook can infer the mood of teenagers, and Cambridge Analytica can build psychological profiles on every adult in the United States. These feats would be inconceivable through the manual analysis of, say, letters and phone calls.
With Artificial Intelligence making headlines since the huge leaps forward produced by OpenAI in 2022 with ChatGPT, privacy advocates have been trying to understand how this transformative technology will affect privacy. There is certainly the potential to take another massive leap forward in the ways we collect information, the speed at which we can analyse it, and the richness of information we can infer from it. While this is an enormous topic in and of itself, I will briefly discuss two subjects that elucidate the key issues.
Surveillance
Surveillance is one of the key spaces in which AI is poised to undermine privacy. Not only does it have the power to create the kind of dystopia Orwell warned us about, but it also has the potential to supercharge surveillance capitalism.
While Machine Learning has long played a part in analysing data to predict user behaviours, recent advances in AI, and its ability to find patterns in the enormous amounts of data collected about us, mean that companies with sufficient data may soon know us better than we know ourselves. These technologies were key in enabling Cambridge Analytica to send targeted advertising to individuals and undermine the democratic process, and they are only getting better.
Personal AI Assistants
Sam Altman’s vision for the future of AI is that we will all have personal AI assistants, or agents, that act as a “super-competent colleague that knows absolutely everything about my whole life — every email, every conversation I’ve ever had.”
This may sound very useful. But from a privacy perspective, we are talking about an AI system with access to an extraordinary amount of information about you, and with the potential to infer things about you and predict what you want, what you need, and what you will do next. That next level of data collection is one thing; who owns the data, what they will do with it, and how they will protect it is quite another. There is huge potential for all the problems we have discussed.
Privacy – It’s Everyone’s Business
The theme of Privacy Awareness Week this year certainly rings true: Privacy is everyone’s business. Individuals, organisations, regulators, and governments all have a role to play. Given the work that we do at Digital Resilience, I will focus on what organisations can do to enhance Information Privacy.
While I will focus on organisations, it would be remiss of me not to point out the importance of individual contributions. As individuals, we must continue to show that we value privacy and take steps to protect our information online, including being conscious of what we share and adopting privacy-enhancing alternatives to common tools such as email and search. If you would like to take privacy more seriously from an individual perspective, I highly recommend reading Privacy is Power by Carissa Véliz.
What Organisations Must Do
“Data is the new oil” is now a common adage. This is perhaps even more relevant in the age of Artificial Intelligence. Data is the fundamental building block of AI. We mine it to train models and produce new and interesting information. Without massive amounts of data, the current breakthroughs in AI would not have been possible — and the benefits would not be realised.
But if data is the new oil, then Personally Identifiable Information (PII) is the new toxic waste. You do not want any more in your organisation than is absolutely necessary. Because when this stuff leaks, it’s sure to cause a disaster. The Medibank and Optus data breaches are two well-known examples in Australia.

The disaster can take many forms: reputational damage, extortion for ransom, class action lawsuits, or fines from the regulator. The OAIC certainly seems keen to test its new statutory tort. And more regulation is coming. The Privacy Commissioner appears to be focusing on transparency in collection and use, as well as on what constitutes a fair and reasonable justification for collection. All organisations should be watching to see what comes of the Tranche 2 reforms to the Privacy Act.
As a cybersecurity professional, you may expect me to emphasise the importance of good cybersecurity. And while I do, it is important to note that cybersecurity is secondary to privacy. While we need to ensure that we secure the data we collect, collecting and storing the right data in the first place is more important. Once you have collected private information, you are the custodian of it — and this comes with the obligation to protect it.
Data Minimisation
Data minimisation is perhaps the most important privacy concept for organisations to understand. If we treat PII as toxic waste, we want to collect and store only what is strictly necessary for our goals, and no more. Once we have used it for its intended purpose, we need to get rid of it. Ask yourself: do you really need to keep all that raw data? Would a statistical summary be sufficient? Could it be de-identified? Are you sure you are destroying data you no longer need? Or, like Optus, will a breach needlessly expose the data of customers from 10 years ago? The answers will inform your data lifecycle strategy, as the sketch below illustrates.
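To make this concrete, here is a minimal sketch of what minimisation can look like in practice: enforcing a retention period and replacing raw, identifiable records with a statistical summary. The column names, the seven-year retention period, and the sample data are hypothetical placeholders; a real implementation would hang off your records-management policy.

```python
# A minimal sketch of data minimisation, assuming a pandas DataFrame of
# customer transactions. Column names, the retention period, and the sample
# data are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import pandas as pd

RETENTION_PERIOD = timedelta(days=7 * 365)  # e.g. retain for 7 years, then destroy

def purge_expired(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows older than the retention period instead of keeping them forever."""
    cutoff = datetime.now(timezone.utc) - RETENTION_PERIOD
    return df[df["created_at"] >= cutoff]

def summarise(df: pd.DataFrame) -> pd.DataFrame:
    """Replace identifiable per-customer rows with an aggregate that can
    serve the same analytical purpose."""
    return (
        df.groupby("postcode")["amount"]
        .agg(["count", "mean", "sum"])
        .reset_index()
    )

raw = pd.DataFrame(
    {
        "customer_id": [101, 102, 103],
        "postcode": ["2000", "2000", "3000"],
        "amount": [120.0, 80.0, 200.0],
        "created_at": [datetime.now(timezone.utc)] * 3,
    }
)

# Keep the summary for analytics; the raw, identifiable records are destroyed.
summary = summarise(purge_expired(raw))
del raw
print(summary)
```

The mindset matters more than the mechanics: once the summary serves the purpose, the identifiable rows are a liability, not an asset.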

Data Management
Data management is a topic worthy of its own blog series. I will simply note here the importance of a solid Data Management framework that includes data classification and handling, data discovery, and, as mentioned above, data lifecycle management. Do your staff know how to identify and handle sensitive information? Do you know where sensitive information is stored in your organisation and who owns it? Are you sure the data you are feeding into your AI models doesn’t contain PII?
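As an illustration of that last question, below is a minimal sketch of a pattern-based PII check you might run over text before it reaches a training pipeline. The patterns and the scan_for_pii helper are hypothetical and deliberately simplistic; purpose-built data discovery tools cover far more identifier types and use validation rather than regexes alone.

```python
# A minimal sketch of a pattern-based PII check run before data reaches an
# AI training pipeline. The patterns below are hypothetical and deliberately
# simplistic; real discovery tooling covers far more identifier types.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "au_mobile": re.compile(r"\b(?:\+?61|0)4\d{8}\b"),
    "nine_digit_id": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{3}\b"),  # TFN-shaped
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    """Return any PII-shaped strings found in the text, keyed by pattern name."""
    return {
        name: matches
        for name, pattern in PII_PATTERNS.items()
        if (matches := pattern.findall(text))
    }

record = "Contact Jane on 0412345678 or jane@example.com before training."
findings = scan_for_pii(record)
if findings:
    print(f"PII detected, exclude from training set: {findings}")
```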
Cybersecurity
Once we know what data we have that needs to be protected, and where it is, we need to ensure robust cybersecurity controls are in place. Paramount in this space are the controls we use to enforce the principles of Least Privilege and Need to Know. That is: how do we ensure that only parties with a legitimate need can access information, and only the information they need?
This is particularly important as we look to integrate AI agents that may quickly exploit gaps in our Identity and Access Management structures. Are you sure Copilot can’t access the contracts in the HR folder?
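To illustrate the principle, here is a minimal sketch of a deny-by-default, need-to-know check applied uniformly to human users and AI agents. The principals, resource names, and in-memory grant table are hypothetical; in a real system this decision belongs to your IAM platform, not application code.

```python
# A minimal sketch of a deny-by-default, need-to-know access check applied
# uniformly to humans and AI agents. The principals, resource names, and
# in-memory grant table are hypothetical stand-ins for a real IAM platform.
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    name: str
    kind: str  # "human" or "ai_agent"; agents get no implicit access

# Explicit allow-list: a principal can read a resource only if granted here.
GRANTS: dict[str, set[str]] = {
    "hr_manager": {"hr/contracts", "hr/payroll"},
    "copilot_agent": {"public/wiki"},  # deliberately excludes hr/*
}

def can_read(principal: Principal, resource: str) -> bool:
    """Deny by default: access requires an explicit grant."""
    return resource in GRANTS.get(principal.name, set())

agent = Principal("copilot_agent", "ai_agent")
assert not can_read(agent, "hr/contracts")  # the HR folder stays off-limits
assert can_read(agent, "public/wiki")
```

The design point worth copying is that the agent is a first-class principal with its own explicit grants, rather than inheriting the broad permissions of whichever user or service account launched it.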
Of course, these are large and complex topics, and I have merely scratched the surface here. For further assistance on what you must do to protect the private information for which you are a custodian, please reach out to us at Digital Resilience.