On Data Privacy Day, observed each year on January 28th, Australians and businesses are reminded of the vital importance of privacy.
The day emphasises the need for proactive measures to protect personal customer information, respect it, and be transparent about how it is collected. It prompts reflection, given the rising frequency of data breaches and cyber-attacks affecting organisations and customers. A recent privacy survey by the OAIC highlights significant shifts in people’s attitudes: a notable 60% distrust how organisations handle their data, and 58% are unaware of how their data is used. As Australians become more aware of data management and privacy vulnerabilities, the ongoing relevance of Data Privacy Day in the 2024 business landscape becomes increasingly evident.
Given the severe impact of poor data management and the growing sophistication of cyber threats, it’s crucial for business leaders to take decisive steps. Identifying vulnerabilities and implementing effective measures to safeguard data is imperative in the face of evolving cyber threats. Our experts share insights to emphasize the paramount importance of data privacy.
Pete Murray, Managing Director ANZ at Veritas Technologies
“Ironically, Data Privacy Day is a reminder that data privacy isn’t something a business can achieve in a single day at all. Far from that, it’s a continual process that requires vigilance, 24/7/365. Top of mind this year is the impact artificial intelligence (AI) is having on data privacy. AI-powered data management can help improve data privacy and associated regulatory compliance, yet bad actors are using generative AI (GenAI) to create more sophisticated attacks. While GenAI is also making employees more efficient, guardrails are needed to help prevent accidentally leaking sensitive information. Considering these and other developments on the horizon, data privacy in 2024 is more important than ever.”
“Data privacy compliance continues to become more complex. New laws putting guardrails on the use of personal data in the large language models (LLMs) behind GenAI tools are gaining steam. For example, just last week, the Australian Government announced it would strengthen regulatory frameworks and existing laws to mitigate emerging risks of technologies such as AI. With approaches to regulation and compliance differing across continental, country and state borders, businesses and organisations will find the already complex regulatory environment even harder to navigate without help.”
“Whether to implement GenAI isn’t really a question. The value it provides employees to streamline their jobs means it’s almost a foregone conclusion. But that must be balanced with the risks GenAI could pose when proprietary or other potentially sensitive information is fed into these systems. To ensure they remain compliant with data privacy standards, whether or not regulatory bodies enact AI-specific rules, IT leaders need to provide guardrails to employees that limit the likelihood of their accidentally exposing something they shouldn’t. This isn’t the time for IT leaders to sit back and rely solely on regulatory organisations. They should be proactively reviewing their systems and processes and identifying potential areas of vulnerability.”
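As a purely illustrative sketch of what such a guardrail could look like (not any specific vendor tool — the pattern names and regexes here are hypothetical and far simpler than real data-loss-prevention detection), a client-side check might scan a prompt for sensitive-looking content before it ever reaches an external GenAI service:

```python
import re

# Illustrative patterns only; real DLP tooling uses far richer detection.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    """Block the prompt client-side if it appears to contain sensitive data."""
    findings = check_prompt(prompt)
    if findings:
        return "Blocked: prompt appears to contain " + ", ".join(findings)
    # ...otherwise forward the prompt to the GenAI service here...
    return "Sent"
```

A filter like this would sit between employees and the GenAI tool, so an accidental paste of card numbers or credentials is caught before it leaves the organisation; it complements, rather than replaces, the policy and training measures discussed above.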
“AI is making the cyber threat landscape more dangerous. Cybercriminals are already using AI to improve their ransomware capabilities and launch more sophisticated attacks that threaten data privacy. IT leaders need to think about how they, too, should use AI-powered solutions and tools to keep up with, and enhance their organisations’ resilience against, these AI-powered attacks.”
Chris Fisher, Regional Director for ANZ at Vectra AI
“Throughout last year, many Australian and New Zealand (ANZ) businesses made headlines for all the wrong reasons, as even large corporations investing ample funds into security measures were forced to announce breaches and customer data leaks.
“In September, for instance, Pizza Hut’s Australian operations were the victim of a cyber-attack in which customer data, including delivery addresses and order details of as many as 193,000 customers, was claimed by an ‘unauthorised third party’. On top of this, the attackers also claimed to have obtained credit card numbers and encrypted passwords for registered accounts.
“As we enter a new year, International Data Privacy Day on January 28th is a clear opportunity to stop, take stock of security measures, and put in place both prevention and detection systems and processes. With artificial intelligence (AI) dominating headlines, the day is a chance to consider how AI can be baked into security strategies to achieve greater attack signal intelligence – especially as customers and consumers share more data than ever before with organisations.
“Even as these customers take action to keep their personal information secure and private, exposure incidents still occur. As we strive to make the world a safer and fairer place, companies have a responsibility to their customers, partners and end users to implement the right practices that will ensure that their privacy and data are protected.”
Andrew Slavkovic, Solutions Engineering Director ANZ, CyberArk
“It’s encouraging to see Australia moving in the right direction with the proposed changes to the Privacy Act. However, it is imperative for organisations to go beyond regulatory compliance and proactively safeguard sensitive data. Organisations are now collecting and storing more data than ever, and this trend will only continue as organisations pursue more AI initiatives in 2024.
“Parallel to this, organisations are relying on third parties to protect data without ever validating how it is protected, stored or even interconnected with other organisations. There is often a lack of understanding of who can access the data and, of even more concern, of the business impact if it were to be compromised. This is one of the reasons organisations should adopt a robust and comprehensive cybersecurity strategy, one in which identities are front and centre. Identity security is paramount to a zero-trust security mindset: never trust, always verify what an identity is doing, and if abnormal activity is detected, challenge that identity in real time by seamlessly applying security controls to validate the action.
“We must start by understanding how an identity accesses information and the value of that data; after this, we can apply the appropriate level of security controls. A pattern of usual behaviour is established, and any deviation can then be challenged in real time.”
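The baseline-then-challenge idea can be sketched in a few lines. This is a hypothetical illustration only — real identity-security platforms draw on far richer signals than access hours, and the class name and threshold here are invented for the example:

```python
from statistics import mean, stdev

class IdentityBaseline:
    """Illustrative behavioural baseline for a single identity:
    learn its typical access hours, then flag deviations that
    warrant a real-time step-up check (e.g. re-authentication)."""

    def __init__(self, min_samples: int = 5, threshold: float = 3.0):
        self.access_hours: list[int] = []   # observed hours (0-23)
        self.min_samples = min_samples      # history needed before judging
        self.threshold = threshold          # z-score beyond which we challenge

    def record(self, hour: int) -> None:
        """Add a normal, verified access to the identity's history."""
        self.access_hours.append(hour)

    def is_anomalous(self, hour: int) -> bool:
        """True if this access deviates enough from the baseline
        that the identity should be challenged in real time."""
        if len(self.access_hours) < self.min_samples:
            return False  # not enough history to judge yet
        mu, sigma = mean(self.access_hours), stdev(self.access_hours)
        if sigma == 0:
            return hour != mu
        return abs(hour - mu) / sigma > self.threshold
```

For example, an identity that reliably logs in around 9–10am would have a 3am access flagged for step-up verification, while another 9am access passes silently — the "challenge the deviation, not the user" pattern described above.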
Keir Garrett, Regional Vice President, Cloudera ANZ
“Generative Artificial Intelligence (GenAI) has been the tech story of 2023 as organisations scrambled to adopt the technology in the enterprise. Chatbots, automated report generation and personalised emails are just the tip of the iceberg of how GenAI drives creativity and productivity while improving customer experience.
“It is worthwhile to note that AI models are only as good as the data that they are fed; therefore, the key to trusting your AI is to first trust your data. As companies look to deploy more AI and machine learning (ML) technologies across the business, there is an increasing demand for a trusted data platform to help organisations access their data across all environments. Advancements in AI/ML have even let organisations extract value from unstructured data, which makes the management, governance, and control of all data critical – if you have clean, trusted data within the data platform, that is the AI model you can trust.
“Organisations will become more data-driven with the continued rise of GenAI. In fact, an impressive 75% of Australian organisations are already adopting AI/ML technologies. As businesses adopt new GenAI iterations to democratise more of their data, the need to secure that data and empower the right people to access it becomes paramount. Privacy concerns are valid, given how companies train and prompt GenAI models like ChatGPT with their data.
“To navigate data security and privacy risks, organisations must build their strategies and plans with data security and governance front of mind, as integrating third-party security solutions is often a complicated process. Investing in modern data platforms and tools with built-in security and governance capabilities allows companies to democratise their data in a secure and governed manner while successfully training enterprise AI/ML models.”
Carla Roncato, Vice President of Identity, WatchGuard Technologies
“Advances in artificial intelligence (AI) and machine learning (ML) technologies are top of mind this Data Privacy Day, both for the potential benefits and the troubling dangers these tools could unleash. Considering the widespread proliferation of AI tools in just the past year, it’s critical that we in the information security community seize this opportunity to raise awareness and deepen understanding of the emerging risks AI poses to our data. As AI becomes a more integral – and intrusive – presence in our everyday lives, it will have real implications for our data rights.
“Remember: if a service you use is ‘free’, it’s likely that you and your data are the product. This also applies to AI tools, so act accordingly. Many early AI services and tools, including ChatGPT, employ a usage model similar to that of social media services like Facebook and TikTok. While you don’t pay money to use those platforms, you compensate them by sharing your private data, which these companies leverage and monetise through ad targeting. Similarly, a free AI service can collect data from your devices and store your prompts, then use that data to train its own model. While this may not seem malicious, it’s precisely why it’s so crucial to analyse the privacy implications of processing scraped data to train generative AI algorithms. Say one of these companies gets breached: threat actors could obtain access to your data and, just like that, have the power to weaponise it against you.
“Of course, AI has potential upsides. In fact, many AI tools are quite powerful and can be used securely with proper precautions. The risks your business faces depend on your organisation’s specific mission, needs and the data you use. In security, everything starts with policy, meaning that ultimately you must craft an AI policy tailored to your organisation’s unique use case. Once you have your policy nailed down, the next step is to communicate it, as well as the risks associated with AI tools, to your workforce. It’s also important to continue revising or amending this policy as needed to ensure compliance amid changing regulations – and be sure to reiterate it to your workforce regularly.”
Raja Mukerji, Co-founder and Chief Scientist, ExtraHop
“A key focus this Data Privacy Day should be generative AI. As this new approach gains attention across enterprises, concerns about data security and privacy have run rampant. Most enterprises are eager to take advantage of generative AI; however, circumstances like employees uploading sensitive corporate data and IP, the opacity of the criteria used to train models, and the lack of governance and regulation introduce new challenges.
“During this time of development, enterprises should focus on ways to make generative AI work for their specific needs and protocols. Visibility into AI tools is critical, and enterprises should have solutions in place that monitor how those tools are both trained and used, while educating employees on best practices for safe and ethical use. Investing in systems and processes that grant this visibility and training will help position generative AI as an aid to productivity in the workplace and help mitigate data privacy concerns. Eventually, enterprises will be able to build their own unique AI tools to better serve their employees, customers and processes, in a provably secure and repeatable manner.”