How to manage data privacy versus the growing grab bag of requirements

Calls for federal legislation alongside US president Joe Biden’s artificial intelligence (AI) executive order and roughly a dozen different state-level regimes in the US alone highlight a global regulatory environment on data privacy that’s set to remain something of a hodgepodge.

Different regimes continue to emerge both in and across different nations, with worldwide rationalisation or consensus highly unlikely, says Alex Hazell, head of UK privacy and legal at cloud marketing platform supplier Acxiom, and organisations need to pay close attention.

“Achieving full compliance is extremely challenging and complex,” he says. “You only have to look at Amazon Web Services’ [AWS’s] suite of paperwork for General Data Protection Regulation (GDPR) transfer and processing – many documents, containing links to other documents, and so on and so forth.”

By 2021, according to the UN, at least 137 countries had data privacy legislation in force.

Matching practice, policy and specific regulation may mean deeper engagement with lawyers and compliance professionals to dig out detail on the two main approaches – something unlikely to be music to companies’ ears.

“You can go to the highest legal standard, sticking to that as an internal compliance measure,” says Hazell. “The problem with doing that is that you lose competitive advantage in those countries with a looser approach. Or you can comply with the legal standard in each country.”

The latter approach may be the only option if and when legal standards in one relevant jurisdiction differ radically from another, especially when the differences hinge on national philosophies, politics and “value judgements”. Of course, this can also make compliance not only more costly and complicated, but even prohibitive for smaller companies or startups.

Risk-based approach

However, he adds that the “reality on the ground” is that organisations sometimes take a risk-based approach not only to how they do business, but to aspects of compliance, especially when there are grey or “undecided” areas.

“If, for example, a law is seldom enforced and widely ignored – a so-called ‘bad law’ – some may, as it were, follow the crowd, assuming safety in numbers,” says Hazell.

“One business might take one view, another a different one. As long as that’s reasonable, absent judicial verification, organisations will continue to play in that grey area.”

Under the European Union’s (EU’s) GDPR, for example, judicial precedent is being set, but there are still some areas where practice is undecided, even before you start to think about newer EU law such as the Digital Services Act, where there is as yet no judicial precedent at all.

Are organisations risking a “mega-fine”, as in GDPR’s penalty of up to 4% of annual global turnover, or just an informal rap on the knuckles? What is the likelihood of class action, for instance, off the back of a regulator sanction?

When developing your compliance regime, also consider the potential for processing to cause harm. “Put the individual front and centre of all internal policy considerations,” says Hazell. “Is there real harm that could potentially be caused by a piece of processing, and if so, what are the mitigations to put in place?”

Jonathan Joseph, head of solutions at data privacy software company Ketch, broadly agrees, but maintains that data privacy should be formally recognised worldwide somehow, even if only outlined in a bill of human rights-type approach, as a counter-weight to the sheer pace of technological innovation.

AI and ML spread raises the stakes

While AI has valid, useful purposes, the use of so much data can pose a threat to individuals, including their privacy. “We should recognise that people have data rights,” says Joseph.

Regulation typically plays catch-up with technological innovation. Rather than hamstringing innovation, jurisdictions should move more quickly on this, giving organisations and other entities a real chance to plan and tackle any issues, he says.

“If a right to data privacy was recognised worldwide, then simply let countries as sovereign entities decide details for themselves for their jurisdiction,” says Joseph. “In Europe, do you keep European citizen data in European clouds, for example?”

GDPR “opened the door” on privacy, he notes, but “cracks exist in the [opt-in consent] model”. How can opting in really be meaningful, given the multiple pages of legalese that typically accompany opt-ins and terms and conditions for new or updated software?

One analysis found that the terms and conditions for the apps on “an average phone” would take 17 hours to read if printed out, highlighting the need for a rethink of “all these principles”.

“User fatigue is real,” says Joseph. “Is that informed consent? There needs to be a real option to say no.”

Sophie Stalla-Bourdillon, senior privacy counsel and legal engineer at global data security company Immuta, says data handling and management principles in aid of privacy still need to focus on issues including data minimisation, storage limitation, data accuracy and data quality, matched more closely with practice.

“If you’re constructive and open-minded, you should discover controls for these principles,” she says. “If you are working with a rather exhaustive list of principles like the GDPR, then you should be in a very good place to be compliant with more than one law.” 

Regulators need resources to spend more time with the issues, working out how to accommodate a risk-based approach that aligns with current human rights principles, with views at the international level shifting from the “traditional” free-trade, no restriction, free flows of data position.

This looks like “a step in the right direction” because what’s decided can have crucial consequences, says Stalla-Bourdillon, so federal agencies need an approach even if there’s not a statutory obligation. GDPR remains “one way” of approaching data privacy and protection, with identity rights and intellectual property rights potentially emerging to tackle risks posed by AI and large language models.

Be transparent with data

She adds: “It’s important that teams actually speak with each other now. Being transparent about their own practices, starting to create a faithful picture of the processing activities within the organisation, and then mapping that to the tech stack.

“In practice, you want flows of data and centralised data, lakes and so on, but you need solutions for both governance and technical requirements,” says Stalla-Bourdillon. “And actually often tech does not allow transparency with data flows.”

Rick Goud, chief information officer and co-founder of email and file-transfer security provider Zivver, confirms that compliance can get into “a total mess”, especially for organisations working across different requirements. C-suites are already struggling, even with the ability to fall back on the stringent European regime.

“Let’s hope for a variation or an extension to what we have instead of something totally new, because then it really will be a challenge,” says Goud. “Fortunately, when you have a conversation based on content, you can see a lot of common ground and understanding of each other’s position.”

He maintains that making extremely secure, privacy-focused tech isn’t itself a problem, with the typical conflicts, if any, relating more to the balance between protecting somebody’s privacy and managing privacy in practice. Suppliers can have a conflict of interest here, he points out, with “big tech” business models often reliant on accessing data “at their own will”.

“For me, legislation should focus on what you are allowed to store on behalf of your users, and how you are able to use the things stored,” says Goud, adding that a major cause of data leaks remains email misdirects. “Media reports are of hacking, malware or ransomware, because that’s more sexy.”
