Electronic Discovery/Disclosure: From Litigation to International Commercial Arbitration

It is good to know that my 2008 paper has been cited in:

Daria Kozlowska, ‘The Revised UNCITRAL Arbitration Rules Seen through the Prism of Electronic Disclosure’, Journal of International Arbitration, Vol. 28, Issue 1 (February 2011), pp. 51–66; 28 J. Int’l Arb. 51 (2011).

My contribution:
An early conference in the form of a case management meeting has been pointed out by numerous commentators as being a tool for a more efficient management of arbitral proceedings in cases involving electronic evidence.

Many thanks to Daria Kozlowska.

A post from another privacy site: How to Tell You Have Been Hacked

For those who want to read a bit about ‘hacking’, especially the signs that you have been hacked, do check out ‘How to Tell You Have Been Hacked’ by Bill Hess.
Enjoy!

Post #GDPR webinar: my talk on ‘Rights of the Data Subject’

For those who missed the Post #GDPR webinar hosted by @InfosecurityMag & @DanRaywood, you can watch the replay via:
Post GDPR, Is it Too Late to Comply?

You will need to register an account and log in to view the webinar at www.infosecurity-magazine.com.

I gave my 10-minute talk on ‘Rights of the Data Subject’. The sound wasn’t too brilliant, as I had to put my telephone handset on speakerphone.

If you have trouble making out any of the words or sentences, feel free to drop me an email at cher [at] jyutsu [dot] com.

A quote by Justice Louis Brandeis: ‘If the broad light of day could be let in upon men’s actions, it would purify them as the sun disinfects.’ Essentially, sunlight is the best of disinfectants.

I mentioned ‘sensitive, nefarious data’ & the contentious nature of the ‘Right to be Forgotten’.

We live in interesting data privacy times!

Many thanks,
Cher

‘Right to be forgotten’ – 2 claimants v Google

Catching up with my backlog of news.
News on 13th April 2018:
Search engine giant Google has been ordered to remove links to articles about the historic criminal convictions of a businessman in the first ‘right to be forgotten’ case to be decided in England and Wales. Ruling in NT 1 and NT 2 v Google LLC today, Mr Justice Warby reached opposite conclusions about the two claimants, identified as NT 1 and NT 2, based on the nature of the criminal convictions and the extent to which publication of information related to the claimant’s private life.
Read more about “Google must delete links in ‘right to be forgotten’ case” at the lawgazette.co.uk site.

A Poll on Post GDPR – I’m a speaker for a webinar on 26th June 2018

Please vote, share this and tune in to the webinar.

@DanRaywood @InfosecurityMag interviewed me.

A couple of days after the #GDPR Press Briefing at City, University of London @DanRaywood @InfosecurityMag interviewed me.

Check it out at:

#GDPR Press Briefing in City, University of #London

On 17th April 2018, I was one of the speakers at the GDPR Press Briefing held at City, University of London (City). Check out the hot-off-the-press ‘City academics discuss GDPR at press briefing’.

My written prepared talk is shared below.

Privacy and the Individual – What difference will GDPR Make?

Thanks John for the introduction. A warm welcome to all.

Any talk on privacy and the GDPR invariably uses terms or phrases that may be blurry or obscure. So just to set the scene, when I say the ICO I’m referring to the UK’s data protection watchdog – The Information Commissioner’s Office. When I say ‘data’ I’m referring to personal data as described in the GDPR.

Although the GDPR does not reference privacy – itself a complex term – privacy is embedded as information or data privacy and expressed in phrases such as:
‘respect for human rights and fundamental freedoms’ (Art. 12 – exercise of the rights of the data subject); ‘high risk to the rights and freedoms of natural persons’ (Art. 35 – data protection impact assessment); and ‘risks to the rights and freedoms of natural persons (individuals)’ (Recital 75).

It is no longer just about protecting personal data or processing of personal data but data privacy.

With this comes obscure or unclear terms.

What is ‘high risk’? How do you express ‘rights and freedoms’ of natural persons (individuals) especially in the context of privacy impact assessment (PIA) or data protection impact assessment (DPIA)?

We know that the GDPR describes DPIA (Art. 35) and also breach notification (Art. 33 – notify the ICO, and Art. 34 – communicate to the data subjects).

Fresh in our minds, I know, is the recent Facebook–Cambridge Analytica scandal. Flashback to October 2015: does anyone here still remember the TalkTalk data breach incident?

Would you all agree that, in the eyes of the public and the affected individuals, both Facebook and TalkTalk handled the data breach announcement or notification to affected individuals rather badly, or failed to do so at all?

Certainly, under the GDPR both would be required to notify the ICO within 72 hours and affected UK individuals without undue delay or ‘as soon as possible’ (Guidelines on Personal data breach notification under Regulation 2016/679).

As we know, the GDPR requires organisations to notify the ICO where there is a risk to the rights and freedoms of individuals, and to notify the individuals only where there is a high risk.
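As a rough illustration (and certainly not legal advice), this Art. 33/34 triage can be sketched as a simple decision function. The risk levels and function names below are my own illustrative assumptions, not text from the Regulation:

```python
from enum import Enum

class Risk(Enum):
    """Illustrative risk levels for a breach assessment (my own labels)."""
    NONE = 0       # breach unlikely to result in a risk to individuals
    RISK = 1       # risk to the rights and freedoms of individuals
    HIGH_RISK = 2  # high risk to the rights and freedoms of individuals

def notification_duties(risk: Risk) -> dict:
    """Sketch of the GDPR breach-notification triage:
    Art. 33 - notify the supervisory authority (the ICO) where there is a risk;
    Art. 34 - communicate to affected individuals where the risk is high."""
    return {
        "notify_ico": risk in (Risk.RISK, Risk.HIGH_RISK),  # within 72 hours of awareness
        "notify_individuals": risk is Risk.HIGH_RISK,       # without undue delay
    }

print(notification_duties(Risk.HIGH_RISK))
# {'notify_ico': True, 'notify_individuals': True}
```

In practice, of course, deciding *which* risk level applies is the hard, judgement-laden part – which is exactly the ‘unclear terms’ problem my research addresses.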

My research examines data incident response, in particular the privacy harm to individuals as a consequence of a data incident. I have designed a prototype dashboard and conducted a user evaluation study with industry practitioners. The dashboard assesses privacy harm by addressing the initial breach notification question – whether or not to notify affected individuals and the ICO – during initial data incident response.

There is still fear in organisations when it comes to disclosure of data incidents. However, the GDPR will hold organisations accountable, e.g. through fines and penalties, and require them to be transparent in reporting data incidents. Affected individuals have the right to know.

The outcome of my study also revealed that it is possible to do an initial data breach assessment even with the unclear terms: ‘high risk’ and the ‘rights and freedoms’ of individuals. The prototype dashboard also shows notification alerts with the countdown to 72 hrs from the point of being aware of the incident. One participant remarked: ‘It (the dashboard) provides a calm objectivity in time of panic & stress. Because you’re going to be stressed, you immediately think your personal reputation and your organisation’s reputation. Would we be fined? And all these things come in rather than actual thinking of the consequences to individuals’.
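The 72-hour countdown shown on the dashboard can be sketched as a small helper. This is not the actual prototype code – the function name and example timestamps are invented for illustration:

```python
from datetime import datetime, timedelta

# Art. 33: notify the ICO within 72 hours of becoming aware of the breach
DEADLINE = timedelta(hours=72)

def hours_remaining(aware_at: datetime, now: datetime) -> float:
    """Hours left on the 72-hour clock, counted from the point of
    becoming aware of the incident (negative once the deadline has passed)."""
    return (aware_at + DEADLINE - now).total_seconds() / 3600

# Illustrative: became aware at 09:00 on 17 April; one day later, 48 hours remain
aware = datetime(2018, 4, 17, 9, 0)
print(hours_remaining(aware, datetime(2018, 4, 18, 9, 0)))  # 48.0
```

A dashboard alert would simply fire when the returned value drops below some threshold (or below zero, once the deadline has lapsed).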

When a data incident happens, the genie is out of the bottle, out in the wild – the harm is already done.

The GDPR cannot put the genie back into the bottle or undo the harm. So, as a matter of good business practice and in the spirit of the law, organisations should notify their customers.

Thank you.
Cher
p.s.
I may post a photo taken by John Stevenson (City’s Senior Communications Officer).

A Social Media Working Group from Brussels

Ah! Social data will soon be ‘a type of personal data’. Will we see policymakers talking about personal-social or social-personal data? Note this press release from Brussels on ‘A Social Media Working Group (SMWG)’.

Would the ‘voice’ from this new SMWG on the Facebook-Cambridge Analytica scandal add fresh insights to the many ‘voices’ coming from various fronts in the UK and in the US?

I just did a quick scan of my latest tweets from the ICO (@ICOnews) and Cambridge Analytica (@CamAnalytica). All pretty dull tweets/news compared to the snapshots of the Mark Zuckerberg Congressional hearing or ‘grilling’ or interrogation.
For the first time in American history, it’s not ‘The State of the Union’ but ‘The State of the Network’ – from a Bloomberg newsflash!

Anonymisation & GDPR

Yesterday evening, 29th March 2018, I attended a BCS Law Specialist Group event – GDPR: Anonymisation, re-identification risk and GDPR profiling. The talk was presented by Dr Amandine Jambert from the French data protection authority, the CNIL. The anonymisation slide is interesting.

I asked whether the WP29 thinking (and their opinions) about the three properties applies to the ‘direct and indirect’ ways of identifying personal data. The answer was that it is not about the method itself; the properties apply to ‘all data types’, i.e. any dataset – her exact words: ‘use by anyone on any dataset’. Also, the DPA (or the DPO/organisation?) needs to prove (or justify, or show) that the dataset has indeed been anonymised (using either of the 2 options). My understanding is that anonymisation, if done (risk-based, database- and/or algorithm-driven), should not enable the direct or indirect re-identification of the individual(s).
As noted on this slide: ‘No single technique eliminates all risks’.

It’s near impossible to identify/isolate ‘all the direct/indirect re-identification risks’ associated with any dataset, assuming the dataset is available and not hidden in some Cloud and/or in a chain of hidden registers.

We really need to re-think personal data in terms of ‘the harm to individuals’, as there is no absolutely sure way of preventing re-identification risks (i.e. singling out, linkability or inference/deduction, etc.).
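To make the ‘singling out’ risk concrete, here is a toy check of whether any combination of quasi-identifiers isolates exactly one record – a naive k = 1 count in the spirit of k-anonymity. The dataset and its columns (birth year, postcode district, sex) are invented for illustration:

```python
from collections import Counter

# Toy dataset of quasi-identifiers: (birth year, postcode district, sex).
# Entirely invented for illustration.
records = [
    ("1972", "SW1", "F"),
    ("1972", "SW1", "F"),
    ("1985", "EC2", "M"),  # unique combination -> can be singled out
]

def singled_out(rows):
    """Return the quasi-identifier combinations that appear exactly once,
    i.e. records an attacker could single out (k = 1)."""
    counts = Counter(rows)
    return [row for row, n in counts.items() if n == 1]

print(singled_out(records))  # [('1985', 'EC2', 'M')]
```

Even this crude check shows why no single technique eliminates all risks: it says nothing about linkability to other datasets or about inference, the other two WP29 properties.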

Overall a great talk.

I just noticed the slides and talk are available online: BCS Law talk 29th March 2018

ICO statement: investigation into data analytics for political purposes

The ICO statement on 24th March 2018.
I assume this is not the first time the ICO has conducted such a civil and criminal investigation.

The ICO’s investigation on the DeepMind-NHS saga (not a scandal?) revealed this:

However, an investigation by the ICO discovered several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test. (Extracted from ‘review-agrees-that-deepmind-nhs-deal-lacked-clarity’)

Based on the reported news, the Facebook–Cambridge Analytica (CA) scandal (see ‘Top EU privacy watchdog calls Facebook data allegations the “scandal of the century”’) and the DeepMind–NHS saga did not involve a breach of technical security and/or organisational security measures (excluding privacy policies and app/SLA-type contractual agreements).

What data protection and privacy principles have been violated/breached?

Would ethics be a yardstick in the final determination of the Facebook-CA scandal?