10 December 2015 - 12 December 2015

The digital economy: power and accountability in the private sector

Chair: Ms Nuala O’Connor

Against a backdrop of soggy December weather, we took a long hard look at how private sector companies can best manage the rapid proliferation of data, balancing concerns about privacy and security against the potential benefits of big data; and at how governments, regulators and consumers fit into all this. Our group was less geographically diverse than usual but contained a good balance of people from the public and private sectors. The discussion was a fascinating mixture of broad philosophy, for example about what constitutes identity and how to protect it, and more technical exchanges about how consumers can interact with the devices and software on which they rely, and protect themselves. We could not resolve such a wide spectrum of issues over two and a half days, but we did have practical recommendations to make, as well as plenty of areas to explore further. We were very grateful for the financial support of Vodafone for this conference.

An underlying theme of our discussion was the kind of world we wanted to live in. Were we nearing the point where we risked losing control of our lives under the flood of machine-generated data and the rise of artificial intelligence? How long did we have as human beings to fix the rules before we found they had been fixed for us? How could we ensure that there were agreed ethical standards guiding what both governments and companies were doing in this area? We had no clear answers and often shifted debate onto more easily graspable questions. But we certainly recognised the need not to ignore such issues and the urgency of finding at least partial solutions quickly. We also agreed that the medical/biotechnology world was a place to look for analogous problems and answers.

We accepted that the private sector was far from monolithic, with different companies using different business models. The big companies were conscious of their responsibilities and keen to be, and appear to be, ethical in their approaches. The same was not always true lower down the food chain, among data-brokers and the like. But even the big companies worried that they were being asked to play roles which were not rightfully theirs, in other words to set the rules and police the standards, instead of governments whose job this was.

For many companies, data was both oil and asbestos – a source of profit, but also a toxic material. They needed to monetise what they had, not least because customers expected their basic services to be free, but were well aware of the risks, in particular of losing the trust of their customers, so fundamental to their businesses. How were they supposed to be responsible and accountable in such circumstances? Informed consent and greater possibilities for customers to control their own personal data through more sophisticated control settings were certainly part of the answer, but were equally clearly not enough. One problem here was the difficulty, if not impossibility, of anonymising data, or even deleting it.

Were too many consumers too cavalier about what happened to their personal data? The evidence was mixed. The sector had perhaps not yet had its ‘Chernobyl’ moment, but the damage to individuals, for example from abuse of financial data, could still be very great. The concerns were far from confined to privacy, fundamental though this was. Protection of data was also crucial, at a time when the battle against malware and hacking was not being won. Companies needed to be far more proactive and transparent about what they were doing and what could happen to their customers’ data. It was of course vital not to ignore in all this the benefits to the public from responsible and innovative use of big data. We were also very conscious of the difficulty of managing many of these issues when companies were based in one national jurisdiction but active in many, and data could hardly be regarded as national in any meaningful way. We could be heading for the fragmentation of the internet if we were not careful.

What were the responsibilities of governments in these circumstances? They had to set the fundamental rules, but lagged far behind in their technological sophistication. Legislation tended to be a long way behind the game. It should therefore be about principles, which could then be applied by the regulators. But there were many questions about the role of the latter. Could they be simultaneously ombudsmen, regulators and enforcers; who should pay for them; and how could they have the resources and technological skills to keep up? Meanwhile many law enforcement agencies were hopelessly ill-equipped to deal with the explosion of cyber crime.

Public and private sector interests could also easily clash, as they had been doing again recently over the issue of encryption: companies needed to protect their customers’ privacy but public security agencies wanted the continued possibility of access to online content as well as to metadata. Back doors were a very problematic answer, at best. The other big issue was international exchange of data for security or crime-related reasons, which currently worked far too slowly to be effective, even between like-minded countries. A list of commonly agreed crimes for which data exchange would be rapid and more or less automatic was urgently needed, starting bilaterally or between small groups of countries. More widely, new transnational institutions were needed, and new international agreements, but these should be approached bottom up, not through some universal ‘Doha’ style process.

Consumers had an important role to play in protecting their own data, but could not be expected to take the leading part in this, given the imbalance between their knowledge and power and that of the big companies. That was why governments had to set the fundamental rules and principles. The focus should wherever possible be on controlling the use to which data could and should be put, rather than on its collection and retention.

Specific recommendations for the future included more intensive education about data risks, in schools and elsewhere; more proactive roles for consumer groups and organisations; greater choices for consumers from companies; and professional codes of conduct for all those dealing with data. More broadly, a new social contract was required between citizens, companies and governments about what was acceptable in this area. The concept of data husbandry or stewardship could be an important way of looking at the issues, aiming to combine responsibility with the potential for innovation. There was also scope for much greater use of independent review boards to monitor privacy and ethical behaviour.

Defining the problem
An important theme running through our discussions was the nature of the problem we were trying to solve. What was it which worried us most about the large internet companies, now such a prominent feature of our lives, and about the astonishing proliferation of internet-based devices and the even more astounding amounts of data these were now generating? There seemed to be many angles:

  • privacy: governments or companies knowing more about us as individuals than we wanted them to know;
  • data abuses: governments or companies (or criminals) using data in ways which could harm us;
  • protection: the ability and willingness of those holding data to keep it safe from others, particularly ill-intentioned hackers;
  • transparency: the need/desire to know who holds what information and what they are using it for;
  • fairness: concerns that those holding data are using it to discriminate between individuals, commercially or otherwise;
  • big data benefits: the need to ensure that we do not, through excessive caution, block off innovation in areas like health which could be of great collective benefit;
  • international data flows: how to ensure that data can move across borders when we need/want it to, for example to catch criminals operating internationally;
  • security: the need to use technology to help keep us safe from cyber-attacks, terrorism, etc.

We touched on all these and more during our discussions, as recorded below. An obvious starting point was that the more data was collected, the greater the risks associated with it. But there was a broader, underlying point which surfaced insistently from time to time to trouble our debate, and which can perhaps be expressed as follows. What kind of world do we actually want to live in, and how can we ensure that our own technological sophistication does not lead to a situation where we are no longer in control of our own lives and our own futures? Or, as someone only half-frivolously put it, would the next Ditchley discussion about these subjects be between machines, not human beings? Rapid developments in artificial intelligence, machine learning and machine-generated data, and the impact of the coming internet of things, were important factors behind these existential worries. For example, individualised insurance policies would mean the companies concerned wanting to know virtually everything about you. At what point could a company be said to possess your identity?

There was a more general concern about how ethical standards could be built into what was now happening in the digital world, and about who would be able and willing to take decisions about such issues. Were we actually in the hands of the code writers, and therefore of operators whose activities most people had no possibility of understanding, and the ethical basis of whose actions was outside all our control?

We were concerned not to become too philosophical or too gloomy, not least since we had no clear answers to these kinds of questions. For the most part therefore we concentrated on more practical issues of the respective powers and responsibilities of private companies, governments, regulators and citizens/consumers. But we did agree that we needed to be careful not to find ourselves in a world where, just because things could be done technologically, that meant they would be done. We also agreed that one area where we needed to look for guidance was that of medicine and biotechnology, which had faced similar concerns at an earlier stage, for example about human embryology and cloning.

Private sector attitudes and responses
We were reminded that the private sector in the digital world was far from monolithic, with a wide variety of business models. There were fundamental differences between telecom companies and internet content providers, and between, say, Apple and Google. We therefore needed to be careful about generalisations. However, all companies, in or out of the digital sector, were now generating and using data. Data was an increasingly important and valuable part of any company’s business. This would only intensify in the future, and all company managements needed to recognise this.

We thought that the large companies in the digital field did tend to have strong internal codes of conduct and to be aware of the ethical dimensions of what they were doing. However, they were nevertheless struggling with the demands of running their businesses while simultaneously coping with the requirements on them covering data management and security, privacy concerns and customer relations. They often felt that they were being asked to take on roles which were more properly performed by governments and regulators: the latter should set and enforce the rules, and the private sector companies should be responsible for implementing them and staying within them.

If the big companies were trying to do the right things because they knew how much was at stake for them, and how much these issues mattered to their customers, it was less obvious that the same applied to others in the sector. There were some murky and dodgy operators around the bottom end of the data/digital market, amongst the list brokers, lead-generators etc. It was not easy to track or police what they were doing. Most smaller digital companies were no doubt honest, but their capacity to manage and protect their own data was often not as good as it should be.

The basic point here was that for many companies, data could be both oil and asbestos – oil because it was valuable, and could form the profitable basis of their company; and asbestos because it could be highly toxic and damaging if badly handled. It was vital for companies to understand that data should only be retained by them for specific and defined purposes, and should be deleted/destroyed as soon as it was no longer needed. It was very hard to define how long the period of retention should be. This obviously differed by sector, with for example health data likely to be useful for far longer than advertising data.

One issue which came up regularly was the extent to which data could be said to be non-personal – anonymised, pseudonymised or whatever. The firm view from many round the table was that this was an illusion. One data set might be said to be anonymous but, when combined with other data sets, the identities of those concerned could be relatively easily established or re-established. Anonymisation was not therefore the definitive answer to customer/consumer concerns about data. Even deletion could be an illusion, since information, once on the internet, was remarkably resilient.
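
To make the linkage point concrete, here is a minimal, purely illustrative sketch (the data sets, field names and values below are invented for illustration, not drawn from the conference): an “anonymised” set of records can be matched against a second, apparently innocuous source that shares a few quasi-identifiers such as postcode, date of birth and gender, and names re-attached with a simple join.

```python
# Hypothetical illustration of re-identification by linking two data sets
# on shared quasi-identifiers. All records below are invented.

anonymised_records = [
    {"postcode": "OX7 4ER", "birth_date": "1975-03-12", "gender": "F", "diagnosis": "asthma"},
    {"postcode": "SW1A 1AA", "birth_date": "1980-07-01", "gender": "M", "diagnosis": "diabetes"},
]

# A second, seemingly harmless source with names attached,
# e.g. an electoral roll or a purchased marketing list.
public_register = [
    {"name": "Jane Example", "postcode": "OX7 4ER", "birth_date": "1975-03-12", "gender": "F"},
    {"name": "John Sample", "postcode": "SW1A 1AA", "birth_date": "1980-07-01", "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_date", "gender")


def reidentify(records, register):
    """Attach names to 'anonymous' records whose quasi-identifiers match the register."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in register}
    matches = []
    for record in records:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            matches.append({**record, "name": index[key]})
    return matches


for row in reidentify(anonymised_records, public_register):
    print(row["name"], "->", row["diagnosis"])
```

The point of the sketch is simply that, once the two data sets are in the same hands, re-identification requires no sophistication at all; the protection has to lie in how data is combined and used, not in the label “anonymised”.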

We discussed the extent to which there was a difference between those selling data as their business, and those generating and using it but not passing it to others. Some thought the differences between business models were crucial. Others saw it more as a distinction without a difference. In both cases the data was the valuable commodity and was being monetised, directly or indirectly. Consumers these days were aware (even if only vaguely, for the most part) that the information they were providing about themselves, directly or indirectly, would be used by private companies for advertising or gain; and they accepted (even if only vaguely, for the most part) that this was what they had to give up in exchange for the services they were seeking. The question, however, was whether too many of them were too willing to trade data about themselves for cost or convenience gains. This was complicated by the default assumption, particularly among younger generations, that most basic internet services would and should be provided free of charge – if there was no willingness to pay directly for such services, private companies had no choice but to generate revenue from their customers in other ways, most of which inevitably involved targeted advertising or other exploitation of data. Could individuals reasonably insist on free services and still use ad blockers?

Was the solution here informed consent, as seemed to be implied by the constant emphasis on individuals’ specific agreement to terms and conditions when signing up to new services? Clearly this could be part of the answer, together with greater ability to manage your own data through control settings. But many round the table did not think these could be enough to make companies genuinely responsible or accountable, as they needed to be. How many consumers actually read the pages of Terms and Conditions or understood exactly what it was they were signing up to, or could really manage their own control settings? The onus should be more squarely on companies to be clear and transparent about what was happening, and to explain more simply what the trade-offs were. Indeed this kind of communication with customers should be hard-wired into everything companies were doing, not seen as just a last-minute add-on to provide legal protection. Companies in the sector needed to be very wary of following politicians and bankers into the category of “not to be trusted”, because their business models in the end relied on trust.

Why did consumers seem relatively complacent about the risks they were running? One view was that there had not yet been a major “data disaster” to shake them out of their complacency – an “Exxon Valdez” or “Chernobyl” moment. Others were inclined to dispute this. There had been a succession of data losses with potentially significant consequences, not least the theft of the personal details of over 20 million US public employees. A “boiling frog” phenomenon might be at work here, as the public got used to data loss stories. But it was also suggested that the hacking and data insecurity scandals so far had not resulted in deaths or large-scale losses, even if individuals could obviously be seriously affected if, for example, their bank details were stolen. So what would a real data disaster look like? A cyber-attack bringing critical infrastructure to a halt would certainly be very serious, but that was arguably a different sort of risk. It could well be that the biggest data risks now were not the loss of essential details, such as bank details, which could be changed or replaced relatively quickly, but the undetected manipulation of internal company or financial data.

In any case, the battle against malware and hackers was certainly not being won. The best protection for now seemed to be that there were so many targets that the criminals (or whoever) could only attack comparatively few at any one time. This was hardly reassuring. Part of the problem was that hackers were so often acting from another country’s jurisdiction, quite apart from being difficult to identify at all. It was also important to convince consumers that security precautions and data protection were vital, rather than just obstacles that got in the way.

We had no problem agreeing that there could be plenty of benefits to the public at large from the services offered by the big digital players, and from the proper use of big data. The question was how this could be done without compromising privacy or enabling other abuses. Privacy was of course not an unqualified right, but this did not mean it could be disregarded at will. This was where governments needed to set broad rules, and regulators needed to implement them sensibly, to reflect both people’s rights and societal choices about where it was worth using data even if there were some risks involved. However, there would always be choices faced by individuals or small teams inside organisations about where to draw the lines. That was where the ethical standards of companies and their ability to communicate with their users became so important, and where trust was both so vital, and so easily lost.

Other questions to arise in this area concerned the right geographical level at which to gain consumer consent for big data benefits – a town seemed too small, while a country the size of the UK seemed too large – and the desirability of establishing good practice for organising the use of data so that it was stripped of personal identifiers as far as possible and then reliably deleted after use; and what this process should be called.

We also discussed the extent to which it mattered whether major digital companies could be said to have a nationality. Were they like banks, global in life but national in death, meaning that some national jurisdiction needed to accept ultimate responsibility for controlling them? Or were they becoming truly global? The companies themselves seemed to have a dual attitude to this – they obviously had a legal HQ somewhere, but in each country in which they operated, they needed to regard themselves as a national company and adapt to local conditions accordingly.

How far was this sustainable when the data itself was held in trans-national ways? If, say, a major Chinese internet company was listed on a US stock exchange, what did this mean for regulation and responsibility? More widely, to what extent did it matter whether data was being physically held in a particular country or continent? There was a fear that we were heading for a fragmentation of the global internet, perhaps between continents. How far would this matter? The answers were far from clear for now.

The role of the public sector
A constant theme here was the difficulty for governments and legislatures in keeping up with the speed of technological development in the digital world, and the consequent problem of ensuring that whatever laws they did pass were not overtaken even before they entered into force. This was only likely to get worse. The power and the capability were overwhelmingly with the private companies, leaving aside the security sector, and the gap was growing. A related problem was the tendency of governments to legislate in response to specific events without thinking through the consequences or the enforcement mechanisms, thereby simply prompting technical evasion. Yet consumers, not unreasonably, expected governments to take action, and to set the basic rules.

Legislation therefore needed to be essentially principles-based, with sufficient clarity and tools to allow regulators to operate them effectively, to evolve with the technology, and to allow space for discussion and exploration outside the courtroom. But these laws could not be so general as to be effectively content-free. Moreover a compulsion-first approach was neither viable nor desirable. Poorly-defined law was not only bad for innovation but also created wider confusion. Companies did not want to find themselves operating on an uncertain terrain where laws and interests seemed to conflict.

This put a lot of weight on the role of regulators. They probably had the theoretical powers they needed for the most part. The question was more about their practical ability to use such powers. Their roles might need more clarification, and many other awkward questions arose. Was it possible for the same office to be a regulator, an ombudsman and a supervisor? How was it possible to combine enforcement against companies with collaboration with the same companies of the kind which was necessary in technical terms? Did regulators have sufficient resources and technical expertise to do all they were being asked to do? How should they be funded to ensure their independence both from political manipulation and the companies they were supposed to be regulating? How could they prevent their best talent being poached by much better-paying private companies? There was a need for creative use of multi-stakeholder groups, while recognising that incumbents were not averse to trying to use regulators to stifle the innovations of new competitors.

Meanwhile other parts of the public sector were not evolving fast enough to be effective against the threats. For example, many law enforcement agencies did not remotely have the capacity or skills to cope with the explosion of cyber-crime, and would struggle to develop them as long as they had so many other responsibilities. Industry and consumers were therefore being largely left on their own to cope as best they could. Collaboration was in fact reasonably good between the security agencies and industry in facing up to cyber espionage. Nevertheless there was a need to re-establish the rules of the game to face up to the common threats. Here as elsewhere, companies could not be expected to substitute for the state in making decisions about legality or morality. They also had to protect their staff, shareholders and reputations.

This was far from straightforward when interests did not coincide. Encryption was a topical case in point. Greater encryption of their products by companies was inevitable, to provide data protection and increase consumer confidence, but could obviously frustrate the wishes of security agencies worried about the activities of terrorists or criminals using the internet for their own purposes. It was not easy to see a way through here. The companies seemed clear that building back doors into their products was either technically impossible, or at least could not be done without opening up vulnerabilities more generally, including to ill-intentioned state or private actors, and without losing the trust of their consumers in the process – the effect of the Snowden revelations had in some ways been more serious for the companies than for governments, and had left deep scars. These arguments were bound to continue, and also bound to swing back and forth depending on how far people and governments at any particular time perceived a real terrorist or other threat, and were therefore ready to make some sacrifices of privacy to stay safe. This neatly illustrated how there were no absolutes here. Context was everything.

There was a particular problem about cooperation in exchanging data across boundaries, where governments had so far been unable to make the existing Mutual Legal Assistance Treaty (MLAT) provisions fit for purpose. The processes were too slow to be effective, even between countries which were close in many other ways. The recent European Court of Justice ruling on the Safe Harbour conditions for allowing data to be transferred to the US threatened to make things much worse, and had at the very least contributed to new levels of uncertainty. These were no longer obscure issues of interest only to the techno-nerds, but vital conditions for the security and prosperity of two continents, and a real problem in the current negotiations for the Transatlantic Trade and Investment Partnership (TTIP).

It was therefore suggested that a list be drawn up of uncontentious crimes, in other words offences which every country agreed were criminal offences, for which new rapid, more or less automatic data exchange processes could be established. This would probably cover about 80% of international information requests. This could be started either at bilateral level or between a small number of like-minded countries, and then extended. The G8 and/or G20 might be able to play useful roles here. Further work could then be done on other more contentious offences, for example those which some countries might consider political or were related to human rights not accepted by everyone.

In any case we needed new institutions to deal with these problems, to include all the stakeholders and to be as trans-national as the data itself now was. We also needed to ask ourselves how the system would cope with a major crisis, such as the disaster which had overtaken the financial sector in 2008. That had been extremely difficult to deal with but it had at least been clear which major players, public and private, needed to be in the room to try to find a solution. Who knew who they should be in the digital context? In general, international efforts to find solutions should avoid ‘Doha’ style universal processes, and try to build up instead from Track 2 informal discussions and bottom-up agreements between like-minded companies and governments – such ‘mini-lateralism’ was far more likely to be effective over time.

Consumer rights and responsibilities
Privacy was the main right and concern of digital consumers (or users, or citizens), but not the only one. Other important values were secure protection of personal data, transparency about what happened to such data, fairness of treatment between individuals and groups, and accessibility of terms and conditions of service. How far should consumers regard themselves as responsible for their own data, and for insisting on certain practices? They could certainly vote with their feet, and refuse to use services they did not approve of – there were already cases of this. But they could hardly lead the process themselves. On the whole, therefore, consumers – reasonably – seemed to see the onus as being elsewhere when it came to control and accountability – either with the companies or with governments, or some combination of the two.

Problems and challenges for consumers included the following:

  • the far greater power and technological sophistication of the companies with whom they were mostly dealing;
  • the invisibility to them of many parties collecting data, for example data brokers;
  • the effectively “black box” nature of the algorithms used to process consumer data;
  • lack of knowledge about how widely their data could be spread, for example on social networks;
  • the number of ways in which their data could be used to profile and rate them, e.g. for credit, insurance or housing purposes;
  • the difficulty of distinguishing between the responsibilities of governments and those of major companies taking on public or quasi-public roles.

Companies also faced problems in dealing with consumers because their levels of knowledge and degrees of tech savviness varied so widely. It was difficult in such circumstances to provide appropriately differentiated degrees of information and explanation, which considerably complicated the task of obtaining the right level of informed consent. One related question was whether digital natives, who understood their devices and their capabilities much better than others, were in fact less concerned about privacy and data security than older generations. Anecdotal evidence suggested they might be, but surveys did not seem to support such a conclusion. Meanwhile small companies found it hard to build into their business models the same degree of data protection for consumers as large companies now did routinely.

We noted that there seemed to be different perceptions in different countries. Users in the US seemed to have more trust in companies than in governments, while the opposite seemed to be the case in many European countries (with the UK, as so often, somewhere between the two). Europeans also tended to see privacy as a basic human right, while Americans tended to look at it as a consumer right. This led to different approaches in some areas. We noted that US legislation in some ways seemed significantly more consumer-protecting than in Europe, for example the requirement on companies to inform consumers if their credit rating or similar was being adversely affected by data held about them (under the Fair Credit Reporting Act), and to reveal publicly if they had suffered data breaches. The latter gap on the European side should be addressed by the new General Data Protection Regulation currently going through the adoption process in Brussels. On the other side, European governments and regulators often seemed more concerned about protecting the fundamentals of privacy than their US counterparts, as the Safe Harbour problem illustrated.

Another concern was the risk that the wealthy and well-informed would be able to buy themselves a degree of privacy not accessible to ordinary citizens. It was not clear how great this risk actually was, but it was certainly an outcome to be avoided.

Overall our view was that there needed to be a shift in concern from whether and how data was collected, to regulating the actual use to which such data was being put; and to the outcomes to which it could lead in practice, such as overuse of directed advertising or discrimination against certain groups or individuals. Which of these uses and outcomes constituted actual abuse?

Such an approach led us to the following recommendations:

  • Education about data and how it could be used should be built into curricula in all schools, colleges and training programmes. This was now a crucial life skill, and should be treated as such;   
  • Consumer protection organisations (including government bodies) should do more to raise awareness and explain privacy and security risks to the public, and also engage directly with governments and companies on these issues;
  • Companies needed to do more to find the right balance, when considering terms and conditions and consent, between texts/choices which were too simple to be meaningful, and long complicated screeds which were too difficult to follow;
  • Companies should also look for ways to give their consumers more real choices about how their data was used;
  • A professional code of conduct was needed for data scientists working in all sectors, along the lines of those codes which existed in the medical profession;
  • Means to regulate data brokers effectively were needed;
  • Europe should seek to copy the Fair Credit Reporting Act, to link privacy concerns to real world outcomes;
  • Overall a new social contract was needed between governments, companies and consumers about how far data could and should be used to provide common goods, and what level of potential invasion of privacy was justified for particular aims;
  • The idea of data which companies wanted to use being held elsewhere (e.g. the Crosscloud idea), to help protect against abuses, was worth exploring further, though probably not a panacea.

On the whole we thought that people did and would accept that there were good uses of big data, and that their own data could be used to help bring about the potential benefits. But this had to be explained clearly and repeatedly, including the risks. Otherwise, as had happened with health data in the UK, they could easily take fright and refuse to cooperate – and trust once lost took a long time to regain. Were we already missing out on a lot of useful innovation because of excessive privacy concerns? We were not convinced that we were, but this needed to be watched. Effective use of big data could come to be seen as vital to a healthy and dynamic economy.

We kept coming back to the fundamental question of whose job it was to decide where the limits and parameters were, and to devise useful sets of expectations and standards to guide the actions of collectors and analysts of data. There was no simple answer, but there was a view around the table that in any case we needed an extra level of scrutiny. One option was to establish on a more systematic basis public/private review boards with broad representation whose remit would essentially be ethical.

Conclusion
It is clear from the above account that there are many more questions than answers in this rapidly developing world of digital devices and big data. The frequent use of metaphors in our discussions perhaps illustrated the extent to which even experts were struggling to understand what was happening, and to find the right language to express it. The issues will have to continue to be debated. But it was strongly suggested that we might not have as much time as we would like to think to manage these questions, find the right answers and set the rules, before machine-generated data and artificial intelligence overwhelmed us. We therefore had to exploit this rapidly closing window while it remained open. More discussion was urgently needed, including in national legislatures.

One interesting broad concept to reflect on for the future was that of data “husbandry” or “stewardship”: in other words, a generalised fiduciary duty to consumers and individuals, combined with a risk management approach which would focus on the purpose, relevance and accuracy of the data being collected and held, the limitations on that, and the need to discard whatever was not needed as quickly as possible. Governments should seek to define this approach and the fundamental principles which should underlie it, as the basis for ethical codes of conduct which could then be applied to particular areas, without constraining innovation and collective benefits. Again, regulators could be very important sources of doctrine and advice.

This Note reflects the Director’s personal impressions of the conference. No participant is in any way committed to its content or expression.


PARTICIPANTS

CHAIR: Ms Nuala O’Connor
President and CEO, Center for Democracy and Technology, Washington, DC. Formerly: Vice President of Compliance and Consumer Trust and Associate General Counsel for Data and Privacy Protection, Amazon.com; Global Privacy Leader, General Electric; DoubleClick; Deputy Director, Office of Policy and Strategic Planning, Chief Privacy Officer and Chief Counsel for Technology, U.S. Department of Commerce; (first) Chief Privacy Officer, Department of Homeland Security.
 
AUSTRALIA
Dr Greg Austin
Professorial Fellow, EastWest Institute (EWI) and Visiting Professor, University of New South Wales. Formerly: Vice President, Worldwide Security Initiative, and Vice President, Program Development and Rapid Response, EWI; Director of Research, Foreign Policy Centre, London; Director of Research, International Crisis Group, London.

CANADA
Mr Sameer Dhar
CEO & Co-Founder, Sensassure, Toronto (2014-). Formerly: Morgan Stanley; Clairvest; Alberta Investment Management Corporation.
Mr Tom Jenkins OC, CD, FCAE, LLD, MBA, MASc, B Eng&Mgt
OpenText, Waterloo, Ontario: Chair of the Board, formerly Executive Chairman and Chief Strategy Officer (2005-13), President and Chief Executive Officer (1994-2005). Board Member: Manulife Financial Corporation, Thomson Reuters Inc., TransAlta; Chair, National Research Council of Canada; Canadian Chair, Atlantik Bruecke; Director: C.D. Howe Institute and Canadian Council of Chief Executives; Chancellor, University of Waterloo; Chair of Advisory Board, School of Public Policy, University of Calgary. Member of Government of Canada Advisory Panel on Open Government.
Professor Justin Longo
Assistant Professor and Cisco Systems Research Chair in Big Data and Open Government, Johnson-Shoyama Graduate School of Public Policy, University of Regina, Saskatchewan. Formerly: post-doctoral Fellow in Open Governance, Centre for Policy Informatics, Arizona State University; Visiting Research Fellow, The Governance Lab, New York University and Centre for Global Studies, University of Victoria.

CHINA
Mr Shao Zheng 
Counsellor, Chinese Embassy to the United Kingdom (2015-). Formerly: Director, Foreign Ministry Spokesman's office; Counsellor, Chinese Embassy, Washington, DC; Counsellor, Information Department, Chinese Ministry of Foreign Affairs. A member of The Ditchley Foundation Programme Committee.

EUROPEAN COMMISSION/NETHERLANDS
Dr Paul Timmers
Director, Digital Society, Trust and Security, DG Connect, European Commission.

FRANCE
Professor Benoît Dupont 
Professor, School of Criminology, Scientific Director, Canada's Smart Cybersecurity Network (SERENE-RISC), and Canada Research Chair in Security and Technology, University of Montreal.

GERMANY
Mr Tobias Lutzi
Master of Philosophy (MPhil) Candidate in Law, University of Oxford; Rhodes scholar; founding member, Wikipedia Redaktion Recht. Formerly: Research Assistant, Institute of Foreign Private and Private International Law, University of Cologne (2011-14); Research Assistant, Osborne Clarke Cologne (2010-14).
Ms Isabel Skierka
Research Associate, Global Public Policy Institute, Berlin. Formerly: Carlo Schmid Fellow, NATO; Blue Book Trainee, Task Force for Internet Policy Development, DG Connect, European Commission; Visiting Researcher, Institute of Computer Science, Free University of Berlin.
Professor Hendrik Speck 
Professor of Digital Interactive Media, and Head, Information Architecture and Search Engine Laboratory, University of Applied Sciences, Kaiserslautern. Formerly: Chief Information Officer and Assistant Director, Media and Communications Department, European Graduate School; e-Commerce Consultant.

IRELAND
Ms Helen Dixon BA Hons, MA, MGov
Data Protection Commissioner (2014-). Formerly: Registrar, Companies Registration Office (2009-14); Principal Officer, Department of Enterprise, Trade and Innovation (2007-09); Manager, Technical Support Services EMEA, Citrix Systems (2000-04).

UK
Mr James Arroyo OBE 
Her Majesty's Diplomatic Service (1990-): Director for Data, Foreign and Commonwealth Office (2014-).
Mr Josh Bottomley
Global Head of Digital, HSBC, London. Formerly: Global Head of Display, Google Inc., California; Goldman Sachs; McKinsey & Co.
Ms Liz Coll
Digital Policy Analyst, Citizens Advice (formerly Consumer Futures, statutory consumer body in UK); Co-author, 'Realising consumer rights: from JFK to the digital age' and author 'Personal data: time for a fairer data deal?'; Board member: Nominet stakeholder forum, Digital Catapult Trust framework, Midata strategy board; Adviser to Consumers International digital programme.
Ms Ruby Dixon BA Hons MA MBA
Head of Local Government Practice, Alpine Group, London; Deputy Chair, Our Place! Partnership (West London); Academy of Digital Business Leadership (Candidate). Formerly: strategic advisor to: Design Council UK; Tomorrow's People, Kent County Council; Department of Business, Innovation and Skills; Mydex CIC; Senior Manager, IDeA/LGA Group.
Mr Christopher Graham
Information Commissioner (2009-). Formerly: Director General, Advertising Standards Authority; Chairman, European Advertising Standards Alliance; Secretary of the BBC; Non-Executive Lay Representative, Bar Standards Board; Non-Executive Director, Electoral Reform Services Ltd.
Professor Ian Hargreaves CBE
Professor of Digital Economy, Cardiff University (2010-); Trustee, Wales Millennium Centre (2015-); Member, Digital Wales Advisory Network (2012-). Formerly: led a review of intellectual property for the UK Government, published in May 2011 as 'Digital Opportunity: a review of intellectual property and economic growth'; Director, Strategic Communications, Foreign and Commonwealth Office, on secondment from Ofcom (2008-10); Senior Partner and Board Member, Ofcom (2003-08); Group Director of Corporate and Regulatory Affairs, BAA plc (2003-06); Editor, The New Statesman (1996-98); Editor, The Independent (1994-95); Director, BBC News and Current Affairs (1987-91).
Mr Matthew Kirk
Group External Affairs Director and Executive Committee Board Member, Vodafone Group Services Ltd, London (2006-). Formerly: HM Diplomatic Service (1982-2006): HM Ambassador to Finland (2002-06); Head, ICT Strategy, Foreign and Commonwealth Office, London (1999-2002); Head of Division, EU Secretariat, Cabinet Office, London (1998); Head, EU Presidency Department, FCO (1997).
Mr William Malcolm 
Senior Privacy Counsel, Google (Europe); European Advisory Board member, IAPP.
Professor Helen Margetts 
Director (2011-) and Professor of Society and the Internet (2004-), Oxford Internet Institute, University of Oxford; Economic and Social Research Council Professorial Fellow (2011-14); Co-Director, OxLab; Professorial Fellow, Mansfield College (2004-); Author, 'Political Turbulence: How Social Media Shapes Collective Action' (Princeton University Press, 2015). Formerly: Professor of Political Science and Director, School of Public Policy, University College, London (2001-04).
Mr Rajay Naik
CEO, Keypath Education (2015-). Formerly: Director, The Open University (2010-15); UK Digital Skills Commission; Education Technology Action Group, Department for Business, Innovation & Skills; Chairman, Big Lottery Fund (2009-15); Commissioner, Department of Health; Panel member, Independent Review of Higher Education Funding and Student Finance (2009-10); Chairman, British Youth Council (2007-10); Council Member, Learning and Skills Council (2005-08).
Mr Jason Nathan
dunnhumby: Head of Global Data. Formerly: Global Multi Channel Capability Director (2013-14); Global Director of Products (2011-13); Solutions Director Brazil (2010-11); Global Solutions Director for Category Management (2005-10).
Mr Nick Pickles
Head of UK Public Policy, Twitter (2014-); Fellow, Royal Society of Arts; internationally published music photographer. Formerly: Director, Big Brother Watch (current member of the advisory council); President, Durham Students' Union.
Ms Renate Samson
Chief Executive, Big Brother Watch.
Mr Richard Spearman CMG, OBE
Corporate Security Director, Vodafone (2015-). Formerly: Her Majesty's Diplomatic Service (1989-2015); Save the Children Fund (1984-89).
Ms Jo Swinson
Non-Executive Director, Clear Returns. Formerly: Minister for Consumer Affairs in the Department for Business, Innovation and Skills (2012-15); Vice Chair, Prime Minister's Digital Taskforce (2014-15); Member of Parliament (Liberal Democrat) for East Dunbartonshire (2005-15). A member of the Programme Committee and a Governor of The Ditchley Foundation.
Ms Claire Thwaites
Head of Government Affairs for Europe, Middle East, Africa and India, Apple Inc.

UK/USA
Mr Andrew Keen
Executive Director, salonFutureCast; author: 'Cult of the Amateur', 'Digital Vertigo' and 'The Internet Is Not The Answer' (2015); Senior Fellow, CALinnovate; Host, INNOVATE2016; CNN and GQ columnist; Founder, Audiocafe.com (1995).

USA
Mr Peter Bass
Managing Director, Promontory Financial Group, LLC, Washington DC. Formerly: Executive Assistant to the National Security Adviser, The White House; Deputy Assistant Secretary of State for Energy, Sanctions and Commodities; Senior Adviser, Office of the Secretary, U.S. Department of State; Vice President, Chief of Staff to President and co-COO, Goldman Sachs & Co.; Treasurer and Director, The American Ditchley Foundation.
Commissioner Julie Brill
Commissioner, Federal Trade Commission (2010-). Formerly: Senior Deputy Attorney General and Chief of Consumer Protection and Antitrust, North Carolina Department of Justice; Lecturer-in-Law, Columbia University School of Law; Assistant Attorney General for Consumer Protection and Antitrust, State of Vermont; Associate, Paul, Weiss, Rifkind, Wharton & Garrison, New York.
Professor David Cole
Professor of Law, Georgetown University Law Center; Legal Affairs Correspondent, The Nation; regular contributor, New York Review of Books.
Mr Kenneth Cukier 
The Economist (2003-): Data Editor. Formerly: Technology Correspondent; Japan Correspondent. Technology Editor, Wall Street Journal Asia, Hong Kong; International Herald Tribune, Paris; Research Fellow, John F. Kennedy School of Government, Harvard University (2002-04). Co-Author, 'Big Data: A Revolution that Will Transform How We Work, Live and Think' (2013).
Ms Jane Horvath
Senior Director of Global Privacy, Apple Inc., Cupertino, California (2011-). Formerly: Global Privacy Counsel, Google Inc. (2007-11); First Chief Privacy and Civil Liberties Officer, U.S. Department of Justice (2006-07).
Mr Cameron Kerry
Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Brookings Institution (2013-); Senior Counsel, Sidley Austin LLP, Boston and Washington; Visiting Scholar, MIT Media Lab; Member, World Economic Forum Global Agenda Council for Data Driven Development; Member, U.S. Department of State Advisory Committee on International Communications and Information Policy. Formerly: General Counsel and Acting Secretary, U.S. Department of Commerce; Co-Chair, National Science and Technology Council Subcommittee on Privacy and Internet Policy; attorney, Mintz Levin, Boston and Washington; Adjunct Professor of Telecommunications Law, Suffolk University Law School.
Mr Erik Neuenschwander
Product Security and Privacy Manager, Apple Inc.