Does artificial intelligence threaten democracy around the world?
Yes, according to a principal adviser at the European Commission
15 January 2019, by Bill Fassinou, News Columnist
Information and communication technologies (ICTs) have grown at an unprecedented pace over the past few decades. They evolve very quickly, while the legal framework meant to govern them takes time to put in place. Their evolution, and the use of the Internet in particular, has changed both how work is organized and how employees behave in the workplace. The growing number and variety of possible uses of ICTs make it necessary for employers and employees alike to keep those uses in check, so as to limit abuses and violations of fundamental rights. AI, a key component of modern computing, has spread to every sector of activity worldwide: health, commerce, the economy and education, to name only a few. Does this technology not have drawbacks that are never talked about?
In a paper entitled "Constitutional democracy and technology in the age of artificial intelligence", published in September 2018, Paul Nemitz (principal adviser at the European Commission and professor of law at the College of Europe in Bruges) describes what he sees as the four core elements behind the concentration of power in the hands of the big technology companies. Taken together, he argues, these elements are a threat both to democracy and to functioning markets. In the same paper, he recalls the experience with the lawless Internet and the relationship between technology and the law as it has developed in the digital economy, discusses the experience with the GDPR, and then raises key questions about AI and democracy. For example: which challenges of AI can safely be left to ethics? And which challenges of AI must be subjected to the legitimacy of the democratic process?
For Paul Nemitz, artificial intelligence is an add-on to the digital economy, which it may dominate one day. There is no doubt, he says, that one must distinguish between the Internet as a technical medium and what happens on the Internet, just as one must distinguish between the theoretical potential of AI for good and the context in which it is actually developed. It would be naive to ignore that, for most of society, how people use the Internet and what the Internet delivers to them is shaped by a few large corporations, as it would be naive to ignore that the development of AI is dominated by exactly these corporations and their dependent ecosystems. In his view, a differentiated analysis is needed, one that goes beyond the theory of the Internet and beyond simplistic, superficially plausible claims about the freedom and benefits a free Internet can provide. The analysis must bring out the real impacts of new digital technologies and of activities based on the Internet and on artificial intelligence, in particular the activities of the GAFAM companies, which have shaped our experience of the Internet and of digital technologies such as AI.
He explains that the accumulation of power in the digital sphere, which shapes both the development and deployment of AI by these big firms and the debate on its regulation, rests on four sources of power. The first is money. The second is the manipulation of information: these companies increasingly control the infrastructures of public discourse and the digital environment that is decisive for elections. They have become the main source of information, whether economic or political, especially for the younger generation, to the detriment of traditional journalism and its ambition to hold power to account, so important to democracy. The third is the collection of personal data for profit and the profiling of any one of us based on our behaviour online and offline, one of AI's major capabilities. He considers that these companies know more about us than we or our friends do, and that they use and make this information available for profit, surveillance, security and election campaigns; the Cambridge Analytica scandal is proof of this.
Fourth and last, he notes that these companies dominate the development and integration of AI-based services. Although their basic AI research may, in part, be publicly accessible, he says, these companies are always working upstream on artificial intelligence applications for commercial use, with resources that exceed public investment. Combining their existing dominance in profiling people and in collecting, analysing and distributing information over the Internet with the new optimization capabilities of AI only reinforces that dominance in every field of activity in which they already operate.
Has AI outgrown the current regulatory capacity of the law? Paul points out in his paper that the debate on AI ethics has already identified many of the challenges that AI poses to fundamental rights and the rule of law. Beyond this important work, he continues, it is clear that AI cannot and will not serve the public good without strict rules in place. "Today, with AI developed by big corporations investing billions of dollars, artificial intelligence risks running into the same problems as the Internet in its early days. The design of AI for autonomous systems and its widespread use could well lead to more disasters than those already produced by the unregulated Internet. It is astonishing how much the advocates of law for AI are on the defensive today. The major challenges that artificial intelligence poses for the rule of law and individual rights are therefore many," Paul adds on the subject. It is probably on this basis that he dismisses all the praise heaped on AI as "hype".
Some Internet users do not seem to share his view. They argue that media coverage of almost every technology is exaggerated, not just coverage of AI. In their view, artificial intelligence and machine learning will be the biggest topics in computing in the years to come, and no system will survive without these two technologies; saying that AI does more harm than good therefore amounts to putting a brake on technological progress. Others side with Paul Nemitz. They believe that the very rapid development of AI, without rules to match and without government regulation to maintain a balance, poses an enormous risk. For them, the AI of the coming years will be a technology judged excellent but one that could negatively affect lives, because of heavily biased algorithms that make decisions automatically according to the needs of their developers.
As a reminder, in a survey conducted a few days ago by the Future of Humanity Institute at the University of Oxford, 41% of the 2,000 people polled said they somewhat or strongly supported the development of AI, while 22% opposed it. Another 28% said they had no strong feelings either way. The survey defined artificial intelligence as "computer systems that perform tasks or make decisions that usually require human intelligence".
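As a rough sanity check (this is not from the survey itself, only arithmetic on the figures quoted above, with the sample size of 2,000 taken from the article), the reported shares can be converted back into head counts:

```python
# Convert the reported poll percentages into approximate respondent counts.
TOTAL_RESPONDENTS = 2000  # sample size reported in the article

shares = {
    "support": 0.41,            # somewhat or strongly support AI development
    "oppose": 0.22,             # somewhat or strongly oppose
    "no strong feeling": 0.28,  # neither
}

counts = {group: round(share * TOTAL_RESPONDENTS) for group, share in shares.items()}
print(counts)  # {'support': 820, 'oppose': 440, 'no strong feeling': 560}

# Note: the three shares sum to 0.91, not 1.0; the article does not say
# how the remaining 9% of respondents answered.
```

The leftover 9% is worth noticing: the three quoted categories do not account for the whole sample, which the article glosses over.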
These results can probably be counted as a win for the pro-AI camp, but not by much. Responses also correlated clearly with demographics: young, educated and male respondents were all more likely to favour the development of AI. For example, 57% of university graduates were in favour, compared with 29% of people with at most a secondary education. This is a notable divide, given that a good deal of research suggests that AI and robotics will increase social inequality.
Be that as it may, Paul calls for a new culture that builds the principles of democracy and human rights into AI by design, together with an assessment of its impact on society.
Source: Royal Society
Constitutional democracy and technology in the age of artificial intelligence
by Paul Nemitz
Phil. Trans. R. Soc. A 376: 20180089. http://dx.doi.org/10.1098/rsta.2018.0089
Accepted: 14 August 2018
One contribution of 9 to a theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.
Subject Areas: artificial intelligence
Keywords: artificial intelligence, GDPR, democracy, rule of law, ethics, law, digital power
Author for correspondence: Paul Nemitz, e-mail: firstname.lastname@example.org
Paul Nemitz, European Commission, Directorate General for Justice and Consumers, 59, Rue Montoyer, 1049 Brussels, Belgium. ORCID: 0000-0003-0488-0976

Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask the question how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy. This paper first describes the four core elements of today's digital power concentration, which need to be seen in cumulation and which, seen together, are both a threat to democracy and to functioning markets. It then recalls the experience with the lawless Internet and the relationship between technology and the law as it has developed in the Internet economy and the experience with GDPR before it moves on to the key question for AI in democracy, namely which of the challenges of AI can be safely and with good conscience left to ethics, and which challenges of AI need to be addressed by rules which are enforceable and encompass the legitimacy of democratic process, thus laws. The paper closes with a call for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose.
This article is part of a theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.

1. Introduction

The triad of human rights, democracy and the rule of law are the core elements of western, liberal constitutions, the 'Trinitarian formula' of our constitutionalist faith.1 These principles are the supreme law of the land—all actions of government, legislators and indeed societal reality are measured against them. Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask the question of how this new technology must be shaped to support the maintenance and strengthening of the constitutional 'Trinitarian formula' rather than weakening it. The answer proposed here is that we need a new culture of technology and business development for the age of AI which we call 'rule of law, democracy and human rights by design'. The principle of rule of law, democracy and human rights by design in AI is necessary because on the one hand the capabilities of AI, based on big data and combined with the pervasiveness of devices and sensors of the Internet of things, will eventually govern core functions of society, reaching from education via health, science and business right into the sphere of law, security and defence, political discourse and democratic decision making.

1 Term coined by Kumm; cf. on the role of this formula in and for international law see Kumm. © 2018 The Author(s). Published by the Royal Society. All rights reserved.
On the other hand, it is also high time to bind new technology to the basic constitutional principles, as the absence of such framing for the Internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call in that respect.2 The need for framing the future relationship between technology and democracy cannot be understood without an understanding of the extraordinary power concentration in the hands of few Internet giants. This paper therefore first describes the four core elements of today's digital power concentration, which need to be seen in cumulation and which, seen together, are both a threat to democracy and to functioning markets. It will then recall the experience with the lawless Internet and the relationship between technology and the law as it has developed in the Internet economy and move on to the key question for AI in democracy, namely which of the challenges of AI can be safely and with good conscience left to ethics, and which challenges of AI need to be addressed by rules which are enforceable and encompass the legitimacy of democratic process, thus laws. The paper will close with a call for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose.

2. A unique concentration of power in the hands of digital Internet giants which develop artificial intelligence

AI is developed as an add-on to the digital Internet economy, which it may dominate one day.
There is no doubt that one must differentiate between the Internet as such, being a technical structure for connecting people and information, and what is happening on the Internet, as one must differentiate between the theoretical potential of AI for good and the context and purposes for which it is actually developed by those who largely control its development. It would be naive to ignore that for most in our societies today, the reality of how they use the Internet and what the Internet delivers to them is shaped by a few mega corporations, as it would be naive to ignore that the development of AI is dominated exactly by these mega corporations and their dependent ecosystems. A differentiated analysis is therefore called for, going beyond the theory of the Internet and simplistic, prima facie plausible claims of the freedom and benefits a free Internet can provide or a simplistic claim of the theoretical public benefits from AI. The analysis must include the real impacts of new digital technologies and business activities based on the Internet and AI, in particular the activities of the 'Frightful five' [3,4] shaping our experience with the Internet and digital technologies like AI: Google, Facebook, Microsoft, Apple and Amazon. These corporations, together with a few others, shape not only the delivery of Internet-based services to individuals.

2 See in more detail the chapter 'The Internet and its failures have thrived on a culture of lawlessness and irresponsibility' below, and most recently the interim report on Disinformation and 'Fake News' by the select Committee on Culture, Media and Sport of the UK House of Commons, 29 July 2018, available at https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36302.htm.
They are extremely profitable, leading to rises in stock market valuations, and therefore wield economic power which does not only guarantee disproportionate access to legislators and governments, but also allows them to hand out freely direct or indirect financial or in kind support in all areas of society relevant to opinion building in democracy: governments, legislators, civil society, political parties, schools and education, journalism and journalism education and—most importantly—science and research. The pervasiveness of these corporations is thus a reality not only in technical terms, but also with regard to resource attribution and societal matters. Politics, civil society, science, journalism and business traditionally try to keep a certain distance from each other, some call it an arm’s length relationship. But today, the frightful five are present in all of these fields, to gain knowledge and learn for their own purposes, but also, to put it in diplomatic terms, to gain sympathy and understanding for their concerns and interests. The critical inquiry into the relationship of the new technologies like AI with human rights, democracy and the rule of law must therefore start from a holistic look on the reality of technology and business models as they exist today, including the accumulation of technological, economic and political power in the hands of the ‘frightful five’, which are at the core of the development and systems integration of AI into commercially viable services. The accumulation of digital power, which shapes the development and deployment of AI as well as the debate on its regulation, is based on four sources of power. First, deep pockets, money being the classic tool of influence on politics and markets. 
Not only can the digital mega players afford3 to invest heavily in political4 and societal influence as already mentioned, they can also afford to buy up new ideas and start-ups in the area of AI or indeed any other area of interest to their business model—and they are doing just that.5 Second, beyond informal influence based on money, these corporations increasingly control the infrastructures of public discourse and the digital environment decisive for elections. No candidate in democratic process today can afford not to rely on their services. And their internet services increasingly become the only or main source of political information for citizens, especially the younger generation, to the detriment of the Fourth Estate, classic journalist publications, with the ambition to control power, so important to democracy.6 Their targeted advertising business model drains the Fourth Estate, journalism, concentrating today more than 80% of new advertisement revenue previously the basis for the plurality of privately financed press in the hands of just two companies, namely Google and Facebook.7 They are not the only, but a major cause of the dying of newspapers and Fourth Estate journalism as a profession, both in Europe and in the USA. Third, these mega corporations are in the business of collecting personal data for profit and of profiling anyone of us based on our behaviour online and offline. They know more about us than ourselves or our friends8—and they are using and making available this information for profit, surveillance and security and election campaigns. They are benefitting from the claim to empower people, even though they are centralizing power on an unprecedented scale. The Cloud and profiling are the key tools of centralization. Fourth, these corporations are dominating development and systems integration into usable AI services. While their basic AI research may, in part, be publicly accessible, the much more resource intensive work on systems integration and AI applications for commercial use is taking place in a black box with resources surpassing the public investments in similar research in many countries. The combination of their prior dominance in profiling of people and collecting as well as analysing and distributing information over the Internet with the new optimizing capabilities of AI will strengthen their dominance in all areas of activities they are already in, and extend it to others. It is this cumulation of power in the hands of a few—the power of money, the power over infrastructures for democracy and discourse, the power over individuals based on profiling and the dominance in AI innovation, which must be seen together. The Internet giants are the single group of corporations in history which have managed to keep their output largely unregulated, to dominate markets and be most profitable at the top of the stock exchange, to command important influence on public opinions and politics, and at the same time stay largely popular with the general public.

3 Apple, Alphabet, Facebook, Amazon and Microsoft hold the top positions in global stock market valuations 2018, see https://www.statista.com/statistics/263264/top-companies-in-the-world-by-market-value/; and they had the highest absolute increase of stock market capitalization since 2009 worldwide, see the latest PWC report on the top 100 companies, slide 32, available at https://www.pwc.com/gx/en/audit-services/assets/pdf/global-top-100-companies-2017-final.pdf.
4 See on the US tech giants' lobby spending in Brussels: https://www.statista.com/chart/11578/us-tech-giants-lobbying-in-europe/ and in the US: https://www.statista.com/chart/10393/lobbying-expenditure-of-tech-companies/; for a realistic comparison of lobbying spending in the EU and the US, one would have to include the lobbying in capitals of Member States of the EU, as it is the governments of the EU which determine to a large part the content of EU legislation; the figures do not include the substantial costs of cultural and 'information' events which serve to make friends and gain hearing among policy makers, nor do they include the spending on journalism education, journalism projects, civil society organizations and stakeholder fora and think tanks or academic research in the areas of social sciences related to the internet. For a full understanding of the influence of the tech giants on the public debate relating to AI and internet policy, all this spending would have to be seen in cumulation, and in addition to the spending of trade organizations to which they belong or which they finance.
5 The Race for AI: Google, Intel, Apple in a Rush to Grab Artificial Intelligence Startups, CBInsights.com, 27 February 2018, https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/.
6 See for details on the increasing market share of social networks and search as a source for news to the detriment of traditional media: Where do people get their news? The British media landscape in 5 charts, Rasmus Kleis Nielsen, 30 May 2017, published on 'Medium' by Oxford University, https://medium.com/oxford-university/where-do-people-get-their-news-8e850a0dea03.
7 Financial Times, 4 December 2017: Google and Facebook dominance forecast to rise, Tech duopoly to account for 84% of online advertising spend this year, forecasts report, https://www.ft.com/content/cf362186-d840-11e7-a039-c64b1c09b482; Adexchanger, 10 May 2018: Digital Ad Market Soars To $88 Billion, Facebook And Google Contribute 90% Of Growth, https://adexchanger.com/online-advertising/digital-ad-market-soars-to-88-billion-facebook-and-google-contribute-90-of-growth/.
It is this context of a unique concentration of power, the experience with the absence of regulation for software and Internet-based services and the history of technology regulation by law which must inform the present debate about ethics and law for AI, together with the potential capabilities and impacts of this new technology.

3. The Internet and its failures have thrived on a culture of lawlessness and irresponsibility

In a history of 'Google Books', Scott Rosenberg describes the early attitude of Silicon Valley as engineering driven and without respect for the law. This 'better ask forgiveness than permission' attitude brought 'Google Books' into conflict with copyright laws and 'Uber' with labour law and regulation of public transport.9 Google took away a 'lesson', namely to consider 'lobbyists and lawyers' earlier to be able to play the 'sometimes' necessary game of politics. The Silicon Valley and its current culture follow the Californian ideology. Its roots reach back to the 1960s youth movement. This movement, in part, sought the withdrawal from politics dominated by Washington and from the domination of the—at the time—IBM mainframe. Normatively, there was a strong impetus for personal freedom and individual empowerment through decentralization at the core of this movement. The development of the personal computer and the emblematic first advertisement in 1984 of the Apple Macintosh were expressions of this quest for individual freedom and self-fulfilment, away from societal restraints and dependency on the state and its institutions. Famously, in his 'Declaration of the Independence of Cyberspace', John Perry Barlow rejected the idea that any law might suit the Internet, claiming that traditional forms of government, those which we would argue can only be based on the rule of law, 'have no sovereignty where we (the actors of cyberspace) gather'.10 It is no coincidence that this declaration was presented in 1996 at the World Economic Forum.11 This idea was mirrored in legal discourse too, as Johnson and Post posited that, if users of a particular space on the Internet wanted to establish a set of rules that would not violate vital interests of non-users, 'the law of sovereigns in the physical world should defer to this new form of self-government'. But it was not only the California ideology which drove disrespect for the law.

8 See for details on Google, which has a market share in search of over 90% in Europe: What Google Knows About You & How to Control Your Data, Tech Advisor, 15 June 2018, https://www.techadvisor.co.uk/how-to/security/what-does-google-know-about-you-3592743/; on Facebook, see Andrew Quodling, Shadow profiles – Facebook knows about you, even if you're not on Facebook, in: The Conversation, 13 April 2018, https://theconversation.com/shadow-profiles-facebook-knows-about-you-even-if-youre-not-on-facebook-94804; What is Psychographics? Understanding the 'Dark Arts' of Marketing that Brought Down Cambridge Analytica, CBInsights.com, 7 June 2018; Facebook ad feature claims to predict user's future behavior, Guardian, 16 April 2018, https://www.theguardian.com/technology/2018/apr/16/facebook-ad-feature-predict-future-behaviour; Facebook runs about 200 trillion predictions per day, according to Yann LeCun, Chief of AI development at Facebook, 3 May 2018 at 8.03 on Twitter, https://twitter.com/ylecun/status/991936213249650688.
9 Most recently, the Court of Justice of the European Union (CJEU) considers that Uber is a transport service and can be forced to obtain the necessary licences and authorizations under Member State law, see Case C-434/15 Asociación Profesional Elite Taxi v Uber Systems Spain SL ECLI:EU:C:2017:364.
The teaching of disruptive innovation, widespread in business schools, eventually legitimized even the disruption of the law.12 The heroes of the disruptive Internet did not just speak out against governments and parliamentary law, break intellectual property rights and transport law; it also became a fashion to trick the system of tax collection based on national jurisdiction, making necessary such decisions by the European Commission as that on Apple having to pay 13 billion euros of previously unpaid taxes in Ireland,13 or to disrupt regulators by not telling the truth, as happened in the Facebook/WhatsApp merger case, which led the European Commission to impose a fine of 110 million euros on Facebook.14 Avoiding the law or intentionally breaking it, telling half truths to legislators or trying to ridicule them, as we recently saw in the Cambridge Analytica hearings by Mark Zuckerberg of Facebook, became a sport on both sides of the Atlantic in which digital corporations, digital activists and digital engineers and programmers rubbed shoulders. Their explicit or implicit claim that parliamentarians and governments do not understand the Internet and new technology such as AI, and thus have no legitimacy to put rules for these in place, is not matched with a self-reflection on how little technologists actually understand democracy and the functioning of the rule of law as well as the need to protect fundamental rights in a world in which technology increasingly tends to undermine all these three pillars of constitutional democracy. On the contrary, the figures of argumentation presented by tech corporations and activists alike against new law15 over and over demonstrate that still today they put technology before and above democracy.
They do not realize or do not want to realize that their crusade against the law puts democracy in a pincer movement between technology on the one hand and populists as well as dictators on the other. The automated public sphere (Frank Pasquale), the new communications environment of the Internet, in addition plays into the hands of populists, as they are best able to communicate their ideology in short messages adapted to the new agora of political discourse, the mobile phone screen. Trump ruling by tweet is the best example for this. Constructive policies, which address the complexity of modern societies and seek democratic, inclusive debate, do not fit into a tweet or only on one mobile phone screen. The struggle to evade democratic law, and thus responsibility, is not limited to the political arena or to any specific area of law. It is also fought before the judiciary.

10 Barlow; see the relativization by Barlow himself.
11 Internet Hall of Fame, https://internethalloffame.org/news/in-their-own-words/declaration-independence-cyberspace.
12 See on this Biber, Eric; Light, Sarah E.; Ruhl, J. B. and Salzman, James E., Regulating Business Innovation as Policy Disruption: From the Model T to Airbnb (12 April 2017), Vanderbilt Law Review, Vol. 70:5:nnn; Vanderbilt Law Research Paper No. 17–24; UCLA School of Law, Public Law Research Paper No. 17–18, SSRN: https://ssrn.com/abstract=2951919; Copyright, defamation, employment: how tech is disrupting every corner of the law, https://www.theguardian.com/legal-horizons/2018/jan/03/copyright-defamation-employment-how-tech-is-disrupting-every-corner-of-the-law; and Narayan Toolan, World Economic Forum, 13 April 2018: 3 ways the Fourth Industrial Revolution is disrupting the law, https://www.weforum.org/agenda/2018/04/three-ways-the-fourth-industrial-revolution-is-disrupting-law/.
13 State aid: Ireland gave illegal tax benefits to Apple worth up to €13 billion, Press release of the European Commission, 30 August 2016, http://europa.eu/rapid/press-release_IP-16-2923_en.htm; full text of the decision: http://ec.europa.eu/competition/state_aid/cases/253200/253200_1851004_674_2.pdf.
14 Mergers: Commission fines Facebook €110 million for providing misleading information about WhatsApp takeover, Press release of the European Commission, 18 May 2017, available at http://europa.eu/rapid/press-release_IP-17-1369_en.htm; full text of the decision: http://ec.europa.eu/competition/mergers/cases/decisions/m8228_493_3.pdf.
15 See for the latest big tech lobby campaigns against new law: Inside the ePrivacy Regulation's furious lobbying war, David Meyer, IAPP.org, 31 October 2017, https://iapp.org/news/a/inside-the-eprivacy-regulations-furious-lobbying-war/; Google and Facebook are quietly fighting California's privacy rights initiative, emails reveal, Lee Fang, TheIntercept.com, 26 June 2018, https://theintercept.com/2018/06/26/google-and-facebook-are-quietly-fighting-californias-privacy-rights-initiative-emails-reveal/; Big Tech worried as California law signals US privacy push, Ben Brody, Bloomberg, 29 June 2018, https://www.bloomberg.com/news/articles/2018-06-28/big-tech-worried-as-california-law-brings-privacy-push-to-u-s.
A prominent example of this fight against responsibility in a judicial proceeding, directly relevant to AI, is the 'Google Spain' case before the CJEU. In this case, Google first disputed the applicability of EU data protection law to the benefit of EU citizens, despite having set up a subsidiary on EU territory.16 In doing so, Google essentially claimed that because the answers to search queries posed in Europe came from Californian servers, only Californian law was applicable. Second, it disputed that its search engine operation could be regarded as 'data processing',17 which would have relieved Google of any responsibility for search answers, basically claiming that the selection process of its search engine is beyond its control due to automation in the form of an algorithm. Google thus revealed in this case its worldview as to the rule of law: first, automation in the form of an algorithm providing a service to individuals should shield the intermediary using this technology from any legal responsibility. In this view, John Perry Barlow's dream would come true: technology would win over the rule of law in the digital age. Second, if there should be any form of regulatory responsibility, it can only be one global regime, presumably dominated by US law and adjudicated by US judges. This would be ideal for Google in terms of reducing compliance costs when operating in different jurisdictions, and fully in line with its goal of maintaining a unique global Internet structure, not fragmented by national regulation, but it would make justice much more difficult to pursue for non-US citizens and residents. It is in this context all the more satisfying how the CJEU, in what has been coined the 'return of the law', reacted to these arguments: it answered Google's submissions within the boundaries of legitimate statutory interpretation, but with a clear sensitivity to the significance these submissions had in the greater context.
Effectively, it rejected the claims Google made, and thus enabled effective protection of fundamental rights for European citizens based on primary law and a long-standing European legal tradition of protecting privacy and personal data. The Google Spain judgement is today the leading case allowing one to object to any attempt to separate responsibility for automation and the autonomous development of AI from the original creation and commercial deployment of such programs. One may speculate whether Google, in making the arguments denying its responsibility for what the search algorithm did, was already thinking of creating a general precedent to shield it from responsibility for what autonomous AI algorithms would do one day. In any case, the answer of the CJEU in this case was clear and reassuring: there can be no such shielding against responsibility. Seeing all this in context, the common denominator is indeed an effort to evade responsibility, first on the level of the making of law, and second on the level of the application of the law, and this by a group of companies which concentrate power in their hands without precedent in history. It is important not to ignore this history of failure to assign and assume responsibility in the Internet age, both by legislators and by tech corporations, which led to the fiascos of the Internet: the spread of mass surveillance, recruitment to terrorism, incitement to racial and religious hate and violence, and multiple other catastrophes for democracy, the latest being the Cambridge Analytica scandal and the rise of populists, often the most sophisticated beneficiaries of the assistance of Facebook, YouTube, Twitter and co., combining the targeted-advertising and network techniques developed for profit with political propaganda.

16 Case C-131/12 Google Spain SL v Mario Costeja González, ECLI:EU:C:2014:317, para 47, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62012CJ0131.
17 Ibid, para 22.

4. Ethics and law to ensure that AI serves the public good

The debate on ethics for AI has already identified the numerous challenges to fundamental rights and the rule of law which AI poses.18 It is clear from this important body of work that AI cannot and will not serve the public good without strong rules in place. The potential capabilities of AI simply forbid repeating the risk taking which led to the lawlessness of the emergent Internet age, as these capabilities can create major and irreversible damage to society. But today, with AI being developed by mega corporations with investments of billions, the infancy argument, so much abused for the Internet and having created so much damage, is back, this time for AI. In a move of genius, the corporations interested have started to finance multiple initiatives working on the ethics of AI, thus, while professing the best intentions, effectively delaying the debate and work on law for AI. There is no doubt that ethics is a good thing, in particular intra-company ethics for leadership and employees which go beyond what the law requires. But any effort to replace or avoid the necessary law on AI through ethics must be rejected, as this effectively cuts out the democratic process. It is also clear that the numerous conflicts of interest which exist between the corporations and the general public as to the development and deployment of AI cannot be solved by unenforceable ethics codes or self-regulation. This is not to say that the corporations have no contribution to make to the debates on ethics and law for AI; many people working for them have the best intentions and can be important partners in this debate.
Given the long reach of the finances of the mega corporations, it is however important that very clear accounts are given of any relations of employment, financing or other benefits received or expected from any of the interested parties by those participating in the debate. Those who are keen to debate ethics in the first place should follow this rule of transparency and honesty as to possible conflicts of interest, and leave the audience to judge the value of the arguments made in light of their relations with interested parties. The design of AI for autonomous development, and its very widespread use, may well lead to much more catastrophic impacts than those the unregulated Internet has already produced. There is therefore a strong argument, looking only at the experience with the Internet and at the potential capabilities of AI as well as its potential widespread use, that militates in favour of a precautionary legal framework, setting down the basic rules necessary to safeguard the public interest in the development and deployment of AI. It is astonishing how much on the defensive the proponents of law for AI are today, as after all there is a long history of technology regulation by law. Every architect must already learn during their studies the building code and work according to its legal rules, which give form to the public interest in not having buildings collapse. Every car on the street must go through type approval, for reasons of safety. The legal duty to put on seatbelts, heavily fought against by industry and automobile clubs alike, eventually reduced the number of traffic deaths by half. Over and over, society has confirmed the experience that law, and not the absence of law, relating to critical technology serves the interests of the general public. Closer to the present, the work on ethics for AI has not only identified numerous very important challenges AI poses for the rule of law, democracy and individual rights.
It has also already led to numerous catalogues of ethics rules for AI and autonomous systems. Alan Winfield counted 10 codes of ethics for AI.19 The latest additions to this list are the Statement on Artificial Intelligence, Robotics and 'Autonomous Systems' by the European Group on Ethics in Science and New Technologies of 9 March 2018,20 and the paper by the French Data Protection Authority CNIL on rules for Artificial Intelligence of 15 December 2017.21

18 Instead of many, see the AI Now annual report 2017, https://ainowinstitute.org/AI_Now_2017_Report.pdf.
19 A Round Up of Robotics and AI Ethics: Part 1 Principles, 23 December 2017, http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html.
20 See http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.
21 See https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues.

There is thus no scarcity of proposals of ethical principles for AI, and the High Level Group on AI, created by the European Commission in its Communication on Artificial Intelligence of 25 April 2018,22 will certainly work through all this material and develop yet another catalogue of proposals by the end of 2018.23 With all this material now on the table, it is time to move on to the crucial question in democracy, namely which of the challenges of AI can be safely and with good conscience left to ethics, and which need to be addressed by rules which are enforceable and based on democratic process, thus laws. In answering this question, responsible politics will have to consider the principle of essentiality which has long guided legislation in constitutional democracies.
This principle prescribes that any matter which is essential, because it either concerns fundamental rights of individuals or is important to the state, must be dealt with by a parliamentary, democratically legitimized law.24 Another important element to consider in answering this question is whether, after the experience with the lawless Internet, our democracies can again afford the risk of a new, all-pervasive and decisive technology which remains unregulated and is thus likely to produce, like the Internet before it, substantial negative impacts. Also, AI is now, in contrast with the Internet, from the outset not an infant innovation brought forward mainly by academics and idealists, but is largely developed and deployed under the control of the most powerful Internet technology corporations. And it is these powerful corporations which have already demonstrated that they cannot be trusted to pursue the public interest on a grand scale without the hard hand of the law and its rigorous enforcement setting boundaries, and even giving directions and orientation for innovation in the public interest. In fact, some representatives of these corporations may themselves have recently come to this conclusion and called for legislation on AI.25 There are also functional advantages of binding law for dominant players: they may be better situated than others to influence the content of the law, and binding law may allow them to keep free-riding competitors and new market entrants in check on a level playing field of common binding rules which are properly enforced against all.

24 Jo Eric Khushal Murkens considered the judgement of the UK High Court that decisions relating to Brexit should not be left in the hands of the government alone but should give Westminster a say to be a case of application of the principle of essentiality; see The High Court's Brexit Decision: A Lesson in Constitutional Law for the UK Government, in: Verfassungsblog – On matters constitutional, 3 November 2016, https://verfassungsblog.de/the-high-courts-brexit-decision-a-lesson-in-constitutional-law-for-the-uk-government/; for the principle of essentiality in German, US and EU law, see Johannes Saurer, EU Agencies 2.0: the new constitution of supranational administration beyond the EU Commission, in: Comparative Administrative Law, Second Edition 2017, edited by Susan Rose-Ackerman, Peter L. Lindseth and Blake Emerson, pp. 619–628; further recent examples in the jurisprudence of the CJEU are the judgments in Case C-355/10, ECLI:EU:C:2012:516, European Parliament v. Council, para 64ff, with further precedent, https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1534186617433&uri=CELEX:62010CJ0355, and Cases C-293/12 and C-594/12, ECLI:EU:C:2014:238, Digital Rights Ireland, para 54ff, http://curia.europa.eu/juris/documents.jsf?num=C-293/12.
25 Microsoft Says AI Advances Will Require New Laws, Regulations, Bloomberg, 18 January 2018, https://www.bloomberg.com/news/articles/2018-01-18/microsoft-says-ai-advances-will-require-new-laws-regulations.

5. The EU GDPR is the first piece of legislation for AI

In the same way that the 'Greening of GE', and of industry generally, came about after environmental protection legislation incentivized and forced innovation in the direction of environmental sustainability, so now will the new General Data Protection Regulation (GDPR)26 of the EU drive innovation towards ways of collecting and processing personal data which respect individual rights and the importance of privacy in democracy. All the arguments now being presented against legislation for AI were already presented against legislation for data protection, in the years before 1995, when the first Directive on the protection of personal data was put in place in the EU, and again from 2012 to 2016, during the four years of negotiation of the GDPR. None of these arguments convinced legislators, and rightly so. The claim that the law cannot develop as fast as technology and business models is disproved by the continued application of good, technology-neutral law in the US and Europe. The GDPR is a modern example of technology-neutral law, the meaning and relevance of which changes with the progress of technology, including through AI. For example, what counts as personal data, in the sense of data which do not directly identify an individual but nevertheless allow an individual to be identified, will certainly change with the use of self-learning AI algorithms to track and identify humans online and offline.

22 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN.
23 To follow the work of the European AI Alliance, see https://ec.europa.eu/digital-single-market/en/european-ai-alliance; the German Government has appointed an independent Committee to advise on Data Ethics, see https://www.bmjv.de/DE/Ministerium/ForschungUndWissenschaft/Datenethikkommission/Datenethikkommission.html; the author is one of 16 Members of this Committee; see also Ben Wagner, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?, 11 July 2018, https://www.privacylab.at/ethics-as-an-escape-from-regulation-from-ethics-washing-to-ethics-shopping/.
26 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, pp. 1–88, https://eur-lex.europa.eu/eli/reg/2016/679/oj.
The recent publication of an AI analysis tool mapping 5000 points on the human body in movement, by Antoine Bordes, Facebook's Paris-based Director of basic Machine Learning Research, and colleagues, will be a great tool for the fashion industry to match body forms and fashion.27 But it will most probably also allow any individual to be identified from the back, given the uniqueness of body forms and movement patterns of human bodies, thus bringing body form patterns, even when taken from behind and without seeing the face, into the sphere of personal data. The claim that the law is not precise enough to regulate complex technology, and that a law which falls below the detail, precision and user-friendliness of good code is not a good law and should thus not be adopted by the legislator, is another fallacy of the engineering view of the world. By definition, law adopted in democratic process requires compromise. The GDPR was negotiated between the co-legislators with nearly 4000 individual requests for amendments on the table. Compromise texts of laws produced in democratic process are the noblest expression of democracy, and a democracy which has lost the ability to negotiate compromise is in crisis. Compromise texts of laws in democracy normally fulfil their function perfectly, as the compromise reached, in an ideal case after long public deliberation, represents societal progress towards a consensus (never fully reached) on the rules according to which we want to live. And these compromise laws, like any other law, are not written to be applied, like code, by machines and automation. Laws are produced to be applied by human beings who can reason for themselves, and to be interpreted in case of dispute by reasonable judges.
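To illustrate why such derived body measurements can fall within the definition of personal data, here is a minimal, purely hypothetical sketch (invented feature vectors, not the actual Facebook tool, which works with far richer data): once body-shape vectors are stable and distinctive, a simple nearest-neighbour match singles out an individual without any name or face.

```python
import math

# Hypothetical, simplified illustration: each person is reduced to a short
# vector of body-shape measurements (a real tool would use thousands of points).
enrolled = {
    "person_a": [0.52, 1.71, 0.33, 0.91],
    "person_b": [0.48, 1.65, 0.35, 0.88],
    "person_c": [0.60, 1.82, 0.30, 0.95],
}

def euclidean(u, v):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def reidentify(observed, database):
    """Return the enrolled identity whose body-shape vector is closest
    to the observed one: no face or name is needed for a match."""
    return min(database, key=lambda name: euclidean(observed, database[name]))

# A vector captured from behind, with measurement noise, still matches.
observed = [0.51, 1.70, 0.34, 0.90]
print(reidentify(observed, enrolled))  # person_a
```

The point of the sketch is exactly the argument above: a record containing no name and no face nevertheless allows an individual to be identified, which brings it into the sphere of personal data under the GDPR.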
It is this openness of the law and the legal process to later interpretation by wise judges (with the help of academia28) which gives the law the flexibility to adapt to the new requirements of the times without having to be rewritten like code, which must be constantly revised from version 1.0 onwards. To be very clear: requiring that law be either as precise as code or be rewritten as fast as code is updated is simply anti-democratic, as it ignores the need for deliberation and compromise in democracy as well as the time required for due process under the rule of law. Lobbyists, by the way, had no problem criticizing successive drafts of the GDPR as not providing sufficient legal certainty while at the same time criticizing them as too prescriptive and not sufficiently open to provide flexibility for the future: these contradictory claims demonstrated that they had only one aim, namely to avoid the law by discrediting it no matter what. Those advertising ethics for AI make exactly the claim that law for AI would be too inflexible to take account of all possible future developments, a claim which was also made against the GDPR. But this claim ignores the power of technology-neutral legislation and the power of general laws to be concretized in their application by evolving practice and jurisprudence. The argument of law being too inflexible to take account of technological development is basically a more elegant way of saying what corporations and neo-liberals have always said: we want no obligations by law, as with law we can be held accountable through enforcement. Business has no problem with the fact that an ethics code lacks democratic legitimacy and cannot be enforced. But this is actually the core advantage of the law compared with ethics and self-regulation in the face of the concentration of power in the hands of big corporations: the law has democratic legitimacy and it can be enforced, even against powerful mega corporations. It thus creates, combined with a credible threat of deterrent sanctions and proper enforcement, a level playing field beneficial to all, and beyond that gives orientation to innovators by providing incentives directing innovation towards the public interest. In a world of technological dreams of domination and populist ambitions to undermine democracy, we need to strengthen democracy by giving back to the law the noble function it has in constitutional democracy: to express the will of the people in a form obligatory for everyone and able to be brought into reality, even against resistance and non-compliance, by the use of public and private enforcement powers.

27 See details at http://densepose.org/.
28 In a society increasingly ruled by markets and technologies provided by the private sector, it is vital to increase the share of public funding for academia in order to maintain its public good orientation and independence and to avoid capture by private interests.

6. The future of technology law: three levels of impact assessment and democracy, rule of law and human rights by design in AI

We live in a world which is shaped at least as much by technology as it is by law and democracy. And in the same way that the people shape the law and the law shapes the behaviour of people, we need to get used to, and practise, the fact that the law is shaped by technology and technology is shaped by the law. Every technology so far has lived with being shaped by the law, and it is high time that Silicon Valley and the digital Internet industry also accept this necessity of democracy.
At a time when the Internet and AI are becoming all-pervasive, not regulating these all-pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships around the world. There can be no all-pervasive, powerful and decisive technology, whether the Internet or AI, which is not subject to the rules set by democracy in law. The works on ethics rules for technology can be precursors of the law; they can give orientation on the possible content of legal rules. But they cannot replace the law, as they lack democratic legitimacy and the binding nature which allows enforcement with the power of government and the judiciary. There will also always be space for ethics going beyond what the law requires: intra-company ethics for engineers and leaders of major businesses are a good thing, if they include the principle of full compliance with the law of the land and go beyond it, for example in terms of the public interest orientation of the company. Much of what Silicon Valley evangelists claim technology companies are doing in terms of good deeds is welcome and not required by the law, but no ethics can absolve from the obligation to comply with the law and to respect and support democratic process and all other rules of constitutional democracy. The principle of essentiality, discussed above, will guide us in making the necessary decisions on which rules for AI to cast in law. Once a challenge of AI is discovered which touches on fundamental rights of individuals or important interests of the state, we have to ask ourselves whether a law already exists which can apply to AI and which addresses the challenge in a sufficient and proportionate manner. So before making a new law, the potential scope of application and problem-solving ability of existing law in relation to AI must be determined.
To give three examples: there is no doubt that AI, when it develops autonomously or in certain areas of application, may raise issues of civil liability. Whether existing law is sufficient to cover these issues or new law is necessary is presently planned to be studied by an expert group convened by the European Commission.29 Similarly, it is clear that the GDPR will always apply to AI when it processes personal data. The GDPR contains important rights for users relating to any processing of their personal data, as well as obligations of processors, which will shape the way AI is developed and applied. The principles of privacy and data protection by design and by default set out in the GDPR will certainly become very important for AI, as will the limitations on automated processing and the related rights to meaningful information on the logic involved and on the significance and envisaged consequences of processing personal data with AI for those concerned. No new law is necessary in this respect. On the other hand, in democratic discourse it is important to know whether one's counterpart in discussion is a human or a machine. If machines could participate in political discourse without being identified as such, or could even impersonate humans without sanction, this would amount to an important distortion of discourse, untenable in democracy. No law currently ensures that we are made aware when machines enter into dialogue with us in the political context. As transparent political discourse among humans is key to democracy, the principle of essentiality prescribes that transparency must be created by law as to whether a machine or a human is speaking.

29 See details at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=615947.
Non-transparent machine speech, and a fortiori impersonation, should be sanctioned, and those who maintain major infrastructures of political discourse should be held responsible for ensuring full transparency regarding machine speech on their infrastructures. This will require new law. Conversely, we can be optimistic about the future application of the extensive EU legal acquis on non-discrimination and consumer protection in the context of AI. This being said, some general principles on the approach to law for AI now need work. There are a number of avenues worth reflecting on when it comes to discerning general principles for AI law. A first avenue may be simply to ask whether there are any actions which would be illegal if carried out by a natural person but legal if carried out by AI, abstraction made of subjective elements of illegality such as intent or negligence. If on an objective level the answer to this question is no, it will be important to codify in law the principle that an action carried out by AI is illegal if the same action, carried out by a human and abstraction made of subjective elements, would be illegal. Such a simple codification would maintain the rule of law in the age of AI and at the same time give a clear orientation to developers and deployers of AI. A second avenue for reflection would be to test whether regulatory principles found in specific bodies of law should be generalized for AI. For example, in most areas of sensitive human–technology interaction, and most prominently in the law on pharmaceuticals, there is not only an extensive duty to test products and procedures of type approval as a precondition of access to the market; there is also a duty to follow the effect of the application of the technology or the pharmaceutical on human beings throughout the lifetime of the product.
The purpose of these rules is to avoid harm and secure other public interests, for cars for example also the public interest of environmental protection, so badly disregarded by the fraudulent technology manipulations of VW and other car companies. AI may be a candidate for such procedures and obligations, both on a general level and with specific mutations if developed for or applied in specific domains. A third avenue is to return to old principles of technology impact assessment and to apply the most recent state of the art of technology impact assessment systematically to AI. The renaissance of technology impact assessment, a good tradition of parliaments in Europe and the US since the 1970s, would be in line with the much needed increased dialogue between democracy and technology. It would also help to instil a general culture of responsibility in the tech industry, in a way that is both obligatory and flexible enough to allow for and encompass any new technology developments. While in Europe parliamentary technology impact assessment grew into a standard routine, based on Hans Jonas' Principle of Responsibility, which considered investments in such impact assessment a key element of the precautionary approach, in the US the Congressional Office of Technology Assessment was closed during the Reagan reign, an early victory of the anti-science movement. Hillary Clinton in her election campaign had actually promised to reinstate that office or a similar process. The American Academy of Arts and Sciences preserves the memory of this important institution. In Europe, the principles and methodologies for assessing the impacts of technology in the short and long term, with a view to informing policymakers and legislators, have however lived on.
The European Association for Parliamentary Technology Assessment consolidates the methodologies and individual impact assessments for parliaments in Europe in a common database, which already includes a number of pre-studies relating to the capabilities and impacts of AI.30 What would be necessary to strengthen trust in technology in the age of Artificial Intelligence, in which high technology increasingly colonizes every aspect of life, is obligatory impact assessments for new technologies on three levels. First, as just discussed, the parliamentary technology impact assessment, on the level of policy making and legislation, in order to ascertain whether essential interests are touched on by the technology in question and thus what legislation to put in place to guarantee the public interest in that context. This impact assessment should ideally take place before the deployment of high-risk technologies. Decisions as to the consequences to draw from the risk assessments carried out by experts are in the hands of governments and legislators, and on the EU level in the hands of the Commission and of the Council and Parliament as co-legislators. Second, an impact assessment on the level of the developers and users of such technology. For AI, it would certainly be warranted to extend by law the obligation of an impact assessment, which already exists when AI processes personal data in the context of automated decision making,31 to all aspects of democracy, the rule of law and fundamental rights, at least when AI has the potential to be used in the context of the exercise of public power, in the democratic and political sphere or in the provision of services of general use, independent of whether personal data are processed or not. The importance of these impact assessments at developer and user level would be that they underpin public knowledge and understanding of AI, which presently suffer from a lack of transparency as to the capabilities and thus the impacts of AI.
They would also help the corporations, their leaders and the engineers developing the new technologies and their applications to own up to the power they exercise. They would thus help to instil a new culture of responsibility32 in technology for democracy, the rule of law and fundamental rights. The standards for such AI impact assessments, to be carried out at the latest before making a new AI program public or marketing it to clients, would have to be set in law, in an abstract-general manner, as they were set in law in the GDPR for the specific case of data protection impact assessments.33 And as in the GDPR, compliance with the standards for the impact assessment would have to be controlled by public authorities, and non-compliance should be subject to sufficiently deterrent sanctions. In cases of AI to be used in the exercise of public power or in wide public use, the impact assessment would have to be made available to the public, and in high-risk cases the public authority making use of AI would have to carry out its own complementary assessment and present a risk reduction and mitigation plan. Without distinguishing between private and public sectors, the most elaborate plan so far for setting up an EU Agency for ex ante certification and registration, as well as a legal framework of substantive rules governing the research, development and use of AI and robotics, is contained in the Resolution of the European Parliament of 16 February 2017.34 Third, individuals concerned by the use of AI should have a right, to be introduced by law, to an explanation of how the AI functions, what logic it follows and how its use affects the interests of the individual concerned, thus the individual impacts of the use of AI on a person, even if the AI does not process personal data (in which case there is already a right to such information under the GDPR).35 In this context, the claim of tech giants that explanations of how AI functions and how it has arrived at decisions are not possible must be rejected. There is already vivid research on the interpretability of AI.36 And given that the obligation to give reasons is part of the rule of law, at least where public authorities act in the exercise of public power, the simple reality is that AI programs which do not give reasons, and whose decisions cannot be explained by humans either, cannot be used in the exercise of public power, as by using such programs the public authority could not fulfil its obligation to state reasons.37 A new intensity of three-level impact assessment of technology is a necessary component of a new intensity of the dialogue between technology and democracy, which is vital at a time when we are entering a world in which technologies like AI become all-pervasive and in large part actually incorporate and execute the rules according to which we live.

30 http://www.eptanetwork.org/; to be distinguished from Technology Assessment is the Legislative Impact Assessment, which assesses the impact of draft legislation, originally in terms of costs for enterprises, today also in terms of societal impacts; see for details https://ec.europa.eu/info/better-regulation-toolbox_en.
31 See Art. 35(3)(a) GDPR.
32 A practical example of elements for a possible intra-company technology impact assessment for AI is the guide developed by the Omidyar Network and the Institute for the Future, available at https://ethicalos.org/wp-content/uploads/2018/08/Ethical-OS-Toolkit-2.pdf.
33 See Articles 35–36 GDPR and Recitals 74–77, 89–92 and 94–95; see also the Guidelines on Data Protection Impact Assessment by the Article 29 Data Protection Working Party of 4 October 2017, EN/17 WP 248 rev. 01, available at https://edpb.europa.eu/node/70.
AI will in many areas of life decide, or prepare decisions or choices, which previously were made by humans according to certain rules. If AI thus now incorporates the rules according to which we live and executes them, we will need to get used to the fact that AI must always be treated like the law itself. And for a law, it is normal to be checked against higher law, and against the basic tenets of constitutional democracy. The test every law must go through is whether it is in line with fundamental rights; whether it is not in contradiction with the principle of democracy, thus in particular whether it has been adopted in a legitimate procedure; and whether it complies with the principle of the rule of law, thus is not in contradiction with other pre-existing law, is sufficiently clear, and is proportional to the purpose pursued. It is this test which AI that incorporates rules for society, and applies them through decisions or the preparation of decisions, must also go through. And AI will only pass this test if, by design, the principles of democracy, rule of law and compliance with fundamental rights are incorporated in AI, thus from the outset of program development. In the same way that an architect, from the outset of designing a house, has to think of compliance with the building code, programmers of AI will have to think from the outset of program development about how their future program could affect democracy, fundamental rights and the rule of law, and how to ensure that the program does not undermine or disregard, but respects and in the ideal case strengthens, these basic tenets of constitutional democracy. If the debate on AI brings about this new responsibility of engineers and tech corporations for democracy, fundamental rights and the rule of law, then AI will already have earned important credits of trust that it needs to find broad acceptance in society.

Data accessibility. This article has no additional data.

Competing interests.
I declare I have no competing interests.

Funding. I received no funding for this study.

[34] See in particular the sections ‘Ethical Principles’ (para 10 ff) and ‘A European Agency’ (para 15 ff) and the Annex to the Resolution with recommendations as to the content of the legislative proposal requested of the European Commission: European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), http://www.europarl.europa.eu/sides/getDoc.do?type=TA&reference=P8-TA-2017-0051&language=EN&ring=A8-2017-0005.
[35] Articles 13–15 in connection with Art. 22 GDPR.
[36] See for an overview of the literature ‘Interpretability in AI and its relation to fairness, transparency, reliability and trust’, Marius Miron, Joint Research Centre of the European Commission, 9 April 2018, at https://ec.europa.eu/jrc/communities/community/humaint/article/interpretability-ai-and-its-relation-fairness-transparency-reliability-and; see also the programs and literature at ‘Awesome machine learning interpretability’, by Patrick Hall, https://github.com/jphall663/awesome-machine-learning-interpretability/blob/master/README.md.
[37] For the US debate see Pasquale [11].

Author profile

Paul Nemitz is Principal Adviser in the European Commission, visiting Professor of Law at the College of Europe in Bruges, and a Member of the Data Ethics Commission of the German Government and of the World Council on Extended Intelligence. As a Director, in a previous function, he led the work to put the General Data Protection Regulation (GDPR) in place, under the authority of the European Commission. He is expressing his own opinion and not necessarily that of the European Commission. Thanks go to Alexander Schiff, Berlin, who contributed important elements to this article.

References

1. Kumm M.
2013 The cosmopolitan turn in constitutionalism: an integrated conception of public law. 20 Indiana J Global Legal Studies 605.
2. Kumm M. 2016 Constituent power, cosmopolitan constitutionalism, and post-positivist law. 14 Int J Constitutional L 697.
3. Manjoo F. 2016 Tech’s ‘frightful 5’ will dominate digital life for foreseeable future. N.Y. Times, 20 January 2016. https://www.nytimes.com/2016/01/21/technology/techs-frightful-5-willdominate-digital-life-for-foreseeable-future.html.
4. Manjoo F. 2017 Tech’s frightful five: they’ve got us. N.Y. Times, 10 May 2017. https://www.nytimes.com/2017/05/10/technology/techs-frightful-five-theyve-got-us.html.
5. Rosenberg S. 2017 How Google Book Search got lost. Backchannel, 11 April 2017. https://backchannel.com/how-google-book-search-got-lost-c2d2cf77121d.
6. Barbrook R, Cameron A. 1995 The Californian ideology. 1 Mute. http://www.metamute.org/editorial/articles/californian-ideology, accessed 11 June 2017.
7. Barlow JP. 1996 A declaration of the independence of cyberspace. Electr. Front. Found., 8 February 1996. https://www.eff.org/de/cyberspace-independence.
8. Barlow JP. 2006 Is cyberspace still anti-sovereign? April 2006. https://alumni.berkeley.edu/california-magazine/march-april-2006-can-we-know-everything/cyberspace-still-antisovereign.
9. Johnson DR, Post DG. 1995 Law and borders – the rise of law in cyberspace. 48 Stanford L R 1367, 1393.
10. Kühling J. 2014 Rückkehr des Rechts: Verpflichtung von “Google & Co.” zu Datenschutz. Europäische Zeitschrift für Wirtschaftsrecht 527.
11. Pasquale FA. 2017 Toward a fourth law of robotics: preserving attribution, responsibility, and explainability in an algorithmic society (14 July 2017). Ohio State Law Journal, Vol. 78; U of Maryland Legal Studies Research Paper No. 2017-21. Available at SSRN: https://ssrn.com/abstract=3002546.