AI Governance, Democracy, & Technology in the Legal Field with Dr. Anjanette Raymond


Interviewed and written by: Shrinithi Venkatesan and Rachel Sleiman

Dr. Anjanette Raymond is an Associate Professor in the Department of Business Law & Ethics at Indiana University and Director of the Program on Data Management and Information Governance at the Ostrom Workshop.[1] Dr. Raymond’s primary research areas include online dispute resolution, data governance, privacy, and artificial intelligence. The Comparative Jurist interviewed Dr. Raymond about technology’s consequences for U.S. elections, privacy concerns, and the implications of AI governance for the legal field. The following is the transcript of the interview, edited for length and clarity.

Tell us about your journey to and through the field of law and what sparked your interest in the technology, privacy, and national/cyber security space, especially through a comparative lens.

I have a social science and mental health background, and I initially went to law school to learn how to bring those worlds together. Technology has always been an interest; for years, I took apart computers, put them back together, and learned about the field. While in law school, I fell in love with Contracts because it engaged both my math and social science backgrounds. Law school came at the perfect time because Amazon One-Click had recently launched,[2] and I thought, “That is cognitive behavioral science right there.” So, I started to think about e-commerce and how humans interact with the digital world at a time when technology was rapidly integrating into our everyday lives. Because I never wanted to practice law, I was more attracted to bringing my areas of interest together and exploring the legal consequences of the technological world we live in.

What surprises you most about this field? 

What surprises me is that individuals use technology without even a basic understanding of how it works. We have self-help books on how to eat healthier, drink less, be more attentive at work, and so on, which provide behavioral plans to intervene in and improve our lives. Shockingly, people are not interested in how technology works, even though it is a substantial part of our everyday lives. Users do not fully grasp how heavily technology influences their lives, to the point that they cannot control or manage it. It does worry me that individuals are not giving much thought to the implications of technology use in their lives.

Where do you think the lack of understanding of technology stems from? Are technology companies purposefully hiding how they gather consumer information, and how can we work to become more transparent?

The business model of technology today has merely made easier and more widespread what companies have done for a long time: gather information about their consumers. For instance, before we carried digital surveillance devices in our pockets, we received coupons in the mail targeted to us, and we had loyalty cards that captured our data; companies have always been interested in their consumers. Although technology has amplified privacy problems, it should not be blamed for something that businesses have always done.

In terms of transparency, the European Union’s (EU) General Data Protection Regulation (GDPR) proved a promising start. Taking effect in 2018, the GDPR provides a cohesive framework of rights and regulations for data protection and privacy.[3] It was the first legal framework of its kind to protect consumers while promoting technological development.

The U.S. still needs federal data privacy legislation, but because the U.S. and Europe have very different conceptual frameworks, the GDPR model may not work for the U.S. The values of the GDPR reflect the European cultures that molded it, including the view that internet access and the protection of personal data are fundamental rights.[4] Europe has a different type of human rights law, one that stems from a very different place than the U.S., which derives its rights from the Constitution. The U.S. also attaches liability to companies differently. So, it will be tough to integrate the GDPR into the U.S. framework, or to create a holistic framework of our own, until we can agree upon a list of five or six factors that constitute “rights” here.

Do you believe our desire to implement technology in every aspect of our lives causes us to lose our human touch?[5]

We want human beings in our decision-making processes, even if technology could do the job better. For example, there is a far more negative reaction when we learn that an insurance claim was denied by a “robot” rather than by a human. Our society encourages technological advancement, but we still hope that humans are making these life-changing decisions, even where humans demonstrably do the work far less efficiently than technology can. When humans misuse technology, they should expect negative consequences. At the same time, as a lawyer, I believe that the consequences must be proportional to the scale of the improper conduct.

The federal government spent roughly $6 billion on AI in 2021.[6] What are the implications of such significant government spending on new technologies that are not yet well understood?

Government spending on new technologies will no doubt continue to increase. The “tech-hype curve” is real: we jump on the bandwagon of the new technology of the moment, and after doing so, we often realize that we jumped too soon because the ideas had not been developed enough.[7]

So, it’s deeply concerning when the government spends so much to regulate AI yet fails to do so, in part because neither the technologies nor the regulations behind them are yet defined or understood. The government should invest more in research, stakeholder groups, and other initiatives that engage the industry in greater dialogue and build a better understanding of these new technologies.

Private companies abroad are researching and developing ways to integrate AI into human rights, industry, and business. For example, your article, Should We Trust a Blackbox to Safeguard Human Rights? A Comparative Legal Analysis of AI Governance, mentions the implementation of AI bots to improve efficiency in agriculture in India and New Zealand.[8] Can such a framework be applied to the United States?

I would hesitate to allow the U.S. government to implement anything like what other countries have done abroad to improve efficiency in industry. Right now, we are one of the few democracies whose people really distrust their government, whether in health, democracy, or access to services. These services touch rights so fundamental that government involvement in them breeds distrust among specific groups of Americans. The government may be willing to spend millions on artificial intelligence, but efforts to implement it in industry may be futile when so many distrust anything the government does. In our current political climate, garnering widespread support for a government-implemented AI regime would be challenging. Until we overcome our distrust of the government, the technologies it implements will not succeed.

Based on your article Defending Democracy: Taking Stock of the Global Fight Against Digital Repression, Disinformation, and Election Insecurity,[9] how has technology impacted the way people consume information, and what are the potential consequences of this shift for society and democracy?

The digital world has caused people to outsource their trust in the news to online sources that do not have the same level of credibility and vetting as traditional news sources. This is a significant problem because people assume that everything labeled “news” online has the same credibility as conventional news sources, but that is false. Because of the nature of the internet, misinformation can spread quickly and be repeated across multiple sources and platforms, lending it a false air of validity. That is deeply troubling, and the spread of misinformation and disinformation has significantly affected our democracy. Although it has tried, the digital world has been unable to replicate the traditional vetting process that newspapers and broadcast news stations undertook before the internet age. So, to combat this issue, people must learn to assess the information they find online and to understand the technology and business models perpetuating it. This is crucial for maintaining the integrity of democracy by ensuring that voters have access to accurate information.

What trends do you foresee in democracy and AI governance in the next few years, and where do you hope they lead?

Regarding trends, we will continue to see overreactions to a few things, including voter fraud and AI. With voter fraud, the numbers do not support the U.S. public’s perception of fraud in our elections, yet we see states pursuing legislation to address this alleged phenomenon. If we are going to talk about democracy in general, I hope we get a handle on it: that we assess the actual problems and invest our money and resources in resolving those, as opposed to chasing ghosts like the catchy phrase “voter fraud.” I also hope we invest in educating individuals about the election process, because there is a ton of misinformation about how the process works. I highly recommend that those reading, or anyone interested, volunteer at a polling place to learn how our voting process functions.

In addition, we’ll likely see overreactions to AI in general. Legislation is tricky here because the technology is being rolled out in ways that are incredibly detrimental to individuals, and we will see regulation because it must happen. We can’t have, for example, people losing hundreds of thousands of dollars in a crypto-wallet without a response of increased regulation in that sector. So, I hope we take a tempered view grounded in an understanding of the technology; historically, our legislation has not captured a proper understanding of these technologies. We must do better, and we need to do it soon. As an academic, I know that many other academics and expert groups are discussing these issues, and they must participate in the conversation so that we understand these technologies and draft appropriate legislation.

I also understand that the Federal Trade Commission (FTC) requested public comment, now closed, on how we should think about data privacy and security practices under Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce,” and that it received over 11,000 responses.[10] This intrigues me because 11,000 responses is an incredibly high number for legislative activity. I hope that Congress is looking at them and picking people out to get into a room for a more robust conversation about these issues, so that the process is done in a way that does not stifle business, does not stifle technological development, does not get in the way of the First Amendment, but places some much-needed guardrails. Otherwise, we will be going down a path that would be very difficult to untangle.

Where is your research taking you next? 

The Ostrom Workshop and Datasphere are taking on research about data, mainly how it is gathered, stored, and used. Data is the starting point of all these conversations, so we are researching how data should be appropriately shared. We use data to make decisions, and we want to make good decisions to be more efficient. That said, data has limitations, so our research will also address how data may be the starting point but cannot do everything.

What advice do you have for young lawyers seeking to break into the niche field of privacy and technology law?

Privacy is an ever-evolving field that requires lawyers to understand both its legal and technological aspects. So, along with encouraging students to take privacy and cybersecurity law courses at their universities and law schools and to attend workshops by expert groups, I recommend that students and young lawyers spend time in the technology industry to understand the processes, goals, and operations behind it. Technology companies are always looking for ways to positively impact society, and young professionals should be in the room where this occurs.

About Dr. Anjanette Raymond

Dr. Anjanette (Angie) Raymond is the Director of the Program on Data Management and Information Governance at the Ostrom Workshop, an Associate Professor in the Department of Business Law and Ethics at the Kelley School of Business at Indiana University, and an Adjunct Associate Professor of Law at the Indiana University Maurer School of Law. She graduated from Loyola University New Orleans School of Law and received her doctorate from the Centre for Commercial Law Studies at Queen Mary University of London, where she wrote her thesis, Managing Bias, Partiality, and Dependence in Online Justice Environments.[11] Since then, Dr. Raymond has written on a wide range of subjects, from online and commercial dispute resolution[12] to artificial intelligence governance,[13] privacy,[14] and international business ethics.[15] Dr. Raymond currently serves as a U.S. National Consultant Delegate to the United Nations Commission on International Trade Law (UNCITRAL), where she reports on electronic commerce-related matters. In addition, Dr. Raymond is a recognized expert in Online Dispute Resolution (ODR) at the Asia-Pacific Economic Cooperation (APEC), where she is leading a pilot project on cross-border ODR. Dr. Raymond is also a Weimer Faculty Fellow and has been recognized for her work and community involvement through numerous honors and awards, including the Kelley Service Award (2021) and the Best Paper Award for Outstanding Environmental Paper (2021).


[1] The Ostrom Workshop is a research center at Indiana University in Bloomington, Indiana, dedicated to interdisciplinary research on governance, including in the areas of internet and cybersecurity, data management and information governance, and the environment and natural resources. For more information, see Ostrom Workshop, Ind. U., https://ostromworkshop.indiana.edu/index.html.

[2] Press Release, Amazon, Amazon.com Catapults Electronic Commerce to Next Level with Powerful New Features (Sept. 23, 1997), https://press.aboutamazon.com/1997/9/amazon-com-catapults-electronic-commerce-to-next-level-with-powerful-new-features.

[3] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119), https://gdpr.eu/tag/gdpr/.

[4] Eur. Parl. Ass., The Right to Internet Access, Res. 1987 (2014), https://pace.coe.int/en/files/20870/html; Charter of Fundamental Rights of the European Union, 2000 O.J. (C 364), https://www.europarl.europa.eu/charter/pdf/text_en.pdf. Many countries within the European Union have embedded these fundamental rights within their own legal frameworks. One such example regarding a right of internet access is Greece, whose Constitution includes a provision prescribing “the right to participate in the Information Society”; in other words, the right to access electronically transmitted information. Greece Const. art. 5A(2), https://www.constituteproject.org/constitution/Greece_2008?lang=en.

[5] The interviewers cited recent news of public uproar surrounding the use of ChatGPT, an artificial intelligence chatbot. In one instance, professors at Vanderbilt University used ChatGPT to craft a statement expressing the university’s support of its students in the wake of a recent mass shooting at Michigan State University. Will McDuffie, Vanderbilt University Apologizes After Using ChatGPT to Console Students, ABC News (Feb. 21, 2023, 5:35 PM), https://abcnews.go.com/US/vanderbilt-university-apologizes-after-chatgpt-console-students/story?id=97365993. In another example, a nonprofit provider of free mental health services landed in hot water after its CEO publicized the company’s use of GPT-3 chatbots to help develop responses to its users. Bethany Biron, Online Mental Health Company Uses ChatGPT to Help Respond to Users in Experiment—Raising Ethical Concerns Around Healthcare and AI Technology, Business Insider (Jan. 7, 2023, 2:34 PM), https://www.businessinsider.com/company-using-chatgpt-mental-health-support-ethical-issues-2023-1.

[6] Jon Harper, Federal AI Spending to Top $6 Billion, Nat’l Def. Mag. (Feb. 10, 2021), https://www.nationaldefensemagazine.org/articles/2021/2/10/federal-ai-spending-to-top-$6-billion.

[7] Dr. Raymond provided the federal government’s spending on drone technology as a poignant example. In investing in drone technology, the government had moved forward on implementing drone programs without proper planning, process, oversight, or privacy safeguards. See, e.g., Jennifer Lynch, The Federal Government Moves Forward with Drone Programs Despite Poor Planning and Lack of Oversight, Elec. Frontier Foundation (June 13, 2012), https://www.eff.org/deeplinks/2012/06/federal-government-moves-forward-drone-programs-despite-poor-planning-and-lack.

[8] Scott Shackelford, Isak Nti Asare, Rachel Dockery, Anjanette H. Raymond, & Alexandra Sergueeva, Should We Trust a Blackbox to Safeguard Human Rights? A Comparative Legal Analysis of AI Governance, 26 UCLA J. Int’l L. & Foreign Affs. 35, 76 (2022), https://escholarship.org/uc/item/1k39n4t9.  

[9] Scott Shackelford, Angie Raymond, Abbey Stemler, & Cyanne Loyle, Defending Democracy: Taking Stock of the Global Fight Against Digital Repression, Disinformation, and Election Insecurity, 77 Wash. & Lee L. Rev. 1747 (2020), https://scholarlycommons.law.wlu.edu/wlulr/vol77/iss4/7.

[10] 15 U.S.C. § 45(a). See also Fed. Trade Comm’n, Proposed Trade Regulation Rule on Commercial Surveillance and Data Security (Aug. 22, 2022), https://www.regulations.gov/document/FTC-2022-0053-0001.

[11] Anjanette H. Raymond, Managing Bias, Partiality, and Dependence in Online Justice Environments (2021) (Ph.D. thesis, University of London, Queen Mary, Centre for Commercial Law Studies), https://qmro.qmul.ac.uk/xmlui/handle/123456789/72859.

[12] E.g., Patricia Živković, Denis McCurdy, Mimi Zou, & Anjanette H. Raymond, Mind the Gap: Tech-Based Dispute Resolutions in Global Supply Blockchains, 66 Bus. Horizons 13 (2023), https://doi.org/10.1016/j.bushor.2021.10.008.

[13] E.g., Shackelford et al., supra note 8.

[14] E.g., Scott Shackelford, Anjanette Raymond, Martin A. McCrory, & Andrea Bonime-Blanc, Cyber Silent Spring: Leveraging ESG+T Frameworks and Trustmarks to Better Inform Investors and Consumers About the Sustainability, Cybersecurity, and Privacy of Internet-Connected Devices, xx U. Pa. Bus. L. J. xx (2022, in press), available at SSRN: https://dx.doi.org/10.2139/ssrn.4003576.

[15] E.g., Scott J. Shackelford, Anjanette H. Raymond, & Eric L. Richards, Legal and Ethical Aspects of International Business (2d ed. 2021).

*Source of featured image: Dirk Helbing et al., Will Democracy Survive Big Data and Artificial Intelligence?, Sci. Am. (Feb. 25, 2017), https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/.
