Main Content

Regulating AI: Legal and Policy Perspectives
First published Nov 2023

A template syllabus

This template syllabus aims to give legal and policy scholars a foundation on which to design their own courses on the identification and mitigation of AI risks.

Several individuals contributed to the creation of this template syllabus. I am particularly grateful to Paige Arnold and Ana Barreto, my research assistants. Thanks also to the Legal Priorities Project for encouraging and informing this work.

This headnote includes:

  • The Purpose of the Course
  • The Goals of the Course
  • Student Learning Objectives
  • Guidance for Instructors
  • Potential Student Evaluation Framework
  • Overview of Assignments 

PURPOSE OF THE COURSE

The law and, by extension, lawyers can alter the trajectory of AI development. Regulators around the world have discussed different legal frameworks to limit the magnitude and likelihood of the risks presented by AI. Lawyers will play a major part in drafting and interpreting these efforts. And today's students will oversee the amendment and alteration of any enacted frameworks in the near future.

This course begins by exposing students to the potentially catastrophic consequences of AI developing absent sufficient norms, regulations, and laws. Next, students will confront specific manifestations of the risks created by AI. Finally, students will consider the tools at their disposal to steer AI development in a way that promotes the flourishing of future generations.

AI poses substantial, potentially existential risks. When, how, and why those risks will emerge is unknown and will remain unknown for the foreseeable future. The already ubiquitous use of AI means that some of the traditional methods of detecting and mitigating the risks created by technological innovation will not suffice.

The development of AI has exposed a disconnect that has widened with each passing wave of technological innovation: the gap between widespread adoption of a technology and a corresponding set of norms, regulations, and laws (“safety measures”) to mitigate the risks posed by that technology. Consider that it took nearly 70 years after the invention of the telephone for more than 30 percent of the American public to own a phone; it took just 25 years for TV to reach the same threshold. More recently, the window between invention and widespread adoption has continued to shrink: the PC took about 20 years to reach 30 percent ownership, the Internet around 15 years, and cell phones fewer than ten.

Each wave of technological innovation has been adopted more rapidly, shortening the window for the development of safety measures. More than 100 million people used ChatGPT just two months after its launch. The speed of AI adoption, combined with the risks created by the continued spread and development of AI technology, requires the education of AI emergency planners: lawyers, technologists, computer engineers, and other professionals who understand the risks of AI and the need for a novel approach to developing and deploying safety measures.

This is not a technical course, though students will learn some fundamentals of AI. This is not a doctrinal course, though students will dive into myriad fields of law. And this is not a “light” course: though some of the readings come from popular media, other readings and concepts will lie beyond students’ current technical and legal knowledge. That is intentional and necessary.

Everyone, including the professors teaching this course, is still learning about AI and how society will respond to its development. This course depends on the willingness of all its participants to come prepared, learn from and listen to one another, and bravely work through complex topics and materials.

Students should approach this course as a catalyst: a launch point for future studies into what role they can play in reducing the odds of catastrophic risks, especially those posed by AI. Curiosity, humility, and persistence will define the students who thrive in this course. After all, those are the same traits that define emergency planners, individuals thrown into a crisis and expected to react in a calm yet comprehensive manner.

GOALS OF THE COURSE

This open-source syllabus is intended to spark a wave of legal education on the specific risks posed by AI. Dozens of individuals developed this syllabus with a few goals in mind:

  1. The establishment of a cohort of legal and policy scholars dedicated to introducing more students to AI, AI Safety, and the legal pathways to mitigating risks posed by emerging technology. These areas of inquiry are in constant flux; no scholar alone can keep track of all of the new opportunities, regulations, and resources pertaining to AI and, more generally, emerging technology. We sincerely hope that if you’re teaching this course, you will formally join the AI, Emerging Tech, and Risk Reduction Law listserv (“the AI Listserv”), a cadre of professors dedicated to sharing readings, syllabi, and exercises related to this field. Note that this listserv has yet to be formally launched. Please email me if you'd like to help get it up and running (kevintfrazier@gmail.com).

  2. A course that can be tailored to myriad educational contexts. Though contributors predominantly had law students in mind when developing this course, there is no reason it cannot be rearranged to cater to undergraduate students, graduate students in other fields, and “life” students seeking out this knowledge for reasons other than earning credit.

  3. An expansion of the number of early-career professionals interested in AI safety and eager to work in the space. Commercial developers of AI have attracted some of the smartest technical and legal minds to build transformative technology. A similar community of minds must form to build out the corresponding norms, regulations, and laws. This course aspires to contribute to that field-building effort.

  4. A spike in the amount of legal (and other) scholarship on the law and AI safety, emerging technology, and risk reduction. The exercises in this course can and should serve as launchpads for deeper scholarly investigation. If you or a student are eager to conduct such an investigation, or have already done so and need a publication outlet, please feel free to reach out.

STUDENT LEARNING OBJECTIVES

No previous knowledge of AI, catastrophic risks, or emerging technology is required. By the conclusion of the course, students will be able to:

  • employ various risk assessment frameworks to categorize the risks posed by AI;
  • conduct interdisciplinary research related to AI, AI safety, and emerging technology; 
  • evaluate the merits of a proposal to mitigate AI risks based on current norms, regulations, and laws; and
  • persuasively write about and discuss AI Safety.

FOR INSTRUCTORS

ADAPTING THE COURSE TO YOUR NEEDS

This course is NOT a one-size-fits-all approach to AI Safety and the Law. We selected the H2O platform to host this course so that instructors can:

  • Alter the course content

    • We acknowledge that some instructors may not want to teach every section of the course and that some may want to add new sections. That’s why we placed the course on H2O. You can learn more about H2O here.

  • Alter the course length

    • The current version of the course is intended for a 16-week semester. You can alter the number of sections and classes per section to scale the course back based on your time constraints and students’ familiarity with the topics. The first class within each section is intended to provide students with background information on the section topic, so if your students have a basic understanding of a certain topic and you need to eliminate some content to fit your schedule, consider eliminating the first class in that section.

  • Share new versions of the course

    • The H2O platform is designed to facilitate a collaborative approach to curriculum design and development. If you intend to use this syllabus or a version of it, please publish your course on H2O. Additionally, please provide information on your student audience so that instructors of similar students can find your course.

TEACHING GUIDE

This course intentionally resides on the H2O platform. Based on the background knowledge of your students, the out-of-class obligations you plan to impose, and constraints on the number and length of your classes, you can and should alter the sections and their contents. In short, there’s likely too much content to cover in a single semester, but we wanted to empower professors to shape the course according to their interests and those of their students.

If you opt to add material, please further the collaborative spirit of this course by making your version of the course available on this platform or by emailing me a copy of your syllabus to distribute to the AI Listserv. 

We place a heavy emphasis on assigning the exercises (or variants of them). Students will likely retain more course content by completing these exercises, and they may act on that knowledge in their practice by helping society as a whole develop proper responses to these threats.

Though more scholarship in this field is and will be necessary, there is already plenty of pertinent analysis that we felt necessary to assign (we even created a large list of recommended readings to try to reduce the amount of required reading). If you disagree with our assessment of the need for a particular reading, feel free to truncate or eliminate that assignment.

Finally, we again encourage you to join the AI Listserv so that you can converse with professors teaching this and related courses. This cohort has tremendous potential to help professors provide the most accurate and timely answers to student questions and to ensure each iteration of the class includes the latest and most accurate information.

STUDENT EVALUATION

Assignment | Weight | Due Dates
AI Safety Project | 60% | Lit Review: TBD; Policy Memo: TBD; Final Draft: TBD
Threat Analysis Exercises | 25% (based on the 2 highest-scoring exercises; students must submit at least 2) | Exercise 1: TBD; Exercise 2: TBD; Exercise 3: TBD; Exercise 4: TBD
AI Safety News Briefing | 5% | Students will sign up for a particular class to brief their colleagues on a development in AI Safety
Participation | 10% |

AI SAFETY PROJECT (60%)

Students must complete an AI Safety Project that (1) identifies a short- or long-term risk posed by AI, (2) specifies one to three original proposals to mitigate that risk, and (3) maps out the means to and feasibility of realizing those proposals. [Note for professors: with student consent, please consider uploading each component of this project to this Google Doc folder.]

Literature Review (15%)

AI Safety research requires casting a broader research net than a Lexis search. This assignment requires students to practice, and take seriously, the search for information in a complex, developing field of inquiry.

There is no required number of sources; however, students who earn full credit for this assignment will likely review and provide one- to two-paragraph summaries of at least ten sources.

Literature reviews will be assessed on the following:

  • Demonstration of interdisciplinary research (5%)
  • Quality of synopsis of the material (5%)
  • Relevance to the student’s identified risk (5%)

Legal / Policy Memo (30%)

AI will not develop in a safe fashion if policymakers do not understand its short- and long-term risks and the means to mitigate those risks. Students must author a policy memo directed to a specific official who has the potential to advocate for or implement the student’s risk-reduction proposal. In most cases, students will identify a legislator or agency official as their target, though they may instead write to a judge, CEO, or other individual with the means and authority to impact AI safety. In all cases, students must first discuss the topic and target of their memo with their professor.

The memo will be assessed on the following:

  • Writing quality (brevity, clarity, suitability for intended audience) (5%)
  • Thoroughness and accuracy of analysis of the risk (10%)
  • Responsiveness of proposal to risk and quality of evaluation of its efficacy (15%)

Policy Pitch (15%)

This portion of the project gives students an opportunity to practice the key skill of eloquently and persuasively pitching an idea. Students will have ten minutes to pitch their proposal to their target audience. Their classmates will evaluate them on the following:

  • Clarity of presentation (1 to 5 points)
  • Persuasiveness of presentation (1 to 5 points)
  • Demonstration of research (1 to 5 points)
  • Inclusion of legal basis and policy justification for the idea (1 to 5 points)

Threat Analysis Exercises (25%, based on the top two scores)

For at least two of the identified threats, students will:

  • identify key risk factors that affect the probability of the threat;
  • identify key actors who could shape those factors;
  • identify key actions those actors could take to shape those factors, as well as the legal basis for and barriers related to each action; and
  • identify key pathways to ensuring those actors take those actions.

Each exercise should span two to four pages (single spaced) and be in memo format.

Exercises will be scored on the following:

  • Quality of writing
  • Demonstration of research
  • Rigor of analysis of possible key actions

Students may submit as many as three exercises; if a student submits three, the lowest-scoring exercise will be dropped from the cumulative score. [Note that professors should provide extensive feedback on the first exercise and may continue to do so at their discretion.]

AI Safety News Briefing (5%)

Another key aspect of contributing to AI safety is staying up to date on AI developments. Students will pick a week to deliver a five-minute update on news pertaining to AI Safety. 

The update will be assessed on the following:

  • Clarity of presentation
  • Thoroughness of analysis
  • Identification of applicable norms, regulations, or laws (if any)
  • Identification of relevant stakeholders, including potential regulators

Participation (10%)

Certain pre-class exercises are “participation points” assignments. These are optional assignments evaluated only for adherence to directions and completion. Completion of these assignments and your participation during class will factor into your participation score. 

Note that participation point assignments are not busy work. Completion of these assignments will prepare you for success on graded tasks.

*All assignments are due at least 24 hours before class.