Security, Privacy, and Consumer Protection

AI Accountability

Resolution

Companies deploying AI systems should be held legally liable for any harms caused by their technologies.

The debate will be "Oxford style" and follow this format.

Resources

Here are some resources and questions to help guide the debate and add nuance to the arguments:

Questions

  1. How should we define and measure “harm” caused by AI systems?
  2. What is the appropriate standard of liability: strict liability, negligence, or something else?
  3. How do we attribute responsibility when AI systems involve multiple parties (developers, deployers, users)?
  4. Should liability depend on whether the AI system was deployed in a high-risk domain (e.g., healthcare, criminal justice)?
  5. How does AI liability affect innovation and the willingness of companies to deploy beneficial AI systems?
  6. What role should transparency and explainability play in determining liability?

Readings