California’s privacy watchdog eyes AI rules with opt-out and access rights

California’s Privacy Protection Agency (CPPA) has released draft regulations for the use of automated decisionmaking technology (ADMT), a category that covers many applications of AI, with a focus on opt-out rights and access to personal data. The regulations are aimed at giving state residents control over how their data is used for automation and AI technology.

The draft regulations, which are open for consultation, are described by the CPPA as the most comprehensive and detailed set of rules in the AI space to date. Inspired by the European Union’s General Data Protection Regulation (GDPR), the CPPA’s approach includes opt-out rights, pre-use notice requirements and access rights, which would enable California residents to obtain meaningful information about how their data is being used for automation and AI technology.

The regulations also take aim at AI-based profiling, with potential implications for adtech giants such as Meta, which relies on tracking and profiling users for targeted advertising. The proposed regulations could require companies to offer California residents the ability to opt out of such commercial surveillance.

The CPPA’s approach is risk-based and echoes the EU’s AI Act, a dedicated risk-based framework for regulating applications of artificial intelligence. The impact of California’s AI rules is expected to remain local, focusing on providing protections and controls to state residents. However, companies might choose to extend these privacy protections to residents of other US states.

The proposed regulations align with the California Consumer Privacy Act (CCPA) and aim to give consumers control over their personal information while ensuring that automated decisionmaking technologies, including those built on artificial intelligence, are designed and used with privacy in mind.

Overall, the CPPA’s draft regulations aim to establish a regime that allows California residents to opt out of their data being used for automated decisionmaking, with limited exemptions for specific purposes such as security, fraud prevention, and consumer-requested goods or services. The regulations also require businesses to provide pre-use notice of ADMT and to demonstrate a reasonable need for using consumer data for specific purposes.
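To make the shape of that opt-out regime concrete, here is a minimal TypeScript sketch of how a business might gate ADMT processing on a consumer’s opt-out status, carving out the draft’s three exemption categories. Every identifier here (the `Purpose` type, `mayProcessWithADMT`, and so on) is invented for illustration; the draft prescribes outcomes, not an API.

```typescript
// Hypothetical model of the draft's opt-out regime: processing stops once
// a consumer opts out, unless the use falls under one of the narrow
// exemptions named in the draft (security, fraud prevention, or
// consumer-requested goods or services). All names are illustrative.

type Purpose =
  | "security"
  | "fraud-prevention"
  | "consumer-requested-service"
  | "behavioral-advertising"
  | "other";

const EXEMPT_PURPOSES: ReadonlySet<Purpose> = new Set<Purpose>([
  "security",
  "fraud-prevention",
  "consumer-requested-service",
]);

interface Consumer {
  id: string;
  hasOptedOutOfADMT: boolean;
}

// Returns true if the business may run ADMT over this consumer's data.
function mayProcessWithADMT(consumer: Consumer, purpose: Purpose): boolean {
  if (!consumer.hasOptedOutOfADMT) return true; // no opt-out on file
  return EXEMPT_PURPOSES.has(purpose); // opt-out applies unless exempt
}

// Example: an opted-out consumer can still be screened for fraud,
// but not profiled for ad targeting.
const alice: Consumer = { id: "c-123", hasOptedOutOfADMT: true };
console.log(mayProcessWithADMT(alice, "fraud-prevention")); // true
console.log(mayProcessWithADMT(alice, "behavioral-advertising")); // false
```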

The CPPA frames the proposals as supporting privacy-protective innovation in the use of emerging technologies, including those leveraging artificial intelligence. The Agency Board is expected to provide feedback on the proposed regulations at the upcoming board meeting.

The framework would, as noted above, require businesses to provide “pre-use notices” to affected consumers so they can decide whether to opt out of their data being used (or not), or indeed whether to exercise their access right to get more info about the intended use of automation/AI.

This too looks broadly similar to provisions in the EU’s GDPR which put transparency (and fairness) obligations on entities processing personal data — in addition to requiring a valid lawful basis for them to use personal data.

The European regulation does contain some exceptions, such as where info was not directly collected from individuals and fulfilling their right to be informed would be “unreasonably expensive” or “impossible”, which may have undermined EU lawmakers’ intent that data subjects should be kept informed. (Perhaps especially in the realm of AI, and generative AI, where large amounts of personal data have clearly been scraped off the Internet but web users have not been proactively informed about this heist of their info; see, for example, regulatory action against Clearview AI, or the open investigations of OpenAI’s ChatGPT.)

The proposed Californian framework also includes GDPR-esque access rights which will allow state residents to ask a business to provide them with:

  1. Details of its use of ADMT;
  2. The technology’s output with respect to them;
  3. How decisions were made, including details of any human involvement and whether the use of ADMT was evaluated for “validity, reliability and fairness”;
  4. Details of the logic of the ADMT, including “key parameters” affecting the output and how they applied to the individual;
  5. Information on the range of possible outputs; and
  6. Info on how the consumer can exercise their other CCPA rights and submit a complaint about the use of ADMT.
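As a rough illustration of how much material that access right would oblige a business to assemble, here is a hypothetical TypeScript interface for an access-request response. The field names are invented, but each maps onto an item the draft enumerates; the regulation specifies the information, not any particular format.

```typescript
// Hypothetical shape of a response to a consumer's ADMT access request,
// with one field per item enumerated in the draft. Names are invented.

interface ADMTAccessResponse {
  useDetails: string;                // details of the business's use of ADMT
  outputForConsumer: string;         // the technology's output with respect to them
  decision: {
    howItWasMade: string;
    humanInvolvement: string;        // details of any human role
    evaluatedForValidityReliabilityFairness: boolean;
  };
  logic: {
    description: string;             // the logic of the ADMT
    keyParameters: string[];         // "key parameters" affecting the output
    howTheyAppliedToConsumer: string;
  };
  rangeOfPossibleOutputs: string[];
  otherCCPARights: string;           // how to exercise other CCPA rights
  complaintProcedure: string;        // how to complain about the use of ADMT
}
```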

Again, the GDPR provides a broadly similar right, stipulating that data subjects must be provided with “meaningful information about the logic involved” in automated decisions that have a significant/legal effect on them. But it’s still falling to European courts to interpret where the line lies when it comes to how much, and how specific, information algorithmic platforms must hand over in response to these GDPR subject access requests (see, for example, litigation against Uber in the Netherlands, where a number of drivers have been trying to get details of systems involved in flagging accounts for potential fraud).

The CPPA looks to be trying to pre-empt attempts by ADMT companies to evade the transparency intent of providing consumers with access rights by setting out, in greater detail, what information they must provide in response to these requests. And while the draft framework does include some exemptions to access rights, just three are proposed: security, fraud prevention and safety. So, again, this looks like an attempt to limit excuses and (consequently) expand algorithmic accountability.

Not every use of ADMT will be in scope of the CPPA’s proposed rules. The draft regulation proposes to set a threshold covering the following uses (a simple scoping sketch follows the list):

  1. Making a decision that produces legal or similarly significant effects concerning a consumer (e.g., decisions to provide or deny employment opportunities).
  2. Profiling a consumer who is acting in their capacity as an employee, independent contractor, job applicant, or student.
  3. Profiling a consumer while they are in a publicly accessible place.
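A minimal sketch of that threshold logic, under the assumption that each condition can be represented as a boolean flag (the draft defines the conditions themselves, not any code-level representation):

```typescript
// Hypothetical scoping check for the three proposed thresholds: a use of
// ADMT falls in scope if any one condition holds. Names are invented.

interface ADMTUse {
  producesLegalOrSimilarlySignificantEffect: boolean; // e.g. hiring decisions
  profilesWorkerApplicantOrStudent: boolean;          // employment/education contexts
  profilesInPubliclyAccessiblePlace: boolean;         // e.g. in-store tracking
}

function isInScope(use: ADMTUse): boolean {
  return (
    use.producesLegalOrSimilarlySignificantEffect ||
    use.profilesWorkerApplicantOrStudent ||
    use.profilesInPubliclyAccessiblePlace
  );
}

// Example: behavioral-ad profiling alone would not (yet) trip the
// threshold; that category is still up for consultation (see below).
console.log(
  isInScope({
    producesLegalOrSimilarlySignificantEffect: false,
    profilesWorkerApplicantOrStudent: false,
    profilesInPubliclyAccessiblePlace: false,
  })
); // false
```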

The Agency also says the upcoming consultation will discuss whether the rules should also apply to: profiling a consumer for behavioral advertising; profiling a consumer the business has “actual knowledge is under the age of 16” (i.e. profiling children); and processing the personal information of consumers to train ADMT — indicating it’s not yet confirmed how much of the planned regime will apply to (and potentially limit the modus operandi of) adtech and data-scraping generative AI giants.

The more expansive list of proposed thresholds would clearly make the law bite down harder on adtech giants and Big AI. But, it being California, the CPPA can probably expect a lot of pushback from local giants like Meta and OpenAI, to name two.

The draft proposal marks the start of the CPPA’s rulemaking process, with the aforementioned consultation process — which will include a public component — set to kick off in the coming weeks. So it’s still a ways off a final text. A spokeswoman for the CPPA said it’s unable to comment on a possible timeline for the rulemaking but she noted this is something that will be discussed at the upcoming board meeting, on December 8.

If the Agency is able to move quickly it’s possible it could have a regulation finalized in the second half of next year. Although there would obviously need to be a grace period before compliance kicks in for in-scope companies — so 2025 looks like the very earliest for a law to be up and running. And who knows how far developments in AI will have moved on by then.

* The CPPA’s proposed definition for ADMT in the draft framework is “any system, software, or process — including one derived from machine-learning, statistics, other data-processing or artificial intelligence — that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking”. Its definition also affirms “ADMT includes profiling”, which is defined as “any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements”.
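Read literally, that definition is strikingly broad, which is easier to see restated as a predicate: a system qualifies if it processes personal information and computation plays any part in making, executing, or facilitating a decision. The restatement below is hypothetical, not drawn from the draft beyond the quoted definition.

```typescript
// Hypothetical restatement of the draft's ADMT definition as a predicate.
interface SystemDescription {
  processesPersonalInformation: boolean;
  computationRoleInDecision: "none" | "facilitates-human" | "makes-or-executes";
}

function isADMT(s: SystemDescription): boolean {
  return (
    s.processesPersonalInformation &&
    s.computationRoleInDecision !== "none"
  );
}

// Even a simple scoring spreadsheet that ranks job applicants would
// plausibly qualify, since it facilitates a human decision:
console.log(
  isADMT({
    processesPersonalInformation: true,
    computationRoleInDecision: "facilitates-human",
  })
); // true
```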
