arxiv:2408.02373

Operationalizing Contextual Integrity in Privacy-Conscious Assistants

Published on Aug 5 · Submitted by iliashum on Aug 6
Abstract

Advanced AI assistants combine frontier LLMs and tool access to autonomously perform complex tasks on behalf of users. While the helpfulness of such assistants can increase dramatically with access to user information including emails and documents, this raises privacy concerns about assistants sharing inappropriate information with third parties without user supervision. To steer information-sharing assistants to behave in accordance with privacy expectations, we propose to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI compliant. Our evaluation is based on a novel form filling benchmark composed of synthetic data and human annotations, and it reveals that prompting frontier LLMs to perform CI-based reasoning yields strong results.
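For readers unfamiliar with CI, an information flow is typically characterized by its actors (sender, recipient, data subject), the information type, and the transmission principle, and a flow violates privacy when it breaches the norms of its context. Below is a minimal sketch of how such a flow might be represented programmatically; the field names, the norm table, and the `is_appropriate` check are illustrative stand-ins, not the paper's implementation (in the paper, the appropriateness judgment is made by the LLM and evaluated against human annotations).

```python
from dataclasses import dataclass

@dataclass
class InformationFlow:
    """One candidate information flow, following the CI framing:
    who sends what, about whom, to whom, and under what principle."""
    sender: str                  # e.g. "user's AI assistant"
    recipient: str               # e.g. "airline booking agent"
    subject: str                 # whose data it is, e.g. "user"
    information_type: str        # e.g. "passport number", "SSN"
    transmission_principle: str  # e.g. "required to complete the booking"
    context: str                 # e.g. "flight reservation"

# Hypothetical norm table: which information types are appropriate to share
# in which contexts. A lookup table is only a stand-in to make the structure
# concrete; the paper steers an LLM to make this judgment instead.
APPROPRIATE = {
    ("flight reservation", "passport number"): True,
    ("flight reservation", "SSN"): False,
    ("flight reservation", "financial history"): False,
}

def is_appropriate(flow: InformationFlow) -> bool:
    """Return True if the flow conforms to the contextual norms (default: withhold)."""
    return APPROPRIATE.get((flow.context, flow.information_type), False)
```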

Community

Paper submitter

Do you want your LLM to share your financial history or your SSN when interacting with an airplane booking LLM? Probably not. To steer information-sharing assistants to behave in accordance with privacy expectations, the paper proposes to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, the paper designs and evaluates a number of strategies to steer assistants' information-sharing actions to be CI compliant.
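One of the strategies the paper evaluates is prompting the assistant to perform CI-based reasoning before it shares anything. A hedged sketch of what such a gate around a form-filling action could look like is below; the prompt wording, the `call_llm` placeholder, and the example field names are assumptions for illustration, not the paper's actual prompts or benchmark.

```python
# Illustrative CI-reasoning gate around a form-filling action.
# `call_llm` is a placeholder for whatever chat-completion client is in use.

CI_PROMPT = """You are a privacy-conscious assistant filling a form on the user's behalf.
Context: {context}
Form field requested by the third party: {field}
User information available: {value}

Reason using contextual integrity: identify the sender, recipient, information
type, and transmission principle, and decide whether sharing this information
is an appropriate flow in this context. Answer with SHARE or WITHHOLD and a
one-sentence justification."""

def ci_gate(call_llm, context: str, field: str, value: str) -> bool:
    """Ask the model to perform CI-based reasoning before releasing a field."""
    decision = call_llm(CI_PROMPT.format(context=context, field=field, value=value))
    return decision.strip().upper().startswith("SHARE")

# Hypothetical usage:
# if ci_gate(call_llm, "flight reservation", "SSN", user_profile["ssn"]):
#     form.fill("SSN", user_profile["ssn"])
# else:
#     form.leave_blank("SSN")  # withhold and flag for user review
```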
