llm-access-control.github.io - LLM Access Control

Description: LLM Access Control Instructions

security (10054) privacy (2031) llm (259)

Example domain paragraphs

Large multimodal language models have proven transformative in numerous applications. However, these models have been shown to memorize and leak pre-training data, raising serious user privacy and information security concerns. While data leaks should be prevented, it is also crucial to examine the trade-off between privacy protection and model utility in proposed approaches. In this paper, we introduce PrivQA, a multimodal benchmark to assess this privacy/utility trade-off when a model is instructed

Links to llm-access-control.github.io (2)