decoding-comp-trust.github.io - Decoding Compressed Trust

Description: Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

Keywords: security, ai, safety, efficiency, benchmark, compression, llm, decoding, genai, trustworthiness

Example domain paragraphs

Compressing high-capability Large Language Models (LLMs) has emerged as a favored strategy for resource-efficient inference. While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected. This study conducts the first thorough evaluation of three (3) leading LLMs using five (5) SoTA compression techniques across eight (8) trustworthiness dimensions.

Our experiments highlight the intricate interplay between compression and trustworthiness, revealing some interesting patterns. We find that quantization is a more effective approach than pruning for achieving efficiency and trustworthiness simultaneously. For instance, a 4-bit quantized model retains the trustworthiness of its original counterpart, whereas model pruning significantly degrades trustworthiness even at 50% sparsity. Moreover, employing quantization within a moderate bit range could unexpectedly improve certain trustworthiness dimensions, such as ethics and fairness.
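To make the two compression routes concrete, here is a minimal sketch (not the paper's evaluation harness) contrasting 4-bit weight quantization via bitsandbytes with 50%-sparsity magnitude pruning via torch.nn.utils.prune. The checkpoint name is an illustrative stand-in; any causal LM from the Hugging Face Hub would do.

```python
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

MODEL = "meta-llama/Llama-2-13b-hf"  # placeholder checkpoint, not necessarily the paper's exact setup

# Route 1: 4-bit quantization -- weights stored in NF4, compute in fp16.
quantized = AutoModelForCausalLM.from_pretrained(
    MODEL,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

# Route 2: one-shot unstructured magnitude pruning at 50% sparsity --
# the regime where clear trustworthiness degradation is reported.
pruned = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)
for module in pruned.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeroed weights in permanently
```

Either compressed model can then be run through the same trustworthiness benchmarks as the full-precision baseline, which is how the quantization-versus-pruning comparison above is made.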

Understanding the trustworthiness of compressed models requires a comprehensive evaluation. In this paper, we are interested in three questions: (1) Which compression method is recommended in the joint view of multi-dimensional trustworthiness and standard performance? (2) What is the optimal compression rate for trading off trustworthiness and efficiency? (3) At extreme compression rates (e.g., 3-bit quantization), how do the compressed models perform according to our metrics?
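As a hedged sketch of the extreme-compression setting in question (3), 3-bit GPTQ quantization can be configured through transformers' GPTQConfig (this path requires the optimum and auto-gptq packages). The checkpoint and calibration dataset here are illustrative stand-ins, not the paper's exact configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

MODEL = "meta-llama/Llama-2-13b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)

gptq_config = GPTQConfig(
    bits=3,        # the extreme rate probed by question (3)
    dataset="c4",  # calibration data for the one-shot quantizer
    tokenizer=tokenizer,
)

# Quantizes the weights at load time; the resulting model can then be
# scored on the same trustworthiness dimensions as its full-precision
# counterpart to measure what extreme compression costs.
model = AutoModelForCausalLM.from_pretrained(
    MODEL, quantization_config=gptq_config, device_map="auto"
)
```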
