
Representation Finetuning (ReFT)

Listened to Zhengxuan Wu and Aryaman Arora from Stanford University, who co-authored the paper "ReFT: Representation Finetuning for Language Models".

Pretrained language models (LMs) are often fine-tuned for specific tasks, but this process is costly. Parameter-efficient fine-tuning (PEFT) reduces these costs by updating only a small number of weights, yet it still operates on weight modifications. In contrast, Representation Finetuning (ReFT) takes a different approach: it learns interventions that manipulate a small fraction of the model's hidden representations to steer its behavior at inference time. Inspired by interpretability research, ReFT methods serve as efficient drop-in replacements for weight-based PEFTs. Evaluations on LLaMA-family models demonstrate that Low-rank Linear Subspace ReFT (LoReFT) achieves state-of-the-art performance across a range of tasks with significantly fewer trainable parameters, highlighting ReFT's potential as a more efficient and powerful alternative.
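As a rough illustration of the idea, here is a minimal PyTorch sketch of a LoReFT-style intervention: the frozen model's hidden state h is edited only within a learned rank-r subspace, via phi(h) = h + R^T(Wh + b - Rh), with R, W, b as the only trainable parameters. The class name, hidden size, and rank below are illustrative assumptions, not the authors' official implementation.

```python
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    """Sketch of a Low-rank Linear Subspace ReFT (LoReFT) intervention.

    Edits a hidden representation h inside a rank-r subspace:
        phi(h) = h + R^T (W h + b - R h)
    where R (r x d, roughly row-orthonormal), W (r x d), and b (r) are the
    only trainable parameters; the base model's weights stay frozen.
    """

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # Initialize R with orthonormal rows (the paper parameterizes R as
        # orthogonal; keeping it exactly orthogonal during training is omitted
        # here for brevity).
        self.R = nn.Parameter(torch.empty(rank, hidden_dim))
        nn.init.orthogonal_(self.R)
        self.W = nn.Linear(hidden_dim, rank)  # computes W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_dim) hidden states at the intervened layer/positions
        proj_target = self.W(h)        # W h + b, shape (..., rank)
        proj_h = h @ self.R.T          # R h,     shape (..., rank)
        # Move h toward the target only along the rank-r subspace spanned by R.
        return h + (proj_target - proj_h) @ self.R


# Usage sketch with hypothetical sizes (e.g. a 4096-dim hidden state, rank 4):
intervention = LoReFTIntervention(hidden_dim=4096, rank=4)
h = torch.randn(2, 10, 4096)           # (batch, seq_len, hidden_dim)
h_edited = intervention(h)             # same shape, edited in the subspace
```

Because only R, W, and b are trained (and interventions are applied at a handful of layers and token positions), the number of tunable parameters stays far below full fine-tuning, which is what makes the method a drop-in alternative to weight-based PEFTs.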

  • LinkedIn post