HyperPEFTNet: Parameter-Efficient Hypernetworks for Persona Synthesis
Abstract
Personalizing transformer language models to reproduce stable user writing styles at population scale is challenging because standard pipelines store and load a separate fine-tuned model or adapter for each user. We present HyperPEFTNet, a trait-conditioned hypernetwork that generates parameter-efficient fine-tuning (PEFT) weights for a frozen decoder from a compact author descriptor derived from stylometric and activity aggregates. The same descriptor-to-weights interface supports multiple PEFT families, including low-rank adaptation, bottleneck adapters, bias-only tuning, and prefix tuning, enabling checkpoint-free persona switching without per-user adapter storage. We train on multi-speaker Reddit threads from 10,000 highly active users and evaluate three complementary aspects: predictive fit on held-out replies under teacher forcing, controlled representation probes that test whether persona-relevant cohort labels remain recoverable under neutral prompts, and deployment costs such as per-user storage and persona-swap overhead. Across several PEFT heads, descriptor-conditioned weight generation improves teacher-forced likelihood relative to shared PEFT baselines. In representation probes, cohort structure is stronger when conditioning is intact and collapses under descriptor-shuffle and zero-descriptor controls, indicating that compact user descriptors meaningfully steer internal representations. Because per-user state is a small numeric vector, HyperPEFTNet keeps per-user storage constant and supports scalable, retrieval-free persona conditioning with a single shared model.
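The core mechanism described above, a hypernetwork that maps a compact per-user descriptor to PEFT weights for a frozen backbone, can be sketched minimally for the low-rank adaptation (LoRA) case. Everything below (the two-layer MLP generator, all dimensions, and names such as `LoRAHyperNet` and `adapted_forward`) is an illustrative assumption for a single linear layer, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class LoRAHyperNet(nn.Module):
    """Maps a per-user descriptor vector to low-rank (LoRA) update
    matrices A and B for one frozen linear layer. Illustrative only:
    a small MLP emits both factors as a single flat vector."""

    def __init__(self, descriptor_dim, d_model, rank, hidden=128):
        super().__init__()
        self.d_model, self.rank = d_model, rank
        self.mlp = nn.Sequential(
            nn.Linear(descriptor_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * d_model * rank),  # A and B, flattened
        )

    def forward(self, descriptor):
        # descriptor: (descriptor_dim,) stylometric/activity aggregates
        flat = self.mlp(descriptor)
        A = flat[: self.d_model * self.rank].view(self.rank, self.d_model)
        B = flat[self.d_model * self.rank :].view(self.d_model, self.rank)
        return A, B


def adapted_forward(frozen_linear, hypernet, descriptor, x, scale=1.0):
    """y = W x + scale * B (A x): frozen weight plus a user-specific
    low-rank update generated on the fly, so no per-user checkpoint
    is ever stored -- only the small descriptor vector."""
    A, B = hypernet(descriptor)
    return frozen_linear(x) + scale * (x @ A.t() @ B.t())


# Usage: persona switching is just swapping the descriptor vector.
torch.manual_seed(0)
d_model, rank, desc_dim = 16, 4, 8
base = nn.Linear(d_model, d_model)
for p in base.parameters():
    p.requires_grad_(False)  # backbone stays frozen
hyper = LoRAHyperNet(desc_dim, d_model, rank)

x = torch.randn(2, d_model)
user_a = torch.randn(desc_dim)
user_b = torch.randn(desc_dim)
y_a = adapted_forward(base, hyper, user_a, x)
y_b = adapted_forward(base, hyper, user_b, x)
print(y_a.shape)  # torch.Size([2, 16])
```

In this sketch the only trainable parameters are the hypernetwork's, and the only per-user state is the eight-number descriptor, which mirrors the abstract's claim of constant per-user storage and checkpoint-free persona swaps.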