LLM-Deflate: Extracting LLMs into Datasets
Large Language Models compress massive amounts of training data into their parameters. This compression is lossy but highly effective: billions of parameters can encode the essential patterns from terabytes of text. What's less obvious is that the process can be reversed; we can systematically extract structured datasets from trained models that reflect their internal knowledge representation.

I've been working on this problem, and the results are promising. We've successfully applied