Dataset Distillation
Dataset distillation refers to the synthesis of a small-scale dataset such that a model trained on this distilled dataset achieves performance comparable to one trained on the original large-scale dataset. The algorithm takes a large real dataset as input and outputs a small synthetic distilled dataset; quality is evaluated by training a model on the distilled data and measuring its performance on held-out real data. An effective distilled dataset not only aids dataset understanding but also has broad applications in areas such as continual learning, privacy protection, and neural architecture search.
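The input/output/evaluation pipeline above can be illustrated with a minimal sketch. This toy example uses a distribution-matching objective (optimizing each synthetic point toward its class mean in feature space, one of several distillation objectives in the literature) and an intentionally simple nearest-mean classifier for evaluation; the data, model, and hyperparameters are all illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Large" real dataset: two Gaussian classes in 2-D.
X = np.vstack([rng.normal([-2, 0], 1.0, (500, 2)),
               rng.normal([2, 0], 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

# Independent real dataset, used only for evaluation.
Xt = np.vstack([rng.normal([-2, 0], 1.0, (200, 2)),
                rng.normal([2, 0], 1.0, (200, 2))])
yt = np.array([0] * 200 + [1] * 200)

# Small synthetic distilled set: one learnable point per class.
S = rng.normal(size=(2, 2))
ys = np.array([0, 1])

# Distillation step (distribution matching): gradient descent on
# 0.5 * ||S[c] - mean(real class c)||^2 for each class c.
for _ in range(200):
    for c in (0, 1):
        grad = S[c] - X[y == c].mean(axis=0)
        S[c] -= 0.1 * grad

def predict(points, labels, Xq):
    """Nearest-mean classifier trained on the distilled points."""
    d = np.linalg.norm(Xq[:, None, :] - points[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]

# Evaluate the model trained on 2 distilled points against
# the independent real test set (original data has 1000 points).
acc = (predict(S, ys, Xt) == yt).mean()
print(f"distilled set size: {len(S)}, real-data test accuracy: {acc:.2f}")
```

Despite compressing 1,000 real points into 2 synthetic ones, the classifier trained on the distilled set recovers nearly all of the achievable accuracy here, because the synthetic points capture the class structure of the real data; realistic methods scale this idea to images and deep networks with more sophisticated matching objectives.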