Classical public-key steganography (Algorithm 1 from [54]) has a 100% failure rate when encoding a 16-byte message with GPT-2: GPT-2's per-token entropy frequently drops to near zero, and rejection sampling then cannot find a token whose hash bit matches the next message bit. Entropy bounding reduces the failure rate to 0–10%, but introduces detectable statistical bias: the selected tokens come from a visibly different probability distribution than baseline samples.
From 2021-kaptchuk-meteor — Meteor: Cryptographically Secure Steganography for Realistic Distributions
· §4 Adapting Classical Steganographic Schemes / Figure 2b
· 2021
· Computer and Communications Security
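The failure mode can be reproduced with a toy simulation. This is a hedged sketch, not the paper's experiment: `f` is a stand-in 1-bit hash function, and both distributions are invented stand-ins for GPT-2's next-token distributions at low- and high-entropy steps.

```python
import hashlib
import random

def f(token: str) -> int:
    """Public 1-bit function of a token (hash parity); a stand-in for the
    shared function the classical rejection-sampling scheme hashes with."""
    return hashlib.sha256(token.encode()).digest()[0] & 1

def encode_bit(dist: dict, bit: int, rng: random.Random, max_tries: int = 64):
    """Resample from the model distribution until a token's hash bit
    matches the message bit; return None if every try is rejected."""
    tokens, weights = list(dist), list(dist.values())
    for _ in range(max_tries):
        tok = rng.choices(tokens, weights=weights)[0]
        if f(tok) == bit:
            return tok
    return None

def encode_message(dist: dict, bits: list, rng: random.Random):
    """Encode one bit per token; abort the whole message on the first failure."""
    out = []
    for b in bits:
        tok = encode_bit(dist, b, rng)
        if tok is None:
            return None
        out.append(tok)
    return out

rng = random.Random(0)
bits = [rng.randrange(2) for _ in range(128)]  # a 16-byte message

# Near-deterministic step, as GPT-2 frequently produces: roughly half the
# message bits demand the rare hash parity, so encoding aborts almost surely.
low_entropy = {"the": 0.995, "a": 0.003, "an": 0.002}
print("low entropy :", encode_message(low_entropy, bits, rng))

# With ample entropy, rejection sampling finds a matching token at every step.
high_entropy = {f"tok{i}": 1 / 64 for i in range(64)}
print("high entropy:", "encoded" if encode_message(high_entropy, bits, rng) else "failed")
```

The compounding is the point: even a modest per-bit failure probability, raised to the 128 bits of a 16-byte message, drives the end-to-end success rate to zero.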
Implications
Do not use entropy bounding or static rejection sampling against real language models; the resulting token-probability bias is detectable by a passive adversary with access to the same model
Any transport using generative-model steganography must handle variable entropy natively, not by skipping low-entropy events
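The bias behind the first implication can be made concrete with a toy experiment. This is a sketch under invented assumptions: the shared 1-bit function is token-length parity (so the skew is easy to see by eye) and the three-token distribution is illustrative, not a GPT-2 output.

```python
import random
from collections import Counter

# Toy next-token distribution at one generation step (illustrative only).
dist = {"the": 0.60, "a": 0.25, "an": 0.15}
tokens, weights = list(dist), list(dist.values())

def f(token: str) -> int:
    # Transparent stand-in for the shared hash bit: token-length parity,
    # so f("the") = f("a") = 1 and f("an") = 0.
    return len(token) % 2

def stego_sample(bit: int, rng: random.Random, max_tries: int = 64) -> str:
    """Rejection-sample a token whose bit matches the message bit."""
    for _ in range(max_tries):
        tok = rng.choices(tokens, weights=weights)[0]
        if f(tok) == bit:
            return tok
    return tok  # out of tries; keep the last draw (vanishingly rare here)

rng = random.Random(0)
n = 20_000
baseline = Counter(rng.choices(tokens, weights=weights)[0] for _ in range(n))
stego = Counter(stego_sample(rng.randrange(2), rng) for _ in range(n))

for name, counts in [("baseline", baseline), ("stego   ", stego)]:
    print(name, {t: round(counts[t] / n, 3) for t in tokens})
```

Under these assumptions the skew is large: the model gives "the" 60% of the mass, but the stego transcript gives it roughly 35% and inflates "an" from 15% to about 50%, because every 0-bit forces the single even-length token. A passive observer who can query the same model detects the embedding from token frequencies alone, which is why variable entropy must be handled natively rather than by conditioning on which steps get embedded.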