In my PhD thesis, I claimed that the NPMI method presented in our NeurIPS paper can be extended to other architectures, such as autoencoders, since they are essentially the same setup as basic emergent-communication agents. To verify this claim, I wrote a very basic analysis of the latent space of a categorical VAE trained on MNIST. There is almost certainly more structure in the latent representation than this reveals, but even with very simple code I could identify some patterns in how the data is represented.
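To illustrate what such an analysis might look like, here is a minimal sketch of computing NPMI between categorical latent code assignments and MNIST class labels from co-occurrence counts. This is my own illustrative function, not the code from the paper, and the toy data at the bottom stands in for the codes a trained VAE would actually produce:

```python
import numpy as np

def npmi_matrix(codes, labels, n_codes, n_labels):
    """Normalized pointwise mutual information between each
    (latent code, class label) pair; values lie in [-1, 1],
    with 1 meaning the code and label always co-occur."""
    joint = np.zeros((n_codes, n_labels))
    for c, l in zip(codes, labels):
        joint[c, l] += 1
    joint /= joint.sum()                      # joint distribution p(c, l)
    p_c = joint.sum(axis=1, keepdims=True)    # marginal p(c)
    p_l = joint.sum(axis=0, keepdims=True)    # marginal p(l)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / (p_c * p_l))
        npmi = pmi / -np.log(joint)           # normalize by -log p(c, l)
    return np.nan_to_num(npmi)                # never-co-occurring pairs -> 0

# Toy sanity check: codes perfectly aligned with labels
# should give NPMI of 1 on the diagonal.
labels = np.repeat(np.arange(10), 100)
codes = labels.copy()
m = npmi_matrix(codes, labels, 10, 10)
```

High off-diagonal entries in a matrix like this would suggest that a latent code consistently fires for a class other than the one you expected, which is exactly the kind of pattern the analysis is meant to surface.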