# Harnessing the Universal Geometry of Embeddings

> A paper introducing the [Strong Platonic Representation Hypothesis](https://wiki.g15e.com/pages/Strong%20Platonic%20Representation%20Hypothesis.txt) and [vec2vec](https://wiki.g15e.com/pages/vec2vec.txt).

## Abstract

> We introduce the first method for translating [text embeddings](https://wiki.g15e.com/pages/Embedding%20(machine%20learning).txt) from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the [Platonic Representation Hypothesis](https://wiki.g15e.com/pages/Platonic%20Representation%20Hypothesis.txt)). Our translations achieve high [cosine similarity](https://wiki.g15e.com/pages/Cosine%20similarity.txt) across model pairs with different architectures, parameter counts, and training datasets. The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of [vector databases](https://wiki.g15e.com/pages/Vector%20database.txt). An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.
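
The abstract describes translating an embedding into a shared latent space and back out into another model's space, without paired data. Below is a minimal PyTorch sketch of that idea for intuition only: the adapter shapes, dimensions, and the cycle-consistency objective are illustrative assumptions on my part, not the paper's exact architecture or full loss (which also involves other unsupervised objectives).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical embedding dimensions; real encoder pairs vary.
DIM_A, DIM_B, DIM_LATENT = 768, 1024, 512

def mlp(d_in, d_out, d_hidden=1024):
    """Small MLP adapter (illustrative, not the paper's exact design)."""
    return nn.Sequential(
        nn.Linear(d_in, d_hidden),
        nn.SiLU(),
        nn.Linear(d_hidden, d_out),
    )

class Vec2VecSketch(nn.Module):
    """Toy vec2vec-style translator: each embedding space gets an
    encoder into a shared latent space and a decoder back out.
    Translating A -> B means decode_B(encode_A(x))."""
    def __init__(self):
        super().__init__()
        self.enc_a, self.dec_a = mlp(DIM_A, DIM_LATENT), mlp(DIM_LATENT, DIM_A)
        self.enc_b, self.dec_b = mlp(DIM_B, DIM_LATENT), mlp(DIM_LATENT, DIM_B)

    def translate_a_to_b(self, x_a):
        return self.dec_b(self.enc_a(x_a))

    def cycle_a(self, x_a):
        # A -> latent -> B -> latent -> A; cycle consistency is one
        # kind of training signal available without any paired data.
        return self.dec_a(self.enc_b(self.translate_a_to_b(x_a)))

model = Vec2VecSketch()
x_a = F.normalize(torch.randn(32, DIM_A), dim=-1)  # stand-in for real embeddings
x_b_hat = model.translate_a_to_b(x_a)              # unsupervised translation A -> B
cycle_loss = 1 - F.cosine_similarity(model.cycle_a(x_a), x_a, dim=-1).mean()
cycle_loss.backward()
```

The cosine-similarity term here doubles as the evaluation metric the abstract mentions: a good translation should land close (in angle) to where the target model would have embedded the same text.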