Hey!
Thanks for sharing your snippets and the paper, which is very nicely written as well. I was trying to understand the implementation of DLSH in the `coders.py` file, and I have a couple of questions:
- Why do we apply `GaussianRandomProjection` beforehand? Is it simply to reduce the higher-dimensional embeddings (e.g. vectors of 768, 512, or 1024 dims)?
- Secondly, in `transform_to_absolute_codes`, why do we adjust the offsets? I'm not sure I understood that part clearly.
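For context on the first question, here is my current mental model of the projection step, as a NumPy-only sketch of what `GaussianRandomProjection` does (this is just my guess at its role, not the actual DLSH code from `coders.py`; the dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of high-dimensional embeddings (e.g. 768-dim transformer vectors).
X = rng.normal(size=(100, 768))

# Gaussian random projection, as in sklearn's GaussianRandomProjection:
# multiply by a random Gaussian matrix to drop to n_components dims while
# roughly preserving pairwise distances (Johnson-Lindenstrauss).
n_components = 64
R = rng.normal(size=(768, n_components)) / np.sqrt(n_components)
X_low = X @ R  # shape (100, 64)

# My guess: the LSH stage then buckets points by quantizing these
# lower-dimensional coordinates, e.g. a sign pattern per direction.
codes = (X_low > 0).astype(np.uint8)
```

Is that roughly the intent, i.e. the projection is only there to make the hashing stage cheaper and distance-preserving, or does DLSH rely on it for something more?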
Thanks again for the paper and the clean snippets!
Best,
Aditya.