reey
raihannabiil
0 followers · 1 following
raihannabil21
AI & ML interests
GENSYN
Recent Activity
reacted to codelion's post · 10 days ago
Reverse Engineering a $500M Mystery: From HashHop to Memory-Augmented Language Models

I wrote a deep dive into how Magic AI's 100M-token context window might work, starting from their HashHop benchmark and building up to MALM, a Memory-Augmented Language Model. Key insight: treating each key as a single token enables perfect retrieval at unlimited context lengths.

The article covers:
- How HashHop works and why its perfect accuracy is suspicious
- Building a tokenized solver that achieves 100% accuracy
- Scaling to MALM for real code-search tasks
- Why this approach could handle 100M+ tokens

Read the full article: https://huggingface.co/blog/codelion/reverse-engineering-magic-hashhop
Try the model: https://huggingface.co/codelion/malm-165m
Code: https://github.com/codelion/hash-hop
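The "single token per key" insight can be illustrated with a toy sketch: HashHop-style tasks chain random hash strings key → value → key, and if each hash is treated as one atomic symbol, resolving a multi-hop query reduces to exact dictionary lookups, which succeed regardless of how many pairs are in the table. This is a hypothetical illustration of the idea, not the MALM implementation; the function names and the 8-character hash truncation are assumptions.

```python
import hashlib


def make_chain(seed: str, hops: int) -> dict:
    """Build a toy HashHop chain of key -> value hash pairs: h0 -> h1 -> ... -> h_hops."""
    table = {}
    cur = hashlib.sha256(seed.encode()).hexdigest()[:8]  # assumed 8-char keys for readability
    for _ in range(hops):
        nxt = hashlib.sha256(cur.encode()).hexdigest()[:8]
        table[cur] = nxt
        cur = nxt
    return table


def resolve(table: dict, start: str, hops: int) -> str:
    """Follow the chain for `hops` steps.

    Because each key is one atomic symbol, every hop is an exact lookup,
    so retrieval accuracy is 100% by construction, independent of table size.
    """
    cur = start
    for _ in range(hops):
        cur = table[cur]
    return cur
```

The table here stands in for the "context": growing it to millions of pairs does not change the cost or accuracy of a lookup, which is why a solver built this way can report perfect accuracy at arbitrary context lengths.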
reacted to codelion's post with ❤️ · 10 days ago
Organizations
None yet
raihannabiil's datasets
None public yet