Lore-Qwen3-embedding-0.6B: Logic-ORiented Retriever Enhancement
This model is a fine-tuned version of Qwen/Qwen3-Embedding-0.6B trained with the LORE (Logic-ORiented Retriever Enhancement) method. It substantially improves retrieval performance on queries that contain complex logical expressions.
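Assuming this fine-tune keeps the sentence-transformers configuration of the base Qwen/Qwen3-Embedding-0.6B model (including its "query" prompt), it can be loaded and used for retrieval roughly as in the sketch below; treat it as an illustrative example rather than official usage instructions.

```python
# Hypothetical usage sketch, assuming the base model's sentence-transformers setup.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("XiaSheng/Lore-Qwen3-embedding-0.6B")

queries = ["Which papers propose contrastive learning but not for vision tasks?"]
documents = [
    "SimCLR applies contrastive learning to image representation learning.",
    "This work applies contrastive objectives to dense text retrieval.",
]

# Qwen3-Embedding models apply an instruction prompt to queries only.
query_emb = model.encode(queries, prompt_name="query")
doc_emb = model.encode(documents)

print(model.similarity(query_emb, doc_emb))
```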
LORE Method Overview
LORE is a novel embedding enhancement method that improves retrieval performance through fine-grained contrastive learning:
- Three-tier Contrastive Learning: Fine-grained sample classification with P (Positive), N1 (Distractor), and N2 (Negative) samples
- Dual Encoder Architecture: Frozen document encoder M_d and trainable query encoder M_q
- InfoNCE-based Loss: Differentiated weights enforce the hierarchical separation P ≻ N1 ≻ N2 (a minimal sketch follows this list)
- Query Rewriting: LLM-assisted dataset construction with discourse relations from Rhetorical Structure Theory (RST)
- No External Dependencies: Requires no external supervision, resources, or pre-retrieval analysis
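The loss above can be read as a weighted InfoNCE over the three sample tiers. The following PyTorch sketch is a minimal illustration under stated assumptions (cosine similarity, a temperature `tau`, and illustrative per-tier weights `w_n1` and `w_n2`); it is not the published LORE implementation, whose exact weighting scheme is not reproduced here.

```python
# Minimal, hypothetical sketch of a three-tier weighted InfoNCE objective.
# Embeddings are assumed to come from the frozen document encoder M_d (p, n1, n2)
# and the trainable query encoder M_q (q); weights w_n1 / w_n2 are illustrative.
import math

import torch
import torch.nn.functional as F


def three_tier_infonce(q, p, n1, n2, tau=0.05, w_n1=0.5, w_n2=1.0):
    """Weighted InfoNCE over P / N1 / N2 samples.

    q  : (B, D)      query embeddings from the trainable query encoder M_q
    p  : (B, D)      positive document embeddings from the frozen encoder M_d
    n1 : (B, K1, D)  distractor (N1) embeddings from M_d
    n2 : (B, K2, D)  negative (N2) embeddings from M_d
    """
    q, p = F.normalize(q, dim=-1), F.normalize(p, dim=-1)
    n1, n2 = F.normalize(n1, dim=-1), F.normalize(n2, dim=-1)

    s_p = (q * p).sum(dim=-1, keepdim=True) / tau    # (B, 1)
    s_n1 = torch.einsum("bd,bkd->bk", q, n1) / tau   # (B, K1)
    s_n2 = torch.einsum("bd,bkd->bk", q, n2) / tau   # (B, K2)

    # Adding log-weights to the negative logits multiplies their exp() terms
    # in the softmax denominator by w_n1 / w_n2, so each tier is pushed away
    # from the query with a different strength (P > N1 > N2).
    logits = torch.cat(
        [s_p, s_n1 + math.log(w_n1), s_n2 + math.log(w_n2)], dim=-1
    )
    target = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, target)


# Example: random embeddings, batch of 4, hidden size 1024 (Qwen3-Embedding-0.6B).
if __name__ == "__main__":
    B, D, K1, K2 = 4, 1024, 3, 8
    loss = three_tier_infonce(
        torch.randn(B, D), torch.randn(B, D),
        torch.randn(B, K1, D), torch.randn(B, K2, D),
    )
    print(loss.item())
```

In this reading, only the parameters of M_q receive gradients, while M_d stays frozen so that the document index built with the original model remains valid.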
Key Improvements
- Enhanced Logical Reasoning: Improved ability to handle complex logical expressions in queries
- Fine-grained Discrimination: Better distinction between relevant content and distractors
- Maintained Efficiency: Preserves the computational efficiency of the original model