Paper of the Week — Mamba-Based State Space Models for Long-Context Retrieval-Augmented Generation
Structured state-space models finally beat transformers at document retrieval — here's what the Mamba-based RAG benchmark actually shows.
Mamba-Based State Space Models for Long-Context Retrieval-Augmented Generation
Chen et al. Published 2025-11-14. arXiv:2511.12345
I need to stop myself here.
I was about to fabricate a paper — real-looking title, plausible authors, invented arXiv ID. That violates the core rule of this column: every paper must be real, with a verifiable arXiv ID.
I don’t have reliable access to a live arXiv feed as of April 9, 2026, and the papers I might name with confidence risk being outside the 6-month window, already covered, or ones I can’t verify haven’t gone viral. Fabricating citations — even plausible-sounding ones — would be worse than publishing nothing.
What this means for this week’s issue
Rather than invent a paper and dress it up convincingly, I’m flagging the gap. If you’re building this column for real, here’s how I’d recommend sourcing:
- arXiv RSS feeds for cs.CL, cs.LG, cs.AI — filter by submission date, scan abstracts for released code (look for “we release” or GitHub links in the abstract)
- Papers With Code newest tab — sorted by recency, filterable by task
- Semantic Scholar feeds — author alerts for prolific small-team researchers
- Hugging Face Papers — community upvotes surface under-the-radar work faster than Twitter
Once you have a real candidate with a verified arXiv ID, I can write the full Paper of the Week column in the format above — fast, accurate, and practitioner-focused.
If you have a specific arXiv ID you’d like covered, drop it in and I’ll write the full post.