Show HN: I compressed 10k PDFs into a 1.4GB video for LLM memory

While building a Retrieval-Augmented Generation (RAG) system, I was frustrated by my vector database consuming 8GB RAM just to search my own PDFs. After incurring $150 in cloud costs, I had an unconventional idea: what if I encoded my documents into video frames?

The concept sounded absurd—storing text in video? But modern video codecs have been optimized for compression over decades. So, I converted text into QR codes, then encoded those as video frames, letting H.264/H.265 handle the compression.

The results were surprising. 10,000 PDFs compressed down to a 1.4GB video file. Search latency was around 900ms compared to Pinecone’s 820ms—about 10% slower. However, RAM usage dropped from over 8GB to just 200MB, and it operates entirely offline without API keys or monthly fees.

Technically, each document chunk is encoded into QR codes, which become video frames. Video compression handles redundancy between similar documents effectively. Search works by decoding relevant frame ranges based on a lightweight index.
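
The repo's actual data layout isn't shown here, but the search step implies an index mapping chunk IDs to frame ranges. A minimal sketch of that idea, with all names hypothetical:

```python
# Sketch of a lookup index mapping chunk IDs to the video frames that
# hold their QR codes. Names are illustrative, not memvid's actual API.

def build_frame_index(chunks_per_frame):
    """chunks_per_frame: entry i is the list of chunk IDs encoded into
    video frame i. Returns chunk_id -> (first_frame, last_frame)."""
    index = {}
    for frame_no, chunk_ids in enumerate(chunks_per_frame):
        for cid in chunk_ids:
            first, _ = index.get(cid, (frame_no, frame_no))
            index[cid] = (first, frame_no)
    return index

def frames_to_decode(index, chunk_ids):
    """Given chunk IDs from a semantic search, return the set of frame
    numbers that must be decoded to recover those chunks."""
    frames = set()
    for cid in chunk_ids:
        first, last = index[cid]
        frames.update(range(first, last + 1))
    return sorted(frames)

idx = build_frame_index([["a"], ["a", "b"], ["c"]])
print(frames_to_decode(idx, ["a", "c"]))  # frames 0-1 hold "a", frame 2 holds "c"
```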

You get a vector database that’s just a video file you can copy anywhere.

GitHub: https://github.com/Olow304/memvid


This is an extremely bad method of storing text data. Video codecs are not particularly efficient at compressing QR codes, given the high contrast between the blocks defeating the traditional DCT psychovisual assumptions of smooth gradients. There is little to no redundancy between QR code encodings of similar text.

You'd probably have a smaller database and better results crunching text into a zip file, or compressed rows in a sqlite database, or any other simple random-access format.
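
A quick sketch of the "compressed rows in a sqlite database" alternative, using only the standard library: each chunk is deflated individually, so any row can be fetched and decompressed on its own (random access, no video-decode round-trip).

```python
import sqlite3
import zlib

# Store per-chunk deflate-compressed text in SQLite; fetch any row directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks (id INTEGER PRIMARY KEY, body BLOB)")

docs = ["the quick brown fox " * 50, "jumped over the lazy dog " * 50]
for i, text in enumerate(docs):
    conn.execute("INSERT INTO chunks VALUES (?, ?)",
                 (i, zlib.compress(text.encode("utf-8"))))

row = conn.execute("SELECT body FROM chunks WHERE id = ?", (1,)).fetchone()
print(zlib.decompress(row[0]).decode("utf-8")[:25])  # "jumped over the lazy dog "
```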

I'd say it would be bewildering if there were not a more efficient way to store text for this purpose than "QR codes in compressed video frames".

The vector database previously used must have been very inefficient.

> The vector database previously used must have been very inefficient.

Especially if it was taking ~800 ms to do a search. At that speed, you'd probably be better off storing the documents as plain text, without the whole inefficient QR/H264 round-trip.

> Video compression handles redundancy between similar documents effectively.

Definitely not. None of the "redundancy" between, or within, texts (e.g. repeated phrases) is apparent in a sequence of images of QR codes.

900 ms sounds like a lot for just 10,000 documents? How many chunks are there per document? Maybe Pinecone's 820 ms includes network latency plus they need to serve other users?

In Go, I once implemented a naive brute-force cosine search (linear scan in memory), and for 1 million 350-dimensional vectors, I got results in under 1 second too IIRC.
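
The commenter's Go code isn't shown; a Python sketch of the same linear-scan idea (score every vector against the query, keep the top k) looks like this:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def brute_force_search(query, vectors, k=3):
    """Naive brute force: score all vectors, return top-k indices."""
    scored = sorted(range(len(vectors)),
                    key=lambda i: cosine(query, vectors[i]),
                    reverse=True)
    return scored[:k]

vecs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(brute_force_search([1.0, 0.1], vecs, k=2))  # nearest first
```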

I ended up just setting up OpenSearch, which gives you hybrid semantic + full-text search out of the box (BM25 + kNN). In my tests, it gave better results than semantic search alone, something like +15% better retrieval.

Cut the cloud vendors out of the picture and build and query your index on a spare linux box.

I've only played with TF-IDF/BM25 as opposed to vector searches, but there's no way your queries should be taking so long on such a small corpus. Querying 10k documents feels like 2-10ms territory, not 900ms.
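
The reason keyword search over a small corpus is that fast: query time is a few hash lookups plus a set merge. A toy inverted index (AND semantics only, no ranking) makes the point:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-query: IDs of docs containing every query term."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

docs = ["video codecs compress frames", "vector search in memory",
        "qr codes in video frames"]
idx = build_index(docs)
print(search(idx, "video frames"))  # doc IDs 0 and 2
```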

> The results were surprising. 10,000 PDFs compressed down to a 1.4GB video file.

And how big was the total text in those PDFs?

How big were the original PDFs? Are they just text or images and other formatting too?

If they were less than 140k on average, then this isn't "compression" but "lossy expansion".

Is this more efficient than putting all of that in say a 7z archive?

I'd expect video frames to be maximally efficient if you sorted the chunks by image similarity somehow.

Also isn't there a risk of losing data by doing this since for example h.265 is lossy?

h.265 is lossy, but QR codes have built-in error correction, so some loss is tolerable

Is the probability of lost data zero across eg. millions of documents?

I see there's ~30% error-correction redundancy per document, but I'm not sure every frame in an H.265 file is guaranteed to keep more than 70% of a QR code readable. And if a code isn't readable, that could mean losing an entire chunk of data.

I'd definitely calculate the probability of losing data if storing text with a lossy compression.
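
A back-of-envelope version of that calculation, under the simplifying (and generous) assumption that frames fail independently with some small probability p:

```python
import math

def p_any_loss(p_frame, n_frames):
    """Probability that at least one of n_frames frames is lost, assuming
    independent per-frame failure probability p_frame. Codec errors are
    not truly independent, so treat this as a lower-bound sketch."""
    return 1.0 - (1.0 - p_frame) ** n_frames

# Even a tiny per-frame failure rate compounds over a million frames:
for p in (1e-6, 1e-8):
    print(p, p_any_loss(p, 1_000_000))
```

At p = 1e-6 per frame, losing at least one chunk across a million frames is roughly 1 - e^-1, i.e. better than even odds of some data loss.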


I’m not sure why this is getting so much hate. This could be groundbreaking.

Why not just run the vector database locally? Or were the RAM consumption and the cloud costs separate complaints?

can you provide that mp4

April Fools?

Why does this work so well?

It does not. It's an indictment of the vector database working so poorly that even deliberately making up something ridiculously inefficient (encoding PDFs as QR codes as H.264 video) is somehow comparable.

It's possible to be less efficient, but it takes real creativity. You could print out the QR codes and scan them again, or encode the QR codes in the waveform of an MP3 and take a video of that.

It's really, really bad.