The easiest way for me to break the mental barrier caused by short form content is to remind myself that knowing something is not the final product. The final product is trusting the knowledge and communicating that trust. Any information that finds its way to me without me asking for it is inherently less trustworthy and less communicable than information I hunted for with intention.
Short form content feeds are like unloading a dump truck full of random items into your driveway. Is it actually better if all that stuff you didn't ask for is real information that needs to be organized and pieced together with what you already know, without any of the associated context that helps you do that? Or is it better if you know 99% of it is trash and you don't need to remember any of it?
I think a tool like this is great for people who want to use short form content intentionally, and personally that only happens when I am bored and in need of a new topic to research. I think of all short form content like marketing/ads, just showcasing something I might be interested in digging into on my own. It's how I used the StumbleUpon website back in the day.
But I have noticed I am rarely using short form content with intention. It's because I want to check what my friends have posted, and then with the extra downtime I scroll a little bit, and sometimes get stuck.
I think what is interesting is that it is not necessarily the content of the brainrot that makes us underperform, but the act of swiping [0] and the context switching [1].
These attempts to make educational short form content still suffer from the same drawbacks, so I wonder how effective they truly are.
[0] https://cyberpsychology.eu/article/view/33099
[1] https://www.tandfonline.com/doi/10.1080/09658211.2025.252107...
I think it's "this one is good but if I swipe the next one can be even better", i.e. classic dopamine addiction. Stop digging!
I would place my money on the vast gap between effort and reward. You don't even need to think "if I swipe..." because the thought takes longer than the action. So why would you stop to consider what you might have to gain by swiping when you can literally swipe and find out faster than you can think about it?
Then you go about your regular day and suddenly everything feels harder in comparison. You have to think about what you're doing, you have to coordinate or plan your actions, you have to put work in. The swiping rots your ability to maintain and coordinate your chain of actions.
It weakens your ability to have intent.
This is it. Modern social media is a Skinner box. The context switching is a feature (a short-term dopamine hit in exchange for deep learning).
A slot machine effect. Viewing our society as an addicted one clarifies most of our social ills.
I suppose the idea is that if we're gonna be underperforming due to endless swiping and context switching, might as well get stimulated by educational content instead of brainrot. Similar to a nicotine patch to help quit smoking.
This is a very neat idea. I am not sure why the page needs to load 40mb of data and make me wait 5 mins before the first view. I'd probably also add some ranking criteria to surface good quality articles that maximize the "I learnt something new today" factor. Overall kudos to the developer for original thinking.
Presumably the 40mb of data is not from Wikipedia, but the Javascript tracking code bundle needed to turn it into a doomscrollable social media feed. ;) By those standards, I think it’s pretty lightweight! For comparison, the Instagram iOS app is 468.9mb, more than ten times the size…
The 40MB of data is Wikipedia data, the site itself is 21kB.
40mb is way too much for a JS bundle... Even with a framework you could do this with 5mb or less.
> you could do this with 5mb or less
How quickly the times change... Back in my day, we put the limit on bundles at a maximum of 1MB, and it felt large even then.
Don't get me wrong. 5mb is a lot for this, yes. This app, coded with love and interest, could easily be made under 1mb.
This app IS made in under 1mb. The entire app, including all the assets minus all the actual Wikipedia data, is 21kB (no minification or compression). And all of it is in a single html file with human-readable code.
Interesting. I haven't investigated, so I don't know where the 40mb comes from.
It's JSON.
Now imagine how big the builds are for Instagram's server side doomscrollable feed algorithm, given their inverse incentives to this project.
Yeah, the implementation is odd. Cool idea though. Also see:
Yeah. Should be able to load in the background once you start scrolling
probably vibe coded
I wrote this project by hand in Sublime Text, which is more of a text editor than an IDE. I don't use AI, even for autocomplete.
All of the code is unminified/unobfuscated in the index.html file, you can right click the page and view-source it.
I do the same as you, except in vim. But increasingly I'm getting the nagging feeling that I'm wasting my time.
You’re not wasting time by being deliberate about what you build and having to choose what to dedicate yourself to. Vibe coding is not synonymous with productivity, and a lot of what we call “productive” today is just a different form of wasting life that is more socially acceptable.
I made this project from start to finish in less than a day, and I feel it was well worth my time and effort.
Thank you for spending your time and effort on something fun you believe in and that explores a point about human nature, and for showing we often underestimate how much we can get done in so little time on our own when we set our attention to it.
[deleted]
Should that be a criterion for deciding whether it's cool or not?
I ran across a grammar mistake in one of the entries and clicked into the actual wikipedia entry to fix it. That was satisfying. Imagine being able to do that on social media.
Oh man, there are so many times I find myself wanting to click the edit button on websites that aren’t wikipedia to fix typos or other minor errors.
that's really cool!!
Please fix the loading issue and I’ll return! I think you don’t need to pull all the data at initialization, you could lazily grab a couple from each category and just keep doing it as people scroll.
The loading issue is just a hug of death, the site's currently getting multiple visitors per second, and that requires more than a gigabit of bandwidth to handle.
I sort of need to pull all the data at initialization because I need to map out how every post affects every other - the links between posts are what take up the majority of the storage, not the text inside the posts. It's also kind of the only way to preserve privacy.
I think I'm missing something, but does every user get the same 40MB? If so, can you just dump the file on a CDN?
I feel very strongly that you should be able to serve hundreds or thousands of requests at gbps speeds.
Why are you serving so much data personally instead of just reformatting theirs?
Even if you're serving it locally...I mean a regular 100mbit line should easily support tens or hundreds of text users...
What am I missing?
> Why are you serving so much data personally instead of just reformatting theirs?
Because then you only need to download 40MB of data and do minimal processing. If you were to take the dumps off of Wikimedia, you would need to download 400MB of data and do processing on that data that would take minutes of time.
And also it's kind of rude to hotlink half a gig of data on someone else's site.
> What am I missing?
40MB per second is 320mbps, so even 3 visitors per second maxes out a gigabit connection.
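The arithmetic behind that claim can be checked with a quick back-of-envelope calculation (the numbers are taken from the comment above, not measured):

```javascript
// Back-of-envelope check: 40 MB per visitor at N visitors/second,
// compared against a 1 gigabit (1000 mbps) uplink.
const payloadMegabytes = 40;
const megabitsPerVisitor = payloadMegabytes * 8; // 40 MB = 320 megabits
const visitorsPerSecond = 3;

const demandMbps = megabitsPerVisitor * visitorsPerSecond;
console.log(demandMbps); // 960 mbps - essentially saturating a gigabit link
```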
no but...why are you passing 40mb from your server to my device in a lump like that?
All I'm getting from your server is a title, a sentence, and an image.
Why not give me say the first 20 and start loading the next 20 when I reach the 10th?
That way you're not getting hit with 40mb for every single click but only a couple of mb per click and a couple more per scroll for users that are actually using the service?
Look at your logs. How many people only ever got the first 40 and clicked off because you're getting ddosed? Every single time that's happened (which is more than a few times based on HN posts), you've not only lost a user but weakened the experience of someone that's chosen to wait by increasing their load time by insisting that they wait for the entire 40MB download.
I am just having trouble understanding why you've decided to make me and your server sit through a 40MB transfer for text and images...
> no but...why are you passing 40mb from your server to my device in a lump like that?
Because you need all of the cross-article link data, which is the majority of the 40mb, to run the algorithm. The algorithm does not run on the server, because I care about both user privacy and internet preservation.
Once the 40MB is downloaded, you can go offline, and the algorithm will still work. If you save the index.html and the 40MB file, you can run the entire thing locally.
> actually using the service
This is a fun website, it is not a "service".
> you've not only lost a user but weakened the experience of someone that's chosen to wait by increasing their load time
I make websites for fun. Losing a user doesn't particularly affect me, I don't plan on monetizing this, I just want people to have fun.
Yes, it is annoying that people have to wait a bit for the page to load, but that is only because the project has hundreds of thousands more eyes on it than I expected within the first few hours. I expected this project to get a few hundred visits in its first few hours, in which case the bandwidth wouldn't have been an issue whatsoever.
> I am just having trouble understanding why you've decided to make me and your server sit through a 40MB transfer for text and images...
Running the algorithm locally, privacy, stability, preservation, ability to look at and play with the code, ability to go offline, easy to maintain and host etc.
Besides, sites like Twitter use up like a quarter of that for the JavaScript alone.
It's incredible how rude and entitled people are about a toy site. It's like they are looking for any reason to take a shit all over it.
You did a great job and I love hearing that you did it all by hand in a day rather than having AI make it for you.
I believe in privacy but generally people are fine with rec algorithms running on a server if it's transparent enough/self hostable. Mastodon/DuckDuckGo/HN/etc all don't need to download a huge blob locally. (If you do want it to run locally, hosting the blob on a CDN or packaging this as an app and letting someone else host it would probably improve the experience a lot)
Mastodon/HN do not have a personalized weighted algorithm. On HN you see what everyone else sees, and on Mastodon the feed is chronological. DuckDuckGo offers some privacy, but still sends your search queries to Bing.
Also, all three of the examples are projects that have years of dev effort and hosting infrastructure behind them - Xikipedia is a project I threw together in less than a day for fun, I don't want to put effort into server-side maintenance and upkeep for such a small project. I just want a static index.html I can throw in /var/www/ and forget.
And re: hosting, my bare metal box is fine. It's just slow right now because it's getting a huge spike of attention. I don't want to pay for a CDN, and I doubt I could host a file getting multiple gigabits per second of traffic for free.
I really like how you have done things. Didn’t mind the waiting time.
Thank you for making my day a little brighter.
Seconding this—I had to wait a little bit to download it and play around and have some fun with it. I didn't mind.
What I appreciate the most about this string of comments (from OP) is that digging into "doing it for fun", hosting on your own machine, wanting simplicity for you as the maintainer and builder. This has been a big focus for me over a number of years, and it leads to things being not efficient, or scalable or even usable by others—but they bring me joy and that is more than enough for most things.
The reality is that there are of course ways to make this more efficient AND it simply doesn't need to be.
Good job on making something that people are clearly interested in, it brought me some joy clicking around and learning some things.
If you want it to be more than just this, of course you'll have to make it faster or have it be a different interface—installable offline typa thing so we can expect a bundle download and be fine with waiting. For example I can see this as a native app being kinda nice.
If you don't want it to be more than this, that's okay too.
Regardless, well done
Yeah, that's fair, though I'd think you can get a CDN/someone else to host the blob for fairly cheap/free.
Having too many users is a pretty good problem to have anyway!
(Could have the client download the blob from where the repo is hosted on GitHub, which takes under a second for me to download: https://github.com/rebane2001/xikipedia/raw/refs/heads/mane/...)
Who made you do anything? It's a fun website. If you don't like it, move along or make one yourself. I could understand if you were paying for something, but this is free.
Why not…. Load it on demand?
That's my point. So confused. Got a ton of users clicking off because of this.
The point you're missing is that this website is actually a submarine ad for the domain, xikipedia.org, which the owner is probably trying to sell.
That's a very silly claim considering I bought the domain the same day I released the project. I'm sure whoever would've been interested in buying the domain could've already swept it up for 10 bucks before me.
It's blocked for me :( I think it must have been a typosquatted domain before.
I was not aware of WikiSpeedRuns, that's a fun one (and then the 2nd link you shared basically allows you to check how well you did)
I abandoned Facebook since I couldn't stand the feed experience anymore. Recently I learned about using https://www.facebook.com/?sk=h_chr in combination with the extension https://www.fbpurity.com/, which gives me a chronological feed of groups and people I actually want to hear from. The most astonishing part of the experience is how calm I feel consuming this cleaned-up feed. Almost all the negative emotions seem to stem from the uncontrollable feed experience.
TIL:
The United States Virgin Islands are a group of islands in the Caribbean Sea. They are currently owned and under the authority of the United States Government. They used to be owned by Denmark (and called Danish West Indies). They were sold to the U.S. on January 17, 1917, because of fear that the Germans would capture them and use them as a submarine base in World War I.
https://simple.wikipedia.org/wiki/United_States_Virgin_Islan...
Yes. The sale has been in the news lately because in that agreement the US formally agreed to cede its interest in Greenland.
Love the concept. Wikitok also exists [1], but the recommendation aspect that you're bringing to the table is a very intriguing original spin on it. I would be fascinated to see what a smart algorithm could discover on my behalf on Wikipedia given enough time.
I think it would be nice if you could offer a non-Simple English version, but nevertheless I'm happy with what you've put together, and I've added a shortcut to my phone. Please don't let the negativity stop you from continuing to work on it.
First thing I see: https://en.wikipedia.org/wiki/Esophageal_cancer
Thank you.
I love the concept. But the long load at startup really kills it. Even clicking off the site and reloading makes me have to go through the download all over again.
Everything is unclickable on the first page for me; the word "Estonia" is typed in a grey font on the dark-grey layout, and I cannot do anything except select text.
I've been waiting for someone to implement this well! I think in the future we might even have tiktok-style influencer videos generated from wikipedia content, who knows.
I've been swiping a lot for the last 10 minutes and I'm not sure how much it's learning. I have some feedback.
- I have never liked or clicked a biography but it keeps suggesting vast amounts of those
- It does not seem to update the score based on clicking vs liking vs doing both. I would assume clicking is a solid form of engagement that should be taken into consideration
- It would be interesting to see some stats. I have no idea how many articles I've scrolled through or the actual time spent on liked vs disliked article previews. If you can add such insight it would be interesting
- A negative feedback mechanism would be interesting as well. There is no way to signal whether I'm just neutral towards something (and swipe through) or actively negative about it (which is a form of engagement the doomscroll would actually use to show me such content once in a while)
- Since this website has already shown me multiple pages about things I'm learning about thanks to it, it might benefit from a "share" button (another engagement signal), as HN folks are likely to want to share things they've just learned on HN
- Would you be willing to make the experiment open source?
It's open source insofar as the JavaScript is not minified or obfuscated. You can see it at https://github.com/rebane2001/xikipedia too.
I want to try reimplementing it for Wikipedia in another language. Would you mind sharing how you went from the 400MB Wikipedia export to the 0.1x (40MB) file that is downloaded here?
Yeah I plan on putting the code for that on GitHub soon too.
Perfect! Thanks for clarification. I thought there was server-side preparation of the content, but it seems from the other posts that it's all local, and I commend you for that.
Funny that I selected some subjects such as art, technology and human sexuality, and after a bit of scrolling, I get an article on Technosexuality. TIL.
Took several minutes to load for me, and when my download got to 100%, the browser (Safari on iOS) refreshed the page and started at 0% again.
Back when Wikipedia was new people had a tendency to spend all day in it clicking deeper and deeper. It was called "ratholing."
It was timewasting or avoidant behavior for sure, and often described negatively. But at least you were learning something.
Silicon Valley then spent the next few decades trying to understand that behavior so they could isolate it, strip all positive value out of it, and make it highly profitable.
I don't get it. It shows the intro paragraphs of some articles in a card list, and that's it? Clicking a card takes you away from the feed, instead of creating e.g. some kind of a path of interest.
> instead of creating e.g. some kind of a path of interest.
You need to like them to update the weights of the algo, it works well
I just kept scrolling, hoping it would learn from how long I paused over content to read it the way FB's seems to, but it seems you're right, in this case "likes" are required.
Ooh, it's indeed fascinating how quickly you can end up with a focus on a singular subject.
I was genuinely excited to try this and it sounded in theory like a lot of fun! Unfortunately yeah too slow to load.
I wonder what the cognitive difference is between the link to link addictive clicking on wikipedia and doomscrolling? I assume doomscrolling is even more pernicious because it's the lowest possible level of friction. I'm sure somebody has done studies.
It's ironic that doomscrollable social media feeds are built for low attention spans, because this website is the opposite. Gave up after 20 seconds.
I have been waiting for a project like this since last year when someone on Twitter mentioned it. I can finally doomscroll without my brain rotting.
Update, it does not run properly in Firefox on iOS. After it has loaded to 100%, the site refreshes.
With that domain name, I expected to see this Grok-based mutation of Wikipedia. I guess, the "x" just has a negative association with me now.
That's incredible.
I was thinking the same, and read it as “Xi”kipedia. Then sure enough, one of the articles that it immediately showed when it loaded was for “General Secretary of the Chinese Communist Party”, or Xi Jinping.
Coincidence?! Yeah, probably.
It took a while to load but once I was in I found it like reading a magazine. I was expecting it to be more of a Tiktok UX.
It's kind of like the opposite of my Wikipedia project Redactle.net, which takes a lot of effort.
OP, since you're encountering load issues I would suggest narrowing your corpus to Wikipedia vital level 3 and caching all the content since it's only 1000 articles.
Page crashed after downloading and extracting, on Safari on an iPhone that's a few years old, latest iOS. I was really interested in trying, which is why I waited. Edit: tried again, and it crashed at 66% loading (after loading reached 100%).
> you will likely see NSFW content. Please only continue if you're an adult.
Should be: Please only continue if you're not at work.
The NSFW content is a big bummer for me. I can't let a kid go to Wikipedia unsupervised because of this.
Not everyone works for Valley Virgins
[dead]
> It is made as a demonstration of how even a basic non-ML algorithm with no data from other users can quickly learn what you engage with to suggest you more similar content.
Yeah, it got really sticky real fast. From the random (?) selection it starts off with, I couldn't recognize anything but popular TV shows; it immediately over-indexed on that content, and I had to fight for my life to see anything else in the feed that I would recognize and consider a good algorithmic pick for my interests.
Which is brilliant, because Instagram has the same issue for me - absolute metric tons of garbage and whenever there is a gem in that landfill of a feed that I interact with positively, it's nothing but more of that on my feed for weeks until I grow sick of that given thing. In conclusion, Instagram could have used this 30 line algorithm and I'd have the same exact experience when using it.
Algorithmic feeds are obviously problematic for turning several generations into lobotomized zombies, but they are also just not very good at nuance, so it is not even a case of something that's bad for you but feels so good. It's just something that's bad, but it is able to penetrate the very low defenses in human psychology against addiction and short-term gratification, and there is no incentive to improve these feeds for the sake of the user as long as they work for the advertisers.
You know, I enjoyed this, it's nice to get some random, interesting stuff to browse on occasion.
I assumed that obviously this is very clever performance art to show how even the healthiest food can be turned into brainrotting sludge.
But then I look at the comments, and it really looks like some people want this.
Now I'm depressed.
I wonder if this would be a "better" way to build this thing: https://www.infoq.com/news/2026/01/duckdb-iceberg-browser-s3...
DuckDB loaded in the browser via WebAssembly and Parquet files in S3.
This is unfortunately loading very, very slowly for me.
This is really cool. And doing it in only 500 lines of code is really impressive. I would have thought this was much more.
Very small suggestion: Can you make the entries actual links/anchor tags so that it is possible to copy link, middle-click to open in a new tab, and so on?
An issue I have with these apps that claim to be for doomscrolling is that you don't open apps like Instagram or Facebook to doomscroll, you open them to check messages or stories. The doomscrolling is an afterthought. These things assume you can realize you're doomscrolling and not only break out of it, but choose to hypnotize yourself in their app.
This could be a product. I'd pay for an app that fwd'd messages from other apps and gave me a wikipedia feed to scroll on the elevator / other places where the phone is a social respite
Did you write your own summary parser for this? I wrote one in the past and found the wiki markup quirky to deal with. The wiki dumps do provide summaries but they seem to suffer similar issues.
Built something similar for research papers: https://www.producthunt.com/products/soch
How does it actually work? Can you add an "about" page that goes into the algo? Or can you add more info on the readme on github? I'd love to learn more.
I might add a proper explanation at some point, but for now you can view-source the page and read the code, there really isn't that much of it.
Impressive! We're a university lab and have published recommendation algorithms. Never knew that doomscrolling could be this addictive this fast, thnx!
Please consider taking an hour and pushing this to GitHub with a quick readme. Scientists and developers would get it. We have been building a torrent-based alternative to YouTube for a few years. There's not much knowledge out there around operational frontpage algorithms.
I don't get it, it keeps showing me completely random articles.
I've been meaning to do something like this for the books I want to read, and things I want to learn.
This is more like neat-scrolling, I like it
Jeopardy training tool.
Man, this is the greatest thing I have seen on the internet.
Reminds one of Sesame Street - let’s put educational content in this new hypnotic medium!
Is it down? I can't access it right now.
I am so lucky to be basically immune to short form video garbage like TikTok, but I am not immune to Wikipedia's allure.
I easily have over 100 tabs of wikipedia open at any one time, reading about the most random stuff ever. I'm the guy who will unironically look up the food I'm eating on wikipedia while I'm eating it.
No need to try to make it "doomscrollable" when it's already got me by the balls.
Would be cool / horrific to do this, but then have AI influencer bots that teach you the things in influencer-style clickbait videos.
Wait for the API costs or open-source models to be cheap enough and we'll get there. I mean it's a guaranteed HN frontpage (and currently also a guaranteed epic credit card bill).
Great idea, I find this is better than just doomscrolling X or Instagram.
> Xikipedia is loading... (3% of 40MB loaded)
I gave up after about a minute.
This would actually be really fun if built around social features like curators who could quote-repost the posts, popular/trending sorting, and a threaded comment system.
Can you not?
If you load it in Chrome, it loads MUCH faster
why does everyone keep making this exact same thing again and again
See also: https://www.wikitok.io/
And a plug for my own (fiendishly difficult) Wikipedia-based game:
I'm sorry but starting a 40MB download unprompted on a website is a bit rude, people roaming abroad might have half their data used up in 5 minutes or pay loads for this.
This is not a lot compared to how much data the average website will download. I just clicked a wired.com link which is also on the frontpage of HN right now. In a Chromium browser with no adblock, the network tab says 12 MB transferred.
The difference is just that this web page shows you how much data it transfers instead of doing it in the background.
surprisingly... boring?
Nice loading indicator! People just don't know how to make those anymore. I think you mistitled your submission, though?
You can do this manually, obviously. The key is the starting point. The design of thermonuclear weapons is always a good place to begin.
[deleted]
It took forever to load
How cool is that! Feature request: Let the user select another language than English.
All images overflow for me.
[dead]
[dead]
[flagged]
So they took the worst aspect of Wikipedia (Wikipedia), and the worst aspect of "social" media (doom scrolling), and combined them? Brilliant concept. When can we expect the IPO?
I like the concept, but I'm not going to be reading Simple English Wikipedia.
Please only continue if you are an adult? You realize Wikipedia has no age restrictions, right...
Man, Wikipedia is full of trash
human history*