It's kind of funny how these contemporary webpages crash under the load of HN, but the few times we link to sites from 1996 they somehow manage to remain online. I remember the other day people were linking to the pages with the oldest images they could find, and I'm pretty sure none of those pages suffered from the hug of death.
Aren't the old 1996 websites behind CDNs or beefy webservers nowadays? I'm sure a modern nginx instance can easily deal with tens of thousands of plain HTTP requests per second.
I've seen plenty of old websites crumbling when HN linked to them.
You're getting tricked by selection bias. If (made-up numbers) 1% of links crash under HN load, and 1% of links are to websites from before 2005, then only 1% of 1% of links (0.01%) would be 15+ year old websites crashing. And given that I've seen several, I think the real number is a lot higher.
You missed the joke. It's trying to be exclusive, so it gives a 500 error to all visitors the first time they come. Refresh, and you'll... almost get in.
There was a website I used to love sending people to when I was in middle school (crashme dot com, just a parked domain now).
It had this bomb timer-like countdown clock and a big warning that it would destroy your computer if it reached zero. Totally pointless, but it was always fun back before I really understood too much about this brave new world of the internet.
That ridiculous website prompted me to learn web development and ultimately resulted in the company I've been running for the past decade. (Which is loosely based on the throwaway project I did to learn what I was doing, before moving on to the 'real' site I'd planned - some 'clever' Million Dollar Homepage knockoff with a twist that certainly would have failed!)
Funny! I remember getting a taste of a few really good ones, in my case some of the PNW high-graphics BBSes in '95 or so, and then not being able to get through again for days...a frustrating experience. Same with download quotas for new users. :-)
It's buggy as shit. If you lose the websocket for any reason it falls back to a polling mode which appears to be completely broken. I know it's just for fun, but as someone who has written a very robust distributed FIFO queue before, this implementation ... bothers me.
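For what it's worth, a websocket client with a polling fallback doesn't have to be much code. Here's a minimal TypeScript sketch of the idea; the /ws socket and /position endpoint and the {position: number} payload are all made up for illustration, not the site's actual API:

    // Sketch: watch a queue position over a WebSocket, falling back to
    // plain HTTP polling if the socket drops. Hypothetical endpoints.
    type PositionHandler = (position: number) => void;

    function watchQueuePosition(onUpdate: PositionHandler): void {
      let pollTimer: ReturnType<typeof setInterval> | undefined;

      const startPolling = () => {
        if (pollTimer !== undefined) return; // don't stack intervals
        pollTimer = setInterval(async () => {
          try {
            const res = await fetch("/position");
            if (!res.ok) return; // transient errors: keep polling
            const body = (await res.json()) as { position: number };
            onUpdate(body.position);
          } catch {
            // network hiccup: wait for the next tick
          }
        }, 5000);
      };

      const ws = new WebSocket(`wss://${location.host}/ws`);
      ws.onmessage = (ev) => {
        const body = JSON.parse(ev.data) as { position: number };
        onUpdate(body.position);
      };
      // If the socket closes for any reason, degrade to polling.
      ws.onclose = startPolling;
      ws.onerror = () => ws.close();
    }

    // Usage: log every position update.
    watchQueuePosition((pos) => console.log(`now at position ${pos}`));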
In Firefox, trying to 'view source' yields the source for a 500 internal server error.
Does that mean that Firefox makes a different request when I click "view source"? As in, from the server logs, could someone in principle discover whether a request was created by clicking "view source"?
Also, like everybody else, I'm here to report that the counter does not decrease monotonically :(
Yes, browsers make a new request in order to view the source, because they don’t retain the original page source after parsing it (the corner case of “view source” is the only thing that could conceivably have any use for it, so it’s not worth wasting memory on).
One alternative that I’ve found useful on occasion is to Select All (Ctrl+A / ⌘A), then in the page context menu, View Selection Source. That shows you the current DOM, serialised to HTML, which is sometimes easier to look at than the dev tools.
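For reference, that serialised view is roughly what you get from the live document's outerHTML. A quick console sketch (just an illustration of what "current DOM" means here, nothing specific to this site):

    // Serialise the current DOM back to HTML; this reflects what the
    // page looks like *now*, not the bytes the server originally sent.
    const liveHtml: string = document.documentElement.outerHTML;
    console.log(liveHtml.slice(0, 500)); // peek at the first 500 chars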
But only the first time. Reloading the page or clicking "View source" again made it appear for me. Of course, it's possible that was just random chance like others experienced: https://news.ycombinator.com/item?id=26070729
Quick feedback: I had to press "play sound" twice before it muted, because of autoplay.
This is a fun experiment. Can't wait to get in.
Edit: Okay, the number keeps jumping around... clearly something fishy going on here. Maybe it's only sessions that are currently connected that are included? ;)
I absentmindedly middle-clicked that link, came back to it later, and thought it was the actual exclusive website. If they actually had a Doom port on the exclusive website, it would make Satan proud.
This is one of those frustrating things where (assuming the queue is stable) you'd expect to wait much longer for the last 10% than the first 10%, as people renege.
I wonder what the queueing time distribution is like, parametrised by position in the queue.
I don’t remember what was on it - maybe some net art? But it was a rare “private” website requiring an invite to get in - this in the early era when websites were never private.
Although I'm too far back in line to comment on the link, the design agency that created this site, Day Job, has an excellent retro-inspired home page - https://dayjob.work/
Is it using any kind of IP check or something like that to recognize your place in the queue? I was around 100, I pasted the link into a WhatsApp conversation to share it (I guess it made a request to get a preview), and when I came back to the tab I was last in the queue.
It stores a UUID-looking string in localStorage and uses that to retrieve your position in the queue. At least that's what I gathered from looking at requests and trying to make sense of obfuscated JS.
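If that's right, the client side could be as simple as the following TypeScript sketch; the "queue-ticket" key name and the /position?id= endpoint are my guesses from poking around, not names taken from the real site:

    // Sketch: persist a random ticket in localStorage and send it with
    // each position lookup, so the slot follows the browser profile
    // rather than the IP. Key name and endpoint are hypothetical.
    function getQueueTicket(): string {
      const KEY = "queue-ticket";
      let ticket = localStorage.getItem(KEY);
      if (ticket === null) {
        ticket = crypto.randomUUID(); // the UUID-looking string observed
        localStorage.setItem(KEY, ticket);
      }
      return ticket;
    }

    async function fetchPosition(): Promise<number> {
      const res = await fetch(`/position?id=${getQueueTicket()}`);
      const body = (await res.json()) as { position: number };
      return body.position;
    }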
It tells you that at the top of the page. Well, ok, it tells you the total number of people waiting. Subtract your queue position from that number and you have the number of people behind you.