Hacker News
They Write the Right Stuff (2021) (david-haber.github.io)
76 points by dh00608000 on April 9, 2021 | 41 comments


It seems to be a joke of sorts: It's the original "They Write the Right Stuff"[1] with a find-and-replace job where "software" has been replaced with "AI" - almost everywhere.

[1] https://web.archive.org/web/20050830190246/www.fastcompany.c...


Yes, it's stated in the last paragraphs of the article. I commented before reading to the end too, but removed my comment.

And it's not a joke; it's about where AI is now and where it should be. But you have to read to the end.


> And it's not a joke

Then it seems like the author completely missed that « they write the right stuff » remains a complete pipe dream throughout software development.

> Looking at the result, it indeed seems like AI is going through what software went through 2-3 decades ago.

That would be because AI is a coat of paint on software, and software has not significantly moved from where it was back then. If anything, it's gotten worse on everything the essay covered.


AI - really Machine Learning - is "just" massively parallel software doing linear algebra on high-dimensional matrices that represent huge data sets.
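That framing can be made concrete with a toy sketch (numpy; the network shapes and data here are made up purely for illustration):

```python
import numpy as np

# A tiny "neural network" forward pass really is just linear algebra:
# matrices of learned weights multiplied against a matrix of input data.
rng = np.random.default_rng(0)

X = rng.normal(size=(64, 100))    # 64 samples, 100 features (a "huge data set" in miniature)
W1 = rng.normal(size=(100, 32))   # first layer's weight matrix
W2 = rng.normal(size=(32, 1))     # second layer's weight matrix

hidden = np.maximum(X @ W1, 0)    # matrix multiply followed by a ReLU nonlinearity
output = hidden @ W2              # another matrix multiply

print(output.shape)               # (64, 1): one prediction per sample
```

Everything a GPU adds is doing those matrix multiplies massively in parallel.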

Any programmer today can feed anything into the machine, but as the old saying goes, GIGO.


That every programmer today can build and train an ML model is one of the biggest advancements of ML engineering in the past 10 years.

But as you say, it's GIGO: the difficulty today is knowing what to feed it and what that means for real-life performance. There are no great tools for that yet.
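A numpy-only sketch of the GIGO point (the data is invented for illustration): the identical fitting code yields a useful model or a useless one depending entirely on what it is fed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Clean data: the targets genuinely depend on the features.
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y_clean = X @ true_w + rng.normal(scale=0.1, size=200)

# "Garbage" data: same features, but the labels are shuffled,
# destroying any real relationship between input and output.
y_garbage = rng.permutation(y_clean)

def fit_and_score(X, y):
    """Least-squares fit, then R^2 measured on the same data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ w
    return 1 - resid.var() / y.var()

print(f"clean labels:   R^2 = {fit_and_score(X, y_clean):.2f}")   # near 1.0
print(f"garbage labels: R^2 = {fit_and_score(X, y_garbage):.2f}") # near 0.0
```

The tooling makes both runs equally easy; nothing in the API tells you which one you just did.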


> the difficulty today is knowing what to feed it and what that means for real-life performance.

This has always been the difficulty.

Generalization is the fundamental problem in machine learning. Making the tools easily available has led to exponential growth in applications as more people play with them (many without understanding what they are doing or why), but predictably hasn't led to exponential growth in successful applications.
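A toy illustration of that generalization problem (the data and degrees are made up): a high-degree polynomial fits a small noisy training set almost perfectly, yet typically does far worse on held-out points than a simpler model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ground truth: a sine curve plus a little noise -- unknown to the model.
def sample(n):
    x = rng.uniform(0, 3, size=n)
    return x, np.sin(x) + rng.normal(scale=0.1, size=n)

x_train, y_train = sample(15)   # small training set
x_test, y_test = sample(200)    # held-out data

def mse(deg):
    """Fit a degree-`deg` polynomial to the training set; return (train, test) MSE."""
    coefs = np.polyfit(x_train, y_train, deg)
    err = lambda x, y: float(np.mean((np.polyval(coefs, x) - y) ** 2))
    return err(x_train, y_train), err(x_test, y_test)

for deg in (3, 10):
    train_err, test_err = mse(deg)
    print(f"degree {deg:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

The higher-degree fit drives training error down while the gap to test error grows: exactly the failure mode easy tooling makes it easy to miss.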


Thanks for reading until the end :-)


I spent most of the article thinking how little sense it made, über confused. AI controlling rockets? AI controlling the Space Shuttle? In 2021? Which Space Shuttle are they referring to? Is controlling a rocket really that big a deal? (I make my own rockets and control them with cheap electronics.)


Thanks for reading and making it through - despite the confusion :-)


AI is a tool. Software is an ecosystem in which AI is but one of many tools. Comparing one to the other is silly. As it stands, AI isn’t even the tool, it’s a feel-good word. The actual tool is ML.

Now if ML were compared to, say, SQL, I could understand. Both derive meaningful results from raw data. Even that may hold little value, as in both cases good tools will be let down by poor implementation - ergo bad software, which is what the original article was really about.


Absolutely. In fact, I intentionally used the word AI in the article and was hoping I would get this sort of comment. I fundamentally believe that we can only make progress in many areas of what constitutes AI if we have a concrete focus on the underlying technologies. And there we are looking at ML (among other things). Especially when it comes to higher-level topics such as regulatory policies, AI is all too often used as an intellectual wildcard. The consequences are low-quality discussions and misguided policies.


Great article, both the original and the modified version. What do the data scientists and ML developers here think about this? Is ML today where software was 25 years ago?


Thanks. And that's a good question!


The cover of the magazine that featured the original: https://i.imgur.com/ki9qw3H.jpg

"Perfect* -- In your work there's no margin for error. Here's the stuff to get it right. Every time.

(*Well, damn near: 420,000 lines of code, 1 bug.)"


This is so cool! Thanks for digging that up and sharing. Did you remember this cover, or did you just find it now?


Ha, just pulled it out of the original Fast Company article linked. Didn't know about it, but was curious what sort of issue they would include this kind of article in. Interesting times mid/late 90s. (I was there, but I wasn't reading Fast Company! lol)

Also went back in HN archives to see what commentary there was about the original - as Dang likes to point out, surprisingly little commentary here and there, though there's some discussion from about 10 months ago: https://news.ycombinator.com/item?id=23537530


Thanks for sharing all of this. Super insightful!


This is still sadly applicable to software dev as well, not just ML and statistics.


Thanks for reading and your comment :-) 100% agree. Though especially in the context of mission-critical applications and regulatory constraints, the processes and organizational structures have matured over the years. We are now working on bringing AI to these same levels of maturity as we believe many exciting applications are waiting for us with the potential to have tremendously positive impact on society.


As long as whatever crap software people come up with keeps making money, they will keep doing it that way. The sooner something starts generating profit, the better. History of civilization in a nutshell.


We probably also have many reasons to be positive about the future :-)


I had read that the console screens seen by astronauts got pages and pages of log output dumped on them at every event, to the point of unusability, and that nobody could fix it because the change process was so heavy they would have had to abandon all their other work for months.

So any kind of polish not fixing a potential loss-of-vehicle problem just could not be attempted.


Is this a ... at Tesla? ( I am trying to come up with the right word, not a native speaker, "snag" "jab" ? )


"Jab" was what you were searching for. It means a small punch, or to stab at. ("Jab with a needle.")

"Snag" means to get caught up by something. You get snagged on thorns.


Not at all! I haven't worked there so can't speak to their development process, though what I can gather from their public talks is that the way they build AI is among the most advanced/professional.


Thankfully Autopilot is not in charge of explosive bolts


Are explosive bolts more dangerous than a vehicle traveling at, say, 60 mph?


Explosive bolts are more dangerous when they don't work when they must. That makes them more like the brakes, suspension & steering than the engine & drivetrain.


In X years, we'll all look back and think it's crazy that humans were allowed to drive vehicles traveling at 60mph.


Haha, the reason we use computers is that they work more reliably than humans. By using nondeterministic processes, commonly called AI, we effectively make them human. This can yield interesting results, but you don't want a monkey controlling your staged rocket.


Very cool! It’s interesting to think how we will look back at this time X years in the future. Is AI/ML going through the process of becoming a serious engineering discipline in much the same way software did before?


Thanks for reading and for your comment. I think we can see many parallels, but much of the story is still to be written. The MLOps space is developing very quickly, and we will likely soon see a lot more structure around the tools and approaches available today.


I enjoyed this little game you played.

I wonder if it might be easier to read if the mad-libbed article was left in normal styling and the explanatory paragraphs at the end were italicized instead?


Yes, I considered that!


The idea behind the article is incorrect. The entire ecosystem of languages, libraries, and operating systems today is a far greater accomplishment. And agile is everywhere.


Oh no!! I really wanted to know how they develop their AI so reliably :). This is a massive issue today. Hope we develop tools and processes to get us there soon.


Thanks for reading and your comment! :-)


"Not gonna lie. They had me in the first half."

Reading the doc I kept, repeatedly, thinking, "yeah, but that's not AI."


Great post, David. Really insightful!


ha, this is good. Nice thought exercise


Glad you enjoyed it :-)



