Let's assume that AI is human level (it's not, but humour me). Let's assume that businesses can buy an off-the-shelf black box of AI that will run whatever business IT process they want to run. Customer data goes in, actions for humans to perform come out. It's basically an automated call centre full of humans.
Now something goes wrong. A customer isn't getting what they ordered. What do you do about it?
Do you just drop the customer? That's lost revenue, or an easy win for a competitor, so you don't want to do that. Maybe you tell the AI to fix the customer's problem in a particular way and hope it does. If it does, all it takes is having a human available to handle it (a customer support dept, even if tier 2); but if it doesn't, you still need all the software you needed before to fix it.
I've built line-of-business software before, and I've over-automated things before. Being able to inspect the state of systems, and being able to manually change things in software, is incredibly valuable. This is why spreadsheets run businesses.
My bet is that companies will still mostly buy the software they were buying before, but that some of the interactions with that software that humans would have done before, AI tools will action instead. All the same CMSs, CRMs, ERPs, and other TLAs will exist, they'll just be augmented with AI. They'll probably do some more interesting things with that AI, but they'll still exist in fundamentally the same way. Companies are not going to entrust their business to a black box that sometimes says no.
> Being able to inspect the state of systems, and being able to manually change things in software, is incredibly valuable.
I was hired as CTO at a start-up that had a shitty solution: bad performance and bad practices, such as storing passwords in plaintext, so customer service would just copy a password and manually log in to the customer's account.
They had a custom back-end where they were basically handcuffed to a couple of flows. They couldn't see the full state of the system. Not even a fraction of it. Everyone hated it to the point where most were using whiteboards to work.
My first move was to normalize the database and bring in an off-the-shelf back-end that gave them access to the full state of the system (with some aspects, of course, conveniently configured out).
Then, based on their own whiteboard flows and how they were using the new all-you-can-eat back-end, we implemented a number of flows that mapped to how they actually worked. This was on top of being able to manually pull any switch. From what I could tell, they loved it, both the freedom and the ease.
It was truly satisfying to see those empty whiteboards.
I completely agree. I do not believe that AI is a disruptive technology like computers and the internet were. It will at best be a +25% efficiency gain for business processes - which is awesome and I am all for it. But it is not a game changer that will fundamentally change how things are done.
As with all disruptive technologies: Uber was 10x better than taxis while it was being pumped full of VC cash; now it's better enough that it's my go-to choice, but not by a 2x margin. AirBnB was 10x better than hotels while it was being pumped full of cash, before anyone realised that people might trash places and before chancers filled the platform with crap properties; now it's a wash, just another option alongside hotels. Bitcoin, well, lol.
AI looks 10x better than the alternative, while it's being pumped full of VC cash and while we haven't realised all the downsides (because so far we've only seen idealised marketing promises). It'll work out to be an expensive and not much better option.
Any prediction of the future is either right or wrong. Yes, some can be really wrong (we'll never need more than 640k of memory) but ultimately a prediction is just a prediction.
Personally, I think AI is in the hype cycle right now. It's new (in an industry that sees very few genuinely new things). I can see it being a useful tool in our belt in the coming years.
But equally, I don't think it'll replace (most of) us. I think we'll use it to be more productive.
Your target market will also matter. We're B2B, and companies don't buy "software" from us, they buy solutions to problems. And that very much includes the ability to talk to a human and have them solve the problem. (The human might use AI to solve the problem)
The point is, this is a large, complex future space, and I believe the statement above will age well. AI will move the needle, but at the moment its future capabilities and effects are largely speculation.
Or maybe I'm just jaded, having seen this cycle a lot, where most fall short, some disappear, and a few stick around.
How so? I think he's got a point. VCs pump things with cash, force them to work well on the surface, and eventually they become things that aren't so great. Uber pays shit to the drivers and takes a large chunk for itself. Airbnb is contributing to the housing crisis. What's the problem going to be with AI products 10 or 15 years from now? I think that's a valid question if you look at the history of these types of ventures.
AI may take over the world, but we probably need 2 orders of magnitude of cost reduction, plus a significant improvement in intelligence, plus significant global legislative support, for AI to meet the promises being made right now. To be clear, I do think it'll be transformative, I just think it's "internet" level transformative over a career-long time horizon.
This is a decision that companies can make, and some do (although I'd somewhat contest the Google example; I've had good experiences before working here, and I'm biased of course).
I think the issue will come when companies start dropping customers that don't fit certain templates. They'll be accused of using racist or sexist AI, or something like that, and it'll be hard to defend against that because AI is a black box. With humans you can say "we have a policy against this", or "we'll retrain our employees", but that won't be possible for AI in the same way. Yes you can "retrain" an AI, but that retraining looks a lot less like a corporate education program, and a lot more like an IT change request, and surprise surprise, we're back to buying software systems.
I don’t see how that would be relevant from the company’s perspective.
Companies get accused of this and much worse all the time, and it almost never has any real effect. Happy to be corrected.
Google drops users not customers. It's an important distinction.
Ok, they drop customers too, and even as a customer it's hard to talk to a human. And that in itself is a huge barrier to entry for some possible customers.
Personally we use Google for ads. We pay them money, but don't get to talk to a human. They'd drop us if we behave in a way the algorithm doesn't like. They get away with this because there are very few alternatives.
But we don't use GCP. We use AWS, and talk to a human a couple times a year. And issues get fixed.
So yes, it works for Google, but only really in the space where they have a somewhat-monopoly.
In other spaces like Gmail or Google+, they happily drop users all the time because those are users, not customers.
FWIW, I used to use GCP and Workspace, used to talk to someone once or twice a year, and had no problems with the support. I've also seen AWS drop the ball on security disclosures, so I think experiences with all of them can be pretty mixed.
I see this argument all the time, and it always misses one key thing.
The comparison made is 'Can AI take over X and do as good a job as someone who's good at X?' But I think the reality is that these products aren't coming after the people who are doing 'great work'.
There's tons of mediocre software being written and in many cases mediocre is kind of good enough. I think it's going to empower people who _aren't_ thoughtful and they'll be more productive than ever.
So my prediction is that companies will start trusting this black box, because it's cheap and fast. I also think a quality crisis in many fields is inevitable. We're already seeing this a bunch in very obvious ways, but it's far scarier when the models are better and the errors are subtle.
> Let's assume that AI is human level … Now something goes wrong. A customer isn't getting what they ordered. What do you do about it?
Seems like you’re also assuming some AI-inherent problem which is left unexplained.
If it’s a human level AI you tell it to solve the issue with the company’s philosophy in mind and wonder why off the shelf human level AI doesn’t have that in its “system prompt” or whatever term it will be.
This whole problem is blown out of something that every AI-phobe seems to assume but keeps secret, which is a common trope at this point.
> If it’s a human level AI you tell it to solve the issue with the company’s philosophy in mind and wonder why off the shelf human level AI doesn’t have that in its “system prompt” or whatever term it will be.
That IF is doing a lot of work here.
> This whole problem is blown out of something that every AI-phobe seems to assume but keeps secret, which is a common trope at this point.
It happens again and again CURRENTLY that AIs behave exactly like djinns: they do what you tell them, not always what you mean. Like "Geologists Recommend Eating At Least One Small Rock Per Day". They give perfectly coherent-sounding advice, but sometimes it's hilariously wrong and the AI doesn't know it's wrong. But people trust it anyway.
Yes, but that IF was introduced by this subthread’s OP themselves. Statements like that make no sense if it’s possible and not applicable otherwise. And everyone seems to be ok with that on HN, regularly. Notice how inconvenient comments get simply ignored.
What current “AI” does is irrelevant in this context, but I think you’re overdramatizing on this tangent a little. Almost everyone knows that 2024 AI still hallucinates and you can’t take its output at face value. Those who don’t know may be still researching or just generally gullible/nonanalytic. There’s no problem here, taking our regular baseline into account. People believe yellow press and influencers everyday.
This is my point, you can't. You can't hold an AI accountable in the way you can with a human, whether that's accountable to a contract or accountable to the law, and the only ways in which you can hold an AI accountable involve escalating to a human, at which point we're back to where we are now without AI and requiring essentially the same software and business processes.
Also, AI (as promised) will deliver solutions so customized that a human will need a lot of context even to understand the problems in them. This in itself keeps it dependent on humans. You also have to factor in knowledge redundancy and the non-availability of your personnel. So you will always have to account for more people than AI promises.
The world of atoms is being disrupted as well. Spurred on by the anxiety of AI replacing my codemonkeying skills, I recently bought a 3D printer, and after spending a couple of weeks learning CAD, it's insane how much of the act of physically making things has turned into just modelling stuff on the computer and hitting print.
I've designed 'complex' (for me haha) mechanisms with levers, gears and springs, the manufacturing of which would've required a machine shop, multiple trips to the hardware store, tons of hands-on experience and lots of careful measuring and swearing, if not for the miracle of CAD software and my printer.
I just modelled the thing, hit print, and after 2-3 iterations of fixing dumb mistakes, I had a product that worked, and fit to tighter tolerances than what I could've realistically made by hand.
Of course not everything can be made out of cheap thermoplastic, but the jump to higher grade materials is available even today for the everyman.
Yea, but business needs are constantly evolving, most of the time one step ahead of an accurate understanding of what needs to be done. We get paid, in theory, to build that, but the main expense is not actually building; it's finding out what needs to be done and then maintaining it. No AI can address that, imho.
We often don't solve technical problems or write algorithms. We're here to understand a client who doesn't know what he wants. Good luck to a general-purpose AI solving this [1] "If Architects had to work like Programmers".
I think "could" is substantially different from "practically will in a time-frame that matters" and the former is being used as a shortcut for the latter.
Software is eating the world, but according to [0] there's only 27 million of us in the whole world.
We need all the tools we can get to amplify our efforts.
Further, I find that AI makes it much easier to dabble outside my wheelhouse and to interface with other technical people much quicker and better. Where I used to have to read endlessly to understand the context of unknown-to-me technology, now I can get that in seconds. Previously this luxury was only available to large corporates who can hire a wide variety of experts.
I think we'll see dramatically more 1-10 person software businesses in future.
There's no shortage of technical info on the internet. If you want to learn about networks or printing or Active Directory or databases or Windows APIs or web servers or security or whatever you like, it's all there.
Yes, the AI lets you ask questions and get answers. It's a more interactive approach, and it helps to clarify thinking, but having a solid foundation will always lead to better understanding.
As someone who has always worked in a small company, and as someone who has by necessity had to learn many different parts of IT, up and down the IT stack, it's always been possible.
Equally, I work with businesses, the vast majority of whom are fewer than 10 people - and those small businesses produce code every day which is used all over the place. Turns out, if you know a domain really well, can code, and don't have 3 layers of management, good things can get done really quickly.
Building a broad base certainly has always been possible, but it just got a lot quicker, is what I'm saying. And that speed leads to tighter feedback loops too.
> There's no shortage of technical info on the internet
Right, but last time I checked it was behind pages of SEO bullshit. I suspect many of us still have inertia from our learning years, so knowing where to get information feels natural, while in reality it’s knowledge acquired over years or decades. Imagine forgetting all you know and trying to figure out the learning meta of that Medium article. Multiply it by what Google “search” is today. You’re basically screwed.
Having built a relatively standard, not particularly complex Stripe integration with 3DSecure, there is no way I'd trust code written by an AI for it. Hell, I set up code review sessions just to walk through the code and discuss. Transactionality, idempotency, what happens if a customer drops out of the flow at the 3DSecure prompt, how do you re-engage them, what happens if they want to continue, etc. It's hard both in terms of having a detailed understanding of the technology components, and in terms of understanding the UX requirements, customer support requirements, etc.
Taking a payment with Stripe is a 5 minute job. Taking thousands of payments a day, without ever double charging someone, and without ever losing an order, while optimising the UX and allowing a customer support team to provide good service, that's a huge task. (Also no criticism of Stripe, having looked at Braintree circa 2019 I expect it would be significantly harder with them).
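The "without ever double charging someone" part mostly comes down to idempotency: Stripe's API supports an idempotency key on requests so a retried call replays the original result. A minimal generic sketch of that pattern (the `PaymentProcessor` class and `charge_backend` callback here are illustrative stand-ins, not Stripe's SDK):

```python
# Sketch of the idempotency pattern: the processor records the result for
# each idempotency key, so a retried request (e.g. after a customer drops
# out of the 3DSecure flow and comes back) returns the original outcome
# instead of charging the customer a second time.
import uuid


class PaymentProcessor:
    def __init__(self, charge_backend):
        self._charge_backend = charge_backend  # stand-in for the real payment API call
        self._results = {}  # idempotency_key -> recorded result

    def charge(self, idempotency_key, amount_cents, currency="gbp"):
        # Replay the stored result if we've already seen this key.
        if idempotency_key in self._results:
            return self._results[idempotency_key]
        result = self._charge_backend(amount_cents, currency)
        self._results[idempotency_key] = result
        return result


if __name__ == "__main__":
    calls = []

    def fake_backend(amount, currency):
        calls.append(amount)
        return {"id": f"ch_{len(calls)}", "amount": amount, "status": "succeeded"}

    processor = PaymentProcessor(fake_backend)
    key = str(uuid.uuid4())  # one key per logical payment attempt
    first = processor.charge(key, 2500)
    retry = processor.charge(key, 2500)  # customer retries after a dropped flow
    assert first == retry   # same result replayed
    assert len(calls) == 1  # backend charged exactly once
```

In production the key/result store has to be durable and shared (a database, not a dict), and you still need to decide what a "result" is when the first attempt is in flight when the retry arrives, which is exactly the sort of edge case that makes this a huge task rather than a 5 minute one.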
One change I expect to see over the next few years in API documentation is a series of complicated examples. These aren't for people to review, but for coding assistants to use, to inform RAG or something similar.
Did we figure out GenAI that's beyond a mashup of Conway's Game of Life tutorials and Stack Overflow answers, which humans created hoping to be credited?
> We are regularly building and launching rockets into space. We are fighting climate change with science and industry
99% of the people who were building SaaS probably aren't skilled enough to launch rockets or fight climate change with science.
Now if you're talking about kids still in school, who have yet to decide on a major, then yes, perhaps most of them will forgo learning how to build software and pursue hard-tech fields instead.
My experience tells me that the world of atoms requires a perfectly level playing field in order to support reliable businesses. The world of atoms is inherently stochastic with many unexpected events. We cannot plan for the future if the government intervenes in the markets. It's hard enough to plan for 'natural' events. We shouldn't also have to worry about the thousands of ways in which the government could destroy the incentives and dynamics that are vital to your business. In the world of atoms, failure incurs high costs. Whenever you opt for value creation as a business, you put yourself at the mercy of government policies because the government, through monetary policy, can distort markets in extremely harmful, unpredictable ways.
This is true for even regular software APIs. By all accounts, we could have a beautifully interwoven quilt of standardized APIs that connect any system to any other system with ease.
But business realities mean that just cannot exist; it doesn't survive first contact with business logic, let alone all the auth* issues. An atomic level of interoperability will hit this issue immediately.
If they are seeing AI as a sort of fuzzy connector between components, it is a non-starter from my perspective. Even if you had squishy brains doing the work, it wouldn't pan out.
IMO most of the software play is going to be an infra play, where the bulk of the value is provided/captured by big tech.
There will definitely be some opportunities for mid-tier software (not necessarily SaaS) companies that end up leveraging either some niche distribution or some neat packaging.
Another ray of hope: distribution takes time, and there is a long adoption cycle for any new tech. Currently we are in "the chasm", and some of us (us being software builders) will end up benefitting from the long adoption curve. But I have my doubts whether it can be a VC-backed thing, as it will be a slow cycle.
>Humans are good at fixing problems mostly because we are good at making them.
Are we? Just look at climate change.
>We are fighting climate change with science and industry.
We are also fighting climate change with pseudoscience and industry money.
We aren't good at fixing things; we are good at surviving even when we don't fix things. Maybe lots of people die, maybe we repeat the same mistakes over and over again, but at least some will survive to start all over again.
Every high culture went downhill, and a new one emerged from the rubble of the previous one.
But that's not fixing, that's surviving.