The biggest risk with AI, in the medium term at least, is that it will be used by governments and organizations with power to surveil and manipulate people on a previously impossible scale. Automated systems monitoring everybody, pulling levers to prevent anybody from speaking out or causing trouble.
In the long run, it will be the end of human freedom.
For example, it looks like Xi has been pretty actively pursuing this, based on the news over the last 10 years.
> China has one surveillance camera for every 2 citizens (...) These camera [sic] checks if people are wearing face mask, crossing the road before the green lights for pedestrians are turned on. If caught breaking rules, people lose their social credit points, are charged higher mortgage, extra taxes and slower internet speed. Not only that, public transport for them gets expensive as well, and the list goes on. [1]
It's not like we're immune to this. All the malls I go to lately are packed with facial recognition systems to analyze our behaviour.
I was doing some research on facial recognition for a job where we were considering its use. I came across examples of sentiment analysis being used at Walmart and Target. They have big, conspicuous cameras in every one of their stores now. Most people assume it is for shoplifting mitigation, which it is. But that is not all. They can use it to track individual customers' paths through the store and then use cameras at the checkout to analyze their facial expressions and rank their mood. They use this data to optimize their store layouts.
The other use case was at high-end retail stores. Places like Louis Vuitton, Hermes, etc. They have facial recognition to log high spenders. If you drop 10k at Coach and then go down the street to Valentino, their security system will recognize you and highlight you as a VIP customer. A specialized customer assistant then comes out to give you personal attention, maybe to invite you to the private shopping experience.
I learned about these in 2017 I believe. Most non-technical people who I've told about this think it is some conspiracy theory and they often don't believe it. For some reason people are scared of the government but they remain totally docile or willfully ignorant in the face of corporate use.
When we were evaluating employee entrance systems for a FAANG back in ~2012, we were demo'd systems that could do retinal scanning on streams of people as they walked through the turnstiles, and they could read your eyes even through polarized sunglasses.
I can't recall its name though - but yeah - OpenAI basically turned the capabilities for extreme real-time surveillance up to 11.
“Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.”
…will more likely turn into an indoctrination and compliance machine under authoritarian regimes.
"But you can train it however you want!" is the main counterargument I hear against this (alas, my strawman).
Sure, you could, assuming access to decent compute resources and good training data, but something tells me that this will be in the hands of a very few.
Also, even if decent AI remains affordable for most people, most people will still mindlessly take the default route of corporate- or government-pushed apps.
We haven’t found a way yet to prevent authoritarian regimes from arising and spreading, so it’s unclear how AI will save the world. On the contrary, AI will make it easier for authoritarian regimes to expand and maintain their control.
All regimes asymptotically tend towards authoritarianism in the long run; from their POV it's just easier to do their job that way. AI will greatly accelerate this trend.
AI will also make it easier for freedom fighters and democracy. If everyone on earth held a four-year bachelor's degree, don't you think that would pose a problem for demagogues and emotion-based politicians?
Hopefully it's obvious to all of us that no technology, no matter how neat, will solve the social problems of humanity for us. And this is just the start of how I disagree with the author. But I would say it's not exactly a detriment either.
Would it? If everyone on earth did have a four-year bachelor's degree, but you lived in a social-credit-driven, machine-vision surveillance world, and you're punished every time you step out of line, what are all those educated people going to do? They can't even meet or communicate in private.
The idea is that educated people would not be as likely to support authoritarian regimes. If you take authoritarian power as a given… idk what’s the point of anything?
I do agree that increased surveillance is very scary. But we have to maintain hope that the future holds promise for the less powerful to influence their government - in the 1770s it was via muskets, and in the 2070s who knows what it will be. (As an aside, I personally doubt it’s going to be guns…)
I don’t know about that. Ideologues who provide the justifications for authoritarian regimes tend to be very educated people. In addition, opposing authoritarian tendencies might require the qualities of activists and physical action more than intellectual education.
How about a world in which bullies who spit on the social contracts (and society in general) get a free ride, benefit from it and win? Would you prefer that?
Weren’t they responding specifically to a comment that said AI would make it easier for “freedom fighters and democracy”? I don’t understand how your questions relate to that. Furthermore, you seem to be implying that AI will help deal with the bully problem… how?
You’ve also given an example showing that a world in which educated people can’t communicate in private is a poor place.
I’m giving a counter-example that a world in which there are no records and no supervision of antisocial behaviors could also be a poor place.
The way that AI can deal with bullying is exactly the same way a good teacher could. You build a model of a good and intelligent teacher that deals well with bullying and put it out. Bullying gets solved. Everyone who is not a bully is slightly better off and can enjoy it.
Not to mention members of certain "high-risk groups" getting their own AI police officers to issue warnings and citations. Obviously not based on race, just based on objective risk factors such as having a direct social link to someone with an arrest record...
>>The biggest risk with AI, in the medium term at least, is that it will be used by governments and organizations with power to surveil and manipulate people on a previously impossible scale. Automated systems monitoring everybody, pulling levers to prevent anybody from speaking out or causing trouble. In the long run, it will be the end of human freedom.
THIS is exactly what I see happening. I personally think the "pause" on development is bullshit NationState jockeying for dominance by trying to gAIn AI Dominance - Israel, MI5, NSA, CCP <--- Every Intel Agency on the planet is building/buying/stealing/weaponizing whatever they can.
I wonder what/where Palantir is in this fight?
It feels REALLY Anime with the Sabre Rattling btwn the US and China over Taiwan and TSMC's chip fabs for AI cores.
The hardware is still in its relative infancy - but in 5 years it will be really interesting when we see the performance for 1-hour or 1-day problems cut down to minutes or seconds for massive AI apps.
The examples that you’ve given (obeying traffic laws and wearing masks during a pandemic) seem to be perfectly good social behaviors.
It’s a balancing act between freedom and law. Go one way too far - you get Tiananmen Square and reeducation camps. Go another way too far - you get storming the White House and school shootings.
I hate this sort of thinking. You are making the implicit assumption that everything about our social environment hinges on a single variable: heavy-handed enforcement.
When I put it like this, I hope you can see that it doesn't work like that. There are hundreds of variables you could change that would affect everything. We can prevent Congress storming (it was, btw, Congress, not the White House, that got stormed) without moving even 1 micrometer in the direction of reeducation camps.
I’m not sure. About half a year ago I put up a perfectly good sign on my front yard, kindly asking people to let the grass recover. The grass was getting a bit too much dog urine, from dog owners trespassing onto my property to let their dogs urinate and poop there.
You’d think that kind neighbors could read and pause for a bit. But no. They care about their pets. And they happily let them go, resulting in damage to the landscaping and the bills to clear the contaminated ground and replace the grass.
I’m guessing these are the same people who wouldn’t wear a mask and spread their disease somehow, during the pandemic.
I initially read the comment you are responding to differently, in that I saw the ‘observer’ in the statement as the community rather than the state; on re-reading, I’m not sure that makes sense.
All the same, reading HN politics, it often seems that a spectrum is presented that spans from freedom to state oppression.
There are democracies where the public will not accept the state using power for its own benefit, but is comfortable with the state enforcing the social contract, because there is a stronger sense that this is defined democratically. This may simply be a matter of population size: the state in a nation of 20 million is a different beast from the state of a nation of 350 million.
This brings up another question... How is a social contract defined when you have 20 million people and 50 million AI-enabled bots forming relationships with them and trying to change their minds on said social contract?
Yes. If this is a competence test, which allows you to demonstrate your understanding of social contracts, you should absolutely wait for the green light.
For example, red light cameras. Makes perfect sense, right? Running red lights is bad, as it can cause harm, and harm would be a violation of the social contract.
...except that cities were commonly taking the yellow-light timing down far below recommended levels in order to maximize profits.
AI in a world that demands profits spells the end of freedom.
The example that you’ve given of profiteering from yellow-light timing doesn’t sound like the end of freedom to me.
Particularly, if you’re allowed to contest that yellow light fairly and efficiently, using the records from the same cameras and AI technology.
Ideally, you’d just register a complaint and the thing would give you a video and a clear explanation of what you did wrong and why it was not a good idea. Traffic laws are relatively straightforward, after all.
See, this is what the privilege of someone who has the resources to defend themselves sounds like.
>if you’re allowed to contest that yellow light fairly and efficiently,
That's a pain in the ass already in our current system, and there is no alignment in our politics that will make it better. If you have money and don't care, you'll pay the ticket. If you have money and do care, you'll spend a lot of time filing records requests, resubmitting records requests, and pushing trial dates, because the system won't give you the records you need in time.
The legal system in no way works on ideals. It's not until you're on the wrong side of the law that you realize exactly how fucked up it can be. It is then that you realize how many laws exist that go unenforced until you bring too much attention to yourself.
It still doesn’t sound like the end of freedom. But yes, ideally the system should be set up so that the amount of money you have doesn’t influence how much it sucks to be on the wrong side of the law. It’s a regressive tax otherwise.
How is it a competence test? It doesn't show any competence. It's a compliance test to see if you are willing to behave non-optimally and sacrifice your time to prove that you are compliant.
In fact, a person who runs a red light when it's safe to do so might, on the whole, be paying more attention than the one who only watches the lights. The one who watches the lights may miss a speeding car even though the light was green.
I'd even say that the social networks are a precursor to this. Everyone is constantly observed by everyone else there, and many use a fake persona to try to "fit in", or god forbid say something they will regret later. And those who aren't on them have trouble keeping in touch with the rest. Smh
[1] https://www.firstpost.com/world/big-brother-is-watching-chin....