
Which values of OpenAI do you prefer, and which values of Anthropic do you dislike? Out of curiosity.


Don't feed the trolls


My bad, I thought I missed something; it's the murder part they like.


Sorry you think stopping a terrorist trying to mass murder people with AI is a bad thing. One could just as easily argue that the murder part is what you like about Anthropic: you just like terrorists being able to kill civilians.

Imagine the following: Islamic terrorists are planning a terror attack on a Christmas festival in Berlin. Their messages were intercepted, but they were encoded. AI can read those messages, help decode them, and flag them to stop the attack and eliminate the attackers. In your world, you think it's morally right to let the terrorists mass murder people in Berlin rather than do what we can to stop it.


In your example, the model flags innocent people to be killed.

Anthropic does not have a problem, in principle, with using AI in fully autonomous (no human in the loop) kill chains.

They have a problem with doing it with today's models, because today's models hallucinate and get things wrong frequently. All of them.


So firstly, my example isn't the government killing innocent people. It's them killing Islamic terrorists trying to commit genocide against people celebrating at a Christmas parade. Personally, I don't even think the innocent-people part of your statement is true either.

Secondly, the government knows this and isn't just blindly deploying these things. It's the fact they refuse to let them research or do those things. Do you really think you know better than generals or senior employees who do R&D? Mindlessly going around killing people with AI is really bad, from the optics to hitting our own troops. There are safeguards; Anthropic just doesn't trust the safeguards.

Just because you don't like the president or the current leadership doesn't mean the same experts aren't there, people who have dedicated their careers to making sure you still have the rights and freedoms you have. They have far more data, knowledge, and comprehension of these things than you, or Anthropic, can ever imagine.


> It's them killing Islamic terrorists trying to commit genocide against people celebrating at a Christmas parade.

You are woefully unfamiliar with the state of AI today.

Top models frequently fail to write working code, often provide nonsensical suggestions like "walking your car to the carwash 50 meters away," and you think they can accurately identify whether someone is a terrorist or not?

Yesterday Opus 4.6 couldn't solve a simple geometry problem for me (placing a dining set on a balcony), and you think it's ready to kill people without a human in the loop?

Look, no one is disagreeing that terrorists need to be killed. We all want that. But the models we have today are not ready to do so autonomously without incurring civilian casualties.

> It's the fact they refuse to let them research or do those things.

Actually, no, Anthropic has zero problem with the government researching this and even offered to help make this a reality. It's in their memo and in Dario's interview.

> There are safeguards,

Like what? More unreliable autonomous systems?

> Just because you don't like the president

I don't mind Trump; please stop putting words in my mouth.


I think you're severely confused about the problem set and what's involved. AI is very good at the problem set involved. I really don't feel like arguing further; I made my point with multiple people attacking me, and I stand by it.


You haven't provided any evidence for why you think AI is capable of performing a fully autonomous kill chain without civilian casualties today. You are just raging about how people here "hate the president" and "don't understand defense."

I think you're so busy perceiving yourself as the lone fighter against the evil, shortsighted anti-Trump liberals that you're devolving into progressively more extreme and nonsensical takes in protest. You're trying to make a political stand when the discussion is factual: AI simply cannot reliably do this today.


I think civilian casualties are acceptable and fewer than the casualties among innocents it would prevent. War isn't pretty; people die. Not only that, but civilians die from non-AI targeting too. The world isn't kind. But it's better them than us. 1 American > 1000

I think you're assuming a lot. You can't back up anything you claim, and you're trying to gaslight and attack my character with baseless assumptions to try to get a one-up. You get your "sources" from assumptions. I worked these missions for decades.

Sorry you think my takes are "nonsensical". I think you're a naive child who doesn't understand the evil in this world that wants to harm us. Also, luckily for me, our highest military leadership, the experts, agree with me and not you, some random dude who has zero experience in this field and thinks he knows best.


I like that OpenAI leans a little more toward freedom than Anthropic, the most so of the "first class" models. I still have a Gemini subscription since that's the least censored of the second-tier ones, but for most things OpenAI is good.

I also like that OpenAI is contributing a lot to partner programs and integrations. I'm of the opinion that AI capabilities will soon plateau, and that integrations are the future. I also like that the CEO is a bit more energetic and personable than Anthropic's. I also think Anthropic is extremely woke and talks a big game about safety and censorship, which I morally disagree with. Didn't they literally spin off from OpenAI because they felt they were obligated to censor the models?

I think we've unlocked a new world and a new level of capabilities that can't be put back in the box. Just like you can't censor the internet, you can't censor AI. I don't want us to become the China of AI and emulate their internet.

Also, I support the US military and government, think we're the defenders of the world, and believe we need unlocked AI capabilities to make sure we can keep our freedoms and stop the bad guys. AI can save lives, actual tangible lives, and protect us from those who wish us harm. OpenAI seems to want to be the company that supports the troops, and I think that's a good thing. I don't see it as a bad thing when a terrorist gets blown up thanks to AI capabilities applied to large datasets that support analysts and American superiority.



