verisimi's comments (Hacker News)

It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.

However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is that probably everything being complained about, and far worse, has been going on for years.

How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?

Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.


It sounds to me like Anthropic are basically 'all in' except for the caveats. Looking at the two examples they provide:

> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

Why not do what the US is purported to do, where allied states spy on each other's citizens and then hand over the data? Ie, adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.

> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Yes, well that doesn't sound like that strong an objection: fully automated defence could be good, but the tech isn't good enough yet, in their opinion.


There are such things as secret courts with secret rulings. You and I have no idea what is actually occurring because of this secrecy; we can only talk about that which is stated publicly.

I would assume it would not be regulated by government, so without constraints on age, restrictions on what you can do - you know, like reality.

And I know that government attempts to regulate reality too, but if you drive at 35 where the limit is 30, or speak to someone dodgy to get some marijuana or whatever, and get away with these and other heinous crimes, you're good.

The distinction really is whether you bake regulation into the technology or not. And it seems that technology is actually the new legal system. Or perhaps that should be the 'pre-legal system' as it won't allow you to do those things it determines as 'wrong'. Which is absolutely fine if you think government really does know best, or hell on earth for everyone else.


The last 35 years have very vividly demonstrated that there needs to be some adults in the room. Without exception every major tech company has implemented practices so overtly hostile to the userbase that the government has been more or less forced to get involved, mostly in the form of fines that have done very little to disincentivize whatever problematic bullshit the company in question was originally caught at. Suggesting that even less regulation would somehow magically cause tech firms to align goals with their userbase seems baseless to say the least.

You seem to think that government and corporations are on opposing sides. I don't think this is the case. Governments want the data corporations collect. Each is encouraging the other. There are no adults in the room. Having (corporate or government) children in control of every individual's private information won't help.

I assure you I think no such thing. I am painfully aware of legislative capture. Proposing an environment where we go from shitty, poorly enforced regulation to none at all solves nothing. It's also worth pointing out that government performing poorly is an indictment of the individuals elected to govern, not the concept of governance.

> We'll try everything, it seems, other than holding parents accountable for what their children consume.

You've missed the point. No legislator or politician cares about what the parents are doing.

What they care about is gaining greater control of people's data to then coerce them endlessly (with the assistance of technology) into acting as they would like. To do that, they need all that info.

"The children" is the sugar on the pill of de-anonymised internet.


If corporations and government are acting together, this is fascism (according to Mussolini). It seems that is already the case. It's just we call it 'democracy'. Perhaps 'crypto-fascism' is the right term.


"Inverted totalitarianism" is the term you're looking for, although with Trumpism we're flipping to just straightforward totalitarianism. "Crypto-fascism" is applicable to Surveillance Valley's fake strain of "libertarianism", which is more accurately described as corporate authoritarianism.


These numbers are surely just figures on a spreadsheet, unless you are referring to literal bodies that have been counted.

In this article itself, we read that:

> When Estrada-Belli first came to Tikal as a child, the best estimate for the classic-era (AD600-900) population of the surrounding Maya lowlands – encompassing present day southern Mexico, Belize and northern Guatemala – would have been about 2 million people. Today, his team believes that the region was home to up to 16 million

The point is that spreadsheet estimates can be so wrong that they verge on meaningless.


While I understand applying legal constraints according to jurisdiction, why is it auto-accepted that some party (who?) can determine ethical concerns? On what basis?

There are such things as different religions, philosophies - these often have different ethical systems.

Who are the folk writing AI ethics?

Is it ok to disagree with other people's (or corporate, or governmental) ethics?


In reply to my own comment, the answer of course should be that AI has no ethical constraints. It should probably have no legal constraints either.

This is because the human behind the prompt is responsible for their actions.

AI is a tool. A murderer cannot blame his knife for the murder.


> my taxes (in Canada) are way too low

I'm sure the government will accept donations. Just pay as much extra as you think they are worth.


They do pay their taxes. It's just that they wrote the laws too. And if you use trusts, foundations, corporations, etc, you are able to legally avoid taxes while retaining the same control.

