Hacker News | ruffrey's comments

I've been toying with the following attempt to explain all this:

- Information bubbles (this is the top issue, and it's incredibly persuasive)

- Geographic location and social environment

- Lack of time to deeply evaluate truth vs noise and consider multiple sides of an issue

- Conviction of values - how strongly a person believes their values are tied to a political view (this leads to subtly drawing emotional conclusions and implicitly trusting a political party)

- Belief that due to one's own intelligence, one is not subject to propaganda (a clearly false belief that many smart people fall into)

Deep emotional awareness is not as strongly related to intelligence as people think.


Two of the top AI companies flouted ethics with regard to training data. In OpenAI's case, the whistleblower probably got whacked for exposing it.

Can anyone make a compelling argument that any of these AI companies have the public's best interest in mind (alignment/superalignment)?


Thanks, Google. A bit of feedback - integration with `gcloud` CLI auth would have been appreciated.


I have a theory that this happens because, for individual contributors, the effort to buy SaaS software in the era of "vendor risk assessment" is a nightmare. So you end up with grassroots avoidance of that process, at all costs, inside the company.


This is what I was thinking too. Some places make it insanely difficult to purchase anything.


As a solo founder I have experienced this on a massive scale over nearly 15 years. It's really strange how happy people are with unethical behavior, yet on my end it just doesn't feel right to cut off people's systems. After multiple attempts to contact them, we will often disable their accounts. It is against the social contract. It is stealing. In many cases a company may have 15+ free trial accounts, and the company itself absolutely dwarfs our 3-person company. The cost is beans for them. But they just don't care.


Let them accumulate accounts and shut them all down on a Friday at 20:00. Have a plan ready to block rapid creation of new trials, and watch them burn…


Same. Large companies keep freeloading and asking for support. They buy a single personal license and share it among employees. And it's not small shops from poor countries, where you could understand it; it's (often German) enterprises…


As exciting as the last couple years have been in the AI space, I totally agree.

There was an advertisement on Twitter a few years ago for Google Home. It was a video where a parent was putting their child to bed, and they said, "Ok Google, read Goodnight Moon."

It felt like a window into a viscerally dystopian future where we outsource human interaction to an AI.


As I recall, many of his early stories involved "U.S. Robots and Mechanical Men," a huge conglomerate that owned much of the market for AI (which Asimov called "robots"; it included "Multivac" and other interfaces besides humanoid robots).


"scalable"


If you look at the table and do the math to convert the obfuscated units, a 3/4 ounce serving of “whole grains” may have 5 g of added sugar, which is 5 g out of 21.26 g, or 23.5% added sugar.

This seems like an obvious problem to me, despite some progress with adding nuts and salmon.
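
A quick sanity check of the math above (a minimal sketch; the 28.35 g-per-ounce conversion and the 5 g added-sugar figure are taken from the comment, not read off a label):

    # Back-of-the-envelope check of the added-sugar percentage quoted above.
    OUNCE_IN_GRAMS = 28.35   # standard conversion factor

    serving_oz = 0.75        # 3/4 ounce serving of "whole grains"
    added_sugar_g = 5.0      # grams of added sugar per serving (from the table)

    serving_g = serving_oz * OUNCE_IN_GRAMS              # ~21.26 g
    added_sugar_pct = added_sugar_g / serving_g * 100    # ~23.5%

    print(f"{serving_g:.2f} g serving -> {added_sugar_pct:.1f}% added sugar")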


> This means it generates audio over 40-times faster than real time.

Astounding

