I've been toying with the following attempt to explain all this:
- Information bubbles (this is the top issue, and it's incredibly persuasive)
- Geographic location and social environment
- Lack of time to deeply evaluate truth vs noise and consider multiple sides of an issue
- Conviction of values - how strongly a person believes their values are tied to a political view (this leads to subtly drawing emotional conclusions and implicitly trusting a political party)
- Belief that, because of one's own intelligence, one is not subject to propaganda (a clearly false belief that many smart people fall into)
Deep emotional awareness is not as strongly related to intelligence as people think.
I have a theory that this happens because, for individual contributors, buying SaaS software in the era of "vendor risk assessment" is a nightmare. So you end up with grassroots avoidance of that process, at all costs, inside the company.
As a solo founder I have experienced this on a massive scale over nearly 15 years. It's really strange how comfortable people are with unethical behavior; on my end it just doesn't feel right to cut off people's systems, but after multiple attempts to contact them we will often disable their accounts. It is against the social contract. It is stealing. In many cases a company may have 15+ free trial accounts, and the company itself absolutely dwarfs our 3-person company. The cost is beans for them. But they just don't care.
Same. Large companies keep freeloading and then ask for support. They buy a single personal license and share it among employees. And it's not small shops from poor countries, where you could at least understand it; it's (often German) enterprises…
As exciting as the last couple years have been in the AI space, I totally agree.
There was an advertisement on Twitter a few years ago for Google Home. It was a video where a parent was putting their child to bed, and they said, "Ok Google, read Goodnight Moon."
It felt like a window into a viscerally dystopian future where we outsource human interaction to an AI.
As I recall, many of his early stories involved "U.S. Robots and Mechanical Men", a huge conglomerate that owned much of the market for AI (called "robots" by Asimov; it included "Multivac" and other interfaces besides humanoid robots).
If you look at the table and do the math to convert the obfuscated units, a 3/4 ounce serving of "whole grains" may have 5 g of added sugar. Since 3/4 oz is about 21.26 g, that's 5 g out of a 21.26 g serving, or 23.5% added sugar (quick check below).
This seems like an obvious problem to me, despite some progress with adding nuts and salmon.
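For anyone who wants to verify the conversion, here's a minimal sketch of the arithmetic. The 5 g figure is the table's claimed added sugar per serving; the only assumption is the standard ounce-to-gram constant:

```python
# Back-of-the-envelope check of the 23.5% figure, assuming the
# table's claimed 5 g of added sugar per 3/4 oz serving.

OUNCE_TO_GRAMS = 28.3495  # grams per avoirdupois ounce

serving_oz = 3 / 4
added_sugar_g = 5.0

serving_g = serving_oz * OUNCE_TO_GRAMS   # ~21.26 g
sugar_fraction = added_sugar_g / serving_g

print(f"{serving_g:.2f} g serving -> {sugar_fraction:.1%} added sugar")
# 21.26 g serving -> 23.5% added sugar
```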