Their "API" isn't what's being accessed here. As far as I understand, the issue is that the subscription account's OAuth token is being used in a third-party app.
It is basically impossible to disallow the token from working that way on a technical level. It would be akin to setting up a card scanner that can deny a valid card depending on who is holding it. The only way to prevent it from working is to analyze usage patterns, request details, etc. in some form or fashion, similar to stationing a guard as a second check on people whose cards scan as valid.
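To make the card-scanner analogy concrete, here is a minimal sketch of why a bearer token can't distinguish who presents it. All names here are illustrative, not any real API:

```python
# Hypothetical sketch: a bearer token is validated purely by its value.
# The server sees only the token, not which client is holding it --
# like a card scanner that checks the card, not the cardholder.
VALID_TOKENS = {"abc123"}

def authorize(token: str) -> bool:
    # No caller identity is available at this check; any client that
    # possesses a valid token passes.
    return token in VALID_TOKENS

# The official app and a third-party app look identical here:
authorize("abc123")  # official client
authorize("abc123")  # third-party client: indistinguishable
```

Distinguishing clients would require out-of-band signals (usage patterns, fingerprinting), which is exactly the "guard as a second check" above.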
> TFA most commonly refers to Trifluoroacetic acid, a highly persistent, mobile "forever chemical" (PFAS) found globally in water and soil, widely used in organic chemistry as a solvent.
But surely your search engine must have given you the answer within your first three clicks, if not, perhaps you should consider a better search engine.
Don’t know about your parent, but I am certainly one of those “AI can’t make anyone more productive” people.
Well, at least I would say that, while being a bit hyperbolic. Folks like us prefer to see claims by corporations trying to sell you stuff backed by behavioral research before we start taking the corporation’s word for it.
The irony is that web searches for an explanation of something often lead to a discussion thread where the poster is downvoted and berated for daring to ask people instead of Google. And then there's one commenter who actually explains the thing you were wondering about.
Literally in the first paragraph of Simon's post if you cared to read it:
> this has actual legal weight to it as the IRS can use it to evaluate if the organization is sticking to its mission and deserves to maintain its non-profit tax-exempt status.
A thing which happens to me very often: I realize I'm experiencing a very real visceral discomfort nagging at me in the back of my mind.
It happens because I will have ctrl+c'd something several minutes ago. My mind subconsciously "holds" onto the info that I have text copied in my clipboard. It's only when I ctrl+v it and consciously discard it that the nagging goes away.
I have no idea why it happens or if others experience this too. But I fully agree with the author about starting from nothing and getting rid of the clutter you think isn't bothering you but which you're probably subconsciously holding onto.
I have the opposite problem. I often forget what the last thing I copied was, or whether I copied it at all, and have to go back multiple times to complete the copy and paste. A clipboard history would help me, too, but thus far I've been unable to make using one a permanent part of my toolkit (I'd have to remember the history exists).
That said, copying and pasting (and the attendant switching between windows/tabs) does often feel like one of the biggest cognitive frictions I have to deal with in any given day. That's a nut I'd like to crack one day.
One thing that has helped me the most in that regard is Alfred's multi-clipboard feature, where I can append to clipboard, which means I can copy-paste N links in N+1 actions instead of N*2 actions.
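The action-count arithmetic above can be sketched out explicitly. This is just an illustrative tally of user actions, not anything Alfred-specific:

```python
# Hypothetical sketch: counting user actions needed to move N links
# from one place to another, with and without an append-to-clipboard
# feature.

def actions_without_append(n: int) -> int:
    # Each link needs its own copy followed by its own paste.
    return 2 * n

def actions_with_append(n: int) -> int:
    # N append-to-clipboard copies, then one paste of the whole batch.
    return n + 1

for n in (3, 10):
    print(n, actions_without_append(n), actions_with_append(n))
```

For 10 links that's 11 actions instead of 20, and the gap widens linearly with N.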
I've been using clipboard history for several years now. I could not go back. I realised it released an unknown continuous pressure on my brain I wasn't aware of.
Happens to me too. Especially when a secret token or API key is on the clipboard, then all senses are heightened until I replace it with something non-sensitive.
I've been thinking about this recently as I hear it often. Would people who want to buy a car in the Tesla price range really choose a slightly cheaper Chinese EV if those were available?
Personally I have a hard time believing this. But even if you had similarly priced Chinese options, I would guess the main reason for buying a Tesla is not just because you want an EV. While a Tesla will be a reliable baseline EV, surely the reason you (or at least I) would buy one is for the supervised self-driving feature.
Chinese EVs self-drive too. You can buy level 3 cars today that are cheaper, have more features, better build quality, and better reliability. Having just been in China... yeah, it's not close. They are way ahead of us and the gap is growing fast.
This was far more thrilling and exciting to watch than I thought it would be. Which feels wrong when I say it, but I don't mean it was a good watch because of the consequences of failing. Rather because it was amazing watching a human perform at such a peak level.
It was amazing. But when it concluded I realized how watching it had made it seem, in retrospect, easy, inevitable, safe. Crazy as that sounds, watching it updated my perspective. It was "easy", and "safe", if you had trained and worked for it. The possibilities of humanity!
It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a net of fake videos citing fake news articles, etc.
Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.
It's not really any different than stopping the sale of counterfeit goods on a platform. Which is a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knockoffs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people posting AI without disclosing it and the well will run dry quickly.
Well then email spam will never have an incentive. That is a relief! I was going to predict that someday people would start sending millions of misleading emails or texts!
It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation. Which would mean fake-AI detectors could have an inherent advantage over fake-AI creators.
I have no insight, but I assume they are doing it because they can use AI to make a few variations of a video and then automatically A/B test them to see which ones get more engagement, and then use that to make videos that are more engaging than what the author actually uploaded.
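The A/B testing speculated about above could be as simple as comparing click-through rates across variants. A minimal sketch, with made-up numbers and names (nothing here reflects YouTube's actual system):

```python
# Hypothetical sketch: serve several variants of a video's thumbnail or
# title, then keep whichever gets the highest observed click-through rate.

def pick_winner(variants: dict[str, tuple[int, int]]) -> str:
    """variants maps a variant name to (impressions, clicks).
    Returns the name with the highest click-through rate."""
    return max(variants, key=lambda v: variants[v][1] / variants[v][0])

# Illustrative traffic numbers:
results = {
    "original": (1000, 52),
    "ai_variant_1": (1000, 61),
    "ai_variant_2": (1000, 47),
}
print(pick_winner(results))
```

A real system would also need statistical significance checks before declaring a winner, but the optimization loop is the same shape.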
This is "innocent" if you accept that the author's goal is simply to maximize engagement and YouTube is helping them do that. It's not if you assume the author wants users to see exactly what they authored.
I have a multitude of complaints about Tahoe, many of which others have already pointed out. One more thing that doesn't get mentioned as often but probably should is their new placement of the volume / brightness level UI which pops up when you change those two.
It used to be in the middle of the screen and worked just fine. But then someone thought of putting it exactly where browser tabs usually are, and I _constantly_ find myself in a situation where I change the volume and then try to click a tab that this UI is on top of. Then I need to move my mouse outside the UI, otherwise it stays there, and wait for it to disappear before I can change tabs. It's infuriating.