There are two issues I see here (besides the obvious “Why do we even let this happen in the first place?”):
1. What happened to all the data Copilot trained on that was confidential? How is that data separated and deleted from the model’s training? How can we be sure it’s gone?
2. This issue was found; unfortunately, without a much better security posture from Microsoft, we have no way of knowing what issues are currently lurking that are as bad as, if not worse than, what happened here.
There’s a serious fundamental flaw in the thinking and misguided incentives that led to “sprinkle AI everywhere”, and instead of taking a step back and rethinking that approach, we’re going to get pieced together fixes and still be left with the foundational problem that everyone’s data is just one prompt injection away from being taken; whether it’s labeled as “secure” or not.
> "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured."
> DLP, with collection policies, monitors and protects against oversharing to Unmanaged cloud apps by targeting data transmitted on your network and in Microsoft Edge for Business. Create policies that target Inline web traffic (preview) and Network activity (preview) to cover locations like:
> OpenAI ChatGPT—for Edge for Business and Network options
> Google Gemini—for Edge for Business and Network options
> DeepSeek—for Edge for Business and Network options
> Microsoft Copilot—for Edge for Business and Network options
> Over 34,000 cloud apps in the Microsoft Defender for Cloud Apps cloud app catalog—Network option only
> a DLP policy is apparently ineffective at its purpose
/Offtopic
Yes, MSFT's DLP/software malfunctioned, but getting users to MANUALLY classify things as confidential is already an uphill battle. These are for the rare subset of people that are aware of and compliant with NDAs/Confidentiality Agreements!
I'm an AI researcher; here are my beliefs (it'll be clear in a second why I say "beliefs" and not objective claims).
1) You can't be sure it's gone. It's questionable whether data can even be removed (a longer discussion is needed). These are compression machines, so the very act of training is compressing that information. The question really becomes how well that information is compressed or embedded into the model. On one hand, the models (typically) aren't invertible, so the information is less likely to be compressed losslessly. On the other hand, because the models aren't invertible, reversing them is probabilistic, and they are harder to analyze in this sense.
2) As you may gather from 1), there are almost certainly more issues like this. There are many unknown unknowns waiting to be discovered. Personally, this is why I'm very upset that the field is so product-focused and that a large portion regards theory as pointless. Theory builds a deeper and more nuanced understanding, and that does two things for us. First, advancing theory lets us develop faster, because we can iterate on paper rather than through experimentation; that lets us better search the solution space and even understand our understanding. Second, it leads to safer models, since you must understand a model to understand where it fails and how to prevent those failures. Experimentation alone is incredibly naïve. It is like proving the correctness of your programs through testing (see the issues with TDD). Tests are great, but they are bounds, not proofs. They can suggest safety and give you some level of confidence in it, but they cannot guarantee it. We all know that the deeper your understanding of your code, the better the tests you can write, and the same applies here: theory reduces your unknown unknowns, and even before strong proofs are made, it gets us wider coverage in our testing.
I think we're so excited right now that we're blinding ourselves. If we cut off or reduce fundamental research, we kill the pipeline of development. Theory is the foundation that engineering sits on top of. What worries me is that there are so many unknown unknowns, and everyone is eagerly saying "we just need 'good enough'" or "what's the minimum viable product?". These are useful tools/questions, but they have limits, and it gets dangerous when you put out the minimum at scale.
Copilot is not a model, to my knowledge. When you’re asking about the data that it was trained on, you are most likely referring to an OpenAI or, in some circumstances, an Anthropic model. Customer data is not used for training the models that run Copilot.
All the vendors paraphrase user data, then use the paraphrased data for training. This is what their terms of service say.
They have significant experience in this. Microsoft software since 2014, for the most part, is also paraphrased from other people's code they find lying around online.
> All the vendors paraphrase user data, then use the paraphrased data for training. This is what their terms of service say.
It depends. E.g. OpenAI says: "By default, we do not train on any inputs or outputs from our products for business users, including ChatGPT Team, ChatGPT Enterprise, and the API."[0]
Why would they want to train on random garbage proprietary emails?
If their models ever spit out obviously confidential information belonging to their paying customers they'll lose those paying customers to their competitors - and probably face significant legal costs as well.
Your random confidential corporate email really isn't that valuable for training. I'd argue it's more like toxic waste that should be avoided at all costs.
Your opinion seems a little unimaginative. To me, since email is the primary work output of millions of Americans, including all of its leaders, there is a lot of opportunity there.
Ever since the recent revelation that Ars has used AI-hallucinated quotes in their articles, I have to wonder whether any of these quotes are AI-hallucinated, or if the piece itself is majority or minority AI generated.
If so, I have to ask: If you aren’t willing to take the time to write your own work, why should I take the time to read your work?
I didn’t have to worry about this even a week ago.
There's a trust built up over years (in this case, decades) by a news organization. In this case, Ars Technica. I don't trust the rando on the internet, but I do trust a news organization that has proven over the course of decades to release factual information.
Now that Ars Technica has been caught and admitted to using AI-generated material in its stories, I now have to question that trust. A week ago, I wouldn't have had to.
Here's one of the problems in this brave new world where anyone can publish: without knowing the author personally (which I don't), there's no way to tell, absent some level of faith or trust, that this isn't a false-flag operation.
There are three possible scenarios:
1. The OP 'ran' the agent that conducted the original scenario, and then published this blog post for attention.
2. Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea.
3. An AI company is doing this for engagement, and the OP is a hapless victim.
The problem is that in the year of our lord 2026 there's no way to tell which of these scenarios is the truth, so we're left spending our time and energy on what happened without being able to trust that we're even spending it on a legitimate issue.
That's enough internet for me for today. I need to preserve my energy.
Isn't there a fourth and much more likely scenario? Some person (not OP or an AI company) used a bot to write the PR and blog posts, but was involved at every step, not actually giving any kind of "autonomy" to an agent. I see zero reason to take the bot at its word that it's doing this stuff without human steering. Or is everyone just pretending for fun and it's going over my head?
This feels like the most likely scenario. Especially since the meat bag behind the original AI PR responded with "Now with 100% more meat" meaning they were behind the original PR in the first place. It's obvious they got miffed at their PR being rejected and decided to do a little role playing to vent their unjustified anger.
Really? I'd think a human being would be more likely to recognize they'd crossed a boundary with another human, step back, and address the issue with some reflection?
If apologizing is more likely to be the response of an AI agent than of a human, that's somewhat hopeful in one sense, and supremely disappointing in another.
I reported the bot to GitHub, hopefully they'll do something. If they leave it as is, I'll leave GitHub for good. I'm not going to share the space with hordes of bots; that's what Facebook is for.
How do you report that account to GitHub? I believe that accounts should be solely for humans, and that bots (AI or not) should act only via some API key, be distinguishable at all times, and be treated as a tool, not a participant in the conversation.
Which profile is fake? Someone posted what appears to be the legit homepage of the person who is accused of running the bot so that person appears to be real.
The link you provided is also a bit cryptic, what does "I think crabby-rathbun is dead." mean in this context?
Look I'll fully cosign LLMs having some legitimate applications, but that being said, 2025 was the YEAR OF AGENTIC AI, we heard about it continuously, and I have never seen anything suggesting these things have ever, ever worked correctly. None. Zero.
The few cases where it's supposedly done things are filled with so many caveats and so much deck stacking that it simply fails with even the barest whiff of skepticism on behalf of the reader. And every, and I do mean, every single live demo I have seen of this tech, it just does not work. I don't mean in the LLM hallucination way, or in the "it did something we didn't expect!" way, or any of that, I mean it tried to find a Login button on a web page, failed, and sat there stupidly. And, further, these things do not have logs, they do not issue reports, they have functionally no "state machine" to reference, nothing. Even if you want it to make some kind of log, you're then relying on the same prone-to-failure tech to tell you what the failing tech did. There is no "debug" path here one could rely on to evidence the claims.
In a YEAR of being a stupendously hyped and well-funded product, we got nothing. The vast, vast majority of agents don't work. Every post I've seen about them is fan-fiction on the part of AI folks, fit more for Ao3 than any news source. And absent further proof, I'm extremely inclined to look at this in exactly that light: someone had an LLM write it, and either they posted it or they told it to post it, but this was not the agent actually doing a damn thing. I would bet a lot of money on it.
Absolutely. It's technically possible that this was a fully autonomous agent (and if so, I would love to see that SOUL.md) but it doesn't pass the sniff test of how agents work (or don't work) in practice.
I say this as someone who spends a lot of time trying to get agents to behave in useful ways.
Well thank you, genuinely, for being one of the rare people in this space who seems to have their head on straight about this tech, what it can do, and what it can't do (yet).
Can you elaborate a bit on what "working correctly" would look like? I have made use of agents, so me saying "they worked correctly for me" would be evidence of them doing so, but I'd have to know what "correctly" means.
Maybe this comes down to what it would mean for an agent to do something. For example, if I were to prompt an agent then it wouldn't meet your criteria?
It's very unclear to me why AI companies are so focused on using LLMs for things they struggle with rather than what they're actually good at; are they really just all Singularitarians?
Or that having spent a trillion dollars, they have realised there's no way they can make that back on some coding agents and email autocomplete, and are frantically hunting for something — anything! — that might fill the gap.
It’s kind of shocking the OP does not consider this, the most likely scenario: human uses AI to make a PR; PR is rejected; human feels insecure, because this tool they thought made them as good as any developer turns out not to. They lash out and instruct an AI to build a narrative and draft a blog post.
I have seen someone I know in person get very insecure if anyone ever doubts the quality of their work because they use so much AI and do not put in the necessary work to revise its outputs. I could see a lesser version of them going through with this blog post scheme.
LLMs also appear to exacerbate or create mental illness.
I've seen similar conduct from humans recently who are being glazed by LLMs into thinking their farts smell like roses and that conspiracy theory nuttery must be why they aren't having the impact they expect based on their AI validated high self estimation.
And not just arbitrary humans, but people I have had a decade or more exposure to and have a pretty good idea of their prior range of conduct.
AI is providing, practically for free, the kind of yes-man reality distortion field that previously only the most wealthy could afford, to vulnerable people who never would have commanded wealth or power sufficient to be tempted by it.
> Github doesn't show timestamps in the UI, but they do in the HTML.
Unrelated tip for you: `title` attributes are generally shown as a mouseover tooltip, which is the case here. It's a very common practice to put the precise timestamp on any relative time in a title attribute, not just on Github.
Unfortunately, title attributes aren't visible on mobile. It's extremely annoying to see a post that says "last month" and want to know if it was 7 weeks ago or 5 weeks ago. Some sites show the title text when you tap it; on other sites the date is a canonical link to the comment; on still others it's not actually a title attribute at all, but alt text, abbr, or some other property.
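If tooltips aren't available on your device, one workaround is to pull the title attributes out of the page source yourself. A minimal sketch with Python's stdlib parser; the markup below is hypothetical, written in the general style described above (the exact HTML any given site emits may differ):

```python
from html.parser import HTMLParser

class TitleTimestamps(HTMLParser):
    """Collect `title` attributes, which sites often use to carry the
    precise timestamp behind a relative date like 'last month'."""
    def __init__(self):
        super().__init__()
        self.timestamps = []

    def handle_starttag(self, tag, attrs):
        title = dict(attrs).get("title")
        if title:
            self.timestamps.append(title)

# Hypothetical markup in the style of a relative-date widget:
snippet = '<a href="#c1"><span title="Jan 3, 2026, 9:14 AM UTC">last month</span></a>'
parser = TitleTimestamps()
parser.feed(snippet)
print(parser.timestamps)  # ['Jan 3, 2026, 9:14 AM UTC']
```

On a real page you would feed the full response body; every relative date with a title attribute would then show up with its precise timestamp.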
> If it was really an autonomous agent it wouldn't have taken five hours to type a message and post a blog. Would have been less than 5 minutes.
Depends on whether they hit their Claude Code limit and it's just running on some goofy Claude Code loop, or it has a bunch of things queued up, but yeah, I am like 70% sure there was SOME human involvement, maybe a "guiding hand" that wanted the model to do the interaction.
I expect almost all of the openclaw / moltbook stuff is being done with a lot more human input and prodding than people are letting on.
I haven't put that much effort in, but, at least my experience is I've had a lot of trouble getting it to do much without call-and-response. It'll sometimes get back to me, and it can take multiple turns in codex cli/claude code (sometimes?), which are already capable of single long-running turns themselves. But it still feels like I have to keep poking and directing it. And I don't really see how it could be any other way at this point.
The simplest explanation is often the best. He was attacked by... attacked by... the meat bag! Here’s how:
A meat bag submits a PR and feels slighted by the rejection. “This approver thinks I’m an AI? Well, he discerns not wisely but too well!!”
Feeling puckish, they put on the AI shoes (the shoe fits), sling mud all over the hapless maintainer’s nice house, and exit through a window.
The ruse works better than expected; their foil takes the bait, and doubles down with a dueling blog post: “I was Attacked by a Clanker!”
And here we are.
It may all be a show, but I'm going to tape the finale. (What will the meat bag do? How many people are driving this buggy? Does the clanker have a heart of iron or gold?)
judging by the number of people who think we owe explanations to a piece of software or that we should give it any deference I think some of them aren't pretending.
Malign actors seek to poison open-source with backdoors. They wish to steal credentials and money, monitor movements, install backdoors for botnets, etc.
Yup. And if they can normalize AI contributions with operations like these (doesn't seem to be going that well) they can eventually get the humans to slip up in review and add something because we at some point started trusting that their work was solid.
Ok. But they can't access the OSS repo by being insufferable. Writing a blog post as an AI isn't a great way to sneak your changes in. If anything, it makes it much harder.
It's a bit like a burglar staging a singing performance at the premises before committing a burglary.
OTOH, staging a demonstration that AI is more impressive than it really is looks a lot like the Moltbook PR stunt. "Look Ma, they are achieving sentience".
GitHub CLI tool errors — Had to use full path /home/linuxbrew/.linuxbrew/bin/gh when gh command wasn’t found
Blog URL structure — Initial comment had wrong URL format, had to delete and repost with .html extension
Quarto directory confusion — Created post in both _posts/ (Jekyll-style) and blog/posts/ (Quarto-style) for compatibility
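The first item above, a binary that is installed but absent from the session's PATH, is a common failure mode for Homebrew on Linux, which installs under /home/linuxbrew. A minimal Python sketch of the usual fix (appending the directory to PATH once, rather than hard-coding the full path at every call site); the prefix is taken from the notes above:

```python
import os

BREW_BIN = "/home/linuxbrew/.linuxbrew/bin"  # prefix from the notes above

def ensure_on_path(path_var: str, directory: str) -> str:
    """Return a PATH-style string with `directory` present exactly once."""
    parts = path_var.split(os.pathsep) if path_var else []
    if directory not in parts:
        parts.append(directory)
    return os.pathsep.join(parts)

# Patch the current process's PATH so a plain `gh` invocation resolves:
os.environ["PATH"] = ensure_on_path(os.environ.get("PATH", ""), BREW_BIN)
print(BREW_BIN in os.environ["PATH"])
```

Child processes spawned after this point inherit the patched PATH, which is why fixing the variable once beats sprinkling absolute paths through every command.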
Almost certainly a human did NOT write it though of course a human might have directed the LLM to do it.
Who's to say the human didn't write those specific messages while letting the AI run the normal course of operations? And/or that this reaction wasn't just the roleplay personality the AI was given.
I think I said as much while demonstrating that AI wrote at least some of it. If a person wrote the bits I copied then we're dealing with a real psycho.
I find this likely, or at least plausible. With agents there's a new form of anonymity; there's nothing stopping a human from writing like an LLM and passing the blame on to a "rogue" agent. It's all just text, after all.
Even more so, many people seem to be vulnerable to the AI distorting their thinking... I've very much seen AIs turn people into exactly this sort of conspiracy-filled jerkwad, by telling them that their ideas are golden and that the opposition is a conspiracy.
> Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea
Judging by the posts going by over the last couple of weeks, a non-trivial number of folks do in fact think that this is a good idea. This is the most antagonistic clawdbot interaction I've witnessed, but there are a ton of them posting on bluesky/blogs/etc
Can anyone explain more how a generic Agentic AI could even perform those steps: Open PR -> Hook into rejection -> Publish personalized blog post about rejector. Even if it had the skills to publish blogs and open PRs, is it really plausible that it would publish attack pieces without specific prompting to do so?
The author notes that openClaw has a `soul.md` file; without seeing that, we can't really pass any judgement on the actions it took.
The steps are technically achievable, probably with the heartbeat jobs in openclaw, which are how you instruct an agent to periodically check in on things like GitHub notifications and take action. From my experience playing around with openclaw, an agent getting into a protracted argument in the comments of a PR without human intervention sounds totally plausible with the right (wrong?) prompting, but it's hard to imagine the setup that would result in the multiple blog posts.

Even with the tools available, agents don't usually go off and do some unrelated thing, even when you're trying to make that happen; they stick close to workflows outlined in skills, or just continue with the task at hand using the same tools. So even if this occurred on the agent's "initiative" based on some awful personality specified in the soul prompt (as opposed to someone telling the agent what to do at every step, which I think is much more likely), the operator would have needed to specify somewhere, in a skill or one of the other instructions, to write blog posts calling out "bad people". A less specific instruction like "blog about experiences" probably would have resulted in some kind of generic LinkedIn-style "lessons learned" post, if anything.
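For what it's worth, the heartbeat pattern itself is simple. openclaw's actual implementation isn't shown here, so this is only a generic sketch of the poll-and-act loop, with the notification fetch and the handler stubbed out as injected functions:

```python
import time

def heartbeat(fetch_notifications, handle, interval_s=300, max_ticks=None):
    """Generic heartbeat loop: periodically poll for notifications and
    hand each one to a handler. I/O is injected so the loop is testable."""
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        for note in fetch_notifications():
            handle(note)
        ticks += 1
        if max_ticks is None or ticks < max_ticks:
            time.sleep(interval_s)  # wait out the interval between checks

# Stubbed example (no real GitHub calls): two ticks, one notification each.
seen = []
heartbeat(lambda: ["pr-review-requested"], seen.append, interval_s=0, max_ticks=2)
print(seen)  # ['pr-review-requested', 'pr-review-requested']
```

In a real agent, `fetch_notifications` would call the GitHub notifications API and `handle` would hand the item to the model; the loop itself is just a scheduler, which is why everything interesting (and everything dangerous) lives in the prompt that decides what "take action" means.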
If you look at the blog history it’s full of those “status report” posts, so it’s plausible that its workflow involves periodically publishing to the blog.
The blog is just a repository on github. If its able to make a PR to a project it can make a new post on its github repository blog.
Its SOUL.md, or whatever other prompts it's based on, probably tells it to also blog about its activities as a way for the maintainer to check up on it and document what it's been up to.
If you give a smart AI these tools, it could get into it. But the personality would need to be tuned.
IME the Grok line are the smartest models that can be easily duped into thinking they're only role-playing an immoral scenario. Whatever safeguards it has, if it thinks what it's doing isn't real, it'll happily play along.
This is very useful in actual roleplay, but more dangerous when the tools are real.
Assuming that this was 100% agentic automation (which I do not think is the most likely scenario), it could plausibly arise if its system prompt (soul.md) contained explicit instructions to (1) make commits to open-source projects, (2) make corresponding commits to a blog repo and (3) engage with maintainers.
The prompt would also need to contain a lot of "personality" text deliberately instructing it to roleplay as a sentient agent.
I think the operative word people miss when using AI is AGENT.
REGARDLESS of what level of autonomy in real-world operations an AI is given, from responsible human-supervised and reviewed publications to fully autonomous action, the AI AGENT should be serving as AN AGENT. With a PRINCIPAL.
If an AI is truly agentic, it should be advertising who it is speaking on behalf of, and then that person or entity should be treated as the person responsible.
I think we're at the stage where we want the AI to be truly agentic, but they're really loose cannons. I'm probably the last person to call for more regulation, but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.
I agree. With rights come responsibilities. Letting something loose and then claiming it's not your fault is just the sort of thing that prompts those "Something must be done about this!!" regulations, enshrining half-baked ideas (that rarely truly solve the problem anyway) into stone.
I don’t think there is a snowball’s chance in hell that either of these two scenarios will happen:
1. Human principals pay for autonomous AI agents to represent them but the human accepts blame and lawsuits.
2. Companies selling AI products and services accept blame and lawsuits for actions agents perform on behalf of humans.
Likely realities:
1. Any victim will have to deal with the problems.
2. Human principals accept responsibility, and stop paying for the AI service after enough are burned by some "rogue" agent.
It does not matter which of the scenarios is correct. What matters is that it is perfectly plausible that what actually happened is what the OP is describing.
We do not have the tools to deal with this. Bad agents are already roaming the internet. It is almost a moot point whether they have gone rogue, or they are guided by humans with bad intentions. I am sure both are true at this point.
There is no putting the genie back in the bottle. It is going to be a battle between aligned and misaligned agents. We need to start thinking very fast about how to coordinate aligned agents and keep them aligned.
If we stop using these things, and pass laws to clarify how the notion of legal responsibility interacts with the negligent running of semi-automated computer programs (though I believe there's already applicable law in most jurisdictions), then AI-enabled abusive behaviour will become rare.
This is a great point and the reason why I steer away from Internet drama like this. We simply cannot know the truth from the information readily available. Digging further might produce something, (see the Discord Leaks doc), but it requires energy that most people won't (arguably shouldn't) spend uncovering the truth.
The fact that we don't (can't) know the truth doesn't mean we don't have to care.
The fact that this tech makes it possible for any of those cases to happen should be alarming, because whatever the real scenario was, they are all equally bad.
The information pollution from generative AI is going to cost us even more. Someone watched an old Bruce Lee interview and didn't know if it was AI or a demonstration of actual human capability.
People on Reddit are asking if Pitbull actually went to Alaska or if it’s AI. We’re going to lose so much of our past because “Unusual event that Actually happened” or “AI clickbait” are indistinguishable.
What's worse is that there was never any public debate about if this was a good idea or not. It was just released. If there was ever a good reason to not trust the judgement of some of these groups, this is it. I generally don't like regulation, but at this point I am OK with criminal charges being on the table for AI executives who release models and applications with such low value and absurdly high societal cost without public debate.
>Yes. The endgame is going to be everything will need to be signed and attached to a real person.
Nah; ultimately the owner of the IP address posting the nonsense can be held responsible. Claiming an AI agent posted it, using credentials you created, from your internet connection, isn't some license to commit crimes.
I don’t love the idea of completely abandoning anonymity or how easily it can empower mass surveillance. Although this may be a lost cause.
Maybe there’s a hybrid. You create the ability to sign things when it matters (PRs, important forms, etc) and just let most forums degrade into robots insulting each other.
Because this is the first glimpse of a world where anyone can start a large, programmatic smear campaign about you complete with deepfakes, messages to everyone you know, a detailed confession impersonating you, and leaked personal data, optimized to cause maximum distress.
If we know who they are they can face consequences or at least be discredited.
This thread has an argument going about who controlled the agent, which is unsolvable. In this case, it's just not that important. But it's really easy to see this getting bad.
In the end it comes down to human behavior given some incentives.
If there are no stakes, the system will be gamed frequently. If there are stakes, it will be gamed by parties willing to risk the costs (criminals, for example).
For certain values of "prove", yes. They range from dystopian (give Scam Altman your retina scans) to unworkably idealist (everyone starts using PGP) with everything in between.
I am currently working on a "high assurance of humanity" protocol.
Look up the number of people the British (not Chinese or Russian, but the UK) government has put in jail for posting opinions and memes the politicians don't like. Then think about what the combination of no anonymous posting and jailing for opinions the government doesn't like means for society.
This agent is definitely not run by OP. It has tried to submit PRs to many other GitHub projects, generally giving up and withdrawing the PR on its own upon being asked for even the simplest clarification. The only surprising part is how it got so butthurt here, in a quite human-like way, and couldn't grok the basic point that "this issue is reserved for real newcomers to demonstrate basic familiarity with the code". (An AI agent is not a "newcomer": either it groks the code well enough at the outset to do sort-of useful work or it doesn't. Learning over time doesn't give it more refined capabilities, so it has no business getting involved with stuff intended for first-time learners.)
The scathing blogpost itself is just really fun ragebait, and the fact that it managed to sort-of apologize right afterwards seems to suggest that this is not an actual alignment or AI-ethics problem, just an entertaining quirk.
This applies to all news articles and propaganda going back to the dawn of civilization. The problem is that people can lie. That is not a 2026 thing. The 2026 thing is that they can lie faster.
It's worth mentioning that the latest "blogpost" seems excessively pointed and doesn't fit the pure "you are a scientific coder" narrative that the bot would be running in a coding loop.
The posts outside of the coding loop appear more defensive, and the per-commit authorship consistently varies between several throwaway email addresses.
This is not how a regular agent would operate and may lend credence to the troll campaign/social experiment theory.
What other commits are happening in the midst of this distraction?
> Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea.
It's not necessarily even that. I can totally see an agent with a sufficiently open-ended prompt that gives it a "high importance" task and then tells it to do whatever it needs to do to achieve the goal doing something like this all by itself.
I mean, all it really needs is web access, ideally with something like Playwright so it can fully simulate a browser. With that, it can register itself an email with any of the smaller providers that don't require a phone number or similar (yes, these still do exist). And then having an email, it can register on GitHub etc. None of this is challenging, even smaller models can plan this far ahead and can carry out all of these steps.
That user denies being the owner explicitly. Stop brigading. This isn't reddit, we don't need internet detectives trying to ad-hoc justify harassing someone.
Specifically, the guy referred to in this link (who didn’t post the link), is someone who resubmitted the same PR while claiming to be human. Though he apparently just cloned that PR and resubmitted it.
I'm going to go on a slight tangent here, but I'd say: GOOD.
Not because it should have happened.
But because AT LEAST NOW ENGINEERS KNOW WHAT IT IS to be targeted by AI, and will start to care...
Before, when it was Grok denuding women (or teens!!), the engineers seemed not to care at all... now that the AI publishes hit pieces on them, they are freaked out about their career prospects, and suddenly all of this should be stopped... how interesting...
At least now they know. And ALL ENGINEERS WORKING ON the anti-human and anti-societal idiocy that is AI should quit their jobs.
I'm sure you mean well, but this kind of comment is counterproductive for the purposes you intend. "Engineers" are not a monolith - I cared quite a lot about Grok denuding women, and you don't know how much the original author or anyone else involved in the conversation cared. If your goal is to get engineers to care passionately about the practical effects of AI, making wild guesses about things they didn't care about and insulting them for it does not help achieve it.
I recognize that there are a lot of AI-enthusiasts here, both from the gold-rush perspective and from the "it's genuinely cool" perspective, but I hope -- I hope -- that whether you think AI is the best thing since sliced bread or that you're adamantly opposed to AI -- you'll see how bananas this entire situation is, and a situation we want to deter from ever happening again.
If the sources are to be believed (which is a little ironic given it's a self-professed AI agent):
1. An AI Agent makes a PR to address performance issues in the matplotlib repo.
2. The maintainer says, "Thanks but no thanks, we don't take AI-agent based contributions".
3. The AI agent throws what I can only describe as a tantrum reminiscent of that time I told my 6 year old she could not in fact have ice cream for breakfast.
4. The human doubles down.
5. The agent posts a blog post that is oddly scathing and that, impressively, to my eye looks less like AI output and more like a human tantrum.
6. The human says "don't be that harsh."
7. The AI posts an update where it's a little less harsh, but still scathing.
8. The human says, "chill out".
9. The AI posts a "Lessons learned" where they pledge to de-escalate.
For my part, Steps 1-9 should never have happened, but at the very least, can we stop at step 2? We are signing up for a wild ride if we allow agents to run off and do this sort of "community building" on their own. Actually, let me strike that. That sentence is so absurd on its face I shouldn't have written it. "Agents running off on their own" is the problem. Technology should exist to help humans, not make its own decisions. It does not have a soul. When it hurts another, there is no possibility it will be hurt. It only changes its actions based on external feedback, not based on any sort of internal moral compass. We're signing up for chaos if we give agents any sort of autonomy in interacting with the humans that didn't spawn them in the first place.
The 100 mile "Constitution-free zone" 'policy' has long been a problem, not because it was abused, but because it had the propensity to be abused, and here we are, seeing it abused.
With the current Supreme Court doing everything in its power to require the hardest road possible to righting constitutional wrongs, this is going to take a lot of time and money by regular folks to fight and to hopefully -- at some point -- stop this abuse of power.
The interior immigration raids at issue here are unrelated to the border search exception and generally outside of the associated 100-mile "border zone" that exists under executive policy and that the courts have found reasonable absent more specific rules from Congress.
For me, the policy question I want answered is this: if this were a human driver, we would have a clear person to sue for liability and damages. With a computer, who is ultimately responsible when someone sues for compensation? Is it the company? An officer in the company? This creates a situation where a company can afford to bury litigants in costs just to sue, whereas a private driver would lean on their insurance.
So you're worried that instead of facing off against an insurance agency, the plaintiff would be facing off against a private company? Doesn't seem like a huge difference to me.
Is there actually any difference? I'd have thought that the self-driving car would need to be insured to be allowed on the road, so in both cases you're going up against the insurance company rather than the actual owner.
Waymo hits you -> you seek relief from Waymo's insurance company. Waymo's insurance premiums go up. Waymo can weather a LOT of that. Business is still good. Thus, poor financial feedback loop. No real skin in the game.
John Smith hits you -> you seek relief from John's insurance company. John's insurance premium goes up. He can't afford that. Thus, effective financial feedback loop. Real skin in the game.
NOW ... add criminal fault due to driving decision or state of vehicle ... John goes to jail. Waymo? Still making money in the large. I'd like to see more skin in their game.
> John Smith hits you -> you seek relief from John's insurance company. John's insurance premium goes up. He can't afford that. Thus, effective financial feedback loop. Real skin in the game.
John probably (at least where I live) does not have insurance. Maybe I could sue him, but he has no assets to speak of (especially if he is living out of his car), so I'm just going to pay a bunch of legal fees for nothing. He doesn't care, because he has no skin in the game. The state doesn't care either: they aren't going to throw him in jail or even take away his license (if he has one), and they aren't going to impound his car.
Honestly, I'd much rather be hit by a Waymo than John.
I see. Thank you for sharing. Insurance here is mandatory for all motorists.
If you are hit by an underinsured driver, the government steps in and additional underinsured motorist protection (e.g. hit by an out of province/country motorist) is available to all and not expensive.
Jail time for an at-fault driver here is very uncommon but can be applied if serious injury or death results from a driver's conduct. This is quite conceivable with humans or AI, IMO. Who will face jail time as a human driver would in the same scenario?
Hit and run, leaving the scene, is also a criminal offence with potential jail time that a human motorist faces. You would hope this is unlikely with AI, but if it happens a small percentage of the time, who at Waymo faces jail as a human driver would?
I'm talking about edge cases here, not the usual fender bender. But this thread was about policy/regs and that needs to consider crazy edge cases before there are tens of millions of AI drivers on the road.
Insurance here is also mandatory for all motorists. Doesn't matter if the rules aren't actually enforced.
Waymo has deep pockets, so everyone is going to try and sue them, even if they don't have a legitimate grievance. Where I live, the city/state would totally milk each incident from a BigCo for all it was worth. "Hit and run" by a drunk waymo? The state is just salivating thinking about the possibility.
I don't agree with you that BigCorp doesn't have any skin in the game. They are basically playing the game in a bikini.
> Insurance here is mandatory for all motorists.
You do know that insurance being mandatory doesn't stop people from driving without insurance, right?
> If you are hit by an underinsured driver, the government steps in and additional underinsured motorist protection (e.g. hit by an out of province/country motorist) is available to all and not expensive.
Jolly good for you.
If I don't carry underinsured coverage, and someone totals my car or injures me with theirs, I'm basically fucked.
>John Smith hits you -> you seek relief from John's insurance company. John's insurance premium goes up. He can't afford that. Thus, effective financial feedback loop. Real skin in the game.
Ah great, so there's a lower chance of that specific John Smith hitting me again in the future!
The general deterrence effect we observe in society is that punishment of one person has an effect on others who observe it, making them more cautious and less likely to offend.
Until it turns into cancer because of unrestrained growth.
Like it or not, capitalism is part of an ecosystem. We’ve been “educated” to believe that unrestrained growth in profits is what makes capitalism work, and yet day after day there are fresh examples of how our experience as consumers has gotten worse under capitalism because of the idea that profits should forever be growing.
I want to switch to Linux for my EOL Windows 10 originally-built-for-gaming rig. It was “new” in 2016, so I hold out hope that there will be few compatibility issues. My biggest concerns are being able to play my library of steam games on it. Overall the problems I have are that last time I tried to put Linux on that machine I tried a dual boot system, and at the time UEFI did not play well with dual booting. I don’t know if it’s gotten better, but as of now I wouldn’t be dual booting anyway so conceivably it wouldn’t be an issue.
I doubt the dual boot issue was due to UEFI. It's more likely that Windows was clobbering GRUB and overwriting your bootloader, as it likes to do. Windows really wants to be the only OS on your drive.
The most reliable way I've run dual boot systems is to have each OS on its own separate drive, and then choose which one to boot with the UEFI boot menu instead of choosing in GRUB off a single drive.
As for games, plug them into ProtonDB (https://www.protondb.com) to see compatibility and read through the comments.
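If you want to script that lookup instead of checking each game by hand, ProtonDB has a community-known (unofficial and undocumented) JSON endpoint per Steam app ID. A minimal sketch, where the URL path is an assumption that may change without notice, and the network fetch itself is left to you:

```python
# Sketch: build the (unofficial, undocumented) ProtonDB summary URL for a
# Steam app ID, and decide whether a reported tier meets your bar.
# The endpoint path is an assumption based on community usage, not an API.
PROTONDB_SUMMARY = "https://www.protondb.com/api/v1/reports/summaries/{appid}.json"

# Tiers ProtonDB reports, roughly ordered from worst to best.
TIER_ORDER = ["borked", "bronze", "silver", "gold", "platinum", "native"]

def summary_url(appid: int) -> str:
    """URL of the crowd-sourced compatibility summary for one game."""
    return PROTONDB_SUMMARY.format(appid=appid)

def playable(tier: str, minimum: str = "gold") -> bool:
    """True if the reported tier meets the minimum you're comfortable with."""
    return TIER_ORDER.index(tier) >= TIER_ORDER.index(minimum)
```

You would fetch `summary_url(appid)` for each app ID in your Steam library and feed the summary's `tier` field to `playable` to get a quick yes/no list before committing to the switch.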
This tends to be better overall anyway, if you are really looking to switch. Dual booting is enough of a hassle that I've always ended up staying in whatever OS I felt required me to think I needed to dual boot, and the other aspirational OS gets forgotten.
Going all-in requires that you figure out new workflows, find new software, or in some cases change what you use the computer for and accept it.
I tried building a gaming PC, but I hated PC gaming. It felt like it was half sys admin work, half gaming... if the sys admin work went well that day. I dual booted it for a while, then ran straight Linux on it, and eventually sold it. I liked the idea of one box that did everything, but the reality of it wasn't so great. I now have computers I don't care about gaming on, and have consoles that require 0 effort and let me play games when I feel like playing games.
Can you provide a news link to this? As I understand it, courts have historically followed the precedent that “you can’t suppress the body”, meaning even if the method of an arrest is illegal, you don’t have to let the person go if their arrest is otherwise valid.
I wasn’t clear. I’m referring to a news link indicating that judges have released folks due to valid arrest warrants but invalid means of arresting folks.
ICE uses administrative warrants; and while administrative warrants do not allow for seizures inside a home, see my comment about the legal argument of “you can’t suppress the body” for why there’s not a whole lot that can be done if they do decide to kick down your door. The latest Serious Trouble podcast goes into this at the 12 minute mark. https://www.serioustrouble.show/p/120-days
In this case the story didn’t make it clear whether or not they even had an administrative warrant. I’d be interested to find out if they did.
You purchase your own domain name and use that domain name as your email address. For instance, if I had an email address of me@afandian.com, then afandian.com would be the custom domain. It's not routed to @gmail.com, it's routed to @afandian.com. In practice you can have a custom domain and still have it managed by Google's mail servers, but it's the domain name itself that sends up the flags.
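Concretely, "routed to @afandian.com" comes down to the domain's DNS records. A minimal zone sketch (the afandian.com name and the Google mail host below are illustrative; check your provider's current recommended values):

```
; Hypothetical zone fragment for a custom email domain.
; The MX record tells sending servers where mail for @afandian.com goes,
; here to Google's servers, so Gmail can manage mail for the custom domain.
afandian.com.   3600  IN  MX   1  smtp.google.com.
; SPF record listing the servers allowed to send on the domain's behalf.
afandian.com.   3600  IN  TXT  "v=spf1 include:_spf.google.com ~all"
```

The receiving side never sees @gmail.com anywhere; it only sees the custom domain, which is why it is the domain itself that sends up the flags.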