There have been a lot of questions about the motivation for this feature. Am I the only one who thinks this could be related to something like the EARN IT Act (or similar legislation)?
Here's an article from a year ago on EARN IT:
> Theoretically, a system that uses client-side scanning could still send messages encrypted end to end, and so the Leahy amendment would not offer any protection, but many of the same confidentiality concerns with backdoored “e2ee” systems would continue to apply.
Sensitizing people to the idea that a benevolent dictator is going through content on their local device for their benefit.
It's all about conditioning. First it's to save the children. Then it will be about preventing the terrorists. Finally it will be about preventing "hate speech" - which these days pretty much means speech from anyone you disagree with :p
Like 1984, Minority Report wasn't meant to be an instruction manual!
I'm doubtful of the harms of misinformation and the role it plays in shaping people's opinions (I believe people are fairly intelligent rather than undiscerning parrots of thought). However, if I were to accept the claim that misinformation is rampant and that it is persuading large numbers of people, then I would think the solution would be to better educate people on how to "not believe everything you read": checking sources, questioning what motives the author might have, checking who the author is, finding a related article from a different source, etc. IMO this would remove the need for platforms to take down misinformation (something that can easily be abused to remove content that doesn't fit a certain political agenda or similar) since the public would be capable of filtering out misinformation themselves.
The war on misinformation seems to be driven by the Democrats today; however, it could easily be driven by the Republicans in the future. Regardless of who is pushing it, IMO a war on misinformation will always lead to polarization. After all, your opponents aren't rational; rather, they've been brainwashed. As such, there is no need to engage with your opponents' views, conveniently leaving your own views completely unopposed (and thus obviously correct and good).
I think framing it as a war is problematic. We’re not waging war against pollution and CO2, we’re trying to limit and manage it. Disinformation and misinformation can be conceptualized similarly as information pollution. Our brains don’t have magic powers: garbage in, garbage out. Just like AI.
Objective reality does exist, even if it's hotly denied. And brazen denial is one thing we do repeatedly see in all this, which has drawn my attention to the specific things that get the most brazen denial, with the most emphasis (and, where possible, the most brigading and reinforcement by mysterious upvotes/downvotes/manipulations).
I get that it's desirable and appealing to frame it as random noise and organically produced info pollution from dumb people who just want attention. It would be nice to think this.
There's also an argument that this is indeed war waged through other means… interestingly, with a death toll very comparable to the old-fashioned, less deniable forms of open warfare. If there were no pandemic, someone would've had to invent one… or make do with bombings, vehicular assaults, and other sorts of terrorist action. But since there is a pandemic, the war becomes essentially a matter of maximizing the performance of the pandemic by any means necessary.
Could be worse, could be nukes. That would be more obvious, mind you.
I'm not sure that I understand your point. I don't consider something worse just because it's called a war, and I don't mean to suggest in any way that this isn't a real and serious problem: in fact, I compared it to pollution, and I consider pollution to be a serious and urgent problem.
I just feel like the war metaphor is not great: it evokes unnecessary violence and connotes confrontation. We wage war against each other, but we can solve serious problems together. Neither war nor problem-solving is a zero-sum game, arguably, but not everyone can win in war.
Yes, it does evoke unnecessary violence. If it's modern warfare and a guy is piloting a drone over a little screen and kills dozens of people, it's still war and violence even if he's not bayoneting them directly. If it's postmodern warfare and a guy is piloting a meme over a keyboard and kills hundreds of thousands of people, that's still war too. Ingenious war, but still war.
Could you provide the source for the Supreme Court agreement? It seems to me that forcing someone to get a vaccine would just as much violate their individual liberties, so I'm rather curious what issue the Supreme Court was specifically addressing.
Why should society be concerned with rehabilitating the most violent of offenders? To be a bit more specific, assume we are talking about a parent that has brutally beaten their child to death, or someone who has knelt on someone else's neck until death was inevitable. This person has killed someone, an action that is absolutely irreversible. Why should that person be allowed back in society? I don't want society spending time and money rehabilitating this person. In my opinion, this person has forfeited their right to live in society when they chose to take someone else's life.
To be clear, I'm not talking about non-violent crime or even most types of violent crime - I'm referring to the most violent of offenders. There are 7 billion people in this world; I think society will carry on just fine if we remove the tiny fraction of people who commit the most heinous of violent crimes (i.e., wanton murder).
So what does society get out of rehabilitating this person? Let's assume this person can add moderate value to society, such as being capable of working an average job decently well (thus bringing value to their employer, the customers they help, and greater society through taxes). Now weigh that value added against the fact that their victim will never re-enter society again. Is that value added worth it, and is it fair to their victim?
I am open to having my opinion changed on this topic so if you have a good argument for why we should be concerned with the most violent offenders, please do share and I will weigh what you say carefully. However, please make sure you are addressing the case of deliberate, unprovoked murder since my response is only addressing this form of crime.
Because society not only has to protect its weakest from criminals, but also has to protect itself from its own justice. If society only ever punishes, it begins to hurt itself at some point.
> I see it as a currently-necessary annoyance, as the least bad option... shutting down education until the pandemic's over is unfeasible.
Is it necessary though? Have you considered that there are ways other than testing for a student to demonstrate their knowledge of a subject? Projects, presentations, and writing all come to mind as effective ways to measure knowledge of a subject, and they do not require treating all students like cheaters because a few choose to cheat.
Yes, but my subject is maths :-). 1st and 2nd year engineering maths don't really have projects, presentations, or writing as options, as we mostly care about whether they know particular fundamental mathematical techniques and skills. All those options also have the problem of knowing who did the work.
From talking to remote students, I don't think they feel like they're being treated like cheaters. Instead, they seem happy we're making their study possible, and accepting of what they're asked to do. They know it's important that they can demonstrate unequivocally that they have particular skills.
My sister is still in school and the anti-cheating software gives her a lot of anxiety - not because she's a cheater or anything, but because it is well known that this software flags non-cheaters as cheaters. For example, she is not allowed to look around or talk to herself while working on a problem, both of which help her to demonstrate her knowledge effectively. If the goal of testing is to demonstrate a student's knowledge, then employing techniques that hinder a student's ability to do so in the hopes of catching cheaters is counterproductive to the original goal. After all, you want to know if she can apply fundamental techniques and skills - not whether she can apply those fundamental techniques and skills while behaving under a very strict set of rules.
Even in early level mathematics, there are plenty of opportunities to introduce word problems that can only be solved by applying the relevant techniques. As long as the teachers are defining these word problems themselves (rather than pulling them from an online resource), they stand as a pretty good guard against cheating since they require students to first recognize the technique that needs to be applied, and then to extract the relevant variables from the word problem to apply that technique.
Furthermore, in early level mathematics, you can still have students present solutions to problems and explain why the solution works. For instance, say you were interested in whether or not a student has grasped the basics of derivatives - simply get on a call with that student, give them a random function to differentiate, and have them do so in front of you.
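For example (the function here is arbitrary; any randomly generated polynomial would do), such an on-the-spot check might look like:

```latex
% The student is given a random function and asked to differentiate it live,
% explaining each step (here, the power rule applied term by term):
\[
  f(x) = 3x^{2} + 5x - 7
  \quad\Longrightarrow\quad
  f'(x) = 6x + 5
\]
```

Producing the derivative live, with a brief explanation of each step, demonstrates the skill at least as well as a proctored written answer would.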
These are all things I've quickly thought of that would be at least partially effective in measuring knowledge. I imagine any person with a career dedicated to instructing students could come up with many more options that could be even more effective.
> Instead, they seem happy we're making their study possible, and accepting of what they're asked to do. They know it's important that they can demonstrate unequivocally that they have particular skills.
Students are happy to be able to study and know that it's important to demonstrate their skills - but that doesn't mean they wouldn't be happier if they could demonstrate those skills without the invasive testing software. I'd argue this sets up a false choice: "you can either learn nothing at all, or learn under this anti-cheating software". But the reality of the situation is that they can still learn and demonstrate their skills without it.
I tend to agree that the world is generally more dull than people like to believe. However, I'd say that mostly applies to people trying to make ordinary events seem extraordinary. But in the case of covid, we already have an extraordinary situation.
I don't have any evidence that the CDC was deliberately withholding information (not to say it doesn't exist), but I do have evidence that medical leaders have felt it okay to lie to the public with regards to covid.
> In the pandemic’s early days, Dr. Fauci tended to cite the same 60 to 70 percent... And last week, in an interview with CNBC News, he said “75, 80, 85 percent” and “75 to 80-plus percent.”
> In a telephone interview the next day, Dr. Fauci acknowledged that he had slowly but deliberately been moving the goal posts. He is doing so, he said, partly based on new science, and partly on his gut feeling that the country is finally ready to hear what he really thinks.
> “When polls said only about half of all Americans would take a vaccine, I was saying herd immunity would take 70 to 75 percent,” Dr. Fauci said. “Then, when newer surveys said 60 percent or more would take it, I thought, ‘I can nudge this up a bit,’ so I went to 80, 85.”
I think this direct quote (from your article) best sums up his position:
> “We need to have some humility here,” he added. “We really don’t know what the real number is. I think the real range is somewhere between 70 to 90 percent. But, I’m not going to say 90 percent.”
He's not lying; like the rest of us, he doesn't know. Especially with the advent of the new variants.
Also, it's odd you're pointing out him giving information during an on-the-record interview as evidence that he's lying.
> Dr. Fauci acknowledged that he had slowly but deliberately been moving the goal posts... partly on his gut feeling that the country is finally ready to hear what he really thinks.
> Lying by omission, also known as a continuing misrepresentation or quote mining, occurs when an important fact is left out in order to foster a misconception. Lying by omission includes the failure to correct pre-existing misconceptions.
Dr. Fauci believed that the real range was somewhere between 70 to 90 percent as you pointed out; however, he gave lower estimates to the public because he didn't believe the public was "ready to hear what he really thinks". So not only was he failing to correct pre-existing misconceptions, but he was actively spreading misconceptions about how much of the population he believed needed to be vaccinated. As such, he was lying by omission.
I did a bit of googling, and from what I can tell, Dr. Fauci has been messaging the higher figures since at least 1 December 2020. So starting from the period just before vaccines were approved for emergency use, Dr. Fauci has been using the higher figures.
When Dr. Fauci was claiming lower figures: no vaccine had completed clinical trials; variants had not yet emerged; and governments were more willing to implement lockdowns, mask orders, and social distancing restrictions.
If he's guilty of anything, it is misleading the public into thinking he knows what the number is. Nobody knows what the real number is; there are so many confounding factors that it will take years of research to come up with a decent estimate of the real herd immunity threshold for Covid-19 immunisation.
This is a general problem in public scientific messaging. Science deals with uncertainty and nuance, but the fearful public seek certainty and simplicity. People in these positions don't always get the balance right (especially in a crisis), but I wouldn't impugn somebody's reputation on that basis.
So by your logic, it would be okay for a Twitter employee to modify a tweet from the president's account to declare war (or to do any number of things that would have very real repercussions in the real world)?
I take no issue with the modification. Trying to start a war is wrong, but that's true regardless of method. Giving someone food isn't a problem, giving someone food you know they are deathly allergic to on the other hand is. They would certainly be responsible for any damages they caused through malice or negligence, but they have the right to face those consequences.
It appears that reddit prefers that people believe it is the original author's words when they edit someone's post:
> "I messed with the “f*** u/spez” comments, replacing "spez" with r/the_donald mods for about an hour," Huffman said, indicating that the only thing he secretly altered was the target of the insults.
> We’re seeing a number of good questions regarding where our policies around public information, personal information, and harassment intersect. While we’re unable to comment on specific employment details, we do want to address a few of these questions, especially around what is or isn’t allowed to be posted. A few answers:
May we allow articles about an admin's personal and professional history?
Yes, articles are allowed to be posted on Reddit as long as they do not spread private information or invite harassment against others.
May we allow proper names of admins?
It depends on the context - posting of any personal information, including names, coupled with harassment of any sort may result in action by us. Some admins are public figures by virtue of their job, so those names are okay. Other employees may have chosen to explicitly link their usernames to their real life, that’s also okay. Some employees may have taken pains to not associate themselves with their specific usernames for safety reasons, in which case linking their names to their account is not ok.
Can we allow wikipedia pages if they mention the names of admins?
As long as it’s not being posted in conjunction with other rule breaking content, nor as a springboard for harassment.
If we approve this kind of content can we be banned?
We know mods make mistakes and it’s only a problem if we see it becoming a pattern. If we see that we will talk to you before further steps are taken. That said, we sometimes make mistakes too, as we did in this instance. When we do so, we will correct the situation as quickly as possible.
Nevertheless, there have been instances where mods have been removed from their positions or suspended over repeatedly ignoring site wide rules or encouraging others to break them.
Given that this person is a public figure, why is this standard in place? They ran for public office and have been covered in the media.
Our intent was never to remove any and all mentions of this admin’s name. Just an overzealous automation when attempting to prevent doxxing and harassment.
Ok, so why did you suspend the mod last night just for posting the name of an admin? (this is not a quoted question, but a sentiment we're still seeing here, so we wish to address it)
As we mentioned, this was an error on our part and quickly rectified with the mod team in question. We also communicated clearly with them while we were in the process of resolving this.
Source for the EARN IT article quoted above: https://cdt.org/insights/the-new-earn-it-act-still-threatens...