> Captivity: The monkey is likely being kept in captivity, as evidenced by the harness and the lack of natural surroundings.
> Animal Abuse: The image raises concerns about animal welfare. The monkey's tethering and the apparent lack of suitable living conditions could indicate potential abuse.
> Exotic Pet Trade: The monkey might be part of the illegal exotic pet trade. Many countries have strict regulations against keeping wild animals as pets, especially primates.
The article claims this means the AI detected the abuse perfectly, but I don't think it did. It points out things that could be signs of abuse (I'm guessing the prompt primed Gemini to look for them), but signs of abuse alone wouldn't be enough to delete a video, especially if the clip were part of, say, a documentary.
The AI missed two things: that the monkey's leash wasn't just a restraint but a form of deliberate torture, and that the purpose of the video is to make a display of the abuse (as opposed to the abuse being an incidental result of bad living conditions). Those are the things you'd really want to flag for deletion, and I think current AI is unlikely to find them without some deliberate prompt engineering.
I do think we're moving towards platforms taking more and more responsibility for this kind of shit, and they should. But it's naive to assume there's some product manager at Google somewhere who is aware of the millions of animal abuse videos on the platform and went "oh no, we should preserve them, they bring in ad revenue". That shit is niche.
I realize this isn't something people want to think about, but that's precisely why these people are doing what they do unchecked. The most surprising thing is that they generally live pretty normal lives, and those close to them don't know they're running this industry.
Probably not a "cut corner," as that's unfortunately normal[0][1] (people have been trying to change it for decades now).
Happily Pfizer and Biontech have had extensive clinical trials ongoing for pregnant women since February 2021[2][3], and there was a preliminary analysis done across 35,691 recipients that had good findings[4]. It's something that regulators have paid a lot of attention to, with tens of thousands enrolled in the V-safe COVID-19 Vaccine Pregnancy Registry.[5]
Here's a short list of some aspects of the vaccine trials that most people would find surprising and out of step with how safety is described:
1. More people in the vaccinated arm died than in the unvaccinated arm.
Effectiveness against death = negative. This was ignored because, they said, the difference was too small to be statistically significant. That's not a logical way to use the concept of statistical significance. What these results meant is that the vaccines might kill more people than they save, or it might be a statistical fluke. A normal person would expect such a result to drive demand for more data to resolve the question definitively, but that didn't happen and now heavily vaccinated countries have non-COVID excess death that started at the time of the rollout. The goal of the vaccines was to save lives but now people are dying of non-COVID causes at a greater rate than expected.
2. At least one of the deaths in the Pfizer trial was advertised as non-vaccine related even though it was.
Anonymized study subject C4591001 1162 11621327 was found dead in his apartment several days after taking the first dose; the likely time of death was only one or two days after the first dose, given that police were called for a welfare check and his body was found cold. The coroner didn't know he was in a vaccine trial and ruled the death cardiac/arteriosclerosis-related; no autopsy was done, and this report was then presented as evidence that the death wasn't vaccine related, despite the obvious temporal association.
3. No studies of the effects on pregnancy at all. This is normal and is meant to protect babies.
Now we have the birthrate figures for the first quarter 9 months after the vaxx rollout reached mothers of childbearing age and they are down 15%-25% which is a huge difference, quite unprecedented. In Taiwan births are down 27%! So it looks a lot like the vaccines have trashed our already low fertility rates, which is a catastrophic outcome especially as they should never have been administered to women of that age to begin with (look at excess mortality by age for before the vaxx rollout, there's none under 45 for all of Europe and in some places like Sweden, none under 75).
4. The placebo arm received another vaccine, not saline as you would expect.
This is because the bad reactions would unblind people otherwise, so you have to give people something that will give them equally bad reactions. There are two problems with this: (a) the counter-factual in reality is not some random other vaccine but rather no vaccine at all, so they weren't testing against what would actually happen in the real world, and (b) although the trials are advertised as the pinnacle of scientific rationality there is absolutely nothing rational about the placebo effect. Think about it.
5. Cases of severe cardiac damage were fraudulently recorded.
Study subject C4591001 12312982, now known to be a 35 year old Argentinian lawyer named Augusto Roux, started to immediately feel unwell and developed a high fever on his way home after taking his second dose. A couple of days later he fainted and went to hospital where a CAT scan revealed heart inflammation; the doctor concluded vaccine damage. Augusto was told by nurses that there had been a huge influx of patients coming to the hospital from the trial. One nurse estimated maybe 300 people which would have been 10% of the patients from that part of the trial alone, which is why they were able to quickly identify the likely cause.
When he contacted the trial operators to inform them of his hospital visit, they recorded it as not vaccine related, contradicting the hospital's diagnosis, and wrote that he'd been admitted for "bilateral pneumonia". Later they updated the diagnosis to COVID-19, which then wasn't even counted against vaccine effectiveness because he had a negative test.
Even worse, Roux appealed through the regulator to get the trial operators to unblind him (which they falsely claimed they could not do). Immediately before the appeal was due to be heard, the lead trial doctor (a pediatrician!) noted in his trial record that Augusto was mentally ill, due to supposed "anxiety". No actual medical work was done to establish this diagnosis. It simply appeared.
it does work though, because the sandwichers will just take the delta between the accepted amount out and the actual expected amount out, front-run that, then back-run the bulk of the trade by selling that delta on top of it, benefiting from the price impact that the sandwiched trade created
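Mechanically, that delta is the victim's slippage allowance. A toy sketch, assuming a fee-less constant-product (x·y = k) AMM; all pool sizes and trade amounts are made-up illustrative numbers:

```python
def swap_out(in_reserve, out_reserve, amount_in):
    """Output of swapping into a constant-product pool (no fees)."""
    k = in_reserve * out_reserve
    new_in = in_reserve + amount_in
    new_out = k / new_in
    return out_reserve - new_out, new_in, new_out

# Pool: 1000 X / 1000 Y. Victim swaps 50 X with a 5% slippage tolerance.
x, y = 1_000.0, 1_000.0
victim_in = 50.0
quoted_out, _, _ = swap_out(x, y, victim_in)
min_out = quoted_out * 0.95                 # the "accepted amount out"

# Front-run: attacker buys first, moving the price against the victim.
front_in = 20.0
atk_y, x, y = swap_out(x, y, front_in)

# Victim's trade executes at the worse price, but still clears min_out,
# so it goes through.
victim_out, x, y = swap_out(x, y, victim_in)
assert min_out <= victim_out < quoted_out

# Back-run: attacker sells back into the price the victim's trade pushed up.
back_x, y, x = swap_out(y, x, atk_y)

profit = back_x - front_in                  # attacker keeps the difference
```

The attacker's front-run size is bounded by the victim's tolerance: push the price so far that `victim_out` drops below `min_out` and the victim's transaction reverts, leaving nothing to back-run.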
I'm far from an expert on this, you probably know better than I do, but wouldn't forking the blockchain at the block height you're interested in with Ganache and executing it there be enough? Or is the issue the transaction queue?
I'm curious about exploring this myself, would you mind explaining in more detail what is lacking after forking the main net with ganache?
Is it that you need a high-powered server to fork it continuously, and that smart contracts tend to cost a lot to execute, or is there more to it than that? I don't understand which contracts and variables you're targeting that wouldn't be covered by the solution we're discussing.
You need an archival node to store all the historical states, and also to monitor unconfirmed transactions. The primary client, go-ethereum (geth), creates archival nodes that grow about 1 TB per year (~7 TB now).
Despite archival nodes not being needed for consensus, some people's kneejerk response is that this means Ethereum is not sustainable, but really geth's storage layout is just suboptimal.
There are other clients out there that have chosen different data structures and reduced this by an order of magnitude. Turbo-geth, for example, can sync an archival node at about 1 TB and is working on other improvements.
There is more support for geth at the moment, but this is also an open area of development if you want something to improve upon.
It's likely many nodes out there are using their own optimizations that are not offered to the community.
I've had the opposite experience too. I have an Nvidia card on Linux, and Wayland on Fedora has always worked very well for me. By comparison, Xorg on Ubuntu was a nightmare on the same machine.
you completely missed the point. The fact that the squeeze fails means it fails to squeeze the hedge fund's shorts, and the hedge fund is the one that would be coughing up billions to pay all the retail traders. So yes, the retail traders buying shares at higher prices will indeed lose actual real money if the short squeeze fails. And it will be due to what is likely market manipulation (i.e., screwing over its clients) by Robinhood.
Scenario 1:
1. Client buys share for $400
2. Hedge funds get short squeezed (because trading is not restricted by Robinhood)
3. Stock shoots up to $1000. Client made $600
Scenario 2:
1. Client buys share for $400
2. Robinhood restricts buying the shares (so the short squeeze doesn't happen)
3. Shares plunge to $50 because the short squeeze failed to happen (ie the hedge fund did not have to purchase the shares at higher prices, which is where the capital that the traders would share amongst themselves would come from)
4. Client lost $350
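Spelling out the arithmetic (all prices are the hypothetical ones from the two scenarios above):

```python
buy_price = 400  # client's entry price in both scenarios

# Scenario 1: no restriction, squeeze happens, stock runs to $1000.
squeeze_exit = 1_000
pnl_squeeze = squeeze_exit - buy_price    # client makes $600

# Scenario 2: buying restricted, squeeze fails, stock falls to $50.
failed_exit = 50
pnl_failed = failed_exit - buy_price      # client loses $350
```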
comboy is saying that the lottery ticket buyer is being kept from participating in a winning strat, which is what's happening to retail investors right now, which is the same thing you outlined. You're both saying the same thing. Not downvoting you btw.
it's not the same, because many retail traders do lose money when the short squeeze fails to execute. They don't just fail to make money; they buy shares at $300 and have to sell them at $100.
It turns out that the benchmarks for M1 vs latest generation Intel & AMD CPU's are indeed overblown, and it is an incremental improvement more than a great leap forward.
The source of the confusion has been the benchmarking software. To saturate one core on an Intel processor you need to run two threads, because that's the way they are designed. So the single thread benchmarks that have been used so far have been using 50% of the capacity of an Intel CPU core and comparing it with 100% of the capacity of an M1 CPU core.
I didn't until you linked to it, but having read it, it still appears that comparing single thread to single thread is not very accurate. Neither would core to core, considering single-thread performance does carry some weight in real-world workloads.
I guess at the very least CPU benchmarking software should offer a "thread to thread" benchmark alongside a "core to core" benchmark, or something along those lines.
That would be in the spirit of having benchmarks reflect real-world usage.
Eh? I mean, by the same token you could say that single-core benchmarks of Intel chips are invalid, because you need eight threads or whatever to saturate a POWER7's multi-way SMT.
The main purpose of single-threaded benchmarks is to approximate performance for things which are actually single-threaded.
Of course, the scheduler can easily be told to run both benchmark threads on the two hyperthreads of a given core.
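On Linux, that pinning can be done from inside the benchmark process itself; a minimal sketch using os.sched_setaffinity. The mapping of logical CPU ids to SMT siblings is machine-specific (/sys/devices/system/cpu/cpuN/topology/thread_siblings_list reports it), so this just pins to the lowest id currently allowed:

```python
import os

# Current allowed CPUs for this process (pid 0 = the calling process).
allowed = os.sched_getaffinity(0)

# Pin to a single logical CPU. To saturate one physical core via SMT,
# you'd instead pin the two benchmark threads to a sibling pair, as
# reported by /sys/devices/system/cpu/cpu0/topology/thread_siblings_list.
one_cpu = min(allowed)
os.sched_setaffinity(0, {one_cpu})
assert os.sched_getaffinity(0) == {one_cpu}

# Restore the original mask so the rest of the process is unaffected.
os.sched_setaffinity(0, allowed)
```

Note these calls are Linux-only; on other platforms you'd reach for tools like taskset or the OS's native affinity APIs.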
The argument to be made is that because of the resources dedicated to SMT, single thread on AMD/Intel versus single thread on Apple is not measuring the true potential performance of the whole core. In principle, some multithreaded workload over all available threads could be a better metric for whole-processor performance.
Fair enough.
But if you're comparing single threaded performance, there isn't a reason to split the workload into two threads for AMD/Intel and have it as a single thread for Apple.
For instance, if I had a single-threaded application, or a single-threaded critical path in a multithreaded application, single-threaded performance is exactly what would matter.
Benchmarking multicore Performance+Efficiency (Apple) versus SMT (AMD) versus SMT+Wide vectors (Intel) is never going to provide perfect apples-to-apples comparisons. There's an entirely reasonable argument that single-thread performance is oversold as a metric, and that the focus on it advantages some platforms over others.
At the end of the day, benchmarks are inherently only an approximate measure of how real-world code will perform. SMT, basically by definition, is rarely going to benchmark well, but is inherently going to show more of a benefit when running real-world mixed workloads.
if only they'd do this for apps accessing the microphone, or do the same thing as they do with the camera, making the application visibly request access
You wouldn't have had any reason to believe the NSA had access to Google traffic back when they did, if Snowden hadn't let you know. So why choose to assume they don't rather than that they do?