In 1985, a microwave was affordable and portable, useful if you had to do a rapid flit. If a house was set up for gas, then the stove (which was typically left behind in a house) was gas.
If you were illegally occupying a house scheduled for demolition, with only electricity and a judge's order to quit hanging over your head, what would you do? Go and buy a full oven, or cook in a microwave?
I don't have a microwave now, but I remember having one, and I never managed to use it intuitively to iterate on my cooking.
Meanwhile, throwing stuff in the pan, moving it around, adjusting the temperature, and adding things as it goes is a much more interactive style of cooking, and far more likely to take me where I want to go (tasty food).
It depends what you're trying to make. There are two things I almost always cook from scratch in the microwave: trifle sponge (because I don't care if the sponge cake comes out a bit dry and heavy, since I'm about to break it all up, mix it with diced fruit, and pour some sherry and quite a lot of jelly over it) and onion paste for curry.
If you want to make curry from scratch you can either do the whole thing in one pan and get "homestyle" curry - which is good - or you can make an onion paste, either by cooking a very mildly spiced but ultimately rather bland onion soup for an hour to make the "base gravy", or by just chopping three or four onions and sticking them in the microwave on full blast for ten minutes before mooshing them with the hand blender.
Then you just bloom your spices in a bit of oil, chuck in some garlic and ginger paste (literally about the same amount of peeled garlic cloves and peeled ginger root mooshed up with the blender in a little oil and water) and let it bubble a bit, chuck in whatever veg and meat you're adding, and then slowly start adding your onion gloop, and boom, restaurant-style curry.
If you make the garlic and ginger paste in advance, and precook the meat a little (beef kind of wants to be stewed until it's tender, and then you can fire in the stock it's stewing in) then you can knock out an incredibly tasty curry in the same amount of time it takes to cook the rice.
And that's how restaurants do it, because you're not going to wait two hours for a homestyle curry to cook off properly.
I'd wager a lot of money that the huge majority of software engineers are not aware of almost any of the transformations an optimizing compiler performs, especially after decades of growth in languages where most of the optimization is done by a JIT rather than by a traditional ahead-of-time compiler.
The big thing here is that the transformations maintain the clearly and rigorously defined semantics such that even if an engineer can't say precisely what code is being emitted, they can say with total confidence what the output of that code will be.
> the huge majority of software engineers are not aware of almost any transformations that an optimizing compiler does
They may not, but they can be. Buy a book like "Engineering a Compiler", familiarize yourself with the optimization chapters, study some papers and the compiler source code (most compilers are open source). Optimization techniques are not a spell locked in a cave under a mountain waiting for the chosen one.
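To make that concrete, here is a toy sketch (in Python, not any real compiler) of constant folding, one of the simplest transformations those chapters cover: the optimizer changes the shape of the expression, but a reference evaluator confirms the meaning is unchanged.

```python
# Toy constant folding. Expressions are tuples:
# ('const', n), ('var', name), or ('add'/'mul', left, right).

def fold(expr):
    """Recursively replace subtrees whose operands are all constants."""
    op = expr[0]
    if op in ('const', 'var'):
        return expr
    l, r = fold(expr[1]), fold(expr[2])
    if l[0] == 'const' and r[0] == 'const':
        val = l[1] + r[1] if op == 'add' else l[1] * r[1]
        return ('const', val)
    return (op, l, r)

def evaluate(expr, env):
    """Reference semantics: what the expression means, unoptimized."""
    op = expr[0]
    if op == 'const':
        return expr[1]
    if op == 'var':
        return env[expr[1]]
    l, r = evaluate(expr[1], env), evaluate(expr[2], env)
    return l + r if op == 'add' else l * r

# x * (2 + 3)  ->  x * 5: the shape changes, the meaning does not.
e = ('mul', ('var', 'x'), ('add', ('const', 2), ('const', 3)))
folded = fold(e)
assert folded == ('mul', ('var', 'x'), ('const', 5))
assert evaluate(e, {'x': 7}) == evaluate(folded, {'x': 7}) == 35
```

Real compilers do this over IR with far more care (overflow, side effects, floating point), but the contract is the same: the transformed program must agree with the reference semantics on every input.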
We could always verify the compiler that way, but it's costly. Instead, we trust the developers, just as we trust that a restaurant's chefs are not poisoning our food.
They can't! They can fairly safely assume that the binary corresponds correctly to the C++ they've written, but they can't actually claim anything about the output other than "it compiles".
Yeah, the pervasiveness of this analogy is annoying because it's wrong: a compiler is deterministic and tends to be a single point of trust, rather than a crowdsourced package manager or a fuzzy machine-learning model trained on a dubiously curated sampling of what is often the entire internet. But it's also hilarious, because it's a bunch of programmers telling on themselves. You can learn, at least at a high level of abstraction, what a compiler is doing with some basic googling, and a deeper understanding is a fairly common requirement in undergraduate computer science education.
Don't get me wrong, I don't think you need (or should need) a degree to program. But if your standard for which abstractions to trust is "all of them; it's perfectly fine to use a bunch of random stuff from anywhere, without the first clue how it works or who made it", then I don't trust you to build stuff for me.
I think you're mistaken on that. Maybe me and the engineers I know are below average on this, but even our combined knowledge of the kinds of things _real_ compilers get up to probably only scratches the surface. Don't get me wrong, I know what compilers do _in principle_. Hell, I've even built a toy compiler or two. But the compilers I use for work? I just trust that they know what they're doing.
Not in any great detail. Gold vs. ld isn't something I'd bet most programmers know rigorously, and that's fine! Compilers aren't deterministic, but we don't care, because they're deterministic enough. Debian started a reproducible-builds project in 2013 and, thirteen years later, we can maybe have that happen if you set everything up juuuuuust right.
They also realize that adding two integers in a higher level language could look quite different when compiled depending on the target hardware, but they still understand what is happening. Contrast that with your average llm user asking it to write a parser or http client from scratch. They have no idea how either of those things work nor do they have any chance at all of constructing one on their own.
Valid question. Mostly because I wanted linking and couldn't be bothered to look up VLOOKUP. Naturally, the first alternative approach I considered was to build a terminal app.
This sounds like almost the best business environment for criminals.
"I am sorry judge, yes, it could be that we are involved in crime, but we have been too busy counting billions of dollars each year. As you might understand, businesses are not part of society, they should only be judged on their shareholder value. We reap the profits, society pays for the collateral damage, that's only fair."
Yes, you mentioned leeway. That would only make sense in the context of an entity understanding its role. It does, as described above.
For what reason should we allow such leeway? No hosted platform in the 80s was responsible for a similar amount. Maybe if Meta can't properly police such a large platform it shouldn't be allowed to operate one. Facebook doesn't have to exist and we don't have to accept weak cries of "it's our best effort!"
Why shouldn't we? It seems an incredibly difficult problem. They have reviewers who make subjective calls on subjective rules. The leeway not only gives the opportunity for the user to improve but also gives the reviewers leeway to flag borderline posts without harshly punishing users.
17 is a weird number but having a number is perfectly reasonable to me.
It's also possible that users could misuse the reporting system, in order to get other users' accounts suspended.
Requiring N distinct reports of a suspension reason would seem to reduce misuses of the reporting system.
The 17-reports threshold might have been found to balance type-1 and type-2 errors, as account removals are costly actions when made in error or as a result of reporting-system misuse.
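As a toy illustration of that trade-off (all numbers invented, and a real system is surely far more complex than this): requiring many independent reports drives the false-suspension rate down very fast while barely affecting posts that clearly violate the rules.

```python
from math import comb

def p_suspended(n_viewers, threshold, p_report):
    """Probability that at least `threshold` of `n_viewers` independent
    viewers file a report (simple binomial toy model, made-up numbers)."""
    return sum(comb(n_viewers, k) * p_report**k * (1 - p_report)**(n_viewers - k)
               for k in range(threshold, n_viewers + 1))

# An innocent post seen by 1000 people, each with a 0.1% chance of filing
# a spurious or malicious report, vs. a violating post with a 5% report rate.
innocent = p_suspended(1000, 17, 0.001)
violating = p_suspended(1000, 17, 0.05)
assert innocent < 1e-10   # innocent posts almost never cross the threshold
assert violating > 0.999  # clear violations still get flagged reliably
```

Under these (invented) rates, a threshold of 1 would suspend most innocent posts that go viral, while 17 makes coordinated false reporting the only realistic attack, which is presumably why the independence of reports matters as much as the count.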
79% of ALL child sex trafficking. 4 out of 5 child sex slaves exist thanks to Facebook's policies.
But sure, go on and talk about "leeway" and "limited capabilities" for a company worth nearly a trillion dollars. Do you honestly believe this is acceptable? What are your vested interests here?
Since you're emphasizing the ALL, I am obligated to nitpick that it is not all. The source article says that, but it's wrong; the underlying link clarifies that it's 79% of sex trafficking which occurs on social media. As has been discussed downthread, a social media platform with large marketshare is always going to have a large percentage of every bad thing that can happen on social media.
Do you have a citation for that? You may be right for all I know. I don't know much about it. But that seems unlikely to me, and if it's true, I'd like a reference I can show others when I'm trying to get them to finally close their account.
> [the report] found that 65% of child sex trafficking victims recruited on social media were recruited from Facebook
Even in 2020, I'm very skeptical that so many children were on Facebook that it could account for 2/3 of recruitment. My own kids say that they and their friends are all but allergic to Facebook. It's the uncool hangout for old people, not where teens want to be.
I may be wrong, and I'm certainly not going to tell someone that they're wrong for citing a government study. Still, I doubt it.
The number is wrong / the citation is misleading. It’s closer to 20-30% according to that study, the 79% is referring specifically to cases involving social media, of which Meta platforms are obviously going to make up a large percentage.
There’s also a reporting bias here I’m sure - if Meta is better at reporting these cases then they will become a larger percentage, etc.
You don't really need a majority of potential victims to go to location X for victims from location X to make up a majority of victims; that just means that location X is a low-risk, high-reward place for criminals to lurk looking for victims.
Thanks for looking into it and pulling out that quote. I notice there are some moving goalposts: the parent article claims 79% of _all_ minor sex trafficking (emphasis mine), but the govt report found
> 65% of child sex trafficking victims recruited _on social media_ were recruited from Facebook, with 14% being recruited on Instagram
(Emphasis mine.) I think the parent article is repeatedly lying about the facts, and that's super annoying. I'm not at all surprised that Facebook and Instagram have the lion's share of social-media victims, because they also have the lion's share of social media users.
> 4 out of 5 child sex slaves exist thanks to Facebook's policies.
Even if your 79% number is correct, this does not follow. It's like if someone, 30 years ago, had noted that 95% of all print advertisements ran in the classified section and concluded that 9 out of 10 retail sales happened thanks to the classifieds.
(I’m not trying to excuse Facebook’s behavior. But maybe criticisms of Facebook would be more effective if they stayed on track.)
I’m not nitpicking a weird edge case. I’m nitpicking a completely unsound inference. Even if Facebook indeed accounts for 79% of total instances of children being trafficked, it does not follow at all that removing Facebook from the picture would have reduced the number by anywhere near 79%.
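A toy model makes the unsoundness concrete (all numbers invented, and the assumption that offenders simply follow the users is itself a strong one): if incidents track user share, removing the largest platform mostly redistributes the share rather than eliminating the incidents.

```python
# Hypothetical user shares (per 1000 users); no relation to real data.
users = {'platform_a': 790, 'platform_b': 140, 'platform_c': 70}

def incident_share(users):
    """If incidents are proportional to reach, share of incidents
    equals share of users, regardless of causal contribution."""
    total = sum(users.values())
    return {p: n / total for p, n in users.items()}

before = incident_share(users)
assert round(before['platform_a'], 2) == 0.79  # "79% of incidents"

# Remove the dominant platform: activity redistributes to the rest.
remaining = {p: n for p, n in users.items() if p != 'platform_a'}
after = incident_share(remaining)
assert round(after['platform_b'], 2) == 0.67   # next platform now "dominates"
```

Under full substitution the total number of incidents is unchanged; the 79% figure measures where the incidents happen, not how many would disappear if that venue did.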
Nobody in Salem wanted to be seen to stand up for witches.
I have never had a Facebook account because I never liked what they do, but this 'evidence' against them seems like they are relying on the seriousness of the allegations more than the accuracy.
You are saying that from our perspective. I don't think the argument that witches are not real would have gained you much ground back then.
We don't have the years of analysis of what actually happened for things happening right now.
While a lot of people feel a lot of certainty about all manner of social media harms, the scientific consensus is much less clear. Sure, you can pull up studies showing something that looks pretty bad, but you can also find studies that say climate change is not occurring. The best we have to go on is scientific consensus, and the consensus is not there yet. How do you tell whether Jonathan Haidt is another Andrew Wakefield?
I'm not making any claims of certainty. I have not published any books making claims of harm. I have not gone on a world tour of interviews trying to shape public opinion instead of building scientific consensus that the information is true.
That's how I know.
I also don't go around talking about race based differences in IQ, but that's just Haidt.
Like, I'd think that was a bad policy for murder in particular, but "we don't allow things but we give you a lot of chances to correct your behavior" is ordinary.
Nonsense. There are lots of things that you need more than three strikes for, especially on a platform that you expect to use for decades.
I'm not here to say that Facebook's enforcement behavior is optimal, and I don't know that a "17 strike policy" is a full description of their enforcement behavior. But there are plenty of behaviors that you want to discourage but not go nuclear about.
The area where I've seen the most homegrown implementations of things like this is HFT, with the caveat that those systems are also designed to be distributed, integrated with isolation systems, start/stop dependency graphs...
I once worked for a company which chose to use Kubernetes instead; they regretted it.