Unfortunately, when you access multiple accounts from the same set of IP addresses and browser signatures, you can bet that Google, Apple, Microsoft, and any other large company with that level of data collection have correlated all of those accounts to you. The company may lock them all if any one of them is suspected of "bad behavior".
That didn't stop people from throwing a fit over master-slave terminology in software (which has nothing to do with actual slavery), going so far as to rename long-standing development branches, as well as putting significant effort into removing such terms from the code itself and any documentation.
No; roughly, yes, depending on the metal. Fatigue works differently based on the crystal structure of the metal.
> The fatigue limit or endurance limit is the stress level below which an infinite number of loading cycles can be applied to a material without causing fatigue failure.[1] Some metals such as ferrous alloys and titanium alloys have a distinct limit,[2] whereas others such as aluminium and copper do not and will eventually fail even from small stress amplitudes.
Very few legislators have expertise in anything except demagoguery, pandering, and graft. Having more of them to form more subcommittees to mess up more areas of the law... no thanks.
We need merit-selected technical committees of non-representatives to advise politicians and tell them clearly, in as much detail as necessary, when they're wrong on something. If the politicians don't listen, the technical committees should be independent and able to make their case on the internet and social media.
Implementing that would be difficult. The metric for merit is a challenge, and is itself easily co-opted by politics. For example, China's vaunted "political meritocracy" is ultimately controlled by party leaders in the CCP, so it's basically a meritocracy for the CCP-aligned, not a meritocracy for anyone else. If a government's goals contradict the facts on the ground, the government will find a way to skew an "independent" technical committee to suppress those facts.
College algebra is just rehashed high school algebra.
Math is a broad subject. This is something LLMs are actually reasonably good at: ask them for textbook recommendations, and get into a dialogue about which sub-areas of math you're interested in and what level you're currently at, whether you want pure math (more theorem-proof focused) or applied math (more practice solving concrete problems, e.g. finding lots of derivatives and integrals). Toss in names of books that have been recommended and ask where they fit in to the LLM's other recommendations.
LLMs don't understand the math, but they're trained on a lot of discussions and recommendations for math books, and have a reasonably good sense of what level different books are at.
Download multiple recommendations in each area and try them all out. Seeing how different authors start out approaching largely the same material will help you conceptualize it better than just relying on a single approach. There's no universal "right" book to learn from. I wouldn't buy non-free textbooks without trying them out first.
YouTube has a lot of math lecture series, which can help if you're stuck on a particular point, but watching them isn't the same as doing problem sets yourself.
LLMs DO understand the math. At least Claude does. It seems to be able to solve linear systems, invert matrices, do a significant amount of calculus, and handle some seriously advanced math problems. I haven't probed the outer limits, but so far Claude has handled all the math problems I've given it.
Even if a deep-thinking LLM like Opus can get some math questions right when that depends on identifying the type of problem and applying a learned procedure, it's not going to be able to evaluate the pedagogy of math books it's never encountered, or that were at most fringe material in its training set.
I'm also referring to the faster models, not the slow and expensive deep thinking ones which I have little experience with. I don't see how reasoning would enable deep thinking models to meaningfully evaluate textbook pedagogy, either.
It seems pretty silly to make pronouncements about what LLMs are capable of doing if the sum total of your experience is casual use of the cheapest and least capable LLMs. (ChatGPT: measured IQ of about 70, vs. measured IQs of 120+ for more capable models, some of which are available for free).
They DO understand what they are doing. When I ask it to solve math problems, it goes through the several (many) steps involved (e.g. "apply the chain rule" while doing partial differentiation on a term in a Jacobian matrix). It gets pretty tedious when solving systems of linear equations, where it goes through each step of the Gauss-Jordan elimination while doing an LU decomposition, row by row, step by step, in absolutely ridiculous detail. But one learns to ignore the blah-blah. The point: they absolutely 100% understand what they are doing, and understand it in minute detail.
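For concreteness, the kind of work it narrates row by row is what a linear-algebra library does silently in one call. A minimal numpy/scipy sketch with made-up numbers (my illustration, not the model's output):

    import numpy as np
    from scipy.linalg import lu

    # An arbitrary 3x3 system, the sort of problem described above.
    A = np.array([[ 2.0,  1.0, 1.0],
                  [ 4.0, -6.0, 0.0],
                  [-2.0,  7.0, 2.0]])
    b = np.array([5.0, -2.0, 9.0])

    # LU decomposition with partial pivoting: A = P @ L @ U
    P, L, U = lu(A)

    # Solve the system; the row-by-row elimination happens inside this call.
    x = np.linalg.solve(A, b)
    print(x)  # [1. 1. 2.]

What the model does is spell out, in prose, every row operation that happens inside lu() and solve().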
It's clearly NOT regurgitating something that it has literally seen before, because the level of detail is beyond ridiculous for a human. It is applying generalized rules to specific concrete problems, and doing so with some level of strategic thinking.
Where did it learn those generalized principles, and how did it learn to do that? They certainly learned it from SOMEWHERE, and with absolute certainty there are math textbooks among the materials they were trained on, so: probably math textbooks. How did they learn to generalize and think strategically? Well, that's the big mystery, isn't it? But they do.
The very best models achieve high scores on Math Olympiad problem sets (so, competitive with some of the best minds on the planet). And Terence Tao (the greatest living mathematician) declares state-of-the-art models to be "better than most of my post-graduate students".
And what they can do is expanding by leaps and bounds on a weekly or monthly basis right now. It's hard to keep up. I frequently find that they can do things this week that they could not do a week or a month ago.
Startling, and quite utterly amazing.
Most of the time, I am using Claude Sonnet 4.5 as my coding agent, for which I pay $10/month. Measured IQ of 110, I think, or 120 if you flip it into thinking mode, and that only because there isn't enough undergraduate-level mathematics in a standard IQ test. Claude Sonnet 4.5 is also available for free here: https://claude.ai/chats (during periods of heavy load, it may fall back to simpler models). I often use the free web interface instead of the coding-agent interface for math problems, because it's easier to read mathematical equations in the browser version. And I generally use the free version of Claude instead of Google Search these days.
You're arguing things I didn't argue. The top-level comment wasn't about how to do some math problem that Sonnet or even Opus is capable of; it was about math book recommendations, and I was specifically mentioning that even though the LLM won't understand the math pedagogy behind why one book might be better than another, it's trained on enough commentary that it will give good recommendations (or anti-recommendations) for any well-known textbooks.
My experience with people who have LLM subscriptions of any kind is that they use them all the time and would immediately ask an LLM that kind of question, rather than asking on a web forum that's not even dedicated to math. So I think it's a fair presumption that someone asking that question doesn't have access to the best commercial models.
On the largely irrelevant question of what math LLMs can do, although Opus may do better, Sonnet can follow procedures sometimes but not consistently. It has blind spots and can't scale procedures; beyond certain numbers or dimensions or problem complexity, it just guesses (wrong). And those limits are quite low. If you want 2 simple examples:
LLMs follow procedures, but whimsically. Better LLMs will be less whimsical, but they still won't be fully competent unless they digest questions into more formal terms and then interface with an engine like Wolfram.
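As a sketch of what that hand-off could look like, here's the idea with sympy standing in for a Wolfram-style engine (the choice of sympy and the example integral are mine, purely illustrative):

    import sympy as sp

    # The "formal terms": a symbolic expression, not prose.
    x = sp.symbols('x')

    # The engine evaluates the integral exactly; nothing is guessed token by token.
    result = sp.integrate(sp.exp(-x**2), (x, 0, sp.oo))
    print(result)  # sqrt(pi)/2

The LLM's job is reduced to translating the question into that formal object; the engine does the part where whimsy isn't tolerable.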
Symptoms? Is it limited to when a site has Cloudflare's more aggressive protection turned on? I haven't noticed any problems I've attributed to Cloudflare, and I use Firefox exclusively.
I have the more restrictive protections turned on. With just the loose settings the captcha completes, but advanced fingerprinting protection, for example, breaks captcha completion.
This matches my experience as well. As a FF user, I very occasionally encounter problems, but these don't seem to be correlated to their using CF protections. Much more often I find sites broken that rely on cloud domains with bad reputations, which my DNS filters block.
I was actually wondering if the stuff that Mozilla's talking about here will be used by bad bot people to try to circumvent CF's abuse protections. As I recall from when I was working with them, CF's service relies in part on being able to identify botnet attacks by doing its own fingerprinting.
Open computing still exists. It's just overshadowed by the prevalence of locked-down mobile devices, because those are convenient and good enough for the vast majority, who would rather use them than a less convenient desktop, laptop, or even a Raspberry Pi.
Surveillance on the internet is challenging to avoid, but internet surveillance and tracking doesn't extend to (outside-of-browser) local compute.
Are you confusing their comments about (paraphrased) "horrible but legal" (up to a point) sites like dailystormer, 8chan, and kiwifarms, with actual blatant phishing sites?
I find it very difficult to believe they won't remove sites involved in clear phishing or malware delivery campaigns, if they can verify it themselves or in cooperation with a security team at a company they trust. That's different from sites that are morally repugnant and whose members spew vitriol, but aren't making any particular threats (and even in cases where there are clear and present threats, CF usually seems to prefer to notify law enforcement, and then follow court orders, rather than inject themselves as a 3rd party judge into the proceedings).
>I find it very difficult to believe they won't remove sites involved in clear phishing or malware delivery campaigns, if they can verify it themselves or in cooperation with a security team at a company they trust.
You may find it difficult to believe, but it's true. Tons of phishing and malicious websites use CF nameservers to prevent DDoS attacks and the like, and Crimeflare will not terminate their access or accounts when reported, for the reason I stated above. Even if it's something obvious like coinbase-account-login.com, they do not give a fuck.
A lot of phishing, malware, and DDoS (booter) sites use CF with the WAF enabled, so tools like urlscan and abuse.ch cannot connect to check for phishing or run a scan.
This isn't true about Daily S. They have been actively working towards, and expressly proposing, a new Holocaust for decades now. In what way are they not an existential threat to Jews, or to LGBTQ people?
6.6 kW, for: COP 4, T₁ − T₀ = 30 K (the lower value, for a warm climate), an allowable 30-minute heating time, and 50-gallon capacity. A cold climate could double that power requirement, or alternatively double the heating time.
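For reference, the relation behind those numbers is roughly P_electric ≈ m·c·ΔT / (COP · t), with m the mass of water in the tank, c ≈ 4.19 kJ/(kg·K), ΔT the temperature rise, and t the allowed heating time. Taking COP as fixed, doubling ΔT (or halving t) doubles the electrical draw, which is where the cold-climate caveat comes from.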