What's the smallest possible program that accepts a chess board state and prints any legal move? True randomness may only be worth a couple hundred Elo, but then, that's pretty big for golf.
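Not a golf entry, but a minimal sketch of the "random legal move" bot, assuming the third-party python-chess library handles board parsing and move generation (the `random_move` helper name is mine):

```python
# Sketch of a random-legal-move chess bot, assuming python-chess
# (pip install python-chess) for FEN parsing and move generation.
import random

import chess

def random_move(fen: str) -> str:
    """Return a uniformly random legal move in SAN, or 'resign' if none exist."""
    board = chess.Board(fen)
    moves = list(board.legal_moves)
    if not moves:
        return "resign"  # checkmate or stalemate: no legal moves left
    return board.san(random.choice(moves))

print(random_move(chess.STARTING_FEN))  # e.g. "Nf3"
```

Golfing it down is left to the reader; the point is that "accept a state, emit any legal move" is only a few lines once move generation is outsourced.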
The program that resigns every time unfortunately does a lot worse than random. But it depends on the population it's pitted against - it should at least pick up a few points against copies of itself.
Perhaps playing 1. e4, 2. Bc4, 3. Qh5, 4. Qxf7# (and resigning or offering a draw if some move isn't legal) would min-max this further.
The problem isn't really well defined. An Elo rating is assumed to be determinable independent of which opponents you face, so scoring 50% against opponents rated 1800 gives you the same information as scoring roughly 24% against opponents rated 2000. In practice that's obviously not completely true, and for degenerate examples like the ones we're discussing it falls apart completely.
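That equivalence comes straight from the standard logistic Elo expected-score formula, under which a 200-point deficit corresponds to an expected score of about 24%. A quick sketch:

```python
# Standard (logistic) Elo expected score: the share of points a player
# rated `r` is expected to score against an opponent rated `opp`.
def expected_score(r: float, opp: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((opp - r) / 400.0))

# An 1800 is expected to score 50% against 1800s and about 24% against
# 2000s; the rating model treats both results as the same evidence.
print(expected_score(1800, 1800))            # 0.5
print(round(expected_score(1800, 2000), 3))  # 0.24
```

The degenerate bots break this because their results aren't even approximately logistic in the rating difference.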
Specific fields may not advance for decades at a time, but we are hardly in a scientific drought. There have been dramatic advances in countless fields over the last 20 years alone and there is no good reason to expect such advances to abruptly cease. Frankly this is far too pessimistic.
I don't understand what is wrong with pessimism. That's not a valid critique. If someone is pessimistic but his description of the world matches REALITY, then there's nothing wrong with his viewpoint.
Either way this is also opinion based.
There hasn't been a revolutionary change in technology in the last 20 years. I don't consider smartphones to be revolutionary. I consider going to the moon revolutionary, and catching a rocket sort of revolutionary.
Actually, I take that back: I predict Mars as a possible breakthrough, along with LLMs, but we got lucky with Musk.
You imply your view "matches REALITY", then fall back to "Either way this is also opinion based." Nicely played. But the actual reality is that scientific discovery is proceeding at least as fast as it ever has. These things take time. 20 years is a laughably short time in which to declare defeat, even ignoring the fact that genetic and other biological tech has advanced by leaps and bounds in that time. There's important work happening in solid state physics and materials science. JWST is overturning old theories and spawning new ones in cosmology. There's every reality-based reason to believe there will be plenty of big changes in science in the next 20 years or so.
No, your opinions bias toward negativity, and we can see it in this comment by the way you shift the goalposts for every achievement until you can poo-poo it. Oh, except for the ones you just omitted from your quote, maybe because even you can't rationalize why CRISPR isn't a step change.
>No, your opinions bias toward negativity, and we can see it in this comment by the way you shift the goalposts for every achievement until you can poo-poo it. Oh, except for the ones you just omitted from your quote, maybe because even you can't rationalize why CRISPR isn't a step change.
Not true at all. CRISPR isn't a step change because it only made genetic engineering more efficient, and it didn't affect the lives of most people. It's still a research thing.
I didn't poo-poo AI, did I? That's the favorite thing for everyone to poo-poo these days, and ironically it's the one thing that affects everyone's life and is causing paradigm-shifting changes in society right now.
CRISPR "only made genetic engineering more efficient" which is no big deal. Smartphones don't count though, despite both requiring scientific breakthroughs in multiple fields and turning society upside down, because... reasons. Your standards are incoherent.
BTW, for someone who claims not to poo-poo AI, I find it hilarious that you still don't think we're due for another breakthrough or two in that area in the next decade or so. I hate the current genAI craze and I still think that's coming.
It's no longer a breakthrough because the breakthrough already happened. Everything subsequent to LLMs is an incremental optimization, not a breakthrough. Even if some breakthrough occurs, it will be dragged through shit and ridiculed for being overused to generate slop.
Smartphones required zero breakthroughs; they're just existing technology made smaller and more efficient. What changed is how we use technology. Under your reasoning, dating apps would be a breakthrough.
genetic technology and computing technology have been the biggest drivers for a while. i do think it is remarkable to video call another continent. communication technology is disruptive and revolutionary though it looks like chaos. ai is interesting too if it lives up to the hype even slightly.
catching a rocket is very impressive, but its just a lower cost method for earth orbit. it does unlock megaconstellations tho
Yeah, none of those are step-function changes. Video calling another continent is a tiny step from TV. I already receive video wirelessly on my TV; I'm not that amazed when the distance is stretched further with a call that has video. Big deal.
AI is the step-function change. The irony is that it became so pervasive and intertwined with slop that people like you forget that what it does now (write all code) was unheard of just a couple of years ago. AI surpassed the hype; now it's popular to talk shit about it.
If you want it stated precisely, the function is human cognitive labor per unit time and cost.
For decades, progress mostly shifted physical constraints or communication bandwidth. Faster chips, better networks, cheaper storage. Those move slopes, not discontinuities. Humans still had to think, reason, design, write, debug. The bottleneck stayed human cognition.
LLMs changed that. Not marginally. Qualitatively.
The input to the function used to be “a human with training.” The output was plans, code, explanations, synthesis. Now the same class of output can be produced on demand, at scale, by a machine, with latency measured in seconds and cost approaching zero. That is a step change in effective cognitive throughput.
This is why “video calling another continent” feels incremental. It reduces friction in moving information between humans. AI reduces or removes the human from parts of the loop entirely.
You can argue about ceilings, reliability, or long term limits. Fine. But the step already happened. Tasks that were categorically human two years ago are now automatable enough to be economically and practically useful.
My critique is not due to pessimism; it is due to afactuality. Breakthroughs in science are plentiful in the modern era, and there is no reason to expect them to slow or halt.
However, from your later comments, it sounds as though you feel the only operating definition of a "breakthrough" is a change inducing a rapid rise in labor extraction / conventional productivity. I could not disagree more strongly with this opinion, as I find this definition utterly defies intuition. It rejects many, if not most, changes in scientific understanding that do not directly induce a discontinuity in labor extraction. But admittedly, if one restricts the definition of a breakthrough in this way, then, well, you're probably about right. (Though I don't see what Mars has to do with labor extraction.)
That’s only one dimension. The step function is multidimensional. My critique is more about the Euclidean distance between the initial point and the end point.
To which AI is the only technology that has enough distance to be classified as a “breakthrough”.
Technically this is true. Practically speaking, most realists are perceived to be pessimists. There are plenty of studies backing this up as well: people judged experimentally to be pessimistic have more accurate perceptions of the real world.
This means that most people who you would term as "realists" are likely optimists and not realists at all.
Platforms lose momentum when these events strike, and momentum loss is the death knell for social platforms. Reddit's missteps have put it on a downward spiral. They may hang on, even for an impressively long time, but recovery from this point is very difficult and usually involves transforming or re-forming the vision.
It can be done. It takes the right leaders. Most are unfit for this particular challenge.
Many community-oriented programs have failed after acquisition because they came out too firm, too decided, and too purposeful, only to realize the community is still skeptical and turning against them six months in.
Honestly, for a program like Anki, starting out by saying "we need to figure out what good governance looks like, as well as what might be agreeable and possible for everyone involved" is a much stronger positioning than coming up with something that may or may not fly to try to make a strong first impression. Communities do not follow the conventional rules of American business.
Technically speaking, because it's not a set, we should say it involves the collection of all sets that don't contain themselves. But then, who's asking...
This is the easiest of the paradoxes mentioned in this thread to explain. I want to emphasize that this proof uses the technique of "Assume P, derive contradiction, therefore not P". This kind of proof relies on knowing what the running assumptions are at the time that the contradiction is derived, so I'm going to try to make that explicit.
Here's our first assumption: suppose that there's a set X with the property that for any set Y, Y is a member of X if and only if Y doesn't contain itself as a member. In other words, suppose that the collection of sets that don't contain themselves is a set and call it X.
Here's another assumption: Suppose X contains itself. Then by the premise, X doesn't contain itself, which is contradictory. Since the innermost assumption is that X contains itself, this proves that X doesn't contain itself (under the other assumption).
But if X doesn't contain itself, then by the premise again, X is in X, which is again contradictory. Now the only remaining assumption is that X exists, and so this proves that there cannot be a set with the stated property. In other words, the collection of all sets that don't contain themselves is not a set.
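The argument above is short enough to check mechanically. Here is a sketch in Lean 4, with set membership modeled as an arbitrary binary predicate `mem` (a simplification of real set theory; the theorem name is mine):

```lean
-- No X can satisfy: for every Y, Y ∈ X ↔ Y ∉ Y.
-- `mem` stands in for set membership on an arbitrary type α.
theorem no_russell_set {α : Type} (mem : α → α → Prop) :
    ¬ ∃ X, ∀ Y, mem Y X ↔ ¬ mem Y Y :=
  fun ⟨X, hX⟩ =>
    -- Specialize the defining property to X itself: X ∈ X ↔ X ∉ X.
    have h : mem X X ↔ ¬ mem X X := hX X
    -- Inner assumption X ∈ X refutes itself, so X ∉ X ...
    have hn : ¬ mem X X := fun hm => h.mp hm hm
    -- ... but then the iff puts X back in X: contradiction.
    hn (h.mpr hn)
```

Note the proof structure mirrors the prose exactly: one `have` per discharged assumption, with the outermost assumption (that X exists) discharged last.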
Let R = {X : X \notin X}, i.e., the collection of all sets that do not contain themselves. Now, is R \in R? Well, this is the case if and only if R \notin R. But that clearly cannot be.
Like the barber that shaves all men not shaving themselves.
The paradox. If you create a set theory in which that set exists, you get a contradiction. So the usual "fix" is to disallow that collection from being a set (because it is "too big"), and then you can form a theory which is consistent as far as we know.
If I hire an engineer and that engineer authorizes an "agent" to take an action, if that "agentic action" then causes an incident, guess whose door I'm knocking on?
Engineers are accountable for the actions they authorize. Simple as that. The agent can do nothing unless the engineer says it can. If the engineer doesn't feel they have control over what the agent can or cannot do, under no circumstances should it be authorized. To do so would be alarmingly negligent.
This extends to products. If I buy a product from a vendor and that product behaves in an unexpected and harmful manner, I expect that vendor to own it. I don't expect error-free work, yet nevertheless "our AI behaved unexpectedly" is not a deflection, nor is it satisfactory when presented as a root cause.
In coursework, references are often a way of demonstrating the reading one did on a topic before committing to a course of argumentation. They also contextualize what exactly the student's thinking is in dialogue with, since general familiarity with a topic can't be assumed in introductory coursework. Citation minimums are usually imposed as a means of encouraging a student to read more about a topic before synthesizing their thoughts, and as a means of demonstrating that work to a professor. While there may have been administrative reasons for the citation minimum, the concept behind them is not unfounded, though they are probably not the most effective way of achieving that goal.
While similar, the function is fundamentally different from citations appearing in research. However, even professionally, it is well beyond rare for a philosophical work, even for professional philosophers, to be written truly ex nihilo as you seem to be suggesting. Citation is an essential component of research dialogue and cannot be elided.
Hmm... reads a bit like an email a forum moderator might send a disobedient user. This seems strange, verging on unprofessional, for corporate communications.
Restrictions on SNAP are tricky business. You can't always ask someone on SNAP to spend time preparing food: prepared meals are expensive and often not accessible, and some foods are difficult to prepare or eat for people with certain disabilities. It might seem strange, but I have known people, very poor people, who rely on "foods in bar and drink form" out of necessity. I have known poor people for whom eating fruit is physically challenging.
SNAP changes like this may well be better at the population-health level; on that I have no evidence. But each restriction placed on food for people living in destitution may mean some people go hungry. (And this excludes issues of caloric density.) I would like to see better data, but sadly there is none.
+1 – it's all well and good for me to buy just some vegetables this week, because I have a pantry full of hundreds of dollars worth of basics, spices, an herb garden, bulk rice/pasta (a bigger upfront cost), etc. I also have a single 9-5 job, so I can spend an hour each day cooking.
But if I had an empty kitchen, lacked the funds to invest in bulk purchases, and had 30 minutes to cook and eat, I'd be eating very differently.
What they need to do is handle disability better. When you try to make it one size fits all you're either too generous with the cheap problems or too stingy with the expensive ones.
As others have pointed out, that's not what the restriction seems to be limited to. The distinction isn't based on sugar content but the amount of "processing", which rules out quite a lot of things beyond just candy and soda.