
Don't Cursor and VS Code also have this problem?


Probably all of them do, depending on settings. Copilot / VS Code will ask you to confirm link access before it fetches a link, unless you've set the domain as trusted.


Is phone number enumeration now considered a vulnerability? Really?


I know, remember when the telcos just published those in books every year?


But you had the option of having an unlisted or unpublished phone number. To give one datapoint, in Los Angeles in the 1980s about half of all numbers were unlisted. I would expect that the unlisted rate was much higher in big cities like L.A. compared to the rest of the country.

What I find fascinating is that people paid for privacy. Yes, indeed, people paid several dollars extra per month to maintain an unlisted/unpublished phone number. Today very few people are willing to pay actual money for privacy.


Very good point.

Everyone I knew while growing up was in the white pages (parents) with home address, not just phone number.

The early “FreeNet” and ISPs like CompuServe used anonymous usernames. Personalized email addresses came later…

Oddly, because we can’t even pay for privacy today, it appears as if nobody cares. Sure, still desirable but not even an option at any cost.

How we got from there to here is troubling.


What do you mean, we can’t pay for privacy and it’s not an option at any cost? Just don’t use big tech services; you pay for them with your data. Use Threema instead, or something similar: it’s a paid service with a focus on privacy.


One would nearly have to live like the Amish to avoid all online services.

Both free and paid online services make extensive use of 3rd party tracking services.

Sure, if one has a flip phone and lives as one did in the 1980s…


funny thing is, there's probably a decent percentage of people here that don't remember this


Sarah Connor?


there's a clippy lint for that



... but we will know from this link, thanks!

edit: to think that such short-form drivel is locked behind a paywall is just sad.


the age of subscriptions. what, you don't want to subscribe?


The Economist subscription is also one of the priciest


it is an architecture problem, too. LLMs simply aren't capable of AGI


Why not?

A lot of people say that, but no one, not a single person, has ever pointed out a fundamental limitation that would prevent an LLM from going all the way.

If LLMs have limits, we are yet to find them.


We have already found limitations of the current LLM paradigm, even if we don't have a theorem saying transformers can never be AGI. Scaling laws show that performance keeps improving with more parameters, data, and compute, but only along a smooth power law with sharply diminishing returns. Each extra order of magnitude of compute buys a smaller gain than the last, and recent work suggests we're running into economic and physical constraints on continuing this trend indefinitely.
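A rough sketch of what that power-law shape means in practice, assuming a Chinchilla-style form L(C) = L_inf + a * C^(-alpha); the constants below are invented for illustration, not fitted values:

    # Illustrative only: a power-law loss curve with made-up constants.
    def loss(compute, l_inf=1.7, a=10.0, alpha=0.05):
        return l_inf + a * compute ** -alpha

    # Each extra order of magnitude of compute buys a smaller absolute gain.
    for exp in range(20, 26):
        prev, cur = loss(10.0 ** (exp - 1)), loss(10.0 ** exp)
        print(f"10^{exp} FLOPs: loss {cur:.3f} (gain over previous decade: {prev - cur:.3f})")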

Out-of-distribution generalization is still an unsolved problem: they struggle under domain shifts and long-tail cases, or when you try systematically new combinations of concepts (especially on reasoning-heavy tasks). This is now a well-documented limitation of LLMs/multimodal LLMs.

Work on CoT faithfulness shows that the step-by-step reasoning they print doesn't match their actual internal computation; they frequently generate plausible but misleading explanations of their own answers (look up the Anthropic paper). That means they lack self-knowledge about how/why they got a result. I doubt you can get AGI without that.

None of this proves that no LLM based architecture could ever reach AGI. But it directly contradicts the idea that we haven't found any limits. We've already found multiple major limitations of the current LLMs, and there's no evidence that blindly scaling this recipe is enough to cross from very capable assistant to AGI.


A lot of those failings (e.g. CoT faithfulness) are straight-up human failure modes.

LLMs failing the same way humans do, on the same tasks, is a weak sign of "this tech is AGI capable" in my eyes, because it hints that LLMs are angling to do the same things the human mind does, and in similar enough ways to share the failure modes. And the human mind is the one architecture we know to support general intelligence.

Anthropic has a more recent paper on introspection in LLMs, by the way. With numerous findings. The main takeaway is: existing LLMs have introspection capabilities - weak, limited and unreliable, but present nonetheless. It's a bit weird, given that we never trained them for that.

https://transformer-circuits.pub/2025/introspection/index.ht...

You can train them to be better at it, if you really wanted to. A few other papers tried, although in different contexts.


This is all nonsense and you are just falling for marketing that you want to be true.

The whole space is largely marketing at this point, intentionally conflating all these philosophical terms because we don't want to face the ugly reality that LLMs are a dead end to "AGI".

Not to mention, it is not on those who don't believe in Santa Claus to prove that Santa Claus doesn't exist. It is on those who believe in Santa Claus to show how AGI can possibly emerge from next-token prediction.

I would question whether you even use the models much, because I thought this in 2023, but I just can't imagine how anyone who uses them all the time could possibly think we are on the path to AGI with LLMs in 2025.

It is almost like the idea of a thinking being emerging from text was a dumb idea to start with.


You are falling for the AI effect.

Which is: flesh apes want to feel unique and special! And "intelligence" must be what makes them so unique and special! So they deny "intelligence" in anything that's not a fellow flesh ape!

If an AI can't talk like a human, then it must be the talking that makes the human intelligence special! But if the AI can talk, then talking was never important for intelligence in the first place! Repeat for everything.

I use LLMs a lot, and the improvements in the last few years are vast. OpenAI's entire personality tuning team should be loaded into a rocket and fired off into the sun, but that's a separate issue from raw AI capabilities, which keep improving steadily and with no end in sight.


Breaking down in -30°C temperatures is also a human failure mode, but that doesn't make cars human. They both exhibit the exact same behavior (not moving), but are fundamentally different.


The similarities go quite a bit deeper than that.

Both rely on a certain metabolic process to be able to move. Both function in a narrow temperature range, and fail outside it. Both have a homeostatic process that attempts to keep them in that temperature range. Both rely on chemical energy, oxidizing stored hydrocarbons to extract power from them, and both take in O2-rich air, and emit air enriched in CO2 and water vapor.

So, yes, the cars aren't humans. But they sure implement quite a few of the same things as humans do - despite being made out of very different parts.

LLMs of today? They implement abstract thinking the same way cars implement aerobic metabolism. A nonhuman implementation, but one that does a great many of the same things.


Real time learning that doesn't pollute limited context windows.


You can mimic this already. Unreliable and computationally inefficient, but those are not fundamental limitations.
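Something like the sketch below, assuming you bolt an external memory onto the model and only retrieve relevant facts per query; call_llm and the similarity scoring are placeholders, not any real API:

    # Toy external memory: facts learned at runtime live outside the context
    # window, and only the top-k relevant ones get injected into each prompt.
    from difflib import SequenceMatcher

    memory = []  # facts accumulated at runtime

    def remember(fact):
        memory.append(fact)

    def recall(query, k=3):
        # crude relevance score; a real system would use embeddings
        return sorted(memory,
                      key=lambda m: SequenceMatcher(None, query, m).ratio(),
                      reverse=True)[:k]

    def call_llm(prompt):
        # placeholder for an actual model call
        return "(model answer for: " + prompt.splitlines()[-1] + ")"

    def answer(query):
        facts = "\n".join(recall(query))
        return call_llm("Known facts:\n" + facts + "\n\nQuestion: " + query)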


LLMs are bounded by the same bounds computers are. They run on computers, so a prime example of a limitation is Rice's theorem: any 'AI' that writes code is unable (just like humans) to determine whether its output is error free.

This means the code produced by a multi-agent workflow with no human in the loop may or may not be error free.

LLMs are also bounded by runtime complexity: could an LLM solve an NP-hard problem like finding the shortest Hamiltonian path through a set of cities in polynomial time?

LLMs are also bounded by in-model context: could an LLM create and use a new language it has no context for in its model?
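To make the Rice's theorem point concrete, here is a sketch of the standard reduction; the checker is hypothetical and deliberately unimplementable, because if it existed it would decide the halting problem:

    # Hypothetical: pretend this returns True iff f() defined in `source`
    # can never raise an exception. Rice's theorem says no such total
    # decider exists for any non-trivial semantic property.
    def hypothetical_error_checker(source):
        raise NotImplementedError

    # Reduction from halting: the wrapper raises only after the simulated
    # program finishes, so f() is "error free" exactly when the program
    # never halts.
    def halts(program_source):
        wrapper = ("def f():\n"
                   "    exec(" + repr(program_source) + ")\n"
                   "    raise RuntimeError\n")
        return not hypothetical_error_checker(wrapper)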


Firefox is dead. You need an insurmountable amount of configuration to even make it bearable, and there are non-user-respecting settings and telemetry everywhere. Ads, too. It's not something you can recommend; every site is broken, and Mozilla would rather spend its money on [1] discouraging human translators and [2] giving people free coffee at "Browser Raves" in Berlin. It's a shadow of its former self.

[1] https://support.mozilla.org/en-US/forums/contributors/717446

[2] https://www.instagram.com/p/DPn_Re5AAkN/


> every site is broken

If a site is broken, it's likely due to blocking of trackers. In the URL bar, click on the shield icon and disable the slider "Enhanced Tracking Protection". But yeah, that can be annoying.

Google is an advertising company. I don't understand choosing to use their browser if you can avoid it.


Mozilla is an advertising company too. In terms of breakage, I'm talking about more fundamental things like missing APIs and incorrect rendering.


It's the same kind of "workflow optimization" that Notion and Obsidian users suffer from. You spend so much time making your tools more productive that you don't get any actual work done.


I just use Obsidian out of the box. No extensions. I don't use tags. I don't use any fancy features. It's a Markdown editor with a file tree to me. It's great.


better spend your time well


Firefox should commit more to correctly implementing web standards - not even gradients render correctly. A lot of the users are oddballs with strange configurations that break everything. No wonder devs optimize for Chrome.


One longstanding issue with gradients was fixed recently.

https://bugzilla.mozilla.org/show_bug.cgi?id=627771


This is just as user-friendly as the rest of the Firefox configuration. I can't recommend it to anyone in good faith anymore.


Along with the AI tab misfeature, they are promising to add a single button in preferences to turn off all the AI features.

