Probably all of them do, depending on settings. Copilot / VS Code will ask you to confirm link access before it fetches anything, unless you have set the domain as trusted.
But you had the option of having an unlisted or unpublished phone number. To give one datapoint, in Los Angeles in the 1980s about half of all numbers were unlisted. I would expect that the unlisted rate was much higher in big cities like L.A. compared to the rest of the country.
What I find fascinating is that people paid for privacy. Yes, indeed, people paid several dollars extra per month to maintain an unlisted/unpublished phone number. Today very few people are willing to pay actual money for privacy.
What do you mean we can’t pay for privacy and it’s not an option at any cost? Just don’t use big tech services, where you pay with your data. Use Threema instead, or something similar: it is a paid service with a focus on privacy.
A lot of people say that, but no one, not a single person, has ever pointed out a fundamental limitation that would prevent an LLM from going all the way.
We have already found limitations of the current LLM paradigm, even if we don't have a theorem saying transformers can never be AGI.
Scaling laws show that performance keeps improving with more parameters, data, and compute, but only following a smooth power law with sharply diminishing returns. Each extra order of magnitude of compute buys a smaller gain than the last, and recent work suggests we're running into economic and physical constraints on continuing this trend indefinitely.
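To make "power law with diminishing returns" concrete, here is a minimal sketch. The functional form mirrors published scaling-law fits (loss = irreducible floor + a * compute^-alpha), but the constants are invented for illustration, not taken from any particular paper:

    # Toy power-law loss curve in compute C. Constants are made up;
    # real scaling-law fits vary by model family and dataset.
    IRREDUCIBLE = 1.7          # hypothetical floor the curve approaches
    A, ALPHA = 4.0, 0.05       # hypothetical fit coefficients

    def loss(compute: float) -> float:
        return IRREDUCIBLE + A * compute ** (-ALPHA)

    # The gain from each extra order of magnitude of compute shrinks:
    for exp in range(20, 26):                  # 1e20 .. 1e25 "FLOPs"
        c = 10.0 ** exp
        gain = loss(c / 10) - loss(c)
        print(f"1e{exp}: loss={loss(c):.4f}, gain over 10x less compute={gain:.4f}")

Run it and the per-decade gain drops monotonically, which is the whole "each order of magnitude buys less than the last" point.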
OOD generalization is still an unsolved problem: they struggle under domain shift, on long-tail cases, and when you try systematically new combinations of concepts (especially on reasoning-heavy tasks). This is now a well-documented limitation of LLMs/multimodal LLMs.
Work on CoT faithfulness shows that the step-by-step reasoning they print doesn't match their actual internal computation; they frequently generate plausible but misleading explanations of their own answers (look up the Anthropic paper on this). That means they lack self-knowledge about how/why they got a result. I doubt you can get AGI without that.
None of this proves that no LLM based architecture could ever reach AGI. But it directly contradicts the idea that we haven't found any limits. We've already found multiple major limitations of the current LLMs, and there's no evidence that blindly scaling this recipe is enough to cross from very capable assistant to AGI.
A lot of those failings (e.g. CoT faithfulness) are straight-up human failure modes.
LLMs failing the same way humans do, on the same tasks, is a weak sign of "this tech is AGI capable", in my eyes. Because it hints that LLMs are angling to do the same things the human mind does, in similar enough ways to share the failure modes. And the human mind is the one architecture we know to support general intelligence.
Anthropic has a more recent paper on introspection in LLMs, by the way. With numerous findings. The main takeaway is: existing LLMs have introspection capabilities - weak, limited and unreliable, but present nonetheless. It's a bit weird, given that we never trained them for that.
This is all nonsense and you are just falling for marketing that you want to be true.
The whole space is largely marketing at this point, intentionally conflating all these philosophical terms because we don't want to face the ugly reality that LLMs are a dead end to "AGI".
Not to mention, it is not on those who don't believe in Santa Claus to prove that Santa Claus doesn't exist. It is on those who believe in Santa Claus to show how AGI can possibly emerge from next-token prediction.
I would question whether you even use the models much, really. I thought this in 2023, but I just can't imagine how anyone who uses the models all the time can still think, in 2025, that we are on the path to AGI with LLMs.
It is almost as if the idea of a thinking being emerging from text was dumb to start with.
Which is: flesh apes want to feel unique and special! And "intelligence" must be what makes them so unique and special! So they deny "intelligence" in anything that's not a fellow flesh ape!
If an AI can't talk like a human, then it must be the talking that makes the human intelligence special! But if the AI can talk, then talking was never important for intelligence in the first place! Repeat for everything.
I use LLMs a lot, and the improvements in the last few years are vast. OpenAI's entire personality tuning team should be loaded into a rocket and fired off into the sun, but that's a separate issue from raw AI capabilities, which keep improving steadily and with no end in sight.
Breaking down in -30C temperatures is also a human failure mode, but that doesn't make cars human. They both exhibit the exact same behavior (not moving), yet are fundamentally different.
Both rely on a certain metabolic process to be able to move. Both function in a narrow temperature range, and fail outside it. Both have a homeostatic process that attempts to keep them in that temperature range. Both rely on chemical energy, oxidizing stored hydrocarbons to extract power from them, and both take in O2-rich air, and emit air enriched in CO2 and water vapor.
So, yes, the cars aren't humans. But they sure implement quite a few of the same things as humans do - despite being made out of very different parts.
LLMs of today? They implement abstract thinking the same way cars implement aerobic metabolism. A nonhuman implementation, but one that does a great many of the same things.
LLMs are bounded by the same bounds computers are. They run on computers, so a prime example of a limitation is Rice's theorem: non-trivial semantic properties of programs are undecidable in general. Any "AI" that writes code is unable (just like humans) to determine, in general, whether the output is or is not error-free.
This means the code produced by a multi-agent workflow with no human in the loop may or may not be error-free, and there is no general procedure for deciding which.
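For what it's worth, here is the shape of that argument in code: a sketch under the assumption that a perfect is_error_free checker exists (it is hypothetical, which is exactly the point), showing why either answer it gives can be defeated:

    # Sketch of the halting-problem / Rice's theorem style argument.
    # `is_error_free` is a hypothetical oracle; no total, always-correct
    # version of it can exist, which `troublemaker` illustrates.

    def is_error_free(func) -> bool:
        """Hypothetical: returns True iff calling func() never raises."""
        raise NotImplementedError("no such general checker exists")

    def troublemaker() -> None:
        # Raises exactly when the oracle claims it is error-free,
        # and runs cleanly exactly when the oracle claims it errors.
        if is_error_free(troublemaker):
            raise RuntimeError("the oracle said I was fine")

    # Whichever answer is_error_free(troublemaker) returned would be
    # wrong, so agents (human or LLM) are stuck with testing, type
    # checking, and verifying restricted subsets, not deciding
    # correctness in general.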
LLMs are also bounded by runtime complexity. Could an LLM find the shortest Hamiltonian path through a set of cities in polynomial time?
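To see the complexity wall concretely, here is the brute-force version of that problem; the city names and distances are made up, and the point is only that exhaustive search grows factorially with the number of cities:

    # Shortest Hamiltonian path by brute force: try all n! orderings.
    # Fine for 4 toy cities, hopeless at 50, and no known algorithm
    # (for an LLM or anyone else) solves it in polynomial time.
    from itertools import permutations

    DIST = {  # symmetric toy distances between hypothetical cities
        ("A", "B"): 3, ("A", "C"): 7, ("A", "D"): 2,
        ("B", "C"): 4, ("B", "D"): 6, ("C", "D"): 5,
    }

    def d(x: str, y: str) -> int:
        return DIST.get((x, y)) or DIST[(y, x)]

    def path_length(path) -> int:
        return sum(d(a, b) for a, b in zip(path, path[1:]))

    def shortest_hamiltonian_path(cities):
        best = min(permutations(cities), key=path_length)
        return best, path_length(best)

    print(shortest_hamiltonian_path(["A", "B", "C", "D"]))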
LLMs are also bounded by what is in the model and its context: could an LLM create and use a new language for which there is no context in its model?
Firefox is dead. You need an insurmountable amount of configuration to even make it bearable, and there are non-user-respecting settings and telemetry everywhere. Ads, too. It's not something you can recommend; every site is broken, and Mozilla would rather spend its money on [1] discouraging human translators and [2] giving people free coffee at "Browser Raves" in Berlin instead. It's a shadow of its former self.
If a site is broken, it's likely due to the blocking of trackers. In the URL bar, click the shield icon and toggle off "Enhanced Tracking Protection". But yeah, that can be annoying.
Google is an advertising company. I don't understand choosing to use their browser if you can avoid it.
It's the same kind of "workflow optimization" that Notion and Obsidian users suffer from. You spend so much time making your tools more productive that you don't get any actual work done.
I just use Obsidian out of the box. No extensions. I don't use tags. I don't use any fancy features. It's a Markdown editor with a file tree to me. It's great.
Firefox should commit more to correctly implementing web standards; not even gradients render correctly. A lot of its users are oddballs with strange configurations that break everything. No wonder devs optimize for Chrome.