
It's weird how this seems to be coming everywhere all at once.

Is it just politician FOMO, when they see someone trying this somewhere they feel they need to try it too?

It feels like a concerted push to deanonymize the web.


I guess I lucked out. I bought a 768GB workstation (with a 9995WX CPU and an RTX 6000 Pro Blackwell GPU) in August. 96GB modules were better value than 128GB. Looks like that build would be a good bit pricier today.


What's your use case for such a build?


If the observable behavior is bad Linux performance, it's a Linux problem.

There's a saying in motorcycling: it's better to be alive than right. There's no upside in being correct if it leaves you worse off.

There are ways to make things better leveraging the Linux way. Make more usable tools for fixing ACPI deficiencies with hotloadable patches, ways of validating or verifying the patches for safety, ways of sharing and downloading them, and building a community around it.

Moaning that manufacturers only pay attention to where their profits come from is not a strategy at all.


Decompile your ACPI tables and then do a grep for "Linux". You are likely to find it, meaning the vendor took time to think about Linux on their hardware. Some vendors take the time to write good settings and code for the Linux ACPI paths, some dump you into no-man's land on purpose if your OSI vendor string is "Linux".

It's quite literally a vendor problem created by vendors leading anyone that doesn't run Windows astray in some cases.

If you run Linux, then dare to change your OSI vendor string to "Windows", you've entered into bespoke code land that follows different non-standard implementations for every SKU, where it's coded to work with a unique set of hardware and bespoke drivers/firmware on Windows. You also forgo any Linux forethought and optimizations that went into the "Linux" code paths.


You seem to have totally ignored his point...


My point is that from the Linux side, you're damned if you do and damned if you don't, no matter how you tackle the issue. If the layer above Linux is going to deliberately malfunction and lie on the Linux happy path, or speak some non-standard per-device driver protocol if you lie to use the Windows path, there's not much that can be done.

It's only a "Linux problem" if you're trying to run Linux on hardware that is actively hostile to it. There are plenty of vendors who supply good Linux happy paths in their firmware, using their hardware is the solution to that self-imposed problem.


I think the correct strategy in this case is to return your laptop to the store if it has Linux compatibility issues, and keep trying until you find one that works.

i.e. don't support vendors whose laptops don't work in Linux.


The structure of what it wrote, and the banality of the point.


This would speed things up since it looks like the bottleneck here is I/O.


I guess you abandoned PC gaming some time in the early 2000s?


I'm guessing you have a very positive experience with gaming PCs; I wish I could say the same. My Windows PC:

  - Randomly BSODs because of (I think) a buggy Focusrite audio interface driver (that I can't fix and Focusrite refuses to)
  - Regularly 'forgets' I have an RX 5600 XT GPU and defaults to the integrated graphics, forcing me to go into the 1995 'Device Manager' to reset it
  - Occasionally just... stops playing audio?
  - Occasionally has its icons disappear from the taskbar
  - Regularly refuses to close applications, making me go into the Task Manager to force-quit them.
These are just the issues I can think of off the top of my head. I've been playing PC games for like 15 years and this is just par for the course for my experience.


Definitely an outlier. Windows has generally been very very solid since about Windows 7. Certainly since Windows 10.

Linux is still quite far behind in terms of desktop stability in my experience. But I guess if Valve fully controls the hardware they can avoid janky driver issues (it sounds like suspend will work reliably!), so this might actually make a good desktop Linux option.


You are also definitely an outlier. In my experience, Linux has been 5x as stable as Windows (and more performant, too). SteamOS is just Arch Linux with the KDE desktop; the actual desktop stability won't be different from the same setup on a normal PC.


I also had frequent BSOD issues because of a Focusrite audio interface, lol. I've since thrown it out and gotten an alternative brand product and have never had the issues again.


I'm quite confused too; that doesn't align with my experience in the last couple of years either. There have notably been a few very good and long-lived video cards, and as time goes on there's an ever-deepening library of older games that can be played with very affordable cards.

I'm wondering when and with what hardware they had that bad experience.


Drivers haven't been an issue for quite some time (though it's always good to have the latest NVIDIA ones, for example, for optimizations focused on a given game).

But it's trivial to run into some .NET or Visual C++ redistributable hell where you just get a cryptic error at startup and that's it; go check the internet. I have roughly 20 of them installed currently (why the heck?), and earlier versions would happily get installed over an already-installed copy of the same version, for example as part of a game's installation process. Not stellar workmanship on MS's side. What's wrong with having the latest be backward compatible with all previous ones, as, e.g., Java achieved 25 years ago?

Talking about fully updated Windows 10 and, say, official Steam distributions of the games.


> Drivers are not an issue for quite some time

> its trivial to run into some .NET or Visual C++ redistributable hell when you just get a cryptic error during starting and thats it. Just check internet.

Thanks for making my point for me.


I can't speak for the other poster, but I actually recently "abandoned" PC gaming. For me, it wasn't a deliberate decision but more of a change in behavior that occurred over time. I suspect the key event was picking up a PS5 Pro. For me, it's the first console that's felt powerful enough to scratch a similar itch as PC gaming -- except I could just plug it into our Atmos-equipped "home theater" set up and have it not only work flawlessly but be easily accessible to everyone, not just me. Since picking it up, between the PS5 Pro and handheld gaming devices, I just have not played a game on my gaming PC a single time and am currently planning on retiring it as a result.

There may be a connection here with age and the type of games I play too. I'm in my mid-30s now and am not interested in competitive twitch shooters like Call of Duty. In many cases, the games I've been interested in have actually been PS5 exclusives or were a mostly equivalent experience on PS5 Pro vs. PC or were actually arguably better on PS5 Pro (e.g., Jedi Survivor). In some cases, like with Doom: The Dark Ages, I've been surprised at how much I enjoyed something I previously would've only considered playing on PC -- the PS5 Pro version still manages to offer both 60 FPS and ray tracing. In other cases, like Diablo IV, I started playing on PC but gradually over time my playtime naturally transitioned almost entirely to PS5 Pro. The last time I played Diablo IV on my PC, which has a 4090, I was shocked at how unstable and stutter-filled the game was with ray tracing enabled, whereas it's comparatively much more stable on PS5 Pro while still offering ray tracing (albeit at 30 FPS -- but I've come to prefer stability > raw FPS in all but the most latency-sensitive games).

One benefit of this approach if you live with someone else or have a family, etc., is that investments in your setup can be experienced by everyone, even non-gamers. For instance, rather than spending thousands of dollars on a gaming PC that only I would use, I've instead been in the market for an upgraded and larger TV for the "home theater", which everyone can use both for gaming and non-gaming purposes.

Something else very cool but still quite niche and poorly understood, even amongst tech circles, is that it's possible to stream PS5 games into the Vision Pro. There are a few ways of doing this, but my preferred method has been using an app called Portal. This is a truly unique experience because of the Vision Pro's combination of high-end displays and quality full-color passthrough / mixed reality. You can essentially get a 4K 120"+ curved screen floating in space in the middle of your room at perfect eye level, with zero glare regardless of any lighting conditions in the room, while still using your surround sound system for audio. The only downside is that streaming does introduce some input latency. I wouldn't play Doom this way, but something like Astro Bot is just phenomenal. This all works flawlessly out of the box with no configuration.


Don't forget, failure modes pierce abstraction boundaries. An abstraction that fully specifies failure modes leaks its implementation.

This is why I think checked exceptions are a dreadful idea; that, and the misguided idea that you should catch exceptions.

Only code close to the exception, where it can see through the abstraction, and code far away from the exception, like a dispatch loop or request handler, where the failure mode is largely irrelevant beyond 4xx vs 5xx, should catch exceptions.

Annotating all the exceptions on the call graph in between is not only pointless, it breaks encapsulation.
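
To make that concrete, here's a minimal C++ sketch of the boundary-only policy (the function names and status codes are invented for illustration): the intermediate layers neither declare nor catch anything, and only the dispatch loop catches, caring about little beyond success vs. failure.

    #include <exception>
    #include <iostream>
    #include <stdexcept>

    // Intermediate layers: no try/catch, no exception annotations.
    // Exceptions simply propagate through them.
    void load_config() { throw std::runtime_error("config file missing"); }
    void build_pipeline() { load_config(); }
    void handle_request() { build_pipeline(); }

    int main() {
        // The "far away" handler: a dispatch loop that only cares about
        // success vs. failure (roughly 2xx vs. 5xx), logs, and moves on.
        for (int request = 0; request < 3; ++request) {
            try {
                handle_request();
                std::cout << "request " << request << ": 200\n";
            } catch (const std::exception& e) {
                std::cerr << "request " << request << ": 500 (" << e.what() << ")\n";
            }
        }
    }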


This is a better way of expressing what I had been thinking about putting exception details behind an interface, except that in my mind encapsulating errors is just good design rather than implementation hiding, since the programmer might want to express a public error API, for example to tell the user whether a given fopen failed due to not finding the file or due to a filesystem fault.


e.what()

Just include the file name in the error message. And all of this is predicated on logging errors, which is not at all user-friendly, and not remotely acceptable in GUI applications.


I was talking about catching different classes, not about logging. Even if you’re just implementing a string description, there’s nothing stopping you from designing that string for use in a UI element, or implementing a structured message that can specify more details usable for defining UI elements, similarly to how many REST HTTP-driven web UIs work. I don’t think I’m quite following your criticism.


If your error codes leak the implementation details through the whole call stack, you are doing it wrong. Each error code describes what failed in terms of its function call's semantics. A layer isn't supposed to just return this upwards, that wouldn't make sense, but to use it to choose its own error return code, which is in the abstraction domain of its function interface.
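
A minimal C++ sketch of that per-layer translation (the enums and function names are hypothetical, just to show the shape): the lower layer reports in I/O terms, and the upper layer maps that into its own vocabulary instead of passing it straight through.

    #include <cstdio>

    // Hypothetical per-layer error vocabularies.
    enum class IoError     { Ok, NotFound, PermissionDenied };
    enum class ConfigError { Ok, Missing, Unreadable };

    // Lower layer: reports failures in terms of its own I/O semantics.
    IoError read_file(const char* path) {
        (void)path;
        return IoError::PermissionDenied;  // simulate a failure
    }

    // Upper layer: does not pass IoError upwards; it chooses a code in the
    // abstraction domain of its own interface ("config").
    ConfigError load_config(const char* path) {
        switch (read_file(path)) {
            case IoError::Ok:               return ConfigError::Ok;
            case IoError::NotFound:         return ConfigError::Missing;
            case IoError::PermissionDenied: return ConfigError::Unreadable;
        }
        return ConfigError::Unreadable;
    }

    int main() {
        if (load_config("/etc/app.conf") != ConfigError::Ok)
            std::puts("configuration could not be loaded");
    }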


So you want to gradually reduce the fidelity of the error message as it makes its way up the stack.

That means that the top level handler can at best log a vague message.

That in turn means you must log along the way where you have more precise information about the failure, or you risk not having enough information to fix issues.

And that in turn means you must have lots of redundant logging, since each point in the stack doesn't know whether the abstraction it's invoking has already logged or not, or encapsulation would be violated.


Or you can use chained error messages.

Unable to create the media database. Failed to upgrade the database from version 3 to version 6. SQL Query failed. Syntax error at column 29 near "wehre". ("SELECT id`Thumbnail from tThumbailIndex wehre idFile=?").

(Messages are pre-pended to inner errors as one progresses up the stack). Yes, a gruesome error message. But it's a gruesome error condition.

I've also used inner exceptions in a large Enterprise product, and was very pleased with the result. (Java has them, .net has them, C++ and Typescript can have them if you want them). At the top level, the error message goes:

   Unable to connect to the media database.

   Failed to upgrade from version 3 to version 6. 

   SQL Query failed.

   Syntax error at column 6 near 'ndex.

   ("SELECT idI'ndex from T_ThumbnailIndex")
(each successive error message contained in a successive inner exception). The advantage: no need to assemble multiple lines of logged errors from a logfile that end-users really don't want to be digging around in.

A more prosaic and actually end-user-helpful chained message might be:

   Unable to connect to the media database.

   Failed to create the file /var/MediaServer/mediaIndex.db

   Out of disk space.
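
For what it's worth, modern C++ can express this inner-exception chain with std::throw_with_nested and std::rethrow_if_nested. A minimal sketch, reusing the messages from the example above (the function names are invented for illustration):

    #include <exception>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Print an exception followed by any nested (inner) exceptions.
    void print_chain(const std::exception& e, int depth = 0) {
        std::cerr << std::string(depth * 2, ' ') << e.what() << '\n';
        try {
            std::rethrow_if_nested(e);
        } catch (const std::exception& inner) {
            print_chain(inner, depth + 1);
        }
    }

    void run_query() {
        throw std::runtime_error("Syntax error at column 29 near \"wehre\".");
    }

    void upgrade_schema() {
        try {
            run_query();
        } catch (...) {
            std::throw_with_nested(std::runtime_error(
                "Failed to upgrade the database from version 3 to version 6."));
        }
    }

    void open_media_database() {
        try {
            upgrade_schema();
        } catch (...) {
            std::throw_with_nested(std::runtime_error(
                "Unable to create the media database."));
        }
    }

    int main() {
        try {
            open_media_database();
        } catch (const std::exception& e) {
            print_chain(e);  // outermost message first, inner causes below
        }
    }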


> That in turn means you must log along the way where you have more precise information about the failure

Yes, that's the idea. You split the information into a diagnostic with stuff you deal with now and data that the upper layer will handle. The intersection between the data in these two things should be empty.

> And that in turn means you must have lots of redundant logging, since each point in the stack doesn't know whether the abstraction it's invoking has already logged or not, or encapsulation would be violated.

No. You print a diagnostic, about what exactly THIS layer was trying to do, you don't speculate what the upper layer was trying to do and try to log that. Every layer knows the lower layer has already logged the primary error, because an error object exists, and it also knows that the upper layer will print a diagnostic about what was the intention, so it only prints exactly what the error was in this layer.

> doesn't know whether the abstraction it's invoking has already logged or not, or encapsulation would be violated.

It knows that the lower layer has logged all stuff that that layer considered to be important, and that none of the data that is available to this layer was logged, since that is the responsibility of the caller.

An example:

    Document rendering incomplete, skipped publishing step.  Thumbnail #39 missing.  Failed to fetch image: Connection refused. [Discarded malformed packet with SYN flag.  Invalid data in src/network/tcp.c:894 parse_tcp_packet_quirks_mode]
Depending on the log level, you wouldn't show the later diagnostics. If there is a debug flag set, you can also add the function/line information to every step, not just to the last. If you are outputting to something like syslog, you would put each diagnostic on its own line.
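
A minimal sketch of how this might look without exceptions, assuming a caller-allocated error context that each layer appends its own diagnostic to (the types and messages are illustrative, borrowed from the example above):

    #include <iostream>
    #include <string>
    #include <vector>

    // Caller-allocated error context; each layer appends only what it knows.
    struct ErrorContext {
        std::vector<std::string> diagnostics;
        void add(std::string msg) { diagnostics.push_back(std::move(msg)); }
    };

    bool fetch_image(ErrorContext& err) {
        err.add("Failed to fetch image: Connection refused.");
        return false;
    }

    bool render_thumbnail(int id, ErrorContext& err) {
        if (!fetch_image(err)) {
            err.add("Thumbnail #" + std::to_string(id) + " missing.");
            return false;
        }
        return true;
    }

    bool publish_document(ErrorContext& err) {
        if (!render_thumbnail(39, err)) {
            err.add("Document rendering incomplete, skipped publishing step.");
            return false;
        }
        return true;
    }

    int main() {
        ErrorContext err;
        if (!publish_document(err)) {
            // Emit the outermost diagnostic first, as in the example above.
            for (auto it = err.diagnostics.rbegin(); it != err.diagnostics.rend(); ++it)
                std::cerr << *it << ' ';
            std::cerr << '\n';
        }
    }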


The problem with this is you don't have context on subgraphs of control flow which have already returned by the time of the error.

I think our disagreement is less about error handling and more about logging policy. I favour a logging policy which lets you understand what the code is doing even if it doesn't error out; this means that you don't need to log on unwind. You favour a logging policy specific for errors.

My position is that your position ends up with less useful context for diagnosing errors.


There is no reason to use that scheme only for errors. In fact, I don't.

I use "unwinding" for diagnostics even in the happy case.


Presumes that an error code provides sufficient context to determine what the actual problem is. It does not. Which is why exceptions carry some form of text error message which may include (for example) the name of the file that could not be opened because permission was denied.

Awful stuff.


What data can you encode in an out-of-band return value (exception), that you can't encode in a parameter or in-band return value? An exception isn't a magic data type, there is nothing you couldn't also return as a return value or a parameter.

The filename to be opened is likely passed as a string in an argument, so it IS present in the function interface contract.


You probably do want that exception to bubble up, actually. You probably don't want to catch it immediately after open. Because you need to communicate a failure mode to your caller, and what are you going to do then? Throw another exception? Fall back to error codes? Unwind manually with error codes all the way up? And if so, logging was the wrong thing to do, since the caller is probably going to log as well, based on the same philosophy, and you're going to get loads of error messages for one failure mode, and no stack trace (yes, there are ways of getting semi-decent stack traces from C++ exceptions).

Exception safety has a lot of problems in C++, but it's mostly around allowing various implicit operations to throw (copies, assignments, destructors on temporaries and so forth). And that does come down to poor design of C++.


> you're going to get loads of error messages for one failure mode

> and no stack trace

That loads of error messages, meaning every layer describes what it tried to do and what failed, IS a user readable variant of a stack trace. The user would be confused with a real stack trace, but nice error messages serve both the user and the developer.


This is error-prone boilerplate that obscures the code, obscures the logs and is a known antipattern (log and throw) - which you're implementing manually, by hand, in the hope you never make a mistake.

You shouldn't do manually what you can automate.

Boilerplate can make you feel productive and can give you warm fuzzies inside when you see lots of patterns that look familiar, but seeing the same patterns over and over again is actually a smell; it's the smell of a missing abstraction.


No. If you want to have the same information without that approach, you need to implement nested levels of error information, ten layers deep, each layer having a diagnostic, a log level, and file, line information, and a reason. In other words, you are building your own custom stack trace object, with annotated diagnostics. In addition, you can't reason about this at the upper level anyway, or you are rebuilding your application stack at some other place. The only thing you can do is to unwrap that object and serialize it into a diagnostic, which you could have done with less code, less memory and less compute. In addition, you would need to allocate on a failure path, which sounds like a nightmare.

Also you can never have comments in the code, because what you would write into the comment is now in the diagnostic itself. That means you can also turn on DEBUG_LEVEL_XXL and get a nice description, what the program actually does and why.

> This is error-prone

Why is it error-prone? You receive an error of one type and need to convert it into an error of another type. You need to handle that or the compiler will bark.

> obscures the logs

How does it obscure the logs?


The information about what the code was doing at the time an exception is thrown is implicit in the stack trace; the control flow is evident from line numbers, and with half-decent tooling, you can get hyperlinks from stack traces to source code.

If there's extra information you need to know about what's going on, you probably want to log it in non-error cases too, because it'll contextualize an error before the error occurs. You can't log on unwind for decisions made in routines that were invoked and returned from, and are no longer on the stack, in the error case.

> Why is it error-prune. You receive an error of one type and need to convert it into an error of another type. You need to handle that or the compiler will bark.

This is a choice you've made, not something inherent to unchecked exceptions. My argument is that you should not generally change the type of the error, and let polymorphism cover the abstraction. It's error prone because it's code you need to write. All code you write is on the cost side of the ledger and liable to contain mistakes.

> How does it obscure the logs?

You get lots of error log messages for a single error.

> In addition, you would need to allocate on a failure path, which sounds like a nightmare.

I wanted to address this separately. Allocating on the failure path is only a real problem in OOM or severe memory corruption scenarios, in which case logging may not be available and the best course of action is probably to abort the program. In this case, you want the logs that led up to the abort, rather than trying to log positively under severe memory conditions.

Are you a Go user by any chance? I've stayed away from Go precisely because of its boilerplate-reliant error handling mechanism. Or if you're stuck with C++ (with or without exceptions), my commiserations.


> The information about what the code was doing at the time an exception is thrown is implicit in the stack trace; the control flow is evident from line numbers, and with half-decent tooling, you can get hyperlinks from stack traces to source code.

Yes, but when you want to generate a diagnostic from that stack trace, you basically need to branch on all the possible internal states of internal layers at the point you catch the exception. So either you are incapable of generating detailed diagnostics or you essentially model the whole behaviour in a single place. Also the point where the error messages are generated is now completely removed from the place where the error did occur. This sounds like a nightmare to maintain and also means that the possible error messages aren't there as documentation when reading the code.

> If there's extra information you need to know about what's going on, you probably want to log it in non-error cases too, because it'll contextualize an error before the error occurs. You can't log on unwind for decisions made in routines that were invoked and returned from, and are no longer on the stack, in the error case.

I never said that you can only use this for error cases. In fact, what is an error and what isn't is not defined in a single layer. For example, a failure to open a file will be an error in a lower layer, but for an upper layer it just means that it should try a different backend. Or a parsing error is fatal for the parser, but it might mean that the file format is different and the next parser should be tried, or that the data is from the network and can simply be discarded. An empty field can be normal data for the parser, but for the upper layer it is a fatal error.

> This is a choice you've made, not something inherent to unchecked exceptions. My argument is that you should not generally change the type of the error, and let polymorphism cover the abstraction.

Then either the reported errors are completely unspecific and unactionable or you are leaking implementation details. When you want to handle every error specifically and not leak implementation details, you need to handle it locally. When you want to know that you handle all cases, unchecked exceptions are unsatisfying. In my opinion programs should know what is going on and not just say "my bad, something happened I can't continue". That does not lead to robust systems and is neither suitable for user transparency nor for automated systems.

In my opinion software should either work completely automated or report to the end user. Software that needs sysadmins and operators at runtime is bad. That doesn't mean that that never occurs, it will, but it should be treated as a software defect.

> You get lots of error log messages for a single error.

Yes, but this describes the issue at different layers of the abstraction and in my eyes the whole thing is a single error message. Neither the fact that resource X isn't available nor the fact that some connection was refused, is a complete error description in isolation. You need both for a (complete) error description.

> Allocating on the failure path is only a real problem in OOM

Yes, but first I don't like my program to act bad in that case, and second, it is also a nightmare for predictable ownership semantics. I generally let the caller allocate the error/status information.

> Are you a Go user by any chance?

I have never used Go, not even tried, but what I read about the error mechanism appealed to me, because it matches what I think is a good idea and do anyway.

> Or if you're stuck with C++ (with or without exceptions), my commiserations.

I don't feel that unhappy with that approach. I think this is a design and architectural decision rather than a language issue.

> with or without exceptions

Currently, definitely without, because they aren't even available when targeting a freestanding implementation (embedded), but I don't prefer them anyway: they make for unpredictable control flow and make it hard to reason about soundness, completeness, and exhaustiveness.

You seem to have the impression that you can just panic and throw a stack trace. That might work fine for a program running in the terminal and targeting developers, but it is not acceptable for end users or for libraries. I also know programs that just output a stack trace and crash. That is stupid. I mean, I understand what is going on, because I am a developer, but first, I am not familiar with every codebase I use, and second, the average end user is not able to act on any of that and will be angry for good reason when their documents are gone, data is corrupted, or even just the workflow is interrupted. I also don't consider a network error, a file (system) issue, or OOM to be so rare that it's acceptable to just ignore them. They should be part of normal program behaviour.


Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.

You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.

These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.


>Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

I don't disagree that the world is full of fuzziness. But the problem I have with this portrayal is that formal models are often normative rather than analytical. They create reality rather than being an interpretation or abstraction of reality.

People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions. And this is not just true for software products. It's also largely true for manufactured products. Our world is very much shaped by artifacts and man-made rules.

Our probabilistic, fuzzy concepts are often simply a misconception. That doesn't mean it's not important of course. It is important for an AI to understand how people talk about things even if their idea of how these things work is flawed.

And then there is the sort of semi-formal language used in legal or scientific contexts that often has to be translated into formal models before it can become effective. Law makers almost never write algorithms (when they do, they are often buggy). But tax authorities and accounting software vendors do have to formally model the language in the law and then potentially change those formal definitions after court decisions.

My point is that the way in which the modeled, formal world interacts with probabilistic, fuzzy language and human actions is complex. In my opinion we will always need both. AIs ultimately need to understand both and be able to combine them just like (competent) humans do. AI "tool use" is a stop-gap. It's not a sufficient level of understanding.


> People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions.

> Our probabilistic, fuzzy concepts are often simply a misconception.

How eg a credit card works today is defined by financial institutions. How it might work tomorrow is defined by politics, incentives, and human action. It's not clear how to model those with formal language.

I think most systems we interact with are fuzzy because they are in a continual state of change due to the aforementioned human society factors.


To some degree I think that our widely used formal languages may just be insufficient and could be improved to better describe change.

But ultimately I agree with you that this entire societal process is just categorically different. It's simply not a description or definition of something, and therefore the question of how formal it can be doesn't really make sense.

Formalisms are tools for a specific but limited purpose. I think we need those tools. Trying to replace them with something fuzzy makes no sense to me either.


I believe the formalisms can be constructed by something fuzzy. Humans are fuzzy; they create imperfect formalisms that work until they break, and then they're abandoned or adapted.

I don't see how LLMs are significantly different. I don't think the formalisms are an "other". I believe they could be tools, both leveraged and maintained by the LLM, in much the same way as most software engineers, when faced with a tricky problem that is amenable to brute force computation, will write up a quick script to answer it rather than try and work it out by hand.


I think AI could do this in principle but I haven't seen a convincing demonstration or argument that Transformer based LLMs can do it.

I believe what makes the current Transformer based systems different to humans is that they cannot reliably decide to simulate a deterministic machine while linking the individual steps and the outcomes of that application to the expectations and goals that live in the fuzzy parts of our cognitive system. They cannot think about why the outcome is undesirable and what the smallest possible change would be to make it work.

When we ask them to do things like that, they can do _something_, but it is clearly based on having learned how people talk about it rather than actually applying the formalism themselves. That's why their performance drops off a cliff as soon as the learned patterns get too sparse (I'm sure there's a better term for this that any LLM would be able to tell you :)

Before developing new formalisms you first have to be able to reason properly. Reasoning requires two things. Being able to learn a formalism without examples. And keeping track of the state of a handful of variables while deterministically applying transformation rules.

The fact that the reasoning performance of LLMs drops off a cliff after a number of steps tells me that they are not really reasoning. The 1000th rules based transformation only depending on the previous state of the system should not be more difficult or error prone than the first one, because every step _is_ the first one in a sense. There is no such cliff-edge for humans.


You're basically describing the knowledge problem vs model structure, how to even begin to design a system which self-updates/dynamically-learns vs being trained and deployed.

Cracking that is a huge step, pure multi-modal trained models will probably give us a hint, but I think we're some ways from seeing a pure multi-modal open model which can be pulled apart/modified. Even then they're still train and deploy not dynamically learning. I worry we're just going to see LSTM design bolted onto deep LLM because we don't know where else to go and it will be fragile and take eons to train.

And less said about the crap of "but inference is doing some kind of minimization within the context window" the better, it's vacuous and not where great minds should be looking for a step forwards.


I have vague notions of there being an entire hidden philosophical/political battlefield (massacre?) behind the whole "are knowledge models/ontologies a realistic goal" debate.

Starting with the sophomoric questions of the optimist who mistakes the possible for the viable: how definite of a thing is "the world", how knowable is it, what is even knowledge... and then back through the more pragmatic: by whom is it knowable, to what degree, and by what means. The mystics: is "the world" the same thing as "the sum of information about the world"? The spooks: how does one study those fields of information which are already agentic and actively resist being studied by changing themselves, such as easily emerge anywhere more than n(D) people gather?

Plenty of food for thought from why ontologies are/aren't a thing. The classical example of how this plays out in the market being search engines winning over internet directories. But that's one turn of the wheel. Look at what search engines grew into quarter century later. What their outgrowths are doing to people's attitude towards knowledge. Different timescale, different picture.

Fundamentally, I don't think human language has sufficient resolution to model large spans of reality within the limited human attention span. The physical limits of human language as information processing device have been hit at some point in the XX century. Probably that 1970s divergence between productivity and wages.

So while LLMs are "computers speak language now" and it's amazing if sad that they cracked it by more data and not by more model, what's more amazing is how many people are continually ready to mistake language for thought. Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?

Practically, what we lack is not the right architecture for "big knowing machine", but better tools for ad-hoc conceptual modeling of local situations. And, just like poetry that rhymes, this is exactly what nobody has a smidgen of interest to serve to consumers, thus someone will just build it in their basement in the hope of turning the tables on everyone. Probably with the help of LLMs as search engines and code generators. Yall better hurry. They're almost done.


Nice commentary and I enjoyed the poetic turn of phrase. I had to respond to it with my own thoughts if only to bookmark it for myself.

> how many people are continually ready to mistake language for thought

This is a fundamental illusion - where, rote memory and names and words get mistaken for understanding. This was wonderfully illustrated here [1]. Few really grok what understanding actually is. This is an unfortunate by-product of our education system.

> Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?

Brilliant way to state the fundamental human condition. ie, we are all zombies conditioned to imitate rather than understand. Social media amplifies the zombification, and now LLMs do that too.

> Starting with the sophomoric questions of the optimist who mistakes the possible for the viable

This is the fundamental tension between operationalized meaning and imagination. A grokking soul gathers mists from the cosmic chaos and creates meaning and operationalizes it for its own benefit and then continually adapts it.

> it's amazing if sad that they cracked it by more data and not by more model

I was speaking to experts in the sciences (chemistry). They were shocked that the underlying architecture is brute force. They expected a compact, information-compressed theory that is able to model independently of data. The problem with brute-force approaches is that they don't scale and don't capture the essences that are embodied in theories.

> The physical limits of human language as information processing device have been hit at some point in the XX century

2000 years back when humans realized that formalism was needed to operationalize meaning, and natural language was too vague to capture and communicate it. Because the world model that natural language captures encompasses "everything" whereas for making it "useful" requires to limit it via formalism.

[1] https://news.ycombinator.com/item?id=2483976


I disagree with most of what you said.


Is it that fuzzy though? If it was would language not adequately grasp and model our realities? And what about the physical world itself: animals are modeling the world adequately enough to navigate it. There's significant gains to make from modeling _enough_ of the world, without falling into hallucinations of purely statistical associations of an LLM.


World models are trivial. E.g., narratives are world models, and they provide only prefrontal simulation, i.e. they are synthetically prey-predation. No animal uses world models for survival, and it's doubtful they exist (maps are not models); a world model doesn't conform to optic flow, i.e. instantaneous use and response. Anything like a world model isn't shallow, the basic premise of oscillatory command; it's needlessly deep, nothing like brains. This is just a frontier hail-mary of the current age.


Lobbyists are how companies talk to governments. If you believe that companies create value, then you should believe that companies should communicate with governments. It can help prevent low quality regulations from being pushed through.

Of course what they say should be validated and taken with appropriate weight. Companies are usually blinkered; they know a lot about their specialist area but aren't incentivized to consider collective action problems or externalities. Something similar can be said for every political interest group. Governing effectively means balancing everyone's interests.


> If you believe that companies create value, then you should believe that companies should communicate with governments

Sorry, you're going to have to prove that.

Companies are made up of people, and it's completely reasonable to assume that if people were allowed to have a voice within government, then they could also speak on behalf of their own interests, which will often coincide with that of the companies that they're involved with.

There's no reason to consider companies a separate entity that has its own power to communicate and many reasons not to do that.

