
Traditional (so-called “legacy”) media have legal rights and obligations in most countries. They are required to live up to certain standards, for example by distinguishing between opinion and fact, by disclosing political affiliations, and so on.

Journalist is more than a job title, and so is editor.


If you dislike things happening out of lexical order, I expect you must already dislike C because of one of its many notorious footguns, which is that the evaluation order of function arguments is unspecified.
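
For contrast, Rust does pin this down: function arguments are guaranteed to evaluate left to right. A minimal sketch (the counter closure is made up for illustration):

```rust
fn main() {
    let mut n = 0;
    let mut next = || { n += 1; n };
    // Rust guarantees left-to-right evaluation of arguments,
    // so this reliably prints "1 2"; the C analogue may not.
    println!("{} {}", next(), next());
}
```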

About RAII, I think your viewpoint is quite baffling. Destructors run at one extremely well-defined point in the code: `}`. That's not hard to reason about at all. Especially not compared to the often spaghetti-like cleanup tails you see in C, where, if you're lucky, the team does not have a policy against `goto`.
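
To illustrate in Rust, where `Drop` plays the role of a C++ destructor, a minimal sketch with a made-up `Guard` type:

```rust
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleaned up: {}", self.0);
    }
}

fn main() {
    let _outer = Guard("outer resource");
    {
        let _inner = Guard("inner resource");
        println!("doing work");
    } // `_inner` is dropped right here, at this `}`
    println!("back in the outer scope");
} // `_outer` is dropped here
```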


What are you talking about? Rust governance could not be more different from C++.

The fact is that Europe is a lot of things. You have the British class system (not that hidden), you have French and German bureaucracy, but you also have the unique combination of egalitarianism and mercantilism in Scandinavia and the Netherlands, and the zeal of Polish progressivism, shared by a number of their former Eastern Bloc neighbors.

There are places in Europe where you can easily achieve a higher standard of living (on average) than the US, and there are places where you can't.

I believe the reason Europe is behind on commercial software is just economic: solid, standardized solutions from US companies were available and seen as low-risk for decades, so why would any company try to compete? Network effects apply to things like office suites and e-mail clients just as much as to social media. Microsoft doesn't have any serious US or Chinese competitors in this space either.

That's not to say there aren't problems: The pipeline from startup to big tech firm is extremely difficult in Europe, largely because capital is much more conservative, stemming from the fact that European capital tends to be concentrated in things like pension funds. For years, successful European tech startups have at some point or another hooked into the US Bay Area ecosystem (capital, talent pool, etc.), because the local environment was way too risk-averse.

But I think you, like many, have succumbed to anti-European propaganda, which comes in a couple of forms: pro-corporatist, pro-Putinist, orientalist/sinophile, etc.


We live in an age where manufacturing of any kind, software, and AI rule the world, and the EU's output there is shrinking.

I think that in the current day and age the EU feels entitled to a standard of living disproportionately higher than its output.

And given that, the EU's awakening will be the rudest. The US's is going to be rude too, but in a different way.

I don’t want to air personal grievances, but my negative perspective on the EU (especially Germany) comes from personal experience. I don’t think I succumbed to propaganda, and I’m certainly not a fan of P or X.


If you think AI rules the world you've certainly succumbed to propaganda.

I am not thinking about ChatGPT, but robotics is now advancing at crazy speed.

The capital situation is especially dumb. There's been a lot of debate recently in Denmark about why our pension funds invest a lot of money in US venture capital funds that then invest in Danish startups that have moved to the Bay Area. The same money could have been invested in the same startups here in Europe.

As a layman, I have to say the collective credibility of economists does not inspire confidence.

I love economics, but it is a field whose subject we simply don't yet have the proper tools to study.

Modern economics is effectively a complex system for generating bullshit jobs.


What's particularly ironic is that economists are redundant from a mainstream economics perspective. They'd be the first job to cut.

Looking at the code, I'm not really sure what part of this would be more verbose in Rust. This kernel does close to nothing, not even page table setup.

Granted, the code writing to the VGA buffer will need to be in `unsafe` blocks, but yeah.
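
A rough sketch of what that might look like (0xb8000 is the standard VGA text-mode buffer address; the helper name is made up):

```rust
// The VGA text buffer lives at 0xb8000 in text mode; each character
// cell is two bytes: an ASCII code followed by a color attribute.
const VGA_BUFFER: *mut u8 = 0xb8000 as *mut u8;

// Hypothetical helper: write one character cell.
fn put_char(cell: isize, byte: u8) {
    unsafe {
        // Safety: only sound on bare metal where 0xb8000 is mapped
        // and nothing else writes to the buffer concurrently.
        *VGA_BUFFER.offset(cell * 2) = byte;
        *VGA_BUFFER.offset(cell * 2 + 1) = 0x0f; // white on black
    }
}
```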


This would work if it weren’t for that lovely little human trait where we tend to find bumbling characters endearing. People would be sad when the AI lost.

Maybe infusing the AI character with the boundless self confidence of its creators will make it less endearing :)

What’s wrong with having a bittersweet movie?

That’s not the slow part. The slow part is moving any data at all to the GPU - doesn’t super matter if it’s a megabyte or a kilobyte. And you need it there anyway, because that’s what the display is attached to.

Now, the situation is that your display is directly attached to a humongously overpowered beefcake of a coprocessor (the GPU), which is hyper-optimized for calculating pixel stuff, and it can do it orders of magnitude faster than you can tell it manually how to update even a single pixel.

Not using it is silly when you look at it that way.


I'm kinda weirded out by the fact that their renderer takes 3ms on a desktop graphics card that is capable of rendering way more demanding 3D scenes in a video game.

Sure, use it. But it very much shouldn't be needed, and if there's a bug keeping you from using it your performance outside video games should still be fine. Your average new frame only changes a couple pixels, and a CPU can copy rectangles at full memory speed.
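
To make the "full memory speed" point concrete, here's a sketch of a dirty-rectangle copy in Rust (the row-major pixel layout and stride parameters are assumptions for illustration):

```rust
// Copy a w×h rectangle of pixels from `src` into `dst` at (x, y).
// Both buffers are row-major; `*_stride` is the row width in pixels.
fn blit(dst: &mut [u32], dst_stride: usize,
        src: &[u32], src_stride: usize,
        x: usize, y: usize, w: usize, h: usize) {
    for row in 0..h {
        let d = (y + row) * dst_stride + x;
        let s = row * src_stride;
        // copy_from_slice lowers to a memcpy, one row at a time.
        dst[d..d + w].copy_from_slice(&src[s..s + w]);
    }
}

fn main() {
    let mut framebuffer = vec![0u32; 1920 * 1080];
    let patch = vec![0x00ff_00ffu32; 16 * 16];
    // Repaint a 16×16 dirty rectangle at (100, 50).
    blit(&mut framebuffer, 1920, &patch, 16, 100, 50, 16, 16);
}
```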

I have no problem with it squeezing out the last few percent using the GPU.

But look at my CPU charts in the github link upthread. I understand that maybe that's due to the CPU emulating a GPU? But from a thousand feet, that's not viable for a text editor.


Yeah LLVMpipe means it's emulating the GPU path on the CPU, which is really not what you want. What GPU do you have out of interest? You have to go back pretty far to find something which doesn't support Vulkan at all, it's possible that you do have Vulkan but not the feature set Zed currently expects.

It was an ASUS GeForce GT710-SL-2GD5. I see some sources putting it at 2014. That's not _recent_ recent, but it's within the service life I'd expect.

(Finger in the air, I'd expect an editor to work on 20 year old hardware.)

Sold it ages ago. New one (Intel) works fine.

I was running Ubuntu. I forget which version.


> It was an ASUS GeForce GT710-SL-2GD5. I see some sources putting it at 2014. That's not _recent_ recent, but it's within the service life I'd expect.

That's pretty old: the actual architecture debuted in 2012, and Nvidia stopped supporting the official drivers in 2021. Technically it did barely support Vulkan, but with that much legacy baggage it's not really surprising that greenfield Vulkan software doesn't work on it. In any case, you should be set for a long time with that new Intel card.

I get where you're coming from that it's just a text editor, but on the other hand what they're doing is optimal for most of their users, and it would be a lot of extra work to also support the long tail of hardware which is almost old enough to vote.


I initially misremembered the age of the card, but it was about that old when I bought it.

My hope was that they would find a higher-level place to modularize the renderer than llvmpipe, although I agree that was an unreasonable technical ask.

Once-in-a-generation technology cliff-edges have to happen. Hopefully not too often. It's just not pleasant being caught on the wrong side of the cliff!

Thanks for the insights.


LibreOffice is catastrophically bad. It is slow, buggy, and everything it does is either pointlessly emulating a bad product, or pointlessly going against expectations.

It exists for one reason only, which is OSS fervor. Great, but that doesn’t lead to great design.


I'm with wolvoleo. I'm forced to use MS Office at work but install only LO on my personal machines. It may lack features or pizzazz but as a reliable, unfussy authoring tool, it serves my needs very well.

> pointlessly going against expectations

If you're referring to the ribbon, I'm not sold on its superiority. The vast majority of other software still uses the familiar menu structure, which is what LO uses too.

Granted, well-meaning educational programs expose students to MS Office and its paradigm from an early age. For their sake, I eagerly await a coding-assistant AI powerful enough to reskin LibreOffice to look just like MS Office, ribbon and all.


I started my wife on LibreOffice, putting it on her Mac when her 365 subscription lapsed. She loves it. Her needs aren't fancy, though, and she can create her own or open others' documents and spreadsheets just fine.


I don't agree, I use it all the time. I never use the 'real' office at home, though I do at work. And I'm really happy with it. It works fine, it's pretty light and it runs on every OS without me having to use a substandard web version.

I understand their copying the MS Office look and feel because that muscle memory is key to converting users. I like the way they didn't go all-in on those ribbons which have always been pretty terrible.

In that sense I think the biggest issue with the product is that it takes so many cues from MS Office, which on its own is pretty terrible but has grown to be ubiquitous.

I think the whole office workflow is grossly outdated anyway. Excel is mostly misused as a piss-poor database, a role it deeply sucks at because it doesn't offer any way to safeguard data integrity. What MS should do is overhaul Access completely to make users grok it better. But they don't care.

Word docs are still full of weird template issues, PowerPoint still uses the old overhead projector transparent slide paradigm.

What it really needs is someone to look at this without any of the 1980s baggage and come up with tools for this century's workflow problems, using techniques that fit this century. Adding an AI Clippy, like MS has done, does not cut it at all.

But it does mean having to chip away at the entrenched market position of Office, and that's the problem. Microsoft stops innovating when they've cornered the market, just like they did with Internet Explorer. Someone has to do a Chrome on Office, but it will need someone with a big bag of money, not an open source project run on a shoestring.

So yeah I think LibreOffice is not great but the not great bits are copied from MS Office because they simply have no alternative.


I recently began using markdown readers/writers like Typora, and they’ve blown me away: what LibreOffice Writer could have been. Competing directly with MS Word was a trap.


You have to consider the origins, going back to StarOffice in the days when most people were on really slow dial-up if they had internet at all. And even a lot of businesses were almost worse off, sharing a single dial-up or ISDN connection.


The trouble with C++ is that it maintains backward compatibility with C++.


I suppose so, but I think it's relatively rare for languages to just drop support for older features and force developers to rewrite code using newer mechanisms.

Python 2 to 3 is a good example of what can be expected to happen - very slow adoption of the new version since companies may just not have the resources or desire to rewrite existing code that is running without problems.


Late in the C++ 20 timeline, P1881 "Epochs" was proposed. Similar to Rust's editions, which at that time had been tried in anger only once (the 2018 Edition), it proposed that there should be a way for C++ to evolve, forbidding obsolete syntax in newer projects rather than just growing forever like cancer.

Epochs was given the usual WG21 treatment and that's the end of that. Rust shipped 2021 Edition, and 2024 Edition, and I see no reason to think 2027 Edition won't happen.
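
A concrete example of the kind of change editions can ship: Rust 2021 changed how `into_iter()` resolves on arrays, and because editions are a per-crate opt-in, old code keeps compiling as before:

```rust
fn main() {
    let xs = [1, 2, 3];
    // On editions 2015/2018 this method call auto-referenced to the
    // slice, yielding &i32; since the 2021 edition it calls the array's
    // own IntoIterator impl and yields i32 by value.
    for x in xs.into_iter() {
        println!("{x}");
    }
}
```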

The current iteration of Bjarne's "Profiles" idea is in a similar ballpark, though it got there via a very different route, this time because it will aid safety to outlaw things that are now considered a bad idea. If this goes anywhere, its nearest ship vehicle is C++ 29.

Now, Python 2 to Python 3 did take a few years, maybe a decade. But just shipping the mechanism to reform C++ looks likely to take at least nine years. Not the reform, just the mechanism to enable it.


Meanwhile, editions still don't have an answer for semantic changes exposed on the public APIs of crates, each compiled with its own edition.

So what does the compiler choose: the edition of the caller, or the callee?


I don't understand the question, maybe you have an example?


In general, even though semantic changes haven't been a thing in editions so far.

Crate A uses edition X.

Crate B uses edition Y + N, which has broken Rust language semantics between those editions, or changed a standard library type that Crate A uses on its public API in a backwards incompatible way.

Now an application on edition Y + M, with M >= N, would like to depend on Crate A and Crate B, and calling that incompatible API would be a requirement for the implementation goal.

So far editions haven't bothered to cover such cases, which are deemed not worth thinking about.

Those in such scenarios get to hunt for another crate to replace Crate A, or have a local fork.


I think it would be useful to give an example of the kind of semantics that would be desirable to change, and where people might think that it should be possible to change at an edition boundary.

I’m not very sure, but maybe something like unforgettable types? Or linear types? Maybe negative trait bounds, or specialization?


OK, so you don't have any examples, I recommend coming back once you have an example as otherwise there's nothing concrete to engage with here.


That was the example. Now, if you'd rather play defense instead of engaging in a meaningful discussion about the Rust approach to language versioning and its constraints, well, it is what it is.


> Python 2 to 3 is a good example of what can be expected to happen

People keep bringing this up when discussing backwards compatibility breaks, but I think the conclusion should be a bit more nuanced than just "backwards compatibility break <=> years (decades?) of pain".

IMHO, the problem was the backwards compatibility break coupled with the inability to use Python 2 code from 3 and vice versa. This meant that not only did you need to migrate your own code, but you also needed everything you depend on to also support Python 3. This applied in reverse as well - if you as a library developer naively upgraded to Python 3, that left your Python 2 consumers behind.

Obviously the migration story got better over time with improvements to 2to3, six, etc., that allowed a single codebase to work under both Python 2 and 3, but I think the big takeaway here is that backwards compatibility breaks can be made much more friendly as long as upgrades can be performed independently of each other.


Python 3 drops support for older versions of Python 3 all the time. Every single release comes with a bunch of deprecated and removed features. There is a very strong chance that a Python 3.7 codebase is totally broken on 3.14. Almost no language takes backwards compatibility as seriously as C++ does.


It's an interesting policy decision for a language - whether to guarantee (or at least practice) ongoing backwards compatibility or not.

The benefit of NOT maintaining backwards compatibility is that hopefully you end up with a cleaner language, without all the baggage of the past and the "impedance mismatches" of new features that don't play well with old ones.

However, as a developer, I think I prefer the backwards compatible approach, knowing that:

1) Upgrading to the latest version of the compiler hopefully isn't a big deal, and I can take advantage of any benefits that brings, including new language features, without having to sign up for rewriting existing tested code to remove deprecated features. In a large code base any such forced rewrites could be pretty onerous, and may involve a lot of retesting and potential for introducing bugs into code that was previously working.

2) It's nice to be able to take older projects, that you haven't worked on in a while (whether hobby projects, or legacy corporate code) and have the code still compile with the latest installed tool set, rather than to have developed "bit rot" due to using language features that have since been deprecated. Of course bit rot may also occur due to library changes, so language backwards compatibility is no guarantee of smooth sailing.

Maybe going forwards, with AI coding tools, this will become less of an issue, if they become capable of this sort of codebase update without introducing errors. In fact, it may well be that the choice of programming language itself comes to be seen as something the tooling takes care of. Nowadays we write in high-level languages and don't really think or care about what the generated machine code looks like. Maybe in the future we can write high-level instructions to the AI without even having to care what the implementation language is?

