I don't think that being a free society is a core "civilizational virtue" for the British in the same sense that it is for Americans, or even, say, the French.
I think the purpose of the change was to "increase revenue":
> Requiring that certain research or experimental expenditures be amortized over a five-year period or longer, starting in 2023, would increase revenues by $109 billion over the period from 2023 to 2027.
> I think the purpose of the change was to "increase revenue"
Yes, but in a specific way: they were trying to offset the tax cuts they wanted so they could pass it via the reconciliation process and avoid the Senate filibuster. They didn't actually care about this revenue and the assumption from most people was that the specific carve-out would disappear in some future bill.
And now with their attempts to keep the tax cuts around, they've just decided to ignore the rule entirely and pretend that extending a temporary tax cut counts as not costing anything. Of course, there's nothing stopping them from getting rid of the filibuster entirely either, but that honestly just makes it weirder to pretend that this somehow fulfills the requirements rather than just taking advantage of the rules being only self-enforced.
Tell your product owners that they should actually use the product they own. And not just use it, but be a power user of that tool. Not a professional user, not a casual user; use the tool at least six hours a day.
I use YouTube 6+ hours a day and I have for probably 10 years, and I don’t even work there. (I have a few annoying personality limitations which make it so that I usually work better with YouTube on in the background, and NOT on autoplay, autoplay always chooses something I don’t want to see/hear; I know that because I use the tool a lot.)
I can tell you that it has steadily and continually gotten worse over those 10 years. “I have to come up with stories or I won’t have a job”: no, you don’t, but even if you did, there are so many things YouTube needs more than enlarged thumbnails with visible compression artifacts.
What shocked me in the aughts was how bad Lotus Notes was. I was pretty sure that the average IBM executive wasn't using the average version of it.
Using the most commonly used version of the product, on the most commonly used hardware, at least 2 days a week should be a prerequisite for every product owner.
>Using the most commonly used version of the product, on the most commonly used hardware, at least 2 days a week should be a prerequisite for every product owner.
I am a firm believer that the software should also be developed on commonly used hardware.
Your average user isn't going to have the top-of-the-line MacBook pro, and your program isn't going to be the only thing running on it.
It may run fine on your beefed up monstrosity, and you'll not feel the need to care about performance (worse: you may justify laggy performance with "it runs fine on my machine"). And your users will pay the price for the bloat, which becomes an externality.
Same for websites. Yes, you are going to have a hundred tabs open while working on your web app, but guess what - so will your users.
Performance isn't really the product owner's domain, in the sense that they would always be happier with things being snappier; they have to rely on the developer's word as to what's reasonable to expect.
And the expectation becomes that the software should and can only run fine on whatever hardware the developer has, taking all the resources available, and any optimization beyond that is costly and unnecessary.
Giving the devs more modest hardware to develop with (limited traffic/cloud compute/CPU time/...) solves this problem preemptively by making the developers feel the discomfort resulting from the product being slow, and thus having the motivation to improve performance without the product demanding it.
The product, of course, should also have the same modest hardware — otherwise, they'll deprioritize performance improvements.
----
TL;DR: overpowered dev machines turn bloat into an externality.
Make devs use 5+-year-old commodity hardware again.
"The Microsoft Store" app is such a strong example of what happens when nobody cares about performance. It misses UI events most of the time, regardless of what hardware it's running on. Although, in this case, I don't think a Pentium 166MHz would help. The UI event processing is just fundamentally flawed.
<flame=ON>
Usually, but not always, it ignores scroll events while an animation is playing…and hovering over a tile in the list causes a pointless zoom-in animation (the result of which occludes parts of adjacent tiles). Sometimes, the animation won't start immediately, but will still play. To avoid the cannot-scroll-while-animating problem, the only safe place for the mouse pointer is over the scrollbar.
Clicking the (completely invisible) track of the scrollbar has random multi-second delays.
Most of the search filters are hidden by default…and can't be shown without waiting for a slow animation. You can click the show-filters widget over 30 times if you're in a hurry, and still the animation hasn't even drawn the first frame. That delay before it starts means that even if you try to wait, you might click one extra time, and then see both the show-filters animation and then the hide-filters animation…all while none of the rest of UI responds. …And then you might realise you want to refine your search terms…which will reset all filters and re-hide the filter options.
Once you find a tile you want to click, be prepared for another two animation delays: one, if the tile isn't already zoomed in, and another while the app mysteriously animates a slew of placeholders instead of just dumping the item's information directly into view. It's slow like a 33.6k modem on a noisy phone line, but now you finally have details about the item you clicked on maybe 7 to 40 seconds ago.
Now maybe you click a screenshot to enlarge, and decide it wasn't the app for you. You hit your mouse's 'back' button or click the app's strangely tiny (given how freaking huge most of the UI is) back button. Nothing happens. You try again, potentially numerous times…because the app ignores those inputs while a screenshot is enlarged. The app's so unresponsive, it at first doesn't occur to you that no amount of waiting or retrying will help. No, you have to click the little close widget on the opposite side of the window, or 'back' will never mean 'back' again.
You try to go back to your search results. The app eventually responds, but it has discarded that data for some reason and has to play more placeholder animations while reloading it and rediscovering your scroll position.
Then you go into another search result and decide the sidebar of other apps people viewed has some interesting items. These don't have animations on the tiles or any details, so you have to click each one of interest, waiting for more placeholders while imagining modem noises and being outpaced by a Colorado glacier that's crossing the road. And when you page back, the item you just came from does /more/ animations while reloading everything via IP Over Avian Carrier With Quality Of Service.
But when burrowing through the people-also-viewed sidebars, don't go too many layers deep, or when you return to your search results, it will have forgotten your scroll position and turned off your search filters. Ah, time for more UI-blocking animations.
But that's okay, right? Nobody ever made an app that responds in milliseconds to every user input, right? And we all know that doing long, blocking operations on the UI thread is right and holy, right? Even routine single-threaded apps never need to yield to other code or process interrupts, …right?
<flame=OFF>
<meta-flame>
Yes, I have reported this to MS via Feedback Assistant. A few times. No, I don't know why they haven't appeared to do anything about this unshippable pile of random bits that somehow slopped out of the Bit Bucket.
I have never experienced any of this. It’s not a great app, but I’ve never had any problems like you’re describing. Or… somehow I don’t remember them, but that seems unlikely; I’m always willing to dogpile on a shitty application, but I have to experience the problems myself.
It would be funny to compare suicide bombing to a dev implementing features their team is working on, even if they don't sound good to that particular dev, if it weren't so sad and offensive.
I'm sorry, but this cop-out really pisses me off. It is far too common and frankly, unacceptable. It really is insulting that you'd expect others to accept this as a justification. It's a lazy dismissal and not even a proper excuse.
Your excuse for doing something shitty is... that someone else will? What does another person even have to do with it?! Seriously, let them have the blood on their hands. You can't even assume that someone else will! If you do it, you guarantee that it happens. Even if it is likely that someone else will, there's a big difference between a likelihood and a certainty. This is literally what creates enshittification.
Plus, the logic is pretty slippery. Certainly you're not going to commit crimes or acts of genocide! You were "just following orders"[0], right? Or parents often say to their children "if everyone jumped off a cliff, would you?" Certainly the line is drawn somewhere, but frankly, it is the same "excuse" given when that extreme shit happened, so no, I won't accept it.
You have autonomy[1], and that makes you accountable. You aren't just some mindless automaton. You may not be the root cause, but at best you enable it. You can't ignore that you play a role.
And consider the dual: if you don't make it better, who will?
I believe you have the power to make change. Do you? Maybe not big change, but hey, every big thing is composed of many smaller things, right? So the question is which big thing you want to contribute to.
I suspect for the same reason that comments like the one I responded to were made: liking my comment means accepting that you are a willing participant in creating shit/harm.
But I stand by it: you aren't a mindless automaton, and your actions matter.
Guess I thought of that one as more like a "fix," since the first one was not fit for purpose. Still not good enough, needs spec bump and driver support.
I was looking at potential Linux phones the other day; postmarketOS has ports for phones as new as the Fairphone 4 and OnePlus 6. They don't have any phones with decent camera support though.
Reminds me more of the Amazon Delivery Partner model, where the way you want to do something implies harming innocent people, so you have a third party do it to shift blame for the deaths.
An interesting book is "Debt: The First 5,000 Years" by David Graeber.
It talks about the various ways debt can insinuate itself into society. In some cases it can pull people together; in others it can corrupt everything it touches.
Not really. This is a pretty clear-cut case of both ethical and legal violations. Now that the investigation has gone public, the sheriff is likely going to jail for it.
Above-board prison labor is where the slavery question actually comes into play, simply because it's allowed/legal.
The Rust for Linux politics mess should be laid squarely at Linus' feet. He's the BDFL. He approved adding Rust to Linux, and he massively failed at communicating what that even meant. If he wants Rust in the kernel, he needs to kick some maintainers in the ass and get everyone aligned.
To play (BDFL's) advocate - Linux is an -enormous- project. It's not physically possible for the whole effort to be unilaterally directed. Linus does not personally maintain the whole thing; it's just not possible. So he has maintainers under him who are in charge of the subsystems - the implication being that he trusts their judgment and technical expertise. There's no other way the system could work, really.
Linus's position, as far as I've heard and ascertained, is that introducing any language into the kernel is going to require case-by-case and subsystem-by-subsystem analyses. He's not going to reject the language wholesale, but at the same time it's impossible for him to wholesale mandate its adoption. "Kicking maintainers in the ass" sounds great but nothing is ever that simple.
I think this is an important point. People believe "dictator" means "complete control" when instead "dictator" means "authority to make any decision." The difference is exactly what you say: you still have to rely on the hierarchical structure and trust that your "underlings" are making the right decisions. Complete control doesn't scale very well; even a small project or team has many moving parts, and the only way to have full control is to be involved in every part. With scale rapidly comes the necessity for trust. There's only so much time in a day, and only so much can be done in an hour. Damned physics, always getting in the way.
Either he failed at communicating, or he does not care. Reading the recent thread/flamewar on R4L and DRM I wonder if he was somewhat pressured to accept Rust but does not actually care. If you were the leader of a project and explicitly authorized a sub-project, would you tolerate a maintainer undermining it?
FWIW, there are plenty of backchannels through which discussions about this stuff can be had that aren't the public mailing list. My limited understanding is that those discussions have in fact taken place and the R4L people aren't concerned about things working out eventually.
However, I do wish he would say something publicly, so that the internet peanut gallery doesn't fill the void with negativity. It doesn't exactly help attract more interest in the project.
I don't know enough to agree or disagree with the person you're replying to. But I assume the logic would be that if Linus doesn't approve, then he should have shut the discussion down long ago, or at least do it now without half measures.
> then he should have shut the discussion down long ago
It's interesting that people expect Linus to simultaneously chill out and be more inclusive while at the same time acting like a dictator over the entirety of the project.
In any case, perhaps it's worthwhile to remember where it started, and what the attitudes were at the time [1]. It was an experiment. Experiments sometimes fail or reveal themselves to not lead anywhere useful. The choices were to let that experiment happen on the actual kernel or force the rust people to fork some version of the kernel and instantly be left behind.
It clearly wasn't an easy choice and I can only imagine the uproar from the rust community if he then did what you suggest today. So, that leaves us with the question, what if he's decided it's no longer worth it? What then should we do with a mostly failed kernel experiment?
> It's interesting that people expect Linus to simultaneously chill out and be more inclusive while at the same time acting like a dictator over the entirety of the project.
It's not particularly interesting that different people have different expectations on different issues.
It's interesting that you moved the goalposts to an entirely different point. I'm clearly talking about the overall "community," which whether you care about it or not, has been used as a point to successfully force Linus' hand before.
In other words, there are clearly politics involved, so perhaps that should be taken into consideration before making a blithe point about a complex human organizational issue?
You can be decisive and inclusive without being a prick who personally targets developers and publicly demeans them. Linus was like that. No sane developer wants that. A truly inclusive environment also requires leaders to tell rowdy maintainers calling people "cancer" to know their place and watch their tone.
It is not really a good experiment in technical qualification, nor in developer collaboration, if the leadership lets individuals constantly blockade work, is it?
How else would you target them? You can disagree over tone and scope but this is an open source project with an open contribution model.
> publicly demeaning. Linus was like that. No sane developer wants that.
Show me someone who hasn't turned to this behavior out of frustration. In isolation you can always find this from a leader; what you should be concerned with is context and persistence. While it was occasionally over the top, the majority of the time he was perfectly "sane."
> A true inclusive environment also requires leaders to tell rowdy maintainers calling people "cancer" to know their place and watch their tone.
This is open source. What exactly is "their place?" How much time should one dedicate to policing tone? Isn't that personally targeting people but in a different direction?
Which is part of my point here. Previously we just developed and ignored Linus' hot-headed behavior. This push for a vaguely defined "truly" inclusive community is what I feel led Linus into the mistake of allowing Rust into core.
> if the leadership lets individuals to constantly blockade work
What if the work just isn't very good or is counterproductive to the project as a whole? What if there is no consensus on this point? How much time should one dedicate to building consensus?
>Thinking of literally starting a Linux maintainer hall of shame. Not for public consumption, but to help new kernel contributors know what to expect.
>Every experienced kernel submitter has this in their head, maybe it should be finally written down.
>If shaming on social media does not work, then tell me what does, because I'm out of ideas.
Hector Martin went way beyond being a "prick". And Linus Torvalds told him to stop it.
Have you considered that the problem may be the people that are making "hall of shame" lists of people and doing social media brigading?
Worsening matters, Steve Klabnik (major Rust community figure, has run @rustlang, primary author of the Rust Programming Language Book, former Rust Core member, and moderator of r/rust) has been busy here and on Reddit making excuses for Hector Martin. What kind of community is the Rust community?
I haven’t been involved with the Rust Project for years, I speak only for myself.
I said that I feel for Hector, he’s clearly hurting, and that I hope he feels better. I’ve said I don’t want to pass judgement on if what he did is right or wrong, because I’m trying to stick to facts here. I’ve said that I don’t think what he did was particularly effective.
That’s not making excuses. Hector’s actions aren’t the main point of this story. It’s not even his patch!
I'm not gonna end up replying to this thread again, because it's a waste of my time. But I will elaborate on some things.
> You have clearly made excuses for Hector.
This is how posts like this use semantic drift and context collapse. Take this for example:
> Against the decision that the project made, that is very straightforwardly sabotage.
I can believe that the definition of the word "sabotage" applies in this situation without supporting every last thing about Hector and his actions. But because Hector is originally the one who said this, you turn my specific statement into some sort of generic "making excuses."
Also, the first and third comments have nothing to do with Hector. The fourth one is just stating some facts?
> And how can you not deem it wrong of him to begin social media brigading?
Because this is also making a mountain out of a molehill. Hector complained on Mastodon about something that was happening. He was very clearly burned out and upset.
Do I think it's good? No. Do I think it worked? No. But I'm not particularly interested in trying to pass some sort of judgement about if Hector is Good or Bad simply because he said a bunch of things when he was at his wit's end. If I did, I would be quite the hypocrite.
Notably, I also am not interested in passing judgement on whether Christoph Hellwig is a Good or Bad person.
None of this is about that. I don't know either of these people. They could be having a bad day. Or maybe one or both of them are evil. I don't know and I don't really care.
> How can you not deem it wrong of him to begin making a "hall of shame" list?
I don't use Mastodon, and so I didn't really see this follow-up. I'm aware it exists, but not of what's in it or what it's about. Maybe it sucks. No clue.
> If it is true that you, Steve Klabnik, were "Community Team Leader for the Rust team at Mozilla, in charge of official Rust community documentation as well as the key Rust community advocate.", can you confirm that you were in the past paid to be an advocate for Rust? And if yes, are you still paid to be an advocate for Rust?
Listen man, if you want to play Joseph McCarthy, be my guest, but I'm not going to engage. It doesn't really matter what I say.
>And I also do not want another maintainer. If you want to make Linux impossible to maintain due to a cross-language codebase do that in your driver so that you have to do it instead of spreading this cancer to core subsystems. (where this cancer explicitly is a cross-language codebase and not rust itself, just to escape the flameware brigade).
Not referring to people. And many developers would agree that a multilanguage codebase can easily end up becoming a nightmare and pure cancer to maintain, whether or not Rust is one of those languages.
Another C project has not had uniformly good experiences with its interop attempts, pulling the plug on interop with one Rust library while keeping support for two others.
>Before this step, we supported three different backends backed up by libraries written in rust. Now we are down to two: rustls (for TLS) and quiche (for QUIC and HTTP/3). Both of them are still marked experimental.
>These two backends use better internal APIs in curl and are hooked into libcurl in a cleaner way that makes them easier to support and less of a burden to maintain over time.
I remember seeing an interview where Linus said that at one time, he rejected a contribution and felt bad because he hadn't been clear enough that he didn't want it.
He said he now tries to be very clear about what he doesn't want. So I guess he is OK with how things are going; otherwise, we would likely have seen another of his famous rants.
Exactly. The recent DRM issue was about whose time would be wasted. Rust is either happening, and they are wasting time by not having proper bindings, or it is not, and all the drivers written were a waste of time. In either case, the discussions around it are already a waste of time.
Something I've been wondering: why do ebooks take so long to render? My kindle seems good at it, but opening an ebook in calibre/fbreader/etc can take minutes or even fail in some readers depending on the ebook.
I would guess there are multiple potential pitfalls here. Firstly, not all ebook formats are created equal -- Storyteller only operates on EPUB files, because EPUB is an open standard and it supports Media Overlays (read-aloud) natively. I can only really speak to that format, but there are others (MOBI, PDF, etc).
An EPUB is just a ZIP archive of XML and XHTML files (plus other assets, like images). Partly, I suspect, because of the dearth of actively maintained open source projects in the space, and partly because of the nature of tech in the book publishing industry, EPUB generation software used by authors and publishers often messes up this spec, which means that EPUB readers sometimes need to have fairly complex fallback logic for trying to figure out how to render a book. Also, because EPUBs are ZIP archives, some readers may either unzip the entire book into memory or "explode" it into an unzipped directory on disk, both of which may result in some slowness, especially if the book has lots of large resources. The newest Brandon Sanderson novel, for example, is ~300MB _zipped_.
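To make that structure concrete, here's a minimal sketch in Python (standard library only, with a tiny in-memory archive standing in for a real book) of the first thing a reader has to do: open the ZIP and resolve the path of the OPF package document from `META-INF/container.xml`:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

CONTAINER_NS = "urn:oasis:names:tc:opendocument:xmlns:container"

def find_opf_path(epub_bytes: bytes) -> str:
    """Locate the OPF package document inside an EPUB (a plain ZIP archive)."""
    with zipfile.ZipFile(io.BytesIO(epub_bytes)) as zf:
        container = zf.read("META-INF/container.xml")
    root = ET.fromstring(container)
    # The first <rootfile> element's full-path attribute points at the OPF.
    rootfile = root.find(f".//{{{CONTAINER_NS}}}rootfile")
    return rootfile.attrib["full-path"]

# Build a tiny, spec-shaped archive in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("mimetype", "application/epub+zip")
    zf.writestr(
        "META-INF/container.xml",
        f'<?xml version="1.0"?>'
        f'<container xmlns="{CONTAINER_NS}" version="1.0">'
        f'<rootfiles><rootfile full-path="OEBPS/content.opf" '
        f'media-type="application/oebps-package+xml"/></rootfiles></container>',
    )
    zf.writestr("OEBPS/content.opf", "<package/>")

print(find_opf_path(buf.getvalue()))  # → OEBPS/content.opf
```

Everything after this step (parsing the OPF, resolving the spine, loading each XHTML document) is where the fallback logic for malformed books piles up.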
Additionally, and perhaps more importantly, EPUBs (and I believe MOBIs as well) represent content as XHTML and CSS, which means that readers very often need to use a browser or webview to actually render the book. Precisely how they deliver this content into the webview can have a huge impact on performance; most browsers don't love to be told to format entire novels worth of content into text columns, for example.
Additionally the XHTML content can just be a single large file instead of one file per chapter/section. Paginating and rendering the large single file is going to be more effort than the same on a smaller file. This is all on top of the pitfalls and variability you mention.
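As a rough illustration of what a reader can do about that (a hypothetical sketch, not what any particular reader actually does): split the monolithic body into chapter-sized chunks at heading boundaries before handing anything to the layout engine, so only the visible chunk needs to be formatted at once:

```python
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

def split_on_headings(xhtml: str) -> list[list[ET.Element]]:
    """Group the <body> children of one big XHTML file into chapter-sized
    chunks, starting a new chunk at each <h1>."""
    root = ET.fromstring(xhtml)
    body = root.find(f"{{{XHTML_NS}}}body")
    chunks: list[list[ET.Element]] = []
    for el in body:
        # Start a new chunk at each top-level heading (or for leading content).
        if el.tag == f"{{{XHTML_NS}}}h1" or not chunks:
            chunks.append([])
        chunks[-1].append(el)
    return chunks

doc = f"""<html xmlns="{XHTML_NS}"><body>
<h1>One</h1><p>first chapter text</p>
<h1>Two</h1><p>second chapter text</p>
</body></html>"""
print(len(split_on_headings(doc)))  # → 2
```

Real books are messier (well-formedness isn't guaranteed, and chapters aren't always marked with h1), which is exactly the fallback-logic problem again.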
Yup, great point. Especially if you've used some tool to convert from another file, like a PDF, into an EPUB, you can easily end up with the entire book in a single XHTML file, which, again, can be pretty heavy for a browser to parse and format! I also have no idea whether Calibre et al actually use native web views, or have their own renderers, which are almost certainly less performant than native web views!
I used Storyteller to align the most recent Sanderson novel with its audio, and the result is 1.7 GB. That's... painful. It crashed the reader on a reMarkable 2 tablet.
I'm now actually working on a Calibre-Web change to strip the audio and media overlay from the books it serves via OPDS.
Then I'll need to tackle cross-device progress sync. This turned out to be surprisingly tricky.
You can’t do much better than that; that’s the size of the audiobook! For what it’s worth, I also used Storyteller on Wind and Truth, and got it down to 1.2GB by using the Opus codec at a 32 kb/s bitrate.
Yeah. My current workaround is to create KEPUBs (Kobo-optimized epubs), but that creates an issue with cross-format reading progress sync. This is an interesting task in itself, though.
So I'm trying to design a progress sync protocol. My current idea is to just use several words from the text itself to unambiguously pinpoint the position within a section (chapter).
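A toy sketch of that idea (hypothetical names, not an actual Storyteller or KEPUB mechanism): store a chapter-relative word offset plus a short snippet of normalized words, and on the other device prefer the snippet occurrence nearest the stored offset, so repeated phrases still resolve correctly:

```python
import re

def _words(text: str) -> list[str]:
    # Normalize case and strip punctuation so different renditions agree.
    return re.findall(r"[a-z0-9']+", text.lower())

def make_anchor(chapter_text: str, word_offset: int, n: int = 5) -> dict:
    """Record a reading position as a word offset plus an n-word snippet."""
    words = _words(chapter_text)
    return {"offset": word_offset, "snippet": words[word_offset:word_offset + n]}

def resolve_anchor(chapter_text: str, anchor: dict) -> int:
    """Resolve an anchor against (possibly differently formatted) chapter text,
    preferring the snippet occurrence closest to the recorded offset."""
    words = _words(chapter_text)
    n = len(anchor["snippet"])
    hits = [i for i in range(len(words) - n + 1)
            if words[i:i + n] == anchor["snippet"]]
    if not hits:
        return anchor["offset"]  # snippet not found; fall back to the raw offset
    return min(hits, key=lambda i: abs(i - anchor["offset"]))

chapter = "It was a dark and stormy night. The night was darker than most."
a = make_anchor(chapter, 8, n=1)   # the second occurrence of "night"
print(resolve_anchor(chapter, a))  # → 8
```

Normalizing down to bare lowercase words is what makes the anchor robust to markup and pagination differences between the EPUB and KEPUB renditions of the same text.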
Is the idea that you have some devices that you want to download just the text to, but have it sync with your other devices? I think we could support that natively, honestly! Storyteller already has the input files, and it uses a text-based position system that doesn’t require the audio to exist. If you’re already doing work on this, maybe we could add it to Storyteller?
Ooh, that's an interesting idea. I only have one device where I would ever want to switch to listening to my book, but a couple others where I would like to read it.
FWIW, I wrote an EPUB (well, it was called OEBPS at the time) reader that rendered pretty much all of the format ~21 years ago (including all of XHTML and CSS) and it had very decent performance. I seem to recall that someone tried it on the One Laptop Per Child XO and it was... well, slow, but it worked.
Of course! I'm hoping to have a web reader with Media Overlay support built in to Storyteller available in the next few months, along with some much needed library management tooling, so maybe that will be useful for you! I'll try to make it snappy :)
Remember the name McKinsey. They are an evil organization that does bad things to good people. Anyone from McKinsey is to be regarded with suspicion at the very least.
I'm no McKinsey fan, but it's naïve to think that they are worse (or different) than other multinationals, particularly firms providing consulting or legal or other advisory services.
Surely we have some people lurking here who currently are, or used to be, at McKinsey. I guess it isn't surprising that they don't chime in, but many of us have the same feelings about every other sus company we work for.
Everyone I worked with who had an affiliation with McKinsey owned projects that ended up costing the company tens to hundreds of millions of dollars. They were better at office politics, but their actual skill was rubbish. Everyone paid for it, except them of course. In fact, after record layoffs, guess who still managed to keep their positions? The McKinsey dudes. Crazy.
I work there. It’s just a very decentralized place. What you do is driven by a tiny branch of the widest, flattest org you’ve ever seen. They’ve added some governance as to what partners can just sign up to do themselves. But it’s all firewalled internally. Idk what people in the next room over are doing to avoid creating conflicts of interest.
I would say a lot of people here are not grounded, but in more of an anxious way than an evil profit-minded kind of way (which may sound mythical but I’ve seen it blatantly in other company cultures).
“…You need to think of Larry Ellison the way you think of a lawnmower. You don’t anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it’ll chop it off, the end. You don’t think ‘oh, the lawnmower hates me’ – lawnmower doesn’t give a shit about you, lawnmower can’t hate you. Don’t anthropomorphize the lawnmower. Don’t fall into that trap about Oracle.” – Bryan Cantrill