> Some people just want to use a tool to do a job swiftly. Not everything has to be educational.
> some folks want to use lossless cut
In that case I would encourage you to ruminate on what the following in the post you're replying to means and what the implications are:
> "ff convert video.mkv to mp4" (an extremely common usecase) maps to `ffmpeg -i video.mkv -y video.mp4` here, which does a full reencode (losing quality and wasting time) for what can usually just be a simple remux
Depending on the size of the video, the time it takes you to "do the job swiftly" (i.e. not caring about how the tools you are using actually work) might be longer than the time it would take to read the ffmpeg manual, or at the very least to search for some command examples.
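For reference, assuming the MKV's streams are already MP4-compatible (e.g. H.264 video and AAC audio), the lossless version is just a stream copy:

```
# copy the existing streams into an MP4 container, no re-encode
ffmpeg -i video.mkv -c copy video.mp4
```

That finishes in seconds instead of tying up the CPU for the length of a full re-encode.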
As the other person said (and this is my mistake for not capitalizing), Lossless Cut is a popular GUI wrapper for ffmpeg with a (somewhat) intuitive interface. Someone is going to be able to pick up and use that a lot faster than they can ffmpeg. I think a lot of folks forget how daunting most people find using a terminal, yet a lot of those people still want something that can do a simple lossless trim of an existing video or some other little tweak. It’s good that they have both options (and more).
> > some folks want to use lossless cut
> In that case I would encourage you to ruminate on what the following in the post you're replying to means and what the implications are:
You may have misunderstood the comment: "lossless cut" is the name of an ffmpeg GUI front end. They're not discussing which exact command line gives lossless results.
The thing is that when a video is being re-encoded, so long as I'm not trying to play games on my computer at the same time, I'm free to go do something else. It does not command any of my attention while it's working, whereas sitting and reading the man pages commands my attention absolutely.
"I posted content to a proprietary social network, then got upset when it generated a page description with AI"
Sure, the description is garbage, and it may not be obvious that it wasn't written by the user, but people need to understand what partaking in closed, proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
If you don’t enjoy using a service that does this to the content you post then don’t use that service.
I’ll stick to this point only, even though I feel there are other things in the post that are terribly annoying.
When the behavior is not only something you "don't like" but is also (as this woman perceives it) a professional threat (she makes a living out of carefully choosing her words; she felt this attributed to her words she would never have said) and is furthermore unexpected, simply leaving the platform quietly seems insufficient. One ought to warn other users about the unexpected, dangerous practice -- which is precisely what this article accomplishes!
Interesting that this got posted today: I also have a server on Hetzner (although I don't think that's relevant) and noticed yesterday that a Monero miner had been installed.
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
I looked around a bit and it seemed like an SSH key had been added under the root user, and some kind of remote management agent had been installed. This container was running Alpine, so after shutting down the actual web application it was pretty easy to identify which of the remaining processes didn't belong from a simple ps output.
In the end, I just scrapped the container, though I did save it in case I ever feel like digging around (probably not). Either way, I learned some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups; in my case, daily ZFS snapshots in Incus make rolling back to before the intrusion a breeze (rough commands below).
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
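For anyone curious, this is roughly what the resource limits and snapshots look like under Incus (the instance name and exact values here are just examples):

```
# give the container a small, low-priority CPU share so a hijacked
# workload can't starve the rest of the host
incus config set web-app limits.cpu.allowance 10%
incus config set web-app limits.cpu.priority 0

# snapshots can be taken and restored manually (or scheduled daily);
# on a ZFS storage pool they are cheap and nearly instant
incus snapshot create web-app pre-cleanup
incus snapshot restore web-app pre-cleanup
```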
Regarding the miner itself:
- from what I could see in its configuration, the miner hadn't actually been set up correctly, so it's possible they run some kind of benchmark and just leave the system silently compromised if it's not "worth it"; they still have a way in to use it for other purposes.
- no attempt had been made at file system obfuscation, which is probably the only reason I discovered it at all. There were literally folders lying around in /root with the word "monero" in them; this could easily have been hidden.
- if they hadn't installed a miner and just silently compromised the system, leaving whatever running on it alone (or even doing a better job at CPU priority), I probably never would have noticed this.
The parent attempted to excuse them by pointing out that the initial design was based on phone numbers. Putting aside the fact that the initial design is irrelevant to criticism of the present design, they went out of their way to add usernames yet still deliberately disallow signup without a phone number.
> Not a very good case made since you obviously didn’t read the parent discussion.
This isn't an argument; do you have anything to back up your assertion?
While I understand the attraction of doing so, I’m not sure I like the implication in the post that this needs to be reviewed because of how loyal a customer this person is, or because they have written books on developing for Apple devices.
I continue to be unimpressed by LLMs when it comes to creative work. They're certainly useful sometimes for "reference digging", but maybe I just don't understand enough about how they work and this is actually something that can already be "fixed", or at least optimized for. Anyway, one of the headlines is:
> Debian 18 "Trixie" released
While it correctly derives that a likely version number in ten years would be 18 (there are new releases approximately every two years, which means +5 from today's version 13), it then goes on to "make up" that its name would be "Trixie" -- the same name as the current release in 2025.
Debian has never re-used a release name, and I think we can be pretty confident they never will (nor will any other Linux distro), so I would expect it to "understand" that:
- The next Debian release always uses a previously non-used Toy Story character
- Based on this information, _any_ name of a Toy Story character that hasn't been used is fair game
- At the very least, it certainly won't be the same name again, so it should at least make up a new one
And the fact that it thinks it will take 10 years to go from Linux kernel 6.18 to 7.4 when it only took 13 months to go from 5.18 to 6.4... It's off by about an order of magnitude...
From a quick check, Gemini Pro 3's cutoff date is January 2025, before Trixie's release in August 2025, so it could be that Gemini actually did notice it should pick an unused Toy Story character; from its point of view, "Trixie" hadn't been used yet.
It's "cool" but in terms of usability I think it's terrible. I would think that someone who works full time as a designer and has strong opinions on right and wrong would think twice about wasting screen real estate on mobile devices with a bunch of bullshit like a sticky menu, footer and comically huge "window"-headers.
The only thing I see is the design equivalent of over-engineering a car with bells and whistles that nobody gives a shit about. It's simply showing off and sending a signal to other designers, which is obviously fine if that's what you're going for, but personally I hate it (as you may suspect, my job is not in design).
For example, I read it on a 10-inch tablet, so the line length for the article was about seven words, which required a lot of extra scrolling. And since it's scrolling within a box, I needed to scroll in a specific area of the page rather than just scrolling the whole page.