What is a "super segment hybrid display"? Looks cool, has VFD vibes, but I assume it's just an OLED with an overlay or something based off in the Verge article "Most of the KO II’s parts are just off-the-shelf components, including the display"
That's my guess as well. Either monochromatic OLED (cheap!) and colored icons (not so cheap?) or the other way around. LCD with front panel might also be fine as long as it's bright enough, and the front panel is dim enough.
> Like Hysteria, Brutal is designed for environments where the user knows the bandwidth of their connection, as this information is essential for Brutal to work.
They don't quite say that this is a bad idea for use over WAN. If they intentionally avoided ruling out such usage in this qualification, they're making an implicit assumption here that either the last-mile connection or the endpoints themselves are going to be the bottleneck. If some router in between is having a bad day, it would definitely make its day worse.
edit: I wasn't familiar with Hysteria, but now that I'm reading those docs, I guess the intent is for this to be used on the internet. In that case, it does seem like it'd be pretty adversarial to run this. I bet if it saw widespread adoption it'd make ISPs pretty upset.
edit 2: Going slightly off-topic now, but I wonder if the bandwidth profile of Hysteria compromises its HTTP/3 masquerade?
It is intentionally used over WAN. Brutal is part of Hysteria's (https://news.ycombinator.com/item?id=38026756) internal components, and Hysteria is a proxy made for people in China living under censorship, where outbound Internet access is heavily degraded.
> but I wonder if the bandwidth profile of Hysteria compromises its HTTP/3 masquerade?
Most likely so. The GFW is not able to reassemble and analyze QUIC (and AFAIK, any UDP-based multiplexed protocol) traffic, yet. If Hysteria takes off, the GFW will try to kill it, and it's likely to be degraded severely, just as Shadowsocks, V2Ray, or (ironically) Trojan were.
Very few "censorship-resistance" proxy implementations out of China were designed to systematically evade traffic analysis; they usually just avoid general techniques and rely on being niche enough to fly under the radar. Which is not wrong: being diverse is also a good strategy.
The layout is eerily similar to one I designed last year [1]. I may be biased, but I'm not sure if it is worth the $365 price point. That being said, it is a bit more refined and has some nice accessories bundled.
I may pick one up for comparison. But, just noting that there is a comparable open source design available. ;)
But budget options pretty much demand a variety of skills.
DIY boards require soldering and flashing firmware to the microcontrollers. A soldering mistake can be very difficult to diagnose and even harder to fix. Paying someone else to solder is sometimes an option, but that can be expensive too.
Even with cheap boards from e.g. AliExpress, you still might have to be familiar with flashing firmware.
For someone with more time than money, DIY is a good option.
With ZSA, I feel I can at least recommend a reputable brand; I'm not sure I can recommend DIY assembly to most people.
Yeah, I agree full DIY is not for everyone, but SMT assembly is pretty affordable even in small batches these days. I just placed an order for 5x of a 60% unibody ortho board and it came out to roughly $40 a piece before switches, case, etc.
I considered the price as well. I do love open source designs more than I love the design of the ZSA Voyager (it has too few thumb keys), but in terms of price, it might actually be fair. Milled steel cases for a Corne or Lily58 are only available via group buys that charge $300-400, and they don't come with either keycaps or switches. I believe it's realistic that you could pay the same amount or more for an alternative.
I use a service called Plausible for this. They are privacy-focused (they don't even use cookies), but there are some drawbacks (like not really being able to know MAU, since user identifiers reset daily).
> * A replacement front-end for "ffmpeg", see above
I have one of these too... It's kind of frightening how hard ffmpeg is to use without some kind of custom frontend. I have probably dozens of bash/python scripts to invoke ffmpeg for different common tasks.
- One to extract audio
- One to extract all the individual streams from a container
- A couple different transcoding scripts
- One specifically for gifs
- One to crop video
- A few that I can't remember the purpose of... and can't tell from reading the source
You could threaten to kill me and I wouldn't be able to use ffmpeg from memory. I just don't use it often enough. So I created a script with the settings I usually want for the things I need to encode semi-often. It definitely doesn't make it easier to remember ffmpeg command line options, because now I don't have to use them.
kubectl and our homegrown command line tools for interacting with our build and deployment? I know most of it by heart, of course, because I use it daily. I really don't like some of the scripts that we have and do it "by hand" instead, because I use it all often enough that I want to know what is really going on underneath in case things don't go the way they should (and there's always something). Because of that, I can diagnose and fix or work around all these issues with ease, while lots of other people just run the wrapper scripts and, if something doesn't work, often can't troubleshoot even the simplest problems. I'm that guy w/ ffmpeg ;)
I'm pretty sure you meant this in a different way but it reminded me of another thing I see a lot with people.
They have these huge lists (written down in some tool or another) of commands for specific tasks, and they copy and paste from them. It's heart-wrenching to watch them search for a command (even when they find it) and paste it, only for it to not work, or to turn out to be something trivial. Like the `kubectl get pod` thing I mentioned: they might have that in one of those lists under some heading like "dev environment commands" or somesuch, together with 10 or 15 others.
Because they never actually tried to understand the simple logic and meaning behind these command line tools and only ever copy and pasted, they get tripped up by even the simplest things, such as replacing the parts that need to be adjusted for their specific situation, even when they're clearly marked, like `kubectl -n <yournamespace> get pod`. (Or, one that comes up even more often in troubleshooting sessions: "The command from the docs does not work and I made sure I copy and pasted so I wouldn't mistype it", and when you ask what they ran, it was `kubectl -n examplenamespace get pod`.) Or they might have written down a command from the onboarding docs that combined multiple things into one command line. To fix issues with their environment, they have to basically nuke everything and start from scratch, because only then will their copy-and-pasted command actually work. They haven't learned to decompose these commands and use the parts individually, or to recompose them.
I agree that keeping a cookbook of stuff pasted from the internet without knowing the fundamentals is an anti-pattern.
For me (and probably for GP), my cookbook is stuff that took me more than a few minutes of tinkering or reading manpages to figure out, and it’s stuff that I use maybe once every couple weeks — not often enough to memorize, and annoying if I have to figure it out again.
It’s the ‘wrong’ way to think about it only in a subjective sense: powerful in this context means lots of arguments and config settings you have to learn, memorize, or look up.
That means you have to become an expert-level user before you become fluent (the point where a user would call it “easy”).
It’s a UX that would exclude entry-level computer users (since entry-level here means not even knowing where the shell is to run ffmpeg) and would perhaps be a barrier to intermediate computer users.
ffmpeg arguments generally compose pretty well, although it's powerful enough that (as other comments have mentioned) for special use cases you do often have to look things up.
If you can remember 3 or 4 things you can do most stuff without looking anything up, however:
-i -> input file
-vcodec / -acodec -> video and audio codec; "copy" specifies copying the input stream
-vn / -an / -sn -> disable the video / audio / subtitles in the output file
-ss / -t -> specify the start time and length of the conversion from the input file
So, just by looking at the above, you can easily see how to extract audio (or any individual stream). For gifs specifically I would recommend using gifski which has much better results anyway. For cropping, I don't find the `-vf crop` syntax too bad.
But when you have differing behaviour based on argument position like putting -ss / -t before the -i or after (fast seek vs accurate seek), it gets confusing pretty fast.
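For example, composing just those flags (file names are made up, and the audio copy assumes the input's audio codec fits the output container):

  ffmpeg -i input.mp4 -vn -acodec copy audio.m4a
  ffmpeg -ss 00:01:00 -t 30 -i input.mp4 -vcodec copy -acodec copy clip.mp4

The first pulls the audio stream out untouched; the second uses fast seek (-ss before -i) to cut a 30-second clip without re-encoding (so it will snap to keyframes).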
I wrote one of those last week as a sort of poor man’s video editor. Besides aliases for common commands (like extracting a time range without re-encoding), it also takes the tedium out of repeatedly typing file names. Output file names are generated based on the input file name, with a prefix that auto-increments like an SQL id. Input files can be referenced by prefix. It makes a huge difference.
1000000 iterations, buffer 8192, message 128
i9-13900K, gcc 13.3, -O3, kernel 6.8.0-49, glibc 2.39
byte-mod: 0.117353 us/iter
byte-and: 0.065379 us/iter
byte-sub: 0.027865 us/iter
1cpy-mod: 0.001143 us/iter
1cpy-and: 0.001100 us/iter
1cpy-sub: 0.001098 us/iter
2cpy-mod: 0.008140 us/iter
2cpy-and: 0.001100 us/iter
2cpy-sub: 0.007711 us/iter
funny-mod: 0.001145 us/iter
funny-and: 0.001101 us/iter
funny-sub: 0.001100 us/iter
where:
`byte` is per-byte copy
`1cpy` is a single-memcpy (assumes message size divides buffer size evenly)
`2cpy` is the split memcpy (which supports other message sizes)
`funny` is single-memcpy with the double-memmapped buffer
and:
`mod` uses `head %= buffer_size`
`and` uses `head &= buffer_size - 1`
`sub` uses `if (head >= buffer_size) head -= buffer_size`
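For anyone who wants the copy paths spelled out, here's roughly what 2cpy and 1cpy look like in C; this is a minimal sketch with my own names, not the exact benchmark code:

  #include <stddef.h>
  #include <string.h>

  /* 2cpy: split the memcpy when the message would run past the end. */
  static void push_2cpy(unsigned char *buffer, size_t buffer_size,
                        size_t *head, const unsigned char *msg, size_t msg_size)
  {
      size_t first = buffer_size - *head;
      if (first > msg_size)
          first = msg_size;
      memcpy(buffer + *head, msg, first);
      memcpy(buffer, msg + first, msg_size - first);  /* copies 0 bytes if no wrap */
      *head = (*head + msg_size) % buffer_size;       /* or the `and`/`sub` variants */
  }

  /* 1cpy / funny: a single memcpy. With the double-mapped buffer, bytes that
     run past buffer_size land in the second view of the same memory. */
  static void push_1cpy(unsigned char *buffer, size_t buffer_size,
                        size_t *head, const unsigned char *msg, size_t msg_size)
  {
      memcpy(buffer + *head, msg, msg_size);
      *head = (*head + msg_size) % buffer_size;
  }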
The tl;dr here is that the doubly memmapped buffer performs the same as the 1-memcpy implementation that doesn't do anything funny, and significantly better than all 2-memcpy implementations except `and`. Since the double-mapping trick already forces the buffer size to be a multiple of the (power-of-2) page size, you might as well just pick a power-of-2 buffer size, which means the trick is not really worthwhile and you should just use `and`. But compared to the other wrapping implementations it does still provide a tangible improvement. That may just come down to how the compiler was able to optimize it, but I don't feel like nitpicking the generated code to figure out why right now.
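In case anyone's curious what the "funny" double mapping looks like, here's a minimal Linux sketch using memfd_create; the names and (mostly omitted) error handling are mine, and `size` must be a multiple of the page size:

  #define _GNU_SOURCE
  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Map the same size-byte buffer twice, back to back, so a write that runs
     past the end lands in the second mapping and never needs to be split. */
  static uint8_t *double_map(size_t size)
  {
      int fd = memfd_create("ring", 0);
      if (fd < 0 || ftruncate(fd, size) < 0)
          return NULL;

      /* Reserve 2*size of contiguous address space... */
      uint8_t *base = mmap(NULL, 2 * size, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (base == MAP_FAILED)
          return NULL;

      /* ...then map the same file into both halves. */
      mmap(base, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
      mmap(base + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
      close(fd);
      return base;
  }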