I also like the aesthetics of OS X versions prior to 10.10. It is hard to find programs that still support older versions though. To make things easier I'm using the more modern macOS 10.14 but installed a few things to make it look more like 10.9.
macOSLucidaGrande (https://github.com/LumingYin/macOSLucidaGrande) can change the system font back to Lucida Grande like 10.9 used. One annoyance with this though is that the '*' character won't show up in password input fields.
I'm actually typing this reply on a 2012 MacBook Pro which is still working pretty well. I've also used several recent MacBook models at work, so I'm familiar with those. One thing I like better about the 2012 MacBook Pro compared to newer models is that it's easier to replace and upgrade parts. On mine I've replaced the original hard drive with an SSD, upgraded the memory, and replaced the failed battery (not unexpected for such an old laptop), and all of these were fairly simple to do.
I also do not like that Apple has removed all USB type A ports on the newer MacBooks. USB type A plugs are still very common, and I wish Apple had left one or two on the newer MacBooks in addition to the USB-C ports. Yes, you can use USB-C to USB type A adapters, but it is annoying.
I also do not like that Apple has removed the Ethernet and microphone jacks. Both are still useful to have on modern computers. I'll make an exception for removing the Ethernet jack on the MacBook Air to accommodate a thinner chassis, but I wish the MacBook Pro chassis had been kept thick enough to accommodate an Ethernet jack.
Ah, so 2012 was the year they brought retina displays to the MacBook Pro. The port reductions didn't bother me much because when I'm docked, I've got so many peripherals that I'd need the edges of the laptop to be nothing but USB-A ports to fit them all. So, all I ended up needing to do was replace my hub with one that uses USB-C and has ports for Ethernet and displays. The biggest upside of this arrangement is that I only need to plug in one cable at my desk instead of six.
> I also do not like that Apple has removed the Ethernet and microphone jacks. Both jacks are still useful to have on modern computers
Ethernet is available over those ports via fairly cheap adapters if you need it. There's so much bandwidth on a Thunderbolt port that it can handle that and a display or two at the same time.
This depends on how the organization configures things. My company used to allow TOTP, so many TOTP apps could be used instead of Microsoft Authenticator, but it disabled that a while ago. Now the only authenticator app my company allows is Microsoft Authenticator using push notifications (see https://learn.microsoft.com/en-us/entra/identity/authenticat... ). Consider yourself lucky if your employer lets you use any TOTP app you want instead of forcing you to use Microsoft Authenticator.
Those lunches could add up to something significant over time. If you're paying $10 per lunch every day for 10 years, that's $36,500, which is pretty comparable to the cost of a car.
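For what it's worth, the arithmetic behind that figure (assuming a lunch every single day):

```python
# $10 per lunch, 365 days a year, for 10 years.
lunch_cost = 10
days_per_year = 365
years = 10
total = lunch_cost * days_per_year * years
print(total)  # 36500
```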
I was reading another web page several days ago (I don't have the link, unfortunately) where another reader pointed out to the author the same type of attack mentioned in this article. To address it, the author came up with the same solution you proposed, and I believe that is sufficient for preventing the type of attack mentioned in this article. There are still other types of attacks (cold boot attacks, sniffing TPM traffic, etc...) that can be carried out, so it's still a good idea to use a PIN/password, network bound disk encryption, etc... in addition to the TPM.
I'm currently working on setting up disk encryption for a new home server, and as an additional precaution I'm also working on getting the initrd to do a few additional sanity checks prior to decrypting a LUKS partition and prior to mounting the root file system within. One check which I think will be highly effective: prior to decrypting the LUKS partition, the initrd hashes the entire LUKS header and makes sure it has the expected value before allowing the boot to continue. So far it seems to be working OK, but hashing the entire LUKS header is overkill and will require some care to keep the expected hash value updated if the LUKS header changes for some reason (like changing encryption passwords). Consequently, I can't recommend this idea for everyone.
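A minimal sketch of that pre-unlock check, assuming a Python interpreter in the initrd; DEVICE, HEADER_SIZE, and EXPECTED_DIGEST are all placeholders (16 MiB matches the default LUKS2 data offset, but verify the actual header size on your system):

```python
import hashlib

# Hypothetical initrd-side check: hash the on-disk LUKS header and refuse
# to unlock the partition if it doesn't match a known-good digest.
DEVICE = "/dev/sda2"                     # placeholder device path
HEADER_SIZE = 16 * 1024 * 1024           # assumption: header ends at the
                                         # default LUKS2 data offset
EXPECTED_DIGEST = "0" * 64               # replace with the real sha256

def header_ok(device=DEVICE, size=HEADER_SIZE, expected=EXPECTED_DIGEST):
    h = hashlib.sha256()
    with open(device, "rb") as f:
        remaining = size
        while remaining > 0:
            chunk = f.read(min(remaining, 1 << 20))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest() == expected
```

In the initrd this would run before `cryptsetup open`, halting the boot on a mismatch.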
Found the page I had mentioned earlier: https://pawitp.medium.com/the-correct-way-to-use-secure-boot... . In the comments Aleksandar mentioned the possibility of using the attack mentioned in this article and the author replied back with the same solution of verifying a secret file on the root partition.
One reason why it might be a good idea to use higher quality drives when using ZFS is because it seems like in some scenarios ZFS can result in more writes being done to the drive than when other file systems are used. This can be a problem for some QLC and TLC drives that have low endurance.
I'm in the process of setting up a server at home and was testing a few different file systems. I was doing a test where a program continuously and synchronously wrote just a single byte every second (like might happen for some programs that write logs fairly continuously). For most of my tests I was just using the default settings for each file system. With ext4 this resulted in 28 KB/s of actual writes to the drive, which seems reasonable given 4 KB blocks needing to be written, journaling, writing metadata, etc... BTRFS generated 68 KB/s of actual writes, which still isn't too bad. With ZFS, the best I could get after trying various settings for volblocksize, ashift, logbias, atime, and compression was 312 KB/s of actual writes to the drive, which I was not pleased with. At the rate ZFS was writing data, over a 10 year span that same program running continuously would result in about 100 TB of writes being done to the drive, which is about a quarter of what my SSD is rated for.
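For reference, the write pattern in that test can be reproduced with a few lines like the following (the path and duration are arbitrary; the actual device-level writes have to be observed separately, e.g. by diffing /proc/diskstats or watching `zpool iostat` while it runs):

```python
import os
import time

# Reproduce the workload described above: synchronously append one byte
# per second. Each fsync() forces the byte through to stable storage, so
# the filesystem's per-write overhead (block writes, journal/ZIL,
# metadata) shows up in the device's write counters.
def tiny_sync_writes(path, count=60, interval=1.0):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for _ in range(count):
            os.write(fd, b"x")
            os.fsync(fd)
            time.sleep(interval)
    finally:
        os.close(fd)
```

As a sanity check on the endurance math: 312 KB/s of device writes is roughly 312,000 × 86,400 × 365 × 10 ≈ 98 TB over ten years.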
One knob you could change that should radically alter that is zfs_txg_timeout which is how many seconds ZFS will accumulate writes before flushing them out to disk. The default is 5 seconds, but I usually increase mine to 20. When writing a lot of data, it'll get flushed to disk more often, so this timer is only for when you're writing small amounts of data like the test you just described.
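On Linux with OpenZFS, that knob is usually exposed as a runtime module parameter; a sketch of bumping it (the sysfs path is the common OpenZFS location, and the change needs root and does not survive a reboot; a modprobe.d option would make it persistent):

```python
# Raise zfs_txg_timeout at runtime by writing to the OpenZFS module
# parameter. The default path below is the usual Linux location.
def set_txg_timeout(seconds, path="/sys/module/zfs/parameters/zfs_txg_timeout"):
    with open(path, "w") as f:
        f.write(str(seconds))

# e.g. set_txg_timeout(20) to accumulate writes for 20 seconds per txg
```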
> like might happen for some programs that are writing logs fairly continuously
On Linux, I think journald would be aggregating your logs from multiple services so at least you wouldn't be incurring that cost on a per-program basis. On FreeBSD with syslog we're doomed to separate log files.
> over a 10 year span that same program running continuously would result in about 100 TB of writes being done to the drive which is about a quarter of what my SSD is rated for
> One knob you could change that should radically alter that is zfs_txg_timeout which is how many seconds ZFS will accumulate writes before flushing them out to disk.
I don't believe that zfs_txg_timeout setting would make much of a difference for the test I described where I was doing synchronous writes.
> On Linux, I think journald would be aggregating your logs from multiple services so at least you wouldn't be incurring that cost on a per-program basis.
The server I'm setting up will be hosting several VMs running a mix of OSes and distros and running many types of services and apps. Some of the logging could be aggregated, but there will be multiple types of I/O (various types of databases, app updates, file serving, etc...), and I wanted to get an idea of how much file system overhead there might be in a worst-case kind of scenario.
> I sure hope I've upgraded SSDs by the year 2065.
Since I'll be running a lot of stuff on the server, I'll probably have quite a bit more writing going on than the test I described so if I used ZFS I believe the SSD could reach its rated endurance in just several years.
> But presumably he's writing other files to disk too. Not just that one file.
Yes, there will be much more going on than the simple test I was doing. The server will be hosting several VMs running a mix of OSes and distros and running many types of services and apps.
Additionally, there are multiple reasons why one programming language might have nine different programs:
- The programs could have been written by someone who prefers to make small iterations and perform many submissions instead of someone who likes to make bigger changes with fewer submissions.
- The programs could be submitted by a novice who needs to make more submissions than would be required by a pro.
- The programming language may be older and have had more submissions.
- The programming language may be more actively changing and require more updates in order to fix old programs.
- The programmers may also be making more submissions to improve other characteristics like memory usage, code size, code readability, compatibility, etc... that are unrelated to performance.
In short, the benchmarks game previously stated explicitly that the program numbers were arbitrary and signified nothing, and the same reasoning applies to hanabi1224's website.
It's not a good idea to use this site (or the Computer Language Benchmarks Game https://benchmarksgame-team.pages.debian.net/benchmarksgame/ that it is partially based on) as an indicator of how many rewrites are necessary to produce good programs. The skill levels of the contributors and the size and popularity of the various programming language communities can vary a lot. The benchmark rules have changed over time, and contributors have discovered better algorithms over time, both of which cause the contributed programs to be updated as well. Older programming languages have been around longer than newer ones and will consequently have had more contributed programs. Some programming languages are under more active development and so require more revamps of existing programs. Etc...
You could certainly make an argument that an easier language allows a programmer to reach a certain skill level faster, or maybe even reach a higher peak skill level. By extension you could also argue that, by some definitions, the skill level of that language's community might be higher. However, assuming you had equally skilled Rust and Zig programmers, I think it is wrong to say that the Rust programmers would require more rewrites to match or surpass the Zig programmers.
The file and instructions in https://forums.macrumors.com/threads/mavericks-window-contro... change many of the UI elements to look like those in 10.9.
Menu Bar Tint (https://manytricks.com/menubartint/) can make the menubar look like the one in 10.9.