If you're following along and can't (or don't want to) remember the SQL syntax, you can use the examples from the post as context for LLM text-to-SQL:
Q: Which photo has the highest number of faces?
A: SELECT SourceFile
FROM photos
WHERE RegionType IS NOT ''
ORDER BY length(RegionType) DESC
LIMIT 1;
Q: ...
You can also fetch and use the table schema with `sqlite3 exif.db .schema`
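If you want an actual count rather than the string-length trick, a rough alternative is to count separators (this assumes exiftool flattened RegionType into a comma-separated list, so check that against your own data first):

    # count detected regions by counting commas (table/column names from the post;
    # the comma-separated layout is an assumption, verify with a quick SELECT first)
    sqlite3 exif.db "
      SELECT SourceFile,
             length(RegionType) - length(replace(RegionType, ',', '')) + 1 AS regions
      FROM photos
      WHERE RegionType IS NOT NULL AND RegionType <> ''
      ORDER BY regions DESC
      LIMIT 1;"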
He also has a YouTube channel. I first saw him speak at SF Nerd Nite, which is a speaker series in San Francisco. He's very entertaining and basically a self-taught botanist.
His YouTube channel is great! I love his Bay Area hikes and plant explanations.
There's a nice article about him: he's a full-time train engineer who drives freight shipments all around the West Coast. While traveling, he got curious about all the plants he would see from the train, so he started going to libraries on his breaks from work to learn about them.
I second the YouTube channel suggestion. Very entertaining and informative. Lots of trash talking and banter to keep things interesting.
The SF tours are especially great. The explanations of why things should or shouldn't be planted in certain areas really opened my eyes to how bad urban design can be.
Sidenote: I like that he tattooed measurement lines on his finger so he can use it in the pictures as a size reference.
It's been a while since I attended one. I remember that some of the most interesting talks were the ones whose descriptions I wasn't particularly interested in, mostly talks about the history of San Francisco.
For those in the Bay Area he has a pretty hilarious video of him shopping at Berkeley Bowl too, shredding junk like the homeopathic bs but praising their produce selection. Bonus video of him analyzing the sad trees in the Emeryville Target parking lot.
You may find this site interesting: https://folkrnn.org/tune/119105 The thing is, tabs and other kinds of notation only capture a small part of a composition, often just the melody. So to generate "real" polyphonic music you need other kinds of data.
Just want to send my thanks. I've been a regular user since the early TestFlight builds, and it's been great seeing the iterations and improvements, particularly, for me, the handling of scheduled-but-no-show ghost buses.
For image and layer manipulation, crane is awesome - as is the underlying go-containerregistry library.
It lets you add new layers, or edit metadata (env vars, labels, entrypoint, etc.) in existing images. You can also "flatten" an image with multiple layers into a single layer. Additionally, you can "rebase" an image (re-apply your changes onto a new/updated base image). It does all this directly against the registry, so no Docker daemon is needed (though Docker is still useful for creating the original image).
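For example, a rough sketch of the workflow (image names are placeholders, and the flag spellings are from memory, so check `crane --help` before copying):

    # add a layer from a local tarball on top of an existing image, in-registry
    crane append -b registry.example.com/app:1.0 -f extra-files.tar.gz \
      -t registry.example.com/app:1.1

    # tweak image metadata without a rebuild
    crane mutate registry.example.com/app:1.1 --env FOO=bar -t registry.example.com/app:1.2

    # squash all layers into one
    crane flatten registry.example.com/app:1.2 -t registry.example.com/app:flat

    # re-apply your layers onto an updated base image
    crane rebase registry.example.com/app:1.2 \
      --old_base registry.example.com/base:2023-01 \
      --new_base registry.example.com/base:2023-02 \
      -t registry.example.com/app:rebased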
This is a great recommendation. It's worth noting that, unlike Docker, crane is root- and daemon-less, which makes it work great in Nix (it's packaged as 'crane' in nixpkgs). This allows Nix to be used to manage dependencies both for building (e.g. Go) and for packaging and deploying (e.g. GNU tar, crane).
Is there any performance benefit to having fewer layers? My understanding is that there's no gain from merging layers, since the size of the image stays the same.
There are some useful cases — for example, if you're taking a rather bloated image as a base and trimming it down with `rm` commands, those will be saved as differential layers, which will not reduce the size of the final image in the slightest. Only merging will actually "register" these deletions.
Less about performance and more about security. Lots of amateur images use a secret file, or inadvertently store a secret in a layer, without realizing that an rm or other process in a later layer doesn't actually eliminate it. If the final step of your build squashes the filesystem flat again, you can remove a lot of potentially exposed metadata and secrets stored in intermediate layers.
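A quick way to convince yourself of that (image name is just an example): export the image and look inside the individual layer tarballs; the layer that added the secret still contains it, and a later `rm` only shows up as a whiteout entry.

    # a docker-save archive contains one tarball per layer (under blobs/ in
    # newer Docker versions, or per-layer directories in older ones)
    docker save myapp:latest -o myapp.tar
    tar -xf myapp.tar

    # the layer that COPY'd the secret still contains the full file; the layer
    # with the rm only carries a ".wh.<name>" whiteout marker for it
    tar -tf blobs/sha256/<layer-digest> | grep secret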
Eventually, once zstd is fully supported and tiny gzip compression windows are no longer a limitation, compressing one full layer will almost certainly get a better ratio than several smaller layers.
If you've got a 50-layer image, then each time you open a file I believe the kernel has to look for that file in all 50 layers before it can fail with ENOENT.
It depends on your OCI engine, but this isn't generally the case with containers. Each layer is successively "unpacked" onto a "snapshot", from which containers are created.
A container runtime could optimize for speed by unpacking all those layers one by one into a single lower directory for the container to use, but at the cost of using lots of disk space, since those layers would no longer be shared between different containers.
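With the overlayfs snapshotters, that stacking looks roughly like this (paths made up); every extra lowerdir is another directory the kernel may have to check on each lookup:

    # one lowerdir per image layer (leftmost is topmost); the container's
    # writable upperdir sits above all of them
    mount -t overlay overlay \
      -o lowerdir=/layers/3:/layers/2:/layers/1,upperdir=/ctr/upper,workdir=/ctr/work \
      /ctr/rootfs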
In practice I've found the performance savings often go the other way: for large (multi-GB) images it's faster to split them into more layers that can be downloaded in parallel from the registry. A single layer won't be downloaded in parallel, and on EC2 + ECR you won't get particularly good throughput from a single layer.
Depends. If you'd have to fetch a big layer often because of updates, that's not good. But if what changes frequently is in a smaller layer, it works out more favorably.
> Things that are far away look smaller, but things that are REALLY far away look bigger, because when their light was emitted, the universe was small and they were close to us.
Lindows was my first Linux distro c. 1999. It was my first year at college, majoring in Business Administration, living on campus with internet speeds above 56kbps for the first time. Trawling through the internet at that time, I stumbled on a (likely bootleg) Lindows installer somewhere and fell down the rabbit hole.
Looking back, I realize now that I'd been a self-denying computer geek before then, but for whatever reason Lindows, its wacky installer, dual-boot support, and fortunate hardware compatibility gave me the right nudge at the right time and sent me on a lifetime of hacking.
Almost 25 years later, most of them as a professional software engineer, I have a completely biased affection for this strange OS.
Even though it became Linspire, the spiritual successors to its lightweight, Windows-like approach were LXDE and Lubuntu.
It was Slackware that pioneered booting via the loadlin loader installed on a DOS partition, which I think Lindows/Linspire picked up. That approach was all gradually killed off by NTFS and Windows 2000.
It was a magical time for Linux, with Konqueror as a viable desktop browser and Galeon as the best browser available, until IBM started sponsoring the project to make it more like Epiphany. Eventually Konqueror's KHTML engine was forked into WebKit for Safari, proprietary Flash made browsing on Linux unnecessarily difficult (Gnash came later), and Microsoft lawsuits were customary against any friendly UX that wasn't on a Mac.
Like 20 years ago, I was obsessed with getting MythTV running with a TV tuner, but I kept running into driver issues and low-spec hardware. It was still magical to me, though.
TVtime worked faster, but I can't remember if it had LIRC support. If it did, it was easy to set up and much more lightweight, but without recording support.
Very fond of Lubuntu, which I managed to get installed on NTFS in a directory on my C: drive, allowing me to switch back and forth between it and my Windows install. I really wish that were still a thing.
Mine were DamnSmallLinux and Slax. I wasn't allowed to install Linux on my family's computer, of course, so live CDs were my go-to. Once I scrounged up enough spare parts from family friends to build a PC, I got to learn how to force Slax to "install" to a hard drive, which was quite a challenge for me, as it really didn't want to be run off non-live media, haha.
My first distro was Ubuntu 8.04 Server. Kind of funny, because I was in high school back then and decided to install Ubuntu on my machine. Coming from the Windows world, I thought it would be cool to run the server version, figuring it would be like Windows Server 2008 (with a desktop). After installing, I was shocked to find I had no desktop to work with, spent the next three days figuring out how to connect my machine to the internet, and eventually installed GNOME. Now, almost 15 years later, I still use Ubuntu as my daily driver, typing this on 22.04.
Many here share that journey. For me, it was printing out the Gentoo "hard mode" installation guide where you compiled _everything_ from scratch, including the kernel.
Didn't make it, but it taught me the terminal and some foundational OS concepts. It set me down the path of Linux (Ubuntu, a bit easier to install) and hacking.
With my limited exposure to Linux when I was a kid, I didn't understand why I needed those `configure; make; make install` commands or what they did. It wasn't until much later that I learned what compiling was, or that make is really just a tool for running arbitrary commands.
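For anyone else who never had it spelled out, the classic source-install dance was just this (package name is made up):

    tar xzf foo-1.0.tar.gz            # unpack the source tarball
    cd foo-1.0
    ./configure --prefix=/usr/local   # probe the system and generate Makefiles
    make                              # compile everything
    sudo make install                 # copy the built files into place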
Oh gods, the many, many hours I "wasted" recompiling my Gentoo installation, trying to squeeze more performance out of my gaming machine and prove my naysaying friends wrong about Linux being no good for gaming.
This was back in the VoodooFX days...
My first mentor in Linux was a sysadmin at my hometown ISP, and my initiation was compiling Gentoo from Stage 1 (the "hard mode" install) on a dual-socket 700 MHz P3 system. I've never done it again, but that foundational experience helped me immensely.
Oh shit, I remember the stages now, though not much about them. Did you really install Gentoo if you didn't compile the compiler you used to compile the rest of the system from the ground up?
I'm not sure if this is possible anymore, but for a while there was a way to copy over the compiled base system from the disk instead of compiling it all from scratch. Running your first `emerge -vaDu world` might end up recompiling everything anyway, depending on the age of the ISO, but you didn't have to do any compiling to get a base system deployed.
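Roughly, that stage3 route looked like this (names abbreviated; the Gentoo Handbook has the real steps):

    # unpack the prebuilt base system instead of bootstrapping from stage1
    tar xpf stage3-*.tar.* -C /mnt/gentoo
    chroot /mnt/gentoo /bin/bash

    emerge --sync        # fetch the package tree
    emerge -vaDu world   # update; may still recompile a lot, depending on ISO age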