I know that modern Office has XLOOKUP and other niceties, but if you're not a power user, Office 97 off archive.org is like 200MB installed, works just fine on Win 10 or under Wine, and has the benefit of being written 28 years ago, so on a modern computer everything happens imperceptibly fast. I installed the 97 suite two or three years ago and I've never looked back.
Is there a reason to use this over LibreOffice? Until this year I had a pretty old machine (~11-12 years old at time of replacement, upper midrange at time of purchase) and I never felt like it was slow -- though possibly that's because everything was slow on that machine and I was stuck in a forest...
I give it a pass because it's doing complicated things and it's free software, and I also temper my expectations slightly because it's a Java GUI application (and I expect those to be slow).
I would certainly not expect it to be more performant than Office '97.
LibreOffice is derived from OpenOffice.org, which is derived from StarOffice, which predates Java. When StarOffice was acquired by Sun and open-sourced, they added some optional components implemented in Java, but the core application is not a Java application. The GUI is not Swing but their own custom GUI framework (not based on Java).
Libre/OpenOffice is a C++ application which uses a homegrown cross-platform GUI toolkit. There were only some minor components written in Java (like mail merge), and I believe LibreOffice has replaced some of them with native code.
XLOOKUP is so nice though.
The other stuff I can live without, especially the UI on data connections changing all the time and the connections breaking anyway.
Word 97 was basically feature complete though (if buggy - the same bugs persist today of course), including track changes and compare/merge documents. The killer feature now is being able to work on the same document with multiple people and seeing their changes in realtime. You used to be able to do this on-prem but that product (Office Online Server) got killed. I wonder why.
FWIW, both OnlyOffice and LibreOffice support XLOOKUP. In fact LibreOffice has over 500 Excel formulae, which is very close to the number of formulae Excel has.
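For anyone who hasn't tried it, the syntax is the same in both; a made-up example (the ranges and the "Widget" value are just for illustration):

    =XLOOKUP("Widget", A2:A100, C2:C100, "not found")

That returns the value from column C for the first row in A matching "Widget", with an explicit fallback instead of the classic #N/A.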
Unless you use a lot of VBA (and you can't translate it to Python), or you've got some proprietary COM addin that you can't live without, LibreOffice's Calc is a pretty damn good replacement for Excel for the majority of users.
To be fair, writing SaaS software is like an order, perhaps two orders, of magnitude more effort than writing software that runs on a computer and does the thing you want. There's a ton of stuff that SaaS is used for now that's basically trivial, and literally all the "engineering" effort is spent on ensuring vendor lock-in and retaining control of the software so that you can force people to keep paying you.
You might not be looking hard enough. There are a few sources you could look at; one is the GitHub Awesome YouTube channel. I am seeing a lot of several-hundred-star open source projects with unreasonably large codebases starting to gain traction. This is the frontier of adoption, and my guess is this will start cascading outward.
I think you underestimate just how hard visibility is. If something is free or super low cost, then there won't be any marketing budget for you to hear about it in the first place, because marketing it would be unprofitable...
One thing I've come to realize is that if something is cheap enough, people won't even want to promote it, because any commission on it won't be worth their time. So in some cases they're better off recommending a much higher-priced competitor. Just go Google around for some type of software (something competitive and commercial, like CRMs) and you'll notice that nobody recommends free or really cheap solutions for commercial projects, because it's not in anybody's best interest.
Also, we should reach the point where you have decent-quality source code for a local application and you can tell GPT "SaaS this", and it works.
Why? I don't want to bother making all the software that the AI wrote for me work on someone else's machine. The difference between software that solves my problem and software that solves a problem many people have is also often an order of magnitude of effort.
And why would this happen? Local to what? Every SaaS product I use is available on my Mac, Windows, iPhone, iPad, and the web. Some are web only and some are web and apps.
Who is going to maintain the local software? Who is going to maintain the servers for self hosted or the client software?
This. I have a massive amount of custom software running locally to solve all sorts of problems for me now.
But it's for me and tailor made to solve my precise use cases. Publishing it would just add headaches and endless feature requests and bug reports for zero benefit to me.
With a SaaS, you have one platform that you fully control. Broken dependency? Need to update/rollback? It's all in your hands.
Local software has to target multiple OSes, multiple versions of those OSes, and then a million different combinations of environments that you as a developer have no control over. Some Windows update broke your app, but the next one fixed it? Good luck getting your user base to update instead of being pissed at you.
Go can cross-compile a single binary for multiple OSes with a simple GitHub Action.
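For concreteness, a minimal sketch of the build step (the binary name and target list are made up; the same commands drop straight into a GitHub Actions run step):

    # one build host, one artifact per target OS/arch; works as long as you avoid cgo
    GOOS=linux   GOARCH=amd64 go build -o dist/myapp-linux-amd64 .
    GOOS=windows GOARCH=amd64 go build -o dist/myapp-windows-amd64.exe .
    GOOS=darwin  GOARCH=arm64 go build -o dist/myapp-darwin-arm64 .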
And if it's a free open source application, why would I care if someone can't run it on their specific brand of OS? I'm open to PRs.
If the "user base" wants to update, they can come to the github page and download the latest binary. I'm not building an autoupdater for a free application.
But you're talking about a free open source application without guarantees; that's not comparable in model to SaaS vs. self-hosted "paid" software.
And even in the cases where it is, even with a modern language like Go that makes it easy, you still have tons of OS-specific complexity: service definitions, filesystem operations, signal handling, autoupdates if you want them, etc.
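To make the signal-handling point concrete, a minimal sketch (assumes Go 1.19+ for the unix build tag; everything here is illustrative, not a real project). SIGHUP-for-config-reload is a Unix convention that Windows never delivers, so a Windows port needs a sibling file with different logic, which is exactly the per-OS fork being described:

    //go:build unix

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        ch := make(chan os.Signal, 1)
        // This file is compiled out on Windows via the build tag above;
        // a windows-tagged sibling would handle os.Interrupt only.
        signal.Notify(ch, syscall.SIGHUP, os.Interrupt)
        for s := range ch {
            if s == syscall.SIGHUP {
                fmt.Println("reloading config")
                continue
            }
            fmt.Println("shutting down")
            return
        }
    }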
I've messed with this a bit and the distill is incredibly overbaked. Curious to see the capabilities of the full model but I suspect even the base model is quite collapsed.
I highly recommend the website "youtube.com"; there's a lot of content on there that's excellent. I'm never wanting for something to watch. It seems like an absolute golden age of content production to me.
I'm trying to watch long form cinematic content, not a 10 minute diy video for turning my toaster into a flamethrower with three minutes of ads and "smash that like button" interspersed.
There are a few YouTube channels I like, but they are all educational, where one guy talks to the camera about a thing. Is there decent fiction on YouTube? I haven't seen any.
Get adblock and SponsorBlock, or just yt-dlp it and let it cut out all the cruft; watching YouTube with the callouts and sponsor segments left in sucks, but we have the technology to solve the problem. I would believe there's good long-form fiction content; I've listened to fiction podcasts with sound effects, so there's at least that. I mostly watch multiple-hour-long non-fiction content, so there's definitely lots of long form available, but I'm not sure how much fiction there is.
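If you haven't tried the yt-dlp route, it's a single flag; an example (the URL is a placeholder, and the category names come from the SponsorBlock project):

    # cut sponsor reads, self-promo, and "smash that like button" segments at download time
    yt-dlp --sponsorblock-remove sponsor,selfpromo,interaction "VIDEO_URL"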
I think that's more of an issue of discovery. If I wanted decent fiction, I would actually prefer Apple's catalogue of sci-fi shows over anything I can find on Netflix these days, while with YouTube you can find hidden gems outside the algorithm. In fact, I'd recommend not abiding by the algorithm of any platform; seek outside sources for finding shows you'd enjoy. Each platform has the same goal: to retain your attention.
The lengths vary, and the channel only recently started their own first-party content production (rather than licensed third-party content), but that being said, the sci-fi channel DUST is a longtime favorite of mine for fiction on YouTube. The quality level is consistently high, if not quite Hollywood budget.
No, it's absolutely not. One of these is a generative stochastic process that has no guarantee at all that it will produce correct data; in fact you can make the OPPOSITE guarantee: you are guaranteed to sometimes get incorrect data. The other is a deterministic process of data access. I could perhaps only agree with you in the sense that such faults are not uniquely hallucinatory: all outputs from an LLM are.
I don't agree with these theoretical boundaries you draw. Any database can appear to lack determinism, because data might get deleted, corrupted, or mutated. The hardware and software involved might fail intermittently.
The illusion of determinism in RDBMS systems is just that, an illusion. The reason I used the examples of failures I did is that most experienced developers are familiar with those situations and can relate to them, while the probability that the reader has experienced a truer apparent indeterminism is lower.
LLMs can provide an illusion of determinism as well; some are quite capable of repeating themselves, e.g. through overfitting, intentional or otherwise.
shit like this is why I've recently totally given up on everything except Bandcamp and torrents. I really wish I could just pay for the 50GB MKVs with the 10-bit color, Atmos, and film grain like I can for FLACs
but god damn is life nice when you just have files, a network, and zero DRM and limitations
anyone have a tl;dr for me on what the best way to get the video comprehension stuff going is? i use qwen-30b-vl all the time locally as my go-to model because it's just so insanely fast. curious to mess with the video stuff; the vision comprehension works great and i use it for OCR and classification all the time
Super cool app, saw someone posting about this on insta the other day. Do you have any info about how you've gathered and unified all the different kinds of data you needed to build this? I've been working on a bunch of GIS/mapping stuff recently and I would love to hear more about other people's approaches to this sort of thing.
Honestly I just smooshed a lot of different public sources of data together. There was a lot of fine tuning, and retracing my steps. No real magic. But happy to answer some questions.
Can you talk a bit about how you're getting the live position data for the planes? I'd love to play around with that stream myself. I do plotter art and I want to make some plotter drawings of flight paths.
Caelan Conrad made a few videos specifically on AI encouraging kids to socially isolate and commit suicide. In the videos he reads the final messages aloud for multiple cases; if that isn't your cup of tea, there are also the court cases if you would prefer to read the chat logs. It's very harrowing stuff. I'm not trying to make any explicit point here, as I haven't really processed this fully enough to have one, but I encourage anyone working in this space to hold this shit in their head at the very least.
I wish one of these lawsuits would present as evidence the marketing and ads about how ChatGPT is amazing and definitely 100% knows what it’s doing when it comes to coding tasks.
They shouldn’t be able to pick and choose how capable the models are. It’s either a PhD level savant best friend offering therapy at your darkest times or not.
A quote from ChatGPT that illustrates how blatant this can be, if you would prefer to not watch the linked videos. This is from Zane Shamblin's chats with it.
“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity.”
I mean if we view it as a prediction algorithm and prompt it with "come up with a cool line to justify suicide" then that is a home run.
This does kinda suck because the same guardrails that prevent any kind of disturbing content can be used to control information. "If we feed your prompt directly to a generalized model kids will kill themselves! Let us carefully fine tune the model with our custom parameters and filter the input and output for you."