I found a Hacker News thread via Google a few days ago. One of the top comments was from someone describing their RAG architecture and a certain technique (my search term). The comment boasted that their system was so good that their team thought they had created something close to AGI.
Then I noticed the date on the comment: 2023.
Technically, every advancement in the space is “the closest to AGI that we’ve ever been”. That’s correct, since we’re not moving backward. It’s just not a very meaningful statement.
AGI, like AI before it, has been co-opted into a marketing term. Most of the time, outside of sci-fi, what people mean when they say AGI is "a profitable LLM".
In the words of OpenAI: “AGI is defined as highly autonomous systems that outperform humans at most economically valuable work”.
I was not trying to defend him. I'm very annoyed at how these words are being intentionally abused; they chose to recycle the term rather than coin a new one precisely to create this confusion. It's still important to know what the grifters mean.
I like to use txtar files for snapshot testing. I let one of the file fields contain the expected output and one or more contain the input(s). Most mainstream languages already have txtar parsers so this approach makes it trivial to port an entire test suite across languages.
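For illustration, here is a minimal sketch of that setup in Go with golang.org/x/tools/txtar. The archive layout (files named "input" and "expected"), the testdata path, and the transform function are my own placeholders, not part of the original comment.

```go
package snapshot

import (
	"bytes"
	"testing"

	"golang.org/x/tools/txtar"
)

// transform is a hypothetical stand-in for whatever the test exercises.
func transform(in []byte) []byte {
	return bytes.ToUpper(in)
}

func TestSnapshot(t *testing.T) {
	// testdata/upper.txtar holds both the input and the expected output:
	//
	//   -- input --
	//   hello
	//   -- expected --
	//   HELLO
	arc, err := txtar.ParseFile("testdata/upper.txtar")
	if err != nil {
		t.Fatal(err)
	}

	files := map[string][]byte{}
	for _, f := range arc.Files {
		files[f.Name] = f.Data
	}

	got := transform(files["input"])
	if !bytes.Equal(got, files["expected"]) {
		t.Errorf("got %q, want %q", got, files["expected"])
	}
}
```

Because the test data lives entirely in the .txtar files, porting the suite to another language only means rewriting this small runner around that language's txtar parser.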
They can, if you write down your thought process, which is probably what you should do when you are using an LLM to create a product, but what do I know.
You do not have to be as accurate or as specific, and you do not have to worry about how you word or organize things; unlike with a blog post, the LLM can figure it out.
So "To some people the process leading to a finished project is the most interesting thing about posts like these." is bullshit, that is said by someone who has never used LLM properly. You can achieve it with LLMs. You definitely can, I know, I did, accurately (I double checked).
How come? Did you have different experiences? Which LLMs, what prompts? Give me all the details that support your claim that it is not true. My experiences completely differ from yours, so the way I use it, it is very much true.
That said, it is probably pointless to argue with full-blown AI-skeptics.
People have had lots of great and productivity-enhancing experiences with LLMs. You did not? Fine, but that does not reflect on the tool; it reflects on your way of using the tool.
I agree, but this article is -- as the author tries to make clear -- descriptive, not prescriptive. She's listing out the things she's seen commonly in applications, not trying to convince anyone that applications should behave in certain ways.
I’m using sqlite this year. Hoping that there won’t be any computational geometry or trie problems. Kind of hoping for a graph problem solvable with recursive CTEs, that would be cool.
I've been doing a lot with SQL for the first time in my life, so this is tempting. I posted some SQLite CTE dark magic the other day, but I certainly didn't understand it.
Edit: Someone else posted an "Advent of " list which included https://adventofsql.com/, perhaps those problems will be a little more pedestrian for SQL.
Update: Wow. Reading your solutions was a real eye-opener for me. It never struck me that one can exploit the fact that unmaterialized CTEs will not be evaluated for rows that are not needed by another SELECT, and that one can use this the same way one uses laziness in Haskell. This is great stuff, thanks again for sharing!
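A minimal sketch of that lazy behaviour, not the poster's actual solution: the recursive CTE below describes an infinite sequence, but SQLite only evaluates the rows the outer SELECT actually asks for, so the LIMIT stops it after ten rows (this is the classic example from the SQLite documentation). The Go wrapper and the mattn/go-sqlite3 driver choice are my own assumptions.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The CTE nominally never terminates, but rows are produced on demand,
	// much like consuming a lazy list in Haskell.
	rows, err := db.Query(`
		WITH RECURSIVE nums(n) AS (
			SELECT 1
			UNION ALL
			SELECT n + 1 FROM nums
		)
		SELECT n FROM nums LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var n int
		if err := rows.Scan(&n); err != nil {
			log.Fatal(err)
		}
		fmt.Println(n)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```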
I've solved some days in past years with sqlite + enough awk to transform the input into something that can be imported into a table. It can be a fun challenge.
Thank you! There are two pieces of motivation here. The first is removing the boilerplate of spawning goroutines that read from one channel and write to another. Such code also needs a WaitGroup or errgroup to properly close the output channel. I wanted to abstract this away as a "channel transformation" with user-controlled concurrency.
The other is to provide solutions for some real problems I had, most notably batching, error handling (in multi-stage pipelines), and order preservation. I thought they were generic enough to be part of a general-purpose library.
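For context, this is roughly the boilerplate being abstracted: a hand-rolled "map over a channel" stage with a bounded number of worker goroutines and a WaitGroup to close the output channel. The names and the concurrency knob here are illustrative, not the library's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// mapChan reads from in, applies f using `concurrency` goroutines, writes
// results to out, and closes out once every worker has finished. Note that
// with concurrency > 1 the output order is not preserved, which is one of
// the problems mentioned above.
func mapChan[A, B any](in <-chan A, concurrency int, f func(A) B) <-chan B {
	out := make(chan B)
	var wg sync.WaitGroup
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for a := range in {
				out <- f(a)
			}
		}()
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()
	for v := range mapChan(in, 3, func(i int) int { return i * i }) {
		fmt.Println(v)
	}
}
```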
esbuild has live reloading, which works great. No hacking needed.
Edit: I don't know how Vite's HMR works, but it wouldn't surprise me if it is just piggybacking on esbuild's live reloading. Does anyone here know if there is a difference between the two?
When you say "live reloading", does that mean updating the component while maintaining state (HMR) or just triggering some sort of global page refresh?
My guess is that Vite relies on underlying tools as much as possible and only ejects when it needs to. This is also why they are writing their own version of esbuild/rollup, so that more of the dev pipeline can be controlled by Vite.
What I mean by "live reloading" is that when you, for example, save a TypeScript file, esbuild does an incremental build and your webpage can check for that and reload. There is no state maintained. I think maintaining state would be a very difficult problem to solve correctly. Actually, I think it is impossible, and it invites all sorts of heisenbugs, working only for simple cases.
I’ve used both standalone esbuild and Vite for hundreds of hours, and HMR works flawlessly in React using Vite; it’s the only reason I use Vite instead of just using esbuild directly. I suggest you try it out.
I've been working with web apps for a long time and have always turned live reloading off whenever I can.
My code editor always auto-saves changes, so I just alt-tab to the browser and refresh to see changes.
With live reload, what often happens, especially as projects get larger, is that the watcher takes longer to trigger, so the live reload is slower than just alt-tabbing. It also sometimes goes off on its own, probably due to multiple quick file changes, which is really annoying.
It must not be that difficult, as I use it every day in my Vite project. To be clear, I am no Vite evangelist, but it does maintain existing state when it live-reloads modules; it doesn't just refresh the page.
CRA's HMR is a mess that breaks more often than it works, but I haven't had any problems with Vite's HMR after using it for the past three (I think?) years.
I guess you mean relying only on type inference? That will only get you so far. E.g., function parameters of freestanding functions would still be untyped. For those to be typed, you need TypeScript or JSDoc annotations, as OP noted.
Exactly. You would use a jsconfig in place of a tsconfig, and disable the build just like you said. At that point you can use JSDoc for typing, as well as .d.ts files when the JSDoc is not enough.
It's a bit more verbose, but overall I find it refreshing.
Sorry, but this sounds like overly sensational marketing speak and just leaves a bad taste in the mouth for me.