They are definitely ahead in multimodality, and I'd argue they have been for a long time. Their image understanding was already great when their core LLM was still terrible.
The app you're looking for is Obsidian. Notion abandoned that goal years ago; Notion makes its money as a project management / team wiki tool. They don't care about personal note-taking.
"Notes and Domino is a cross-platform, distributed document-oriented NoSQL database and messaging framework and rapid application development environment that includes pre-built applications like email, calendar, etc." [0]
Lotus Notes was the original offline-first everything app, including cutting edge PKI and encryption. It worked over dial-up and needed only a handful of MBs of memory (before the Java rewrite at least). Has anything else really come close since?
I do writing with RAG, and it can work surprisingly well if the text is being generated from writing you already have. FAQs etc. can be pretty easy when your own content is the context for the AI.
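A minimal sketch of the shape of it, assuming a naive keyword-overlap retriever and a stand-in llm_complete() for whatever model API you use (both are placeholders, not any specific library):

    def retrieve(question, documents, k=3):
        # Rank your own documents by crude word overlap with the question.
        q_words = set(question.lower().split())
        return sorted(
            documents,
            key=lambda doc: len(q_words & set(doc.lower().split())),
            reverse=True,
        )[:k]

    def answer_faq(question, documents, llm_complete):
        # The top-ranked chunks of your own writing become the context.
        context = "\n---\n".join(retrieve(question, documents))
        prompt = (
            "Answer using ONLY the context below, in the author's voice.\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return llm_complete(prompt)

Real setups usually swap the word-overlap scoring for embedding similarity, but the FAQ case works because the answer is already in your own text.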
After a few rounds of AI generating content from AI-generated content, I'm sure it could eventually become slop... like model collapse, lol, idk.
I looked at Replit yesterday. It used to be such a nice tool to play with various languages, create, say, a Python file, share it with students, etc.
Now you are greeted by an "AI chat" that wants you to specify what great application you want to create. Total madness...
I find that I get the most value out of the less memorable notes in my Obsidian vault, because for the more memorable ones my brain is able to stay on top of them.
I use LLMs to extract key points, make proposals, or generate a short summary, but I personally want to be responsible for adding all of this manually; I don't want my note-taking tool doing it unsupervised in the background.
It's absolutely not; the ARC line is not a threat to nVidia in any way. It's a way to get its feet into the GPU market without the initial setup costs and research it would take to start from scratch.
They will dominate AMD on both fronts if things go smoothly for them.
People repeating articles or papers. I know myself. I know from my own experience what's good and bad about practice A or database B. I don't need to read a conclusion by some Muppet.
Chill. Interesting times. Learn stuff, like always. Iterate. Be mindful and intentional, and don't just chase mirages; be practical.
The takeaway from the paper is that you don't know yourself. It's one paper with a small sample size, but attempting to refute its conclusion by simply stating that it's false doesn't really get us anywhere.
Lol. Maybe you need more experience if you think "it's written in a paper" is proof of anything, especially over the experience of people who do actual work and have been prolific over long periods.
I fear for the variety in internet coding culture. A significant portion of code is being written by AI; it all looks the same and all has the same mediocre quality. Even the GitHub page descriptions are AI-generated, overflowing with emojis and the same sentence structures repeated.
> I fear for the variety in internet coding culture. A significant portion of code is being written by AI; it all looks the same and all has the same mediocre quality.
Who cares who actually "typed" it? Shit code will be shit code regardless of author; there is just more of it now compared to before, just like there was more 10 years ago compared to 20 years ago, as the barrier to getting started is lowered time and time again. Hopefully it'll be a net positive, just like previous times; it's never been easier to write code to solve your own specific personal problems.
Developers who have strict requirements on the code they "produce" will make the LLM fit those requirements when needed, and "sloppy" developers will continue to publish spaghetti code, regardless of LLMs' existence.
I don't get the whole "vibe-coding" thing, because clearly most of the code LLMs produce is really horrible, but good prompting, strict reviews, and not accepting bad changes just to move forward let you mold the code into something acceptable.
(I have not looked at this specific project's code, so I'm not sure this applies here; it's more of a general view, obviously.)
I think the issue is that you used to be able to tell at a glance how much effort someone put into a project (among other things), and that would give you a reasonable approximation of what to expect from the project itself.
But now the signals are much harder to read. People post polished-looking libraries and tools with boastful, convincing claims about what they can do for you, yet they didn't even bother to check whether they work at all, wasting everyone's time.
Not even "care". The result is the only thing that matters. If at first glance the source code of a project doesn't look pretty, the odds might be that the reader landed on a particularly nasty chunk, or that the problem was genuinely as complicated as the code suggests. In my view this test is better done after the overall functionality of the software has been proven. Honestly, if a library with fancy benchmarks doesn't solve the business problem, I won't care much how clean its code is or whether it's written in Rust.
How is this commentary relevant to this project in any way? Why must every post about an LLM tool be accompanied by a pile of meta-commentary about the LLM landscape? If someone posted a Ruby library, would we all start waxing philosophical about the Ruby ecosystem as a whole?
I'm not trying to attack your comment specifically, but every single post about LLMs being accompanied by deeply subjective, usually negative, meta-commentary is highly annoying.
I left a comment like that because the project itself is about AI productivity; it aims to increase it. My comment is also about AI productivity, just about my distaste for its blandness.
On the whole meta-discussion thing: I have been reading HN for at least 15 years, and posts with lots of comments are meta discussions. HN is not really a place to discuss the technical details of a project.
First of all, there are certainly many issues with abusing vibe coding in a production environment. I think the core problem is code that can't be reviewed; after all, it's ultimately people who are responsible for the code.
However, not all code requires the same quality standards (think perfectionism). The tools in this project are like blog posts written by an individual that haven’t been reviewed by others, while an ASF open-source project is more like a peer-reviewed article. I believe both types of projects are valid.
Moreover, this kind of project is like a cache. If no one else writes it, I might want to quickly vibe-code it myself. In fact, without vibe coding, I might not even do it at all due to time constraints. It's totally reasonable to treat this project as a rough draft of an idea. Why should we apply the same standards to every project?
In fact, their approach to using vibe coding in production comes with many restrictions and requirements. For example:
1. Acting as Claude's product manager (e.g., asking the right questions)
2. Using Claude to implement low-dependency leaf nodes, rather than core infrastructure systems that are widely relied upon
3. Verifiability (e.g., testing)
BTW, their argument for the necessity of vibe coding does make some sense:
As AI capabilities grow exponentially, the traditional method of reviewing code line by line won’t scale. We need to find new ways to validate and manage code safely in order to harness this exponential advantage.
It is a subtle bug, mostly concerning preservationists. Players are likely more worried about bandits shooting at them than about mountains not being reflected in the water.
I hadn't drawn a single PNG until I was 38.
Now I am about to release my second solo-developed game.
My art is nothing great, but a year ago I wouldn't even have believed I could draw a single sprite.
That's because pixel-perfect rendering doesn't handle rotations and position changes well.
It looks fine in motion, but I can improve it by using semi-transparent pixels on the edges.
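One quick way to see the effect (a sketch assuming Pillow and an RGBA "sprite.png" as a placeholder): nearest-neighbor keeps every pixel hard and the edges jagged under rotation, while bilinear resampling blends in exactly those semi-transparent edge pixels.

    from PIL import Image

    sprite = Image.open("sprite.png").convert("RGBA")

    # Hard pixels: edges alias badly at non-90-degree angles.
    jagged = sprite.rotate(30, resample=Image.NEAREST, expand=True)

    # Blended edges: semi-transparent pixels soften the staircase effect.
    smoothed = sprite.rotate(30, resample=Image.BILINEAR, expand=True)

    jagged.save("rotated_nearest.png")
    smoothed.save("rotated_bilinear.png")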
Be competent and able to prove it. Work with in-demand tools; for me that's .NET, React, Azure, SQL databases, etc. For others it may be Go, Python, Java, AWS, GCP, whatever is in demand near you. Probably not Rust, C, or C++; I'm sure there's demand for those too, but at least near me they're a lot rarer.
Some people do well working with obscure stuff like COBOL and Delphi, but I wouldn't really recommend that unless it kind of just falls into your lap somehow.
Web development is pretty big; if you can work full stack, even better. At least that's what I do, and I don't have any trouble getting jobs.
If you struggle with simple interview questions, work on fundamentals. All my technical interviews have been quite easy, yet the interviewers have been very impressed, which tells me most devs have a poor understanding of programming fundamentals. Being able to do well at interviews is not that hard, and it opens a lot of doors. Things like Advent of Code, Codewars, etc. are good practice. Maybe dust off your old DS&A book and go through it again.

A good DS&A understanding will help you in your daily work as well; it's not just about interviews. You're not supposed to memorize algorithms, you're supposed to understand them: understand what makes some algorithms faster than others, how to use different data structures to improve your algorithms, and how to judge the performance of an algorithm just by reading it (big O and such). It's extremely useful and important; I use this knowledge on a daily basis, and it helps me do well in interviews.
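To make that concrete, here's the kind of difference this reasoning buys you; a toy example, but the pattern (swapping a linear scan for a hash lookup) comes up constantly in real code:

    import time

    def common_list(a, b):
        # O(len(a) * len(b)): every "in" scans list b from the start.
        return [x for x in a if x in b]

    def common_set(a, b):
        # O(len(a) + len(b)): set membership is O(1) on average.
        b_set = set(b)
        return [x for x in a if x in b_set]

    a = list(range(20_000))
    b = list(range(10_000, 30_000))

    t0 = time.perf_counter(); common_list(a, b)
    t1 = time.perf_counter(); common_set(a, b)
    t2 = time.perf_counter()
    print(f"list scan: {t1 - t0:.3f}s, set lookup: {t2 - t1:.3f}s")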
Also, be good with databases. The database is the core of an application; it can and should do most of the heavy lifting. An API is basically just an adapter between a frontend and a db.
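A toy illustration of what I mean by letting the db do the heavy lifting (sqlite just to keep it self-contained; the table and data are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("alice", 10.0), ("alice", 15.0), ("bob", 7.5)],
    )

    # Heavy lifting in the app: every row crosses the wire, then you aggregate.
    totals = {}
    for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
        totals[customer] = totals.get(customer, 0.0) + amount

    # Heavy lifting in the db: one GROUP BY, only the result comes back.
    totals = dict(
        conn.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer")
    )
    print(totals)  # {'alice': 25.0, 'bob': 7.5}

With millions of rows the difference stops being academic, and the same goes for filtering, joining, and sorting.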
There are a few comments from the companies that hired him in the og twitter thread [0]. Sounds like he was actually really good at interviews. Kinda shows how broken the hiring system is if you can smash an interview but fail catastrophically at the job.
You think? I'm extending the term to actually getting a job in "traditional" organizations. You already have to optimize for keywords etc, don't you? It's not human interaction but a "process".
> he targeted mostly (YC) startups eager to hire (AI) engineers quickly so they can scale.
But they got an "AI" engineer, didn't they? Or could no one in management define what an "AI" engineer is?
Tbh I'd give the guy a high paying job, but in marketing.