saejox's comments

This is very impressive. Google really is ahead


They are definitely ahead in multimodality, and I'd argue they have been for a long time. Their image understanding was already great back when their core LLM was still terrible.


AWS has been the backbone of the internet. It is a single point of failure for most websites.

Other hosting services like Vercel, package managers like npm, and even the Docker registries are down because of it.


Note taking is supposed to be a map of my brain, not generated garbage from LLMs. I only want the most memorable things to be in there.


The app you're looking for is Obsidian. Notion abandoned that goal years ago; it makes its money as a project management / team wiki tool. They don't care about personal note taking.


Notion is on track to become the new Lotus Notes, just with notes instead of email memos.


"Notes and Domino is a cross-platform, distributed document-oriented NoSQL database and messaging framework and rapid application development environment that includes pre-built applications like email, calendar, etc." [0]

Lotus Notes was the original offline-first everything app, including cutting edge PKI and encryption. It worked over dial-up and needed only a handful of MBs of memory (before the Java rewrite at least). Has anything else really come close since?

[0] https://en.wikipedia.org/wiki/HCL_Notes


But why would I want my team generating/reading AI slop in our Wiki??


You don't. Notion stakeholders do.


I do writing with RAG, and it can be implemented surprisingly well if the text is generated from your own existing writing. FAQs etc. can be pretty easy when your own content is the context for the AI.
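
A minimal sketch of the idea (hypothetical Python; a real setup would use embeddings, but naive keyword retrieval shows how your own writing becomes the context):

    # Hypothetical sketch of retrieval-augmented generation over my own writing.
    # Real systems rank by embedding similarity; crude keyword overlap shows the idea.
    my_writing = [
        "Refunds are processed within 14 days of the return arriving.",
        "We ship to the EU and UK; customs fees are the buyer's responsibility.",
        "All plans can be cancelled at any time from the account page.",
    ]

    def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
        words = set(question.lower().split())
        # Rank my own paragraphs by keyword overlap with the question.
        return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

    question = "How long do refunds take?"
    context = "\n".join(retrieve(question, my_writing))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    print(prompt)  # send this prompt to whatever LLM you use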

After a few rounds of AI generating AI content from AI content, I'm sure it could eventually become slop... like model collapse, lol, idk.

"AI models collapse when trained on recursively generated data" - https://www.nature.com/articles/s41586-024-07566-y


I looked at Replit yesterday. It used to be such a nice tool for playing with various languages, being able to create a, say, Python file, share it with students, etc.

Now you are welcomed by an "AI chat" that wants you to specify what great application you want to create. Total madness...


I find that I get the most value out of the less memorable notes in my Obsidian, because for the more memorable ones my brain is able to stay on top of them.

I use LLMs to extract key points, make proposals, or generate a short summary, but I personally want to be responsible for adding all of this manually; I don't want my note-taking tool doing this unsupervised in the background.


I was waiting for a Linux client for such a long time. Or API access that doesn't require polling for changes.

But instead, more AI slop.

And that's not the worst of it. Every time a company adds AI features, I know they'll want to train on my data sooner or later.

So, hard pass on this one.

edit: seems like webhooks are here now. I'll give them a try, but knowing Notion, I expect wild limitations.
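
If they behave like typical webhooks, a minimal receiver sketch would be enough to stop polling (hypothetical Flask endpoint; the payload fields are placeholders, not Notion's actual schema):

    # Hypothetical sketch of a webhook receiver, so I can stop polling for changes.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/notion-webhook", methods=["POST"])
    def notion_webhook():
        event = request.get_json(force=True)
        # React to the pushed change event instead of polling for it.
        print("change event received:", event)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)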


I hope this isn't "shut up" money to end Arc GPU development. I have an A770, and I am very happy with it.


It's absolutely not; the Arc line is not a threat to Nvidia in any way. This is about getting its feet into the CPU market without the initial setup costs and research it would take to start from scratch.

They will be dominating AMD on both fronts now if things go smoothly for them.


> AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this?

True in the long run. It's like a car with high acceleration but a low top speed.

AI makes you start fast, but you regret it later because you don't have the top speed.


People keep repeating articles or papers. I know myself. I know from my own experience what the good and bad of practice A or database B is. I don't need to read a conclusion by some muppet.

Chill. Interesting times. Learn stuff, like always. Iterate. Be mindful and intentional, and don't just chase mirages; be practical.

The rest is fluff. You know yourself.


The takeaway from the paper is you don’t know yourself. It’s one paper, and a small sample size, but attempting to refute its conclusion by stating it’s false doesn’t really get us anywhere.


Lol. Maybe you need more experience if you think "it's written in a paper" is proof of anything, especially over the experience of people who do actual work and have been prolific over long periods.


This is shown in Figure 5 of the paper: https://arxiv.org/pdf/2507.09089


I fear for the variety in internet coding culture. A significant portion of code is being written by AI; it all looks the same, all with the same mediocre quality. Even the GitHub page descriptions are generated with AI, overflowing with emojis and the same sentence structures repeated over and over.


> I fear for the variety in internet coding culture. A significant portion of code is being written by AI; it all looks the same, all with the same mediocre quality.

Who cares who actually "typed" it? Shit code will be shit code regardless of author; there is just more of it now than before, just like there was more 10 years ago than 20 years ago, as the barrier to getting started is lowered time and time again. Hopefully it'll be a net positive, just like previous times; it's never been easier to write code to solve your own specific personal problems.

Developers who have strict requirements on the code they "produce" will make the LLM fit those requirements when needed, and "sloppy" developers will continue to publish spaghetti code regardless of LLMs' existence.

I don't get the whole "vibe-coding" thing, because clearly most of the code LLMs produce is really horrible, but good prompting, strict reviews, and not accepting bad changes just to move forward let you mold the code into something acceptable.

(I have not looked at this specific project's code, so I'm not sure this applies to it; this is more of a general view, obviously.)


I think the issue is that you used to be able to tell at a glance how much effort someone put into a project (among other things), and that would give you a reasonable approximation of what to expect from the project itself.

But now the signals are much harder to read. People post polished-looking libraries and tools with boastful, convincing claims about what they can do for you, yet they didn't even bother to check whether they work at all, wasting everyone's time.

It's a weird new world.


I think you should never do that. Who cares how much effort went into something? It's the result that matters.


Maybe "care" would have been more appropriate than "effort".


Not even "care". The result is the only thing that matters. If at a first glance the source code of a project doesnt look pretty or whatnot odds might be that the reader landed on particularly nasty chunk or that it was indeed as complicated as the code itself. In my view this test is better done after the overall functionality of the software has been proven. Honestly if libraries with fancy benchmarks don't solve the business problem, I won't care much how clean their code is or if its written in Rust.


It's fake polish, but at the end of the day you always have to check what's in the codebase if you're going to vendor it in or adopt it as a dependency, imho.

If anything, the smelly READMEs that all look like Claude wrote them are a good tell to check deeper.


How is this commentary relevant to this project in any way? Why must every post about an LLM tool be accompanied by a pile of meta commentary about the LLM landscape? If someone posted a Ruby library, would we all start waxing philosophical about the Ruby ecosystem as a whole? I'm not trying to attack your comment specifically, but every single post about LLMs being accompanied by deeply subjective, usually negative, meta commentary is highly annoying.


I left a comment like that because the project itself is about AI productivity; the project aims to increase it. My comment is also about AI productivity, and about my distaste for its blandness.

On the whole meta-discussion thing: I have been reading HN for at least 15 years, and posts with lots of comments are meta discussions. HN is not really a place to discuss the technical details of a project.


I'm tired of <10 day old accounts being created on this site to pile on any dissenter to LLMs.


First of all, there are certainly many issues with abusing vibe coding in a production environment. I think the core problem is that the code can't be reviewed. After all, it's ultimately people who are responsible for the code.

However, not all code requires the same quality standards (think perfectionism). The tools in this project are like blog posts written by an individual that haven’t been reviewed by others, while an ASF open-source project is more like a peer-reviewed article. I believe both types of projects are valid.

Moreover, this kind of project is like a cache. If no one else writes it, I might want to quickly vibe-code it myself. In fact, without vibe coding, I might not even do it at all due to time constraints. It's totally reasonable to treat this project as a rough draft of an idea. Why should we apply the same standards to every project?


Anthropic talked about vibe coding in production: https://www.youtube.com/watch?v=fHWFF_pnqDk

In fact, their approach to using vibe coding in production comes with many restrictions and requirements. For example:

1. Acting as Claude's product manager (e.g., asking the right questions)

2. Using Claude to implement low-dependency leaf nodes, rather than core infrastructure systems that are widely relied upon

3. Verifiability (e.g., testing)

BTW, their argument for the necessity of vibe coding does make some sense:

As AI capabilities grow exponentially, the traditional method of reviewing code line by line won’t scale. We need to find new ways to validate and manage code safely in order to harness this exponential advantage.


I also agree with this.

While the AI has less context, you have more context when using the limited chat window. You know what you need from the AI.


It is a subtle bug, mostly concerning preservationists. Players are likely more worried about bandits shooting at them than about mountains not being reflected in the water.


> bandits shooting at them

through walls and from kilometers away, since this is Far Cry 1 on modern systems :P


I hadn't drawn a single PNG until I was 38. Now I am about to release my second solo-developed game. My art is nothing great, but a year ago I wouldn't have believed I could draw a single sprite.

https://www.instagram.com/arcadenest_games/

I find my age to be a great period to learn new skills. It is never too late.


Not bad! Nit: I think your moon is a bit too rough on the edges. Did you try a smoother edge?


That's because pixel-perfect rendering doesn't handle rotations and position changes well. It looks fine in motion, but I can improve it by using semi-transparent pixels on the edges.


I can't even find one job. What's his secret?


Be competent and able to prove it. Work with in-demand tools; for me that's .NET, React, Azure, SQL databases, etc. For others it may be Go, Python, Java, AWS, GCP, whatever is in demand near you. Probably not Rust, C, or C++; I'm sure there's demand for those too, but at least near me they're a lot rarer.

Some people do well working with obscure stuff like COBOL and Delphi, but I wouldn't really recommend that unless it kind of just falls in your lap somehow.

Web development is pretty big, if you can work full stack even better. At least that's what I do, and I don't have any trouble getting jobs.

If you struggle with simple interview questions, work on fundamentals. All my technical interviews have been quite easy, but the interviewers have been very impressed; this tells me most devs have a poor understanding of programming fundamentals. Being able to do well at interviews is not that hard, and it opens a lot of doors. Things like Advent of Code, Codewars, etc. are good practice. Maybe dust off your old DS&A book and go through it again.

A good DS&A understanding will help you in your daily work as well; it's not just about interviews. You're not supposed to memorize algorithms, you're supposed to understand them: what makes some algorithms faster than others, how to use different data structures to improve your algorithms, and how to judge the performance of an algorithm just by reading it (big O and such). It's extremely useful and important; I use this knowledge on a daily basis, and it helps me do well in interviews.
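
To illustrate the kind of reasoning I mean (a hypothetical Python sketch, not tied to any particular interview question): the same membership test is O(n) on a list but O(1) on average on a set, and that difference dominates at scale.

    # Hypothetical sketch: why data structure choice matters for performance.
    import time

    items = list(range(1_000_000))
    as_set = set(items)
    target = 999_999

    t0 = time.perf_counter()
    _ = target in items    # list: O(n), scans nearly the whole list
    t1 = time.perf_counter()
    _ = target in as_set   # set: O(1) average, a single hash lookup
    t2 = time.perf_counter()

    print(f"list lookup: {t1 - t0:.6f}s, set lookup: {t2 - t1:.6f}s")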

Also, be good with databases. The database is the core of an application; it can and should do most of the heavy lifting. An API is basically just an adapter between a frontend and a database.
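
A toy sketch of what I mean by heavy lifting (hypothetical Python with sqlite3; the table and data are made up): aggregate in the database instead of pulling all rows into application code.

    # Hypothetical sketch: let the database do the heavy lifting.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("alice", 10.0), ("bob", 25.0), ("alice", 5.0)],
    )

    # Instead of SELECT * and summing in Python, aggregate in SQL:
    for customer, total in conn.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer"
    ):
        print(customer, total)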


He perfected the hiring game; he probably automated fake activity on his GitHub and lied on his resume, among other things: https://leaderbiography.com/soham-parekh/


There are a few comments from the companies that hired him in the original Twitter thread [0]. It sounds like he was actually really good at interviews. Kinda shows how broken the hiring system is if you can smash an interview but fail catastrophically at the job.

[0] https://x.com/Suhail/status/1940287384131969067


He was good at the office politics kabuki. Wore the right masks and all.


GP is asking how he is able to land multiple jobs in the first place when they can't even land one.

Office politics comes after you land a job, so it doesn't explain why he was so successful at getting multiple offers.

I've seen claims on Twitter that he used multiple tactics:

1. Good ol' cold emails;

2. Using a recruiter for warm intros;

3. Applying like everyone else, but with a resume that is full of fabrications.

A common thread among many of his victim companies: he targeted mostly (YC) startups eager to hire (AI) engineers quickly so they could scale.


> Office politics comes after you land a job

You think? I'm extending the term to actually getting a job in "traditional" organizations. You already have to optimize for keywords etc, don't you? It's not human interaction but a "process".

> he targeted mostly (YC) startups eager to hire (AI) engineers quickly so they can scale.

But they got an "AI" engineer, didn't they? Or could no one in management define what an "AI" engineer is?

Tbh I'd give the guy a high-paying job, but in marketing.

