Hacker News | 0xCE0's comments

Print to PDF + throw it into /TOREAD, which you will never open, but at least the content is there. Maybe add some relevant keywords to the filename (keep the original name too), so you can quickly grep/find what you need.
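The keyword-in-filename trick can be sketched as a tiny script (the folder name and keyword scheme here are assumptions for illustration; adapt to your own layout):

```python
from pathlib import Path

def find_toread(keywords, folder="~/TOREAD"):
    """Return PDFs whose filename contains every keyword (case-insensitive)."""
    base = Path(folder).expanduser()
    wanted = [k.lower() for k in keywords]
    # Only the filename is searched, which is why adding keywords when saving pays off.
    return sorted(p for p in base.glob("*.pdf")
                  if all(k in p.name.lower() for k in wanted))
```

For example, saving a paper as "btree concurrency - original-title.pdf" lets find_toread(["btree"]) surface it years later.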

The point is to go beyond the LLM training set. When the knowledge of LLMs, books, expert conversations etc. ends and cannot answer your questions, you begin to feel where that boundary lies. From that line on, you are alone, and you can invent/discover something novel. Nothing is promised though, and it is the hardest thing to do, but at least the struggle gives a feeling of purpose.

I really wouldn't want any vibe-coded COBOL in my bank db/app logic...


vibecoding != AI.

For example: I'm a senior dev, I use AI extensively but I fully understand and vet every single line of code I push. No exceptions. Not even in tests.


Whilst I agree with your point, I think what sometimes gets lost in these conversations is that reviewing code thoroughly is harder than writing code.

Personally, and I’m not trying to speak for everyone here, I found it took me just as long to review AI output as it would have taken to write that code myself.

There have been some exceptions to that rule. But those exceptions have generally been in domains I’m unfamiliar with. So we are back to trusting AI as a research assistant, if not a “vibe coding” assistant.


The worst is reviewing the code and realizing it stinks and should be done another way

So you re-roll the slot machine and pay the reviewing cost twice

I don't think AI's biggest strength is in writing code


> as long to review AI output as it would have taken to write that code myself

That is often the case.

What immensely helps though is that AI gets me past writer's block. Then I have to rewrite all the slop, but hey, it's a rewrite, and it's much easier to get into the zone and streamline the work that way. Sometimes I produce more code per day rewriting AI slop than writing it from scratch myself.


I think the point is in a banking context, every line of code gets reviewed thoroughly anyway.


Would you consider Knight Capital Group[1] a banking context?

[1]: https://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stoc...


I’d expect every line of code to get reviewed in any organisation.

The difference with AI is that the “prompt engineer” reviews the output, and then the code gets peer reviewed as usual by someone else too.


You'd be surprised...


Unfortunately, the people who are "pro-AI" are so often pro-AI because it lets them skip the understanding part with less scrutiny


The good news here is that their code is of such poor quality that it doesn't properly work anyway.

I recently tried to blindly create a small .dylib consolidation tool in JS using Claude Code, Opus 4.5 and AskUserTool to create a detailed spec. My god, how awful and broken the code was. Unusable. But it faked working just well enough to pass for someone who's got no clue.


> The good news here is that their code is of such a poor quality it doesn't properly work anyway.

This is just wishful thinking. In reality it works just well enough to be dangerous. Just look at the latest RCE in OpenCode. The AI it was vibe-coded with allowed any website with origin * to execute code, and the Prompt Engineer™ didn't understand the implications.


> it works just well enough to be dangerous

Excellent. I for one fully welcome Prompt Engineers™ into the world of software development.


I assume you don't understand some of the words in the rest of my comment. Or you're a nihilist and enjoy watching everything burn to the ground.

It's all fun and games until actual lives are at stake.


I'm watching voters around the world electing charismatic leaders and then cheering the consequences.

Thus companies electing to replace software developers with AI slop come as no surprise to me.

It doesn't matter whether people will die because of AI slop. What matters is keeping Microsoft shareholders happy and they are only happy when there is a growing demand for slop.


> Not even in tests.

This should be "especially in tests". It's more important that they work than that the actual code does, because their purpose is to catch when the rest of the code breaks.
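A toy illustration of why unvetted tests are risky (the function and test names below are made up for the example): a generated test can pass without pinning down any behavior at all.

```python
def add(a, b):
    return a + b

# Vacuous: re-derives the expectation from the code under test,
# so it passes no matter what add() does.
def test_add_vacuous():
    assert add(2, 3) == add(2, 3)

# Meaningful: pins the result to an independently known value,
# so it fails the moment add() breaks.
def test_add():
    assert add(2, 3) == 5
```

The vacuous variant would still pass if add() regressed to subtraction, which is exactly the failure mode careless review lets through.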


That is my preferred way to use it also, though I see many folks seemingly pushing for pure vibe coding, apparently striving for maximum throughput as a high-priority goal. Which goal would be hindered by careful review of the output.

It's unclear to me why most software projects would need to grow by tens (or hundreds) of thousands of lines of code each day, but I guess that's a thing?


And I do a lot of top level design when I use it. AIs are terrible at abstraction and functional decomposition.


Aye. AI is also great for learning specifics of poorly documented APIs, e.g. COM-based brainrot from Microsoft.


Hey now, that COM based rot paid for my house and kid’s college expenses.


Not anymore. AI actually does this part much better.


Does the use of AI always imply slop and vibe coding? I'm really not sure


No, it doesn't. For example, you could use an AI agent just to aid you in code search and understanding or for filling out well specified functions which you then do QA on.


To do quality QA/code review, one of course needs to understand the design decisions/motivations/intentions (why those exact lines of code were added, and why they are correct). That means it is the same job as if one had originally written those lines, building the understanding==quality along the way.

For the terminology, I consider "vibe-coding" to be Claude etc. coding agents that sculpt entire blocks of code based on prompts. My tactic for LLM/AI coding is to just get the signature/example of some function that I need (because the documentation usually sucks), and then code it myself. That way the control/understanding is more (and very egoistically) in my hands/head than in the LLM's. I don't know what kind of projects you do, but many times the magic of LLMs ends, and the discussion just starts going in the same incorrect circle when reflected against reality. At that point I need to return to classic human intelligence.

And for COBOL + AI: in my experience, mentioning "COBOL" means there is usually a DB + UI/APP/API/BATCHJOB for interacting with it. And the DB schema + semantics are probably the most critical to understand here, because they totally define the operations/bizlogic/interpretations for it. So any "AI" would also need to understand your DB (semantically) fully to not make any mistakes.

But in any case, someone needs to be responsible for the committed code, because only personified human blame and guilt can eventually avert/minimize sloppiness.


You 100% can use it this way. But it takes a lot of discipline to keep the slop out of the code base. The same way it took discipline to keep human slop out.

There has always been a class of devs who throw things at the wall and see what sticks. They copy paste from other parts of the application, or from Stack Overflow. They write half-assed tests or no tests at all, and they try their best to push it through the review process with pleas about how urgent it is (there are developers on the opposite end of this spectrum who are also bad).

The new problem is that this class of developer is the exact kind of developer who AI speeds up the most, and they are the most experienced at getting shit code through review.


> But it takes a lot of discipline to keep the slop out of the code base.

It is largely a question of work ethic, rather than a matter of discipline per se.


Because the question almost always comes with an undertone of “Can this replace me?”. If it’s just code search, debugging, the answer’s no because a non-developer won’t have the skills or experience to put it all together.


That undertone is overt in the statements of CEOs and managers who salivate at “reducing headcount.”

The people who should fear AI the most right now are the offshore shops. They’re the most replaceable because the only reason they exist is the desire to carve off low skill work and do it cheaply.

But all of this overblown anyway because I don’t see appetite for new software getting satiated anytime soon, even if we made everyone 2x productive.


How many banks really use COBOL? Here in central Europe it seems to be Java, Java, Java for the most part. And has been for many years, actually.


In the US, there are several thousands of banks and credit unions, and the smaller ones use a patchwork of different vendor software. They likely don't have to write COBOL directly, but some of those components are still running it.

From the vendor's perspective, it doesn't make sense to do a complete rewrite and risk creating hairy financial issues for potentially hundreds of clients.


As others have said, US banks seem to run a lot of it, as in they have millions of lines of code of it.

This is not saying that banks don't also have a metric shitload of Java, they do. I think most people would be surprised how much code your average large bank manages.


I'm in Australia and a friend of a friend had a COBOL job working at a mid-sized bank (the COBOL had lots of Java on top). Australia's big banks are older than this bank so if they're not using COBOL at the bottom layer, they'll be using something similarly old for sure.


ECB is mostly COBOL and Fortran. The interfaces are Java, but not the backend.


Management loves trying to save money; a bunch of grads with AI have definitely had a project to try to write COBOL!


I feel this is a major turning point for how entities can/will behave from now on towards Trump's wants/decisions. Now it is publicly proven that you cannot trust deals made with Trump, because they can be invalidated at a moment's notice. Only bad deals are on offer, so why would any reasoned entity agree to them? The world will not take the threats seriously any more, and will defend itself.

Maybe the missions in Venezuela, Iran etc. were accomplished so easily that it blurred the judgment of what could be done. But those countries are different from a conglomerate of 450 M people / 27 countries. And now military and economic thinking/domains/threats have also been mixed together. "Weak EU leaders" can be, and now are, forced to unite as one strong resistance.


Yeah, I see it as a kind of "jumping the shark" moment. There is simply no way to compromise a neighbor's territorial sovereignty to avoid tariffs. It is not going to happen. We may as well regard the US as a black hole that we no longer trade with.


Great ideas are cheap to copy but not cheap to generate (time-/skill-/money-wise). Execution always has some cost, and the trend of that cost is towards 0 (but not 0, because everything costs eventually and someone has to pay the amortized cost).

I'd say what matters again is authenticity, i.e. intentful design and an intentful business==product. Businesses==products that are created to respect the user and make their life (private-/business-wise) less sad. Giving the prompt "make me a unicorn" isn't authentic/intentful business/product design. Real businesses==products have to prove their reason for existence (and keep doing it ad infinitum), so customers can trust them and keep them alive with cash flow.

If there is a bad business/product on the market, in the long run it is the buyers who are to blame, because they are the ones supporting its existence with cash flow. VC/loan cash can only buy a company time==money for a couple of years, because eventually someone has to pay the cost.

And I'd say the most "quality/intentful" products are not the ones that make the most money on the market. One has to choose whether to design "The Witness" xor "Candy Crush Saga".


Yes, execution is cheap, ideas matter again


https://futuretextpublishing.com/ --> books vol 1-5

And as for the original article, there is no "text [system]" (or there is, in the same way there are "number [systems]": just made up). "Text", like this very thing you are reading, is a 2D drawing. There are no character glyphs of any kind (Latin, logograms etc.) defined by the universe*; they are human-invented and stored/interpreted at the human collective level. Computers don't know anything about text, only "numbers" of some bit width, and with those numbers a system must be created that can map some number representation to some drawing by some method (e.g. with a bitmap). Also there is a lot of difference between formal/executable languages and natural human languages. Anyway, it's not about some text format/encoding; it's the human/computer-defined/interpreted non-linguistic meaning behind it (Wittgenstein).

* DNA/RNA can be one such "universal character glyph/string", as the "textual" information is physically constructed and interpreted.


The founders' exit from the company was definitely timely, if one assumes that SO can't be "relevant" anymore in the age of LLMs. Of course there is always value in human-to-human Q&A that goes beyond the LLM training set, but that might now happen only in cutting-edge private environments/communities.


Slowly, but hurry, is my take.

Quality vs quantity of course depends on the nature of the work. If you are an employee and all the working infrastructure is ready there to be used, you can "just" focus on doing something, whatever it is. If you are an employer, you can't even "just" go to work, because you have to spend an unpredictable amount of time figuring out what you even need to do or have, and why.

Whether you are an employee or an employer, make sure you feel the practical progress. That is, e.g. once a week you can have a status session where you show that you now have something you didn't have at the last session, and that it is an important step towards the end goal.


Try to dig into what a thing actually is, not what people say it is. Write down your current understanding with a date, so you can see years later how wrong or right you were. True learning is an ugly route. Refine your own definition/understanding to be real-world bullet-proof. You need to be less wrong over time. Use your bullet-proof learnings to build something, and don't let all the faux redundant new ideas or manipulative generated comments destroy it.

Try to explode different things into their parts, so you can see the clear boundaries of each separate thing and minimize redundancy.

Try to map the dependency graph of a thing. Every higher-level thing is a makefile / spreadsheet-cell DAG.
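That makefile/spreadsheet analogy can be sketched concretely: cells declare their dependencies, and evaluation is just a topological walk of the DAG. The cell names and formulas below are invented for illustration:

```python
from graphlib import TopologicalSorter

# Each "cell" lists the cells it depends on, plus a formula over their
# values -- the same shape as targets and recipes in a makefile.
cells = {
    "a":      ([], lambda: 2),
    "b":      ([], lambda: 3),
    "sum":    (["a", "b"], lambda a, b: a + b),
    "double": (["sum"], lambda s: 2 * s),
}

def evaluate(cells):
    deps = {name: spec[0] for name, spec in cells.items()}
    values = {}
    # static_order() yields each cell only after all its dependencies;
    # graphlib raises CycleError if the graph is not actually a DAG.
    for name in TopologicalSorter(deps).static_order():
        inputs, formula = cells[name]
        values[name] = formula(*(values[d] for d in inputs))
    return values

print(evaluate(cells)["double"])  # prints 10
```

Mapping a system this way makes the "what must exist before what" structure explicit, which is the point of the exercise.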


I love Fossil, I love SQLite, and I also like Althttpd.

https://sqlite.org/althttpd/doc/trunk/althttpd.md

Just like Fossil vs Git, or SQLite vs $SomeRealSQLServer, I wish that someday Althttpd would become a no-bullshit self-contained replacement for Nginx/Apache/whatever bloated HTTP servers. It has already proved it works by serving Fossil/SQLite, but the configuration/features for serving an actual web site are not yet "real production quality", at least that is how I feel.

Overall, what an amazing legacy this set of software has been to the world.

