Hacker News | rstuart4133's comments

> If courts wanted to act, they needed to act years ago.

My own view is that copyright law is a mess. When technology changes, what happens is that all the interested parties (read: people wanting to make the public pay for their copyrighted material - the people paying the money don't get a seat at this table) get together in a room and hammer out a compromise. The compromise is always a whole pile of band-aids stuck onto the old version, which was of course itself mostly a whole pile of band-aids stuck on the version before it.

It's always been that way. When the printing press was first used to make serious money, Queen Elizabeth offered to pass laws regulating its use, but was told her help wasn't required. I suspect the thinking was that her idea of "help" meant censorship. So the first version of copyright was "no thank you". But then the publishers discovered they were terrible at selecting books that would sell, and so they published a lot of lemons. The occasional success had to pay for all the bad ones. But without copyright, other publishers could just cherry-pick the successful ones without any of the expensive investment in the bad ones, which in the end meant no one made any money. So they begged for the very first band-aid - a new copyright law - and got copyright and censorship. It's band-aids all the way down.

This has happened over and over again - radio, TV, cassettes, CDs, movie theatres, all caused huge disruption, much hand wringing, lots of pontificating about how existing law should be applied to the newcomers, which just like now the newcomers mostly ignored.

If you look at copyright law, with its provisions like 70 years after the author's death and the Disney extension, it should be regarded as a standing joke at this point. The biggest part of the joke is the justification handed out to the people who pay for all these copyrighted works: it's all for our benefit. It's there to ensure the publishers supply us with a large variety of works to enjoy. That has a grain of truth to it. Back when copyright was 14 years, it was a pretty big grain. Now it's so small, it's a joke.

I have no sympathy for any of them.


> If you look at copyright law, with its provisions like 70 years after the author's death, the Disney extension, ...

Sure, but once again we have to conclude that "justice", according to the outcomes US (and European) courts produce, means you and I get charged $30,000 per copyright violation they catch us committing. Yet it is apparently also entirely just, according to judges, that OpenAI, Anthropic, Google, Alibaba, Meta, ... don't get charged anything for violating copyright on a scale so large it's difficult to even imagine.

So why would anyone follow the law or the opinions of courts, Congress, ... unless physically forced, as opposed to finding any creative way out of it? Obviously the outcome the courts explicitly chose follows neither my version of justice nor the US courts' own. They are just a way to guarantee big-company and state profits using violence, and literally nothing more.


> I honestly still don't see the point of compaction.

Currently my mental model is that every token Claude generates gets added to the context window. When it fills up, there is no way forward. If you are going to get a meaningful amount of work done before the next compaction, it has to delete most of the tokens in the context window. I agree that after compaction it's like dealing with something that's developed a bad case of dementia, but once you've run out, what is the alternative?
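That mental model can be sketched in a few lines of Python. To be clear, this is a toy illustration, not Claude's actual compaction algorithm: the 4-characters-per-token heuristic and the keep-first/keep-last policy are my own assumptions.

```python
# Toy sketch of context-window compaction: once the conversation exceeds
# the token budget, keep the first message (the "system prompt") and the
# most recent turns, and collapse everything in between into one summary.

def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def compact(messages: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """Drop the middle of the conversation once it no longer fits."""
    if sum(rough_tokens(m) for m in messages) <= budget:
        return messages
    head, tail = messages[:1], messages[-keep_recent:]
    dropped = len(messages) - len(head) - len(tail)
    summary = f"[summary of {dropped} earlier messages]"
    return head + [summary] + tail

history = ["system prompt"] + [f"turn {i}: " + "x" * 400 for i in range(50)]
compacted = compact(history, budget=2000)
print(len(history), "->", len(compacted))   # 51 -> 6
```

The "dementia" is visible right in the sketch: 46 turns of detail are replaced by a single summary line, and anything not captured in that summary is gone for good.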

> why would you even want compaction and not just start a blank sessions by reading that md?

If you look at "how to use Claude" instructions (even those from Anthropic), that's pretty much what they do. Subagents, for example, are Claude instances that start with a fresh set of instructions and a clean context window to play with. The "art of using Claude" seems to be the art of dividing a project into tasks so that every task gets done without overflowing the context window.

This gives me an almost overwhelming sense of déjà vu. I've spent my entire life writing my code with some restriction in mind - registers, RAM, lines of code in a function, the size of PRs, functions in an API. Now the restriction is the size of the bloody context window.

> I'm working on something which tries to achieve lossless compaction but that is incredibly expensive and the process needs around 5 to 10 times as many tokens to compact as the conversation it is compacting.

I took a slightly different approach. I wanted a feel for what the limit was.

I was using Claude to do a clean room implementation of existing code. This entails asking Claude to read an existing code base and produce a detailed specification of all of its externally observable behaviours. Then, using that specification only (i.e. without reference to the existing program, a global CLAUDE.md, or any other prompts), it had to reliably produce a working version of the original in another language. Thus the specification had to include all the steps needed to do that - unit tests, integration tests, coding standards, instructions on running the compiler, and so on - that might normally come from elsewhere.

Before proceeding, I wanted to ensure Claude could actually do the task without overflowing its context window - so I asked Claude for some conservative limits. The answer was: a 10,000 word specification that generated 10,000 lines of code would be a comfortable fit. My task happened to fit, but it's tiny really.

When working with even a moderate code base - where you have a CLAUDE.md, a global CLAUDE.md for coding standards and whatnot, and are using multiple modules in that code base so it has to read many lines of code - you run into that limit of 10,000 words of prompt and 10,000 lines of code very quickly: within a couple of hours for me. And then the battle starts to split up the tasks, create sub-agents, yada yada. In the end, they are all hacks for working around the limited size of the context window - because, as you say, compaction is about as successful at managing the context window as the OOM killer is at managing RAM.
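A back-of-envelope version of that budget check can be sketched as follows. The conversion ratios (tokens per word, tokens per line of code) and the 200k-token window are rough assumptions of mine, not measured values, so treat the output as an order-of-magnitude guide only.

```python
# Rough check of whether a spec-plus-code task fits one context window.
# All three constants are assumed, not measured.
TOKENS_PER_WORD = 1.3      # typical for English prose
TOKENS_PER_LOC = 10        # typical for a line of source code
WINDOW = 200_000           # assumed context window size

def fits(spec_words: int, lines_of_code: int, overhead: int = 20_000) -> bool:
    """overhead covers system prompts, tool output, and conversation chatter."""
    used = spec_words * TOKENS_PER_WORD + lines_of_code * TOKENS_PER_LOC + overhead
    return used <= WINDOW

print(fits(10_000, 10_000))   # the "comfortable" task above: True
print(fits(10_000, 40_000))   # a moderately larger code base: False
```

On these assumptions the 10,000-word / 10,000-line task uses about 133k of 200k tokens, which matches the "comfortable fit" answer, while a 40,000-line code base blows the budget several times over.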


There are Non-Resident Importers (NRIs): foreign companies that import goods into the USA but do not have a presence in the United States. About 15% of USA imports come through NRIs.

For them this reversal sets up a true irony. Trump effectively forced US citizens to pay more for imported goods. He thought that money would go to the US Treasury. Now the US Treasury has to pay it back, so it is a free gift to the exporting countries. Like China.

Truly delicious.


> It was literally in a blink of an eye.!!

It's not even close. It takes the eye 100 ms to 400 ms to blink. This thing takes under 30 ms to process a small query - somewhere between 3 and 13 times faster than a blink.


> But related to this article, is China winning in terms of accumulating talent?

You can ask Google for metrics:

- China produces about 1.3 to 1.6 million new engineering graduates per year.

- The USA produces about 130,000-200,000, roughly 1/10 of China's figure, but has about 1/4 of its population.

- Europe is hard to measure, but USA plus Europe combined is almost certainly less than China by a significant margin.
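Turning those figures into a per-capita rate makes the gap concrete. The graduate counts come from the bullets above; the population figures (1.4 billion and 340 million) are my own approximations, so the result is only a rough comparison.

```python
# Rough per-capita comparison using the (approximate) figures above.
china_grads = (1.3e6 + 1.6e6) / 2   # midpoint of new engineering grads/year
us_grads    = (130e3 + 200e3) / 2
china_pop   = 1.4e9                 # assumed population figures
us_pop      = 0.34e9

china_rate = china_grads / china_pop * 100_000   # graduates per 100k people
us_rate    = us_grads / us_pop * 100_000
print(round(china_rate), round(us_rate))   # China roughly 2x the US per capita
```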


For the LLM explainer, did you point Claude at this one? https://explainextended.com/2023/12/31/happy-new-year-15/ This Claude-assisted page rhymes with that one. Sorta.

If you liked that explanation of Fourier transforms, you'll probably like this one: https://www.jezzamon.com/fourier/index.html

> That is impressive enough for now, I think.

There are a lot of embedded SQL libraries out there. I'm not particularly enamoured with some of the design choices SQLite made - for example, the "flexible" approach it takes to column types - so that isn't why I use it.
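That "flexible" approach can be seen directly with Python's built-in sqlite3 module: SQLite treats declared column types as affinities rather than constraints, so a value that can't be coerced to the declared type is stored as-is anyway.

```python
import sqlite3

# SQLite's "flexible typing": column types are affinities, not constraints.
# An INTEGER column happily stores text it cannot coerce to a number.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (42), ('42'), ('hello')")
for value, stored_type in con.execute("SELECT x, typeof(x) FROM t"):
    print(value, stored_type)
# 42 integer   ('42' is coerced to an integer; 'hello' stays text)
# 42 integer
# hello text
```

Recent SQLite versions (3.37+) offer STRICT tables as an opt-out from this behaviour, for what it's worth.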

I use it for one reason: it is the most reliable SQL implementation I know of. I can safely assume that if there is file corruption, or an invariant I tried to keep isn't there, SQLite isn't the culprit. By completely eliminating one branch of the failure tree, it saves me time.

That one reason is the one thing this implementation lacks - while keeping what I consider SQLite's warts.


> Lets all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is.

There is a definition of AGI that the AI companies are using to justify their valuations. It's not what most people would call AGI, but it does that job well enough, and you will care when it arrives.

They define it as an AI that can develop other AIs faster than the best team of human engineers. Once they build one of those in house, they outpace the competition and become the winner that takes all. Personally I think it's more likely they will all achieve it at about the same time. That would mean the race continues, accelerating as fast as they can build data centres and power plants to feed them.

It will impact everyone, because the already dizzying pace of the current advances will accelerate. I don't know about you, but I'm having trouble figuring out what my job will be next year as it is.

An AI that just develops other AIs could hardly be called "general" in my book, but my opinion doesn't count for much.


May I ask, what experiences are you personally having with LLMs right now that are leading you to the conclusion that they will become "intelligent" enough to identify, organise, and build advancing improvements to themselves, without any human interaction, in the near future (1-2 years, let's say)?

> May I ask, what experiences are you personally having with LLMs right now that are leading you to the conclusion that they will become "intelligent" enough to identify, organise, and build advancing improvements to themselves, without any human interaction, in the near future (1-2 years, let's say)?

None, as I don't develop LLMs.

I wasn't saying I think they will succeed, but I think it is worth noting their AGI ambitions are not as grand as the term implies. Nonetheless, if they achieve them, the world will change.


I mis-read. Thanks for clarifying :-)

Re-reading, it's entirely my fault. I should have said:

> and you will care if/when it arrives.


> Just because you were late to the party doesn't mean all of us were.

It wasn't a party I liked back in 2023. I'm just repeating the same stuff I see said over and over again here, but there has been a step change with Opus 4.5.

You can still see it in action now, because the other models are still where Opus was a while ago. I recently needed to make a small change to a script I was using. It is a tiny (50 line) script written with the help of AIs ages ago, but subtly wrong in so many ways. It's now become clear that neither the AIs (I used several and cross-checked) nor I had a clue about what we were dealing with. The current "seems to work" version was created after much blood was spilt over misunderstandings, exposing bugs that had to be fixed.

I asked Claude 4.6 to fix yet another misunderstanding, and the result was a patch changing the minimum number of lines to get the job done. Just reviewing such a surgical modification was far easier than doing it myself.

I gave exactly the same prompt to Gemini. The result was a wholesale rearrangement of the code. Maybe it was good, but the effort to verify that was far larger than just doing it myself. It was a very 2023 experience.

The usual 2023 experience for me was asking an AI to write some greenfield code, and getting a result that looked like someone had changed the variable names in something they found on the web after a brief search for code that looked like it might do a similar job. If you got lucky, it might have found something that was indeed very similar, but in my case that was rare. Asking it to modify code unlike anything it had seen before was like asking someone to poke your eyes with a stick.

As I said, some of the organisers of this style of party seem to have gotten their act together, so now it is well worth joining their parties. But this is a newish development.

