Hacker News | peter_retief's comments

I ask AI to generate diagrams in LaTeX; it works well for me.
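For example, a minimal sketch of the kind of TikZ diagram an LLM can produce (the node labels here are made up for illustration):

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{arrows.meta, positioning}
\begin{document}
\begin{tikzpicture}[node distance=1.5cm,
                    every node/.style={draw, rounded corners}]
  % a simple left-to-right flow diagram
  \node (in)               {Input};
  \node (proc) [right=of in]   {Process};
  \node (out)  [right=of proc] {Output};
  \draw[-Stealth] (in)   -- (proc);
  \draw[-Stealth] (proc) -- (out);
\end{tikzpicture}
\end{document}
```

Compiling with pdflatex produces a standalone PDF of the diagram, which is easy to iterate on by asking the model for adjustments.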


Wow that brings back memories!


I always thought the sentence was too extreme; he broke some laws and should do some time, not life without parole.


How does one actually sign up?


From https://www.archives.gov/citizen-archivist/register-and-get-...:

> Citizen Archivists must register for a free user account in order to contribute to the National Archives Catalog. Begin the registration process by clicking on the Log in / Sign Up button found in the upper right hand corner of the Catalog.

Catalog: https://catalog.archives.gov/


"Print a text file to STDOUT using ffmpeg" ffmpeg -v quiet -f data -i input.txt -map 0:0 -c text -f data - I tried this in a directory with input.txt with some random text Nothing.

So I changed the verbosity to trace:

  ffmpeg -v trace -f data -i input.txt -map 0:0 -c text -f data -

  ---snip---
  [dost#0:0 @ 0x625775f0ba80] Encoder 'text' specified, but only '-codec copy' supported for data streams
  [dost#0:0 @ 0x625775f0ba80] Error selecting an encoder
  Error opening output file -.
  Error opening output files: Function not implemented
  [AVIOContext @ 0x625775f09cc0] Statistics: 10 bytes read, 0 seeks

I was expecting the text to be written to stdout. What did I miss?


It's not working for me either, on FFmpeg 7.0.2. I suspect something has changed in FFmpeg since that command was shared in the Reddit post mentioned on the website, which was a few years ago.

However, from the same Reddit thread, this works:

  ffmpeg -v quiet -f data -i input.txt -map 0 -f data pipe:1

EDIT: just verified the `-c text` approach works on FFmpeg major versions 4 and 5. From FFmpeg 6 onwards, it's broken. The `pipe:1` method works from FFmpeg 5 onwards, so the site should probably be updated to use that instead (also, FFmpeg 5.1 is an LTS release).


Thanks, yes they should update the site.



When I read that, it very much resembles the format of responses from copilot.microsoft.com.

Especially point 4, which is the final giveaway!


Ha, a very literal answer. Why not use cat to print the file? Is that a bit of sarcasm creeping into the LLMs?


In fact, the way LLMs are typically steered away from sarcasm or irony (I guess via system prompts stressing formality) makes it easier to identify their output. The output is so formal, taking the question very seriously even though it is obviously just an exercise, that it ends up sounding ironic.


In retrospect it seems obvious now.


For some. For others with fewer stars in their eyes, it was obvious from the beginning.


The launch of ChatGPT came with an amount of hype that was downright confusing for someone who had previously downloaded and fine-tuned GPT-2. Everyone who hadn't used a language model said it was revolutionary, but it was obviously evolutionary.

And I'm not sure the progress is linear; it might be logarithmic.

GenAI in its current state has some uses, but I fear that mostly ChatGPT is hallucinating false information of all kinds into the minds of uninformed people who think GPT is actually intelligent.


Everyone who actually works on this stuff, and didn't have ulterior motives in hyping it up to (over)sell it, has been identifying themselves as such and providing context for the hype since the beginning.

The furthest they got before the hype machine took over was introducing the term "stochastic parrot" to popular discourse.


AI is useful as a tool but it is far from trustworthy.

I just used Grok to write some cron scripts for me, and it gave me perfectly good results. If you know exactly what you want, it is great.

It is not the end of software programmers, though, and it is very dangerous to give it too much leeway, because you will almost certainly end up with problems.

I agree with the conclusion that a hybrid model is possible.


> if you know exactly what you want, it is great.

Kinda kills the utility if you need to know exactly what you want out of it, tho...


It speeds up code writing; it's not useless. The best use case for me is helping me understand libraries that are sparsely documented (e.g. the .NET Roslyn API).

edit: spelling


If I can get 100 lines generated instantly while explaining what I want in 25, scan the answer just to validate it, and then add another 50 lines because I forgot something before, all within minutes, then I'm happy.

Plus I can detach the "tell the AI" part from the actual running of the code. That's pretty powerful to me.

For instance, I could be on the train thinking of something, chat it over with an LLM, get it where I want and then pause before actually copying it into the project.


Could this work with roulette, betting on color? It seems like you could spend a lot of time neither winning nor losing.


Roulette results are uncorrelated and you have the exact same chance of winning each time, so the Kelly criterion isn’t applicable. Betting on a color has a negative edge and you don’t have the option of taking the house’s side, so it just tells you the obvious thing that you should bet zero.
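That "bet zero" conclusion falls straight out of the standard Kelly formula. A minimal sketch (the `kelly_fraction` helper is my own naming; the 18/38 figure is for American double-zero roulette):

```python
def kelly_fraction(p, b):
    """Kelly fraction f* = p - (1 - p)/b for a bet paying b-to-1
    with win probability p. A negative result means: don't bet."""
    return p - (1 - p) / b

# Betting on a color in American roulette: 18 winning pockets
# out of 38, even-money payout (b = 1).
f = kelly_fraction(18 / 38, 1)
print(f)  # about -0.053: negative edge, so the optimal stake is zero
```

Any game where the computed fraction comes out negative has the same answer: the Kelly-optimal bet is nothing at all.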


> exact same chance of winning each time, so the Kelly criterion isn’t applicable.

Actually, the main assumption that leads to the Kelly criterion is that you will have future opportunities to bet with the same edge, not constrained by the amount.

For example, if you knew this was your last profitable betting opportunity, to maximise your expected value you should bet your entire stake.

I'm slightly surprised it leads to such a nice result for this game. I don't see a claim that this is the optimal strategy for maximizing EV; zero variance is great, but having more money is also great.

Of course you are right about roulette, and if you are playing standard casino roulette against the house, the optimal strategy is not to play. But that's not because the bets are uncorrelated; it's because they all have negative expected value.


> Actually, the main assumption that leads to the Kelly criterion is that you will have future opportunities to bet with the same edge, not constrained by the amount.

Not the same edge -- any edge! And this condition of new betting opportunities arriving every now and then is a fairly accurate description of life, even if you walk out of the casino.
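The repeated-opportunities point can be illustrated with a toy simulation. This is a hypothetical even-money game with a 60% win probability, where the Kelly fraction is p - q = 0.2 (the function name and seed are arbitrary choices of mine):

```python
import random

def simulate(fraction, p=0.6, rounds=200, seed=42):
    """Repeatedly bet `fraction` of the current bankroll on an
    even-money wager with win probability p; return the final bankroll."""
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * fraction
        if rng.random() < p:
            bankroll += stake
        else:
            bankroll -= stake
    return bankroll

print(simulate(0.2))  # Kelly fraction: bankroll stays positive and compounds
print(simulate(1.0))  # entire stake every round: the first loss wipes you out
```

Betting the full stake maximizes the EV of a single round, but across repeated rounds it almost surely ends at zero, which is exactly the gap between "maximize EV once" and the Kelly criterion's long-run growth objective.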


What makes 0 better than the other numbers?


Can't bet negative in that kind of game. If a game is expected to lose you money, don't play.


$0, not 0 on the wheel.


Why not make the whole of OF AI?


Is this what is used to find the direction of sound, or where a sound is coming from, like gunshots?

