
I had chronic sleep paralysis for many years. One thing I eventually figured out is that I still had control over my breathing, and I could hyperventilate myself awake. Sleep paralysis then stopped, either because I acquired that skill or because I stopped eating gluten.


Huh, I actually don't recall ever having control of my breathing. Being able to wiggle a few fingers/toes, do a muted scream, and some eye control is all I've got, I think.


'Dynamic Programming' has to be the worst name for a concept in all of computer science. I can never remember what it refers to. Every now and then I come across something interesting that makes it seem like a big deal, look it up, roll my eyes, and immediately forget it again.


It was only named 'dynamic programming' because its inventor thought it was a more marketable name:

"I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities”.

From: Richard Bellman, Eye of the Hurricane: An Autobiography (1984, page 159)


I like the name Algorithmic Induction, or Inductive Programming.

Just like how in a (strongly) inductive proof you use facts Fact(0)..Fact(n) to prove Fact(n+1), with inductive programming you're using results Result(0)..Result(n) to compute Result(n+1).
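
A minimal sketch of that framing in Python, with Fibonacci as an illustrative example of my own choosing: each Result(i) is computed from the table of results already filled in.

    # Bottom-up "inductive" style: Result(i) is built from the
    # already-stored Result(0)..Result(i-1).
    def fib(n: int) -> int:
        results = [0, 1]  # Result(0) and Result(1), the base cases
        for i in range(2, n + 1):
            results.append(results[i - 1] + results[i - 2])
        return results[n]

    print(fib(10))  # 55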


facilitate: 1. To make easy or easier. From French faciliter, from Latin facilis ("easy").


What you're describing is a dopamine rush. I get the same thing when I watch videos of games I was addicted to once. It's not the device itself that causes it.

As a counter-experience: I'm over thirty and only got a modern smartphone a few years ago. The thing is awful. The touch interface is finicky, even after long use. Holding the thing for an extended amount of time is uncomfortable, even after long use. UI design seems universally terrible. All the apps I tried were disorganized and didn't quite serve the purpose I had expected, but they did always give me way too many unlabeled buttons and random pop-ups to tap, and asked for my money/attention every few minutes. The Android OS is obscurantist and loves to give you completely meaningless things to tap: "Tap here to optimize your device", like fucking what? I downloaded a game that appealed to me with loot boxes and sexy characters, but after an initial "wow, phone hardware has come a long way," the novelty wore off very quickly. Frankly, I now loathe the thing. The amount of frustration I've had with it, and the idea that it will just become ever more mandatory as every asshole company and government office loves the idea of shoving you through UI calibrated exactly for their purposes, UI you can't control or argue with -- it outweighs any positive experience I've had with it by a large factor. I don't feel any magic feel-good screen powers, only irritation.


Well, yeah, but how is it not the device itself that causes it? I'm addicted to the dopamine rush from interacting with the device. The cause is both the addiction in my brain and the addictive properties of the device that gave rise to it in the first place.

I used a flip phone for a year after having had a smartphone for a decade and I felt similarly to you when I first went back to my smartphone. It was overstimulating and I hated it. I have always used an iPhone, though, and I find that they're a significantly better user experience than Android so I didn't notice the UX problems you described. More like the entire thing was just TOO MUCH COMING AT ME ALL THE TIME AHHHHHH! After a few months I was back to being addicted and it all felt normal again. I miss my flip phone a lot but life without a smartphone was just too annoying.

Basically, I despise my smartphone, but I'm still addicted to it. I use it even when I don't want to and when it prevents me from doing other things I'd rather be doing. I try to stop and eventually I always give in. It's a legitimate behavioral addiction and I believe the problem is both the design of specific apps and properties inherent to the device itself.


> Well, yeah, but how is it not the device itself that causes it? I'm addicted to the dopamine rush from interacting with the device.

You wouldn't blame trees or paper if you were addicted to some books. And I bet there would also be a set of books that you just couldn't stomach reading, due to how uninteresting they were to you (but were liked by others).

Are the trees and paper the problem? Is it the shape/format of a typical book? Or is it just certain kinds of books that you get addicted to? What about people who also like the type of books that you are addicted to, but aren't addicted to them like you are? That sounds like a book type+personal problem to me, not a trees and paper problem.


I think that certain aspects of the design of some electronic devices, particularly phones and tablets, are a huge part of the reason why they're so addictive. I'm specifically thinking about the screen resolution, brightly colored displays, and interactivity. How many people have you ever met who feel they have an addictive relationship to their kindle? I think if phones were grayscale e-ink displays with limited interactivity and no ability to play videos they would be much less addictive. When I turn my phone to grayscale it's immediately much less compelling.

If someone developed a new form of paper and suddenly millions of people were hooked on books published on that particular paper — wildly different books and only books printed on that kind of paper — it would make sense to conclude that there might be something about the new form of paper that's contributing to the problem. I'm sure I'm not the only one who found social media less compelling when I could only access it on a desktop computer. In my opinion the design of smartphones and tablets is not something we can just ignore when thinking about how and why they might be addictive. The devices themselves are brain candy just like the apps that run on them.


An average programmer's main job is to track down and fix bugs that shouldn't exist inside software that shouldn't exist built on frameworks that shouldn't exist for companies that shouldn't exist solving problems that shouldn't exist in industry niches that shouldn't exist. I'm 100% convinced that, if someone comes along and creates something that actually obsoletes 95% of programming jobs, everyone would very quickly come to the conclusion that they don't need it and it doesn't work anyway.


I actually find it amusing that managers will generate a 100k-line project with AI and then start figuring out that it doesn't work the way they want. Then they'll figure out that actual developers are needed to fix it, either by telling the AI very strictly what should happen (i.e. higher-level programming) or by directly fixing the code the AI generated.


I know a small financial agency in the 00's that laid off their one-person IT department because they thought the computers would run themselves. It's honestly great that they're overselling AI, lots of messes to clean up.

edit: Ultimately there are going to be iterative pipelines with traditional programmers in the loop rearranging things and reprompting. Math skills are going to be de-emphasized a bit and domain skills valued a bit more. Also, I think there's going to be a rise in static analysis along with the new safe languages, giving us more tools to safely evaluate and clean up output.


Ah, the old:

"Everything's broken, why am I paying you?"

"Everything works, why am I paying you?"


You're assuming that the AI is even generating anything that will make sense to a human. It seems inevitable we'll reach the point that for SaaS the AI will do everything directly, based on some internal model it has of what it believes the requirements are (e.g. it will be capable of acting just like a live web server), whereas for desktop and mobile apps, while that paradigm remains relevant, it will generate the compiled package for distribution. And I imagine it would be unrealistic to attempt reverse engineering it. Fixing bugs will be done by telling the AI to refine its model.


> It seems inevitable we'll reach the point that...

It's inevitable that we'll reach AGI. It's inevitable that humans will go extinct.

Everything you described is not how today's AI works. It's not even a stretch, it's just pure sci-fi.


I don’t think it is inevitable we’ll reach AGI. I think that question is very much up in the air at the moment.


My point is that "It's inevitable that {scifi_scenario}" always sounds kinda plausible but doesn't necessarily mean anything.


I'll be genuinely surprised if we don't have tools with that sort of capability within 10 years, quite possibly much sooner.


Are you arguing that LLMs already provide the technology to do this or are you arguing that it "seems inevitable" to you in the sense that somebody might think it "seems inevitable" that humans will some day travel to the stars, despite doing so requiring technological capabilities significantly beyond what we have yet developed?


It doesn't strike me as being much of a leap from what we have already, certainly not compared with traveling to the stars.


> You're assuming that the AI is even generating anything that will make sense to a human.

Why wouldn't it? It's trained on code generated by humans and already generates code that is more readable than the output of many humans, me included.


But why would anyone bother with using AI to generate human readable code if the AI can generate the final desired behavior directly, either on-the-fly or as executable machine code?


Because the AIs, at least right now, can't generate/change code so that it correctly does what's expected with the level of confidence we expect. I've tried to get it to happen, and it just doesn't. As long as that's true, we'll need to somehow get the correctness to where it needs to be, and that's going to require a person.


A lot of people have already figured out some tricks for improving code generation.

You can fairly easily update the “next token” choice with a syntax check filter. LLMs like ChatGPT provide a selection of “likely” options, not a single perfect choice. Simply filter the top-n recommendations mechanically for validity. This will improve output a lot.

Similarly, backtracking can be used to fix larger semantic errors.

Last but not least, any scenario where a test case is available can be utilised to automatically iterate the LLM over the same problem until it gets it right. For example, feed it compiler error messages until it fixes the remaining errors.

This will guarantee output that compiles, but it may still be the wrong solution.

As the LLMs get smarter they will do better. Also, they can be fine-tuned for specific problems automatically because the labels are available! We can easily determine if a piece of code compiles, or if it makes a unit test pass.
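
A minimal sketch of that compile-and-retry loop in Python, assuming a hypothetical generate() wrapper around whatever LLM API you're using (the function name and prompt format are made up for illustration):

    import os
    import subprocess
    import tempfile

    def generate(prompt: str) -> str:
        """Hypothetical call into your LLM of choice."""
        raise NotImplementedError

    def generate_until_it_compiles(task: str, max_attempts: int = 5) -> str:
        prompt = task
        for _ in range(max_attempts):
            code = generate(prompt)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            # Cheap validity check; a unit-test run could be slotted in the same way.
            result = subprocess.run(["python", "-m", "py_compile", path],
                                    capture_output=True, text=True)
            os.unlink(path)
            if result.returncode == 0:
                return code  # compiles, but may still be the wrong solution
            # Feed the error messages back and ask for a fix.
            prompt = (task + "\n\nYour previous attempt failed to compile:\n"
                      + result.stderr + "\nPlease fix it.")
        raise RuntimeError("no compiling solution within the attempt budget")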


Currently ChatGPT isn't, at least via public access, hooked up to a compiler or interpreter it can feed its generated code into to determine whether it executes as expected. That wouldn't even seem particularly difficult to do, and once it is, ChatGPT would literally be able to train itself on how to get the desired result.


Precisely. I think people should consider the "v4" in "ChatGPT 4" as more like "0.4 alpha".

We're very much in the "early days" of experimenting with how LLMs can be effectively used. The API restrictions enforced by OpenAI are preventing entire categories of use-cases from being tested.

Expect to see fine-tuned versions of LLaMA run circles around ChatGPT once people start hooking it up like this.


> what it believes the requirements are

It will have to describe these requirements in a way that a human can understand, and verify. The language will have to be unambiguous and structured. A human will need to be able to read that language, build up a mental model, and understand it is correct, or know the way to make corrections. Who do you think that person will be? Hint: it will be a specialist that knows how to think in a structured, logical way.


Sure, I agree with that. But it will be very different to how programming is done today, and I'd suggest there'll be a lower bar to becoming capable of formulating such requirements and ensuring the software works as expected than there is now.


At least permabans are going to be more fun


It's pretty true. Someone on here today wrote "teach it to understand swagger", and I actually laughed. I've used swagger, and it often turns into a Frankenstein, sometimes for good reason. I completely understand the sentiment, and I like swagger.

I believe the world is wiggly, not geometrically perfect. Intellectuals struggle with that, because square problems are easier to solve. Ideal scenarios are predictable, and that's what we like to think about.

Have you ever had to use a sleep() intentionally just to get something shipped? That's a wiggle.

We're going to try to square out the world so we can use ChatGPT to solve wiggly problems. It's going to be interesting.

Yesterday I tried to use a SaaS product, and due to some obscurity my account has issues and the API wouldn't work. They have a well-specified API, but it still didn't work out. I've been working with the support team to resolve it. This is what I call a wiggle; they seem to exist everywhere.

Ask a construction worker about them.


> Ask a construction worker about them.

Hah. So true. The more I work on renovating parts of my house, the more I see where a worker's experience kicked in to finagle something. Very analogous to programming. All the parts that fit together perfectly are already easy today. It's those bits that aren't square but still need to fit where the 'art' comes in.

Can AI also do that part? IDK, currently I believe it will simply help us do the art part much like the computer in Star Trek.


I'm positive about it. There is a lot of repetition in coding, and because of it it's rare that we get to spend time on the good bits.

If we need a semi-intelligent system to help us with the copy pasta, so be it.


Actually, ChatGPT is quite good at understanding some kinds of wiggliness. I built a RESTful API and documented it in a readme.md file in the wiggliest of ways. I then asked ChatGPT to turn the readme into a swagger spec, and then to give me a page that read the spec and produced a nice doc page with an API exercise tool. It performed both tasks really well and saved me a whole bunch of time.


Yeah, but now ask it to write a program that uses this API, and then let it debug problems which arise from the swagger spec (or the backend) having bugs. I don't think LLMs have any way of recognizing and dealing with bad input data. That is, I don't think they can recognize the situation where something that is supposed to work in a particular way doesn't, fixing it is completely out of your reach, but you still need to get things working (by introducing workarounds).


Have you tried it? If you copy the errors back into the chat I could imagine it working quite well. Certainly you can give it contradictory instructions and it makes a decent effort at following them.


Yes, I'm subscribed to poe.com and am playing with all public models. They all suck at debugging issues with no known answers (I'm talking about typical problems every software developer, DevOps or infosec person solves every day).

You need a real ability to reason and to preserve context beyond the inherent context window somehow (we humans do it by keeping notes, writing emails, and filing JIRA tickets). So while this doesn't require full AGI, and some form of AI might be able to do it this century, it won't be LLMs.


If you think that the average public LLM is equivalent to ChatGPT or GPT-4 then you are completely mistaken. By a factor of say 500-10000%.


poe.com is a web interface (by Quora) to multiple LLMs. Right now it's ChatGPT, GPT-4, Claude, Claude+ as well as Sage and Dragonfly.


I have some meticulous API docs I've written, which I tried to get ChatGPT to convert into swagger

It failed spectacularly

I wonder if it's because the API is quite large, and I had to paste in ~10 messages worth of API docs before I was finished.

It kept repeating segments of the same routes/paths and wasn't able to provide anything cohesive or useful to me.

Was your API pretty small? Or were your docs pretty concise?


ChatGPT has a token limit. If you exceeded it, then it would have no way of delivering a good result, because it would simply have dropped what you said at first. My API was not huge, about 8 endpoints.
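
For what it's worth, you can check how close your docs get to the limit before pasting them in. Here's a sketch using the tiktoken tokenizer package, assuming it's installed; the file name is just a placeholder:

    import tiktoken

    # cl100k_base is the encoding used by the ChatGPT-era OpenAI models.
    enc = tiktoken.get_encoding("cl100k_base")

    with open("api-docs.md") as f:
        docs = f.read()

    # Compare this count against the model's context window.
    print(len(enc.encode(docs)), "tokens")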


It can accept about 4k tokens, maybe 3000 words or 3500.

GPT-4 can now accept 8k or 32k. The 32k version is 8 times larger than the one you tried.

And these advances have come in a matter of a few months.

Over the next several years we should expect at least one, quite easily two or more orders of magnitude improvements.

I don't believe that this stuff can necessarily get a million times smarter. But 10 times? 100? In a few months the memory increased by a factor of 8.

Pretty quickly we are going to get to the point where we have to question the wisdom of every advanced primate having a platoon of supergeniuses at their disposal.

Probably as soon as the hardware scales out, or we get large scale memristor systems or whatever the next thing is which will be 1000 times more performant and efficient. Without exaggeration. Within about 10 years.


So people want to build a nuclear reactor on the moon. I think these things should probably live on the moon, or better yet Mars.

That should be the place for experiments like this.

Lower-latency links back to Earth, and first see how it goes.

Also, you don't think there will be resource constraints at some stage? It's funny: we yelled at people for Bitcoin, but when it's ChatGPT, it's fine to run probably tens of thousands of GPUs? In the middle of a climate crisis? Not good.


Personally I don't think AI tools' energy usage is comparable to BTC's yet.

Also, with BTC it's literally burning energy in an unproductive way for "improved security". It's like lighting a forest on fire to keep warm.

All the AI tools combined, last I heard, aren't consuming 0.5% of the world's energy usage. And even if they were, it would be absolutely bonkers to argue we should keep doing that when there were alternatives that accomplished similar goals without the energy usage (proof of stake)


So, there is money to be made from LLMs; there will be advertising injected into the models' responses, etc.

It's really the early days, but there's no way energy consumption won't grow exponentially now that there is potential for earning money.


> Have you ever had to use a sleep() intentionally just to get something shipped?

no, I'm not that deep in hell


I highly disagree. That might (might!) be true of some segments of the tech industry, like SV-based startups, creating products no one wants.

But it's definitely not true of the average piece of software. So much of the world around us runs on software and hardware that somebody had to build. From your computer itself, to most software that people use on a day-to-day basis to do their jobs, to the cars we drive, to the control software on the elevators we ride, software is everywhere.

There is a lot of waste in software, to be sure, but I really don't think the average SE works for a company that shouldn't exist.


I’m leaning in this direction too. I saw someone on Twitter phrase it quite well: “You can believe that most jobs are bullshit [jobs]. And you can believe that GPT-4 will completely disrupt the job market. But you can’t believe both.”


Bullshit jobs exist because upper management can't know exactly what everybody in the company is doing, which leaves opportunities for middle management to advance their own interests at the expense of the company as a whole. Upper management might suspect jobs are bullshit, but it's risky for them to fire people because the job might actually be important.

But upper management can know exactly what LLMs are capable of, because they are products with fixed capabilities. ChatGPT is the same ChatGPT for everybody. This makes firing obsolete workers much safer.


Hate to break it to you but upper management is usually the main driver of bullshit jobs. They know what’s going on


Won’t it find traction in bullshit jobs pretty easily?


It's rather that the jobs (not the workers) are replaced in the way saddle makers were replaced by mechanics.


Exactly this.

Everyone thinks only in terms of people's current needs and state of affairs when analyzing a future technology. No one thinks about the insatiable human desire for more, and the higher expectations of the new normal that always rise to meet whatever increased productivity becomes available. Anything that automatically solves much of our current wants is doomed to end up seeming static and limited.


Existential crisis averted by another existential crisis... :D


Lol, zerohedge called it. Shameless.


YC sounds a lot like ZH these days


It's the vision of the net from the time of the transition to Web 2.0: the web as a giant database of the world's knowledge, now curated not by experts on their own little websites but in a distributed way, through the collective wisdom of humanity. Think Wikipedia, IMDb, TV Tropes. SO's gamification is primarily geared towards cleanup, not participation. However, databasing questions never really made sense. It works remarkably well, but the subject is just too open-ended. What we really needed was a collective effort to produce great, searchable, navigable documentation. Instead, we now have a collection of hyper-specific, often outdated snippets that do not educate, while the effortfully produced introductions, overviews, and explanations you actually want to read are dying, disorganized, somewhere on assorted WordPress blogs.


> What we really needed was a collective effort to produce great, searchable, navigable documentation

They tried. It didn’t work - https://meta.stackoverflow.com/questions/354217/sunsetting-d... - the site is still there at https://stackoverflow.com/documentation/


Compliance is the constitutive purpose of surveillance.


An earlier version of Wiktionary had that definition. Apparently it was deleted for lack of citations other than Tolkien, where it might be an in-universe term.

> 1. A person in their twenties, between 20 and 29 years old. Possibly including 30-32.

> 1954 "The Fellowship of the Ring", J.R.R. Tolkien

> At that time Frodo was still in his tweens, as the hobbits called the irresponsible twenties between childhood and coming of age at thirty-three.


That's amazing, thank you. I just hit my wife over the head with it, and she's not amused. But I feel vindicated that Tolkien, 41*, and I all are ahead of everyone else ;)


You're making the assumption that GPT doesn't automatically build a phonological theory of the English language from rhymes and lyrics and linguistic texts that are fed into it.


We know experimentally that it doesn't... It can reproduce certain rhymes when it's generating a song, but the original GPT-3 was terrible at this, so I think they've fine tuned the model specifically for lyrics or poetry.

I just asked ChatGPT: Do "not" and "knot" sound similar?

> "Not" and "knot" do not sound similar. They are two distinct words that are pronounced differently and have different meanings. "Not" is a negation word that is pronounced with a short, sharp "n" sound followed by a long "o" sound and a "t" sound. "Knot" is a noun that refers to a tangled mass of something, such as rope or thread, and is pronounced with a short "k" sound followed by a long "o" sound and a "t" sound. Despite their similar spellings, "not" and "knot" are not homophones, which are words that are pronounced the same but have different meanings.


I just asked it and it said

> Yes, the words "not" and "knot" do sound similar when spoken aloud. They are pronounced the same way, with a short "o" sound and a long "t" sound. However, they have different meanings and spellings. "Not" is a negation or denial, while "knot" is a fastening made by looping a piece of cord, rope, or other material around itself or through itself and then pulling tight.

I tried a few times and it gave me other answers, including something like your answer and another claiming they were similar but not quite the same.


It's interesting, because even when it "knows" that two words are homophones, it doesn't want to believe they sound the same. I just asked it "Are 'knot' and 'not' homophones?" and it replied:

> Yes, "knot" and "not" are homophones. Homophones are words that are pronounced the same but have different meanings and spellings.

My immediate follow-up: "Do 'knot' and 'not' sound exactly the same?"

> No, "knot" and "not" do not sound exactly the same, although they are pronounced similarly. [...] "Knot" is pronounced with a short, sharp "k" sound at the beginning


I'd love to hear the opinion of someone who has really good knowledge and experience of how byte-pair encoding works in models like these. I think I agree with you that in theory it should be able to build a phonology from the amount of explicitly rhyming material in its training corpus, but for whatever reason it doesn't do this or at least doesn't do it consistently.

I've spent a long time testing this in ChatGPT, and no matter what I do it still gives results like this (paraphrasing here because it's down right now):

What words rhyme with coffee?

> doff, happy, toffee, snuff, duff

Does "snuff" rhyme with "coffee"?

> Yes, because they both share the 'o' vowel sound.
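
On the byte-pair encoding question: you can inspect how the tokenizer splits these words, and nothing in the token IDs themselves encodes pronunciation. A sketch using the tiktoken package (the exact IDs you get back depend on the encoding, so I'm not asserting specific values):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    # Homophones and rhymes map to unrelated integer IDs; the encoding
    # carries no phonetic information the model could read off directly.
    for word in ["not", "knot", "coffee", "toffee", "snuff"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        print(word, ids, pieces)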


But how? Many poems don't rhyme, but there's no outward way to tell. And to parse linguistic texts it would need to know the phonetic alphabet, which I assume it doesn't.

