Hacker News

The core idea of what I am working on is to build a solution that can generate programs capable of converting arbitrary input to arbitrary output (bytes to bytes) based on a reasonable quantity of training data.

I'm trying to determine if a more symbolic approach may lend itself to broader generalization capabilities in lieu of massive amounts of training data. I am also trying to determine whether dramatically simplified, CPU-only architectures could be feasible, e.g., a VM with ~8 interpreted instructions combined with clever search techniques (tournament selection and friends).
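To make that concrete, here's a minimal sketch of what I mean, with everything (the instruction set, the tape layout, the mutation scheme) being illustrative assumptions rather than a settled design: a tiny interpreter with 8 instructions over a byte tape, scored against input/output byte pairs, evolved with tournament selection.

```python
import random

# Hypothetical 8-instruction set; names and semantics are placeholders.
OPS = ["INC", "DEC", "LEFT", "RIGHT", "COPY_IN", "COPY_OUT", "SWAP", "NOP"]

def run(program, data, steps=256):
    """Interpret a program (list of op names) over an input byte list."""
    tape = list(data) + [0] * 8   # working tape seeded with the input
    out, ptr = [], 0
    for op in program[:steps]:
        if op == "INC":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "DEC":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == "LEFT":
            ptr = max(ptr - 1, 0)
        elif op == "RIGHT":
            ptr = min(ptr + 1, len(tape) - 1)
        elif op == "COPY_IN":
            tape[ptr] = data[ptr % len(data)] if data else 0
        elif op == "COPY_OUT":
            out.append(tape[ptr])
        elif op == "SWAP":
            j = (ptr + 1) % len(tape)
            tape[ptr], tape[j] = tape[j], tape[ptr]
        # NOP does nothing
    return bytes(out)

def fitness(program, pairs):
    """Negative byte-level distance to the target outputs across examples."""
    score = 0
    for inp, want in pairs:
        got = run(program, list(inp))
        score -= abs(len(got) - len(want))
        score -= sum(a != b for a, b in zip(got, want))
    return score

def tournament(pop, pairs, k=3):
    """Pick the best of k random candidates (tournament selection)."""
    return max(random.sample(pop, k), key=lambda p: fitness(p, pairs))

def evolve(pairs, pop_size=60, length=12, gens=40, seed=0):
    """Evolve fixed-length programs against the training pairs."""
    rng = random.Random(seed)
    random.seed(seed)
    pop = [[rng.choice(OPS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            child = list(tournament(pop, pairs))
            if rng.random() < 0.5:  # point mutation
                child[rng.randrange(length)] = rng.choice(OPS)
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda p: fitness(p, pairs))

# Toy task: "increment every input byte by one".
best = evolve([(b"ab", b"bc"), (b"12", b"23")])
```

The point is that the whole search loop fits in a page and runs on a CPU; the open question is whether clever search makes up for the lack of gradients.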

I don't have anything public yet. I am debating going wide open for better collaboration with others.

> I figure evolution-designed languages might come up with things that are hard to pattern match for more complex operations.

I think I agree with this: once you hit a certain level of complexity, things would get really hard to anticipate. The chance of hitting good patterns would probably drop over time as the model improves.

I've been looking at an adjacent idea in which a meta program is responsible for authoring the actual task program each time, but I haven't found traction there yet. Adding a second layer really slows down the search, and the fitness function for the meta program is a proxy at best unless you have a LOT of free time to critique random program sources.
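A rough sketch of what the two-layer setup looks like, under made-up assumptions (here the "meta program" is just a vector of sampling weights over the instruction set, and its proxy fitness is the best inner score found under a fixed budget; none of these names or choices come from the actual project):

```python
import random

# Illustrative instruction set for a trivial inner interpreter.
OPS = ["INC", "DEC", "LEFT", "RIGHT", "OUT", "NOP"]

def run(prog, data):
    # Stub interpreter: INC bumps the current byte, OUT emits it.
    out, ptr, tape = [], 0, list(data)
    for op in prog:
        if op == "INC" and tape:
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "OUT" and tape:
            out.append(tape[ptr])
        elif op == "RIGHT":
            ptr = min(ptr + 1, len(tape) - 1)
    return bytes(out)

def task_fitness(prog, pairs):
    s = 0
    for inp, want in pairs:
        got = run(prog, list(inp))
        s -= abs(len(got) - len(want)) + sum(a != b for a, b in zip(got, want))
    return s

def author(weights, length, rng):
    # The "meta program": sample a task program biased by the weights.
    return rng.choices(OPS, weights=weights, k=length)

def proxy_meta_fitness(weights, pairs, budget=200, rng=None):
    # Proxy fitness: best task score reachable under a small inner budget.
    # This is exactly the weak signal that makes the outer search slow.
    rng = rng or random.Random(0)
    return max(task_fitness(author(weights, 8, rng), pairs)
               for _ in range(budget))
```

Every outer-loop evaluation costs an entire inner search (`budget` program runs), which is where the multiplicative slowdown comes from.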


