
I've been playing around with interpreted variants of brainfuck for genetic programming experiments. The intended audience of the language is the evolutionary algorithm, not a human. The central goals are to minimize the size of the search space while still providing enough expressivity to avoid the Turing tarpit scenario (i.e., where we need an infeasible number of cycles to calculate a result).

I've recently found that moving from a linear memory model to a stack-based model produces a dramatic improvement in performance. The program tape is still linear, but memory is accessed through a stack interface. It seems pointer-based memory access makes the search space prohibitively large. A stack makes it much easier to splice arbitrary segments of programs together and still get meaningful outcomes; crossover of linear program tapes doesn't seem practical without constraining the memory model in some way like this.
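Concretely, something in this ballpark (toy opcode set just for illustration, not my actual one):

    import random

    # Toy stack-machine genome: the program is still a flat string, but memory
    # is a stack, so there is no data pointer for crossover to scramble.
    # Anything that isn't an opcode is a no-op.
    OPS = "01+-d![]"   # push 0, push 1, add, sub, dup, output, loop open/close

    def match_brackets(genome):
        # map each '[' to its ']' and back; unmatched brackets become no-ops
        pairs, stack = {}, []
        for i, op in enumerate(genome):
            if op == '[':
                stack.append(i)
            elif op == ']' and stack:
                j = stack.pop()
                pairs[i], pairs[j] = j, i
        return pairs

    def run(genome, max_steps=10_000):
        pairs = match_brackets(genome)
        stack, out, ip, steps = [], [], 0, 0
        while ip < len(genome) and steps < max_steps:
            op = genome[ip]
            if op == '0':
                stack.append(0)
            elif op == '1':
                stack.append(1)
            elif op == '+' and len(stack) >= 2:
                stack.append(stack.pop() + stack.pop())
            elif op == '-' and len(stack) >= 2:
                a, b = stack.pop(), stack.pop()
                stack.append(b - a)
            elif op == 'd' and stack:
                stack.append(stack[-1])
            elif op == '!' and stack:
                out.append(stack.pop())
            elif op == '[' and ip in pairs and (not stack or stack[-1] == 0):
                ip = pairs[ip]
            elif op == ']' and ip in pairs and stack and stack[-1] != 0:
                ip = pairs[ip]
            ip += 1
            steps += 1
        return out

    def crossover(a, b):
        # one-point crossover on the flat tapes; with a stack the offspring
        # still tends to do something meaningful with whatever is on top
        i, j = random.randrange(len(a) + 1), random.randrange(len(b) + 1)
        return a[:i] + b[j:]

    # run("11+d+!") -> [4]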



Hey! Have you come across the recent(ish) paper from Google researchers about self-replicators? In one of their experiments they used a self-modifying (metaprogrammable) variant of Brainfuck that I've found very interesting for EAs. I haven't fully replicated their findings yet, since I've been experimenting with better ways to observe the progress of the evolution, but it might be interesting for your work as well.


This paper? https://arxiv.org/abs/2406.19108

I've read it many times. It introduced me to the idea of using self-modifying programs. I've built several prototypes that use their exact instruction set, but I haven't found a way to guide the evolution in a desired direction yet. The self-modifying aspect can be quite destructive to the genome.
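From memory (and I may be getting details of their semantics wrong), the step function looks roughly like this; the important part is that '.' and ',' write into the same tape the program runs from, which is exactly why execution can trash the genome:

    def _match_fwd(tape, ip):
        # scan forward for the matching ']' (falls off the end if unmatched)
        depth = 0
        for j in range(ip, len(tape)):
            if chr(tape[j]) == '[':
                depth += 1
            elif chr(tape[j]) == ']':
                depth -= 1
                if depth == 0:
                    return j
        return len(tape) - 1

    def _match_bwd(tape, ip):
        # scan backward for the matching '[' (clamps to 0 if unmatched)
        depth = 0
        for j in range(ip, -1, -1):
            if chr(tape[j]) == ']':
                depth += 1
            elif chr(tape[j]) == '[':
                depth -= 1
                if depth == 0:
                    return j
        return 0

    def step(tape, ip, h0, h1):
        # Execute one instruction; program and data share the same tape,
        # so '.' and ',' can overwrite code -- including the copy loop itself.
        n = len(tape)
        op = chr(tape[ip])
        if   op == '<': h0 = (h0 - 1) % n
        elif op == '>': h0 = (h0 + 1) % n
        elif op == '{': h1 = (h1 - 1) % n
        elif op == '}': h1 = (h1 + 1) % n
        elif op == '+': tape[h0] = (tape[h0] + 1) % 256
        elif op == '-': tape[h0] = (tape[h0] - 1) % 256
        elif op == '.': tape[h1] = tape[h0]     # copy byte under head0 to head1
        elif op == ',': tape[h0] = tape[h1]     # copy byte under head1 to head0
        elif op == '[' and tape[h0] == 0:
            ip = _match_fwd(tape, ip)
        elif op == ']' and tape[h0] != 0:
            ip = _match_bwd(tape, ip)
        # every other byte value is a no-op
        return ip + 1, h0, h1

The driver is just a loop over a bytearray that calls step until ip runs off the tape or a step budget is exhausted.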


Yes! That's the one!

I have to dig up my old code. I remember it was difficult to observe and identify the replicators. I don't remember following their "tracker" idea.

As you've mentioned, it can be quite self-destructive, so I've been experimenting with the instruction set itself.

Each cell is one of 256 values, and only 10 of those values are instructions. In addition, the original instruction set is not uniformly distributed over the byte range. This means the likelihood of a destructive mutation is strongly affected by the ratio of valid instructions and by how they're distributed. For example, a cell with value 0xD0 is much less likely to mutate to a valid instruction than a cell with value 0x0D (assuming the instructions keep their ASCII byte values). By playing with these parameters to shrink the state space, I've seen significantly different levels of stability.
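Here's a quick way to see the skew (this assumes single-bit-flip mutation; the same idea carries over to other mutation operators):

    # For each candidate cell value, count how many single-bit flips land on a
    # valid instruction byte. The 10 instruction bytes below are the ASCII values
    # of <>{}+-.,[] -- all clustered in 0x2B-0x7D, which is what skews the odds.
    INSTRUCTIONS = b"<>{}+-.,[]"

    def hits(value):
        return sum((value ^ (1 << bit)) in INSTRUCTIONS for bit in range(8))

    print(hits(0x0D), hits(0xD0))  # 1 0: 0x0D is one flip from '-' (0x2D), 0xD0 isn't near anything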

I'd love to follow your work if you share it anywhere!



