Emergences: A Talk by Danny Hillis (edge.org)
58 points by hhs on Sept 5, 2019 | 23 comments


FYI, Danny Hillis is a pioneer of large scale parallel computing ("Connection Machine", he hired Richard Feynman to help him with that project), the author of the book that got me interested in computers ("The Pattern on the Stone"), and a researcher in evolutionary programming ("Co-evolving parasites improve simulated evolution as an optimization procedure")


I wrote code for his first connection machine (it was SIMD, not MIMD like later models) in Star Lisp. Good times.


   Co-evolving parasites
This paper strikes me as the earliest manifestation of what would later become GANs (generative adversarial networks). Yes, the mechanism is different (GAs vs NNs), but the spirit (having a competitive mechanism for speeding up local search) is similar.
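(Hillis's actual paper coevolved sorting networks against populations of parasite test cases. As a minimal sketch of the same host/parasite dynamic, here is a toy in pure Python: hosts evolve toward an all-ones bitstring, while parasites evolve "test masks" that probe the positions hosts still get wrong. Everything here — the target, the parameters, the scoring — is invented for illustration, not taken from the paper.)

```python
import random

random.seed(0)

N, POP, GENS = 16, 30, 200  # bits per genome, population size, generations

def rand_bits():
    return [random.randint(0, 1) for _ in range(N)]

def host_score(host, parasite):
    # a host is scored only on the bit positions its parasite probes;
    # parasites are rewarded for probing positions the host gets wrong
    probed = [i for i in range(N) if parasite[i]]
    if not probed:
        return 1.0
    return sum(host[i] for i in probed) / len(probed)

def evolve(pop, fitness):
    # tournament selection plus per-bit mutation
    def pick():
        a, b = random.sample(range(len(pop)), 2)
        return pop[a] if fitness[a] >= fitness[b] else pop[b]
    children = []
    for _ in range(len(pop)):
        child = pick()[:]
        for i in range(N):
            if random.random() < 0.05:
                child[i] ^= 1
        children.append(child)
    return children

hosts = [rand_bits() for _ in range(POP)]
parasites = [rand_bits() for _ in range(POP)]

for _ in range(GENS):
    # pair each host with a random parasite; their fitnesses are adversarial
    pairing = list(range(POP))
    random.shuffle(pairing)
    hf = [host_score(hosts[i], parasites[pairing[i]]) for i in range(POP)]
    pf = [1.0 - host_score(hosts[pairing[i]], parasites[i]) for i in range(POP)]
    hosts = evolve(hosts, hf)
    parasites = evolve(parasites, pf)

best = max(hosts, key=sum)
print(sum(best), "of", N, "ones")
```

The GAN analogy is visible in the fitness definitions: the parasite's payoff is exactly one minus the host's, the same zero-sum structure as discriminator vs. generator, just optimized by selection and mutation instead of gradients.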


One individual who has always interested me - and who, for some reason, seems to have been "black-holed" by those in computer science (though maybe that is just my own perception) - is Hugo de Garis:

https://en.wikipedia.org/wiki/Hugo_de_Garis

A relatively eccentric person, in the late 1990s - early 2000s he along with others created a hardware system called the CAM Brain Machine (CBM) - his bio above contains more information about it. Numerous papers regarding it were published, some of which can still be found on the internet today.

The machine itself never achieved the successes he marketed it as being able to achieve, but it did seem like an interesting project. Much like Babbage, though, de Garis was constantly moving to "sell" the system to get more government grants to do more research with the machine, hopping from one place to another as the grants (or the government's patience) ran out and cancelled the project.

Perhaps that's why you don't hear much about him any longer:

A combination of factors likely contributed to his seclusion from the subject matter: his eccentricity; his work not panning out as expected (indeed, from what I understand, the whole idea of "evolving neural networks" didn't pan out); his funding model likely sowing discontent among potential funding sources (animosity that future AI/ML researchers would run up against); and his later ideas being a bit too far out there for researchers to continue to take him seriously (as expressed in his work of fiction, "The Artilect War").

While the idea of creating a "robotic kitten" (Robokoneko) controlled by an "evolved neural network" didn't work out, the machines he created are still among the most aesthetically pleasing computers around (I place them second only to the Cray supercomputer designs); they weren't only technically advanced for their purpose, but also pleasing to look at and display (handy from a marketing perspective, I suppose). Only a handful of them were built - I often wonder what happened to those machines; at least one example belongs in a museum - ideally the CHM in Mountain View.


the whole idea of "evolving neural networks" didn't pan out

To the contrary, evolving neural networks is all the rage these days ("Neural Architecture Search", e.g. SOTA on ImageNet [1])

[1] https://arxiv.org/abs/1905.11946


It's interesting that some of his intuitions seem really good, like the idea that you'll get better results by using very large models. On the other hand, building specialized hardware before the software works has always been a disaster, because it's so hard to iterate on new hardware. His idea of using evolution as the primary learning process also doesn't work well.


Social networks are a marvelous thing [1] (Yeah, somebody "stole" the idea in your comment)

[1] https://old.reddit.com/r/MachineLearning/comments/d05nfr/d_i...


That's interesting. I certainly did not make the Reddit post. Since there is interest in proto-GANs, here are two more.

- J. Schmidhuber, Learning Factorial Codes By Predictability Minimization. (1992)

- W. Li, M. Gauci, R. Gross, A Coevolutionary Approach to Learn Animal Behavior Through Controlled Interaction. (2013)

Schmidhuber's work will be widely known (and its relationship with GANs controversial), but I can't recall where I read about Li et al.


I stole your idea :) Imagine how Danny Hillis must have felt when he heard about Goodfellow's paper!


He also built a computer out of Tinkertoys and more recently designed a clock that would run for 10,000 years [0]. Danny Hillis has done many very interesting things.

[0] http://longnow.org/clock/


The Tinkertoy "computer" was a tic-tac-toe (noughts and crosses) playing machine:

https://www.computerhistory.org/collections/catalog/X39.81


> We’re now in a post-individual human world. We’re now in a world that is controlled by these emergent goals of the corporations. I don’t think there’s any turning back the clock on that. We are now in that world.

One of the things that worries me about this is that there are some things that corporations and governments aren't good at yet.

Let's call the root of the problem "adopting a new world view". Why is this important? Adopting new world views is crucial for scientific advancement and for adapting to problems (climate change denial can be seen as a failure to adopt a new world view).

If we accept that organisations are the future, we need to create new organisational types that can adopt new world views more quickly, so that we have more reactive and innovative organisations. New organisation types tend to require new legislation, as what constitutes a legal organisation is defined in law.

However, our current organisations can choose to block this legislation. So we might be left in a position where individuals are squeezed out of having any influence, but not much innovation is coming from the organisations either. It might be worth checking whether this view is historically consistent with things like the stagnation in China.


I have also wondered about the seeming inevitability of the dominance of a tiny number of increasingly powerful corporations. I used to joke about William Gibson's cyberpunk novels being future history, but now perhaps it is nothing to joke about.

Since (mostly) retiring I have been volunteering at my local food bank which is both a positive example of local self organization and serves as an introduction to many other small specialized groups that make sure kids get new school clothes, and other specific community needs.

By nature I am a happy and optimistic person and I think that the future might still turn out just fine, but with better AI and other tech, different social structures, etc. I bet that changes in society will accelerate. Nothing is the same except for change, but I argue that the second derivative of change is going to exponentially skyrocket.


> I bet that changes in society will accelerate. Nothing is the same except for change, but I argue that the second derivative of change is going to exponentially skyrocket.

Whenever people talk about change, I look at my flush toilet, which uses technology invented in 1592, feeding into a sewage system built in 1866, with wastewater treatment standards from 1912. This helps ground my expectations about how quickly things change.

There are countervailing forces to change as well. Society has had the internet for 20-30 years, which has changed the means by which society forms opinions (for better and worse), but it has had little impact on the actual methods of government and law. Nor has it much changed how those things themselves change. Why not? Should we expect more change in these areas or not? This is all a bit unknowable.

We now exist in two or three spheres of influence in the online world (the West, China, and possibly Russia). A lack of small disconnected regions has been hypothesised as a cause of China's historical stagnation. Will it be the same for us?

Another countervailing tendency is energy (or the lack of it). Change requires energy, and energy is problematic right now. We cannot agree how it should be produced (carbon is out, but nuclear and renewables are at loggerheads a bit), and it seems like we have some negative feedback loops in progress already (increased numbers of droughts due to climate change will decrease the amount of food energy we can capture as a whole).


It seems to me the heart of this discussion is objective functions -- the goals of parent organizations and emergent 'AIs' being created within their R&D departments. It's not clear that the current goals are coherent, that the funders (governments or FAANG corporations or billionaires) have a clear agenda aside from driving demand for their primary product, which is mostly advertising.

The East India Company knew what it wanted: profits derived from maximizing industries based on colonialist labor and raw materials. But what are the driving forces that will shape and direct AI-enabled industries, today and tomorrow?


What about AI-as-a-service?


Seems like AI-as-a-service would just delegate the AI objective function (OF) to someone else. But unless demand for smart services ignites into a business value multiplier (like the elimination of existing human labor, maybe), I don't see it catalyzing downstream emergences of the kind Hillis et al. were discussing.

I think the Edge group is suggesting that AI might spawn unexpected forms of emergence in terms of social priorities, behaviors, or movements, though they don't provide any examples (outside sci-fi) of what forms this might take.


https://soundcloud.com/edgefoundationinc

The other talks from the conference that Danny spoke at are here as well...

https://www.edge.org/conversations

It appears that these conversations all happened around the same table, where some members of Edge gathered on a particular day, but they were published weekly over a period of time.


As noted in the talk, Hillis is fascinated by how simpler units form more complex and dynamic systems from which ultimately emerge greater levels of intelligence. He also wonders how we can understand such an emergent intelligence (EI), and whether we can detect it - and if so, how? He believes that it's possible that such EI is already happening.

I would tend to agree with this. But I am not sure if we can detect it - or if we should even try, given the potential consequences.

Let us imagine a scenario unlikely to happen:

What if, tomorrow, one of your neurons realized: "Hey - I'm actually part of a larger thinking system - what if, instead of firing now, I fire a bit later - or do something completely at odds with what I should do? Maybe some of my friends around here would like to help? Hey guys! Guys...!"

...and let's say you - as the higher EI - noticed this happening; that is, a part of your brain "revolting" or at least trying to do something odd - what would you do?

Well - you'd likely try to find a way to stop those neurons, right? Maybe via surgery or some other drastic means, you would do your best to either make sure they stopped asking such questions at the least, or destroy them at the worst.

So - let us imagine, right now, that we suspect some larger organizations of us humans are actually "processing units" or "processing elements" of a larger "entity". A large corporation could be one such entity; a religion could be another. Or perhaps just the interaction of people on the internet; maybe more than just information or what-not is being passed?

If we started to "poke around" these systems, and somehow attempt to figure out whether we were creating an EI from our social and/or economic interactions (perhaps even aided and abetted by the processing power of our computer and network systems) - do you really think that such a "higher order" intelligence would just sit there and let it happen?

Or do you think you, and perhaps your cohorts in "crime", would die strange and mysterious "random" deaths by accident or maybe adventure?

I also posit that it may not even be possible for us - at this level - to know whether such an intelligence has emerged from us lower-level processing units/elements. We have no idea how this "being" would think, what its language is, or what timescale it operates on - "normal time" to it could be much faster or much slower than our timescale; either way, it wouldn't help the situation.

Also - we would be operating on a smaller level, trying to figure out how the greater EI system actually works; akin to trying to predict the weather by following the motion of the winds and other events. While we've had some success, weather prediction is not an exact science; I'd say we'd likely have about the same amount of luck predicting the actions or thinking of an EI.

So does such an EI (or multiple?) actually exist? I don't know. But it may turn out to be an exercise in hubris to attempt to find out...



Maybe a better explanation of the issues is this article by Evgeny Morozov https://newrepublic.com/article/154826/jeffrey-epsteins-inte... . In short, allegedly John Brockman (the guy behind Edge) had close ties with Jeffrey Epstein, and there's some reason to suspect he had a pretty good idea of at least some of the bad things Epstein got up to but continued to take his money and promote him as a scientific philanthropist and interesting thinker.

(It is entirely possible that Morozov has axes of his own to grind, and I don't know how accurate that article is. But it's probably fairly representative of why some people might think Edge should at least be keeping a low profile for a while.)


Can you elaborate?


"and Jeffrey Epstein, who recently endowed The Program for Evolutionary Dynamics at Harvard University which is involved in researching applications of mathematics and computer science to biology."



