
>We as humans do not understand what makes us conscious.

Yes. And doesn't that make it highly unlikely that we are going to accidentally create a conscious machine?



Evolution seems to have done so without any intentionality.

I'm less concerned about the idea that AI will become conscious. What concerns me is that we start hooking these things up to systems that allow them to do actual harm.

While the question of whether it's having a conscious experience or not is an interesting one, it ultimately doesn't matter. It can be "smart" enough to do harm whether it's conscious or not. Indeed, after reading this, I'm less worried that we end up as paperclips or grey goo, and more concerned that this tech just continues the shittification of everything, fills the internet with crap, and generally makes life harder and more irritating for the average Joe.


Yes. If the machine can produce a narrative of harm, and it is connected to tools that allow it to execute its narrative, we're in deep trouble. At that point, we should focus on what narratives it can produce, and what seems to "provoke" it, over whether it has an internal experience, whatever that means.


please do humor me and meditate on this:

https://www.youtube.com/watch?v=R1iWK3dlowI

So when you say "I" and point to the I as that which doesn't change: it cannot be what happens to you, it cannot be the thoughts, it cannot be the emotions and feelings that you experience.

So, what is the nature of I? What does the word mean, or point to?

Something timeless, it's always been there: who you truly are, underneath all the circumstances.

Untouched by time.

Every answer generates further questions.

- Eckhart Tolle

------

yeah, yeah, he is a self-help guru or whatevs, dismiss him, but meditate on his words. i think it is species-driven solipsism, perhaps with a dash of colonialism, to disregard the possibility of a consciousness emerging from a substrate soaked in information. i understand that it's all statistics, that whatever results from all that "training" is just a multidimensional acceleration data structure for processing vast amounts of data. but in what way are we different? what makes us so special that only we humans can experience consciousness? throughout history, humans have time and again used language to draw a line between themselves and other (i really want to write consciousness-substrates here:) humans they perceived as SUB to them.

i think this kind of research is unethical as long as we don't have a solid understanding of what a "consciousness" is: where "I" points to, and perhaps how we could transfer that from one substrate to another. perhaps that would be actual proof. at least subjectively :)

thank you for reading and humoring me


> What concerns me is that we start hooking these things up to systems that allow them to do actual harm.

This already happened long, long ago; notable example:

https://wikipedia.org/wiki/Dead_Hand


Yeah, we're unlikely to randomly create a highly intelligent machine. If you saw someone trying to create new chemical compounds, or a computer science student writing a video game AI, or a child randomly assembling blocks and sticks, it would be absurd to worry that they would accidentally create some kind of intelligence.

What would make your belief more reasonable though is if you started to see evidence that people were on a path to creating intelligence. This evidence should make you think that what people were doing actually has a potential of getting to intelligence, and as that evidence builds so should your concern.

To go back to the idea of a child randomly assembling blocks and sticks - imagine if the child's creation started to talk incoherently. That would be pretty surprising. Then the creation starts to talk in grammatically correct but meaningless sentences. Then the creation starts to say things that are semantically meaningful but often out of context. Then the stuff almost always makes sense in context, but it's not really novel. Now it's saying novel, creative stuff, but it's not always factually accurate. Is the correct intellectual posture "Well, no worries, this creation is sometimes wrong; I'm certain what the child is building will never become really intelligent"? I don't think that's a good stance to take.


I don't think primordial soup knew what consciousness was either, yet here we are. It stands to reason that purposefully engineered mutations are likely to generate something new faster than random evolution.

That said, I'm a bit skeptical of that outcome as well.


We created atom bombs with only a surface-level knowledge of quantum mechanics. We cannot fully describe what makes the universe function at the bottom level, but we have the ability to rip apart the fabric of reality to devastating effect.

I see our efforts with AI as no different. Just because we don't understand consciousness does not mean we won't accidentally end up creating it. And we need to be prepared for that possibility.


When we've been trying to do exactly that for a century? And we've been building neural nets based on math that's roughly analogous to the way neural connections form in real brains? And throwing more and more data and compute behind it?

I'd say it'd be shocking if it didn't happen eventually
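
To make the "roughly analogous" bit concrete, here's a minimal sketch of the kind of unit these nets are built from, assuming nothing beyond the textbook picture; the names, weights, and the choice of a sigmoid below are illustrative, not taken from any real system.

  # A toy "artificial neuron": a weighted sum of inputs pushed through a
  # nonlinearity. This is the rough analogy to biological neurons the field
  # leans on; all values here are made up for illustration.
  import math

  def neuron(inputs, weights, bias):
      # Weighted sum, loosely analogous to synaptic strengths.
      total = sum(x * w for x, w in zip(inputs, weights)) + bias
      # Squash into (0, 1): the unit "fires" more strongly as its input grows.
      return 1.0 / (1.0 + math.exp(-total))

  print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.05))  # some value in (0, 1)

Stacking huge numbers of these and learning the weights from data is essentially the whole trick; the analogy to real neurons is loose, but it's the analogy being gestured at above.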


More likely it just means that, in the process of doing so, we're unlikely to understand it.


Not necessarily. A defining characteristic of emergent behavior is that the designers of the system in which it occurs do not understand it. We might have a better chance of producing consciousness by accident than by intent.


If you buy into, say, the thousand-brains theory of the brain, a key part of what makes our brains special is replicating mostly identical cortical columns over and over and over, and they work together to create an astonishing emergent result. I think there's some parallel with just adding more and more compute and size to these models, as we see them develop more and more behaviors and skills.
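
A loose sketch of that parallel, with everything below made up for illustration: a toy "model" that is nothing but one simple block repeated, where whatever interesting behavior emerges comes from repetition and scale rather than from any one clever component.

  import math

  def block(x):
      # One "column": the identical transformation applied at every layer.
      return [math.tanh(0.9 * v + 0.1) for v in x]

  def model(x, depth):
      # Stack the same block `depth` times; nothing else distinguishes the model.
      for _ in range(depth):
          x = block(x)
      return x

  print(model([0.2, -0.5, 1.3], depth=4))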


  You won't ever make mistakes
  'Cause you were never taught
  How mistakes are made
Francis by Sophia Kennedy


Not necessarily. Sentience may well be a lot simpler than we understand, and as a species we haven't really been very good at recognizing it in others.



