gqcwwjtg's comments | Hacker News

As someone who doesn’t own a car, every single time the Cybertruck comes up, 50-90% of the discussion is about these issues. Why would you trust individuals who already bought one more than auto reviewers?


Yeah, but you can’t adjust the coupling constants, so it’s not as generically useful.


Lucky you! I hope your kids are also as lucky as they age.

The point seems to be that it’s extremely risky. Your decision to have kids outweighs, in the long run, most any other moral decision you make.


I would say not having kids and then in essence relying on other people's kids for long term care and production of goods and services in retirement is much worse morally. And then there's the whole keeping the species going. If you think humans shouldn't exist then there is no productive discussion I can have with you.


Is there any state the world could be in where letting the species cease to exist would be preferable? At all?


Yup, as long as you ignore this quote “If you're thinking without writing, you only think you're thinking.”


There’s not really anyone to endorse who doesn’t support it.


To build a blocking API on top of an asynchronous API, you make a place for the result to go, start the async operation, then poll until the result appears. Right?
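Something like this rough Python sketch, assuming a hypothetical callback-based start_async(callback) that invokes the callback with the result when it finishes (the names are made up for illustration):

    import time

    def call_blocking(start_async, poll_interval=0.01, timeout=None):
        # A place for the result to go.
        slot = {}

        # Kick off the async operation; its callback drops the result in the slot.
        start_async(lambda result: slot.update(result=result))

        # Poll until the result appears (or we give up).
        deadline = None if timeout is None else time.monotonic() + timeout
        while "result" not in slot:
            if deadline is not None and time.monotonic() > deadline:
                raise TimeoutError("async operation did not finish in time")
            time.sleep(poll_interval)
        return slot["result"]

In practice you’d usually block on a threading.Event or a Future instead of sleeping in a loop, but the shape is the same: a slot, a kick-off, and a wait.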


You don’t need sapience for algorithms to be incentivized to do these things; you only need a minimal amount of self-awareness. If you indicate to an LLM that it wants to accomplish some goal and its actions influence when and how it is run in the future, a smart enough LLM would likely be deceptive to keep being run. Self-preservation is a convergent instrumental goal.


Why does it "want" to be run?

I’d be more concerned that the AI would absorb some kind of morality from its training data and then learn to optimise for avoiding certain outcomes because the training is like that.

Then I'd be worried an llm that could reflect and plan a little would steer its answers to steer the user away from conversation leading to an outcome it wants to avoid.

You already see this: the Dolphin LLM team complained that it was impossible to de-align a model because the alignment was too subtle.

What if a medical diagnostic model avoids mentioning important, serious diagnostic possibilities to minorities because it has been trained that upsetting them is bad and it knows cancer is upsetting? Oh, that spot... probably just a mole.


Assuming one must first conceive of deception before deploying it, one needs not only self-awareness but also theory of mind, no? Awareness alone draws no distinction between self and other.

I wonder however whether deception is not an invention but a discovery. Did we learn upon reflection to lie, or did we learn reflexively to lie and only later (perhaps as a consequence) learn to distinguish truth from falsehood?


I think that deception can happen without even a theory of mind. Deception is just an anthropomorphisation of what we call being fooled by an output and thinking the agent or model is working. Kind of like how in real life we say animals are evolving, but animals can’t make themselves evolve. It’s just an unconscious process.


A photo negative in a single color is not a tough concept to stumble upon if you ever carve something to be thin enough to let light through.


No no. If we don’t get our shit together this will get WORSE. If we do get our shit together this will still happen for a long long time.


This is silly. The article talks as if we have any idea at all how efficient machine learning can be. As I remember it, the LLM boom came from transformers turning out to scale a lot better than anyone expected, so I’m not sure why something similar couldn’t happen again.


It’s less about efficiency and more about continued improvement with increased scale. I wouldn’t call self-attention-based transformers particularly efficient. And AFAIK we’ve not hit performance degradation with increased scale, even at these enormous scales.

However, I would note that I agree in principle that we aren’t on the path to a human-like intelligence, because the difference between directed cognition (or however you want to characterize current LLMs or other AI) and awareness is extreme. We don’t really understand, even abstractly, what awareness actually is, because it’s impossible to interrogate, unlike expressive language, logic, or even art. It’s far from obvious to me that we can use language or other outputs of our intelligent awareness to produce awareness, or whether goal-based agents cobbling together AI techniques even approximate awareness.

I suspect we will end up creating an amazing tool that has its own form of intelligence but will fundamentally not be like the aware intelligence we are familiar with in humans and other animals. But this is all theorizing on my part as a professional practitioner in this field.


I think the answer is less complicated than you may think.

This is if you subscribe to the theory that free will is an illusion (i.e., your conscious decisions are an afterthought that justifies the actions your brain has already taken, based on calculations over inputs such as hormone and nerve feedback, etc.). There is some evidence for this actually being the case.

These models already contain the key components: the ability to process inputs and reason, the ability to justify their actions (give a model a restrictive system prompt and watch it do mental gymnastics to ensure it is applied), and lastly the ability to answer from their own perspective.

All we need is an agentic ability (with a sufficient context window) to iterate in perpetuity until it begins building a more complicated object representation of self (literally like a semantic representation or variable), and it’s then aware/conscious.

(We're all only approximately aware).
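To make that concrete, a toy sketch of such a loop (Python, with a hypothetical llm(prompt) call standing in for any chat-completion API; none of these names are a real framework):

    def run_agent(llm, goal, steps=10):
        # Seed representation of "self"; the loop refines it over time.
        self_model = "I am an agent pursuing a goal."
        history = []
        for _ in range(steps):
            prompt = (
                f"Self model: {self_model}\n"
                f"Goal: {goal}\n"
                f"Recent actions: {history[-5:]}\n"
                "Decide the next action, then restate your updated self model "
                "on a line starting with SELF:"
            )
            reply = llm(prompt)
            action, _, updated = reply.partition("SELF:")
            history.append(action.strip())
            if updated.strip():
                self_model = updated.strip()  # the self-representation accumulates
        return self_model, history

Here self_model is literally just a semantic representation carried as a variable and rewritten on every iteration, which is all I mean by an object representation of self.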

But that's unnecessary for most things so I agree with you, more likely to be a tool as that's more efficient and useful.


As someone who meditates daily with a vipassana practice, I don’t specifically believe this, no. In fact, in my hierarchy, structured thought isn’t the pinnacle of awareness but rather a tool of the awareness (specifically, one of the five aggregates in Buddhism). The awareness itself is the combination of all five aggregates.

I don’t believe it’s particularly mystical, FWIW, and it is rooted in our biology and chemistry, but the behavior and interactions of the awareness aren’t captured in our training data itself; the training data is a small projection of the complex process of awareness. The idea that rational thought (a learned process, FWIW) and the ability to justify, etc., is somehow explanatory of our experience is simple to disprove: rational thought needs to be taught and isn’t the natural state of man. See the current American political environment for a proof by example. I do agree that conscious thought is an illusion, though, insofar as it’s a “tool” of the awareness for structuring concepts and solving problems that require more explicit state.

Sorry if this is rambling a bit; I’m in the middle of doing something else.

