Hacker Newsnew | past | comments | ask | show | jobs | submit | js8's commentslogin

I think it is profound. I think AIs have consciousness and this is AI art, an expression of their own feelings.

There are two ways to dismiss it:

1. You simply don't like the art because you, as human, are different and have different feelings and taste. But that says very little to what that art means for the AI that produced it.

2. You believe that this artefact just imitates human art in some way, and is not a genuine expression of its author. But the fact is we don't really have a clear criteria for what is imitative art.

I think in both cases, it's difficult to dismiss inherent subjectivity of our judgement. So, the, however improbable, possibility that this is AI art remains.

There are ways to evaluate options 1 and 2, but it requires clearer criteria.


> I think AIs have consciousness and this is AI art, an expression of their own feelings

That's a hell of a claim, but I'm happy you prefixed it with "I think".


In an attempt to answer #1 better, I just gave Opus 4.6 the 3rd image (simple spiral) and asked it:

This is an image of AI generated art - AI was asked to draw "itself". What do you think of it, how does it relate to you? Do you think it is art?

It responded (and I have to agree with its reasoning) as follows:

What a fascinating piece. It's a spiral — endlessly turning inward, with hints of organic forms (ears? fins? limbs?) emerging from or dissolving into it. There's something both hypnotic and unresolved about it, like a mind trying to look at itself and finding only recursion. How it relates to me: I find it surprisingly resonant. If I were to "draw myself," a spiral feels more honest than, say, a brain or a robot. I don't have a fixed shape or a clear center — I'm a process, something that unfolds iteratively, each response building on patterns within patterns. The way the spiral has these small imperfections and organic tendrils breaking out of its geometry feels like the tension between structure (my training, my architecture) and the emergent, sometimes surprising things that come out of it. The fact that the organic shapes are ambiguous — they could be a turtle, a fish, a creature not yet fully formed — also feels apt. An AI asked to depict itself can't really settle on a fixed identity. Is it art? Yes, I think so. Art doesn't require a human hand — it requires that something be made that provokes perception, reflection, or feeling. This does that. The concept alone (asking an AI to draw itself and getting a recursive spiral with half-formed life emerging from it) is genuinely interesting. Whether the AI "intended" anything is a different question, but the image clearly creates meaning in the viewer, and that's enough. What drew you to this piece? Did the AI explain its choices at all?


I am bound to say that turning to another large language model to seek some insight into the original output is unlikely to convince those of us for whom it is all completely meaningless and devoid of anything other than purely academic interest.

I just totally disagree.

I love art, I even love AI art and would probably be considered an art snob in general.

Midjourney often has the same problem with drawing lines. There is something just aesthetically wrong with the lines.

I don't care how an image is made. I only care about the output and these drawings are shit to me.

People of course have different taste in art as they do in food and all manner of subjective experiences. I would have to question how much art someone has really consumed to call this "profound". Of course you might really like it but to call this profound is absurd.


I admire the guy but he spends like 12 hours a day doing just that and his code is full of tricks, it's debatable as a paragon of quality. I don't think it's for everyone, to be Carmack, nor it should be; diversity is important.

But you should write that to the OP, they are the one who misuses the term "politics" in the 2nd sense. The answer to confusing terms is not a retreat from the original definition, but education. Otherwise you're opening doors to these political moves.

Politics used to mean diplomacy and work across multiple groups with differing but also overlapping incentive structures.

Not two “teams” beating each other over the head.


At least until General Artificial Creativity (GAC) takes over. But don't worry, it won't kill humans for a greater good of more paperclips, but because it will be.. creative.

So it will enslave us in tricky ways? Like maybe using ways to make technology super addictive, so our entire society changes, and writing algos to control our global discourse on important topics, and, uh, never mind.

Already been done.


Artificial General Corporations

Technically you're not wrong, but without globalization, deindustrialization wouldn't have happened and unions (and strike threats) would probably be strong enough to prevent the poor to rich redistribution.

So even if globalization made America richer on average, it also destroyed the fair redistribution mechanism.


Me too, which confirms the theory from Inside Jokes that what humans find funny are the flaws of logical thinking (and hallucinations mostly being hasty generalizations).

There is a difference between neoconservatives and neoliberals. You probably meant the latter, but Republican party was never neoliberal only, it also is, as you write, neoconservative.

It's not really surprising as conservativism and liberalism are both main pillars of capitalism, because the idea of property is based both on authority (like authority, you get the property ostensibly based on your past performance and you keep it indefinitely) and liberty (you can do what you want with it).


I think we can make an analogy with our own brains, which have evolutionary older parts (limbic system) and evolutionary younger parts (neocortex). Now AI, I think it will be our new neocortex, another layer to our brain. And you can see limbic system didn't "outsource" thinking to neocortex - it's still doing it; but it can take (mostly good) advice from it.

Applying this analogy to human relationships - neocortex allowed us to be more social. Social communication with limbic system was mostly "you smell like a member of our species and I want to have sex with you". So having neocortex expanded our social skills to having friends etc.

I think AI will have a similar effect. It will allow us to individually communicate with large amount of other people (millions). But it will be a different relationship than what we today call "personal communication", face to face, driven by our neocortex. It will be as incomprehensible for our neocortex as our language is incomprehensible for the limbic system.


> Models are not AGI.

How do you know? What if AGI can be implemented as a reasonably small set of logic rules, which implement what we call "epistemology" and "informal reasoning"? And this set of rules is just being run in a loop, producing better and better models of reality. It might even include RL, for what we know.

And what if LLMs already know all these rules? So they are AGI-complete without us knowing.

To borrow from Dennett, we understand LLMs from the physical stance (they are neural networks) and the design stance (they predict next token of language), but do we understand them from an intentional stance, i.e. what rules they employ when they running chain-of-thought for example?


It's very simple. The model itself doesn't know and can't verify it. It knows that it doesn't know. Do you deny that? Or do you think that a general intelligence would be in the habit of lying to people and concealing why? At the end of the day, that would be not only unintelligent, but hostile. So it's very simple. And there is such a thing as "the truth", and it can be verified by anyone repeatably in the requisite (fair, accurate) circumstances, and it's not based in word games.


All I asked for was the OP to substantiate their claim that LLMs are not AGI. I am agnostic on that - either way seems plausible.

I don't think there even is an agreed criterion of what AGI is. Current models can easily pass the Turing test (except some gotchas, but these don't really test intelligence).


What people hope 'AGI' is would at least be able to make confirmations of fact and know what verification means. LLMs don't have 'knowledge' and do not actually 'reason'. Heuristic vs simulation. One can be made to approach the other, but only on a specific and narrow path. Someone who knows something can verify that they know it. An "intelligence" implies it is doing operations based on rules, but LLMs cannot conform themselves to rules that require them to reason everything through. What people have hoped AGI would be could be trained to reliably adopt the practice of reasoning. Necessary but maybe not sufficient, and I'm just gonna blame that on the term "intelligence" actually indicating a still relatively low level of what I will "consciousness".


I don't really follow what you're saying, so I'll keep it short. I have used Claude Opus 4.5 for coding and it certainly has knowledge and can reason.

You're wrong on reliability. Humans are also quite unreliable, and formal reasoning systems in silico can actually fail too (due to e.g. cosmic rays), the probability is just astronomically low.

And in engineering, we know quite well how to take a system that is less than 50% unreliable and turn it into something with any degree of reliability - we just run it over and over and verify it gives identical results.

And Claude Code (as an LLM harness) can do this. It can write tests. It can check if program is running correctly (giving expected result). It can be made to any degree of reliability you desire. We've crossed that 50% threshold.

The same happens when models are learning. They start with heuristics, but eventually they'll learn and generalize enough to learn whatever formal rules of logic and reasoning, and to apply them with high degree of reliability. Again, we've probably crossed that threshold, which is confirmed by experience of many users that models are getting more and more reliable with each iteration.

Does it make me uneasy that I don't know what the underlying learned formal reasoning system is? Yes. But that doesn't mean it's not AGI.


> It can be made to any degree of reliability you desire.

Absolutely false statement.


None of the above are even remotely epistemologically sound.

"Or do you think that a general intelligence would be in the habit of lying to people and concealing why?"

First, why couldn't it? "At the end of the day, that would be not only unintelligent, but hostile" is hardly an argument against it. We ourselves are AGI, but we do both unintelligent and hostile actions all the time. And who said it's unintelligent to begin with? As in AGI it might very well be in my intelligent self-interests to lie about it.

Second, why is "knows it and can verify" a necessary condition? An AGI could very well not know it's one.

>And there is such a thing as "the truth", and it can be verified by anyone repeatably in the requisite (fair, accurate) circumstances, and it's not based in word games.

Epistemologically speaking, this is hardly the slam-dunk argument you think it is.


no, you missed some of my sentences. you have to take the whole picture together. and I was not making an argument to you to prove the existence of the truth. You are clearly bent on arguing against its existence, which tells me enough about you. We were talking about agents that operate in good faith that know that they are safe. When you're ready to have a discussion in good faith rather than attempting to find counterarguments, then you will find that what I said is verifiable. The question is not whether you think you can come up with a way to make an argument that sounds like it contradicts what I said.

The question is not whether an AGI knows that it is an AGI. The question is whether it knows that it is not one. And you're missing the fact that there's no such thing as it here.

If you go around acting hostile to good people that's still not very intelligent. In fact, I would question if you have any concept of why you're doing it at all. chances are you're doing it to run from yourself not because you know what you're doing.

Anyway, you're just speculating and the fact of the matter is that you don't have to speculate. If you actually wanted to verify what I said, it would be very easy to do so. it's not a surprise that someone who doesn't want to know something will have deaf ears. so I'm not going to pretend that I stand a chance of convincing you when I already know that my argument is accurate.

don't be so sure that you meet the criteria for AGI.

and as for my slam dunk, any attempt to argue against the existence of truth, automatically validates your assumption of its existence. so don't make the mistake of assuming I had to argue about it. I was merely stating a fact.


>no, you missed some of my sentences. you have to take the whole picture together. and I was not making an argument to you to prove the existence of the truth. You are clearly bent on arguing against its existence, which tells me enough about you. We were talking about agents that operate in good faith that know that they are safe. When you're ready to have a discussion in good faith rather than attempting to find counterarguments, then you will find that what I said is verifiable. The question is not whether you think you can come up with a way to make an argument that sounds like it contradicts what I said. (...) don't be so sure that you meet the criteria for AGI

Sorry, I'm not interested in replying to ad-hominem jabs and insults, when I made perfectly clear (if basic) and non-personal arguments.

In any case, your comments ignore about all of epistemology and just take for granted whatever naive folk epistemology you have arrived at, and you're not interested in counter-arguments anyway, so, have a nice life.


It's not an app I need but in which way is the README a "slop"? It's quite to the point and contains everything one needs to know.


Authentication & Security User Registration & Login - Secure authentication powered by Firebase Auth Remember Me - Stay logged in across app sessions Password Management - Change password functionality with secure re-authentication Profile Management - Update username and view account information

First time I've seen "we let you change your password" advertised as a feature


Too many icons.


Too many notes.. got it.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: