Fun thing about this page: I have Gemini in the browser, and when I asked it "why is the entire Wall Family naming these things?" it said it couldn't engage. Turns out 'goatse' is a forbidden word to Gemini.
I recently read that 'in-thread' ads, like on Twitter, are not very effective unless they are 'brand recognition' ads. They will help you decide which one to pick when you are staring at two fungible brands on a shelf, but they will not convince you to buy something you have never heard of before, and especially not through a direct click-through. So while "ads work" is true in many ways, they don't work in many others. The brand damage you can get from those in-thread ads is also real: ads target the user, not the thread, but by showing up in it, advertisers get associated with the thread. If you were in some argument about dictators taking over and a product suddenly popped up, you might assign the negative energy you have toward dictators to that brand as well.
Grok is a hosted service. In your analogy, it would be like a gun shop renting a gun out to someone who puts down "Rob a store" as the intended usage of the rental. Then renting another gun to that same client. Then when confronted, telling people "I'm not responsible for what people do with the guns they rent from me".
It's not a personal tool that the company has no control over. It's a service they are actively providing and administering.
I think a better analogy would be going into a gun shop and paying the owner to shoot someone. They're asking Grok to undress people and it's just doing it.
Would you blame only the users of a murder-for-hire service? Sure, they are to blame, but the murder-for-hire service would seem equally culpable.
Somehow I doubt it. Getting such an email from a human is one thing, because humans actually feel gratitude. I don't think LLMs feel gratitude, so seeing them express gratitude is creepy and makes me question the motives of the people running the experiment (though it does sound like an interesting experiment; I'm going to read more about it).
Not a PR stunt. It's an experiment of letting models run wild and form their own mini-society. There really wasn't any human involved in sending this email, and nobody really has anything to gain from this.
I am unmoved by his little diatribe. What sort of compensation was he looking for, exactly, and under what auspices? Is there some language creator payout somewhere for people who invent them?
'Defect' only applies to prisoner's-dilemma-type problems. That is just one very limited class of problem, and I would argue it's not very relevant to discussing AI inevitability.
Qualia may not exist as such. They could just be essentially 'names' for states of neurons that we mix and match (like chords on a keyboard: arguing over the 'redness' of a percept is like arguing about the C-sharpness of a chord; we can talk about some frequencies, but that's it). We would have no way of knowing otherwise, since we only perceive the output of our neural processes, and don't get to participate in the construction of these outputs, nor sense them happening. We just 'know' they are happening when we achieve those neural states, and we identify those states relative to the others.
The point of qualia is that we seem to agree that these certain neuronal states "feel" like something. That being alive and conscious is an experience. Yes, it's exceedingly likely that all of the necessary components for "feeling" something are encoded right in the neuronal state. But we still need a framework for asking questions such as, "Does your red look the same as my red?" and "Why do I experience sensation, sometimes physical in nature, when I am depressed?"
It is absolutely an ill-defined concept, but it's another blunt tool in our toolbox that we use to better explore the world. Sometimes, our observations lead to better tools, and "artificial" intelligence is a fantastic sandbox for exploring these ideas. I'm glad that this discussion is taking place.
Empirical evidence, for one. And the existence of fine-tuning, which allows you to artificially influence how a model responds to questions. This means we can't just ask an LLM, "do you see red?" I can't really even ask you that. I just know that I see red, and that many other philosophers and scientists in the past seem to agree with my experience, and that it's a deep, deep discussion which only shallow spectators are currently drawing hard conclusions from.
Language isn't meaningless just because it evolves. I literally cannot help it if people ignore history and start using words in new ways. Those who take precision seriously will adapt and adopt new words if needed.
You said, "So then your opinion is about as meaningful as theirs", after I said I can't help it if people use words the wrong way and language evolves... I don't think this is worth continuing. Have a good day.
I discovered this a few years ago when someone who didn't understand semver was trying to do a Rails version upgrade for us. They were practically throwing things by the time I got there and explained that lexicographic comparison of the version strings would not work. I was about to write my own class for it, but then I realized that since Bundler knew how to resolve dependencies, we should see what it uses. The rest is history!
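A minimal illustration of the difference (plain Ruby; Gem::Version ships with RubyGems, so no extra dependency is needed):

    "9.0.0" > "10.0.0"                                            # => true; lexicographic, and wrong
    Gem::Version.new("9.0.0") > Gem::Version.new("10.0.0")        # => false; segments compared numerically
    Gem::Version.new("8.0.0.beta1") < Gem::Version.new("8.0.0")   # => true; prereleases sort before releases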
I use it quite a bit when I have to monkeypatch a gem to backport a fix while I wait for a release:
    raise "check if monkeypatch in #{__FILE__} is still needed" if Gem::Version.new(Rails.version) >= Gem::Version.new("8.0.0")
This will blow up immediately when the gem gets upgraded, so we can see whether we still need the patch, instead of it lying in wait to cause a subtle bug in the future.
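For context, a sketch of how the whole pattern might look in a Rails initializer; SomeGem, SomeGem::Client, and #broken_method are hypothetical stand-ins, not a real gem's API:

    # config/initializers/some_gem_patch.rb
    # Hypothetical backport of an upstream fix; delete once the fixed version ships.
    raise "check if monkeypatch in #{__FILE__} is still needed" if
      Gem::Version.new(SomeGem::VERSION) >= Gem::Version.new("2.5.0")

    module SomeGemPatch
      def broken_method(arg)
        # ...backported fix goes here, falling back to the original...
        super
      end
    end

    SomeGem::Client.prepend(SomeGemPatch)

Using Module#prepend puts the patch ahead of the gem's own method in the lookup chain, so `super` still reaches the original implementation.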
The question is: why don't the official documentation and guides mention this?
Answer: because documentation is something the Ruby core team does not want to think about. It uses scary language, after all: English, the bane of most Japanese developers. Plus, it is well-documented in Japanese already... :>