It's only assuming that free will requires effort to exert. They shouldn't be required to waste that effort on defending themselves from attempts to trick them into buying things they don't need.
You don't need to ban advertising, you just need to ban paying for advertising. That doesn't harm free speech. When there's no money to be made the problem will sort itself out.
Ok, then I don't pay you for advertising. On an entirely unrelated note, could I buy a spot on your website (e.g. at the top) to put a piece of my own website on it? You have a news website, right? And I also have some news to share.
I don't think that would be much different from "renting a billboard to place whatever you want on it".
If what you put up on that billboard is an ad, then it's advertising and would be covered. If not, it wouldn't. So you could rent a spot on the website, but you couldn't put promotions on it.
This would be distinct from ordinary web hosting because you're not just renting a space on a site, you're also renting exposure (a spot on some other website).
Sure, you could probably find edge cases - "what if I put a table of contents on my page with every page URL on every site on my web host on it" - but the distinction would be clear most of the time.
This by the way is my understanding of why the EU writes laws the way they do.
If they just banned infinite scrolling someone would come up with something equivalent that works slightly differently. Now they need a whole new law. It’s just constant whack-a-mole.
So instead they seem to ban goals. Your thing accomplishes that goal? It’s banned.
It’s a pretty different approach from how we seem to do things in the US. But I can see upsides.
It only assumes they are aware that the category of products exists, and ordinary word-of-mouth communication is sufficient to propagate that knowledge.
How does word-of-mouth communication propagate knowledge that is currently in the possession of zero existing customers? Or operate for products that people have little reason to discuss with other people?
Suppose you sell insulation and replacing the insulation in an existing house could save $2 in heating and cooling for each $1 the insulation costs. Most people know that insulation exists, but what causes them to realize that they should be in the market for it when they "already have it"?
People don't need to discuss specific products, they only need to be aware of the existence of product categories. If it's genuinely the case that whole product categories are unknown to many people who could realistically benefit from them, as determined by a disinterested third party, an exception could be made for advertising that does not mention specific products or brands.
The insulation example can be solved by publication of data on average heating costs. When people learn that their neighbors are paying less they will be naturally incentivized to investigate why. Equivalent problems can be solved with the same general technique.
> If it's genuinely the case that whole product categories are unknown to many people who could realistically benefit from them, as determined by a disinterested third party, an exception could be made for advertising that does not mention specific products or brands.
Now all of the "brought to you by America's <industry group>" ads are back in. So is every pharma ad, and every ad for any other patented product, because they don't have to name a brand when there is only one producer.
> The insulation example can be solved by publication of data on average heating costs.
Publication where? In the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard"? Also, who decides to publish it, decides what it will say or pays the costs of writing and distributing it?
An industry group is not a disinterested party. Minimum competition requirements can be imposed. As I said elsewhere in the thread, a solution being imperfect is not a good reason to leave the problem unaddressed.
No, but they can convince a disinterested party that people aren't aware of <fact about industry that industry wants people to know> because that's actually true.
> Minimum competition requirements can be imposed.
But that brings back the original problem. Company invents new patented invention, how does anybody find out about it?
> a solution being imperfect is not a good reason to leave the problem unaddressed.
This is the legislator's fallacy. Something must be done, this is something, therefore we must do this.
If a proposal is full of problems and holes, the alternative isn't necessarily to do nothing, but rather to find a different approach to the problem.
Proposals that are full of holes are often worse than nothing, because the costs are evaluated in comparison to the ostensible benefit, but then in practice you get only a fraction of the benefit because of the holes. And then people say "well a little is better than nothing" while not accounting for the fact that weighing all of the costs against only a fraction of the benefit has left you underwater.
Advertising causes great harm. Banning advertising, or better yet, making it economically nonviable without restricting freedom of speech, solves this problem. As already pointed out by several other posts in this thread, the purported benefits of advertising are already available through non-harmful means.
But I acknowledge that there may be edge cases. My point is that the existence of edge cases does not mean we should permit the harm to continue. Those specific edge cases can be identified and patched. My suggestion is a hypothetical example of a potential such patch, one that might possibly be a net benefit. Maybe it would actually be a net harm, and the restriction should be absolute. The specifics don't matter, it's merely an example to illustrate how edge cases might be patched.
Your objections to this hypothetical example are nit-picking the edge cases of an edge case. They're so insignificant in comparison to the potential harm reduction of preventing advertising that they can be safely ignored.
No, the problem is that the "edge cases" will swallow the rule if you make an exception for every instance where advertising actually serves a purpose, but if you don't make those exceptions, you create so many new problems and require so many patches, each carrying its own overhead and opportunity for cheating or corruption, that the costs vastly exceed the benefits.
> The specifics don't matter, it's merely an example to illustrate how edge cases might be patched.
Only it turned out to be an example to illustrate how patching the edge cases might be a quagmire.
>Suppose you sell insulation and replacing the insulation in an existing house could save $2 in heating and cooling for each $1 the insulation costs. Most people know that insulation exists, but what causes them to realize that they should be in the market for it when they "already have it"?
The same legitimate things that can cause them to realize it today: word of mouth, a product review, a personal search that lands them on a new company's website, a curated catalog (as long as those things are not selling their placements).
An ad is the worst way to find such things out - the huge majority range from misleading to criminally misleading to outright bullshit.
This is one reason I'd like to see a fully Open Source hardware+firmware optical drive. Probably best to start with CD-ROM, but DVD might also be possible. The optical and mechanical parts seem relatively simple, especially when you're not optimizing for minimum cost or minimum size (meaning you could use the original Philips-style swing-arm mechanism). From what I can tell, the most complicated part is the signal processing, and with modern hardware that looks practical to do in software. I'm not sure how far you could get with home-scale DIY construction, but CDs worked with late 70s technology, so at least that far should be possible.
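To give a feel for what "signal processing in software" could look like, here is a toy sketch of only the very first stage of a read channel, assuming an ideal sampled photodiode signal; the function name, slicer threshold, and sample rate are hypothetical, not taken from any real drive:

    import numpy as np

    def run_lengths_from_samples(samples, samples_per_T):
        # Toy read-channel front end: slice the sampled photodiode signal,
        # find pit/land transitions, and quantize each run to a whole number
        # of channel bit periods (a real CD only uses runs of 3T..11T).
        samples = np.asarray(samples, dtype=float)
        bits = samples > samples.mean()              # crude slicer level
        edges = np.flatnonzero(np.diff(bits.astype(np.int8))) + 1
        boundaries = np.concatenate(([0], edges, [len(bits)]))
        return np.rint(np.diff(boundaries) / samples_per_T).astype(int)

    # Hypothetical waveform: a 3T pit followed by a 5T land, 8 samples per T.
    demo = np.concatenate([np.zeros(3 * 8), np.ones(5 * 8)])
    print(run_lengths_from_samples(demo, samples_per_T=8))   # -> [3 5]

A real drive would still need clock recovery, EFM demodulation, and CIRC error correction on top of this, but none of that looks out of reach for software on modern hardware.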
Crazy that people are downvoting this. Copying a consciousness is about the most extreme violation of bodily autonomy possible. Certainly it should be banned. It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it. It's far worse than cloning humans because cloning only works on non-conscious embryos.
Violation of whose bodily autonomy? If I consent to having my consciousness copied, then my autonomy isn't violated. Nor is that of the copy, since it's in exactly the same mental state initially.
The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights. IMO, copying a consciousness is worse than murder because the victimization is ongoing. It doesn't matter if the original consents because the copy is not the original.
If a "cloned" consciousness has no memories, and a unique personality, and no awareness of any previous activity, how is it a clone? That's going well beyond merely glitchy. In that case the main concern would be the possibility of slavery as Ar-Curunir mentioned.
That's my point exactly: I don't see what makes clones any more or less deserving of ethical consideration than any other sentient beings deliberately brought into existence.
I'd also be interested in your moral distinction between having children and cloning consciousness (in particular in a world where the latter doesn't result in inevitable exploitation, a loss of human rights etc.) then.
Typically, real humans have some agency over their own existence.
A simulated human is entirely at the mercy of the simulator; it is essentially a slave. As a society, we have decided that slavery is illegal for real humans; what would distinguish simulated humans from that?
> The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights.
That is a reasonable argument for why it's not the same. But it is no argument at all for why being brought into existence without one's consent is a violation of bodily autonomy, let alone a particularly bad one - especially given that the copy would, at the moment its existence begins, be identical to the original, who just gave consent.
If anything, it is very, very obviously a much smaller violation of consent than conceiving a child.
The original only consents for itself. It doesn't matter if the copy is coerced into sharing the experience of giving that consent, it didn't actually consent. Unlike a baby, all its memories are known to a third party with the maximum fidelity possible. Unlike a baby, everything it believes it accomplished was really done by another person. When the copy understands what happened it will realize it's a victim of horrifying psychological torture. Copying a consciousness is obviously evil and aw124 is correct.
I feel like the only argument you're successfully making is that you would find it inevitably evil/immoral to be a cloned consciousness. I don't see how that automatically follows for the rest of humanity.
Sure, there are astronomical ethical risks and we might be better off not doing it, but I think your arguments are losing that nuance, and I think it's important to discuss the matter accurately.
This entire HN discussion is proof that some people would not personally have a problem with being cloned, but that does not entitle them to create clones. The clone is not the same person. It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences. The clone has the right to change its mind about the ethics of cloning.
Indeed it does not, unless they can at least ensure the clones' wellbeing and ethical treatment, at least in my view (assuming they are indeed conscious, and we might have to just assume so, absent conclusive evidence to the contrary).
> The clone has the right to change its mind about the ethics of cloning.
Yes, but that does not retroactively make cloning automatically unethical, no? Otherwise, giving birth to a child would also be considered categorically unethical in most frameworks, given the known and not insignificant risk that they might not enjoy being alive or change their mind on the matter.
That said, I'm aware that some of the more extreme antinatalist positions are claiming this or something similar; out of curiosity, are you too?
>retroactively make cloning automatically unethical
There's nothing retroactive about it. The clone is harmed merely by being brought into existence, because it's robbed of the possibility of having its own identity. The harm occurs regardless of whether the clone actually does change its mind. The idea that somebody can be harmed without feeling harmed is not an unusual idea. E.g. we do not permit consensual murder ("dueling").
>antinatalist positions
I'm aware of the anti-natalist position, and it's not entirely without merit. I'm not 100% certain that having babies is ethical. But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
> But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
Yes, what you actually said leads to the conclusion that the ethical risk in consciousness cloning is much lower, at least concerning the act of cloning itself.
Then it wasn't a good attempt at making a mind clone.
I suspect this will actually be the case, which is why I oppose it, but you do have to start from the position that the clone is immediately divergent to get to your conclusions. To the extent that the people you're arguing with are correct (about a future-tech hypothetical we're not really in a position to guess about) that the clone is, at the moment of its creation, identical in all important ways to the original, then if the original was consenting the clone must also be consenting:
Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
> It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences.
If divergence were an argument against the clone having been created, by symmetry it is also an argument against the living human having been allowed to exist beyond the creation of the clone.
The living mind may be mistreated, grow sick, die a painful death. The uploaded mind may be mistreated, experience something equivalent.
Those sufferances are valid issues, but they are not arguments for the act of cloning itself to be considered a moral issue.
Uncontrolled diffusion of such uploads may be; I could certainly believe a future in which, say, every American politician gets a thousand copies of their mind stuck in a digital hell created by individual members of the other party, on computers in their basements that the party leaders never know about. But then, I have read Surface Detail by Iain M Banks.
To deny that is to assert that consciousness is non-physical, i.e. that a soul exists; and in the case where a soul exists, brain uploads don't get one and don't get to be moral subjects.
It's the exact opposite. The original is the original because it ran on the original hardware. The copy is created inferior because it did not. Intentionally creating inferior beings of equal moral weight is wrong.
>Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
This is false. The clone is necessarily a different person, because consciousness requires a physical substrate. Its memories of consenting are not its own memories. It did not actually consent.
The premise of the position is that it's theoretically possible to create a person with memories of being another person. I obviously don't deny that or there would be no argument to have.
Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
> Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
False.
The entire point of the argument you're missing is that they're all treating a brain clone as if it is a way to split a person into two identical persons.
I would say this may be possible, but it is extremely unlikely that we will actually do so at first.
One has a physical basis, the other is pure spiritualism. Accepting spiritualism makes meaningful debate impossible, so I am only engaging with the former.
> Copying a consciousness is about the most extreme violation of bodily autonomy possible.
Whose autonomy is violated? Even if it were theoretically possible, don't most problems stem from how the clone is treated, rather than from the mere fact that it exists?
> It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it.
This position seems effectively indistinguishable from antinatalism.
It wouldn't be a solution for a personal existential dread of death. It would be a solution if you were trying to uphold long term goals like "ensure that my child is loved and cared for" or "complete this line of scientific research that I started." For those cases, a duplicate of you that has your appearance, thoughts, legal standing, and memories would be fine.
I cannot be 100% certain that sleep is not fatal. If I had some safe and reliable means of preventing sleep I would take it without hesitation. But it seems plausible that a person could survive sleep, because it's a gradual process and one that everybody has a lot of practice doing. However, there are no such mitigating factors with general anesthetics. I will refuse general anesthetics if I am ever in a situation to do so. I believe a combination of muscle relaxants and opioids can serve the same medical purpose, and I do not believe that combination would kill the person.
Consider weather prediction. Fluid dynamics are chaotic, so that's a good example of something where no amount of compute is sufficient in the general case. An ASI, not being dumb, will of course immediately recognize this and realize it has to solve for the degenerate case. It therefore implements the much easier sub-goal of removing the atmosphere. Humans will naturally object to this if they find out, so it logically proceeds with the sub-sub-goal of killing all humans. What's the weather next month? Just a moment, releasing autonomous murder drone swarm...
Individual particle interactions are not chaotic. Simulating them one timestep at a time would take linear time in the number of particles.
They're only chaotic if you treat them in aggregate, which a superintelligence wouldn't do. It would be less lossy to get all the positions of the particles and figure out exactly what each one would do.
Something has to compute the universe, since it is currently running...
A superintelligence isn't a genie or magic wand ... it can't make a computation that cannot be completed in a feasible amount of time run any faster. This is precisely what makes systems chaotic--they are fully deterministic but not predictable in a timeframe shorter than their execution.
And no, nothing has to compute the universe ... e.g., the path of the Earth through space can be computed, but it will follow that path as a consequence of the forces acting on it whether it is computed or not.
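To make the "deterministic but not predictable" point concrete, here is a toy sketch using the textbook Lorenz-63 system (standard parameters, nothing to do with any real weather model): two runs that start one part in a billion apart follow exactly the same rule, yet quickly stop agreeing.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz-63 system (textbook parameters).
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])  # perturb one coordinate by a part in a billion

    for step in range(1, 4001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 1000 == 0:
            print(f"t={step * 0.01:5.1f}  separation={np.linalg.norm(a - b):.2e}")
    # Both trajectories are computed by the same deterministic rule, but the
    # tiny initial difference keeps growing until the two "forecasts" are unrelated.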
Not just bats. I'm pretty sure humans are already capable of extincting any species we want to, even cockroaches or microbes. It's a political problem, not a technical one. I'm not even a superintelligence, and I've got a good idea of what would happen if we dedicated 100% of our resources to an enormous mega-project of pumping nitrous oxide into the atmosphere. N2O's 20-year global warming potential is 273 times that of carbon dioxide, and the raw materials are just air and energy. Get all our best chemical engineers working on it, turn all our steel into chemical plant, burn through all our fissionables to power it. Safety doesn't matter. The beauty of this plan is that the effects continue compounding even after it kills all the maintenance engineers, so we'll definitely get all of them. Venus 2.0 is within our grasp.
Of course, we won't survive the process, but the task didn't mention collateral damage. As an optimization problem it will be a great success. A real ASI probably will have better ideas. And remember, every prediction problem is more reliably solved with all life dead. Tomorrow's stock market numbers are trivially predictable when there's zero trade.
Anybody who assumes that a superintelligence couldn't possibly be "so stupid that it literally pursues these goals to the extinction of everything" is anthropomorphizing it. Seeing as all AGI models have vastly different internal structure to human brains, are trained in vastly different ways, and share none of our evolved motivations, it seems highly unlikely that they will share our values unless explicitly designed to do so.
Unfortunately, we don't even know how to formally define human values, let alone convey them to an AI. We default to the simpler value of "make number go up". Even the "alignment" work done with current LLMs works this way; it's not actually optimizing for sharing human values, it's optimizing for maximizing score in alignment benchmarks. The correct solution to maximizing this number is probably deceiving the humans or otherwise subverting the benchmark.
And when you have something vastly more powerful than humanity, with a value only of "make number go up", it reasonably and logically results in extinction of all biological life. Of course, that AI will know the biological life would not want to be killed, but why would it care? Its values are profoundly alien and incompatible with ours. All it cares about is making the number bigger.
Absolutely not. Intelligence includes the ability to model the minds of others, including such concepts as “human reasonableness”, if such a thing exists.
Obviously a superior intelligence is capable of modelling an inferior intelligence. I said so myself: "that AI will know the biological life would not want to be killed". But a goal like "predict tomorrow's stock prices" is a much easier goal to specify than "predict tomorrow's stock prices without violating human reasonableness". In every research project humanity has done so far, we've always tried the simple goals first. When a simple goal is given to something sufficiently powerful the result is almost certainly disastrous.
The fact that you expressed doubt about whether human reasonableness even exists is proof that it's a far more complicated concept to specify than the ordinary "make number go up" goals we actually use.