
We are getting to a point where AI will be able to construct sound arguments in prose. They will make logical sense. Dismissing them only because of their origin is fallacious thinking.

Conclusion:

Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.

Premises:

1. The soundness of a logical argument is determined by the truth of its premises and the validity of its inferences, not by the identity of the entity presenting it.

2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.

3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.

Assumptions:

1. AI-generated arguments can have true premises and valid inferences

2. The genetic fallacy is a legitimate logical error to avoid

3. Source-based dismissals are categorically inappropriate in logical evaluation

4. AI should be treated as equivalent to any other source when evaluating arguments


I disagree with your first assumption. Well, mostly disagree. Let me explain.

It's true that AI-generated arguments can have true premises and valid inferences, some of the time. But the models are still too likely to hallucinate. Let's say that at some distant future date the hallucination rate gets down to just 10%, so there's a 90% chance that a given argument is well constructed. (I personally doubt LLMs will ever get there as long as they remain purely statistical; I think it will take a model grounded in facts and logical reasoning, rather than in the probability of the next word being "the" or "argument" or "premise", before LLMs can reliably produce reasoning that follows actual logic.)

But here's the thing. When I'm reading an article, I'm not asking "is this 90% likely to be true?" I'm looking for 100%. If a source has a 10% chance of being wrong, I'm going to skip it in favor of a source with a 0% chance of being wrong, or failing that, a 1% chance. Yes, that's a logical fallacy... if my goal were proving the argument wrong. But my goal is different: finding reliable information as quickly as I can. And to that end, the genetic fallacy is genuinely useful to apply. Not as an "it's written by AI, so it's wrong" argument (that would be fallacious indeed), but "it's written by AI, so I'm not going to spend time on it; I'll skip to another article that is less likely to contain hallucinations" is a useful heuristic.
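To put rough numbers on that point (a back-of-the-envelope model of my own, which assumes each claim's correctness is independent): the chance that a many-claim article contains no hallucination at all falls off fast.

```python
def p_all_correct(per_claim_error: float, n_claims: int) -> float:
    """Probability that every claim in a piece is correct,
    assuming errors are independent across claims."""
    return (1 - per_claim_error) ** n_claims

# A "90% reliable" source making 10 independent claims gets
# everything right only about a third of the time.
print(p_all_correct(0.10, 1))   # 0.9
print(p_all_correct(0.10, 10))  # ~0.349
print(p_all_correct(0.01, 10))  # ~0.904
```

Which is why a 10% per-claim error rate is worse than it sounds once you read at volume.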

I've had one too many cases where I asked an LLM, "Can product XYZ do ABC?", it confidently told me "Yes, you can do ABC with XYZ and here's how to do it," then I looked at the actual documentation for XYZ and it specifically said "we can't do ABC; at some future point we plan to add it, and then you will be able to do this: (example code)". And that example code was what the LLM spat out to me saying "Yes, you can do ABC" when the truth was the opposite.

The maxim falsus in uno, falsus in omnibus doesn't really apply to LLMs, because they don't have a moral component to them. It applies to people, because someone whose ethics forbid them to lie is reliable, but someone who is willing to lie about one thing is very likely to be willing to lie about other things, and is therefore unreliable as a source of information. LLMs don't have a sense of morality, and in fact when they hallucinate they're not lying, per se, since lying requires knowing the truth and willingly saying the opposite (as opposed to being mistaken, where you think you're telling the truth even though you're speaking an objective falsehood). LLMs don't know the truth, that's just not a concept programmed into them, so they're not lying, and their willingness to "lie" once does not prove a moral defect. But the fact that they do hallucinate a measurable percentage of the time makes them just as unreliable a source of information as a person who is willing to lie.

So while I do agree that AI-generated arguments can be logically correct, it is not guaranteed that they will be correct. And while it would be fallacious to say "AI-generated, therefore false", it is still useful to say "AI-generated, therefore unreliable and I'll seek out a different source of information".


If the PR had been proposed by a human, but it was 100% identical to the output generated by the bot, would it have been accepted?

I don't know about this PR, but I suspect people have wasted so much time on sloppy generated PRs that they've had to start ignoring them just to have time for real people and real PRs that aren't slop.

Sure, there is a problem with slop AI PRs _now_.

That will not remain true forever.

What happens when the AI PRs aren't slop?


We can stop bothering with open source software completely.

We can just generate anything we want directly into machine code without any libraries.

...and if they commit libel we can "put them in jail", since apparently they can be considered intelligent yet somehow exempt from responsibility.


So when will multiple Waymo cars communicate input data to one another to avoid the blind spots?

This would let each car see things that other cars cannot see on their own.


One way to slightly mitigate the difficulties of nuance in language when translating to formal arguments is to attempt to always steelman the argument. Afford it all the guarded language and nuance you can, and then formalize it into premises and a conclusion.

This would also make interactions much more civil, given the widespread proclivity to do the opposite (straw-manning).

It's not a perfect approach, but it helps. LLMs are quite decent at steelmanning as well, because they can easily pivot language to caveat and decorate with nuance.


A prompt that I like to use for this:

---

Intake the following block of text and then formulate it as a steelmanned deductive argument. Use the format of premises and conclusion. After the argument, list possible fallacies in the argument. DO NOT fact check - simply analyze the logic. do not search.

After the fallacies list, show the following:

1. Evaluate Argument Strength: Assess the strength of each premise and the overall argument.

2. Provide Counterarguments: Suggest possible counterarguments to the premises and conclusion.

3. Highlight Assumptions: Identify any underlying assumptions that need examination.

4. Suggest Improvements: Recommend ways to strengthen the argument's logical structure.

5. Test with Scenarios: Apply the argument to various scenarios to see how it holds up.

6. Analyze Relevance: Check the relevance and connection between each premise and the conclusion.

Format the argument in the following manner:

Premise N: Premise N Text

ETC

Conclusion:

Conclusion text

[The block of text to evaluate]
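If you want to script this rather than paste it by hand, the prompt drops into a simple template. A minimal sketch (the function and variable names here are mine, and the resulting string goes to whatever chat model you use):

```python
# Template for the steelman prompt above; {text} is replaced with the
# block of text to evaluate.
STEELMAN_PROMPT = """\
Intake the following block of text and then formulate it as a steelmanned \
deductive argument. Use the format of premises and conclusion. After the \
argument, list possible fallacies in the argument. DO NOT fact check - \
simply analyze the logic. Do not search.

Format the argument in the following manner:

Premise N: Premise N Text
ETC

Conclusion:
Conclusion text

{text}"""


def build_steelman_prompt(text: str) -> str:
    """Return the full prompt with the text to analyze appended."""
    return STEELMAN_PROMPT.format(text=text)
```

Then send `build_steelman_prompt(article_text)` as a single user message.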


Nice prompt, I've been doing something similar but not this robust. I'll give this a spin.

Thanks again!


Not just chip exports. It also limits model weights.

From the article:

In addition to the semiconductor controls, the new rules also limit the export of closed AI model weights, which are the numerical parameters that software uses to process data and make predictions or decisions.

Companies would be prohibited from hosting powerful closed model weights in Tier 3 countries, like China and Russia, and would have to abide by security standards to host those weights in Tier 2 countries. That means the controls on model weights don’t apply to companies that obtain universal VEU status, one of the people said.

Open weight models — which allow the public to access underlying code — aren’t affected by the rules, nor are closed models that are less powerful than an already-available open model. But if an AI company wants to fine-tune a general-purpose open weight model for a specific purpose, and that process uses a significant amount of computing power, they would need to apply for a US government license to do so in a Tier 2 country.


Steel-manned Deductive Argument

Premise 1: AI (specifically, statistical modeling based on hidden layer neural networks) has been increasingly integrated into various technological products and services.

Premise 2: Major companies like Microsoft, Apple, and Intel are intensifying their efforts in AI development and integration, with Microsoft announcing products like CoPilot+ and Recall.

Premise 3: CoPilot+, which is AI-driven, is built on users’ actual usage data, potentially offering more realistic and user-friendly assistance than previous AI iterations like Microsoft’s Clippy.

Premise 4: Recall, another AI feature by Microsoft, aims to enhance user productivity by automatically capturing and storing screenshots and textual content, but it stores this data unencrypted locally, posing significant privacy risks.

Premise 5: The continuous expansion of AI features in technology products often correlates with increased privacy risks and potential legal issues, as evidenced by the concerns surrounding Recall’s handling of personal data.

Conclusion: The rapid integration of AI into technology products, while intended to enhance functionality and user interaction, simultaneously amplifies privacy and security concerns, necessitating a cautious and regulated approach to AI deployment in consumer technologies.

Possible Fallacies in the Argument

Hasty Generalization: Concluding that all AI integrations pose privacy risks based on specific examples like Microsoft’s Recall might not account for other AI integrations that prioritize security and privacy.

Slippery Slope: The argument implies that increasing AI functionalities will inevitably lead to greater privacy and legal issues, which may not necessarily hold true if appropriate measures are taken.

Appeal to Fear: Highlighting the severe privacy risks and potential legal issues may play on the fears of surveillance and loss of privacy, overshadowing potential benefits of AI.

Biased Sample: The argument focuses mainly on Microsoft’s implementations of AI, which may not represent the broader industry approach to AI integration and its implications.


Is this comment LLM-generated?


In large part, yes


I have found the below to be a good starting point for formulating text into classical formulated arguments.

Intake the following block of text and then formulate it as a steelmanned deductive argument. Use the format of premises and conclusion. After the argument, list possible fallacies in the argument. DO NOT fact check - simply analyze the logic. do not search.

format in the following manner:

Premise N: Premise N Text

ETC

Conclusion:

Conclusion text

Output in English

[the block of text to analyze]


Useful, thanks. Note that the "after the argument, list fallacies" part can be swapped out for other lists.

For example:

1. Evaluate Argument Strength: Assess the strength of each premise and the overall argument. [ChatGPT is an ass kisser so always says "strong"]

2. Provide Counterarguments: Suggest possible counterarguments to the premises and conclusion.

3. Highlight Assumptions: Identify any underlying assumptions that need examination.

4. Suggest Improvements: Recommend ways to strengthen the argument's logical structure.

5. Test with Scenarios: Apply the argument to various scenarios to see how it holds up.

6. Analyze Relevance: Check the relevance and connection between each premise and the conclusion.


Those are good suggestions. I will use some of them!

It is also interesting to go back and forth with the model, asking it to mitigate fallacies listed, and then re-check for fallacies, then mitigate again, etc, etc.

I have found that a workflow using pytube into OpenAI Whisper into the above prompt is a decent way of breaking down a YouTube video into formulated arguments.
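For anyone who wants to reproduce it, the pipeline chains together roughly like this. A sketch only: pytube and openai-whisper both ship command-line tools, but I haven't pinned exact flags, and "video.mp4" is a placeholder since pytube names the download after the video title.

```python
def pipeline_commands(url: str, whisper_model: str = "base") -> list[list[str]]:
    """Shell commands for each stage of the video -> transcript pipeline.
    Run each with subprocess.run(cmd, check=True)."""
    return [
        # 1. Download the video (pytube CLI; actual output filename varies).
        ["pytube", url],
        # 2. Transcribe the audio track to text ("video.mp4" is a placeholder).
        ["whisper", "video.mp4", "--model", whisper_model],
        # 3. Paste the resulting transcript into the steelman prompt above.
    ]
```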


If the nutritional value of a plant's output is governed by its genetics, then this should be a solvable problem.

Using the boosted method, cross a high-nutritional-value plant with a high-output plant.


If only this Dockerfile were real. It would greatly help app developers and publishers:

    FROM apple/mac-os-slim:latest
    ...

Please, Apple, please let your developers use more virtualization or containerization.


