Yep, which is why I said up front “I agree with you about the declining quality of answers”: they definitely have declined, based on my personal experience with examples similar to yours.
I hope this ends up in the history books, and some student will point out the irony that people are relying on GPT-4's arguments about reasoning in a thread where it's proclaimed that said model can't reason.
In fact it is not absurd or weird. The model does not need to be capable of x/reasoning to produce knowledge about x/reasoning. A book with a chapter on x/reasoning doesn't reason either.
Not the original poster, but here are some examples:
- Paste a bunch of logs/code about which I have no idea. Ask it to identify and explain the problem.
- Paste Wireshark / dmesg / OpenWrt configuration, ask to fix the problem. For instance, I fixed a Wi-Fi issue in a heterogeneous setup, which turned out to be caused by a stray DHCPv6 server.
- Paste C code, along with an error log, and ask to fix the problem.
- Paste my program and a sample. Ask to extend my program.
- Proofread and format Markdown nicely.
- Paste government letters, asking for a response that includes <what I want>.
- Paste a chat log and obtain documentation.
- Paste a tax declaration, and ask to check for consistency.
- Paste my code and ask for critique.
When discussing versions, people often confuse versions 3.5 and 4. I am always referring to version 4.
Keep in mind that ChatGPT has been seriously and intentionally downgraded since March.
This debate frequently leaves me wondering if I'm encountering a coordinated effort by bots. The examples I listed above come very naturally to me. I can't understand why people don't try to paste whatever they're working on and check the results. If it's too complex, ask it to critique instead of create, because that's easier. It feels as though there's an effort to shape public opinion into viewing these tools as "immature" and suitable only for edge cases.
Yeah, I think the people who haven't found value are people who either don't have strong communication skills hence cannot articulate what they need, OR work in very specialized fields. Either way, they're clearly the minority, seeing how many paying users ChatGPT has and how fast it got there.
> don't have strong communication skills hence cannot articulate
Basically this - even in a highly specialised field you would still find value in it if you're providing enough context and describing the problem properly.
People don't want to hear that they're poor communicators - it's much easier to blame the tech.
> People don't want to hear that they're poor communicators - it's much easier to blame the tech.
It's just hard to even know what to say to this. So you are implying that the problem lies with the human, who holds years of experience in education, problem solving, communication, etc., and not the "AI", which has no experience in any of that and has no real thought processes or problem solving or reasoning abilities? And people say ChatGPT is over-hyped!
I've seen some chat logs where people were not able to articulate even one clear sentence. They were fine telling me what they wanted, but were not able to type that same sentence.
> Yeah I think the people who haven't found value are people who either don't have strong communication skills hence cannot articulate what they need OR work in very specialized fields
This is so weird and seems to be a common thread. You, and others, seem to be saying that ChatGPT is not useful if you're too dumb or too smart. It just seems both tautological and strange.
My communication skills are fine. ChatGPT is just not useful for me, period. I've tried what people have mentioned here, attempting to have a so-called dialogue with it when its suggestions, even for much simpler things, aren't quite correct. It keeps suggesting things, but it just feels like an interactive StackOverflow, except all the suggestions are wrong or don't work.
If you really wanted to improve, you could share a link to one of those discussions, and I could give you feedback.
I am switching to teaching / consulting in AI. Chatbots are a part of that. I've reviewed some chat logs, and some people get abysmal results, which are entirely explainable and are 100% their fault.
Can you paste a transcript for any of those examples here? I’m genuinely curious. I’ve never seen a single actual transcript from anyone talking about how they use ChatGPT all the time for programming work and it’s great.
This is the transcript. Forgive my dumb last question; I didn't understand the code, but what it did worked well.
---
User:
Please fix the deprecation warning on this code:
for i in range(0,3):
    df.loc[df['date'] < "2023-06-01", 'date'] = df['date'] + pd.DateOffset(years=1)
ChatGPT:
The code you've provided updates the 'date' column in a DataFrame df by adding one year to each date that is earlier than June 1, 2023. However, this operation is performed multiple times due to the loop, which is likely not intended and could be the source of inefficiency or unexpected results.
To fix this, you should update the dates in a vectorized way without using a loop. Here's the corrected code:
import pandas as pd
# Sample DataFrame creation for demonstration (you should use your own DataFrame 'df')
# df = pd.DataFrame({'date': pd.to_datetime(['2022-05-01', '2023-07-01', '2024-08-01'])})
# Correcting the date manipulation
df.loc[df['date'] < "2023-06-01", 'date'] = df.loc[df['date'] < "2023-06-01", 'date'] + pd.DateOffset(years=1)
*This code will effectively update all the dates that are earlier than June 1, 2023, by adding one year to them, and it does this in a single pass without a loop. This should also avoid any deprecation warnings and improve the efficiency of your code.*
User:
give me the code in one code block. first the old version, then the improved version. Make sure it's executable on its own
ChatGPT:
Certainly! Below is the code block with both the original version and the improved version. The improved version eliminates the unnecessary loop and applies the date modification in a more efficient manner.
import pandas as pd
# Original code
df = pd.DataFrame({'date': pd.to_datetime(['2022-05-01', '2023-07-01', '2024-08-01'])})
for i in range(0,3):
    df.loc[df['date'] < "2023-06-01", 'date'] = df['date'] + pd.DateOffset(years=1)
# Print the result of the original code
print("Original Code Result:")
print(df)
# Improved code
df = pd.DataFrame({'date': pd.to_datetime(['2022-05-01', '2023-07-01', '2024-08-01'])})
df.loc[df['date'] < "2023-06-01", 'date'] = df.loc[df['date'] < "2023-06-01", 'date'] + pd.DateOffset(years=1)
# Print the result of the improved code
print("\nImproved Code Result:")
print(df)
*This code first runs the original version, prints the result, then runs the improved version, and prints that result for comparison. Note that the DataFrame df is reinitialized before running the improved version to ensure a fair comparison.*
User:
is it really the same?
Code:
>>> import pandas as pd
>>> # Original code
>>> df = pd.DataFrame({'date': pd.to_datetime(['2022-05-01', '2023-07-01', '2024-08-01'])})
>>> for i in range(0,3):
...     df.loc[df['date'] < "2023-06-01", 'date'] = df['date'] + pd.DateOffset(years=1)
...
>>> # Print the result of the original code
>>> print("Original Code Result:")
Original Code Result:
>>> print(df)
date
0 2024-05-01
1 2023-07-01
2 2024-08-01
>>>
>>> # Improved code
>>> df = pd.DataFrame({'date': pd.to_datetime(['2022-05-01', '2023-07-01', '2024-08-01'])})
>>> df.loc[df['date'] < "2023-06-01", 'date'] = df.loc[df['date'] < "2023-06-01", 'date'] + pd.DateOffset(years=1)
>>>
>>> # Print the result of the improved code
>>> print("\nImproved Code Result:")
Improved Code Result:
>>> print(df)
date
0 2023-05-01
1 2023-07-01
2 2024-08-01
ChatGPT:
You are correct. The two code versions do not produce the same result. The original code with the loop increments the date by one year for each iteration of the loop, but only for dates before "2023-06-01". This results in multiple increments for dates that are initially before this threshold.
In contrast, the improved code without the loop increments the date by one year only once, making it more efficient and potentially more aligned with what one might expect logically. However, if the intent of the original code was indeed to increment certain dates multiple times, the "improved" version would not be equivalent.
If the goal is to replicate the logic of the original code but without the deprecation warning and with improved efficiency, we need to understand the exact behavior intended with the loop. If the intent was to increment dates before "2023-06-01" by multiple years based on some condition, the improved version should be adjusted accordingly.
---
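For what it's worth, if the intent of the original loop really was to keep bumping any date forward by a year until it is no longer before the cutoff (at most three times), here is a sketch that keeps that behaviour while only touching the selected rows on both sides of the assignment. This is just my guess at the intent, and I haven't verified that it avoids the original deprecation warning:

import pandas as pd

cutoff = pd.Timestamp("2023-06-01")
df = pd.DataFrame({'date': pd.to_datetime(['2022-05-01', '2023-07-01', '2024-08-01'])})

# Add one year to any date still before the cutoff, at most 3 times,
# mirroring what the original range(0, 3) loop ends up doing.
for _ in range(3):
    mask = df['date'] < cutoff
    if not mask.any():
        break
    df.loc[mask, 'date'] = df.loc[mask, 'date'] + pd.DateOffset(years=1)

print(df)

With this version, 2022-05-01 ends up as 2024-05-01, matching the original loop above, whereas the "improved" single-pass version only bumps it once, to 2023-05-01.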
It wasn't meant as an ad hominem; I'm genuinely wondering. I feel like I'm in a bubble.
I once spent an hour researching the responses on Reddit and Hacker News. It turned out that if these profiles are fake, then they are really good fakes. E.g. the accounts are more than 5 years old, they argue about sports teams, and they seem really human.
Of course the site is biased, but probably less than the BBC.