ETH Zürich's position on AI in education and ChatGPT (ethz.ch)
56 points by pandoro on March 30, 2023 | 16 comments


> The result of GPT is highly plausible fiction which oftentimes happens to be factually correct.

Exactly. Short and to the point.

My take: for use cases where mistakes don't matter much, this is actually useful.

Some examples: generated fiction (where mistakes aren't mistakes, just an unexpected variation of the story), story-building for a game, and anywhere you can fix the mistakes yourself (by fact-checking, treating the output as a starting point, or escalating).

Of course this is dangerous if you are not able to catch the mistakes yourself.

My feeling: if consumers are powerless, then AI is absolutely detrimental. They have to swallow the mistakes and, even worse, they might not even realize they're being fed bullshit.

Some people vaguely sense this and are afraid.


> highly plausible fiction which oftentimes happens to be factually correct

Isn't this also a way to describe what the press gives us every day? I think the dangers you describe existed long before GPT.


Of course. But it's unfair to compare the press and AI. The press just gives you text.


> But it's unfair to compare the press and AI. The press just gives you text.

The press I know has a lot of pictures. And the comparison isn't unfair at all. Both are parrots, repeating and recombining information from somewhere else, or just inventing new noises when there's nothing to copy.


Well, paper exams remain unaffected, and even remote students can sit them proctored at a university. Seems like a quick fix: lower the weight of homework, raise the weight of exams, and go paper-only. Cheat on the homework all you want; it won't help you pass the class.


> First of all, there is every indication that the majority of our students are honest.

Hah!


Indeed. The issue is the arms race, not anyone's quality of character. If everybody around you is getting better grades by leveraging a tool in a moral grey area, you can either stick to your moral standards and potentially finish last in your class, or jaywalk like everybody else and fulfill the curse of the Red Queen. At least you get to stay in the game.


This essay by ETH Zurich couldn't have been written by ChatGPT. Think about that. There's a soul to it: an argument, a position, a certain edge, a purpose. I never get that sense when reading ChatGPT's output.

"Plausible fiction" is fabulous framing. It succinctly captures what annoys me about so much prose on the internet. And it should surprise no one: ChatGPT is trained on the internet.


> much more efficient and save choice to copy from fellow students

s/save/safe/


> which is almost grammatically correct, but non- sensical

maybe they are testing us


Whoever wrote this is a native German speaker.


They made a good point about the current trend of artificial stupidity.


Two thoughts on this:

1. Someone could make a shitload of money building a tool for teaching prompt engineering of LLMs in a formal course setting. It would be better than most of the GPT grifts currently circulating.

2. The ETHZ writer, while having a generally good grasp, seems not to understand the concept of Attention here, or that you can do additional training on these models (even just RLHF). I don't think I understand RLHF, Attention, or extra training (new NN layers, right?) well enough to comment further.
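For anyone else in the same boat, the core of Attention fits in a few lines. Here is a minimal numpy sketch of scaled dot-product attention, the building block from "Attention Is All You Need"; real transformers add learned Q/K/V projections, multiple heads, and masking on top of this:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Every query scores every key; the output is a weighted
        average of the value rows, weights given by a softmax."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                              # (n_q, d_v)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))                          # 3 tokens, d=4
    print(scaled_dot_product_attention(x, x, x).shape)   # self-attention -> (3, 4)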


There are already sites online that enable students to cheat. There are people who will churn out high-quality essays for a very low price. Is there a place in that market for lower-quality essays at an even lower price?


Like an "A/B test prompts and compare the outputs" type of learning?


Something like that. Since interacting with the model is the best way to learn how to use it, a good course could just put that experience on rails, directing the student's learning toward the most effective ways to use the chatbot.
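For instance, a hypothetical sketch of one such exercise; query_model here is a stand-in for whatever chat API the course would wrap, not a real library call:

    def ab_compare(query_model, task, prompt_a, prompt_b):
        """Run the same task through two prompt templates so the
        student can compare the outputs side by side."""
        for label, template in (("A", prompt_a), ("B", prompt_b)):
            print(f"--- prompt {label} ---")
            print(query_model(template.format(task=task)))

    ab_compare(
        query_model=lambda p: f"[model output for: {p!r}]",  # stub for the demo
        task="Summarize the causes of the 2008 financial crisis.",
        prompt_a="{task}",
        prompt_b="{task} Answer in three bullets and cite one source per bullet.",
    )

The course would supply the prompt pairs, so each exercise highlights one lesson (vague vs. constrained instructions, with vs. without examples, and so on).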



