I have been thinking about trying LoRA-style fine-tuning of Falcon-40b or Falcon-7b on RunPod. The new OpenAI 16k context and function calling made me lose the urge to get into that. It was questionable whether it could really write code consistently anyway, even if very well fine-tuned.

But at least that is something that can be attempted without $25k.
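The reason it's affordable is that LoRA freezes the pretrained weights and only trains a low-rank update. Here's a minimal NumPy sketch of the idea (illustrative shapes, not the actual Falcon dimensions; in practice you'd use a library like peft rather than rolling this yourself):

```python
import numpy as np

# Illustrative sizes: a square weight matrix and a small LoRA rank
d, k, r = 4096, 4096, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable, zero-init so training starts at W
alpha = 16                               # scaling hyperparameter

def forward(x):
    # LoRA forward pass: the effective weight is W plus the
    # low-rank update B @ A, scaled by alpha / r
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

For this layer only ~0.39% of the parameters are trainable, which is why a single rented GPU can handle it where full fine-tuning can't.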


