

I’m really not all that familiar with the space, so I could be mistaken. Wikipedia’s definition of AI alignment says an aligned model is one that “advances intended objectives”.

In the paper, “distribution alignment” is one of the methods used to improve the results of compression so that intent is preserved:

> To narrow the gap between the distribution of the LLM and that of the small language model used for prompt compression, here we align the two distributions via instruction tuning

So in any case, for this paper, “alignment” seems to be used in a very specific way that doesn’t seem related to censorship.
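To make the distinction concrete: the “gap” the quoted sentence refers to is a statistical distance between the next-token distributions of the large LLM and the small compressor model, and instruction tuning is used to shrink it. A minimal, purely illustrative sketch (toy numbers and names are my own, not the paper’s code) of measuring such a gap with KL divergence:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two discrete next-token distributions.
    This is the kind of distributional gap that instruction tuning
    is meant to shrink -- illustrative only, not the paper's method."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 3-token vocabulary.
llm_dist   = [0.7, 0.2, 0.1]   # target LLM
small_dist = [0.4, 0.4, 0.2]   # small compressor model, before alignment

print(f"gap before alignment: {kl_divergence(llm_dist, small_dist):.4f}")
print(f"gap if perfectly aligned: {kl_divergence(llm_dist, llm_dist):.4f}")
```

Nothing here involves filtering or refusing content; “alignment” in this sense is just matching one model’s output distribution to another’s.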

Edit: I would love to hear from someone who has a better understanding of the paper to clarify. I am operating from the position of a layman here.


It is common, standard usage precisely in this context.



