It hasn't been actively maintained, but it is still a good crate. It also has good documentation, unlike other similar crates. I loved this community tutorial: https://github.com/sidwellr/schotter
This is an advertisement for a 'tenuo warrant'. So, I read its documentation [0]. Put simply, it works like this:
1. A person orders an AI agent to do A.
2. The agent issues a tenuo warrant for doing A.
3. The agent can now only use the tool to perform A.
The article's point is that the warrant can then be used in case of an incident, because it contains information such as 'who ordered the task' and 'what authority was given'.
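Roughly, I'd picture the warrant as a small signed record. A hypothetical sketch in Python (field names are my guesses, not tenuo's actual schema):

    # Hypothetical warrant record; field names are guesses, not tenuo's schema.
    from dataclasses import dataclass, field
    import time

    @dataclass
    class Warrant:
        principal: str          # who ordered the task, e.g. "alice@example.com"
        agent_key: str          # key of the agent this warrant is bound to
        scope: list[str]        # what authority was given, e.g. ["read_repo"]
        issued_at: float = field(default_factory=time.time)
        ttl_seconds: int = 900  # authority expires with the task
        signature: str = ""     # issuer's signature over the fields above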
I get the idea. This isn't about whether a person is responsible or not (because of course they are). It's more about whether it was intentional.
However... wouldn't it be much easier to just save the prompt log? This article is based entirely on the "But the prompt history? Deleted." situation (quoting the article).
You've got the model right. And saving prompt logs does help with reconstruction.
But warrants aren't just "more audit data." They're an authorization primitive enforced in the critical path: scope and constraints are checked mechanically before the action executes. The receipt is a byproduct.
Prompt logs tell you what the model claimed it was doing. A warrant is what the human actually authorized, bound to an agent key, verifiable without trusting the agent runtime.
This matters more in multi-agent systems. When Agent A delegates to Agent B, which calls a tool, you want to be able to link that action back to the human who started it. Warrants chain cryptographically. Each hop signs and attenuates. The authorization provenance is in the artifact itself.
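Here's a sketch of that chaining using macaroon-style HMAC folding. This is an assumption on my part; tenuo presumably uses real public-key signatures, but the attenuation mechanics look the same:

    # Macaroon-style chaining sketch: each delegation hop is folded into a
    # running HMAC, so the warrant carries its own authorization provenance.
    # Illustrative only, not tenuo's actual wire format.
    import hashlib, hmac, json

    def fold(sig: bytes, hop: dict) -> bytes:
        msg = json.dumps(hop, sort_keys=True).encode()
        return hmac.new(sig, msg, hashlib.sha256).digest()

    root_key = b"issuer-root-key"  # held by the human-facing issuer
    hops = [
        # The human (alice) authorizes Agent A...
        {"holder": "agent-a", "by": "alice", "scope": ["read_repo", "open_pr"]},
        # ...which delegates a strictly narrower slice to Agent B.
        {"holder": "agent-b", "by": "agent-a", "scope": ["read_repo"]},
    ]

    sig = root_key
    for hop in hops:
        sig = fold(sig, hop)  # each hop extends (and commits to) the chain

    warrant = {"hops": hops, "sig": sig.hex()}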
A worker agent doesn't mint warrants. It receives them. Either it requests a capability and an issuer approves, or the issuer pushes a scoped warrant when assigning a task. Either way, the issuer signs and the agent can only act within those bounds.
At execution time, the "verifier" checks the warrant: valid signatures, attenuation (scope only narrows through delegation), TTL (authority is task-scoped), and that the action fits the constraints. Only then does the call proceed.
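Continuing the HMAC sketch from above (still an illustrative stand-in, not tenuo's real API), the verifier just replays the chain and checks each of those properties:

    # Verifier sketch: replay the chain with the root key, check the
    # signature, confirm scope only narrows, enforce the TTL, and only
    # then allow the action.
    import hashlib, hmac, json, time

    def fold(sig: bytes, hop: dict) -> bytes:
        msg = json.dumps(hop, sort_keys=True).encode()
        return hmac.new(sig, msg, hashlib.sha256).digest()

    def verify(warrant: dict, root_key: bytes, action: str, expires_at: float) -> bool:
        sig, allowed = root_key, None
        for hop in warrant["hops"]:
            sig = fold(sig, hop)               # recompute the running HMAC
            scope = set(hop["scope"])
            if allowed is not None and not scope <= allowed:
                return False                   # attenuation violated: scope widened
            allowed = scope
        if not hmac.compare_digest(sig.hex(), warrant["sig"]):
            return False                       # chain tampered with / wrong issuer
        if time.time() > expires_at:
            return False                       # TTL: authority is task-scoped
        return allowed is not None and action in allowed  # constraint check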
This is sometimes called the P/Q model: the non-deterministic layer proposes, the deterministic layer decides. The agent can ask for anything. It only gets what's explicitly granted.
If the agent asks for the wrong thing, it fails closed. If an overly broad scope is approved, the receipt makes that approval explicit and reviewable.
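The fail-closed part in miniature (my illustration, not project code):

    # P/Q in miniature: the model may propose anything; the deterministic
    # gate permits only what the warrant explicitly granted.
    GRANTED = {"read_repo"}  # scope from an approved warrant

    def decide(proposed_action: str) -> None:
        if proposed_action not in GRANTED:
            # Not explicitly granted => denied by default (fail closed).
            raise PermissionError(f"{proposed_action!r} is outside the warrant scope")

    decide("read_repo")    # proceeds: explicitly granted
    decide("delete_repo")  # raises PermissionError: never authorized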
> They explicitly state that the problem is you don't know which human to point at.
The point is "explicitly authorized", as the article emphasizes. It's easy to find who ran the agent (the article assumes an OAuth log exists). This article is about 'Everyone knows who did it, but did they do it on purpose? Our system can figure that out.'
The video shows a user asking Prism to find articles to cite and to put them in a bib file. But what's the point of citing papers that aren't referenced in the paper you're actually writing? Can you do that?
Edit: You can add papers that are not cited to a bibliography. The video is about the bibliography, and I was thinking about cited works.
A common approach to research is to do literature review first, and build up a library of citable material. Then when writing your article, you summarize the relevant past research and put in appropriate citations.
To clarify, there is a difference between a bibliography (a list of relevant works but not necessarily cited), and cited work (a direct reference in an article to relevant work). But most people start with a bibliography (the superset of relevant work) to make their citations.
Most academics who have been doing research for a long time maintain an ongoing bibliography of work in their field. Some people do it as a giant .bib file, some use software products like Zotero, Mendeley, etc. A few absolute psychos keep track of their bibliography in MS Word references (tbh people in some fields do this because .docx is the accepted submission format for their journals, not because they are crazy).
I once took a philosophy class where an essay assignment had a minimum citation count.
Obviously ridiculous, since a philosophical argument should follow a chain of reasoning starting at stated axioms.
Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).
The citation requirement allowed the class to fulfill a curricular requirement that students needed to graduate, and therefore made the class more popular.
In coursework, references are often a way of demonstrating the reading one did on a topic before committing to a course of argumentation. They also contextualize what exactly the student's thinking is in dialogue with, since general familiarity with a topic can't be assumed in introductory coursework. Citation minimums are usually imposed as a means of encouraging a student to read more about a topic before synthesizing their thoughts, and as a means of demonstrating that work to a professor. While there may have been administrative reasons for the citation minimum, the concept behind them is not unfounded, though they are probably not the most effective way of achieving that goal.
While similar, the function is fundamentally different from citations appearing in research. However, even professionally, it is exceedingly rare for a philosophical work, even by professional philosophers, to be written truly ex nihilo, as you seem to be suggesting. Citation is an essential component of research dialogue and cannot be elided.
> Citing a paper to defend your position is just an appeal to authority
Hmm, I guess I read this as a requirement to find enough supportive evidence to establish your argument as novel (or at least supported in 'established' logic).
An appeal to authority explicitly has no reasoning associated with it; is your argument that one should be able to quote a blog as well as a journal article?
It’s also a way of getting people to read things about the subject that they otherwise wouldn’t. I read a lot of philosophy because it was relevant to a paper I was writing, but wasn’t assigned to the entire class.
Huh? It's quite sensible to make reference to someone else's work when writing a philosophy paper, and there are many ways to do so that do not amount to an appeal to authority.
> Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).
An appeal to authority is fallacious when the authority is unqualified for the subject at hand. Citing a paper from a philosopher to support a point isn't fallacious, but "<philosophical statement> because my biology professor said so" is.
> HN users that have settled on the belief that if something is done using AI it is "lazy slop" and it needs to be shunned!
Honestly, I used to be one of them. I recently saw a great library (atcute) that changed my opinion. I think most AI sceptics haven't had this experience. They saw AI slop and set their opinion: AI-generated code is bad. I can't really blame them, though, because there is so much AI slop out there.
> anyone who is not incorporating it in some way (workflow or actual end product) is just holding themselves back.
It's true. However, I think people who create AI slop are worse than those who don't use AI at all. They are diligently making this world a worse place.
In the second report, Daniel greeted the slopper very kindly and tried to start a conversation with them. But the slopper called him by a completely wrong name. And this was December 2023. It must have been extremely tiring.
This (manual?) addition in the second report [1] likely gives an idea as to the reporter's mastery of English and ability to proofread before spamming out slop:
> Sorry that I'm replying to other triager of other program, so it's mistake went in flow
I think it would be really interesting if someone at HackerOne did a deep dive into the demographics of many of the banned posters.
December 2023... that was the early AI era. I had to double-check the dates, actually, because I misremembered the release date of GPT-4 as 2024; turns out it was 2023, and that was when LLMs first became remotely useful for even this kind of slop.