"According a recent AI Working Group internal email obtained by FedScoop, the AI tool is expected to be used for many day to day tasks and key responsibilities within congressional offices such as: generating constituent response drafts and press documents; summarizing large amounts of text in speeches; drafting policy papers or even bills; creating new logos or graphical element for branded office resources and more."
While generating press documents, summarizing speeches, and creating logos all seem like reasonably innocent tasks, "drafting policy papers or even bills" is not exactly something that should be outsourced to a garbage-generating machine...
The entire process whereby laws are drafted and enforced is alarming. Lobbyists and people who disagree with your way of life have substantial influence. I believe that for some of the largest expenditures of my lifetime, the US Congress literally voted without people having read the final text of the bills they were voting on [0]. The language is then interpreted by a large group of unreliable judges and juries. Corruption hangs in the air, and rational voices are thin on the ground. Lord of the Flies looms large in everyone's minds.
Using AI in the drafting process is hardly something to get worked up about in amongst that mess. ChatGPT has a nice clear style and will probably help make these laws more accessible.
ChatGPT can read a 4,000-page bill in 3 days. It'll probably be a net win once the dust settles if it makes it harder to ram stuff through without debate. And it'll be a nice neutral way of arguing about what is in a bill - people might be able to agree on which AI is reasonable and what it thinks the summary should be. That'd be a win the size of the moon in terms of better pressure to write good legislation.
I agree that lawmaking has never been a process under any kind of statistical control, so it's hard to say what any change to the system is really doing.
That said, I can only really agree that it might be a net good IF the weights (and all other data necessary to reproduce the summaries) were open knowledge. Without that, you are just shipping a bill off to a black box and trusting that the gremlins in the box are acting in good faith. If every legislative office is doing this, then those gremlins will become the new focal point of a basically-invisible form of lobbying. This is good for OpenAI (and whoever controls it) but probably bad for everyone else.
So can the legislative staff of the responsible committee.
> And it'll be a nice neutral way of arguing about what is in a bill - people might be able to agree on which AI is reasonable
Ha ha ha ha. No. Just like they can’t do that with human experts, and for the same reason. Heck, this has already been demonstrated in the case of AI models quite resoundingly.
And it would be a bad thing if they could: designating a single AI vendor as the sole arbiter of truth in the lawmaking process would be much worse than the status quo.
And with most bills I've read, the devil is 100% in the details. Of those 4000 pages, it might just be one sentence that has far reaching ramifications. Alternatively, folks will start maliciously crafting bills in a way that the ChatGPT summary misses the key parts.
That's less of a GPT problem and more of a human problem: bills/contracts/T&Cs/ToS have all been massively blown out because:
* Companies need to cover every minute detail/situation to prevent getting boned, especially in America where your citizens will sue at the drop of a hat
* Companies want to purposefully bulk out the contract so that most people don't bother reading it
It probably can, assuming that a bill can be separated into smaller parts, more or less independent of each other. Trusting the summary is another story.
But expecting people to read 4000 pages in 3 days and make informed decisions after that is unrealistic, regardless of whether they are using lossy untrusted compression tools like ChatGPT.
"Trusting the summary is another story." What story could we possibly be interested in that involves having chat GPT read a bill but we don't care about a summary of it being accurate?
Add an intern in the loop who feeds in every chapter separately and then compiles the per-chapter summaries, which can be summarized again if the result is too long or has too many interdependencies.
I don't know about GPT-4, but I did not find GPT-3.5 great for summarizing long documents. I tried it on two long documents whose content I knew, and it did not pick up the main themes well. Not even well enough to be believable at a glance, though I would be more concerned if it were believable and used as the basis of a debate. GPT-4 should do a little better in that regard, since it allows for longer input strings, but still not long enough to properly process 4,000 pages.
> It'll probably be a net win once the dust settles if it makes it harder to ram stuff through without debate
Or make it worse, because it completely normalises the practice of foisting long revisions on people at the last minute ("you had plenty of time to understand it; my computer read it in seconds!") and encourages people to rely more on tooling than on dividing the reading between themselves and people they trust, who (unlike GPT) have a shared interest in achieving particular goals and protecting particular parties.
From what we've seen, current-gen summarising tools aren't particularly accurate at summarising legal documents, never mind highlighting stuff which is politically sensitive. And that's before we get into clauses inserted adversarially by entities who have the same tools to test whether GPT summaries pick them up.
ChatGPT _can't_ read a 4,000-page bill in 3 days. In my experience with GPT-4, it can't even read a 10-page, not-very-dense technical document without hallucinating wildly in its summary.
> I believe that for some of the largest expenditures of my lifetime, the US Congress literally voted without people having read the final text of the bills they were voting on.
Congress is similar to most other large organizations, such as large software companies. In a large software company, the upper-level managers don't actually read all the code. They make their decisions based on what the managers below them report, and so on down the management chain.
Representatives and Senators are Congress' equivalent to those upper level managers.
The House has over 6000 staff members, ranging from staff working for individual representatives to staff employed by committees. The Senate has over 4000 staff members. Those are the people that deal with nitty gritty details of the bills.
> ... people might be able to agree on which AI is reasonable and what it thinks the summary should be.
We're going to wind up with ChatDNC and ChatRNC. Nobody in office is going to agree on which one is reasonable, for the same reason that they refuse to agree on anything today.
I agree with the first part of your comment. I'm not sure I can follow you where you take it though.
> ChatGPT has a nice clear style and will probably help make these laws more accessible.
If the laws are not meant to be read in the first place (which I agree with as a likely pattern), how is this motivating? I'm skeptical that they're not read because of their density. I'd flip the causality; they're dense because they're not to be read.
> nice neutral way of arguing about what is in a bill
Elon Musk is already talking about "TruthGPT", an AI without "liberal bias". I don't think you will find anything resembling neutrality in this conversation.
To put all political opinions aside, I was also about to add that it is fundamentally wrong for representatives in a representative democracy to outsource lawmaking to a private company whose training data and associated reinforcement learning may contain all sorts of unknown biases. Even if all biases were absent, I would argue that elected representatives are neither doing their jobs nor serving their constituencies if they outsource policy deliberation and lawmaking to a machine owned by a private company.
> Elon Musk is already talking about "TruthGPT", an AI without "liberal bias". I don't think you will find anything resembling neutrality in this conversation.
Between TruthGPT, Chuck Schumer & Mitch McConnell I would expect the most honest summary to be TruthGPT. I doubt it'd be close. Even with a very healthy dose of cynicism towards all billionaire projects and scepticism of the abilities of AI. If nothing else the AI has a big advantage because it would have read and possibly remembered the text of the bill and could answer specific questions which already gives it a massive edge over its geriatric competition.
The politicians lie proactively with plausible deniability and/or people wilfully ignoring it because of team dynamics. Something like ChatGPT is hard to train to lie without it being demonstrable with experimental evidence.
> Something like ChatGPT is hard to train to lie without it being demonstrable with experimental evidence.
See, I consider the functional definition of what current generation LLMs do to be "lying". Not out of malice or moral failing (of the model or the model's creators), but simply out of lack of control/alignment/modulation systems that people typically associate with honesty, integrity, and intention.
The problem-model behind these systems is "predict what comes next". The objective in training is not accuracy of content (by whatever definition...), but verisimilitude of output. An LLM happily improvises on whatever seed you give it. Sometimes this leads to output that aligns with someone's definition of reality. Sometimes it's hallucination or bullshit or garbage or whatever term you want to use for it. You can modulate this with better prompt engineering, but only by degrees.
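As a toy illustration of what "predict what comes next" means as a training target: the sketch below is a crude bigram model, nothing like a transformer internally, but the objective has the same shape (next-token likelihood, with no notion of truth). The corpus is made up.

    # Toy next-token predictor: its only "objective" is next-token frequency.
    # It produces fluent-looking text with zero concept of which claims are true.
    import random
    from collections import defaultdict

    corpus = ("the bill funds roads . the bill funds bridges . "
              "the bill bans roads . the moon funds bridges .").split()

    # Count next-token frequencies: P(next | current) is the entire training signal.
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(corpus, corpus[1:]):
        counts[cur][nxt] += 1

    def sample_next(token):
        toks, weights = zip(*counts[token].items())
        return random.choices(toks, weights=weights)[0]

    token, out = "the", ["the"]
    for _ in range(8):
        token = sample_next(token)
        out.append(token)
    print(" ".join(out))  # plausible-sounding; truth never entered the objective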
I have no doubt future generations of these systems will start tackling this, but the systems as they stand now are by their very nature, liars.
edit: And additionally, while I can't necessarily argue against your trust in these systems (especially given the choice you presented... I'd probably choose XGPT too), "neutrality" is defined by a shared trust that I don't see a route to in the current climate.
> Since “TruthGPT” has an overt mission (“anti-woke”) to lean even harder into biases existing LLMs have demonstrated, I... would expect the opposite.
I'd find it interesting to chat to an LLM which had pre-training and fine-tuning for instruction-following, but no attempt to train it to be "safe" or "unbiased" or "truthful". Of course, it is going to repeat whatever biases exist in its pre-training data – but it would be an interesting way of exploring what those biases actually are.
I wonder if Musk has thought of using the algorithm of Twitter Community Notes (formerly Birdwatch). Essentially, its algorithm treats a claim as true when people who normally disagree on what is true agree that it is true. If people at opposite ends of the political spectrum agree X is true, it probably is true, and its truth is unlikely to be contested or controversial. So you could use human reviewers to build such a dataset of non-controversial truths, and then fine-tune an LLM to endorse them. When it comes to the opposite kind of statement, the kind of highly controversial claim on which there is widespread disagreement, there are two possible strategies: (1) explicitly train it to avoid taking sides, (2) don't do anything and just let it answer according to the biases of its pre-training data, whatever that may be.
That way you could train an LLM to not be "woke" (or at least no more "woke" than its pre-training dataset is), without training it to be "anti-woke". It would be interesting to talk to such an LLM and see what kind of opinions it may endorse.
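For what it's worth, the core filter described above is easy to sketch. Below is a toy Python version; the per-rater leaning scores are my simplification, since the real Community Notes algorithm learns viewpoint factors via matrix factorization rather than taking leanings as given:

    # Toy "agreement across the divide" filter: a claim counts as
    # non-controversially true only if raters from both poles endorse it.
    from dataclasses import dataclass

    @dataclass
    class Rating:
        rater_leaning: float  # assumed known, in [-1, 1]; the real system learns this
        agrees: bool          # did this rater endorse the claim?

    def cross_spectrum_consensus(ratings, min_each_side=3, threshold=0.8):
        left = [r for r in ratings if r.rater_leaning < -0.3]
        right = [r for r in ratings if r.rater_leaning > 0.3]
        if len(left) < min_each_side or len(right) < min_each_side:
            return False  # not enough raters on both sides to call it
        left_rate = sum(r.agrees for r in left) / len(left)
        right_rate = sum(r.agrees for r in right) / len(right)
        return left_rate >= threshold and right_rate >= threshold

    ratings = [Rating(-0.9, True), Rating(-0.7, True), Rating(-0.5, True),
               Rating(0.6, True), Rating(0.8, True), Rating(0.9, True)]
    print(cross_spectrum_consensus(ratings))  # True -> candidate for the fine-tuning set

Claims that pass the filter would go into the fine-tuning set; everything else gets one of the two strategies above.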
ChatGPT CAN'T NOT lie even if it wanted to, because it has no concept of true or false; it's just generating sentences. A lie is just as valid a sentence to construct from a prompt as an accurate statement.
> Something like ChatGPT is hard to train to lie without it being demonstrable with experimental evidence.
ChatGPT "lies" even when you don't want it to. It's true that it's not a strategic liar like a regular politician, and liable to give the game away by accidentally telling the truth, but it'll happily refine your lies or make up coherent nonsense pretty much anywhere it hasn't been trained specifically not to
Of course, but the statement was likely a jab at an organic congressperson's inability to read a bill in even three days, before going ahead and signing it anyway.
Actually, it takes about 10-30 seconds (one request) for GPT-4 to summarize 2,000 words (a page or two). And there is a limit of 25 requests every 3 hours for GPT-4, and no API access. So this might even take significantly longer than 3 days.
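Back-of-envelope with those numbers (the words-per-page figure is my assumption):

    # Rough math on summarizing a 4,000-page bill under those limits.
    pages = 4_000
    words_per_page = 500          # assumption; bills vary a lot
    words_per_request = 2_000     # from the comment above
    requests_per_window = 25      # GPT-4 cap in ChatGPT at the time
    window_hours = 3

    requests = pages * words_per_page / words_per_request    # 1,000 requests
    hours = requests / requests_per_window * window_hours    # 120 hours
    print(f"{requests:.0f} requests, ~{hours / 24:.1f} days")  # -> ~5.0 days

So roughly five days for a single pass, before any re-summarization of the summaries.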
> And it'll be a nice neutral way of arguing about what is in a bill - people might be able to agree on which AI is reasonable and what it thinks the summary should be.
It's more likely that they'll further politicize these models.
Calling cutting-edge, consumer-facing models like GPT-4 garbage-generating machines is very intellectually dishonest. These models are fully capable of drafting these kinds of texts, especially when qualified staff are guiding the model.
Well, I just popped in "Write a new Federal law banning the collection of melted snow by individuals or small-business proprietorships for the purpose of protecting endangered plant species. Include a loophole that excludes minority-owned businesses or people who contribute a sufficient amount of money to carbon sequestration technologies or senators or representatives who voted in favor of strongly pro-union causes." and I won't burden HN with the results but it definitely has the shape of a fully-fledged bill for Congress to pass.
One problem ChatGPT would have in its current form is it would need auxiliary assistance to craft a larger-sized bill, as bills easily exceed its current window size. But that's a solvable problem too.
They may not generate garbage per se, but they do generate bullshit. Or if you want to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated, since while they demonstrate phenomenal capacity for production of language, and understanding of language, they do not yet demonstrate much in the way of capacity for control or alignment.
A technology can be both wildly powerful, mindblowingly cool, and deeply imperfect. I don't believe it's intellectually dishonest to emphasize the latter when it comes to impact against the human beings on the other end of the barrel. Especially when the technology starts to break out of the communities that already understand it (and its limitations).
> They may not generate garbage per se, but they do generate bullshit. Or if you want to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated, since while they demonstrate phenomenal capacity for production of language, and understanding of language, they do not yet demonstrate much in the way of capacity for control or alignment.
I truly can’t tell whether you are describing the US Congress or LLMs.
I can't deny that the similarities are strong enough that it weakens some of the philosophical underpinnings of the argument. But I am also wondering these days whether we are all just LLMs at the core of it.
How is it intellectually dishonest? It generates garbage, and it's fully up to you to dig into that garbage and find something worthy in it. It has no idea it's even generating garbage!
You admit this yourself, it requires qualified staff to guide the model aka some people to dig through the garbage to find the good bits it produced.
Of note: I use ChatGPT a lot to generate a lot of garbage. Or for those of you offended by the word, mentally replace it with something more "neutral" sounding like "debris" or "fragments".
> You admit this yourself, it requires qualified staff to guide the model aka some people to dig through the garbage to find the good bits it produced.
Exactly.
It is AI snake oil: humans still have to check whether it will hallucinate (which it certainly will), so it cannot be fully autonomous and needs qualified people monitoring and reading/checking the output.
Not only can it generate garbage, it is too untrustworthy to be left by itself, fully autonomous at the click of a button.
There are plenty of people I wouldn't leave by themselves to do something, but they still provide value, and I wouldn't classify them as generating garbage. A technology with shortcomings is not snake oil.
Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
This community has such a ridiculous blueprint for “anti-ChatGPT” arguments. There are enough vocal people here that feel the need to look impressive by repeating it over and over, that legitimate nuanced conversations and genuine information transfer with regard to the strengths and weaknesses of these models are drowned out.
In your attempts to avoid the phrase "garbage generator", you've described the human beings in your life in the most depressing way possible: value providers whom you don't trust to operate by themselves.
Anyway, I have a bone to pick with your last paragraph. You are creating the problem for yourself. There are plenty of people elsewhere (even within HN) discussing exactly what you want, but you choose not to interact with them and instead spend time arguing against "ridiculous blueprints".
You choose what you interact with online when it comes to posting comments, you are choosing not to interact with "nuanced conversations and genuine information transfer" -- why? Are we certain you care about genuine information transfer, or are you just here to feel superior to plebs with "anti-ChatGPT arguments"? Rhetorical questions for the culture.
It is relevant and you know exactly why it can't be left by itself.
> There are plenty of people I wouldn't leave by themselves to do something, but they still provide value, and I wouldn't classify them as generating garbage. A technology with shortcomings is not snake oil.
Except that people can be held to account when something goes wrong, and an AI cannot. I can guarantee you that you would not trust an AI in high-risk situations, such as Level 5 autonomous cars or planes with no pilots (this is not the same as autopilot mid-flight), and sit in the passenger seat while it transports you from A to B.
> Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
You're not getting the point. It's about trustworthiness in AI: when a human does something wrong, they can explain themselves and their actions transparently. A black-box AI model cannot, and it can generate and regurgitate nonsense confidently from its own training set to convince novices that it is correct.
> There are enough vocal people here that feel the need to look impressive by repeating it over and over, that legitimate nuanced conversations and genuine information transfer with regard to the strengths and weaknesses of these models are drowned out.
Or perhaps many here are skeptical about the AI LLM hype and still do not trust it?
Intellectual honesty is very much in the garbage-generating-machine camp. Making an embedding space of reasonable language and then randomly sampling it is not a way to draft a law.
As someone who doesn't know how the human brain works, has never drafted any laws, and has never empirically seen what value an LLM can bring in this scenario, you should certainly qualify this with a massive "in my layperson's opinion".
I beg to disagree. There are already hundreds of real-world examples of these models doing a terrible job with anything related to jurisprudence.
What happens when a Congress member uses Chat GPT to draft a public statement and it accidentally includes their resignation? Will we suddenly find that AI is a danger to society and must be regulated?
Nothing happens, except everyone on the Hill will ridicule the politician and his staff.
To resign Congress members have to actually send a letter to the governor of the state they represent. Most unilateral actions in government require some sort of formal document addressed to the relevant authority.
I don't think it's reasonable to assume they're just going to let GPT write bills with no supervision, especially given that the first sentence specifically says they're experimenting with it.
It's really, really good that they're doing this - this is an incredibly important technology, and the best way for politicians to understand it is to actually use it in their day to day work. Have it write the first draft of bills and see what kind of weird stuff it puts in there - that's how you actually understand the hallucination issue.
> generating constituent response drafts and press documents;
Not too worried about that one since most responses are already automated
> summarizing large amounts of text in speeches;
Again, this seems fine.
> drafting policy papers or even bills;
Not terribly thrilled about this one but I don’t expect it to be able to draft actual legislation — my experience has been that ChatGPT doesn’t know actual legislation well and wouldn’t know how to modify current laws.
> creating new logos or graphical elements for branded office resources and more
Fine with this one.
Only one of these is potentially problematic to me but there are also ways for them to use it to make better laws.
>> summarizing large amounts of text in speeches;
> Again, this seems fine.
Not sure about that. Speeches are already misunderstood often enough; now add on ChatGPT making something up or summarizing something wrongly.
1. The warning is right there when you open up ChatGPT:
> This is a free research preview.
> Our goal is to get external feedback in order to improve our systems and make them safer.
> While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
2. I know lots of companies that have banned the use of ChatGPT for a plethora of reasons, but the government is allowed to use it?
Against what would you verify it, when the goal of policy documents and bills is to create something new (and when even the references to old things may be unavailable, inaccurate, or outright invented)?
Another thing: Congress (like any governmental body in any country) presumably has huge databases of digitized documents that could be used to train a custom LLM better suited to these tasks.
When I see statements like "garbage-generating machine", I immediately think that the person making such a statement may not be aware of the capabilities of GPT-4.
On the contrary, I disagree and think "garbage-generating machine" is funny and on point in this serious context. Just for the record, I love ChatGPT, but don't tell me it doesn't also produce garbage, as per the OP.
Eh, did something change or is GPT-4 a language model?
One way to show that something contains very little information is to show how easily it can be predicted.
There's no difference between you sending me random text and me letting GPT generate random text for me, and the only information difference between a prompted text and a fully random text is the prompt itself.
Hence why it is garbage-generating: the output contains exactly as much information as the input, just possibly in more words.
Funny, when I see that I think they've been in the trenches, have had to beat the thing into submission to get useful output from it, and have realised the true impossibility of the task.
That's true but it doesn't need to process the whole bill in one prompt.
For example you could split a 200K token bill into 10 parts that are 20K each, and ask GPT-4 to create a 2K token summary of each part. Then combine those summaries to create another 20k prompt, and ask it to summarize that.
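A rough sketch of that recursive approach using the OpenAI Python client (the prompt wording, token budgets, and character-based chunking are simplifications on my part, not a recommended recipe):

    # Hierarchical ("split, summarize, re-summarize") bill summarization sketch.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def summarize(text: str, max_words: int = 800) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder; any long-context model
            messages=[{
                "role": "user",
                "content": (f"Summarize this bill excerpt in at most {max_words} "
                            f"words, flagging any clause with unusual legal effect:"
                            f"\n\n{text}"),
            }],
        )
        return resp.choices[0].message.content

    def chunks(text: str, size: int):
        return (text[i:i + size] for i in range(0, len(text), size))

    def summarize_bill(bill_text: str, chunk_chars: int = 60_000) -> str:
        partials = [summarize(part) for part in chunks(bill_text, chunk_chars)]
        combined = "\n\n".join(partials)
        # Recurse until the combined summaries fit in a single prompt.
        if len(combined) > chunk_chars:
            return summarize_bill(combined, chunk_chars)
        return summarize(combined)

Of course, each level of recursion is another place for detail to get lost, which is exactly the "lossy compression" worry elsewhere in this thread.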
To quote Mitch Ratcliffe, 'A computer lets you make more mistakes faster than any other invention with the possible exceptions of handguns and Tequila.'
We already have huge bills full of sneaked-in clauses that no one reads. It is only a matter of time before "As an AI model" ends up in the constitution and laws.
I interned at the US Senate for a year, and it was pretty eye opening. There are quite literally teams available to members to which they basically can go "write me a law that does X," and then they get to work. Staff at the Congressional Research Service has to be sweating bullets now! /s
This seems like a pretty big win, but could have some unintended side effects. Writing a summary of a bill is great for gleaning its purpose at a high level. However, this isn't enough to make a decision; a bill could have hundreds of lines of text with real, legally enforceable effect on citizens' rights. Every one of those lines is important, so while this MIGHT help summarize all the "legalese", I'm not sure it will help us get simpler, more straightforward legislation.
On the positive side, it could lead to people who never read bills actually getting some sort of summary they can read quickly. Caveat: I don't think it is fun or interesting to read a 500-page farm bill, but that is their job, so they have to do it or pass simpler bills.
On the negative side, it could lead to further crap being shoved into bills which lobbyists know won't be picked up by an AI-driven "bill-explainer" product.
> Writing a summary of a bill is great to glean it's purpose at a high level. However, this isn't enough to make a decision
Staff summaries are a major driver of decisions on bills as it is.
OTOH, ChatGPT is notoriously bad at any serious legal-analysis task. Fast, yes, but quite often outrageously wrong in ways that are very good at convincing non-experts, and that might often also convince experts if they didn't do the same work themselves (it won't fool experts on the toy problems it fails, but where it's analyzing a large bill that the expert hasn't also fully analyzed, it may or may not be obvious that the LLM is hallucinating the premise underlying its analysis).
It will be both hilarious and horrifying to see what emerges from Congress playing with AI tools.
I am sorry to say I have grown to have contempt for the vast majority of everyone involved in government. Sometimes it seems these people would otherwise have ended up as criminals, ambulance chasers, bad used-car salespeople, or some combination along those lines. The level of ignorance in our governing bodies is something to behold; hilarious and horrifying in its own right.
This comment applies to all political parties, BTW.
Short story, back in the early 80's I got to spend a lot of time working with the late Frank Zappa. We discussed all kinds of topics over dinner nearly every day for months. It should come as no surprise to anyone of that era who know what Zappa was about that politics was one of his favorite punching bags.
I don't remember the exact statement he made one night. The gist of it was that, as he put it, intelligent and capable people like us don't ever go into politics because the whole thing is a dirty slimy cesspool of ignorant, petulant, self-absorbed, tribal morons who only care about themselves, their political future, their party and not one bit about the people, the country and the job they are supposed to do. An intelligent person wanting to make a difference would quickly be ground to dust and exit all bruised-up.
That was in the 80's. I can't imagine what he would say if he were alive today other than "It got worse than either one of us could have ever imagined.".
It is worse. Today, if you don't fall in line, they will do their best to ruin your life, your career and your family. Why would anyone with more than two connected neurons want any part of that? This is why we end-up with the "leadership" we get, election after election.
And now we are going to give them AI?
Brilliant.
Maybe the rule should be: Government is banned from using AI.
This is bad. Let's say that ChatGPT has, or is induced to have, a bias in a certain direction.
Then one side of a political discussion is able to generate bills en masse, constantly churning out new ones with slight tweaks, while the other side is unable to get its bills discussed or considered because of the flood generated by the first.
Effectively a legislative DoS attack. Not to mention that ChatGPT in its current form would already be unwilling to generate the text of bills on certain issues.
It'd have to be more than a slight bias for ChatGPT to be completely useless. However, this provides an excellent incentive to reform the legislative process, which is long overdue anyway.
It's easy to think this will be only bad things, but currently we have volumes of legislation being dropped at the 11th hour and kneejerk judgements on whether to sign it or delegate "the good parts" to understudies and interns to review. If this gets legislators to get a good summary, it won't fix the system but does fix the problem for now.
To be honest, it basically can't make the current situation significantly worse. DC has already solved the problem of how to pass extremely large amounts of legislation with no accountability (who wrote this line?) and no ability for any individual human, most notably including the people nominally voting on the bill, to possibly have read the entire bill even at a skimming speed, let alone comprehend and understand the possible implications.
I don't mean this as a cynical crack, either. It's literally true. The spending omnibuses and such are often passed at speeds where it is only borderline physically possible for a legislator to have skimmed the bills, and certainly utterly impossible for them to understand what is in them. DC doesn't need help on that front.
I mean, we have ~4,000-page bills going through Congress today; it is impossible to even verify that anyone has read the entire thing. At least this will be a start.
I don't know, it always feels like we are stuck with SUPER LONG, over-descriptive ToS and contracts when simple language plus trust would make things easier. Maybe something like this helps abstract away legalese and keeps bills in readable form? That would be a great world to be in: common-sense contracts/bills that still hold up against bad actors and in conflict resolution.
Yes. What's the use of having standards for document handling and guaranteeing their correctness if we then feed them through a black box and accept its interpretation at face value?
One can only hope that they use it, at least in tiny part, to allow for just a little bit more whimsy in the maze of bureaucracy. An occasional artistic flourish of pirate speak or poetry in subsection 37B, while preserving meaning, could be wonderful. Remember that a lawyer will be the one using the tool, so it's at least possible this could be done without major issues.
Each paragraph of AI-generated content should have to be prefaced with the phrase "computer-generated:" with the same visibility as the rest of the text. Same goes for images but with a watermark instead. No site-wide opt-outs.
What if the content is generated, read by a human, and approved? What if the content is drafted by a generative AI and then edited by a human? What if I had an extensive conversation with GPT-4 to figure out what I was going to write, and then wrote it myself?
Using plagiarism rules seems both fair and appropriate (given the source of the AI's content).
A human reviewing an author's writing and then passing it off as their own isn't okay. If they make substantive edits such that it's essentially new work, then it's okay. Discussing with the author then writing their own versions is okay.
Really, though, what I'm after is a blanket protection so consumers can understand that the images they're seeing or the language they're reading or hearing isn't from a primary (i.e., human) source. Sort of like the "actor portrayal" or "this image has been photoshopped" requirements some countries have.
CHATGPT: As a creative congressman, I propose a new law that grants copyright protection to AI-generated works. This law will acknowledge that AI systems can produce unique and creative works that deserve legal protection. The law will also ensure that such protection does not infringe on the rights of human authors and creators.
The law will recognize that AI systems do not have a personality or consciousness to imprint on what they produce, but they can produce works that are original and creative. Some countries like the United Kingdom, Ireland, and New Zealand already grant copyright-like protection to computer-generated works. The UK Copyright Designs and Patents Act 1988 creates a legal fiction for computer-generated works where there is no human author. Section 9(3) states that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” This provision assumes some form of creative intervention by a human and not autonomous, human-less generation by a computer program alone. Therefore, this new law will ensure that AI-generated works that require human intervention will be protected under copyright law.
The law will also recognize the importance of originality in copyright protection. Common law jurisdictions generally have a low threshold for originality, requiring only a minimal level of creativity or intellectual labor and independent creation for a work to be protectable. Some other countries, like Brazil, approach originality from the negative, and state that all works of the (human) mind that do not fall within the list of works that are expressly defined as “unprotected works” can be protected. Therefore, this new law will ensure that AI-generated works that meet the threshold for originality will be protected under copyright law.
Furthermore, the law will recognize the importance of fictional characters in creative works. Copyright law protects creative expression, and the legal theory is based on derivation copyrights. Derivative works become separate copyright-protected work independent from the original as long as the transformation, modification, or adaptation of the original work is substantial and original. This new law will ensure that AI-generated fictional characters that meet the threshold for originality will be protected under copyright law.
In conclusion, this new law will protect the creative works generated by AI systems that meet the threshold for originality and require human intervention. The law will also ensure that such protection does not infringe on the rights of human authors and creators. This law will acknowledge the importance of AI-generated works and provide a legal framework for their protection.
I'm not saying I used ChatGPT/Bard exclusively to do my taxes, but it does a surprisingly good job explaining concepts on the tax forms that TurboTax just doesn't even bother helping you with.
This actually could be quite useful for summarizing all of the parts of legislation that is written in overly-complicated legalese, crammed into other legislation, and delivered the night before a vote.
"According a recent AI Working Group internal email obtained by FedScoop, the AI tool is expected to be used for many day to day tasks and key responsibilities within congressional offices such as: generating constituent response drafts and press documents; summarizing large amounts of text in speeches; drafting policy papers or even bills; creating new logos or graphical element for branded office resources and more."
While generating press documents, summarizing speeches, and creating logos seem all reasonable innocent tasks, "drafting policy papers or even bills" is not exactly something that should be outsourced to a garbage-generating machine...