It's free for now, just like registries were "free" and Docker Desktop was free... until they weren't. I am not against Docker capitalizing on and charging for their services (as they should); however, the pattern of offering a service for free and then reneging after it's widely adopted makes me hesitant to adopt any of their offerings.
A month or two after a visit to Iceland (my favorite country by far, by the way), I received a ticket in the mail for speeding. It included a picture of the car I had rented and a closeup of the driver's face: a face that did not belong to me (presumably another renter's).
Luckily, a quick phone call and a copy of my driver's license cleared things up, but systems like these inevitably lead to "guilty until proven innocent" scenarios instead of "innocent until proven guilty".
From the abstract, it sure sounds like any electronic checkers or chess game would fall under this patent. If so, I'm sure there is plenty of prior art to invalidate their claim.
I'm not even sure it saves a ton of time to be honest. It sure _feels_ like I spend more time writing up tasks and researching/debugging hallucinations than just doing the thing myself.
This is consistently my experience too; I'm seriously just baffled by reports of time saved. I think it costs me more time cleaning up its mistakes than it saves me by solving my problems.
There's some really pernicious stuff I've noticed cropping up too, over months of use.
Not just subtle bugs, but unused variables (with names that seem to indicate some important use), comments that don't accurately describe the lines of code they precede, and other things that feel very 'uncanny.'
The problem is, the code often looks really good at first glance. Generally, LLMs produce well-structured code with good naming conventions, etc.
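Just to make up an illustration of that 'uncanny' quality (this is entirely hypothetical, not output from any particular model):

    # Hypothetical example, not real model output: the docstring promises
    # normalization to a sum of 1, the code actually divides by the max,
    # and `total` has an important-sounding name but is never used.
    def normalize_scores(scores):
        """Normalize scores so they sum to 1."""  # does not match the code below
        total = sum(scores)        # unused, but looks load-bearing
        max_score = max(scores)
        return [s / max_score for s in scores]

Everything reads fine until you actually trace what each line does.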
I think people are doing one of several things to get value:
1. Use it for research and prototyping, aka throwaway stuff.
2. Use it for studying an existing, complex project. More or less read-only, or very limited writes.
3. Use it for simple stuff they don't care much about and can validate quickly and reasonably accurately; the standard examples are CLI scripts and GUI layouts.
4. Segment the area in which the LLM works very precisely: small functions, small modules, and ideally tests added from another source.
I've found that the shorter the "task horizon", the more time saved.
Essentially, a longer horizon increases the chances of mistakes, which increases the time needed to find and fix them. So at some point that becomes greater than the time saved by not having to do it myself.
This is why I'm not bullish on AI agents: the task horizon is too long and too dynamic.
The reports of time saved are so cooked it's not funny. It's just part of the overall AI grift going on; the actual productivity gains will shake out in the next couple of years, we just gotta live through the current "game changer" and "paradigm-shifting event" nonsense the upper management types and VCs are pushing.
When I see stuff like "Amazon saved 4500 dev years of effort by using AI", I know it's on stuff that we would use automation for anyways so it's not really THAT big of a difference over what we've done in the past. But it sounds better if we just pretend like we can compare AI solutions to literally having thousands of developers write Java SDK upgrades manually.
What "in connection with" means is vague. I think a reasonably competent tax attorney could probably argue that the costs of running your production cloud serving existing customers don't count, but IANAL.
Everson on the @GeekDetour YouTube channel explores how a patent filed in 2020 could be preventing our slicers from generating stronger 3D prints using "Brick Layers". He shows what appears, to me at least, to be pretty clear evidence of prior art that _should_ invalidate the patent.
I'm not a lawyer but would love to see this added to all slicers.
It sounds like it has the same requirements as a 70B+ model, but if someone manages to get inference running reasonably well locally on a single RTX 4090 (AMD 7950X3D w/ 64GB DDR5), please let me know.
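For what it's worth, the usual trick on that kind of box is a heavily quantized GGUF with partial GPU offload via llama.cpp. Here's a minimal sketch using llama-cpp-python; the model path and layer count are placeholders to tune, and even at Q4 a 70B spills well past 24GB of VRAM, so the CPU-resident layers will make it slow:

    # Sketch: partial GPU offload of a quantized model with llama-cpp-python.
    # model_path and n_gpu_layers are placeholders; raise n_gpu_layers until
    # VRAM is full and let the remaining layers run from system RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-70b.Q4_K_M.gguf",  # hypothetical filename
        n_gpu_layers=40,   # however many layers fit in 24GB of VRAM
        n_ctx=4096,
    )
    out = llm("Say hello in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

No promises on speed; it's the offload split, not the code, that decides whether it's usable.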