Not arguing against 'great', but the cost efficiency is questionable: for 10% of that you can get two used 3090s. The nice thing about LLMs is that their layers run sequentially, so the model can be split into several sub-models, one per GPU, pipeline-style. Then 2, 3, 4... GPUs should improve throughput roughly proportionally on big batches, and make it possible to run a bigger model on low-end hardware.
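A minimal sketch of that splitting, assuming the Hugging Face stack (transformers plus accelerate installed); the checkpoint name is a placeholder, and `device_map="auto"` is what shards the layers across whatever GPUs are present:

```python
# Sketch: let accelerate place a model's layers across available GPUs
# (e.g. two used 3090s). "some/model" is a placeholder checkpoint name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("some/model")
model = AutoModelForCausalLM.from_pretrained(
    "some/model",
    device_map="auto",         # split layers across cuda:0, cuda:1, ...
    torch_dtype=torch.float16,
)

prompt = tok("Hello", return_tensors="pt").to(model.device)
out = model.generate(**prompt, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```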
It may be even cheaper to use big models through an API: Claude, GPT, whatever. Renting GPUs is efficient only for big batches, while an API is priced per call/token and is cheap at small volumes.
Simple tax optimization, like new billionaires promising significant donations: a) they don't actually have to donate, and b) they can immediately deduct those promised donations from their taxes.
It's a trick. If you donate something to charity you can write it off on your taxes. After their IPOs most new billionaires did it, including Zuck. It's enough to just promise it publicly. There is no time limit, so it can take years to actually 'donate', but the write-off is immediate. To optimize it even further they create their own dummy charities staffed with friends and relatives.
For 4, it would be neat to first pass each block of code (function or class or whatever) through an LLM to extract its meaning, and then embed some combination of the LLM-parsed meaning, the docstring and comments, and the function name. Then do semantic search against that (something like the sketch below).
That way you’d cover what the human thinks the block is for vs what an LLM “thinks” it’s for. Should cover some amount of drift in names and comments that any codebase sees.
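A rough sketch of that pipeline, assuming a sentence-transformers embedder; `summarize_with_llm` is a hypothetical stand-in for whatever model produces the "LLM meaning":

```python
# Sketch: for each function, combine its name, docstring and an LLM-written
# summary into one text, embed it, and search by cosine similarity.
import ast
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def describe_function(source: str, node: ast.FunctionDef, summarize_with_llm) -> str:
    doc = ast.get_docstring(node) or ""
    body = ast.get_source_segment(source, node)
    summary = summarize_with_llm(body)     # what the LLM "thinks" the function is for
    return f"name: {node.name}\ndocstring: {doc}\nllm summary: {summary}"

def build_index(source: str, summarize_with_llm):
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    texts = [describe_function(source, f, summarize_with_llm) for f in funcs]
    vecs = embedder.encode(texts, normalize_embeddings=True)
    return funcs, vecs

def search(query: str, funcs, vecs, k: int = 5):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = vecs @ q                       # cosine similarity (vectors are normalized)
    top = np.argsort(-scores)[:k]
    return [(funcs[i].name, float(scores[i])) for i in top]
```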
This is cool, but how about local semantic search through tens of thousands of articles and books? Surely I'm not the first; there should be some tools for this already.
I was definitely thinking about something like this for PaperMatch itself, where anyone could pull a Docker image and search through the articles locally! Do you think this idea is worth pursuing?
I'm not an expert, but I'll do it for the learning, then open-source it if it works. As far as I understand, this approach requires a vector database and an LLM, which doesn't have to be big. Technically it can be implemented as a local web server: should be easy to use, just type a query and get a list sorted by relevance.
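Something like this minimal sketch of the "local web server" idea, with FastAPI and sentence-transformers as placeholder choices (any embedding model or vector store would do; the documents list is a stand-in for a real article collection):

```python
# Sketch: embed documents once, then serve a /search endpoint that returns
# results sorted by relevance.
from fastapi import FastAPI
from sentence_transformers import SentenceTransformer, util

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")

# In a real setup these would be loaded from your article/book collection.
documents = ["first article text ...", "second article text ..."]
doc_vecs = model.encode(documents, convert_to_tensor=True, normalize_embeddings=True)

@app.get("/search")
def search(q: str, k: int = 10):
    q_vec = model.encode(q, convert_to_tensor=True, normalize_embeddings=True)
    hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]   # sorted by score
    return [{"doc": documents[h["corpus_id"]], "score": float(h["score"])} for h in hits]
```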
> it will just have much better precision than us.
And much faster with the right hardware. That's enough if AI can do in seconds what takes humans years. With o3 it looks like price is the only limit.
After a collision the drone should shut itself down and drop like a rock; it looks like that wasn't the case here. And there should be nobody on the ground below it, of course. Flying over that lake would have been safe.
Why should there be nobody on the ground below? Other kinds of aircraft can fly over people. Quadcopters are much easier to control and stabilize than other kinds of aircraft.
Other things that fly overhead aren't allowed to put 200 of them in the air all within 700 feet of each other, purposely flying inches apart while performing acrobatics.
All of these heavy machines in the sky rely on a ton of things going right at once not to collide... software, hardware, weather, etc.
There are different rules for flying in an airshow [1]. Here's a very crude summary.
There's a designated area where the aircraft do their show, and there are not supposed to be any spectators in that area.
Airshow craft can fly over spectator areas, but only if they are high enough and not doing anything dangerous.
The US actually has some of the most stringent airshow safety rules in the world, which is why most of the time when there is an accident that kills a lot of spectators at an airshow it is not in the US. Plenty of pilots have been killed in US airshows, but the shows are carefully planned so that if something goes wrong the planes should crash away from spectators.
I don't know if drone shows are subject to the same rules.
If the article title didn't make it apparent, people can get seriously injured or killed when drones plummet. Also, I believe it's illegal to operate a drone over humans without very specific exemptions.
Other aircraft are also inspected regularly and have regulations to follow; there isn't much of that for drones, and any schmoe can fly their modified flying blender.
My point is that no, it's not illegal based on the type of aircraft - it's legal if you have the proper pilot license. And it's not hard to get (a few weekends of study).
With this, a lot of damage can be done even without a drone. As for weaponized drones, that's not the future; it's been a reality in Ukraine for years now. Defending against them is difficult to impossible. A bodyguard who can sacrifice himself may sometimes work.
> AGI is special. Because one day AI can start improving itself autonomously
AGI can be sub-human, right? That's probably how it will start. The question will be whether it's already AGI or not yet, i.e. where to set the boundary. So at first it will be humans improving AGI, but then... I'm afraid it can get so much better that humans will be literally like macaques in comparison.
from https://www.asacomputers.com/nvidia-l40s-48gb-graphics-card....
NVIDIA L40S 48GB graphics card, Our price: $7,569.10*