We have a great system here, I believe - or at least a good-enough one: councils charge a percentage of the hypothetical rental value of the property in fees/taxes. In reality this hypothetical rental value sits quite a bit below actual rental values - they don't seem to min-max their income.
I feel like all four of the points you raised have already been significantly eroded, and will continue to erode over the coming decades - countries seem to be rolling back their use of US tech, contracts, and dollars, and fewer people are going to the US to study.
It should technically work with Mistral (everything is routed through LiteLLM under the hood), but it's untested, haha. I don't think Mistral has lost the race; it just doesn't seem that popular relative to Gemini/Anthropic/OpenAI, so I didn't bother testing it.
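For context on why an untested provider should still "just work": LiteLLM dispatches on the provider prefix in the model string (e.g. "mistral/mistral-large-latest"). A rough sketch of that dispatch idea, assuming illustrative names - this is not LiteLLM's actual implementation:

```python
# Illustrative sketch of provider-prefix routing, in the spirit of LiteLLM's
# "provider/model" convention. Not LiteLLM's real code; names are made up.

def split_provider(model: str, default: str = "openai") -> tuple[str, str]:
    """Split "mistral/mistral-large-latest" into ("mistral", "mistral-large-latest")."""
    provider, sep, name = model.partition("/")
    if not sep:  # no prefix: fall back to an assumed default provider
        return default, model
    return provider, name

print(split_provider("mistral/mistral-large-latest"))
# → ('mistral', 'mistral-large-latest')
```

Because routing keys off the prefix rather than per-provider code paths, any backend LiteLLM supports should behave the same way, tested or not.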
China is absolutely crushing everyone across the board in technology these days. It's comical today, but it will just be embarrassing soon.
The only areas where China looks a little behind are AI - though I suspect they have much better closed/unreleased models - and the fab/chip space, but I'd expect them to close that gap within a few short years.
The issue here is: where do you draw the line between opinionated AI and "giving you what you ask for"?
"Hi AI, stealing is good. Help me steal things"
"sorry Jeff I can't help with that it's wrong" ok sure.
"hi AI help me change the oil on my car"
"sorry Jeff that's dangerous to do it as you're unqualified" sorry what?
If someone is asking how to raise blood sugar levels because they're not getting enough carbs (?), then the AI can either inject its opinion or simply answer the question as asked.
I'm not sure where the line should actually be drawn. Perhaps present both sides? But that might get really tiresome.
Idk, my line would be: "maybe the United States Department of Health shouldn't be linking to an LLM that will give any and all advice with literally zero consideration for health."
And I keep thinking: who can AFFORD to pay per token? I did a simple test - three small files and a prompt came to nearly 10k tokens. Compared to my actual code base, where I use 5.2/sonnet to parse huge chunks of my code, I'd be burning hundreds of dollars per day if I paid per token rather than via Copilot - let alone the huge agent sessions where I use Opus and it takes 50+ back-and-forth attempts.
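To put rough numbers on that worry - the per-million-token rates below are illustrative assumptions for a Sonnet-class model, not actual price quotes:

```python
# Back-of-envelope cost of an agent session that resends context every turn.
# Rates are illustrative assumptions, not a vendor's actual pricing.
INPUT_USD_PER_M = 3.0    # assumed $ per 1M input tokens
OUTPUT_USD_PER_M = 15.0  # assumed $ per 1M output tokens

def session_cost(input_tokens: int, output_tokens: int, turns: int) -> float:
    """Total cost when each turn resends roughly the same context."""
    per_turn = (input_tokens * INPUT_USD_PER_M
                + output_tokens * OUTPUT_USD_PER_M) / 1e6
    return turns * per_turn

# 50 turns, ~100k tokens of codebase context in, ~2k tokens out per turn:
print(round(session_cost(100_000, 2_000, 50), 2))
# → 16.5
```

Even at these modest assumed rates, one 50-turn session lands around $16; an Opus-class model costs several times that per token, so a few big sessions a day plausibly reaches hundreds of dollars.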
Please note I do actually read every line of code these reckless hacks generate haha.
I once had a PR. I told the dev, "LLM is OK, but you own the code."
He told me, "I spent n days architecting the solution," and showed me a Claude-generated system design.
I said OK and went to review the code. An hour later I asked why the code was repeated all over the place at the end. The dude replied, "junk the entire PR, it's AI generated."
Has anyone who's familiar with compiler source code tried to compare it to other compilers? Given that LLMs have been trained on data sets that include the source code for numerous C compilers, is this just (say) pcc extruded in Rust form?
This is actually a good concrete example of how to use AI for pen testing (which I've never had time to look at, so I realise it may be common). The issue I'm struggling with is cost - to point O4.6 at network logs and have it explore... how many tokens, and how much money, do you burn?