Apple pays 100% of the tax on the service road to the stores and pays for the parking lot, though. They deserve some fee and that's what the courts said, right?
You call it a tax, most others would call it the cost of doing business.
But yes, that's built into the product's price. Devs are paying for a license to work with iOS and need to own hardware that only Apple sells in order to develop for iOS. So I think those costs are covered.
We'll see what the "reasonable" price is. If nothing else, we know 27% was too much even for appeals.
Not if the only way to get to the store was through that road. In that case, there are public access laws and it is literally illegal for people who "own" a road to charge people money, if there is an easement.
That's probably a simplification, but they are called "easement by necessity" rights. So even in your example of the roadway, that's also wrong. They get zero dollars.
My point is that in the real world, sharing an area would mean the other store also contributes tax-wise. It's not equivalent to bring up real life if the real-life paying part isn't also adhered to; the lack of symmetry is notable. I don't think they deserve to set their own price, though (30% is way too high).
Capitalism does not need and has never had free markets, though some arguments for capitalism being ideal rest on the assumption of free markets, along with a stack of other idealized assumptions, like human behavior conforming to rational choice theory.
China encourages exports and has no recent history of confiscating property owned by foreigners. Combined with cheap labor, this makes it a great place to set up sweatshops. If you are selling a good that can be made with low-cost labor but you use high-cost labor, you will be outcompeted and the market won't buy your expensive products, so over time all the successful firms make their low-skill products in sweatshop zones.
I find it really hard to buy that this is the reason people have 0 kids. I'm less sure about this, but if you're worried about cost you'll just have 1 instead of 2 or 3. Seems to me that some people are just less interested in having kids now because they'd rather do other stuff.
It is very similar to an IQ test, with all the attendant problems that entails. Looking at the Arc-AGI problems, it seems like visual/spatial reasoning is just about the only thing they are testing.
Completely false. This is like saying being good at chess is equivalent to being smart.
Look no further than the hodgepodge of independent teams running cheaper models (and no doubt thousands of their own puzzles, many of which surely overlap with the private set) that somehow keep up with SotA to see how impactful proper practice can be.
The benchmark isn’t particularly strong against gaming, especially with private data.
ARC-AGI was designed specifically for evaluating deeper reasoning in LLMs, including being resistant to LLMs 'training to the test'. If you read Francois' papers, he's well aware of the challenge and has done valuable work toward this goal.
I agree with you. I agree it's valuable work. I totally disagree with their claim.
A better analogy is: someone who's never taken the AIME might think "there are an infinite number of math problems", but in actuality there are a relatively small, enumerable number of techniques that are used repeatedly on virtually all problems. That's not to take away from the AIME, which is quite difficult -- but not infinite.
Similarly, ARC-AGI is much more bounded than they seem to think. It correlates with intelligence, but doesn't imply it.
> but in actuality there are a relatively small, enumerable number of techniques that are used repeatedly on virtually all problems
IMO/AIME problems, perhaps, but surely that's too narrow a view for all of mathematics. If solving conjectures were simply a matter of trying a standard range of techniques enough times, there would be a lot fewer open problems around than there are.
Maybe I'm misinterpreting your point, but this makes it seem that your standard for "intelligence" is "inventing entirely new techniques"? If so, it's a bit extreme, because to a first approximation, all problem solving is combining and applying existing techniques in novel ways to new situations.
At the point that you are inventing entirely new techniques, you are usually doing groundbreaking work. Even groundbreaking work in one field is often inspired by techniques from other fields. In the limit, discovering truly new techniques often requires discovering new principles of reality to exploit, i.e. research.
As you can imagine, this is very difficult and hence rather uncommon, typically only accomplished by a handful of people in any given discipline, i.e way above the standards of the general population.
I feel like if we are holding AI to those standards, we are talking about not just AGI, but artificial super-intelligence.
Took a couple just now. It seems like a straightforward generalization of the IQ tests I've taken before, reformatted into an explicit grid to be a little bit friendlier to machines.
Not to humble-brag, but I also outperform on IQ tests well beyond my actual intelligence, because "find the pattern" is fun for me and I'm relatively good at visual-spatial logic. I don't find their ability to measure 'intelligence' very compelling.
Given your intellectual resources -- which you've successfully used to pass a test that is designed to be easy for humans to pass while tripping up AI models -- why not use them to suggest a better test? The people who came up with Arc-AGI were not actually morons, but I'm sure there's room for improvement.
What would be an example of a test for machine intelligence that you would accept? I've already suggested one (namely, making up more of these sorts of tests) but it'd be good to get some additional opinions.
With this kind of thing, the tails ALWAYS come apart, in the end. They come apart later for more robust tests, but "later" isn't "never", far from it.
Having a high IQ helps a lot in chess. But there's a considerable "non-IQ" component in chess too.
Let's assume "all metrics are perfect" for now. Then, when you score people by "chess performance"? You wouldn't see the people with the highest intelligence ever at the top. You'd get people with pretty high intelligence, but extremely, hilariously strong chess-specific skills. The tails came apart.
Same goes for things like ARC-AGI and ARC-AGI-2. It's an interesting metric (isomorphic to the progressive matrix test? usable for measuring human IQ perhaps?), but no metric is perfect - and ARC-AGI is biased heavily towards spatial reasoning specifically.
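A toy simulation makes the tails-coming-apart effect concrete (all weights and numbers below are made up purely for illustration, not fitted to any real data):

    import random

    random.seed(0)
    N = 200_000

    # Toy model: general intelligence and a chess-specific knack are independent;
    # measured chess strength is a weighted mix of both (weights invented for
    # illustration only).
    people = []
    for _ in range(N):
        iq = random.gauss(0, 1)
        knack = random.gauss(0, 1)
        chess = 0.6 * iq + 0.8 * knack
        people.append((iq, chess))

    top_chess = sorted(people, key=lambda p: p[1], reverse=True)[:20]
    avg_iq_of_top_chess = sum(p[0] for p in top_chess) / len(top_chess)
    max_iq_overall = max(p[0] for p in people)

    print(f"avg iq among top-20 chess scores: {avg_iq_of_top_chess:.2f} sd")
    print(f"highest iq in the population:     {max_iq_overall:.2f} sd")
    # The best chess scores go to people who are well above average on iq but
    # extreme on the chess-specific factor: the tails have come apart.

Even though iq and chess are strongly correlated here, the very top of the chess ranking is dominated by the chess-specific factor.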
The models never have access to the answers for the private set -- again, at least in principle. Whether that's actually true, I have no idea.
The idea behind Arc-AGI is that you can train all you want on the answers, because knowing the solution to one problem isn't helpful on the others.
In fact, the way the test works is that the model is given several examples of worked solutions for each problem class, and is then required to infer the underlying rule(s) needed to solve a different instance of the same type of problem.
That's why comparing Arc-AGI to chess or other benchmaxxing exercises is completely off base.
(IMO, an even better test for AGI would be "Make up some original Arc-AGI problems.")
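For anyone who hasn't looked at a task, here's a toy sketch of the few-shot format (an invented rule and invented grids, not drawn from the real public or private sets; the train/test, input/output layout mirrors the published format as I understand it):

    # Hypothetical ARC-style task (made-up rule and grids).
    # Each task shows a few input -> output pairs demonstrating a hidden rule;
    # the solver must infer the rule and apply it to a fresh test input.
    task = {
        "train": [
            {"input":  [[0, 1], [1, 0]],
             "output": [[1, 0], [0, 1]]},
            {"input":  [[1, 1], [0, 1]],
             "output": [[0, 0], [1, 0]]},
        ],
        "test": [
            {"input": [[0, 0], [1, 1]]},   # expected output: [[1, 1], [0, 0]]
        ],
    }

    def solve(grid):
        # Toy solver hard-coding this one toy rule (flip 0s and 1s); a real
        # solver has to infer the rule from the train pairs for every new task.
        return [[1 - cell for cell in row] for row in grid]

    for pair in task["train"]:
        assert solve(pair["input"]) == pair["output"]

    print(solve(task["test"][0]["input"]))   # [[1, 1], [0, 0]]

Knowing the answer to this task tells you nothing about the next one, which uses a different hidden rule; that's the sense in which training on answers isn't supposed to help.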
It's very much a vision test. The only reason the models don't pass it easily is the vision component. It doesn't have much to do with reasoning at all.
Imagine that pattern recognition is 10% of the problem, and we just don't know what the other 90% is yet.
Streetlight effect for "what is intelligence" leads to all the things that LLMs are now demonstrably good at… and yet, the LLMs are somehow missing a lot of stuff and we have to keep inventing new street lights to search underneath: https://en.wikipedia.org/wiki/Streetlight_effect
I don't think many people are saying 100% on ARC-AGI-2 is equivalent to AGI (names are dumb as usual). It's just the best metric I have found, not the final answer. Spatial reasoning is an important part of intelligence even if it doesn't encompass all of it.
As someone who has lived in Chicago for 30 years, I don't mind telling you that laws are not enforced here. Bikes and scooters not being legal on sidewalks has not stopped a single person from biking and scootering on the sidewalk.
Every rule has exceptions, usually because of some quirk of the market. The most obvious example is adtech, which is able to sustain massive margins because the consumers get the product for free, so they see no reason to switch, and the advertisers are forced to follow the consumers. Tech in general has high margins, but I expect them to fall as the offerings mature. Companies will always try to lock in their users like AWS/Oracle do, but that's just a sign of an uncompetitive market IMO.
GP said they were doing vibe coding and trying to get the AI to do one-shots. That's the worst way to use these tools. AI coding agents work best when you generally know what you want the output to look like but don't want to waste time writing that output.