Many people move to London when they want to make money, and move out when they want to do anything else. Pretending that London is independent of the rest of the country's children and retirees is misleading.
No idea if they are doing this, but you can use Gosper islands (https://en.wikipedia.org/wiki/Gosper_curve) which are close to hexagons, but can be exactly decomposed into 7 smaller copies.
Yes! A Gosper island in H3 is just the outline of all the descendants of a cell at some resolution. The H3 cells at that resolution tile the sphere, and the Gosper islands are just non-overlapping subsets of those cells, which means they tile the sphere.
Not quite - you need 12 pentagons in a mostly hexagonal tiling of the sphere (and if you're keeping them similar sizes, Gosper-islands force hexagon-like adjacency). I don't think it's possible to tile the sphere using more than 20 exactly identical pieces.
You could get a Gosper-island-like tiling starting from H3 by defining each "hex" recursively as the union of its 6 or 7 child cells (in H3, pentagons have 6 children and hexagons have 7), stopping at some small enough resolution if you really want. Away from the pentagons, these tiles would be very close to Gosper islands.
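The exact 7-fold decomposition can be sanity-checked with a little arithmetic over the Eisenstein integers (a sketch of the underlying lattice fact, not H3's actual code): hexagon centers form the lattice of points a + b·ω with ω = e^(iπ/3), the parent grid is the child grid scaled by t = 2 + ω with |t|² = 7, and a parent's 7 children (its center plus its six neighbours) land one each in the 7 residue classes mod t.

```python
import cmath
import itertools
import math

# Sketch (not H3's actual code): hexagon centers as Eisenstein
# integers a + b*w, with w = e^(i*pi/3).  The parent grid is the
# child grid multiplied by t = 2 + w, so each parent covers
# |t|^2 = 7 child cells.
w = cmath.exp(1j * math.pi / 3)
t = 2 + w

assert abs(abs(t) ** 2 - 7) < 1e-9  # area ratio is exactly 7

def is_eisenstein(z, eps=1e-9):
    """Is z of the form a + b*w for integers a, b?"""
    b = z.imag / w.imag
    a = z.real - b * w.real
    return abs(a - round(a)) < eps and abs(b - round(b)) < eps

# A cell's center plus its six neighbours...
children = [0] + [cmath.exp(1j * math.pi * k / 3) for k in range(6)]

# ...are pairwise distinct modulo t: no two differ by an Eisenstein
# multiple of t, so they represent all 7 residue classes of the
# index-7 sublattice t*Z[w].
for u, v in itertools.combinations(children, 2):
    assert not is_eisenstein((u - v) / t)
```

That norm-7 sublattice is what makes the Gosper island a rep-7 tile: the seven child islands are copies of the parent scaled by 1/√7.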
> I don't think it's possible to tile the sphere using more than 20 exactly identical pieces.
I was wrong about this (e.g. https://en.wikipedia.org/wiki/Rhombic_triacontahedron). It still seems possible to me that there's a limit to the smallest tile that can tile a unit sphere on its own. (Smallest by diameter as a set of points in R^3).
Decidability of a type system is like well-typedness of a program. It doesn't guarantee it's sensible, but not having the property is an indicator of problems.
I'm not entirely smart enough to connect all of these things together, but I think there is a kind of subtlety here that's being stepped on.
1. Complete, decidable, and well-founded are all distinct properties.
2. Zig (which allows types to be values) is Turing complete at compile time regardless, so the compiler isn't guaranteed to halt anyway, and in practice it doesn't matter.
3. The existence of a set x with x ∈ x is not enough by itself to create a paradox and prove False. All it does is violate the Axiom of Foundation; it doesn't recreate Russell's paradox.
4. The Axiom of Foundation is a weird sort of arbitrariness, in that it imposes a DAG structure on all sets under the membership relation.
5. This isn't necessarily some axiomatically self-evident fact. Aczel's anti-foundation axiom works as well, and you can make arbitrary sets with weird memberships if you adopt that.
6. The Axiom of Foundation exists to stop you from making weird cycles, but there is a parallel to the Axiom of Choice, which directly asserts the existence of non-computable sets via a non-algorithmically-realizable oracle anyway...
Your other points are more relevant to the content of the article, but point 2 relates to the practical consequences of undecidable type checking, so I'll reply to that.
I don't have a problem with compile-time code execution potentially not terminating, since it's clear to the programmer why that may happen. However, conventional type checking/inference is more like solving a system of constraints: the programmer should understand what the constraints mean, but shouldn't need to know how the constraint solver (type checker) operates. If it's undecidable, that means there is a program that the programmer knows should type check but that the implementation rejects, ruining the programmer's blissful ignorance of the internals.
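To make "solving a system of constraints" concrete, here's a toy unification-based solver (a sketch of the general technique, not any real compiler's algorithm; type variables are strings, constructed types are tuples, and I've left out the occurs check a real checker needs):

```python
# Toy constraint solver in the style of a type checker.
# Type variables are strings ('a'); constructed types are tuples,
# e.g. ('int',) or ('fun', argument_type, result_type).

def apply(subst, t):
    """Apply a substitution to a type, chasing variable chains."""
    if isinstance(t, str):
        return apply(subst, subst[t]) if t in subst else t
    return (t[0],) + tuple(apply(subst, x) for x in t[1:])

def unify(constraints):
    """Solve a list of type equations; raises TypeError when unsolvable.
    (A real checker also needs an occurs check to reject infinite
    types like a = ('fun', a, a).)"""
    subst, work = {}, list(constraints)
    while work:
        a, b = work.pop()
        a, b = apply(subst, a), apply(subst, b)
        if a == b:
            continue
        if isinstance(a, str):
            subst[a] = b
        elif isinstance(b, str):
            subst[b] = a
        elif a[0] == b[0] and len(a) == len(b):
            work.extend(zip(a[1:], b[1:]))
        else:
            raise TypeError(f"cannot unify {a} with {b}")
    return subst

# The constraint from checking an application like (f 3),
# where f has type int -> bool and 3 forces the argument type:
s = unify([(('fun', 'a', 'b'), ('fun', ('int',), ('bool',)))])
assert apply(s, 'a') == ('int',)
assert apply(s, 'b') == ('bool',)
```

The programmer only needs to understand what the equations mean; decidability is about whether the solver loop (or its real-world equivalent) is guaranteed to terminate with an answer.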
> 2. Zig (which allows types to be values) is Turing complete at compile time regardless, so the compiler isn't guaranteed to halt anyway, and in practice it doesn't matter.
Being Turing complete at compile time causes the same kinds of problems as undecidable typechecking, sure. That doesn't make either of those things a good idea.
> 3. The existence of a set x with x ∈ x is not enough by itself to create a paradox and prove False. All it does is violate the Axiom of Foundation; it doesn't recreate Russell's paradox.
A set that violates an axiom is immediately a paradox from which you can prove anything. See the principle of explosion.
> 4. The Axiom of Foundation is a weird sort of arbitrariness, in that it imposes a DAG structure on all sets under the membership relation.
Well, sure, that's what a set is. I don't think it's weird; quite the opposite.
> 5. This isn't necessarily some axiomatically self-evident fact. Aczel's anti-foundation axiom works as well, and you can make arbitrary sets with weird memberships if you adopt that.
I don't think this kind of thing is established enough to say that it works well. There aren't enough people working on those non-standard axioms and theories to conclude that they're practical or meet our intuitions.
> 6. The Axiom of Foundation exists to stop you from making weird cycles, but there is a parallel to the Axiom of Choice, which directly asserts the existence of non-computable sets via a non-algorithmically-realizable oracle anyway...
The Axiom of Foundation exists to make induction work, and so does the Axiom of Choice. They both express a sense that if you can start and you can always make progress, eventually you can finish. It's very hard to prove general results without them.
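For Foundation at least, that can be made precise: over the other ZF axioms it is equivalent to the ∈-induction schema, which is exactly the "if you can always make progress, you eventually finish" principle:

```latex
% Epsilon-induction: if a property holds of x whenever it holds of
% all of x's members, then it holds of every set.
\forall x \,\bigl( (\forall y \in x \; \varphi(y)) \rightarrow \varphi(x) \bigr)
  \;\rightarrow\; \forall x \, \varphi(x)
```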
But like, of all the expressive-power-vs-analyzability trade-offs you can make, there's a huge leap in expressive power when you give away decidability. Undecidability is not a sign that the foundation has cracks (not well-founded), but it might be a sign that you put the foundation on wheels so you can drive it at highway speeds, with all the dangers that entails. It's not a trade everyone would make, but the languages I prefer do.
No. Being well typed is not a semantic property of a program: in a language where it makes sense to talk about running badly typed code, a piece of code that starts with an infinite loop may be well or badly typed after that point, with no observable difference in program behaviour.
There are decidable type systems for Turing-complete languages (many try to have this property), and there are languages in which all well-typed programs terminate but type checking is undecidable (e.g. System F without full type annotations).
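A tiny illustration of the first point, in Python for concreteness (a made-up example, not from any particular language or paper):

```python
# Two functions with identical observable behaviour (neither ever
# returns); a static checker would accept one and reject the other.

def well_typed():
    while True:
        pass
    return 0          # unreachable, but consistent with an int return type

def badly_typed():
    while True:
        pass
    return "x" + 1    # unreachable type error: str + int

# Calling either one diverges identically; the type difference after
# the loop makes no observable difference to program behaviour.
```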
For many publications you could be criticizing, I'd agree with you, but Quanta usually reaches a higher standard, and I feel they deserve credit for that. Here's the Quanta article on the same thing [1]. It goes into much more detail, shows a picture of the perfect sofa, and links to the actual research paper. They're aimed at a level above "finished high school", and I appreciate that; it gives me a chance to learn from the solution to a problem, and encourages me to think about it independently.
I agree with you that Quanta doesn't always "allow specialists to understand exactly what's being claimed", which is a problem; but linking to the research papers greatly mitigates that sin.
And here's how they clearly explain the proof strategy.
> First, he showed that for any sofa in his space, the output of Q would be at least as big as the sofa’s area. It essentially measured the area of a shape that contained the sofa. That meant that if Baek could find the maximum value of Q, it would give him a good upper bound on the area of the optimal sofa.
> This alone wasn’t enough to resolve the moving sofa problem. But Baek also defined Q so that for Gerver’s sofa, the function didn’t just give an upper bound. Its output was exactly equal to the sofa’s area. Baek therefore just had to prove that Q hit its maximum value when its input was Gerver’s sofa. That would mean that Gerver’s sofa had the biggest area of all the potential sofas, making it the solution to the moving sofa problem.
Setting A4 to zero (or anything below 80) doesn't work. This doesn't improve if the constants in the A4 formula are moved a short distance away from 100.
In case you can't tell from that last example, I think being able to fix the intended values of multiple outputs simultaneously would be interesting. If you were to give more details about the solver's internals, I'd be keen to hear them.
I believe that the important part of a brain is the computation it's carrying out. I would call this computation thinking and say it's responsible for consciousness. I think we agree that this computation would be identical if it were simulated on a computer or paper.
If you pushed me on what exactly it means for a computation to physically happen and create consciousness, I would have to move to statements I'd call dubious conjectures rather than beliefs - your points in other threads about relying on interpretation have made me think more carefully about this.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
> Why would anyone pick the flexible/potentially-insecure option?
Because having a connection that's encrypted between a user and Cloudflare, then unencrypted between Cloudflare and your server is often better than unencrypted all the way. Sketchy ISPs could insert/replace ads, and anyone hosting a free wifi hotspot could learn things your users wouldn't want them to know (e.g. their address if they order a delivery).
Setting up TLS properly on your server is harder than using Cloudflare (disclaimer: I have not used Cloudflare, though I have sorted out a certificate for an https server).
The problem is that users can't tell if their connection is encrypted all the way to your server. Visiting an https url might lead someone to assume that no-one can eavesdrop on their connection by tapping a cross-ocean cable (TLS can deliver this property). Cloudflare breaks that assumption.
Cloudflare's marketing on this is deceptive: https://www.cloudflare.com/application-services/products/ssl... says "TLS ensures data passing between users and servers is encrypted". This is true, but the servers it's talking about are Cloudflare's, not the website owner's.
Going through to "compare plans", the description of "Universal SSL Certificate" says "If you do not currently use SSL, Cloudflare can provide you with SSL capabilities — no configuration required." This could mislead users and server operators into thinking that they are more secure than they actually are. You cannot get the full benefits of TLS without a private key on your web server.
Despite this, I would guess that Cloudflare's "encryption remover" improves security compared to a world where Cloudflare did not offer this. I might feel differently about this if I knew more about people who interact with traffic between Cloudflare's servers and the servers of Cloudflare's customers.
Let's Encrypt and ACME haven't always been available. Lots of companies also use appliances for the reverse proxy/ingress.
If they don't support ACME, it's actually quite a chore; at least it was the last time I had to do it, before ACME was a thing (admittedly over 10 years ago).
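For what it's worth, the ACME route today is a few commands; a rough sketch, assuming a Debian-style server already running nginx and using certbot's nginx plugin (details vary by distro):

```shell
# Obtain and install a Let's Encrypt certificate via certbot's
# nginx plugin (assumes nginx is already serving the domain):
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com

# Certbot sets up automatic renewal; verify that it will work:
sudo certbot renew --dry-run
```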
https://trustforlondon.org.uk/data/population-age-groups/