I don't feel his overflow miscompilation example is a good one. A 64-bit multiplication truncated back to 32 bits has the same overflow behavior as if the computation had been done in 32 bits (assuming nobody depends on the overflow indication, which is rare). And in high-level programming languages you typically can't tell the difference.
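A quick sanity check, using Python integers as a stand-in for machine arithmetic (the masks emulate 64- and 32-bit truncation):

    # The low 32 bits of a product depend only on the low 32 bits of the
    # operands, so doing the multiply in 64 bits and truncating matches a
    # wrap-around 32-bit multiply.
    a, b = 0xDEADBEEF, 0x12345678

    prod64 = (a * b) & 0xFFFFFFFFFFFFFFFF   # multiply carried out in 64 bits
    prod32 = (a * b) & 0xFFFFFFFF           # multiply wrapping at 32 bits

    assert prod64 & 0xFFFFFFFF == prod32    # truncating the wide result agrees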
Chimpanzees, gorillas, and orangutans evolved intelligence too. They are smarter than most other critters in the jungle, just not as much as the lineage that leads to humans.
It's actually quite difficult to define human intelligence. Every time we think we've found something unique to humans, some animal eventually turns up that can do it too. It may all just be a question of degree and how it's used.
From what I've heard, language is unambiguously unique to humans, if you consider grammar an integral part of language. You can teach chimpanzees hand signs, but they could never make the leap to stringing them together under a coherent rule: something like the difference between "Mom give me cookies" vs. "I give mom cookies."
(I'm no expert, so take that with a grain of salt.)
Proto-grammars are fairly common. https://en.wikipedia.org/wiki/Alex_(parrot) for example shows that parrots are capable of understanding English word order to some extent.
Unique to modern humans, maybe. But that's only because we outcompeted/killed all of our sibling species that could also speak. Denisovans likely had language as well.
Starting from what should be considered "writing", all the way down to how to identify specific artifacts as abstract words. Some researchers spend years in the forest studying one animal just to isolate a single word it's speaking. Understanding other kinds of intelligence is a crazily complex task.
This is actually not correct for Starlink. They did a lot of work to lower their albedo in response to astronomers' complaints, even though there was no government regulation in this area.
It might apply to some of the emerging Starlink competitors however, especially the Chinese ones and AST.
The albedo reduction they worked on is the exact reason why I wrote this, "...or if they think that a government order is imminent so they come to some voluntary agreement ahead of time."
SpaceX only "voluntarily" did that because the government was likely to put more stringent requirements on them if they ignored the complaints.
If you spun the whole structure as one, you couldn't have multiple shells all with 1 g at their surface: the spin rate required for 1 g depends on the radius. But their whole concept is built around multiple shells, which is clear from the name.
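A minimal sketch of the constraint (the shell radii are just illustrative, not from the proposal):

    import math

    g = 9.81  # target apparent gravity, m/s^2

    def rpm_for_1g(radius_m):
        # Centripetal acceleration a = omega^2 * r, so omega = sqrt(g / r).
        omega = math.sqrt(g / radius_m)    # rad/s
        return omega * 60 / (2 * math.pi)  # revolutions per minute

    # Nested shells spun together share one omega, so they can't all get 1 g:
    for r in (100, 200):  # hypothetical shell radii in meters
        print(f"radius {r} m needs {rpm_for_1g(r):.2f} rpm for 1 g")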
Regarding the GDP needed: once you have a working "mine on the Moon and send to orbit" economy, it doesn't seem too bad. The assumption is that a lot of the technology has already been developed for other projects. Launching it all from Earth obviously wouldn't be possible even with vastly cheaper launch. That's why they put the build at the Earth-Moon L1 Lagrange point, to be easily reachable from the Moon.
For propulsion and reactors there are multiple projects working on all of this today. Building a life support system that lasts 400 years is still an unsolved problem, however.
Re: orbital assembly. The L1 point is bad in every respect. If most material comes from the Moon, the best assembly point is low lunar orbit (as a bonus you get a boost to your launch speed for escaping Earth's gravity); if most material comes from Earth, the best assembly point is low Earth orbit. Hauling all the material to L1 is going to be more expensive in either case (unless the ratio of materials is very finely balanced, which is unlikely).
Re: spin. I still claim that the best design is to rotate the entire living module as one. Most of the activity is going to be on the outer shell; warehouses etc. will be in the lower-gravity interior. No moving parts.
The only question is what to do with the fuel and retro engines. Rotate them as well? Then the fuel tanks need to be stronger. Don't rotate them? Then maybe the living module can undock for the flight and rotate separately.
I suspect that for most cloud providers it's actually cheaper when you delete data, because the data is not charged by the byte. But they like having data anyway, maybe just to train their AI models or for bragging rights to their investors.
As for the expiration dates: most modern file systems support arbitrary extended attributes per file, so it's quite easy to add metadata like this yourself.
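For example, on Linux (the user.expires attribute name is just a convention I made up, not a standard):

    import os

    # Tag a file with an expiration date via an extended attribute (Linux).
    os.setxattr("report.pdf", "user.expires", b"2030-01-01")
    print(os.getxattr("report.pdf", "user.expires"))  # b'2030-01-01'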
Unused services are always pure profit, and storage space is no exception. Providers can offer 100 GB for something like $24/year because only around 2% of subscribers will ever approach the limit, so the extra space is never wasted; it can be allocated to someone or something else.
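Back-of-envelope with those same numbers (the subscriber count and the light-user average are my own assumptions):

    subscribers = 100_000              # hypothetical provider
    plan_gb, price_per_year = 100, 24  # 100 GB at $24/year, as above
    heavy = int(0.02 * subscribers)    # the ~2% who approach the cap
    light_avg_gb = 5                   # assumed average use for everyone else

    sold_gb = subscribers * plan_gb
    stored_gb = heavy * plan_gb + (subscribers - heavy) * light_avg_gb
    print(f"capacity sold: {sold_gb:,} GB, actually stored: {stored_gb:,} GB")
    print(f"oversubscription: {sold_gb / stored_gb:.1f}x")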
It's like gym memberships, ISP/telco service bundles, and amenities at your apartment complex. Anyone not using every possible service is wasting money, but it's impossible to purchase a bespoke service, so essentially everyone wastes money by chipping in for services that someone else uses more than they do.
Here at home I never use the gym, the racquetball courts, the doggy-doo supplies, the laundry room, or a parking space, and yet my rent (everyone's rent) includes upkeep for all of those things. I'm subsidizing all my neighbors and all the wear-and-tear they put on those common amenities. Likewise, everyone paying $24/year for storage, and any business buying big multi-terabyte storage media, is paying for unused storage space and handing over profit. It's practically impossible to rightsize your storage media: you never want it undersized, and you can't simply shrink it and reclaim the resources you invested; you just keep adding new media and replacing the malfunctioning ones. So nearly everyone owns or rents more space than they can realistically utilize.
Furthermore, you'll notice that I specified "automatic" destruction of data by expiration date. Of course it's trivial to tag any file with arbitrary metadata, but the challenge is to build a filesystem that executes data purges on schedule itself, rather than pushing the job into a rickety handmade cronjob in userland. I've never seen a filesystem with such a feature, nor does anyone seem interested in building one.
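To be concrete about the rickety userland version (Linux xattrs, with the same hypothetical user.expires convention as above): walk the tree on a schedule and delete whatever has expired.

    import os
    from datetime import date

    ROOT = "/srv/data"  # hypothetical directory to sweep from cron

    for dirpath, _dirs, files in os.walk(ROOT):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                expires = os.getxattr(path, "user.expires").decode()
            except OSError:
                continue  # no expiration tag; leave the file alone
            if date.fromisoformat(expires) < date.today():
                os.remove(path)  # past its expiration date: purge it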
And here I thought computers were useful for automating business logic and easing the burden on humans. Yet here I am, manually sifting through emails and photos to delete each one, with three dialog boxes intervening every time. It takes hours, days, weeks.
There's a whole discipline of records management in enterprises that governs the disposal of data. It's far more complex than simple purge dates: there are often regulatory requirements and legal discovery issues, so less than 2% of data actually gets disposed of, due to perceived risk.
For personal data the concept would be simpler, but there are still requirements; tax records, say, need to be kept for 7 years.
One of the authors here. It's a somewhat nuanced answer. In principle, I think a classical controller would have been fine here, and if you read the paper (it might be in one of the other papers) we do benchmark a bunch of them. But what's really nice about RL is what it does to the workflow: we can add a sensor, drop a sensor, change the dynamics of the system, and have a functional controller the next day. It trades compute for control-engineer time.
On a secondary, smaller point: the dynamics of the cruise-control cars form an unpleasant switched system, and there's a lot of partial observability. We never fully sense the traffic state; we didn't even have direct measurements of the distance to the car in front. And the individual cars' control decisions are coupled to macroscopic effects on the system, i.e. since all the cars run the same policy, their decisions actually affect the traffic flow. So it's not a trivial control design problem at all.