CEQA itself is a mixed bag. I want to be clear that there are very important things CEQA does to improve our environmental conditions[0]! The very real issue of CEQA being “weaponized”[1] stems from how environmental complaints have to be re-litigated in their entirety every time one is filed.
Say there’s a coalition of neighbors who do not want something built. They can each file a lawsuit alleging environmental issues, and each will have to be handled in isolation.
*I am not going into immense detail here. It is admittedly a bit more complex than this, but this is a reasonable summary.
> there are very important things CEQA does to improve our environmental conditions
Which fits with OP’s assertion that it does “more harm than good.” (Fortunately, restricting the private right of action would curtail a lot of the harm. On the national level I’m pretty much at the point of wanting NEPA repealed.)
After reading over the EPA letter[0], I can't help but wonder whether the final paragraph gives a "bad faith out" to John Deere and their ilk. You could disingenuously interpret the "increment of time necessary to effectuate the repair" to mean the time it would take an official John Deere service technician with a full suite of tools to make the repair.
The following sentence admittedly muddies it a bit, but in general the suggestion that John Deere can still be the arbiter of when the machine can and cannot run without the environmental system in the loop seems like a significantly less meaningful change than what is described on the EPA.gov website.
Been playing around with ClickHouse a lot recently and have had a great experience, particularly because it hits many of these same points. In my case the "local files" part hasn't been a huge factor, but the Parquet and JSON ingestion have been very convenient, and I think CH intends for `clickhouse-local` to be an analog to the "add duckdb" point.
One of my favorite features is `SELECT ... FROM s3Cluster('<ch cluster>', 'https://...<s3 url>.../data/*.json', ..., 'JSON')`[0], which lets you wildcard-ingest from an S3 bucket and distributes the processing across the nodes in your configured cluster. I also think it works with `schema_inference_mode` (mentioned below), though I haven't tried it. Very cool time for databases / DB tooling.
(I actually wasn't familiar with `union_by_name`, but it looks like ClickHouse has implemented that as well [1,2]. Neat feature in either case!)
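For anyone unfamiliar with the feature, here's a rough plain-Python illustration of what union-by-name semantics do (this is my own sketch, not ClickHouse code): rows from files with different schemas get aligned by column name, with missing columns padded with NULLs.

```python
# Plain-Python sketch of "union by name" semantics (NOT ClickHouse code):
# rows from files with different schemas are aligned by column name, and
# columns missing from a given file are filled with None (i.e. NULL).

def union_by_name(*batches):
    """Merge row batches whose dicts may have different key sets."""
    # Collect the union of all column names, preserving first-seen order.
    columns = []
    for batch in batches:
        for row in batch:
            for key in row:
                if key not in columns:
                    columns.append(key)
    # Re-emit every row with the full column set, padding gaps with None.
    return [{col: row.get(col) for col in columns}
            for batch in batches for row in batch]

file_a = [{"id": 1, "name": "ada"}]
file_b = [{"id": 2, "ts": "2024-01-01"}]
merged = union_by_name(file_a, file_b)
# merged[0] == {"id": 1, "name": "ada", "ts": None}
# merged[1] == {"id": 2, "name": None, "ts": "2024-01-01"}
```

The database version of this works on columnar batches rather than dicts, of course, but the name-based alignment is the same idea.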
The native support for streaming in SAM3 is awesome. Especially since it should also remove some of the memory accumulation for long sequences.
I used SAM2 for tracking tumors in real-time MRI images. With the default SAM2, loading images from the data loader, we could only process videos with 10^2 - 10^3 frames before running out of memory.
By developing/adapting a custom version (1) based on a modified implementation with (almost) stateless streaming (2), we were able to increase that to 10^5 frames. While this was enough for our purposes, I spent way too much time debugging/investigating tiny differences between SAM2 versions. So it’s great that the canonical version now supports streaming as well.
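The bounded-memory pattern behind that kind of streaming can be sketched generically (a toy illustration of the idea, not the actual SAM2/SAM3 API): instead of keeping features for every past frame, the tracker holds a fixed-size memory bank, so state stays constant no matter how long the video runs.

```python
from collections import deque

# Toy sketch of the bounded-state streaming pattern (NOT the real SAM2/SAM3
# API): keep only a fixed-size memory bank of recent frame features, so RAM
# usage is constant regardless of sequence length.

class StreamingTracker:
    def __init__(self, memory_frames=8):
        # Only the most recent `memory_frames` feature entries are retained;
        # deque(maxlen=...) evicts the oldest entry automatically.
        self.memory = deque(maxlen=memory_frames)

    def track(self, frame_features):
        # A real tracker would attend over self.memory to propagate the mask;
        # here we just record the frame and report the current state size.
        self.memory.append(frame_features)
        return len(self.memory)

tracker = StreamingTracker(memory_frames=8)
sizes = [tracker.track(f"frame-{i}") for i in range(100_000)]
# State never grows past the bank size, even after 10^5 frames.
assert max(sizes) == 8
```

The hard part in practice (and, I suspect, the source of those "tiny differences") is deciding which frames to keep in the bank, since that changes what the model conditions on.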
(Side note: I also know of people using SAM2 for real-time ultrasound imaging.)
The easiest way to do it is on a repositioning cruise where at the end of the season the cruise lines take their ships from one market to another. The positives are a cheap cruise, the negatives are that you're right at the end and beginning of ideal seasons so your experience at each end might not be great. Also you'll have a lot more sea days and fewer port days which could be in either column.
Weather at sea can be considerably worse than coastal weather. Cruise ships are pretty stable and it's unlikely to be awful, but plenty of people do get seasick. I used to be a professional sailor and so obviously can't see what the fuss is about, but probably half the passengers I spoke to felt ill on at least one of the days, and a few people spent the entire ocean transit in bed. If you're prone to motion sickness take some medication with you.
The atmosphere is different to regular cruises. Typically less of a party and the clientele skew towards older and more independent travellers.
To answer your actual question - go to cruisemapper and seascanner and you'll find them easily enough. They're all over the world.
Yes, exactly: repositioning cruises! I can confirm what you're saying. Sometimes you also get people who were visiting a continent heading back home, for example Brazilians going home in December for their summer.
Modern ships built in the last 10 years have stabilizers. I've been on a big cruise ship without stabilizers before, and it can indeed roll a lot (motion orthogonal to the ship's direction of travel). So, if you're worried about that, just check whether the ship has stabilizers.
As the other commenter said, I was looking for repositioning cruises about 2 months in advance, from one continent to another. I did Europe -> South America on a Costa cruise for 550 euros last year, and Miami -> England on NCL for 650 euros earlier this year.
The difficulty is finding good prices for a solo cabin; for a cabin for two, good prices on repositioning cruises are much easier to find.
As the other commenter said, there are more days at sea (fewer stopovers), and it can happen that a stopover port gets cancelled (for example, they cancelled my stop in the Azores because they didn't want to risk entering a big Atlantic storm).
I'd recommend a long canoe trip in Algonquin Park (or somewhere similar, if there's an equivalent near you) if offline is your goal (though obviously it differs from an ocean crossing in other ways as well).
Yeah, it is. Basically just up to your route planning... I feel like at more than 2 weeks people start going farther north but I think that has more to do with "we can" than there not still being good routes in Algonquin.
I'd probably not recommend more than a week for a first canoe trip anyways.
Nebula[0] addresses this and is IMO an improvement over WireGuard. Came out of Slack originally, and it supports peer discovery, NAT hole punching, and some other cool features. Also still uses the Noise Protocol.
In practice, the extra networking features plus the first-class peer config management baked in are very nice (Nebula’s “lighthouses” are configured with a tool similar to DSNet for WireGuard[1]).
People keep saying that, but haven't we learned already how this goes? Eventually Tailscale gets bought, then priorities change, then they make incompatible changes because they need to grow, and Headscale either can't keep up or gets pushed away by Tailscale themselves, and we're back to using $TailscaleCompetitor, who promises not to do the same thing.
Just don't rely on centralized for-profit entities, rely on stuff produced by non-profits and foundations, that you know isn't gonna screw you over as soon as they need money.
> Just don't rely on centralized for-profit entities, rely on stuff produced by non-profits and foundations, that you know isn't gonna screw you over as soon as they need money
What do you use that fits that philosophy and offers the basic functionality (NAT traversal, Magic DNS, failover relaying) TS provides?
While I agree in spirit, I find this logic around for profit FOSS projects a little backwards sometimes, because it implies forking Tailscale wouldn't save much time.
What makes you think we'd be better off building a competitor to something open source that already has all the features we want? The reason we don't see open source competitors to big products is not that people are too dumb to try; it's that it's way, way harder. It makes far more sense to fork and work from there while we still have this momentum from Tailscale.
If you think Headscale is going to have problems keeping up with a private Tailscale, good luck rebuilding Tailscale.
It’s been some years but I recall that at one point probably around 2016-2017, California produced 80% of the world’s almonds. This was notable because at the same time, California was experiencing historic droughts.
>Tx. rutilus feeding behaviors make them strikingly different from a typical mosquito. Both adult males and females are strictly nectar-feeding and so they do not have a role in the transmission of pathogens to animals as in other mosquitoes.[7] Instead, their larvae are predacious and could potentially help curb the spread of diseases via vector mosquitoes. While they commonly prey on copepods, rotifers, ostracods, and chironomids, they also generally have a preference for certain species of mosquito larvae including common disease vectors such as Aedes albopictus, Aedes aegypti, and Aedes polynesiensis.
Honestly, I was so confident in what I thought I knew (namely evil giant Alaskan mosquitoes) that I didn’t bother to read the article I linked, I just added it for others’ context. Culiseta alaskaensis is probably the source of confusion, and I think a switch-up happened when this was first relayed to me.
Looks like the island might be back on, assuming it doesn’t get hot enough to become temperate in the next few years hah.
In my experience, black flies[1] and no-see-ums[2] are far worse (not counting mosquito-borne disease). It's like a massive angry cloud of micro horseflies that intend to dismember you bite by bite.
Pretty much - although after living with it for 4 years I can report it's often worse. =)
The local bookstore gave a discount for the first bite of the season if you lived long enough to collect.
While technically true, I think that's in tropical Africa. Are there also diseases that they carry in North America? Even without disease, I tend to agree with the OP that black flies are worse than mosquitoes, and don't think I've ever heard of anyone getting a disease from a black fly bite in the US or Canada.
You're mostly correct, but apparently not totally :) - I did think it occurred further north, but human cases are usually only in South and Central America and Africa.
Nonetheless, there are some.
He’s just as nice and fun in person as he seems online. He’s put time into using these tools but isn’t selling anything, so you can just enjoy the pelicans without thinking he’s thirsty for mass layoffs.
Because he's a prolific writer on the subject with a history of thoughtful content and contributions, including datasette and the useful Python `llm` CLI package.
he's incredibly nice and a passionate geek like the rest of us. he's just excited about what generative models could mean for people who like to build stuff. if you want a better understanding of what someone who co-created django is doing posting about this stuff, take a look at his blog post introducing django -- https://simonwillison.net/2005/Jul/17/django/
People with zero domain expertise can still provide value by acting as link aggregators - although, to be fair, people with domain expertise are usually much better at it. But some value is better than none.
For every new model he’s either added it to the llm tool, or he’s tested it on a pelican svg, so you see his comments a lot. He also pushes datasette all the time and I still don’t know what that thing is for.
I think moving OTA updates for embedded devices to project-specific key management, rather than relying on web roots of trust, should become the norm.
Since your firmware images should themselves be signed, with keys verified against physically fused key hashes plus some ratchet system, a web root of trust becomes a liability.
With the setup described above, you could deliver the OTAs signed by some key material that could more easily and/or effectively be made public.
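As a hypothetical sketch of that flow (all names and key handling here are made up; HMAC stands in for the asymmetric signature, e.g. Ed25519, that a real bootloader would verify against the fused key hashes):

```python
import hashlib
import hmac

# Hypothetical sketch of project-key OTA verification. HMAC-SHA256 stands in
# for the asymmetric signature a real bootloader would check against a
# public-key hash fused into the SoC; the version ratchet rejects rollbacks.

SIGNING_KEY = b"project-specific-key-material"  # placeholder, not a real key

def sign_image(image: bytes, version: int) -> bytes:
    """Sign (version || image) so the version can't be swapped independently."""
    payload = version.to_bytes(4, "big") + image
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def accept_update(image: bytes, version: int, sig: bytes, ratchet: int) -> bool:
    """Device-side check: valid project signature AND strictly newer version."""
    payload = version.to_bytes(4, "big") + image
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False          # bad signature: image not from the project key
    return version > ratchet  # ratchet: never accept an older or equal version

img = b"\x7fELF...firmware"
good = sign_image(img, version=5)
assert accept_update(img, 5, good, ratchet=4)        # newer, correctly signed
assert not accept_update(img, 5, good, ratchet=5)    # rollback attempt
assert not accept_update(img, 5, b"\x00" * 32, 4)    # forged signature
```

The key point is that nothing in the verification path consults a web CA: the trust anchor is the fused key hash, and the ratchet counter is the only state the device has to protect.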
[0] https://youtu.be/TKN7Cl6finE?si=CR4SjVK5_ojk-OKq
[1] https://www.planningreport.com/2015/12/21/new-ceqa-study-rev...