Registration is required for drones over 250 grams, or for any drone operated commercially under Part 107. But it's easy to just build your own, or desolder the ID chip if you don't want it.
It's easy to build your own, but it's effectively impossible to build one as stable as a DJI, or as cheaply. E.g., with an FPV drone, hitting the lens would be much harder (though you could use spray instead of a stick to make it easier). Removing the Remote ID 'chip' is plain impossible, since it's implemented by the same radio that handles the video link.
Regardless of how you feel about content moderation, 48 hours is a ridiculously long time given what AI can do today. That “bad” image could have been propagated around the world to millions of people in that time. It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore. However, the compute costs would eat into profits…
Again, I'm not passing judgment on content moderation, but this is an extremely weak initiative.
> It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore.
CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas whether an image was shared nonconsensually seems like it'd often require context that is not in the image itself, possibly contacting the parties involved.
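To illustrate the hash-matching half of that point: production systems (e.g. PhotoDNA) use perceptual hashes that survive resizing and re-encoding, but the core operation is just a set-membership lookup. A simplified exact-match sketch, where the known-hash set and image bytes are made up for illustration:

```python
import hashlib

# Hypothetical database of hashes of known-bad images. Plain SHA-256
# only catches byte-identical copies; real systems use perceptual
# hashes that tolerate re-encoding.
KNOWN_HASHES = {
    # sha256(b"test"), seeded here so the demo has a match
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if the image's hash is in the known-bad set."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

print(is_known_image(b"test"))         # -> True (matches seeded hash)
print(is_known_image(b"other image"))  # -> False
```

Note that none of this helps with the consent question, which is exactly the parent's point: the lookup tells you *what* the image is, not whether sharing it was consensual.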
Indeed. It seems that the process being described is some kind of one-stop portal, operated by or for OFCOM or the police, where someone can attest "this is a nonconsensual intimate image of me" (hopefully in some legally binding way!), triggering a cross-system takedown. Not all that dissimilar to DMCA.
Another, related issue is that the takedown mechanism becomes a de facto censorship mechanism, as anyone who has dealt with DMCA takedowns and automated detectors can tell you.
Someone reports something for Special Pleading X, and you (the operator) have to ~instantly take down the thing, by law. There is never an equally efficient mechanism to push back against abuses -- there can't be, because it exposes the operator to legal risk in doing so. So you effectively have a one-sided mechanism for removal of unwanted content.
Maybe this is fine for "revenge porn", but even ignoring the slippery slope argument (which is real -- we already have these kinds of rules for copyrighted content!) it's not so easy to cleanly define "revenge porn".
The DMCA itself isn't that bad. DMCA notices are filed under penalty of perjury, so false takedowns are rare.
The problem is that most takedowns are not actually DMCA takedowns; they go through some other, non-legal process that carries no legal penalty. If it ever happens to you, I suspect you have a good case against whoever did it, but the lawyer costs will far exceed your total gain (as in: spend $30 million or more to collect $100). Either we need enough people affected by false non-DMCA takedowns that a class action can work (you get $0.50, but at least they pay something), or we need legal reform so that all takedowns against a third party carry some legal penalty.
> DMCA is under penalty of perjury, so false take downs are rare.
Maybe true with the platonic ideal "DMCA takedown letter" (though these are rarely litigated, so who really knows), but as you note, they're incredibly common with things like the automated systems that scan for music in videos (and which actually are related to DMCA takedowns), "bad words" and the like.
> The problem is most take downs are not actually DMCA, they are some other non-legal process that isn't under any legal penalty.
It's true that most takedowns in the US aren't under DMCA, but even that once-limited process has metastasized into large, fully automated content scanning systems that proactively take down huge amounts of content without much recourse. Companies do this to avoid liability as part of safe harbor laws, or just to curry favor with powerful interests.
We're talking about US laws here, but in general, these kinds of instant-takedown laws become huge loopholes in whatever free speech provisions a country might have. The asymmetric exercise of rights essentially guarantees abuse.
I believe Google issues legitimate DMCA takedowns for copyright strikes even when there is no infringement. They put the work of defending against the strike on the alleged infringer, often with little to no detail.
While outright false takedowns may be rare, using the DMCA as a mechanism to inflict pain where no copyright infringement has taken place is common enough that it happens to small-time YouTubers like myself and others I have talked to.
Regardless of how you feel about content moderation, we are talking about a situation where the government is DEMANDING that corporations implement automated, totalitarian surveillance tools. This is the key factor here.
The next step would be for the government to demand direct access to these tools. Then the government would be able to carry out a holocaust against any ethnic group, ten times more effectively and inescapably than Hitler did.
Are you conflating this specific principle with the much wider Online Safety Act? Because, while the latter has certain privacy-undermining elements to it, I'm not sure how asking social media companies to take down content has anything to do with 'surveillance'.
No, I'm not conflating them. The point is that taking content down the way they want requires implementing totalitarian surveillance tools.
As bad as I think this law is, it isn't demanding any degree of surveillance in the sense that real human beings have their information or activity tracked. This is mandating taking down content, not surveilling anyone.
> This is mandating taking down content, not surveilling anyone.
As far as I understand, it mandates precisely that: monitoring EVERYONE.
They are not talking about removing a specific image from the platform based on its hash or something. They are talking about actions that involve automated analysis of all content on the platform for patterns arbitrarily specified by the government.
The technologies being discussed differ from totalitarian surveillance only by the setting of a single flag on the platform, and from the user's perspective they are indistinguishable from such surveillance.
Imagine dang wrote a script to delete every HN comment that contains the string "velociraptor". Under your logic, this involves surveilling every HN commenter. This is true in the pedantic sense that every comment posted to the site would be checked for "velociraptor".
But most people understand the word "surveillance" to mean more involved information collection than just deleting content that matches certain criteria.
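The hypothetical script amounts to a substring filter over the comment store. A toy sketch, where the comment data and the `purge_matching` helper are invented for illustration:

```python
# Hypothetical in-memory comment store, for illustration only.
comments = [
    {"id": 1, "text": "Great article about CI systems"},
    {"id": 2, "text": "My pet velociraptor disagrees"},
    {"id": 3, "text": "Dagger runs locally, which is nice"},
]

def purge_matching(comments, needle):
    """Drop every comment whose text contains the given string."""
    return [c for c in comments if needle not in c["text"]]

comments = purge_matching(comments, "velociraptor")
print([c["id"] for c in comments])  # -> [1, 3]
```

Nothing about the filter retains or reports who wrote what, which is the distinction being drawn here between content matching and surveillance.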
A system like that does not imply the tools of totalitarian surveillance.
In contrast, the proposed one must be able to classify all content on the platform, at any arbitrary moment, according to an arbitrary filter specified after the fact. That is literally a mechanism of totalitarian surveillance.
This holiday season, I wouldn't buy high-priced, high-quality items from Amazon due to concerns about counterfeits. I probably still won't, even after they've made this change. Quality producers selling DTC now have decent websites, free shipping, and good customer service. If I'm going to buy a premium, expensive product, why risk it?
Software that connects your own personal devices to form a p2p mesh network, and also connects to your friends' personal mesh networks. On top of that technology: the ability to chat, share media, or do any other basic computing between friends. Data is synced between devices. The goal is to make the underlying tech mostly invisible to the users: they pair devices and start doing standard software things.
> We need to support farmers because a market spread too thin on farming means people would starve.
People would not starve if we stopped the ethanol mandate. In fact, corn prices would fall, because the government would no longer force ethanol to be blended into gasoline. Less demand would decrease the price.
I'm obviously talking about maintaining spare production capacity. Far worse things than higher prices would happen if we actually had crops fail. If the government stopped requiring ethanol, there would be fewer farmers (although a few might convert to other crops; some land is not suitable for many different crops).
Are there any good CI systems to begin with? joking, but not really
Jenkins has been rock solid. We are trying to migrate to Argo Workflows/Events, but there are complaints (like deploying Argo Workflows with Helm, such fun!)
I've been using dagger.io and it's been really nice to work with.
- runs locally
- has a language server: Python, TypeScript, Go, Java, or Elixir
- has static typing
- the new caching mechanisms introduced in 0.19.4 are chef's kiss
I do not work for Dagger, and I pay for it using the company credit card. A breath of fresh air after the unceasing misery and pain that is GitLab and GHA.
I use Dagger as well, since v0.1.2, even worked on the CUE stuff around then with them.
I wouldn't call it a CI system though, but the philosophy that local and CI should run the same thing certainly saves many hours of frustration.
I'm currently using Dagger to create forkable/rewindable agent sessions and environments (not with their agent nonsense). Dagger is a pretty sweet piece of tech, so many uses for programmatic container layers
How would we effectively regulate social media? Being the regulator could be a very powerful political tool and used to capture or maintain political power.
Regulating is already being done by the "private" companies that own them; heck, it's the plot of a Bond movie (sub in newspapers for social media), with a real-life Larry, Elon, or Mark as the villain.
As a society we choose what to allow or not allow together, collectively, through politics (ideally) and when things damage our collective health we regulate or ban them. All regulations probably seem impossible before they happen. Australia regulated guns, China regulated social media, plenty of countries regulate alcohol, drugs, gambling. It’s all possible, just have to weigh the positives and negatives and find a balance, but the status quo is broken.
Deciding what we want as a society is fine. Vehemently disagreeing over what and how things should be regulated is fine too. In general, trying to do anything in good faith is more or less fine.
What is not fine is proposing to make regulations that purport to do things that are near-universally supported, but in reality further agendas that are widely opposed, agendas that work against the interests of the American people and would never pass otherwise.
That is very clearly what is happening here, and we know that because it happens all the time, using the same tried-and-true formula. In particular, anything claiming to "protect the children" is almost certainly an obfuscated attempt to erode civil rights protections like free speech or privacy, and should be treated with extreme prejudice.
*edit* Also, anything Rahm Emanuel says, believe the exact opposite.
I agree with the sentiment, but the rights of Americans are being eroded at a comical rate with no upside like protecting children (be that the pretense of protecting them or actually doing it).
Look at the TikTok "ban" for example. Congress passed a law to ban it because they didn't have control over what the population was seeing, specifically around the genocide in Gaza. Now US ownership has passed to Larry Ellison, a Republican-connected pro-Zionist who will make sure the objectionable content showing Palestinians' suffering does not bubble up in the algorithm. Never mind that you see 10-year-old girls practicing TikTok dances when they are standing in line, waiting for the bus, etc. That problem persists, and no one in leadership cares, because now the right people are getting rich and censoring the actual content the rulers cared about.
I’m with you on Rahm, but I’m not going to let him trying to hook his wagon to a policy that I support ruin my support of it.
It's been about a year since I looked into this sort of thing, but Molmo will give you x,y coordinates. I hacked together a project around it. I also think Microsoft's OmniParser is good at finding coordinates.