The belt packs typically do a lot more than amplify ambient noise: they also handle RF, EQ, and, depending on the model, decryption of the audio signal, among other things. All while typically running on two 1.5V AA batteries.

Audio gear isn't made to last long on batteries; it's made to be reliable for the hours a show typically lasts. I worked part-time as a sound tech (paid hobby) for 15+ years, and I never started a show without fresh batteries, regardless of what the indicators on the transmitters/receivers told me.


Been a while since I looked into this, but afaik Maven Central is run by Sonatype, which happens to be one of the major players in supply chain security tooling.

From what I remember (a few years old now, things may have changed), they required devs to stage packages to a specific test env, where packages were inspected not only for malware but also for vulnerabilities before being released to the public.

npm on the other hand... Write a package -> publish. npm might scan for malware and do a few additional checks, but at least back when I looked into it, nothing happened proactively.
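
To make "nothing happened proactively" concrete, here's a minimal TypeScript sketch of the kind of pre-publish check you can run yourself in CI, since the registry won't do it for you. It just flags install-time lifecycle scripts, a common malware vector; the file name audit.ts and the exact hook list are my own choices, not anything npm ships:

    // audit.ts - hypothetical pre-publish check, run before `npm publish`
    import { readFileSync } from "fs";

    // Lifecycle hooks that execute arbitrary code on `npm install`
    const RISKY_HOOKS = ["preinstall", "install", "postinstall"];

    function auditPackage(path: string): string[] {
      const pkg = JSON.parse(readFileSync(path, "utf8"));
      const scripts: Record<string, string> = pkg.scripts ?? {};
      return RISKY_HOOKS.filter((hook) => hook in scripts).map(
        (hook) => `${pkg.name}: hook "${hook}" runs "${scripts[hook]}"`
      );
    }

    const findings = auditPackage("./package.json");
    if (findings.length > 0) {
      console.warn(findings.join("\n"));
      process.exit(1); // fail the build; a human has to review
    }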


npm is run by GitHub/Microsoft now, which also sells security products...


It depends. If they simply ignore the ruling, my guess is that whatever trade agreements are in place have a mechanism for escalating such violations so that an Israeli court can enforce the order. I also take it for granted that it depends a lot on the political climate and the strategic value for the governments of Israel and the US...


Similar to the Shai Hulud attack, but with more sophisticated C2 (blockchain, Google Calendar). It also uses Unicode characters to hide source code in IDEs, harvests ecosystem credentials to infect and publish new versions of packages you have access to, and more.
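
For anyone wondering what the Unicode trick looks like: characters such as U+200B (zero-width space) or U+3164 (Hangul filler) render as nothing in most IDEs, so malicious code can sit on lines that look blank. A rough TypeScript sketch of a detector; the character list is illustrative, not the specific set this campaign uses:

    // find-hidden.ts - scan a source file for invisible Unicode characters
    import { readFileSync } from "fs";

    // Zero-width and filler characters known to be abused (not exhaustive)
    const INVISIBLE = /[\u200B-\u200F\u2060\uFEFF\u3164\u115F\u1160]/g;

    function findHidden(file: string): void {
      readFileSync(file, "utf8").split("\n").forEach((line, i) => {
        const hits = line.match(INVISIBLE);
        if (hits) {
          console.warn(`${file}:${i + 1}: ${hits.length} invisible character(s)`);
        }
      });
    }

    findHidden("suspect.js"); // hypothetical input file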


There could; this would essentially take the form of a standard library. That would work until someone decides they don't like the form/naming conventions/architecture/ideology/lack of ideology/whatever else and reinvents everything to do the same thing, but in a slightly different way.

And before you know it, you have a multitude of distributions to choose from, each with their own issues...


> Deliberately extracting personal data into un-audited environments without good reason (eg printing a label for shipping), should be punished with GDPR-style global turnover-based penalties and jail for those responsible.

There already are, but only for Europeans through the GDPR.


Technically not quite, because even in the EU you don't have to provide the audit log for someone's data specifically, and you as a subject have to make specific requests to delete or retrieve your data; it's not made transparent to you as a default position. But yes, you can't just dump it out anywhere you want.

How it should be is that personal data's current and historical disposition is always available to the person in question.

If that's a problem for the company processing the data (other than being fiddly to implement at first), that sounds like the company is up to some shady shit that they want to keep quiet about.

Nothing to hide, nothing to fear should apply here, and companies should be fucking terrified with an existential dread of screwing up their data handling, always looking for ways to avoid handling PII at all costs. The analogy of PII being like radioactive material is a good one. You need excellent processes and excellent reasons to be doing it in the first place, you must show you can do it safely and securely, and if you fuck up, you'd better hope your process documentation is top tier or you'll be in the dock. Or, better, you can decide that actually you can make do by handling the nuclear material only in some safer form, like encapsulated vitrified blocks, at least for most of your processes.

The data processing industry has repeatedly demonstrated that they cannot be trusted, and so they should reap the whirlwind.


It doesn't say audited environments as such, but you are required to use secure environments that you control as a baseline. What "secure" means can always be discussed, but in general it depends on what data you process and what you do with it; if it's a large volume, a big population, or Article 9 data, auditable environments should be expected - though not publicly auditable. Although that would be nice...

Fully agree with what you are saying, and my popcorn is ready for August, when the penalties part of the AI Act comes into force. There is a two-year grace period for certain systems already on the market, but any new model introduced after August this year has to be compliant. AI Act + GDPR will be a great show to watch...


Sure they can. GDPR Article 3(2) says:

«This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union;»

So the GDPR applies. The EU can of course sanction violators outside the Union, but given the current political climate it is likely to be more difficult than before.


>Sure they can. GDPR Article 3(2) says:

No they can't. That only says where the law applies; it says nothing about prosecuting entities not residing in the EU. The EU's legal arm can't extend outside the borders of the EU; without an outright military invasion, it's toothless against foreign entities.

>So the GDPR applies.

I never said it doesn't apply. I asked how the EU is gonna prosecute an entity that doesn't reside in the EU.

Extraditions are tough to negotiate even for serious stuff like murder. No foreign judge will take GDPR violations seriously enough to do that.


Sorry about that; guess I'm just too used to the common misconception some people have that the GDPR doesn't apply if a company isn't established in the EU.

In this case 23andMe is on the Data Privacy Framework list, so they have volunteered to follow the GDPR while still being based in the US. This is basically the same as a number of other GDPR cases, including the GDPR fine against Clearview AI. Fining 23andMe if they violate GDPR should be trivial in that case.


The EU will presumably stop you from doing business in the EU if you break EU laws and ignore judgments. Companies don't want that. Grindr LLC, a US company with no EU corporate presence as far as I know, was fined 6.5 million for breaching the GDPR by an Oslo court, upheld on appeal. They paid that fine. If they hadn't, they'd probably have been kicked out of the app stores in Norway (if not in all of the EU). Apple and Google do have an EU presence. Even if they didn't, they wouldn't start a war over 23andMe.

We generally don't need to worry about EU judgments against US companies being enforced, or vice versa. It doesn't get far enough that we need to think about how it's going to be enforced as a practical matter.


You really think they are getting access to their data and looking for some German dude's data in every server they have? At most 23andMe gets some 250-item questionnaire that some poor soul with a red stapler in a basement has to answer; some middle manager puts a signature on it, sends it saying "yeah we good", and that's it. Unless there is enough noise for someone to sue, no one is looking at that shit hard enough.

Even if they are, 6.5 million in fines is chump change.


Filling out some form won't help if someone finds out they're not doing what they said they would do. The data protection agencies will sue if they find that out. And they did find out that Grindr was selling data it wasn't allowed to.

Fines are scaled according to revenue. That's the reason for the monumental fines leveled at Google and Facebook. They don't mess around.

I believe Grindr's fine was based just on their revenue from Norwegian users. Probably painful enough to make them implement data protection in the jurisdictions which demand it.
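
For reference, the scaling mechanism is written into the regulation itself: GDPR Article 83(5) caps the serious-tier fines at the greater of €20M or 4% of total worldwide annual turnover. A one-line TypeScript sketch of that cap (the function name is mine):

    // Upper bound on a serious-tier GDPR fine, per Article 83(5)
    function maxGdprFine(annualTurnoverEur: number): number {
      return Math.max(20_000_000, 0.04 * annualTurnoverEur);
    }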


Fines are based on a number of things: the type of data (ordinary PII or Article 9 data), the number of people affected, the amount of data, previous violations, and as far as I know also the country issuing the fine. 6.5M might be a small fine by some standards, but if you get fined and nothing improves, the fine is likely to be a lot higher the next time around.


Meta has been fined €1.2b for breaching GDPR, though it seems to still be appealing.

The fine is revenue adjusted.


> The EU's legal arm can't extend outside the borders of the EU; without an outright military invasion

All the lawsuits from the EU against tech companies outside the EU have been carried out without any military invasions required. You are delusional if you think the USA is going to go “Google won’t pay the fines, invade us if you want the money”


>You are delusional if you think USA is going to go “Google won’t pay the fines, invade us if you want the money”

Why don't you read my comment thoroughly before accusing people of being delusional? Google is registered in Europe (HQ in Ireland IIRC) and does business in Europe as a European company, so there's a physical, legal, tax-paying entity the EU can fine, and even European management they can send to jail in case they don't comply, same as they did with VW for the diesel scandal.

If you had read my comment thoroughly (hard ask, I know), you would have seen I'm specifically asking how the EU would fine foreign companies that have EU citizens' data but have no HQ or any legal entity in Europe that can be reached by EU law enforcement in case of GDPR violations.


Absolutely. A bunch of them also predate the www, or started just as the www was gaining popularity, meaning that information on possible products was more limited than it is today. Some have narrow use-cases, and some might have started really narrow in scope but then ballooned after gaining popularity, sales (and feature requests), and some probably started because the alternative was deemed too expensive compared to just making it in-house.

I think the main point bob1029 was trying to make is that it can be worthwhile doing something in-house if the alternatives don't match the use-case, are too expensive, or whatever else - but that you seriously need to consider whether your architecture is the best way to solve the problem before going down that route.


The AI Act classifies systems depending on their capabilities, their intended use, and the risk they pose to the people using or being exposed to them. If a tool ends up being classified as a high-risk AI system or as a model with systemic risk, there are strict requirements for reporting incidents. For certain types of incidents the deadline is as little as two days; for others, up to 15 days.


"I think this is the quickest way of clearing my overdraft"

That statement from Michael Eavis (founder of the festival) kind of hits home with a recent story from The Independent about how bands are settling for less, or even paying to play at the festival, as an investment. Being shown on the BBC as part of their coverage can make or break the experience (and their finances).

https://www.independent.co.uk/arts-entertainment/music/featu...

