Hacker News | BubRoss's comments

> I'm a gay man of color

What is the connection here? What does that have to do with linking relationship dynamics to astrology?


The article is mainly about how cis white males don't like astrology and thus it is misogyny. So the connection is pretty much perfect as he is giving an example of how you can be annoyed by astrology without being a cis white male.

edit: typo


I was thrown by the same thing. Both the article and the parent comment have the same premise - the author presumes that Rachel, as a woman, is more qualified than the average person to judge whether Apple's rejection of an astrology dating app is an act of discrimination. And then parent comment says that "as a gay man of color," he's more qualified than the average person to judge whether Rachel / the author's complaint about discrimination is well founded. It's a subtle form of the argument to authority fallacy.


I mean, is a woman more qualified to judge whether a situation is an example of misogyny? Perhaps not, as a man could reach the same conclusion. But it's certainly more appropriate than leaving it to a group of men, or an industry that is generally dominated by men. It's relevant, not because the logic is dependent on the fact, but because it is a perspective regarding societal power.

The article indicates that hating astrology is characteristic of white cis men. The parent comment says "FWIW, I'm not those things, and I hate astrology." They aren't asserting their authority to make a logical argument. They are offering their perspective as a sample.

Respectfully, I think you are letting your intellect get in the way of your ability to think by dismissing these perspectives, misidentifying a report of someone's subjective experience as a logical argument, and therefore a fallacy.

Personally, I hate astrology, but I wouldn't have this app flagged as spam and banned from the app store because of it. I think the argument that this is a form of (probably unintentional) racial/gender/sexual discrimination holds water. If nothing else, I find the idea that Apple happily exerts this kind of force on their customers' personal interests super frustrating and distasteful, and bad for business. Money talks, and bullshit walks. If there is no market for it, let it die on its own merit. I don't need Apple's protection from astrology.

We don't all have to embrace each other, but we do have to suffer each other. And traditionally, some of us have been suffering a lot more than others.

That's just my opinion. Take it for what it's worth.


TFA states that white men are against astrology because women and POC like it (???)


Have you actually gotten the static library and compile to memory mode to work?


Yes, I used it in a game engine for many years, on Windows and Linux. No problems on that part.

I don't really recommend doing it, unless you want to do it as a learning experiment. It was a fun thing to do.


I got it to work as a DLL for calling into C from Common Lisp when I needed something that was header-file dependent. It was really just a proof of concept though, because if you distribute source, you can require that they have CC installed just as easily as libtcc, and if you distribute a binary, you can grovel ahead of time and don't need libtcc...


I have tried the latter and it seems to work fine.


I don't understand why something like this would need a separate language. Switching languages means starting over in many ways with regard to tools, libraries, and semantics. A graph of tasks can be made with an ordinary library using the C calling convention (cdecl).
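A minimal sketch of that claim, using only the standard library (task names and bodies here are invented for illustration): a dependency graph of tasks can be built and run in topological order with a plain library, no special language required.

```python
# Sketch: a task graph executed by an ordinary library (graphlib is in
# the Python standard library). Task names and bodies are hypothetical.
from graphlib import TopologicalSorter

def load():      return "data"
def parse(d):    return d.upper()
def report(p):   return f"report({p})"

# Edges: each task maps to the set of tasks it depends on.
graph = {"parse": {"load"}, "report": {"parse"}}
tasks = {"load": load, "parse": parse, "report": report}

results = {}
for name in TopologicalSorter(graph).static_order():
    deps = [results[d] for d in graph.get(name, ())]  # gather dependency outputs
    results[name] = tasks[name](*deps)

print(results["report"])  # report(DATA)
```

Whether a library can also *enforce* the model's rules, rather than just express the graph, is what the replies below dispute.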


Have you read Boehm's paper 'Threads Cannot be Implemented as a Library'?

It's the same reason.

Your dataflow semantics need to be part of the language semantics, otherwise they're bound to be loosely defined and even more loosely enforced.


That's an assertion, but there's nothing to back it up.

First, threads have been implemented as libraries many times. Second, if checks need to happen, they can happen at debug run time if they can't happen at compile time. I don't see what specifically has to be integrated into a language here that justifies throwing away the enormous amount already built in other languages.


These points are all clearly addressed in the paper I referenced.

> That's an assertion, but there's nothing to back it up.

Yes it's an opinion on style, not a falsifiable claim.

> First, threads have been implemented as libraries many times.

The title isn't intended to be taken quite so literally. The author explains why they think these don't work correctly.

> Second, if checks need to happen, they can happen at debug run time if they can't happen at compile time.

But languages don't have mechanisms to implement these kind of checks.

> I don't see what specifically has to be integrated into a language here that justifies throwing away the enormous amount already built in other languages.

You're mistaken. It's not 'integrated into', it's 'taken out'.

Libraries let you add things, but to design a model for parallelism you generally want to take things away. You want to take away the ability to do things outside the model's rules.


An often-overlooked quality of any new language is the set of things that you can't do in it. Some features can only be accomplished when certain negative guarantees can be made about programs. And it's really hard to implement negative guarantees as a library.


It's worth noting that Regent is the language that implements the Legion programming model. The Legion runtime system is just a C++ library with bindings for C, Fortran, Python, Terra, and Lua. Writing Regent code is much higher productivity than writing to the C++ Legion library directly, but if you want to you can drop down and write your tasks in any of the other languages that Legion supports. You can even mix and match tasks written in different languages.


Zero Hedge for some reason had a crazy downward spiral in their comments. Many years ago I don't remember them being out of the ordinary, but one or two years ago I checked back and they were full of insane alt-right conspiracies and super pro-Trump adoration.


The reason is that they're one of the few sites that doesn't moderate comments into the dust.


Trump is your elected president (assuming you are in the US); plenty of people voted for him, so why would it be unusual that a lot of people are pro-Trump? Especially considering the vast majority of the media seems to be against him.


Rendering at a higher resolution makes a giant difference, even for low poly games. Old games were not only low res, but didn't have a lot of antialiasing either. On an emulator you can have resolution, antialiasing and better texture filtering.


Were you upset about this before the article?


What are you basing this on exactly?


Despite years of propaganda, C is not well matched to CPUs currently in use (quite the opposite in places), and the typical optimizations don't necessarily work when dealing with external I/O that you need to do in a switch.

Essentially, even if you write in C, reaching higher speeds will involve using "assembly masquerading as C" rather than depending on compiler optimizations.

Also, Snabb uses LuaJIT, which already generates quite tight code, so the performance gap that I suspect some imagine just isn't that wide.


> C is not well matched to CPUs

> depending on compiler optimizations

> Snabb uses LuaJIT ... quite tight code

Put together: you can write great C-based systems and avoid assembly if you a) know, always, what your compiler is doing and b) know, always, what your CPU is doing...


My point was that the same optimizations that made C fast break down when you need to take into account the often intricate dance between CPU caches, memory, I/O bus etc. - so that unless you go into cpu-model-specific assembly tweaks, just using C might not bring you as much benefit.

Is it possible to get better? Yes. Would it count as "normal C"? I would say not really (if we say yes, then CL code on SBCL with huge custom VOPs counts too).


I'd be interested to learn about significantly complex, nontrivial systems of say 100K to 1M LOC scale that require reasoning about every single instruction from the perspective of every other instruction, in order for the system to work.


You do not do that. Instead you optimize the short, critical part.

Of course it does not apply to everything, you need a few hotspots, but it is quite common: audio/video codecs, scientific computation, games, crypto... And even networking.


And with Lua (and C, and C++) it's pretty easy to manage the complexity. Just put things where they belong.


I think the answer here is that LuaJIT is fast, and that well-written native programs would still be faster, not that C "isn't well matched to CPUs". Modern optimizations are more about memory access patterns than anything else, with SIMD and concurrency beyond that. Focusing on assembly is really not the apex it used to be. For starters, CPUs have multiple integer and floating-point units, and they get scheduled in an out-of-order CPU. Out-of-order execution is as much about keeping the various units busy as it is about doing loads as soon as possible to avoid stalling.
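As a rough illustration of the memory-access point (Python only hints at the effect a compiled language would show much more starkly; the sizes are arbitrary), the same data summed in two traversal orders gives the same answer but different locality:

```python
# Sketch: traversal order over the same matrix, one walking each inner
# list contiguously, the other hopping across rows on every access.
import timeit

N = 500
matrix = [[1] * N for _ in range(N)]

def row_major():   # visits each inner list contiguously
    return sum(matrix[i][j] for i in range(N) for j in range(N))

def col_major():   # jumps to a different row on every access
    return sum(matrix[i][j] for j in range(N) for i in range(N))

assert row_major() == col_major() == N * N  # identical result either way
t_row = timeit.timeit(row_major, number=5)
t_col = timeit.timeit(col_major, number=5)
print(f"row-major: {t_row:.3f}s  col-major: {t_col:.3f}s")
```

In C the gap between these two loops is typically much larger, which is the sense in which data layout dominates instruction-level tweaking.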

I think if you are going to claim that C or C derivatives aren't actually fast and the idea that they are is due to "propaganda" then you should back that up with something concrete, because it goes against a lot of established expertise.


I don't think anyone rejected it because of sandboxing. It is a new API that would only work on Windows with that installed. Learning something completely new for a niche target is not very enticing. Also, if you can write something to be sandboxed, there is a good chance it can be a web page instead.

Sandboxing also needs to come from the other direction.

Having programs be made to be sandboxed defeats the purpose. What is needed is the ability to contain all of a program's files in one place and isolate access to the file system, with permissions for other resources.


Why do you think there is a risk to babies? What are you basing that on specifically? It seems like you are reaching for something "untested". Also, microwaves use 2.4 GHz, heat lamps are infrared, and 5G spans a wide array of spectrum, so what specific frequency and power do you think will harm babies, and why?

(Also if something just affects babies, it doesn't make sense to talk about the cumulative effects over decades)


The established consensus is that mm-waves do penetrate the dermis of babies. Even recent research can agree on that. Here's one recent source.

https://www.researchgate.net/publication/334078749_Millimete...

I do not believe that anything in the tens of GHz is either safe or unsafe. The frank reality is that we simply do not know, as not one person in human history has ever lived with a mm-wave source of any power for an extended period of time.

With new RF technology, the burden is usually on the creators to demonstrate its safety, not random Internet commenters to prove its danger.

I was hoping to have an actual discussion here, but I guess people would rather just blindly defend any new technology without even a cursory look at its safety.


That's not even remotely close to being a source for what you are saying.

You need a source that shows that there is something different about babies, different about the 5G frequencies, and different from the light, infrared, microwave and radio frequencies that are currently being used everywhere.

Instead, you referenced a paper on a military pain ray that uses focused, super-high-power 100 kW, 95 GHz waves and says it penetrates 1/64th of an inch. Your microwave is 2.4 GHz, just like your router, but it works at 1 kW. There is an enormous gap between saying "what about babies" and what you linked. They have basically nothing to do with each other. I think you realize that, but you keep saying "we don't know", when you really mean "I don't know".

> the burden is usually on the creators to demonstrate its safety

How can anything be demonstrated to someone who ignores what they are being told and repeats "we just don't know" while supporting their predefined beliefs with giant leaps in logic and excessively irrelevant information?

> I was hoping to have an actual discussion here

I don't believe you. You didn't even confront my original question of what frequencies and effects you are specifically worried about. All you seem to have brought up so far is heat, from power 100 times what your microwave uses. 100 kW is 4,000 times the power of a soldering iron that can melt tin and lead.


Oh come on. What I am saying - and I will repeat this - is "mm-waves do penetrate the dermis of human babies." No more, no less. This is the scientific consensus. I provided the first source I found, which related to military research, yes. The broader context is irrelevant, I'm not sure why you wrote a whole comment harping on that. I provided it as a simple counterpoint to someone saying that mm-waves did not penetrate skin.

And the broader context of what I am saying is not that 5G is dangerous, just that there should be more effort to determine its safety before rolling out a new type of EM radiation that humans have _never lived with before_.

Looks like I am not the only one saying existing studies do not cover things, and there is a need for further research: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6765906/


You realize that a source would be something that has to do with what you are saying, right? I asked for a source on how a 5G frequency at that amplitude would affect a baby differently, and you linked a 100 kW military pain ray that has nothing to do with what you are saying, then said, 'I just linked the first source I could find'. That isn't a 'source'. Also, your current link doesn't say anything about what you are claiming and actually contradicts it. Even what you linked says that it doesn't go as deep as the thickness of a fingernail. Wifi penetrates more and infrared heat lamps are far more powerful, so again, what frequencies and what effects are you specifically worried about?

> new type of EM radiation that humans have _never lived with before_.

It isn't a 'new' type of em radiation. It isn't a mystery, it is just a mystery to you. You just avoid confronting real information by saying "I don't know". You have been shown facts, your ignorance of the subject is your own fault at this point, but you are desperate to hang on to it.


We've warned you multiple times that we'd ban you for attacking other users. You've kept doing it, so I've banned you.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.

https://news.ycombinator.com/newsguidelines.html


It looks to me like I've been banned for calling a YC-funded hypnosis app nonsense.


We moderate HN less when YC or YC-funded startups are the topic: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu.... That doesn't give you a pass to break HN's rules egregiously and ignore our requests to stop. You've been doing this for a long time, including with accounts before this one, and we've warned you on many occasions. Actually we probably let you get away with it for longer than we normally would.

Looking back at your account history I noticed other cases of you attacking other users, which we didn't see at the time and certainly would have moderated and perhaps banned you for. Example: https://news.ycombinator.com/item?id=23186380.

I appreciate that you've also posted a lot of good comments (but then again, also a lot of snarky ones). If you genuinely want to commit to using HN as intended, we can unban you.


There is a difference between attacking someone and being blunt. If you want to ban people and scold them for being blunt then you might as well say that, but to say this is attacking someone is not reasonable. It would be different if there was some sort of uniformity to this, but it seems more like a self righteous crusade to get people to apologize to you personally for not sugar coating what they say.


I've occasionally bit my tongue really hard in not calling someone an idiot outright.

Semi-recent example (shared previously w/ dang):

https://news.ycombinator.com/item?id=22133112

The person I'd replied to ... thanked me.

Your reply reads better without the attacks.


I don't need you or anyone to apologize to me personally. The only issue is whether you'll abide by the site guidelines in the future.

Interpretation is part of applying the rules, but I don't think there's so much variance in how we interpret them. Something like that gets pretty regular and tedious after you've done it a hundred thousand times.


Nah, sorry, you can't use that both ways. Either you generalize existing research to cover a range of EM and circumstances of use, or you don't. You can't just pick one when it's convenient.

>> You have been shown facts, your ignorance of the subject

OK, +1 for making me actually laugh. Dude, I am the only one of us linking any scientific studies or literature review whatsoever. You're the one sprinkling snarky one liners about routers burning you. You also still didn't respond to https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6765906/

Just relax: I'm not saying 5G is the illuminati turning the frogs gay, I'm saying -maybe- we should research a new technology before deploying it to humans at an unprecedented scale.


Please don't post flamewar comments to HN. It's exactly the opposite of what this site is for.

https://news.ycombinator.com/newsguidelines.html


> we should research a new technology before deploying it to humans at an unprecedented scale.

5G is an industry marketing term, not some fundamental redefining of physics. Nothing about emission of radio waves is some new or unknowable phenomenon that requires more research to understand. If you're suggesting otherwise, the burden of providing even a mildly plausible hypothesis is on you, not just "I don't understand this, therefore maybe it is bad".


All you did was link a 20 page paper without quoting what you think is relevant, and yet I already said that it contradicts what you are saying. It doesn't talk about children or infants or babies but it does talk about how shallow the penetration depth is.

I've given you real numbers from your own sources and broken them down into intuitive examples. You might not realize this, but this is just an exercise in seeing what someone does when they have to continually reject information, while having none of their own, to hold on to a belief that is rooted in emotion.

> I'm saying -maybe- we should research a new technology

Now you are saying that; before, you were saying "what about penetrating a baby's thin skin". It has been researched, you found some research, it just doesn't say what you want.

We haven't even gotten to the fact that microwave communications are already on lots of cellphone towers and used to communicate with office buildings etc. Not only that but these higher frequencies don't even go through walls. They barely go through rain or fog.


Your microwave uses 2.4 GHz radiation, does your router burn you?


Even though that is indeed a simple, quite trivial example, it does not really apply to the case of 30~50 GHz in combination with significant beamforming, because the average power density deposited into surface tissue can be much higher than with 2.4 GHz. You just can't focus 2.4 GHz onto a square inch (~6.5 cm^2) with a far-field-optimized antenna. According to [0], humans can sense an energy dose of 4 J (0.2 K threshold, 5 cm^2 area, 1 cm penetration depth) in their palm due to the temperature rise. Assuming, say, 1 W focused onto the spot, that'd be just 4 seconds until it's perceptible. Measurable, harmful effects at thresholds slightly lower than what's perceptible don't seem that far-fetched, once you consider the shallow sub-surface heating.

I don't think there'd be any frequency-dependent effects that don't stem from the interaction of penetration depth, triggered circulation (transporting heat away), and deposited power. Neglecting triggered circulation, it'd seem viable to just test thermally-mediated health effects from focused, 0.01~10 W @ 20~50 GHz (spot size limited due to a reasonable (0.1~0.8) numerical aperture of the beam focusing system) radiation.

[0]: https://books.google.de/books?id=4dL6CAAAQBAJ Fig 3.a page 437
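The 4 J and ~4 second figures above can be checked with back-of-the-envelope arithmetic. Treating tissue as water with unit density is an assumption of this sketch, not something stated in the cited source:

```python
# Sketch: reproduce the quoted perceptible dose (0.2 K rise in a
# 5 cm^2 x 1 cm volume) and the time-to-perception at 1 W absorbed.
SPECIFIC_HEAT_WATER = 4.186   # J/(g*K); tissue approximated as water
area_cm2, depth_cm = 5.0, 1.0
mass_g = area_cm2 * depth_cm * 1.0   # assuming ~1 g/cm^3 density
delta_T = 0.2                        # K, perception threshold

dose_J = SPECIFIC_HEAT_WATER * mass_g * delta_T
power_W = 1.0
print(f"dose ≈ {dose_J:.1f} J")                     # ≈ 4.2 J, matching the cited 4 J
print(f"time to perceptible ≈ {dose_J / power_W:.1f} s")
```

This ignores blood circulation carrying heat away, as the comment itself notes, so it is an upper bound on how fast the threshold is reached only under those simplifications.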

