Hacker News | johnp271's comments

I have not seen any indication that Rowling, Musk, or Adams assert that trans people are categorically a "major threat". That said, these folks do view trans people as a "major threat" to those athletes who compete in the category that once was exclusive to humans distinguished by having XX chromosomes. They believe, and rightly so in my opinion, that this athletic competition category should remain exclusive to those humans scientifically established (usually pretty obvious at birth) to have XX chromosomes.


>those humans scientifically established (usually pretty obvious at birth) to have XX chromosomes

It is entirely possible your heart is in the right place, but this specific comment gives it away that you haven't actually looked at this issue too closely. There are "scientifically established" reasons why this issue is a lot more complicated than the anti-trans folks always make it out to be, even if we completely ignore the existence of trans people. Look up Swyer Syndrome[1] for example.

[1] - https://my.clevelandclinic.org/health/diseases/swyer-syndrom...


CAIS is a better example. Even then it's overrepresented in athletic competition compared to the general population.

Swyer syndrome isn't a condition compatible with an athletic career because of the bone weakening caused by hormone deficiency.


The obvious indication is that they’re putting a huge amount of time and money into this issue. If they don’t think it’s a major threat then what are they even doing?


Researchers have located neurons in the brain that respond to '0' in the same region as neurons that respond to '1', '2', etc. The empty set (nothing) is apparently encoded differently.


I always figured that liberal radio, e.g. Air America, struggled and failed because that point of view had too much competition from other media, e.g. newspapers and television. Back before the internet, conservatives had only radio to hear what they considered their point of view taken seriously and the other side derided, whereas liberals had lots of other options for hearing their points of view taken seriously while the other side was painted as rubes.


I don't see it as ironic at all. One way anthropologists studying a society can gauge what that society values is to look at the punishments applied for various transgressions. There are severe consequences for transgressions against things that a society values highly. Thus if a society values each human life as the most precious component of their society, then it is not unreasonable if that society places the most severe penalty on anyone who ends such a life. For that "most severe penalty" to be the ending of the life of the perpetrator, if such judgement is determined with grave concern for all parties involved including the perpetrator, is not ironic. It signals to the members of that society that their very life is the most precious aspect of the society of which they are a member.


And yet we know that people get wrongfully convicted and executed somewhat regularly. If you truly value life highly, you must prefer life in prison over the death penalty. The risk of wrongful execution is unacceptable.


I too enjoy reading Tao, but this approach to inequalities did not work for me at all. I am a retired PhD mathematician and I've taught the full gamut of undergraduate math in college. BUT all my life I have struggled with the simplest currency conversion arithmetic when I travel overseas. When I saw Tao's first example I said to myself, if this was how I was introduced to inequalities back in school (long ago) I'd likely have been a history major.


The research and discoveries that are most deserving of a Nobel Prize are precisely the sort that are unexpected and unpredicted in advance. All this "Monday morning quarterbacking" by everyone who now suggests that this discovery should have been obvious 20+ years ago, or that the talent of those who made the discovery should have been obvious, is rather silly.

Arguably, the story of how this researcher was treated, and what she still managed to accomplish, can serve as inspiration and motivation to persevere for future generations of folks with unconventional ideas, or ideas that are disparaged by the 'experts'. Yes, it can also serve as motivation for research institutions to take risks and go out on limbs every now and then, as there can be some wheat hidden within the chaff.


"Unfortunately universities are not immune to the realities of living in a capitalistic society."

There might be unpleasant realities of living in a capitalistic society but they are less unpleasant than living in any other sort of society.


The CRTC News Release explicitly states:

"Today, the CRTC is advancing its regulatory plan to modernize Canada’s broadcasting framework and ensure online streaming services make meaningful contributions to Canadian and Indigenous content."

I would ask, who decides what is 'meaningful'? This sounds like an open door to censorship to me. Was not the requirement to make 'meaningful' contributions to society the excuse used by the Soviets and Red Chinese to stifle publications?


They are referring to the CanCon rules https://crtc.gc.ca/eng/cancon.htm

The rules have been around for 40+ years in one form or another.


A lot of the links on that page don't work, but I infer that it only applies to radio and TV, and it's not going to apply to podcasts just yet (perhaps in the future). Still, not much concrete info.


It will indeed apply to online services and affect their recommendation algorithms.


Do you have a reference for that?


Yes, I saw that, and there is no mention of any censorship, or even any plan to regulate content, other than the sentence you mention which seems to be in the future. So there might perhaps be some "censorship" or regulation in the future, but not now.


From what I know, it's just that you need a specified amount of air time from Canadian, French and indigenous presenters. What they choose to do with it is up to them.

It's more of a pass the mic around so every group gets some airtime law, and not so much a you cannot talk about this or that.

The idea is that it acts a bit like human rights law, balancing majority rule. Where financial incentives would always favor tailoring to the majority, since that makes more money, this ensures that minorities get some proportional airtime on big media channels.


> it's just that you need a specified amount of air time from French Canadian and indigenous presenters.

It's only specific to Canadian and Indigenous content. The content does not have to be French.


In a streaming, on-demand, user-driven world, how do you define "air time"? I can't see how you could force consumers to consume a certain amount of CanCon, and simply ensuring a certain CanCon percentage of available content doesn't make sense either, since a small amount of content could account for up to 100% of total platform consumption, and a large amount of content could account for as little as 0% of total platform consumption.


I'm not sure if they're going after individual podcast shows, or after podcast apps. If the latter, it probably just means making some amount of Canadian-made podcasts available and promoting them in recommendations and listings. If the former, it's a bit more strange: maybe inviting guests who are Canadian onto the show for some number of episodes?


Adobe Photoshop has just introduced their "generative fill" tool in the new beta version. It requires an internet connection and it goes out to the web and uses AI to fill photo regions with whatever text request the user inputs. While in a way it does nothing that could not have been done before with PS, it now can do in seconds - and by someone like me with very limited PS expertise - what previously may have taken hours and required considerable PS skill. We've come a long way since the predictions of 1984 (and not just as regards Orwell).


This artist's several-sentence summary of an ANN, relating it to prejudice, is fascinating: "The output of an artificial neural network can be roughly defined as a conclusion obtained by generalising a limited set of observations. Surprisingly prejudice can be defined in the same way. This will always be a problem with systems that generalize information. No matter how large and representative a dataset might be there will always be an eccentric outlier that will break the system." On the one hand, this succinctly sums up the challenges we face as AI systems become more and more ubiquitous; on the other hand, it sums up the reality we non-artificially-intelligent humans face in living our lives and dealing with day-to-day encounters.


"The output of an artificial neural network can be roughly defined as a conclusion obtained by generalising a limited set of observations. Surprisingly prejudice can be defined in the same way."

Not really surprising. The first thing they teach in data science is that bias is everywhere. One of the first things taught in programming is garbage in, garbage out, and that computers do exactly what we tell them. Once you start making decisions with biased data, you will start to prejudice some group.

The quest for non-biased systems is a little like a perpetual motion machine. If we all have biases, and these machines learn from the same data we do, using systems we write, how could one expect a different outcome?
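As a toy illustration of that "garbage in, garbage out" point (the groups, rates, and recording process here are entirely hypothetical), here is a sketch where two groups have the identical true approval rate, but a selection-biased recording process makes a naive rate-based rule treat them differently:

```python
import random

random.seed(0)

# Hypothetical population: two groups, A and B, with the same true
# approval rate of 50%.
data = []
for _ in range(1000):
    group = random.choice("AB")
    ok = random.random() < 0.5  # the true outcome, identical for both groups
    # Selection bias in record-keeping: most of group B's approvals
    # never make it into the historical data.
    if group == "B" and ok and random.random() < 0.7:
        continue
    data.append((group, ok))

# A naive "model": just compare each group's historical approval rate.
for g in "AB":
    rows = [ok for grp, ok in data if grp == g]
    print(g, round(sum(rows) / len(rows), 2))
```

Group A's recorded rate stays near the true 50%, while group B's looks far lower, so any threshold rule fit to these records inherits the bias of the collection process rather than any fact about the groups themselves.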



This is why you strive to identify the biases and move them from system 1 to system 2 thinking. AI will help humanity operate in a direct cognitive regime rather than sub-conscious one.


Not all or even most subconscious influence on decision making is bad. There is plenty we can’t yet quantify and fully understand.


There is a classic theorem from computational learning theory that says that if all hypotheses are equally likely, then no generalization can happen. I.e., bias is necessary for learning.

To respond to some sibling comments: Yup, this is prejudice. I'll try to analogize the theorem with an example: without prejudice, you can't recognize a leaf in a figure, because alternate hypotheses (there are an arbitrary number of things in this universe that look like leaves but in fact are not) are equally likely.
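The theorem can be made concrete with a tiny enumeration (my own toy example, not from the comment above): over all Boolean functions on two bits, the hypotheses consistent with the training data split their votes evenly on an unseen input, so without an inductive bias the learner has no basis to generalize at all:

```python
from itertools import product

# All Boolean functions on two input bits, each represented as a truth
# table: a dict mapping input pair -> output bit (16 functions total).
inputs = list(product([0, 1], repeat=2))
all_functions = [dict(zip(inputs, outs))
                 for outs in product([0, 1], repeat=len(inputs))]

# Training observations: two labeled examples.
train = {(0, 0): 0, (0, 1): 1}

# Keep only the hypotheses consistent with the training data.
consistent = [f for f in all_functions
              if all(f[x] == y for x, y in train.items())]

# On an unseen input, the consistent hypotheses vote 50/50: with no
# preference (bias) among them, no generalization is possible.
votes = [f[(1, 0)] for f in consistent]
print(len(consistent), votes.count(0), votes.count(1))  # → 4 2 2
```

Any learner that does predict something on (1, 0) is, by construction, preferring some hypotheses over others, which is exactly the "bias is necessary for learning" point.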

My advisor once told me that machine learning is the study of biases.

"Without the aid of prejudice and custom, I should not be able to find my way across the room." - William Hazlitt


Isn't this more a study of priors and statistics than bias? Bias would be an error between some latent underlying value and an estimate of it.


This labels all imperfection in reasoning "prejudice".

Seems like a biased premise.


Definition 1 of prejudice in a lazy google search is "preconceived opinion that is not based on reason or actual experience." Certainly that figures into most reasoning, considering that perfect information is impossible.


That definition seems to cover everything you've learned from a textbook, a video, a lecture, another person, or in any other indirect way, and which you didn't have an opportunity to think through yet.

Which is... most of the thing people know? Including, ironically, this very definition, which I learned about from a HN comment that quoted a Google search result...


I guess it depends on what we define reason and reasoning as. Are the rules a “reason” even if not “reasoning”?


It doesn't label. It doesn't do anything to "all" of anything. It doesn't refer to any "imperfection," and it doesn't address "reasoning."

This is a misrepresentation of the parent comment and the article.


It's interesting. Everybody is always talking about creating unbiased machine learning models, but we're still no closer to cracking the code on unbiased humans.


In the data sense, isn't bias literally just the result of limited/narrow data? So isn't the problem not in how you train models, but simply the fact that it's impossible (or exceedingly difficult) to provide complete and universal data?


Bias of a data set is when it doesn't reflect the true underlying distribution of nature.

So a face corpus with only white faces doesn't reflect the diversity of faces one encounters in the world.

With that said, unbiasing data is extremely difficult because the true distribution of things is unknown and sometimes subjective. The visual images you would encounter as a human from birth to death growing up in a first world country would be very different from that of a drone's video camera. Are we really sure that imagenet should be K% animals and not K/2% animals? And if you train a machine learning algorithm on every possible image with every possible pixel, it will just learn noise.
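To make the "true underlying distribution" point concrete, here is a small hypothetical sketch: the world has 30% positives for some trait, but a collection process that keeps positives more often than negatives yields a corpus whose apparent rate is far higher:

```python
import random

random.seed(1)

# Hypothetical "true" distribution of a trait in the world: 30% positive.
population = [1] * 300 + [0] * 700

def biased_draw():
    """Draw from the population, but under-record negatives:
    positives are always kept, negatives only 1 time in 3."""
    while True:
        x = random.choice(population)
        if x == 1 or random.random() < 1 / 3:
            return x

sample = [biased_draw() for _ in range(2000)]
# Prints an estimate well above the true 0.30.
print(round(sum(sample) / len(sample), 2))
```

Any model trained on this corpus sees a world where the trait is roughly a coin flip rather than a 30% minority, which is the same failure mode as the all-white face corpus above, just with the bias made explicit in the sampler.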


I'm not biased. It's everyone else that is.

