We, collectively as a community, are forced to play these stupid RL career games because if you refuse, you become illegible and, consequently, invisible to sources of physical, emotional, and intellectual sustenance. It’s like we are all trapped in vicious cycles of RL career games while hoovering others into these cycles along with us.
Once a critical threshold of people start playing these RL career games, these terrible metrics get elevated to some weird group fairness metrics for hiring/admissions/compensation decisions, no matter how inequitable these games are and how disparate the outcomes are. The metric has moved beyond convenience to something hard to root out. The terrible metric becomes tyrannical, and complaining about it makes you sound like someone who “blames the game for being a bad player”. Even if it was a game you never wanted to play to begin with.
Be the change you want to see in the world. Hire illegible people. Someone is going to tell you that you can’t and cite vague legal reasons. Unless that person is an actual lawyer giving formal legal advice (or your boss), just ignore him.
As a general matter you should always ignore non-lawyers citing vague legal reasons for why you can’t do something.
Therein lies the problem. Most people are apathetic and incompetent. Winning rat race points requires some amount of competence and "oomph", so rat race points will always correlate positively with applicant quality. Most rat-race-losers, unfortunately, aren't Socrates.
Or just leave academia. In the US at least, the job is like 80% government contracting and 20% teaching.
Teaching is great, so there's that. But literally every company will let you adjunct, and Professor of Practice usually pays more than 20% of a faculty salary. You can supervise PhD students as interns or by taking a courtesy affiliation (and often have even more impact on those students than their overworked and under-engaged advisors). And university classroom teaching in the US now looks a lot more like 90s/mid-aughts high school teaching.
Government contracting sucks, and the academic variety is no better. I'd literally rather watch paint dry at a military base than contract for DARPA. NSF isn't actually that much better.
Who the fuck wants to be a combination high school teacher and federal government contractor? Saints or sociopaths, and there are a LOT more of the latter than the former in higher ed.
Honestly, is there a big difference anymore? The vast majority of papers I read are either by industry directly or have industry as a partner (as an author, not just acknowledgements). There are of course exceptions, plenty even, but it does seem an industry partner is almost necessary these days. I'm not convinced that level of interaction is healthy, for either party.
Only a very small subset of industry cares about academic publishing, and even within that subset it's only a fraction of groups at a fraction of corps that consider publishing a primary or even secondary objective.
The groups that do care about those things can be good gigs, but are generally not the place in the company you want to be anyways, unless you can get in and out (for good) in <10 years. If you can do something that actually impacts the business -- that is actually useful to other humans -- no one gives a shit about h-indices or kaggle scores. And you'll be better compensated anyways.
You're measuring the wrong direction. Don't measure what percentage of industry publishes with academics; instead, measure what percentage of academics __in ML__ publish with industry. This direction matters because one population is much larger than the other. Second, I mean... I am a researcher... and I'm talking about the environment I'm working in. It sounds like you're outside this environment trying to educate me on it. Am I misunderstanding here?
> can do something that actually impacts the business -- that is actually useful to other humans
Do not confuse these two. That's incredibly naive.
> Honestly, is there a big difference anymore? The vast majority of papers I read are either by industry directly or have industry as a partner (as an author, not just acknowledgements).
Read more pure math papers, then you will see the difference. :-)
I thought we were talking about ML here. I mean, you're not wrong (I do do this), but context matters. In ML, well... even Max Welling is connected with Microsoft.
This is no contradiction: there exist quite pure math papers whose content is very relevant to the mathematics behind ML algorithms. :-)
I do have the impression that the kind of ML research not strongly associated with the recent "machine-learning industrial complex" now tends to get published under another subject area.
Sure, I agree with you. I just wouldn't refer to that work as pure math. And let's be real, most people are not working on the theoretical side of ML. Realistically, people are anti-theory in the ML space, and it's really weird to me because it's a self-fulfilling prophecy: the complaint is "it's not very good because not a lot of community effort has been put in, so let's not waste our time."
The problem is that AI is weird, but not because of academia. In fact, right now it has been captured by industry, and that's why we've severely slowed down in progress[0]. Most people in the space now work in industry labs. Frankly, you can do more, you get paid A LOT more (2-3x), and you have less bureaucratic bullshit. But I think you're keenly aware of this industry capture, since you're mentioning aspects of it.
I don't want there to be any confusion: I think it is good that industry and academia work together. There are lots of benefits. But we also need to recognize that the two typically have very different goals, work at different TRLs (technology readiness levels), and have very different expectations about when the work will be seen as impactful. Traditionally, academia has been the dominant player in the high-risk/high-reward, low-TRL research space (yes, much more goes on too, but of people who do that type of research, you think academia), while industry research typically focuses on higher TRLs because it's aimed at selling things in the near future. There's just a danger when you work too closely with industry: you can't have any wizards if you don't have any noobs.
But I'm not sure it's just ML that's been going this way. There's a lot of sentiment on this website where people dismiss research papers (outside ML) that show up here because they aren't viable products. I mean... yeah... they're research. We can agree that the value is oversold, but often that's by the publisher (read: the university) and not the paper (not sure I can say the same for ML). It's kind of an environmental problem: if everything has to be a product, you can't be honest about what you did. And if discussing the limits, and what you still need to improve to actually get a product down the line, gets you rejected, well... you just don't talk about that.
This is all "RL hacking", better known as Goodhart's Law. I've been saying we're living in Goodhart's Hell because it seems, especially in the last 5-10 years, we've recognized that a lot of metric hacking is going on and decided that the best course of action is not to resolve the issues but to lean into them. We've seen the house of cards this has created; crypto is a good example. The shame is if we kill AI, because there is a lot of real value there. But if you're a chocolate factory and promise people that eating your chocolate will give them superpowers, it doesn't matter how life-changingly delicious that chocolate is: people will be upset and feel cheated. Problem is, the whole chocolate industry is doing this right now and we're not Willy fucking Wonka. (There's a toy sketch of this selection dynamic after the footnote.)
[0] More progress looks like it's being made than actually is, and there's a lot of progress that should have been made but wasn't; these nuances are harder to discuss without intimate knowledge of the field. I'll say that diffusion should have happened much sooner, but industry capture had everyone looking at GANs. Anything that wasn't got extra scrutiny and became easy to reject for not having state-of-the-art results (are we doing research or are we building products?).
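To make the Goodhart dynamic concrete, here's a toy sketch (my own illustration, not from anyone in this thread; the population size and weights are made up). Agents have a true quality and a metric-gaming skill; we select the "top" of the pool by a proxy score, and the true quality of the selected cohort falls as the proxy rewards gaming more heavily:

```python
# Toy Goodhart's Law sketch: selecting on a gameable proxy metric.
import random

random.seed(0)

# Each agent: true quality (what we actually want) and gaming skill
# (citation rings, benchmark hacking, salami-sliced papers, ...).
agents = [
    {"quality": random.random(), "gaming": random.random()}
    for _ in range(10_000)
]

def proxy_score(agent, gaming_weight):
    # The observable metric: real quality plus however much gaming pays off.
    return agent["quality"] + gaming_weight * agent["gaming"]

for w in (0.1, 1.0, 3.0):
    # Select the top 100 by the proxy, then check what we actually got.
    top = sorted(agents, key=lambda a: proxy_score(a, w), reverse=True)[:100]
    mean_quality = sum(a["quality"] for a in top) / len(top)
    print(f"gaming weight {w}: mean true quality of selected = {mean_quality:.2f}")
```

With a tiny gaming weight the proxy still tracks quality; once gaming pays as much as or more than quality, the selected cohort is increasingly just the best metric-hackers, which is the "critical threshold" the top comment describes.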
Only a relatively tiny sliver of PhDs doing top-tier ML research are in groups, at corps, that care about publishing in academic conferences.