When I was managing, I informally typed people during the interview process, which was very helpful in predicting both how they would interact with other team members and how they would do at particular tasks. However, I learned not to rely on self-reported personality types or tests, which are often answered aspirationally, not realistically.
To me, MBTI or Big 5 (not the actual tests, but their framing of aspects of personality) are mental tool kits for trying to make better predictions from limited data (i.e., the interview process). As a manager, I've found them incredibly helpful for avoiding problems (i.e., assigning the wrong task to a person).
Interestingly, in my personal experience, I've found logisticians (ISTJs in MBTI) seem to be the most resistant to quantifying aspects of personality.
You might be experiencing confirmation bias. Without a double-blind study there is no proof that your evaluations, let alone your conclusions, are valid.
I’m sorry to say, but your methods are flawed, as are your predictions. Your methods may only be helpful in confirming your biases, nothing more.
>As a manager, I've found them incredibly helpful for avoiding problems (i.e., assigning the wrong task to a person).
It sounds like you're assigning tasks based on prejudicial categorization of genetic traits (as MBTI claims them to be) instead of actual demonstrated job performance.
Some problems are very detail oriented, others are much more vaguely defined. As a manager, I assigned vague problems to one person because they weren't worried about the lack of structure to the problem. The devil-in-the-details problems I assigned to another person. Both people were happier and clearly more productive. Could I have made the same decisions without an explicit personality framework in my mental toolkit? Yes, but it would probably have required more experience than I had.
> To me, MBTI or Big 5 (not the actual tests, but their framing of aspects of personality) are mental tool kits for trying to make better predictions from limited data (i.e., the interview process). As a manager, I've found them incredibly helpful for avoiding problems (i.e., assigning the wrong task to a person).
I think of MBTI as a framework for categorizing human tendencies. But the real power of MBTI is in cognitive functions as vocabulary to classify how people will react in situations.
Given a person without major forcing functions changing their behavior, what tendencies do they have?
Cognitive functions define a set of tendencies that you can have a discussion around, assuming you agree on what each cognitive function really means.
For instance, someone with Ti as a primary function subconsciously looks for correctness that can be validated, whereas someone with Fi as a primary function subconsciously evaluates whether someone is expressing their true self or not. A manager can use this as a tool for behavioral risk management.
But everyone has forcing functions that change their behavior, and people with enough practice can get good at anything. As an evaluation tool MBTI probably accounts for at most 10-25% of the outcome. And the older you get, the more well rounded your cognitive stack becomes, making it even less of a predictor of your behavior.
Additionally, cognitive functions are abstract enough to not correlate highly to specific tasks, although one can certainly argue a correlation (I've heard arguments where programming suits Ti and Ni the best).
"Interestingly, in my personal experience, I've found logicians (ISTJs in MTBI) seem to be the most resistant to quantifying aspects of personality."
There's an internal, voluntary, just-for-fun MBTI within Google that breaks down stats of the test-taking population. 75+% of takers are N's (vs. an estimated 25% in the general population). The number of INTJs (22%, vs. ~3% in the general population) itself outnumbers all S types combined.
While it's possible that Google is unrepresentative of the general population (something much more likely in 2005 than 2022, though), I think it's more likely that N's are just more drawn to completing a personality test and sharing their results with the company.
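Back-of-the-envelope version of that selection effect (all of the participation rates here are made up, just to show the mechanism):

    # Assume Google's workforce matched the general population (25% N types),
    # but N types opt in to the voluntary test at a higher rate (made-up rates).
    pop_n, pop_s = 0.25, 0.75                # population share of N vs. S types
    take_rate_n, take_rate_s = 0.60, 0.15    # assumed voluntary participation rates

    observed_n = (pop_n * take_rate_n) / (pop_n * take_rate_n + pop_s * take_rate_s)
    print(f"N share among test-takers: {observed_n:.0%}")   # ~57% from a 25% base rate

Even modest differences in who bothers to take the test move the observed share a long way from the base rate.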
I've used StrengthsFinder before (after hiring; I haven't used it before hiring) and it's helpful for team composition and for understanding who might be good at AND interested in certain things.
Not a huge sample size, but everyone has agreed with the output of their tests. Only in a few instances (like one of the strengths for a few people) were there surprises (e.g., Woo under Influencing), but they understood why it was a top strength and how it did make sense, even though they wouldn't have chosen it for themselves if they had been asked to pick their top 5 strengths from the list instead of taking the test.
I don't like the personality tests at all. A strong team IMO should have a mix of personalities and strengths and I prefer to build teams based on just a few core values. From that approach, I've gotten a great mix of both personalities and strengths organically.
I could see maybe some teams using personality tests (e.g., for sales) to architect a desired team type in a specific industry. Research is all over the place, though, on what's ideal. And can it be trusted as a hiring framework?
There are many other factors to consider. Let's say 5 specific personality traits perform the best in the aggregate in sales for one industry. But then people with those traits may be hard to retain. They may not get along with one another. They may create a toxic environment. There are too many things that are very hard to measure IMO to use that as your framework.
So I don't buy the personality test stuff in general. I suppose it may be a decent data point as your example of avoiding "assigning the wrong task to a person", but a strengths based approach makes more sense for that to me.
I’ve been an ISTJ (but have moved closer to an ISFJ over the years) and am not resistant to quantifying personality aspects - as long as results are disclosed and I’m informed it’s being done. Still, I’m not going to do a 128 question personality test to get hired - interviews are already long enough.
A stethoscope in the hands of a doctor is a useful, if limited, tool. Don't judge its value if you have never seen one except in the hands of the village idiot. (One could say the same about various programming languages.)
If measuring skull shape during the interview process were (a) socially acceptable and (b) actually predictive of outcomes, why not? The problem is that it is neither. The reason it is not (a) is that it was not (b), and it was thus easily misused to justify stereotypes.
> A stethoscope in the hands of a doctor is a useful, if limited, tool.
The MBTI is tea-leaf reading. It has been experimentally shown, time and time again, to have no relationship to reality and no predictive power. Even the axes don't make sense, as some are heavily correlated.
The Big 5 is useful but far more complicated. It uses scales and has no pretty little boxes, but at least it seems to consistently measure something.
MBTI is the rough equivalent of taking a Big 5 test, but then instead of presenting the scores, it arbitrarily classifies you as Yes/No for each trait based on whether you're above or below average. Even if the scores themselves would be informative, you're still getting basically randomly sorted on noise for the traits where you're just about average.
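Here's a rough simulation of that dichotomization problem (the noise level is made up; the point is just what happens to people near the cutoff):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    true_trait = rng.normal(0, 1, n)            # latent trait, standardized
    noise = 0.5                                 # assumed measurement noise per sitting

    # Two sittings of the same test: true score plus independent noise
    test1 = true_trait + rng.normal(0, noise, n)
    test2 = true_trait + rng.normal(0, noise, n)

    # MBTI-style dichotomization: above/below the cutoff becomes a letter
    label1, label2 = test1 > 0, test2 > 0

    flipped = label1 != label2
    near_avg = np.abs(true_trait) < 0.25        # people close to the average

    print(f"overall flip rate on retest:  {flipped.mean():.1%}")
    print(f"flip rate near the average:   {flipped[near_avg].mean():.1%}")

The continuous scores barely move between sittings, but the letters flip constantly for anyone near the middle of the distribution.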
> MBTI is the rough equivalent of taking a Big 5 test, but then instead of presenting the scores, it arbitrarily classifies you as Yes/No for each trait
No, just no.
The Big 5 was statistically designed using factor analysis so that its axes are independent. The axes actually came before their descriptions. It's actual, serious research. Psychologists found that some characteristics clustered together and then spent time understanding what these clusters actually measured.
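For anyone who hasn't seen it, here's roughly what that looks like in code (synthetic item responses and scikit-learn's FactorAnalysis; this is a sketch of the method, not the actual Big 5 instrument):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n = 2_000

    # Two latent traits (think "extraversion-ish" and "conscientiousness-ish")
    latent = rng.normal(size=(n, 2))

    # Six questionnaire items, each loading mostly on one latent trait, plus noise
    loadings = np.array([
        [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],    # items 1-3 track trait A
        [0.0, 0.9], [0.1, 0.8], [0.0, 0.7],    # items 4-6 track trait B
    ])
    items = latent @ loadings.T + rng.normal(scale=0.4, size=(n, 6))

    # Factor analysis recovers the cluster structure from item correlations alone;
    # the factors come out of the data before anyone names them.
    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
    print(np.round(fa.components_, 2))          # rows ~ factors, columns ~ items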
The MBTI, however, is all over the place. It was designed "using" Jungian theory, itself a heap of garbage, and its axes are total chaos. They even correlate with each other. That's part of why the distribution of MBTI types is so skewed.
Seriously, the MBTI is a perfect example of what's wrong with psychology nowadays. On the one hand you have academics trying to do serious research, and on the other hand you have people motivated by greed pushing random rubbish and gathering a following in the corporate world like a pseudo-cult. It's both sad and maddening.
A galvanometer in a physician's hands can be even more dangerous than in the village idiot's. Just ask my late grandmother, who was preyed on until the board de-certified that "doctor."
Look, the reality is that almost all interviewing is far from scientific. I'm sure many of us use worse heuristics than MBTI[1]. The comment being downvoted is being perceived as an appeal to extremes: it doesn't follow that, because you're already following some poor methodology, you should consider even crazier ideas.
If we had nailed interviewing down to a science, I could see the point in the comment. Instead it's coming off as "Hey if you're going to do something imperfect, you should consider something extremely wrong!" We all have mostly wrong approaches to interviewing.
[1] Stuff I've seen:
Rejecting a candidate because they read Chapters 1-4 but not 5 of a textbook when the interview prep material mentioned chapters 1-5. Note that chapter 5 is not used at all for the job.
Favoring a candidate because of a strong handshake (clichéd, but it happens!)
Gauging the enthusiasm of a candidate by assuming everyone is an extrovert (fairly common).