The Economist MBA Ranking has become a punching bag for experts invested in the status quo and for applicants who dislike volatility. We reviewed the ranking methodology going back to when The Economist began ranking full-time MBA programs. As we have warned before, a ranking reflects what the publication considers valuable for MBA applicants; it is not an absolute truth about a business school or its MBA program.
Booth continued its reign at the top for the third consecutive year, while Kellogg was the big surprise, taking second position from Darden, which slipped to third. Last year, the loudest criticism came when Stanford was demoted to 13th. This year, Stanford's fifth place behind Harvard's fourth has stabilized the top tier of the Economist full-time MBA ranking.
The 2016 survey invited 143 schools; some did not make the top 100, while quite a few chose to opt out. Indian School of Business, CEIBS, Aston, and McGill were notable absentees. The survey was divided into two sections: quantitative (GMAT, number of alumni, post-MBA salary) and qualitative (quality of faculty, career services, and facilities). The balance is not even: 80% of the ranking score depends on quantitative measures, which means the ranking focuses on an MBA program's short-term value (salary, quality of the incoming class), while only 20% rests on qualitative measures. A Nobel laureate professor or the networking skills of the career services team didn't make much difference to the ranking.
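The effect of the 80/20 split is easy to see in a simple weighted blend. This is an illustrative sketch with invented numbers; the function name and inputs are ours, not The Economist's actual formula.

```python
# Illustrative sketch of an 80% quantitative / 20% qualitative weighting.
# All values are made-up examples, not real school scores.

def composite_score(quant: float, qual: float) -> float:
    """Blend quantitative and qualitative scores on an 80/20 split."""
    return 0.8 * quant + 0.2 * qual

# A school strong on salaries/GMAT but weak on faculty ratings still
# outranks the reverse profile:
print(composite_score(quant=90.0, qual=50.0))  # ≈ 82
print(composite_score(quant=50.0, qual=90.0))  # ≈ 58
```

With this weighting, a 40-point advantage on qualitative measures recovers only a quarter of a 40-point deficit on quantitative ones, which is the short-term bias described above.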
When we offer consulting services, clients often quote rankings. We remind them that survey response rates at some top publications have been as low as 16%, with ranking factors like the opinion of deans carrying more weight than factors that matter to candidates. To its credit, The Economist requires a minimum response rate of 25% for inclusion in the ranking. A 25% response rate is not great, but at least the publication is transparent about the process. One data point missing from the methodology was the absolute number of responses; a percentage, unlike a raw count, can hide the scale of the survey.
The 50-30-20 rule, which weighted 2016, 2015, and 2014 data at 50%, 30%, and 20% respectively, ensured that schools reporting the latest data had an advantage where results correlated with the quantitative scores. It also averaged out one-year spikes in performance. The z-score index means each raw score is mapped against the mean of the individual scores while accounting for the standard deviation, which avoids the tied rankings you see in other publications.
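The two mechanisms above, the three-year blend and z-score standardization, can be sketched as follows. The formulas are our reconstruction from the description, not The Economist's published code, and the sample numbers are invented.

```python
from statistics import mean, stdev

def blended(score_2016: float, score_2015: float, score_2014: float) -> float:
    """Weight the latest year most heavily: 50% / 30% / 20%."""
    return 0.5 * score_2016 + 0.3 * score_2015 + 0.2 * score_2014

def z_scores(raw: list[float]) -> list[float]:
    """Map raw scores to standard deviations above/below the mean.
    Continuous z-scores make exact ties (and tied ranks) unlikely."""
    m, s = mean(raw), stdev(raw)
    return [(x - m) / s for x in raw]

# A one-year spike (120 in 2016 vs. a steady 90) is damped by the blend:
print(blended(120, 90, 90))  # ≈ 105, not 120

# Salaries become comparable z-scores regardless of their raw scale:
print(z_scores([150_000, 120_000, 90_000]))  # ≈ [1.0, 0.0, -1.0]
```

Because each school's final score is a real number on this standardized scale rather than a rounded rank, two schools almost never land on exactly the same value.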
Why the huge variation from other publications?
35% weightage was given to "Opening new career opportunities". The interesting part is how this weightage is split: the 35% is divided equally among three sub-factors:
1) Diversity of Recruiters
2) Placement Success (% of job-seeking students employed within 3 months of graduation)
3) Student Assessment of career service
The traditional top MBA programs have low recruiter diversity because the consulting giants and top investment banks recruit the majority of the class. The career services team does not need to be proactive to be effective: brand pull and the drive of the candidates fulfill the placement goals. Students are likely to credit their own initiative rather than the career services team for the results, one reason for the volatility.
Source: The Economist 2016 Full-time MBA Ranking
| Rank | Previous Rank | Full-time MBA Programs |
|------|---------------|------------------------|
| 1 | 1 | Booth School of Business |
| 2 | 7 | Kellogg School of Management |
| 3 | 2 | Darden School of Business |
| 4 | 4 | Harvard Business School |
| 5 | 13 | Stanford Graduate School of Business |
| 6 | 3 | Tuck School of Business |
| 7 | 6 | Haas School of Business |
| 8 | 14 | IESE Business School |
| 10 | 16 | The University of Queensland Business School |