“What is your algorithm?”
We were recently asked this question at the conclusion of a demonstration of our career.place anonymous candidate screening platform.
There is a ton of advice these days encouraging people to ask this question when evaluating technology, especially in the context of diversity, equity, and inclusion. We’ve been giving similar advice for years. Understanding what vendor technology is actually doing is a critical part of due diligence, ensuring you don’t inadvertently introduce bias into your hiring processes.
There is just one problem… in this case, the question made no sense.
career.place doesn’t use embedded algorithms for candidate selection. Employers set the requirements and questions on our platform, then evaluate the quality of the candidate responses to determine whether each candidate passes the screening. In other words, the “algorithm” is completely in the control of the employer.
So, you can imagine, after showing step-by-step how our solution works we were a bit surprised by the question.
Then the truth dawned on us…
“Do you know what an algorithm is?” we asked politely.
They didn’t. They were just following the common advice to ask about the algorithm. The individual even confessed that they didn’t usually understand the answers given to that question. They just wrote it down and hoped someone in the organization evaluating the notes would know.
Well, that doesn’t make sense, does it?
Except… it happens all the time.
Whether evaluating candidates or technology, it’s all too common to find individuals in a position where they must ask what they do not know.
The DEIB challenge of asking without understanding
How many times have recruiters been given technical or skill-specific questions to ask candidates without any explanation of the question or answers? The only guidance: “Just write down the answer” or “Just make sure they say… <key word list>”. Great…
How many times have teams of technology users and/or administrators been charged with evaluating new solutions with little more than a list of questions from the technical experts? “Ask about APIs and let us know if they say… <a bunch of acronyms or weird words you’ve never heard before>”. Great…
Beyond making us feel uncomfortable, asking questions we don’t understand undermines the equity and inclusiveness of the selection process in several ways.
Inequity due to ‘lost in translation’. If we don’t understand the question or answer, we don’t really know which parts of the answer are important, so we can’t accurately paraphrase. That means, even with great notes, a lost word or phrase here and there could mean the difference between a good answer and a mess of nonsense. And because we naturally gravitate toward words we recognize, can spell, or can type or write quickly, how someone talks and the words they choose can make a big difference in how much is lost in translation.
Inequity due to perception of the question. Without understanding the question or answers, we lose the ability to ask follow-up questions or seek clarification. We won’t know, for example, if a candidate (or vendor) misinterprets the question or focuses on the wrong part of an answer. And since we all interpret questions through our own background and experience, the risk grows the more a candidate’s background differs from that of the person evaluating the answers. In other words, the inability to seek clarification lowers the potential diversity of the candidate pool, because those with backgrounds similar to the evaluator’s are more likely to interpret the question, and therefore answer, ‘correctly’.
Lack of inclusion and engagement because your candidates or vendors will know you don’t know. Once the candidate or vendor realizes you don’t understand the question or the response, they may not bother answering it properly, which could put them at a disadvantage. Or they may provide an outlandish or less truthful answer, knowing you can’t challenge them. Or they may simply lose interest in the interaction because of the lack of value given to their responses. How can the candidates or vendors tell? There are two common signs: 1) the questions don’t make sense, like the case of the person asking about our algorithm, and 2) there is no follow-up question or response, which gives the impression that the answer had no meaning.
The solution: Those who know, ask.
Luckily, the solution to this challenge is relatively simple – those who know, ask.
There are several ways to do this:
Assign the questions to the person who knows the answer. For example, in the hiring process, defer the skill and knowledge questions to the subject matter expert with those skills and knowledge.
Shift from an asked question to a written or recorded question. Rather than rely on someone to capture the response, let the candidate or vendor do it for you. Requesting written or recorded answers to questions allows the subject matter experts to review the answers without anything lost in translation.
Train the interviewers with the answers. Just because someone doesn’t understand the question or answers doesn’t mean they can’t. Take the time to teach the person conducting the interviews or evaluations the basic concepts behind the question and potential answers so that they can effectively interact with the candidates and/or vendors.
And, going back to algorithms…
An algorithm is a fancy way of referring to the information and rules used to determine an outcome.
For example, an algorithm to determine someone’s grade in a high school class is:
The information is the results of tests and quizzes.
The rules include how each test and quiz is scored, how they are averaged together to get the overall score, and how that score translates to a grade.
By understanding this algorithm, you understand how a student earns a good grade. Even more, students can then apply strategies to get the best scores. For example, if the algorithm counts tests as 90% of the grade and scoring over 90% means getting an A, students know that the tests are far more important than the quizzes and may choose to ignore the quizzes and focus just on the tests to get a higher score.
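To make that concrete, here is a minimal Python sketch of such a grading algorithm. The 90/10 weighting and the letter-grade cutoffs are illustrative assumptions for this example, not a real grading policy:

```python
# Illustrative sketch only: the weights and grade cutoffs are assumptions.

TEST_WEIGHT = 0.9   # tests count for 90% of the overall score
QUIZ_WEIGHT = 0.1   # quizzes count for the remaining 10%

def overall_score(test_scores, quiz_scores):
    """The 'rules': average each category, then blend by weight."""
    test_avg = sum(test_scores) / len(test_scores)
    quiz_avg = sum(quiz_scores) / len(quiz_scores)
    return TEST_WEIGHT * test_avg + QUIZ_WEIGHT * quiz_avg

def letter_grade(score):
    """More rules: translate the blended score into a grade."""
    if score > 90:
        return "A"
    if score > 80:
        return "B"
    if score > 70:
        return "C"
    return "F"

# The 'information': results of tests and quizzes.
tests = [95, 92]     # strong test results
quizzes = [40, 50]   # mostly ignored quizzes

score = overall_score(tests, quizzes)  # 0.9 * 93.5 + 0.1 * 45 = 88.65
print(letter_grade(score))             # prints "B"
```

Notice how the strong tests carry almost the entire score; that is exactly the kind of insight knowing the algorithm gives you.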
In some technologies, these algorithms are built in and, as a user, you have little to no control over them. And not all algorithms are equitable. Therefore, using the wrong technology can introduce biases into your processes.
For example, consider an algorithm to determine the match rate of candidates for your job (sketched in code after this list), where:
The information includes a ranked list of schools for a discipline and a LinkedIn-generated data set of the work histories of thousands of people in that discipline.
The rules include generating an ‘education’ score based on where the candidate’s school falls on the ranked list (so a better score for ‘better’ schools that are higher on the list) and a ‘work history’ score calculated by how closely the candidate’s work history matches the common work histories in the LinkedIn data. The overall match blends the two scores.
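Here is a rough Python sketch of what an algorithm like that might look like. Everything in it is an invented stand-in for illustration (the school ranking, the ‘typical’ career path, and the 50/50 blend); it is not any real vendor’s code:

```python
# Hypothetical matching algorithm, for illustration only. The school
# ranking, the "typical" work history, and the 50/50 blend are all
# made-up stand-ins, not any real vendor's logic.

SCHOOL_RANK = {"State Tech": 1, "City College": 2, "Online U": 3}  # lower = "better"
TYPICAL_PATH = ["intern", "analyst", "senior analyst"]  # mined from profile data

def education_score(school):
    """Higher-ranked schools score higher; unranked schools score 0."""
    rank = SCHOOL_RANK.get(school)
    if rank is None:
        return 0.0
    return 1.0 - (rank - 1) / len(SCHOOL_RANK)

def work_history_score(history):
    """Fraction of the 'typical' path present in the candidate's history."""
    matches = sum(1 for title in TYPICAL_PATH if title in history)
    return matches / len(TYPICAL_PATH)

def match_rate(school, history):
    """The rules: blend the two scores into one 'match' number."""
    return 0.5 * education_score(school) + 0.5 * work_history_score(history)

# A capable candidate with an atypical path and a lower-ranked school
# scores far below a conventional one, regardless of actual ability.
print(match_rate("Online U", ["logistics chief", "operations lead"]))     # ~0.17
print(match_rate("State Tech", ["intern", "analyst", "senior analyst"]))  # 1.0
```

Even in this toy version, the levers are visible: who ranked the schools, whose careers define ‘typical’, and how the blend weights the two.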
By knowing the algorithm, you can identify the biases that will impact the match rate of your candidates and therefore can determine if this is a technology you want to use. For example, in this algorithm:
There is a socioeconomic bias from favoring schools over capability, and from favoring the rank of the school. Access to money, good grade schools, and family who have attended specific universities and colleges greatly affects an individual’s access to those same universities and colleges.
There is a bias toward the ‘typical’ career paths by favoring common work history trends over capability. Therefore, those who have taken atypical paths are at a disadvantage even if they have all the skills and experiences necessary to thrive in the position.
There is a bias toward LinkedIn users by defining the LinkedIn user community as ‘typical’. Missing or underrepresented communities aren’t accounted for in the data, so they are more likely to fall into the ‘atypical’ category and not get a high matching score. For example, consider active military and recent veterans. Active military members don’t go through a typical job-search process and so don’t need LinkedIn profiles, which leaves them underrepresented. Their career paths are not in the data, so they are not seen as a match when their profiles are compared against it.
When asking vendors about their algorithms, you’re looking for the information and rules that are used to determine an outcome such as a candidate match score.
Data sources give insight into which communities are being used to determine what is ‘normal’ or ‘good’ or ‘bad’. Look for underrepresented and unrepresented groups and/or communities (a quick way to check this is sketched after this list). If people are not within the data, the risk increases that those groups will be evaluated as ‘bad’ or ‘not normal’.
Rules give insight into how the data is used to determine an outcome. Look for rules that rely on shortcuts or inaccurate assumptions, such as pedigree of education as an indicator of success. If you’re not sure, ask for the vendor’s reasoning. Good rules are backed by strong data trends that show the predictive power of the rule (for example, quantitative proof that those from certain universities are significantly more likely to be successful at the specific role).
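If a vendor will share even aggregate counts about their training data, you can sanity-check representation yourself. A minimal sketch, assuming you receive group counts from the vendor (the group names, counts, and the 5% threshold are all made up for illustration):

```python
# Due-diligence sketch: flag groups that are missing or thin in a vendor's
# training data. All groups, counts, and the threshold are hypothetical.

def representation_report(group_counts, threshold=0.05):
    """Return each group whose share of the data falls below `threshold`."""
    total = sum(group_counts.values())
    return {
        group: count / total
        for group, count in group_counts.items()
        if count / total < threshold
    }

# Hypothetical aggregate counts supplied by a vendor.
training_data = {
    "private-sector professionals": 9200,
    "recent veterans": 150,   # 1.5% of the data: likely to read as 'atypical'
    "career changers": 650,
}

print(representation_report(training_data))  # -> {'recent veterans': 0.015}
```

A group flagged this way isn’t automatically a deal-breaker, but it tells you exactly where to push the vendor for their reasoning.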
As for career.place: we don’t use built-in algorithms to match candidates to jobs. Our anonymous candidate screening solution combines structure and anonymity to ensure all candidates are given an equal opportunity to compete for positions against your standards while protecting the process from bias. Rather than relying on resumes, keyword searches, or profiles, candidates respond directly and anonymously to employer requirements and screening questions. Hiring teams are protected from bias because they don’t see gender, age, ethnicity, education pedigree, or any other unnecessary, biasing information.
Want to learn more? We’d love to show you!