You may have already forgotten that old tweet you sent after a night of partying in 2012. Probably buried in the 45,000 other tweets you’ve posted since.
And safe from the prying eyes of potential employers. Or so you think.
While it would be hard for a human to single-handedly pull up all your past social media posts, it wouldn't be difficult for a machine. Algorithms can drill down and unearth even your most unsavoury social media rants.
In fact, 70% of employers screen social media profiles before making a hiring decision, according to a 2017 study by CareerBuilder.
Social media screening
The practice has become so widespread that there is now a demand for background screening services that specifically focus on a candidate’s social media accounts.
US-based Fama Technologies, for instance, claims to fish out the “red flags” in a person’s social media profile by using machine learning and natural language processing. It also alerts candidates if they are being screened.
The company says the AI-powered service isn't meant to monitor a person's recreational activities or intrude on their private life, but it does scan public posts for signs of hate speech and bigotry.
“Employers are looking for folks who don’t think there is anything wrong with what they are saying,” Fama CEO and co-founder Ben Mones told CNBC in 2016.
If the AI tags any post as hateful, misogynistic, or racist, it then sends the link to the recruiting team. The tool is thus used to catch bullies even before they join the company.
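Fama hasn't published how its models work, but the flag-and-forward workflow described above can be sketched in a few lines. The category lexicon, function names, and URL below are all hypothetical stand-ins; a real service would use a trained NLP classifier rather than keyword matching.

```python
# Hypothetical sketch of a screening pipeline. A real service would score
# posts with a trained model; this stand-in flags posts by keyword category.

FLAG_CATEGORIES = {
    "hateful": {"hate", "slur"},       # placeholder terms, not a real lexicon
    "misogynistic": {"misogynist"},
    "racist": {"racist"},
}

def screen_post(post_url: str, text: str):
    """Return a flag record for the recruiting team, or None if the post is clean."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = [cat for cat, terms in FLAG_CATEGORIES.items() if words & terms]
    if hits:
        # In the workflow described above, this record (with the post link)
        # is what gets forwarded to the recruiting team.
        return {"url": post_url, "categories": hits}
    return None

print(screen_post("https://example.com/post/1", "That was racist."))
```

The point of the sketch is the routing, not the detection: even a sophisticated model ultimately reduces a post to a category label and a link, which is exactly where critics say context gets lost.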
The intention seems noble, but is AI all that reliable when evaluating “red flags” on social media?
Jay Stanley, senior policy analyst at the American Civil Liberties Union, is skeptical of AI-enabled social media screening.
“The automated processing of human speech, including social media, is extremely unreliable even with the most advanced AI. Computers just don’t get context,” Stanley said.
“I hate to think of people being unfairly rejected from jobs because some computer decides they have a ‘bad attitude’ or some other red flag.”
Patrick Ambron, CEO of BrandYourself, an online reputation management company, doesn’t believe an algorithm can gauge who is worth hiring and who isn’t.
“While they may save a company time, they’re often inaccurate and unfair, penalizing people for issues that aren’t their fault and rewarding people who simply fit a particular mold,” Ambron wrote in AdWeek.
“[If] you’re not proactive, you could lose opportunities you otherwise deserve.”