Tackling unconscious bias in AI models
In partnership with Martian Logic
In the critical early stages of AI adoption, how can HR prevent unconscious bias in AI models? Martian Logic’s founder discusses the quest for bias-free technology
ARTIFICIAL INTELLIGENCE is gaining a firm foothold in the human resources industry, and human resource information systems (HRIS) are increasingly relying on AI’s power and efficiency. But as companies continue to foster diversity, equity and inclusion, these developments have raised a significant question: how can organisations ensure that their AI models are impartial and free from unconscious bias?
According to a 2022 study by the University of Southern California, as much as 38.6% of ‘facts’ used by AI were biased. Even global tech giants with virtually unlimited resources can get their AI training wrong, as proven by Google’s flawed attempt to create an AI image generator.
To solve the problem of bias in HRIS, Martian Logic’s founder and CEO, Anwar Khalil, says that having a solid training data set is key. However, data sets often mirror the biases already present in society.
Martian Logic is a SaaS-based HRIS that provides modern and enterprise-grade core HR functions to HR teams in Australia and New Zealand, the US, the UK and Canada.
The system's main modules include recruitment and onboarding, employee database, employee dashboard, employee update workflow, employee offboarding workflow, automated reference checking, recorded video interviews and more.
Martian Logic delivers over 800% ROI to clients even in the first year, since it comes with no setup cost. This high ROI is achieved through time and money savings, improved compliance, closer collaboration between HR and managers, and positive experiences for candidates and employees.
“Infinite diversity with zero bias is never really going to be possible. That’s why you need a quality data set, and something that can help your model train itself to steer away from biases”
Anwar Khalil, Martian Logic
“That’s why it’s all about diversity,” Khalil tells HRD.
“But how can you create diversity, and is there such a thing as ‘infinite’ diversity to make your bias zero? I don’t think that’s ever really going to be possible. That’s why you need the combination of a quality data set, and something that can help your model adjust itself and train itself to steer away from biases,” he says.
The impact of unconscious bias on recruitment is already well known to HR professionals. That’s why HR teams have added tools such as applicant tracking systems (ATS) to their arsenal, enabling professionals to efficiently handle an influx of job applications. Here, AI’s potential to either perpetuate or counteract bias can make all the difference.
Khalil highlights a project Martian Logic started four years ago, in which the company attempted to refine its recruitment processes with a new AI model. However, he quickly noticed that the unconscious biases baked into some users’ decision-making were creeping into the training data set and beginning to manifest in the trained model’s behaviour.
“We saw the impact of bias in training data sets on our AI model very clearly. So, how do you remove it? Nobody has an infinite data set, and so you really need to focus on the quality,” Khalil says.
He notes that the best starting point is to train AI models to deal exclusively with facts. This means that if a decision has been made without the assessor having completed certain tasks – for example, reading a job applicant’s résumé or looking at responses to assessment questions – then the decision cannot be used as part of the training data set.
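As a rough illustration of that rule, the sketch below shows how such a filter might look. The field names and logged actions here are hypothetical, not Martian Logic’s actual schema; the point is simply that a decision only enters the training set if the recorded activity shows the assessor actually reviewed the application.

```python
# Hypothetical sketch of fact-based filtering; field names such as
# "resume_opened" and "answers_reviewed" are illustrative assumptions,
# not Martian Logic's real data model.

REQUIRED_ACTIONS = ("resume_opened", "answers_reviewed")


def is_fact_based(record: dict) -> bool:
    """Keep a decision only if every required review action was logged."""
    return all(record.get(action, False) for action in REQUIRED_ACTIONS)


def build_training_set(decision_log: list[dict]) -> list[dict]:
    """Drop snap judgements so the model never learns from them."""
    return [record for record in decision_log if is_fact_based(record)]


decision_log = [
    {"candidate": "A", "resume_opened": True, "answers_reviewed": True, "decision": "shortlist"},
    {"candidate": "B", "resume_opened": False, "answers_reviewed": False, "decision": "reject"},  # snap judgement
]
print(build_training_set(decision_log))  # only candidate A's decision survives
```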
“In recruitment, you might be going through a list of people that have applied for a role. You might look at a first and a last name, and based on the world that you grew up in, an unconscious bias might creep into your mind and result in a quick decision that someone with this name couldn’t do this job,” he explains.
“That applicant will be placed into the ‘no’ pile, and you’ll move on to the next candidate. The AI needs to learn that it shouldn’t take that opinion into account.”
Khalil says this example illustrates that nobody is immune to unconscious bias, which is why a deliberate, informed approach to AI models is vital. Without a varied data set based exclusively on facts, AI systems risk mimicking the very biases they were built to eliminate.
Over the past four years, Martian Logic has not only been innovating within the HR tech space, but also fundamentally redefining it. The company’s work has focused on creating a data set that allows unbiased recruitment and HR processes.
Martian Logic’s AI models have been trained to give greater weight to decisions based on ‘considerable actions’ that assess a candidate’s suitability for a job, and less weight to decisions based on limited efforts to analyse a candidate’s application.
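One way to picture that weighting, offered here as an assumption about the general approach rather than Martian Logic’s implementation, is to score each decision by the substantive review actions behind it and feed that score to the learner as a sample weight, so lightly considered decisions still exist in the data but carry little influence.

```python
# Hypothetical sketch of action-based sample weighting; the action names and
# weight values are illustrative assumptions, not Martian Logic's model.

ACTION_WEIGHTS = {
    "resume_read": 1.0,
    "answers_reviewed": 2.0,
    "video_interview_watched": 3.0,
    "reference_check_read": 2.0,
}


def sample_weight(actions_taken: set[str]) -> float:
    """Decisions backed by deeper review carry more weight during training."""
    return sum(ACTION_WEIGHTS.get(action, 0.0) for action in actions_taken)


# A decision made on a name alone contributes nothing; a decision made after a
# full review dominates the training signal.
print(sample_weight(set()))                                    # 0.0
print(sample_weight({"resume_read", "answers_reviewed"}))      # 3.0
print(sample_weight({"resume_read", "answers_reviewed",
                     "video_interview_watched"}))              # 6.0
```

In practice, per-example scores like these could be passed to a learner through something like scikit-learn’s sample_weight argument to fit, which many of its estimators accept.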
“We’re at a point where almost infinite improvement can happen, so it’s not an area where you can ‘wait and see’. You need to be moving with it”
Anwar Khalil, Martian Logic
“It’s still early days, and these tools are very much in their infancy. However, not taking them seriously would be a huge mistake,” Khalil says. “We’re at a point where almost infinite improvement can happen, so it’s not an area where you can ‘wait and see’. You need to be moving with it.”
Whether you’re an HR vendor like Martian Logic or an organisation looking to understand how you can harness the power of AI and turn it into a competitive advantage, Khalil says it all starts with simply getting on board and having a go. Everyone, from HR leaders to marketing leaders and department managers, should be assessing the potential of this technology and considering how best to implement it in their existing processes.
“You don’t want to leave it too late because it’ll be difficult to catch up. It’ll already be 10 times more expensive for you to produce something compared to your competitor who found and embraced the potential of AI early enough,” Khalil says.
To find out more about Martian Logic and its AI-powered HR and recruitment solutions, click here.
Published 15 Apr 2024
Bias in data: the figures
65% of business and IT executives believe there is a data bias in their organisation
13% of businesses are currently addressing data bias
78% believe data bias will become a bigger concern as the use of AI increases
Source: “Data Bias: The Hidden Risk of AI”, Progress (2023)
Martian Logic: Key features of HRIS
Recruitment
Employee life cycle
Core HR
Organisational chart