
The balancing act: Ensuring responsible AI in a rapidly evolving world

Artificial intelligence (AI) is revolutionizing industries, transforming the way businesses operate, and enhancing daily life. Its impact is evident in certification, identity checks, item writing, proctor assist, and credentialing, where it decreases workload, cost, and time. However, this rise in AI usage brings challenges such as bias, data security vulnerabilities, and misinformation. To harness its full potential responsibly, businesses and organizations must strike a delicate balance between innovation and ethical accountability.

The market size of AI is projected to reach US$407 billion by 2027, expanding at a CAGR of about 36.2% between 2022 and 2027, according to Markets and Markets. Confidence in AI is growing: according to a Forbes Advisor survey, 64% of businesses expect AI to increase productivity, and 72% of businesses have adopted AI for at least one business function.

This interest is reflected in high adoption rates. As per IBM data, India has the highest rate of AI adoption among organizations (59%), followed closely by the United Arab Emirates (58%). Businesses in Singapore (53%) and China (50%) are also ahead in AI use. At the same time, however, more than 75% of consumers are concerned about AI misinformation.


We see AI adoption in organizational processes like certification, identity checks, item writing, proctor assist, and credentialing. But how can organizations make sure that this AI usage remains responsible?

Buzz Walker, CRO at Kryterion, a global test development and test delivery company, says that every organization needs a robust AI policy that includes, at minimum, policies and procedures covering human oversight, data privacy, data security and cybersecurity, transparency and explainability, fairness and bias mitigation, accuracy and consistency, regulatory compliance, and the promotion of human values.

“I believe this requires a multi-faceted approach that prioritizes ethical guidelines and human oversight to make the right decisions. Strong data privacy measures and continuous monitoring will ensure AI remains responsible. Because AI doesn’t have the nuanced judgement or contextual understanding of humans, to ensure the accuracy and fairness of decisions we must rely heavily on human oversight and keep a human in the loop when we deploy AI,” he explains.

For example, in exam precheck processes, AI collects and analyzes data on the candidate and then passes this to a human, who makes the final decision as to whether the person can sit for the exam.

“AI assists with decision making but doesn’t replace humans – humans are the final decision makers for all of our processes that employ AI,” he adds.

How Organizations Can Remain Ethical in AI

It’s no secret that AI systems inherit biases from the datasets they are trained on. Organizations must implement robust data auditing practices, involve diverse teams in algorithm development, and regularly test AI models for unintended biases.
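As an illustration, a basic fairness audit can compare a model's positive-outcome rates across demographic groups. The sketch below applies the common "four-fifths" disparate-impact heuristic; the group names, outcomes, and threshold are hypothetical and would come from an organization's own testing data:

```python
# Minimal sketch of a bias audit: compare a model's positive-outcome
# rates across demographic groups using the "four-fifths" heuristic.
# Group names, outcomes, and the 0.8 threshold are illustrative only.

def approval_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    rates = approval_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% positive outcomes
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% positive outcomes
}

flags = disparate_impact_flags(outcomes)
print(flags)  # group_b is flagged: 0.4 / 0.8 = 0.5, below the 0.8 threshold
```

A flagged group would prompt closer review of training data and features rather than an automatic verdict; real audits also examine error rates, not just outcome rates.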

Leveraging AI While Meeting Certification Needs

For organizations, implementing AI systems often requires compliance with industry certifications and standards to ensure safety, reliability, and fairness. Certification programs like ISO 27001 for information security or specific AI-focused certifications such as those by IEEE or NIST provide a roadmap. Organizations can leverage AI while fulfilling these requirements by conducting independent audits of AI systems, maintaining detailed documentation of algorithms, data sources, and testing results, and using explainable AI tools to make decision-making processes transparent.

AI can greatly enhance certification programs by improving efficiency, accuracy, and test security, says Walker. “For example, AI-powered ID verification ensures candidates are who they say they are, reducing fraud. AI proctoring tools can help monitor test takers, flag suspicious behaviour, and help create a more consistent proctoring experience. AI-assisted item writing can help with creating quality exam questions faster and streamline test development.”

AI’s Role in Identity Verification

AI has significantly improved identity verification processes, offering faster, more accurate, and scalable solutions. Facial recognition, document analysis, and biometric matching powered by AI are being used to verify identities in real-time. This is particularly significant in areas like financial services, healthcare, and online transactions, where security and accuracy are paramount.

However, ensuring fairness in identity verification systems is critical. Organizations must ensure their systems are trained on diverse datasets to avoid misidentification or discrimination against certain demographics.

Enhancing Proctor Assist with AI

In remote learning and testing environments, AI is enhancing proctoring by monitoring test-takers through video, audio, and behavioral analytics. AI-powered tools can detect unusual activities, such as looking off-screen or the presence of unauthorized devices. These systems provide scalability and cost-efficiency, enabling institutions to oversee large-scale examinations without compromising integrity. 

AI-enhanced proctoring uses algorithms to monitor eye movement, facial recognition, and object detection. 

“AI serves as a second set of eyes for our human proctors to ensure we are not missing any aberrant behavior or unauthorized materials. Because AI never gets tired, it will increase the consistency and accuracy of proctoring and can also help reduce false flags or false positives. For example, AI can differentiate between eye movements which may indicate reading versus looking away to relax eye strain. So, we don’t disrupt candidates when it’s not really needed,” Walker explains.

Improving the Credentialing Process Without Impacting Jobs

AI streamlines credentialing by automating data analysis, verifying qualifications, and cross-checking records efficiently. By reducing administrative burden, AI enables organizations to scale their credentialing operations without compromising accuracy.

However, one concern in this area is that AI in credentialing or item writing will replace jobs. Walker, though, believes that rather than replacing jobs, AI can help support and streamline roles.

“AI can help generate initial drafts of test questions, but human subject matter experts still refine and publish the items. With AI assisted proctoring, AI helps flag potential issues, but our human proctors still make the final decision to alert or suspend the candidate,” he says.

“AI will change the nature of the job role but not replace the humans. Humans will be able to focus on more high-value tasks in which human judgement is required such as reviewing and revising AI-generated items or acting on AI-generated proctoring alerts,” he adds.

Addressing Security Concerns in AI Systems

Despite its advantages, AI’s use in credentialing, proctoring, or identity verification poses data security risks. Breaches can lead to exposure of sensitive information, while poorly designed systems may inadvertently introduce vulnerabilities.

To mitigate these risks, organizations can ensure all data processed by AI is encrypted both in transit and at rest. They should also restrict access to sensitive data and implement multi-factor authentication while continuously monitoring AI systems for vulnerabilities and addressing them promptly. Organizations can also use Privacy-Enhancing Technologies (PETs) like differential privacy to anonymize data while retaining its utility for AI systems.
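As one concrete illustration of such a PET, differential privacy adds calibrated random noise to aggregate statistics before release, so that no single individual's record can be inferred from the output. The sketch below uses the standard Laplace mechanism on a simple count query; the epsilon value and the records are hypothetical:

```python
import math
import random

# Minimal sketch of the Laplace mechanism for differential privacy:
# release an aggregate count with noise calibrated to the query's
# sensitivity. The epsilon value and records here are illustrative.

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Release a count with epsilon-differential privacy. A counting
    query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so the noise scale is 1 / epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: report how many candidates were flagged without
# exposing the exact number.
candidates = [{"flagged": i % 7 == 0} for i in range(1000)]
noisy = private_count(candidates, lambda r: r["flagged"])
print(round(noisy))  # close to the true count of 143
```

Smaller epsilon values add more noise and give stronger privacy; the privacy-utility trade-off has to be tuned for each use case.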

“Organizations should adhere to data protection regulations like GDPR to ensure user privacy and data security. Organizations should perform a risk assessment to identify threats and prioritize the higher risks so they can be mitigated. It’s all important for regulatory compliance,” Walker says.

Benefits with Ethics

AI is a powerful tool that can redefine industries, but its benefits must be tempered with ethical considerations and robust safeguards. By addressing challenges such as bias, security, and misinformation, organizations can ensure AI contributes positively to society. Ultimately, a collaborative approach involving policymakers, technologists, and end-users will be critical to maintaining AI’s role as a responsible and transformative force.

Disclosure: This article includes a client of an Espacio portfolio company.

Navanwita Sachdev

An English literature graduate, Navanwita is a passionate writer of fiction and non-fiction as well as being a published author. She hopes her desire to be a nosy journalist will be satisfied at The Sociable.
