
The AI Tipping Point: Balancing Innovation, Security, and Trust

Guest author: Lucas Bonatto

Seismic shifts in artificial intelligence (AI), such as multimodality, generative AI, and text-to-video, have propelled the field into a new era, one in which balancing innovation, security, and trust has become a real challenge for businesses across sectors.

According to a recent report from ExtraHop, almost three-quarters of business leaders acknowledged that their employees frequently use generative AI tools at work. There is absolutely nothing wrong with that; such widespread AI adoption is simply a sign of the times. However, the majority also admitted that they were uncertain about how to address the minefield of associated security risks.

They expressed concern that employees might rely on nonsensical responses from language models or expose personally identifiable customer and employee information. Furthermore, just 46% have established policies on permissible use, and even fewer (42%) provide training on using the technology safely.

So, with AI use so widespread in the workplace, how can businesses balance it with security and trust? Let’s dive in.

A Roadmap to Responsible AI

Broadly speaking, responsible AI is a commitment to safety, security, and trustworthiness. It means deploying AI in ways that prioritize safe behavior and output, adhere to relevant laws and regulations, and safeguard against malicious attacks.

A recent Gartner report charts an interesting course toward this goal. It emphasizes building AI trust, risk, and security management (AI TRiSM) into the AI ecosystem, and Gartner predicts that organizations prioritizing AI TRiSM will see improved decision-making accuracy by 2026, aligning with global trends toward ethical AI governance.

Another interesting nugget from the report cites Continuous Threat Exposure Management (CTEM) as a linchpin of AI security, since it enables organizations to develop preemptive measures against emerging threats. By fortifying their cybersecurity posture, organizations can make their AI-driven systems more resilient against potential vulnerabilities.

Specialized training is another crucial element of responsible AI security. Businesses can consider offering their employees a certification such as the Certified Ethical Hacker (CEH) from the EC-Council, arming professionals with the skills they need to spot and fix security issues in AI systems.

Aid from Government Initiatives 

Aside from what’s been outlined above, the Department of Defense has also outlined its own responsible AI framework. It has secured funding of over $145 billion for this year, and this commitment extends beyond national security alone: it offers opportunities to enhance productivity and streamline bureaucracy across federal agencies and private businesses alike. For instance, the Social Security Administration recently announced that it will leverage AI to improve the consistency and efficiency of disability claims processing.

As companies start to think about how to integrate AI responsibly into their operations, the role of regulation rears its (ugly) head. President Biden signed an executive order in October 2023 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. However, that’s just the tip of the iceberg, and regulatory frameworks are a necessary evil in AI deployment. Federal agencies, Congress, and industry partners must collaborate to ensure responsible AI practices.

Final Thoughts

Responsible AI embodies the principles of safety, security, and trust so that we can all safely coexist with the technology. Through collaborative efforts and proactive security measures, organizations can make the most of AI’s enormous potential while safeguarding against its inherent risks. A shared commitment to ethical AI governance will pave the way toward a future where innovation thrives in tandem with societal well-being.

Lucas Bonatto is Director of AI/ML at Semantix, an artificial intelligence platform that offers ready-made applications for businesses.

Disclosure: This article mentions a client of an Espacio portfolio company.
