SME horizon

2025 technology predictions: securing a future with AI


In the first of our two-part feature, we explored what business leaders expect AI to bring in the year ahead. In this second part, leaders pause to reflect on the HR and security implications of the technology, and how businesses can ready themselves while reaping the benefits.

Trends in the HR sphere

Advancements in technology will continue to reshape skill requirements, making digital literacy and adaptability critical for job seekers across industries. As businesses adapt to an ever-changing economic landscape, agility in hiring and workforce planning will become essential to staying competitive. Furthermore, ongoing government initiatives to support lower-wage workers will likely create more opportunities for upskilling and career advancement, contributing to a more inclusive and resilient workforce. 

According to Aon’s Global Risk Management Survey, failure to attract and retain talent now ranks as the fourth highest risk on the minds of organisations – two years ago this was not even among the top ten risks. Employers today are in an unenviable situation of balancing the rising cost of compensation with the distinct challenge of attracting and retaining top talent.

The talent market is dynamic, giving agile firms the opportunity to be proactive in their talent strategies with the help of total reward levers. To be a first mover in this environment, firms need to use real-time data and predictive analytics to understand broader market trends, including which roles are in demand, which skills fetch premiums and where cost-saving opportunities exist.

The intersection of AI and HR

AI agents are transforming the workforce by automating repetitive and time-consuming tasks, freeing employees to focus on higher-value work that drives innovation and growth.

This presents an opportunity for the workforce to transform their skill sets and take on more strategic roles. As AI agents become increasingly integrated into the workforce, employees will need to develop new skills to manage and optimise them. They will also have to leverage their industry knowledge to train these agents so that they can deliver the desired business outcomes.

With AI agents embedded directly into workflows, businesses can reimagine customer service, delivering faster and more accurate responses without increasing complexity or requiring extensive training. AI agents enable organisations in ASEAN to move beyond traditional service models, creating a personal and frictionless experience while providing lasting value through smarter, real-time support.

In 2025, the need for traditional security operations center (SOC) analyst roles will rapidly decline as AI and machine learning take over routine security tasks. Organizations will prioritize hiring AI specialists who can interpret, manage and guide advanced AI-driven security systems.

Threat-hunting roles will surge in demand, as human expertise is needed to contextualize and act on AI-generated insights. Companies will no longer rely on generalist cybersecurity teams but instead seek highly specialized professionals to stay ahead of increasingly sophisticated AI-powered attacks. The future of cybersecurity jobs will hinge on human expertise paired with AI innovation.

AI is not just going to start taking over processes within the workplace; I believe we’ll see full ‘digital workers’ and robots deployed into workforce teams across the country. We’ll reach a stage where we can ask our digital worker team member about compliance, how to upskill, and how to improve our experience, just as we might do with our human peers today…with these developments comes a major shift that the HR and WFM industries need to consider: how do we measure and evaluate digital workers’ or AIs’ performance? How will we structure performance reviews, set goals and KPIs, and monitor and measure improvements or deteriorations? A few outliers have already attempted this, and been pressured to remove it. But that is only delaying the inevitable.

Generative AI will displace 100,000 frontline agents from the top global contact center outsourcers. Some 62% of contact centers in consumer-facing industries are outsourced, and with genAI poised to automate low-complexity issues, the demand for human agents will decline.

Just as organisations have employees specialised in specific functions, AI agents will soon be assigned unique roles within a network. These agents will work alongside human employees, communicate with other agents, and create new agents as business needs evolve. Each agent will have a defined function, allowing the network to handle a wide range of tasks efficiently.

In this agent network, meta-agents will be crucial, coordinating actions across other agents to keep workflows seamless. This setup enables collaboration on platforms like Slack, where human employees and AI agents can interact as a unified team, enhancing responsiveness and coordination.
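The coordinating role described above can be sketched in miniature. The agent names, categories, and registry below are hypothetical illustrations; a real deployment would sit behind a vendor's agent framework rather than a plain function table, but the routing idea is the same: a meta-agent holds a registry of specialised agents and dispatches each task to the one registered for its function, escalating anything no agent covers.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical specialised agents: each handles one defined function.
def billing_agent(task: str) -> str:
    return f"billing: resolved '{task}'"

def shipping_agent(task: str) -> str:
    return f"shipping: tracked '{task}'"

@dataclass
class MetaAgent:
    """Routes each incoming task to the agent registered for its category."""
    registry: Dict[str, Callable[[str], str]]

    def dispatch(self, category: str, task: str) -> str:
        handler = self.registry.get(category)
        if handler is None:
            # No agent covers this category, so hand off to a human teammate.
            return f"escalated to human: '{task}'"
        return handler(task)

meta = MetaAgent(registry={"billing": billing_agent, "shipping": shipping_agent})
print(meta.dispatch("billing", "duplicate charge"))
print(meta.dispatch("legal", "contract review"))
```

The registry pattern also shows how the network can "create new agents as business needs evolve": registering a new function under a new category extends the meta-agent without changing its dispatch logic.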

This new era of agents will redefine collaboration, creating a blended environment where humans and agents work side by side to enhance productivity, improve customer experiences, and support business growth through streamlined operations.

Security threats and new capabilities

You won’t need to be a coder to create sophisticated malware in 2025—AI will do it for you. Generative AI models trained specifically to generate malicious code will proliferate in underground markets, making it possible for anyone with access to deploy ransomware, spyware and other types of malware with little effort.

These “hacker-in-a-box” tools will automate everything from writing to deploying attacks, democratizing cybercrime and increasing the volume and diversity of threats.

Malicious actors will increasingly use generative AI to create morphing malware—code that adapts and mutates to evade detection, making traditional defenses obsolete. These new strains of AI-generated malware will be more efficient and harder to trace. At the same time, defenders will lean on AI tools to streamline threat detection, asking more sophisticated questions and flagging abnormal behavior more quickly.
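The defender's shift from signature matching to behaviour is the key point: mutating malware changes its code, but its activity still shows up as a statistical outlier against a baseline of normal behaviour. The sketch below is a deliberately minimal stand-in for the AI-driven detection described above, flagging hourly connection counts that deviate sharply from the mean; production systems use far richer behavioural models, but the principle is the same.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Flag indices of hourly event counts that deviate strongly from baseline.

    A toy stand-in for behavioural anomaly detection: rather than matching
    known signatures (which morphing malware evades), we baseline normal
    activity and flag statistical outliers by z-score.
    """
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hours 0-9 look normal; hour 10 shows a burst of outbound connections,
# the kind of behaviour exfiltrating malware produces regardless of how
# its code has mutated.
hourly_connections = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13, 250]
print(flag_anomalies(hourly_connections))  # → [10]
```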

With the global talent shortage projected to reach 85 million workers by 2030, organizations face a critical shortage of cybersecurity professionals. The issue is especially acute in Asia Pacific, which accounts for over half of the global cybersecurity talent gap.

The introduction of AI copilots presents a huge opportunity for organizations to bridge the talent gap in two ways. Firstly, AI-powered copilots can automate routine work, freeing security professionals from manual, time-consuming tasks to deliver strategic impact. Secondly, AI-powered copilots can empower analysts with accessible insights to handle more complex tasks.

Analysts can easily ask questions in natural language and receive step-by-step guidance from AI to mitigate cyberthreats. Not only does this minimize training time needed for new analysts, it can also make security roles that previously required significant on-the-job experience and training more accessible to the talent pool. 
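The "ask in natural language, get step-by-step guidance" workflow can be illustrated with a toy retrieval layer. The playbooks, keywords, and steps below are entirely hypothetical, and simple keyword overlap stands in for the LLM-backed copilots the prediction describes; the sketch only shows the shape of the interaction, not any vendor's product.

```python
# Hypothetical incident-response playbooks, each tagged with trigger
# keywords. A real copilot would use an LLM over a curated knowledge base;
# keyword-overlap retrieval stands in for that here.
PLAYBOOKS = {
    "phishing": (
        {"phishing", "email", "credential", "link"},
        ["Quarantine the reported message",
         "Reset the affected user's credentials",
         "Block the sender's domain"],
    ),
    "ransomware": (
        {"ransomware", "encrypted", "ransom", "locked"},
        ["Isolate infected hosts from the network",
         "Preserve disk images for forensics",
         "Restore from offline backups"],
    ),
}

def guide(question):
    """Return the steps of the playbook whose keywords best match the question."""
    words = set(question.lower().split())
    topic = max(PLAYBOOKS, key=lambda t: len(words & PLAYBOOKS[t][0]))
    return PLAYBOOKS[topic][1]

for i, step in enumerate(guide("A user reports a phishing email with a suspicious link"), 1):
    print(f"{i}. {step}")
```

This is why such tools lower the experience bar: the junior analyst supplies the question, and the institutional knowledge encoded in the playbooks supplies the procedure.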

With the increasing sophistication of AI-generated digital humans, deepfakes are becoming a powerful tool for fraud and misinformation. IT leaders are prioritising AI-powered detection tools and content authentication methods, such as blockchain, to combat the rising threat of AI-powered cyberattacks and ensure the integrity of their data. AI ranks as the second-most disruptive force to business operations, just behind talent shortages, with a disruption risk score of 3.55 out of 5, according to Info-Tech’s findings.

The threat landscape is constantly evolving, with malicious actors leveraging gen AI to develop new attacks and exploit vulnerabilities in banking systems. But financial institutions are fighting back with their own AI-powered defenses. Our recent ROI of Gen AI research found that 3 in 5 financial institutions are seeing measurable improvement in their cybersecurity posture by using gen AI. 

Fraudsters often rely on unstructured data sources like forged documents or suspicious online activity, which are extremely difficult to monitor manually. At the same time, security teams are being overwhelmed by the extensive volume of alerts generated by traditional fraud monitoring systems. AI’s ability to analyze unstructured data, identify complex patterns, and prioritize alerts can significantly enhance fraud detection and protect customers against emerging threats. 
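The alert-prioritisation idea can be made concrete with a small sketch. The risk terms, weights, and alert fields below are illustrative assumptions, and a plain keyword list stands in for the NLP models that would actually mine unstructured documents; the point is the combination of structured rule hits with text-derived signals into a single triage order.

```python
from dataclasses import dataclass

# Hypothetical risk terms an NLP model might surface from unstructured
# sources (case notes, scanned documents); a keyword list stands in for it.
RISK_TERMS = {"forged", "urgent", "wire", "mismatch"}

@dataclass
class Alert:
    alert_id: str
    rule_hits: int  # structured signal: how many monitoring rules fired
    note: str       # unstructured text attached to the alert

def risk_score(alert: Alert) -> float:
    text_hits = sum(term in alert.note.lower() for term in RISK_TERMS)
    # Illustrative weighting: text-derived signals count a little more,
    # since they are the ones manual review tends to miss.
    return alert.rule_hits + 1.5 * text_hits

def triage(alerts):
    """Order alerts highest-risk first, so analysts see the worst first."""
    return sorted(alerts, key=risk_score, reverse=True)

queue = [
    Alert("A-1", rule_hits=1, note="routine address change"),
    Alert("A-2", rule_hits=2, note="urgent wire transfer, signature mismatch on forged ID"),
    Alert("A-3", rule_hits=3, note="velocity check tripped"),
]
print([a.alert_id for a in triage(queue)])  # → ['A-2', 'A-3', 'A-1']
```

Note that A-2 outranks A-3 despite fewer rule hits: the unstructured note carries the signal, which is exactly the blind spot of rule-only monitoring described above.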

This AI-driven vigilance will help financial services institutions stay ahead, turning a potential vulnerability into a strength as they actively counter AI-powered fraud with equally advanced defenses.
