AI and security in the education space


The education landscape has changed significantly with the introduction of artificial intelligence. AI has not only augmented learning methods and strategies; it has also given instructors new capabilities.

However, with hackers also leveraging AI to carry out attacks, AI-enabled learning platforms must be secure and robust enough to withstand current and emerging threats.

Zoe Shang, Chief Operating Officer of MaivenPoint, the education arm of AvePoint, sat down with SMEhorizon to discuss her transition from AvePoint, the evolving education space in APAC, and how AI and security figure into MaivenPoint’s long-term strategy.

Zoe Shang, Chief Operating Officer of MaivenPoint

What was the transition like from AvePoint to MaivenPoint, which has a very specialised industry focus?

The transition from my role as COO at AvePoint to COO at MaivenPoint was both exciting and challenging. While the core leadership skills remained crucial, the shift required significant adaptation.

At AvePoint, we built a strong foundation in data management and collaboration solutions to deliver best-in-class SaaS operations. However, we recognised a significant opportunity to make a more profound impact in the education industry, which led to the creation of MaivenPoint. This platform aimed to modernise education technology by integrating advanced solutions with Microsoft 365, addressing the unique challenges faced by higher education institutions and commercial businesses with training needs.

At AvePoint, my role involved overseeing operations across a broad spectrum of data management and collaboration solutions. Moving to MaivenPoint meant narrowing that focus to the education technology sector, which demanded a deep dive into the unique challenges and opportunities within this space.

The transition involved:

  • Intensive learning about the education sector’s specific needs and pain points
  • Adapting our operational strategies to align with the more specialised market
  • Building new relationships with stakeholders in the education industry
  • Refocusing our team’s efforts and expertise towards education-centric solutions

One of the most significant changes was the shift from managing a diverse portfolio of products to concentrating on a more targeted suite of solutions. This required a different approach to resource allocation, product development cycles, and go-to-market strategies.

The specialised nature of MaivenPoint also meant cultivating a different organisational culture – one that deeply values educational outcomes and understands the nuances of learning environments. This cultural shift was a key part of my transition, as it influenced everything from hiring decisions to how we measure success.

While challenging, this transition has been incredibly rewarding. It’s allowed me to apply my operational expertise to make a meaningful impact in an industry that shapes the future through education. The focused nature of MaivenPoint has also enabled more agile decision-making and innovation, which has been invigorating from a leadership perspective.

A lot of edtech platforms today already have AI capabilities. What separates MaivenPoint from the rest?

AI has put new expertise at the fingertips of every individual, and businesses globally are experiencing an unprecedented, AI-driven speed of innovation. It is critical for organisations to equip their workforce with the digital skills to use AI technologies such as Microsoft 365 Copilot responsibly, and to realise the productivity gains they offer a modern AI-powered workforce. With up to 85% of costs tied up in people, companies today are pivoting their workforce development strategies towards employee upskilling, honing and nurturing critical digital skillsets as a strategic objective to sustain long-term growth.

However, the use of AI may not be intuitive to all users, and user adoption and knowledge of use are key success factors to ensure maximum ROI on AI digital investments.

MaivenPoint recognises the importance of making the most out of these investments by business and education institutions. This includes partnerships with institutes of higher learning to offer courses aimed at empowering SMEs, equipping them with the knowledge required for effective AI adoption. Beyond that, MaivenPoint also collaborates with continuing education and training providers to strengthen efforts in training and placement. This was seen in the collaboration with NTUC LearningHub in January 2024 to train workers in foundational and essential emerging technology skills.

The infusion of AI into MaivenPoint’s platforms is centred on enhancing productivity for learners, trainers, and educators. With generative AI and LLM technologies available, we have woven AI into some of the most tedious tasks to speed up work efficiency.

For example, our latest AI Marking feature offers exam markers a structured, recommended grading of answers to open-ended questions, following the pre-defined rubric that assessors would use. Users of our platform can also leverage the new AI-enhanced user guide to ask functional questions about the platform and get targeted advice, instead of manually exploring the steps themselves.
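To illustrate the rubric-guided marking idea described above, here is a minimal sketch of how a grading prompt might be pinned to a pre-defined rubric so an LLM returns a structured, per-criterion recommendation rather than free-form text. All function and field names here are hypothetical, not MaivenPoint’s actual API.

```python
# Hypothetical sketch: build an LLM prompt constrained to a marker's rubric.
# The rubric maps each criterion to its maximum points.

def build_marking_prompt(question: str, answer: str, rubric: dict[str, int]) -> str:
    """Assemble a prompt asking for a structured, per-criterion grade."""
    criteria = "\n".join(f"- {name} (max {points} pts)" for name, points in rubric.items())
    return (
        "You are an exam marker. Grade the answer against this rubric only.\n"
        f"Rubric:\n{criteria}\n\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Return one line per criterion: <criterion>: <points awarded>."
    )

rubric = {"Accuracy": 5, "Depth of analysis": 3, "Clarity": 2}
prompt = build_marking_prompt(
    "Explain photosynthesis.",
    "Plants convert light into chemical energy...",
    rubric,
)
print(prompt)
```

Constraining the model to the assessor’s rubric is what makes the recommendation auditable: a human marker can check each line against the same criteria.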

Our AI Course Creation feature also enables trainers to create course outlines and session objectives in under three minutes (a task that usually takes days or weeks of research and compilation), and automatically creates relevant learning objects and quizzes by topic, leaving course planners only to adjust and refine them.

Speaking about AI, a recent study by AvePoint found a gap between perceived and actual data management complexity. What common AI adoption mistakes have you noticed related to data handling?

Organisations are using public AI tools without implementing an AI Acceptable Use Policy. This can lead to significant data risks such as the damaging loss of intellectual property.

In relation to this, information management (IM) policies are not robust enough. Organisations lack archiving and retention policies as well as lifecycle management solutions. They are also not leveraging automation in their IM policies, despite its value in managing an ever-increasing volume of organisational data.
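The automated retention point above can be sketched in a few lines: documents whose retention period has elapsed are flagged for archiving instead of being reviewed by hand. Document types, field names, and retention periods here are illustrative assumptions, not a description of any specific product.

```python
# Illustrative sketch of an automated retention policy: flag documents
# whose retention window (by type) has elapsed, so they can be archived.
from datetime import date, timedelta

# Assumed per-type retention periods (purely illustrative).
RETENTION = {"exam_record": timedelta(days=365 * 7), "draft": timedelta(days=90)}

def to_archive(docs: list[dict], today: date) -> list[str]:
    """Return IDs of documents whose retention period has elapsed."""
    expired = []
    for doc in docs:
        limit = RETENTION.get(doc["type"])
        if limit and today - doc["modified"] > limit:
            expired.append(doc["id"])
    return expired

docs = [
    {"id": "D1", "type": "draft", "modified": date(2023, 1, 1)},
    {"id": "D2", "type": "exam_record", "modified": date(2023, 6, 1)},
]
print(to_archive(docs, date(2024, 1, 1)))  # → ['D1']: the draft is past 90 days
```

Running a rule like this on a schedule is the "automation" being advocated: the policy is enforced continuously rather than during occasional manual clean-ups.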

Many organisations also have a fragmented data ecosystem, where data is stored in the cloud, self-hosted storage, and physical documents. This makes it challenging for organisations to locate their data, and in turn reduces the amount of data available for AI to build appropriate learning models.

Last but not least, outdated data is kept in storage systems, and such redundant, obsolete and trivial (ROT) data burdens those systems while compromising the validity of data-driven insights when AI is trained on it.
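As a rough illustration of the ROT problem above, a minimal scan might find redundant files by content hash and obsolete files by last-accessed age. The threshold and file fields are assumptions for the sketch, not a real product’s logic.

```python
# Minimal ROT (redundant, obsolete, trivial) detection sketch:
# duplicates via content hashing, stale files via last-accessed age.
import hashlib
from datetime import date

def find_rot(files: list[dict], today: date, stale_after_days: int = 1095) -> dict:
    seen, redundant, obsolete = {}, [], []
    for f in files:
        digest = hashlib.sha256(f["content"].encode()).hexdigest()
        if digest in seen:
            redundant.append(f["name"])  # same bytes as an earlier file
        else:
            seen[digest] = f["name"]
        if (today - f["last_accessed"]).days > stale_after_days:
            obsolete.append(f["name"])   # untouched for ~3 years
    return {"redundant": redundant, "obsolete": obsolete}

files = [
    {"name": "syllabus_v1.docx", "content": "course outline", "last_accessed": date(2020, 1, 1)},
    {"name": "syllabus_copy.docx", "content": "course outline", "last_accessed": date(2024, 1, 1)},
]
print(find_rot(files, date(2024, 6, 1)))
```

Removing what a scan like this surfaces shrinks storage costs and, more importantly, keeps duplicated or stale records from skewing whatever AI is later trained on the corpus.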

A lot of cyberattacks today are fuelled by AI. How do you ensure that your AI-powered platform is safe and secure?

At MaivenPoint, we take a multi-layered approach to ensure the safety and security of our platforms.

  • Robust Security Framework: Each of our platforms – Examena, Vitae, and Curricula – operates within a comprehensive security framework that includes continuous monitoring, anomaly detection, and real-time threat mitigation. This ensures that any potential threats are identified and addressed promptly.
  • Advanced Encryption: We employ state-of-the-art encryption techniques to protect sensitive data across all our platforms. Whether it’s exam data in Examena, student records in Vitae, or learning materials in Curricula, data is encrypted both at rest and in transit to prevent unauthorised access.
  • Ethical AI Implementation: Our AI systems are designed with ethical considerations in mind. This includes implementing safeguards to prevent misuse and ensuring that AI enhances rather than replaces human oversight.
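The "at rest and in transit" point above can be made concrete for the in-transit half with Python’s standard-library `ssl` module: a client context that verifies server certificates and refuses protocols older than TLS 1.2. This is a generic sketch of the practice, not MaivenPoint’s implementation; the at-rest half would typically involve a managed key service, which is beyond a short example.

```python
# Sketch of "encryption in transit": a strict TLS client context that
# verifies certificates and hostnames and rejects legacy protocol versions.
import ssl

def strict_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # cert + hostname verification on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and older
    return ctx

ctx = strict_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # → True True
```

Pinning a minimum protocol version in one place means every connection the platform opens inherits the same floor, rather than relying on each call site to configure TLS correctly.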

Furthermore, our projects are usually paired with AI readiness solutions from our parent company, AvePoint, such as AvePoint Policies and Insights. These help enterprises identify where sensitive or highly exposed data resides in their workspace, and perform the required data labelling and classification before introducing AI such as Copilot. This new wave of democratised AI will change the way we govern data in our personal and enterprise workspaces, and requires heightened awareness of responsible AI from every individual employee.

By implementing these measures, we strive to create a secure environment for our users, protecting their data and ensuring the integrity of our AI-powered learning platform against the evolving landscape of AI-fuelled cyberattacks.
