Texas A&M University | Technology Services

Why AI Security Matters


AI tools are transforming how we work: summarizing documents, analyzing data, drafting communications, and streamlining countless other tasks. These tools offer real benefits, and we understand why faculty, staff, and researchers are eager to explore them. But as AI becomes more integrated into our daily work, it's important to understand the security risks that come with this new technology.

AI Is Changing the Threat Landscape

While AI is changing how we work, it's also changing how attackers operate. Malicious actors are using AI to write more convincing phishing emails, automate attacks, and scale social engineering in ways that weren't possible before. At the same time, AI systems themselves introduce new types of vulnerabilities that traditional security tools weren't designed to address.

This combination means we all need to be more thoughtful about how we adopt and use AI tools, especially when university data is involved. 

A Simple Framework for Thinking About AI Risk

A curious aspect of AI-enabled systems is that the more useful these tools become in our lives, the riskier they can be. This is because of an attack pattern that security researchers call the Lethal Trifecta. It occurs when an AI system combines three capabilities:

  1. Access to sensitive data — such as emails, research files, student records, or documents
  2. Exposure to outside content — like websites, uploaded files, or messages from others
  3. Ability to send information out — through emails, links, or connections to other services

When all three exist together, hidden instructions embedded in outside content (a webpage, an email, an attachment) can trick the AI into sharing your sensitive data without you realizing it.

The good news? If you can limit even one of these capabilities, you significantly reduce the risk.
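As a rough illustration of the framework above, the trifecta can be thought of as a simple three-way check. The class and field names below are hypothetical, invented purely for this sketch; they are not part of any Texas A&M tool or policy:

```python
# A minimal sketch of the "Lethal Trifecta" pattern described above.
# All names here are illustrative assumptions, not an official tool.
from dataclasses import dataclass

@dataclass
class AIToolProfile:
    reads_sensitive_data: bool       # e.g. emails, research files, student records
    processes_outside_content: bool  # e.g. websites, uploaded files, messages
    can_send_data_out: bool          # e.g. email, links, third-party services

def has_lethal_trifecta(tool: AIToolProfile) -> bool:
    """The risk pattern exists only when all three capabilities combine."""
    return (tool.reads_sensitive_data
            and tool.processes_outside_content
            and tool.can_send_data_out)

# Limiting even one capability breaks the pattern:
summarizer = AIToolProfile(
    reads_sensitive_data=True,
    processes_outside_content=True,
    can_send_data_out=False,  # no outbound channel
)
print(has_lethal_trifecta(summarizer))  # False
```

The point of the sketch is the `and`: removing any single capability makes the whole expression false, which is why limiting even one of the three significantly reduces the risk.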

What This Means for You

Before adopting a new AI tool for your work, make sure to ask a few questions:

  • What data will this tool be able to access?
  • Will it process content from outside sources I don't control?
  • Can it send information externally?
  • What safeguards does the vendor have in place?

These questions aren't meant to discourage you from using AI; they're meant to help you use it safely.

Work With Us Early

We know it can be tempting to try a new tool first and ask questions later. But involving Texas A&M's IT Security team early in your evaluation process significantly increases the chances we can help you find a path forward. We don't want to say no. We want to help you do your work safely and securely (and we’re pretty good at finding safe ways to say “Yes”).

If you're exploring AI tools for research, teaching, or administrative work, reach out before you start feeding in sensitive data. The earlier we can collaborate, the better the chances of keeping your data secure. Technology Services maintains a list of approved AI tools that have been evaluated for use with university data.

Questions?

If you have questions about AI tools, data security, or want guidance on evaluating a specific application, please contact Help Desk Central at 979.845.8300 or helpdesk@tamu.edu.

Thank you for helping us navigate this evolving landscape safely together.