Summer is officially here, and now that so many students have left town, campus feels surprisingly empty. On the plus side, it’s easier to get a table at your favorite restaurant!

You might recall that we’ve been talking about our core principles of security: in March we tackled automation, and in May we considered “trust users, but get telemetry.” This month we are going to dive deeper into the topic of trust, and examine how transparency and informed consent can provide the basis for establishing it.

Transparency and Informed Consent

Let’s recap our discussion on trust from last month. A customer relationship built on trust means that our users work with us to accomplish our security goals, and proactively involve us in technology decisions. A customer relationship that lacks trust is adversarial, and results in anxiety and avoidance.

Here’s the logic:

  1. A positive working relationship between our users and the security team is incredibly valuable and worth nurturing
  2. This relationship requires a foundation of trust that is built on informed consent and transparency
  3. Our users are capable of making rational and informed decisions about security risks when properly educated and treated with respect

It is often against our nature to be open about the tools and techniques we use to monitor our data and devices. This is reasonable; the more information an attacker has about our infrastructure, the better they can target their attack. But the benefits of maintaining that secrecy must be weighed against what openness and transparency will gain us with our users.

In an academic environment that values open inquiry and the free exchange of ideas, obfuscating or hiding the data collection practices of security tools can breed suspicion and mistrust. Our users will perceive that behavior as an invasion of their privacy, or an attempt to exert excessive control. In the absence of clear information about the data that is collected about them, users will almost certainly suspect the worst (I know I would!).

Conversely, when users understand the purpose behind security measures, they are more likely to trust the intentions behind these practices and cooperate willingly. Being transparent about the security tools we install, the types of data being gathered, and the reasons behind these measures demonstrates respect for our users and their rights to privacy and autonomy. I believe that the benefits we will gain from building this trust with our users will greatly outweigh whatever we might gain from keeping more secrets (and let’s be honest—sophisticated attackers know many details about our systems already).

A closely related concept is informed consent, which is rooted in medical ethics and healthcare research. It means that individuals have the right to be fully informed about any actions or risks that affect them. In the context of information security, informed consent means that we take the time to make our users fully aware—at a high level—of our security measures. Because of the regulatory environment in which we operate, we don’t always have a choice about many of the security controls that we enforce. However, we can ensure that we are open and honest about what’s happening, and take the time to talk about why.

Let me be clear: embracing transparency does not mean compromising our security. There are certainly times that it is not appropriate to share technical details. When I talk about transparency and informed consent in this context, I mean that we are open about the general principles and approaches that we use, and the nature of the data that we collect. It doesn’t mean that we should be specific about configuration details, or leak details about infrastructure that might compromise the integrity of our systems. Telling the difference will require judgment and discernment from each one of you, and is a good example of why security—when done correctly—is a Really Hard Problem.

Hiring New Team Members

We have two open position searches in Security: we are currently conducting interviews for Security Analyst positions (at multiple levels) that will focus specifically on network security, and we are still accepting applications for a new Associate Director for the Cloud & Platform Security team. Spread the word and tell your friends!


Wins & Successes

  • We are now collecting over 2 BILLION event log entries per day in Elastic (and growing!). We recently completed a migration of the Elastic platform onto AWS, saving $10 per hour, or about $85,000 per year.

  • Garrett Yamada recently appeared as a guest on the Decipher Podcast hosted by Duo. He spoke about the big changes underway on campus with our Sailpoint project, and the unique identity challenges presented by a university environment.
  • Since the launch of our new platform documentation site, docs.security.tamu.edu, we have published over 100 pages of content that has been used by teams both inside and outside of Security. We continue to upload new content, and our next focus is documentation for our endpoint security tools and agents.


Security by the Numbers

📈 Just in the last month:

  • 96.7% of all network connections from the internet blocked at the firewall
  • 181.5B cyber attacks and malware blocked
  • 176 petabytes of network data scanned
  • 30k computers monitored, with 4.8B endpoint processes analyzed
  • 92.7M mail messages scanned for spam, phishing, viruses; 53.3M messages blocked at the gateway
  • 9.2M auth events recorded across 293k active NetIDs
  • 150k devices tracked in the IT asset management system


Wrapping Up & Reminders

I’m sure many of you have heard about the Ticketmaster data breach. Data belonging to over half a billion users was exposed and is up for sale on the dark web. Since Ticketmaster uses Snowflake as its backend, several initial reports indicated that Snowflake had been compromised. Well, Snowflake has come out swinging and says that it’s not them, and CrowdStrike & Mandiant have backed up that assessment. Ultimately, it looks like this all boiled down to a targeted campaign directed at users without MFA.

Two takeaways here. First: it’s easy to jump to conclusions before all the facts are gathered, and it seems like the security “researchers” who initially published this report were more interested in making news than in being accurate. Second: MFA all the things!!

As always, thank you all for your hard work and dedication. I depend on you to share your ideas and suggestions, and I encourage you to schedule a meeting with me at any time if you want to talk (it doesn’t have to be about work!).


Adam Mikeal

Associate Vice President and Chief Information Security Officer