I’m not sure there’s such a thing as a “Security Engineer”.
In the traditional world, an engineer is hired to build a “success state.” They take raw materials and assemble them into a bridge, a building, or a piece of software that works. Their focus is the “happy path”: the sequence of events that leads to a functional outcome.
I believe we are the exact opposite. We are Inverse Architects.
It isn’t that we don’t understand how things work. On the contrary, we often understand the mechanics, the protocols, and the logic gates better than the architects themselves. But while they study the architecture to ensure its stability, we study it to find how it breaks: the attack paths, the failure modes, the hidden seams where controls don’t hold under pressure [1][2]. We don’t look at a new business model and simply understand the “innovation”; we also see a fresh surface for logic flaws and unhandled exceptions [1][3].
This mindset is our greatest superpower, but in today’s additive corporate landscape, it has also become a recipe for cognitive redlining: a constant overload created by the combination of adversarial pressure, tool sprawl, and context switching [4][5].
1) The Burden of the Infinite Stack
The burnout in cybersecurity isn’t just about the hours; it’s about the adversarial tax. SOCs and defenders operate under relentless alert volume, triage pressure, and the constant risk that “one missed signal” becomes tomorrow’s incident [4][6]. Unlike most functions where the “new” replaces the “old,” security inherits everything: every legacy dependency that cannot be retired, and every modern component that must be secured on day one [7][8].
We are currently tasked with maintaining deep technical empathy for 30‑year‑old legacy systems while simultaneously predicting how new AI-driven systems and automation can fail—sometimes in ways that are opaque and hard to inspect [9][10].
One of the most prominent principles in our field is brutally simple: complexity is the worst enemy of security [11][12].
Yet the corporate reality often trends toward infinite expansion. That constant oscillation, moving from the basement of technical debt to the penthouse of future strategy, creates a context-switching tax that is fundamentally unsustainable.
Cognitive research shows task-switching imposes measurable “switch costs” that degrade performance, especially as tasks become more complex [13]. Security roles amplify this cost because the switching isn’t between similar tasks; it’s between entirely different threat models, stacks, and operational realities [4][13].
2) From “Trial and Error” to Resilience Validation
In a boardroom, “trial and error” is a challenging phrase (to say the least). It sounds like a lack of a plan and a waste of capital. But because we operate in a failure mindset, we know that the first version of any defense is essentially a hypothesis waiting to be disproven, especially in complex systems where real behavior diverges from design assumptions [11][14].
In essence, trial and error is not an ask for “permission to experiment.” It is a request for adversarial quality control.
Since we cannot predict every failure mode of a new business model on Day One, we must integrate live-fire validation into the product lifecycle: disciplined testing that verifies resilience under real-world stress and failure conditions [14][15].
- The Shadow Launch: Run new systems in parallel or production-like environments, limit the blast radius, and intentionally test how the system behaves when things fail (see the sketch after this list) [14][16].
- The Business Case: This is not guessing. It’s converting uncertainty into measurable assurance by validating resilience through controlled experiments and repeatable tests [14][16].
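To make those bullets concrete, here is a minimal sketch in Python of what a shadow launch with bounded fault injection can look like. Everything in it is hypothetical: the service names, the 5% blast radius, and the injected timeout are illustrative assumptions, not a reference implementation of any particular chaos tool [14][16].

```python
# Hypothetical shadow-launch harness: serve from the trusted path, mirror to
# the candidate, inject faults into a bounded slice, and record the evidence.
import random
from dataclasses import dataclass, field

def legacy_lookup(user_id: int) -> str:
    """The incumbent path: boring, trusted, and the baseline we compare against."""
    return f"profile:{user_id}"

def candidate_lookup(user_id: int) -> str:
    """The new system under shadow launch; its output is never served to users."""
    return f"profile:{user_id}"

@dataclass
class ShadowReport:
    total: int = 0
    mismatches: int = 0
    injected: int = 0
    captured: list = field(default_factory=list)

def shadow_call(user_id: int, report: ShadowReport, blast_radius: float = 0.05) -> str:
    """Serve from legacy, mirror to the candidate, inject faults into a bounded slice."""
    served = legacy_lookup(user_id)         # users always get the known-good path
    report.total += 1
    try:
        if random.random() < blast_radius:  # the bounded blast radius
            report.injected += 1
            raise TimeoutError("injected: candidate dependency timeout")
        if candidate_lookup(user_id) != served:
            report.mismatches += 1          # divergence = hypothesis disproven
    except TimeoutError as exc:
        report.captured.append(str(exc))    # the failure mode, observed on purpose
    return served

if __name__ == "__main__":
    random.seed(7)
    report = ShadowReport()
    for uid in range(1_000):
        shadow_call(uid, report)
    print(f"calls={report.total} mismatches={report.mismatches} "
          f"injected={report.injected} captured={len(report.captured)}")
```

In a real environment the same shape holds: traffic mirroring at the load balancer, fault injection via a platform such as Azure Chaos Studio or AWS Fault Injection Service [14][15], and the resulting report feeding the business case as measurable evidence rather than opinion.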
This is the difference between security as opinion and security as evidence, and it aligns with the modern expectation that security practices are integrated throughout the lifecycle, not bolted on at the end [17].
3) The Partnership Protocol: Beyond “I Don’t Know”
In a boardroom, “I don’t know” is often mistaken for a lack of preparedness and planning. But in a hyper-complex environment, pretending to have all the answers is the real liability. Mature governance doesn’t demand omniscience; it demands clarity about what we know, what we know we don’t know, and how risk will be managed (the unknown unknowns, so to speak) [18][19].
The pragmatic shift is moving from “Knowing” to joint discovery. Let’s explore this.
When the business wants to deploy an experimental tool, the response isn’t a checklist; it’s a partnership protocol (we’ll discuss partnerships at a later stage). We use our understanding of the system’s “guts” to facilitate a safe rollout. So, instead of “I don’t know how it works,” we can anchor the conversation:
“We understand the architecture, but because this is an emerging technology, the failure modes are still being mapped. Let’s jointly sign up for a 30‑day ‘break‑fix’ phase where we intentionally hunt for seams before we commit the full capital spend.”
That’s what board-level cyber governance actually looks like: decision-making, accountability, and oversight based on evidence and continuous validation, not vibes or gut feelings (although gut feelings may stem from experience) [18][20].
This is how cyber leaders evolve from “gatekeepers” to “partners” in risk-taking: still skeptical, but aligned to business outcomes and responsible “experimentation” [18][19].
4) Systemic Rationalization: Deletion as a Control
To manage our cognitive load, we must treat removal of legacy tech as a high-value security win. Every time we retire a legacy system, we’re not just reducing exposure: we’re reclaiming the team’s mental bandwidth and shrinking the attack surface [7][21].
We must advocate for a deletion mandate: if the business adds a new layer of complexity, it must fund the removal of a legacy asset (and resist the sunk cost fallacy of keeping it alive). Decommissioning is not clerical work; it is a strategic control that reduces unpatched risk, unsupported dependencies, and the operational fragility that accumulates in legacy estates [21][22].
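As a sketch of what that mandate can look like in practice, here is a hypothetical check in Python. The asset fields, the end-of-support rule, and the one-in-one-out policy are all assumptions for illustration; a real estate needs a proper inventory and a risk-weighted policy, not twenty lines of code [21][22].

```python
# Hypothetical "deletion mandate" check: additions to the estate are rejected
# unless a decommission is funded alongside them, and lapsed-support assets
# surface as prime deletion candidates.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Asset:
    name: str
    end_of_support: date | None      # None = still supported

@dataclass(frozen=True)
class ChangeRequest:
    additions: tuple                 # assets being added
    decommissions: tuple             # names of assets funded for removal

def past_support(inventory: list, today: date) -> list:
    """Assets whose vendor support has lapsed: prime deletion candidates."""
    return [a for a in inventory if a.end_of_support and a.end_of_support < today]

def enforce_deletion_mandate(req: ChangeRequest) -> bool:
    """One in, one out: every added layer must fund at least one removal."""
    return len(req.decommissions) >= len(req.additions)

if __name__ == "__main__":
    inventory = [
        Asset("mainframe-billing", date(2019, 6, 30)),
        Asset("vpn-concentrator", date(2026, 1, 1)),
        Asset("new-ai-gateway", None),
    ]
    print([a.name for a in past_support(inventory, date(2025, 1, 1))])
    req = ChangeRequest(additions=(Asset("llm-router", None),), decommissions=())
    print("approved" if enforce_deletion_mandate(req) else "rejected: fund a removal first")
```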
Deletion is one of the most effective security controls ever invented: the most secure component is the one that no longer exists [11][12]. If we don’t aggressively prune the past, we will never have the cognitive capacity to interrogate the future [13][21].
Conclusion: Leading the Failure
The most successful cyber leaders of the next decade won’t be those who claim to be invincible builders. They will be adversarial partners, leaders who make the experimental nature of modern business explicit and then build a culture that values active stress-testing, fast learning, and rapid recovery over the illusion of perfect knowledge [14][18].
We don’t need more tools; we need permission (ok, and budget…) to validate resilience continuously and prove control effectiveness with evidence, not optimism [4][20][23].
We need to use our deep understanding of how things work to find where they break, before the adversary does [1][2]. And let’s finally remove that legacy system.
Sources
[1] Stephan Miller, “What Is Red Team Testing, and How Does It Work? What You Need to Know,” Infosec Institute, May 30, 2024. [infosecinstitute.com]
[2] Crimson7, “Understand Red Team and Adversary Simulation,” web page. [crimson7.io]
[3] CrowdStrike, Red Team / Blue Team Exercise (data sheet). [crowdstrike.com]
[4] Shahroz Tariq, Mohan Baruwal Chhetri, Surya Nepal, and Cecile Paris, “Alert Fatigue in Security Operations Centres: Research Challenges and Opportunities,” ACM Computing Surveys 57, no. 9 (April 2025): Article 224. [dl.acm.org]
[5] Ann Rangarajan et al., “A Roadmap to Address Burnout in the Cybersecurity Profession: Outcomes from a Multifaceted Workshop,” HCI for Cybersecurity, Privacy and Trust (HCII 2025), June 11, 2025. [link.springer.com]
[6] Pierluigi Paganini, “Burnout in SOCs: How AI Can Help Analysts Focus on High-Value Tasks,” Security Affairs, December 5, 2024. [securityaffairs.com]
[7] Robert Nord, Ipek Ozkaya, and Carol Woody, Examples of Technical Debt’s Cybersecurity Impact (July 2021). [apps.dtic.mil]
[8] “When technical debt strikes the security stack,” CSO Online, September 25, 2024. [csoonline.com]
[9] Palo Alto Networks, “Black Box AI: Problems, Security Implications, & Solutions,” Cyberpedia. [paloaltonetworks.com]
[10] Jack Tilbury and Stephen Flowerday, “Automation Bias and Complacency in Security Operation Centers,” Computers 13, no. 7 (2024). [mdpi.com]
[11] Bruce Schneier, “Complexity Is the Worst Enemy of Security,” PDF (March 2025). [schneier.com]
[12] Bruce Schneier and Anthony Vance, “Guest Editorial: ‘Complexity is the Worst Enemy of Security’: Studying Cybersecurity Through the Lens of Organizational Complexity,” MIS Quarterly 49, no. 1 (2025): 205–210. [misq.umn.edu]
[13] American Psychological Association, “Multitasking: Switching Costs,” APA research summary (discusses task-switching experiments and measurable switching costs, including Rubinstein, Evans, and Meyer). [apa.org]
[14] Microsoft Learn, “Understand chaos engineering and resilience with Chaos Studio,” Azure Chaos Studio documentation (explains resilience validation in production-like environments and fault injection). [learn.microsoft.com]
[15] Amazon Web Services, “REL12‑BP04 Test resiliency using chaos engineering,” AWS Well-Architected Framework: Reliability Pillar (run chaos experiments regularly, close to production, limit blast radius). [docs.aws.amazon.com]
[16] Amazon Web Services, “REL12‑BP04 Test resiliency using chaos engineering,” implementation guidance on experiments as regression tests in the SDLC/CI/CD. [docs.aws.amazon.com]
[17] National Institute of Standards and Technology (NIST), “SP 800‑218 Rev. 1 (Initial Public Draft), Secure Software Development Framework (SSDF) Version 1.2,” December 17, 2025 (integrate secure practices throughout SDLC). [csrc.nist.gov]
[18] Cybersecurity and Infrastructure Security Agency (CISA), “Corporate Cyber Governance: Owning Cyber Risk at the Board Level,” January 8, 2025. [cisa.gov]
[19] National Institute of Standards and Technology (NIST), NIST IR 8286 Rev. 1: Integrating Cybersecurity and Enterprise Risk Management (ERM), December 2025. [csrc.nist.gov]
[20] NIST, SP 800‑137: Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations, September 2011 (visibility into assets, threats, vulnerabilities, and control effectiveness). [csrc.nist.gov]
[21] Dirk Schrader, “The Hidden Threat of Legacy Systems: Lessons from a Massive Recent Data Breach,” Cybersecurity Insiders, December 5, 2024 (legacy assets, decommissioning failures, need for inventory and risk reduction). [cybersecur…siders.com]
[22] Infosys, A Strategic Framework for Decommissioning Legacy Tech (white paper; decommissioning reduces cost, risk, and addresses unpatched vulnerabilities and compliance gaps). [infosys.com]
[23] NIST, “Detect,” NIST Cybersecurity Framework mappings page (continuous monitoring references). [nist.gov]