Category: Leadership

  • The Professional Skeptic’s Burden: Defending a Stack That Never Sleeps

    I’m not sure there’s such a thing as a “Security Engineer”.

    In the traditional world, an engineer is hired to build a “success state.” They take raw materials and assemble them into a bridge, a building, or a piece of software that works. Their focus is the “happy path”: the sequence of events that leads to a functional outcome.

    I believe we are the exact opposite. We are Inverse Architects.

    It isn’t that we don’t understand how things work. On the contrary, we often understand the mechanics, the protocols, and the logic gates better than the architects themselves. But while they study the architecture to ensure its stability, we study it to find how it breaks: the attack paths, the failure modes, the hidden seams where controls don’t hold under pressure [1][2]. We don’t look at a new business model and simply understand the “innovation”; we also see a fresh surface for logic flaws and unhandled exceptions [1][3].

    This mindset is our greatest superpower, but in today’s additive corporate landscape, it has also become a recipe for cognitive redlining: a constant overload created by the combination of adversarial pressure, tool sprawl, and context switching [4][5].

    1) The Burden of the Infinite Stack

    The burnout in cybersecurity isn’t just about the hours; it’s about the adversarial tax. SOCs and defenders operate under relentless alert volume, triage pressure, and the constant risk that “one missed signal” becomes tomorrow’s incident [4][6]. Unlike most functions where the “new” replaces the “old,” security inherits everything: every legacy dependency that cannot be retired, and every modern component that must be secured on day one [7][8].

    We are currently tasked with maintaining deep technical empathy for 30‑year‑old legacy systems while simultaneously predicting how new AI-driven systems and automation can fail—sometimes in ways that are opaque and hard to inspect [9][10].

    One of the most prominent principles in our field is brutally simple: complexity is the worst enemy of security [11][12].

    Yet the corporate reality often trends toward infinite expansion. That constant oscillation, moving from the basement of technical debt to the penthouse of future strategy, creates a context-switching tax that is fundamentally unsustainable.

    Cognitive research shows task-switching imposes measurable “switch costs” that degrade performance, especially as tasks become more complex [13]. Security roles amplify this cost because the switching isn’t between similar tasks; it’s between entirely different threat models, stacks, and operational realities [4][13].

    2) From “Trial and Error” to Resilience Validation

    In a boardroom, “trial and error” is a challenging phrase (to say the least). It sounds like a lack of a plan and a waste of capital. But because we operate in a failure mindset, we know that the first version of any defense is essentially a hypothesis waiting to be disproven, especially in complex systems where real behavior diverges from design assumptions [11][14].

    In essence, trial and error is not an ask for “permission to experiment.” We ask for adversarial quality control.

    Since we cannot predict every failure mode of a new business model on Day One, we must integrate live-fire validation into the product lifecycle: disciplined testing that verifies resilience under real-world stress and failure conditions [14][15].

    • The Shadow Launch: Run new systems in parallel or production-like environments, limit the blast radius, and intentionally test how the system behaves when things fail [14][16].
    • The Business Case: This is not guessing. It’s converting uncertainty into measurable assurance by validating resilience through controlled experiments and repeatable tests [14][16] (see the sketch below).
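
    To make this concrete, here is a minimal sketch of what a blast‑radius‑limited, shadow‑style fault‑injection run can look like. It is a toy in‑process harness under stated assumptions, not any particular chaos tool’s API; the fault rate, timeout budget, and function names are all illustrative.

    ```python
    import random
    import time

    # Hypothetical fault-injection harness; rates, budget, and names are
    # illustrative assumptions, not a specific chaos-engineering tool's API.
    FAULT_RATE = 0.05      # limit the blast radius: inject into 5% of calls
    TIMEOUT_BUDGET = 0.05  # seconds; compressed so the demo finishes fast

    def dependency(request_id: int) -> str:
        """Stand-in downstream service with deliberate fault injection."""
        if random.random() < FAULT_RATE:
            if random.random() < 0.5:
                time.sleep(TIMEOUT_BUDGET * 2)             # injected latency
            else:
                raise ConnectionError("injected failure")  # injected error
        return f"ok:{request_id}"

    def handle(request_id: int) -> str:
        """System under test: must degrade gracefully, never hang or crash."""
        start = time.monotonic()
        try:
            result = dependency(request_id)
        except ConnectionError:
            return "fallback:error"        # dependency failed -> safe default
        if time.monotonic() - start > TIMEOUT_BUDGET:
            return "fallback:timeout"      # budget exceeded -> safe default
        return result

    if __name__ == "__main__":
        outcomes = [handle(i) for i in range(500)]
        degraded = sum(o.startswith("fallback") for o in outcomes)
        # Hypothesis under test: every injected fault ends in a controlled
        # fallback, never in an unhandled exception.
        print(f"{degraded}/{len(outcomes)} requests degraded gracefully")
    ```

    Run repeatedly, and eventually as a regression test in CI, this turns “we think it fails safely” into measurable evidence, which is exactly the business case above [14][16].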

    This is the difference between security as opinion and security as evidence, and it aligns with the modern expectation that security practices are integrated throughout the lifecycle, not bolted on at the end [17].

    3) The Partnership Protocol: Beyond “I Don’t Know”

    In a boardroom, “I don’t know” is often mistaken for a lack of preparedness and planning. But in a hyper-complex environment, pretending to have all the answers is the real liability. Mature governance doesn’t demand omniscience; it demands clarity about what we know, what we know we don’t know, and how risk will be managed (including the things we don’t know we don’t know, so to speak) [18][19].

    The pragmatic shift is moving from “Knowing” to joint discovery. Let’s explore this.

    When the business wants to deploy an experimental tool, the response isn’t a checklist; it’s a partnership protocol (we’ll discuss partnerships at a later stage). We use our understanding of the system’s “guts” to facilitate a safe rollout. So, instead of “I don’t know how it works,” we can anchor the conversation:

    “We understand the architecture, but because this is an emerging technology, the failure modes are still being mapped. Let’s jointly sign up for a 30‑day ‘break‑fix’ phase where we intentionally hunt for seams before we commit the full capital spend.”

    That’s what board-level cyber governance actually looks like: decision-making, accountability, and oversight based on evidence and continuous validation, not vibes or gut feelings (although gut feelings may stem from experience) [18][20].

    This is how cyber leaders evolve from “gatekeepers” to “partners” in risk-taking: still skeptical, but aligned to business outcomes and responsible “experimentation” [18][19].

    4) Systemic Rationalization: Deletion as a Control

    To manage our cognitive load, we must treat removal of legacy tech as a high-value security win. Every time we kill a legacy system, we’re not just reducing exposure: we’re reclaiming the mental bandwidth of the team and shrinking the attack surface [7][21].

    We must advocate for the deletion mandate: if the business adds a new layer of complexity, it must fund the removal of a legacy asset (and resist the sunk cost fallacy in the process). Decommissioning is not clerical work; it is a strategic control that reduces unpatched risk, unsupported dependencies, and the operational fragility that accumulates in legacy estates [21][22].
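
    As a toy illustration, a deletion mandate can be made mechanical: every addition to the estate nominates the oldest legacy assets for funded decommissioning. The inventory schema and the ten‑year cutoff below are assumptions for the sketch, not a standard.

    ```python
    from datetime import date

    # Illustrative asset inventory; the schema and cutoff are assumptions.
    inventory = [
        {"name": "payments-api",    "deployed": date(2024, 6, 1)},
        {"name": "mainframe-batch", "deployed": date(1996, 3, 15)},
        {"name": "legacy-ftp",      "deployed": date(2003, 9, 2)},
    ]

    def deletion_mandate(assets, additions_this_quarter: int):
        """For each new system added, nominate one legacy asset for removal."""
        cutoff_year = date.today().year - 10
        legacy = sorted(
            (a for a in assets if a["deployed"].year < cutoff_year),
            key=lambda a: a["deployed"],
        )
        return legacy[:additions_this_quarter]  # oldest assets first

    for asset in deletion_mandate(inventory, additions_this_quarter=2):
        print(f"fund decommissioning of: {asset['name']} "
              f"(deployed {asset['deployed']})")
    ```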

    Deletion is one of the most effective security controls ever invented: the most secure component is the one that no longer exists [11][12]. If we don’t aggressively prune the past, we will never have the cognitive capacity to interrogate the future [13][21].

    Conclusion: Leading the Failure

    The most successful cyber leaders of the next decade won’t be those who claim to be invincible builders. They will be adversarial partners, leaders who make the experimental nature of modern business explicit and then build a culture that values active stress-testing, fast learning, and rapid recovery over the illusion of perfect knowledge [14][18].

    We don’t need more tools; we need permission (ok, and budget…) to validate resilience continuously and prove control effectiveness with evidence, not optimism [4][20][23].

    We need to use our deep understanding of how things work to find where they break, before the adversary does [1][2]. And let’s remove that legacy system.

    Sources

    [1] Stephan Miller, “What Is Red Team Testing, and How Does It Work? What You Need to Know,” Infosec Institute, May 30, 2024. [infosecinstitute.com]

    [2] Crimson7, “Understand Red Team and Adversary Simulation,” web page. [crimson7.io]

    [3] CrowdStrike, Red Team / Blue Team Exercise (data sheet). [crowdstrike.com]

    [4] Shahroz Tariq, Mohan Baruwal Chhetri, Surya Nepal, and Cecile Paris, “Alert Fatigue in Security Operations Centres: Research Challenges and Opportunities,” ACM Computing Surveys 57, no. 9 (April 2025): Article 224. [dl.acm.org]

    [5] Ann Rangarajan et al., “A Roadmap to Address Burnout in the Cybersecurity Profession: Outcomes from a Multifaceted Workshop,” HCI for Cybersecurity, Privacy and Trust (HCII 2025), June 11, 2025. [link.springer.com]

    [6] Pierluigi Paganini, “Burnout in SOCs: How AI Can Help Analysts Focus on High-Value Tasks,” Security Affairs, December 5, 2024. [securityaffairs.com]

    [7] Robert Nord, Ipek Ozkaya, and Carol Woody, Examples of Technical Debt’s Cybersecurity Impact (July 2021). [apps.dtic.mil]

    [8] “When technical debt strikes the security stack,” CSO Online, September 25, 2024. [csoonline.com]

    [9] Palo Alto Networks, “Black Box AI: Problems, Security Implications, & Solutions,” Cyberpedia. [paloaltonetworks.com]

    [10] Jack Tilbury and Stephen Flowerday, “Automation Bias and Complacency in Security Operation Centers,” Computers 13, no. 7 (2024). [mdpi.com]

    [11] Bruce Schneier, “Complexity Is the Worst Enemy of Security,” PDF (March 2025). [schneier.com]

    [12] Bruce Schneier and Anthony Vance, “Guest Editorial: ‘Complexity is the Worst Enemy of Security’: Studying Cybersecurity Through the Lens of Organizational Complexity,” MIS Quarterly 49, no. 1 (2025): 205–210. [misq.umn.edu]

    [13] American Psychological Association, “Multitasking: Switching Costs,” APA research summary (discusses task-switching experiments and measurable switching costs, including Rubinstein, Evans, and Meyer). [apa.org]

    [14] Microsoft Learn, “Understand chaos engineering and resilience with Chaos Studio,” Azure Chaos Studio documentation (explains resilience validation in production-like environments and fault injection). [learn.microsoft.com]

    [15] Amazon Web Services, “REL12‑BP04 Test resiliency using chaos engineering,” AWS Well-Architected Framework: Reliability Pillar (run chaos experiments regularly, close to production, limit blast radius). [docs.aws.amazon.com]

    [16] Amazon Web Services, “REL12‑BP04 Test resiliency using chaos engineering,” implementation guidance on experiments as regression tests in the SDLC/CI/CD. [docs.aws.amazon.com]

    [17] National Institute of Standards and Technology (NIST), “SP 800‑218 Rev. 1 (Initial Public Draft), Secure Software Development Framework (SSDF) Version 1.2,” December 17, 2025 (integrate secure practices throughout SDLC). [csrc.nist.gov]

    [18] Cybersecurity and Infrastructure Security Agency (CISA), “Corporate Cyber Governance: Owning Cyber Risk at the Board Level,” January 8, 2025. [cisa.gov]

    [19] National Institute of Standards and Technology (NIST), NIST IR 8286 Rev. 1: Integrating Cybersecurity and Enterprise Risk Management (ERM), December 2025. [csrc.nist.gov]

    [20] NIST, SP 800‑137: Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations, September 2011 (visibility into assets, threats, vulnerabilities, and control effectiveness). [csrc.nist.gov]

    [21] Dirk Schrader, “The Hidden Threat of Legacy Systems: Lessons from a Massive Recent Data Breach,” Cybersecurity Insiders, December 5, 2024 (legacy assets, decommissioning failures, need for inventory and risk reduction). [cybersecur…siders.com]

    [22] Infosys, A Strategic Framework for Decommissioning Legacy Tech (white paper; decommissioning reduces cost, risk, and addresses unpatched vulnerabilities and compliance gaps). [infosys.com]

    [23] NIST, “Detect,” NIST Cybersecurity Framework mappings page (continuous monitoring references). [nist.gov]

  • The Cognitive Architecture of Cyber Security: Why Boardroom Governance Fails

    The fundamental crisis in modern cybersecurity is not a deficit of cryptography or a lack of firewall throughput; it is a mismatch in systems thinking.

    We are attempting to secure 21st‑century hyper‑complex infrastructure using outdated mental models [1][2].

    In cybersecurity, technology is only half the equation; the other half is human behavior, economics, and organizational design [3][4]. When executives rely on cognitive shortcuts to manage cyber risk, those shortcuts manifest as catastrophic strategic blind spots [5]. Cybersecurity is fundamentally a risk‑management and decision‑quality problem [6]. Resilience is built by debugging the organization’s governance before debugging the code [7].

    Here are the six principles—the “cognitive firewalls”—required for high‑fidelity board‑level defense [8].

    1) The “Secure‑by‑Default” Principle: When the System, Not the Human, Fails

    When a breach occurs, the corporate reflex is to search for a single throat to choke. This usually leads to scapegoating an employee who clicked a malicious link. This is a failure of leadership [9].

    We must invert this blame. A robust system assumes human error will occur and builds “secure‑by‑default” safeguards (like phishing‑resistant MFA) to ensure a single point of failure never results in a terminal event [10][11][12]. If one human error can compromise your enterprise, your system design is a failure [12].
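
    The arithmetic behind this claim is the Swiss cheese model [9]: independent safeguards multiply, so one fallible human becomes survivable. A back‑of‑the‑envelope sketch, with every probability an illustrative assumption:

    ```python
    # Swiss cheese model, back of the envelope. Each value is the assumed
    # probability that a control layer *holds* against a single phish.
    layers = {
        "user spots the phish":     0.70,   # humans fail; that is the premise
        "mail filtering":           0.95,
        "phishing-resistant MFA":   0.999,  # the secure-by-default safeguard
        "egress/anomaly detection": 0.90,
    }

    # A terminal event requires every independent layer to fail at once.
    p_terminal = 1.0
    for control, p_holds in layers.items():
        p_terminal *= (1.0 - p_holds)

    print(f"P(one click becomes a terminal event) ~ {p_terminal:.1e}")
    # ~1.5e-06 here, versus 0.30 if the user is the only layer.
    ```

    The exact numbers are not the point; the point is that system design, not user blame, controls the exponent.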

    2) The Explainability Mandate: Surviving the AI Oracle Trap

    As organizations race to integrate artificial intelligence, they are falling into automation complacency and automation bias: forms of over‑reliance where humans stop verifying and start obeying [13][14].

    This is an abdication of fiduciary duty. Relying on a sophisticated black box without human oversight simply creates a highly efficient, invisible point of failure [15]. If your security tooling cannot explain the “why” behind its recommendation in plain, verifiable logic, its output must be treated as a guess [13][16]. You cannot outsource risk ownership to an algorithm [7][17].

    3) The Near‑Miss Metric: Success Is the Most Dangerous Illusion

    Boards frequently suffer from survivorship bias, deriving a false sense of security from the absence of major crises [18]. Leaders often mistake a “clean record” for superior defense, when it is frequently just the result of a quiet threat landscape or, more likely, a lack of telemetry and visibility [19][20].

    Resilience must be measured by how the organization identifies, learns from, and responds to near‑misses, not by the illusion of invulnerability [21][22].

    A company that claims it is never attacked is either lying or blind; “no alarms” often means “no visibility” [19][20].
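
    One hedged way to make near‑miss learning measurable, assuming close calls are logged at all, is to track what fraction of them ever get reviewed and how quickly review happens. The record schema and data below are an illustrative sketch, not a standard:

    ```python
    from dataclasses import dataclass
    from datetime import date

    # Illustrative near-miss ledger; fields and entries are assumptions.
    @dataclass
    class NearMiss:
        detected: date          # when the close call was spotted
        reviewed: date | None   # when a review produced findings (or never)
        control_fixed: bool     # did the review lead to a concrete change?

    log = [
        NearMiss(date(2025, 1, 8),  date(2025, 1, 10), True),
        NearMiss(date(2025, 2, 3),  None,              False),  # never reviewed
        NearMiss(date(2025, 3, 21), date(2025, 4, 2),  False),  # no change made
    ]

    reviewed = [m for m in log if m.reviewed is not None]
    review_rate = len(reviewed) / len(log)
    fix_rate = sum(m.control_fixed for m in log) / len(log)
    mean_days = sum((m.reviewed - m.detected).days for m in reviewed) / len(reviewed)

    # A "clean record" with an empty ledger is a visibility gap, not a win.
    print(f"review rate {review_rate:.0%}, fix rate {fix_rate:.0%}, "
          f"mean days to review {mean_days:.1f}")
    ```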

    4) The Business Translation Principle: Bridging the Governance Gap

    Technical experts often suffer from the curse of knowledge. In the boardroom, the CISO speaks in patch cycles, zero‑days, and acronym‑heavy threat intel. The Board hears only noise, and disengagement follows [7][8].

    Security is a business enablement function, not a dark art. If a CISO cannot explain a technical risk without hiding behind acronyms, anchoring instead on revenue exposure, regulatory consequence, and operational downtime, then the risk is not governable by the Board [6][7][8]. Don’t tell the board about a CVSS score; CVSS is severity, not risk [23]. Tell them that the system processing 40% of daily revenue can be taken offline, and what the business impact is [6][23].
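
    For instance, a hedged back‑of‑the‑envelope translation of the example above might look like this; every figure (revenue, downtime, likelihood) is an assumed input from business‑impact analysis and threat modelling, not something a scanner emits:

    ```python
    # Illustrative board translation; all inputs are assumptions from a BIA,
    # not outputs of a vulnerability scanner.
    daily_revenue = 4_000_000     # total daily revenue ($)
    revenue_share = 0.40          # the affected system processes 40% of it
    downtime_days = 3             # realistic outage if exploited
    annual_likelihood = 0.15      # estimated chance of exploitation this year

    exposure_if_hit = daily_revenue * revenue_share * downtime_days
    expected_annual_loss = exposure_if_hit * annual_likelihood

    print(f"If exploited: ~${exposure_if_hit:,.0f} of revenue at risk "
          f"over {downtime_days} days of downtime.")
    print(f"Expected annual loss: ~${expected_annual_loss:,.0f} "
          f"(weigh against the cost of the fix).")
    ```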

    5) The Debt Eradication Principle: Killing Zombie Projects

    The sunk cost fallacy traps organizations into doubling down on obsolete technology simply because millions were spent on it years ago [24]. These “Zombie Projects” are the essence of security technical debt: they drain capital, require manual workarounds, and expand the attack surface while projecting an illusion of safety [25][26].

    Leaders must enforce a “Day Zero” mindset: “If we started today with zero spend, would we invest $1 in this architecture?” If the answer is no, a decommissioning plan must be drafted [25]. Complexity is the enemy of security; simplification is often the strongest control [1][2]. (Replace defense in depth with depth in defense).

    6) The Cognitive Ladder of the Security Professional

    Security cognition must mature as a professional moves from the keyboard to the boardroom:

    • The Engineer (Tactical Anticipation): Complex systems fail in complex ways; if you can’t name three bypass paths, you probably haven’t modeled the threat properly [1][2].
    • The Architect (Systemic Design): Assume compromise, and demand red‑team testing to validate what actually holds under attack [27].
    • The CISO (Business Translation): Stop speaking in cryptographic standards; speak in capital allocation, regulatory risk, and operational resilience [6][7].
    • The Board (Risk Governance): Avoid the trap of “best practices” and demand proof of control effectiveness, especially through continuous monitoring and evidence [19][20].

    Conclusion: Security Is a Process, Not a Purchase

    Consider a major enterprise that invests millions in a state‑of‑the‑art SOC, complete with predictive analytics and glowing dashboards. Yet, a basic misconfiguration by an internal admin goes undetected for months. Why? Because the culture was so mesmerized by vendor promises that no one continuously tested the fundamentals, and without monitoring and logging, organizations operate in the dark [19][20][26].

    Security is not a product you can buy; it is a continuous process of engineering and governance [28]. It is the discipline to challenge your own assumptions, simplify your architecture, and accept that your systems must be resilient to human failure [1][2].

    The most secure leaders are not those with the most expensive tools; they are the ones who know exactly how their systems will eventually fail, and can prove it with evidence [19][20].

    Sources

    [1] Bruce Schneier and Anthony Vance, “Guest Editorial: ‘Complexity is the Worst Enemy of Security’: Studying Cybersecurity Through the Lens of Organizational Complexity,” MIS Quarterly 49, no. 1 (2025): 205–210. [misq.umn.edu]

    [2] Bruce Schneier, “Complexity Is the Worst Enemy of Security,” PDF (March 2025). [schneier.com]

    [3] Kalam Khadka and Abu Barkat Ullah, “Human Factors in Cybersecurity: an Interdisciplinary Review and Framework Proposal,” International Journal of Information Security (April 29, 2025). [link.springer.com]

    [4] Wenjing Huang, Sasha Romanosky, and Joe Uchill, Beyond Technicalities: Assessing Cyber Risk by Incorporating Human Factors (Santa Monica, CA: RAND, July 9, 2025). [rand.org]

    [5] Neema Parvini, “Key Concepts: Dual‑Process Theory, Heuristics and Biases,” in Shakespeare and Cognition (Palgrave Macmillan, 2015). [link.springer.com]

    [6] National Institute of Standards and Technology (NIST), NIST IR 8286 Rev. 1: Integrating Cybersecurity and Enterprise Risk Management (ERM) (December 2025). [csrc.nist.gov]

    [7] Cybersecurity and Infrastructure Security Agency (CISA), “Corporate Cyber Governance: Owning Cyber Risk at the Board Level” (January 8, 2025). [cisa.gov]

    [8] Cybersecurity and Infrastructure Security Agency (CISA), “Cybersecurity Governance” (web page). [cisa.gov]

    [9] “Swiss cheese model,” Wikipedia, accessed February 2026. [en.wikipedia.org]

    [10] CISA et al., Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security‑by‑Design and ‑Default (April 13, 2023). [cisa.gov]

    [11] CISA, Implementing Phishing‑Resistant MFA (October 2022). [cisa.gov]

    [12] NIST, SP 800‑53 Rev. 5, control SA‑8(23) “Secure Defaults” (reference entry). [csf.tools]

    [13] Raja Parasuraman and Dietrich H. Manzey, “Complacency and Bias in Human Use of Automation: An Attentional Integration,” Human Factors 52, no. 3 (2010): 381–410. [depositonc…-berlin.de]

    [14] Jack Tilbury and Stephen Flowerday, “Automation Bias and Complacency in Security Operation Centers,” Computers 13, no. 7 (2024). [mdpi.com]

    [15] Palo Alto Networks, “Black Box AI: Problems, Security Implications, & Solutions,” Cyberpedia. [paloaltonetworks.com]

    [16] Parasuraman and Manzey, “Complacency and Bias in Human Use of Automation.” [depositonc…-berlin.de]

    [17] NIST, NIST IR 8286 Rev. 1: Integrating Cybersecurity and Enterprise Risk Management (ERM) (December 2025). [csrc.nist.gov]

    [18] David Gray, “Cybersecurity Survivorship Bias and How to Avoid it,” Infosecurity Magazine (March 10, 2021). [infosecuri…gazine.com]

    [19] NIST, SP 800‑137: Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations (September 2011). [csrc.nist.gov]

    [20] NIST, “Detect,” NIST Cybersecurity Framework mappings page (includes continuous monitoring references). [nist.gov]

    [21] “The Five Principles of High Reliability Organizations (HROs),” summary sheet citing Weick and Sutcliffe, including “preoccupation with failure” and attention to near misses. [mro.net]

    [22] International Humanistic Management Association, “High Reliability Organization (HRO) Principles Reference Sheet,” including near‑miss learning language. [High Relia…Principles]

    [23] National Vulnerability Database (NVD), “Vulnerability Metrics (CVSS): CVSS is not a measure of risk.” [nvd.nist.gov]

    [24] Hal R. Arkes and Catherine Blumer, “The Psychology of Sunk Cost,” Organizational Behavior and Human Decision Processes 35, no. 1 (1985): 124–140. [awspntest.apa.org], [docslib.org]

    [25] Jean‑Louis Letouzey and Declan Whelan, Introduction to the Technical Debt Concept (Agile Alliance, PDF). [agilealliance.org]

    [26] Letouzey and Whelan, Introduction to the Technical Debt Concept (technical debt “interest” metaphor via Cunningham). [agilealliance.org]

    [27] Microsoft Learn, “Attack Simulation in Microsoft 365 (Assume breach … continuous monitoring and testing),” Microsoft Service Assurance. [learn.microsoft.com]

    [28] Bruce Schneier, “The Process of Security,” (April 2000). [schneier.com]