{"id":99,"date":"2026-02-17T16:25:20","date_gmt":"2026-02-17T16:25:20","guid":{"rendered":"https:\/\/dpatsos.net\/?p=99"},"modified":"2026-02-17T16:29:22","modified_gmt":"2026-02-17T16:29:22","slug":"the-professional-skeptics-burden-defending-a-stack-that-never-sleeps","status":"publish","type":"post","link":"https:\/\/dpatsos.net\/index.php\/2026\/02\/17\/the-professional-skeptics-burden-defending-a-stack-that-never-sleeps\/","title":{"rendered":"The Professional Skeptic\u2019s Burden: Defending a Stack That Never Sleeps"},"content":{"rendered":"\n<p>I&#8217;m not sure there&#8217;s such a thing as a \u201cSecurity Engineer\u201d.<\/p>\n\n\n\n<p>In the traditional world, an engineer is hired to build a \u201csuccess state.\u201d They take raw materials and assemble them into a bridge, a building, or a piece of software that works. Their focus is the \u201chappy path\u201d, that is the sequence of events that leads to a functional outcome.<\/p>\n\n\n\n<p>I believe we are the exact opposite. We are <em>Inverse Architects<\/em>.<\/p>\n\n\n\n<p>It isn\u2019t that we don\u2019t understand how things work. On the contrary, we often understand the mechanics, the protocols, and the logic gates better than the architects themselves. But while they study the architecture to ensure its stability, <strong>we study it to find how it breaks<\/strong>: the attack paths, the failure modes, the hidden seams where controls don\u2019t hold under pressure [1][2]. 
We don\u2019t look at a new business model and simply admire the \u201cinnovation\u201d; we also see a fresh surface for logic flaws and unhandled exceptions [1][3].<\/p>\n\n\n\n<p>This mindset is our greatest superpower, but in today\u2019s additive corporate landscape, it has also become a recipe for <strong>cognitive redlining<\/strong>: a constant overload created by the combination of adversarial pressure, tool sprawl, and context switching [4][5].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) The Burden of the Infinite Stack<\/h3>\n\n\n\n<p>Burnout in cybersecurity isn\u2019t just about the hours; it\u2019s about the <strong>adversarial tax<\/strong>. SOCs and defenders operate under relentless alert volume, triage pressure, and the constant risk that \u201cone missed signal\u201d becomes tomorrow\u2019s incident [4][6]. Unlike most functions, where the \u201cnew\u201d replaces the \u201cold,\u201d <strong>security inherits everything: every legacy dependency that cannot be retired, and every modern component that must be secured on day one<\/strong> [7][8].<\/p>\n\n\n\n<p>We are currently tasked with maintaining deep technical empathy for 30\u2011year\u2011old legacy systems while simultaneously predicting how new AI-driven systems and automation can fail, sometimes in ways that are opaque and hard to inspect [9][10].<\/p>\n\n\n\n<p>One of the most prominent principles in our field is brutally simple: <strong>complexity is the worst enemy of security<\/strong> [11][12].<\/p>\n\n\n\n<p>Yet the corporate reality often trends toward infinite expansion. That constant oscillation, moving from the basement of technical debt to the penthouse of future strategy, creates a <strong>context-switching tax that is fundamentally unsustainable<\/strong>.<\/p>\n\n\n\n<p>Cognitive research shows that task-switching imposes measurable \u201cswitch costs\u201d that degrade performance, especially as tasks become more complex [13].
Security roles amplify this cost because the switching isn\u2019t between similar tasks; it\u2019s between entirely different threat models, stacks, and operational realities [4][13].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2) From \u201cTrial and Error\u201d to Resilience Validation<\/h3>\n\n\n\n<p>In a boardroom, \u201ctrial and error\u201d is a challenging phrase (to say the least). It sounds like the absence of a plan and a waste of capital. But because we operate in a failure mindset, we know that the first version of any defense is essentially a hypothesis waiting to be disproven, especially in complex systems where real behavior diverges from design assumptions [11][14].<\/p>\n\n\n\n<p>In essence, trial and error is not an ask for <em>\u201cpermission to experiment\u201d<\/em>. We ask for <strong>adversarial quality control<\/strong>.<\/p>\n\n\n\n<p>Since we cannot predict every failure mode of a new business model on Day One, we must integrate <strong>live-fire validation<\/strong> into the product lifecycle: disciplined testing that verifies resilience under real-world stress and failure conditions [14][15].<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>The Shadow Launch:<\/strong> Run new systems in parallel or production-like environments, limit the blast radius, and intentionally test how the system behaves when things fail [14][16].<\/li>\n\n\n\n<li><strong>The Business Case:<\/strong> This is not guessing.
It\u2019s converting uncertainty into measurable assurance by validating resilience through controlled experiments and repeatable tests [14][16].<\/li>\n<\/ul>\n\n\n\n<p>This is the difference between security as <em>opinion<\/em> and security as <em>evidence<\/em>, and it aligns with the modern expectation that <strong>security practices are integrated throughout the lifecycle, not bolted on at the end<\/strong> [17].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3) The Partnership Protocol: Beyond \u201cI Don\u2019t Know\u201d<\/h3>\n\n\n\n<p>In a boardroom, \u201cI don\u2019t know\u201d is often mistaken for a lack of preparedness and planning. But in a hyper-complex environment, <strong>pretending to have all the answers is the real liability<\/strong>. Mature governance doesn\u2019t demand omniscience; it demands clarity about what we know, what we know we don&#8217;t know, and how risk will be managed (including, so to speak, the things we don&#8217;t know we don&#8217;t know) [18][19].<\/p>\n\n\n\n<p>The pragmatic shift is moving from \u201cknowing\u201d to <strong>joint discovery<\/strong>. Let&#8217;s explore this.<\/p>\n\n\n\n<p>When the business wants to deploy an experimental tool, <strong>the response isn\u2019t a checklist, it\u2019s a partnership protocol<\/strong> (we&#8217;ll discuss partnerships at a later stage). We use our understanding of the system\u2019s \u201cguts\u201d to facilitate a safe rollout. So, instead of \u201c<em>I don\u2019t know how it works<\/em>,\u201d we can anchor:<\/p>\n\n\n\n<p><em>\u201cWe understand the architecture, but because this is an emerging technology, the failure modes are still being mapped.
Let&#8217;s jointly sign up for a 30\u2011day \u2018break\u2011fix\u2019 phase where we intentionally hunt for seams before we commit the full capital spend.\u201d<\/em><\/p>\n\n\n\n<p>That\u2019s what board-level cyber governance actually looks like: <strong>decision-making, accountability, and oversight based on evidence and continuous validation<\/strong>, not vibes or gut feelings (although gut feelings may stem from experience) [18][20].<\/p>\n\n\n\n<p>This is how cyber leaders evolve from \u201cgatekeepers\u201d to \u201cpartners\u201d in risk-taking: still skeptical, but aligned to business outcomes and responsible &#8220;experimentation&#8221; [18][19].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4) Systemic Rationalization: Deletion as a Control<\/h3>\n\n\n\n<p>To manage our cognitive load, <strong>we must treat removal of legacy tech as a high-value security win<\/strong>. Every time we kill a legacy system, <strong>we\u2019re not just reducing exposure: we\u2019re reclaiming the mental bandwidth<\/strong> of the team and shrinking the attack surface [7][21].<\/p>\n\n\n\n<p>We must advocate for the deletion mandate: if the business adds a new layer of complexity, it must fund the removal of a legacy asset (a guard against the sunk-cost fallacy). <strong>Decommissioning is not clerical work; it is a strategic control that reduces unpatched risk<\/strong>, unsupported dependencies, and the operational fragility that accumulates in legacy estates [21][22].<\/p>\n\n\n\n<p><strong>Deletion is one of the most effective security controls ever invented<\/strong>: <strong>the most secure component is the one that no longer exists<\/strong> [11][12].
If we don\u2019t aggressively prune the past, we will never have the cognitive capacity to interrogate the future [13][21].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Conclusion: Leading the Failure<\/h3>\n\n\n\n<p>The most successful cyber leaders of the next decade won\u2019t be those who claim to be invincible builders. They will be <strong>adversarial partners<\/strong>: leaders who make the experimental nature of modern business explicit and then build a culture that values active stress-testing, fast learning, and rapid recovery over the illusion of perfect knowledge [14][18].<\/p>\n\n\n\n<p>We don\u2019t need more tools; we need permission (ok, and budget&#8230;) to validate resilience continuously and prove control effectiveness with evidence, not optimism [4][20][23].<\/p>\n\n\n\n<p>We need to use our deep understanding of how things work to find where they break, before the adversary does [1][2]. And let&#8217;s remove that legacy system.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Sources<\/h3>\n\n\n\n<p>[1] Stephan Miller, \u201cWhat Is Red Team Testing, and How Does It Work? What You Need to Know,\u201d <em>Infosec Institute<\/em>, May 30, 2024. <a href=\"https:\/\/www.infosecinstitute.com\/resources\/penetration-testing\/what-is-red-team-testing-how-does-it-work\/\">[infosecinstitute.com]<\/a><\/p>\n\n\n\n<p>[2] Crimson7, \u201cUnderstand Red Team and Adversary Simulation,\u201d web page. <a href=\"https:\/\/www.crimson7.io\/red-team\">[crimson7.io]<\/a><\/p>\n\n\n\n<p>[3] CrowdStrike, <em>Red Team \/ Blue Team Exercise<\/em> (data sheet).
<a href=\"https:\/\/www.crowdstrike.com\/content\/dam\/crowdstrike\/www\/en-us\/wp\/brochures\/CS_Red_BlueTeamServices_datasheet.pdf\">[crowdstrike.com]<\/a><\/p>\n\n\n\n<p>[4] Shahroz Tariq, Mohan Baruwal Chhetri, Surya Nepal, and Cecile Paris, \u201cAlert Fatigue in Security Operations Centres: Research Challenges and Opportunities,\u201d <em>ACM Computing Surveys<\/em> 57, no. 9 (April 2025): Article 224. <a href=\"https:\/\/dl.acm.org\/doi\/epdf\/10.1145\/3723158\">[dl.acm.org]<\/a><\/p>\n\n\n\n<p>[5] Ann Rangarajan et al., \u201cA Roadmap to Address Burnout in the Cybersecurity Profession: Outcomes from a Multifaceted Workshop,\u201d <em>HCI for Cybersecurity, Privacy and Trust<\/em> (HCII 2025), June 11, 2025. <a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-92833-8_8\">[link.springer.com]<\/a><\/p>\n\n\n\n<p>[6] Pierluigi Paganini, \u201cBurnout in SOCs: How AI Can Help Analysts Focus on High-Value Tasks,\u201d <em>Security Affairs<\/em>, December 5, 2024. <a href=\"https:\/\/securityaffairs.com\/171724\/security\/burnout-in-socs-how-ai-can-help-analysts-focus-on-high-value-tasks.html\">[securityaffairs.com]<\/a><\/p>\n\n\n\n<p>[7] Robert Nord, Ipek Ozkaya, and Carol Woody, <em>Examples of Technical Debt\u2019s Cybersecurity Impact<\/em> (July 2021). <a href=\"https:\/\/apps.dtic.mil\/sti\/trecms\/pdf\/AD1144728.pdf\">[apps.dtic.mil]<\/a><\/p>\n\n\n\n<p>[8] \u201cWhen technical debt strikes the security stack,\u201d <em>CSO Online<\/em>, September 25, 2024. <a href=\"https:\/\/www.csoonline.com\/article\/3532475\/when-technical-debt-strikes-the-security-stack.html\">[csoonline.com]<\/a><\/p>\n\n\n\n<p>[9] Palo Alto Networks, \u201cBlack Box AI: Problems, Security Implications, &amp; Solutions,\u201d Cyberpedia. 
<a href=\"https:\/\/www.paloaltonetworks.com\/cyberpedia\/black-box-ai\">[paloaltonetworks.com]<\/a><\/p>\n\n\n\n<p>[10] Jack Tilbury and Stephen Flowerday, \u201cAutomation Bias and Complacency in Security Operation Centers,\u201d <em>Computers<\/em> 13, no. 7 (2024). <a href=\"https:\/\/www.mdpi.com\/2073-431X\/13\/7\/165\">[mdpi.com]<\/a><\/p>\n\n\n\n<p>[11] Bruce Schneier, \u201cComplexity Is the Worst Enemy of Security,\u201d PDF (March 2025). <a href=\"https:\/\/www.schneier.com\/wp-content\/uploads\/2025\/03\/Complexity-is-the-Worst-Enemy-of-Security.pdf\">[schneier.com]<\/a><\/p>\n\n\n\n<p>[12] Bruce Schneier and Anthony Vance, \u201cGuest Editorial: \u2018Complexity is the Worst Enemy of Security\u2019: Studying Cybersecurity Through the Lens of Organizational Complexity,\u201d <em>MIS Quarterly<\/em> 49, no. 1 (2025): 205\u2013210. <a href=\"https:\/\/misq.umn.edu\/misq\/article\/49\/1\/205\/74\/Guest-Editorial-Complexity-is-the-Worst-Enemy-of\">[misq.umn.edu]<\/a><\/p>\n\n\n\n<p>[13] American Psychological Association, \u201cMultitasking: Switching Costs,\u201d APA research summary (discusses task-switching experiments and measurable switching costs, including Rubinstein, Evans, and Meyer). <a href=\"https:\/\/www.apa.org\/research\/action\/multitask\/\">[apa.org]<\/a><\/p>\n\n\n\n<p>[14] Microsoft Learn, \u201cUnderstand chaos engineering and resilience with Chaos Studio,\u201d <em>Azure Chaos Studio<\/em> documentation (explains resilience validation in production-like environments and fault injection). <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/chaos-studio\/chaos-studio-chaos-engineering-overview\">[learn.microsoft.com]<\/a><\/p>\n\n\n\n<p>[15] Amazon Web Services, \u201cREL12\u2011BP04 Test resiliency using chaos engineering,\u201d <em>AWS Well-Architected Framework: Reliability Pillar<\/em> (run chaos experiments regularly, close to production, limit blast radius). 
<a href=\"https:\/\/docs.aws.amazon.com\/wellarchitected\/latest\/reliability-pillar\/rel_testing_resiliency_failure_injection_resiliency.html\">[docs.aws.amazon.com]<\/a><\/p>\n\n\n\n<p>[16] Amazon Web Services, \u201cREL12\u2011BP04 Test resiliency using chaos engineering,\u201d implementation guidance on experiments as regression tests in the SDLC\/CI\/CD. <a href=\"https:\/\/docs.aws.amazon.com\/wellarchitected\/latest\/reliability-pillar\/rel_testing_resiliency_failure_injection_resiliency.html\">[docs.aws.amazon.com]<\/a><\/p>\n\n\n\n<p>[17] National Institute of Standards and Technology (NIST), \u201cSP 800\u2011218 Rev. 1 (Initial Public Draft), Secure Software Development Framework (SSDF) Version 1.2,\u201d December 17, 2025 (integrate secure practices throughout SDLC). <a href=\"https:\/\/csrc.nist.gov\/pubs\/sp\/800\/218\/r1\/ipd\">[csrc.nist.gov]<\/a><\/p>\n\n\n\n<p>[18] Cybersecurity and Infrastructure Security Agency (CISA), \u201cCorporate Cyber Governance: Owning Cyber Risk at the Board Level,\u201d January 8, 2025. <a href=\"https:\/\/www.cisa.gov\/news-events\/news\/corporate-cyber-governance-owning-cyber-risk-board-level\">[cisa.gov]<\/a><\/p>\n\n\n\n<p>[19] National Institute of Standards and Technology (NIST), <em>NIST IR 8286 Rev. 1: Integrating Cybersecurity and Enterprise Risk Management (ERM)<\/em>, December 2025. <a href=\"https:\/\/csrc.nist.gov\/pubs\/ir\/8286\/r1\/final\">[csrc.nist.gov]<\/a><\/p>\n\n\n\n<p>[20] NIST, <em>SP 800\u2011137: Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations<\/em>, September 2011 (visibility into assets, threats, vulnerabilities, and control effectiveness). 
<a href=\"https:\/\/csrc.nist.gov\/pubs\/sp\/800\/137\/final\">[csrc.nist.gov]<\/a><\/p>\n\n\n\n<p>[21] Dirk Schrader, \u201cThe Hidden Threat of Legacy Systems: Lessons from a Massive Recent Data Breach,\u201d <em>Cybersecurity Insiders<\/em>, December 5, 2024 (legacy assets, decommissioning failures, need for inventory and risk reduction). <a href=\"https:\/\/www.cybersecurity-insiders.com\/the-hidden-threat-of-legacy-systems-lessons-from-a-massive-recent-data-breach\/\">[cybersecur&#8230;siders.com]<\/a><\/p>\n\n\n\n<p>[22] Infosys, <em>A Strategic Framework for Decommissioning Legacy Tech<\/em> (white paper; decommissioning reduces cost, risk, and addresses unpatched vulnerabilities and compliance gaps). <a href=\"https:\/\/www.infosys.com\/services\/application-modernization\/documents\/decommissioning-legacy-tech.pdf\">[infosys.com]<\/a><\/p>\n\n\n\n<p>[23] NIST, \u201cDetect,\u201d NIST Cybersecurity Framework mappings page (continuous monitoring references). <a href=\"https:\/\/www.nist.gov\/cyberframework\/detect\">[nist.gov]<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m not sure there&#8217;s such a thing as a \u201cSecurity Engineer\u201d. In the traditional world, an engineer is hired to build a \u201csuccess state.\u201d They take raw materials and assemble them into a bridge, a building, or a piece of software that works. 
Their focus is the \u201chappy path\u201d, that is the sequence of events [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9,8],"tags":[],"class_list":["post-99","post","type-post","status-publish","format-standard","hentry","category-cognitive","category-leadership"],"_links":{"self":[{"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/posts\/99","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/comments?post=99"}],"version-history":[{"count":1,"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/posts\/99\/revisions"}],"predecessor-version":[{"id":100,"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/posts\/99\/revisions\/100"}],"wp:attachment":[{"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/media?parent=99"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/categories?post=99"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dpatsos.net\/index.php\/wp-json\/wp\/v2\/tags?post=99"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}