blog < /dev/random

Jeff McJunkin's thoughts on Penetration Testing, Systems Administration, and Network Defense


Often when doing penetration tests, clients will ask me to scan their external network presence[1]. For smaller companies, I can often use nmap from start to finish for all my scanning needs. However, for the sake of larger network ranges let’s separate out some of our scanning needs:

1. Network sweeping: Determining which IPv4 addresses have any listening services (finding “live” hosts)

2. Port scanning: Determining listening TCP and UDP ports on target systems

3. Version scanning: Determining the version of services and protocols spoken by open TCP and UDP ports

If the external IP range is roughly ten thousand hosts or fewer, nmap will work just fine for each of these needs. Often, though, larger companies can own tens or even hundreds of thousands of IPv4 addresses. How can we determine in a few hours which of these IPv4 addresses have a listening host? nmap’s default behavior only sends a few probe requests — if all of those probe requests fail, the host is marked offline and no further probes are sent. We can skip the network sweeping with the -Pn option, but then nmap will scan every single configured port for every single IP address. Since the large majority of external IPv4 ranges won’t have listening services, for large network ranges this could take weeks, months, or even years! What we need is some way to efficiently do a network sweep (find which IPv4 addresses have listening services) before handing that smaller list to nmap for further port and version scanning.

Why does nmap have a hard time with such huge network ranges? Fundamentally, nmap is a synchronous tool — that is, it tracks the connection requests and waits for replies. If a TCP connection request (a SYN) doesn’t get any reply, nmap will eventually time out and declare that service filtered. nmap certainly runs many probe requests in parallel, but filtered services (and unassigned IPv4 addresses) can really slow it down.

In contrast to synchronous tools like nmap, there are several tools that don’t track connections — also known as asynchronous scanners. Examples include scanrand, ZMap, and my personal favorite masscan.

masscan is my favorite of the asynchronous scanning tools for several reasons. First and foremost, it uses the same syntax as nmap whenever possible, which makes it easier to pick up. Second, even amongst asynchronous scanning tools it’s really, really fast. Effectively, with proper network interfaces and drivers it’s limited only by your bandwidth. With two Intel 10 gigabit Ethernet adapters it can scan the entire IPv4 internet in six minutes, transmitting over 10 million packets per second. If nmap is light speed, ZMap and scanrand are ridiculous speed, and masscan is ludicrous speed.

First, let’s look at masscan’s basic syntax for scanning the well-known TCP ports of a large network, such as Apple’s 17.0.0.0/8 (~16 million IPv4 addresses):

masscan 17.0.0.0/8 -p0-1023

Scanning speed

By default, masscan will only send 100 packets per second. Counting 18 bytes for the Ethernet header, 20 bytes for a TCP header, and 20 more for the IPv4 header, that’s only 5,800 bytes per second, or ~46 kilobits per second. Because masscan scans ports and hosts evenly (that is, randomly), the scanning bandwidth you use will be evenly distributed across the hosts and ports you scan. Unintentional denial of service can be a concern with high-bandwidth scans on smaller network ranges, but a rate of 1-10 megabits per second (up to --rate 20000, twenty thousand packets per second) should be pretty safe. Virtual machines can safely go up to --rate 200000, which is about 93 megabits per second of outgoing scanning traffic — but check with your client if you need to use these higher speeds.
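The arithmetic above is easy to sanity-check in a shell. This is pure arithmetic (no masscan required), using the same 58-byte-per-SYN accounting:

```shell
# Each SYN is ~58 bytes on the wire: 18 (Ethernet) + 20 (IPv4) + 20 (TCP).
# Multiply by packets per second, then by 8 bits per byte, to get bandwidth.
for rate in 100 20000 200000; do
  echo "$rate pps = $(( rate * 58 * 8 )) bits/sec"
done
```

At the default 100 packets per second that’s 46,400 bits per second (the ~46 kilobits above), and --rate 200000 works out to about 93 megabits per second.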

Doing a network sweep

How can we determine if a given IPv4 address has any listening TCP services? Well, we could scan all 65,536 ports (zero through 65,535), but for larger network ranges that’ll make for long scanning times, even with a high --rate. More commonly, I’ll select nmap’s top 100 or 1,000 ports by popularity. If any IPv4 address responds to any SYN packet (whether with a RST for a closed port or a SYN-ACK for an open one), we’ll save out that host and scan it using more specialized tools such as nmap or even a vulnerability scanner like Nessus.

Let’s use a small trick to get nmap to tell us that list of ports. We’ll scan our own system and output XML format to STDOUT. The XML format of nmap shows the exact parameters used for a scan, but crucially it’ll also translate between --top-ports X and the actual list of ports in a concise fashion. Here I’ll choose to display the top hundred ports, but you could just as easily choose the top ten or the top thousand.

$ nmap localhost --top-ports 100 -oX - | grep services
<scaninfo type="connect" protocol="tcp" numservices="100" services="7,9,13,21-23,25-26,37,53,79-81,88,106,110-111,113,119,135,139,143-144,179,199,389,427,443-445,465,513-515,543-544,548,554,587,631,646,873,990,993,995,1025-1029,1110,1433,1720,1723,1755,1900,2000-2001,2049,2121,2717,3000,3128,3306,3389,3986,4899,5000,5009,5051,5060,5101,5190,5357,5432,5631,5666,5800,5900,6000-6001,6646,7070,8000,8008-8009,8080-8081,8443,8888,9100,9999-10000,32768,49152-49157"/>
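If you’d rather not copy that attribute out by hand, sed can carve it from the XML for you. Here the scaninfo line is shortened to three ports for illustration; in practice you’d pipe the nmap command above straight into the same sed expression:

```shell
# Pull the comma-separated port list out of nmap's <scaninfo> line.
line='<scaninfo type="connect" protocol="tcp" numservices="3" services="22,80,443"/>'
printf '%s\n' "$line" | sed -n 's/.*services="\([^"]*\)".*/\1/p'
```

The result is exactly the comma-separated list masscan’s -p option expects, so you can hand it over via command substitution.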

Now we can copy that list of ports into masscan and scan our target range. We’ll keep using Apple as our victim example network. At 100,000 packets per second, this will use around 46 megabits per second of traffic (using the same 58-byte-per-packet accounting as above).

$ sudo masscan 17.0.0.0/8 -oG apple-masscan.gnmap -p 7,9,13,21-23,25-26,37,53,79-81,88,106,110-111,113,119,135,139,143-144,179,199,389,427,443-445,465,513-515,543-544,548,554,587,631,646,873,990,993,995,1025-1029,1110,1433,1720,1723,1755,1900,2000-2001,2049,2121,2717,3000,3128,3306,3389,3986,4899,5000,5009,5051,5060,5101,5190,5357,5432,5631,5666,5800,5900,6000-6001,6646,7070,8000,8008-8009,8080-8081,8443,8888,9100,9999-10000,32768,49152-49157 --rate 100000
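As a back-of-the-envelope check on how long a scan like this takes (packet count divided by rate, nothing more):

```shell
# One SYN per host per port, divided by the sending rate.
hosts=16777216   # a whole /8, ~16 million IPv4 addresses
ports=100        # the top-100 TCP port list above
rate=100000      # masscan --rate, in packets per second
echo "$(( hosts * ports / rate )) seconds"
```

16,777 seconds is roughly 4.7 hours; retransmissions and masscan’s end-of-scan wait for late replies push the real-world figure toward the five-hour mark.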

Note that masscan supports the same -oG filename.gnmap option as nmap does. We’ll read through that output list (the so-called “greppable” format) to find the list of hosts that are alive. Given 16 million target IPv4 addresses and 100 TCP ports each, this scan will take around five hours to complete — which is well within what I’d consider a “reasonable” timeframe. Let’s look at the first few lines of the resulting file:

# Masscan 1.0.3 scan initiated Thu Jul 20 22:24:40 2017
# Ports scanned: TCP(1;7-7,) UDP(0;) SCTP(0;) PROTOCOLS(0;)
Host: ()  Ports: 443/open/tcp////
Host: ()   Ports: 179/open/tcp////
Host: () Ports: 8081/open/tcp////
Host: () Ports: 8081/open/tcp////

We only need the IPv4 address, so we’ll use egrep to search for lines beginning with “Host: ” and cut to take the second field. We’ll also sort and make the lines unique with uniq, just in case masscan writes the same IPv4 address twice.

$ egrep '^Host: ' apple-masscan.gnmap | cut -d" " -f2 | sort | uniq > apple-alive
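As a quick sanity check, here is the same pipeline run over a few fabricated gnmap lines (192.0.2.x is a documentation-only address range; plain grep suffices since the pattern isn’t an extended regex):

```shell
# Two open ports on one host, one on another: expect two unique addresses out.
printf 'Host: 192.0.2.1 ()\tPorts: 443/open/tcp////\nHost: 192.0.2.1 ()\tPorts: 80/open/tcp////\nHost: 192.0.2.9 ()\tPorts: 22/open/tcp////\n' |
  grep '^Host: ' | cut -d' ' -f2 | sort | uniq
```

This prints 192.0.2.1 and 192.0.2.9, one address per line, with the duplicate collapsed.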

Now we have a much smaller list of IPv4 addresses to work with, one address per line. As a parting example, we can use this as an input list with nmap to do a more thorough scan:

# nmap -Pn -n -A -iL apple-alive -oA apple-nmap-advanced-scan

Using our masscan-generated file, nmap will now be able to do its job much more quickly!

Please let me know in the comments if you find this useful for your workflow. I love nmap, but sometimes larger tasks call for more specialized tools.

Thanks for reading! – Jeff McJunkin

[1] Hopefully the client is risk-aware enough to consider an “assume breach” mentality and give penetration testers internal network access in addition to the external network scan, but that’s a separate story.

In fact, nmap is absolutely my favorite tool for version scanning (that is, differentiating between Apache 2.2 and IIS 8.0 on a listening port 80).

“Negative Unemployment and Great Job Satisfaction? Why Infosec is AWESOME” presentation posted

As promised, here is my January 2014 presentation at Southern Oregon University. There was video of the event, which I hope to find and share later.

EDIT: The video, graciously provided by Sustainable Valley Technology Group, is now included below.

April 2013 SOU Presentation

Though long delayed, below is the slideshare for my April 2013 talk at SOU, entitled “Getting Involved In Network Security”:


Expect another post with my January 2014 presentation soon.

Introduction to Network Penetration Testing – Module 1, Networking Overview

I’m mentoring a highly-motivated high school student through a senior project, as he’s interested in network security and wants to do a penetration test of his high school. He’s got permission and I’ve got the spare cycles, so I agreed to mentor him.

What will hopefully follow is a series of blog posts of the compressed education I’ll give. I’m trying to constrain this to 40-ish hours, and also trying to pass enough education so he knows to a reasonable extent what he’s doing by the end, not just showing off a series of tools. With such a short time limitation, this will obviously be a whirlwind of topics, so advanced readers will have to forgive me glossing over some rather important details.

Module One – Overview of Practical Networking

I’m a big believer in learning networking from a practical point of view, as it helps with many aspects of troubleshooting. Troubleshooting, as I use the word, is having a goal and working through or bypassing the obstacles encountered, often using many different approaches to the problem. It’s okay to have eleven failed solutions, as long as you have twelve different ways to solve the problem. Penetration testing is just troubleshooting your way to Domain Admin, so it relates well.

Though in a classroom environment I’d also teach the OSI model, for the defined purpose of this class the TCP/IP model fits better, so we’ll work with that. The one-sentence descriptions of each layer below are my own attempt to sum up the intent of each layer.

I’d be foolish not to link to Wikipedia’s page on the TCP/IP model, because it is very well fleshed out. Specifically, the section on encapsulation, which I’ll quote below, explains the concept very well:

The Internet protocol suite uses encapsulation to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, being further encapsulated at each level.

The layers of the protocol suite near the top are logically closer to the user application, while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction to isolate upper layer protocols from the details of transmitting bits over, for example, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol.

Layer 1 – Network Access Layer – “Physical network interface to network interface communication, within the same subnet”

Example protocols: Ethernet, 802.11{a,b,g,n}

The network access layer is scoped to just allowing hosts (or more precisely, their network interfaces) on the same network to communicate. This also includes the physical components (such as cabling and interfaces) and protocols for sending and receiving the physical signals. Ethernet is an example protocol: each Ethernet (MAC) address is assigned to the network card by the manufacturer and is supposed to be globally unique.

The link light on an Ethernet card, for example, is solely indicating whether the involved interface believes it is connected to another device speaking the same protocol. A laptop connected via Ethernet to a switch with no other hosts attached, for example, would still have an active link light, because both the switch and network interface speak Ethernet.

Moving frames from one interface to another at the link layer is called switching. A switch is a network device that connects hosts within the same subnet (it “switches frames”), and therefore operates at layer one.

Layer 2 – Internet Layer – “Logical host to host communication, across separate subnets (or within a subnet)”

Example protocols: IPv4, IPv6

As a building block above layer one, the Internet layer allows hosts in different subnets to communicate. Note that while network access layer addresses are hardware addresses assigned by the manufacturer, Internet layer addresses are assigned by the user to particular hosts. As such, a machine can have different Internet layer addresses while in different subnets (such as a laptop with a different IP address at home and at a coffee shop), whereas a network access layer device will always have the same address (assuming no MAC spoofing shenanigans).

Moving packets from one subnet to another is called routing. A router is a network device that connects multiple subnets (or “routes packets”), and therefore operates at layer two.

Layer 3 – Transport Layer – “Service to service communication”

Example protocols: TCP, UDP

The transport layer builds upon the Internet layer (starting to get the theme, here?) by allowing multiple services on a single host. Look at the IPv4 header, for example. There’s a field for destination address, but how do you speak to a particular service on a host? Imagine a server that runs both HTTP and FTP. How do I, as a client, tell the server which service I want to talk to? Using solely IPv4, I can only send a packet to the host as a whole — there isn’t a field for which service I mean to talk to. The transport layer, at a minimum, provides this service through the concept of ports, of which there are 65,535 usable (2^16 – 1, since port zero is reserved).

Particular services are by convention found on particular ports, and vice versa (see IANA Assigned Port Numbers). If you see port 80 is open, for example, you’d expect a web server (HTTP) to be running on that port. However, there is no “Internet police” regulating this, so people can and do run services on non-standard ports, for a multitude of reasons. High ports are commonly used for a client to connect from (i.e., as an ephemeral port), so it’s very common to see a client connect from port 49,273 (for example) to port 80, in order to connect to a web server.

As for the two major protocols used at this layer, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), there are some fundamental differences.

User Datagram Protocol (UDP)

UDP is connectionless, meaning messages are sent directly, without any of the overhead that TCP has: the protocol is as minimal as possible, providing barely anything beyond source and destination ports. For IPv4, even the checksum field is optional. In general, protocols that emphasize low latency (Voice over IP, real-time video) or low per-message overhead (DNS, primarily) will tend to prefer UDP. However, since delivery isn’t guaranteed, protocols choosing UDP must be able to work despite occasional loss of packets. Voice over IP, for example, will just skip a few phonemes, whereas a lost DNS request will simply result in sending the request again after a short timeout.

Transmission Control Protocol (TCP)

TCP is more complex, and provides reliable delivery of data in proper order. Even if some packets are dropped, if the overall connection *can* pass packets successfully, then eventually the two application-layer protocols above TCP will get their data. On a lossless network (which almost all local networks should be), the overhead is very minimal and high-speed communications are in no way hampered by TCP. As shown in the TCP header, all these features mean the protocol header itself has many more fields to track these options. We’ll skip going over the TCP fields at this level, for now.

Layer 4 – Application Layer – “Application to application communication, often on behalf of a user”

Example protocols: HTTP, DNS, FTP

As the top-most layer in the networking stack, the application layer is the one that carries traffic from a particular application. As such, there’s a ton of variation in this layer, and many, many protocols. I’ll touch on this concept in a further blog post, but application layer packets are, in blunt terms, the entire point of the communication. Though there’s a lot of scaffolding in lower layers to get an HTTP client connected to an HTTP server, the point of all of those connections is the HTTP (application layer) communication. The HTTP communication (in this example) is what the user requested, which is an exceedingly common theme.

Building a virtual lab for security testing

UPDATE – if you’re looking for my article on “Building A Pen Test Lab”, it’s located on the SANS Pen Test blog, not here.

tl;dr — Building a lab like the following is very useful:
Virtual Lab Diagram

It’s not a debate that most IT professionals should have a lab environment in which they can practice their trade. Many don’t have one at work, though, and don’t make one at home. Those of us in network security (whether offense or defense) aren’t an exception, either. Ed Skoudis (of SANS and InGuardians fame) posted on this recently, and a DEFCON 20 talk from Trustwave featured their testing labs heavily.

The purpose of this design is to go into more detail than most security labs, to more closely simulate a standard small business network. You can learn some basics of Metasploit, for example, by using a BackTrack or Kali VM as well as Metasploitable, but more comprehensive attacks and defenses need a more realistic network.

The Active Directory domain controller, file server, and external blog in this lab all represent unique (and common) attack opportunities. Client desktops are almost always of multiple security levels and OS levels, which explains both the Windows XP and 7 workstations. The DMZ is slightly unusual for a small business, but is reasonable in simulating a larger environment. The larger environments, by the way, are the ones that have money for vulnerability assessments and penetration tests, so they’re certainly the networks worth studying.

Not included in the lab diagram is a Security Onion VM for intrusion detection capabilities, and a Splunk server (for now — Graylog2 might replace this) allowing all kinds of logs (syslogs and Windows event logs, to start) to be collected.

Though the hardware I used to put together this lab certainly wasn’t free, it was less expensive than you might think. I’ll put up another post about it shortly, but for now, know that it was based on this fine gentleman’s home lab. One awesome resource that I checked into heavily, by the way, is the r/homelab community. If you have any quick questions, you can also reach some of those folks at #r_homelab on Freenode IRC.

In further posts, I’ll go into how and why I designed the lab this way, what licensing I used, and how I went about building it from a practical point of view.

Step-by-step Implementation of Local Administrator Password Randomization Script

Since the documentation was a bit sparse on my script in my previous post, I thought I’d post clearer instructions, for those not as familiar with Group Policy. This post is going live with my guest Tech Segment on PaulDotCom today.


For this implementation guide, I assume you have an Active Directory domain and several clients to manage. You’ll also need to do your work from a machine with Group Policy Editor, and either the correct delegated permissions or Domain Admin privileges. If there are any instructions that are unclear, leave a comment and I’ll update the post.

1. Download script

Please download the randomize-local-admin.vbs script from my GitHub (right-click, Save Link As…) and save it to your Desktop or another accessible location. You’ll need this shortly.

2. Create the Group Policy Object

Open up Group Policy Management Console, browse to the Group Policy Objects folder, then right-click on it and create a new Group Policy Object.

Name it something recognizable such as “Local Administrator Password Randomization”, then right-click and Edit it.

Now browse to Computer Configuration -> Policies -> Windows Settings -> Scripts (Startup/Shutdown) and double-click on Startup. This is where we’ll set the script to run on boot.

Once the new window pops up, click Show Files to open the GPO’s directory and copy the VBScript (randomize-local-admin.vbs) inside. Make sure it has the right extension, or Windows won’t recognize it as a VBScript.

Now add the script to the Group Policy Object by clicking Add and selecting the script.

3. Create the WMI Filter (optional)

The intent of this script is to randomize local Administrator accounts on desktops and member servers, but domain controllers don’t have local accounts. So as to not randomize the built-in domain Administrator account, we’ll need to exclude DCs, either via the Organizational Units (OUs) we target or by WMI Filters. If your Active Directory OU structure isn’t built with separate areas for Domain Controllers, or you want to link the entire domain, we can use a WMI Filter to exclude all machines classified as DCs.

Under WMI Filters, right-click and click New. Write a name and description (“Exclude Domain Controllers” seems reasonable) and then click Add. You’ll need the following WMI query:

select * from Win32_OperatingSystem where ProductType <> "2"

4. Link the GPO to the proper OU’s

Now that we’ve created the GPO and its WMI Filter, we can link it to an Organizational Unit. First, though, you’ll need to associate the WMI filter if you created one. After clicking on the Group Policy Object, select the WMI filter from the lower side of the right pane, under WMI Filtering.

Right-click an existing OU which has systems you’d like to target and click “Link an Existing GPO…”. Select the GPO you just created, and it will take effect on the next reboot on all Computer objects in that OU. You can select other OU’s in the same way.

For an acceptable level of overkill, this script creates 120-character passwords drawn from the full 8-bit character range (1-255), well beyond printable ASCII.

Again, if you have any issues feel free to leave a comment or send me an email.

Creating a Network Boot Menu including Kon-Boot to Bypass Local Authentication

In a former post I referred to Kon-Boot, but didn’t go into much detail. Here, I’ll expand on my use of Kon-Boot and how to set it up for your own network. Specifically, to keep things as easy as possible I add it to my PXE menu, so that you’re simply a reboot and a few keystrokes away from logging on to an otherwise locked machine.

What is Kon-Boot?

Kon-Boot is a bootable shim which bypasses local account authentication on Windows machines. In other words, by booting into Kon-Boot (via floppy, CD, or over the network via PXE) you can bypass local passwords, such as for the built-in Administrator account.

As an aside, note that it cannot bypass domain authentication. This makes sense, as domain-joined machines contact domain controllers for domain user accounts, unless the client machine is offline and has cached credentials.

Kon-Boot is the reason I randomize the local Administrator password without disabling the account. For whatever reason, the author of Kon-Boot didn’t bypass enabled/disabled checks on local accounts, just their passwords.

Creating a PXE Boot Server

I’ll assume you already have a DHCP server on your network. The FOG Project, which I heartily recommend for anyone looking for a free Ghost replacement, has a wonderful page on setting DHCP options for many types of DHCP servers. The same options will apply for us — option 066 (“Next Server”) will be the IP address of the PXE server, and option 067 (“Boot File”) will be “pxelinux.0”. In fact, if FOG itself is interesting, they have great setup guides that will get a FOG menu installed on Ubuntu, to which you can simply add Kon-Boot as an additional option.

Next, you’ll need a trivial file transfer protocol (TFTP) server. For this tutorial, I’ll go through the setup of tftpd-hpa and syslinux on Ubuntu 12.04 Server. I’d recommend making this machine a VM, as the resources required are laughably low. I won’t, however, go into the installation of Ubuntu Server, as there are plenty of fantastic guides for that piece already. If you use the previous link, keep track of the IP address you statically assigned, or make a static DHCP reservation for the PXE server.

Once you’re logged in, there are a few pieces of software to install.

apt-get install tftpd-hpa syslinux vim-nox openssh-server

This will install the TFTP server, syslinux (think of it as GRUB for PXE), a minimal version of vim to edit text files, and an SSH server. We’ll also need to create the directory structure and copy over some of the syslinux files to the TFTP root directory.

mkdir /var/lib/tftpboot/pxelinux.cfg

cp /usr/lib/syslinux/memdisk /var/lib/tftpboot # used to boot a floppy image

cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot # the PXE boot file

cp /usr/lib/syslinux/vesamenu.c32 /var/lib/tftpboot # for the PXE menu

You’ll also have a choice between setting a root password and prefixing a fair number of commands with sudo. For the purposes of this post, I’ll recommend setting a strong password for root (as an example, this password generator site is xkcd-approved!).

sudo passwd # Enter your user password, then enter a strong password for root, twice

Next, we’ll install the free version of Kon-Boot. Earlier this year, the author came out with a new paid version, but maintained the original version for free download. The major restriction is that the free version (v1.1) doesn’t support 64-bit Windows systems, while the paid version (v2.0) does. Go to the Kon-Boot download page, download from the first mirror, and save the zip locally. Inside you’ll find a password-protected archive (the password is “kon-boot”), and in that you’ll find FD0-konboot-v1.1-2in1.img. Save it to another location locally, such as your desktop.

If you don’t already have it installed, you’ll also need a program to copy the Kon-Boot floppy image over to the PXE server. I recommend installing WinSCP (skip the “sponsored” version if you prefer) for those on a Windows platform.

Use your PXE server’s address instead

Copy the FD0-konboot-v1.1-2in1.img file to the /var/lib/tftpboot directory.

Now you’ll need to write the pxelinux configuration file at /var/lib/tftpboot/pxelinux.cfg/default. You can either paste the heredoc below into a root shell, or use a text editor like vim or nano to write the same contents:

cat > /var/lib/tftpboot/pxelinux.cfg/default <<'EOF'
prompt 0
DEFAULT vesamenu.c32
timeout 50

label local
MENU LABEL Boot from hard disk
localboot 0
TEXT HELP
Boot from the local hard drive.
If you are unsure, select this option.
ENDTEXT

label Kon-Boot
MENU LABEL Kon-Boot Floppy Image
kernel memdisk
append initrd=FD0-konboot-v1.1-2in1.img
TEXT HELP
Kon-Boot will bypass local authentication for
local administrative purposes.
ENDTEXT
EOF

If you’ve set up your DHCP server options as mentioned before, you should now be able to boot a client to the network and see the menu we’ve created. You may have to press a specific key during boot or change your BIOS options. For VMware virtual machines, pressing F12 during boot attempts a PXE boot.

Booting into Kon-Boot will take a few moments showing a fancy startup image, and then boot into Windows.

Kon-Boot Startup

Screenshot from paid version (v2.0)

Now that you’re inside Windows, you can log in as the local Administrator account (or any other local account) without worrying about the password. You can leave the password field blank, or type in any particular password you want. It makes no difference, as the password check is more-or-less replaced with “return true”. Whereas before you’d see this:

Failed Login

Now you’ll be able to log on and take any actions you wish.

Successful Login

What do you think? As I don’t have need for remote access to computers without using domain privileges, I see this as a useful way to log in to infected or suspicious machines locally. In terms of return on investment, I’ve been extremely happy with further customizing my PXE boot menu with SpinRite (drive recovery), Recovery is Possible (live Linux over PXE!), and even Memtest86+.

What to do with the local Administrator account?

After my last post with the local Administrator randomization script got some attention from John Strand (@strandjs), Tim Medin (@timmedin), and Tim Tomes (@LaNMaSteR53), I realized the issue of what to do with local Administrator accounts was considerably more complicated. Here’s my attempt to map out the different possibilities.

As background, be sure you know that resetting a local Administrator (or bypassing it entirely) is trivial given physical access to a machine (via NT Password Reset, Kon-Boot, or similar utilities). Offline registry attacks assume there’s no drive encryption in place. The two types of attackers I’m assuming below are external (as in a penetration tester or malicious hacker) and internal, as in employees or contractors. I assume the external attackers don’t have physical access to each machine in the domain, as that would defeat any of the recommendations below.

In order of least to most secure, here are the basic options I see:

  1. Standardized local Administrator password on all domain-joined machines. While convenient, this is a horrible idea security-wise. Brief physical access to any domain-joined machine to acquire the hashes gives Administrator access to all machines in the domain via passing the hash. Though this is exactly what I’m trying to combat, it’s a likely situation in many shops. Some will mitigate this slightly by having a different desktop password than the server login, but that doesn’t help for long. At some point, a Domain Administrator will be logged on to a desktop, and stealing his/her token will result in Domain Admin privileges. Please, please don’t choose this option.
  2. Standardized on a local Administrator password, but also set SeDenyNetworkLogonRight on that account. Make sure to still set a reasonable length (15+ characters) password on that account. In this situation, any employee who was told this password (say, a remote laptop user) could then abuse that privilege on any machine he/she can get physical access to until the password is changed. I’ve been to many places where the local Administrator “break-glass” password is fairly common knowledge among end users. I urge you to not choose this option, either.
  3. Individualized (not randomized) local Administrator passwords. Set SeDenyNetworkLogonRight for the local Administrator as in #2, but also include individual passwords for each machine. This prevents a single password having rights throughout the domain, but adds complexity. I don’t know of any shops going this route, but it seems reasonable.
  4. Randomized local Administrator passwords. Setting SeDenyNetworkLogonRight should be unnecessary, but is a harmless additional precaution. This was the focus of my previous blog article. In order to use the account at this point, you’ll need to either reset the password using NT Password Reset (or similar), or bypass it entirely using Kon-Boot. Do note that this is the same method an attacker would use to get local Administrator privileges. Disabling the account is optional, but precludes Kon-Boot (which bypasses the password, but respects whether or not an account is enabled). The biggest downside I’ve experienced thus far is the lack of a way to help a remote (i.e., not domain-connected) user who needs administrative access to their machine.

Do you need to access machines remotely using the local Administrator account? Since we’re discussing domain-joined Windows machines, the use cases for this requirement are small. The two situations I see this need are machines that aren’t connected to the domain (assuming Domain Admin credentials aren’t cached, which is a topic for a different blog post), and accessing machines that are infected or untrusted (as the domain token could be stolen and re-used elsewhere).

If so, be careful! This is where most people get into trouble. Standardizing on a local Administrator password means that anyone with brief physical access to any one of those machines can use those same credentials (without cracking the password hash, even) on all machines in the network. If this is both scary and new information, look up “Pass the Hash” attacks.

If you’re still sure you need remote access to machines using the local Administrator accounts, you’ll need to ensure the same password isn’t used across all machines in your domain. In other words, we’ll need to make those passwords individualized to each machine. These individualized passwords can be created from the output of a hashing algorithm fed some unique identifier of the machine together with a secret padding. If we can guarantee the padding isn’t disclosed, we’ll have a fairly secure password. The output of the SHA-1 algorithm is 160 bits, expressed as 40 hexadecimal characters. Plugging that output into Jason Fossen’s password complexity spreadsheet estimates 1,157,804,805,602.22 years to crack a password of that length by brute force, assuming 1,000 machines each generating 200,000 hashes per second. Needless to say, an attacker would be far better off trying to discover the method used to create the passwords (including the secret padding) than attacking each password individually.
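The derivation described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the exact script in question: it assumes the machine’s hostname is the unique identifier, and the `SECRET_PADDING` value is a placeholder you’d replace with your own long random secret (and keep off the client machines).

```python
# Sketch: derive a per-machine local Administrator password by hashing
# the machine's unique identifier together with a secret padding.
import hashlib

SECRET_PADDING = "replace-with-a-long-random-secret"  # never disclose this


def admin_password(machine_name: str) -> str:
    """Return a 40-hex-character password derived from the machine name."""
    digest = hashlib.sha1((machine_name + SECRET_PADDING).encode("utf-8"))
    return digest.hexdigest()  # 160 bits of SHA-1 output -> 40 hex characters


# The same machine name always yields the same password, so the trusted
# computer can re-derive any machine's password on demand; different
# machine names yield unrelated passwords.
print(admin_password("WORKSTATION-042"))
```

Because the password is re-derivable from the hostname plus the secret, nothing needs to be stored per machine; compromise of one password reveals nothing about the others as long as the padding stays secret.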

The biggest concern with individualized passwords is setting them. My otherwise-preferred approach, Group Policy, won’t work here, since the secret padding would have to appear in the script itself. Accordingly, we’ll need to push the individualized passwords out from a trusted computer, setting the password on each remote machine. Unless I get requests otherwise, I’ll leave the actual individualized password script as an exercise for the reader.
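As a starting point for that exercise, here’s a hedged sketch of the push from a trusted admin host. It assumes the built-in `net user` command run remotely through `winrs` (Windows Remote Shell); the machine names and the `SECRET_PADDING` value are placeholders, and the actual remote execution line is left commented out so the derivation can be inspected safely first.

```python
# Sketch: from a trusted computer, derive each machine's individualized
# local Administrator password and build the command that would set it
# remotely via winrs + net user. Illustrative only.
import hashlib

SECRET_PADDING = "replace-with-a-long-random-secret"  # keep off client machines


def admin_password(machine_name: str) -> str:
    """Derive the 40-hex-character password for one machine."""
    return hashlib.sha1((machine_name + SECRET_PADDING).encode("utf-8")).hexdigest()


def push_command(machine_name: str) -> list[str]:
    """Build the winrs invocation that resets the local Administrator password."""
    return ["winrs", f"-r:{machine_name}",
            "net", "user", "Administrator", admin_password(machine_name)]


for machine in ["WS-001", "WS-002"]:  # placeholder machine list
    cmd = push_command(machine)
    # subprocess.run(cmd, check=True)  # uncomment on the trusted admin host
    print(" ".join(cmd))
```

In practice you’d pull the machine list from Active Directory rather than hard-coding it, and run the loop only from the one trusted host that knows the padding.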

Delicious RSS Feed

I’ve recently started putting my bookmarks into Delicious. I have categories for Systems Administration, Security (mostly pen-testing), and Forensics.

This should obviate the need to put up “look, a shiny new link!” posts, which has otherwise been quite tempting.

Here are a few examples of bookmarks to look at:

Solving the NIST Hacking Case with RegRipper

Configuring a Power Plan with Group Policy Preferences

10 Immutable Laws of Security

If you’re interested, you can add my feed to your RSS reader of choice.

Review of Top-Down Network Design Posted

Amazon has just posted my five-star review of Top-Down Network Design (3rd Edition).

From the review:

Whereas most other networking books focus on one technology or one aspect of network design, Oppenheimer really does guide the reader through designing a network in a top-down (gathering requirements to documentation) fashion. Overall, the book takes you from a 30,000 foot view to about a 2,000 foot view. Despite Oppenheimer’s Cisco-focused background and being published by Cisco Press, her book admirably avoids plugging Cisco as the end-all-be-all solution. Overall, I would recommend this book for all parts of network design, and recommend others for the actual IOS/device configuration (which Top-Down Network Design avoids).

As others have identified, the book is divided into four sections. The section titles are descriptive enough, so I’ll just point out the highlights by chapter instead.

The first chapter covers an introduction to network design, including analyzing business goals and constraints. It emphasizes the need for the network design to make “business sense,” that is, to justify its cost to the business. Keeping the design within the customer’s budgetary and staffing constraints is also important; after all, what good is a highly reliable, highly complex network to a CCENT-level network engineer who can’t make necessary modifications without taking down the network?

The fourth chapter relates to the network traffic of the existing network. By categorizing users into “user communities” based on job role (usage of sets of applications) and categorizing the traffic flows themselves, one can describe the network flow at a high level while still providing useful data for technical purchase decisions. The traffic flows are categorized into groups such as peer-to-peer, client/server, server-to-server, terminal/host, and distributed computing. The traffic is also broken down by frame size, broadcast/multicast/unicast type, and error rate to give useful data. QoS requirements for voice and other sensitive applications are also discussed, as are different service categories based on the Asynchronous Transfer Mode Forum definitions.

I appreciate this model, as without these abstractions it’s tough to talk about network flow at a high level without losing specificity.

In chapter six, there’s a sizable section on naming schemes, especially in the Windows world. The “Guidelines for Assigning Names” section is full of solid advice, though the section on WINS can probably be safely removed in the next edition.

Chapter seven is mostly focused on the actual switching and routing protocols, but also covers the creation of decision trees to assist with protocol selection. Table 7-5 on page 230 is a *very* handy summary of the routing protocols covered in the text.

In the last chapter, number 14, Oppenheimer gives a summary of top-down network design by going through the steps of the design methodology, which I found very useful. She also highlights the importance of the network design document, details how to respond to client RFPs, and then goes over the sections of the network design document in detail.

I found this to be an extremely useful book for its intended purpose. Top-Down Network Design will be a text I refer to for any network design needs I encounter in the future.

Full disclosure: I do know Priscilla Oppenheimer. I have taken several classes from her in networking and network forensics.