By Shannon Davis December 06, 2021

If you haven't already read the episode on process hunting, I recommend that you go back and do so, at least for a couple of my jokes, and to help keep our clicks/metrics up. Where that episode concentrated on tracking processes, this blog will concentrate on, you guessed it, pipes. And due to the depth I tried to go with this one, it has been split into a two-part series, so make sure to come back for the second part after you've finished this one.

Most people have a good grasp of processes, but not as many understand pipes (I put my hand up here too). There are many good primers out there; this one from VerSprite is quite good in my opinion and includes links to some non-security pipe primers. Pipes are a form of inter-process communication (IPC), which can be abused just like processes can. Lucky for us, we can ingest a lot of pipe-related data into Splunk, both from endpoints and from network wire data. As in the process hunting blog, I will only look at pipes from a Windows perspective, but in the future, I will try to revisit this for other operating systems. Before we get into the actual tracking activities, I'd like to cover how you can capture pipe data in your environment.

Prerequisites

The two Splunk add-ons I'm using, on top of the Windows Universal Forwarder, to capture pipe events on Windows endpoints are:

  • Splunk Add-on for Microsoft Sysmon
  • Splunk Add-on for Microsoft Windows

To capture pipe activity within wire data, I'd recommend using either Splunk Stream or Zeek/Corelight. For my examples here I am using Zeek data, as that also provides protocol decodes for DCE/RPC events. I have a Zeek sensor capturing data (for this exercise I am mainly concerned with SMB and DCE/RPC traffic) across my test environment via a SPAN port. I am using the following add-on for this in Splunk:

  • Splunk Add-on for Zeek

Capturing Pipe Events

For pipes, there are two event types to watch for on an endpoint: the creation of the pipes themselves, and the connection between client and server on the aforementioned pipe. Now don't get confused with the client and server terminology here. Quite often, a client and server in a pipe connection will reside on the same host. The client is just the process initiating the connection to the server process, which created the pipe.

To capture both the creation of pipes from endpoint data and the actual connections to them, we want to capture two different Sysmon event codes (I recommend using either SwiftOnSecurity or Olaf Hartong's Sysmon configs). These are:

  • Sysmon Event Code 17 (pipe creation)
  • Sysmon Event Code 18 (pipe connection)

One big difference between the two types of pipes (named and anonymous), is that named pipes can be used across the network, while anonymous pipes are constrained to a single host. Typically anonymous pipes are spawned by a parent named pipe process.
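
If you want to see how the two types show up in your own Sysmon data, here's a minimal sketch. I'm assuming your Sysmon events record anonymous pipes with a PipeName of <Anonymous Pipe>, which is how they appear in my lab; treat that value as something to verify against your own configuration.

index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| eval pipe_type=if(PipeName=="<Anonymous Pipe>", "anonymous", "named")
| stats count by pipe_type host process_name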

Named pipe network traffic uses SMB or RPC protocols. Wire data is right up there with endpoint data in my list of favorite data sources. If you aren't already capturing wire data, I'd ask your manager right now to release some funds to allow you to do so (the AI/ML-enabled next-gen firewall upgrade can wait a bit longer).
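
If you're already capturing Zeek data, one quick way to spot named pipe activity on the SMB side of the wire is to look for tree connects to IPC$ shares. This is only a rough sketch, assuming the Splunk Add-on for Zeek gives you a bro:smb_mapping:json sourcetype and the same src_ip/dest_ip field names used in the DCE/RPC search later in this post.

index=main sourcetype=bro:smb_mapping:json path="*IPC$*"
| stats count by path src_ip dest_ip share_type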

Named Pipe Creation and Connection

We can look for pipe creation and connection events in our environment using a simple search for EventCode 17 and EventCode 18 within Sysmon data.


index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| stats count by PipeName host EventCode EventDescription process_name

This search uses the stats command to return a count of events grouped by PipeName, then the host, then the event type (Event Code 17 or 18), a description of that event for easier reading, and finally the process name involved in that event. Searching on the process ID or process name that created a pipe can be powerful, but more on that later.

In my results, there are several crashpad-related pipe connections from chrome.exe, some other GoogleCrashServices pipe connections from GoogleUpdate.exe, and some TSVCPIPE creation and connection events from svchost.exe. On its own, that information isn't useful for us right now. In my lab environment, I have far fewer events returned than you would see in a production environment. The point is that just looking for pipe creation and connection events will leave you with a lot of data to sift through when you're hunting.
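
One way to cut that pile down is to carve out the pipes you expect to see everywhere before you start digging. This is just a sketch using the benign names from my lab (the crashpad, mojo, and GoogleCrashServices pipes mentioned above); your environment's baseline will differ, so treat the exclusion list as a starting point to tune rather than a definitive filter.

index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| search NOT PipeName IN ("*crashpad*", "*mojo*", "*GoogleCrashServices*")
| stats count by PipeName host EventCode EventDescription process_name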

Named Pipes over RPC

If we want to see named pipes in use over RPC from wire data in our environment, we can use the following search against Zeek data.


index=main sourcetype=bro:dce_rpc:json
| stats count by named_pipe src_ip dest_ip dest_port endpoint operation

Again, this search uses the stats command to provide a count of named pipes, but now we're looking at which source IPs, destination IPs, destination ports, endpoints, and operations are in use on these named pipes. The named pipes will most likely look different compared to the ones captured on the hosts themselves. Looking at the endpoints and operations in use here, we can pretty quickly deduce what's going on. We can see several connections with a named_pipe value of 135 on port 135 (this traffic is going straight over TCP, since we are looking at Zeek DCE/RPC events). These are connecting to the epmapper endpoint and running an operation of ept_map. For some fun bedtime reading, here's a link to the Endpoint Mapper Interface Definition.
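
Because endpoint mapper traffic tends to be everywhere and is rarely interesting on its own, a quick variation is to push it aside and see which other endpoints and operations remain. The exclusion here is only an assumption based on what showed up as routine noise in my lab, so adjust it to taste.

index=main sourcetype=bro:dce_rpc:json NOT (endpoint=epmapper operation=ept_map)
| stats count by named_pipe src_ip dest_ip dest_port endpoint operation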

You're probably saying something like, "Great! That's super useful too!" in a sarcastic tone. But I'm just trying to set the scene on how to search for named pipes in your environment. If you don't stick around, you won't get to see the exciting stuff later on.

Quick, Call a Plumber! We've Got Some Bad Pipes!

The Splunk Threat Research Team (STRT) created several great detections for surfacing malicious pipe activity. For this exercise, I'm using their Cobalt Strike Named Pipes detection to find Cobalt Strike using named pipes in my test environment. If you're not familiar with Cobalt Strike, here's a primer. Cobalt Strike uses predefined pipe names. If the bad guys stick to those names, it's quite easy to detect. Cobalt Strike also supports Malleable C2 profiles, which allow you to modify these names to try and avoid detection. Like a lot of lazy threat actors out there, I'm going to stick with the defaults, because it's easy.

I'm running a Windows 10 client, a Windows 2016 server, and a Windows 2016 Domain Controller in this test environment. I initially compromised the Windows 10 client, then moved laterally via PSExec to both 2016 servers, and used named pipes over SMB for host-to-host communication. All C2 traffic is proxied back through my Windows 10 client. I'm running the following detection from the link above:


`sysmon` EventID=17 OR EventID=18 PipeName IN (\msagent_*, \wkssvc*, \DserNamePipe*, \srvsvc_*, \mojo.*, \postex_*, \status_*, \MSSE-*, \spoolss_*, \win_svc*, \ntsvcs*, \winsock*, \UIA_PIPE*)
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, process_name, process_id, process_path, PipeName
| rename Computer as dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `cobalt_strike_named_pipes_filter`

There are several macros in this search that ship with the detections in the following apps and repo:

  • Splunk Enterprise Security Content Update
  • Splunk Security Essentials
  • Splunk Security Content

I'll run out of words in this blog if I explain the full search, but in essence, it looks for Sysmon event codes 17 and 18 (remember those from the start of the blog?) and then for specific pipe names that typically show up with Cobalt Strike. The stats command then calculates the number of occurrences across the various computers.

Ta-da! We've got Cobalt Strike! From the bottom, we see win-host-370.surge.local with 3 matched pipe names from the search, being run by 3 different processes. The first msagent_5c pipe being run by the System process is the SMB beacon used to communicate back to the win-client-425.surge.local host. The MSSE-6013-server pipe is the initial lateral movement pipe where PSExec was run. And the final msagent_5c pipe being run by the rundll32.exe process is another pipe where I ran a "show processes" command within Cobalt Strike.

Looking at the win-dc-483.surge.local host, we detect a few more pipes. Again, the pipe starting with MSSE is my initial lateral movement pipe; Cobalt Strike by default appends a random four-digit number after MSSE for these pipes. Then we see the same msagent_5c pipe in use for SMB communication (we could create other SMB listeners with different pipe names, but remember we're being lazy here). And then we see some postex_fc32 pipes in use. This was me running Mimikatz to dump credentials from the domain controller.

And at the top, we can see a couple of similar pipes used on win-client-425.surge.local. The postex_e472 pipe was first used for reconnaissance (I ran Cobalt Strike's net computers command to find the other hosts on the network) and used again for setting up the SMB beacons on the other hosts. Cobalt Strike appends a random four-digit string to each postex_ pipe name, just like the MSSE ones. And then we have the first pipe run on win-client-425.surge.local, which is MSSE-1630-server, being run by the lazy_beacon.exe process. This was the initial Cobalt Strike compromise in my environment.
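
If you want to lean on those default naming patterns a little harder, you can tighten the match with a regular expression on the random suffixes. This is only a sketch built from the defaults seen in this post (MSSE-####-server, plus postex_ followed by four hex-looking characters in my lab); if an operator has customised their Malleable C2 profile, it obviously won't fire.

`sysmon` EventID=17 OR EventID=18
| regex PipeName="(?i)(MSSE-\d{4}-server|postex_[0-9a-f]{4})$"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, process_name, PipeName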

Phew! Time for all of us to take a break. Go away and digest all of that, and come back for part 2 of this blog where things start to get a bit more tricky. Here's a trailer for a great new rom-com to help you pass the time.

In the meantime, happy hunting!

This blog post is part twenty-six of the "Hunting with Splunk: The Basics" series. Shannon Davis provided a blog on process hunting in part twenty-five, and will now provide a similar piece on pipes! - Ryan Kovar
