Malicious actors are constantly finding new ways to deliver malicious payloads. With the recent migration of businesses to web application-based services, file storage, email, calendar, and other channels have become valuable means of delivering malicious code and payloads. In some instances, these services are even abused as Command and Control infrastructure, since many enterprises trust them by default.
The Splunk Threat Research Team (STRT) recently observed a phishing campaign that used GSuite Drive file sharing as a phishing vector. The malicious actors created a document, shared it with many accounts, and gave it a description mentioning a well-known financial institution. The document also attempted to trick victims by including the official branding of a prominent financial institution and a known cryptocurrency organization.
Moreover, the document had a malicious link embedded that took visitors to a malicious website. This payload, delivered via file sharing, illustrates how malicious actors are using web-based services to deliver malicious payloads.
These types of attacks are challenging to detect even after placing source-domain restrictions, as it is practically impossible to block every possible malicious domain from sharing files. In addition, when using GSuite to deliver payloads, malicious actors often avoid sending email notifications. This way, when users find the file in their Google Drive, they are likely to believe it was shared by someone they know and click on the malicious file.
Malicious actors can also combine multiple delivery vectors to lure victims into viewing files, as in the following screenshot.
Many of the staging servers for these malicious payloads fingerprint the visitor's operating system from the user agent, and as a result can quickly switch the type of payload served according to the victim's platform.
Most of these initially delivered documents reach the victim directly. For an attack to succeed, victims must input sensitive information such as passwords, tokens, or codes sent to them by multi-factor authentication applications. This information is usually harvested via a phishing site built with popular phishing tools like evilginx2 or King Phisher.
The above information shows that detecting these phishing attacks is challenging even when imposing restrictions on source domains, service providers, document types, or message content. Defending against this attack vector requires a logging infrastructure configured to ingest GSuite logs and, specifically, to look at the following elements:
The above fields can reveal, for example, what type of access a document has (public_in_the_domain or public_on_the_web) and may expose a large number of targeted users. Attackers can make detection even more challenging by targeting specific users instead of sharing documents with many users, in order to avoid making noise. Visibility alone is not a definitive indicator; however, it can help when investigating this attack vector.
Two more interesting fields that can provide important information are owner and target_user.
A malicious actor can obfuscate the origin of communication by creating a generic GSuite account to send emails or share documents. This is why a significant number of file shares or calendar invites, especially ones containing phishing-related terms, may indicate malicious intent.
Here is an example of a search that investigates when a Google Drive document was shared, who shared it, and what was shared within the company's domain.
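A sketch of such a search follows. The sourcetype (`gsuite:drive:json`), the event name `change_user_access`, and the domain `example.com` are assumptions; adjust them to how GSuite Drive logs are ingested in your environment.

```spl
sourcetype="gsuite:drive:json" name=change_user_access parameters.owner="*@example.com"
| rename parameters.* as *
| table _time, owner, target_user, doc_title, doc_type, visibility
```

The `rename` simply flattens the `parameters.*` fields so the table is easier to read during an investigation.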
We can refine this search by looking at parameters.target_user (the users with whom the document was shared), the type of document (parameters.doc_type), or the title of the document (parameters.doc_title). In many observed campaigns, bad actors share these malicious documents in significant numbers, so we can search for the above parameters and add a threshold for the number of users with whom such a document is shared.
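One possible refinement, sketched below, counts distinct recipients per owner and flags unusually broad shares. The threshold of 50 recipients and the sourcetype are assumptions to tune per environment.

```spl
sourcetype="gsuite:drive:json" name=change_user_access
| stats dc(parameters.target_user) as num_targets
        values(parameters.doc_title) as doc_title
        values(parameters.doc_type) as doc_type
        by parameters.owner
| where num_targets > 50
```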
The above search can be refined to target specific users.
In the following search, we can determine whether a specific user is sharing documents with a single person or with multiple people.
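A sketch of a per-user version of the search; the account name is a placeholder for the user under investigation.

```spl
sourcetype="gsuite:drive:json" name=change_user_access parameters.owner="suspect.user@example.com"
| stats dc(parameters.target_user) as num_recipients
        values(parameters.target_user) as recipients
        by parameters.doc_title, parameters.doc_type
```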
We can also address another GSuite phishing use case: phishing via rogue calendar invite. In this scenario, the attacker sends multiple invitations via Google Calendar, usually with a lure in the event title or even a document attachment, and typically sends the calendar invites out in large quantities.
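A hunting search along these lines might look like the sketch below. The calendar sourcetype, the `parameters.organizer_calendar_id` field, and the invite threshold of 25 are assumptions to verify against your ingested logs.

```spl
sourcetype="gsuite:calendar:json" name IN (create_event, add_event_guest)
| rex field=parameters.organizer_calendar_id "[^@]+@(?<src_domain>.+)"
| where NOT src_domain="example.com"
| stats dc(parameters.target_calendar_id) as num_invitees
        values(parameters.event_title) as event_title
        by parameters.organizer_calendar_id
| where num_invitees > 25
```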
The above is a clear example of how we can gain visibility into outside-domain invitations directed at specific users, using the number of parameters.target_calendar_id values, the event title (parameters.event_title), the number of invitations, and fields such as "name", which contains events like add_event_guest or create_event.
It is important to consider that these searches would also help in a case of spear or targeted phishing; however, further analysis and compensating detections would be required to narrow the search down to the source of the attack.
It is important to understand that many of these searches have to be adjusted to each specific environment and to the findings behind a detection or hunt policy, which can be customized per timeframe, subdomain, or organizational unit.
Below are other possible phishing anomaly detections that we've developed during our research on the common techniques used by attackers for spear phishing attacks in GSuite.
GSuite Email with Suspicious Attachment
This anomaly use case also takes advantage of the GSuite logs for Gmail to catch possibly suspicious executable files (.exe, .dll, .com, etc.) and scripts (.py, .pl, .js, .vbs, etc.) delivered as attachments. Sometimes, attackers use a double file extension technique like "tax return.pdf.exe" to lure users into clicking on attachments.
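A sketch of this detection; the Gmail sourcetype and the `attachment{}.file_extension_type` field name are assumptions based on typical GSuite Gmail log schemas, and the extension list should be tuned per environment.

```spl
sourcetype="gsuite:gmail" "attachment{}.file_extension_type" IN ("exe","dll","com","bat","py","pl","js","vbs","ps1")
| stats count values("attachment{}.file_extension_type") as attachment_exts
        values(subject) as subject
        by source.address, "destination{}.address"
```

A companion search could apply `regex` to the attachment file name to specifically catch double extensions such as `.pdf.exe`.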
It is a good practice to archive this type of file before sending it to the intended email account; this avoids accidental execution and helps lessen the noise of this use case.
GSuite Email with Suspicious Subject or Shared File Name
In this detection we look at common social engineering vectors used in spear phishing attacks, such as the subject of a message serving as a lure for victims. Malicious actors craft message subjects with content that may drive the victim to open attachments (doc, xls, ppt, zip, rar, etc.). Common phishing techniques include messages that present urgency (must act now), trust (an entity known or reputable to the recipient), or authority (government, law enforcement).
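A sketch pairing subject-line keywords with document-type attachments; the keyword list, sourcetype, and field names are illustrative assumptions to adapt to observed lures.

```spl
sourcetype="gsuite:gmail" subject IN ("*invoice*","*payment*","*urgent*","*delivery*","*order*","*confirmation*")
  "attachment{}.file_extension_type" IN ("doc","docx","xls","xlsx","ppt","pptx","zip","rar")
| table _time, source.address, "destination{}.address", subject
```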
In the screenshot above, we can see a simulated event where the external email uses the common social engineering subject "delivery order email confirmation" to lure the user into opening the attachment.
Another variation of this technique is to share a malicious document in Google Drive with a filename tied to the same social engineering approach, luring the user into opening it and executing malicious code.
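The following is a sketch of that detection; the internal domain and the "fedex" keyword are placeholders for your own organization and the lure terms you observe.

```spl
sourcetype="gsuite:drive:json" name=change_user_access parameters.doc_title="*fedex*"
| rex field=parameters.owner "[^@]+@(?<src_domain>.+)"
| where NOT src_domain="example.com"
| table _time, parameters.owner, parameters.target_user, parameters.doc_title, parameters.doc_type
```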
In the search above, you can see how to detect a suspicious file share from an external account to a targeted organization email account, in this case related to a fake "fedex delivery". Notice how GSuite logs give us valuable information through the doc_type of the attachment, the source email, and the target_user.
GSuite Email With Known Abuse Web Service Link
In this use case, we search for external Gmail accounts as the source of communications targeting the organization's email accounts. We also look for the presence of phishing-related items, such as links to web services like Pastebin, Discord, or Telegram that malware actors commonly use to deliver malicious payloads. This search may generate some noise depending on the allow-listing policy of the targeted organization.
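A sketch of this use case; the `link_domain{}` field name and the service list are assumptions to validate against your Gmail logs before use.

```spl
sourcetype="gsuite:gmail" "link_domain{}" IN ("*pastebin.com*","*discord*","*telegram*","*t.me*")
| rex field=source.address "[^@]+@(?<src_domain>.+)"
| where src_domain="gmail.com"
| stats count values("link_domain{}") as link_domains
        values(subject) as subject
        by source.address, "destination{}.address"
```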
Detections & D3fend
The following detections address the above scenarios, using GSuite file sharing, GSuite calendar invitations, and Gmail messages and attachments as phishing delivery vectors. These detections are part of a new Analytic Story called DevSecOps.
We would like to thank the following for their contributions to this post.