Background

A Microsoft Defender incident has been triggered by an analytics rule matching known IOCs tied to the Scattered Spider threat group. You’ll take on the role of a security analyst tasked with investigating the incident from initial detection to post-compromise activity.

Objectives

- Investigate a security incident detected by Microsoft Defender
- Identify malicious behavior through analysis of the different log sources
- Track actor movement and actions in the environment

Question 1

How many users received the internal phishing lure related to the “bonus” theme?
This one was quick. Knowing we already have the email message logs, I just built the following trivial query to find the answer.

image
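The query in the screenshot was along these lines. This is a sketch against the Advanced Hunting EmailEvents table; the exact subject filter is an assumption based on the "bonus" theme mentioned in the question.

```kql
// Count distinct recipients of the internal "bonus" lure
EmailEvents
| where Subject contains "bonus"
| summarize Recipients = dcount(RecipientEmailAddress)
```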

Question 2

We now know the threat actor shared a document via SharePoint. What is the full object ID (URL) of the file that was shared?
Checking the audit data sources available, the answer is likely in one of these two.

image

Given that a document was shared, it had to be uploaded first, so this event should help us figure out which file it was.

image

Checking the uploaded files, we can observe that there is only one related to the “Bonus” lead given in the first question.

image
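A sketch of the upload hunt, assuming the unified audit log lands in a Sentinel-style OfficeActivity table; the "Bonus" filename filter is an assumption from the lead in Question 1.

```kql
// Find SharePoint uploads matching the "Bonus" lure
OfficeActivity
| where OfficeWorkload == "SharePoint"
| where Operation == "FileUploaded"
| where SourceFileName contains "Bonus"
| project TimeGenerated, UserId, OfficeObjectId
```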

Question 3

What was the IP address used by the threat actor while performing activities on SharePoint?
Adding the SharePoint file found in the previous question as a filter and extracting the ClientIP of that action, we were able to find the source IP of the threat actor.

image
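Roughly, the pivot looks like this (a sketch; substitute the full object ID recovered in Question 2 for the placeholder filter):

```kql
// Pivot from the shared file to the actor's client IP
OfficeActivity
| where OfficeObjectId contains "Bonus"   // placeholder for the file URL from Question 2
| distinct ClientIP, Operation, UserId
```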

That IP is tagged as a VPN and associated with malware by VirusTotal.

image

Question 4

What is the Object ID of the file accessed by the threat actor using the ‘154.47.30.133’ IP address? (Note: we are not looking for .jpg or image files)

Filtering by the FileAccessed SharePoint Operation value, the known IP, and excluding .jpg files, we are able to get the file the TA accessed.

image
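The filters described above can be combined like so (a sketch using the same assumed OfficeActivity schema):

```kql
// File accesses from the actor's IP, excluding images
OfficeActivity
| where OfficeWorkload == "SharePoint"
| where Operation == "FileAccessed"
| where ClientIP == "154.47.30.133"
| where OfficeObjectId !endswith ".jpg"
| project TimeGenerated, UserId, OfficeObjectId
```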

Question 5

An existing inbox rule was modified — what is the name of this inbox rule?

From experience I know that modified inbox rules have the Operation value Set-InboxRule. Leveraging that filter plus the UserId of the compromised user, I was able to get the name of the inbox rule.

image
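A sketch of that hunt; the UPN is a placeholder, and the rule name typically has to be pulled out of the Parameters payload of the event.

```kql
// Modified inbox rules for the compromised user
OfficeActivity
| where Operation == "Set-InboxRule"
| where UserId =~ "isabella@<tenant-domain>"   // placeholder UPN
| project TimeGenerated, ClientIP, Parameters   // rule name is inside Parameters
```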

Question 6

We asked Isabella to share a screenshot of all her mailbox rules. Notice another interesting rule in the list… From what IP address was this rule created?

If you look at the value of the Name field, the rule is named with just two dots, which has been one of the most common (if not the most common) rule names in BEC cases; I have seen it used in real BEC compromises myself. Also notice that it tries to delete the emails coming from any account on the domain acme-suite.com. This is not just a shot in the dark by the TA: it is likely that an account from that domain was used to compromise this user, and the TA wants to prevent the user from reading any email from it in case the partner tries to notify them of the compromise.

image

Expanding further we can find the IP that created the rule.

image
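Since this rule was newly created rather than modified, the corresponding event should be New-InboxRule, with the source IP on the same record. A sketch, with the same placeholder UPN as before:

```kql
// Newly created inbox rules for the compromised user, with creating IP
OfficeActivity
| where Operation == "New-InboxRule"
| where UserId =~ "isabella@<tenant-domain>"   // placeholder UPN
| project TimeGenerated, ClientIP, Parameters
```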

Question 7

What is the domain targeted in this rule, from which all emails are being deleted?

From the explanation in the previous answer, the domain is acme-suite.com

Question 8

Interesting, all emails from this domain are being removed. According to the IT administrator, this is a company we’ve done business with in the past. Can you identify the email address associated with this domain?

As it was a company they had done business with in the past, I started searching for any value in the RecipientAddress field containing the domain, but I was unable to find anything. Then I tried my luck with the SenderAddress field and was successful.

image
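The successful search boils down to something like this (a sketch; EmailEvents exposes the sender as SenderFromAddress in Advanced Hunting, which I'm assuming corresponds to the SenderAddress field used here):

```kql
// Any sender from the partner domain
EmailEvents
| where SenderFromAddress endswith "@acme-suite.com"
| distinct SenderFromAddress
```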

Question 9

Isabella received a phishing email from the partner domain, which led to her account compromise. How many other users received the same original phishing email that Isabella received?

Checking the timestamps, Isabella sent the malicious emails on 04/07/2026 at 10:50:06.232 PM UTC, so I filtered for the emails she received that day that aren’t internal. Given that question 8 mentions acme-suite.com is the domain of a company they have worked with before, and that the TA created an inbox rule deleting emails from any address on that domain (to stop the partner from notifying them that the account had been hacked), we can assume that invoices.platform@acme-suite.com is the account that sent the email that compromised Isabella.

image

Searching for other emails sent from that account, we can observe there were indeed a couple more sent.

image
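That search can be sketched as follows, listing every delivery from the suspected partner account so the other recipients can be counted:

```kql
// All messages from the suspected compromised partner account
EmailEvents
| where SenderFromAddress =~ "invoices.platform@acme-suite.com"
| project Timestamp, Subject, RecipientEmailAddress
```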

Question 10

Isabella remembers receiving a phishing email and says she opened the link and filled in her details, thinking it was needed to secure her Microsoft 365 account. Now that we know how the threat actor gained access and some of their actions, the remaining question is: was any data stolen? How many unique emails were accessed by the threat actor across all folders?

Sadly, for this question I had to leverage the hints, which provided this query and its result. Given my limited experience with KQL, and the fact that it was already midnight when I reached this question, I gave up:

image
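For reference, the idea behind such a query is to expand the MailItemsAccessed audit records, whose Folders payload nests the accessed items, and count distinct message IDs. A sketch only, assuming a Folders column holding the standard FolderItems/InternetMessageId structure of that audit event:

```kql
// Count unique emails touched by MailItemsAccessed events
OfficeActivity
| where Operation == "MailItemsAccessed"
| where UserId =~ "isabella@<tenant-domain>"          // placeholder UPN
| mv-expand Folder = parse_json(Folders)              // one row per accessed folder
| mv-expand Item = Folder.FolderItems                 // one row per accessed item
| summarize UniqueEmails = dcount(tostring(Item.InternetMessageId))
```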

Question 11

Unfortunately, this wasn’t the only way the threat actor accessed emails. They registered a well-known application used for data exfiltration. What is the name of this application?

The question makes it sound as if the attacker registered a malicious OAuth application (T1671). Data exfiltration could be performed by creating an OAuth app with the Mail.Read permission, for example one posing as “Thunderbird”: it would allow the TA to receive the emails of any user that granted consent.

Checking all the Operation values available in the AzureActiveDirectory workload, we can observe that the Consent to application event could tell us something.

image

Going through the log, we got no application name, but we got a ServicePrincipal ID, which could help us get some extra information on the OAuth app.

image

💡 A ServicePrincipal object is created when an OAuth app is registered, because the app needs to be represented by a security principal in order to access resources within the tenant.

Filtering by the Add service principal. events and by the ServicePrincipal ID, we got only one log, from which we were able to extract the display name of the malicious OAuth app.

image
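The pivot from consent event to app name can be sketched like this, assuming the AAD audit events sit in the same OfficeActivity-style table and that the ServicePrincipal ID appears in the event payload (the ID itself is a placeholder for the value pulled from the consent log):

```kql
// Correlate the consent event with the service principal creation
OfficeActivity
| where OfficeWorkload == "AzureActiveDirectory"
| where Operation in ("Consent to application.", "Add service principal.")
| where ExtendedProperties has "<ServicePrincipalId>"  // placeholder ID from the consent event
| project TimeGenerated, Operation, UserId, ExtendedProperties
```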

This was not asked, but we can also get the scope of permissions granted to the OAuth app. As we can observe, there are several of them, including the ability to modify mail and mailbox items, plus access to calendar, contacts, and user data. Notice the presence of offline_access, which allows the OAuth app to obtain a refresh token for persistent access, rather than only an access token that is valid for around an hour.

image