How I Was Able to Find Mass Leaked AWS S3 Buckets from JS Files

Tools used for the exploitation:

1. Subfinder (https://github.com/projectdiscovery/subfinder)

2. httpx (https://github.com/projectdiscovery/httpx)

3. gau by Corben (https://github.com/lc/gau)

4. waybackurls by tomnomnom (https://github.com/tomnomnom/waybackurls)


Story Behind the bug:


This is the write-up of a recent bug I found while doing recon on JS files: how I was able to find mass leaked AWS S3 buckets from a JS file.


Here it goes:

Suppose the target is example.com and everything is in scope, like this:


In-scope : *.example.com

To gather all the subdomains and their URLs from the internet archives, I used subfinder, waybackurls, and gau.


Command used:

subfinder -d example.com -silent | httpx | subjs

gau -subs example.com | grep '\.js$'

waybackurls example.com | grep '\.js$'


There is still a chance of missing a JS file, and to stay ahead of the game I did not want to miss any JS file for testing, so I ran all three approaches and saved the JS URLs from each one to its own file.


So the final command will look like this:

gau -subs example.com | grep '\.js$' >> vul1.txt

waybackurls example.com | grep '\.js$' >> vul2.txt

subfinder -d example.com -silent | httpx | subjs >> vul3.txt


Now collect all the JS URLs into one file and sort out the duplicates:


cat vul1.txt vul2.txt vul3.txt | sort -u >> unique_sub.txt
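
If you prefer a single pass, the three collectors above can be chained into one command. This is just a minimal sketch of the same steps already shown (same tools and flags), writing the deduplicated list straight to unique_sub.txt:

# one-shot version of the collection above: run all three sources, keep only .js URLs, dedupe
{ gau -subs example.com; waybackurls example.com; subfinder -d example.com -silent | httpx | subjs; } | grep '\.js$' | sort -u > unique_sub.txt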


Now the actual JS file recon starts:

JS files hold a huge amount of data, and filtering out information about the target by reading each file one after another is an impossible, time-consuming task. So I used my bash skills to extract data from each file: curl fetches each file and grep pulls out any leaked AWS S3 bucket referenced in it. The syntax I used is below.


cat unique_sub.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "[a-zA-Z0-9._-]+\.s3\.amazonaws\.com"' >> s3_bucket.txt

cat unique_sub.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "[a-zA-Z0-9._-]+\.s3\.us-east-2\.amazonaws\.com"' >> s3_bucket.txt

cat unique_sub.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "s3\.amazonaws\.com/[a-zA-Z0-9._-]+"' >> s3_bucket.txt

cat unique_sub.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "s3\.us-east-2\.amazonaws\.com/[a-zA-Z0-9._-]+"' >> s3_bucket.txt
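
The same extraction can also be written as a single loop that catches both the virtual-hosted style (bucket.s3.amazonaws.com) and the path style (s3.amazonaws.com/bucket) in one pass. This is a rough sketch, and the character classes are an assumption about what a bucket reference can contain:

# loop over every JS URL and pull out anything that looks like an S3 bucket reference
while read -r url; do
  curl -sk "$url" | grep -oE '([a-zA-Z0-9._-]+\.s3[a-z0-9.-]*\.amazonaws\.com|s3[a-z0-9.-]*\.amazonaws\.com/[a-zA-Z0-9._-]+)'
done < unique_sub.txt | sort -u > s3_bucket.txt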


Now I have collected all the AWS S3 bucket URLs and saved them to a file named s3_bucket.txt.

Once we have all the S3 bucket URLs, we collect the S3 bucket names. Below are the commands to extract the bucket names from the URLs, or you can clean them up manually.


cat s3_bucket.txt | sed 's/\.s3\.amazonaws\.com//' >> bucket_name.txt

cat s3_bucket.txt | sed 's/\.s3\.us-east-2\.amazonaws\.com//' >> bucket_name.txt
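
If you do not want to clean the list by hand, a single sed with two expressions can normalize both URL styles into bare bucket names and dedupe them. This is a hedged alternative to the two commands above, assuming s3_bucket.txt only contains the hostname/path strings extracted earlier:

# strip the S3 hostnames (any region) from both URL styles, leaving only bucket names
cat s3_bucket.txt \
  | sed -E 's#^(.*)\.s3[^/]*\.amazonaws\.com$#\1#; s#^s3[^/]*\.amazonaws\.com/(.*)$#\1#' \
  | sort -u > bucket_name.txt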


How I was able to find mass AWS S3 buckets with write and delete permissions:

Here is how I automated scanning the AWS S3 buckets for write and delete permissions.


I had already collected the S3 bucket names from the JS files and stored them in a file named "bucket_name.txt".


Now, using the AWS CLI, we automate the process:


cat bucket_name.txt | xargs -I% sh -c 'aws s3 cp test.txt s3://% 2>&1 | grep "upload" && echo "AWS S3 bucket takeover by CLI: %"'

cat bucket_name.txt | xargs -I% sh -c 'aws s3 rm s3://%/test.txt 2>&1 | grep "delete" && echo "AWS S3 bucket takeover by CLI: %"'
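
The same two checks can be wrapped in one loop so each bucket is tested for write and then cleaned up immediately, with hits logged to a file. This is a sketch rather than the original one-liners; writable_buckets.txt is just a name I picked, and test.txt is assumed to exist in the current directory:

# try to upload a harmless file to each bucket; if it works, record the bucket and delete the file again
while read -r bucket; do
  if aws s3 cp test.txt "s3://$bucket/test.txt" 2>&1 | grep -q "upload"; then
    echo "writable: $bucket" | tee -a writable_buckets.txt
    aws s3 rm "s3://$bucket/test.txt" 2>&1 | grep -q "delete" && echo "deletable: $bucket" | tee -a writable_buckets.txt
  fi
done < bucket_name.txt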


I finally ended up with 6 more AWS S3 buckets with write and delete permissions.

I quickly reported the bug, and the next day the report was triaged as critical.




## S3 Bucket Recon

### Method 1 - Google Dorks to find S3 buckets

site:s3.amazonaws.com site.com

site:amazonaws.com inurl:s3.amazonaws.com

site:s3.amazonaws.com intitle:index.of.bucket


### Method 2 - Using Burp Suite

Crawl the whole application through the browser proxy, then discover the S3 buckets from Burp Suite's sitemap feature. Look for web addresses or special response headers that mention an S3 bucket, such as "s3.amazonaws.com" or "x-amz-bucket-region".
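
For a quick manual confirmation outside Burp, you can also request a suspected bucket endpoint directly and look for S3 response headers. The hostname below is a placeholder; x-amz-* headers (x-amz-request-id, x-amz-bucket-region, and so on) are what S3 normally returns:

# check a suspected endpoint for S3-specific response headers
curl -sI "https://name.s3.amazonaws.com/" | grep -i "x-amz"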



### Method 3 - From the application

To find the target application's S3 bucket, right-click any image in the application, open it in a new tab, and check whether the image URL looks like "https://name.s3.amazonaws.com/image1.png". In this case "name", the part before ".s3", is the bucket name where the images or data are stored.
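
Once you have a candidate name, a quick way to see whether that bucket allows anonymous listing is to request its root URL: a public bucket returns a ListBucketResult XML document, while a restricted one returns an Error response with AccessDenied. The bucket name here is a placeholder:

# public listing returns <ListBucketResult ...>; a locked-down bucket returns <Error> with AccessDenied
curl -s "https://name.s3.amazonaws.com/" | grep -oE "<(ListBucketResult|Error)[^>]*>"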



### Method 4 - GitHub tools

There are many open-source tools available on GitHub for discovering the S3 buckets associated with a website:


S3Scanner: https://github.com/sa7mon/S3Scanner 

Mass3: https://github.com/smiegles/mass3

slurp: https://github.com/exbarath/slurp

Lazy S3: https://github.com/nahamsec/lazys3

bucket finder: https://github.com/msttweidner/bucket_finder 

AWSBucketDump: https://github.com/netgusto/AWSBucketDump 

sandcastle: https://github.com/0xSearches/sandcastle 

Dumpster Diver: https://github.com/securing/DumpsterDiver 

S3 Bucket Finder: https://github.com/gwen001/s3-buckets-finder


### Method 5 - Online websites

grayhatwarfare: https://buckets.grayhatwarfare.com

osint.sh: https://osint.sh/buckets


### Method 6 - Nuclei template

Template to find S3 buckets: https://github.com/projectdiscovery/nuclei-templates/blob/master/technologies/s3-detect.yaml
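
Assuming you already have a list of live hosts (for example from httpx), the template can be run with nuclei's standard -l/-t flags; the file name live_hosts.txt is just a placeholder:

# run the s3-detect template against a list of live URLs
nuclei -l live_hosts.txt -t technologies/s3-detect.yaml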



### Method 7 - Extract S3 buckets from a list of JS URLs

A simple pair of commands to extract S3 buckets from a file of JS URLs. You can modify the regex based on your requirements.


cat js_url.txt | xargs -I {} curl -s {} | grep -oE 'http[s]?://[^"]*\.s3\.amazonaws\.com'

cat js_url.txt | xargs -I {} curl -s {} | grep -oE 'http[s]?://[^"]*\.s3\.amazonaws\.com/[^"]*'



### Method 8 - Extract using subfinder and httpx

subfinder -d domain.com -all -silent | httpx -status-code -title -tech-detect | grep "Amazon S3"


