Sat 2 December 2017
One of the pieces of advice on the GitHub repo is:
Randomise your bucket names! There is no need to use company-backup.s3.amazonaws.com.
But this wouldn't actually keep your buckets private (as matt_wulfeck pointed out in the HN comments), because DNS resolution is essentially public.
Passive DNS means looking at the results of DNS queries performed by others, rather than performing the queries yourself. Many networks and DNS resolvers forward the results of the queries they process to a centralised database, which anybody subscribing to that database can then search. Most of this data is not publicly available, but it is available to lots of organisations, and a small sample is free on VirusTotal: the "Observed subdomains" section of https://www.virustotal.com/en/domain/s3-us-west-2.amazonaws.com/information/ lists 100 example subdomains.
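As a sketch of how little work this takes: VirusTotal's legacy v2 API exposes a domain report (at /vtapi/v2/domain/report, requiring an API key) whose JSON includes a "subdomains" list. The fetching step is omitted here; this just shows extracting the subdomain list from a report, using a stubbed example response of my own rather than real data:

```python
import json

def extract_subdomains(report_json):
    """Pull the 'subdomains' list out of a VirusTotal v2 domain report."""
    report = json.loads(report_json)
    return sorted(report.get("subdomains", []))

# Stubbed report for illustration; a real one would come from
# https://www.virustotal.com/vtapi/v2/domain/report?apikey=...&domain=...
sample = '{"subdomains": ["company-backup.s3-us-west-2.amazonaws.com"]}'
print(extract_subdomains(sample))
```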
100 subdomains is good enough for a proof of concept. Multiply that by the number of AWS regions, and come back periodically to look for new subdomains, and it might even be good enough for an active attack.
I queried the list of 100 subdomains for s3-us-west-2.amazonaws.com and found that 24 of the buckets were publicly readable. I didn't look at any of the contents, but judging by the filenames most of them are intended to be public, so I don't think there's a serious leak here.
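The check itself amounts to an unauthenticated GET of each bucket's root URL: S3 returns 200 with a listing if the bucket allows public listing, 403 (AccessDenied) if it exists but is private, and 404 if there is no such bucket. A sketch using only the standard library (the function names are mine, not from any tool mentioned above):

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def classify_status(code):
    """Interpret the HTTP status from an unauthenticated GET of a
    bucket root such as https://company-backup.s3.amazonaws.com/."""
    if code == 200:
        return "public-listable"   # S3 returned a bucket listing
    if code == 403:
        return "exists-private"    # AccessDenied: bucket exists, not public
    if code == 404:
        return "no-such-bucket"
    return "other"

def check_bucket(host):
    """GET the bucket root and classify the response."""
    try:
        with urlopen("https://" + host + "/", timeout=10) as resp:
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)

# Example (makes a network request, so commented out):
# print(check_bucket("company-backup.s3.amazonaws.com"))
```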
But the moral of the story is that you should put actual authentication on your S3 buckets (indeed, on anything you don't want public!) and not just rely on an obscure DNS name to keep them private, because DNS is not private.