Spider a Website and Return URLs Only

The absolute last thing I want to do is download and parse all of the content myself (i.e. create my own spider). Once I learned that Wget writes its log output to stderr by default, I was able to redirect it to stdout and filter it appropriately.

wget --spider --force-html -r -l2 "$url" 2>&1 \
  | grep '^--' | awk '{ print $3 }' \
  | grep -v '\.\(css\|js\|png\|gif\|jpg\)$' \
  > urls.m3u
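
For reference, the lines that grep '^--' keeps look something like the following in recent versions of Wget (the exact format varies a little between versions, so the field awk prints may need adjusting):

--2024-01-15 10:23:45--  http://example.com/page.html

The URL is the third whitespace-separated field, which is what the awk '{ print $3 }' step pulls out.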

This gives me a list of the content URIs that were spidered (that is, resources that aren't images, CSS, or JS files). From there, I can send the URIs off to a third-party tool for processing to meet my needs.
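
As a minimal sketch of that hand-off (assuming curl is the tool you want to run against each URL; swap in whatever fits your workflow):

xargs -n 1 curl -s -O < urls.m3u

Since urls.m3u is just a plain newline-separated list of URLs, almost anything that reads URLs from a file or stdin will accept it.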

The output still needs to be streamlined slightly (as shown above, it contains duplicates), but it's almost there and I haven't had to do any parsing myself.
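
A quick way to drop the duplicates, assuming the order of the URIs doesn't matter, is to add a sort -u stage before writing the file:

wget --spider --force-html -r -l2 "$url" 2>&1 \
  | grep '^--' | awk '{ print $3 }' \
  | grep -v '\.\(css\|js\|png\|gif\|jpg\)$' \
  | sort -u \
  > urls.m3u

If the original crawl order matters, awk '!seen[$0]++' in place of sort -u removes duplicates without reordering the list.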
