UDdup - Urls De-Duplication Tool For Better Recon

The tool gets a list of URLs and removes "duplicate" pages, in the sense of URL patterns that are probably repetitive and point to the same web template.

For example, a target might expose many URLs such as https://www.example.com/product/1, https://www.example.com/product/2, https://www.example.com/product/3?is_debug=true, and so on.

All of the above probably point to the same product "template", so it should be enough to scan only some of these URLs with our various scanners.

The result of the above after UDdup should be a much shorter list, keeping only a representative URL (or a few) per pattern.
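Conceptually, the de-duplication groups URLs whose paths share the same pattern and keeps only representatives of each group. As a very rough illustration of that idea (not UDdup's actual implementation), the same effect can be sketched with standard shell tools by treating numeric path segments as a placeholder:

# Rough sketch of pattern-based de-duplication (illustration only, not UDdup itself).
awk -F'?' '{
    key = $1                        # drop the query string for pattern matching
    gsub(/\/[0-9]+/, "/{id}", key)  # treat numeric path segments as one placeholder
    if (!seen[key]++) print $0      # keep only the first URL seen for each pattern
}' demo.txt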
Why do I need it?

Mostly for a better (automated) reconnaissance process, with less noise (for both the tester and the target).


Take a look at demo.txt, the raw URLs file, which results in demo-results.txt after running UDdup.


Installation

With pip (Recommended)
pip install uddup

Manual (from code)
# Clone the repository.
git clone https://github.com/rotemreiss/uddup.git

# Install the Python requirements.
cd uddup
pip install -r requirements.txt
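When installing from source, the tool can presumably also be run directly with Python instead of the pip-installed uddup command. The script name below is an assumption, so check the repository layout:

# Run from the cloned repository (assumes the entry script is uddup.py; verify in the repo).
python uddup.py -u demo.txt -o ./demo-result.txt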


Usage

uddup -u demo.txt -o ./demo-result.txt

More Usage Options

uddup -h

Short Form   Long Form       Description
-h           --help          Show this help message and exit
-u           --urls          File with a list of URLs
-o           --output        Save results to a file
-s           --silent        Print only the result URLs
-fp          --filter-path   Filter paths by a given Regex
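The silent flag prints only the resulting URLs, which makes it convenient to feed UDdup's output into other tooling. The pipeline below is just an illustration of that (the downstream handling is arbitrary):

# Print only the de-duplicated URLs and keep a copy for later scanning.
uddup -u demo.txt -s | tee ./unique-urls.txt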

Filter Paths by Regex

Allows filtering by a custom path pattern (regex). For example, if we would like to filter all paths that start with /product, we would run:

# Single Regex
uddup -u demo.txt -fp "^product"





Advanced Regex with multiple path filters
uddup -u demo.txt -fp "(^product)|(^category)"
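The path filter should compose with the other flags in the usual way; for example, to save only the filtered, de-duplicated URLs to a file (this specific combination is assumed rather than taken from the original docs):

# Filter product/category paths and write the result to a file (assumed flag combination).
uddup -u demo.txt -fp "(^product)|(^category)" -o ./filtered-result.txt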


Feel free to fork the repository and submit pull requests.


Create a new GitHub issue

Want to say thanks? :) Message me on LinkedIn.
