hello guys, LC again.
today we are here to talk about a Python program that will hunt for cross-site scripting (XSS) vulnerabilities simply by being given the target URL.
so let's not waste time and begin.
before starting, I want to mention that this script will only scan the given web page, not all the routes of the website (wait for the updated script in part 2).
so let's start the program by importing the required libraries.
quick definitions for each block will follow as bullet points.
in this script we will also work with the kxss tool, so let's install it first.
I assume that you use a Debian-based OS.
the kxss tool is linked here.
run
Bash:
go install github.com/Emoe/kxss@latest
then copy kxss to /usr/local/go/bin
Bash:
cp /root/go/bin/kxss /usr/local/go/bin/
if you did that and it still does not work, check that the directory containing kxss is sourced into your shell's PATH.
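if the shell still can't find kxss after copying, the usual fix is adding the Go binaries directory to your PATH. a minimal sketch, assuming `go install` placed the binary in /root/go/bin (adjust the path to your setup):

```shell
# add the Go bin directory to the PATH for the current session
export PATH="$PATH:/root/go/bin"

# make the change permanent for future shells
echo 'export PATH="$PATH:/root/go/bin"' >> ~/.bashrc

# check whether the shell can now resolve the binary
command -v kxss || echo "kxss still not found - check where 'go install' placed it"
```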
then install the prerequisites by doing
Bash:
pip install beautifulsoup4 requests
Python:
import subprocess
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs
we use subprocess for executing the results, requests for sending GET requests, bs4 for searching for parameters, and urllib for parsing URLs and their query strings.
- import subprocess: imports the subprocess module, which allows you to run system commands from within your Python script.
- import requests: imports the requests library, which is used to send HTTP requests.
- from bs4 import BeautifulSoup: imports the BeautifulSoup class from the bs4 (BeautifulSoup 4) module, which is used for parsing HTML and XML documents.
- from urllib.parse import urlparse, parse_qs: imports the urlparse and parse_qs functions from the urllib.parse module, which help in parsing URLs and their query strings.
now we define a proxy.
the proxies dictionary tells requests to route the script's traffic through a local proxy server (for example an intercepting proxy like Burp Suite).
Python:
proxies = {
    # requests needs a full proxy URL (scheme://host:port);
    # 8080 is an assumption, adjust it to your proxy's listener
    'https': 'http://127.0.0.1:8080',
}
This sets up a proxy dictionary to route HTTPS requests through 127.0.0.1. This could be useful if you are using a local proxy for monitoring or debugging requests.
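one gotcha: requests expects each proxy value to be a full URL (scheme://host:port), not a bare IP, so '127.0.0.1' on its own will not route anything. a minimal sketch of the expected shape (the port 8080 is an assumption, Burp Suite's default listener):

```python
from urllib.parse import urlparse

# a proxy entry that requests can actually use: scheme://host:port
proxies = {'https': 'http://127.0.0.1:8080'}

# inspect the pieces to confirm the format is well-formed
parsed = urlparse(proxies['https'])
print(parsed.scheme, parsed.hostname, parsed.port)  # → http 127.0.0.1 8080
```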
now that our traffic goes through the proxy, we have to crawl the target web page by sending it a GET request. so let's define the func.
Python:
def get_url_parameters(url):
    # Send a GET request to the URL
    response = requests.get(url, proxies=proxies)
    if response.status_code != 200:
        print(f"Failed to retrieve the URL: {url}")
        return {}
    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(response.content, 'html.parser')
    # Find all links on the page
    links = soup.find_all('a', href=True)
    # Dictionary to store URL parameters
    url_parameters = {}
the get_url_parameters function sends a request to the webpage and stores the response in a variable called response, then checks whether the HTTP status code is 200 (i.e. whether the request succeeded).
after this we parse the HTML from the response variable into a new variable called soup.
now we try to find the links and their parameters, so we write:
Python:
    for link in links:
        href = link['href']
        parsed_url = urlparse(href)
        params = parse_qs(parsed_url.query)
        if params:
            url_parameters[href] = params
then we simply return url_parameters by doing
Python:
    return url_parameters
- def get_url_parameters(url): defines a function that takes a URL as an argument.
- response = requests.get(url, proxies=proxies): sends an HTTP GET request to the provided URL using the defined proxy.
- if response.status_code != 200: checks if the response status code is not 200 (OK). If not, it prints an error message and returns an empty dictionary.
- soup = BeautifulSoup(response.content, 'html.parser'): parses the HTML content of the response using BeautifulSoup.
- links = soup.find_all('a', href=True): finds all <a> tags with an href attribute on the page.
- url_parameters = {}: initializes an empty dictionary to store URL parameters.
- for each link, it extracts the href attribute, parses the URL, and extracts the query parameters. If the link has query parameters, they are added to the url_parameters dictionary. Finally, the dictionary is returned.
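to see what the loop actually extracts, here is the same urlparse/parse_qs combination run on a hypothetical href value as it might appear in an anchor tag:

```python
from urllib.parse import urlparse, parse_qs

# a hypothetical href taken from an <a> tag
href = 'search.php?q=test&page=2'

parsed_url = urlparse(href)       # splits the path from the query string
params = parse_qs(parsed_url.query)  # turns the query string into a dict
print(params)  # → {'q': ['test'], 'page': ['2']}
```

note that parse_qs returns each value as a list, since a parameter can legally appear more than once in a query string.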
now we have to save the results to a text file so that we can work with them and build the commands to execute, so:
Python:
def save_to_file(data, filename):
    with open(filename, 'w') as file:
        for url, params in data.items():
            file.write(f'echo "{url}" | ./kxss \n')
the save_to_file function takes two parameters: first data, second filename.
data is the parsed result and filename is the name of the file that we want to create.
then we loop over the entries with two variables called url and params,
build the final text (echo "url" piped to kxss),
and finally save the result to the text file.
- def save_to_file(data, filename): defines a function that takes data and a filename as arguments.
- with open(filename, 'w') as file: opens the specified file in write mode.
- for each URL and its parameters in the data, it writes a command to the file that echoes the URL and pipes it to ./kxss.
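a quick way to check what actually ends up in the file. a self-contained sketch using a hypothetical crawl result, writing to the temp directory instead of the working directory:

```python
import os
import tempfile

def save_to_file(data, filename):
    with open(filename, 'w') as file:
        for url, params in data.items():
            file.write(f'echo "{url}" | ./kxss \n')

# hypothetical output of get_url_parameters: one link with one parameter
sample = {'search.php?q=test': {'q': ['test']}}

path = os.path.join(tempfile.gettempdir(), 'url_parameters_demo.txt')
save_to_file(sample, path)

with open(path) as f:
    print(f.read())  # one "echo ... | ./kxss" line per crawled URL
```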
Python:
website_url = input('enter target website: ')
parameters = get_url_parameters(website_url)
we call the get_url_parameters function and store its return value in a variable called parameters.
now we have to call the save_to_file function and save the data because we have all the dependencies.
Python:
output_filename = 'url_parameters.txt'
save_to_file(parameters, output_filename)
we have a variable called output_filename; it contains the name of the result file (the file we want to save the data in).
then we call the save_to_file function, which saves the result data to the file we specified.
Python:
print(f"URL parameters saved to {output_filename}")
then we print that the file is saved
- website_url = input('enter target website: '): prompts the user to enter a target website URL.
- parameters = get_url_parameters(website_url): calls the function to get URL parameters from the entered website.
- output_filename = 'url_parameters.txt': defines the output filename.
- save_to_file(parameters, output_filename): saves the collected URL parameters to the specified file.
- print(f"URL parameters saved to {output_filename}"): prints a confirmation message.
now we go to stage 2 -> executing the kxss with the parameters
so for this we need to create a function that reads the output file, loops over its lines, and executes each line through subprocess.
Python:
def run_commands_from_file(file_path):
    try:
        with open(file_path, 'r') as file:
            commands = file.readlines()
        for command in commands:
            command = command.strip()  # Remove any leading/trailing whitespace
            if command:  # Ensure the command is not empty
                result = subprocess.check_output(command, shell=True, text=True)
                print(f"Executed: {command}")
                print(result)
    except FileNotFoundError:
        print(f"The file {file_path} does not exist.")
    except Exception as e:
        print(f"An error occurred: {e}")
we basically created a function called run_commands_from_file that needs a parameter called file_path.
in the function we open the file specified by the parameter, read all of its lines, and save them in a variable called commands.
then we loop over the commands, strip each line of surrounding whitespace, and assign the stripped line back to the command variable.
then we check that the command is not empty, and after that we create a variable called result that receives the output of subprocess.check_output.
subprocess.check_output basically returns the output of an executed command, so the result ends up in the result variable.
then we print what was executed, and we add the except blocks for error handling.
- def run_commands_from_file(file_path): defines a function that takes a file path as an argument.
- tries to open the specified file and read all lines.
- for each line (command) in the file, it strips any whitespace and checks that the command is not empty.
- if the command is not empty, it executes the command using subprocess.check_output and prints the command and its output.
- handles exceptions if the file is not found or any other error occurs.
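the key behaviour here is that subprocess.check_output captures the command's stdout and raises CalledProcessError on a non-zero exit code. a minimal sketch with a harmless command standing in for kxss:

```python
import subprocess

# capture the stdout of a shell command as a string
result = subprocess.check_output('echo hello', shell=True, text=True)
print(result.strip())  # → hello

# a failing command raises CalledProcessError instead of returning output
try:
    subprocess.check_output('exit 1', shell=True, text=True)
except subprocess.CalledProcessError as err:
    print(f'command failed with code {err.returncode}')  # → command failed with code 1
```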
Python:
file_path = 'url_parameters.txt'
run_commands_from_file(file_path)
we created a variable called file_path that contains the path of the result parameter text file that we generated in the first function of the script.
then we run the run_commands_from_file function and it's all done.
- file_path = 'url_parameters.txt': defines the path to the file containing the commands.
- run_commands_from_file(file_path): calls the function to run the commands from the specified file.
with this script you will be able to search for cross-site scripting vulnerabilities on the given web page with the kxss tool.
we covered how to install and run the program.
you can also improve your programming skills by reading the script, since it uses libraries that are very important these days.
if you want to donate to the writer, feel free to use:
0x066c519333AeC9dd0623e33C5ea9f84785910E96
main file will be attached
Enjoy

