• XSS.stack #1 – the first literary journal from the forum's users

Article: creating a Python script that finds [XSS] vulnerabilities and tunnels traffic through a proxy

looonely_creature

HDD-drive
User · Joined: 16.10.2023 · Messages: 40 · Reactions: 6
photo_2024-07-04_04-16-29.jpg

photo by mystical antiquity
hello guys, LC again

today we are here to talk about a Python program that performs a cross-site scripting (XSS) scan simply by being given the target URL.

so let's not waste time and begin.
before starting, I want to mention that this script only searches the given web page, not all the routes of the website (wait for the update of the script in part 2)

so let's start the program by importing the required libraries.

simple definitions will be in the ul > li tags.

in this script we will also work with the kxss tool, so let's install it first.

I assume you are using a Debian-based OS.

the kxss tool is linked here.

run

Bash:
go install github.com/Emoe/kxss@latest

then copy kxss to /usr/local/go/bin

cp /root/go/bin/kxss /usr/local/go/bin/

if you did this and it still does not work, make sure the Go bin directory is on your PATH (for example, export PATH=$PATH:$(go env GOPATH)/bin)
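Before running the script, a quick way to confirm kxss is actually reachable from Python is shutil.which, which searches PATH the same way the shell does (a small sketch, not part of the original script):

```python
import shutil

# shutil.which returns the full path of the executable if it is on PATH,
# or None if it cannot be found
kxss_path = shutil.which("kxss")
if kxss_path is None:
    print("kxss not found on PATH - check your Go bin directory")
else:
    print(f"kxss found at {kxss_path}")
```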

then install the prerequisites by doing

pip install beautifulsoup4 requests

Python:
import subprocess
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs


we use subprocess for executing the kxss commands, requests for sending GET requests, bs4 for finding parameters, and urllib for parsing URLs

  • import subprocess: This imports the subprocess module which allows you to run system commands from within your Python script.
  • import requests: This imports the requests library, which is used to send HTTP requests.
  • from bs4 import BeautifulSoup: This imports the BeautifulSoup class from the bs4 (BeautifulSoup 4) module, which is used for parsing HTML and XML documents.
  • from urllib.parse import urlparse, parse_qs: This imports urlparse and parse_qs functions from the urllib.parse module, which help in parsing URLs and their query strings.
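As a quick illustration of the two urllib.parse helpers (the URL below is a made-up example):

```python
from urllib.parse import urlparse, parse_qs

# Split a URL into its components, then decode the query string into a dict
parsed = urlparse("https://example.com/search?q=test&page=2")
params = parse_qs(parsed.query)

print(parsed.path)   # -> /search
print(params)        # -> {'q': ['test'], 'page': ['2']}
```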

now let's define a proxy server.

the proxies dictionary tells requests to route your traffic through the given proxy server
Python:
proxies = {
    'https' : '127.0.0.1',
}

This sets up a proxy dictionary to route HTTPS requests through 127.0.0.1. This could be useful if you are using a local proxy for monitoring or debugging requests.

now that traffic goes through the proxy server, we have to crawl the target web page and send a GET request to it. so let's define the function.

Python:
def get_url_parameters(url):
    # Send a GET request to the URL
    response = requests.get(url,proxies=proxies)
    if response.status_code != 200:
        print(f"Failed to retrieve the URL: {url}")
        return {}
    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(response.content, 'html.parser')
    # Find all links on the page
    links = soup.find_all('a', href=True)
    # Dictionary to store URL parameters
    url_parameters = {}

the get_url_parameters function sends a request to the web page and stores the response in a variable called response, then checks whether the HTTP status code is 200 (i.e. whether the request succeeded).
after this we parse the response body and store the parsed document in the variable soup.
now we try to find the links and their parameters:
we create the url_parameters dictionary to store the URL parameters, then loop over the links:
Python:
for link in links:
        href = link['href']
        parsed_url = urlparse(href)
        params = parse_qs(parsed_url.query)
        if params:
            url_parameters[href] = params

then we simply return url_parameters by doing

Python:
return url_parameters

  • def get_url_parameters(url): This defines a function that takes a URL as an argument.
  • response = requests.get(url, proxies=proxies): Sends an HTTP GET request to the provided URL using the defined proxy.
  • if response.status_code != 200: Checks if the response status code is not 200 (OK). If not, it prints an error message and returns an empty dictionary.
  • soup = BeautifulSoup(response.content, 'html.parser'): Parses the HTML content of the response using BeautifulSoup.
  • links = soup.find_all('a', href=True): Finds all <a> tags with an href attribute on the page.
  • url_parameters = {}: Initializes an empty dictionary to store URL parameters.
  • For each link, it extracts the href attribute, parses the URL, and extracts the query parameters.
  • If the link has query parameters, they are added to the url_parameters dictionary.
  • Finally, the dictionary is returned.
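The link-extraction part of the function can be tried on an inline HTML snippet, no network needed (the HTML below is a made-up example):

```python
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs

html = '<a href="/item?id=5&ref=home">item</a><a href="/about">about</a>'
soup = BeautifulSoup(html, "html.parser")

url_parameters = {}
for link in soup.find_all("a", href=True):
    href = link["href"]
    params = parse_qs(urlparse(href).query)
    if params:  # only keep links that actually carry query parameters
        url_parameters[href] = params

print(url_parameters)
# -> {'/item?id=5&ref=home': {'id': ['5'], 'ref': ['home']}}
```

Note that /about is dropped because it has no query string, which is exactly what the `if params:` check in the article's function does.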

now we have to save the results to a text file so that we can turn them into commands and execute them later:

Python:
def save_to_file(data, filename):
    with open(filename, 'w') as file:
        for url, params in data.items():
            file.write(f'echo "{url}" | ./kxss \n')

the save_to_file function takes two parameters: first data and second filename.

data is the parsed result and filename is the name of the file we want to create.
then we loop over the items with two variables, url and params,
and build each line as
echo
"url" piped to kxss.
finally, the result is written to the text file.

  • def save_to_file(data, filename): This defines a function that takes data and a filename as arguments.
  • with open(filename, 'w') as file: Opens the specified file in write mode.
  • For each URL and its parameters in the data, it writes a command to the file that echoes the URL and pipes it to ./kxss.
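A minimal check of this logic using a temporary file (the URL below is hypothetical, and the function body mirrors the article's save_to_file):

```python
import os
import tempfile

def save_to_file(data, filename):
    # Same logic as the article's function: one "echo ... | ./kxss" line per URL
    with open(filename, 'w') as file:
        for url, params in data.items():
            file.write(f'echo "{url}" | ./kxss \n')

data = {"http://example.com/?q=1": {"q": ["1"]}}
path = os.path.join(tempfile.gettempdir(), "url_parameters_demo.txt")
save_to_file(data, path)

with open(path) as f:
    content = f.read()
print(content)   # -> echo "http://example.com/?q=1" | ./kxss
```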
Python:
website_url = input('enter target website: ')
here we get the target URL from the user, so that we can send a request to it and parse the data out of it
Python:
parameters = get_url_parameters(website_url)

we call the get_url_parameters function and store its return value in a variable called parameters.
now we call the save_to_file function and save the data, since we have everything we need.
Python:
output_filename = 'url_parameters.txt'
save_to_file(parameters, output_filename)

we have a variable called output_filename; it contains the name of the result file (the file we want to save the collected data to).
then we call the save_to_file function, which writes the result data to the file we specified.


Python:
print(f"URL parameters saved to {output_filename}")

then we print that the file is saved

  • website_url = input('enter target website: '): Prompts the user to enter a target website URL.
  • parameters = get_url_parameters(website_url): Calls the function to get URL parameters from the entered website.
  • output_filename = 'url_parameters.txt': Defines the output filename.
  • save_to_file(parameters, output_filename): Saves the collected URL parameters to the specified file.
  • print(f"URL parameters saved to {output_filename}"): Prints a confirmation message.

now we go to stage 2 -> executing kxss with the parameters.
for this we need to create a function that reads the output file, loops over its lines, and executes each one via subprocess.

Python:
def run_commands_from_file(file_path):
    try:
        with open(file_path, 'r') as file:
            commands = file.readlines()
        
        for command in commands:
            command = command.strip()  # Remove any leading/trailing whitespace
            if command:  # Ensure the command is not empty
                result = subprocess.check_output(command, shell=True, text=True)
                print(f"Executed: {command}")
                print(result)
    except FileNotFoundError:
        print(f"The file {file_path} does not exist.")
    except Exception as e:
        print(f"An error occurred: {e}")

we basically created a function called run_commands_from_file that takes a parameter called file_path.
inside the function we open the file specified by that parameter, read all of its lines, and store them in a variable called commands.
then we loop over commands, strip each line, and assign the stripped value back to command.
we check that the command is not empty, and then create a variable called result that holds the output of subprocess.check_output.
subprocess.check_output runs a command and returns its output, so the command's result ends up in the result variable.

then we print what was executed, so we can see it. after that we add the except clauses for error handling.
  • def run_commands_from_file(file_path): This defines a function that takes a file path as an argument.
  • Tries to open the specified file and read all lines.
  • For each line (command) in the file, it strips any whitespace and checks if the command is not empty.
  • If the command is not empty, it executes the command using subprocess.check_output and prints the command and its output.
  • Handles exceptions if the file is not found or any other error occurs.
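The subprocess.check_output call can be verified with a harmless command before pointing it at kxss (a sketch assuming a POSIX shell):

```python
import subprocess

# shell=True runs the string through the shell, just like the article's loop;
# text=True makes check_output return str instead of bytes
result = subprocess.check_output('echo "hello"', shell=True, text=True)
print(result)   # -> hello
```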
now for the final part: making the program work by calling the run_commands_from_file function

Python:
file_path = 'url_parameters.txt'
run_commands_from_file(file_path)

we created a variable called file_path that contains the path of the result text file that we generated in the first function of the script.
then we run the run_commands_from_file function, and that's it.
  • Defines the path to the file containing the commands.
  • Calls the function to run the commands from the specified file
summary:
with this script you will be able to search for cross-site scripting vulnerabilities on the given web page with the kxss tool.

we covered how to install and run the program.
you can also develop your programming skills by reading the script, since it uses libraries that are very important these days.


if you want to donate to the author, feel free to use:
0x066c519333AeC9dd0623e33C5ea9f84785910E96


the main file is attached

Enjoy
 

Attachments

  • xss_finder.zip
    1 KB · Views: 23
Please note that this user is banned
are you using a proxy for harmless scraping, but not for the XSS testing itself? Good job
 
Please note that this user is banned
lmao, kxss makes ~13 requests per query parameter in 40 goroutines for just one parsed URL from the target site's URL list. Nothing suspicious for firewalls, right?
proxies = { 'https' : '127.0.0.1', }
will it work without specifying the proxy protocol? I mean, proxies = { 'https' : 'https://127.0.0.1', }
 
Buddy, the point here is learning what happens in real life. I did this so people will understand how to proxy in Python, thanks for your reply.
And if you want to send a request without a proxy, you should change
requests.get and remove where I wrote proxies=
 
Please note that this user is banned
Buddy, you can't teach anyone until you teach yourself.
look at this https://requests.readthedocs.io/en/latest/user/advanced/#proxies
e.g.
Python:
import requests

proxies = {
  'http': 'http://10.10.1.10:3128',
  'https': 'http://10.10.1.10:1080',
}

requests.get('http://example.org', proxies=proxies)
but in any case, what's the point of using a proxy if kxss sends even more requests (the attack requests) without one
 
Please note that this user is banned
At the beginning you say that the proxy is to bypass the firewall...
Now you say it is for anonymity...
But (!), your real IP address will be visible when working with kxss.
Do you understand this or not?
The proxy is absolutely useless in your case.
Learn a little before you disgrace yourself with such statements
 
First of all, I said my script here is for teaching people how these things work.
It shows them that firewalls can block your IP if you send a lot of requests; I don't care whether it works here or not because it's for learning purposes only.
Second:
when you send a GET request with a proxy set, you send it through the IP address you set. So it's still useful 😁. I told you to learn about what a proxy does.
And if you have any better option or idea,
I'm gladly waiting to read your article
LC
 

