• XSS.stack #1 – the first literary journal from the forum's users

MYSQL DUMP TO EXCEL

Kelentovich

(L3) cache
Member
Joined
22.05.2024
Messages
165
Reactions
3
Deal guarantor
3
I want to convert a MySQL file I dumped from a database into an Excel file so that it can be read in a tables, rows & columns format.

Would anyone mind telling me how I can do this without needing to connect to the database?

Just the dumped file.
 
It's easier to export to CSV, for example: "SELECT concat_ws(';', id, name, mail) INTO OUTFILE '/tmp/dump.csv'"
It has already been dumped. It's only the SQL file that I have.

But since I only have the SQL file, how can I do this?
 
You can't. Or at least, you could, but you shouldn't, because the dump format is not stable and most of the time it will give bad results.
You should instead load the SQL into a local database and use a script that uses the database's own functions to print it.


Python:
import sqlite3
import pandas as pd

# Load an SQL dump file into a local SQLite database
def load_sql_into_db(sql_file, db_file):
    conn = sqlite3.connect(db_file)
    cursor = conn.cursor()
    with open(sql_file, 'r') as f:  # don't shadow the sql_file argument
        sql_script = f.read()
        cursor.executescript(sql_script)
    conn.commit()
    conn.close()

# Query and dump a few sample rows
def dump_hypothetical_rows(db_file):
    conn = sqlite3.connect(db_file)
    query = "SELECT * FROM users LIMIT 10"  # example: first 10 rows of the 'users' table
    df = pd.read_sql_query(query, conn)
    conn.close()
    return df

if __name__ == "__main__":
    # Replace with your SQL file and SQLite database file paths
    sql_file = "data.sql"
    db_file = "local.db"

    # Load the SQL file into the SQLite database
    load_sql_into_db(sql_file, db_file)

    # Dump sample rows from the 'users' table
    df_users = dump_hypothetical_rows(db_file)

    # Print or process the dumped data (DataFrame 'df_users')
    print("Dumped Hypothetical Rows:")
    print(df_users)


You can also format the dump line by line, separating the values of each row with an unusual delimiter, like value1 + '~' + value2 + '~'.
When importing the file into Excel, you can then select a custom separator.
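The '~' separator idea above, sketched in Python. The column names and rows here are made up for illustration; pandas stands in for Excel's import wizard, since both accept a custom separator:

```python
import io
import pandas as pd

# Hypothetical rows already extracted from a dump
columns = ["id", "name", "mail"]
rows = [
    ["1", "alice", "alice@example.com"],
    ["2", "bob", "bob@example.com"],
]

# Join everything with '~' so commas inside values don't break the import
text = "\n".join(["~".join(columns)] + ["~".join(r) for r in rows])

# Excel's import wizard can use '~' as a custom separator; pandas can too
df = pd.read_csv(io.StringIO(text), sep="~")
print(df.shape)  # (2, 3)
```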
 
I tried this, and it gave me an error. I checked to see why, and then remembered that the dumped SQL files were from a MySQL database, not SQLite.

Is there a remedy for this?
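One possible remedy, as a rough sketch: strip the MySQL-only statements from the dump before feeding it to sqlite3.executescript(). The cleanup rules and the sample dump below are assumptions based on typical mysqldump output; a real dump may need more rules.

```python
import re
import sqlite3

def mysqldump_to_sqlite_script(sql_text):
    # Remove the MySQL-specific statements that make sqlite3.executescript() fail.
    # Rough sketch: covers common mysqldump output, not every possible dump.
    cleaned = []
    for line in sql_text.splitlines():
        stripped = line.strip()
        # Drop MySQL session/lock statements and conditional comments
        if stripped.startswith(("SET ", "LOCK TABLES", "UNLOCK TABLES", "/*!")):
            continue
        # Drop ENGINE/charset options at the end of CREATE TABLE
        line = re.sub(r"\)\s*ENGINE=.*;", ");", line)
        # SQLite does not know the AUTO_INCREMENT keyword
        line = line.replace(" AUTO_INCREMENT", "")
        cleaned.append(line)
    return "\n".join(cleaned)

# Made-up miniature mysqldump output for illustration
dump = """
SET NAMES utf8mb4;
CREATE TABLE `users` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(50) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
LOCK TABLES `users` WRITE;
INSERT INTO `users` VALUES (1,'alice'),(2,'bob');
UNLOCK TABLES;
"""

conn = sqlite3.connect(":memory:")
conn.executescript(mysqldump_to_sqlite_script(dump))
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2
```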
 
Okay, but it was not really helpful. I had to import the data into my localhost and then export with DBeaver, and it was still not the result I was hoping for: the process exported the tables separately, leaving me with hundreds of files from one SQL file. Now imagine what it would be with 10 SQL files.
 
Code:
import re
import pandas as pd


def parse_sql_dump(dump_file):
    with open(dump_file, 'r') as file:
        sql_content = file.read()

    # Extract the column list and the VALUES section for the table
    # (replace `your_table` with the real table name)
    table_pattern = re.compile(r"INSERT INTO `your_table` \((.*?)\) VALUES (.*?);", re.DOTALL)
    match = table_pattern.search(sql_content)

    if not match:
        raise ValueError("Failed to find data in the SQL dump.")

    columns_str, values_str = match.groups()
    columns = [col.strip().strip('`') for col in columns_str.split(',')]

    # Split "(...),(...),(...)" into individual row tuples
    rows = re.findall(r"\((.*?)\)", values_str)

    data = []
    for row in rows:
        # Naive split: breaks if a value itself contains a quoted comma
        values = [v.strip().strip("'") for v in row.split(',')]
        data.append(values)

    return columns, data


def save_to_csv(columns, data, csv_file):
    df = pd.DataFrame(data, columns=columns)
    df.to_csv(csv_file, index=False)


# Path to the SQL dump file
dump_file = 'path/to/your_dump.sql'
# Path to the output CSV file
csv_file = 'output.csv'


columns, data = parse_sql_dump(dump_file)
save_to_csv(columns, data, csv_file)
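For a dump holding several tables (the "hundreds of files" problem mentioned earlier in the thread), a variation that groups every INSERT by table name and writes one CSV per table. The sample `sql_content` and table names are made up; it assumes mysqldump-style single-statement INSERTs with simple values:

```python
import csv
import re
from collections import defaultdict

def split_tables(sql_content):
    # Group the rows of every INSERT statement by table name.
    # Assumes INSERT INTO `t` ... VALUES (...),(...); with simple values
    # (no quoted commas or parentheses inside the values themselves).
    tables = defaultdict(list)
    pattern = re.compile(r"INSERT INTO `(\w+)`.*?VALUES\s*(.*?);", re.DOTALL)
    for table, values_str in pattern.findall(sql_content):
        for row in re.findall(r"\((.*?)\)", values_str):
            tables[table].append([v.strip().strip("'") for v in row.split(",")])
    return tables

# Made-up two-table dump for illustration
sql_content = (
    "INSERT INTO `users` VALUES (1,'alice'),(2,'bob');\n"
    "INSERT INTO `orders` VALUES (10,1),(11,2);\n"
)

# One CSV per table instead of hundreds of separate exports
for table, rows in split_tables(sql_content).items():
    with open(f"{table}.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)
```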
 

