Shrinking huge database dumps.

Have you ever worked with big database dumps? I suppose you have. Usually they contain a big chunk of data you don't need at all, yet you still have to restore the whole file and waste your precious time.

So, here is my own way to deal with it and save your time (and money).

Step 1: split the dump file into one file per table.
For this I use this simple script:


#!/bin/bash

####
# Split a MySQL dump SQL file into one file per table
####

if [ $# -lt 1 ]; then
    echo "USAGE: $0 DUMP_FILE [TABLE]"
    exit 1
fi

if [ $# -ge 2 ]; then
    # Extract a single table: keep the dump header, skip ahead to the
    # requested table, cut it out, then skip to the dump footer
    csplit -s -ftable "$1" "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
    # Split at every table header
    csplit -s -ftable "$1" "/-- Table structure for table/" {*}
fi

[ $? -eq 0 ] || exit

# The first piece is the dump header shared by all tables
mv table00 head

FILE=$(ls -1 table* | tail -n 1)
if [ $# -ge 2 ]; then
    mv "$FILE" foot
else
    # The last piece still carries the dump footer; split it off
    csplit -b '%d' -s -f"$FILE" "$FILE" "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
    mv "${FILE}1" foot
fi

for FILE in table*; do
    # The table name is the second backtick-delimited field of the header line
    NAME=$(head -n1 "$FILE" | cut -d$'\x60' -f2)
    cat head "$FILE" foot > "$NAME.sql"
done

rm head foot table*
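
Save it as, say, split-dump.sh (the name is up to you) and run it against your dump. The optional second argument extracts just one table; split-dump.sh and users below are placeholder names:

chmod +x split-dump.sh
./split-dump.sh dump.sql           # one .sql file per table
./split-dump.sh dump.sql users     # extract only the `users` table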

Step 2: check the resulting files for their size.
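
One way to do it, from the directory with the generated .sql files, is to sort them by size so the biggest offenders show up at the top:

du -h *.sql | sort -rh | head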

Step 3: delete, truncate, or otherwise deal with the massive files.
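
For example, if some log table turns out to be the bulk of the dump (logs.sql here is a hypothetical name), you can drop its file entirely, or keep the table structure but strip the data (this assumes mysqldump's default one-line extended INSERTs):

rm logs.sql                             # drop the table completely
grep -v '^INSERT INTO' logs.sql > logs.tmp && mv logs.tmp logs.sql   # keep schema, drop rows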

* Step 4: merge the files back into a single dump:

cat *.sql > merged-file

Use the *.sql glob rather than a bare *, so that only the table files end up in the merge.
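
Then restore it as usual (dbname is a placeholder for your target database):

mysql dbname < merged-file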

PROFIT!

* – optional
