
The squeezed dump

By IgnacioRuiz - Posted on 04 April 2008

Sometimes you'll need to move data from one database to another, or between platforms. If you use the old export/import duo, there are workarounds to split a big dump file into smaller pieces... but what if, even in smaller pieces, the file is still unmanageable?

There is a workaround when working with Unix and Linux platforms: pipes and IO redirection.

These simple scripts let you compress and decompress dump files 'on the fly':


# mknod exp.pipe p
# gzip < ./exp.pipe > /backups/export.dmp.gz &
# exp user/password full=y file=exp.pipe \
      log=export.lis statistics=none direct=y consistent=y


# mknod imp.pipe p
# gunzip < /backups/export.dmp.gz > imp.pipe &
# imp file=imp.pipe fromuser=dbuser touser=dbuser log=import.lis commit=y
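The pipe-through-gzip pattern can be exercised end to end without Oracle at all. Below is a minimal, self-contained sketch where `cat` stands in for `exp` and `imp`, and `mkfifo` is used as the portable equivalent of `mknod ... p`; all file names are illustrative:

```shell
#!/bin/sh
# Round-trip demo of compressing through a named pipe.
# 'cat' stands in for exp (writer) and imp (reader).
set -e
dir=$(mktemp -d)
cd "$dir"

# Some source data standing in for a dump
printf 'line1\nline2\n' > source.dat

# Export side: gzip drains the pipe in the background,
# the writer (exp in the real scripts) feeds it.
mkfifo exp.pipe
gzip < exp.pipe > export.dmp.gz &
cat source.dat > exp.pipe
wait    # let gzip see EOF and finish writing export.dmp.gz

# Import side: gunzip feeds the pipe in the background,
# the reader (imp in the real scripts) drains it.
mkfifo imp.pipe
gunzip < export.dmp.gz > imp.pipe &
cat imp.pipe > restored.dat
wait

cmp source.dat restored.dat && echo "round trip OK"
```

The data never touches the disk uncompressed: the writer blocks until the compressor opens the pipe, and closing the writing end is what signals EOF to gzip.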

Important: every program must be reachable through your PATH environment variable; otherwise, find where mknod, gzip/gunzip and exp/imp are located and modify these scripts to use absolute paths.
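A quick way to check, before running the scripts, is to ask the shell where each tool lives (note that exp/imp only exist on hosts with an Oracle client installed, so they may legitimately be reported missing elsewhere):

```shell
# Print the absolute path of each required tool, or flag it as missing.
# exp and imp ship with the Oracle client and may be absent on other hosts.
for cmd in mknod gzip gunzip exp imp; do
    command -v "$cmd" || echo "$cmd: not found in PATH"
done
```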

I've measured the resulting file sizes, and the compressed dumps come out at roughly 10% to 20% of the original size.
