

Showing posts from November, 2013

Critical use of touch & chmod command

Critical use of touch & chmod command When you think about the touch command you may ignore its importance, and the chmod command seems to have no use for normal users. But a critical use arises around file permissions (changed with chmod) and file timestamps (changed with touch). What is the critical case for these commands? It comes when you work with large data and can lose a whole file copy just because of a permission or timestamp mismatch. When you mirror a folder with rsync while preserving permissions and timestamps, rsync deletes files whose metadata does not match the source and copies them again. In the case of a cURL resumed copy, the resulting file does not carry the same permissions and timestamp as the original. So when you later update the folder with rsync, it deletes the file that cURL already copied and transfers it all over again, wasting the time the cURL copy took. So you need to restore the original permissions and timestamp on the copied file. Now for doing t…
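A minimal sketch of restoring that metadata, assuming GNU coreutils (`chmod --reference`, `touch -r`) and hypothetical file names (`original.gz` is the source, `copy.gz` is the file cURL produced):

```shell
# Hypothetical paths for illustration: original.gz is the source file,
# copy.gz is the copy made by cURL with the wrong metadata.
src=original.gz
dst=copy.gz

# Clone the source file's permission bits onto the copy
chmod --reference="$src" "$dst"

# Clone the source file's modification timestamp onto the copy
touch -r "$src" "$dst"
```

With both attributes matching, a later `rsync -a` run sees the copy as up to date and leaves it alone.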

Postfix Problem with IPv6 settings

Postfix Problem with IPv6 settings
When trying to send email to a Gmail account through Postfix, you may see an error like this in the Postfix log “/var/log/maillog”:

Oct 17 09:47:07 localhost postfix/smtp[4352]: connect to[2a00:1450:4008:c01::1a]:25: Network is unreachable

Oct 17 09:47:07 localhost postfix/smtp[4353]: 6FC6ABFECD: to=<>, relay=none, delay=122398, delays=122396/0.03/1.8/0, dsn=4.4.1, status=deferred (connect to[2a00:1450:4008:c01::1a]:25: Network is unreachable)

A common cause is incorrect IPv6 settings. Correct your IPv6 settings and try again. If that doesn’t work, post your particular solution here!
If you want to use IPv4 instead, then you should edit the Postfix configuration file:
$ vi /etc/postfix/
and change “inet_protocols = all” to “inet_protocols = ipv4”, then restart or reload Postfix:
$ /etc/init.d/postfix reload
and flush the Postfix queue:
$ postfix flush
or just wait and mai…
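The hand edit above can also be scripted. A sketch, assuming the configuration lives at the standard location /etc/postfix/main.cf (adjust the path if yours differs):

```shell
# Switch Postfix to IPv4 only by rewriting the inet_protocols line.
# /etc/postfix/main.cf is assumed to be the config file; adjust if needed.
conf=/etc/postfix/main.cf
sed -i 's/^inet_protocols *= *all *$/inet_protocols = ipv4/' "$conf"

# Show the resulting setting
grep '^inet_protocols' "$conf"
```

On systems where the postconf tool is available, `postconf -e 'inet_protocols = ipv4'` makes the same change and is the safer idiom.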

Copy a big file (100 GB or more) with resume within a Linux system

Copy a big file (100 GB or more) with resume within a Linux system You may run into problems when copying a very large file (a single file of 100 GB or more).
You may not be able to leave the system on for that many hours, and a file of more than 100 GB can take several hours to copy depending on the system.
What you need is a resumable copy, so that a single big file can be copied across different time intervals.
I use the "curl" command to copy a single file of more than 100 GB.
Create the destination folder, go into it, and run this command:
$ mkdir destination_folder
$ cd destination_folder
$ curl -C - -O file:///media/1TB_HDD/source_folder/100-GB-source-file.gz
This command works well: you can stop the copy at any time and resume it later without any problem.
To verify the copied file, generate a checksum of the source like this:
$ cd /media/1TB_HDD/source_folder/
$ md5sum 100-GB-source-file.gz > test.checksum.md5
Then go back to the destination_folder and check it:
$ cd -
$ md5sum -c /media/1TB_HDD/source_folder/test.checksum.md5
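Because curl exits with a non-zero status when a transfer is interrupted, the stop-and-resume cycle can also be automated with a small retry loop. A sketch, reusing the source path from the example above:

```shell
# Re-run the resumable copy until curl exits successfully (status 0).
# The source URL follows the example above.
src='file:///media/1TB_HDD/source_folder/100-GB-source-file.gz'
until curl -C - -O "$src"; do
    echo "copy interrupted; resuming in 5 seconds..." >&2
    sleep 5
done
```

Each iteration passes `-C -`, so curl inspects the partial file already on disk and continues from that offset instead of starting over.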