

Showing posts from 2013

Critical use of touch & chmod command

When you think about the touch command you may overlook its importance, and the chmod command may seem to have no use for normal users. But a critical use for both appears with the permissions a file gets from chmod and the timestamp it gets from touch. What is the critical case for these commands? The answer comes when you work with large data: you can lose a whole file just because of its permissions and timestamp. When you archive with rsync, preserving permissions and timestamps, rsync deletes files whose metadata does not match and recopies them. In the case of a cURL resumed copy, the permissions and timestamp of the copied file are not the same as the original's. So when you later update the folder with rsync, it deletes and recopies a file that cURL had already copied, wasting all the time the cURL copy took. So you need to restore the original permissions and timestamp on the file. Now for doing t…
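The fix described above can be sketched with coreutils: chmod --reference and touch -r copy the permission bits and the modification time from one file to another. A minimal demo (the two files here are stand-ins for the real source file and the cURL-resumed copy):

```shell
# Demo: restore a copy's permissions and timestamp from the original,
# so that "rsync -a" no longer sees a mismatch and recopies the file.
work=$(mktemp -d)
echo "original data" > "$work/original.gz"        # stands in for the source file
chmod 640 "$work/original.gz"
touch -d '2013-06-01 12:00:00' "$work/original.gz"

cp "$work/original.gz" "$work/copy.gz"            # stands in for the resumed copy
chmod 600 "$work/copy.gz"                         # resumed copy got different metadata

chmod --reference="$work/original.gz" "$work/copy.gz"  # copy permission bits
touch -r "$work/original.gz" "$work/copy.gz"           # copy modification time

stat -c '%a %y %n' "$work/original.gz" "$work/copy.gz"
```

After these two commands the metadata matches, so an archive-mode rsync leaves the file alone.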

Postfix Problem with IPv6 settings

When trying to send email to a Gmail account through Postfix, you may see an error like this in the Postfix log /var/log/maillog:

Oct 17 09:47:07 localhost postfix/smtp[4352]: connect to[2a00:1450:4008:c01::1a]:25: Network is unreachable

Oct 17 09:47:07 localhost postfix/smtp[4353]: 6FC6ABFECD: to=<>, relay=none, delay=122398, delays=122396/0.03/1.8/0, dsn=4.4.1, status=deferred (connect to[2a00:1450:4008:c01::1a]:25: Network is unreachable)

A common cause is wrong IPv6 settings. Correct your IPv6 settings and try again. If it doesn't work, post your particular solution here!
If you want to use IPv4 only instead, edit the Postfix configuration file:
$ vi /etc/postfix/
Change "inet_protocols = all" to "inet_protocols = ipv4", then restart or reload Postfix:
$ /etc/init.d/postfix reload
and flush the Postfix queue:
$ postqueue -f
or just wait and mai…
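The same change can be made non-interactively with postconf, which ships with Postfix; a sketch (the init-script path varies by distribution):

```shell
# Set Postfix to use IPv4 only (postconf -e writes the setting into main.cf)
postconf -e 'inet_protocols = ipv4'

# Reload Postfix so the new setting takes effect
/etc/init.d/postfix reload

# Flush the deferred queue so stuck mail is retried immediately
postqueue -f
```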

Copy a Big size file(100GB or more) with resume within a Linux System

You may have a problem copying a very large file (a single file of 100 GB or more).
You may not be able to leave the system on for so many hours, and a file of more than 100 GB can take several hours to copy depending on the system.
So you need a resume option when copying big files, so that you can copy a single file across separate sessions.
I use the "curl" command to copy a single file of more than 100 GB.
Create the destination folder, change into it, and run this command:
$ mkdir destination_folder
$ cd destination_folder
$ curl -C - -O file:///media/1TB_HDD/source_folder/100-GB-source-file.gz
This command works well: you can stop and resume the copy at any time without any problem.
To verify the data file, use a checksum command like this:
$ cd /media/1TB_HDD/source_folder/
$ md5sum 100-GB-source-file.gz > test.checksum.md5
Go back to the destination folder:
$ cd -
$ md5sum -c /media/1TB_HDD/source_folder/test.checksum.md5
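The checksum workflow above can be exercised end to end on a small stand-in file; a minimal sketch (paths and file names are stand-ins for the real 100 GB file):

```shell
# Demo of the md5sum record/verify workflow on a small stand-in file.
src=$(mktemp -d)   # stands in for /media/1TB_HDD/source_folder
dst=$(mktemp -d)   # stands in for the destination folder

# Create a small stand-in "source" file
dd if=/dev/urandom of="$src/100-GB-source-file.gz" bs=1024 count=64 2>/dev/null

# Record the checksum next to the source, as in the post
cd "$src"
md5sum 100-GB-source-file.gz > test.checksum.md5

# Copy the file (stands in for the resumable curl copy), then verify
cp "$src/100-GB-source-file.gz" "$dst/"
cd "$dst"
md5sum -c "$src/test.checksum.md5"    # prints "100-GB-source-file.gz: OK"
```

md5sum -c looks up the file named inside the checksum file relative to the current directory, which is why the verification is run from the destination folder.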

System user tracking for login on email

On every Linux system used for development or production work, the system admin needs to follow up on user activity. But it is very hard to log in to every system to check user activity when you have a number of systems. You need to fix a policy so you can do this work without trouble across your Linux systems. User tracking is one such activity, with its own importance.
I use the following method to maintain my user tracking over email. It is a simple method you can add to your system:
1. Add the following line to a specific user's $HOME/.bash_profile, or, to apply the setting system-wide, add it as a separate file in the /etc/profile.d/ folder.
[root@localhost ~]# vim /etc/profile.d/
[root@localhost ~]# cat /etc/profile.d/
echo "ALERT - User (`whoami`) Shell Access on:"`hostname` `date` `who` | mail -s "Alert: `hostname` User (`whoami…
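The alert line is built entirely from shell command substitutions; a sketch of just the message part (the recipient address and delivery are up to your MTA setup):

```shell
# Compose the login-alert message the way the profile snippet does:
# each backticked/substituted command fills in one field.
msg="ALERT - User ($(whoami)) Shell Access on: $(hostname) $(date)"
echo "$msg"

# To deliver it, pipe to mail (assumes a configured MTA), e.g.:
#   echo "$msg" | mail -s "Alert: $(hostname) login ($(whoami))" admin@example.com
```

Because the snippet lives in a profile file, it runs on every interactive login, which is what produces the per-login email.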

Using KVM disk image on NTFS partition

It is difficult to use an NTFS partition for KVM storage:
you get a permission-denied error when the disk image is stored on an NTFS partition.

I like to use the following trick to use such a KVM image for virtualization.

The idea is simple: attach the virtual disk image as a device on the Linux machine.
I have tested this on CentOS 6.4 x86_64 and it works well.

Step 1: attach the VM disk image as a loop device.
You can use the kpartx or losetup command:

# kpartx -av <path of image>

The command reports the loop device it created,
e.g. /dev/loop0.

Step 2: create the VM using this loop device as the VM disk.
Use the loop number reported by the command
and give the path of the loop device.

Step 3: start the VM and use it.
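The three steps can be collected into one command outline (requires root; the image path and VM tooling here are examples, not from the post):

```shell
# Step 1: map the image's partitions as loop devices...
kpartx -av /mnt/ntfs/vms/disk.img      # prints e.g. "add map loop0p1 ..."

# ...or attach the whole image as a single loop device instead:
losetup -f --show /mnt/ntfs/vms/disk.img   # prints e.g. /dev/loop0

# Step 2: create the VM with the loop device as its disk,
# e.g. virt-install ... --disk path=/dev/loop0
# Step 3: start and use the VM as usual.

# Cleanup after the VM is stopped:
kpartx -d /mnt/ntfs/vms/disk.img
```

The trick works because KVM/qemu then opens an ordinary block device instead of a file on the NTFS mount, sidestepping the permission problem.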

Create a Facebook Links for a site

This post was used to test embedded Facebook share and Twitter follow/tweet buttons. The code used on the page may be shown later.

Nmap technique for remote scan

Here are some really cool scanning techniques using Nmap
1) Get info about remote host ports and OS detection

nmap -sS -P0 -sV -O <target>

Where <target> may be a single IP, a hostname, or a subnet

-sS TCP SYN scanning (also known as half-open, or stealth scanning)

-P0 option allows you to switch off ICMP pings.

-sV option enables version detection

-O attempts to identify the remote operating system

Other option:

-A option enables both OS fingerprinting and version detection

-v increases verbosity; use -v twice for even more.
nmap -sS -P0 -A -v <target>
2) Get list of servers with a specific port open

nmap -sT -p 80 -oG - 192.168.1.* | grep open

Change the -p argument for the port number. See “man nmap” for different ways to specify address ranges.
3) Find all active IP addresses in a network

nmap -sP 192.168.0.*

There are several other options. This one is plain and simple.

Another option is:

nmap -sP

for specific  subnets
4)  Ping a range of IP addresses

nmap -sP…

rsync trick to save sar log files in RedHat 5

I have faced this problem while managing the log files of RHEL5. By default, RHEL5 does not keep sar logs for more than 28 days.
What will you do if you need sar log files to make a monthly report of server activity?
Solution: To keep sysstat log files longer than the 28-day limit, I use the following method with rsync, saving the log file under a monthly name. It is very easy to maintain; simply append this line to /etc/cron.d/sysstat:

# echo "55 11 * * * root /usr/bin/rsync /var/log/sa/sar`date +\%d` /var/log/sa/sar-`date +\%b\%Y`" >> /etc/cron.d/sysstat

This job syncs the file sar[two-digit date] to a month-and-year-stamped copy (created if it does not exist) at 11:55 pm every night, just after the sar file is written at 11:53 pm.
--Rakesh
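The date arithmetic in that cron line can be checked on its own at a shell prompt; a sketch of how the source and destination names are formed (note that the \% escaping is only needed inside a crontab, where % is special):

```shell
# At the shell prompt % is plain; in a crontab it must be written \%.
src="/var/log/sa/sar$(date +%d)"      # e.g. /var/log/sa/sar17  (today's sar file)
dst="/var/log/sa/sar-$(date +%b%Y)"   # e.g. /var/log/sa/sar-Oct2013 (monthly copy)
echo "$src -> $dst"
# The cron job then runs: rsync "$src" "$dst"
```

Running this a few minutes before adding the cron entry confirms the names it will produce.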

Change Picture folder path in gnome screen saver - CentOS-6.3

Edit the given line in the following file:
user@hostname# vi /usr/share/applications/screensavers/personal-slideshow.desktop
## edit the line: Exec=/usr/libexec/gnome-screensaver/slideshow --location=PATH

where PATH is the full path of the picture folder.
Save the file and choose the Pictures folder in the screensaver settings.
Then reboot the system.
Test your screensaver configuration.
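The edit can also be done non-interactively with sed; a sketch run against a copy of the .desktop file (the sample content and the picture path below are examples, the real file lives under /usr/share/applications/screensavers/):

```shell
# Demo: rewrite the slideshow --location argument in a .desktop file.
f=$(mktemp)
cat > "$f" <<'EOF'
[Desktop Entry]
Name=Pictures folder
Exec=/usr/libexec/gnome-screensaver/slideshow --location=/usr/share/backgrounds
EOF

# Point the slideshow at your own picture folder (path is an example)
sed -i 's|--location=.*|--location=/home/user/Pictures|' "$f"

grep '^Exec=' "$f"
```

The same sed line, pointed at the real file and run as root, performs the edit the post describes.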