Sunday, October 18, 2015

Using ‘Stress / Stress-ng’ Tools to Impose High CPU Load and Stress Test on Linux

As a System Administrator, you may want to examine and monitor the status of your Linux systems while they are under the stress of high load. This can be a good way for System Administrators and Programmers to:
  1. fine-tune activities on a system.
  2. monitor operating system kernel interfaces.
  3. test Linux hardware components such as the CPU, memory and disk devices to observe their performance under stress.
  4. measure different power-consuming loads on a system.
In this guide, we shall look at two important tools, stress and stress-ng, for stress testing your Linux systems.

1. stress – is a workload generator tool designed to subject your system to a configurable measure of CPU, memory, I/O and disk stress.

2. stress-ng – is an updated version of the stress workload generator tool which tests your system for the following features:
  • CPU compute
  • drive stress
  • I/O syncs
  • Pipe I/O
  • cache thrashing
  • VM stress
  • socket stressing
  • process creation and termination
  • context switching properties
Though these tools are good for examining your system, they should not be used by just any system user.

Important: It is highly recommended that you run these tools with root user privileges; they can load your Linux machine very quickly, and restricting them helps you avoid certain system errors on poorly designed hardware.
How to Install ‘stress’ Tool in Linux

To install the stress tool on Debian and its derivatives such as Ubuntu and Mint, run the following command.
$ sudo apt-get install stress

To install stress on RHEL/CentOS and Fedora, you need to enable the EPEL repository first and then type the following yum command:
# yum install stress

The general syntax for using stress is:
$ sudo stress option argument

Some options that you can use with stress:
  • To spawn N workers spinning on the sqrt() function, use the --cpu N option.
  • To spawn N workers spinning on the sync() function, use the --io N option.
  • To spawn N workers spinning on the malloc()/free() functions, use the --vm N option.
  • To allocate N bytes of memory per vm worker, use the --vm-bytes N option.
  • Instead of freeing and reallocating memory resources, you can re-dirty memory by using the --vm-keep option.
  • To sleep N seconds before freeing memory, use the --vm-hang N option.
  • To spawn N workers spinning on the write()/unlink() functions, use the --hdd N option.
  • You can set a timeout after N seconds by using the --timeout N option.
  • To set a wait factor of N microseconds before any work starts, use the --backoff N option.
  • To show more detailed information when running stress, use the -v option.
  • Use --help to view usage help, or view the manpage.

How Do I use stress on Linux systems?


1. To examine the effect of the command every time you run it, first run the uptime command and note down the load average.
Next, run the stress command to spawn 8 workers spinning on sqrt() with a timeout of 20 seconds. After stress finishes, run the uptime command again and compare the load averages.
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --cpu  8 --timeout 20
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime    
 17:20:00 up  7:51,  2 users,  load average: 1.91, 2.16, 1.93  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --cpu 8 --timeout 20
stress: info: [17246] dispatching hogs: 8 cpu, 0 io, 0 vm, 0 hdd
stress: info: [17246] successful run completed in 21s
nilaxan@localhost ~ $ uptime
 17:20:24 up  7:51,  2 users,  load average: 5.14, 2.88, 2.17  [<-- Watch Load Average]
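The before/after comparison above can be scripted. Here is a minimal sketch that reads the 1-minute load average straight from /proc/loadavg on Linux; the stress invocation itself is commented out so the script is safe to run anywhere.

```shell
# Read the 1-minute load average (first field of /proc/loadavg).
loadavg() { cut -d' ' -f1 /proc/loadavg; }

before=$(loadavg)
# sudo stress --cpu 8 --timeout 20   # uncomment on a test machine
after=$(loadavg)
echo "load average before: $before, after: $after"
```

With the stress line uncommented, the second reading should come out visibly higher, as in the uptime output above.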

2. To spawn 8 workers spinning on sqrt() with a timeout of 30 seconds, showing detailed information about the operation, run this command:
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --cpu 8 -v --timeout 30s
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime
 17:27:25 up  7:58,  2 users,  load average: 1.40, 1.90, 1.98  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --cpu 8 -v --timeout 30s
stress: info: [17353] dispatching hogs: 8 cpu, 0 io, 0 vm, 0 hdd
stress: dbug: [17353] using backoff sleep of 24000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 8 [17354] forked
stress: dbug: [17353] using backoff sleep of 21000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 7 [17355] forked
stress: dbug: [17353] using backoff sleep of 18000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 6 [17356] forked
stress: dbug: [17353] using backoff sleep of 15000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 5 [17357] forked
stress: dbug: [17353] using backoff sleep of 12000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 4 [17358] forked
stress: dbug: [17353] using backoff sleep of 9000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 3 [17359] forked
stress: dbug: [17353] using backoff sleep of 6000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 2 [17360] forked
stress: dbug: [17353] using backoff sleep of 3000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 1 [17361] forked
stress: dbug: [17353] 
nilaxan@localhost ~ $ uptime
 17:27:59 up  7:59,  2 users,  load average: 5.41, 2.82, 2.28  [<-- Watch Load Average]

3. To spawn one worker spinning on malloc()/free() with a timeout of 60 seconds, run the following command.
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --vm 1 --timeout 60s 
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime
 17:34:07 up  8:05,  2 users,  load average: 1.54, 2.04, 2.11  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --vm 1 --timeout 60s 
stress: info: [17420] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [17420] successful run completed in 60s
nilaxan@localhost ~ $ uptime
 17:35:20 up  8:06,  2 users,  load average: 2.45, 2.24, 2.17  [<-- Watch Load Average]

4. To spawn 4 workers spinning on sqrt(), 3 workers spinning on sync() and 2 workers on malloc()/free(), with a timeout of 20 seconds and 256MB of memory allocated per vm worker, run the command below.
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --cpu 4 --io 3 --vm 2 --vm-bytes 256M --timeout 20s 
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime
 17:40:33 up  8:12,  2 users,  load average: 1.68, 1.84, 2.02  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --cpu 4 --io 3 --vm 2 --vm-bytes 256M --timeout 20s
stress: info: [17501] dispatching hogs: 4 cpu, 3 io, 2 vm, 0 hdd
stress: info: [17501] successful run completed in 20s
nilaxan@localhost ~ $ uptime
 17:40:58 up  8:12,  2 users,  load average: 4.63, 2.54, 2.24  [<-- Watch Load Average]
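A small arithmetic check helps when sizing vm runs: the total memory touched by the vm stressors is the worker count times --vm-bytes, so the run above (--vm 2 --vm-bytes 256M) dirties about 512 MB. A sketch:

```shell
# Total memory exercised by vm workers = workers x bytes per worker.
workers=2
mb_per_worker=256
total=$((workers * mb_per_worker))
echo "vm stressors will touch about ${total} MB"
```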

How to Install ‘stress-ng’ Tool in Linux

To install stress-ng, run the following command.
$ sudo apt-get install stress-ng             [on Debian based systems]
# yum install stress-ng                      [on RedHat based systems]
The general syntax for using `stress-ng` is:
$ sudo stress-ng option argument

Some of the options that you can use with stress-ng:
  • To start N instances of each stress test, use the --all N option.
  • To start N processes that exercise the CPU by sequentially working through all the different CPU stress testing methods, use the --cpu N option.
  • To use a given CPU stress testing method, use the --cpu-method option. Many methods are available; view the manpage to see them all.
  • To stop CPU stress processes after N bogo operations, use the --cpu-ops N option.
  • To start N I/O stress testing processes, use the --io N option.
  • To stop I/O stress processes after N bogo operations, use the --io-ops N option.
  • To start N vm stress testing processes, use the --vm N option.
  • To specify the amount of memory per vm process, use the --vm-bytes N option.
  • To stop vm stress processes after N bogo operations, use the --vm-ops N option.
  • Use the --hdd N option to start N hard disk exercising processes.
  • To stop hdd stress processes after N bogo operations, use the --hdd-ops N option.
  • You can set a timeout after N seconds by using the --timeout N option.
  • To generate a summary report of bogo operations, use the --metrics or --metrics-brief option. --metrics-brief displays only non-zero metrics.
  • You can also start N processes that create and remove directories using mkdir and rmdir with the --dir N option.
  • To stop directory operation processes after N bogo operations, use the --dir-ops N option.
  • To start N CPU-consuming processes that exercise the available nice levels, include the --nice N option. Each iteration forks off a child process that runs through all the different nice levels, running a busy loop for 0.1 seconds per level, and then exits.
  • To stop nice loops after N bogo operations, use the --nice-ops N option.
  • To start N processes that change the file mode bits via chmod(2) and fchmod(2) on the same file, use the --chmod N option. Remember, the greater the value of N, the more contention on the file. The stressor works through all the combinations of mode bits.
  • You can stop chmod operations after N bogo operations with the --chmod-ops N option.
  • You can use the -v option to display more information about ongoing operations.
  • Use -h to view help for stress-ng.

How Do I use ‘stress-ng’ in Linux systems?

1. To run 8 CPU stressors with a timeout of 60 seconds and a summary at the end of the operation, run this command:

nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --cpu 8 --timeout 60 --metrics-brief
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:15:29 up 12 min,  1 user,  load average: 0.00, 0.01, 0.03  [<-- Watch Load Average]
nilaxan@localhost:~$ sudo stress-ng --cpu 8 --timeout 60 --metrics-brief
stress-ng: info: [1247] dispatching hogs: 8 cpu
stress-ng: info: [1247] successful run completed in 60.42s
stress-ng: info: [1247] stressor      bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
stress-ng: info: [1247]       (secs)    (secs)    (secs)   (real time) (usr+sys time)
stress-ng: info: [1247] cpu   11835     60.32     59.75      0.05       196.20       197.91
nilaxan@localhost:~$ uptime
 18:16:47 up 13 min,  1 user,  load average: 4.75, 1.47, 0.54  [<-- Watch Load Average]
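If you save runs like the one above to a log, the bogo ops/s figures can be pulled out with awk. A sketch; the sample line is copied from the output above, and the field position assumes the --metrics-brief column layout shown there (next-to-last field is bogo ops/s in real time).

```shell
# Extract the bogo ops/s (real time) field from the cpu metrics line
# of a stress-ng --metrics-brief log.
line='stress-ng: info: [1247] cpu   11835     60.32     59.75      0.05       196.20       197.91'
ops=$(printf '%s\n' "$line" | awk '/ cpu /{print $(NF-1)}')
echo "cpu bogo ops/s (real time): $ops"
```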

2. To run 4 FFT CPU stressors with a timeout of 2 minutes, run this command:
nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --cpu-method fft --timeout 2m
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:25:26 up 22 min,  1 user,  load average: 0.00, 0.26, 0.31  [<-- Watch Load Average]
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --cpu-method fft --timeout 2m
stress-ng: info: [1281] dispatching hogs: 4 cpu
stress-ng: info: [1281] successful run completed in 120.01s
nilaxan@localhost:~$ uptime
 18:27:31 up 24 min,  1 user,  load average: 3.21, 1.49, 0.76  [<-- Watch Load Average]

3. To run 5 hdd stressors and stop after 100000 bogo operations, run this command.
nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --hdd 5 --hdd-ops 100000
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:29:32 up 26 min,  1 user,  load average: 0.43, 1.00, 0.67  [<-- Watch Load Average] 
nilaxan@localhost:~$ sudo stress-ng --hdd 5 --hdd-ops 100000
stress-ng: info: [1290] defaulting to a 86400 second run per stressor
stress-ng: info: [1290] dispatching hogs: 5 hdd
stress-ng: info: [1290] successful run completed in 136.16s
nilaxan@localhost:~$ uptime
 18:31:56 up 29 min,  1 user,  load average: 4.24, 2.49, 1.28  [<-- Watch Load Average]

4. To run 4 CPU stressors, 4 I/O stressors and 1 virtual memory stressor using 1GB of virtual memory for one minute, run this command below.
nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --io 4 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:34:18 up 31 min,  1 user,  load average: 0.41, 1.56, 1.10  [<-- Watch Load Average]
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --io 4 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief
stress-ng: info: [1304] dispatching hogs: 4 cpu, 4 iosync, 1 vm
stress-ng: info: [1304] successful run completed in 60.12s
stress-ng: info: [1304] stressor      bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
stress-ng: info: [1304]       (secs)    (secs)    (secs)   (real time) (usr+sys time)
stress-ng: info: [1304] cpu    1501     60.07      2.67     10.39        24.99       114.93
stress-ng: info: [1304] iosync  381463     60.01      0.00     12.90      6357.10     29570.78
nilaxan@localhost:~$ uptime
 18:35:36 up 32 min,  1 user,  load average: 4.66, 2.80, 1.59  [<-- Watch Load Average]

Summary


As recommended, these tools should be used with superuser privileges, as they have real effects on the system. They are good aids for general System Administration in Linux. I hope this guide was useful; if you have any additional ideas on how to test the health of your system using these tools or any others, do share them.

Tuesday, June 2, 2015

Useful keytool commands for certificate management

keytool command

keytool command location : $JAVA_HOME/jre/bin/keytool
cacerts location : $JAVA_HOME/jre/lib/security/cacerts

(Generally these are the locations of the keytool command and cacerts; they may vary based on your environment.)

Parameters for below examples


Alias Name/Label: "This is a cert"
Certificate Filename: testcert.cer
Keystore Name: cacerts

Importing Certificate


keytool -import -trustcacerts -alias "Alias_Name" -file "Filename" -keystore "keystore_Name"

Example:


keytool -import -trustcacerts -alias "This is a cert" -file testcert.cer -keystore cacerts

The above command will import testcert.cer into the keystore cacerts with the label "This is a cert".

Listing Certificate

keytool -list -keystore "keystore_name"

Examples:

keytool -list -keystore cacerts
Lists all the certificates in the keystore cacerts

keytool -list -v -keystore cacerts
Lists all the details of all certificates in the keystore cacerts

keytool -list -alias "This is a cert" -keystore cacerts
Lists the certificate with the alias "This is a cert" in the keystore cacerts

keytool -list -v -alias "This is a cert" -keystore cacerts
Lists the details of the certificate with the alias "This is a cert" in the keystore cacerts

keytool -list -v -keystore cacerts | grep Alias
Lists the aliases of all the certificates in the keystore cacerts.

Deleting the certificate


keytool -delete -alias "Alias_Name" -keystore "Keystore_Name"
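Filled in with this page's sample parameters, the delete command looks like the sketch below; it only assembles and prints the command so you can review it before running it against a real keystore.

```shell
# Build the keytool delete command from the sample parameters above.
alias_name="This is a cert"
keystore="cacerts"
cmd="keytool -delete -alias \"$alias_name\" -keystore $keystore"
echo "$cmd"    # review, then run it manually (or: eval "$cmd")
```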

Sunday, May 31, 2015

List Of Free Windows SSH Client Tools To Connect To Your Linux Server

You have Windows as your operating system and you need to connect to a Linux server to transfer files from Linux to Windows and back. So you need Secure Shell, known as SSH. SSH is a network protocol that enables you to connect to Linux and Unix servers over the network. It uses public key cryptography to authenticate the remote computer. You can use SSH in several ways: with automatic key-based authentication, or with password authentication to log in.


PuTTY


PuTTY is the most famous SSH and telnet client, developed originally by Simon Tatham for the Windows platform. PuTTY is open source software that is available with source code and is developed and supported by a group of volunteers.


PuTTY is very easy to install and use. You don't usually need to change most of the configuration options. To start the simplest kind of session, all you need to do is enter a few basic parameters. You can download PuTTY here.

Bitvise SSH Client


Bitvise SSH Client is an SSH and SFTP client for Windows. It is developed and supported professionally by Bitvise. The SSH Client is robust, easy to install and easy to use. It is a feature-rich graphical SSH/SFTP client for Windows that allows dynamic port forwarding through an integrated proxy, with auto-reconnecting capability.


Bitvise SSH Client is free for personal use, as well as for individual commercial use inside organizations. You can download Bitvise SSH Client here.

MobaXterm


MobaXterm is your ultimate toolbox for remote computing. In a single Windows application, it provides loads of functions that are tailored for programmers, webmasters, IT administrators and pretty much all users who need to handle their remote jobs in a more simple fashion.


MobaXterm provides all the important remote network tools (SSH, X11, RDP, VNC, FTP, MOSH, …) and Unix commands (bash, ls, cat, sed, grep, awk, rsync, …) to Windows desktop, in a single portable exe file which works out of the box. MobaXterm is free for personal use. You can download MobaXterm from here.

DameWare SSH


I think that DameWare SSH is the best free SSH client.


This free tool is a terminal emulator that lets you make multiple telnet and SSH connections from one easy-to-use console.
  • Manage multiple sessions from one console with a tabbed interface
  • Save favorite sessions within the Windows file system
  • Access multiple sets of saved credentials for easy log-in to different devices
  • Connect to computers and devices using telnet, SSH1, and SSH2 protocols
You can download DameWare SSH from this link.

SmarTTY


SmarTTY is a free multi-tabbed SSH client that supports copying files and directories with SCP on-the-fly.


Most SSH servers support up to 10 sub-sessions per connection. SmarTTY makes the best of it: no annoying multiple windows, no need to relogin, just open a new tab and go!

Cygwin


Cygwin is a large collection of GNU and Open Source tools which provide functionality similar to a Linux distribution on Windows.


Cygwin consists of a Unix system call emulation library, cygwin1.dll, together with a vast set of GNU and other free software applications organized into a large number of optional packages. Among these packages are high-quality compilers and other software development tools, an X11 server, a complete X11 development toolkit, GNU emacs, TeX and LaTeX, OpenSSH (client and server), and much more, including everything needed to compile and use PhysioToolkit software under MS-Windows.

Saturday, May 30, 2015

How to Setup VNC Server (Linux Remote Desktop Access) on CentOS/RHEL and Fedora

VNC (Virtual Network Computing) servers enable remote desktop access to Linux systems, similar to MSTSC on Windows. Linux administrators generally don't prefer graphical access, but sometimes we do need remote desktop access to Linux. In that case we need to install a VNC server on our Linux system. This tutorial will help you set up a VNC server and configure remote access for users on CentOS, RHEL and Fedora.

Step 1: Install Required Packages


Most Linux servers don't have a desktop environment installed. Make sure one is installed; if not, use the following command to install it.
For CentOS/RHEL 6:
# yum groupinstall "Desktop"

For CentOS/RHEL 5:
# yum groupinstall "GNOME Desktop Environment"
Now install a few packages required by vnc-server:
# yum install pixman pixman-devel libXfont

Step 2: Install VNC Server


After installing the required packages, let's install vnc-server on your system. vnc-server is available in the default yum repositories.
# yum install vnc-server

On CentOS/RHEL 6, you will see that the tigervnc-server package gets installed.

Step 3: Create User for VNC


Let's create a few users for connecting through VNC. You can also use existing system users to connect through VNC; in that case we only need to set a vncpasswd for those accounts.
# useradd user1
# passwd user1

# useradd user2
# passwd user2
Now set the VNC password for every account that needs to connect through VNC.
# su - user1
$ vncpasswd
$ exit

# su - user2
$ vncpasswd
$ exit

Step 4: Configure VNC Server for Users


Now edit /etc/sysconfig/vncservers configuration file and add the following to the end of the file.
VNCSERVERS="1:user1 2:user2"
VNCSERVERARGS[1]="-geometry 800x600"
VNCSERVERARGS[2]="-geometry 1024x768"

Where VNCSERVERS is the list of users allowed to connect and VNCSERVERARGS defines the screen size: user1 gets an 800x600 screen and user2 gets a 1024x768 screen on the client.

Now start the vncserver service using the following command and check the output.
# service vncserver start

Starting VNC server: 1:user1 xauth:  creating new authority file /home/user1/.Xauthority

New 'svr1.tecadmin.net:1 (user1)' desktop is svr1.tecadmin.net:1

Creating default startup script /home/user1/.vnc/xstartup
Starting applications specified in /home/user1/.vnc/xstartup
Log file is /home/user1/.vnc/svr1.tecadmin.net:1.log

2:user2 xauth:  creating new authority file /home/user2/.Xauthority

New 'svr1.tecadmin.net:2 (user2)' desktop is svr1.tecadmin.net:2

Creating default startup script /home/user2/.vnc/xstartup
Starting applications specified in /home/user2/.vnc/xstartup
Log file is /home/user2/.vnc/svr1.tecadmin.net:2.log
                                                        
As per the above output, user1's desktop is available on display :1 and user2's desktop on display :2. We will use :1 to connect as user1 and :2 to connect as user2.
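When connecting, remember that a VNC display number maps to TCP port 5900 + N (a standard VNC convention, not specific to this setup), so display :1 listens on port 5901 and :2 on 5902. A quick sketch:

```shell
# VNC display :N listens on TCP port 5900 + N.
display=1
port=$((5900 + display))
echo "user1 on display :$display -> connect to port $port"
```

This is also the port you would open in the firewall for each configured user.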

Step 5: Connect VNC Server using VNC Viewer


To access it from a remote Linux system, use the following command.
# vncviewer 192.168.1.11:1
To access the remote desktop on the vnc-server from a Windows system, you must have a VNC viewer installed. There are various VNC viewers available to use; download any one and install it on your system, for example:

TightVNC: http://www.tightvnc.com/download.php
RealVNC: https://www.realvnc.com/download/vnc/
TigerVNC: http://sourceforge.net/projects/tigervnc/files/tigervnc/

After installing the VNC viewer, connect to your system. In the example below we connect as user1 (:1).


Now enter the VNC password assigned to the account with the vncpasswd command.


You are now connected to the X Window System of your Linux machine.



Monday, May 18, 2015

Bash Tips: If -e Wildcard File Check => [: Too Many Arguments

Here is a quick bash tip that might be useful if you need to use inside a bash script a check to see if a wildcard expression of files/folders exists or not. For example:

if [ -e /tmp/*.cache ]
then
echo "Cache files exist: do something with them"
else
echo "No cache files..."
fi
This uses -e (the existing-file check), which works fine on individual files. The above code may also seem to work if the expression expands to one file; but if more files are returned by the expression, it fails with the following error:

line x: [: too many arguments
This is unfortunately normal, as '-e' can take only one parameter, so it will not work with multiple results. This means we have to find a workaround for this issue... There are probably many solutions for this problem; my idea is to run ls, count the results, and then feed that to if. Something like:

files=$(ls /tmp/*.cache)
if [ $files ]
then
echo "Cache files exist: do something with them"
else
echo "No cache files..."
fi
Now this is obviously wrong, because if there are no files in the directory we will get a result like this:

ls: cannot access /tmp/*.cache: No such file or directory
and we don’t want to count that. We need to ignore such errors when files are not there, and we will use:

files=$(ls /tmp/*.cache 2> /dev/null)
this way ls errors printed to STDERR now go to /dev/null and we don’t have to worry about them anymore.
This is still not good enough, because if we feed if more values than it expects it will still fail (when we have more files):

line x: [: /tmp/1.cache: unary operator expected
Obviously the proper way to do this would be to count the number of files and for this we just add “wc -l” to count the result of ls. This should look like this:

files=$(ls /tmp/*.cache 2> /dev/null | wc -l)
if [ "$files" != "0" ]
then
echo "Cache files exist: do something with them"
else
echo "No cache files..."
fi
and if needed we can even use the $files variable, which now holds the number of files. I hope you will find this useful if you need to do something similar; obviously the file check was just an example, and you would adapt this to your needs inside a script that does something useful ;-) .
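As another workaround (an alternative sketch, not from the original post): bash's nullglob option makes an unmatched glob expand to nothing, so the matches can be counted in an array without spawning ls at all.

```shell
# With nullglob set, /tmp/*.cache expands to zero words when nothing matches,
# so the array length is a reliable file count.
shopt -s nullglob
files=(/tmp/*.cache)
if [ ${#files[@]} -gt 0 ]
then
    echo "Cache files exist: do something with them"
else
    echo "No cache files..."
fi
shopt -u nullglob
```

This avoids both the stderr redirection and the wc -l pipeline, at the cost of being bash-specific.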

Sunday, May 17, 2015

Reset Windows 7 Admin Password with Ubuntu Live CD/USB

Have you forgotten your Windows 7 password? If the administrator account you are trying to access has been disabled, has expired, is locked out or simply rejects your password, you'll lose control and can't install software, update drivers or do any kind of administration whatsoever.

Don't panic. Just boot your computer from an Ubuntu Live CD or USB drive, and then run the chntpw program, which enables you to unlock or reset a forgotten Windows admin password. The following tutorial will walk you through the procedure to reset a Windows 7 admin password with an Ubuntu Live CD/USB drive.

How to Reset Windows 7 Admin Password with Ubuntu Live CD/USB?

Boot the machine that you’re having trouble with from Ubuntu live disk. If you don’t have one, you can create a Ubuntu Live USB drive with the freeware Universal USB Installer.

After booting into Ubuntu, open the web browser to download chntpw from the Ubuntu Universe repository: http://packages.ubuntu.com/lucid/chntpw.


Scroll down to the download section and grab the chntpw setup package. If your Ubuntu is 64-bit, please use the 64-bit version instead. Once it’s downloaded, double-click on the downloaded file to install it on your Ubuntu live disk.

By default the system automatically mounts the hard disk partition where Windows 7 is installed, and you can access the Windows partition from the Places menu. Click on your hard disk and navigate to the SAM file, where Windows stores your Windows login passwords.


Right-click on the SAM file and select Properties. In the Properties dialog, note down the file location which will be used in the next step.


Now press the keyboard shortcut Ctrl + Alt + T to launch a Terminal window. We have to make changes to the Security Accounts Manager (SAM) file, which resides in the Windows/System32/config folder. Run this command to navigate to the config folder:


cd /media/<drive_identifier>/WINDOWS/system32/config


To reset the administrator password, enter the following command to run the chntpw tool:


sudo chntpw -u Administrator SAM


Chntpw will show all the configured Windows user accounts with their current status. It presents 4 different user account tweaking options at the bottom: clear/blank the user password, set a new password / edit the password, promote the user to administrator, and unlock a locked or disabled user account. Type 1 and press Enter.


Once you’ve cleared the user account password, type y to save your changes.


Reboot your system and unplug the Ubuntu Live media. You can then log in to Windows without a password!

Conclusion

While the Ubuntu Live CD is widely used to troubleshoot PC problems, you can use it to reset Windows 7 administrator passwords as well. Follow this tutorial and you can regain access to your computer on your own when you forget the password.

Friday, May 15, 2015

10 Amazing and Mysterious Uses of (!) Symbol or Operator in Linux Commands

The '!' symbol or operator in Linux can be used as the logical negation operator, as well as to fetch commands from history with tweaks, or to run a previously run command with modification. All the commands below have been checked explicitly in the bash shell. Though I have not checked, most of these won't run in other shells. Here we go into the amazing and mysterious uses of the '!' symbol or operator in Linux commands.

1. Run a command from history by command number.


You might not be aware of the fact that you can run a command from your command history (already/earlier executed commands). To get started, first find the command number by running the 'history' command.
$ history


Find Last Executed Commands with History Command

Now run a command from history just by the number at which it appears in the output of history. Say we run the command that appears at number 1551 in the output of the 'history' command.
$ !1551


Run Last Executed Commands by Number ID

And it runs the command (the top command in the above case) that was listed at number 1551. This way of retrieving already executed commands is very helpful, especially in the case of long commands. You just need to call it using

![number at which it appears in the output of the history command].

2. Run a previously executed command as the 2nd last command, 7th last command, etc.


You may rerun commands you have run previously by their position in the run sequence: the last run command is represented as -1, the second last as -2, the seventh last as -7, and so on.

First run the history command to get a list of the last executed commands. Running history first is necessary so that you can be sure there is no dangerous command (like rm command > file) that you might run accidentally. Then check the sixth last, eighth last and tenth last commands.


$ history
$ !-6
$ !-8
$ !-10


Run Last Executed Commands By Numbers

3. Pass arguments of last command that we run to the new command without retyping


I needed to list the contents of the directory '/home/$USER/Binary/firefox', so I ran:
$ ls /home/$USER/Binary/firefox

Then I realized that I should have run 'ls -l' to see which files there are executable. So should I type the whole command again? No, I don't need to. I just need to carry the last argument over to this new command:
$ ls -l !$

Here !$ carries the last argument of the previous command into this new command.

Pass Arguments of Last Executed Command to New

4. How to handle two or more arguments using (!)


Let’s say I created a text file 1.txt on the Desktop.
$ touch /home/avi/Desktop/1.txt

and then copied it to '/home/avi/Downloads' using the complete path on either side with the cp command.
$ cp /home/avi/Desktop/1.txt /home/avi/downloads

Now we have passed two arguments to the cp command. The first is '/home/avi/Desktop/1.txt' and the second is '/home/avi/Downloads'. Let's handle them separately; just execute echo [arguments] to print each argument.
$ echo "1st Argument is : !^"
$ echo "2nd Argument is : !cp:2"

Note: the 1st argument can be printed as "!^", and the rest of the arguments can be printed by executing "![Name_of_Command]:[Number_of_argument]".

In the above example the first command was 'cp' and the 2nd argument needed to be printed, hence "!cp:2". If some command, say xyz, is run with 5 arguments and you need to get the 4th argument, you may use "!xyz:4", and use it as you like. All the arguments can be accessed with "!*".

Handle Two or More Arguments

5. Execute last command on the basis of keywords


We can execute the last executed command on the basis of keywords. We can understand it as follows:
$ ls /home > /dev/null      [Command 1]
$ ls -l /home/avi/Desktop > /dev/null                  [Command 2] 
$ ls -la /home/avi/Downloads > /dev/null                 [Command 3]
$ ls -lA /usr/bin > /dev/null            [Command 4]

Here we have used the same command (ls) but with different switches and on different folders. Moreover, we have sent the output of each command to '/dev/null' as we are not going to deal with the output; this also keeps the console clean.

Now execute the last run command on the basis of a keyword. Note that there must be no space between '!' and the keyword; '!ls' recalls the most recent command that started with 'ls' (Command 4 above).
$ !ls

(With a space, as in '! ls', bash performs no history expansion at all: '!' merely negates the exit status and a fresh 'ls' is run.)

Check the output and you will see that you are re-running an already executed command just by its keyword.

Run Commands Based on Keywords


6. The power of !! Operator


You can re-run your last command, altered, using (!!). It recalls the last executed command so you can tweak it on the current command line. Let me show you a scenario.
The other day I ran a one-liner script to get my private IP, so I ran,

$ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/

Then suddenly I figured out that I needed to redirect the output of the above script to a file ip.txt. So what should I do? Should I retype the whole command and redirect the output to a file? An easy solution is to press the UP arrow key and append '> ip.txt' to redirect the output to a file:

$ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/ > ip.txt

Thanks to the life-saving UP arrow key there. Now consider the following condition, the next time I run the below one-liner script.

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

As soon as I ran the script, the bash prompt returned an error with the message “bash: ifconfig: command not found”. It was not difficult to guess that I had run the command as a normal user where it should be run as root.

So what’s the solution? It is tedious to log in as root and then type the whole command again, and the UP arrow key from the last example does not come to the rescue here either. So we call “!!” (without quotes), which substitutes the last command run by that user.

$ su -c "!!" root

Here su switches to the suitable user, which is root; -c runs the specified command as that user; and the most important part, !!, will be replaced by the last run command. Yes, you need to provide the root password.

The Power of !! Key

The Power of !! Key
I make use of !! mostly in following scenarios,

1. When I run apt-get command as normal user, I usually get an error saying you don’t have permission to execute.
$ apt-get upgrade && apt-get dist-upgrade

Oops, an error… don’t worry, execute the below command to make it succeed.
$ su -c "!!"

Same way I do for,
$ service apache2 start
or
$ /etc/init.d/apache2 start
or
$ systemctl start apache2

Oops, the user is not authorized to carry out such a task, so I run:
$ su -c 'service apache2 start'
or
$ su -c '/etc/init.d/apache2 start'
or
$ su -c 'systemctl start apache2'


7. Run a command that affects all the files except ![FILE_NAME]


The ! (logical NOT) can be used to run a command on all files/extensions except the one that follows '!'. Note that this relies on bash’s extended globbing, which may need to be enabled first with shopt -s extglob.
A. Remove all the files from a directory except the one named 2.txt.
$ rm !(2.txt)
B. Remove all the files from the folder except those with the ‘pdf‘ extension.
$ rm !(*.pdf)
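The !() pattern is worth testing somewhere safe first; here is a minimal sketch of the same idiom run in a throwaway directory created with mktemp (the file names are invented for the demo):

```shell
#!/bin/bash
# !(pattern) is bash extended globbing and needs the extglob option;
# it may already be enabled in your interactive shell.
shopt -s extglob
dir=$(mktemp -d)          # scratch directory so no real files are touched
cd "$dir" || exit 1
touch 1.txt 2.txt notes.pdf
rm !(2.txt)               # removes 1.txt and notes.pdf, keeps 2.txt
ls
```

Running it leaves only 2.txt in the scratch directory.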


8. Check whether a directory (say /home/avi/Tecmint) exists or not, and print a message either way.


Here we use '! -d' to check whether the directory exists, followed by the logical AND operator (&&) to print that it does not exist, and the logical OR operator (||) to print that it is present.

The logic: when [ ! -d /home/avi/Tecmint ] returns 0 (i.e., the directory does not exist), what follows the logical AND executes; otherwise control falls through to the logical OR (||) and what follows it executes.
$ [ ! -d /home/avi/Tecmint ] && printf '\nno such /home/avi/Tecmint directory exist\n' || printf '\n/home/avi/Tecmint directory exist\n'
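The same check written longhand with if/else reads more clearly in scripts; /tmp is used here as a directory that exists on virtually every Linux system. Note one trap of the && / || one-liner: the || branch also runs if the printf after && fails, not only when the test itself fails, which the if/else form avoids.

```shell
#!/bin/bash
# Longhand version of: [ ! -d dir ] && printf 'missing' || printf 'present'
dir=/tmp
if [ ! -d "$dir" ]; then
    printf '\nno such %s directory exist\n' "$dir"
else
    printf '\n%s directory exist\n' "$dir"
fi
```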


9. Check whether a directory exists or not; if not, exit the command.


Similar to the above condition, but here if the desired directory does not exist, the command exits the shell.
$ [ ! -d /home/avi/Tecmint ] && exit


10. Create a directory (say Tecmint) in your home directory if it does not exist.


A common idiom in shell scripting: if the desired directory does not exist, create it.
[ ! -d /home/avi/Tecmint ] && mkdir /home/avi/Tecmint
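A sketch of the same idiom against a scratch location instead of /home/avi (note that mkdir -p does this in one step and never errors if the directory already exists):

```shell
#!/bin/bash
base=$(mktemp -d)                 # stand-in for the home directory
dir="$base/Tecmint"
[ ! -d "$dir" ] && mkdir "$dir"   # create it only if it is missing
[ -d "$dir" ] && echo "directory present"
```

The final test confirms the directory now exists and prints “directory present”.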

That’s all for now. If you know or come across any other use of '!' worth knowing, please share it with us in the feedback. Keep connected!

Source :  http://www.tecmint.com

Thursday, May 14, 2015

awk command to print columns from file


In this tutorial we will learn how to use the awk command to print columns from a file. This tip is often used by Unix/Linux system administrators.

AWK Command

Awk is an excellent tool for processing rows and columns. It can also search for keywords/strings, and we can use regular expressions with it. The awk command is widely used by Unix/Linux users for text processing of files.

print columns from files with awk command

As per the post title, we will explain some tips on using the awk command to print columns from a file.
For practice we will use a file called employee.list. I have created this file in /tmp; let’s have a look at its content.

Figure 1:

awk command

Print single columns from file

To print a single column from a file, use the syntax given below:
awk '{print $column-number}' /path/file-name
Example: Printing column no. 2
awk '{print $2}' /tmp/employee.list
Example: In another example, we will print column no. 4
awk '{print $4}' /tmp/employee.list

Below is the screenshot of both awk command example. (compare the column no. shown in Figure 1)

awk example
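Screenshots aside, the behaviour is easy to reproduce by piping a couple of invented whitespace-separated records straight into awk:

```shell
printf 'alice 25 dev\nbob 30 ops\n' | awk '{print $2}'
# prints:
# 25
# 30
```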

Print multiple columns from file

Printing multiple columns requires a comma (,) separating each column number in the awk command.
Use the syntax below for printing multiple columns with awk.
awk '{print $column-number1,$column-number2,$column-numberN}' /path/file-name
Example 1: Now we will print column number 2, column number 3 and column number 5.
awk '{print $2,$3,$5}' /tmp/employee.list

See the output in below given screenshot

awk command

Printing single or multiple columns by handling the field separator

I hope from above examples, you learned about using awk command to print single and multiple columns from file.

What if you have a field separator in the file, such as a semicolon ( ; ), colon ( : ), comma ( , ), tab or space?
In this case, we have the option --field-separator, or in short -F.

Use the given below syntax for awk with field-separator option
awk -F'separator' '{print $column-number-N}' /path/file-name 
Let’s take an example: the /etc/passwd file, where we see the colon (:) is the field separator.

Task: Here our task is to print column no. 6 from /etc/passwd, which holds the user’s home directory path.
In doing so we also have to deal with the separator, which is the colon ( : ).

See the result in the below screenshot (because the /etc/passwd output was long, I have taken a screenshot of the upper portion only).

awk example

Similarly, we can print multiple columns with a field separator.

Here, in this example, we will print column no. 1, column no. 6 and column no. 7 from /etc/passwd.
awk -F':' '{print $1,$6,$7}' /etc/passwd 

Check the result on your system; it will help you understand.
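Both field-separator examples can also be verified on a single made-up passwd-style record, without depending on the contents of the real /etc/passwd:

```shell
line='guest:x:1001:1001:Guest User:/home/guest:/bin/bash'   # invented record
printf '%s\n' "$line" | awk -F':' '{print $6}'        # prints: /home/guest
printf '%s\n' "$line" | awk -F':' '{print $1,$6,$7}'  # prints: guest /home/guest /bin/bash
```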

Print the entire file using the awk command

This one is an interesting command: printing the entire file using awk.
It is similar to cat file-name.
Syntax:
awk '{print $0}' /path/filename

Example: We are showing this example in below given screenshot

awk example print $0
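Without the screenshot, the cat-like behaviour of $0 can be seen on any piped input:

```shell
printf 'line one\nline two\n' | awk '{print $0}'
# prints the input unchanged:
# line one
# line two
```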

Tuesday, May 12, 2015

Troubleshooting steps for Cron job not working

In this post, I am sharing troubleshooting steps for cron job problems. You may have seen many times that a cron job suddenly stops working, or does not work on the first attempt. As a Linux System Engineer, I have also faced this problem many times. The reasons are many, and we are sharing a few of them which occur most often.

What is Cron

Cron is a utility in Unix-like operating systems which serves as a time-based job scheduler. To maintain cron, we use the cron table, called the crontab.

Here, a job means a script/command. When we define any job (script/command/process) in the crontab to be executed, that job is called a cron job.

Cron is different from the at scheduler. One basic difference is that at runs a job only once, whereas cron executes the job every time the defined minute, hour, date, day and month match.

Crontab Syntax :
m h dom mon dow command

In the above syntax:
m = Minute
h = Hour
dom = Date of month
mon = Month
dow = Day of week (e.g. Sunday, Monday, Tuesday etc.)
command = The command or script to be run by the cron job.
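As an illustration, a crontab entry that runs a (hypothetical) backup script every day at 02:30 would look like this:

```
# m  h  dom mon dow  command
30   2   *   *   *   /usr/local/bin/backup.sh
```

The path /usr/local/bin/backup.sh is just a placeholder; cron jobs should generally use full paths, since cron runs with a minimal environment.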

Recommendation : http://cronchecker.net

Troubleshooting Steps for cronjob

We are listing some common practices for troubleshooting cron jobs. We expect you have some knowledge of setting up a cron job.

1. Command / Script not executed from cron job

This is a common problem seen by many Linux system admins. Here are the possible reasons.

(a) Command not found: the command the admin wants to execute is not available on the system (or not on cron’s limited PATH). Check whether the command is available; if not, install the command/utility, or use its full path in the crontab.

(b) In the same section, we will talk about a script that does not run from the cron job. The possible reasons for the script not running are:

* Execute permission is not given to the script for the user.
* Appropriate ownership and group are not set on the script.
* A command used in the script is not available on the system.
* A typo in the script.

2. Password expired

It might be that the password has expired for the user who runs the cron job. To troubleshoot, check password expiration for root and the other users.

To check password expiration status, use the below given syntax. Replace username with your system user.

chage -l username

3. User account is inactive

An inactive user account is another reason a cron job suddenly stops working or never works.
You can use the same command we used for checking password expiration.
chage -l username

4. Disk usage or inodes 100% full

When the disk is full, with no free space, scripts and commands may fail to run.
The same can happen when inode usage reaches 100% on a partition.

To check disk space usage, run df -h; to check inode usage, run df -i.

df -h
df -i

There can also be other possible reasons, which vary from network to network and by type of server.

Sunday, May 10, 2015

How to Reset MySQL root Account Password

MySQL is open source database software widely used as back-end storage for websites. Sometimes we forget our MySQL root account password. This article will help you reset the root account password in a few simple steps.

Step 1: Stop MySQL

First, stop the MySQL server using the following command.
# service mysqld stop

Step 2: Start MySQL in SafeMode

Now start the MySQL server in safe mode using the following command. When started in safe mode with --skip-grant-tables, it will not prompt for a password.
# mysqld_safe --skip-grant-tables &

Step 3: Change root Password

Now log in to the MySQL server as the root user and change the password using the following set of commands.
# mysql -u root
mysql> use mysql;
mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';
mysql> flush privileges;
mysql> quit

Step 4: Restart MySQL

After changing the password, stop MySQL (running in safe mode) and start it again.
# service mysqld stop
# service mysqld start

Step 5: Verify New Password

After resetting the MySQL root account password and restarting, just verify the new password by logging in.

# mysql -u root -p

Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 51
Server version: 5.5.34 MySQL Community Server (GPL) by Remi

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> 

How to Backup and Restore a MySQL Database

MySQL is a database server for storing data permanently. If you are using a MySQL server in production, it is necessary to create database backups so you can recover from any crash. MySQL provides the mysqldump utility for taking backups. In this article you will learn how to take database backups in plain .sql format as well as in archived format. We will also explain various options for it.

Options for Creating MySQL Database Backups

You have many options for creating database backups; read through a few of them below. For these examples we are using the database name “mydb”.

1. Full Database Backup in Plain .sql File

 # mysqldump -u root -p mydb > mydb.sql

2. Full Database Backup in Archive .sql.gz File

 # mysqldump -u root -p mydb | gzip > mydb.sql.gz

3. Backup Single Table Only

 # mysqldump -u root -p mydb tbl_student > tbl_student.sql

4. Backup Multiple Databases

# mysqldump -u root -p --databases mydb1 mydb2 mydb3 > mydb1-mydb2-mydb3.sql

5. Backup All Databases

 # mysqldump -u root -p --all-databases > all-db-backup.sql

6. Backup Database Structure Only (no data)

 # mysqldump -u root -p --no-data mydb > mydb.sql

7. Backup Database Data Only (no table structure)

 # mysqldump -u root -p --no-create-info mydb > mydb.sql

8. Backup MySQL Database in XML Format

 # mysqldump -u root -p --xml mydb > mydb.xml

How to Restore MySQL Backup

Restoring a database from a backup is quite simple; we use the mysql command for it. For example, the following command will restore the backup from mydb.sql into the mydb database.

# mysql -u root -p mydb < mydb.sql

How to Reset Admin Password on Ubuntu 14.10

When you install Ubuntu on your system, the first user you create gets administrative privileges. You can also create additional administrative users from that main account on a fresh Ubuntu installation.

In case you lose administrative account access on Ubuntu, you can reset it within 2 minutes. I am running Ubuntu 14.10 on VirtualBox. Follow the steps below to reset the password.

  • 1. Restart your Ubuntu system.
  • 2. On the GRUB loading screen, press ESC to view the boot list.
  • 3. Now select “Advanced options for Ubuntu” and press Enter.
  • 4. Now select the (recovery mode) option and press Enter.
  • 5. Here you will see the Recovery menu. Select “Drop to root shell prompt”.
  • 6. Change the password of your administrative user. For this example, I am changing the password of the user “root”.
     root@ubuntu:~# passwd root
  • 7. In case you get an error like the one below,
passwd: Authentication token manipulation error
passwd: password unchanged
remount your file system in read/write mode using the following command and try resetting the password again.
root@ubuntu:~# mount -o remount,rw /

How To Edit Hosts File on Your System

The hosts file is useful for mapping hostnames or domain names to IP addresses locally on a system. This file is available on every operating system and allows mapping domains to IP addresses without making any DNS entry.

Sample Hosts File:
127.0.0.1   localhost.localdomain   localhost
::1         localhost6.localdomain6 localhost6

For example, suppose you are running a server on a public network and have configured a website on it. To access the website through a domain name, you would normally have to register a valid domain and point its DNS records to that server. Using the hosts file, however, we can take any test domain name, such as example.com or www.example.com, configure it on the server and map it to the IP in the hosts file. This lets us access the site from the server without any domain registration or DNS pointing.

Edit Hosts File on Windows:

On Windows operating systems this file is available at the following location, with the name “hosts”:
 C:\Windows\System32\drivers\etc\

Navigate to the above location in File Explorer, edit the “hosts” file in Notepad, and make entries like the ones below at the end of the file.

127.0.0.1   localhost.localdomain   localhost
::1         localhost6.localdomain6 localhost6

192.168.1.100     example.com
192.168.1.100     www.example.com
10.10.0.11        site1.example.com site2.example.com

Save the file and close it. You have now mapped the domain names to IP addresses locally on your system.

Edit Hosts File on Linux/Unix:

On Linux/Unix operating systems this file is generally available at the following location:
 /etc/hosts

Edit this file and make the proper entries with hostnames and IP addresses, as below.

# vim /etc/hosts
127.0.0.1   localhost.localdomain   localhost
::1         localhost6.localdomain6 localhost6

192.168.1.100     example.com
192.168.1.100     www.example.com
10.10.0.11        site1.example.com site2.example.com


Save the file and close it. You have now mapped the domain names to IP addresses locally on your system.
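After saving, you can confirm that a name resolves via the hosts file with getent; shown here for the stock localhost entry (entries such as example.com above would resolve the same way once added):

```shell
getent hosts localhost
# prints the mapped address followed by the name, e.g. 127.0.0.1 localhost
```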

How to Check If a File is Empty or Not Using a Bash Script

When working with bash shell programming and you need to read some file’s content, it is good practice to first test whether the given file exists, and then whether it is empty. This will save your script from throwing errors. This article will help you test whether a file exists and whether it is empty.

1. Check if File Empty or Not:-

This script checks whether the given file is empty. In the example below, if /tmp/myfile.txt is an empty file, it will print “File empty”; if the file has some content, it will print “File not empty”.
#!/bin/bash

if [ -s /tmp/myfile.txt ]
then
     echo "File not empty"
else
     echo "File empty"
fi

The same if condition can be written in a single line as below.
#!/bin/bash

[ -s /tmp/myfile.txt ] && echo "File not empty" || echo "File empty" 
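One caveat with the one-liner form, worth knowing: the || branch fires whenever anything before it failed, not only when the -s test itself failed. A quick demonstration with a temporary non-empty file:

```shell
#!/bin/bash
f=$(mktemp); echo data > "$f"     # a file that is NOT empty
# The test succeeds, but because the command after && fails,
# the || branch still runs:
[ -s "$f" ] && false || echo "|| branch ran anyway"
```

For anything beyond a trivial message, prefer the explicit if/else form.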

2. Check if File Exists and Not Empty:

The script below checks whether the file exists and whether it is empty. In the example, if /tmp/myfile.txt does not exist it will print “File not exists”; if the file exists but is empty it will print “File exists but empty”; and if it exists and has some content it will print “File exists and not empty”.
if [ -f /tmp/myfile.txt ]
then
    if [ -s /tmp/myfile.txt ]
    then
        echo "File exists and not empty"
    else
        echo "File exists but empty"
    fi
else
    echo "File not exists"
fi

3. Check if File Exists and Not Empty with Variable:

This is the same as #2, except here the file name is saved in a variable.

#!/bin/bash

FILENAME="/tmp/myfile.txt"

if [ -f ${FILENAME} ]
then
    if [ -s ${FILENAME} ]
    then
        echo "File exists and not empty"
    else
        echo "File exists but empty"
    fi
else
    echo "File not exists"
fi
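The three checks can also be rolled into one reusable function (the name file_status is ours, not from the article):

```shell
#!/bin/bash
# Report whether a file is missing, empty, or non-empty.
file_status() {
    if [ ! -f "$1" ]; then
        echo "File not exists"
    elif [ ! -s "$1" ]; then
        echo "File exists but empty"
    else
        echo "File exists and not empty"
    fi
}

empty=$(mktemp)                        # exists, zero bytes
full=$(mktemp); echo data > "$full"    # exists, has content
file_status /no/such/file              # File not exists
file_status "$empty"                   # File exists but empty
file_status "$full"                    # File exists and not empty
```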

Tuesday, April 28, 2015

SSH Tunnel – Port Forwarding With SSH

SSH has a huge number of features, the SSH Tunnel being just one of them. An SSH Tunnel is a secure connection between two machines and is often referred to as “SSH Tunneling” or “Port Forwarding”.

Using the “ssh” command we can bind a desired port on a local machine to a desired port on a remote machine. This creates an encrypted SSH Tunnel between these machines and enables direct communication via localhost socket address. We can use SSH Tunnel to secure an insecure connection or to bypass different firewall restrictions.



Before we create our first SSH Tunnel check that you can run “ssh” command on your system. If you are running CentOS 6 minimal then you probably need to install openssh-clients package (Ubuntu users need to install openssh-client package).

SSH Tunnel

There are three types of Port Forwarding and thus three ways of using an SSH Tunnel:

  • Local Port Forwarding (enables access from local socket address via intermediate SSH server to a destination socket address)
  • Remote Port Forwarding (enables access from a remote location via the intermediate SSH server socket address to a local socket address)
  • Dynamic Port Forwarding (SOCKS Proxy Server – NOT COVERED IN DETAIL IN THIS HOW TO!)

I use “SSH Tunneling” (Local Port Forwarding) on a daily basis, since a customer environment I work in is designed so that I can only access a Workstation Linux server on SSH port 22. All of the other infrastructure machines are accessible only from this Workstation, so “SSH Tunneling” is the best way to go to directly access the different services.

SSH Tunnel – Local Port Forwarding

Local Port Forwarding lets you connect from a local machine to a remote machine even if you do not have direct access to this remote machine from your local environment. In order for this to work, you need to have SSH access to an intermediate machine, which of course has access to the remote machine you want to connect to. The intermediate machine can reside in your local network and be subject to a different firewall policy or be outside of your local network.

Example #1:

We have SSH access on port 22 to a Workstation machine (user: wsuser, hostname: workstation). Behind the Workstation machine is an Application server (hostname: appserver) running Apache Tomcat on port 8080. We cannot directly access the Apache Tomcat administration webpage on port 8080 from our Local machine, but port 8080 is accessible from the Workstation machine, so we can create an SSH Tunnel and forward local port 8080 from our Local machine via the Workstation to the Application server.

We can do this by running the following command on our Local machine:

ssh -f wsuser@workstation -L 8080:appserver:8080 -N


After authenticating to Workstation SSH server the connection is established and Apache Tomcat administration webpage is accessible when we open a web browser on our Local machine and point it to http://localhost:8080
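A tunnel used daily does not have to be retyped each time; it can be kept in ~/.ssh/config. The Host alias below is an invented name, the rest mirrors the command above:

```
# Hypothetical ~/.ssh/config entry equivalent to the -L command above
Host tomcat-tunnel
    HostName workstation
    User wsuser
    LocalForward 8080 appserver:8080
```

With this in place, ssh -f -N tomcat-tunnel establishes the same forwarding.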

Example #2:

Let’s say the situation is the same as in Example #1 but with one difference – there is also a firewall between Workstation (user: wsuser, hostname: workstation) and Application server (user: appuser, hostname: appserver) which only allows SSH access on port 22 from Workstation to Application server. This means Workstation can not directly access Apache Tomcat on port 8080.

There is still a way we can access Apache Tomcat administration webpage from our Local machine, but we need to make 2 hops via SSH.

1.
SSH to Workstation machine:
ssh wsuser@workstation

2.
When connected to Workstation machine, forward port 8080 via SSH to the Application server:
ssh -f appuser@appserver -L 8080:localhost:8080 -N

3.
Next we need to forward port 8080 via SSH from our Local machine to Workstation:
ssh -f wsuser@workstation -L 8080:localhost:8080 -N

Voila, the Apache Tomcat administration webpage is accessible if I open a web browser on my Local machine and point it to http://localhost:8080


SSH Tunnel – Remote Port Forwarding

Remote Port Forwarding works the other way around from Local Port Forwarding. With Local Port Forwarding we enable access from our local machine, via an intermediate machine running an SSH server, to a remote machine; with Remote Port Forwarding we enable access from a remote machine, via the intermediate machine, to our local machine. Of course, for this to work we need SSH access to the intermediate machine. Remote Port Forwarding comes in useful when we do not have router administration rights and so cannot configure port forwarding at the router level; an SSH tunnel with Remote Port Forwarding does the same trick.

Before we can start Remote Port Forwarding we must reconfigure SSH server on an intermediate machine to accept it. We must edit “/etc/ssh/sshd_config” and uncomment and change to “yes” the following option:
GatewayPorts yes

of course followed by a SSH service restart!

Example #1:

We are running Apache Tomcat on our Local machine on port 8080. We want our friend, who is not on our local network, to access our Apache Tomcat administration webpage on port 8080 and help us configure something or deploy a new application. Luckily, we have a Webserver (user: myuser, hostname: webserver) hosting some webpage, accessible from the internet and also via SSH from our local network. We will configure Remote Port Forwarding and enable our friend to access, via the Webserver, the Apache Tomcat administration webpage running on our Local machine.

We can do this by running the following command:
ssh -f myuser@webserver -R 8080:localhost:8080 -N

Voila, we can now tell our friend to access Webserver on port 8080 and Apache Tomcat administration webpage running on our Local machine will open up to him.
As we can see the only difference when using Remote Port Forward is the syntax change from “-L” to “-R” option.

SSH Tunnel – Dynamic Port Forwarding

Dynamic Port Forwarding will turn your machine into a SOCKS Proxy. A SOCKS Proxy can relay all requests through the network or the internet, but programs usually must be configured to use it. A SOCKS Proxy can be started with the following command:

ssh -C -D 1080 localmachine

where the -C option enables compression, the -D option specifies dynamic port forwarding, and 1080 is the standard SOCKS Proxy port. The next step would be to reconfigure your web browser to use 127.0.0.1 on port 1080 as a SOCKS Proxy.

Using Dynamic Port Forwarding and configuring your browser to use local SOCKS Proxy will encrypt all traffic visited from your web browser and make your connections secure.

SSH Tunnel – GeekPeek Tips

  • If you are running SSH server on a non-default port, you need to specify your port when running “ssh” command with the “-p” option
ssh -f wsuser@workstation -p 22222 -L 8080:appserver:8080 -N
  • Double check the ports that are already used on the intermediate machine before doing Local or Remote Port Forwarding. You can use netstat command and grep the port you want to forward
netstat -anp |grep 8080
  • Do not forget to reconfigure SSH server before trying Remote Port Forwarding and restarting SSH server
GatewayPorts yes
  • The “-f” option requests SSH to go to the background and the “-N” option tells it not to execute a remote command. If you do not want SSH to go into the background, just remove the “-f” and “-N” options
  • Make sure your IPTables configuration is compatible with Port Forwarding you configured!

Thursday, April 23, 2015

How to Configure Linux Cluster with 2 Nodes on RedHat and CentOS

In an active-standby Linux cluster configuration, all the critical services, including the IP and filesystem, will fail over from one node to the other node in the cluster.
This tutorial explains in detail how to create and configure a two-node Red Hat cluster using command line utilities.
The following are the high-level steps involved in configuring a Linux cluster on Red Hat or CentOS:

  • Install and start RICCI cluster service
  • Create cluster on active node
  • Add a node to cluster
  • Add fencing to cluster
  • Configure failover domain
  • Add resources to cluster
  • Sync cluster configuration across nodes
  • Start the cluster
  • Verify failover by shutting down an active node
Red Hat Cluster

1. Required Cluster Packages

First make sure the following cluster packages are installed. If you don’t have these packages, install them using the yum command.

[root@rh1 ~]# rpm -qa | egrep -i "ricci|luci|cluster|ccs|cman"
modcluster-0.16.2-28.el6.x86_64
luci-0.26.0-48.el6.x86_64
ccs-0.16.2-69.el6.x86_64
ricci-0.16.2-69.el6.x86_64
cman-3.0.12.1-59.el6.x86_64
clusterlib-3.0.12.1-59.el6.x86_64

2. Start RICCI service and Assign Password

Next, start ricci service on both the nodes.
[root@rh1 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]

You also need to assign a password for the RICCI on both the nodes.

[root@rh1 ~]# passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Also, if you are running an iptables firewall, keep in mind that you need appropriate firewall rules on both nodes so they can talk to each other.

3. Create Cluster on Active Node

From the active node, please run the below command to create a new cluster.
The following command will create the cluster configuration file /etc/cluster/cluster.conf. If the file already exists, it will replace the existing cluster.conf with the newly created cluster.conf.

[root@rh1 ~]# ccs -h rh1.mydomain.net --createcluster mycluster
rh1.mydomain.net password:

[root@rh1 ~]# ls -l /etc/cluster/cluster.conf
-rw-r-----. 1 root root 188 Sep 26 17:40 /etc/cluster/cluster.conf
Also keep in mind that we are running these commands only from one node on the cluster and we are not yet ready to propagate the changes to the other node on the cluster.

4. Initial Plain cluster.conf File

After creating the cluster, the cluster.conf file will look like the following:
[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <fence_daemon/>
  <clusternodes/>
  <cman/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

5. Add a Node to the Cluster

Once the cluster is created, we need to add the participating nodes to the cluster using the ccs command as shown below.
First, add the first node rh1 to the cluster as shown below.

[root@rh1 ~]# ccs -h rh1.mydomain.net --addnode rh1.mydomain.net
Node rh1.mydomain.net added.

Next, add the second node rh2 to the cluster as shown below.

[root@rh1 ~]# ccs -h rh1.mydomain.net --addnode rh2.mydomain.net
Node rh2.mydomain.net added.

Once the nodes are created, you can use the following command to view all the available nodes in the cluster. This will also display the node id for the corresponding node.

[root@rh1 ~]# ccs -h rh1 --lsnodes
rh1.mydomain.net: nodeid=1
rh2.mydomain.net: nodeid=2

6. cluster.conf File After Adding Nodes

The above will also add the nodes to the cluster.conf file, as shown below.

[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="mycluster">
  <fence_daemon/>
  <clusternodes>
    <clusternode name="rh1.mydomain.net" nodeid="1"/>
    <clusternode name="rh2.mydomain.net" nodeid="2"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

7. Add Fencing to Cluster

Fencing is the disconnection of a node from shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity.

A fence device is a hardware device that can be used to cut a node off from shared storage.
This can be accomplished in a variety of ways: powering off the node via a remote power switch, disabling a Fiber Channel switch port, or revoking a host’s SCSI 3 reservations.

A fence agent is a software program that connects to a fence device in order to ask the fence device to cut off access to a node’s shared storage (via powering off the node or removing access to the shared storage by other means).
Execute the following command to enable fencing.

[root@rh1 ~]# ccs -h rh1 --setfencedaemon post_fail_delay=0
[root@rh1 ~]# ccs -h rh1 --setfencedaemon post_join_delay=25

Next, add a fence device. There are different types of fencing devices available. If you are using virtual machine to build a cluster, use fence_virt device as shown below.

[root@rh1 ~]# ccs -h rh1 --addfencedev myfence agent=fence_virt

Next, add a fencing method. After creating the fencing device, you need to create the fencing method and add the hosts to it.

[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh1.mydomain.net
Method mthd1 added to rh1.mydomain.net.

[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh2.mydomain.net
Method mthd1 added to rh2.mydomain.net.

Finally, associate fence device to the method created above as shown below:
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh1.mydomain.net mthd1
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh2.mydomain.net mthd1

8. cluster.conf File after Fencing

Your cluster.conf will look like below after the fencing devices, methods are added.

[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="10" name="mycluster">
  <fence_daemon post_join_delay="25"/>
  <clusternodes>
    <clusternode name="rh1.mydomain.net" nodeid="1">
      <fence>
        <method name="mthd1">
          <device name="myfence"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh2.mydomain.net" nodeid="2">
      <fence>
        <method name="mthd1">
          <device name="myfence"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman/>
  <fencedevices>
    <fencedevice agent="fence_virt" name="myfence"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

9. Types of Failover Domain

A failover domain is an ordered subset of cluster members to which a resource group or service may be bound.
The following are the different types of failover domains:
  • Restricted failover domain: Resource groups or services bound to the domain may only run on cluster members which are also members of the failover domain. If no member of the failover domain is available, the resource group or service is placed in the stopped state.
  • Unrestricted failover domain: Resource groups bound to this domain may run on all cluster members, but will run on a member of the domain whenever one is available. This means that if a resource group is running outside the domain and a member of the domain comes online, the resource group or service will migrate to that cluster member.
  • Ordered domain: Nodes in the domain are assigned a priority level from 1 to 100, with 1 being the highest and 100 the lowest. The resource group runs on the available node with the highest priority; for example, if the resource group is running on node 2 and the higher-priority node 1 comes back online, it will migrate to node 1.
  • Unordered domain: Members of the domain have no order of preference. Any member may run the resource group. Resource groups will always migrate to members of their failover domain whenever possible.

10. Add a Failover Domain

To add a failover domain, execute the following command. In this example, we create an ordered domain named "webserverdomain".

[root@rh1 ~]# ccs -h rh1 --addfailoverdomain webserverdomain ordered

Once the failover domain is created, add both the nodes to the failover domain as shown below:

[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh1.mydomain.net priority=1

[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh2.mydomain.net priority=2

You can view all the nodes in the failover domain using the following command.
[root@rh1 ~]# ccs -h rh1 --lsfailoverdomain
webserverdomain: restricted=0, ordered=1, nofailback=0
  rh1.mydomain.net: 1
  rh2.mydomain.net: 2
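
Based on the flags shown by --lsfailoverdomain above (restricted=0, ordered=1, nofailback=0), the domain should appear inside the <rm> section of cluster.conf roughly as in the sketch below (attribute order may differ on your system):

  <rm>
    <failoverdomains>
      <failoverdomain name="webserverdomain" nofailback="0" ordered="1" restricted="0">
        <failoverdomainnode name="rh1.mydomain.net" priority="1"/>
        <failoverdomainnode name="rh2.mydomain.net" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources/>
  </rm>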

11. Add Resources to Cluster

Now it is time to add resources. Resources define the services that should fail over, along with the IP address and filesystem, when a node fails. For example, the Apache web server can be part of the failover in a Red Hat Linux cluster.

When you are ready to add resources, there are two ways you can do this:

You can add them as global resources, or add a resource directly to a resource group or service.
The advantage of adding a global resource is that if you want to use the resource in more than one service group, you can simply reference the global resource from each service or resource group.
In this example, we add the filesystem on shared storage as a global resource and reference it from the service.

[root@rh1 ~]# ccs -h rh1 --addresource fs name=web_fs device=/dev/cluster_vg/vol01 mountpoint=/var/www fstype=ext4

To add a service to the cluster, create a service and add the resource to the service.

[root@rh1 ~]# ccs -h rh1 --addservice webservice1 domain=webserverdomain recovery=relocate autostart=1

Now add the following lines to cluster.conf to reference the resource from the service. In this example, we also add a failover IP to our service.

  <fs ref="web_fs"/>
  <ip address="192.168.1.12" monitor_link="yes" sleeptime="10"/>
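
Putting the pieces together, the <rm> section of cluster.conf should end up looking roughly like the sketch below, with the global fs resource defined under <resources> and referenced from the service, and the <ip> line added by hand as the failover IP:

  <rm>
    <failoverdomains>
      <!-- failover domain from step 10 defined here -->
    </failoverdomains>
    <resources>
      <fs device="/dev/cluster_vg/vol01" fstype="ext4" mountpoint="/var/www" name="web_fs"/>
    </resources>
    <service autostart="1" domain="webserverdomain" name="webservice1" recovery="relocate">
      <fs ref="web_fs"/>
      <ip address="192.168.1.12" monitor_link="yes" sleeptime="10"/>
    </service>
  </rm>

Remember to increment the config_version attribute in the <cluster> element whenever you edit cluster.conf by hand.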

In the second part of this tutorial (tomorrow), we'll explain how to sync the configuration across all nodes in the cluster, and how to verify a failover scenario in the cluster setup.