Monday, April 17, 2017

Network Bonding in RHEL/CentOS (Combining multiple NICs)

When hosting a successful & busy website or operating a critical server, high availability & redundancy are major factors to consider. To achieve them, server backups and server clusters are prepared. Another technique used for HA/redundancy is network bonding.

Network bonding refers to combining more than one NIC into a single logical NIC for the purpose of HA/redundancy or load balancing. With network bonding in place, if one of the NICs fails, the load is transferred to the next NIC in the bond; the bond can also be configured for load balancing.

In this tutorial, we are going to create network bonding for two interfaces on RHEL/CentOS 7 servers.

Installation

To create a network bond between NICs, we will require the bonding kernel module. To load the module into the system, run
$ modprobe bonding
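To confirm the module is loaded, and (optionally) to have it loaded automatically at every boot on RHEL/CentOS 7, something along these lines should work (run the second command as root; the modules-load.d path is the usual systemd convention):
$ lsmod | grep bonding
$ echo "bonding" > /etc/modules-load.d/bonding.conf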
Once the module has been loaded, we will create a configuration file for the bonding interface, ‘ifcfg-bond0’, in the ‘/etc/sysconfig/network-scripts’ directory.

Configuring Bond interface

Go to ‘/etc/sysconfig/network-scripts’ & create the bond file with the following content,
$ cd /etc/sysconfig/network-scripts
$ vi ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.100
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
BONDING_OPTS="mode=5 miimon=100"
Here, mode=5 means the bond will provide both fault tolerance & load balancing. Details of all available bonding modes are listed below,

mode=0 (Balance Round Robin) - round-robin mode for fault tolerance and load balancing.
mode=1 (Active backup) - Sets active-backup mode for fault tolerance.
mode=2 (Balance XOR) - Sets an XOR (exclusive-or) mode for fault tolerance and load balancing.
mode=3 (Broadcast) - Sets a broadcast mode for fault tolerance. All transmissions are sent on all slave interfaces.
mode=4 (802.3ad) - Sets an IEEE 802.3ad dynamic link aggregation mode. Creates aggregation groups that share the same speed & duplex settings.
mode=5 (Balance TLB) - Sets a Transmit Load Balancing (TLB) mode for fault tolerance & load balancing.
mode=6 (Balance ALB) - Sets an Active Load Balancing (ALB) mode for fault tolerance & load balancing.

The next step is to configure the network interfaces, i.e. ifcfg-enp0s3 & ifcfg-enp0s5, for the bonding.

Configuring network interfaces

Make changes to both interface files & add the “MASTER” & “SLAVE” parameters to them (use each NIC’s actual MAC address for HWADDR), so that each looks like,
TYPE=Ethernet
BOOTPROTO=none
DEVICE=enp0s3
ONBOOT=yes
HWADDR="23:03:56:bh:56:9g"
MASTER=bond0
SLAVE=yes
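The second slave file (assumed here to be ifcfg-enp0s5; substitute your actual interface name and MAC address) differs only in DEVICE & HWADDR:
TYPE=Ethernet
BOOTPROTO=none
DEVICE=enp0s5
ONBOOT=yes
HWADDR="xx:xx:xx:xx:xx:xx"
MASTER=bond0
SLAVE=yes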
Save both files & restart the networking service on the system,
$ systemctl restart network
We can now run the ‘ifconfig’ command to verify the newly created bonding interface, or we can check the bond by running the following,
$ cat /proc/net/bonding/bond0
This will provide complete information about bonding interface.

Testing fault tolerance

To test whether the network bonding is working, bring one of the network interfaces down. To do so, run
$ ifdown enp0s3
& verify by making an HTTP or SSH request to the server via the bonding interface’s IP address; the network should keep working just fine. We can also check which interface is up & which is down by running the command above again, i.e.
$ cat /proc/net/bonding/bond0
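Once the test is done, bring the interface back up:
$ ifup enp0s3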
This concludes our tutorial on network bonding. Please mention your queries/comments in the comment box below.

Saturday, September 3, 2016

Enable time tag in History command in Linux

The Bash history feature is invaluable: it allows users to recall commands previously entered into their shell with relative ease. This makes it easy to re-enter repeated commands and keep track of what was done on a system.

By default, however, a user is unable to see when these commands were actually entered. When auditing a system, it can sometimes be useful to see this type of information, for example when trying to determine how and when a file may have gone missing on the file system. Since Bash version 3, however, you are able to enable time-stamping of entries for review later.

Applicable versions of Bash provide the environment variable HISTTIMEFORMAT, which you can set for this purpose. Although it is empty by default, you can simply assign a time format string to enable time-stamping.

First run this command :

user@localhost:~$ history
   53  vi /etc/fstab
   54  umount /ebs
   55  mount /ebs/local

Enable timestamp in history command

To enable time stamp on your bash history type following command on your terminal:
    user@localhost:~$ export HISTTIMEFORMAT="%F %T "

Again execute history command : 
   user@localhost:~$ history
   53  2016-03-02 08:27:38 vi /etc/fstab
   54  2016-03-02 08:27:38 umount /ebs
   55  2016-03-02 08:27:38 mount /ebs/local

More help on setting the time format can be found in the date command's manual page:
   man date
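For instance, a day-first layout (a sketch; the format specifiers are the same strftime codes that date understands, so pick whatever layout you prefer):
   user@localhost:~$ export HISTTIMEFORMAT="%d/%m/%y %T "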

Now add it to the end of your .bashrc so it is always set.
   user@localhost:~$ vi .bashrc

# ~/.bashrc: executed by bash(1) for non-login shells.
    export HISTTIMEFORMAT="%F %T "
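Alternatively, you can append the line and reload the configuration in one step (a sketch; this assumes your interactive shells actually read ~/.bashrc):
    user@localhost:~$ echo 'export HISTTIMEFORMAT="%F %T "' >> ~/.bashrc
    user@localhost:~$ source ~/.bashrc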

Sunday, October 18, 2015

Using ‘Stress / Stress-ng’ Tools to Impose High CPU Load and Stress Test on Linux

As a System Administrator, you may want to examine and monitor the status of your Linux systems when they are under stress of high load. This can be a good way for System Administrators and Programmers to:
  1. fine tune activities on a system.
  2. monitor operating system kernel interfaces.
  3. test your Linux hardware components such as CPU, memory, disk devices and many others to observe their performance under stress.
  4. measure different power consuming loads on a system.
In this guide, we shall look at two important tools, stress and stress-ng, for stress testing your Linux systems.

1. stress – is a workload generator tool designed to subject your system to a configurable measure of CPU, memory, I/O and disk stress.

2. stress-ng – is an updated version of the stress workload generator tool which tests your system for the following features:
  • CPU compute
  • drive stress
  • I/O syncs
  • Pipe I/O
  • cache thrashing
  • VM stress
  • socket stressing
  • process creation and termination
  • context switching properties
Though these tools are good for examining your system, they should not just be used by any system user.

Important: It is highly recommended that you use these tools with root user privileges, because they can load your Linux machine very quickly, and root access helps avoid certain system errors on poorly designed hardware.
How to Install ‘stress’ Tool in Linux

To install the stress tool on Debian and its derivatives such as Ubuntu and Mint, run the following command.
$ sudo apt-get install stress

To install stress on RHEL/CentOS and Fedora, you need to enable the EPEL repository and then run the following yum command:
# yum install stress

The general syntax for using stress is:
$ sudo stress option argument

Some options that you can use with stress (a combined example using the memory-related options follows this list):
  • To spawn N workers spinning on sqrt(), use the --cpu N option.
  • To spawn N workers spinning on sync(), use the --io N option.
  • To spawn N workers spinning on malloc()/free(), use the --vm N option.
  • To allocate memory per vm worker, use the --vm-bytes N option.
  • Instead of freeing and reallocating memory, you can redirty memory by using the --vm-keep option.
  • Set a sleep of N seconds before freeing memory by using the --vm-hang N option.
  • To spawn N workers spinning on write()/unlink(), use the --hdd N option.
  • You can set a timeout after N seconds by using the --timeout N option.
  • Set a wait factor of N microseconds before any work starts by using the --backoff N option.
  • To show more detailed information while stress is running, use the -v option.
  • Use --help to view help for stress, or view the manpage.
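For example, a sketch combining the memory-related options above (the worker count, allocation size and delays are arbitrary values chosen only for illustration):
$ sudo stress --vm 2 --vm-bytes 128M --vm-hang 10 --backoff 500000 --timeout 30s -v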

How Do I use stress on Linux systems?


1. To examine the effect of the command every time you run it, first run the uptime command and note down the load average.
Next, run the stress command to spawn 8 workers spinning on sqrt() with a timeout of 20 seconds. After running stress, run the uptime command again and compare the load averages.
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --cpu  8 --timeout 20
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime    
 17:20:00 up  7:51,  2 users,  load average: 1.91, 2.16, 1.93  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --cpu 8 --timeout 20
stress: info: [17246] dispatching hogs: 8 cpu, 0 io, 0 vm, 0 hdd
stress: info: [17246] successful run completed in 21s
nilaxan@localhost ~ $ uptime
 17:20:24 up  7:51,  2 users,  load average: 5.14, 2.88, 2.17  [<-- Watch Load Average]

2. To spawn 8 workers spinning on sqrt() with a timeout of 30 seconds, showing detailed information about the operation, run this command:
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --cpu 8 -v --timeout 30s
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime
 17:27:25 up  7:58,  2 users,  load average: 1.40, 1.90, 1.98  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --cpu 8 -v --timeout 30s
stress: info: [17353] dispatching hogs: 8 cpu, 0 io, 0 vm, 0 hdd
stress: dbug: [17353] using backoff sleep of 24000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 8 [17354] forked
stress: dbug: [17353] using backoff sleep of 21000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 7 [17355] forked
stress: dbug: [17353] using backoff sleep of 18000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 6 [17356] forked
stress: dbug: [17353] using backoff sleep of 15000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 5 [17357] forked
stress: dbug: [17353] using backoff sleep of 12000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 4 [17358] forked
stress: dbug: [17353] using backoff sleep of 9000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 3 [17359] forked
stress: dbug: [17353] using backoff sleep of 6000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 2 [17360] forked
stress: dbug: [17353] using backoff sleep of 3000us
stress: dbug: [17353] setting timeout to 30s
stress: dbug: [17353] --> hogcpu worker 1 [17361] forked
stress: dbug: [17353] 
nilaxan@localhost ~ $ uptime
 17:27:59 up  7:59,  2 users,  load average: 5.41, 2.82, 2.28  [<-- Watch Load Average]

3. To spawn one worker spinning on malloc() and free() with a timeout of 60 seconds, run the following command.
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --vm 1 --timeout 60s 
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime
 17:34:07 up  8:05,  2 users,  load average: 1.54, 2.04, 2.11  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --vm 1 --timeout 60s 
stress: info: [17420] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [17420] successful run completed in 60s
nilaxan@localhost ~ $ uptime
 17:35:20 up  8:06,  2 users,  load average: 2.45, 2.24, 2.17  [<-- Watch Load Average]

4. To spawn 4 workers spinning on sqrt(), 3 workers spinning on sync() and 2 workers on malloc()/free(), with a timeout of 20 seconds and 256MB of memory allocated per vm worker, run the command below.
nilaxan@localhost ~ $ uptime
nilaxan@localhost ~ $ sudo stress --cpu 4 --io 3 --vm 2 --vm-bytes 256M --timeout 20s 
nilaxan@localhost ~ $ uptime

Sample Output

nilaxan@localhost ~ $ uptime
 17:40:33 up  8:12,  2 users,  load average: 1.68, 1.84, 2.02  [<-- Watch Load Average]
nilaxan@localhost ~ $ sudo stress --cpu 4 --io 3 --vm 2 --vm-bytes 256M --timeout 20s
stress: info: [17501] dispatching hogs: 4 cpu, 3 io, 2 vm, 0 hdd
stress: info: [17501] successful run completed in 20s
nilaxan@localhost ~ $ uptime
 17:40:58 up  8:12,  2 users,  load average: 4.63, 2.54, 2.24  [<-- Watch Load Average]

How to Install ‘stress-ng’ Tool in Linux

To install stress-ng, run the following command.
$ sudo apt-get install stress-ng             [on Debian based systems]
# yum install stress-ng                      [on RedHat based systems]
The general syntax for using stress-ng is:
$ sudo stress-ng option argument

Some of the options that you can use with stress-ng (a short example using the dir and chmod stressors follows this list):
  • To start N instances of each stress test, use the --all N option.
  • To start N processes that exercise the CPU by sequentially working through all the different CPU stress testing methods, use the --cpu N option.
  • To use a given CPU stress testing method, use the --cpu-method option. There are many methods available; view the manpage to see them all.
  • To stop CPU stress processes after N bogo operations, use the --cpu-ops N option.
  • To start N I/O stress testing processes, use the --io N option.
  • To stop io stress processes after N bogo operations, use the --io-ops N option.
  • To start N vm stress testing processes, use the --vm N option.
  • To specify the amount of memory per vm process, use the --vm-bytes N option.
  • To stop vm stress processes after N bogo operations, use the --vm-ops N option.
  • Use the --hdd N option to start N hard-disk exercising processes.
  • To stop hdd stress processes after N bogo operations, use the --hdd-ops N option.
  • You can set a timeout after N seconds by using the --timeout N option.
  • To generate a summary report after the bogo operations, you can use the --metrics or --metrics-brief options. --metrics-brief displays only non-zero metrics.
  • You can also start N processes that will create and remove directories using mkdir and rmdir by using the --dir N option.
  • To stop directory operation processes after N bogo operations, use the --dir-ops N option.
  • To start N CPU consuming processes that exercise the available nice levels, include the --nice N option. When using this option, every iteration forks off a child process that runs through all the different nice levels, running a busy loop for 0.1 seconds per level, and then exits.
  • To stop nice loops, use the --nice-ops N option.
  • To start N processes that change the file mode bits via chmod(2) and fchmod(2) on the same file, use the --chmod N option. Remember, the greater the value of N, the more contention on the file. The stressor works through all the combinations of mode bits.
  • You can stop chmod operations with the --chmod-ops N option.
  • You can use the -v option to display more information about ongoing operations.
  • Use -h to view help for stress-ng.
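As a quick illustration of the dir and chmod stressors above (a sketch; the process and bogo-op counts are arbitrary):
$ sudo stress-ng --dir 2 --dir-ops 10000 --chmod 2 --chmod-ops 5000 --metrics-brief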

How Do I use ‘stress-ng’ in Linux systems?

1. To run 8 CPU stressors with a timeout of 60 seconds and a summary at the end of the operation, run this command:

nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --cpu 8 --timeout 60 --metrics-brief
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:15:29 up 12 min,  1 user,  load average: 0.00, 0.01, 0.03  [<-- Watch Load Average]
nilaxan@localhost:~$ sudo stress-ng --cpu 8 --timeout 60 --metrics-brief
stress-ng: info: [1247] dispatching hogs: 8 cpu
stress-ng: info: [1247] successful run completed in 60.42s
stress-ng: info: [1247] stressor      bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
stress-ng: info: [1247]       (secs)    (secs)    (secs)   (real time) (usr+sys time)
stress-ng: info: [1247] cpu   11835     60.32     59.75      0.05       196.20       197.91
nilaxan@localhost:~$ uptime
 18:16:47 up 13 min,  1 user,  load average: 4.75, 1.47, 0.54  [<-- Watch Load Average]

2. To run 4 FFT CPU stressors with a timeout of 2 minutes.
nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --cpu-method fft --timeout 2m
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:25:26 up 22 min,  1 user,  load average: 0.00, 0.26, 0.31  [<-- Watch Load Average]
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --cpu-method fft --timeout 2m
stress-ng: info: [1281] dispatching hogs: 4 cpu
stress-ng: info: [1281] successful run completed in 120.01s
nilaxan@localhost:~$ uptime
 18:27:31 up 24 min,  1 user,  load average: 3.21, 1.49, 0.76  [<-- Watch Load Average]

3. To run 5 hdd stressors and stop after 100000 bogo operations, run this command.
nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --hdd 5 --hdd-ops 100000
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:29:32 up 26 min,  1 user,  load average: 0.43, 1.00, 0.67  [<-- Watch Load Average] 
nilaxan@localhost:~$ sudo stress-ng --hdd 5 --hdd-ops 100000
stress-ng: info: [1290] defaulting to a 86400 second run per stressor
stress-ng: info: [1290] dispatching hogs: 5 hdd
stress-ng: info: [1290] successful run completed in 136.16s
nilaxan@localhost:~$ uptime
 18:31:56 up 29 min,  1 user,  load average: 4.24, 2.49, 1.28  [<-- Watch Load Average]

4. To run 4 CPU stressors, 4 I/O stressors and 1 virtual memory stressor using 1GB of virtual memory for one minute, run the command below.
nilaxan@localhost:~$ uptime
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --io 4 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief
nilaxan@localhost:~$ uptime

Sample Output

nilaxan@localhost:~$ uptime
 18:34:18 up 31 min,  1 user,  load average: 0.41, 1.56, 1.10  [<-- Watch Load Average]
nilaxan@localhost:~$ sudo stress-ng --cpu 4 --io 4 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief
stress-ng: info: [1304] dispatching hogs: 4 cpu, 4 iosync, 1 vm
stress-ng: info: [1304] successful run completed in 60.12s
stress-ng: info: [1304] stressor      bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
stress-ng: info: [1304]       (secs)    (secs)    (secs)   (real time) (usr+sys time)
stress-ng: info: [1304] cpu    1501     60.07      2.67     10.39        24.99       114.93
stress-ng: info: [1304] iosync  381463     60.01      0.00     12.90      6357.10     29570.78
nilaxan@localhost:~$ uptime
 18:35:36 up 32 min,  1 user,  load average: 4.66, 2.80, 1.59  [<-- Watch Load Average]

Summary


As recommended, these tools should be used with superuser privileges, as they place a real load on the system. They are handy for general system administration in Linux. I hope this guide was useful; if you have any additional ideas on how to test the health of your system using these or any other tools, please share them in the comments.

Tuesday, June 2, 2015

Useful keytool commands for certificate management

keytool command

keytool command location : $JAVA_HOME/jre/bin/keytool
cacerts location : $JAVA_HOME/jre/lib/security/cacerts

(These are the typical locations of the keytool command and the cacerts keystore; they may vary based on your environment.)

Parameters for below examples


Alias Name/Label: "This is a cert"
Certificate Filename: testcert.cer
Keystore Name: cacerts

Importing Certificate


keytool -import -trustcacerts -alias "Alias_Name" -file "Filename" -keystore "keystore_Name"

Example:


keytool -import -trustcacerts -alias "This is a cert" -file testcert.cer -keystore cacerts

The above command will import testcert.cer into the keystore cacerts with the label "This is a cert"

Listing Certificate

keytool -list -keystore "keystore_name"

Examples:

keytool -list -keystore cacerts
Lists all the certificates in the keystore cacerts

keytool -list -v -keystore cacerts
Lists all the details of all certificates in the keystore cacerts

keytool -list -alias "This is a cert" -keystore cacerts
Lists the certificate with the alias "This is a cert" in the keystore cacerts

keytool -list -v -alias "This is a cert" -keystore cacerts
Lists the details of the certificate with the label "This is a cert" in the keystore cacerts

keytool -list -v -keystore cacerts |grep Alias
Lists the aliases of all the certificates in the keystore cacerts.

Deleting the certificate


keytool -delete -alias "Alias_Name" -keystore "Keystore_Name"
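
Example (using the sample parameters above):

keytool -delete -alias "This is a cert" -keystore cacerts

This deletes the certificate with the label "This is a cert" from the keystore cacerts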

Sunday, May 31, 2015

List Of Free Windows SSH Client Tools To Connect To Your Linux Server

You have Windows as your operating system and need to connect to a Linux server, for example to transfer files between Linux and Windows and vice versa. For this you need Secure Shell, better known as SSH. SSH is a network protocol that enables you to connect to Linux and Unix servers over the network. It uses public key cryptography to authenticate the remote computer. You can use SSH in several ways: either automatically with key-based authentication, or by using password authentication to log in.


PuTTY


PuTTY is the most famous SSH and telnet client, developed originally by Simon Tatham for the Windows platform. PuTTY is open source software that is available with source code and is developed and supported by a group of volunteers.


PuTTY is very easy to install and to use. You don’t usually need to change most of the configuration options. To start the simplest kind of session, all you need to do is enter a few basic parameters. You can download PuTTY here

Bitvise SSH Client


Bitvise SSH Client is an SSH and SFTP client for Windows. It is developed and supported professionally by Bitvise. The SSH Client is robust, easy to install and easy to use. It is a feature-rich graphical SSH/SFTP client for Windows that also allows dynamic port forwarding through an integrated proxy with auto-reconnect capability.


Bitvise SSH Client is free for personal use, as well as for individual commercial use inside organizations. You can download Bitvise SSH Client here.

MobaXterm


MobaXterm is your ultimate toolbox for remote computing. In a single Windows application, it provides loads of functions that are tailored for programmers, webmasters, IT administrators and pretty much all users who need to handle their remote jobs in a more simple fashion.


MobaXterm provides all the important remote network tools (SSH, X11, RDP, VNC, FTP, MOSH, …) and Unix commands (bash, ls, cat, sed, grep, awk, rsync, …) on the Windows desktop, in a single portable exe file which works out of the box. MobaXterm is free for personal use. You can download MobaXterm from here.

DameWare SSH


I think that DameWare SSH is the best free ssh client.


This free tool is a terminal emulator that lets you make multiple telnet and SSH connections from one easy-to-use console.
  • Manage multiple sessions from one console with a tabbed interface
  • Save favorite sessions within the Windows file system
  • Access multiple sets of saved credentials for easy log-in to different devices
  • Connect to computers and devices using telnet, SSH1, and SSH2 protocols
You can download DameWare SSH from this link.

SmarTTY


SmarTTY is a free multi-tabbed SSH client that supports copying files and directories with SCP on-the-fly.


Most SSH servers support up to 10 sub-sessions per connection. SmarTTY makes the best of it: no annoying multiple windows, no need to relogin, just open a new tab and go!

Cygwin


Cygwin is a large collection of GNU and Open Source tools which provide functionality similar to a Linux distribution on Windows.


Cygwin consists of a Unix system call emulation library, cygwin1.dll, together with a vast set of GNU and other free software applications organized into a large number of optional packages. Among these packages are high-quality compilers and other software development tools, an X11 server, a complete X11 development toolkit, GNU emacs, TeX and LaTeX, OpenSSH (client and server), and much more, including everything needed to compile and use PhysioToolkit software under MS-Windows.

Saturday, May 30, 2015

How to Setup VNC Server (Linux Remote Desktop Access) on CentOS/RHEL and Fedora

VNC (Virtual Network Computing) servers enable remote desktop access to Linux systems, similar to MSTSC in Windows. Generally Linux administrators don’t prefer this kind of access, but sometimes we need remote desktop access to a Linux machine. In that case we need to install a VNC server on the Linux system. This tutorial will help you set up a VNC server and configure remote access for users on CentOS, RHEL and Fedora.

Step 1: Install Required Packages


Most Linux servers don’t have a desktop installed. Make sure yours does; if not, use the following command to install it.
For CentOS/RHEL 6:
# yum groupinstall "Desktop"

For CentOS/RHEL 5:
# yum groupinstall "GNOME Desktop Environment"
Now install a few required packages for vnc-server
# yum install pixman pixman-devel libXfont

Step 2: Install VNC Server


After installing the required packages, let’s install vnc-server on your system. vnc-server is available in the default yum repositories.
# yum install vnc-server

On CentOS/RHEL 6, you will see that the tigervnc-server package gets installed.

Step 3: Create User for VNC


Let’s create a few users for connecting through VNC. You can also use existing system users to connect through VNC; in that case we only need to set a vncpasswd for those accounts.
# useradd user1
# passwd user1

# useradd user2
# passwd user2
Now set the VNC password for every account that needs to connect through VNC.
# su - user1
$ vncpasswd
$ exit

# su - user2
$ vncpasswd
$ exit

Step 4: Configure VNC Server for Users


Now edit the /etc/sysconfig/vncservers configuration file and add the following to the end of the file.
VNCSERVERS="1:user1 2:user2"
VNCSERVERARGS[1]="-geometry 800x600"
VNCSERVERARGS[2]="-geometry 1024x768"

Here VNCSERVERS is the list of users that need to connect, and VNCSERVERARGS defines the screen size for each: user1 gets an 800×600 screen, and user2 gets a 1024×768 screen on the client.

Now start the vncserver service using the following command and check the output
# service vncserver start

Starting VNC server: 1:user1 xauth:  creating new authority file /home/user1/.Xauthority

New 'svr1.tecadmin.net:1 (user1)' desktop is svr1.tecadmin.net:1

Creating default startup script /home/user1/.vnc/xstartup
Starting applications specified in /home/user1/.vnc/xstartup
Log file is /home/user1/.vnc/svr1.tecadmin.net:1.log

2:user2 xauth:  creating new authority file /home/user2/.Xauthority

New 'svr1.tecadmin.net:2 (user2)' desktop is svr1.tecadmin.net:2

Creating default startup script /home/user2/.vnc/xstartup
Starting applications specified in /home/user2/.vnc/xstartup
Log file is /home/user2/.vnc/svr1.tecadmin.net:2.log
                                                        
As per above output, you can see that user1 desktop is available on :1 and user2 desktop is available on :2. We will use :1 to connect to user1 and :2 to connect to user2.
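
Optionally, on these SysV-init based releases you can enable the service at boot so the configured sessions come up automatically (a sketch):
# chkconfig vncserver on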

Step 5: Connect VNC Server using VNC Viewer


To access the desktop from a remote Linux system, use the following command.
# vncviewer 192.168.1.11:1
To access the remote desktop of the vnc-server from a Windows system, you must have a VNC viewer installed. There are various VNC viewers available; download and install any one of them, for example:

TightVNC: http://www.tightvnc.com/download.php
RealVNC: https://www.realvnc.com/download/vnc/
TigerVNC: http://sourceforge.net/projects/tigervnc/files/tigervnc/

After installing the VNC viewer, connect to your system. In the example below we connect to user1 (:1).


Now enter the VNC password assigned to the account with the vncpasswd command.


You are now connected to the X Window System of your Linux machine.



Monday, May 18, 2015

Bash Tips: If -e Wildcard File Check => [: Too Many Arguments

Here is a quick bash tip that might be useful if you need a check inside a bash script to see whether a wildcard expression of files/folders matches anything. For example:

if [ -e /tmp/*.cache ]
then
echo "Cache files exist: do something with them"
else
echo "No cache files..."
fi
This uses -e (the existing-file check), which works fine on individual files. The above code might also seem to work fine if the expression expands to one file; but if the expression returns more files, it will fail with the following error:

line x: [: too many arguments
This is unfortunately normal, as ‘-e’ can take only one parameter, so it will not work with multiple results. This means we have to find a workaround for this issue… There are probably many solutions for this problem; my idea is to run ls, count the results and then feed that to if. Something like:

files=$(ls /tmp/*.cache)
if [ $files ]
then
echo "Cache files exist: do something with them"
else
echo "No cache files..."
fi
Now this is obviously wrong as if there are no files in the directory we will get a result like this:

ls: cannot access /tmp/*.cache: No such file or directory
and we don’t want to count that. We need to ignore such errors when files are not there, and we will use:

files=$(ls /tmp/*.cache 2> /dev/null)
this way ls errors printed to STDERR now go to /dev/null and we don’t have to worry about them anymore.
This is still not good enough: if we feed if with more than one value, it will still fail (when we have more files):

line x: [: /tmp/1.cache: unary operator expected
Obviously the proper way to do this is to count the number of files, and for this we just add “wc -l” to count the result of ls. It should look like this:

files=$(ls /tmp/*.cache 2> /dev/null | wc -l)
if [ "$files" != "0" ]
then
echo "Cache files exist: do something with them"
else
echo "No cache files..."
fi
and if needed we can even use the $files variable, which now holds the number of files. I hope you will find this useful if you need to do something similar; obviously the file check was just an example, and you would use this based on your needs, inside a script that does something useful ;-) .