Linux Boot Process

In the shortest possible form, the Linux boot sequence looks something like this:

1. The computer gets powered on, and the BIOS runs whatever it finds in the Master Boot Record (MBR), usually lilo or GRUB.
2. lilo (or GRUB), in turn, starts up the Linux kernel.
3. The Linux kernel starts up the primal process, init. Since init is always started first, it always has a PID of 1.
4. init then runs your boot scripts, also known as "rc files". These are similar in concept to DOS's autoexec.bat and config.sys, if those had been developed to a fine art. These rc files, which are generally shell scripts, spawn all the processes that make up a running Unix system.


One interesting consequence of this multi-step booting process is that it's very flexible. Once lilo has been loaded, and control over the boot process has been passed to it, it can run any sort of arbitrary (but self-contained) program that you'd care to run. This means that you can use lilo to boot into multiple operating systems.

Once lilo has started to run whatever operating system you've chosen (Linux would be a good choice ;-), it is no longer running. The lilo config file is usually located at /etc/lilo.conf, and GRUB's config file at /boot/grub/grub.conf. Remember to run the lilo command after editing /etc/lilo.conf so that your changes are written to the Master Boot Record on disk; GRUB reads its config file at boot time, so no such step is needed for it.
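For reference, a minimal /etc/lilo.conf might look something like the sketch below; the disk device, kernel path, and labels are examples only and will differ on your system:

# global section: where to install lilo and how long to wait at the prompt
boot=/dev/hda
prompt
timeout=50
default=linux

# one image section per bootable kernel
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    read-only

After saving a change like this, run /sbin/lilo to write the new boot map to the MBR.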

The next step in the Linux boot process is for the Linux kernel to run. This does all sorts of things, but the most important thing for our purposes is that the kernel spawns a copy of init, the First Process. Being the First Process, init is assigned a PID of 1.

init is in charge of starting all of the normal processes that a running Linux system needs, including the mingetty processes that give you your virtual consoles (ALT-F1 through ALT-F6 on most default Linux installations), starting up any needed services (like networking), and anything else that you might want to do while booting. These are controlled by shell scripts known as "rc files", which are started by commands in the init config file. That config file is usually located at /etc/inittab, and if you edit it, you'll have to tell init to re-read it by running telinit q as root. You can find a lot more information on this in the man pages for inittab and init (man inittab and man init).
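For reference, the relevant entries in a typical Red Hat style /etc/inittab look roughly like this (the exact getty program and run levels vary from distribution to distribution):

# default run level to boot into
id:3:initdefault:
# run once at boot, before anything else
si::sysinit:/etc/rc.d/rc.sysinit
# run the scripts for run level 3
l3:3:wait:/etc/rc.d/rc 3
# respawn a login getty on each virtual console (ALT-F1, ALT-F2, ...)
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2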

There are two styles of rc files in use: System V style and BSD style. Most Linux distributions use the System V style, so that is what will be covered here. In general, System V rc files are considered more powerful than BSD ones, at the expense of simplicity. BSD rc files are generally stored in the /etc directory, while System V expects to find its files in the /etc/rc.d directory (and any subdirectories from there).

The idea behind System V rc files is that you may want to boot in different ways. For example, you may want to boot into single-user mode to fix a hard drive problem, or you may want regular multi-user console mode, or perhaps you want to boot straight to the X window system. All of these different types of boot sequences can be automated into "run levels"; Red Hat's convention is:

* Run Level 1 (directory: /etc/rc.d/rc1.d) - Single user mode
* Run Level 2 (directory: /etc/rc.d/rc2.d) - Same as 3, but without NFS
* Run Level 3 (directory: /etc/rc.d/rc3.d) - Regular multi-user and networking mode
* Run Level 4 (directory: /etc/rc.d/rc4.d) - Unused
* Run Level 5 (directory: /etc/rc.d/rc5.d) - Same as 3, but boots straight to the X window system



Of course, this convention is arbitrary and other distributions can (and do!) vary. The most common variations are for run level 2 to be the default mode (but have it operate the same as run level 3 in the chart above), and for X to be run level 4 rather than 5. There is also a special run level 6 that isn't really a run level; it's a shortcut for rebooting.

Note: To change the run level that you're operating under, use the telinit command. For example, for a quick reboot, you can use telinit 6. Naturally, you have to be root to do this.

Note: The default runlevel is decided in /etc/inittab. The line that controls it looks like this: id:3:initdefault:
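To see which run level you are currently in, the runlevel command prints the previous and current run level (an N in the first column means there was no previous one):

runlevel
who -r        # an alternative that also shows when the run level was entered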

Inside the /etc/rc.d/init.d directory are all of the shell scripts that do the actual work. The rc#.d directories simply contain symbolic links to the scripts in the init.d directory.
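You can see this for yourself by listing one of the run level directories; every entry is just a symlink back into init.d (the script names below are only illustrative, since the exact set depends on what you have installed):

ls -l /etc/rc.d/rc3.d
# typical entries look like:
#   S10network -> ../init.d/network
#   S55sshd    -> ../init.d/sshd
#   K20nfs     -> ../init.d/nfs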

In the /etc/rc.d directory there is also a file called rc.local, which is the rc file that init will run after everything else is done. You can add simple things to the end of it if you don't want to go through the process of setting up a full script in the /etc/rc.d/init.d directory, but it's not a good idea if you want an easy to understand and consistent boot process.
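For example, if all you wanted was a record of when each boot finished, you could append a line like this to /etc/rc.d/rc.local (the log file name is just an illustration):

# appended to the end of /etc/rc.d/rc.local
echo "Boot finished at $(date)" >> /var/log/boot-times.log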

Now that you know a bit about how the System V rc files work, it's easier to understand how the boot sequence operates:

* Once the Linux kernel has been loaded by lilo, it looks in "all the usual places" for init and runs the first copy it finds
* In turn, init runs the shell script found at /etc/rc.d/rc.sysinit
* Next, rc.sysinit does a bunch of necessary things to make the System V rc files possible
* init then runs all the scripts for the default run level
    o It knows the default run level by examining /etc/inittab
    o Symbolic links to the real scripts (in /etc/rc.d/init.d) are kept in each of the run level directories (/etc/rc.d/rc1.d through rc6.d)
* Lastly, init runs whatever it finds in /etc/rc.d/rc.local (regardless of run level); rc.local is rather special in that it is executed every time that you change run levels


Note: There's some trickery in how the symbolic links in the rc#.d directories work. Link names that start with an "S" are scripts that are "started", while links that begin with a "K" are scripts that are "killed", or stopped. The number that follows is simply a way to determine the order in which the scripts run: lower-numbered scripts run first.
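On Red Hat style systems you rarely create these links by hand; the chkconfig utility manages them for you. For example (using httpd purely as an illustration):

chkconfig --list httpd          # show which run levels currently start httpd
chkconfig --level 35 httpd on   # create S links for httpd in rc3.d and rc5.d
chkconfig httpd off             # turn them back into K links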

Note: There's also some trickery done in rc.sysinit with a file called rc.serial, which is used to set up your serial ports, but that file does not exist by default on most Linux distributions and is not often used.



Important Linux Port Numbers

21 => FTP
22 => SSH
23 => Telnet
25 => SMTP (mail transfer)
43 => WHOIS service
53 => DNS (name server)
80 => HTTP (web server)
110 => POP3 (for email)
111 => rpcbind
143 => IMAP (for email)
443 => HTTPS (secure web server)
953 => rndc (BIND remote control)
993 => IMAP over SSL
995 => POP3 over SSL
3128 => Squid proxy
3306 => MySQL server
3636 => Piranha
4643 => Virtuozzo Power Panel
10000 => Webmin
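To see which of these ports are actually listening on your own machine, netstat is a quick check, and /etc/services maps port numbers back to service names:

netstat -tlnp               # -t TCP, -l listening, -n numeric ports, -p owning process (run as root)
grep -w 80 /etc/services    # look up what a given port number is registered for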







How to setup Nagios basic configuration from scratch

This setup is intended to provide you with simple instructions on how to install Nagios on Fedora, Red Hat, and CentOS and have it monitoring your local machine. Nagios is very easy to set up and use for monitoring.

The following packages are required to install Nagios.

Apache --> yum -y install httpd*
PHP --> yum -y install php*
GCC --> yum -y install gcc gcc-c++ glibc
GD --> yum -y install gd gd-devel

* The -y flag answers yes automatically, so yum will not prompt you for a yes/no input.

Create a new nagios user account and give it a password.
useradd nagios
passwd nagios

Create a new nag group for allowing external commands to be submitted through the web interface. Add both the nagios user and the apache user to that group.

groupadd nag
usermod -a -G nag nagios
usermod -a -G nag apache
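
You can confirm that both accounts picked up the new group with the id command:

id nagios    # the group list should now include nag
id apache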

Download the Nagios tarball along with the plugins:

wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.0.tar.gz

wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.11.tar.gz

Now go to the directory where you downloaded the files. In my case it is /root.

cd /root
tar xzf nagios-3.2.0.tar.gz
cd nagios-3.2.0
./configure --with-command-group=nag
make all
make install
make install-init
make install-config
make install-commandmode
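
While still in the nagios-3.2.0 directory, you can also install the sample Apache configuration for the web interface; this assumes Apache was installed from the distribution packages as shown above:

make install-webconf        # installs the sample Apache config (typically under /etc/httpd/conf.d)
service httpd restart       # reload Apache so it picks up the new config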

cd /root
tar xzf nagios-plugins-1.4.11.tar.gz
cd nagios-plugins-1.4.11
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install

Now start Nagios and add it to the list of system services.

service nagios start
chkconfig --add nagios
chkconfig nagios on

We are done with the basic installation of Nagios and its plugins. Now I will show you how to monitor a group of hosts. I can't explain every configuration file here, but I will shed some light on the ones we need. On successful completion of the installation you will find localhost.cfg at the following path.

/usr/local/nagios/etc/objects/localhost.cfg

In short, localhost.cfg is used to define the hosts and services that we want to monitor. (The "host system" here refers to the machine we want to monitor.)

Open this file and add the following next to the existing host definition lines:

define host{
        use                     linux-server
        host_name               www.testing-server.com
        address                 192.168.36.1
        }


use - inherits the defaults from the linux-server template defined in templates.cfg
host_name - the name for this host; it can be anything you like
address - the IP address of the host
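
The same file can also hold service definitions for that host. As an illustration, a minimal PING check might look like the sketch below; it relies on the generic-service template and the check_ping command that ship with the stock Nagios sample configs:

define service{
        use                     generic-service
        host_name               www.testing-server.com
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
        }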

Now save the file and restart Nagios.
service nagios restart

NOTE: Make sure you don't add anything else to this file for now, because if you are new to Nagios, debugging would be difficult.
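
If something does go wrong, you can ask Nagios to verify its configuration, which usually points straight at the offending definition (these paths are the defaults for a source install under /usr/local/nagios):

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg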

Now you can check the output in the browser.

http://localhost/nagios (Create the password using the htpasswd command)
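
The sample Apache configuration protects this URL with basic authentication. A typical way to create the password file is shown below; nagiosadmin is just the conventional account name, and the path assumes the default source install layout:

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin    # -c creates the file; omit it when adding more users
service httpd restart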


You should see the host that you defined in the localhost.cfg file. You can add any number of hosts, and by default Nagios will check whether each host is alive using the check_host_alive command. If a host is down it will be displayed in red, and you can click on it to see the details. This is the most basic configuration of Nagios; you can extend it to suit your requirements. There are many more features that set Nagios apart from the crowd; go through the official documentation for further information.
 

Accessing files of Linux system from Windows

Steps to access Linux files from Windows:
  1. Install Samba on Linux and share the directory you want to reach (a minimal share definition is sketched below)
  2. Prepare a VBScript that mounts a network drive for the folder shared through Samba
  3. Run the VBScript on startup by creating a shortcut in the Start menu
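
For step 1, the Samba side boils down to a share definition in /etc/samba/smb.conf. The sketch below is only an illustration; the share name, path, and user are assumptions, not taken from any referenced article:

# /etc/samba/smb.conf -- illustrative share definition
[linuxfiles]
    # directory on the Linux box to expose
    path = /home/shared
    # Samba account allowed to connect
    valid users = youruser
    # allow writes from Windows and show the share when browsing
    read only = no
    browseable = yes

After saving the file, give the user a Samba password and restart the service so the share becomes visible from Windows:

smbpasswd -a youruser
service smb restart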