Saturday, May 25, 2013

Building Phinix - My Linux From Scratch

This blog documents my personal experience building a Linux system from scratch, both for my own future reference and as a reference for interested persons. I used the guide from http://www.linuxfromscratch.org. There are various levels, all building on the base LFS system, such as Beyond Linux From Scratch and Hardened Linux From Scratch.


Linux From Scratch v7.3


I Preliminaries


The prelims are quite uninteresting. Mostly introductory material on the project, required knowledge, and other concerns like creating a partition for the LFS build, the packages involved and the reasons they were included -like I said, introductory material.
Additionally, I have alternated between building 7.0 and 7.3 so many times I've lost count, a powerful motivating factor in the writing of this blog, so the prelims are really not interesting.

The Concept


Building a Linux system revolves around building a set of temporary tools independent of our build machine. This is achieved by;
  • Building a cross-compiled toolchain (binutils, gcc, [linux-headers], glibc)
  • Building a temporary Linux system (gcc and all) using X-toolchain above
  • Building a complete (minimal) final system using the temporary system.
binutils is the BINary UTILities package, a collection of programs that contains development tools like the assembler and the linker. gcc is the GNU Compiler Collection package, which contains the compilers for different programming languages (basically). The linux-headers are the collection of user-visible kernel APIs that the C standard library, glibc, depends on to provide its various services; glibc also contains the dynamic linker.
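The steps above all revolve around two environment variables. A sketch of how they are set up, based on chapter 4 of the LFS 7.3 book -the /mnt/lfs mount point is the book's convention, adjust it to wherever your LFS partition lives;

```shell
# $LFS is the mount point of the new system's partition; $LFS_TGT is the
# custom target triplet that keeps the cross-toolchain distinct from the
# host's tools (values per chapter 4 of the LFS book).
export LFS=/mnt/lfs
export LFS_TGT=$(uname -m)-lfs-linux-gnu
echo "$LFS_TGT"    # e.g. x86_64-lfs-linux-gnu on a 64-bit Intel machine
```

Every cross-tool built in PASS 1 ends up prefixed with this triplet, e.g. $LFS_TGT-gcc, which is how we avoid accidentally calling the host's gcc.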

See Chapter 5.2 of the LFS book for details. Please note that /tools is a symlink to /mnt/lfs/tools.


II Building The Temporary System


Note: The packages we discuss here are those that did not go according to the book for me, or that I did not completely understand the first time. Also, the temporary toolchain will be installed to a separate location, /tools or /mnt/lfs/tools -refer to chapters 4 and 5 for more details on the reason for this.

Binutils PASS 1

../binutils-2.23.1/configure     \
    --prefix=/tools            \
    --with-sysroot=$LFS        \
    --with-lib-path=/tools/lib \
    --target=$LFS_TGT          \
    --disable-nls              \
    --disable-werror

The configuration option --target=$LFS_TGT tells the build system we want to cross-compile binutils to build binaries for the $LFS_TGT architecture. --with-lib-path=/tools/lib specifies the library search path for ld, the linker, further isolating this binutils build from the host system.

Gcc PASS 1


for file in \
 $(find gcc/config -name linux64.h -o -name linux.h -o -name sysv4.h)
do
  cp -uv $file{,.orig}
  sed -e 's@/lib\(64\)\?\(32\)\?/ld@/tools&@g' \
      -e 's@/usr@/tools@g' $file.orig > $file
  echo '
#undef STANDARD_STARTFILE_PREFIX_1
#undef STANDARD_STARTFILE_PREFIX_2
#define STANDARD_STARTFILE_PREFIX_1 "/tools/lib/"
#define STANDARD_STARTFILE_PREFIX_2 ""' >> $file
  touch $file.orig
done

The above commands modify the gcc source files so that the binaries gcc builds will use the dynamic linker in /tools, and so that gcc looks for its start files under /tools/lib.
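To see what the first sed expression actually does, here it is applied to a sample dynamic linker path of the kind found in linux64.h -the sample path is just an illustration;

```shell
# The pattern matches /lib/ld, /lib64/ld or /lib32/ld, and the & in the
# replacement stands for the whole matched text, so the effect is simply
# to prefix the matched part with /tools.
echo '/lib64/ld-linux-x86-64.so.2' | sed 's@/lib\(64\)\?\(32\)\?/ld@/tools&@g'
# prints /tools/lib64/ld-linux-x86-64.so.2
```

The @ characters are just an alternative delimiter for sed's s command, chosen because the pattern itself is full of slashes.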

../gcc-4.7.2/configure         \
    --target=$LFS_TGT          \
    --prefix=/tools            \
    --with-sysroot=$LFS        \
    --with-newlib              \
    --without-headers          \
    --with-local-prefix=/tools \
    --with-native-system-header-dir=/tools/include \
        ...

Glibc PASS 1


../glibc-2.17/configure                             \
      --prefix=/tools                                 \
      --host=$LFS_TGT                                 \
      --build=$(../glibc-2.17/scripts/config.guess) \
     ...

Continuing with our breakaway plans, we cross-compile glibc. --host indicates that the binaries that are going to use our C library are going to run on the $LFS_TGT architecture, and --build says we are building glibc using the build tools on our current host. Critical in this step is that programs built with the gcc above should use /tools/lib/ld-linux.so.2 -the dynamic linker from our just-compiled glibc, not /lib/ld-linux.so.2 from the host system.


At the end of the above set of installations, we have a cross-compiled toolchain, which we will now use to build a complete native toolchain for the $LFS_TGT machine. That is, we will build binutils, gcc, and glibc to run natively (here natively means independently of the host) on the $LFS_TGT architecture, along with testing packages such as expect, check and tcl, and also coreutils, findutils, file, etc. In sum, we will build a complete set of tools for building our final system, completely independent of our host system.

Please note that to do the native builds we use the toolchain installed above, for example $LFS_TGT-gcc and $LFS_TGT-ranlib.

Binutils PASS 2


CC=$LFS_TGT-gcc            \
AR=$LFS_TGT-ar             \
RANLIB=$LFS_TGT-ranlib     \
../binutils-2.23.1/configure \
    --prefix=/tools        \
    --disable-nls          \
    --with-lib-path=/tools/lib

Quoting the LFS book ...

CC=$LFS_TGT-gcc AR=$LFS_TGT-ar RANLIB=$LFS_TGT-ranlib
  Because this is really a native build of Binutils, setting these variables ensures that the build system uses the   
  cross-compiler and associated tools instead of the ones on the host system.

--with-lib-path=/tools/lib
  This tells the configure script to specify the library search path during the compilation of Binutils, resulting in 
   /tools/lib being passed to the linker. This prevents the linker from searching through library directories on    
   the host.

We also go to the trouble of building an ld properly configured to search for libraries in the /usr/lib:/lib path for later use during the build of our final system. 

make -C ld clean
make -C ld LIB_PATH=/usr/lib:/lib
cp -v ld/ld-new /tools/bin

Gcc PASS 2


Again we change the gcc source files to point to /tools/* for our dynamic linker, see gcc PASS 1.

../gcc-4.7.2/configure          \
    --prefix=/tools             \
    --with-local-prefix=/tools  \
    --with-native-system-header-dir=/tools/include \
    ...

Notice that there is no glibc PASS 2? This is because glibc was built to be linked against by binaries for $LFS_TGT using the --host=$LFS_TGT option to configure, and installed to /tools/*, so everything is already set for us to start building; there is no need to build glibc again. So we go ahead and build the additional packages for a more featureful temporary Linux system (toolchain) and install them under /tools.

Coreutils


As of version 8.17, coreutils no longer bundles su; su now comes with the shadow-4.1.5 package. This fact is mentioned because earlier versions of LFS recommend building su and installing it as /tools/bin/su-tools at this stage, and we chose to use the more_control_helpers package management option (http://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt) with phinix, which requires su to be installed. The shadow su does not work with our package system, so we download a custom su written by the inventor of the User Based package management system from http://wiki.linuxfromscratch.org/hints/browser/trunk/PREVIOUS_FORMAT/more_control_and_pkg_man.txt?rev=904, compile it, and copy it to /tools/bin/;

# gcc -o /tools/bin/su su.c

Note: At this point the book advises us to back up our /tools directory. We can do that by;

# tar -cjvf lfs-temporary-20130524.tar.bz2 /tools

The catch however is restoring from our backup. It requires that we recreate the directory structure we've been using so far;

# mkdir -pv /mnt/lfs/tools

and then following the steps in the Part III of the book, Building the LFS system.

III Building The LFS System


At this stage we have a working, albeit unbootable, clean build environment, all contained in /tools. Our next task is to chroot into $LFS, set up access to all devices, and start building our final system.

The package management style we chose is the User Based Package system devised by Matthias Benkmann, mostly for experimentation and for the sound arguments he makes for it.

Package Installation with User Based Package Management


After the preparation and before the installation of the first package in our LFS system, linux-libc-headers, we install the more_control_and_pkg package from http://www.linuxfromscratch.org/hints/downloads/files/ATTACHMENTS/more_control_and_pkg_man/. We just follow the installation instructions, no more.

Note: The version of su we got in the section on coreutils above will fail if called with '-'. So we modify the /usr/sbin/install_package script to call our installed /tools/bin/su like this;

# Absolute path forces the call to fail in case we forget to change it back after chapter 6
/tools/bin/su $2
# su - $2

The environment is properly set up anyway, so we don't risk breaking our build. To check this, run;

# echo $PATH

The output should be something like this;

/usr/lib/pkgusr:/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin

Another thing we do is run the list_package program and save its output to a file. This acts as our package file listing, and it's fast compared to running list_package every time we want a listing of the files installed by a package.

list_package pkgname > /var/lib/pkgusr/pkgname.lst


Glibc


Glibc installation fails with;

/usr/lib/pkgusr/install -c -m 644 ../sysdeps/unix/sysv/linux/sys/vt.h /usr/include/sys/vt.h
/usr/lib/pkgusr/install -c -m 644 ../sysdeps/unix/sysv/linux/sys/quota.h /usr/include/sys/quota.h
/usr/lib/pkgusr/install -c -m 644 ../sysdeps/unix/sysv/linux/sys/fsuid.h /usr/include/sys/fsuid.h
/usr/lib/pkgusr/install -c -m 644 ../sysdeps/unix/sysv/linux/scsi/sg.h /usr/include/scsi/sg.h
/tools/bin/install: cannot create regular file '/usr/include/scsi/sg.h': Permission denied
make[2]: *** [/usr/include/scsi/sg.h] Error 1
make[2]: Leaving directory `/usr/src/glibc/glibc-2.17/misc'
make[1]: *** [misc/subdir_install] Error 2
make[1]: Leaving directory `/usr/src/glibc/glibc-2.17'
make: *** [install] Error 2

This is because the subdirectories under /usr/include belong to the linux-libc-headers user and are not accessible to the glibc user. To fix this, we change the group of these directories to install and make them group-writable with the following commands;


# find /usr/include -maxdepth 1 -type d -exec chgrp -v install {} \;
group of '/usr/include' retained as install
changed group of '/usr/include/linux' from linux-libc-headers to install
changed group of '/usr/include/asm-generic' from linux-libc-headers to install
changed group of '/usr/include/sys' from glibc to install
...

# find /usr/include -maxdepth 1 -type d -exec chmod -v ug=rwx,o=rxt {} \;
mode of '/usr/include' retained as 1775 (rwxrwxr-t)
mode of '/usr/include/linux' changed from 0755 (rwxr-xr-x) to 1775 (rwxrwxr-t)
mode of '/usr/include/asm-generic' changed from 0755 (rwxr-xr-x) to 1775 (rwxrwxr-t)
mode of '/usr/include/sys' changed from 0755 (rwxr-xr-x) to 1775 (rwxrwxr-t)
mode of '/usr/include/drm' changed from 0755 (rwxr-xr-x) to 1775 (rwxrwxr-t)
...

# chgrp install /var
# chmod ug=rwx,o=rxt /var

Now glibc will build and install correctly. Don't forget to run list_package for the glibc user.

Configuring glibc

We create the /etc/nsswitch.conf file as root, and install the timezone information as glibc. All files and directories created during the configuration of glibc, apart from the timezone information, are owned by root.

Adjusting the Toolchain

After our final installation of glibc, we adjust the toolchain to point to our new glibc installation. This involves
  • Using the /tools/bin/ld-new we built in PASS 2 of binutils as our new linker. Remember that /tools/bin/ld-new was built to look for library files in /usr/lib:/lib.
  • Modifying the toolchain's gcc specs file to find the correct headers and glibc start files (those installed in Chapter 6: /usr/include for headers, and /usr/lib for start files).
If all the tests we run in the Readjusting section are successful, then from now on all binaries we build will be linked properly against our new LFS system. So we can go ahead and build the remainder of our packages.

Note however that some packages install libraries and need to update /etc/ld.so.cache. These packages must be granted access, so following the tip in the User Based management document, we compile /usr/lib/pkgusr/ldconfig.c and assign the following permissions;

cd /usr/lib/pkgusr
gcc -O2 -W -Wall -o ldconfig ldconfig.c
chown root:install ldconfig
chmod u=rwxs,g=rxs,o= ldconfig

Despite the above precaution, we notice that the zlib package changes the owner and group of /etc/ld.so.cache, so we change it back to root:install manually; a second build attempt then fails to modify the ownership of /etc/ld.so.cache, which is what we want.

As for the ownership of files created or manually edited by us, we assign those files to the root user, as recommended by the author of the User Based Package management system. This way the aim of preventing package installers from overwriting our manual configurations can be achieved.

Installing Packages Using the Build Script that Comes With more_control_and_package_helpers


The more_control_and_package_helpers archive comes with a number of scripts written to simplify builds, perform proper error logging, and handle package management actions like listing the files installed by packages and flagging suspicious files, such as those with setuid bits.

We make some changes to these scripts to suit our build setup, and this section discusses the rationale behind these changes, and where to get a copy of our changes.

Package Management Scripts

The only script we modify in this category is /usr/sbin/install_package. The modification is listed below;

if [ $UID -ne 0 ]; then echo Please run this script as root. ; exit 1; fi
add_package_user "${1}" $2 10000 20000 $3 10000 20000 || exit 1
# Absolute path forces the call to fail in case I forget to change it back after chap 6
/tools/bin/su $2
# su - $2

This modification was made because, prior to the installation of glibc, the su program that comes with the shadow package does not work, due to its dependency on glibc. But we needed a su utility to proceed with the installation methods of the User Based Package management approach, so we download a custom copy from http://wiki.linuxfromscratch.org/hints/browser/trunk/PREVIOUS_FORMAT/more_control_and_pkg_man.txt?rev=904, compile it, and install this temporary su to /tools/bin.

Then we modify /usr/sbin/install_package to use this /tools/bin/su to switch users. You may have noticed that /tools/bin/su is called without '-'; this is because this su is built that way, and there is no problem of the package user environment not being set up properly. This fact can be verified by running echo $PATH after the switch, and noting that the path begins with /usr/lib/pkgusr and ends with /tools/bin, as expected.
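Since the whole point of the package-user environment is that the wrappers in /usr/lib/pkgusr shadow the real tools, here is a small sketch of why that PATH ordering matters -the PATH value is the one quoted above, and the check itself is just an illustration;

```shell
# First match wins in PATH lookup, so /usr/lib/pkgusr must come first
# for the install/chgrp wrappers there to shadow the real binaries.
PKGPATH='/usr/lib/pkgusr:/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin'
case "$PKGPATH" in
    /usr/lib/pkgusr:*) echo 'wrappers take precedence' ;;
    *)                 echo 'WARNING: wrappers bypassed' ;;
esac
# prints: wrappers take precedence
```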

We must however remember to revert /usr/sbin/install_package to its original behaviour after installing and configuring shadow, by deleting the line /tools/bin/su $2 and uncommenting the line su - $2.

Package Building Scripts


Monday, November 12, 2012

Walking from Circle to Kasoa


#circletokasoawalk

So I decided to walk from Circle to Kasoa this past Saturday. I must admit I just up and left, no special clothing, no nothing -don't prepare to do what you can already do, Bruce Pandolfini. I walked the approximately 22km -what 22km? didn't feel like that, in under 6 hours, following the trotro routes, no shortcuts mind you, at an almost constant rate of about 60 steps per minute, strapping my Toshiba Satellite A25 -a heavy mo'fo, ask anyone, and tweeting most of the way #circletokasoawalk

Check out my tweets and you'll see a sharp decline in humor at a certain point, yeah that was around the time I was dying ;)

Lessons learnt? I am a hardy guy, not as hardy as I should be, but working on it. For those of you thinking it was a loony idea, I concede it was, but hey! a brother gotta do some loony thing once in a while, don't they?

Oh, on lessons learnt, don't walk with boxers on, man! I spent the whole weekend walking with my legs apart because of the chafing of my inner thighs -where is antiP when you need it? More lessons: dude, don't do it, seriously don't do it, no seriously, pick a Kasoa trotro, it will save you a lot of hassle. For only GHS 1, the savings are tremendous. No raw thighs, no thirst, just dissing the driver and doing the distance in under 1 hour on a good day; what's not to like about that?

How did I do it? It was not easy. In the beginning, from Circle to Abossey Okai, I was scared I couldn't make it, oh the humiliation, I had posted it on facebook and stuff. Then I got to Kaneshie and the numerous years of cheap-ass unnecessary walking started to pay off; I forgot the possibility of failure and started enjoying the walk. By Mallam I was feeling confident I could pull it off.

With the fact that Mallam was just the beginning in my mind -apparently I deceived myself, Old Barrier was the real beginning, I thought to myself, ENDURE! By Tetegu I was dying; by Old Barrier my body had taken over and set the pace, my mind was off.

Oh by the time I got home, guess what surprise awaited me? LIGHT OFF!!!! psych!!!



Friday, March 25, 2011

Making your locally hosted Web Application Look Live

INTRODUCTION
------------------
I didn't really know what to call this article, but the title notwithstanding,
this article takes you through how to get rid -not exactly, of that damned
http://localhost thingie when you are running your webapp locally -using a local
httpd like apache, or a more complete package like lampp, wampp or xampp.
This article only discusses apache, but I bet there is some smartie out there who
will make it work for IIS or something.

Please note that I used xampp for this example, because I was trying to stop myself
from, como se dice, swala-ing. So no GNU/Linux (lampp).

The above issue having been brought to the reader's attention, I must add that
people who have been working with apache, lampp and GNU/Linux will find a way around this
article to make it work for them. However, if the gentlemen -ladies too, referred to in
this paragraph just can't make it work, please let me know so I can extend this article to
cover lampp.

PRE-REQUISITES
--------------------
A knowledge of how http daemons (web servers) work is assumed in this article.
Also assumed is a knowledge of name resolution, i.e. DNS -if you don't know what
this means, this article is not for you.
For those lacking the above pre-requisites, I am not in the mood to provide links,
so you just might as well google it.
>:) Happy Hunting!!! (:<


DISCUSSION
---------------
Okay, again a recap of the top story...

The article is targeted at final year students who are working on web applications
for their projects. During development the project is hosted locally using a httpd
like -most commonly apache.

The project is then accessed in the browser via http://localhost/theproject, for
viewing and whatever one does during web development.

The problem is that during the presentation, after the development of the project,
using the http://localhost/theproject address to access the webapp is not really
'ish'.

We would want to use an address like www.theproject.org -please note I am making an
assumption here that the reader, como se dice, knows what's up!

So how do we achieve this?

This process is two-fold,
    1. We need a way to resolve the address www.theproject.org to our ip address.
    2. We need a way to make our httpd know what page to serve depending on what
       site name we request.
      
WARNING: All changes made in this article are made as root, so please make backup
copies of the configuration files before you edit them, just in case.
   
1. RESOLVING www.theproject.org TO IP ADDRESS USING /etc/hosts
------------------------------------------------------------------------------------
Whenever you type an address like www.facebook.com in the browser, the browser maps
the human-rememberable form www.facebook.com to the form it can work with, an IP address. This is called name resolution -approximately.

Name resolution on personal computers is done in two steps. First the TCP/IP stack
tries to resolve the domain name|host by referring to the /etc/hosts file.

If a match is found in this file, the TCP/IP stack replaces the domain name|host with
the IP address, blah blah blah.

If the attempt at using /etc/hosts does not turn up a match, the TCP/IP stack then uses
DNS to do the name resolution, and if everything works out fine, DNS returns the IP
address for the domain name|host. DNS is not something I will talk about here.
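The hosts-file half of the two-step order above can be mimicked with a toy lookup over a hosts-style line -purely illustrative, this is not how the real resolver is implemented;

```shell
# A hosts-style line: address in column 1, name in column 2.
hosts_line='127.0.0.1 www.theproject.org'

# Return the address whose name column matches the requested name,
# first match wins, the same behaviour an /etc/hosts lookup gives you.
lookup() {
    printf '%s\n' "$hosts_line" | awk -v h="$1" '$2 == h { print $1; exit }'
}

lookup www.theproject.org    # prints 127.0.0.1
```

A name with no matching line produces nothing, which is exactly the case where the real stack falls through to DNS.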

In our situation, our site www.theproject.org is not a registered domain, it is just a
name we want to use, so we would like to avoid the DNS part by making an entry for our
site in the /etc/hosts file.

The /etc/hosts path is for those using GNU/Linux; for those using Windows, the hosts
file is at c:\windows\system32\drivers\etc\hosts.

Open a new line, and type the following, please use tabs, not spaces;

127.0.0.1	www.theproject.org
::1		www.theproject.org


The above lines cause any name resolution attempt for www.theproject.org to return
the address 127.0.0.1. You can replace the loopback address with another address if
you are going to access the project over a local network.

Usefulness: If you want to, you can actually eliminate the time the browser uses to do
DNS name resolution by creating entries for all the sites you frequent. Hypothetically
this should improve your browsing speed -initially; please note, I haven't actually
done the study, hence the word hypothetically.

2. MAKING APACHE SERVE THE RIGHT PAGE(S) FOR www.theproject.org
----------------------------------------------------------------------------------------
The Apache httpd, has a functionality known as virtual hosts http://httpd.apache.org/docs/2.2/vhosts/.
According to the apache docs:
     The term Virtual Host refers to the practice of running more
     than one web site (such as www.company1.com and
     www.company2.com) on a single machine.
     Virtual hosts can be "IP-based", meaning that you have a
     different IP address for every web site, or "name-based",
     meaning that you have multiple names running
     on each IP address. The fact that they are running on the
     same physical server is not apparent to the end user.
   
Interesting huh? We are interested in name-based virtual hosts. Why?
Because we don't want to mess with the httpd serving correctly via http://localhost;
we just want it to add www.theproject.org to the mix, hence our choice.

To do that, the NameVirtualHost directive must be added to the httpd.conf file, at the path
c:/xampp/apache/conf/httpd.conf for Windows; for GNU/Linux I have no idea, but if you
need it that badly you could run a find for it, with;
 find / -name httpd.conf -exec gedit {} \;

We need to add the following under the 'Main Server configuration' section, right below
the DocumentRoot settings;
    NameVirtualHost    *:80

    <VirtualHost *:80>
        ServerName    localhost
        DocumentRoot    "C:/xampp/htdocs"
    </VirtualHost>

    <VirtualHost *:80>
        ServerName    www.theproject.org
        DocumentRoot    "C:/xampp/htdocs/theProject"
    </VirtualHost>



The above assumes you are using xampp.
   
The NameVirtualHost *:80 directive says that we want to use name-based virtual hosts on
port 80 of all IP addresses on the server. A corresponding VirtualHost container must also
be declared for every NameVirtualHost.

The first VirtualHost block handles requests for the server name localhost, and serves up
pages from the c:/xampp/htdocs directory, its DocumentRoot. This block is important because
when you start using virtual hosts, the default configuration will not work, so you must
explicitly specify how requests for the default configuration, localhost in this case, are
to be handled.

The second VirtualHost block, the one we are interested in, says
that for every request sent to any of our IP addresses, where the
server name requested is 'www.theproject.org', serve pages from the DocumentRoot
"C:/xampp/htdocs/theProject". This DocumentRoot would be replaced by
/opt/lampp/htdocs/theProject assuming you installed lampp to /opt in GNU/Linux, or again,
/www/theProject if you installed apache directly.
For more explanation of how virtual hosts work, please refer to the apache docs at
http://httpd.apache.org/docs/2.2/vhosts/.

After you have done all this, restart apache, and now go ahead and access your project
via www.theproject.org.

After doing all the above, congratulations, you may still not be able to access your webapp.
Why? Because your browser may be configured to use a proxy, so it will try to resolve
www.theproject.org through the proxy anyway.

The solution to this is simple;
    1. disable the proxy, i.e. set to no proxy,
    2. add the site www.theproject.org to the proxy exception list in your browser,
    3. or both.
Now try again and voila, we are in business. If you did not get to voila at this point,
please go over it again, and if the failure persists, consult me, I might be able to help :)

Usefulness: I used this procedure to setup a bulletin board system on my department's local network, and made it possible for
members to access that local BBS with a name like www.css.org.

Wednesday, March 9, 2011

HOW GRUBSTERS TAME THE GRUB MMOTIA -An adventure in the Live disk world

BLURB:
Assumption: You have a faulty grub bootloader and you have in your hand a live disk.
The power is YOURS!!

1. sudo mount /dev/sda1 /mnt
2. sudo mount --bind /dev /mnt/dev
3. sudo mount --bind /proc /mnt/proc
4. sudo chroot /mnt
5. grub-install /dev/sda
6. update-grub
7. sudo reboot

After Reboot:
8. sudo update-grub

PROLOGUE:
The procedure for (re)installing the GRUB2 is not very different from
(re)installing GRUB Legacy (GRUB1). They are essentially the same, i.e;
    - install the primary bootloader into the Master Boot Record [MBR],
    - copy the image files for grub into /boot/grub,
    - update the grub configuration file.
The word grub will sometimes be used in this text to refer to grub2; they
are used interchangeably.

CHAPTER ALL-IN-1:
Being aware of the simple concept of grub (re)installation, the heroes of this story,
us, Grubsters, champions and saviors of the world of 'Linux installation', will proceed to
tame the grub mmotia -mmotia: a SCARY apparition in Ghanaian[Akan] folklore, what the
English refer to as a dwarf, scary huh!

The command to install the bootloader to the MBR;
    ubuntu@ubuntu:~$ sudo grub-install /dev/sda

However running this command in the live disk world, will not only OUCH!, burn our left
asses, but taunt us with the following error;
    /usr/sbin/grub-probe: error: cannot find a device for /boot/grub (is /dev mounted?).

DAMN, DAMN, DAMN!!!, no one told us that grub-install -the mmotia, uses another monster
program grub-probe.

grub-probe 'probes' device info for the /boot/grub path. In this case /boot/grub maps to
the live disk world's /boot/grub, not our Linux installation's, hence the error; grub-probe
cannot find the device.

Attempts to probe any other device, eg. /dev/sda1, will return useful information, try;
    sudo grub-probe /dev/sda1

So after applying salve to our burnt left asses, we come up with a clever way of fixing this
grub-probe monster, which is why grub-install will not work -ehm sorry, why we can't tame the
grub mmotia; we need to lure the two of them to our world, 'Linux installation', on the
hard disk plane.

Let us now welcome on stage the CHROOT fairy -neat right.

A hit-and-run-affair: root@ttousai:~# chroot DIR

The chroot command changes the root -duh- of a command. Commands
executed at the terminal have / as their root, so all commands are relative to /. However,
chroot changes the default / to the one specified by DIR, so now all commands will
be relative to DIR.

We are going to use chroot to make our Linux installation our root. This works because
our Linux installation also has the grub-install, and all relevant programs to install
grub on our hard disk. But first we have to mount our Linux partition;
    sudo mount /dev/sda1 /mnt

then chroot by;
    sudo chroot /mnt

Now we have access to all the commands and programs -weapons, if you like, we installed
on our Linux partition.

We can now go ahead and run grub-install, expecting it to work since our /boot path
now is the /boot of our Linux installation. Wise guy! [sarcastic]. We get our right butts
burnt this time round.

    ubuntu@ubuntu:~$ sudo grub-install /dev/sda
    /usr/sbin/grub-probe: error: cannot find a device for /boot/grub (is /dev mounted?).
    -- and some more trashy talk --

So we find ourselves asking, WTF! After considerable background knowledge, thinking and
googling, we find out that our chroot environment has no access to two important
filesystems needed by grub, /dev and /proc. Talk about a hard-to-tame mmotia!

Now to make matters worse or better, whichever, these directories are generated automagically
by the operating system; oh yeah, and our chroot environment is definitely not the OS.

But fear not, as we realise during our brain- and google-storming period, we can
actually leverage the /dev and /proc filesystems generated in the world of the live session,
so that grub-install will stop trash talking and just be tamed.

Question is, HOW?
    # NOTE: If you are following this explanation first exit the chroot environment by
    # typing exit.
    sudo mount --bind /dev /mnt/dev
    sudo mount --bind /proc /mnt/proc

The above commands just 'bind' -I don't know what else to say- our live session's /dev and
/proc to the /dev and /proc in our Linux installation, so any OS-generated devices will be
visible in our Linux installation.

Now we can chroot back to our Linux installation, and attempt to -not so confident huh,
Mr. Grubster?, defeat and silence the grub monster once and for all, >:)

So we once again engage in the battle, this time on our own world, Linux installation;
    ubuntu@ubuntu:~$ sudo chroot /mnt
    ubuntu@ubuntu:~$ grub-install /dev/sda

Finally, having tamed the grub mmotia -how do we know? Because it did not trash talk us to
death, or burn both our butts this time, that's how. We can now generate a file to make it
behave itself and do some cool tricks -who says you can't teach an old dog new tricks?
Actually, a list of all the available operating systems and other petty details;
    ubuntu@ubuntu:~$ update-grub   
    # to create the grub.cfg, a list of all the worlds
    # we have saved from the grub mmotia.

We can finally reboot and we will see a familiar grub boot menu. Tada!!!
And that is pretty much how you tame the grub mmotia. Cool huh!

EPILOGUE: Run update-grub again after booting into your Linux installation, to generate
a, for lack of a better word, more accurate grub.cfg.