So I am answering my own question for the benefit of everyone who has to deal with these types of fake RAID controllers.
Here is what I found:
Interestingly, md126 is not the main RAID device here; it is md127 (the IMSM container), so all I did was re-add the new drive to md127 with:
mdadm --manage /dev/md127 --force --add /dev/sdb
and the RAID started to rebuild itself.
Now the output of cat /proc/mdstat is:
root@himalaya:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sda[1] sdb[0]
1953511424 blocks super external:/md127/0 [2/2] [UU]
md127 : inactive sdb[1](S) sda[0](S)
6320 blocks super external:imsm
unused devices: <none>
These changes were reflected in the BIOS screen as well.
The Intel RST RAID volume's status was Normal.
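If it is not obvious on your machine which md device is the IMSM container and which is the actual volume (the md126/md127 numbering can be the other way around elsewhere), running mdadm --detail on each should make it clear:
mdadm --detail /dev/md126
mdadm --detail /dev/md127
On my system md127 reported the imsm container metadata and md126 was the RAID 1 volume built inside it, which matches the /proc/mdstat output above.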
Below is the list of commands I used to restore this RAID 1 array successfully.
To check the RAID status:
cat /proc/mdstat
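If you want more detail than /proc/mdstat gives (array state, failed members, rebuild progress), mdadm --detail on the volume is useful as well:
mdadm --detail /dev/md126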
Removing the failed disk:
First we mark the disk as failed and then remove it from the array:
mdadm --manage /dev/md126 --fail /dev/sdb
mdadm --manage /dev/md126 --remove /dev/sdb
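Before shutting down, it is worth noting which physical disk /dev/sdb actually is, so you pull the right one from the case. Matching the serial number works; smartctl comes from the smartmontools package, so treat this as an optional extra:
ls -l /dev/disk/by-id/ | grep sdb
smartctl -i /dev/sdb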
Then power down the system and swap in the new drive:
shutdown -h now
Adding the new hard drive: first you must create exactly the same partitioning as on /dev/sda:
sfdisk -d /dev/sda | sfdisk /dev/sdb
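Note: this sfdisk dump/restore works for MBR disks (and for GPT with reasonably recent versions of sfdisk). If your drives use GPT and sfdisk refuses, sgdisk from the gdisk package is a common alternative; a sketch, assuming /dev/sda is the healthy source and /dev/sdb the new disk:
sgdisk -R /dev/sdb /dev/sda   # replicate sda's partition table onto sdb
sgdisk -G /dev/sdb            # randomize GUIDs on sdb so they don't clash with sda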
To check that both hard drives have the same partitioning:
fdisk -l
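If you prefer a direct comparison to eyeballing the fdisk -l output, diffing the two sfdisk dumps works too (this uses bash process substitution; the sed just renames sda to sdb so only real differences show up, and on GPT the label-id/UUID lines are still expected to differ):
diff <(sfdisk -d /dev/sda | sed 's/sda/sdb/g') <(sfdisk -d /dev/sdb)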
Next we add this drive to the RAID array (use md126 or md127, whichever is your main RAID device); below is the command I used:
mdadm --manage /dev/md127 --force --add /dev/sdb
That's it. You can now see that the RAID has started to rebuild.
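To keep an eye on the rebuild progress, /proc/mdstat shows a recovery percentage and an estimated finish time while it runs, e.g.:
watch -n 5 cat /proc/mdstat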