Thursday 22 December 2011

Help! Some dick deleted the Author for the default sharepoint site!

Hi All,

While I was doing the below, I had a problem: someone had deleted the default author for a site, which stopped the stsadm tool from running.

Do this nice DB stuff to get it working. First:

Select author, siteid from webs where fullurl = '[yoursite]'

Remember the siteid!

Run this one:

Select * from userinfo where tp_siteid='[that really long code you just remembered]'

Now choose another user who is active and remember their tp_id.

Now run this one:

update webs
set author = [that tp_id you remembered for the user]
where fullurl = '[yoursite]'




Thanks for reading


Trev

Migrating from Sharepoint

Hi All,

Sharepoint is a whore to get away from. But here is some useful information on how to do it, quick and dirty.

First things first, do you know where your data is? Nope! It's in the database.

This is just for interest. Run this against your sharepoint DB:

Select d.dirname, d.leafname, d.setuppathuser, a.[content] from docs d
left outer join alldocversions a on a.id = d.id
where dirname = '[your directory name in the URL]'
and a.[content] is not null

What this will do is return all the documents. The important column here is a.[content]. This is your file converted into hex. If you don't believe me, copy and paste it into a hex converter on the web or something, paste the result into Notepad, save as the right file type, yada yada.
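If you'd rather not mess about with a web converter, decoding that hex is a one-liner in, say, Python. The hex string below is a made-up example (not real a.[content] data), and the output filename is hypothetical:

```python
# a.[content] is the file stored as hex; decoding the hex gives the
# original bytes back. This hex string is a made-up example.
hexdump = "48656c6c6f2c20776f726c6421"

data = bytes.fromhex(hexdump)
print(data)  # b'Hello, world!'

# save it with the right extension and it opens as the original document:
# open("rescued.doc", "wb").write(data)
```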

Anyway, get rid of the a.[content] entries in the sql then run it again.

Acceptable loss 1: the only docs with setuppathuser are ones with multiple versions. Single versions do not have this metadata; that is in another hex-encoded column called metainfo (select metainfo from docs).

Unless you want to faff with all that, you will lose the metadata. I couldn't personally be arsed and it was an acceptable loss as we are moving to a new file storage system.

Now you need to rescue your files. You can do this by running a tool here:

c:\program files\common files\microsoft shared\web server extensions\12\bin

This is a sample command line you will need to run:

stsadm.exe -o export -url http://[servername]/[sitename]/ -versions 1 -filename d:\sharepointexport -nofilecompression

Ok, quick overview.

-o export: does what it says
-url: the base url before the /forms/allitems.aspx
-versions 1: only the latest versions of files
-filename: destination directory. This mustn't exist before you run the command.
-nofilecompression: important for us. This tool exports .dat files, with compression on we will just get one big one.

OK, run that commandline and get your files. You should have all sorts of files like:

0000000A.dat
00000004.dat
....

These are your files. You should also get an xml file called manifest. You need this as it has the metadata like the real bloody filename!

At this point, I gave up on windows. I was 3 hours in and I managed to get some help off a friend of mine.

We copied the xml file to a linux box, and used perl to parse the xml file and generate a .BAT file at the end that would rename the files and move them into the right directory. Here is the perl code:

use strict;

print "\@echo off\n";
open(my $FL, '<', 'manifest.xml') or die "Can't open manifest.xml: $!";
foreach my $line (<$FL>) {
    chomp($line);
    if ($line =~ /^<File Url=/) {
        my $realfilename = $line;
        $realfilename =~ s/.*?Name="(.*?)".*/$1/;
        my $directory = $line;
        $directory =~ s/.*?Url="(.*?)".*/$1/;
        $directory =~ s/\/[^\/]*$//;    # strip the leaf name, keep the directory
        my $currfilename = $line;
        $currfilename =~ s/.*?FileValue="(.*?)".*/$1/;
        print "copy $currfilename \"$directory\\$realfilename\"\n";
    }
}
close($FL);

On your Linux box, run this script and redirect the output into a file: > rescuemyfiles.BAT

Right!
Copy this file to your SharePoint server, and drop it into the same directory as your rescued .dat files.
The .dat files are just your files: you can rename one from .dat to .jpg or whatever and it will work. All the .bat file does is grab the file info from manifest.xml and copy each file into the right directory for you.

So there you have it. How to rescue your files from SharePoint. Have fun!

thanks for reading,

Trev
p.s. I never said this was clean did I? :-)




 

Sunday 6 November 2011

Mythtv - Perl script to email the recordings list.

Hi All,

So, I don't want my new recordings list being available on t'interwebs, but I do have my own email server.

So I wrote some perl to parse the RSS feed and email me the results. I stuck this to a cron job and now I get a weekly email with the new recordings on.

It does need tweaking: I only want the newest recorded programs emailed to me, and currently this emails the lot. Anyway, here's the code:

#!/usr/bin/perl

use strict;

use LWP::Simple;    # to read html streams
use Mail::Sender;

my $emailmsg;
my $sender;

print "Getting Content\n";
my $content = get('http://localhost/mythweb/rss/tv/recorded') || die "Oh fuck it";
#print $content;

$content =~ s/[^[:ascii:]]+//g; # get rid of weird chars

print 'Splitting atoms...oh LOL :-)';
print "\n";

# split the feed into one chunk per <item>
my @lines = split(/<item>/, $content);

print "Parsing some stuff\n";
for my $line (@lines) {
    #print $line =~ /<title>(.*?)<\/title>/;
    $line =~ /<title>(.*?)<\/title>/;
    $emailmsg .= $1;

    $line =~ /<pubDate>(.*?)<\/pubDate>/;
    $emailmsg .= " - " . $1;
    $emailmsg .= "\n";

    #print $line =~ /<description><!\[CDATA\[(.*?)\]\]><\/description>/;
    $line =~ /<description><!\[CDATA\[(.*?)\]\]><\/description>/;
    $emailmsg .= $1;
    $emailmsg .= "\n\n\n";
    #print $emailmsg;
}

#(my $headlines) = ($content =~ /type="html">(.*)<\/title>/);
#print "$headlines\n";

$sender = new Mail::Sender {
    smtp => 'IP or server address',
    from => 'address@tosend.from',
    auth => 'NTLM',
    authid => 'username',
    authpwd => 'password',
    on_errors => undef,
} or die "Can't create the Mail::Sender object: $Mail::Sender::Error\n";

$sender->Open({
    to => 'who@tosendit.to',
    subject => 'Mythtv recordings update'
}) or die "Can't open the message: $sender->{'error_msg'}\n";

$sender->SendLineEnc("$emailmsg");

$sender->Close()
    or die "Failed to send the message: $sender->{'error_msg'}\n";



There you go. I've left my commented code in so you can see where I was going with it.
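On the tweak I mentioned (only emailing the newest recordings): one way is to filter each item on its pubDate before building the message. A rough sketch, in Python rather than Perl; the titles and dates below are invented examples:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def recent(items, now, days=7):
    """Keep (title, pubDate) pairs whose pubDate is within the last `days` days."""
    cutoff = now - timedelta(days=days)
    return [(title, pub) for title, pub in items
            if parsedate_to_datetime(pub) >= cutoff]

# invented sample items in the RSS pubDate format
items = [("Old film", "Mon, 03 Oct 2011 20:00:00 +0000"),
         ("New episode", "Sat, 05 Nov 2011 21:00:00 +0000")]
now = datetime(2011, 11, 6, tzinfo=timezone.utc)
print(recent(items, now))  # only "New episode" survives a 7-day cutoff
```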

Thanks for reading,

Wednesday 7 September 2011

Just upgraded your VMWare vmhost from 3.5 to 4.0 or 4.1? Read on

Hi all,

If you have recently upgraded your VM Host to ESX4.0, 4.1 or 5.0 you might want to run this code (after installing the vmware powershell extensions, this is a powershell script after all):

function reportchangetracking{

    $vm_name = Get-VM -location [cluster_name] | get-view

    $vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec

    foreach ($objitem in $vm_name) {

        #Write-host $objitem.name, $objitem.config.changetrackingenabled

        if($objitem.config.changetrackingenabled -ne $true){

            $vmConfigSpec.changeTrackingEnabled = $true

            $objitem.ReconfigVM($vmConfigSpec)

            #punch it Chewy (reloads config spec. better than having to shut the machine down)

            sleep 3

            Get-VM $objitem.name | New-Snapshot -Name "Temp"

            sleep 5

            Get-VM $objitem.name | Get-Snapshot | Where {$_.Name -eq "Temp"} | Remove-Snapshot -Confirm:$false
        }
    }
}

In ESX 3.5, VMware Tools never quiesced the base disk properly when taking snapshots. This meant that when Windows Server 2008 came along with VSS, the tools didn't use it, and 3.5 never added the setting the code above sets.

When you upgraded your host, it never added this setting at the VM level, so this task needs to be done manually. If you have a lot of machines, headaches ensue. After running the script, it might be an idea to update the tools installation on your server anyway.

Note that you do not have to run this if you created a machine on ESX 4.0 or greater. It's been done for you.

Use this script at your own risk. It's no fault of mine if you break something.

Thanks for reading,

Trev

Monitoring Citrix Xenapp licenses through Nagios

Hi all,

More scripts. Again in VBS. This time for monitoring citrix licensing.

All this script does is check how many licenses are being used as a percentage of the total, then returns normal, warning or critical depending on the percentage.



' change this line to suit the location of the lmstat utility
' ***
CommandLine = "C:\Program Files\Citrix\Licensing\LS\lmstat -c ""C:\Program Files\Citrix\Licensing\MyFiles\license_file.lic"" -f MPS_ADV_CCU"
' ***

Set objShell = CreateObject("WScript.Shell")
Set oExec = objShell.Exec(CommandLine)
countLicenses = 0

Do Until oExec.StdOut.AtEndOfStream
    mystring = oExec.StdOut.ReadLine
    If InStr(mystring, "Users of MPS_ADV_CCU:") Then

        issued_start = InStr(mystring, "(Total of ")
        issued_len = InStr(mystring, " licenses issued;") - (issued_start + 10)
        lic_total = Mid(mystring, issued_start + 10, issued_len)

        inuse_start = InStr(mystring, "; Total of ")
        inuse_len = InStr(mystring, " licenses in use)") - (inuse_start + 12)
        lic_inuse = Mid(mystring, inuse_start + 12, inuse_len)

        pc = Int(lic_inuse / lic_total * 100)
        'pc = 93

        If pc >= 90 Then
            WScript.Echo "Critical - " & pc & "% in use"
            WScript.Quit(2)
        End If

        If pc >= 80 Then
            WScript.Echo "Warning - " & pc & "% in use"
            WScript.Quit(1)
        Else
            WScript.Echo "OK - " & pc & "% in use"
            WScript.Quit(0)
        End If

        Exit Do
    End If
Loop

There you have it. Normal restrictions apply: if you run it and it breaks your stuff, it's your fault for running untested code. It works for me.
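For cross-checking the parsing logic, here's the same calculation sketched in Python. The lmstat line below is a made-up sample, and 0/1/2 are the standard Nagios exit codes:

```python
import re

# a made-up example of the lmstat line the script looks for
line = "Users of MPS_ADV_CCU:  (Total of 40 licenses issued;  Total of 34 licenses in use)"

m = re.search(r"Total of (\d+) licenses issued;\s+Total of (\d+) licenses in use", line)
issued, in_use = int(m.group(1)), int(m.group(2))
pc = in_use * 100 // issued

if pc >= 90:
    status, code = "Critical", 2   # Nagios critical
elif pc >= 80:
    status, code = "Warning", 1    # Nagios warning
else:
    status, code = "OK", 0
print("%s - %d%% in use" % (status, pc))  # Warning - 85% in use
```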

Import this script into nsclient++ and nagios in the normal way. I will document this at some point.

Thanks for reading.

Trev

VMWare VCB backups

Hi all,

When backing up many virtual machines, you may try one of the big backup solutions, such as Symantec, Commvault, Veeam, and the like. Unfortunately in this case they all rely on one thing.

THE VMWARE API

There is a problem with the API. VMWare are aware there is a problem with it (calls were logged late last year about the problem) but so far there has been no response at all.

The problem is that when a backup of a VM happens, it takes a snapshot of the machine.
The machine's base disk and other chuff is backed up, then the snapshot is released. What is happening (to many people) is that the snapshot is removed from the snapshot manager, but the snapshot file is not deleted, and the deltas are still written to it.

Luckily(!) for me, my luns filled up and bombed out after 2 snapshots. There are reports out there that if you have 25 of these "ghost snapshots" then you start running into problems, for example redo log errors. The fix is easy enough, although a pain in the arse: just run the converter against it and it'll roll all the snaps together. Don't faff with the command line unless you really have to.

Anyhew, there is one option. In version 4.1 VCB still works, so you can use that. I have written a script you can use, usual terms apply. By usual terms I mean: if this fucks your environment it's not my fault; lose data, it's not my fault.

Here is the code:

Dim fso, ts, weight
weight = 0
Const ForWriting = 2
Set ofso2 = CreateObject("scripting.filesystemobject")
Set ofiletemp = ofso2.OpenTextFile("servers.txt", 1)
Set cline = CreateObject("wscript.shell")
'----------------------------------------------------------------------
Do While Not ofiletemp.AtEndOfStream
    weight = weight + 1
    servername = ofiletemp.ReadLine
    If weight < 6 Then
        'msgbox servername
        fullpath = "vcbmounter -h [yourVCname] -u username -p Password -a name:" & servername & " -r e:\vmbackups\" & servername & " -t fullvm -m nbd"
        cline.Run fullpath, 1, False 'lets another copy open "NBD!!"
    Else
        fullpath = "vcbmounter -h [yourVCname] -u username -p Password -a name:" & servername & " -r e:\vmbackups\" & servername & " -t fullvm -m nbd"
        cline.Run fullpath, 1, True 'waits for this one to finish before carrying on
        weight = 0 'resets weight to 0
    End If
Loop

What you need to do is put this file in the VCB directory at c:\program files\vmware\vmware consolidated backup\

Create a servers.txt file that has a list of your servers in, one per line.

If you look in the code there is a file path to e:. That is the destination directory. Change it to a destination you like.

Once this script has finished, you will have a directory in your destination dir for every machine, filled with the files that make up that machine. Back these up however you want, or just keep them as a handy copy.

When you are finished, run this script:

Dim fso, ts, weight
weight = 1
Const ForWriting = 2
Set ofso2 = CreateObject("scripting.filesystemobject")
Set ofiletemp = ofso2.OpenTextFile("servers.txt", 1)
Set cline = CreateObject("wscript.shell")
'----------------------------------------------------------------------
Do While Not ofiletemp.AtEndOfStream
    servername = ofiletemp.ReadLine
    'msgbox servername
    fullpath = "vcbmounter -h [yourVCname] -u userid -p password -U e:\vmbackups\" & servername
    'msgbox fullpath
    cline.Run fullpath, 1
Loop

This will remove all those machine directories. Again, don't forget to change the file path beginning "e:\" in the script. This also needs to live in the VCB directory mentioned above.

This works without creating ghost snapshots. It won't work at all against ESX5.0 as far as I know.

Thanks for reading,

Trev

Friday 10 June 2011

MSSQL - Finding running queries

Hi all,

I've just written this SQL for MS SQL 2005. It returns running sql queries.


select CN.session_id, ST.text, ST.dbid, ES.last_request_start_time,
ES.last_request_end_time, ES.login_name, ES.status
from sys.dm_exec_connections CN
join sys.dm_exec_sessions ES ON CN.session_id = ES.session_id
CROSS APPLY sys.dm_exec_sql_text(CN.most_recent_sql_handle) as ST
where ES.status = 'running'


Trev

Sunday 5 June 2011

DHCP DDNS and BIND part deux

Hi all,

Configuring DHCP on Ubuntu 10.04.

This is a part 2, but can equally be used as a part 1. Install dhcp3 using apt-get.

The first thing you'll see is that after you install it it'll fail to start. That is deliberate. Leave it down.

First thing to do is to tell DHCP what ethernet port to use.

nano /etc/default/dhcp3-server

under the interfaces line, in the speech marks, put your interface. For example "eth1"

Save and close that. That's the easy bit.

Now we need to setup the bits and pieces.

nano /etc/dhcp3/dhcpd.conf

Either edit this one, or back it up. Anyway, the lines you want are these:

ddns-update-style interim;
ignore client-updates;


and these (I've left the comment line in so you can see where in the file I did this):

# A slightly different configuration for an internal subnet.
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.70 192.168.1.150;
option domain-name-servers [your DNS server NAME];
option domain-name "[your domain name]";
option routers 192.168.1.254;
option broadcast-address 192.168.1.255;
default-lease-time 600;
max-lease-time 7200;
}

include "/etc/bind/rndc.key";

zone 1.168.192.in-addr.arpa. {
primary [your DNS server IP];
key "rndc-key";
}
zone newburytechinfo.co.uk. {
primary [your DNS server IP];
key "rndc-key";
}


These are fairly easy to read. The last two bits are the zone declarations: they automatically update your DNS server with your clients' IPs and names. The include line pulls in the key from BIND.

Change the IP ranges and whatnot for your network, turn off your current DHCP server, start this one.

sudo service dhcp3-server start

On your PC type this:

sudo dhclient

You should get an IP preconfigured with the right gateway, and your DNS server first in the list. You should also be able to ping it from another box.

That's it.

Oh. Some messing about might be required.

Do these lines if you start having errors:

cd /etc/apparmor.d
nano usr.sbin.named

add this line:

/etc/bind/zones/** rw,


save and close.

nano usr.sbin.dhcpd3

add these lines (no its not pretty, this is a hack):

/etc/bind/ rw,
/etc/bind/** rw,


Save and close.

Run this:

sudo service apparmor restart

That's it. It should all work.

Thanks for reading

Trev

DHCP DDNS and BIND

Hi all,

I've just installed BIND and DHCP onto my server as my router wasn't dealing very well with either role.

Here's how I did it on, yes, Ubuntu 10.04.

First, install BIND and DHCP3 (You can either sudo or switch to root).

apt-get install bind9 && apt-get install dhcp3-server

Let's sort out bind9 first.

nano /etc/bind/named.conf

Make a quick note of the include statements. As you can see, the config is split across these included files to keep things simple. Anyway, on the end of this file add these lines:

controls {
inet 127.0.0.1 allow {localhost; } keys { "rndc-key";};
};


This is for later, so DHCP can update DNS. Save it and let's do the next one; it's on the include list.

nano /etc/bind/named.conf.options

Stick your forwarders in here. Cunningly where it says forwarders. Here is my setting with my ISPs 2 DNS servers in:

forwarders {
87.194.255.154;
87.194.255.155;
};


Save it, let's do the next one. Again, it's in the includes list from earlier.

nano /etc/bind/named.conf.local

Bit more difficult this one. You need to create some zones. Mine isn't visible from the internet so bugger it, you can have mine.

zone "newburytechinfo.co.uk"{
type master;
file "/etc/bind/zones/newburytechinfo.co.uk.db";
allow-update { key "rndc-key"; };
//allow-update { localhost; localnets; };
notify yes;
};


See what I've done? All you have to do is replace my domain name with your internal domain name. // = comment; that was from while I was debugging. If you don't care what on your network can update your DNS, use that line instead of the key line.

Next up, reverse DNS. Stay where you are and create a new zone like this(again this is mine):

zone "1.168.192.in-addr.arpa"{
type master;
file "/etc/bind/zones/rev.1.168.192.in-addr.arpa";
allow-update { key "rndc-key"; };
notify yes;
};


Ok? Nice and simple this. For the zone, if your computer's IP is (for example) 192.168.1.15, chop the last number and dot off, then reverse it. Then add ".in-addr.arpa" on the end. If your computer's IP is 10.242.192.24, it'll end up looking like this: "192.242.10.in-addr.arpa".
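That reversal is mechanical enough to sketch in a couple of lines (the function name here is mine, just for illustration):

```python
def reverse_zone(ip):
    """Build the in-addr.arpa zone name for a /24 from an IPv4 address."""
    octets = ip.split(".")[:3]                 # drop the host octet
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_zone("192.168.1.15"))    # 1.168.192.in-addr.arpa
print(reverse_zone("10.242.192.24"))   # 192.242.10.in-addr.arpa
```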

Obviously change the "file" line up there too to match the zone. Last bit, stick this line on the end:

include "/etc/bind/rndc.key";


Save it.

Ok, next bit. Create a zones directory:

mkdir /etc/bind/zones

go in it

cd /etc/bind/zones

You know the 2 filenames referenced above in that script I gave you?

create a new file with your domain name, like this:

nano newburytechinfo.co.uk.db

You'll notice this file is referenced above in the zone config. Now we need to put the A records in. These I will clean. Cut and paste this. Change example.com to your domain name and the IP for the A record on the last line.

$ORIGIN .
$TTL 38400 ; 10 hours 40 minutes
example.com IN SOA example.com. admin.example.com. (
2007031009 ; serial
28800 ; refresh (8 hours)
3600 ; retry (1 hour)
604800 ; expire (1 week)
38400 ; minimum (10 hours 40 minutes)
)
NS example.com.
MX 10 example.com.
$ORIGIN example.com.
mail A 192.168.1.1

Right. Oh, by the by, that last line? If you want to give another server a fixed DNS name, chuck it in, in the same way. Save and exit.

Bounce the bind9 service:

service bind9 restart

Want to do a quick test? On your PC type this:

nano /etc/resolv.conf

delete everything (or put it somewhere else, write it down, back up your file, I'm not your mum), and put this in:

nameserver [your DNS server IP]


save it, now try and go to a website.

Have a coffee, you've just done your first DNS server.

Tell you what, I'll do DHCP as a separate article!

Thanks for reading

Trev

Thursday 2 June 2011

Kill all my processes! muhahaha!

Hi all,

This is an interesting command line a friend of mine wrote. I'm posting it here because it's handy for killing database processes that are spinning when you can't sort it out nicely:

ps -ef |grep [someprocesseswiththesamename]|grep -Po "^\w+\s+\d+"| grep -Po "\d+"| sed "s/^/kill -9 /" |sh

Say you have 50 apache2 services and apache is playing up so you want to kill all of them. (An example only...). You would do this:

ps -ef| grep apache2 |grep -Po "^\w+\s+\d+"|grep -Po "\d+"|sed "s/^/kill -9 /"|sh

Quick description:

ps -ef : List processes

grep apache2: Chops down the ps -ef to only show lines with apache2 in

grep -Po "^\w+\s+\d+" : Display just the name and PID

grep -Po "\d+" :Display just the PID

sed "s/^/kill -9 /" : builds the command line; it replaces the beginning-of-line anchor with "kill -9 ", so the PID from the previous stage gets appended.

sh: Pipe all that lot to the shell
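The same extraction, sketched in Python against a made-up sample of ps -ef output (usernames, PIDs and commands are all invented):

```python
import re

# invented sample of `ps -ef` output
ps_output = """root      1001     1  0 10:00 ?  00:00:01 /usr/sbin/apache2 -k start
www-data  1002  1001  0 10:00 ?  00:00:00 /usr/sbin/apache2 -k start
root       900     1  0 09:59 ?  00:00:00 /usr/sbin/sshd"""

def kill_commands(output, name):
    cmds = []
    for line in output.splitlines():
        if name not in line:                      # the `grep apache2` step
            continue
        m = re.match(r"^\S+\s+(\d+)", line)       # owner then PID; keep the PID
        if m:
            cmds.append("kill -9 " + m.group(1))  # the `sed` step
    return cmds

print(kill_commands(ps_output, "apache2"))  # ['kill -9 1001', 'kill -9 1002']
```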

Thanks for reading,

Trev

apt-get: Hash Sum mismatch

Hi All,

So, sometimes, just when you think that apt-cacher-ng is your amazing solution, or you have managed to get apt-get going through a proxy, you get hit by something like this:

Failed to fetch http://gb.archive.ubuntu.com/ubuntu/dists/lucid-updates/main/source/Sources.bz2 Hash Sum mismatch

That's no good is it? You try wget and that works fine, but apt just will not work. Here's how to fix it. Unfortunately I don't know what causes it. For me it has worked for ages and just broke one day and refused to work. Try the following (run as root):

touch /etc/apt/apt.conf.d/no-cache
nano /etc/apt/apt.conf.d/no-cache

type this line in and save it:

Acquire::http { No-Cache "true"; };

Then run:

apt-get update

This should now work with no problems.

Thanks for reading.

Trev

Saturday 28 May 2011

UFW and port knocking

Hi All,

This is how to setup basic port knocking for your home network.
At home I run several services that I want to get access to from the internet, but I don't always want these services open. Port knocking provides an interesting way of achieving both.

Basically how it works is that the knock daemon watches the firewall logs for connection attempts on specified ports. If it picks up a certain combination, the knock daemon can open another port for access through the firewall by adding rules.
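The matching idea can be sketched in a few lines of Python. This is a toy illustration of the concept only, not how knockd is actually implemented (no timeouts, no packet capture):

```python
def make_watcher(sequence):
    """Return a function that reports True once an IP completes the knock sequence."""
    progress = {}                 # how far through the sequence each IP has got
    def hit(ip, port):
        idx = progress.get(ip, 0)
        if port == sequence[idx]:
            idx += 1
        else:
            idx = 1 if port == sequence[0] else 0   # wrong knock: start over
        progress[ip] = idx
        if idx == len(sequence):
            progress[ip] = 0
            return True           # this is where knockd would run start_command
        return False
    return hit

knock = make_watcher([8111, 8555, 8777])
print(knock("1.2.3.4", 8111), knock("1.2.3.4", 8555), knock("1.2.3.4", 8777))
# False False True
```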

Here's how I did it on Ubuntu 10.04.

First install the knock daemon:

sudo apt-get install knockd

Now setup UFW. UFW is basically a simpler way of configuring iptables.
First lets setup a rule to allow ssh before we turn the firewall on. Do these in order:

sudo ufw allow ssh/tcp
sudo ufw enable

Check the status by typing this:

sudo ufw status

All other ports, apart from ssh are closed. Now we need to configure the knock daemon.
Crack open the config file (/etc/knockd.conf). I want to open port 80 so I can get to Mythtv. Add a label and some config.

[WEB]
sequence = 8111,8555,8777
seq_timeout = 5
start_command = ufw allow from %IP% to any port 80
tcpflags = syn
cmd_timeout = 3600
stop_command = ufw delete allow from %IP% to any port 80

Ok, so the knock sequence is 8111, 8555, 8777 in that order (although think about how port scanners work, choose some more random ports). That will then run start_command and open port 80. It'll close itself after an hour.

Once you have configured that start the daemon.

sudo service knockd start

Now for your router. Port forward port 80 and 8000-9000 to your server and that's it, job done.

For the client, to access your new rule, type this:

knock -v [external ip of your router] port1 port2 port3

Now access your website.

Thanks for reading,

Trev

Thursday 26 May 2011

How to install NRPE on linux

Hi All,

Ok, ok. Avoiding the fact that this is the first post and monitoring is not mentioned in the description, this is how I install the NRPE agent for Nagios on Linux boxes.

Logon to the box you want to monitor.

Create a directory somewhere called NRPE and go in it.

mkdir NRPE
cd NRPE

Download the following packages (google them I'll never keep up with versioning):
http://osdn.dl.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.6.tar.gz
http://osdn.dl.sourceforge.net/sourceforge/nagios/nrpe-2.8.tar.gz

As you can see you can get the latest versions from sourceforge.
Do all the following as root (su -)

Make a user called nagios and give it a password

/usr/sbin/useradd nagios
passwd nagios

First we install the plugins.

tar xzf nagios-plugins-1.4.6.tar.gz
cd nagios-plugins-1.4.6
./configure
make
make install

That'll do it. Now run these commands

chown nagios.nagios /usr/local/nagios
chown -R nagios.nagios /usr/local/nagios/libexec

What those 2 lines do is change the owner of /usr/local/nagios and everything under /usr/local/nagios/libexec to the nagios user you created a minute ago. The other directories will still be owned by root.

Now you need to install the NRPE bit itself. Go back to your NRPE directory you created earlier (if you are still with me type cd ..).

Type in:

tar xzf nrpe-2.8.tar.gz

Now it's a little more complicated than last time. Make sure you have openssl and all dependencies installed (yum install openssl or apt-get install openssl, or whatever for your distro).

Then type these commands in:

cd nrpe-2.8
./configure
make all

now run these:

make install-plugin
make install-daemon
make install-daemon-config
make install-xinetd

That's it installed. Now for the configuration. Don't worry, by this time it's taken a maximum of 10 mins from start to finish. It looks more complex than it is.

Config time:

type this in:

vi /etc/xinetd.d/nrpe

Press the insert button on your keyboard.
Go down to the "only_from" line and add your Nagios server IP onto the end of the line.
Then press the escape key then type :wq

This is to grant your nagios server permission to connect to the NRPE daemon.
Next we want to give the port number a service alias so....

vi /etc/services

Press your insert button and add an entry. It can go anywhere, but be sensible. It must look like this:

nrpe 5666/tcp # NRPE

press escape then type :wq

For those of you with firewalls, this tells you that the daemon is running on port 5666, so needs to be open for connections from your Nagios box, to this server, on 5666.

Now restart xinetd service like this:

service xinetd restart

Test it's listening by typing this:
netstat -at | grep nrpe

To check that NRPE is receiving commands type this:

/usr/local/nagios/libexec/check_nrpe -H localhost

If you get a version number back you are good to go.

You get a bunch of checks installed by default. To change the values, modify this file:

/usr/local/nagios/etc/nrpe.cfg

Scroll down to the bottom and you'll see some lines prefixed with the word command. You can change the -w and -c options. You can even create your own scripts in perl and create your own commands, but more on that later.
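For example, a command line in nrpe.cfg looks something like this (the plugin, path and thresholds here are just an illustration, not something from this install):

```
command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
```

The -w and -c options set when the check goes warning and critical, and Nagios calls the check by the name in the square brackets.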

Thanks for reading,

Trev