Problem:
Whenever I try to access my ReadyNAS NV+ shares from any computer, I get the following error message:
Insufficient system resources exist to complete the requested service.
All of my computers have plenty of free memory, hard disk space, etc. The problem is clearly the NAS. Googling leads me to believe that the NAS is returning a malformed response to Windows, and so Windows gives this misleading error message.
Moreover, I can still log into FrontView, and ssh in.
Solution:
Recently I was playing around with the NAS: I installed the add-on that gave me root and installed some programs in the root directory /root. One of those programs misbehaved and filled up my root partition with data. I sshed in and checked disk space:
df -k
This showed me that the root partition was indeed at 100%. To solve it, I needed to delete the big files that were taking up all the space. I used the method I previously wrote about to find the big files and remove them. After I was finished, root was at 46% full. I rebooted the NAS and the problem went away.
Wednesday, August 12, 2009
Sunday, April 27, 2008
check if a website is responding script
recently I needed to check if a website was responding, and if not, page myself (by sending an email to my pager). Here is the unix shell script I used to make this happen.
Required programs:
wget
working mail (I use mailx on Solaris 8 in this script)
script:
#!/bin/sh
## Script will check if the "host" is up, if the host is down, send an email
## You should cron it every 5 mins or so.
#uncomment to debug:
#set -x
## change these:
host="http://dogself.com"
email="my-pager-number@skytel.com,my-real-email@gmail.com"
## locations of stuff:
mailx="/usr/bin/mailx"
wget="/usr/bin/wget"
log="/path/to/a/writable/log/file.log"
now=`date '+%m/%d/%Y %H:%M:%S'`
rm -f ${log}
#when checking connection, do 2 tries, and time out in 7 seconds
${wget} -O /dev/null -t 2 -T 7 ${host} -o ${log}
grep "saved \[" ${log} > /dev/null
if [ $? -ne 0 ];
then
    echo "site:[${host}] is down"
    ${mailx} -s "PRODUCTION is DOWN at ${now}" ${email} < ${log}
else
    echo "site:[${host}] is up"
fi
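A variation on the script above (my own sketch, not from the original post): wget's exit status already tells you whether the fetch succeeded, so you can branch on it directly instead of grepping the log. The host is the same placeholder as above.

```shell
#!/bin/sh
# Branch on wget's exit status instead of grepping the log file.
# host is a placeholder; swap in the site you actually monitor.
host="http://dogself.com"

site_up() {
    # returns 0 if the URL could be fetched (2 tries, 7 second timeout)
    wget -q -O /dev/null -t 2 -T 7 "$1"
}

if site_up "${host}"; then
    echo "site:[${host}] is up"
else
    echo "site:[${host}] is down"
fi
```

You lose the saved wget log to mail out, but the check itself gets simpler.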
Friday, April 25, 2008
shell scripting with /bin/sh
I found this half-decent tutorial / how-to on shell scripting with /bin/sh, which I do a lot but suck at.
here it is:
http://ooblick.com/text/sh/
also, to check if a file does NOT exist:
if [ ! -e /path/to/file ];
then
# do something
fi
ok i am done!
Tuesday, January 22, 2008
Unix: Compress or delete files older than X days
Problem:
I have some logs in some directories that I need to compress (back up) if the file is at least 12 days old; if the file is older than 24 days, I need to delete it
Solution:
to compress:
find <path to search> -xdev -mtime +<number of days> -exec /usr/bin/compress {} \;
to delete:
find <path to search> -xdev -mtime +<number of days> -exec /usr/bin/rm -f {} \;
Note: This was tested on Solaris 8
Examples:
find /usr/local/apache/logs/ -xdev -name "sar.*" -mtime +24 -exec /usr/bin/rm -f {} \;
find /usr/local/apache/logs/ -xdev -name "sar.*" -mtime +12 -exec /usr/bin/compress {} \;
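The two finds can be wrapped into one small function for cron. A sketch (rotate_logs is my own name; gzip stands in for Solaris compress, since compress is usually absent on Linux):

```shell
#!/bin/sh
# rotate_logs DIR: delete matching files older than 24 days, then
# compress the ones older than 12 days. The delete pass runs first so
# a file that is due for deletion never gets compressed.
rotate_logs() {
    find "$1" -xdev -type f -name "sar.*" -mtime +24 -exec rm -f {} \;
    find "$1" -xdev -type f -name "sar.*" -mtime +12 -exec gzip {} \;
}
```

Then a single cron entry can call rotate_logs /usr/local/apache/logs/.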
Thursday, January 03, 2008
how to: use cron
ok, now that I really know how to use cron, I should blog it in case I forget.
This assumes your crontab is empty, or that it is already in sync with hostname.cron because you always use this method to modify it.
0. run crontab -l to make sure your crontab is empty; if it's not, paste what it has at the bottom of the file you make in step 1
1. make a file called hostname.cron and add this to the top of it for later:
#########################################
# #
# Cron file for [username] #
# #
#########################################
# min hour dom month dow usr cmd
# use '*' to match any value
2. add your entries to it; here are some examples of what to add:
01 * * * * echo "This command is run at one min past every hour"
17 8 * * * echo "This command is run daily at 8:17 am"
17 20 * * * echo "This command is run daily at 8:17 pm"
00 4 * * 0 echo "This command is run at 4 am every Sunday"
* 4 * * Sun echo "This one runs every minute from 4:00 to 4:59 every Sunday"
42 4 1 * * echo "This command is run 4:42 am every 1st of the month"
01 * 19 07 * echo "This command is run hourly on the 19th of July"
3. save your file.
4. run this:
crontab < hostname.cron
done!
useful info happily stolen from here: http://www.unixgeeks.org/security/newbie/unix/cron-1.html
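Steps 0 through 3 collapse into a couple of commands. A sketch (the echoed entry is just an example job):

```shell
#!/bin/sh
# Build hostname.cron from the current crontab (steps 0-1), then
# append a new entry (step 2). crontab -l fails when the crontab is
# empty, hence the fallback that just creates an empty file.
crontab -l > hostname.cron 2>/dev/null || : > hostname.cron
echo '01 * * * * echo "hourly job"' >> hostname.cron
# step 4, run by hand once the file looks right:
# crontab hostname.cron
```

Keeping the install step manual gives you a chance to eyeball the file before it goes live.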
Friday, December 14, 2007
Unix: find largest files / folder to free up space
Unix boxes like to fill up with log files, and when they get really full, stuff starts to break. These commands will tell you where the big log files sit so you can delete them or gzip them.
first, as root run
df -k
/dev/md/dsk/d3 4129290 4020939 67059 99% /var
then go to /var and find what is taking up all the space:
du -sk * | sort -rn | head
2855134 mail
..
193102 tmp
so mail is taking up the space, now let's look for files bigger than 100MB:
find . -size +100000000c -ls
22615 483984 -rw-r--r-- 1 root other 495339455 Dec 14 15:09 ./mail/access_log
113593 209784 -rw-r--r-- 1 root other 214696853 Dec 14 15:08 ./mail/servletd.log
208492 354768 -rw-rw-rw- 1 admin mail 363091751 Dec 14 14:29 ./mail/admin
now you can see what is stealing your megabytes.
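The du | sort | head step is worth keeping handy as a function (biggest is just a name I made up):

```shell
#!/bin/sh
# biggest DIR: list the entries under DIR, largest first, sizes in KB
biggest() {
    du -sk "$1"/* 2>/dev/null | sort -rn | head
}
```

Running biggest /var prints roughly the listing shown above, biggest offender first.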
Thursday, December 13, 2007
the most useful program in the world
clear && perl -e '$|=1;@l=qw{| / - \\};while(1){print qq{$l[$_++%@l]\x0d};select($z,$z,$z,.2*(rand(2)**2))}'
i can think of like 9 uses for this already!
Thursday, July 26, 2007
passwordless ssh/scp setup
I need to set up cron jobs which scp stuff all over the place. scp works best when you have RSA keys set up so you can ssh in without a password.
Here are the steps to set this up (on Solaris 8, probably Linux too):
1) setup the public key and send it to the server
ssh-keygen -t rsa
[press enter instead of typing a passphrase when asked]
scp ~/.ssh/id_rsa.pub [user]@[server]:~/
2) open a terminal on [server] and run:
mkdir ~/.ssh
touch ~/.ssh/authorized_keys
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
rm ~/id_rsa.pub
Done! That was easy!
Note: if the above doesn't work for some reason, make sure you tighten the permissions on the .ssh dir on the SERVER you are trying to connect to:
chmod go-rwx ~/.ssh
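sshd silently ignores authorized_keys when permissions are too loose, so the fix is worth scripting. A sketch (fix_ssh_perms is my own name; 700/600 is the conservative choice that every sshd accepts):

```shell
#!/bin/sh
# fix_ssh_perms HOME: tighten the permissions sshd checks before it
# will honor authorized_keys
fix_ssh_perms() {
    chmod go-w "$1"                     # home dir must not be group/world writable
    chmod 700 "$1/.ssh"
    chmod 600 "$1/.ssh/authorized_keys"
}
```

Run it as the target user on the server, e.g. fix_ssh_perms "$HOME".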
Thursday, May 24, 2007
UNIX: get the last modified file in a dir
problem:
I need a command that will tell me the most recently modified file or dir in a directory
solution:
ls -1t /the/dir/ | head -1
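Wrapped as a function for scripts (newest is a made-up name; note that ls -t skips dotfiles and that parsing ls output breaks on filenames containing newlines):

```shell
#!/bin/sh
# newest DIR: print the name of the most recently modified entry in DIR
newest() {
    ls -1t "$1" | head -1
}
```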
Tuesday, April 24, 2007
teamsite: passing parameters from javascript to perl
Teamsite (the evil abomination of software that it is) allows the unfortunate souls using it to call perl scripts from javascript and pass parameters to the perl script.
the syntax looks something like this:
var params = new Object();
params.val = "some string";
var server = window.location.hostname;
IWDatacapture.callServer("http://"+server+"/iw-bin/somescript.ipl",params);
problem:
sometimes the params don't carry over correctly from javascript to the perl script, ie, all params have values in the javascript, but on the perl side they arrive empty ("")
the problem manifests when you assign the params this way:
var params = new Object();
params.val = SomeFunctionWhichReturnsAString();
now chances are that params.val = "" in the perl script.
solution:
this problem stinks of pointers: params.val is set to a reference to something on the javascript heap, and when perl takes over, that reference points to nothing. Either way, this is how you get around it:
var params = new Object();
params.val = SomeFunctionWhichReturnsAString()+"";
now everything works. why? well... because teamsite is evil.
Friday, April 13, 2007
Teamsite: IWDataDeploy FAILED TDbSchemaSynchronizer create failure

Interwoven Teamsite is the worst piece of software i have ever used, and googling its errors gives you 0 hits. I am going to do the world a favor and change that!
Problem: I needed to deploy DCT data to the table which Teamsite uses for its own record keeping. The file which takes care of this deployment is called AREA_dd.cfg, where AREA is the name of the DCT. After making sure that Teamsite was using the correct db info, and that all fields I was deploying existed, I was still getting the error IWDataDeploy FAILED, with the following root cause:
(from the log in OD_HOME/OpenDeployNG/log/iwddd_something.log)
DBD: SELECT * FROM USER_TABLES WHERE TABLE_NAME='IWTRACKER'
DBD: Table [iwtracker] exists.
DBD: DEFAULT__DEVICE__MAIN_STAGING not registered
DBD: Error occured in TDbSchemaSynchronizer
DBD: ERROR:TDbSchemaSynchronizer create failure.
(DEVICE is the name of my DCT)
Solution:
This solution is a hack, but it works. Open the database you are trying to commit to from teamsite, and run the following SQL query:
INSERT INTO IWTRACKER (NAME) VALUES('DEFAULT__DEVICE__MAIN_STAGING');
COMMIT;
where instead of 'DEFAULT__DEVICE__MAIN_STAGING' put the thing which the previous error claims is not registered. Save the DCT again and tail the log, the deployment should succeed. (if it fails, you are probably screwed because the support is terrible and the documentation is teh suck)
edit 11/6/2009: I no longer use teamshite, so please don't ask me any questions about this evil thing; I won't know the answers.
Friday, April 06, 2007
Find and replace with VI
this will replace all occurrences of match_pattern in the file
:g/match_pattern/s//replace_string/g
alternatively, you can use ':' instead of '/' as the delimiter, which makes path (/) slashes easier to manage (no escape (\) character needed)
:g:match_pattern:s::replace_string:g
Tuesday, March 20, 2007
how to make .so files load in unix
I've had this problem too many times, so I should blog the solution. The problem usually looks something like this: you try to run something and you get an error screaming about failing to load some file ending in .so
example:
failed: Can't load '/app/teamsite/iw-perl/site/lib/auto/DBD/Oracle/
Oracle.so' for module DBD::Oracle: ld.so.1: perl: fatal: libnnz10.so: open failed: No such
file or directory at /app/teamsite/iw-perl/lib/DynaLoader.pm line 229.
solution:
All these .so problems seem to have a common solution: you need to tell unix where to look for your libraries. This is how you do it:
- Figure out where your application keeps its .so files. Usually this is a directory called lib, and it usually contains more than one .so file.
- You need to become root, and run this command:
- crle
- write down everything it outputs, because if you break things you will need this to restore them. On Solaris 10, the important piece which always has to be in the output is: Default Library Path (ELF): /usr/lib
- Now we need to add our library locations to the Default Library Path, run this as root:
- crle -c /var/ld/ld.config -l /usr/lib:/your/directory/where/the/so/files/are
- exit root
crle -c /var/ld/ld.config -l /usr/lib:/your/directory/where/the/so/files/are:/another/lib/path
Important: if you remove the /usr/lib part from the Default Library Path, shit will hit the fan.
Use this at your own risk.
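If you'd rather not touch the system-wide linker config at all, a less risky per-process alternative is LD_LIBRARY_PATH: set it in the wrapper script that launches the app, and only that process sees it. A sketch (the lib path is an example):

```shell
#!/bin/sh
# Per-process alternative to crle: extend the library search path only
# for whatever this script launches. The directory is an example.
LD_LIBRARY_PATH="/your/directory/where/the/so/files/are${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
# then start the app from this same shell, e.g.:
# /app/teamsite/iw-perl/bin/iwperl somescript.ipl
```

Since nothing system-wide changes, there is no Default Library Path to break.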
Wednesday, February 07, 2007
UNIX: list what sudo permissions you have
> sudo -l
Password:
You may run the following commands on this host:
(root) /usr/local/apache/bin/
(root) NOPASSWD: /app/teamsite/iw-home/opendeploy/bin/iwsyncdb.ipl
(root) NOPASSWD: /etc/init.d/smb
(root) NOPASSWD: /usr/local/samba/bin
(root) NOPASSWD: /bin/vi /etc/group
(root) NOPASSWD: /usr/local/bin/top
(root) NOPASSWD: /bin/vi /etc/group
(root) NOPASSWD: /usr/bin/ls
(root) NOPASSWD: /usr/ucb/vipw
(root) NOPASSWD: /usr/local/bin/top
(root) /bin/tail
Tuesday, January 16, 2007
UNIX: dd trick to speed up disk access times
This was stolen from the Lucene mailing list as a strategy to warm up an IndexSearcher:
Something like dd if=/path/to/index/foo.cfs of=/dev/null
Basically, force the data through the kernel preemptively, so FS caches it.
Run vmstat while doing it, and if the index hasn't been cached by the FS, you should see a spike in IO activity while dd is running.
source: (something in the middle of the thread)
http://www.gossamer-threads.com/lists/lucene/java-user/43418
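As a tiny script (warm is my own name; bs=1M is GNU dd syntax, so on Solaris dd spell it bs=1048576):

```shell
#!/bin/sh
# warm FILE: stream FILE through the kernel page cache so later reads
# hit the cache instead of disk
warm() {
    dd if="$1" of=/dev/null bs=1M 2>/dev/null
}
# usage: warm /path/to/index/foo.cfs
```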
Friday, December 08, 2006
using cron on solaris
crontab -l
figure out what's in your cron, copy it to a file called myCron
modify the file and add the crap you want
50 13 * * * /usr/local/subversion/svnbackup.sh
this runs the script at 1:50pm every day
then copy the file into the cron:
crontab < myCron
yey!
Monday, December 04, 2006
UNIX: delete files older than X days
This can be useful when making a rolling backup script
find /directory -type f -mtime +12 | xargs rm
/directory: your directory name that the files are under (rename appropriately)
-type f: only delete files (not subdirectories)
+12: older than 12 days old
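One caveat: piping to xargs rm breaks on filenames containing spaces. Using find's own -exec sidesteps that; purge_older_than is just an illustrative wrapper:

```shell
#!/bin/sh
# purge_older_than DIR DAYS: delete plain files under DIR whose mtime
# is more than DAYS days ago; -exec handles odd filenames safely
purge_older_than() {
    find "$1" -type f -mtime +"$2" -exec rm -f {} \;
}
```

For the rolling backup above: purge_older_than /directory 12.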
Tuesday, September 12, 2006
unix shell scripting: args check
usage()
{
    echo "Usage: program [args] [bleee]"
}

if [ $# -eq 0 ] ; then
    echo "ERROR Insufficient arguments"
    usage
    exit 1
fi
Tuesday, September 05, 2006
Tuesday, August 08, 2006
how to annoy people using UNIX
first, do
$who
mk pts/3 Aug 8 15:07 (**********.com)
to find out who is around to annoy.
then do
$yes FATAL SYSTEM ERROR. SHUTTING DOWN | write mk pts/3
this will print FATAL SYSTEM ERROR. SHUTTING DOWN all over their terminal until you press ctrl-c