I recently ran into an issue where calling functions from java.awt.Color caused a "NoClassDefFoundError" in a JSP page. I restarted Resin and kept refreshing the JSP page until I saw a different error message that looked like this:
libXp.so.6: cannot open shared object file: No such file or directory
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.security.AccessController.doPrivileged(Native Method)
This error was quite different from the previous one, but it shows that AWT could not locate "libXp.so.6" while initializing. Through some more research I found that libXp.so.6 is part of the "xorg-x11-deprecated-libs" package in CentOS 4.5 (note the spelling: "deprecated", not "depreciated"). I issued a "yum -y install xorg-x11-deprecated-libs" and an "ldconfig" to be safe, then restarted Resin. My java.awt.Color functions worked perfectly after this.
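If you hit a similar error, you can check whether the runtime linker knows about libXp before installing anything. A minimal sketch (the yum package name applies to CentOS 4.x; adjust for your distribution):

```shell
# Ask the linker cache whether libXp is registered; fall back to a
# message if it is not (or if ldconfig is not on the PATH).
ldconfig -p 2>/dev/null | grep libXp || echo "libXp.so.6 not found"

# If it is missing on CentOS 4.x, install the package that provides it
# and refresh the linker cache (shown for reference):
#   yum -y install xorg-x11-deprecated-libs
#   ldconfig
```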
Hopefully this helps someone!
One great command to add to your arsenal is “lsof”. It lists all open files on a Linux system, along with the processes that hold them.
lsof can be used to help resolve issues like:
- You can’t unmount a device because it is busy, even though you believe nothing is using it:
umount: /mountpoint: device is busy
- A process is using a file, but you have no idea which process
- You want a list of active connections (netstat is often better for this) along with the program and PID (process ID) using each socket
To unmount a device which complains about being in use, first find out what has it open:
# lsof | grep “/mountpoint”
(You can also pass the mountpoint directly, as in # lsof /mountpoint, which avoids grepping the full listing.)
This command returns a list of processes, their PIDs, and the users that have files or directories open under that mountpoint. Look for regular files (marked “REG” in the TYPE column) to locate the service or program holding them open. Stop that service or, at the very extreme, kill -9 the process.
To find which process has a particular file in use, use a variation of the command above:
# lsof | grep “openfile”
This allows you to locate the process and user using that file.
To view a list of active connections run this command:
# lsof | grep “IPv4”
This returns a list of all open IPv4 connections.
Also be aware that the “lsof” command can take some time to run on servers with very large numbers of open files (Oracle servers, web servers), so please be patient; runs of 2-4 seconds are not uncommon.
I believe that most slowness-related performance issues come down to slow disks or poor application tuning. Memory is a big factor when it comes to OS-level caching and buffering, but there’s nothing like a fast SCSI array or even a few WD Raptors in RAID-1.
The Linux utility "iostat" gives you a complete overview of disk utilization. It does this by looking at the time each device is active in relation to the device's average transfer rate.
Using the iostat utility with the -x flag (-x is for extended statistics) yields extended per-device results, including the "%util" column discussed below.
If the iostat command is not available on your system, run one of the following commands to install the sysstat package:
CentOS/RHEL – # yum -y install sysstat
Ubuntu/Debian – # apt-get install sysstat
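A typical invocation looks like this (the interval and count are illustrative); the fallback message is only there in case sysstat is not installed yet:

```shell
# Extended (-x) device statistics every 2 seconds, 3 reports.
# The first report averages activity since boot; the later reports
# cover each interval, which is what you want for gauging live load.
iostat -x 2 3 || echo "iostat not found; install the sysstat package first"
```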
Pay special attention to the "%util" column of the results. On one of my servers, for example, the percentage of CPU time spent servicing I/O requests for /dev/sdb was quite high; that device is actually a large RAID-6 array and had not yet reached the 100% utilization mark. The closer a device or array gets to 100%, the closer you are to total saturation of that device.
If your utilization numbers are higher than expected take the following into consideration:
- Tune the application (usually the cheapest way to gain the most performance)
- Obtain faster disks (10K+ SATA/SAS/SCSI)
- Use a larger and more efficient RAID array for your application (RAID-0 for video editing, RAID-10 for databases, RAID-5 for file storage and general access, and RAID-6 on newer controllers for increased redundancy)
Hey everyone. I ran across a new open-source application called MailArchiva. This software lets you archive all of your email for long-term storage. It is easy to install and seems to be very efficient. The company claimed that the open-source edition on a decent server (dual Xeons and sufficient RAM) could archive 1,400,000 messages per day. I was very impressed with their performance claims.
MailArchiva provides full-text searching, and the enterprise edition allows clustering of search servers if your archive grows significantly large. One main feature missing is the ability to rotate to long-term storage such as a tape device; although disk storage is becoming cheaper and cheaper, a long-term storage solution is almost always needed.
Check out MailArchiva here and download the open-source version today. MailArchiva integrates with Exchange 2003 and 2007, Ipswitch IMail, Postfix, sendmail, qmail, exim and more!
Here’s a quick reference for wargling, or war-googling. These search terms can be used to dig up extra information on sites which may have security issues, or to find which domains have a certain string in URLs indexed by Google.
Each entry below pairs a goal with a wargling search string (the operator examples use example.com and sample strings as placeholders):
- Find similar domains: related:example.com
- Find exposed directory listings (password files, WSDL files): "index of" passwd.txt, "index of" etc passwd, "index of" wsdl
- Enumerate OWA users: inurl:exchange inurl:finduser inurl:root
- Poor information management: "internal use only", "password hint" -email, "show password hint" -email
- Find specific file types such as .htaccess, .xls, .doc: filetype:xls
- Find matches in URLs: inurl:admin
- Find information about a domain: info:example.com
- Find links to a domain: link:example.com
The above information is not provided for malicious purposes. Please use it to make sure you’re not leaking information at your business.