Net-SNMP configuration for non-root AgentX application

While developing an AgentX application you have to run it constantly, whether you are testing it, debugging it or troubleshooting something. By default Net-SNMP uses a named (Unix domain) socket for AgentX communication, which does not allow a non-root user to connect, making troubleshooting difficult. There are security reasons for not allowing this kind of wide-open access, so do not set this up in your production environment. There are other ways to control the access which I will cover in future posts.

To enable AgentX and allow non-root applications/agents to connect to snmpd, you can set up a TCP socket as follows. A TCP socket provides cleaner access and easier troubleshooting, e.g. you can capture the network traffic between snmpd and the AgentX application. Update /etc/snmp/snmpd.conf and ensure that the following directives are set for TCP-based AgentX communication.

rocommunity public default # or whatever community string/access control you want
master agentx # Run as an AgentX master agent
agentXSocket tcp:localhost:705 # Listen for localhost network connections instead of /var/agentx/master

Restart snmpd (/etc/init.d/snmpd restart)
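
A quick sanity check after the restart (a minimal example; the exact output varies by system, but you should see a listener on 127.0.0.1:705):

netstat -ltn | grep 705 # snmpd should now be listening for AgentX connections on tcp:localhost:705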

The alternative is to set the correct permissions on the /var/agentx/master named socket, or whatever path you have configured.
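
A rough sketch of that alternative (the group name is only an example, and snmpd recreates the socket on restart, so this would have to be reapplied):

chgrp developers /var/agentx /var/agentx/master
chmod g+x /var/agentx # the directory needs search permission
chmod g+rw /var/agentx/master # connecting needs write access to the socket itself

Recent Net-SNMP versions also offer an agentXPerms directive in snmpd.conf that can set these permissions for you.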

Application Logging Improvement – Part 3 Making it Readable

This is part three of my Application Logging improvement plan. So far I have discussed that logs should be machine readable for application performance, management and monitoring. In this post I give an example of how to make the log readable to humans (or make the logs look just like the ones everyone is used to seeing). I am going to use vim to view the log files, configured so that it knows how to handle the file, its syntax and so on.

The first thing is to configure vim to recognize the format. Continue reading
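
As a hypothetical starting point (the .applog extension and the applog filetype name are placeholders, not the actual format covered in the full post), the detection rule in ~/.vimrc could look like:

au BufRead,BufNewFile *.applog setfiletype applog

With syntax highlighting enabled, vim will then look for a matching syntax file such as ~/.vim/syntax/applog.vim whenever one of these logs is opened.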

Application Logging Improvement – Part 2 Multithreading

Multi-threading is now becoming the norm. The obvious issue with logging is how to synchronize between threads. As discussed in the last post, Application Logging Improvement Plan – Part 1, we want to log as much as possible in a machine readable format. So there comes a problem when multiple threads try to log at the same time. Two possible implementations come to mind, but both are flawed.

  1. Synchronize between threads for logging – Disk writes are slow, and lock contention would only make it worse. This slows down the business logic and is a big no-no.
  2. Log without synchronizing – The business logic works, but the logs get jumbled because multiple threads write at the same time. This leaves the logs in the worst shape and unusable.

We can do better by combining both of the above. We will create a per-thread logging buffer (let's call it LogBuffer) where each thread logs without any conflicts. At a certain threshold, threads synchronize and write their LogBuffer to disk (let's call this Flush). Continue reading

Application Logging Improvement Plan – Part 1

People are divided on how to log, what to log and how much to log. It is a never-ending discussion. In addition, many open source libraries are available for logging, not to mention many standards. I am not going to go into the details of what is available out there; use Google to pick your poison. What I am going to discuss here is what I think makes the most sense with the available technology.
Continue reading

snmp : find network information of a system centrally

Anyone can log in to a system and run ifconfig, netstat or other similar commands to find its network information. But what would be even better? Doing it remotely, without logging in to each and every system. How? Using snmpwalk one can retrieve all this information, provided the target system has snmpd running, snmpd supports network information, and the querying host is allowed to make SNMP queries. Let's see how.

The IP address table is covered by basic SNMP (just like system information, UDP and TCP socket information, address translation, SNMP stats and so on). Here is how to query it to get the IP address and subnet mask information.

unixite@sanbox:~/ > snmpwalk -v1 -c public sandboxS:161 1.3.6.1.2.1.4.20.1.1
iso.3.6.1.2.1.4.20.1.1.1.2.3.4 = IpAddress: 1.2.3.4
iso.3.6.1.2.1.4.20.1.1.127.0.0.1 = IpAddress: 127.0.0.1
iso.3.6.1.2.1.4.20.1.1.192.168.1.10 = IpAddress: 192.168.1.10
iso.3.6.1.2.1.4.20.1.1.10.0.0.2 = IpAddress: 10.0.0.2
unixite@sanbox:~/ > snmpwalk -v1 -c public sandboxS:161 1.3.6.1.2.1.4.20.1.3
iso.3.6.1.2.1.4.20.1.3.1.2.3.4 = IpAddress: 255.0.0.0
iso.3.6.1.2.1.4.20.1.3.127.0.0.1 = IpAddress: 255.0.0.0
iso.3.6.1.2.1.4.20.1.3.192.168.1.10 = IpAddress: 255.255.255.0
iso.3.6.1.2.1.4.20.1.3.10.0.0.2 = IpAddress: 255.255.0.0

The first walk retrieves the IP addresses on the system while the second one gets the subnet masks. Change -c public to the right community string, and change the version if your agent supports a different one. My target system here is sandboxS and snmpd is listening on the default port 161; if yours is different, change the port to match.
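
The same approach works for other parts of the standard MIB; for example, walking ifDescr (1.3.6.1.2.1.2.2.1.2) lists the interface names (the output, naturally, depends on the system):

snmpwalk -v1 -c public sandboxS:161 1.3.6.1.2.1.2.2.1.2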

Unix : find affected by current working directory

On many Unix variants, find first looks up the current working directory before proceeding with whatever it was asked to find. Ubuntu 10.10 and Debian Squeeze are not affected (I did not check older versions), but Debian 5.0.6 (Lenny) is, and the list also includes Solaris 10 and Solaris 11 Express. It is very easy to fall into this pitfall if you have an automated package installation which invokes scripts to start applications at the end of the installation and then cleans up the temporary directory the package was running from.

I wasted a couple of hours going over all my scripts to understand what was going on. The ls command was working, but find was not able to get me the list of files to process from unrelated directories. So I ended up redirecting find’s error output to standard out and voilà, the solution presented itself. That redirection should have been at the top of my list. It tells you “find : cannot get the current working directory”. Why does it need that? I don’t know. Linux has had this fixed for some time now, but for some reason SunOS is still using the old find variant, including Solaris 11 Express, which is the latest version out. Maybe for historical reasons. If anyone knows, please share.

So the solution to the problem was: before invoking the command that will keep running and may need to call find, start it from a directory that will still exist after the package installation is complete, e.g. / or /tmp.
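
A minimal sketch of that fix (start_app.sh and its path are hypothetical placeholders for whatever the package's post-install step launches):

cd / || exit 1                 # leave the temporary install directory before it is removed
/opt/myapp/bin/start_app.sh &  # the long-running process now has a working directory that persists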
Continue reading

bash : Self redirect of Script’s output to a file

Everyone knows how to use redirection operators to send the output of a script to a file. Simply use “>” or “>>” on the command line after the script or application name and it does the magic of storing the script’s output in the file of your choice. But what if you want the script to create its own file and store its output there? In other words, a self redirect of the output.

Lets take a use case. Imagine you are developing a script that will bring up some applications when a system reboots. When it is run from the command line it works fine, but when it runs during system startup it misbehaves. How do you troubleshoot? The first response is to use “-x” to print how the script is executing. But what we want is that when our startup script runs, all of its output (both stdout and stderr) is stored in a file that we can use for troubleshooting later. To redirect the output of a startup script we would have to trigger it from a wrapper script and use a redirection operator to store its output, which is simply a workaround. Or, use the magic word “exec“.
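
A minimal sketch of the idea (the log file path is only an example):

#!/bin/bash
# Redirect this script's own stdout and stderr to a log file for the rest of its run.
exec >> /var/log/startup-myapp.log 2>&1
echo "startup script started at $(date)"
# ... the rest of the startup logic follows; all of its output lands in the log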
Continue reading

SunOS : List dynamic libraries loaded by a running process or core

Both SunOS and Linux provide ldd, which lists the shared objects or dynamic libraries required for an executable to run. It tells you which libraries will be needed and where they will be loaded from when that executable is started; this is controlled by LD_LIBRARY_PATH and/or compilation flags. Here I will address how to find which objects were really picked up when the executable was started, because what you see and what may be happening are two different things.

Take, for example, a running process that starts to misbehave or even crashes. For troubleshooting, it is a must to find the actual dynamic libraries that were loaded by the application. ldd on the executable lists what it is supposed to use (based on the current environment), not what the running process is actually using, or was using when it crashed. Maybe the environment was set up differently when that executable was started and the process loaded shared libraries from a different location. So what ldd reports for the same binary now may differ from what the process actually loaded, baffling the troubleshooter.

SunOS provides pldd (from the suite of /proc utilities), which lists the dynamic libraries that were actually loaded when the process in question was started (or during its lifetime). pldd can also be used on a core file to list the dynamic libraries that were used by that process. You simply run pldd with the process ID or the core file to get the list of dynamic libraries the process is using, or was using before the crash.
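
For example (myapp and the core file path are hypothetical placeholders):

pldd $(pgrep myapp)        # libraries mapped into the running process
pldd /var/core/core.1234   # libraries that were loaded by the process that dumped core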

Continue reading