Night time story – Move over Cloud, Edge computing is here

Once upon a time, computing happened on PCs and servers. Bigger organizations like banks set up server farms in data centers for their computing needs. Along came big companies like Amazon, AT&T, Microsoft and the like, which set up their own server farms and started renting out the computing power. And one day Big Data showed up. Small companies that could not afford to maintain their own data centers but had a great idea started out in the Cloud. That was the dawn of the Cloud as we know it. And the rest is history. So computing moved from the devices at the Edge of the Internet into the “Cloud”.

Fast forward a few years. Smartphones and autonomous devices ushered in a new age. Mobile phones became far more powerful than the high-end PCs of 15 years earlier, providing a good amount of processing power. Autonomous components needed answers right then and there, based on live events. There is no margin for error or delay. If the autopilot in a car detects an oncoming car in its path, it has to decide the next course of action right away. There is no time to send a message to the Cloud and wait. Or, God forbid, there is no signal at all. So the computing has to be done in situ. That is Edge computing.

Now this does not mean that the Cloud is on its way out. Both have their own niches. For example, an autonomous device has to react to real-life events in the wild. Think of nature: critters react to their environment. They do not need a central hub to guide them or help them navigate. They look around, find their way to food, and avoid colliding with other objects or insects. But that does not mean the queen or alpha counterpart is useless. She guides the colony in case of disaster, or when finding new abodes and prospects. That is the Cloud, which sees it all and decides the next best course of action.

So to get a quick solution, Edge computing is a must. But on a grand scale you need to bring the data to central computing, the Cloud, to analyze it, find patterns, and decide the best course of action.

And then they both lived happily ever after!

Quick Primer on migration to Curl for a socket programmer

I have been thinking of putting this out for quite some time. In the future this will find its home on Stack Overflow (libcurl documentation), which is still in the proposal state. This is very high level and only provides basic guidelines. There are many options and cases that are better documented on the libcurl website.

Check https://curl.haxx.se/libcurl/c/libcurl-tutorial.html, which provides many examples. The libcurl documentation also provides small code snippets for each API. For some, it even provides high-level steps just like the ones I mention below.
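
To give a flavour of those steps, here is a minimal sketch of a blocking fetch using the libcurl easy interface, which is roughly what replaces a hand-rolled socket/connect/send/recv loop. The URL is just a placeholder.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* One-time global setup (replaces your manual socket plumbing). */
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *curl = curl_easy_init();
    if (curl) {
        /* Tell libcurl what to fetch; it handles DNS, connect, send and recv. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

        /* Perform the transfer; the response body goes to stdout by default. */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();
    return 0;
}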

AgentX Tutorial using Net-SNMP – Trap implementation

The third one in this series is the implementation of SNMP traps, the mechanism that is the main strength of SNMP. Traps provide an event-based system where the NMS relies heavily on the monitored stations to tell it when something of interest has happened. mib2c again provides an easy way to implement traps; this is actually the easiest of the three in this series. As I said before, clone my repository and check out the tags to see what was done and how. The following three commands are of interest.


git clone https://github.com/jainvishal/agentx-tutorial.git
git checkout step3_traps_autogenerated
git checkout step3_traps_implement

mib2c provides mib2c.notify.conf to compile the MIB and generate C code. By default it uses SNMPv2 traps and I leave that as is. The implementation is as simple as copying the data into the right variables when the time comes, and the trap will be delivered.
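
To give an idea of what the generated send routine boils down to, here is a minimal sketch using the standard Net-SNMP trap API. The notification and payload OIDs below are hypothetical placeholders; use the ones mib2c generates from AGENTX-TUTORIAL-MIB.

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>
#include <net-snmp/agent/agent_trap.h>

/* Hypothetical OIDs; substitute the ones generated from your MIB. */
static const oid snmptrap_oid[]     = { 1, 3, 6, 1, 6, 3, 1, 1, 4, 1, 0 };
static const oid notification_oid[] = { 1, 3, 6, 1, 3, 9999, 0, 1 };
static const oid payload_oid[]      = { 1, 3, 6, 1, 3, 9999, 1, 1, 0 };

void send_tutorial_trap(long value)
{
    netsnmp_variable_list *vars = NULL;

    /* snmpTrapOID.0 identifies which notification is being sent. */
    snmp_varlist_add_variable(&vars, snmptrap_oid, OID_LENGTH(snmptrap_oid),
                              ASN_OBJECT_ID,
                              (const u_char *) notification_oid,
                              sizeof(notification_oid));

    /* Copy the interesting data into the varbind list... */
    snmp_varlist_add_variable(&vars, payload_oid, OID_LENGTH(payload_oid),
                              ASN_INTEGER,
                              (const u_char *) &value, sizeof(value));

    /* ...and hand it to the master agent for delivery. */
    send_v2trap(vars);
    snmp_free_varbind(vars);
}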

AgentX Tutorial using Net-SNMP – Simple Table MIB for Dummies

Second in the series on implementing an AgentX agent, I describe a simple table implementation. Net-SNMP provides multiple implementation styles, from easy ones to more elaborate ones, and the approach should be chosen based on need. For most needs, MIB For Dummies works (mib2c -c mib2c.mfd.conf). Points to consider when using MIB For Dummies are:

  1. Data can be cached, and small time windows where the data may be stale compared to real-world values are acceptable (or the application can implement logic to mark the cache expired when the real value changes)
  2. Data reloads are not resource-intensive
  3. The data set for the table is small, so the memory footprint of copying it into the cache will be small


AgentX Tutorial using Net-SNMP – Simple Scalars

Since no good tutorials are available, I put together what I learnt. My approach was step-by-step learning, and that is what I describe here. So on day one, or step one, we will see how to create a simple MIB that has only two read-only scalars and how to implement them in the agent. I have configured the Net-SNMP snmpd AgentX mode to run over TCP; check this post on how to configure that.

You should clone my git repository and follow the README, which describes the changes made. I have created tags in the repository so it is easy to reference what was done when, and for what.

git clone https://github.com/jainvishal/agentx-tutorial.git
git checkout step1_scalars_autogenerated
git checkout step1_scalars_implement

Notes

  1. I used net-snmp 5.7.3 for this tutorial.
  2. My test MIB AGENTX-TUTORIAL-MIB is under the experimental branch (.1.3.6.1.3), so my MIB branch is .1.3.6.1.3.9999.
  3. smilint was used to verify that the MIB is syntactically correct.
  4. mib2c was used with mib2c.scalar.conf to compile the MIB and generate the agent code and makefile (a sketch of the resulting handler follows these notes).
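
For orientation, what mib2c.scalar.conf generates is essentially a registration call plus a GET handler per scalar. Here is a minimal sketch for a hypothetical read-only integer scalar named tutorialCounter; the real names and OIDs come from the code generated for AGENTX-TUTORIAL-MIB.

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

/* Hypothetical OID for a read-only scalar under my .1.3.6.1.3.9999 branch. */
static const oid tutorialCounter_oid[] = { 1, 3, 6, 1, 3, 9999, 1, 1 };

static int
handle_tutorialCounter(netsnmp_mib_handler *handler,
                       netsnmp_handler_registration *reginfo,
                       netsnmp_agent_request_info *reqinfo,
                       netsnmp_request_info *requests)
{
    if (reqinfo->mode == MODE_GET) {
        /* Copy the current value into the response varbind. */
        snmp_set_var_typed_integer(requests->requestvb, ASN_INTEGER, 42);
    }
    return SNMP_ERR_NOERROR;
}

void
init_tutorialCounter(void)
{
    /* Register a read-only scalar; the helper takes care of GETNEXT and the .0 instance. */
    netsnmp_register_scalar(
        netsnmp_create_handler_registration("tutorialCounter",
                                            handle_tutorialCounter,
                                            tutorialCounter_oid,
                                            OID_LENGTH(tutorialCounter_oid),
                                            HANDLER_CAN_RONLY));
}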

Now wait for the second tutorial, where I will implement a simple table using the MIB-For-Dummies configuration.

Net-SNMP configuration for non-root AgentX application

It is imperative to run the application while developing it: you may be testing it, debugging it, or troubleshooting something. By default Net-SNMP uses a named socket for AgentX communication, which does not allow a non-root user to connect, making troubleshooting difficult. There are security reasons for not allowing this kind of wide-open access, so do not set this up in your production environment. There are other ways to control the access, which I will cover in future posts.

To enable AgentX and allow non-root applications/agents to connect to snmpd, you can set up a TCP socket as follows. A TCP socket provides cleaner access and allows easier troubleshooting, e.g. you can capture the network traffic between snmpd and the AgentX application. Update /etc/snmp/snmpd.conf and ensure that the following directives are set for TCP-based AgentX communication.

rocommunity public default # or whatever community string/access control you want
master agentx # Run as an AgentX master agent
agentXSocket tcp:localhost:705 # Listen for localhost network connections instead of /var/agentx/master

Restart snmpd (/etc/init.d/snmpd restart)

The alternative is to set correct permissions on the /var/agentx/master named socket, or whatever path you have configured.
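
On the subagent side, the TCP AgentX socket can also be selected in code before the agent library is initialized. A minimal sketch of such a subagent skeleton, assuming the snmpd configuration above (the subagent name is arbitrary, and the socket string must match the agentXSocket directive):

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

int main(void)
{
    /* Run as an AgentX subagent rather than a master agent. */
    netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
                           NETSNMP_DS_AGENT_ROLE, 1);

    /* Connect to snmpd on tcp:localhost:705 instead of /var/agentx/master. */
    netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID,
                          NETSNMP_DS_AGENT_X_SOCKET, "tcp:localhost:705");

    init_agent("example-subagent");
    /* ... register your MIB handlers here ... */
    init_snmp("example-subagent");

    /* Block and process AgentX requests from the master agent. */
    while (1)
        agent_check_and_process(1);

    return 0;
}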

Capture local network traffic for multi-homed host

If both endpoints of a socket are on the local system, the network traffic will be seen on the loopback interface even if the applications are using a non-loopback interface (e.g. eth0 or wlan0). Capturing data over loopback is quite obvious, but here I am discussing the case where the applications are using one of the external interfaces (e.g. eth0, eth1, wlan0 and so on).

Since both endpoints are on the local system, the kernel will shunt the traffic and not send it to the wire. The data is delivered internally by queuing it to the read queue of the other endpoint. So we cannot capture the traffic on that particular interface, but it is visible on the loopback interface. Let's see an example.

I use netcat to set up our test client and server. nc -kl 9090 will run a server on all interfaces on port 9090, and nc 10.1.1.100 9090 will set up a client. Here 10.1.1.100 is the external IP of my system (wlan0). Now instead of using the interface associated with that IP (in my case wlan0), we have to use the loopback interface lo to capture the traffic, as below.

tcpdump -i lo tcp port 9090

Now anything typed on the client terminal, once sent, will be seen by tcpdump. Problem solved.

Compare files ignoring a field or column using Process Substitution

Let's say the data contains multiple fields/columns separated by a space, a comma, or some other delimiter, and we want to compare two files ignoring a specific column. Let's divide the work into two small problems. The first is to ignore the given field/column.

If we simply want to ignore the first column, we can use one of the following cut constructs.

cut -d',' -f 1 --complement datafile
cut -d',' -f 2- fileName.csv

If we want to ignore a specific one, we can use awk in the following manner, which is much more generalized because you can specify which column to ignore, be it the first, third, or last.
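
The ignoreField.awk script itself is small; a sketch of what it can look like is below. It prints every field except the one passed in as FieldToIgnore, rejoining the rest with the delimiter given via -F.

# ignoreField.awk (sketch): print all fields except the one numbered FieldToIgnore
{
    line = ""
    sep = ""
    for (i = 1; i <= NF; i++) {
        if (i == FieldToIgnore)
            continue
        line = line sep $i
        sep = FS
    }
    print line
}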

This can be used as

awk -F',' -v FieldToIgnore=3 -f ignoreField.awk datafile

The next part is to diff the output after ignoring (read: removing) the column. That is where process substitution comes in handy. Here are two examples.

# ignore 1st column from two csv datafiles while comparing
diff -u <(cut -d, -f 2- datafile1) <(cut -d, -f 2- datafile2)
# ignore column 3 from two csv datafiles while comparing
diff -u <(awk -F',' -v FieldToIgnore=3 -f ignoreField.awk datafile1) <(awk -F',' -v FieldToIgnore=3 -f ignoreField.awk datafile2)

So instead of giving diff two real files, we give it two redirected streams. The same approach can be used to pre-process the files differently (e.g. strip comments or empty lines, or sort two unsorted files before comparing).

See below for more information on Process Substitution.
http://www.tldp.org/LDP/abs/html/process-sub.html
http://wiki.bash-hackers.org/syntax/expansion/proc_subst