Here I explain how to use a physical disk partition for a guest OS under VirtualBox; this is also called raw disk access. My use case was to run a physical Windows XP installation as a guest OS under Linux, while still being able to boot the machine natively into the same Windows installation when needed. My system runs Ubuntu 11.04 on a Core 2 Duo with 3 GB of memory, two hard disks (one with Ubuntu, the other with the Windows XP installation), and VirtualBox 4.0.4. The process is simple, but it took quite a while to collect and test all the information and steps. In a nutshell: first identify the partition to use, then make sure the user who will run the VM has access to it. After that an MBR has to be created, and finally a VMDK file is created that points at the raw disk.
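For context, on a setup like mine the core commands boil down to something like the following sketch (device names, partition numbers and file names are examples, not the exact ones from my system):

```shell
# Give my user access to the raw disk device (one common way; a udev
# rule or a chmod on the device node are alternatives).
sudo usermod -aG disk $USER     # takes effect after logging out and back in

# Create a VMDK that maps only partition 1 of the second disk,
# which can then be attached to the VM like any other disk image.
VBoxManage internalcommands createrawvmdk \
    -filename ~/winxp.vmdk -rawdisk /dev/sdb -partitions 1
```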
The Apache web server is written entirely in C. So how does one develop a module using C++? Simple:
- Generate the stub in C using the apxs tool (e.g. apxs -n foo -g).
- Use extern "C" in the module file to declare the handlers that Apache will invoke.
- LoadFile is your friend; it will load libstdc++.so into the server.
Here is a little more detail. Apache now ships with the apxs tool, which generates a simple stub module along with compilation makefiles etc.
First I define Foo.h, which contains a simple class. For this howto I keep things simple by not splitting the code into multiple files, so Foo.h only has what the handler for the incoming request needs to produce "Hello world from foo".
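The post's original listing is not reproduced here; a minimal sketch of what Foo.h could look like (the class and method names are illustrative):

```cpp
// Foo.h -- minimal sketch (the post's original listing isn't reproduced here).
#pragma once
#include <string>

class Foo {
public:
    // Body text that the Apache handler will send back to the client.
    std::string greeting() const { return "Hello world from foo"; }
};
```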
Then I renamed mod_foo.c to mod_foo.cpp and updated the content. I declared a global object of the Foo class; it could have been a singleton, but for now I kept it simple. The important point to note is the use of extern "C" (line 8 of the listing) so that the C++ compiler generates names compatible with C. I also register my hook using a lambda, which is not strictly necessary.
Next, update the makefile to include CXX and CXXFLAGS so libtool can find them.
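The generated makefile varies between Apache versions; the gist is simply to define the C++ toolchain variables so libtool picks them up (values below are illustrative):

```make
# Added so libtool/apxs can build the .cpp source (illustrative values).
CXX      = g++
CXXFLAGS = -Wall -O2 -fPIC
```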
Now comes configuring Apache to register the module and load the necessary dependencies. This will allow you to visit /foo on your httpd web server (e.g. http://theunixtips.com/foo) and see the code in action.
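The httpd.conf additions would look something like this (the libstdc++ path and module path are examples; adjust for your distribution):

```apache
# Load the C++ runtime before the module that needs it (path is illustrative).
LoadFile   /usr/lib/x86_64-linux-gnu/libstdc++.so.6
LoadModule foo_module modules/mod_foo.so

# Map /foo to the "foo" handler the module registered.
<Location /foo>
    SetHandler foo
</Location>
```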
make reload will compile, install and bounce Apache. Alternatively, you can run these operations separately and upload the module to your Apache installation. Now go ahead, start the browser and visit the page to see your code running in all its glory.
Once upon a time, computing happened on PCs and servers. Bigger organizations like banks set up server farms in data centers for their computing needs. Along came big companies like Amazon, AT&T and Microsoft, which built their own server farms and started to rent out the computing power. And one day Big Data showed up. Small companies that could not afford to maintain their own data centers, but had a great idea, started out in the Cloud. That was the dawn of the Cloud as we know it, and the rest is history. So computing moved from the devices at the edge of the Internet to the "Cloud".
Fast forward a few years. Smartphones and autonomous devices ushered in a new age. Mobile phones became far more powerful than the high-end PCs of 15 years earlier, providing a good amount of processing power. Autonomous components needed answers right then and there, based on live events, with no margin for error or delay. If a car's autopilot detects an oncoming car in its path, it has to decide the next course of action right away. There is no time to send a message to the Cloud and wait. Or, god forbid, there is no signal. So the computing has to be done in situ. That is Edge computing.
Now this does not mean that the Cloud is on its way out; both have their own niches. An autonomous device has to react to real-life events in the wild. Think of nature: critters react to their environment. They do not need a central hub to guide them or help them navigate; they look around, find their way to food, and avoid colliding with other objects or insects. But that does not mean the queen or alpha counterpart is useless. They guide the colonies in case of disaster, or when finding new abodes and prospects. That is the Cloud, which sees everything and decides the next best course of action.
So for a quick response, Edge computing is a must. But on a grand scale you need to bring the data to central computing, the Cloud, to analyze it, find patterns, and decide the best course of action.
And then they both lived happily ever after!
I have been thinking of putting this out for quite some time. In the future this will find its home in Stack Overflow's libcurl documentation, which is still in the proposal stage. This is very high level and only provides basic guidelines; there are many options and cases that are better documented on the libcurl website.
Check https://curl.haxx.se/libcurl/c/libcurl-tutorial.html, which provides many examples. The libcurl documentation also provides small code snippets for each API, and for some it even gives high-level steps like the ones I mention below.
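For reference, the basic init/setopt/perform/cleanup sequence from the libcurl easy interface looks roughly like this minimal fetch (the URL is a placeholder; error handling is trimmed to the essentials):

```c
/* Minimal libcurl fetch sketch; compile with -lcurl. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* With no write callback set, the body goes to stdout. */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();
    return 0;
}
```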
Third in this series is the implementation of SNMP traps, the method that is the main strength of SNMP. Traps provide an event-based system where the NMS relies heavily on the monitored stations to report when something of interest has happened.
mib2c again provides an easy way to implement traps; this is actually the easiest of the three in this series. As I said before, clone my repository and check out the tags to see what was done and how. The three commands of interest are:
git clone https://github.com/jainvishal/agentx-tutorial.git
git checkout step3_traps_autogenerated
git checkout step3_traps_implement
I used mib2c with mib2c.notify.conf to compile the MIB and generate C code. By default it uses SNMPv2 for traps, and I leave it as is. The implementation is as simple as copying the data into the right variables when the time comes, and the traps will be delivered.
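Once the generated skeleton is in place, firing an SNMPv2 trap from a subagent follows the usual Net-SNMP pattern; a hedged sketch (the trap OID below is hypothetical, not from my MIB):

```c
/* Illustrative sketch of sending an SNMPv2 trap from an AgentX subagent. */
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

/* snmpTrapOID.0 -- mandatory first varbind of every SNMPv2 trap. */
static oid snmptrap_oid[] = { 1, 3, 6, 1, 6, 3, 1, 1, 4, 1, 0 };
/* Hypothetical trap OID under an experimental branch. */
static oid mytrap_oid[]   = { 1, 3, 6, 1, 3, 9999, 0, 1 };

void fire_example_trap(void)
{
    netsnmp_variable_list *vars = NULL;

    snmp_varlist_add_variable(&vars, snmptrap_oid, OID_LENGTH(snmptrap_oid),
                              ASN_OBJECT_ID,
                              (u_char *) mytrap_oid, sizeof(mytrap_oid));

    send_v2trap(vars);        /* hands the trap off to the master agent */
    snmp_free_varlist(vars);
}
```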
Second in the series on implementing an AgentX agent, I describe a simple table implementation. Net-SNMP provides multiple implementations, from easy ones to more elaborate ones; the approach should be chosen based on need. For most needs, MIB For Dummies works (mib2c -c mib2c.mfd.conf). Points to consider when using MIB For Dummies:
- Data can be cached, and small windows where cached data is stale compared to real-world values are acceptable (or the application can implement logic to expire the cache when the real value changes)
- Data reloads are not resource-intensive
- The data set for the table is small, so the memory footprint of copying it into the cache will be small
Since no good tutorials are available, I put together what I learnt. My approach was step-by-step learning, and that is what I describe here. So on day one, or step one, we will see how to create a simple MIB that has just two read-only scalars and how to implement them in the agent. I have configured the net-snmp agentx mode to run over TCP; check this post on how to configure that.
You should clone my git repository and follow the README, which describes the changes made. I have created tags in the repository so it is easy to see what was done, when, and for what.
git clone https://github.com/jainvishal/agentx-tutorial.git
git checkout step1_scalars_autogenerated
git checkout step1_scalars_implement
- I used net-snmp 5.7.3 for this tutorial.
- My test MIB AGENTX-TUTORIAL-MIB is under the experimental branch (.1.3.6.1.3), so my MIB branch is .1.3.6.1.3.9999.
- smilint was used to verify that the MIB was syntactically correct.
- mib2c was used with mib2c.scalar.conf to compile the MIB and generate the code, makefile and agent code.
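The two commands look roughly like this (the MIB file name and node name here are assumptions; see the repository for the real ones):

```shell
# Verify the MIB syntax.
smilint -l 3 AGENTX-TUTORIAL-MIB.txt

# Generate scalar handler code; the argument is the MIB node to expand.
env MIBS=+AGENTX-TUTORIAL-MIB mib2c -c mib2c.scalar.conf agentxTutorial
```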
Now wait for the second tutorial, where I will implement a simple table using the MIB-For-Dummies configuration.
It is imperative to run the application while developing it, whether you are testing, debugging or troubleshooting something. By default Net-SNMP uses a named (Unix-domain) socket for AgentX communication, which does not allow a non-root user to connect, making troubleshooting difficult. There are security reasons for not allowing this kind of wide-open access, so do not set this up in your production environment. There are other ways to control the access, which I will describe in future posts.
To enable AgentX and allow non-root applications/agents to connect to snmpd, you can set up a TCP socket as follows. A TCP socket provides cleaner access and allows easier troubleshooting, e.g. you could capture the network traffic between snmpd and the AgentX application. Update /etc/snmp/snmpd.conf and ensure that the following directives are set for TCP-based AgentX communication.
rocommunity public default      # or whatever community string/access control you want
master agentx                   # run as an AgentX master agent
agentXSocket tcp:localhost:705  # listen for localhost TCP connections instead of /var/agentx/master
Restart snmpd (/etc/init.d/snmpd restart)
The alternative is to set the correct permissions on the /var/agentx/master named socket, or whatever path you have configured.
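With the directives above in place and snmpd restarted, a quick sanity check might look like this (community string and port taken from the config fragment above):

```shell
# Confirm snmpd answers queries at all.
snmpget -v2c -c public localhost SNMPv2-MIB::sysDescr.0

# Confirm snmpd is listening for AgentX connections on TCP port 705.
nc -z localhost 705 && echo "AgentX TCP socket reachable"
```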
The Suspend button is not visible alongside the Shutdown and Logout buttons in Gnome 3. But pressing the ALT key changes the Shutdown button into a Suspend/Pause button. No need to install a plugin to find it; just a keystroke.
Seconds since the Epoch, or Unixtime, is an ever-increasing value and remains stable (except for leap seconds). Daylight Saving Time changes do not affect it, because Unixtime is based on UTC while DST is a local adjustment applied to the timezone. So an application using Unixtime as a reference is not directly influenced by DST. I spent a few hours today realizing this.
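A quick way to convince yourself of this on a POSIX system (a sketch using setenv/tzset; changing TZ alters what localtime() reports, but not what time() returns):

```c
/* Sketch: time() returns seconds since the Unix epoch (UTC); changing the
 * local timezone (and hence its DST rules) does not change the value. */
#include <stdlib.h>
#include <time.h>

int unixtime_ignores_tz(void)
{
    time_t before = time(NULL);

    setenv("TZ", "America/New_York", 1);  /* a DST-observing zone */
    tzset();
    time_t t_ny = time(NULL);

    setenv("TZ", "UTC", 1);
    tzset();
    time_t t_utc = time(NULL);

    /* Allow a couple of seconds for the calls themselves. */
    return (t_ny - before) <= 2 && (t_utc - t_ny) <= 2;
}
```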
If both end-points of a socket are on the local system, the network traffic will be seen on the loopback interface even if the applications are using a non-loopback interface (e.g. eth0, wlan0, …). Capturing data over loopback is quite obvious; but here I am discussing the case where the applications are using one of the external interfaces.
Since both end-points are on the local system, the kernel will shunt the traffic rather than send it out on the wire: the data is delivered internally by queuing it on the read queue of the other end-point. So we cannot capture the traffic on that particular interface; it is visible on the loopback interface instead. Let's see an example.
I use netcat to set up our test client and server. nc -kl 9090 runs a server on all interfaces on port 9090, and nc 10.1.1.100 9090 sets up a client; here 10.1.1.100 is the external IP of my system (wlan0). Now, instead of the interface name associated with that IP (in my case wlan0), we have to use the loopback interface lo to capture the traffic, as below.
tcpdump -i lo tcp port 9090
Now anything typed on the client terminal, once sent, will be seen by tcpdump. Problem solved.