Here I explain how to use a physical disk partition for a guest OS under VirtualBox. This is also called raw disk access in VirtualBox. My use case was to run Windows XP as a guest OS from a physical installation under Linux, while still being able to boot the machine natively into that same Windows installation when needed. My system runs Ubuntu 11.04 on a Core 2 Duo with 3GB memory, two hard disks (one with Ubuntu, the other with the Windows XP installation) and VirtualBox 4.0.4. The process is simple, but it took quite a while to collect and test all the information and steps. In a nutshell: first identify the partition you will use, then make sure the user who will run the VM has access to it, then create an MBR, and finally create a VMDK file that points at the raw disk. Keep reading for the full process.
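The steps above can be sketched as the following commands (a sketch, assuming the Windows installation is on partition 1 of /dev/sdb and that your distribution grants raw device access via the `disk` group; adjust device names and paths to your system):

```shell
# 1. Identify the partition that holds the Windows installation
sudo fdisk -l /dev/sdb

# 2. Give your user access to the raw device (log out and back in
#    for the group change to take effect)
sudo usermod -a -G disk $USER

# 3. Create a VMDK that points at the raw partition, with its own MBR
VBoxManage internalcommands createrawvmdk \
    -filename ~/winxp.vmdk \
    -rawdisk /dev/sdb -partitions 1 \
    -mbr ~/winxp.mbr
```

The resulting winxp.vmdk is then attached to the VM like any other disk image in the VirtualBox storage settings.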
For a large repository with many projects, dependencies and contributors, getting everything to work nicely is not an easy task. Standardization is the biggest issue. The idea behind `otto` is to get any project up and running with the tooling needed for a modern C or C++ codebase while requiring minimal cmake/makefile coding.
The code for `otto` is on GitHub: http://github.com/jainvishal/otto
Features in a nutshell:
- Encapsulate complexities of cmake in module files
- Very small and simple makefiles for end user projects
- Handle external dependencies between projects
- Out of source compilation
- Support for installing both ‘release’ and ‘debug’ binaries and shared objects
- Support for an installation target, so binaries can be installed with a single command
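As a point of reference, the plain-cmake workflow that `otto` wraps looks like this (a generic sketch of standard cmake usage, not `otto`'s own syntax; the install prefix is illustrative):

```shell
# Out-of-source build: all generated files stay under build/
mkdir -p build && cd build

# Configure a 'Debug' or 'Release' tree from the same sources
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt/myproj ..

# Compile, then install binaries and shared objects under the prefix
make && make install
```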
The old way was to do an `iptables-save` each time the iptables rules were updated, and then an `iptables-restore` during system startup. The new way is to use iptables-persistent, which takes care of both.
Install iptables-persistent using the deb package/apt or the tool of your choice on your Linux distribution. On Debian, the installer will ask whether to save the currently active IPv4 and IPv6 rules and will set up the necessary restore hooks. That's all!
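On a Debian/Ubuntu system the whole workflow is (a sketch; the `netfilter-persistent` wrapper ships with current versions of the package):

```shell
# Install; the package setup offers to save the current rules
sudo apt-get install iptables-persistent

# After changing rules later, persist them again
sudo netfilter-persistent save

# Rules are stored here and restored automatically at boot
ls /etc/iptables/rules.v4 /etc/iptables/rules.v6
```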
Recently I bought an HP Spectre x360 Convertible that has a fantastic, crisp 4K screen. Using CTRL+ALT+F1 (up to F6, usually) one can access a virtual console. With 4K screens now becoming common, the default console font becomes really minute, and one needs to increase it somehow. gnome-terminal provides the shortcut SHIFT+CTRL and the plus key (+) to increase the font size when needed, or profiles that can be switched. But for the console something more is necessary. The font used in the console is monospaced and is not the same as the TTF or OTF fonts used for documents and browsers; in addition, there is only a limited selection of fonts and sizes to choose from. So here is how to do it. Go to a console and log in. The following command has to be run as root or using sudo.
sudo dpkg-reconfigure console-setup
Leave the first two screens as they are. On the third you will see a list of font styles: Fixed, Terminus (and its variations), VGA and a few others. Fixed is the default, and at least for me even its biggest size was still too small. So I used VGA. Terminus looks fantastic, but as the warning says it is not well suited for programming because of the way some characters are rendered. Select VGA and move to the next screen. Select the biggest font and press Enter. Within a few seconds the change will take effect. Try other font styles and sizes to suit your needs.
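The same choice can be made permanent without the interactive dialog by editing `/etc/default/console-setup` (a sketch; on my system the VGA face tops out at 16x32):

```
# /etc/default/console-setup (excerpt)
FONTFACE="VGA"
FONTSIZE="16x32"
```

Then run `sudo setupcon` to apply the change immediately.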
For a while I was mulling over buying an ultrabook: 13.3″ screen, less than 3 lb and powerful. I came across the HP Spectre x360 convertible, which supports up to an 8th-gen Core i7, 16GB RAM, up to a terabyte of SSD, a beautiful UHD touch screen, Tilt Pen support, a fingerprint scanner and, best of all, tablet-like use where one can fold the laptop a full 360 degrees.
The next step was to find out whether it can run Linux. It certainly can, but it requires non-free drivers, for example for the Intel wifi. So I tried a live CD of Ubuntu 18.04, and voilà, everything worked. First I went back to Windows and shrank the Windows installation to create a disk partition for Linux, then installed Ubuntu. The Spectre boots so fast that the first time I thought it was only waking from standby, but it was a full boot; it felt like instant-on. Another great thing is the battery life: so far I am able to get more than a full day's work done on a single charge. The keyboard is great. The only things that do not work on Ubuntu yet are the fingerprint scanner and the HP IR camera, but those are things I can live without for now. The Intel wifi is so fast that I finally have a wifi device at home that can give me the 100 Mbps download advertised by Optimum (my ISP).
It even came with a folio/soft cover to protect the device from scratches when traveling. That was one unpublicized feature (or maybe I missed it), and a pleasant surprise.
Now to the cost. I loaded the system to the max except for the SSD, so I got a Core i7 8th gen, 16GB RAM, 512GB SSD, UHD touch screen and HP Tilt Pen. The original cost was about USD 1550. With discounts the price dropped to 1333.33 (including tax etc.). I then went to Raise.com and found a few gift cards selling cheap, saving another USD 50. Shipping happened about a week after the order; during that time the price of the same device dropped by USD 30, so I called HP for a price match and they obliged. I ended up paying only about USD 1300. And because I used Ebates, which had a 10% cashback offer, I got another $125 back (on the pre-tax price). You cannot go wrong with that. I selected regular shipping, but FedEx ended up delivering two days after it shipped. So there you have it: you can have the cake and eat it too :-).
I will keep adding posts here about my experience with Linux and the tweaks I had to perform in order to use the hardware to its full scope.
The Apache web server is written entirely in C. So how does one develop a module in C++? Simple.
- Generate the stub in C using the apxs tool (e.g. `apxs -n foo -g`)
- Use `extern "C"` in the module file to define the handlers that Apache will invoke.
- `LoadFile` is your friend; it will load libstdc++.so.
Here is a little more detail. Apache now comes with the apxs tool, which generates a simple stub along with the compilation makefiles etc.
First I define Foo.h, which contains a simple class implementation. For this howto I keep things simple by not splitting the code into multiple files. Foo.h only has the handler for the incoming request and produces "Hello world from foo".
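The original listing is not reproduced here, so below is a minimal sketch of what such a Foo.h could look like (the class and method names are reconstructions, not the post's exact code):

```cpp
// Foo.h -- hypothetical reconstruction of the handler class
#pragma once
#include "httpd.h"
#include "http_protocol.h"

class Foo {
public:
    // Called for each request routed to this module's handler
    int handle(request_rec* r) {
        ap_set_content_type(r, "text/plain");
        ap_rputs("Hello world from foo", r);
        return OK;
    }
};
```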
Then I renamed mod_foo.c to mod_foo.cpp and updated its content. I declared a global object of the Foo class; it could have been a singleton, but for now I kept it simple. The important point is the use of `extern "C"` so the C++ compiler generates names compatible with C. I register my hook using a lambda, which is not strictly necessary.
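A sketch of the corresponding mod_foo.cpp (a reconstruction, not the post's exact listing; a capture-less lambda converts to the plain C function pointer that `ap_hook_handler` expects):

```cpp
// mod_foo.cpp -- hypothetical reconstruction
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include <cstring>
#include "Foo.h"

static Foo foo;  // global instance, as described above

static void foo_register_hooks(apr_pool_t*) {
    // A capture-less lambda decays to int(*)(request_rec*)
    ap_hook_handler(
        [](request_rec* r) -> int {
            if (std::strcmp(r->handler, "foo") != 0)
                return DECLINED;   // not ours, let Apache continue
            return foo.handle(r);
        },
        nullptr, nullptr, APR_HOOK_MIDDLE);
}

// extern "C" keeps the exported module symbol un-mangled
extern "C" {
module AP_MODULE_DECLARE_DATA foo_module = {
    STANDARD20_MODULE_STUFF,
    nullptr, nullptr, nullptr, nullptr, nullptr,
    foo_register_hooks
};
}
```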
Next, update the makefile to define CXX and CXXFLAGS so libtool can find them.
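For example, the apxs-generated Makefile can be extended with something like this (a sketch; exact flags will vary with your compiler and Apache build):

```make
# Added to the apxs-generated Makefile
CXX      = g++
CXXFLAGS = -std=c++11 -fPIC -Wall
```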
Now comes configuring Apache to register the module and load the necessary dependencies. This will let you visit /foo on your httpd server (e.g. https://theunixtips.com/foo) and see the code in action.
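The httpd configuration could look like this (a sketch; the libstdc++ path varies by distribution):

```
# Load the C++ runtime first, then the module itself
LoadFile   /usr/lib/x86_64-linux-gnu/libstdc++.so.6
LoadModule foo_module modules/mod_foo.so

# Route /foo to our handler
<Location "/foo">
    SetHandler foo
</Location>
```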
`make reload` will compile, install and bounce Apache. Alternatively, you can run these operations separately and upload the module to your Apache installation yourself. Now go ahead, start the browser and visit the page to see your code running in all its glory.
Once upon a time, computing happened on PCs and servers. Bigger organizations like banks set up server farms in data centers for their computing needs. Along came big companies like Amazon, AT&T and Microsoft, which set up their own server farms and started to rent out the computing power. And one day Big Data showed up. Small companies that could not afford to maintain their own data centers but had a great idea started out in the Cloud. That was the dawn of the Cloud as we know it, and the rest is history. So computing moved from the devices at the edge of the Internet into the "Cloud".
Fast forward a few years. Smartphones and autonomous devices brought in a new age. Mobile phones became far more powerful than 15-year-old high-end PCs, providing a good amount of processing power. Autonomous components need an answer right then and there, based on live events; there is no margin for error or delay. If the autopilot in a car detects an oncoming car in its track, it has to decide the next course of action right away. There is no time to send a message to the Cloud and wait, or, God forbid, there may be no signal. So the computing has to be done in situ. That is Edge computing.
Now this does not mean that the Cloud is on its way out; both have their own niches. An autonomous device has to react to real-life events in the wild. Think of nature: critters react to their environment. They do not need a central hub to guide them or help them navigate; they look around, find their way to food and avoid colliding with other objects or insects. But that does not mean the queen or alpha counterpart is useless. They guide the colonies in case of disaster or when finding new abodes and prospects. That is the Cloud, seeing everything and deciding the next best course of action.
So for a quick decision, Edge computing is a must. But on a grand scale you need to bring the data to central computing, the Cloud, to analyze it, find patterns and decide the best course of action.
And then they both lived happily ever after!
I have been thinking of putting this out for quite some time. In the future it may find its home in the stackoverflow libcurl documentation, which is still in the proposal state. This is very high level and only provides basic guidelines; there are many options and cases that are better documented on the libcurl website.
Check https://curl.haxx.se/libcurl/c/libcurl-tutorial.html, which provides many examples. The libcurl documentation also provides small code snippets for each API, and for some it even gives high-level steps just like the ones I mention below.
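As an illustration of the usual easy-interface flow (a minimal sketch using only well-known libcurl calls; the URL is a placeholder and error handling is abbreviated):

```cpp
#include <curl/curl.h>
#include <cstdio>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);   // once per program

    CURL* h = curl_easy_init();              // one handle per transfer
    if (h) {
        curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);

        CURLcode rc = curl_easy_perform(h);  // blocking transfer
        if (rc != CURLE_OK)
            std::fprintf(stderr, "transfer failed: %s\n",
                         curl_easy_strerror(rc));
        curl_easy_cleanup(h);
    }
    curl_global_cleanup();
    return 0;
}
```

By default the response body goes to stdout; a `CURLOPT_WRITEFUNCTION` callback captures it instead.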
Third in this series is the implementation of SNMP traps, the mechanism that is the main strength of SNMP. Traps provide an event-based system in which the NMS relies on the monitored stations to tell it when something of interest has happened.
mib2c again provides an easy way to implement traps; this is actually the easiest of the three in this series. As before, clone my repository and check out the tags to see what was done and how. The following are the three commands of interest.
git clone https://github.com/jainvishal/agentx-tutorial.git
git checkout step3_traps_autogenerated
git checkout step3_traps_implement
I used mib2c with mib2c.notify.conf to read the MIB and generate the C code. By default it uses SNMPv2 traps, and I left that as is. The implementation is as simple as copying the data into the right variables when the time comes, and the traps will be delivered.
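The generation step looks like this (a sketch; the notifications node name is hypothetical and must match the node defined in your MIB):

```shell
# Generate trap-sending code from the notifications subtree of the MIB
mib2c -c mib2c.notify.conf agentxTutorialNotifications
```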
Second in this series on implementing an AgentX agent, I describe a simple table implementation. Net-SNMP provides multiple implementations, from easy ones to more elaborate ones; the approach should be chosen based on the need. For most needs, MIBs For Dummies works (`mib2c -c mib2c.mfd.conf`). Points to consider when using MIBs For Dummies:
- Data can be cached, and small time windows where the data may be stale compared to the real-world values are acceptable (or the application can implement logic to expire the cache when the real value changes)
- Data reloads are not resource intensive
- The data set for the table is small, so the memory footprint of copying it into the cache is small
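When those points hold, code generation for the table is a single command (a sketch; the table node name is hypothetical and must match your MIB):

```shell
# Generate a MIBs-For-Dummies style table implementation skeleton
mib2c -c mib2c.mfd.conf agentxTutorialTable
```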
Since no good tutorials are available, I put together what I learnt. My approach was step-by-step learning, and that is what I describe here. So on day one, or step one, we will see how to create a simple MIB that has only two read-only scalars, and how to implement them in the agent. I have configured the net-snmp agent's AgentX mode to run over TCP; check this post on how to configure that.
You should clone my git repository and follow the README, which describes the changes made. I have created tags in the repository so it is easy to see what was done when, and why.
git clone https://github.com/jainvishal/agentx-tutorial.git
git checkout step1_scalars_autogenerated
git checkout step1_scalars_implement
- I used net-snmp 5.7.3 for this tutorial.
- My test MIB AGENTX-TUTORIAL-MIB is under the experimental branch (.1.3.6.1.3), so my MIB branch is .1.3.6.1.3.9999.
- smilint was used to verify that the MIB was syntactically correct.
- mib2c was used with mib2c.scalar.conf to read the MIB and generate the code, makefile and agent code.
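Those two steps look roughly like this (a sketch; the MIB file and scalars node names are hypothetical and must match your MIB):

```shell
# Verify the MIB syntax at the strictest warning level
smilint -l 6 AGENTX-TUTORIAL-MIB.txt

# Generate scalar handler code from the MIB node
mib2c -c mib2c.scalar.conf agentxTutorialScalars
```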
Now wait for the second tutorial, where I will implement a simple table using the MIBs-For-Dummies configuration.