The old way was to run iptables-save each time the iptables rules were updated, and then run iptables-restore during system startup. The new way is to use iptables-persistent, which takes care of both.
Install iptables-persistent using the deb package, apt, or the tool of your choice on the Linux distribution of your choice. On Debian, the installer will ask whether to save the currently active IPv4 and IPv6 rules and will set up the necessary restore processes. That's all!
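On Debian/Ubuntu the steps might look like the following sketch; note that the name of the save helper differs between releases.

```shell
# Install the package; the installer prompts to save the current rules
sudo apt-get install iptables-persistent

# After changing rules later, save them again so they survive a reboot
sudo netfilter-persistent save               # newer releases
# sudo /etc/init.d/iptables-persistent save  # older releases
```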
It is imperative to run the application while developing it: you may be testing it, debugging it, or troubleshooting something. By default Net-SNMP uses a named (Unix-domain) socket for AgentX communication, which does not allow a non-root user to connect, making troubleshooting difficult. There are security reasons for not allowing this kind of wide-open access, so do not set this up in your production environment. There are other ways to control the access, which I will cover in future posts.
To enable AgentX and allow non-root applications/agents to connect to snmpd, you can set up a TCP socket as follows. A TCP socket provides cleaner access and allows easier troubleshooting; e.g. you can capture the network traffic between snmpd and the AgentX application. Update /etc/snmp/snmpd.conf and ensure that the following directives are set for TCP-based AgentX communication.
rocommunity public default # or whatever community string/access control you want
master agentx # Run as an AgentX master agent
agentXSocket tcp:localhost:705 # Listen for localhost network connections instead of /var/agentx/master
Restart snmpd (/etc/init.d/snmpd restart)
The alternative is to set the correct permissions on the /var/agentx/master named socket (or whatever path you have configured).
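A sketch of that permissions approach; the group name snmp is an assumption, so use whatever group your subagent runs under.

```
# let members of group 'snmp' connect to the AgentX master socket
sudo chgrp snmp /var/agentx/master
sudo chmod g+rw /var/agentx/master
# the containing directory must also be traversable by the group
sudo chgrp snmp /var/agentx
sudo chmod g+x /var/agentx
```

Remember the socket is recreated when snmpd restarts, so this has to be reapplied (or scripted) after each restart.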
Seconds since the Epoch, or Unix time, is an ever-increasing value and it remains stable (except for leap seconds). Daylight Saving Time changes do not affect it, because Unix time is based on UTC while DST is a local change applied to a timezone. So an application using Unix time as a reference is not directly influenced by DST. I spent a few hours today realizing this.
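This is easy to confirm from the shell: date +%s prints Unix time, and changing the timezone does not change the value.

```shell
# Unix time is timezone-independent: both commands count the same seconds
t_utc=$(TZ=UTC date +%s)
t_ny=$(TZ=America/New_York date +%s)
echo $((t_ny - t_utc))   # 0 (or 1 if the clock ticked between the two calls)
```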
If both endpoints of a socket are on the local system, the network traffic will be seen on the loopback interface even if the applications are using a non-loopback interface (e.g. eth0, wlan0, ...). Capturing data over loopback is quite obvious when the applications use it directly; here I am discussing the case where the applications use one of the external interfaces (e.g. eth0, eth1, wlan0, ...).
Since both endpoints are on the local system, the kernel will shunt the traffic and not send it to the wire. The data is delivered internally by queuing it to the read queue of the other endpoint. So we cannot capture the traffic on that particular interface, but it is visible on the loopback interface. Let's see an example.
I use netcat to set up our test client and server programs.
nc -kl 9090 will run a server on all interfaces on port 9090, and
nc 10.1.1.100 9090 will set up a client. Here 10.1.1.100 is the external IP of my system (wlan0). Now, instead of using the interface name associated with that IP (in my case wlan0), we have to use the loopback interface
lo to capture the traffic, as below.
tcpdump -i lo tcp port 9090
Now anything that is typed on the client terminal will, once sent, be seen by tcpdump. Problem solved.
Recently our organization started to provision private certificates using the Symantec Managed PKI Service. It has a lot of appeal for IT admins because it takes out all user intervention, which always creates support nightmares.
Previously I had direct access to the private key, so it was easy to export it to all my devices and use it for VPN and other secure tasks that needed to verify that I am indeed the real user. Because the Symantec PKI client is not available for Linux, this broke VPN access from my Ubuntu system. Naturally I started to look for ways to export the key out of the Windows system. So here is what I did to get out of the bind.
How to export certificates
First I installed the Symantec PKI client on a Windows 7 system. That was a no-brainer because there was no other choice. I did not try with Windows 8, so YMMV. The main issue was that the Windows certificate manager showed the private key as not exportable. If it were exportable, my quest would have been over right there. But I had to take another step. Mimikatz was the answer: it marks keys exportable and also exports them. Note: the patching it does only lasts for that session; once you reboot the Windows system you have to patch again using mimikatz. I used the latest version, which is 2.0 at the time of writing. Continue reading “mimikatz : Export non-exportable Private certificate from Symantec PKI”
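For reference, a mimikatz 2.0 session goes roughly like this; the store location (My under CURRENT_USER) is an assumption based on where the Symantec client placed my certificate, so adjust it for your system.

```
mimikatz # privilege::debug
mimikatz # crypto::capi
mimikatz # crypto::cng
mimikatz # crypto::certificates /systemstore:CURRENT_USER /store:My /export
```

crypto::capi and crypto::cng patch the current session so keys are reported as exportable; the export step then writes the certificates and their private keys out as files you can copy to another system.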
Update /etc/postfix/main.cf and add the name of your outgoing/relaying mail host as “relayhost”. Ensure that the relay server accepts email from your host first.
e.g. if the outgoing relay is mailhost.xyzserver.com, the postfix configuration should look like the following.
# INTERNET OR INTRANET
# The relayhost parameter specifies the default host to send mail to
# when no entry is matched in the optional transport(5) table. When
# no relayhost is given, mail is routed directly to the destination.
# On an intranet, specify the organizational domain name. If your
# internal DNS uses no MX records, specify the name of the intranet
# gateway host instead.
# In the case of SMTP, specify a domain, host, host:port, [host]:port,
# [address] or [address]:port; the form [host] turns off MX lookups.
# If you're connected via UUCP, see also the default_transport parameter.
#relayhost = $mydomain
#relayhost = [gateway.my.domain]
#relayhost = [mailserver.isp.tld]
#relayhost = uucphost
#relayhost = [an.ip.add.ress]
relayhost = mailhost.xyzserver.com
After that, restart postfix.
service postfix restart
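To confirm the setting took effect and test a delivery through the relay (the recipient address below is a placeholder):

```
# show the active value of relayhost
postconf relayhost

# send a test message via the relay, verbosely
echo "relay test" | sendmail -v user@example.com
```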
Let's say you are on a system where top is not available (nor other similar tools). Sounds incomprehensible, but believe me, there are systems which do not have any of those great tools. So how do you find the process eating up the most CPU? The humble
ps command provides
pcpu, which is the CPU percentage used by a process. Here is how.
ps -eo pcpu,pid,ruser,args | sort -rn -k1 | less
This will list, with the highest
pcpu first, the “pid” that is taking up the most CPU along with the
ruser (real user) and
args. So there you have it. (Note the -n flag to sort: without it the percentages are compared as text, so 10.0 would sort before 2.0.)
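On systems with a procps-style ps (an assumption; the minimal systems discussed above may lack it), ps can do the sorting itself:

```shell
# let ps sort by CPU usage, highest first; head keeps the top 10 plus the header
ps -eo pcpu,pid,ruser,args --sort=-pcpu | head -11
```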
WebEx will not work on Ubuntu 12.04 64-bit with the default configuration: it requires 32-bit Java. The WebEx control window launches, but desktop sharing, application sharing, whiteboard, etc. do not show up. I could neither see other people's shared content nor share mine, even when I was the host of the meeting.
Starting Firefox from the command line shows ELFCLASS32 errors from the WebEx shared objects, so it was clear that WebEx would not work on a 64-bit system as-is and needs 32-bit Java. Because I use a 64-bit system, I did not want to downgrade to a 32-bit version just for the sake of WebEx.
In brief, these three steps cover the fix.
- Install 32-bit Oracle Java locally. Oracle Java is a must; OpenJDK will not cut it. Warning: because it is a local installation, you will need to keep updating it manually as new Java releases become available. Recently Oracle has shipped many releases in quick succession addressing major security issues, so this is a real concern.
- Install Firefox locally so it can be configured to use this 32-bit Java. Add a different profile and use a different theme so it does not conflict with the native Firefox and clearly stands out if both are running.
- (Optional) Add shortcut in Unity HUD for quick access.
Continue reading “Setup WebEx on 64 bit Ubuntu 12.04 using 32 bit Oracle Java”
Here is how to add a custom script to the Unity HUD/Dash for quick access. The /usr/share/applications directory holds all the shortcuts for the Unity desktop, so create a file named “mycustomscript.desktop” (or any_name_you_like.desktop) there with information about the custom script, including a description such as Comment=My Custom Script for X, Y and Z. Additionally, an icon can be added by pointing to an image. Files in /usr/share/applications have to be created as root.
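A minimal .desktop file could look like the following; the Name, Exec path, and Icon are placeholders for your own script.

```
[Desktop Entry]
Type=Application
Name=My Custom Script
Comment=My Custom Script for X, Y and Z
Exec=/usr/local/bin/mycustomscript.sh
Icon=/usr/local/share/icons/mycustomscript.png
Terminal=false
```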
After creating the file, run sudo update-desktop-database, and you will be able to use the Unity HUD to invoke the custom script. Note that you have to run this command again each time you update a .desktop file.
Once a user starts a VPN client to connect to the company extranet, all network traffic is diverted into the VPN tunnel: the VPN client sets up routing such that everything goes down the tunnel. A split tunnel can change that by keeping Internet traffic out of the tunnel and directing only extranet traffic into it. But it comes with a few risks of its own. Let's review the concept for a minute.
The VPN tunnel can be configured to work in two modes.
- Mandatory (default)
When a client tunnel is established in mandatory mode, all client traffic is tunneled through it. This is the default VPN mode, so accessing yahoo.com will go through the VPN tunnel to the company extranet, which will then route it via its own Internet connection after applying access policy, etc.
- Split Tunneled mode
Split tunneling allows configuring specific network routes that are tunneled and sent to the client's extranet adapter; any other traffic goes out through the local PC's Ethernet or dial-up adapter. So split tunneling allows the user to access the Internet or print locally even while the system is tunneled into the company extranet. But this comes with a security issue, because it opens a backdoor into the secure office network from the Internet via the home system. A hacker can exploit the home system and use it as a jump box to get into the company network, or an infected home system can carry that infection into the office network. That is why organizations want VPN users to ensure they are up to date and have anti-virus installed, and most will provide tightly controlled VPN clients that enforce the default mode. Continue reading “vpn : Split Tunnel Concept”
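On a Linux client the two modes come down to the routes the VPN client installs. In this sketch tun0 is the tunnel device and 10.0.0.0/8 is an assumed extranet range; real clients push these routes automatically.

```
# Mandatory mode: the default route points into the tunnel
ip route replace default dev tun0

# Split tunnel: only the extranet prefix goes into the tunnel;
# the default route stays on the local interface (e.g. wlan0)
ip route add 10.0.0.0/8 dev tun0
```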