Wednesday, April 8, 2020

Putty and double bastion host tunneling

I've always found complex ssh tunneling to be a pain in Windows.  Unfortunately, I'm stuck with it due to company mandate.  This post is as much about sharing the info as it is about documenting it so that I can come back and refer to it when I forget everything here.

So, the use case is this.  You have a host that you have to hop through in order to get to other hosts, but there may be more hosts that you can't get to unless you hop through yet another host.

For example, if I want to access my Linux workstation at the office, I have to hop through two hosts: first my ssh bastion host, then another host that also has an IP on the office network, and from there to my desktop.

Even to experienced tunnelers, that's daunting, especially from Windows.

So, here goes.  Here's what the values in my example mean:

My workstation: 172.16.0.99
The public facing bastion host: bastion.example.com
The internal server: server1.example.com
My username on my workstation: myusername
My username when accessing company servers: companyusername
My WSL username: myWSLusername


Putty Configuration

  1. Create a new session in Putty.
  2. Hostname: 172.16.0.99
  3. Port: 22
  4. Category → Connection → Data
    • Auto-login username: myusername
  5. Category → Connection → Proxy
    • Proxy type: Local
    • Telnet command, or local proxy command:
      c:\progra~1\putty\plink.exe -ssh -agent -A -l companyusername server1.example.com -nc %host:%port -proxycmd "c:\progra~1\putty\plink.exe companyusername@bastion.example.com -l companyusername -agent -A -nc server1.example.com:22"
  6. Category → Connection → SSH
    • Enable compression
    • Notice that I didn't use the compression option -C in my plink commands in the previous step.  When tunneling ssh traffic, you should only enable compression in one place so that you're not compressing traffic only to have other segments attempt to compress the already-compressed data.
  7. Category → Connection → SSH → Auth
    • Attempt authentication using Pageant
    • Allow agent forwarding
  8. Category → Connection → SSH → X11
    • With WSL and VcXsrv X Server installed, you can run gui apps
    • X11 Forwarding: Enable X11 forwarding
    • X display location: 127.0.0.1:0.0 (Look in VcXsrv server's log to confirm this value.)
    • Remote X11 authentication protocol: MIT-Magic-Cookie-1
    • X authority file for local display:
      %LOCALAPPDATA%\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\home\myWSLusername\.Xauthority
    • (You might need to run xauth generate $DISPLAY from a WSL bash shell initially to get the X authority file seeded; see the sketch after this list.)
  9. Save new Putty session
  10. Launch new Putty session
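
For that last X11 note, a minimal sketch of seeding the .Xauthority file from the WSL bash shell might look like this. The DISPLAY value is an assumption; confirm it against your VcXsrv log, and VcXsrv needs to be running for the generate step to work.

# From the WSL bash shell; the DISPLAY value is an assumption -- confirm it in the VcXsrv log
export DISPLAY=127.0.0.1:0.0
xauth generate "$DISPLAY" . trusted   # "." uses the default protocol (MIT-MAGIC-COOKIE-1)
xauth list                            # confirm an entry now exists in ~/.Xauthority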
There are certainly other ways of doing this that can be a little bit simpler.  One of the reasons I used this method is that my bastion host strips X11 traffic.  It's not configured for it and doesn't have any of the required X-related dependencies (xorg-x11-server-utils et al.).  Doing it this way creates a tunnel that simply passes all traffic through to the next host, keeping me well below the application layer from the perspective of the bastion host.

Workstation File Access

The other advantage to this method is that I can use this same Putty session to make additional ssh tunnels that go right to my workstation.  So, in my Putty config detailed above, I also have a local tunnel 20202:172.16.0.99:22 that gives me direct ssh access to my workstation by ssh'ing to 127.0.0.1:20202 here on my laptop.
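
For example, with that forward in place, a shell on my laptop can reach the workstation's sshd directly (same port and usernames as defined above):

# Connect to the office workstation through the local forward created by the Putty session
ssh -p 20202 myusername@127.0.0.1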

So, software like WinSCP can now access my workstation over this tunnel and behaves as if it had direct access.  I use Mountain Duck, which is not free software. The end result is that it allows me to map a drive right to my Linux workstation at the office from my Windows 10 laptop at home.

(Additionally, I use the RDP tunneling method described in my previous post to make it so that RDP sessions end up originating from my workstation.)

Limitations

While drive/file access works quite well, the tunnel is nowhere near fast enough to do X11 well.  As much as I'd like to run gvim directly from my workstation over it, the latency just isn't workable.

But having the option to do it was helpful for me.  There were a couple of apps from which I really just needed some settings so that I could set up the Windows versions of those apps to work the same way.

OpenSSH

I haven't actually explored the capabilities of the OpenSSH that's now included with Windows 10, but regardless, for non-Windows users, note that there is a newer ProxyJump directive.  This lets you chain together any number of bastion hosts.  So, following my earlier example, you can create an .ssh/config entry like this:

Host workstation-tunnel
ProxyJump companyusername@bastion.example.com,companyusername@server1.example.com
Hostname 172.16.0.99
User myusername
ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
Compress yes
PubkeyAuthentication yes

Then you just ssh workstation-tunnel and you're good!
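
And if you want the same direct workstation forward described in the Putty section, a one-line addition to that stanza should do it; this is just a sketch using the same example port:

# Inside the "Host workstation-tunnel" stanza: forward laptop port 20202 to sshd on the workstation
LocalForward 20202 127.0.0.1:22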

Tuesday, April 7, 2020

SSH SOCKS Proxying with Putty

I'm writing this during the COVID-19 lockdown.  My company's VPN is getting hit really hard since everyone is working from home.  Anything we can do to stay off of it is helpful.

We also keep a host available with SSH exposed publicly (public key auth only).  So, I use that host as an SSH SOCKS proxy and it works great for keeping me off the VPN.

So, if you're in a similar position or simply would like to use SSH as a sort of pseudo-VPN, these instructions might be helpful.

Non-Windows users can do the same thing; you just need to use the ssh command to connect to the remote host with the -D parameter.  Something like: ssh -D 1337 yourhost
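
If you want the proxy without an interactive shell, a slightly fuller (but still minimal) sketch:

# -D 1337: SOCKS proxy on local port 1337, -N: no remote command, -C: compression
ssh -N -C -D 1337 yourhost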

Putty Configuration


  1. Create a new session in Putty
  2. Hostname: yourhost
  3. Port: 22
  4. Go under Category → Connection → Data
    • Auto-login username: <your username>
  5. Category → Connection → Proxy
    • Leave this off
  6. Category → Connection → SSH
    • Enable compression
  7. Category → Connection → SSH → Auth
    • Attempt to authenticate using Pageant
    • Allow agent forwarding
  8. Category → Connection → SSH → Tunnels
    • Source port: 1337
    • Destination: yourhost
    • Radio button: Dynamic
    • Click Add
    • (The tunnel list will just show D1337; this is OK.)
  9. Save the new Putty session
  10. Launch the new Putty session

Proxy Configuration

Now, to actually use the proxy, you can go a couple of ways.  Originally, I was doing it the manual way, but I found the Chrome extension SOCKS proxy, which works great.  It's hassle free and even makes it so that DNS requests go over the proxy.  The source code is very small and easily reviewed, so you can see it's not doing anything nefarious.

If you can't or won't install an extension, here's the manual method.
  1. Run the inetcpl.cpl control panel. (NOT the new Windows 10 Proxy Settings page.)
  2. Go under the Connections tab
  3. LAN settings button
  4. Uncheck automatic detection
  5. Check Use a proxy server for your LAN
  6. Advanced button.
  7. Fill in ONLY the SOCKS information (not http, secure, or ftp. Uncheck Use the same proxy for all protocols)
    • Socks: 127.0.0.1
    • Port: 1337

DNS Considerations

Now, if you don't have to worry about resolving any private DNS records, you're good to go.  My company has whole zones that aren't resolvable from the public internet.  For these, DNS queries have to originate from the company network.  Chrome, by default, will not send DNS requests over the SOCKS proxy, so there's an additional step required.

I suggest copying your existing Chrome shortcut and giving it a different name.  Edit this shortcut and append the following to the end of the Target: field, after the final quote (not inside the quotes):

--proxy-server="socks5://127.0.0.1:1337" --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE 127.0.0.1"
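
With those switches appended, the whole Target: field ends up looking something like this (the Chrome path here is just a typical example; keep whatever path your shortcut already has):

"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --proxy-server="socks5://127.0.0.1:1337" --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE 127.0.0.1"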

I haven't tested it myself, but I've heard that Firefox automatically pushes DNS requests over the proxy.

Limitations

So one of the big limitations of this is that it doesn't really help in a heavy Active Directory environment where your PC has to communicate with domain resources, such as shared drives.

RDP

However, you can tunnel RDP through your SSH host as well.  Configure additional tunnels, one per RDP destination. Back in your new Putty session:

  • Category → Connection → SSH → Tunnels
  • Source port: 38001
    • (This is a made up value of no significance. You'll have to make one up for each RDP destination.)
  • Destination: rdphost:3389
  • Relaunch your Putty session
  • Open RDP
  • Use the destination address: 127.0.0.1:38001
  • Repeat the port forwards with different port numbers for each RDP host you want to access.
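
For non-Windows users, the equivalent local forward with plain OpenSSH would look something like this:

# Forward local port 38001 to rdphost's RDP port (3389) via the SSH host
ssh -N -L 38001:rdphost:3389 yourhost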


Thursday, October 24, 2019

Using Wireshark on a remote host

In a large environment, troubleshooting problems with network packet traces usually means you're logged into a remote host running tcpdump.  Even after you develop some skill with pcap-filter syntax, wielding tcpdump is clunky and it usually looks like you're trying to view The Matrix encoded.

There are other console-based tools like tshark, but few of them are as useful and as user-friendly as Wireshark, which can parse and render network packets in an extremely readable and comprehensive fashion.

The problem is that Wireshark is a graphical application.  Running it on a remote host means installing it and all of its supporting dependencies and libraries on the remote host and then tunneling X back to your desktop over ssh.  For many reasons, this may not work well, or you may not even be able to install Wireshark on the remote host at all.

One workaround used by a lot of people is to capture some network output with tcpdump writing to a file, then fetch that capture file to your desktop and open it up in Wireshark.  It's definitely handy that pcap is so portable that this is possible, but this method lacks the ability to watch network traffic in real-time.

So how can you achieve the holy grail and use Wireshark locally on your desktop to watch live traffic on a remote host?

Enter socat - Multipurpose relay.

The socat utility is a swiss army knife of basically all possible types of input/output.  One of its supported I/O types is named pipes.

In short, we can use socat as the middleman to read from a remote named pipe and write to a local named pipe. Then we take advantage of Wireshark's ability to read directly from a named pipe and point it at that local pipe.

Here are the steps, using the example username jsmith, the example remote host name srv1, and the example network interface name eth0.

On the remote host:

  1. Create a temp dir for your named pipe file.
    • sudo mkdir /tmp/fifo
    • sudo chown jsmith /tmp/fifo
    • sudo chmod 700 /tmp/fifo
  2. Create the named pipe
    • sudo mkfifo /tmp/fifo/pcappipe
  3. Kick off tcpdump, writing to that pipe.
    • sudo tcpdump -i eth0 -s 0 -U -w /tmp/fifo/pcappipe not port 22
Notice the temp dir permissions.  You need to be able to read the named pipe as the non-root user you're going to log in with.

Also notice the pcap filter 'not port 22'.  You can alter this, of course, but if you don't specifically exclude your ssh traffic, tcpdump is going to pick up all of the traffic from your login session, including the part where we remotely read from the named pipe over ssh.
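
For example, to narrow the capture to traffic involving a single host while still excluding your own ssh session (the 172.16.5.25 address is just a made-up example):

sudo tcpdump -i eth0 -s 0 -U -w /tmp/fifo/pcappipe 'not port 22 and host 172.16.5.25'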


Next, on your local desktop, run socat like so:

socat -b 67108864 \
    EXEC:"stdbuf -i0 -o0 -e0 ssh -x -C -t srv1 cat /tmp/fifo/pcappipe",pty,raw \
    PIPE:/home/jsmith/localpcappipe,wronly=1,noatime

This tells socat to ssh into the remote host and cat the named pipe (sending the data to STDOUT).  It reads from that and writes it to the named pipe file in your home directory.

The buffer tuning was important for making the view as live as possible.  This can be a somewhat brittle process and socat can end up crashing easily; the buffer tuning helps make things much more stable and reliable.

Next, run wireshark, as root.

sudo wireshark -s 0 -k -i /home/jsmith/localpcappipe

Profit!
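
If you end up doing this a lot, a tiny wrapper script saves retyping.  This is just a minimal sketch using the same example names as above (jsmith, srv1, and the pipe paths); it assumes the remote fifo already exists and tcpdump is already writing to it on srv1.

#!/bin/bash
# Minimal sketch: relay the remote capture pipe locally and open it in Wireshark.
LOCALPIPE=/home/jsmith/localpcappipe
REMOTEPIPE=/tmp/fifo/pcappipe

# Start the socat relay in the background
socat -b 67108864 \
    EXEC:"stdbuf -i0 -o0 -e0 ssh -x -C -t srv1 cat $REMOTEPIPE",pty,raw \
    PIPE:"$LOCALPIPE",wronly=1,noatime &
SOCAT_PID=$!

# Read the local pipe live in Wireshark
sudo wireshark -s 0 -k -i "$LOCALPIPE"

# Tear down the relay when Wireshark exits
kill "$SOCAT_PID" 2>/dev/null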


Normal ssh rules apply.  So, if you can't ssh directly to your remote host, configure your .ssh/config file accordingly.

I need to tunnel through an intermediary jump host as well, so this is what I do in my .ssh/config file:

Host srv1
    ProxyCommand           ssh jumpsrv1 /usr/bin/nc %h 22
    User                   jsmith
    IdentityFile           ~/.ssh/id_rsa
    Compression            yes
    PubkeyAuthentication   yes
    Port                   22
    Protocol               2
    EscapeChar             none
    ServerAliveInterval    30

(I know I know, there's a new ProxyJump directive...  I don't change my .ssh/config that often.)
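
For anyone who does want the newer form, it's just a one-line swap in that same stanza:

    # Replaces the ProxyCommand line above
    ProxyJump              jumpsrv1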

Monday, September 1, 2014

Amazon AWS free tier: Converting from a t1.micro to t2.micro


Just finished "converting" my t1.micro instance, where I run my TT-RSS server, to a t2.micro. Recently, I discovered that the t2.micro was $5 cheaper per month. Doing it was almost not worth the trouble, however. Hopefully this can save you some time.

I quickly discovered that I couldn't even think about spinning up a t2.micro without having a "default VPC", it told me. What I discovered was that I had one of those weird accounts documented in "Your Default VPC and Subnet", where I fell into the third date range mentioned.

Since it was so ambiguous, I wasted quite a bit of time trying to convert my old instance to a t2 using the old volume, or at least an image from it.  I seemed to have a default VPC, but apparently it wasn't a real one. Poking at it with the aws cli tools indeed showed me that I couldn't affect the underlying "IsDefault" attribute of the VPC. It was false, and there was no way I was getting it flipped to true. Nor was there apparently any way I could strip the support for "EC2 Classic" mode, as they called it.  I fell into that window of time where they were trying to please everyone, and it ended up being a royal pain.

I ended up following a suggestion in either the Amazon or the Bitnami documentation: fully delete my AWS account and everything inside it and start over with a fresh account.

Having finally accepted this inevitability, the process became quite simple.

First, I updated TT-RSS to the latest version and made sure it was working, to reduce the possibility of any issues related to being two versions behind.  Then I dumped the database to a local directory on my laptop.  Then I made a backup of the entire htdocs directory where TT-RSS lived, just in case.  Then I completely burned down my Amazon AWS account and created a new account, including new MFA and API token credentials.

I headed back over to Bitnami, plugged in the new Amazon account details, and told it to create a new TT-RSS instance on a t2.micro.  After a few minutes, it was up and running.  So I took a backup of the stock Bitnami TT-RSS database from mysql, then dropped it.  My previous TT-RSS dump used a different database name, so I edited TT-RSS's config.php and pointed it at the restored database name.  Restarted everything.  Voila.  Logged in with my old creds.  All feeds and settings were in perfect condition.
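
The database shuffle itself was nothing exotic.  A rough sketch of the kind of commands involved, with placeholder database names (not the real Bitnami or TT-RSS names):

# On the new instance: back up and drop the stock database (names are placeholders)
mysqldump -u root -p stock_ttrss_placeholder > stock_ttrss_backup.sql
mysql -u root -p -e 'DROP DATABASE stock_ttrss_placeholder; CREATE DATABASE old_ttrss_placeholder;'

# Load the dump taken from the old t1.micro instance
mysql -u root -p old_ttrss_placeholder < old_ttrss_dump.sql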

It's actually noticeably faster as well.  The new SSD volume type definitely makes a difference, not that I was frustrated by its performance beforehand.

Since my initial goal was simply to reduce my monthly payment from $15 to $10, I was surprised to see that, having created a brand new AWS account (even though I didn't fudge any of my account details: same address, same cc#, everything), I'm once again eligible for the free tier for the next year.

Friday, September 20, 2013

TLS: warning: cacertdir not implemented for gnutls

I got this error recently while trying to use ldap utilities and libraries. In the debug output from an ldapsearch, I noticed the distinct error:
TLS: warning: cacertdir not implemented for gnutls
This error comes up when you try to use the TLS_CACERTDIR directive in your ldap configuration. Googling for answers was somewhat fruitless; the first complaints about the problem started many years ago. People said it happened when the openldap packages were compiled against gnutls instead of openssl, which apparently does not support the TLS_CACERTDIR option. The general consensus was therefore to not use that directive.
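
For context, the directive in question lives in your ldap client config, e.g. /etc/ldap/ldap.conf (the paths here are just typical example values):

# /etc/ldap/ldap.conf -- typical example values
TLS_CACERTDIR   /etc/ssl/certs
# The usual workaround was to point at a single CA bundle instead:
#TLS_CACERT     /etc/ssl/certs/ca-certificates.crt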

I didn't turn up anything about a fix. The current versions of the openldap packages are still broken all these years later. Although I expected to fail (there had to be a reason no one had done this before, right??), I tried the obvious solution and it worked.

$ mkdir /tmp/openldap
$ cd !$
$ sudo apt-get source openldap
$ sudo apt-get build-dep openldap
$ sudo apt-get install libssl-dev
$ sudo vim openldap-2.4.31/debian/configure.options

Change --with-tls=gnutls to --with-tls=openssl

$ sudo apt-get -b --no-download source openldap

Go have some lunch, mow the lawn, maybe a pub crawl. The incredible amount of hardcore testing that's been integrated into the build process is amazing, but it takes a while.

$ sudo dpkg --install ldap-utils_2.4.31-1ubuntu2.1_amd64.deb \
    libldap-2.4-2_2.4.31-1ubuntu2.1_amd64.deb \
    libldap2-dev_2.4.31-1ubuntu2.1_amd64.deb
$ ldapsearch -LLL -h ldap-server.example.com -D uid=andy,ou=foo,dc=example,dc=com -b dc=example,dc=com -ZZ -W uid=andy cn
Enter LDAP Password:
dn: uid=andy,ou=foo,dc=example,dc=com
cn: Andy Harrison

The other applications I was using that relied on the ldap libraries started working immediately as well.

Friday, August 23, 2013

Linux Mint Olivia - 1 week later...

A follow up to my last post Linux Mint 15 Olivia - Observations...

The Good

Xorg/KDE

Xorg has been working beautifully. No memory leaks. No squirrelly issues in performance or attitude. I haven't had to blow away my $HOME/.kde/share/config/plasma* files even once!

Packaging

I think I finally started to make friends with the packaging system. The stock 'screen' package is hamstrung with a MAXWIN value of 40. I can't live within the confines of only 40 windows, so this was my catalyst for making this a priority and figuring it out. I finally found some decent docs so that I could download the src-deb, extract, fix, compile, repackage, and install. Not only that, but there was another package I needed to tweak, and it was super easy to download the binary deb file, extract, fix, repackage, and install.
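
A rough sketch of the screen rebuild, assuming current package names (the exact file holding the MAXWIN define varies by version, so grep for it):

$ apt-get source screen
$ sudo apt-get build-dep screen
$ cd screen-*/
$ grep -rn MAXWIN .              # find where the 40 limit is defined
$ vim config.h.in                # or wherever the define turned up
$ dpkg-buildpackage -us -uc -b
$ sudo dpkg -i ../screen_*.deb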

The Bad

Seriously?

Also thanks to the Mint team's priorities, I quickly noticed that after fixing your default search engine in Firefox, the search autocomplete is broken.
If Aerobie Inc. paid Tesla Motors to replace the steering wheel in their vehicles with an Aerobie, do you think they should do it? After all, Tesla needs the money, so shouldn't they? Because it's such a great idea to have the primary means by which you steer your vehicle be a product that people used to have a little fun with a long time ago. Not only that, but let's make sure that if people try to fix the mistake and switch to a real steering wheel, it won't turn all the way.
#FAIL

Other missing package nits...

The curl package isn't installed by default. Seriously. No, I'm not kidding.
Less ridiculous are the exclusions of packages you can find in every other distro: no 'lynx' (which only old farts like me use anyway), no 'pcregrep' and friends, no 'mc' (again, an old-fart utility), and no 'whois' (ok, I work at an isp, so obviously that one would only be important to me).

Thursday, August 15, 2013

Linux Mint 15 Olivia - Observations

I've been an opensuse user for the last several years and usually really enjoy running it as my workstation's desktop operating system. But, as the 12.1 repos have started to unceremoniously vanish from existence, I've finally decided that enough is enough. I had been thinking about possibly another rpm-based distro or even going in a completely different direction (like Arch) while avoiding any Debian-based distro, but Mint has held such a commanding lead on distrowatch.com for such a long time that I thought it might be worth taking a look.

Here's some observations from the first few days.

The Good

Xorg

My Xorg memory leaks aren't present in this version. This represents about 1/3rd of the reason I started looking outside my normal opensuse comfort zone.

KDE

Mint's KDE 4.10 environment is fast. So far I haven't even gone in and shut off all the silly animations and junk. Normally, however slight the amount, these things interfere enough that it's obvious I'm spending time waiting for animations to draw when I could already have clicked on to the next step. The animations on Mint seem so well tuned that there's actually some benefit to having them enabled. Otherwise, the transitions are so fast you almost have to stop and evaluate whether you clicked something and an action actually took place.

VMware

VMware Workstation 9.02 installed and ran perfectly right out of the gate. Stock install, didn't have to go and fetch linux kernel header packages or anything.

aptitude

I missed aptitude. I gave Ubuntu (kubuntu, specifically) a try many years ago and generally didn't like it. Traditionally I'm a Red Hat derivative guy, and moving to a Debian derivative was a little shocking. But aptitude was such a nice piece of curses-based package management. I found myself opening a shell window to install packages instead of using the gui package managers, and I may continue this with Mint.

repos

Speaking of aptitude, the stock set of repos with Mint is fairly well rounded.
When I go into the main Mint repo in a browser, I see the last 11 versions of Mint. This represents the other 2/3rds of the reason I'm giving up on opensuse. I'm incredibly sick and tired of having my repos dry up and vanish on me every couple of point revs. I'm done being forced to do a full OS upgrade of a perfectly working desktop just because someone's OCD is preventing intelligent repo management. The 'zypper dup' upgrades may work for some folks, but they never, ever, ever work as expected for me. Doing a 'zypper dup' is always an 8 or more hour ordeal for me.

Java

Java works a bit better. I work with a lot of HP enterprise-class hardware. Unfortunately, doing away with Java is not an option for me because of the iLO management interface. While not perfect, the Java support is definitely better and I can reasonably expect it to work when I open up a remote console window.

The Bad

Seriously?

Firefox default search engine is Yahoo. Google isn't even present as an option in the drop-down choices. This says a lot about Mint's priorities. Spoiler alert: it isn't you.

Let's all pretend VLANs don't exist.

If your only network connection requires VLAN tagging, you will have *no* internet access during the installation.

The Network Settings panel hasn't the faintest hint of VLAN support (before or after installation). If your installation is already underway, maybe you can use your mobile phone to Google how to set up VLAN tagging. Otherwise, hopefully you looked up how to configure VLAN tagging before starting to install Mint. The process of configuring VLANs is obscure and stupid, similar to an enterprise Linux distro, and definitely not what you'd expect from a premiere desktop Linux distribution. And since VLANs don't exist in Mint's world and consequently do not appear in the documentation, you'll have to take an educated guess at how to add them to the /etc/network/interfaces file. After you've finally figured out that the appropriate file to edit is /etc/network/interfaces, that is.
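
For the record, the stanza you end up guessing at looks roughly like this; the VLAN ID and addresses are made up, and you'll want the 'vlan' package installed:

# /etc/network/interfaces -- example: VLAN 42 tagged on eth0 (made-up addresses)
auto eth0.42
iface eth0.42 inet static
    address 192.168.42.10
    netmask 255.255.255.0
    gateway 192.168.42.1
    vlan-raw-device eth0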

I tried setting my physical ethernet interface to "link-local" just to get it out of the way of my VLAN interfaces while keeping it active. This vaporizes the ability to configure your dns servers. Even if you had used the Network Settings panel to configure dns servers previously, it quietly deletes them and replaces them with opendns. On the plus side, the Network Settings panel doesn't complain if you set your ethernet interface to "manual" and then leave everything blank except the dns servers.

VLAN network interfaces *never* show up in the Network Settings panel. You're 100% command line and text file editing to manage your VLAN interfaces.

EFI (and GPT)

EFI support is terrible. Almost everyone's is, Mint's is just worse.

With EFI present, it is not possible to complete the installation without an internet connection. Period. Even if you open a shell and manually preinstall the necessary packages (which *are* present on the live iso), the installer is hard coded to download the EFI-related packages from the online public repos. It never even bothers to check whether those packages are already installed, nor does it try to install them from the iso. Since VLANs don't exist in Mint's world, if your only network connection requires VLAN tagging, you're completely out of luck. For that matter, you're equally out of luck if, for any other reason, you don't have internet access during the install and you booted from the EFI loader.

Also absent are GPT partition management utilities. In general, it seems like it would be best if you just didn't start the Mint install until you'd already burned a bootable image of Parted Magic in preparation for having to do any partition editing.

tail

During install, the tail command does not work at all. Nor tailf. It shows you the last few lines of the file and just sits there with its thumb up its ass. I found a forum post where someone mentioned this issue a year (and a few Mint versions) or so ago with no response. I don't really like using less and its "F" function to follow files, but at least it works.

ssh

sshd host keys don't get generated. If you're expecting to immediately be able to ssh into your newly installed Mint host, forget it. There may be an official and proper way of doing this, but fortunately I had saved my host keys from my last Linux desktop distro so I just restored those right into place with no fuss.
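
If you don't have old keys to restore, a couple of commands that typically regenerate them (I went the restore route myself, so treat this as a sketch):

# Generate any missing host keys with OpenSSH itself
sudo ssh-keygen -A

# ...or let the Debian/Ubuntu packaging regenerate them
sudo dpkg-reconfigure openssh-server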

Minor missing package nits:

gnu screen
socat
kgpg

Some popular but missing packages...

Try installing taskjuggler. Go ahead. With no ruby knowledge. Just try it.
Despite the well-rounded stock repos, once I go outside their scope, I feel like I'm really up the creek. Being an rpm guy, though familiar with package management in general, I normally know exactly what to do in any situation: everything from finding the difficult-to-find packages, to porting source rpm files from other distros, to building my own packages from the spec up. I've even automated building Solaris packages on my own. I'm no stranger to this. Yet every time I go looking for HOWTO docs on deb packages, I feel like I'm looking at VCR schematics when all I really want to do is stop the clock from flashing 12:00.