
VSCode, SSH, and you

In this very short post we’ll take a look at how to properly set up your environment so you can work on a remote server from your local IDE.

Identity files

Since you’re logging in with an identity file (because you’re not logging in with a password now, are you?), we need to bind a given host to a user and identity file, also known as an OpenSSH private key. To do so we modify or create ~/.ssh/config (or %UserProfile%\.ssh\config on Windows). The file structure is pretty simple:

Host <ip/hostname>
	User <username>
	IdentityFile <PEM file path>

With that information saved, we can simply ssh <ip/hostname> to remote in, without specifying anything.
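
For example, with entirely made-up values (host, user, and key path are just placeholders):

Host build01.example.com
	User deploy
	IdentityFile ~/.ssh/build01.pem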

Identity passphrase

The next problem in our automation process is to store the passphrase for the session, to avoid manual prompts each time we start a remote instance of VSCode. In order to do this we need to ensure that ssh-agent is running properly. On Windows it’s enough to make sure that the “OpenSSH Authentication Agent” service is running, while on Linux it depends on the distro. Generally, we can make sure that it’s running by issuing ps x | grep ssh-[a]gent, and running ssh-agent once if it’s not.
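
A rough sketch of that check on Linux (the eval is there so the current shell picks up the SSH_AUTH_SOCK and SSH_AGENT_PID variables the agent prints):

ps x | grep ssh-[a]gent
eval "$(ssh-agent -s)"    # only if the line above came up empty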

Now we can simply ssh-add <PEM file path>, insert our passphrase, and ssh with reckless abandon.
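
With the hypothetical key from before, that boils down to:

ssh-add ~/.ssh/build01.pem
ssh-add -l    # lists the loaded identities, handy to confirm it stuck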

Visual Studio Code

The majority of the work is now done: we just install the Remote Development extension, and to remote in we select “Remote-SSH: Connect to Host…” from the command palette, or click the little green area in the bottom-left corner of the status bar.

From there we just “Open Folder”, navigate the remote server’s directory structure, open whatever we want, and code at leisure.
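
If you prefer the terminal, the extension can also be installed from there (ms-vscode-remote.remote-ssh is, if I recall correctly, the marketplace id of the Remote - SSH piece of the pack):

code --install-extension ms-vscode-remote.remote-ssh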

OSSEC troubleshooting

Today we continue the saga of things I was supposed to write down but didn’t, for reasons unknown. Suppose you migrated your OSSEC management server, or freshly installed what will be the new manager on a new OS. You import the keys, as described in my previous post, but the connection fails for one or both of these reasons:

  • ossec-remoted(1403): ERROR: Incorrectly formatted message from '<client_ip>'. – Pick your own adventure-style error message.
  • ossec-agentd(1407): ERROR: Duplicated counter for '<server_name>'. – Incorrect serials.

This has happened several times over the course of the last decade, due to client/server version mismatch, drive failures, and what have you. There’s a pretty brute-force way to solve these problems, though:

  • stop both server and client;
  • on the client, delete everything inside /var/ossec/queue/rids;
  • reimport the key on the client (unsure if this step is really needed);
  • start the server;
  • test that the client is working, via ossec-agentd -d -f;
  • if the client is working, start the service.
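
Assuming a stock install under /var/ossec, the whole dance looks more or less like this:

# on the server
/var/ossec/bin/ossec-control stop

# on the client
/var/ossec/bin/ossec-control stop
rm -rf /var/ossec/queue/rids/*
/var/ossec/bin/manage_agents       # re-import the key here, if you do that step

# on the server
/var/ossec/bin/ossec-control start

# on the client: test in the foreground first, then start the service
/var/ossec/bin/ossec-agentd -d -f
/var/ossec/bin/ossec-control start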

That’s it. There’s nothing that a good ol’ rm -rf * can’t solve.

Nvidia drivers and MSI support in Windows

Today I went searching for an old article of mine about guest Windows VMs and the troubles with passed-through Nvidia cards. Picture my surprise when I found out that I never actually posted it, although it has been in the back of my mind for the past two years or so. So I’ll write it right now, since it contains valuable information that might help some people.

PCI pass-through

There are only a handful of problems with PCI pass-through of video devices:

  1. manufacturers are dicks. You can’t pass through the first graphics card on consumer devices, because reasons. If you buy workstation-grade gear with the same hardware though, we’ll allow it.
  2. Nvidia is a dick. If the drivers on the guest sniff out that you’re running within a hypervisor, they won’t work. At all. They refuse to load.
  3. Nvidia is a dick. Although every card supports MSI mode as a replacement for line-based interrupts, every single time you install the drivers the MSI setting gets reset, because only the workstation/server-grade drivers flag the system for message-signaled mode. You’re not using the card in a guest machine after all, right? Right?

So, here are fixes for the problems above, same numerical order:

  1. none. The best thing you can do is use a CPU with integrated graphics for the host, so the discrete card isn’t the primary one. This could/should work (untested).
  2. there are ways to “unflag” a guest machine from the dom0. On KVM through QEMU you can specify kvm=off for the CPU, or edit the machine with virsh edit (see the snippet right after this list).
  3. after the drivers are installed you can manually edit the Windows registry to enable MSI (also needs a reboot).
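
For point 2, a minimal sketch of both routes. With plain QEMU the flag goes on the CPU definition:

qemu-system-x86_64 -cpu host,kvm=off <the rest of your usual options>

while with libvirt the equivalent lives in the <features> block of the domain XML, reachable via virsh edit <domain>:

<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>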

MSI and you

There are various sources that will tell you why MSI is better than the default line-based counterpart, but when it comes to virtualization I can give you the top reason to switch: line-based is unstable. I’ve used my virtualized main workstation/gaming station for a while now, and the only times the video card had troubles, or the entire VM crashed, were because something between the drivers and the pass-through of line-based IRQ interrupts failed hard. Since discovering MSI I’ve stopped having issues with the video card and everything runs butter smooth.

So, to recap:

  • Audio coming from the video card crackling? Switch to MSI.
  • Guest O/S crashing? Switch to MSI.
  • Video drivers throwing a fit? Switch to MSI.
  • Bored? Switch to MSI.
  • Switch to MSI.

Enable MSI

Checking is fairly simple: open Computer Management’s Device Manager, list resources by connection, and check whether the IRQ assigned to the NVIDIA GeForce <whatever> and to its High Definition Audio Controller is a positive or a negative value.

(Screenshots: Device Manager listed by connection, comparing a line-based IRQ with an MSI-based one.)

If the value you see is greater than zero, you should switch to MSI. In order to do that, you need to open the device properties and grab the device instance path (Details tab, “Device instance path” property).

With that in hand, you can open the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI path in the registry and follow the device instance path down to the device’s Device Parameters\Interrupt Management subkey.

With MSI disabled you will notice that the MessageSignaledInterruptProperties key is missing: you need to create it, along with a DWORD value named MSISupported set to 1.
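
If you prefer an elevated prompt over regedit, something along these lines should do it (substitute the device instance path you copied earlier, and repeat for the card’s audio function):

reg add "HKLM\SYSTEM\CurrentControlSet\Enum\<device instance path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f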

That’s all there is to it. You can now reboot the system and the drivers will use MSI mode. Any audio crackling coming from the monitors will be gone, and everyone will rejoice.

Self-Signed Certificate with Subject Alternative Names (SAN) [AntiFUD]

Wrangling obscure OpenSSL options to create and publish SSL certificates has always been kind of a mess. If you want(ed) to create a valid self-signed certificate for multiple domains or, at least, for example.com and www.example.com, you most likely were out of luck.

There is a lot of documentation on the subject, but most of it is… well… wrong and/or incomplete. It is thus time for another episode of AntiFUD.

The problem

You have multiple hostnames for the same website to cover, but a single CN. If you use example.com, then www.example.com will be flagged as invalid, and vice versa. Suppose you have the following domain names:

  • example.com
  • www.example.com
  • *.user.example.com

In such a scenario there is no real victory no matter what you choose as a CN: the most commonly used wildcard CN, *.example.com, is of no use either, because it matches www.example.com and user.example.com, but neither the bare example.com nor username1.user.example.com (a wildcard only covers a single label). The only way to address all of these issues is to create and sign an X.509 v3 SSL certificate, which allows SAN. The SAN extension was introduced to solve exactly this problem, allowing multiple domains/subdomains to be valid within the same certificate.

Creating the certificate

We have to start by creating an alternative configuration file to use with OpenSSL, and list the server names we need. As shown below, we also have to enable the usage of v3 extensions.

# mkdir certificates
# cd certificates
# cp /etc/ssl/openssl.cnf ./example-com.cnf

We can now edit the file and adjust as needed:

[ req ]
x509_extensions = v3_ca
req_extensions = v3_req

[ usr_cert ]
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName=@alt_names

[ v3_ca ]
subjectAltName=@alt_names

[ alt_names ]
DNS.1 = example.com
DNS.2 = www.example.com
DNS.3 = *.user.example.com

In the default file, parameters such as req_extensions and keyUsage are commented out, while subjectAltName is missing entirely. We have to add it to both v3_req and v3_ca, and create the respective alt_names section. It can be created anywhere in the file, but it is generally appended to the bottom. Since the CN is (or, at least, should be) ignored in the presence of SAN, we list all the names in the alt_names section.

With the configuration in place we can now create the certificate:

# openssl genrsa -out example-com.key 4096
# openssl req -new -config example-com.cnf -key example-com.key -out example-com.csr
# openssl x509 -req -in example-com.csr -CA rootCA.pem -CAkey rootCA.key -CAserial rootCA.srl -out example-com.crt -days 365 -extfile example-com.cnf -extensions v3_ca

The deviation from the standard procedure is the addition of the v3 extensions during the CA signing step: -extfile example-com.cnf points OpenSSL at our custom configuration, and -extensions v3_ca makes sure the SAN entries are carried over and saved into the signed certificate.

To make sure it worked you can do the following:

# openssl x509 -in example-com.crt -text -noout
        […]
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:example.com, DNS:www.example.com, DNS:*.user.example.com
            X509v3 Subject Key Identifier:
                xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
            X509v3 Authority Key Identifier:
                xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx

            X509v3 Basic Constraints:
                CA:TRUE
        […]

The only thing left to do is to set up the certificate on the server, and everything will work as intended.
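
Once it’s live, a quick way to double-check what the server actually presents is an openssl s_client probe (hostname and port are, again, just examples):

# openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

If the SAN line lists all the names from the configuration file, you’re done.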