
Quick n’ Dirty – Adding disks to Proxmox with LVM

Proxmox LVM expansion: adding additional disks to your Proxmox host for VM storage.  No real story behind this post, just something simple, and probably more documentation of the process for myself than anything.

In case you were just CRAVING the story and background behind this…well, I recently got a few new (to me) Dell R710 servers, pretty decked out.  Since Proxmox boots off of a 128GB USB 3.0 stick, the internal hard drives were untouched and, more importantly, unmounted when booting into Proxmox.

It pays to have an RHCSA…(PV/VG/LV is part of it).  While looking for guides and resources detailing how to add additional disks to Proxmox, I found many of them mounting the disks as plain ext3 filesystems.  I knew that couldn’t be right.  A lot of other resources were extremely confusing, and then I realized that Proxmox uses LVM natively, so recalling my RHCSA training, all I need to do is initialize the disks as Physical Volumes, add them to the associated Volume Group, and extend the Logical Volume.  Then boom, LVM handles the rest for me like magic.

I’ve got two 120GB SSDs in the server waiting to host some VMs, so let’s get those disks initialized and added to the LVM pools!

And before you ask, yes that set of 120gb SSDs is in a RAID0.  As you can tell, I too like to live dangerously…but seriously they’re SSDs and I’ll probably reformat the whole thing before it could even error out…and yeah I could use RHV Self-Hosted, but honestly, for quick tests in a lab Proxmox is easier for me.  This isn’t production after all…geez, G.O.M.D.

Take 1

First thing, get into a terminal on the Proxmox server, either at the console or via the Web GUI’s Shell option.  You’ll want to be root.

Next, use fdisk -l to see which disk you’ll be working with; mine looked something like this:
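If the full fdisk -l output feels noisy, a quick filter narrows it down to just the disk summary lines (a small convenience on top of the original step, not required):

fdisk -l 2>/dev/null | grep '^Disk /dev/sd'
# Shows one summary line per disk; the new, still-unpartitioned one (here /dev/sda) is what we want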

What I’m looking for is that /dev/sda device.  Let’s work with that.

Next, we’ll initialize the partition table; let’s use cfdisk for that…

cfdisk /dev/sda

> New -> Primary -> Specify size in MB
> Write
> Quit
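If you’d rather script the partitioning than click through cfdisk, parted can do the same thing non-interactively.  This is just a sketch of an equivalent step; double-check the device name before running it:

# Write a label and create one partition spanning the whole disk
parted -s /dev/sda mklabel msdos    # msdos to match the cfdisk defaults here; the second node below uses gpt
parted -s /dev/sda mkpart primary 0% 100%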

Great, next let’s create a Physical Volume from that partition.  If it asks whether you want to wipe the existing signature, press Y.

pvcreate /dev/sda1

Next we’ll extend the pve Volume Group with the new Physical Volume…

vgextend pve /dev/sda1

We’re almost there.  Next, let’s extend the logical volume behind the PVE data mapper.  We’re increasing it by 251.50GB; you can find that figure by checking how much free space the Volume Group has with the vgs command, shown below…
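For reference, that check might look something like this (the numbers are illustrative, yours will differ):

vgs
#   VG  #PV #LV #SN Attr   VSize   VFree
#   pve   2   3   0 wz--n- 370.00g 251.50g
# VFree is the amount you can hand to lvextend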

lvextend /dev/pve/data -L +251.50g

And that’s it!  Now if we jump into Proxmox and check the Storage summary across the Datacenter, we can see it’s increased!  Or we can run the command…

lvdisplay

Rinse and Repeat

Now we’re on my next Proxmox node.  No, I’m not building a cluster and providing shared storage, at least not at this layer.

My next system is a Dell R710 with Proxmox freshly installed on an internal 128GB USB flash drive.  It has two RAID1 arrays (each with a hot spare) of about 418GB apiece, sitting at /dev/sdb and /dev/sdc.  Let’s add them really quickly…

cfdisk /dev/sdb

> GPT
> New -> Primary -> Specify size in MB
> Write
> Quit

cfdisk /dev/sdc

> GPT
> New -> Primary -> Specify size in MB
> Write
> Quit

pvcreate /dev/sdb1 && pvcreate /dev/sdc1
vgextend pve /dev/sdb1 && vgextend pve /dev/sdc1
lvextend /dev/pve/data -L +851.49g

And now we should have just about a terabyte of storage available to load VMs into…
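A quick sanity check from the shell as well, using the standard LVM reporting commands:

pvs    # both new PVs should show up, assigned to the pve VG
vgs    # VFree should have dropped back down after the lvextend
lvs    # the data LV should reflect its new, larger size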

Now with more room for activities!

Huzzah!  It worked!  Plenty of room for our VMs to roam around now.

What am I gonna do with a few redundant TBs of VM storage and about half a TB in available RAM and more compute than makes sense?  Continue along my Disconnected DevSecOps lab challenge of course.  You might remember some software defined networking services being tested on a Raspberry Pi Cluster…

More soon to come…

Software Defined Networking with Linux

Well well well, it’s been a while y’all.

Been busy developing and writing a few things, some more exciting stuff coming up in the pipeline.
A lot of the projects I’m working on have to kind of sort of “plug together,” and to do that I use open-source solutions and plenty of automation.
Today I’d like to show you how to set up a Linux-based router, complete with packet forwarding, DHCP, DNS, and dare I even say NTP!

Why and what now?

One of the projects I’m working on requires deployment into a disconnected environment, and it’s a lot of things coming together.  Half a dozen Red Hat products, some CloudBees, and even some GitLab in the mix.  Being disconnected, there needs to be some way to provide routing services.  Some would buy a hardware router such as a Cisco ISR; in many cases I prefer to deploy a software-based router such as pfSense or Cumulus Linux.  In this environment there’s a strict requirement to deploy only Red Hat Enterprise Linux, so that’s what I used and what this guide is based around, but it works on CentOS with little to no modification, and you can do the same thing on a Debian-based system with some minor substitutions.

A router allows packets to be routed around and in and out of the network, DHCP lets clients obtain an IP automatically as they would at home, and DNS resolves names such as google.com into IP addresses like 123.45.67.190 and can also resolve hostnames internally.  NTP ensures that everyone is humming along to the same beat.  Your Asus or Nighthawk router and entire datacenters use Linux to route traffic every day, and we’ll be using the same sort of technologies to deliver routing to our disconnected environment.

Today’s use case

Let’s imagine you start with this sort of environment, maybe something like this…


What we have here is a 7-node Raspberry Pi 3 B+ cluster!

Three nodes have two 32GB USB drives each to support a 3-node replicated Gluster cluster (it’s fucking magic!).  Three other nodes are part of a Kubernetes cluster, and the last RPi is the brains of the operation!

In order to get all these nodes talking to each other, we could set static IPs on every node, tell every node where everyone else is, and call it a day.  In reality, though, no one does that; it’s a pain at best.  So the last Raspberry Pi will offer DHCP, DNS, and NTP to the rest of the Kubernetes and Gluster clusters while also serving as a wifi bridge and bastion host for the other nodes!  I’ve already got this running on Raspbian with some workloads operating, so I’ve recreated the lab in VirtualBox with a virtual internal network and Red Hat Enterprise Linux.

Step 1 – Configure Linux Router

Before we proceed, let’s go along with the following understandings of your Linux Router machine:

  • Running any modern Linux: RHEL, Cumulus Linux, Raspbian, etc.
  • Has two network interface cards, we’ll call them eth0 and eth1:
    • WAN (eth0) – This is where you get the “internet” from.  In the RPi cluster it’s the wlan0 wifi interface; in my RHEL VM it’s named enp0s3.
    • LAN (eth1) – This is where you connect the switch that the other nodes plug into, or the virtual network that the VMs live in.  In my RHEL VM it’s named enp0s8.
  • We’ll be using the network 192.168.69.0/24 on the LAN side (or netmask of 255.255.255.0 for those who don’t speak CIDR), and setting our internal domain to kemo.priv-int

I’m starting with a fresh RHEL VM here, so the first thing I want to do is jump into root and set my hostname for my router, update packages, and install the ones we’ll need.

sudo -i
hostnamectl set-hostname router.kemo.priv-int
yum update -y
yum install firewalld dnsmasq bind-utils

Now that we’ve got everything set up, let’s jump right into configuring the network interface connections.  As I’m sure you all remember from your RHCSA exam prep, we’ll assign a connection to the eth1 interface to set up the static IP of the router on the LAN side and bring it up.  So assuming that your WAN on eth0 is already up (check with nmcli con show) and has a connection via DHCP, let’s make a connection for LAN/eth1 (my enp0s8)…

nmcli con add con-name lanSide-enp0s8 ifname enp0s8 type ethernet ip4 192.168.69.1/24 gw4 192.168.69.1
nmcli con modify lanSide-enp0s8 ipv4.dns 192.168.69.1

Before we bring up the connection, let’s set up dnsmasq.  dnsmasq will serve as both our DNS and DHCP server, which is really nice!  Go ahead and open /etc/dnsmasq.conf with your favorite editor…

vi /etc/dnsmasq.conf

And add the following lines:

# Bind dnsmasq to only serving on the LAN interface
interface=enp0s8
bind-interfaces
# Listen on the LAN address assigned to this Linux router machine
listen-address=192.168.69.1
# Upstream DNS, we're using Google here
server=8.8.8.8
# Never forward plain/short names
domain-needed
# Never forward addresses in the non-routed address space (bogon networks)
bogus-priv
# Sets the DHCP range (keep some for static assignments), and the lifespan of the DHCP leases
dhcp-range=192.168.69.100,192.168.69.250,12h
# The domain to append short requests to, all clients in the 192.168.69.0/24 subnet have FQDNs based on their hostname
domain=kemo.priv-int,192.168.69.0/24
local=/kemo.priv-int/
# Add domain name automatically
expand-hosts

Annnd go ahead and save that file.
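dnsmasq can also sanity-check its own configuration before we ever start it, which is worth the two seconds:

dnsmasq --test
# dnsmasq: syntax check OK.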

Now, on a RHEL/CentOS 7 machine we have firewalld enabled by default, so let’s make sure to allow those services through.

firewall-cmd --add-service=dns --permanent
firewall-cmd --add-service=dhcp --permanent
firewall-cmd --add-service=ntp --permanent
firewall-cmd --reload

Next, we’ll need to tell the Linux kernel to forward packets by editing the /etc/sysctl.conf file and adding the following line:

net.ipv4.ip_forward=1

It might already be in the file but commented out; if so, simply remove the pound/hashtag in front and that’ll do.  We still need to enable it for the running kernel, though:

echo 1 > /proc/sys/net/ipv4/ip_forward
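You can confirm the running kernel picked it up:

sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 1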

Yep, almost set, so let’s set up some iptables NAT masquerading and save it, bring up the network interface connection for eth1, and enable dnsmasq…

iptables -t nat -A POSTROUTING -o enp0s3 -j MASQUERADE
iptables -A FORWARD -i enp0s3 -o enp0s8 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i enp0s8 -o enp0s3 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o enp0s3 -j MASQUERADE -s 192.168.69.0/24
iptables-save > /etc/iptables.ipv4.nat
nmcli con up lanSide-enp0s8
systemctl enable dnsmasq && systemctl start dnsmasq
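If you want to double-check that everything actually came up, a few read-only commands will tell you (nothing here changes any state):

systemctl status dnsmasq --no-pager    # should report active (running)
iptables -t nat -L POSTROUTING -n -v   # should show the MASQUERADE rule out enp0s3
nmcli con show --active                # both the WAN connection and lanSide-enp0s8 should be active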

Step 2 – Connect Clients & Test

So this part is pretty easy, actually: you’ll just need to connect the clients/nodes to the same switch, or spin up a few other VMs in the same internal network.  Then you can watch for DHCP leases with the following command:

tail -f /var/lib/dnsmasq/dnsmasq.leases

And you should see the lease time, MAC address, associated IP, and client hostname listed for each connected client on this routed network!  We should be able to ping all those hostnames now too…
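From the router (or any client), resolution and reachability are easy to spot-check with the bind-utils we installed earlier; node1 here is just a stand-in for whatever hostname one of your clients reports:

dig +short node1.kemo.priv-int @192.168.69.1    # should return the IP dnsmasq leased to that client
ping -c 3 node1.kemo.priv-int                   # and it should answer by name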

This is great, and we have many of the core components needed by a routed and switched network.  Our use case needs some very special considerations for time synchronization so we’ll use this same Linux router to offer NTP services to the cluster as well!

Step 3 – Add NTP

Here most people would choose to use NTPd, which is perfectly fine.  However, RHEL and CentOS (and many other modern Linux distros) come preconfigured with Chronyd, which is sort of a newer, better, faster, stronger version of NTPd with some deeper integrations into systemd.  So today I’ll be using Chronyd to set up an NTP server on this Linux router.  Chronyd also behaves a bit better in disconnected environments.

Essentially, we just need to modify /etc/chrony.conf and set the following lines:

stratumweight 0
local stratum 10
allow 192.168.69.0/24

After that, enable NTP synchronization and restart with:

timedatectl set-ntp 1
systemctl restart chronyd
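For the clients, pointing them at the router is a one-line change in their own /etc/chrony.conf (comment out the default server/pool lines), and chronyc can confirm things on both ends.  A sketch, not part of the original steps:

# In each client's /etc/chrony.conf: use the router as the only time source
server 192.168.69.1 iburst

# Verification:
chronyc sources     # on a client, the router should eventually show a '*' selection marker
chronyc clients     # on the router (as root), lists the clients that have asked for time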

And give that a moment to sync and you should have a fully functional network core based on simple Linux and a few packages!

Next Steps

There are a few things that come to mind that you could do in this sort of environment…

  • Create an actual GPS-based NTP Server – Be your own source!
  • Set Static Host/IP Mappings – Make sure you have a block of IPs outside the DHCP range to use for the static reservations (see the sketch after this list).
  • Create site-to-site VPNs – Tack on a bit of OpenVPN and we could easily create a secure site-to-site VPN to join networks or access other resources!
  • Anything in your router’s web UI – Pretty much every router out there runs some sort of embedded Linux, and their web UIs are just abstractions over functions that are built in and accessible to everyone.  Set up port forwarding?  No problem.  Add UPnP?  Not too hard either.
  • Add PiHole Ad-Blocker – Maybe you’re using a Raspberry Pi as a wireless bridge to connect some hard-wired devices on a switch to a wifi network.  Wouldn’t it be nice to block ads for all those connected devices?  You can with PiHole!
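For the static host/IP mappings item above, dnsmasq handles reservations with a couple of extra lines in /etc/dnsmasq.conf; the MAC address and names here are placeholders, so swap in your own:

# Always hand 192.168.69.10 to this NIC (keep it outside the dhcp-range pool)
dhcp-host=aa:bb:cc:dd:ee:01,node1,192.168.69.10
# Or pin a DNS name straight to an IP
address=/gitlab.kemo.priv-int/192.168.69.20

Restart dnsmasq after editing and the reservations take effect on the next lease renewal.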

Wrangling Bluetooth in RHEL & CentOS 7

I recently started as the Lead Solution Architect at Fierce Software.  It’s been an excellent experience so far and I’m excited to be working with such a great team.  At Fierce Software we help Public Sector entities find enterprise-ready open source solutions.  Our portfolio of vendors is deliberately diverse because in today’s enterprise, multi-vendor, multi-vertical solutions are commonplace.  No one’s running all Cisco network gear, Cisco servers, annnnd then what’s after that?  There’s no traditional server platform to go with it; even Cisco partners with Red Hat and VMware for that.  We help navigate these deep and wide waters to find your Treasure Island (no affiliation).

New business cards

These cards are Fierce…

 

Starting a new role also comes with some advantages such as a new work laptop. I’ll be traveling regularly, running heavy workloads such as Red Hat Satellite VMs, and balancing two browsers with a few dozen tabs each (with collab and Spotify in the background). Needless to say, I need a powerful, sturdy, and mobile workhorse with plenty of storage.

A few years ago that would have been a tall order, or at the very least an order that could get you a decent used Toyota Camry. Now this is possible without breaking the bank and while checking all the boxes.

Enter System76’s 3rd generation Galago Pro laptop.  Sporting a HiDPI screen, an 8th generation Intel Core i7, an aluminum body, a backlit keyboard, and plenty of connectivity options.  This one is configured with the i7-8550U processor for that extra umph, 32GB of DDR4 RAM, Wireless AC, a 500GB M.2 NVMe drive, and an additional 250GB Samsung SSD.  That last drive replaced the 1TB spinner it was configured with, which I tossed in an external enclosure, swapping in a spare SSD I had lying around.  The great thing about System76’s Galago Pro is that it’s meant to be modified, so that 250GB SSD will later be replaced with a much larger one.

At configuration you have the option of running Pop!_OS or Ubuntu. Being a Red Hat fan boy, I naturally ran RHEL Server with GUI. Why RHEL Server instead of Desktop or Workstation? Mostly because I have a Developer Subscription that can be consumed by a RHEL Server install, and it can run 4 RHEL guest VMs as well. If you’ve ever used enterprise-grade software on commodity, even higher-end commodity hardware, you might have an idea of where this is going…

Problem Child

One of the benefits of Red Hat Enterprise Linux is that it is stable, tested, and certified on many hardware platforms.  System76 isn’t one of those (yet).  One of the first issues is the HiDPI screen; not all applications like those extra pixels.  There are some workarounds, which I’ll get to in a later post.  System76 provides drivers and firmware updates for Debian-based distributions, so some of their specialized configuration options aren’t available out of the box.  These are issues I’ll be working on so I can provide a package that enables proper operation of Fedora/RHEL-based distributions on the Galago Pro.  More on that later…

The BIGGEST issue I had was with Bluetooth. This is not inherent to System76 or Red Hat. It’s actually (primarily) a BlueZ execution flag issue and I’ll get to that in a moment…

Essentially my most crucial requirement is for my Bluetooth headset to work.  We use telepresence software like Google Hangouts and Cisco WebEx all day long to conduct business, and a wired headset is difficult to use, especially when mobile.  I spent probably half a day trying to get the damned thing working.

I attempted to connect my headset with little success; it kept quickly disconnecting.  I paired the Galago Pro to one of my Amazon Echos and it worked for a few minutes, then began distorting and skipping.  I plugged in a USB Bluetooth adapter and blacklisted the onboard one, and still had the same issues.  Must be a software thing…

Have you tried turning it on and off again?

Part of the solution? Buried in an Adafruit tutorial for setting up BlueZ on Raspbian on a Raspberry Pi…
Give up and search Google for “best linux bluetooth headset” and you get a whole lot of nothing.  Comb through mailing lists, message boards, and random articles?  Nowhere and nada.
Give me enough time and I can stumble through this Internet thing and piece together a solution though.

Galago Pro and BT Headset

The plaintiff, a hot shot SA determined to have his day in wireless audio; the defendant, an insanely stable platform

 

Essentially I’ll (finally) get to the short part: how to fix your Bluetooth 4.1 LE audio devices in RHEL/CentOS 7.  It probably works for other distros?  Not sure, don’t have time to test.  All you have to do is enable experimental features via a configuration change…and set a trust manually…and, if you want higher quality audio automatically, create a small file in /etc/bluetooth/…

Step 1 – Experimental Features

So you want to use your nice, new BLE 4.1 headset with your RHEL/CentOS 7 system…that low energy stuff is a little, well…newer than what’s enabled by default, so we just need to add a switch to the Bluetooth service unit to enable those “fresh” and “hot” low energy features…

$ sudo nano /lib/systemd/system/bluetooth.service

Enable the experimental features by appending --experimental to the end of the existing ExecStart line (keep whatever bluetoothd path is already there on your system; it may differ from the example below).  The [Service] section should end up looking like:

…
[Service]
Type=dbus
BusName=org.bluez
ExecStart=/usr/local/libexec/bluetooth/bluetoothd --experimental
NotifyAccess=main
…

Save the file and then run the following commands:

$ sudo systemctl daemon-reload
$ sudo systemctl restart bluetooth

Step 2 – Set Trust

So you’ll find that some of the GUI tools might not coordinate a Bluetooth device pair/trust properly…it only has to coordinate between the service, the BlueZ daemon, and PulseAudio (!), what’s so difficult about that?
Let’s just take the headache out of it and load up bluetoothctl, list the devices, and set a manual trust.  This assumes you’ve at least paired the BT device with the GUI tools, even if it vibrates oddly or disconnects quickly after connecting.  There are ways to set up pairing via bluetoothctl as well; the help command and man pages cover that, and there’s a quick sketch after the commands below.

$ sudo bluetoothctl
> devices
  Device XX:XX:XX:XX:XX:XX
> trust XX:XX:XX:XX:XX:XX
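If you need to pair from scratch inside bluetoothctl instead of the GUI, the flow looks roughly like this (the MAC is a placeholder; your headset’s address shows up during the scan):

> power on
> agent on
> scan on
> pair XX:XX:XX:XX:XX:XX
> trust XX:XX:XX:XX:XX:XX
> connect XX:XX:XX:XX:XX:XX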

Step 3 – Create /etc/bluetooth/audio.conf

So now we have it paired, trusted, and the newer BLE 4.1 features enabled. If your BT headset also includes a microphone for call functionality, you might find the headset auto-connecting to the lower quality HSP/HFP profile. We want that tasty, stereo sound from the A2DP profile. Let’s tell the BlueZ daemon to auto-connect to that A2DP sink.

# Using tee so the file write itself runs with root privileges (a plain sudo echo > file would not)
$ echo "AutoConnect=true" | sudo tee /etc/bluetooth/audio.conf
$ sudo systemctl restart bluetooth.service

Now, turn your headset off and on; it should auto-connect, and your Bluetooth 4.1 LE headphones should work fine and dandy now with high(er) fidelity sound!
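If you want to confirm which profile the headset actually landed on, PulseAudio will tell you (the exact card name differs per device):

$ pactl list cards | grep "Active Profile"
  Active Profile: a2dp_sink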