Tag: walkthrough

Quick n’ Dirty – Adding disks to Proxmox with LVM

Proxmox LVM expansion: adding additional disks to your Proxmox host for VM storage.  No real story behind this post, just something simple, and probably more documentation of the process for myself than anything.

In case you were just CRAVING the story and background behind this…well, I recently got a few new (to me) Dell R710 servers, pretty decked out.  With Proxmox booting off of a 128GB USB 3.0 stick, the internal hard drives were untouched and, more importantly, unmounted when booting into Proxmox.

It pays to have an RHCSA…(PV/VG/LV is part of it).  While looking for guides and resources detailing how to add disks to Proxmox, I found that many of them set the disks up as a mounted ext3 filesystem.  I knew this couldn’t be right.  A lot of other resources were extremely confusing, and then I realized that Proxmox uses LVM natively.  Recalling my RHCSA training, all I need to do is assign the disks as Physical Volumes, add them to the associated Volume Group, and extend the Logical Volume.  Then boom, LVM handles the rest for me like magic.

I’ve got two 120GB SSDs in the server waiting to host some VMs, so let’s get those disks initialized and added to the LVM pools!

And before you ask, yes, that set of 120GB SSDs is in a RAID0.  As you can tell, I too like to live dangerously…but seriously, they’re SSDs and I’ll probably reformat the whole thing before it could even error out.  And yeah, I could use RHV Self-Hosted, but honestly, for quick tests in a lab Proxmox is easier for me.  This isn’t production after all…geez, G.O.M.D.

Take 1

First thing, load into the Proxmox server terminal, either with the keyboard and mouse or via the Web GUI’s Shell option.  You’ll want to be root.

Next, use fdisk -l to see what disk you’ll be attaching, mine looked something like this:

What I’m looking for is that /dev/sda device.  Let’s work with that.

Next, we’ll initialize the partition table, let’s use cfdisk for that…

cfdisk /dev/sda

> New -> Primary -> Specify size in MB
> Write
> Quit
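If you’d rather script this step than click through cfdisk, parted can do the same thing in one shot.  A minimal sketch, printed for review rather than executed since partitioning is destructive (the /dev/sda target is assumed from above):

```shell
# Print the equivalent one-shot partitioning command for review;
# parted -s writes to disk immediately, so double-check the target first.
disk="/dev/sda"
echo "parted -s ${disk} mklabel msdos mkpart primary 0% 100%"
```

Handy when you’re doing this across several nodes and don’t want to babysit a TUI.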

Great, next let’s create a Physical Volume from that partition.  It’ll ask if you want to wipe the existing signature; press Y

pvcreate /dev/sda1

Next we’ll extend the pve Volume Group with the new Physical Volume…

vgextend pve /dev/sda1

We’re almost there.  Next, let’s extend the logical volume for the PVE data mapper.  We’re increasing it by 251.50GB; you can find that size by checking how much free space is available with the vgs command

lvextend /dev/pve/data -L +251.50g
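Where does 251.50 come from?  vgs reports the Volume Group’s free space.  A small sketch of turning that output into the lvextend argument; the sample value is an assumption standing in for my system’s output:

```shell
# `vgs --noheadings -o vg_free --units g pve` prints something like "  251.50g".
# Hypothetical helper that trims it into an lvextend-ready size argument:
vg_free_arg() {
  echo "$1" | tr -d ' '
}
size=$(vg_free_arg "  251.50g")
echo "lvextend /dev/pve/data -L +${size}"
```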

And that’s it!  Now if we jump into Proxmox and check the Storage across the Datacenter, we can see it’s increased!  Or we can run the command…


Rinse and Repeat

Now we’re on my next Proxmox node.  No, I’m not building a cluster and providing shared storage, at least not at this layer.

My next system is a Dell R710 with Proxmox freshly installed on an internal 128GB USB flash drive.  It has two RAID1 + hot-spare arrays of about 418GB each, at /dev/sdb and /dev/sdc.  Let’s add them really quickly…

cfdisk /dev/sdb

> New -> Primary -> Specify size in MB
> Write
> Quit

cfdisk /dev/sdc

> New -> Primary -> Specify size in MB
> Write
> Quit

pvcreate /dev/sdb1 && pvcreate /dev/sdc1
vgextend pve /dev/sdb1 && vgextend pve /dev/sdc1
lvextend /dev/pve/data -L +851.49g
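As an aside, you can skip the size arithmetic entirely: lvextend will take every remaining free extent in the VG with the -l +100%FREE form.  Printed here for review since it acts on the live pve VG, and assuming you want all the new space in data:

```shell
# Hand every remaining free extent in the VG to the data LV.
echo "lvextend -l +100%FREE /dev/pve/data"
```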

And now we should have just about a terabyte of storage available to load VMs into…

Now with more room for activities!

Huzzah!  It worked!  Plenty of room for our VMs to roam around now.

What am I gonna do with a few redundant TBs of VM storage and about half a TB in available RAM and more compute than makes sense?  Continue along my Disconnected DevSecOps lab challenge of course.  You might remember some software defined networking services being tested on a Raspberry Pi Cluster…

More soon to come…

Software Defined Networking with Linux

Well well well, it’s been a while y’all.

Been busy developing and writing a few things, some more exciting stuff coming up in the pipeline.
A lot of the projects I’m working on have to kind of sort of “plug together” and to do a lot of this I use open-source solutions and a lot of automation.
Today I’d like to show you how to set up a Linux-based router, complete with packet forwarding, DHCP, DNS, and dare I even say NTP!

Why and what now?

One of the projects I’m working on requires deployment into a disconnected environment, and it’s a lot of things coming together: half a dozen Red Hat products, some CloudBees, and even some GitLab in the mix.  Being disconnected, there needs to be some way to provide routing services.  Some would buy a router such as a Cisco ISR; in many cases I like to deploy a software-based router such as pfSense or Cumulus Linux.  In this environment, there’s a strict requirement to deploy only Red Hat Enterprise Linux, so that’s what I used and what this guide is based around.  It can be used with CentOS with little to no modification, and you can do the same thing on a Debian-based system with some minor substitutions.

A router allows packets to be routed around and in and out of the network.  DHCP lets clients obtain an IP automatically, as you would at home.  DNS resolves domain names such as google.com into IP addresses and can also be used to resolve hostnames internally.  NTP ensures that everyone is humming along at the same beat.  Your Asus or Nighthawk router, and datacenters the world over, use Linux to route traffic every day, and we’ll be using the same sort of technologies to deliver routing to our disconnected environment.

Today’s use case

Let’s imagine you start with this sort of environment, maybe something like this…


What we have here is a 7-node Raspberry Pi 3 B+ cluster!

Three nodes have 2x 32GB USB drives in them to support a 3-node replica Gluster cluster (it’s fucking magic!).  Three other nodes are part of a Kubernetes cluster, and the last RPi is the brains of the operation!

In order to get all these nodes talking to each other, we could set static IPs on every node, tell everyone where everyone else is, and call it a day.  In reality, though, no one does that, and it’s a pain if not daunting.  So the last Raspberry Pi will offer DHCP, DNS, and NTP to the rest of the Kubernetes and Gluster clusters while also serving as a wifi bridge and bastion host for the other nodes!  I’ve already got this running on Raspbian with some workloads operating, so I’ve recreated this lab in VirtualBox with a virtual internal network and Red Hat Enterprise Linux.

Step 1 – Configure Linux Router

Before we proceed, let’s go along with the following understandings of your Linux Router machine:

  • Running any modern Linux: RHEL, Cumulus Linux, Raspbian, etc
  • Has two network interface cards, we’ll call them eth0 and eth1:
    • WAN (eth0) – This is where you get the “internet” from.  In the RPi cluster, it’s the wlan0 wifi interface, in my RHEL VM it’s named enp0s3.
    • LAN (eth1) – This is where you connect the switch that connects to the other nodes, or the virtual network that the VMs live in.  In my RHEL VM it’s named enp0s8.
  • We’ll be using the network on the LAN side (or netmask of for those who don’t speak CIDR), and setting our internal domain to kemo.priv-int

I’m starting with a fresh RHEL VM here, so the first thing I want to do is jump into root and set my hostname for my router, update packages, and install the ones we’ll need.

sudo -i
hostnamectl set-hostname router.kemo.priv-int
yum update -y
yum install firewalld dnsmasq bind-utils

Now that we’ve got everything set up, let’s jump right into configuring the network interface connections.  As I’m sure you all remember from your RHCSA exam prep, we’ll assign a connection to the eth1 interface to set up the static IP of the router on the LAN side and bring it up.  So assuming that your WAN on eth0 is already up (check with nmcli con show) and has a connection via DHCP, let’s make a connection for LAN/eth1 (my enp0s8)…

nmcli con add con-name lanSide-enp0s8 ifname enp0s8 type ethernet ip4 gw4
nmcli con modify lanSide-enp0s8 ipv4.dns
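For reference, the full nmcli syntax with addresses filled in looks like the following.  The 192.168.77.0/24 values are hypothetical stand-ins, not a prescription, so the commands are printed for review rather than run; the router’s LAN IP doubles as the gateway and DNS address handed to clients:

```shell
# Hypothetical LAN addressing; substitute your own subnet before running.
LAN_IP="192.168.77.1"
echo "nmcli con add con-name lanSide-enp0s8 ifname enp0s8 type ethernet ip4 ${LAN_IP}/24 gw4 ${LAN_IP}"
echo "nmcli con modify lanSide-enp0s8 ipv4.dns ${LAN_IP}"
```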

Before we bring up the connection, let’s set up dnsmasq.  dnsmasq will serve as both our DNS and DHCP server, which is really nice!  Go ahead and open /etc/dnsmasq.conf with your favorite editor…

vi /etc/dnsmasq.conf

And add the following lines:

# Bind dnsmasq to only serving on the LAN interface
# Listen on the LAN address assigned to this Linux router machine
# Upstream DNS, we're using Google here
# Never forward plain/short names
# Never forward addresses in the non-routed address space (bogon networks)
# Sets the DHCP range (keep some for static assignments), and the lifespan of the DHCP leases
# The domain to append short requests to, all clients in the subnet have FQDNs based on their hostname
# Add domain name automatically

Annnd go ahead and save that file.
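For reference, here’s what a matching set of dnsmasq options looks like, one per comment above.  The interface name and 192.168.77.0/24 addressing are my stand-ins; substitute your own:

```
# Serve only on the LAN interface
interface=enp0s8
listen-address=192.168.77.1
# Upstream DNS (Google)
server=8.8.8.8
# Never forward plain/short names
domain-needed
# Never forward non-routed (bogon) address space
bogus-priv
# DHCP pool and lease lifespan, leaving room below .100 for static assignments
dhcp-range=192.168.77.100,192.168.77.200,12h
# Internal domain appended to short names
domain=kemo.priv-int
expand-hosts
```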

Now, on a RHEL/CentOS 7 machine, we have firewalld enabled by default, so let’s make sure to allow those services through.

firewall-cmd --add-service=dns --permanent
firewall-cmd --add-service=dhcp --permanent
firewall-cmd --add-service=ntp --permanent
firewall-cmd --reload

Next, we’ll need to tell the Linux kernel to forward packets by modifying the /etc/sysctl.conf file and adding the following line:

net.ipv4.ip_forward = 1
It might already be in the file but commented out, so simply remove the pound/hashtag in front and that’ll do.  We still need to enable it immediately, though:

echo 1 > /proc/sys/net/ipv4/ip_forward

Yep, almost set.  Let’s bring up the network interface connection for eth1, set up some iptables NAT masquerading and save it, and enable dnsmasq…

iptables -t nat -A POSTROUTING -o enp0s3 -j MASQUERADE
iptables -A FORWARD -i enp0s3 -o enp0s8 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i enp0s8 -o enp0s3 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o enp0s3 -j MASQUERADE -s
iptables-save > /etc/iptables.ipv4.nat
nmcli con up lanSide-enp0s8
systemctl enable dnsmasq && systemctl start dnsmasq
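One caveat with that iptables-save line: the dump at /etc/iptables.ipv4.nat isn’t reloaded automatically at boot.  A common approach (my assumption here, not part of the original steps) is to restore it from /etc/rc.local:

```
# In /etc/rc.local, before any "exit 0" line:
iptables-restore < /etc/iptables.ipv4.nat
```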

Step 2 – Connect Clients & Test

This part is pretty easy, actually: you’ll just need to connect the clients/nodes to the same switch, or make a few other VMs in the same internal network.  Then you can check for DHCP leases with the following command:

tail -f /var/lib/dnsmasq/dnsmasq.leases
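Each line of that leases file has five fields: expiry epoch, MAC address, IP, hostname, and client-id.  A quick sketch of pulling out the useful bits; the sample lease line is made up for illustration:

```shell
# Hypothetical lease line in dnsmasq's five-field format:
lease="1577836800 b8:27:eb:00:00:01 192.168.77.101 kube-node1 01:b8:27:eb:00:00:01"
# Split on whitespace into positional parameters $1..$5:
set -- $lease
summary="host=$4 ip=$3 mac=$2"
echo "$summary"
```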

And you should see the lease time, MAC address, associated IP, and client hostname listed for each connected client on this routed network!  We should be able to ping all those hostnames now too…

This is great, and we have many of the core components needed by a routed and switched network.  Our use case needs some very special considerations for time synchronization so we’ll use this same Linux router to offer NTP services to the cluster as well!

Step 3 – Add NTP

Here most people would choose to use NTPd, which is perfectly fine.  However, RHEL and CentOS (and many other modern Linux distros) come preconfigured with chronyd, which is sort of a newer, better, faster, stronger version of NTPd with some deeper integrations into systemd.  So today I’ll be using chronyd to set up an NTP server on this Linux router.  chronyd is also a bit better suited to disconnected environments.

Essentially, we just need to modify /etc/chrony.conf and set the following lines:

stratumweight 0
local stratum 10
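Those two lines make chronyd willing to serve its own clock when upstream time is unreachable; to actually answer clients on the LAN you also need an allow directive.  A sketch of the relevant /etc/chrony.conf section, with the subnet being my assumption:

```
# Give stratum little weight when selecting between sources
stratumweight 0
# Serve our local clock at stratum 10 if upstream NTP is unreachable
local stratum 10
# Answer NTP queries from the LAN (hypothetical subnet, substitute yours)
allow 192.168.77.0/24
```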

After that, enable NTP synchronization and restart with:

timedatectl set-ntp 1
systemctl restart chronyd

And give that a moment to sync and you should have a fully functional network core based on simple Linux and a few packages!

Next Steps

There are a few things that come to mind that you could do in this sort of environment…

  • Create an actual GPS-based NTP Server – Be your own source!
  • Set Static Host/IP Mappings – Make sure you have a section of IPs available that aren’t in the DHCP block to set the static IP reservations to.
  • Create site-to-site VPNs – Tack on a bit of OpenVPN and we could easily create a secure site-to-site VPN to join networks or access other resources!
  • Anything in your router’s web UI – Pretty much every router out there runs some sort of embedded Linux, and their web UIs are all abstracting functions that are built into Linux and accessible to everyone.  Set up port-forwarding?  No problem. Add UPnP?  Not too hard either.
  • Add PiHole Ad-Blocker – Maybe you’re using a Raspberry Pi as a wireless bridge to connect some hard wired devices on a switch to a wifi network.  Wouldn’t it be nice to block ads for all those connected devices?  You can with PiHole!

Rolling up Let’s Encrypt on Ansible Tower’s UI

The other day someone asked me what I do for fun.

“Fun” really has a few different definitions for me, and I’d say for most people.  It could be entertainment, guttural satisfaction, leisurely adventuring about, or maybe for some slightly compulsive people like me, accomplishing a task.  Something I’m kind of overly compulsive about is proper SSL implementation and PKI.

So this morning I was having LOADS of fun.  My fast just started to kick in with some of the good energy and ‘umph’ so I was feeling great.  Bumping that new Childish summertime banger, really grooving.  I just finished spinning up a new installation of Ansible Tower and logged in.  That’s when the Emperor lost his groove.

I’ve seen the screen plenty of times in the Ansible Tower Workshops and simply, almost reflexively skip past the big warning sign you see when you first log into an Ansible Tower server’s UI.  The big warning sign isn’t too crucial in the grand scheme of things, but it really stuck out to me this time.  Maybe because this server is part of a larger permanent infrastructure play, but it really got to me and I HAD to install some proper SSL certificates.

Self-signed SSL Warning

We all know what to do here, click Advanced and yadda-yadda…or shouldn’t we just fix the issue?


So let’s go over two different ways to fix this…


Ansible Tower uses Nginx (pronounced engine-x) as its HTTP server for the Web UI.  It’s not configured ‘normally’ like you’d see in most web hosting scenarios; there’s no sites-available, mods-available, etc.  That’s good, though, because nothing else should really run on this server outside of Ansible Tower, so the good folks at Ansible thought it best to just stuff everything in the default nginx.conf file.

The certificate is self-signed and can be easily replaced.  Here are the lines from the nginx.conf file that matter for this scope, starting at line 42 as of today/this version:

# If you have a domain name, this is where to add it
server_name _;
keepalive_timeout 65;

ssl_certificate /etc/tower/tower.cert;
ssl_certificate_key /etc/tower/tower.key;

Method 1 – Let’s Encrypt

This is probably the more prevalent method nowadays.  It’s easy, free, no need to manage anything since ACME takes care of it.  If your Ansible Tower instance faces the publicly routable Internet, this is probably your go-to.  If it’s not able to reach the Let’s Encrypt ACME servers, you won’t be able to use Let’s Encrypt without some tunnel/proxy/cron tomfoolery, or their manual method which incurs extra steps.  Alternatively, skip to Method 2 which is how to install your own certificate from your own CA/PKI.

Remember a few lines up in the configuration snippet where it had a comment “# If you have a domain name, this is where to add it”?  Go ahead and do just that, edit the /etc/nginx/nginx.conf file and replace the underscore (“_”) with your FQDN.  Save, exit.

Go ahead and reload the nginx configuration

# systemctl reload nginx.service

Next, let’s enable the repos we need to install Let’s Encrypt.  Here are some one-liners; some parts will still be interactive (adding the PPA, accepting GPG keys in yum, etc.).  They install a PPA/EPEL, enable repos where needed, update, and install the needed packages.

Ubuntu

# add-apt-repository ppa:certbot/certbot && apt-get update && apt-get install python-certbot-nginx -y

Red Hat Enterprise Linux (RHEL)/CentOS in AWS

# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install yum-utils && yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional && yum update -y && yum -y install python-certbot-nginx

Red Hat Enterprise Linux (RHEL)/CentOS (Normal?)

# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install yum-utils && yum-config-manager --enable rhel-7-server-extras-rpms rhel-7-server-optional-rpms && yum update -y && yum -y install python-certbot-nginx

And boom, just like that we…are almost there.  One more command and we should be set:

# certbot --nginx

If the server_name variable in your nginx.conf was modified to point to your FQDN, nginx was reloaded, and all packages installed properly, the Certbot/Let’s Encrypt command should give you the option of selecting “1: tower.example.com”, so do that.  Important: Certbot will ask if you want to force all traffic to be HTTPS.  Ansible Tower already has this configuration in place, so just select “1” (no redirect) when asked about forcing HTTPS to skip that configuration change.
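Let’s Encrypt certificates are only valid for 90 days, so renewal is worth automating.  Certbot ships a renew subcommand for exactly that; a cron entry like the following (the schedule is my assumption) keeps things hands-off, since certbot only actually renews certificates that are close to expiry:

```
# Attempt renewal twice daily; no-op unless a cert is near expiration
0 3,15 * * * certbot renew --quiet
```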

Navigate to your Ansible Tower Web UI, and you should have a “Secure” badged site.

Ansible Tower, Secured

Ansible Tower and Let’s Encrypt. That looks so good.


Method 2 – Manually replacing the SSL Certificate with your own

This is really easy to do, actually.  All you have to do is place your certificate files on your Ansible Tower server (in /etc/ssl or /etc/certificates, for example), and modify the nginx configuration to point to them.  You may recall these lines in the configuration from earlier…

ssl_certificate /etc/tower/tower.cert;
ssl_certificate_key /etc/tower/tower.key;

Yes, there.  All you have to do is replace those two files, or preferably deposit your own and change the configuration to point to the new files.
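In nginx.conf terms the swap is just repointing those two directives.  The file names below are hypothetical; whatever you use, the ssl_certificate file should contain the full chain (your certificate plus any intermediates concatenated):

```
ssl_certificate /etc/tower/my-fullchain.crt;
ssl_certificate_key /etc/tower/my-server.key;
```

Then reload nginx with systemctl reload nginx.service as before.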

Now, this only works under one of the following considerations…

  1. Your certificate is from Comodo, VeriSign, etc.: a CA that’s generally in the root zone of most browsers’ certificate stores.
  2. Your certificate is from a CA that is installed in your browser’s or device’s root CA store.  Typical of enterprises who manage their own PKI and deploy to their endpoints.

Basically, as long as the Certificate Authority (CA) that signed your replacement certificate is in your root CA store, you should be golden.  Otherwise, you’ll see the same SSL warning message, since your browser doesn’t recognize the CA’s identity and therefore doesn’t trust anything signed by it.  If there’s an intermediary CA, make sure to include the full certificate chain to establish the full line of trust.


Gee, that was fun, right?!

Well, I had fun at least.  If not fun, maybe someone needs to secure their Ansible Tower installation(s) and finds this useful and it brings along a sense of accomplishment or relief.

I have half a mind to make this into an Ansible Role…EDIT: Holy crap, I did make this into an Ansible Role. It has a couple of neat tricks, yeah.

Compile & Install Apache 2.4.7, PHP 5.5 via FPM, on Debian 7 (Wheezy)

Ok, I have to admit, I’ve been getting pretty used to just running up my list of installed libraries and applications via apt-get. I haven’t had to compile anything lately. Not that it’s hard to do, but a single command is rather easy, especially when it takes care of a lot of work for you.

Either way, every now and then you’ll want to compile your own copy of an application anyway. For me, right now it’s because of some features I want, such as being able to set up Perfect Forward Secrecy on a few sites.  This whole setup (compiling the list of configuration arguments I wanted, trial and error with dependencies and build methods, and a bit of going back to fix up a few things) took about 2 days.  I could have done it quicker, or potentially even slower if I’d researched some more things, but I got to the point where I was rather happy with this being a model setup for future deployments.

So here’s the game plan…I’ve got a shiny new Debian 7 VPS that I’m turning into a new ultra-secure LAMP server.  A few things I’ll leave to install via apt-get…OpenSSL in the Deb7 repo is up to 1.0.1e (the latest at the time of this writing), so it’ll support the TLS 1.2 that I need.  APR is something we’ll need for the Event MPM; we’ll install it via apt-get to also pull in a few other packages Apache uses, but we’ll download the latest version to include in the configuration process since it’s nothing really big or difficult.

So let’s go ahead and install a few things that we’ll need…

root@localhost:~/# apt-get install libaprutil1 libaprutil1-dev build-essential libgdbm3 libgdbm-dev 

Next, I made a new directory, /opt/src/, downloaded all of the packages, unzipped, and deleted archives.  At the time of writing, that was a few wgets involving Apache 2.4.7, PHP 5.5.7, APR 1.5.0, APR-Util 1.5.3, APR-Iconv 1.2.1, and ZLib 1.2.8.

So, now that we’ve got the latest versions of Apache, PHP, MySQL, and the APR suite downloaded to the /opt/src/ directory, unzip them all and delete the archives (or move them to another directory!).  First, we’ll build APR 1.5.0.

root@localhost:/opt/src# cd apr-1.5.0/

root@localhost:/opt/src/apr-1.5.0/# chmod +x configure && ./configure --enable-threads --prefix=/opt/apr-1.5.0

So far so easy: go to the APR src directory, make the configure script executable, pass a few arguments to the configure script, and read over everything to make sure the system isn’t missing some software or something didn’t go slightly wrong.  Once it looks good, just run a quick  make && make install to install APR.  Next, onto APR-Iconv.

root@localhost:/opt/src/apr-1.5.0/# cd ../apr-iconv-1.2.1/

root@localhost:/opt/src/apr-iconv-1.2.1/# chmod +x ./configure && ./configure --prefix=/opt/apr-iconv-1.2.1 --with-apr=/opt/apr-1.5.0/bin/apr-1-config

Those few lines simply change directories, quick chmod, and APR-Iconv is almost installed!  Look things over, once it looks good with no serious warnings, run make && make install to move onto APR-Util.

root@localhost:/opt/src/apr-iconv-1.2.1/# cd ../apr-util-1.5.3/

root@localhost:/opt/src/apr-util-1.5.3/# chmod +x configure && ./configure --prefix=/opt/apr-util-1.5.3 --with-iconv=/opt/apr-iconv-1.2.1 --with-crypto --with-gdbm --with-openssl --with-apr=/opt/apr-1.5.0/bin/apr-1-config

You should be getting the hang of it by now; as long as nothing is on fire, continue with make && make install.  Done with the APR suite, onward to zlib!

root@localhost:/opt/src/apr-util-1.5.3/# cd ../zlib-1.2.8/

root@localhost:/opt/src/zlib-1.2.8/# chmod +x configure && ./configure --prefix=/opt/zlib-1.2.8 && make && make install

Ok, so that one I just went ahead and put the make && make install suffixed to the configure command because it’s such a simple compile I’m sure any machine will configure properly.  Done with all the prerequisites, onto the actual fun!

Now, compiling Apache can be a very easy, or very strenuous process.  It all depends on what modules your applications need, how much time you want to spend researching performance/modules, and various other unique variables.  For my purposes, I’m compiling all the MPMs modularly so I can swap back and forth if need be, got everything available being compiled but I’ll go back and only load the ones I need later once I have a few hours to go through a bunch of configuration files.

root@localhost:/opt/src/zlib-1.2.8/# cd ../httpd-2.4.7/

root@localhost:/opt/src/httpd-2.4.7/# ./configure --prefix=/opt/apache2 --enable-mpms-shared=all --with-mpm=event --enable-threads --enable-mods-shared=reallyall --enable-http --enable-deflate --enable-expires --enable-headers --enable-rewrite --enable-mime-magic --enable-log-debug --enable-ssl --enable-nonportable-atomics=yes --enable-ssl-staticlib-deps --enable-mods-static=ssl --with-apr=/opt/apr-1.5.0/bin/apr-1-config --with-apr-util=/opt/apr-util-1.5.3/bin/apu-1-config --enable-fcgi --with-z=/opt/zlib-1.2.8

Ok, that one’ll throw you for a loop.  Either way, again check to make sure it isn’t whining about much…it might complain about a lack of Lua, privileges, etc.  For the most part, don’t worry about it.  If you knew you should, then you would.   Everything look nice?  make && make install and let’s recap what we just did…

  1. We set up our environment, a few prerequisites, and a work area with extracted source files in /opt; then we were ready to build
  2. Spent a lot of time researching configuration options via ./configure --help and Google Fu, configuring, building, reconfiguring, and rebuilding, and a few more times of that later figured everything out and started fresh…
  3. Configured and built APR.  Some will say to include the APR and APR-Util source files in the srclib/ directory in the Apache source directory and let the Apache configure script configure them as well, but doing so will not build correctly in most cases with a very custom setup.
  4. Configured and built APR-Iconv, then APR-Util, followed by Zlib.
  5. Configured and built Apache 2.4.  Apache has some crazy configuration options, and I used most of them.  All modules and MPMs were compiled and set to load modularly, this lets us experiment with different setups to see which performs best and also lets us expand and contract our feature set easily.

Ok, so that’s where we’re at now.  We could go ahead and test the Apache setup, but let’s configure the init.d script first so that it’ll run on startup.

root@localhost:/opt/apache2/# cp bin/apachectl /etc/init.d/apache2

root@localhost:/opt/apache2/# nano /etc/init.d/apache2

Now, the reason we’re copying this over instead of symlinking it is that we want to keep the original apachectl file while making a few simple adjustments; this file was configured specifically for this installation, so it requires very little modification.

Now, append the following lines right after the first line (after the interpreter declaration):

# Provides:          apache2
# Required-Start:    $local_fs $remote_fs $network $syslog $named
# Required-Stop:     $local_fs $remote_fs $network $syslog $named
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# X-Interactive:     true
# Short-Description: Start/stop apache2 web server

Save, and exit.

Finish up with a few quick commands to finalize the Apache environment…we’ll symlink the compiled binaries to somewhere in our default PATH. You could add the bin directory to your ~/.bashrc PATH instead, but this way it’s system-wide and not user-specific. Then just run update-rc.d with default parameters and boom, you can use Apache 2 as a service and it’ll start on boot!

root@localhost:/opt/apache2/# ln -s /opt/apache2/bin/* /usr/local/bin/
root@localhost:/opt/apache2/# update-rc.d -f apache2 remove
root@localhost:/opt/apache2/# update-rc.d -f apache2 defaults 91 09

You should now be able to start/stop/etc Apache 2 via:

root@localhost:/opt/apache2/# service apache2 stop
root@localhost:/opt/apache2/# /etc/init.d/apache2 start

Ok, we’ve got about half of this done now.  Onto MySQL, which has just as many configure options as Apache does, though thankfully most of them aren’t close to being needed…

root@localhost:/opt/apache2/# cd /opt/src/mysql-5.6.15/