Category: Blog

How-to: Passing the Certified Cloudbees Jenkins Engineer exam

Well, add another notch to my tool belt: I recently became a CCJE, or CCJPE, or whatever they'll call it in a few months.
Basically, that means I’ve attained the recognition of being a Certified Cloudbees Jenkins Engineer.

What the frack?

If you haven't heard of DevOps, it's basically the new way of doing software development.  Ok, it's not that new, but it's now something that can't be avoided.  Agility is king, and that agility can make or break whole industries and tenured titans, and define how we as a society work.  Ok, maybe it doesn't have quite that crucial an effect and reach, but if you don't know, now you kinda know; go out there, read some books, and stumble around Wikipedia for a while on the DevOps side of things.  Actually, you should start your search with Jenkins.

Jenkins is pretty much the king of the DevOps CI/CD automation/orchestration world.  Very versatile, long-running history, most used CI/CD platform out there, the list of flatteries goes on.

Obligatory Halloween-themed Jenkins. You’re welcome, Internet.

Cloudbees is the company behind Jenkins, and they provide two certification options: the Certified Jenkins Engineer (CJE), which covers the open-source side of Jenkins, and the one I obtained, the Certified Cloudbees Jenkins Engineer (CCJE), which is everything from the CJE plus an extra 30 questions over the enterprise version of Jenkins, Cloudbees Core.  Both exams are multiple choice, either 60 or 90 questions over a 60 or 90 minute time period.  Passing the CCJE exam also covers and counts for the content covered in the CJE exam.  Considering the company I work for just won Public Sector Partner of the Year from Cloudbees, I decided to go for the gusto and take the bigger, badder, DevOps-y-er CCJE exam to get what I feel to be a two-for-one, or maybe more closely to….ah whatever, you get the idea, it's mo betta.

Most of what I've mentioned is listed online on the Jenkins Certification page of the Cloudbees site, and even more is available on their site regarding the exams, so no real secrets have been given away here.  I also can't really say too much regarding the specifics of the exam, no-cheat/no-tell NDA and all.  What I can tell you about is my experience with studying and preparing for the exam.

Study Material

First, let’s start with some resources that I used to help me study…

  • Take a look, it’s in a book – I never appreciated Reading Rainbow as much as I should have, but thankfully I read [physical books] more than ever now; thanks LeVar!
    I'd highly recommend Jenkins 2.x Continuous Integration Cookbook (Third Edition).  It's a great way to bootstrap getting your hands dirty with Jenkins and some of the best practices available.  Knowing these best practices and how they relate to the enterprise product Cloudbees Core (formerly Cloudbees Jenkins Enterprise) is extremely important for the CCJE, as it is focused on the enterprise.  Of course, the enterprise product isn't covered in the open-source centric CJE, but best practices are still great to know so you can easily map how things should interact.
  • From the Horse's Mouth – While I was working for Red Hat I was fortunate enough to have access to a Red Hat Learning Subscription, which is pretty much all the training Red Hat produces, in addition to labs, practice exams, and so on.  It left me with a real appreciation for vendor-produced training since it's usually the best available resource and comes straight from the source.  Cloudbees has something called Cloudbees University that has a lot of great free resources, and for ONLY $300 offers their training course, two days' worth of content in a self-paced format, and it even comes with a lab VM ready to rock and roll!  Considering a lot of other vendor training options are more expensive, this is probably the best $300 you can spend on technical training.
  • YouTube – This is kind of an obvious source for some training but it’s sometimes more miss than hit.  I found a great resource that goes over most of the key points of the CJE and CCJE exams.  It’s done by a channel called DevOps Library and they have a playlist with some wonderful content, it’s a little dated but it’s largely still applicable to not only Jenkins but also the exam.
  • Other blogs – Some of these were a little older (hell, one is about the exam BETA) but, together with everything else, they gave a little more color to the experience and some other tips to help with studying for the exam…

Before the SCANTRON

Ok, it’s not really on a SCANTRON but I couldn’t help but show my age.

There were a few other things I’d suggest to potential exam goers.

  1. Install the projects and the products – Install Jenkins, a few times, and a couple of different ways if you can.  Then install Cloudbees Core; you can request a trial license from within the installer.  Get your hands dirty.
  2. Learn the architecture – Learn how masters, agents, the file system, and all the other components work together.  This includes learning some of the more popular plugins and what they deliver.  Basic *nix admin skills are good to have as well.
  3. Create jobs/pipelines and run builds – There’s some confusing terminology and sometimes a word can mean multiple things depending on what part of Jenkins you’re using.  Best way to get over that is to actually start building jobs and pipelines.  It also brings that actual “AHA!” or “AWESOME!” moment when you see everything just magically working together to form a CI/CD workflow.
  4. Buy the $300 training from Cloudbees University – Why?  It's cheap, has a great ROI, and it comes with a lab VM to run in VirtualBox (via Vagrant) that has pretty much everything you need set up and ready to rock and roll, IN ADDITION TO the best CJE/CCJE training you can get.  This means no time wasted setting up Docker, LDAP, Jenkins masters/agents, a reverse proxy, etc, and more time for learning.
  5. Review the official Jenkins Certification site – This page has a lot of great information that can help you along the way.  You'll find the Study Guides (important!), some FAQs covering passing scores and certification longevity (forever), along with a bunch of other great detail regarding the exam offerings.

Do the damn thing

That’s all I got really, and pretty much all I can say without getting in trouble!

Disconnect, drop off the face of the Internet, and focus on downloading that information to your dome.

This is the combination of resources and study material I used to pass the CCJE.  I can’t really say how you should go about studying, everyone learns best differently but hopefully there’s something in here that can help you.  I learn best with multiple ways aggregated together.  Over two days I wrote about 30 pages of notes in addition to watching those videos a few times, flipping through the training slides back and forth, and reading the documentation and training material…oh and doing the labs.  So as you can tell, it takes a lot for me to learn something…

After the study session, I had a great night's sleep, hit the gym in the morning, and dined on a salad and some local [to Denver] charcuterie, all washed down with a few local [to Denver] brews.  Then I walked into the testing center, which was hosted at a community college and was super weird to walk through in my Halloween costume.  About 40 minutes later, I had passed the exam!

Will this all work for you?  Probably not.

Will some of it?  Yes, and that’s all that matters.

Is this a piss-poor “how-to”? More than likely.

Good luck, and get to integrating and deploying, LIKE A BOSS!

Quick n’ Dirty – Adding disks to Proxmox with LVM

Proxmox LVM expansion: adding additional disks to your Proxmox host for VM storage.  No real story behind this post, just something simple, and probably more documentation of the process for myself than anything.

In case you were just CRAVING the story and background behind this…well, I recently got a few new (to me) Dell R710 servers, pretty decked out.  Since Proxmox boots off of a 128gb USB 3.0 stick, the internal hard drives were untouched and, more importantly, unmounted when booting into Proxmox.

It pays to have an RHCSA…(PV/VG/LV is part of it).  In looking for guides and resources detailing the addition of extra disks to Proxmox, many of them had the new disk set up as a mounted ext3 filesystem.  I knew this couldn't be right.  A lot of other resources were extremely confusing, and then I realized that Proxmox uses LVM natively, so if I recall my RHCSA training, all I need to do is assign the disks as a Physical Volume, add them to the associated Volume Group, and extend the Logical Volume.  Then boom, LVM handles the rest of it for me like magic.

I’ve got 2 120gb SSDs in the server waiting to host some VMs so let’s get those disks initialized and added to the LVM pools!

And before you ask, yes that set of 120gb SSDs is in a RAID0.  As you can tell, I too like to live dangerously…but seriously they’re SSDs and I’ll probably reformat the whole thing before it could even error out…and yeah I could use RHV Self-Hosted, but honestly, for quick tests in a lab Proxmox is easier for me.  This isn’t production after all…geez, G.O.M.D.

Take 1

First thing, load into the Proxmox server terminal, either with the keyboard and mouse or via the Web GUI’s Shell option.  You’ll want to be root.

Next, use fdisk -l to see what disk you’ll be attaching, mine looked something like this:
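
Something along these lines; the device names and sizes here are illustrative, yours will differ:

Disk /dev/sda: 240.1 GB, 240057409536 bytes, 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes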

What I’m looking for is that /dev/sda device.  Let’s work with that.

Next, we’ll initialize the partition table, let’s use cfdisk for that…

cfdisk /dev/sda

> New -> Primary -> Specify size in MB
> Write
> Quit
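
If you'd rather skip the interactive menus, parted can do the same thing in one shot (this variant writes a GPT label; swap in msdos if you prefer an MBR table like cfdisk's default):

parted -s /dev/sda mklabel gpt mkpart primary 0% 100%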

Great, next let’s create a Physical Volume from that partition.  It’ll ask if you want to wipe, press Y

pvcreate /dev/sda1

Next we’ll extend the pve Volume Group with the new Physical Volume…

vgextend pve /dev/sda1

We’re almost there, next let’s extend the logical volume for the PVE Data mapper…we’re increasing it by 251.50GB, you can find that size by seeing how much is available with the vgs command
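
The VFree column is the number you're after; the output is shaped roughly like this (values illustrative):

  VG  #PV #LV #SN Attr   VSize   VFree
  pve   2   3   0 wz--n- 335.36g 251.50g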

lvextend /dev/pve/data -L +251.50g

And that's it!  Now if we jump into Proxmox and check the Storage across the Datacenter we can see it's increased!  Or we can run the command…

lvdisplay

Rinse and Repeat

Now we’re on my next Proxmox node.  No, I’m not building a cluster and providing shared storage, at least not at this layer.

My next system is a Dell R710 with Proxmox freshly installed on an internal 128gb USB flash drive.  It has two RAID1 (+1 hot-spare) arrays that are about 418GB each; they're at /dev/sdb and /dev/sdc.  Let's add them really quickly…

cfdisk /dev/sdb

> GPT
> New -> Primary -> Specify size in MB
> Write
> Quit

cfdisk /dev/sdc

> GPT
> New -> Primary -> Specify size in MB
> Write
> Quit

pvcreate /dev/sdb1 && pvcreate /dev/sdc1
vgextend pve /dev/sdb1 && vgextend pve /dev/sdc1
lvextend /dev/pve/data -L +851.49g

And now we should have just about a terabyte of storage available to load VMs into…

Now with more room for activities!

Huzzah!  It worked!  Plenty of room for our VMs to roam around now.

What am I gonna do with a few redundant TBs of VM storage and about half a TB in available RAM and more compute than makes sense?  Continue along my Disconnected DevSecOps lab challenge of course.  You might remember some software defined networking services being tested on a Raspberry Pi Cluster…

More soon to come…

Software Defined Networking with Linux

Well well well, it’s been a while y’all.

Been busy developing and writing a few things, some more exciting stuff coming up in the pipeline.
A lot of the projects I’m working on have to kind of sort of “plug together” and to do a lot of this I use open-source solutions and a lot of automation.
Today I'd like to show you how to set up a Linux-based router, complete with packet forwarding, DHCP, DNS, and dare I even say NTP!

Why and what now?

One of the projects I'm working on requires deployment into a disconnected environment, and it's a lot of things coming together.  Half a dozen Red Hat products, some CloudBees, and even some GitLab in the mix.  Being disconnected, there needs to be some way to provide routing services.  Some would buy a router such as a Cisco ISR; in many cases I like to deploy a software-based router such as pfSense or Cumulus Linux instead.  In this environment there's a strict requirement to only deploy Red Hat Enterprise Linux, so that's what I used and what this guide is based around, but it can be used with CentOS with little to no modification, and you can execute the same thing on a Debian-based system with some minor substitutions.

A router allows packets to be routed around and in and out of the network; DHCP allows clients to obtain an IP automatically as they would at home; and DNS resolves names such as google.com into an IP such as 123.45.67.190, and can also resolve hostnames internally.  NTP ensures that everyone is humming along to the same beat.  Your Asus or Nighthawk router and datacenters use Linux to route traffic every day, and we'll be using the same sort of technologies to deliver routing to our disconnected environment.

Today’s use case

Let’s imagine you start with this sort of environment, maybe something like this…


What we have here is a 7-node Raspberry Pi 3 B+ cluster!

3 nodes have 2x 32gb USB drives in them to support a 3-node replica Gluster cluster (it's fucking magic!).  Then 3 other nodes are part of a Kubernetes cluster, and the last RPi is the brains of the operation!

In order to get all these nodes talking to each other, we could set static IPs on every node and tell everyone where everyone else is at and call it a day.  In reality, though, no one does that and it’s a pain if not daunting.  So the last Raspberry Pi will offer DHCP, DNS, and NTP to the rest of the Kubernetes and Gluster clusters while also offering service as a wifi bridge and bastion host to the other nodes!  I’ve already got this running on Raspbian and have some workloads operating so I’ve recreated this lab in VirtualBox with a Virtual Internal Network and Red Hat Enterprise Linux.

Step 1 – Configure Linux Router

Before we proceed, let’s go along with the following understandings of your Linux Router machine:

  • Running any modern Linux, RHEL, Cumulus Linux, Raspbian, etc
  • Has two network interface cards, we’ll call them eth0 and eth1:
    • WAN (eth0) – This is where you get the “internet” from.  In the RPi cluster, it’s the wlan0 wifi interface, in my RHEL VM it’s named enp0s3.
    • LAN (eth1) – This is where you connect the switch to that connects to the other nodes, or the virtual network that the VMs live in.  In my RHEL VM it’s named enp0s8.
  • We’ll be using the network 192.168.69.0/24 on the LAN side (or netmask of 255.255.255.0 for those who don’t speak CIDR), and setting our internal domain to kemo.priv-int

I’m starting with a fresh RHEL VM here, so the first thing I want to do is jump into root and set my hostname for my router, update packages, and install the ones we’ll need.

sudo -i
hostnamectl set-hostname router.kemo.priv-int
yum update -y
yum install firewalld dnsmasq bind-utils

Now that we’ve got everything set up, let’s jump right into configuring the network interface connections.  As I’m sure you all remember from your RHCSA exam prep, we’ll assign a connection to the eth1 interface to set up the static IP of the router on the LAN side and bring it up.  So assuming that your WAN on eth0 is already up (check with nmcli con show) and has a connection via DHCP, let’s make a connection for LAN/eth1 (my enp0s8)…

nmcli con add con-name lanSide-enp0s8 ifname enp0s8 type ethernet ip4 192.168.69.1/24 gw4 192.168.69.1
nmcli con modify lanSide-enp0s8 ipv4.dns 192.168.69.1

Before we bring up the connection, let’s set up dnsmasq.  dnsmasq will serve as both our DNS and DHCP servers which is really nice!  Go ahead and open /etc/dnsmasq.conf with your favorite editor…

vi /etc/dnsmasq.conf

And add the following lines:

# Bind dnsmasq to only serving on the LAN interface
interface=enp0s8
bind-interfaces
# Listen on the LAN address assigned to this Linux router machine
listen-address=192.168.69.1
# Upstream DNS, we're using Google here
server=8.8.8.8
# Never forward plain/short names
domain-needed
# Never forward addresses in the non-routed address space (bogon networks)
bogus-priv
# Sets the DHCP range (keep some for static assignments), and the lifespan of the DHCP leases
dhcp-range=192.168.69.100,192.168.69.250,12h
# The domain to append short requests to, all clients in the 192.168.69.0/24 subnet have FQDNs based on their hostname
domain=kemo.priv-int,192.168.69.0/24
local=/kemo.priv-int/
# Add domain name automatically
expand-hosts

Annnd go ahead and save that file.

Now, on a RHEL/CentOS 7 machine, we have firewalld enabled by default so let’s make sure to enable those services.

firewall-cmd --add-service=dns --permanent
firewall-cmd --add-service=dhcp --permanent
firewall-cmd --add-service=ntp --permanent
firewall-cmd --reload

Next, we'll need to tell the Linux kernel to forward packets by modifying the /etc/sysctl.conf file and adding the following line:

net.ipv4.ip_forward=1

It might already be in the file but commented out, so simply remove the pound/hashtag in front and that'll do.  We still need to enable it for the running system, though:

echo 1 > /proc/sys/net/ipv4/ip_forward
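
You can double-check that it took by reading the value back; it should print 1:

sysctl net.ipv4.ip_forward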

Yep, almost set, so let's just bring up the network interface connection for eth1, set some iptables NAT masquerading and save it, and enable dnsmasq…

iptables -t nat -A POSTROUTING -o enp0s3 -j MASQUERADE
iptables -A FORWARD -i enp0s3 -o enp0s8 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i enp0s8 -o enp0s3 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o enp0s3 -j MASQUERADE -s 192.168.69.0/24
iptables-save > /etc/iptables.ipv4.nat
nmcli con up lanSide-enp0s8
systemctl enable dnsmasq && systemctl start dnsmasq

Step 2 – Connect Clients & Test

So this part is pretty easy actually, you’ll just need to connect the clients/nodes to the same switch, or make a few other VMs in the same internal network.  Then you can check for DHCP leases with the following command:

tail -f /var/lib/dnsmasq/dnsmasq.leases

And you should see the lease time, MAC address, associated IP, and client hostname listed for each connected client on this routed network!  We should be able to ping all those hostnames now too…
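
Each lease is a single line in that file: expiry (epoch time), MAC address, IP, hostname, and client-id.  Something like this, with made-up values:

1572482400 b8:27:eb:aa:bb:cc 192.168.69.101 k8s-node1 *

And since we installed bind-utils earlier, we can ask the router directly to prove DNS is resolving those names (the hostname here is hypothetical):

dig @192.168.69.1 k8s-node1.kemo.priv-int +short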

This is great, and we have many of the core components needed by a routed and switched network.  Our use case needs some very special considerations for time synchronization so we’ll use this same Linux router to offer NTP services to the cluster as well!

Step 3 – Add NTP

Here most people would choose to use NTPd, which is perfectly fine.  However, RHEL and CentOS (and many other modern Linux distros) come preconfigured with Chronyd, which is sort of a newer, better, faster, stronger version of NTPd with some deeper integrations into systemd.  So today I'll be using Chronyd to set up an NTP server on this Linux router.  Chronyd is also a bit better suited for disconnected environments.

Essentially, we just need to modify the /etc/chrony.conf and set the following lines:

stratumweight 0
local stratum 10
allow 192.168.69.0/24

After that, enable NTP synchronization and restart with:

timedatectl set-ntp 1
systemctl restart chronyd
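
To watch it settle in, chronyc gives a quick readout; run these on the router (and later on the clients once they're pointed at 192.168.69.1):

chronyc tracking
chronyc sources -v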

And give that a moment to sync and you should have a fully functional network core based on simple Linux and a few packages!

Next Steps

There are a few things that come to mind that you could do in this sort of environment…

  • Create an actual GPS-based NTP Server – Be your own source!
  • Set Static Host/IP Mappings – Make sure you have a section of IPs available that aren’t in the DHCP block to set the static IP reservations to.
  • Create site-to-site VPNs – Tack on a bit of OpenVPN and we could easily create a secure site-to-site VPN to join networks or access other resources!
  • Anything in your router’s web UI – Pretty much every router out there runs some sort of Linux embedded, and they’re all abstracting elements and functions that are primarily built-in and accessible to everyone.  Set up port-forwarding?  No problem. Add UPnP?  Not too hard either.
  • Add PiHole Ad-Blocker – Maybe you’re using a Raspberry Pi as a wireless bridge to connect some hard wired devices on a switch to a wifi network.  Wouldn’t it be nice to block ads for all those connected devices?  You can with PiHole!

Rolling up Let’s Encrypt on Ansible Tower’s UI

The other day someone asked me what I do for fun.

“Fun” really has a few different definitions for me, and I’d say for most people.  It could be entertainment, guttural satisfaction, leisurely adventuring about, or maybe for some slightly compulsive people like me, accomplishing a task.  Something I’m kind of overly compulsive about is proper SSL implementation and PKI.

So this morning I was having LOADS of fun.  My fast just started to kick in with some of the good energy and ‘umph’ so I was feeling great.  Bumping that new Childish summertime banger, really grooving.  I just finished spinning up a new installation of Ansible Tower and logged in.  That’s when the Emperor lost his groove.

I’ve seen the screen plenty of times in the Ansible Tower Workshops and simply, almost reflexively skip past the big warning sign you see when you first log into an Ansible Tower server’s UI.  The big warning sign isn’t too crucial in the large scheme of things, but it really stuck out to me this time.  Maybe because this server is part of a larger permanent infrastructure play, but it really got to me and I HAD to install some proper SSL certificates.

Self-signed SSL Warning

We all know what to do here, click Advanced and yadda-yadda…or shouldn’t we just fix the issue?


 

So let’s go over two different ways to fix this…

Background

Ansible Tower uses Nginx (pronounced engine-x) as its HTTP server for the Web UI.  It's not configured 'normally' like you'd see in most web hosting scenarios; there's no sites-available, mods-available, etc.  That's good though, because nothing else should really run on this server outside of Ansible Tower, so the good guys at Ansible thought it'd be good to just stuff everything in the default nginx.conf file.

The certificate is self-signed and can be easily replaced.  Here are the lines from the nginx.conf file that matter for this scope, starting at line 42 as of today/this version:

# If you have a domain name, this is where to add it
server_name _;
keepalive_timeout 65;

ssl_certificate /etc/tower/tower.cert;
ssl_certificate_key /etc/tower/tower.key;

Method 1 – Let’s Encrypt

This is probably the more prevalent method nowadays.  It’s easy, free, no need to manage anything since ACME takes care of it.  If your Ansible Tower instance faces the publicly routable Internet, this is probably your go-to.  If it’s not able to reach the Let’s Encrypt ACME servers, you won’t be able to use Let’s Encrypt without some tunnel/proxy/cron tomfoolery, or their manual method which incurs extra steps.  Alternatively, skip to Method 2 which is how to install your own certificate from your own CA/PKI.

Remember a few lines up in the configuration snippet where it had a comment “# If you have a domain name, this is where to add it”?  Go ahead and do just that, edit the /etc/nginx/nginx.conf file and replace the underscore (“_”) with your FQDN.  Save, exit.
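
Using the same example hostname that Certbot will offer up later, the relevant line would now read:

server_name tower.example.com;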

Go ahead and reload the nginx configuration

# systemctl reload nginx.service

Next, let's enable the repos we need to install Let's Encrypt's Certbot.  Here are some one-liners that install a PPA/EPEL and enable repos where needed, update the system, and install the needed packages.  Some parts will still be slightly interactive (adding the PPA, accepting GPG keys in yum, etc).

Debian/Ubuntu

# add-apt-repository ppa:certbot/certbot && apt-get update && apt-get install python-certbot-nginx -y

Red Hat Enterprise Linux (RHEL)/CentOS in AWS

# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install yum-utils && yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional && yum update -y && yum -y install python-certbot-nginx

Red Hat Enterprise Linux (RHEL)/CentOS (Normal?)

# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install yum-utils && yum-config-manager --enable rhel-7-server-extras-rpms rhel-7-server-optional-rpms && yum update -y && yum -y install python-certbot-nginx

And boom, just like that we….are almost there.  One more command and we should be set:

# certbot --nginx

If the server_name variable in your nginx.conf was modified to point to your FQDN, nginx was reloaded, and all packages installed properly, the Certbot/Let's Encrypt command should give you the option of selecting "1: tower.example.com"; do so.  Important: Certbot will ask if you want to force all traffic to be HTTPS.  Ansible Tower already has this configuration in place, so just select "1" when asked about forcing HTTPS to skip that configuration change.
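
One more note while we're here: Let's Encrypt certificates are only valid for 90 days, and the certbot packages generally ship a cron job or systemd timer to renew them automatically.  You can verify renewal will go smoothly with a dry run:

# certbot renew --dry-run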

Navigate to your Ansible Tower Web UI, and you should have a “Secure” badged site.

Ansible Tower, Secured

Ansible Tower and Let’s Encrypt. That looks so good.


 

Method 2 – Manually replacing the SSL Certificate with your own

This is really easy to do actually.  All you have to do is place your certificate files on your Ansible Tower server (in /etc/ssl or /etc/certificates for example), and modify the nginx configuration to point to them.  You may recall the lines in the configuration from earlier…

ssl_certificate /etc/tower/tower.cert;
ssl_certificate_key /etc/tower/tower.key;

Yes, there.  All you have to do is replace those two files, or preferably deposit your own and change the configuration to point to the new files.

Now, this only works under one of the following considerations…

  1. Your certificate is from Comodo, VeriSign, etc – a CA that's generally in the root zone of most browsers' certificate stores.
  2. Your certificate is from a CA that is installed in your browser’s or device’s root CA store.  Typical of enterprises who manage their own PKI and deploy to their endpoints.

Basically, as long as the Certificate Authority (or CA) that signed your replacement certificate is in your CA root zone you should be golden; otherwise, you'll see the same SSL Warning message displayed, since your browser doesn't recognize the CA's identity and therefore does not trust anything signed by it.  If there's an intermediate CA, make sure to include the full certificate chain to establish the full line of trust.
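
With nginx, including the chain just means concatenating everything into the one certificate file the config points at, server certificate first, then intermediates.  The file names here are placeholders for whatever your CA issued:

# cat tower-server.crt intermediate-ca.crt > /etc/tower/tower.cert
# systemctl restart nginx.service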

Conclusion

Gee, that was fun, right?!

Well, I had fun at least.  If not fun, maybe someone needs to secure their Ansible Tower installation(s) and finds this useful and it brings along a sense of accomplishment or relief.

I have half a mind to make this into an Ansible Role…EDIT: Holy crap, I did make this into an Ansible Role.  It even has a couple of neat tricks up its sleeve.

Wrangling Bluetooth in RHEL & CentOS 7

I recently started as the Lead Solution Architect at Fierce Software.  It's been an excellent experience so far and I'm excited to be working with such a great team.  At Fierce Software we help Public Sector entities find enterprise-ready open source solutions.  Our portfolio of vendors is deliberately diverse, because in today's enterprise, multi-vendor, multi-vertical solutions are commonplace.  No one's running all Cisco network gear, Cisco servers, annnnd then what's after that?  They don't have a traditional server platform; even they partner with Red Hat and VMware for that.  We help navigate these deep and wide waters to find your Treasure Island (no affiliation).

New business cards

These cards are Fierce…

 

Starting a new role also comes with some advantages such as a new work laptop. I’ll be traveling regularly, running heavy workloads such as Red Hat Satellite VMs, and balancing two browsers with a few dozen tabs each (with collab and Spotify in the background). Needless to say, I need a powerful, sturdy, and mobile workhorse with plenty of storage.

A few years ago that would have been a tall order, or at the very least an order that could get you a decent used Toyota Camry. Now this is possible without breaking the bank and while checking all the boxes.

Enter System76's 3rd generation Galago Pro laptop.  Sporting a HiDPI screen, 8th generation Intel Core i7, an aluminum body, backlit keyboard, and plenty of connectivity options.  This one is configured with the i7-8550U processor for that extra umph, 32GB of DDR4 RAM, Wireless AC, a 500gb m.2 NVMe drive, and an additional 250GB Samsung SSD.  That last drive replaced the configured 1TB spinner, which I tossed in an external enclosure, swapping in a spare SSD I had around.  The great thing about System76's Galago Pro is that it's meant to be modified, so that 250GB SSD will later be replaced with a much larger one.

At configuration you have the option of running Pop!_OS or Ubuntu. Being a Red Hat fan boy, I naturally ran RHEL Server with GUI. Why RHEL Server instead of Desktop or Workstation? Mostly because I have a Developer Subscription that can be consumed by a RHEL Server install, and it can run 4 RHEL guest VMs as well. If you’ve ever used enterprise-grade software on commodity, even higher-end commodity hardware, you might have an idea of where this is going…

Problem Child

One of the benefits of Red Hat Enterprise Linux is that it is stable, tested, and certified on many hardware platforms.  System76 isn't one of those (yet).  One of the first issues is the HiDPI screen; not all applications like those extra pixels.  There are some workarounds, and I'll get to those in a later post.  System76 only provides drivers and firmware updates for Debian-based distributions, so some of their specialized configuration options aren't naturally available.  These are some of the issues I'll be working on, hopefully producing a package to enable proper operation of Fedora/RHEL-based distributions on the Galago Pro.  More on that later…

The BIGGEST issue I had was with Bluetooth. This is not inherent to System76 or Red Hat. It’s actually (primarily) a BlueZ execution flag issue and I’ll get to that in a moment…

Essentially my most crucial requirement is for my Bluetooth headset to work.  We use telepresence software like Google Hangouts and Cisco WebEx all day long to conduct business, and a wired headset is difficult to use, especially when mobile.  I spent probably half a day trying to get the damned thing working.

I attempted to connect my headset with little success; it kept quickly disconnecting.  I paired the Galago Pro to one of my Amazon Echos and it worked for a few minutes, then began distorting and skipping.  I plugged in a USB Bluetooth adapter, blacklisted the onboard one, and still had the same issues.  Must be a software thing…

Have you tried turning it on and off again?

Part of the solution? Buried in an Adafruit tutorial for setting up BlueZ on Raspbian on a Raspberry Pi…
Give up and search Google for "best linux bluetooth headset" and you get a whole lot of nothing.  Comb through mailing lists, message boards, and random articles?  Nowhere and nada.
Give me enough time and I can stumble through this Internet thing and piece together a solution though.

Galago Pro and BT Headset

The plaintiff, a hot shot SA determined to have his day in wireless audio; the defendant, an insanely stable platform

 

Essentially I’ll (finally) get to the short part of how to fix your Bluetooth 4.1 LE audio devices in RHEL/CentOS 7. Probably works for other distros? Not sure, don’t have time to test. All you have to do is enable experimental features via a configuration change….and set a trust manually…and if you want higher quality audio automatically, create a small file in /etc/bluetooth/…

Step 1 – Experimental Features

So you want to use your nice, new, BLE 4.1 headset with your RHEL/CentOS 7 system…that low energy stuff is a little, well…newer than what is set by default so we just need to add a switch to the Bluetooth service execution script to enable those “fresh” and “hot” low energy features…

$ sudo nano /lib/systemd/system/bluetooth.service

Enable the experimental features by adding --experimental to the ExecStart line; for example, the configuration should look like:

…
[Service]
Type=dbus
BusName=org.bluez
ExecStart=/usr/local/libexec/bluetooth/bluetoothd --experimental
NotifyAccess=main
…

Save the file and then run the following commands:

$ sudo systemctl daemon-reload
$ sudo systemctl restart bluetooth

Step 2 – Set Trust

So you'll find that some of the GUI tools might not coordinate a Bluetooth device pair/trust properly…it only has to juggle the service, the BlueZ daemon, and PulseAudio (!), what is so difficult about that?
Let's just take the headache out of it and load up bluetoothctl, list my devices, and set a manual trust.  This is assuming you've at least paired the BT device with the GUI tools, but it might vibrate oddly or disconnect quickly after connecting.  There are ways to set up pairing via bluetoothctl as well, but the help command and manpages will go into that for you.

$ sudo bluetoothctl
> devices
  Device XX:XX:XX:XX:XX:XX
> trust XX:XX:XX:XX:XX:XX

Step 3 – Create /etc/bluetooth/audio.conf

So now we have it paired, trusted, and the newer BLE 4.1 features enabled. If your BT headset also includes a microphone for call functionality, you might find the headset auto-connecting to the lower quality HSP/HFP profile. We want that tasty, stereo sound from the A2DP profile. Let’s tell the BlueZ daemon to auto-connect to that A2DP sink.

$ echo "AutoConnect=true" | sudo tee /etc/bluetooth/audio.conf
$ sudo systemctl restart bluetooth.service
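
If you want to confirm the stereo profile actually won out, PulseAudio will tell you; look for your headset's card and an "Active Profile: a2dp_sink" line in the output of:

$ pactl list cards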

Now, turn off and on your headset, it should auto-connect, and your Bluetooth 4.1 LE headphones should work all fine and dandy now with high(er) fidelity sound!

Woah, this isn’t where I parked my car…

Things looking a bit different around here eh?  Got a new splash of paint, some wallpaper here and there, and a couple updates.

OLD

Old website of Ken Moini

Man, that is a great looking site

First off, let's go over my old site…based around WordPress, single page for the most part, and a dark hallway of pop-up-modal-AJAX hell…

  • I had decided to disregard any sane logic for content.  Custom Post Types?  Na, I’d rather hard code some portfolio entries to make sure I don’t ever update the listings in the four years it’s been up
  • I wanted to keep it a completely single page with additional content being loaded via AJAX and presented in full-screen pop-up modals.  It *worked* and I had URL anchor rewrite working as well but it still was kinda silly to go this route.  Yes, there were fewer scripts loaded and less bandwidth used because of this but I could have probably just cached things better and optimized some other things to produce a better experience overall.
  • I don't care what you say, I like the fuchsia.
  • Yes, I know I didn’t update it as much as I should have.  That’s what happens when you live your life like a rock star I guess.

Still, it served me well enough.  Now though, it’s time to get serious, it’s time to get mean, it’s time to get scary, it’s time to get things updated.

NEW

So what’s changed?

  • No need for a screenshot, take a look around, click a little.
    • Dropped the fuchsia for an analogous color that's a bit easier on the eyes and complementary to more media.
    • Updated Foundation by ZURB to their newest version (with the XY Grid!)
    • Got rid of the AJAX-y…everything really…
  • Security update
    • So now my personal site is on its own VPS, locked down nice and tight
    • Most things auto-update, cause ain’t nobody got time for that
    • Back-ups are actually off-site now instead of just off-partition…(doh)
    • HTTPS, PWA elements, HTTP2
  • Content
    • More of it!  I’ll actually be active now!

WHY

Why bother updating my site?  Well, there's always a sneaking suspicion that four-year-old code could have some new vulnerabilities.  And also because I hated the AJAX modals, and adding content was too long of a process.

I’ll be updating this site a lot more in the future, I owe that to all of my 7 readers.  Also because somehow, I’ve stumbled into a position where I’ll be generating content, demos, documentation, use cases, and putting on workshops.  Cool!

I’ve worked for a couple of larger tech companies the last few years and the common issue I see with many technical organizations is the ‘data silo,’ or ‘tribal knowledge,’ or my personal favorite ‘cylinders of excellence.’  A lot of what I personally know is locked up in this noggin’ of mine and I need to be more diligent in my communication, documentation, and sharing skills.  Who knows, maybe I can be one of those popular and cool blogger guys.

Setup Laravel Homestead on Windows

Recently I’ve been trying to port a custom-made PHP application to the Laravel framework. I’ve been a fan of the Laravel framework for some time now, and with the recent release of Laravel 5, there are a lot of new features that really improve upon the development pipeline, from Laravel Elixir, to full PSR-4 support.

While porting this custom-made application I've made use of Bower to include the relevant front-end bits, and Laravel Elixir to manage them into the application.  This has produced a number of headaches: my production server has stringent security policies, so nearly every command has to be followed by another to fix permissions.  This gets old after a while.

Then I thought, “Why am I developing an application on my production server?”

Yeah, the current software I’m porting is on my production server which makes it easy to copy code over in a quick manner.  Yeah, with just a small bit of configuration I setup a development sub-domain that worked on the same server.  Yeah, it’s a central source.  Yeah, it can become a headache still.

So I set my sights on Laravel Homestead, a helpful wrapper for Vagrant that lets you easily launch a virtual machine configured for Laravel development with all the necessary bits already installed.  The Homestead development environment is quite a bit different from my production environment, but I'll be able to tackle porting the code base later with considerable ease, especially compared to what I've been dealing with in developing on my production server.

I currently have a few machines, but the one I do most of my work on is a Lenovo laptop running Windows 7.  Most of the guides and documentation online are geared towards users who are already running *nix environments such as Mac OS X or Ubuntu, so I ran into a steeper learning curve, which I'll document below in hopes of helping anyone else avoid the issues I encountered.

 

Background

First off, a quick break down of what Homestead is.

The underlying technology is simple VM tech.  I, like many, am using VirtualBox, though VMWare is also supported.  If you have had any VM experience, you'll know that setting up a Virtual Machine can be tedious: configuring the resources, installing the guest operating system, installing the necessary software, configuring the software, and then, hopefully, you're ready to begin work.

This is where Vagrant comes into play.  Much like any entity setting up similar VMs over and over again with similar software stacks, Vagrant makes deploying VMs easy, providing local/remote repositories of VM images ("boxes") to download and spin up with just a few commands.

Homestead is a set of scripts and a VM defined by Vagrant configurations that sets up a Virtual Machine that already houses most everything you would need for Laravel development with just a few simple commands.

 

Installation

To get started you’ll need to install a few bits of software, in the following order…

  1. VirtualBox – Download the latest installer for your host, in this case probably Windows, and install
  2. VirtualBox Extension Pack – From the same VirtualBox download page, download the extension pack and install as well
  3. Vagrant – Download Vagrant, install.  It’ll ask you to reboot.  Go ahead and do so before proceeding.
  4. Git – Download the Git CLI tool, this will become important because the Windows terminal is rather lacking.

Setup

Once you have all the necessary packages installed, we can go ahead and start setting up the development environment.

  1. Now that we have Git installed, open the “Git Bash” application.  This will be the terminal interface we’ll use to perform our tasks and has many features you’d be used to in a *nix environment such as “ls” and being able to run bash scripts.
  2. With the "Git Bash" terminal open, it'll default to your user directory.  I took this time to create a subfolder in my user directory called "Development" which will house all of my developmental files that will be shared between the virtual machine and my Windows host machine:
    mkdir Development
  3. Next, run the command to download the Homestead Vagrant box:
    vagrant box add laravel/homestead
    This will download the Homestead-configured Vagrant box.  It will probably take a good while to download, so grab a cup of coffee or a goblet of geuze.
  4. Next, we’ll clone the Homestead repository. On *nix machines you would just use Composer to download Homestead, but with the Git Bash terminal, we can clone the same scripts and execute them locally without the need for Composer/PHP installed on our host machine.
    git clone https://github.com/laravel/homestead.git Homestead
  5. With the Homestead repository cloned, we'll enter it.  In previous versions of Homestead the homestead.yaml file was already generated, but now that's a manual task, so we'll have to run an additional command to generate the homestead.yaml:
    cd Homestead
    bash init.sh
  6. Now that we’ve generated the homestead.yaml file and the various other files needed, if you do an “ls” on the local Homestead repository directory, you’ll notice that those files are not located there. This is because it’s generated inside your user home directory in a hidden directory .homestead
  7. Next we’ll open our text editor or IDE and browse to that generated homestead.yaml file to configure our environment. Note that it is important to configure everything correctly before spinning up the Vagrant box or else some of the configuration files in the VM will not match and can produce errors. I’ll get into the possible errors here later down the page…
  8. Like I mentioned as to where the homestead.yaml file is generated, in C:\Users\Ken\.homestead\, I opened Notepad++ and edited it to look like this:
    ---
    ip: "192.168.10.10"
    memory: 1024
    cpus: 1
    provider: virtualbox

    authorize: ~/.ssh/id_rsa.pub

    keys:
        - ~/.ssh/id_rsa

    folders:
        - map: ~/Development/Shared
          to: /home/vagrant/Code

    sites:
        - map: saltsmarts.app
          to: /home/vagrant/Code/SaltSmarts/public

    databases:
        - saltSmarts

    variables:
        - key: APP_ENV
          value: local

    • The IP address is a suggested address for your VM virtual NIC to bind to. If your local network runs on the 192.168.10.xxx network, you’ll want to change this to something else, such as 192.168.100.10
    • memory can be variable; the default is 2048.  I set mine to 1024 since I only have 8gb of RAM on my host machine, this is a very simple development VM, and I normally have a few dozen tabs open in Chrome/FF, iTunes, and a host of other resource-guzzling applications.
    • authorize and keys can be left set at the default as we’ll generate new SSH keys here soon in those default locations
    • folders is where you’ll want to link your local host folder to the guest VM folder.  This is where I linked the Development directory I created earlier.
    • sites is where you’ll setup the nginx site linking.  This is important to setup before spinning up the Vagrant box because if you alter it later you’ll need to reprovision the Vagrant box or manually reconfigure the nginx configuration.  Here I have the domain “saltsmarts.app” mapped to the “/home/vagrant/Code/SaltSmarts/public” folder, with the thought in mind that I’ll be creating a new Laravel project by the name SaltSmarts and of course we’ll want the server to load the public directory like in any other Laravel deployment.
    • databases is where you’ll setup whatever database you’d like to use, for my purposes I called it “saltSmarts”
    • variables you can leave as default
  9. Next we'll set up SSH keys to connect to the Vagrant box by running the following command, substituting your email address:
    ssh-keygen -t rsa -C "you@emailaddress.com"
    Make sure not to use a passphrase on the SSH key, as that will hang the box on start up.
  10. Next we'll want to map the "saltsmarts.app" domain to the VM.  To do this we'll need to open Notepad as Administrator (search the Start Menu for "Notepad" and right click to "Run as Administrator").  From there, we'll use the menu "File > Open" and browse to "C:\Windows\system32\drivers\etc"; at the bottom next to "File name" there's a drop down box listing "Text documents (.txt)", click on that and select "All Files (*.*)" to list all the files in the directory and open "hosts"
  11. With the "hosts" file open in Notepad, at the bottom of the file add the IP address listed in your homestead.yaml file linking to the saltsmarts.app domain.  At the bottom of my hosts file I added the following line:
    192.168.10.10 saltsmarts.app
    Save and exit Notepad.

That's it! Now we're ready to bring our Vagrant box up.  Switch back to the Git Bash terminal and run the following command from the Homestead directory we cloned earlier (where the Vagrantfile lives):
vagrant up
This will provide a few lines of output, hopefully it will successfully boot the VM. If it gives you a few Warnings about the connection timing out, no worries this just means the VM hasn’t booted to a state ready enough to connect to, just let it run the reconnect attempts.

If it gives you an error about the VM entering "gurumeditation" status, just run the following command:
vagrant halt
Then load the VirtualBox GUI and manually boot the VM, watching for error messages.  Send the GUI-loaded VM the ACPI shutdown signal, and try booting via "vagrant up" again.

Next we'll connect to the VM via SSH to install Laravel.  To do this, run the simple command:
vagrant ssh
From there you'll be connected to the VM via SSH and can install Laravel via Composer like you normally would.  To make everything link nicely, I installed the new Laravel project to "/home/vagrant/Code/SaltSmarts"

That's the bulk of it; most everything else is ready to rock and roll from here.  The documentation is handy for general operation and administration of the box, but ideally you won't need much other configuration or doctoring.
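
One handy command to keep in your back pocket: if you change homestead.yaml after the box already exists, you can apply the changes by re-provisioning rather than rebuilding:

vagrant reload --provision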

Errors and Caveats

  • If you didn’t setup the folders and sites initially before the box was brought up and provisioned you might run into the “No input file specified” error when you browse to your VM.  This is an issue with nginx not knowing where to load the Laravel project from.  You can reconfigure the directory paths by editing the file found in “/etc/nginx/sites-enabled/” to reflect those path changes and restart the nginx service to reload the configuration.
  • If there is an error generated while spinning up the VM regarding improper SSH keys, don't fret; it'll automatically generate new keys, but you might have to bring the VM up and down a few times for it to properly generate.
  • Be wary of using tabs in your homestead.yaml file; evidently it does not process tabs correctly, so make sure to only use spaces.  There are no real errors generated when it cannot parse the homestead.yaml file, it just fails silently.

Creating a Responsive Design the Easy Way

Responsive design is all the rage now-a-days.  It just magically takes the same content on your website and transforms it into a properly formatted, more user-friendly version of your site for just about any device, without the need for user-agent detection/redirection scripts or a specialized mobile domain.  From the big Fortune 500 companies and Internet giants such as Twitter and Google down to the small mom-and-pop shops, everyone in between is updating their layouts, content, and feature sets to be more adaptable to any size screen.  It gains you a larger target audience and a better user experience; I'm sure everyone remembers the pain of constantly zooming and un-zooming desktop-native sites on your smartphone.  Believe it or not, this can all be done without making the lives of the developer and designer harder if done properly.

I've used many responsive and grid frameworks, from Skeleton to Bootstrap 2/3, to my current favorite, Foundation by ZURB.  They all have different styles for their markup, different classes, different features, and different libraries that enhance the user experience.  Finding which is right for you will take a bit of play, and it also depends on what is right for your web application.  I've moved away from Bootstrap because the included styles were rather generic and a bit more work to modify, and the libraries added a good deal of weight to the load time of most of the applications I made.  Foundation is a bit more up my alley for those reasons, but again, for yourself and your apps, you'll have to experiment a little to see what's best.

With any responsive framework though, there are a few guidelines and tricks to make your life a bit easier while you’re scaling up and down screen sizes and here are some of them…

This is what happens when you have a coffee cup in one hand and a pen in the other, they both harmoniously meet on paper

Sketch it out

Even if it's a rough sketch, something done with pen and paper, throw in a little color action with markers/colored pencils, but get your idea out of your head before you start developing it.  I used to sketch everything in Photoshop myself, but then found it easier, faster, and more organized to put pen to paper.  Odds are, especially if you have a harder time bringing your ideas to life, sketching in analog formats will be much more efficient and productive for producing your layout and feature set.
This is the rough sketch of the mobile view of an app I’m working on, which brings me to my next point…

Start with the mobile view first

If you're wanting to make a responsive site, odds are mobile users are a big part of your target audience.  Why?  Damned near everyone has a smartphone now, and mobile web usage continues to increase every year.  It's much easier to design your application and its functions and features in a mobile view first and expand upwards than it is to take a large screen layout and condense it down.  You'll be able to focus on what really matters for the user experience, spot which features and content are superfluous, and focus on usability in an almost streamlined fashion.

Stick with the Grid

Most responsive frameworks, including Bootstrap and Foundation, are based off a grid system.  The most common of these is the 960 grid, 960 being the number of pixels in the grid on the horizontal plane.  There are 12 columns, generally 60px wide, with a 10px margin on either side for a total of 80px each.  You'll be tempted to modify the grid size some, and of course more power to you, but at a certain point things can get hairy, so in my opinion stick to the grid and style around it.  This way it's much easier to simply add a few classes to your HTML markup and have things scaling perfectly across displays.

Now, just because it's based off the 960 grid doesn't mean these frameworks or your design have to be limited to a width of 960px.  Most grids can even go beyond 12 columns, to 16 or 24, giving a more minute level of design control.  These days, with high definition screens, you'll end up having to set a few extra media queries for extra-large screens.  While designing this site, my development machine used a 40″ 1080p TV as the display.  I had to use the base media queries for phone, tablet, and desktop, and create a new one for extra-large displays.  Nothing too hard at all, but another screen to be considered while setting your media queries and layout transformations.

Fall in love with rem/vw/vh

Most people remember how to set width, height, font-size, margins, and so on based on pixel values.  That’s still all fine and dandy, but with responsive design one of the newer CSS 3 specs gives you a powerful tool set to redefine your sizes into values that are more elastic.

You might have heard of em for size settings, which is the predecessor of all these new units from CSS 2.  The difference between em and rem ("root em") is that em is based off the parent element: the child inherits the parent's size and scales relative to it.  The benefit of rem is that it's based off the root size, so whatever your CSS sets for your html tag will be the base for every rem measure.  Inherited, elastic sizes rule in RWD and make your job significantly easier.

Another new set of units introduced in CSS 3 is vw and vh, which stand for view-width and view-height respectively.  As you might gather, these units are relative to the viewport width and height.  They're rather handy, especially if you want to keep things relative to the width/height of the viewport for a few devices in a specific media query; you'll find that with the vast number of smartphones out there, there are slightly different pixel ratios even for screens of the same size.  The only thing to note about vw/vh is that not all browsers fully support them.  For backwards compatibility, you can use a small JavaScript function to compute vw/vh to pixels if you'd like to go that route.
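
To make the units concrete, here's a tiny illustration; the selectors and sizes are arbitrary, purely for demonstration:

html { font-size: 16px; }
.card { font-size: 2rem; }    /* 2 x the root size = 32px, regardless of nesting */
.card p { font-size: 1.5em; } /* 1.5 x the parent .card = 48px */
.hero { height: 50vh; }       /* half the viewport height on any device */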

In-Browser, and On-Device Testing

Chrome DOM Inspector
You can scale down the size of your actual window, but the device emulators give you more tools and specificity

One of the best tools in your arsenal can be built right into your browser.  If you don’t know about Firebug for Firefox, or Chrome’s built in DOM Inspector, this might change your life.  Not only can you modify the DOM on the fly for quick testing, but they both also have another feature that makes RWD easier.  You’re able to scale the viewport up and down to different sizes with lists of various popular devices.  This will help you see exactly where the media queries start and stop, how they scale from each other, and give you a hands-on tool set to manipulate it without a constant cycle of saving layouts and stylesheets, and reloading on your phone.

Now, the thing to remember is, if you have the resources available to you, even if you have to borrow a friend's iOS/Android device for a few minutes, test your design on the actual devices as well.  Even though Firebug and Chrome's DOM Inspector will emulate the screen sizes and set the proper user-agent for the specified device, you'll be sure to find some odd quirks in how the actual devices render your markup.

 

Other than that, all I can suggest is to try the various RWD frameworks, see which one is best for you, your application, and the devices you’re wanting to build for.  They all have different features, some overlapping, and what’s best for you is going to be different from what is best for someone else.  Otherwise the above tips will hopefully make your transition into the RWD realm a bit easier.

Implementing a Secure Remember Me Login Solution

This set of functions took me the better part of a day to get to a point where I'm happy with it.  Enough checks and balances for security, handles efficiently, and doesn't produce any errors!

I'm developing a site in from-scratch PHP, with a lot of drop-in scripts, references pulled from Composer.

I'm using Zebra Sessions for database session handling.  I have authentication working beautifully and will be enabling SSL soon, so next I wanted to implement a "Remember Me" function.  I figured that was some part of the Zebra Sessions class, and then learned that it's an entirely separate system.  Just like authentication, there are ways to do it, and ways to do it well and securely.  This is a TL;DR crash-course in how a "Remember Me" function works and how to implement it…

Things to know…

  • Barely ties into your authentication system.  You can do this without much more than an additional database table and a few lines in your current authentication script.
  • $_SESSION data is used during the current browser session.  If you have Chrome/Firefox/Opera restore your last session automatically, some of this data could be held over, but most times you will have an empty $_SESSION data set when you open a new instance of your browser.
  • $_COOKIE data can persist over the course of multiple browser sessions and expires whenever it's set to.  This is where you'll store your “Remember Me” data (see the quick illustration after this list).
  • This class has three main functions:
    • setRememberMe which is run when a successful authentication has occurred and the user had the “Remember Me” checkbox checked AND when you’re regenerating an authentication based on the cookie
    • checkRememberMe which is run when there's no currently logged-in, authenticated session present.  Handles a lot of the security measures automatically.
    • logout which is run when the user initiates a Logout function while having a “Remember Me” cookie set which deletes the cookie and the rows from the database.
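
As a quick illustration of that $_SESSION/$_COOKIE distinction (the cookie name and value here are just placeholders):

<?php
session_start();
$_SESSION['userID'] = 42; // typically gone once the browser session ends
setcookie('rememberme', 'encoded-triplet-here', time() + 604800); // persists ~7 days across browser sessions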

How this “Remember Me” function works…

  1. User logs in with the “Remember Me” checkbox checked.
  2. Upon successful authentication, a cookie is set with a security triplet: userID, series token, and signature.  The series token is something very large and very random, and is kept across successful re-authentications so the application can detect tampering and track the re-authentication chain. The signature is a hashed value of a secret key, the date of creation, and the other cookie data. It will be verified to check the authenticity of the whole data set (see the sketch after this list).
  3. User browses and accesses the application, does what needs to be done, and shuts down their system or otherwise closes their browser.
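
Here's a minimal, hypothetical sketch of building that triplet; the function name and SECRET_KEY constant are stand-ins for your own naming and key management, and random_bytes() assumes PHP 7+:

<?php
// define('SECRET_KEY', '...'); // your very large, very random private key
function buildRememberMeTriplet($userID) {
	$series    = bin2hex(random_bytes(32)); // very large, very random series token
	$created   = date('Y-m-d H:i:s');       // creation date, folded into the signature
	$signature = hash_hmac('sha256', $userID . $series . $created, SECRET_KEY);
	return array('userID' => $userID, 'series' => $series, 'signature' => $signature);
}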

A short while later…

  1. User comes back a few hours/days later; the authenticated session is no longer available, but the “Remember Me” cookie is still set, so long as it hasn't expired.
  2. Site detects there is no authenticated user session and would otherwise redirect/restrict access as it does for guest users. Instead, having failed the authentication check, the application checks for the presence of the “Remember Me” cookie and finds it.
  3. Application verifies the details of the cookie against what is in the database, creates a new session for the previously authenticated user, nulls the old database entry, generates a new cookie, and updates the database with the new cookie details (a verification sketch follows the list).
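
A hypothetical sketch of that verification, assuming $cookie is the decoded triplet from the browser and $row is the matching database record (the column names are made up):

<?php
function verifyRememberMe($cookie, $row) {
	$expected = hash_hmac('sha256', $cookie['userID'] . $cookie['series'] . $row['created'], SECRET_KEY);
	// hash_equals() compares in constant time, which avoids timing attacks
	return hash_equals($expected, $cookie['signature'])
		&& hash_equals($row['series'], $cookie['series']);
}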

And from there it basically just repeats itself until either the user forcefully logs out, thus manually destroying the cookie as well, or the cookie expires.

Well, let's get down to brass tacks…

Front-end Modifications

  1. Modify your login screen's HTML form to include a checkbox: set the name/id/type and wrap it in a label, as in the sketch below.
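
    A minimal sketch of that markup (the name/id are just examples; match them to whatever your login handler reads):
    <label>
    	<input type="checkbox" name="rememberme" id="rememberme" value="1">
    	Remember Me
    </label>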

That’s it!

Back-end Modifications

  1. Create a new class and define a few private variables; the constructor is where you'll assign them. We'll lay out the general functions of the rest of the class too…
    class rememberMe {
    	private $key = null; //Very large, very random key. Defined in construct function
    	private $cookieExpiration; //Time in the future to set the expiration date for the cookies set.
    	private $databaseExpiration; //Time in the future past $this->cookieExpiration where it's safe to remove from the database if it's a used row.
    	private $link; //DB Handler
    	private $table_name; //Table name to use.  Default is 'session_rememberMe_data'
    	private $restrictIP; //Restrict Remember Me cookies to be usable only from the same IP.  Not preferred for NAT'd networks, which are EVERYWHERE.  Even...in your HOME!  Most likely anyway.  Default set to false.
    	private $restrictUA; //Restrict Remember Me cookies to be usable only by the same User Agent.  IE does some stupid shit and changes it as often as it dumps the memory, so your cookie might become invalid upon a random visit and then be destroyed.  Default set to false.
    
    	function __construct($link, $table_name = 'session_rememberMe_data', $restrict_to_ip = false, $restrict_to_user_agent = false) {
    		$this->key = LARGE_RANDOM_PRIVATE_KEY_HERE;
    		$this->cookieExpiration = ( time() + 604800 ); //7d
    		$this->databaseExpiration = ( $this->cookieExpiration + 604800 ); //7d after cookie expiration
    		$this->link = $link;
    		$this->table_name = $table_name;
    		$this->restrictIP = $restrict_to_ip;
    		$this->restrictUA = $restrict_to_user_agent;
    	} //End construct function
    
    	//Set function run after a successful "Remember Me" login AND when regenerating authentication from the cookie...
    	public function setRememberMe($userID) {
    		// Generate a very large, very random series token
    		// Hash the signature from $this->key, the creation date, and the other cookie data
    		// Insert the userID/series/signature row into $this->table_name
    		// Set the cookie triplet in the browser, expiring at $this->cookieExpiration
    	} //End public function setRememberMe
    
    	//Check function run when no authenticated session is present...
    	public function checkRememberMe() {
    		// If cookie is set..
    			// Load and decode cookie, verify signature/series (and IP/UA if restricted) against the database
    			// On success: reauthenticate the session, null the old row, and issue a fresh cookie via setRememberMe
    			// On failure: destroy the cookie and any matching database rows
    	} //End public function checkRememberMe
    
    	//Logout function used to clean up the current cookie from the database...
    	public function logout() {
    		// If cookie is set..
    			// Load and decode cookie..
    			// Delete/Null matching cookie data from the database
    			// Delete the cookie from the user's browser
    	} //End public function logout
    } //End class rememberMe

    Now that we have the class created, we'll integrate it with the current authentication system.

  2. Your authentication system should have a function/method to check the status of a logged-in user; assuming that method is called loggedIn(), add the following somewhere in your main initial authentication checks…
    if ( !loggedIn() ) {
    	$rememberMe = new rememberMe($databaseHandler);
    	if ( $rememberMe->checkRememberMe() ) {
    		//User remember me data has been verified, session has been set to reauthenticate the user, and new cookie data has already been generated
    		//Redirect, or show welcome back message?
    	}
    }
    

    This loads the rememberMe class and checks the cookie. The checkRememberMe function will either validate the matching cookie and reauthenticate the user/session, or destroy it.

  3. In your authentication system's logout function/method, assuming it's called logOut(), call the following inside that function to destroy the session/cookie data… This destroys the current table rows for the authenticated user when they decide to manually log out, which prevents any session understood to be logged out from being hijacked.
    function logOut() {
    	$rememberMe = new rememberMe($databaseHandler);
    	$rememberMe->logout();
    	//Go on to unset/destroy session, restart with blank session
    }
    

    Now whenever the user manually logs out, or otherwise fails a reauthentication check, the logOut() function cleans up the “Remember Me” cookie and its matching database rows along with the session.

Creating a ZenCart Template

Recently I've been taking on tasks of a masochistic nature.
I've got a client who's using ZenCart. It's an old, antiquated PHP-based shopping cart software that isn't updated anymore, isn't easy to use, and the list goes on. Either way, we're comparing options for future cart software, and one of the things I mentioned is that mobile shopping has increased 30% every year and that his site needs to be responsive down to mobile. Showed him what it is, got the “Wow” factor, and all that.

Either way, for some reason I had a late-night craving to make a template…for ZenCart. A responsive one, a simple one: just porting Bootstrap into ZenCart's system! Or so I thought it'd be simple…

Their documentation is extensive, but poorly organized.
Here's a quick rundown on how to make a new ZenCart template. This will be mostly terminal commands, so you'll need to be comfortable reading those. I'll give an overview of it all, but I'm not going to detail this as extensively as I normally do, since ZenCart probably won't even be heard of in 5 years.

The following assumes you've already downloaded ZenCart, installed it on your host, and are working in that directory. We're going to call the new template CUSTOM and use the english language set; rename the capitalized items to whatever you want to call your template. So here's my little quick-start batch script, sorta…
kemo@kemoDesk:/www/zencart/$ cd includes/templates/
kemo@kemoDesk:/www/zencart/includes/templates$ mkdir CUSTOM
kemo@kemoDesk:/www/zencart/includes/templates$ cp -r template_default/* CUSTOM/
kemo@kemoDesk:/www/zencart/includes/templates$ cp -r classic/{template_info.php,css/} CUSTOM/
kemo@kemoDesk:/www/zencart/includes/templates$ cp template_default/templates/tpl_* CUSTOM/templates/
kemo@kemoDesk:/www/zencart/includes/templates$ cd ../languages/
kemo@kemoDesk:/www/zencart/includes/languages$ mkdir {CUSTOM,english/CUSTOM}
kemo@kemoDesk:/www/zencart/includes/languages$ cp english.php CUSTOM/
kemo@kemoDesk:/www/zencart/includes/languages$ cp english/*.php english/CUSTOM/

From there, you'll modify the details in that CUSTOM/template_info.php file with your template name, author name, etc. Once you've done that, go into the Admin panel and switch to the new template to start seeing changes.
One thing to note is that the ZC documentation is horribly outdated…the template the docs talk about is from 2003. Kinda retro-mod with the flat design, I guess, but the colors kill it. So instead of basing the template off of template_default, I used classic…
[INSERT COMPARISON IMAGE…]
From there it’s just a matter of using a lot of grep and updating tables to divs.
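
For instance, a starting point for hunting down the table-based markup, assuming the directory layout created above:

kemo@kemoDesk:/www/zencart/includes/templates$ grep -rl '<table' CUSTOM/templates/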

Here are a few places that helped to sort out this information…
http://www.zen-cart.com/content.php?78-how-do-i-set-up-the-template-overrides