Red Hat – I’m a Red Hat Certified Engineer

Red Hat Certified

I’m a Red Hat Certified Engineer (RHCE), achieved on RHEL 6, RHEL 7 and RHEL 8.

I’m a Red Hat Certified Specialist in OpenShift Administration achieved on Red Hat OpenShift Container Platform 3.11 and 4.

You can verify my certificate here (Certification ID 111-210-076).

I have also completed the “Red Hat Partner Platform Certified Salesperson” (RHPPCS) training program.

Gamelinux PassiveDNS RPM (Red Hat / CentOS)

PassiveDNS is a tool (by Gamelinux) that collects DNS records passively, to aid incident handling, Network Security Monitoring (NSM) and general digital forensics.

PassiveDNS sniffs traffic from an interface or reads a pcap file and outputs the DNS server answers to a log file. PassiveDNS can cache/aggregate duplicate DNS answers in memory, limiting the amount of data in the logfile without losing the essence of the DNS answer.

I only found some RPM builds, for example by Slava Dubrovskiy at Altlinux, but they were out of date (release 0.3.3). I’ve created a new RPM which is up to date with release 1.2.0 (b94d776). Feel free to download and rebuild the source RPM (passivedns-1.2.0-3.20151019git3e0611d.cgk.el6.src.rpm) if required. Four packages will be built: passivedns, passivedns-daemon, passivedns-tools and passivedns-debug.

One thing to note: a patch has been added to this RPM which makes passivedns send its logs to syslog via the local6 facility instead of the local7 facility.
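
If you use rsyslog, a minimal rule to route those messages to their own file could look like this (a sketch; the destination path and drop-in filename are assumptions):

# e.g. in /etc/rsyslog.d/passivedns.conf:
# send everything passivedns logs via local6 to a dedicated file
local6.*    /var/log/passivedns.log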

— update
I’m going to write some systemd-compatible service scripts for passivedns on RedHat / CentOS 7. These will be versioned on GitHub.
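
As a first sketch, a minimal unit file could look like this (the interface, binary path and log location are assumptions, adjust to your build):

[Unit]
Description=PassiveDNS passive DNS collection daemon
After=network.target

[Service]
# run in the foreground; sniff eth0 and log to a file (example paths)
ExecStart=/usr/bin/passivedns -i eth0 -l /var/log/passivedns/passivedns.log
Restart=on-failure

[Install]
WantedBy=multi-user.target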

OpenStack Jenkins Job Builder RPM

Jenkins Job Builder takes simple descriptions of Jenkins jobs in YAML or JSON format and uses them to configure Jenkins. You can keep your job descriptions in human readable text format in a version control system to make changes and auditing easier. It also has a flexible template system, so creating many similarly configured jobs is easy.
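
As a quick illustration, a minimal job definition could look like this (a sketch; the job name and shell step are made up):

- job:
    name: example-job
    description: 'Managed by Jenkins Job Builder, do not edit in the UI'
    builders:
      - shell: 'echo "Hello from Jenkins Job Builder"'

Running jenkins-jobs test on such a file renders the resulting Jenkins XML without touching your Jenkins instance, which is handy for review.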


Protect yourself from POODLE SSLv3

On Tuesday, October 14, 2014, Google released details on the POODLE attack, a padding oracle attack that targets CBC-mode ciphers in SSLv3. The vulnerability allows an active MITM attacker to decrypt content transferred over an SSLv3 connection. While secure connections primarily use TLS (the successor to SSL), most users were vulnerable because web browsers and servers will downgrade to SSLv3 if there are problems negotiating a TLS session.
poodle.io

POODLE: SSLv3 vulnerability (CVE-2014-3566)
Red Hat Product Security has been made aware of a vulnerability in the SSL 3.0 protocol, which has been assigned CVE-2014-3566. All implementations of SSL 3.0 are affected.
https://access.redhat.com/articles/1232123

Fix Apache

# disable the broken SSLv2 and SSLv3 protocols, allow all TLS versions
SSLProtocol ALL -SSLv2 -SSLv3
# prefer the server's cipher order over the client's
SSLHonorCipherOrder On
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:\
ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:\
RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
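
After reloading Apache you can also verify locally that SSLv3 handshakes are refused (assuming your openssl build still ships the -ssl3 option; the hostname is an example):

# this handshake should now fail with an "ssl handshake failure" alert
openssl s_client -connect www.example.com:443 -ssl3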

Don’t forget to test your configuration at SSL Labs. The cipher suite above is based on:
https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/

Fix your browser, e.g. Firefox
You can set the value security.tls.version.min = 1 on the about:config page.
https://poodle.io/browsers.html

Monitoring Cisco IronPort with Collectd

Collectd is a daemon which collects system performance statistics periodically and provides mechanisms to store the values in a variety of ways. Collectd gathers statistics about the system it is running on and stores this information. Those statistics can then be used to find current performance bottlenecks (i.e. performance analysis) and predict future system load (i.e. capacity planning).

You can’t run collectd directly on the IronPort, so we needed to find some other way to pull useful data from it. We could either use SNMP (less data) or some other method (more data). After some searching we found out you can also access your IronPort statistics through the web frontend. A logical choice was to use the cURL-XML plugin.

Another important piece of our setup is graphite, a tool that provides real-time, scalable graphing. You can send your metrics to graphite instead of using a local RRD file. This is done via the AMQP plugin, for which we provide packages in our yum repository.

You can manually access the IronPort XML file containing more statistics at https://<hostname>/xml/status, which will result in:

<status build="rls" hostname="hostname" timestamp="20130429193603">

I’m only going to cover the gauges in this post, because those seem the most relevant.

<gauges>
<gauge name="ram_utilization" current="18" />
</gauges>

You can pull data from this XML using XPath. It takes some time to find the correct syntax, so here is a small example:

<LoadPlugin curl_xml>
  Interval 10
</LoadPlugin>

<Plugin "curl_xml">
  <URL "https://ironport.fqdn/xml/status">
    Host "ironport.fqdn"
    Instance "ironport"
    User "username"
    Password "password"
    # skip certificate verification (e.g. for a self-signed certificate)
    VerifyPeer false
    VerifyHost false
    CACert "/etc/pki/tls/certs/ca-bundle.crt"

    # select the gauge via XPath and read its "current" attribute
    <xpath "/status/gauges/gauge[@name=\"ram_utilization\"]">
      Type "ram_utilization"
      ValuesFrom "@current"
    </xpath>
  </URL>
</Plugin>
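
One caveat: ram_utilization is not one of collectd’s default types, so collectd has to be told about it. A minimal sketch, assuming a custom types file (the path and the 0-100 range are assumptions):

# in collectd.conf: load the default types plus your own file
TypesDB "/usr/share/collectd/types.db" "/etc/collectd/ironport_types.db"

# in /etc/collectd/ironport_types.db
ram_utilization    value:GAUGE:0:100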

That’s about everything you need to do. Remember, if you want your data to be stored in graphite, you also have to configure the AMQP plugin. There are some fine blog posts about that matter, so I’m not going to duplicate the information here. Check: Collectd to graphite.

Packaging puppet 3.1.1 for ARM Raspberry Pi

Steps

There are a few steps you can follow to create a build host that matches the Raspberry Pi almost identically:

  • Install Qemu
  • Download the latest version of Raspbian
  • Expand the Raspbian image with extra disk space (more info here)

When your buildhost is operational you can start packaging:

  • Install the puppetlabs source apt repository
  • Start building the arm deb packages

Prepare the buildhost

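A commonly used way to boot a Raspbian image under QEMU looks like this (a sketch: kernel-qemu is a QEMU-compatible kernel you download separately, and the image filename is an example):

qemu-system-arm -M versatilepb -cpu arm1176 -m 256 \
  -kernel kernel-qemu \
  -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
  -hda raspbian.img \
  -no-reboot -serial stdio

Note that -m 256 is the maximum the versatilepb machine supports, which is also why builds can take a while.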

Start building the packages on the buildhost

  • Add the apt-src repository of puppetlabs to your apt/sources.list: "deb-src http://apt.puppetlabs.com/ wheezy main devel dependencies", then run apt-get update
  • Install the puppet sources (you’ll probably need to install & build facter first, because it’s a dependency of puppet): apt-src install facter puppet
  • Build the facter package: apt-src build facter (you’ll probably have to install the new package first)
  • Build the puppet package: apt-src build puppet
  • Repeat these steps for all other packages until you’ve built them all (see the example session below)
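
Putting the steps above together, an example session could look like this (a sketch; run it in a scratch directory on the buildhost):

# add the Puppet Labs source repository and refresh the package index
echo "deb-src http://apt.puppetlabs.com/ wheezy main devel dependencies" | \
  sudo tee -a /etc/apt/sources.list
sudo apt-get update

# fetch the sources; facter is a build dependency of puppet
apt-src install facter puppet

# build and install facter first, then build puppet
apt-src build facter
sudo dpkg -i facter_*.deb
apt-src build puppet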

Cegeka puppet apt (arm) repository

Our apt repository contains puppet deb packages for the ARM platform; they are built to manage our Raspberry Pi farm.
deb http://apt.cegeka.be/puppetlabs/ wheezy main dependencies

Building an RPM of Commvault Simpana

Building RPM packages from statically compiled binaries provided by your software vendor can be a little confusing; take, for instance, Simpana, a backup client by Commvault.

Commvault provides an installer that copies their binary files into the right place, applies hotfixes, etc. For easy distribution, I tried to create an RPM package around this installer.
So far, everything seemed to be just fine: the RPM file was created and ready to install. Sadly, the installer failed for some obscure reason:

[3%] Verifying CRC32 for Base/libCvLib.so …

*** File Base/libCvLib.so is corrupt.
*** CRC32 checksums do not match.

*** Failed to install package CVGxBase

So there is some CRC32 checking on all kinds of files in the install script; the checking is done by "pkgcrc32". The install script cross-references the copied file with the input file "contents":

pkgcrc32 Base/libCvLib.so
rpm: 0xea2faf1c
source: 0xdbdca5e0

At first I thought my source tar archive was corrupt, so I extracted the tar archive, but its CRC32 checksum matched the one from the vendor: 0xdbdca5e0.
Oddly, both the source and the binary RPM contain the modified version of that shared object file.

Investigating the RPM build process, I found there is a difference between the extracted source, the buildroot, and the buildroot after the build has completed. This means the shared object file is modified during the RPM build.

There are only a few other scripts that are executed during an RPM build:

+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/brp-strip
+ /usr/lib/rpm/brp-strip-static-archive
+ /usr/lib/rpm/brp-strip-comment-note

One of these scripts is "brp-strip-comment-note": it creates an object dump of your shared object, checks for the .comment section, and strips it at the end of the script. This makes the CRC32 checking very unhappy.

You can resolve this by putting this option in your .spec file:

%define __os_install_post %{nil}

No more unwanted stripping, compression, etc. on your binary files.
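
If you still want the useful parts, such as man page compression, a more selective override is possible (a sketch; the brp-compress path matches RedHat / CentOS):

# run only brp-compress, skipping all the strip scripts
%define __os_install_post /usr/lib/rpm/brp-compress %{nil}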

Poweradmin – I’m a Poweradmin maintainer

Poweradmin is a friendly web-based DNS administration tool for Bert Hubert’s PowerDNS server.

The interface has full support for most of the features of PowerDNS.

It has full support for all zone types (master, native and slave) and for supermasters (automatic provisioning of slave zones), full IPv6 support, and comes with multi-language support. See the feature list for all features.

Explore poweradmin now or contribute on Github.