How to set up the MySQL data directory in your encrypted home folder on Ubuntu 14.04

Ubuntu has built-in home folder encryption similar to OS X. I always turn this feature on under both OSs and have never experienced any perceptible performance hit. This guide shows one approach to migrating the MySQL data directory into the encrypted home folder on Ubuntu 14.04.

Caveats:

The only system user allowed to access the encrypted home folder is the user that owns that folder (e.g. your user). For this approach to work, MySQL must run under the same user that you log in as, and the service must be started after you log in to the desktop. That can be automated with a small script triggered by the ‘Startup Applications’ program (a sketch follows below).
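
For example, you could save something like the following as /home/youruser/bin/start-mysql.sh, mark it executable, and add it as an entry in ‘Startup Applications’. The path is my own choice, and the script assumes you have granted your user passwordless sudo for this one command (e.g. a line like youruser ALL=(ALL) NOPASSWD: /usr/sbin/service mysql start added via visudo); adjust to taste.

#!/bin/bash
# start-mysql.sh - start MySQL after login, once the encrypted home is mounted.
# Give ecryptfs a moment to finish mounting the home folder.
sleep 5
sudo /usr/sbin/service mysql start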

Configuration changes:

# stop mysql
$ sudo service mysql stop

# backup mysql data folder (recursively, preserving ownership) and config file
$ sudo cp -a /var/lib/mysql /var/lib/mysql_backup
$ sudo cp /etc/mysql/my.cnf /etc/mysql/my.cnf_backup

# move mysql data folder
$ sudo mv /var/lib/mysql /home/youruser/mysql

# change ownership of folder
$ sudo chown -R youruser /home/youruser/mysql

# config changes to my.cnf
$ sudo vi /etc/mysql/my.cnf

Changes to my.cnf:

  • socket = /home/youruser/mysql/mysqld.sock (this appears in multiple sections; update each one)
  • pid-file = /home/youruser/mysql/mysql.pid
  • user = youruser
  • datadir = /home/youruser/mysql
  • log_error = /home/youruser/mysql/mysql_error.log
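
Put together, the relevant parts of my.cnf end up looking roughly like this (a sketch only; the section names follow the stock Ubuntu 14.04 file and your paths will differ):

[client]
socket          = /home/youruser/mysql/mysqld.sock

[mysqld_safe]
socket          = /home/youruser/mysql/mysqld.sock

[mysqld]
user            = youruser
pid-file        = /home/youruser/mysql/mysql.pid
socket          = /home/youruser/mysql/mysqld.sock
datadir         = /home/youruser/mysql
log_error       = /home/youruser/mysql/mysql_error.log

One step the commands above do not cover: stock Ubuntu confines mysqld with AppArmor, so if the service refuses to start after the move you may also need to allow the new path, for example by adding alias /var/lib/mysql/ -> /home/youruser/mysql/, to /etc/apparmor.d/tunables/alias and reloading AppArmor with sudo service apparmor reload.
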
# start mysql
$ sudo service mysql start

# test everything out...

# when you are sure it is working
$ sudo rm -rf /var/lib/mysql_backup

Why encrypt the MySQL data directory?

Computer equipment, particularly laptops, is stolen all the time. As a developer, your machine probably contains dozens of sensitive passwords, API keys, SSH keys and so forth. Most are probably dev accounts, but a few live passwords might be floating around too. For this reason I keep all my files in the encrypted home folder (as it is meant to be used).

Local databases on your machine are a potentially huge source of sensitive information. The degree to which a dev database should be locked down really depends on the nature of the business. Talk to your manager about it if you are unsure.

What I like about this solution is that, since the entire data directory is encrypted, it automatically covers any new databases going forward. The technique is not unique to MySQL; most database platforms let you store data in a user-defined location.

Is Ubuntu’s encryption of the home folder bulletproof?

See the following links for more information:
http://www.linux-mag.com/id/7568/
http://security.stackexchange.com/questions/41368/is-encrpyting-home-sufficient
https://help.ubuntu.com/community/EncryptedHome

Nothing is likely to stop serious hackers or the NSA. However, putting sensitive data into the encrypted home folder is a reasonable precaution a professional should be expected to take.

Saying –

“My laptop was stolen which contained all customer email addresses… *sorry*.”

Sounds MUCH worse than –

“My laptop was stolen and the data was encrypted with AES 128-bit encryption, making it very unlikely that anybody, including computer experts, small nation states and powerful corporations, will be able to access anything.”

What about using a cloud database for development?

Hosting your dev database in the cloud keeps sensitive data off your machine, and this option is becoming increasingly affordable. Depending on latency to the cloud, it can slow down day-to-day development work. If you do use cloud servers for development, make sure to connect over an encrypted connection! Otherwise everything that goes back and forth can be eavesdropped on. A VPN, an SSH tunnel, or a MySQL SSL connection will do the trick.
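
For example, a local port forward over SSH is usually the quickest of the three to set up (hostnames and ports below are placeholders):

# forward local port 3307 to MySQL (3306) on the remote dev box
$ ssh -N -L 3307:127.0.0.1:3306 youruser@dev-db.example.com

# then point your application or client at the tunnel
$ mysql -h 127.0.0.1 -P 3307 -u dbuser -p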


Correct use of PHP’s ‘at’ operator with speed benchmark

In PHP, placing an @ symbol in front of an expression (a variable or function call) tells PHP to suppress any error messages that expression generates. I find this to be a handy piece of syntactic sugar. When used correctly, the gains in code readability far outweigh the costs in performance (which I benchmark below). Some people argue that suppressing errors is a mistake that can mask problems, and that this technique should therefore never be used. I agree that suppressing errors is generally bad. At the same time, if I don’t care whether something in a four-level nested array is null, then suppressing PHP’s chatter is doing me a huge favor.

Let’s look at an example of where the @-operator shines. Consider trying to get a value out of a nested array which may or may not be set, such as $response['STATUS']['ERRORS']['ERROR_COUNT'], a typical thing to see in SOAP-based XML responses from enterprisey APIs.

One approach might be:

if(isset($response) &&
   isset($response['STATUS']) && 
   isset($response['STATUS']['ERRORS']) && 
   isset($response['STATUS']['ERRORS']['ERROR_COUNT'])) {
	$error_count = $response['STATUS']['ERRORS']['ERROR_COUNT'];
}

isset() is also perfectly happy with this shorter version. Thank you to my friend for pointing this out!

if(isset($response['STATUS']['ERRORS']['ERROR_COUNT'])) {
	$error_count = $response['STATUS']['ERRORS']['ERROR_COUNT'];
}

With the @-operator:

$error_count = @$response['STATUS']['ERRORS']['ERROR_COUNT'];

I like the last method because it is the cleanest. I don’t care whether $error_count is zero or null. The @-operator, being a somewhat lazy technique, pairs well with another of PHP’s lazy-at-best, deeply-flawed-at-worst ‘features’: NULL, "0", 0, array(), and false are all ‘falsey’ and can be used interchangeably when comparing with plain ‘==’. With three equal signs ‘===’ the types of the variables are also considered, which is generally the preferred way of comparing things, but that level of precision isn’t always required.
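
A quick illustration of the difference (nothing here is specific to the @-operator; it is just standard PHP comparison behavior):

<?php
$error_count = @$response['STATUS']['ERRORS']['ERROR_COUNT']; // NULL if the key is missing

// Loose comparison: NULL, 0, "0", array() and false all look alike.
if ($error_count == 0) {
    echo "no errors (or the key was missing)" . PHP_EOL;
}

// Strict comparison: type matters, so NULL and 0 are no longer interchangeable.
var_dump($error_count === 0);    // false when the key was missing (NULL)
var_dump($error_count === null); // true when the key was missing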

Notes about the @ sign in PHP:

  • If you declared a custom error handler with set_error_handler(), it will still get called.
  • It only works on expressions (things that give back a value). So it does not work on if/then statements, loops, and class structures, etc. This was a wise choice by the PHP community.
  • The fact that it only works on expressions greatly reduces the unanticipated side effects that can result. In this sense it is nothing like ON ERROR RESUME NEXT, an infamous language feature in Visual Basic and Classic ASP, which chugs past errors. The previous error can still be checked for in a sort of poor man’s try/catch block. ON ERROR RESUME NEXT sucks and makes me want to hurl just thinking about it.

Some people really hate the @-operator:

Most of the arguments against the @-operator come down to misuse and then over reaction. The fact is inexperienced and inept programmers can take any language feature and come back with a hairball of unmaintainable code.

As I demonstrated above, the @-operator is great when digging through arrays such as complex DOM objects. This is especially true with optional keys. It should not be used when calling external resources like the file system, database, APIs, etc. In those situations, try/catch blocks should be used to make sure if something goes wrong it gets logged and cleaned up properly. The @-operator is not a substitute for a try/catch!
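
As a hedged sketch of what that looks like in practice (the PDO connection details are placeholders):

<?php
// External resources deserve real error handling, not the @-operator.
try {
    $db = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'secret');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $rows = $db->query('SELECT * FROM orders')->fetchAll();
} catch (PDOException $e) {
    error_log('Order lookup failed: ' . $e->getMessage());
    // log it, clean up, and show a friendly error instead of silently continuing
}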

The second major knock against the @-operator is the alleged performance penalty. Let’s do some benchmarking:

laurence@blog $ php -v
PHP 5.3.24 (cli) (built: Apr 10 2013 18:38:43)
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2013 Zend Technologies

laurence@blog $ cat php-at-operator-test.php
<?php
error_reporting(E_ALL ^ E_NOTICE);

$OPERATIONS = 100000;

// test using @-operator
$time_start = microtime(true);
for($i=0; $i<$OPERATIONS; $i++) {
  $error_count = @$response['STATUS']['ERRORS']['ERROR_COUNT'];
}
$duration = (microtime(true) - $time_start);

echo "With the @-operator:" . PHP_EOL;
echo "\tTotal time:\t\t" . $duration . PHP_EOL;
echo "\tTime per operation:\t" . number_format($duration / $OPERATIONS, 10) . PHP_EOL;
echo PHP_EOL;


// test using isset()
$time_start = microtime(true);
for($i=0; $i<$OPERATIONS; $i++) {
        if(isset($response['STATUS']['ERRORS']['ERROR_COUNT'])) {
             $error_count = $response['STATUS']['ERRORS']['ERROR_COUNT'];
        }
}
$duration = (microtime(true) - $time_start);

echo "Using isset():" . PHP_EOL;
echo "\tTotal time:\t\t" . $duration . PHP_EOL;
echo "\tTime per operation:\t" . number_format($duration / $OPERATIONS, 10) . PHP_EOL;
echo PHP_EOL;
laurence@blog $ php php-at-operator-test.php
With the @-operator:
        Total time:             0.19701099395752
        Time per operation:     0.0000019701

Using isset():
        Total time:             0.015001058578491
        Time per operation:     0.0000001500

For my limited testing with PHP 5.3.24 on a 6-core box, it looks like the @-operator is ~13 times slower than using isset(). That sounds like a lot, but let's look at the penalty per use, which is 0.0000018201 seconds, or ~1.82 microseconds. An application could do approximately 550 @-operator look-ups and it would impact the response time by just 1 millisecond. If a single page request does 550 @-operator look-ups and every millisecond counts, then you have a problem. What probably matters more is overall memory consumption, transactionality, caching, code cleanliness, ease of maintenance, logging, unit tests, having customers, etc... Still, it is good to have a solid measure when arguing the case either way. As CPUs get faster and cheaper, I expect the performance penalty to shrink.


Flash Boys by Michael Lewis

For anyone interested in code, networking, and finance, Flash Boys is a real page turner. For me personally, with interests in all three, it sent chills up my spine. I could not put it down!!!

Flash Boys is a fascinating, informative, edge-of-your-seat ride through the modern world of technology-driven high frequency trading. I really enjoy Michael Lewis’ work, and this, his latest book, published in March 2014, is his best so far.


The Back Story:

Since the mid-1980s trading has been increasingly handled by computers instead of people. Starting around 2007 there was a huge disruption in the way stocks are traded on exchanges, precipitated by new SEC rules and advances in fiber optic networking. This led to a surge in trading volume, all of it automated by computer. Clever ‘high frequency’ traders figured out how to exploit the slower players (everyone else) through network latency and order manipulation. In a fine example of capitalism’s creative destruction, the high frequency trading firms were able to exploit a weakness in the market and shave off billions in profit. It is unfortunate for investors that the new players did not correct the inefficiency, but instead used it for exploitation and in some senses made the problem even worse.

I don’t want to give away any of the story because it was such an enjoyable read. You won’t be disappointed by the way tech talk is presented. Some of the heroes in this true story are developers and systems administrators. Go nerds!

Lessons for all Software Professionals:

One issue the book brings to light is the economic consequences of ignoring software quality and long term vision when it comes to system maintenance. Many of the trading platforms and exchanges out there were not written to cope with the complexity and speed of today’s world. This may not come as a surprise, but non-technical Wall Street managers are driven by short term personal gain in the form of fat bonuses. As such they end up with core systems built completely piecemeal, each feature bolted onto the last. Sound familiar?

The piecemeal approach to building software is commonly found in any non-technology company that uses technology. All companies in today’s world are forced to use technology to stay competitive, but few are good at managing that technology for the long term. On Wall Street it has become a systemic problem and is to blame for what are becoming routine ‘system glitches’ that send markets spiraling for ‘inexplicable reasons’. The Knight Capital ‘glitch’ that lost $440 million is a great example. NASDAQ and other exchanges routinely have serious flaws that are now looked at as the cost of doing business. It is SCARY to think how much money flows through these systems each day. The planet’s economic security depends on these systems. Technology is easy to blame (especially for managers who don’t understand it). What is actually to blame is the way in which the technology is being managed. The book goes into this issue in detail from multiple viewpoints, and I was glad to see it brought up.

Hope you enjoy reading Flash Boys!


Ever heard of inodes? You need lots of them.

Ran into a situation on a customer’s CentOS server the other day where a service wasn’t working. Symptoms and error messages indicated the disk was full. However, ‘$ df -h’ showed ample free space. What the heck? It turned out the maximum number of files on the disk had been consumed. Technically speaking, the limiting factor was the number of inodes allocated to the volume. An inode is taken up by each file, directory and link on the file system. Inodes act like a database for the files on a file system and contain pointers to the actual information.

When a partition is created, the maximum number of inodes is fixed; it is effectively set in stone. There is no way to add inodes on the fly. In this particular case the volume was 75GB with 23GB free, but only 1,000,000 inodes were allocated to it. The temporary solution was to remove old files that were not needed, bringing the total number of files on the partition safely back below 1M. As soon as that was taken care of, the system started working again.
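
The longer-term fix is to rebuild the file system with a higher inode count. As a sketch (ext4 shown; this wipes the volume, and the numbers are only examples):

# allocate roughly one inode per 4 KB of disk instead of the larger default ratio
$ sudo mkfs.ext4 -i 4096 /dev/sdb1

# or request an explicit number of inodes
$ sudo mkfs.ext4 -N 5000000 /dev/sdb1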

Unix/Linux (and Mac of course) have the inode concept built into their file systems. To check out the inode status run ‘$ df -i’ to make sure you are not at risk of running out of those precious inodes.

user@host.com [~]# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda             49152000 8771724 40380276   18% /

inode related commands:

Running ‘$ ls -i’ will output the inode ID for each file / directory.

user@host.com [~]# ls -i1
 1725516 access-logs@
 1721190 backups/
 1720340 dead.letter
 1720652 etc/
 2173459 logs/
 1720654 mail/
 1720648 public_html/
41845314 python@
 1729306 ssl/
 1720653 tmp/
 1720660 www@

The stat command gives more details about a particular file / inode.

user@host.com [~]# stat public_html
  File: `public_html'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 800h/2048d      Inode: 1720648     Links: 13
Access: (0750/drwxr-x---)  Uid: ( 1058/user)   Gid: (   99/  nobody)
Access: 2011-12-04 16:29:56.000000000 -0500
Modify: 2014-04-20 03:19:04.000000000 -0400
Change: 2014-05-17 00:00:11.000000000 -0400

To get a count of the inodes per folder under the current directory:

user@host.com [~]# find . -type f -printf "%h\n" | cut -d/ -f-2 | sort | uniq -c | sort -rn
   5789 ./public_html
    557 ./mail
    555 ./tmp
    205 ./logs
     75 ./.cpanel
     43 ./etc
     25 .
     13 ./.sqmaildata
     10 ./.fontconfig
      6 ./.subversion
      6 ./.gnupg
      6 ./.fantasticodata
      5 ./.htpasswds
      3 ./backups
      2 ./.emacs.d
      1 ./.ssh
      1 ./public_ftp
      1 ./.cpan

This can take forever so you may want to direct the output to a file (assuming you can spare an inode):

user@host.com [~]# find . -type f -printf "%h\n" | cut -d/ -f-2 | sort | uniq -c | sort -rn > inode_count.txt

For more information:
http://www.linux.org/threads/intro-to-inodes.4130/


Gist CSS for WordPress That Looks Better

The following Gist is the CSS I’m using on my WordPress blog to improve the way embedded Gists look. I found the default Gist CSS to render too large and unwieldy.

The improved CSS sets the maximum height of the gist to 500 pixels. It also reduces the font size and line height so it is more compact. Inspired by: https://gist.github.com/wataru420/2048287
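
Since the embedded Gist may not render here, a rough sketch of the kind of overrides described above (the selectors and exact values are illustrative, not the published Gist):

.gist {
    max-height: 500px;  /* cap how tall the embed can grow */
    overflow-y: auto;   /* scroll instead of pushing the page down */
}
.gist .gist-data {
    font-size: 12px !important;   /* smaller text */
    line-height: 1.3 !important;  /* tighter lines */
}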

Hope it helps!

Update 10/9/2014 – GitHub must have changed their CSS. Removing the line-height entry on the .gist div fixed it.


Grunt – for automating builds in Front End land

Grunt is a front end build tool I’ve used on the last several projects. It handles CSS / JavaScript minification, concatenation, and linting really well. Some of my legacy projects use a combination of bash and the YUI Compressor, which I’m now switching away from in favor of Grunt.

What I liked about Grunt from the start is that it is 100% command line based! I had never seen a front end tool that lives on the command line before. That alone got me excited, but it gets better. Grunt is versatile thanks to its plugin architecture; there are over 2,750 Grunt plugins at the time of this writing. For example, Grunt can run unit tests, be set up as a ‘watch’ to automatically build SASS while developing, and even run PHP, Ruby and Python tasks.

Grunt runs on Node.

Grunt depends on Node and npm (the Node package manager). It is very simple to get started.

$ npm install -g grunt-cli

Then you drop a Gruntfile.js into the root of your project and start configuring.

Here is a sample Grunt  script.

This script combines the web app’s JavaScript and CSS files into production-ready files, in line with the YSlow recommendations to limit the number of .js and .css files a web application downloads the first time it loads. It also has a lint task (powered by JSHint) that checks the JavaScript I wrote for obvious problems and stylistic errors.
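
Since the embedded sample may not show here, a minimal Gruntfile.js along those lines might look like the following (the plugin choices, paths, and task names are my own assumptions, not the exact script from the post):

module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      js: { src: ['src/js/**/*.js'], dest: 'build/app.js' }
    },
    uglify: {
      js: { files: { 'build/app.min.js': ['build/app.js'] } }
    },
    cssmin: {
      css: { files: { 'build/app.min.css': ['src/css/**/*.css'] } }
    },
    jshint: {
      all: ['src/js/**/*.js']
    }
  });

  // load the contrib plugins the tasks above rely on
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-jshint');

  grunt.registerTask('minify', ['concat', 'uglify', 'cssmin']);
  grunt.registerTask('lint', ['jshint']);
};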

To kick it off:

$ grunt minify

Results in the built JavaScript and CSS files in the /build/ folder.

To run the lint task (powered by jshint in this case):

$ grunt lint

If opening an extra terminal window gets annoying, there is a plugin available for Sublime Text: sublime-grunt.

For those of you coming from the Java world:

Grunt works a lot like Ant. It does the same things in terms of automating the build process, compilation (well in this case minification), cleaning the build folder, and running unit tests.

There is a companion tool called Bower which reminds me of Maven in the way it resolves dependencies. A second companion tool called Yeoman works similar to Maven archetypes in that it provides pre-built projects with the scaffolding setup.

The trifecta – Grunt, Yeoman, and Bower:

Grunt by itself is just a build system, but combined with Yeoman (‘yo’ for short) and Bower it gets a lot more powerful. Descriptions of each from the Yeoman website:

  • “yo scaffolds out a new application, writing your Grunt configuration and pulling in relevant Grunt tasks and Bower dependencies that you might need for your build.”
  • “Grunt is used to build, preview and test your project, thanks to help from tasks curated by the Yeoman team and grunt-contrib.”
  • “Bower is used for dependency management, so that you no longer have to manually download and manage your scripts.”
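
Getting started with the trio is quick; a hypothetical quick-start (generator-webapp is just one example of the many Yeoman generators):

# install the tools globally
$ npm install -g yo bower grunt-cli

# install a generator, then scaffold a new project
$ npm install -g generator-webapp
$ yo webapp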


Other thoughts:

At the moment, npm is a bit like the wild west meets Woodstock. The progressive free love that is the npm ecosystem continues to crank out new packages and interwoven dependencies at a staggering rate. No one person or company is in control of the endless supply of new packages and plugins that are available. That makes it great. It also makes it unstable and insecure. See my post on Software Ghettos for some thoughts on using open source projects of all shapes and sizes as dependencies.

On rare occasions it is frustrating when something goes wrong with Grunt. If you are lucky it is due to a version mismatch in the local environment, and ‘$ npm cache clean’ might fix it. The error messages can be vague and misleading. I have run into situations where a fix was available but not ported into the main npm tree or even merged into the plugin’s repo. In these cases I had to override the version manually or do some other hacky fix to get going again. I have also noticed subtle differences between Windows / Mac / Ubuntu in the way the CSS / SASS related plugins operate. In these cases I deferred to building on Mac. (I really should have documented the issue and made a blog post about it. I wrote it off at the time as a fluke, so take that last observation with a grain of salt.)

All in all Grunt is a great tool.  I use it, my life is better, my clients benefit, and releases proceed as planned.

 


Wolfram Alpha Language Coming Soon

Wolfram Alpha has always been an interesting, if quirky, ‘intelligent search engine’. It can do things like evaluate math expressions, convert units, and answer factual questions against its curated data sets.

Now its creator, Stephen Wolfram, has announced a ‘language’ that bolts on top of the curated data Wolfram Alpha uses. It looks like Mathematica on steroids hooked to the cloud. His preview video is well worth watching.

From the video, the Wolfram Language appears to be more of a high-level collection of functions that fit nicely together to process data than a ‘language’. The annoying part is that I don’t see a way to get my hands dirty and play with it at the moment.

More information:


ReCaptcha getting hard to read, found streamlined substitute in FatFree

Recently I implemented a captcha field on a signup page. To start with, I looked around for a good plugin to handle this, and ReCaptcha was the first one that popped up. ReCaptcha does support theming, which is nice since the default red and yellow is a bit loud. The problem I came across is that it renders hard-to-read images a good 20% of the time. See the example below.

[Image: an example of a hard-to-read ReCaptcha challenge]

Can the average user be expected to get past this hurdle? I seriously doubt it. We don’t want to deter users from succeeding at signing up. We especially don’t want to make them feel stupid because of some clunky but well-intentioned gadget on the page.

So for now, instead of ReCaptcha, I went with the PHP FatFree captcha plugin. It doesn’t have the audio component, nor the refresh or help buttons, but I think it is a lot cleaner. I wish the ReCaptcha library had configuration options for this, plus a ‘difficulty’ level.

[Image: the simpler captcha image generated by FatFree]

Here is a code example of using PHP FatFree (F3) to display a captcha image inline in a form. You supply the TTF font on your own.

<?php // use FatFree's captcha feature to build a 7 letter captcha image
$img = new Image();
$img->captcha('./library/fonts/Arial.ttf', 16, 7, 'SESSION.captcha_code');
?>

<img src="data:image/png;base64,<?= base64_encode($img->dump()); ?>" />

<?php
// the correct answer is stored in:
// $_SESSION['captcha_code'];
?>
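
On the server side, checking the submission is then just a session comparison. A minimal sketch (the form field name is my own placeholder, and the session must already be active):

<?php
// compare what the user typed against the code F3 stashed in the session
$submitted = isset($_POST['captcha']) ? trim($_POST['captcha']) : '';
if ($submitted !== '' && strcasecmp($submitted, $_SESSION['captcha_code']) === 0) {
    // captcha passed, continue with the signup
} else {
    // captcha failed, re-display the form with a fresh image
}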

 

Other thoughts about blocking spammy signups:

We could go without a captcha field, but at the same time, we want to cut down on spam. A good trick, which complements a captcha, is to add a hidden form field which must be left empty for the submission to succeed (a sketch follows below). A human never sees this field so it is no problem for that use case. However, greedy spam robots will normally fill out every single form field they find in the HTML. The robots are too dumb to recognize they are tipping their hand, and the submission fails.
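
A minimal honeypot sketch (the field name and the way it is hidden are my own choices):

<!-- hidden from humans; bots tend to fill it in anyway -->
<input type="text" name="website" value="" style="display:none" autocomplete="off">

<?php
// reject any submission where the honeypot field came back non-empty
if (!empty($_POST['website'])) {
    exit; // almost certainly a bot
}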

 


On Pluggers, Rockstars, Ninjas, and other fun labels for developers – which are you?

My first boss taught me there are two types of programmers: pluggers and rock stars. It was the late 90’s, and programming talent was in high demand. I was the young gun they brought in. A plugger would not have worked out, they told me. There was just too much opportunity to be had! We went on to attempt our own web-based online auction system hosted on Windows NT 4.0, powered by a Pentium 100 chip, using a FoxPro backend… You can guess the outcome, but that is another story.

As I’ve thought about it over the years, there are many categories, or rather stereotypes, for programmers beyond the garden-variety plugger and rock star. In general stereotypes are negative, politically incorrect, and ignorant. At the same time, they help us survive in the wild and make for humorous, if sometimes hurtful, labels.

Pluggers, or better Samurai Coders


Pluggers are the ultimate katana- and keyboard-wielding soldiers. A plugger comes into work on time every day and reliably gets their work done without stirring things up. Like samurai, they are willing to sacrifice themselves for the cause, following the rules and the spec without question. Work gets done, but at best it is average, since the average programmer is probably a plugger / samurai anyway.

The Joke about Samurai Coders:
Too bad swords are obsolete and so is their skillset.
Harsh but true: programmers who get into a comfortable rut are going backwards in the ever-changing software job market.

Where Samurai Coders can improve:
Generally, I’ve noticed Samurai Coders are not very interested in learning new things unless forced. My recommendation is to join a software book club, attend local meetups, or at the very least watch a tutorial video with lunch every day for a week. Then share what was learned with the team. Also, don’t forget to try to poke holes in the spec.

 

Rockstar aka Diva


The opposite of a plugger, rockstars crave attention, challenge, and accomplishment. They can work well on teams, provided there are not too many of them in one area of the system (or they fight). Rockstars have an insatiable hunger to take on projects WAY beyond their abilities.  They may just get away with it too.

The Joke about Rockstars:
Fashionably dressed in their own mind. Some rockstars are so cool they wear the same clothes every day.
When managing a rockstar – don’t forget to schedule a bug fix release, just tell them it’s an encore.

Where Rockstars can improve:
The best performers know their limitations. A rock star needs to understand their own limitations, and especially the limitations of the frameworks and tools they rely on. I would encourage a ‘junior’ rockstar who only knows one programming language to learn a second language that is somewhat different from their first. Rock stars should also learn to pass on their good energy and experience by being everyday leaders and mentors.

Some Rockstars are really just Divas who rock the boat whenever they don’t get their way. Divas think their code is perfect and can’t listen.

 

And now for something completely different…

Humorous stereotypes for developers including Ninjas, Do Alls, Hackers, Acronym Guy/Gal, Barnacles, and Mercenaries.

 

Ninja


JavaScript Ninjas, DevOps Ninjas, and all other sorts of Ruby/Python/PHP/iOS Ninjas are running around out there. Some job postings even request Ninjas by title.

Ninja coders work in stealth, and leave no trace. Similarly, their code must be kept quiet. No logging statements… no comments… just an odd tingling sensation on the back of the neck that something might be wrong. A Ninja’s code is so concise that nobody, not even their future selves, will be able to decipher it.

Ninjas fix bugs like lightning. Warning: may require multiple strikes to close the ticket. Warning 2: these ‘lightning’ strikes may start fires elsewhere.

 

Do All


You may have encountered the software team member equivalent of ‘Ned Flanders’ once or twice in your career. It is easy to recognize the okily-dokily Do All.

The Do All volunteers to work evenings and weekends to make sure the features that ‘have’ to ship do.  They might get some of the cooler tasks, but they also volunteer for the stuff that is ‘too hard’ for everybody else.  A sure sign of a Do All is they have absolutely no interests outside the current sprint.

The Joke about the Do All:
Calculate their actual take home pay on an hourly basis and you’ll feel sorry for them.

 

Hacker


To a hacker, their code is such a masterpiece, why would it need testing or documentation?

Hacker Behavior:

  • Runs port scans and network sniffers on co-workers machines.
  • Attempts to install malware on co-workers machines.
  • Thinks more code is always the solution, especially when it comes to bug fixes.
  • Attempts to sneak non-business compatible licensed libraries into the code base.
  • Runs a really screwy build of Linux so they don’t have to join WebEx meetings and can avoid platform testing.

The Joke about Hackers:
They are probably really just a script bunny.

 

Acronym Guy/Gal


You may have witnessed this breed, common to the Enterprise Java and .NET stacks. When Acronym Guy/Gal announces themselves, they proudly rattle off the acronyms associated with the stack they work on. Their resume is covered with acronyms. They may have a habit of reciting acronym chains. An Acronym Guy/Gal often recommends building new features using acronyms they know nothing about because they have yet to pad their resume with them.

The Joke about Acronym Guy/Gal:
Acronym Guy/Gal fails to realize how fast today’s acronyms go from popular buzzwords to yesterday’s maintenance project, or fade into nothingness.

 

Barnacle


Barnacles have been at the company way too long and are VERY intent on staying. When threatened, the typical barnacle defense mechanism is to bring up ‘how things used to be done’, based on a self-serving oral history of the organization. A barnacle argues against upgrading anything, kicking and screaming their way out of any meeting that proposes even the slightest degree of change.

The joke about barnacles:
Over the years, barnacles have walled themselves off with so much bad code that they believe they are indispensable. (Yeah, keep believing that, barnacle.) Hence they are among the least productive people on the team. Their co-workers notice this and shake their heads: ‘ahh, what a silly barnacle…’.

 

Mercenary


A hired gun. The mercenary’s mantra: “Identify the problem and prolong it.”

Mercenaries are very, very good about bringing up schedule problems… but only 2 weeks before their contract ends.  A mercenary’s agenda usually involves scope creep if not wholesale re-writes. Mercenaries are fun to talk to at the water cooler because they have seen more of the outside world, have interesting stories, and have a different emotional take on the situation.

The joke about mercenaries:
When invited to irrelevant meetings, a mercenary is the only person in the room smiling (except the Do All who arrived early and has the eagerness of a puppy to get the meeting started).

 

The End

Don’t be offended if you see yourself somewhere in this article.

I’ve been several of these over the years at different times, perhaps multiple categories at once, usually without realizing it. The challenge is to step outside yourself and analyze your own behavior, like you might debug a program. What makes you tick? How can you transcend a label? My goal is to deliver value on a daily basis to my team, my code base, and my customer/employer. Stereotypes for that sort of individual might be ‘team player’ or ‘full stack developer’, but I think it is best described simply as: software professional.

Photo credits:

  • Samurai photo from indi.ca, CC license
  • Rockstar photo from Luis Blanco Press Photographer, CC license
  • Ninja photo from Lachlan Hardy, CC license (cropped)
  • Do All photo from Popculturegeek, CC license
  • Hacker photo from sfslim, CC license
  • Acronym photo from mraible, CC license
  • Barnacle photo from mscheltgen, CC license
  • Mercenary photo from xJason.Rogersx, CC license
