Is localhost development obsolete?

Topic explored: Someday soon developers will only need a basic Chromebook and a Wi-Fi connection to do their work. No software will be installed locally other than a browser.

I’m not so sure this trend will pan out across the board, but there are several reasons it makes sense.

Software development traditionally requires a high-end machine capable of running everything the server needs plus an array of development tools. That translates to a non-trivial setup process and leads to subtle variations in what packages are installed. Some languages try to make life easier, for example Python with virtualenv or Ruby with rvm, but it is rarely a 100% perfect match between all team members and the production servers.

Why is localhost bad?

Using the exact same system libraries in dev, qa, staging and production is a smart thing to do because it eliminates bugs related to differences between versions. As a contract developer with multiple clients, I often have several projects going at once on the same development laptop. Keeping all the dependencies wired correctly gets annoying sometimes, but I’ve kept good notes and for the most part it doesn’t get in the way.

Dependency hell is a real place and I’ve been down there too many times.

In the modern world we solve problems by outsourcing them to the cloud. So why not outsource localhost to the cloud?

The winning combination as I see it is:

  • Web Shell for Vagrant / Git
  • GitHub (or BitBucket) for collaboration
  • Web based IDE
  • Slack – not required but might as well publicly get on the Slack bandwagon now, ’cause it does make my life better.

In a nutshell, this new solution allows developers to edit code in a browser tab, click a button to launch a Vagrant instance on AWS, access a shell in another browser tab, and integrate seamlessly with source control. No development libraries or tools need to be installed locally. This lends itself heavily to the LAMP / MEAN stacks, but I don’t see why Java, C++, or any platform wouldn’t work with this approach.

Vagrant makes localhost as a server obsolete:


Vagrant is a utility for spinning up virtual machines that run your application. It is heavily configurable, and the configuration file (the Vagrantfile) lives in your project’s source code, typically in the root directory. With Vagrant, all team members run the exact same virtual environment. Vagrant integrates with VirtualBox by default, but also supports Amazon Web Services, VMware, and custom providers. Vagrant syncs your source code into the app directory the VM hosts, so when you make edits to your code the VM is automatically updated.
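
As a rough sketch, a minimal Vagrantfile for this kind of setup might look like the following (the box name, port, synced folder path, and provisioning script are placeholders, not taken from any particular project):

Vagrant.configure("2") do |config|
  # Same base box for every team member
  config.vm.box = "ubuntu/trusty64"

  # Reach the app from the host browser at localhost:8080
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Sync the project source into the VM so edits show up immediately
  config.vm.synced_folder ".", "/var/www/app"

  # One-time setup: install packages, configure services, etc.
  config.vm.provision "shell", path: "provision.sh"
end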

As of version 1.6 (April 2014), Vagrant supports Windows as a guest environment. This was a smart business move for Vagrant (if I dare use the word business in the same sentence as an open source project). Supporting Windows virtual machines is a major step toward Vagrant being universal, and not just a *nix tool for all the l33t people working LAMP / MEAN variants on Macs and Linux.

Web Based IDEs to challenge local development:

A Web Based IDE will have to be downright amazing to get developers to switch in large numbers. It has to have super-fast search, auto-complete, syntax highlighting, code formatting, and lots of flexibility. Remember, software development is like herding cats, so it has to work with everyone’s finicky little idiosyncrasies. Editing code aside, it will flop without a powerful plugin architecture. I would expect a rich ecosystem of utilities including a database explorer, command line tools, XML / JSON viewers, web services, test suite runners, file comparison, etc.

I have PyCharm, Eclipse, IntelliJ, PHPStorm, and Sublime Text currently installed on my Ubuntu development laptop. I have all that plus MSSQL Studio and Visual Studio on my Windows desktop (because some of my work does require Windows). That might be a low number of IDEs for a typical developer. For brevity, I didn’t mention text editors… That is a lot of functionality to cram into a browser, but people are out there working on it.

Here are some of the current contenders (in no particular order):

I’m not seeing an extensive plugin architecture from any of them… Maybe JetBrains can pull it off? They don’t seem to be working on anything publicly yet, and from a business perspective they have no real incentive to cannibalize their current products. Besides, JetBrains already integrates with Vagrant via a plugin, and that solves most of the issues.

That feeling when you are stuck without a tool you need:

Developer A: The application code, the server environment, and the IDE are now in the cloud. Yes, I can finally buy a Chromebook!!!

Developer A: Wait…. what about the database??

Developer B: On the Vagrant instance or in the cloud, duh…

Developer A: Yeah, let’s all buy Chromebooks!

[A trip to Best Buy, and a few minutes later…]

Developer A: Cool, the app is loading! But wait…. I want to run a query. How do you access the database?

Developer B: Umm… command line, duh…

*music from Psycho plays*

Developer A: Nooooooooooooo!!!!!!

The command prompt is not a tool I like to use for data exploration:

Don’t get me wrong, I can navigate the SQL command prompt with the best of them. But let’s be honest, it SUCKS for wading through complex data. When there are enough columns to make each row wrap, the output becomes impossible to read. What about pasting huge queries? Every mature app has at least a few queries that span multiple screens, amirite? The SQL command line REALLY SUCKS for debugging lengthy queries written by ‘intelligent’ ORM frameworks or the bastard who writes SQL using string concatenation with inline if/thens, redundant joins, wanton disregard* for formatting, and overuse of sub-queries (IN(), EXISTS(), etc.).

* Wanton disregard – a legal term meaning severe negligence or extreme recklessness: not malicious, but more serious than carelessness. It can be evidence of gross negligence and can result in punitive damages, depending on severity.

There are many examples out there of web-based data explorers, but they are clunky at best (take phpMyAdmin, for example). A good web-based SQL explorer supports multiple tabs, allows saving SQL, and shows a basic picture of the database entities. MySQL Workbench, HeidiSQL, and MSSQL Studio are the three tools I mainly use today. In the past I’ve used Toad, Navicat, and DbVisualizer. They are great tools as well. In fact, paid tools are generally better.

Side note – I was really hoping the Oculus Rift DK2 would be a good platform to build an app for data exploring, but it makes me sea-sick…

What’s the actual payoff?

If we are going to outsource something, we expect to save some money too. Economically, unless I’m missing something, the payoff this new approach provides for run-of-the-mill software development isn’t really that big.

  • If your company already has QA + staging environments, in theory you’ll catch bugs related to environment differences anyway.
  • If you don’t have QA + staging, you’ve got bigger problems to worry about than minor package differences on some contractor’s laptop.
  • Bugs come in a wide range of shapes and sizes. Even if there is a bug due to environment differences, it is a small percentage of overall bugs that happen.
  • Vagrant alone solves the issue of keeping everyone’s server environment the same, and it is free.
  • The cost savings of an ‘automatic’ environment setup are a rounding error compared to a developer’s cost per year. Crappy developers take ages to get their environment going because they don’t understand $PATH or other basics. For me it is typically under an hour to get up and running. Good software shops have scripts that assist the developer in obtaining database dumps and the like.
  • If developers all require cloud instances to be spun up during development that is an added cost on top of licenses / subscriptions for the IDE.
  • If the infrastructure running the Web Based IDE goes down, all your programmers are idle.

Where a Web Based IDE does make sense:

For certain applications, like cluster computing or big data (where localhost is just too small), I think it makes perfect sense. In situations where high security is needed, a locked-down Web IDE also makes sense (no data or code on localhost at all). This might put an end to developing over a VPN through RDC – thank god for that!

Cloud-based software development tools can work in theory for just about any style of programming, even 3D game developers. Nvidia offers a cloud gaming grid which houses an array of GPUs in the cloud, renders HD video in the cloud, and streams it back to the client. If you can develop Ruby in the cloud, why can’t you do OpenGL or DirectX? At least, that is what Nvidia is saying. Sounds like fun!

>>> "there's no place like localhost... " * 3

Example Django Model Count and Group By Query

The Django framework abstracts the nitty-gritty details of SQL with its Model library (the M in MVC). It is straightforward for standard lookups involving WHERE clauses. About 10% of the time I need it to do something fancy, like a GROUP BY aggregate query. This required checking the docs to see ‘how the heck do I make Django do a GROUP BY with a count(*)?‘. I’ll explain in detail below with examples. Django has a method called raw() for running custom SQL, but that is a last resort. Thankfully Django supports this case really well and did exactly what I was expecting.

This information applies to Django 1.8.2.

Example ‘Bike’ model:

In this example, the Bike model has paint color, seat color, and category:

from django.db import models

class Bike(models.Model):
    name = models.CharField(max_length=50)
    paint_color = models.CharField(max_length=255)
    seat_color = models.CharField(max_length=255)
    category = models.CharField(max_length=255)
    active = models.BooleanField()

The SQL I wanted Django’s Model to run for me:

SELECT paint_color, count(*) 
FROM bike
WHERE 
  paint_color IS NOT NULL AND
  paint_color != '' AND
  active = 1
GROUP BY paint_color
ORDER BY paint_color;

-- same thing for seat_color and category
SELECT seat_color, count(*) 
FROM bike
WHERE 
  seat_color IS NOT NULL AND
  seat_color != '' AND
  active = 1
GROUP BY seat_color
ORDER BY seat_color;

SELECT category, count(*) 
FROM bike
WHERE 
  category IS NOT NULL AND
  category != '' AND
  active = 1
GROUP BY category
ORDER BY category;

My report needs a count of all the active bikes by paint_color, by seat_color, and by category. Note that the columns allow null and empty string, so those need to be filtered out of the report.

How to do the GROUP BY / count(*) with Django:

from django.db.models import Count

(Bike.objects.filter(active=1)
    .exclude(paint_color__exact='')
    .exclude(paint_color__isnull=True)
    .values('paint_color')
    .annotate(total=Count('paint_color'))
    .order_by('paint_color'))

For more details see the documentation page on Django Aggregation.

The call returns a list of dictionaries like so:

[
 {'paint_color': u'Green', 'total': 15},
 {'paint_color': u'Blue', 'total': 19},
 {'paint_color': u'Yellow', 'total': 4}
]

Getting fancy – allowing dynamic column substitution by variable name:

The code above is a start, but I don’t want to have three copies of that lengthy model query floating around in my code. This calls for converting ‘paint_color’ into a parameter. I also opted to go with a static method, which I can do like so on the Bike model:

@staticmethod
def summary_report(fieldname):
  allowed_fields = ('paint_color', 'seat_color', 'category')
  if fieldname not in allowed_fields:
    return {}

  return (Bike.objects.filter(active=1)
             .exclude(**{fieldname + '__exact': ''})
             .exclude(**{fieldname + '__isnull': True})
             .values(fieldname)
             .annotate(total=Count(fieldname))
             .order_by(fieldname))

Now the parameter fieldname takes the place of the hard-coded string. In the spirit of defensive coding, the method checks to make sure that fieldname is an authorized property on the Bike model in this context. It could also throw an exception, log an error, etc., but it is kept simple for this example. From there, the exclude() calls use **kwargs (keyword arguments) to pass in the dynamic value.

The data for the Bike report can be obtained as follows:

summary_paint_color = Bike.summary_report('paint_color')
summary_seat_color = Bike.summary_report('seat_color')
summary_category = Bike.summary_report('category')

How to see what SQL the Django query generated:

As I was working on this, I needed an easy way to see what SQL Django was generating behind the scenes. Django Debug Toolbar does it nicely.
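
For a quick one-off check in the Django shell, a queryset’s query attribute will also show the generated SQL. A small sketch using the Bike example from above:

>>> from django.db.models import Count
>>> qs = (Bike.objects.filter(active=1)
...          .values('paint_color')
...          .annotate(total=Count('paint_color')))
>>> print(qs.query)   # prints the SELECT ... GROUP BY statement Django will run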

To install the Django Debug Toolbar it takes just two steps:

$ pip install django-debug-toolbar

Then add ‘debug_toolbar’ to your INSTALLED_APPS. It requires django.contrib.staticfiles. Refresh your page, and you’ll see the debug toolbar come up:

[Screenshot: the Django Debug Toolbar]
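
For reference, the settings change is tiny. A sketch for the Django 1.8-era toolbar (your existing INSTALLED_APPS entries will differ):

# settings.py
INSTALLED_APPS = (
    'django.contrib.staticfiles',   # required by the toolbar
    # ... your other apps ...
    'debug_toolbar',
)

INTERNAL_IPS = ('127.0.0.1',)       # the toolbar typically only renders for these addresses (and when DEBUG is on)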

Hope this helps!

 


Design your own GitHub activity graph, mine is a DNA spiral

I recently turned my GitHub activity graph into an 8-bit looking DNA spiral!

[Screenshot: my GitHub contribution graph drawn as an 8-bit DNA spiral]

By setting GIT_AUTHOR_DATE and GIT_COMMITTER_DATE it is possible to log a commit at any point in time. The tool I wrote allows you to draw a pattern, sort of like a mashup of Minesweeper and MS Paint for Windows 3.1. Then it generates the commits that match that pattern.
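
For anyone curious about the mechanics, a single backdated commit looks roughly like this (the date and message are just examples):

# Setting both variables keeps the author and committer timestamps consistent
export GIT_AUTHOR_DATE="2014-06-01T12:00:00"
export GIT_COMMITTER_DATE="2014-06-01T12:00:00"

# --allow-empty means no file change is needed for each 'pixel'
git commit --allow-empty -m "pixel for the contribution graph"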

Here is the page where you can build your own. If you come up with something cool to put on your profile I’d love to see it. For the record, this wasn’t an original idea. Others have done similar projects (which I reference at the bottom of the tool), but none let you draw right there on the page. This was purely for fun and only took me a few hours to knock out and test.


Mastery over Negativity – Dealing with Negative Geeks

I think it is okay to be negative about a given software technology, but it has to be for the right technical reasons in the context of the problem at hand. For the most part, what goes on is bashing with scant substance behind it. Thankfully that sort of bashing can safely be ignored, but it is not always easy. We software developers take pride in our work. Hey, I even claim that ‘software is my life’.

A fellow Portland software developer wrote a post on negativity in the software profession, why it is lame, and some steps to address it.

“PHP, possibly the most made fun of language, doesn’t even get a reason most of the time. It is just ‘lulz php is bad, right gaise?’” – Wraithan

This inspired me to break down where the negativity comes from and how to address it in a positive way. As a software developer I am compelled to categorize and organize things, so here goes…

Why are they snickering at that technology and how can I help them see their folly?

Mono-lingual programmers – It is natural to see your first language as the best in the world. It is also the ONLY language you know, so by default it is the best. My advice is to get familiar with multiple languages. That way you can contrast the pros and cons of each language. Now you have a shot at being a master programmer.

Distrust of the unfamiliar – It is human nature to distrust the unfamiliar. This is true no matter how many languages a person knows. Bashing something because you don’t know it is forgivable but screams low emotional intelligence and a weak mind. If I’m pretty sure someone doesn’t know what they are talking about, I try to point out a couple really cool things about what they are bashing and hopefully get them excited about it.

Hubris and confirmation bias – Again, human nature at play: overconfidence breeds bias. Programmers build up deep specializations spanning many years of experience in a given area. They may even get fancy titles like Principal or Lead, and consider themselves a ‘master’. It is easy to fall into the trap of thinking the skills you’ve worked so hard to attain are the ‘best’ skills. When an alpha geek is bashing something, what I like to do is point out that what they are saying may very well be the case for a given set of problems at the moment, or with a specific version. Nothing in software stays the same for very long, and ignoring that is a failure to recognize how fast technology changes. A good alpha geek will appreciate that point. Take JavaScript for example: when it started, everybody completely hated it! Now JavaScript is everywhere and has gotten a lot better than it used to be. In fact, some of the highest-paying jobs as of 2015 are for JavaScript engineers, not Java engineers or C++ engineers like it used to be in 2005. In 2025, who knows what it will be?

People trying to sound smart – This news article talks about how negative people tend to be viewed as more intelligent. There is a trick to seeing through that. Are they pointing out drawbacks relevant to a task? Okay, that is fine. For example, PHP sucks at building flight control software because it isn’t multi-threaded. Agreed! Or are they pointing out weaknesses that may amount to personal preference or fail to address a specific situation? PHP sucks because it uses globals. Yeah, that isn’t perfect, but you are not forced to use globals in PHP. Every language has pitfalls that should be avoided. If they are not being specific, call it out; make them be specific so they can be more helpful.

Jerks and Gits – The haters be hating… I avoid these people when possible. Some are truly too smart for their own good. Others are frustrated sub-geniuses who feel the world owes them fame. You might be able to learn a trick or two from their criticism. Getting to know them is rarely worth the effort because sooner or later they’ll start hating on you. It amuses me when people publicly (and permanently) reveal this trait on social media or forums, thinking they are being clever.

Concluding Thoughts:

It is wise to see all languages / technologies for what they are: tools.

A software tool is not an extension of one’s identity or ego… unless you actually wrote it. Even then it is best to keep emotional distance from it. If you did write something that became famous I hope for your sake the online bashing and endless stream of bug fix and feature requests did not get to your soul.

Master software developers know that everything has limitations, and they also know what gets the job done. No software is perfect. To launch software on time and within budget requires artful compromises.


Sending emails through Comcast on Ubuntu using ssmtp

ssmtp is a lightweight mail package that is easy to configure and suitable for my needs during local development. It is basically a mail forwarder: it can’t receive email, and it has very few settings relative to a program like sendmail.

Comcast is notorious for requiring email sent on its network to go through its SMTP server. Not doing so can get your IP blacklisted and your legitimate emails flagged as spam. I resisted but was assimilated. These settings should work for most ISPs, not just Comcast.

Install ssmtp:

sudo apt-get install ssmtp

Configure ssmtp for Comcast:

You must set up an account with your ISP / email provider and enter the email/password below. I use a dedicated email account for development.

sudo vi /etc/ssmtp/ssmtp.conf

ssmtp.conf content:

root=postmaster
mailhub=smtp.comcast.net:587
UseSTARTTLS=YES
UseTLS=YES
AuthUser=myaccount@comcast.net
AuthPass=****
hostname=mymachine
FromLineOverride=YES

To test it out:

First, save a test message in the ssmtp format; here is how my file looks:

$ cat testmessage.txt
To: youremail@gmail.com
From: you@comcast.net
Subject: test message

Test message for ssmtp.

To send the message:

ssmtp youremail@gmail.com < testmessage.txt

For PHP compatibility:

Edit php.ini, look for the sendmail section, set the following:

sendmail_path = /usr/sbin/ssmtp -t

Last step: restart Apache.
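
Once Apache is back up, a quick way to confirm PHP is handing mail off to ssmtp is a one-liner with mail() (the recipient address is a placeholder):

<?php
// mail() shells out to sendmail_path, which now points at ssmtp
$ok = mail('youremail@gmail.com', 'ssmtp test from PHP', 'Test message sent via mail().');
echo $ok ? "queued\n" : "failed\n";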


A KeePass setting that might save your online identity

Your KeePass file might not be as safe as you think, but it is easy to protect yourself with a simple settings change that does not require creating a new .kdbx file. It makes your KeePass file more secure by deterring dictionary and brute-force attacks.

The setting is called ‘Key Transformation’, accessible in KeePass under File > Database Settings… > Security. The screenshot below is of version 2.x, but 1.x also has this feature (minus the helpful one-second delay button).

[Screenshot: the KeePass Key Transformation setting]

What it does is run the master key through N rounds of encryption before applying it. The higher the N, the more time it takes your CPU to process all the rounds. The default is 6,000, which takes less than a millisecond for a modern CPU to churn through. My setting is in the high seven figures and takes about one second. That is a delay I can live with each time I open my KeePass file. In fact, it kind of feels good to be reminded the program is doing extra work to protect me.

The reason for introducing a delay is to slow down a brute-force attack to the point that it is infeasible in this lifetime. A brute-force attack starts by trying every single character (A-Z, a-z, 0-9, symbols), then every two-character combination (aa, ab, ac…), then every three-character combination (aaa, aab, aac), and so on. A related approach, called a dictionary attack, loops through a dictionary and tries all words and various combinations of words with different delimiters. Eventually these approaches will find the master password. However, when N is a high enough number, each attempt costs the attacker one second (per CPU), which is a serious roadblock.

If your password is sufficiently strong, say 30 random characters including A-Z, a-z, 0-9, and 10 different possible symbols, that is 72 characters to draw from. That results in 72^30 = 5.24e+55 possible combinations! Only an attacker with a huge number of CPUs or a huge amount of time would be able to check all combinations. I doubt this little technique would deter high level national security organizations with billions of dollars in funding. However, I have a strong sense that a high N would deter script kiddies and cracking programs.
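
A rough back-of-the-envelope calculation (Python, numbers purely illustrative) shows why the one-second delay matters:

combinations = 72 ** 30        # ~5.2e+55 possible 30-character passwords
seconds_per_guess = 1.0        # enforced by the key transformation rounds
cpus = 10 ** 6                 # a very well funded attacker

years = combinations * seconds_per_guess / cpus / (60 * 60 * 24 * 365)
print('%.2e years to exhaust the search space' % years)   # ~1.7e+42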

As CPUs get faster, N needs to increase to offset the time it takes to attempt a single crack at the master password. I plan to increase the value every time I get a new machine.

What the ‘average’ user sets their password to:

You know it really isn’t very hard to achieve ‘better than average’ password security. Most people use the password ‘password’ or ‘123456’, and tend to use the same password for all their accounts.

Going beyond just a strong password:

A foolproof password may not be enough. Wired did a thorough write-up on how a weak password and social engineering, combined with a basic flaw in processes at Amazon and Apple, led to a journalist losing his entire online identity. That is why I always set up the extra identity verification questions on my accounts. I never use the same Q&A twice. I also use three different emails: personal, work, and private / banking. That way, even in the worst-case scenario where a hacker is able to trigger password resets and get into accounts, the scope of the damage is limited.

What is KeePass?

For those who don’t know, KeePass is a FOSS program for managing passwords. One ‘master’ password gets you into all your other passwords. It can easily generate strong passwords. In fact, I don’t even know some of my passwords, since they were generated inside KeePass with the ***’s showing. From there I pasted the value it made into whatever website’s sign-in form I was at, then immediately made a secure backup of the KeePass file so I don’t lose the new password. The coolest thing is the Ctrl+V auto-type feature that will tab back to the previous window, paste your username, tab, paste your password, and then hit enter to submit the form.

I’ve been using KeePass to manage my passwords for almost a decade. What I really like about it is how portable it is between Linux, Mac, and Windows. It also has ports to all manner of tablets and smartphones – but I would never put such a sensitive file on something that doesn’t have an encrypted drive.

Is KeePass secure?

I have not read the source and can’t vouch for it. I just know a lot of other software professionals who also use it. The fact that it is open source makes me feel better about it. It does encourage temporarily putting passwords into the system clipboard, which is arguably an insecure spot. Typing a complex password has its downsides too: a) it takes time, and b) keystroke loggers would be able to pick it up.

Here is an interesting article about someone who was tasked with cracking a KeePass file. The article doesn’t say how they cracked it, but the YouTube video comments say they “found it written on a piece of paper.”

LOL!

So the moral is, KeePass is as insecure as its operator is careless.


Why I use GitHub (or Bitbucket) at every chance, and why you should too

When I work on projects that don’t have GitHub or Bitbucket, I really miss them. It is the little things they do that speed things along and give me access to what I need in a way that is visually pleasing.



This is not meant to offend, but for me GitHub and Bitbucket are pretty much the same thing. Bitbucket originally attracted me due to its free private repos. All the work I do is under NDA, meaning it is confidential. The code is usually owned by whoever I’m working for, so privacy really matters. In the course of my work I’ve used both GitHub and Bitbucket extensively, and for my purposes I really can’t distinguish between them. Others have tried recently; it seems to come down to nuances between open source and enterprise development. That aside, I’ll just call the pair GitHub from now on so I don’t have to repeat myself.

5-speed manual vs automatic:

The difference between a project with and without GitHub is sort of like the difference between owning a car with an automatic transmission and one with a 5-speed manual.

I used to own an old BMW 3 series with a 5-speed (technically an E30). It had 3 floor pedals, the extra being the clutch for shifting gears. That car was a blast to drive! It had a tachometer in the dash too. I remember always being impressed that in 5th gear the speedometer and the tachometer were parallel. Pretty cool design and engineering philosophy by BMW. I just loved the way it responded, even though it had 180k miles when I bought it. Yeah it was expensive to maintain, but I was infatuated.

Sadly, this is less common today, but I also learned to drive on a manual. Just after my sixteenth birthday I took my driver’s test in a 5-speed Corolla. During the test I conked it out twice but still passed by one point.

That is the good and bad about the manual: it is more work, can be slower to shift and fatiguing to drive, but in the right hands, when you down shift and punch it out of a corner there is nothing like it! It does just what you expect it to do at all times.

The enjoyment of shifting gears:

When it comes to source control, the git command line is my sporty 5-speed manual. I use git exclusively on the command line. I know my limits (by no means am I a git guru), but I get the job done day in and day out. There is satisfaction in the familiar routine of going through the gears (pull, commit, push, and the occasional merge/rebase). Everybody I know who can switch to git already has.
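
For the record, ‘going through the gears’ on a typical day is nothing exotic; the commit message here is just an example:

git pull --rebase        # sync up with the remote before starting
git add -p               # stage changes hunk by hunk
git commit -m "Fix off-by-one in report pagination"
git push                 # share it, then open a pull request on GitHub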

Sorry Subversion:

I suppose SVN is now the equivalent of an old rust bucket with a 3-speed on the column without a synchro (double clutch to get back into first). Sorry SVN, you were a trusty pal back in the day.

The ease of driving an automatic:

Using GitHub on top of git is what I consider an ‘automatic’. It does a lot of nice stuff intuitively that I don’t have to work at or think about too much.

My main use of GitHub is the web interface for browsing the repo. I love being able to compare branches, look at commits, study code, go back in time, make inline comments, etc, etc. The coloring of the output is very clear as to what is new code, what was removed, and which lines were changed. I will often have a handful of GitHub tabs open at once to get caught up on recent commits. Reading code recently committed by your team members is a good habit, even if not required by management.

To that point, fixing a bug correctly (without breaking something else) almost always involves determining its origins. With GitHub it is very handy to be able to literally ‘click’ into the past and search for keywords at a certain point, and then correlate those changes to commit messages. Then you know who to take the nerf bat to.

I have tried desktop GUI tools on Ubuntu and Windows for browsing repo history. They all come up way short and remind me of Windows 3.1 programs. The command line can be used for looking at recent changes and even code archaeology, but in practice it becomes too much to wade through.

Managing pull requests in GitHub is really nice too. It will even warn you in advance if there is a merge conflict. The built-in wikis are nice. The README.md Markdown formatting is nice.

A project run through GitHub (or BitBucket) makes my work day easier, makes collaboration easier, and helps me feel like I’m right there with the rest of the team when I’m working remotely.

With the git command line and GitHub, we get the best of both worlds: the pleasure of the 5-speed (the git CLI) and the convenience of the automatic (GitHub). Okay, it’s not a perfect analogy…

Some alternatives for the DIY project:

Don’t want to tie yourself to GitHub or Bitbucket? I don’t blame you. There are many business cases for keeping code on servers you and only you control.

These projects are web-based repo browsers that work similarly to GitHub:


The Software Maintenance Efficiency Curve

I have been told “there is no such thing as green-field development”. While that statement is false for the hobbyist developer, in the business world it is nearly true. Those who code for a hobby or for pure enjoyment often start from scratch, as evidenced by the explosion of unmaintained projects on GitHub. See my article about software ghettos for more on that. When it comes to software used in the real world, open source or not, maintenance is an everyday task.

Consider what goes on between the 1.0 and 1.1 release. Was that 100% new work, or did it include some maintenance to allow the 1.1 features to fit with the 1.0 architecture? Now fast-forward to the 1.8 release: was the ratio of maintenance higher? Almost certainly.

An article by Robert Glass in IEEE Software (May/June 2001) called Frequently Forgotten Fundamental Facts about Software Engineering states that maintenance consumes 40–80% of software costs, and that enhancements account for roughly 60% of maintenance costs!

Why care about quality?

Consider that businesses are not interested in (and probably can’t afford) a monument to computer science. What the average business demands is functional code. I have been involved with dozens of businesses – small, large, tech-centric, and technophobic – and none have asked for fancy or perfect code. Anything beyond functional is seen as a waste, and I agree. This is not a license to take shortcuts and hack things together. If shown the distinction, a business doesn’t want a ghetto code base with anti-patterns everywhere that will soon become unmaintainable and cause developers to run and hide. In spite of this, it turns out a lot of systems are managed in a manner that contributes to major system outages, security holes, developer attrition, and occasionally huge monetary losses. Google ‘stock market glitch‘ for examples.

How can software maintenance work be done efficiently?

A great developer won’t make much of a dent if they are blocked from doing so. The product owner should have a long term plan for the system which includes keeping the system healthy and maintainable. That plan should favor fixing existing bugs (see #5 on Joel’s list) and allocate time for paying down technical debt in each release. Technologies such as source control, a suite of unit tests, code linting and build automation are extremely helpful. Policies on code style, documentation, learning, and knowledge sharing make a big difference too.

A team composed of a mix of veterans, mid-level staff, and junior developers makes for a healthy balance. The developers should be allowed to think they own it (a variation on a famous quote from Bill Gates). A culture of knowledge sharing should be encouraged and rewarded. Assumption checking should be considered normal and non-threatening. Have you ever read a spec that was 100% free of half-baked assumptions? Individual performance should take a back seat to team performance. Otherwise silos form, the incentives become twisted, and so does the code.

On the individual level a developer has three hills to climb to become maximally efficient:
1) The languages, libraries, and technologies used in the system.
2) The domain (the nature of the business).
3) The way the system was setup.

Languages and libraries should be a relatively low hurdle if the technologies used are ubiquitous and the right skills are hired for. Domain knowledge is harder to come by. In some areas such as insurance, finance, education, or ERP a person with the right experience is attainable. The third hurdle is by far the least visible to the business and the most challenging. It ultimately comes down to what is stuck inside the developer’s head that makes them efficient at maintaining the system. If the developer wrote the system from scratch, they get past that for free. That assumes they haven’t already moved on… perhaps washing their hands of a mess?

“Debugging is like farting – it’s not so bad when it’s your own code” – Unknown

The time it takes to attain mastery over a code base is proportional to its size and complexity. The best approach is to start with an easy task, then something slightly more complex in a different part of the system, then something in a third area, and finally to circle back to the first area for a real challenge. This way confidence is built up steadily and the risk of breaking something critical is reduced.

The first few days to several months of working on an unknown system are the most stressful and error-prone for a developer. Without knowing every aspect of the system it is easy to accidentally write new bugs. Without a senior developer or product manager to explain things it can be very confusing and frustrating to make headway. This is where developers with solid people skills and high self-esteem will shine, because they are not afraid to ask for help and are effective at getting good answers.

Development efficiency increases over time then plateaus:

Software Maintenance Efficiency Curve

The length of the orientation phase and the steepness of the growth phase depend on the size of the system. Both can be improved with documentation and clean code, but most importantly by friendly and knowledgeable team members. Hiring a person who already knows the languages, libraries, and domain also helps.

Let’s say things go well, and the developer climbs the efficiency curve after X days or weeks. Now they are really ‘making money’ for the business. This is the most efficient place for the developer to be, business-wise. The length of time a developer spends on top of the curve depends entirely on the company’s ability to retain that developer. The going advice is to pay at least a fair wage, be flexible, be organized, then stand back and let them run. Make sure to let them do interesting things from time to time. Offices with windows, sit-stand desks, and flexible hours are nice perks that don’t cost much when averaged out. The alternative is to lose the developer and go back to square one in the orientation phase with someone new.


How to setup the MySQL data directory to be in your encrypted home folder on Ubuntu 14.04

Ubuntu has built-in home folder encryption similar to OS X. I always turn on this feature on both OSs and have never experienced any perceptible performance hit. This guide shows one approach to migrating the MySQL data directory into the encrypted home folder on Ubuntu 14.04.

Caveats:

The only system user allowed to access the encrypted home folder is the user that owns that folder (e.g., your user). For this approach to work, MySQL must run under the same user that you log in as. The service must be started after you log in to the desktop. That can be automated by creating a script that gets triggered by the ‘Startup Applications’ program, as sketched below.
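
A minimal sketch of such a startup script (it assumes your user can start the service without a password prompt, e.g. via a sudoers rule):

#!/bin/bash
# start-mysql.sh - run at desktop login, after the encrypted home
# folder (and therefore the MySQL data directory) has been mounted
sudo service mysql start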

Configuration changes:

# stop mysql
$ sudo service mysql stop

# backup mysql data folder and config file
$ sudo cp -a /var/lib/mysql /var/lib/mysql_backup
$ sudo cp /etc/mysql/my.cnf /etc/mysql/my.cnf_backup

# move mysql data folder
$ sudo mv /var/lib/mysql /home/youruser/mysql

# change ownership of folder
$ sudo chown -R youruser /home/youruser/mysql

# config changes to my.cnf
$ sudo vi /etc/mysql/my.cnf

Changes to my.cnf:

  • socket = /home/youruser/mysql/mysqld.sock (there will be multiples)
  • pid-file = /home/youruser/mysql/mysql.pid
  • user = youruser
  • datadir = /home/youruser/mysql
  • log_error = /home/youruser/mysql/mysql_error.log
# start mysql
$ sudo service mysql start

# test everything out...

# when you are sure it is working
$ sudo rm -rf /var/lib/mysql_backup

Why encrypt the MySQL data directory?

Computer equipment, particularly laptops, is stolen all the time. As a developer, your machine probably contains dozens of sensitive passwords, API keys, SSH keys, and so forth. Most are probably dev accounts, but a few live passwords might be floating around too. For this reason I keep all my files in the encrypted home folder (as it is meant to be used).

A potentially huge source of sensitive information is the set of local databases on your machine. The degree to which a dev database should be locked down really depends on the nature of the business. Talk to your manager about it if you are unsure.

What I like about this solution is that, since the entire data folder is encrypted, it automatically covers any new databases going forward. This technique is not unique to MySQL; all database platforms allow storing data in a user-defined location.

Is Ubuntu’s encryption of the home folder bulletproof?

See the following links for more information:
http://www.linux-mag.com/id/7568/
http://security.stackexchange.com/questions/41368/is-encrpyting-home-sufficient
https://help.ubuntu.com/community/EncryptedHome

Nothing is likely to stop serious hackers or the NSA. However, putting sensitive data into the encrypted home folder is a reasonable precaution a professional should be expected to take.

Saying –

“My laptop was stolen which contained all customer email addresses… *sorry*.”

Sounds MUCH worse than –

“My laptop was stolen and the data was encrypted with AES 128-bit encryption, making it very, very unlikely that anybody – including computer experts, small nation states, and powerful corporations – will be able to access anything.”

 

What about using a cloud database for development?

Hosting your dev database in the cloud keeps sensitive data off your machine, and this option is becoming increasingly affordable. Depending on latency to the cloud it can slow down day-to-day development work. If you do use cloud servers for development, make sure to connect over an encrypted connection! Otherwise everything that goes back and forth can be eavesdropped on. A VPN, SSH tunnel, or MySQL SSL connection will do the trick.
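
For example, an SSH tunnel that forwards the standard MySQL port to a cloud dev box keeps the traffic encrypted (the hostname and user are placeholders):

# Local port 3306 now reaches the remote MySQL server over SSH
ssh -N -L 3306:127.0.0.1:3306 youruser@dev-db.example.com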


Correct use of PHP’s ‘at’ operator with speed benchmark

In PHP, placing an @ symbol in front of an expression (variable or function call) tells PHP to suppress any error messages that expression generates. I find this to be a handy piece of syntactic sugar. When used correctly, the gains in code readability far outweigh the costs in terms of performance (which I benchmark below). Some people argue that suppressing errors is a mistake that can mask problems, and that therefore this technique should never be used. I agree with the idea that suppressing errors is bad. At the same time, if I don’t care whether something in a 4-level nested array is null, then suppressing PHP’s chatter is doing me a huge favor.

Let’s look at an example of where the @-operator shines. Consider trying to get a value out of a nested array which may or may not be set, such as $response['STATUS']['ERRORS']['ERROR_COUNT'], a typical thing to see in SOAP-based XML responses from enterprisey APIs.

One approach might be:

if(isset($response) &&
   isset($response['STATUS']) && 
   isset($response['STATUS']['ERRORS']) && 
   isset($response['STATUS']['ERRORS']['ERROR_COUNT'])) {
	$error_count = $response['STATUS']['ERRORS']['ERROR_COUNT'];
}

isset() doesn’t have a problem with this shorter version either (thank you to my friend for pointing this out):

if(isset($response['STATUS']['ERRORS']['ERROR_COUNT'])) {
	$error_count = $response['STATUS']['ERRORS']['ERROR_COUNT'];
}

With the @-operator:

$error_count = @$response['STATUS']['ERRORS']['ERROR_COUNT'];

I like the last method because it is cleanest. I don’t care if $error_count is zero or null. The @-operator, being a somewhat lazy technique, pairs well with another of PHP’s lazy-at-best, deeply-flawed-at-worst ‘features’: NULL, "0", 0, array(), and false are all ‘falsey’ and can be used interchangeably in comparisons with plain ‘==’. Using three equal signs ‘===’ also considers the types of the variables, which is generally the preferred method of comparing things, but that level of precision isn’t always required.

Notes about the @ sign in PHP:

  • If you declared a custom error handler with set_error_handler(), it will still get called.
  • It only works on expressions (things that give back a value). So it does not work on if/then statements, loops, and class structures, etc. This was a wise choice by the PHP community.
  • The fact that it only works on expressions greatly reduces the unanticipated side effects that can result. In this sense it is nothing like ON ERROR RESUME NEXT, an infamous language feature in Visual Basic and Classic ASP that chugs past errors (the previous error can still be checked in a sort of poor man’s try/catch block). ON ERROR RESUME NEXT sucks and makes me want to hurl just thinking about it.

Some people really hate the @-operator:

Most of the arguments against the @-operator come down to misuse and then overreaction. The fact is, inexperienced and inept programmers can take any language feature and come back with a hairball of unmaintainable code.

As I demonstrated above, the @-operator is great when digging through arrays such as complex DOM objects. This is especially true with optional keys. It should not be used when calling external resources like the file system, database, APIs, etc. In those situations, try/catch blocks should be used to make sure if something goes wrong it gets logged and cleaned up properly. The @-operator is not a substitute for a try/catch!

The second major knock against the @-operator is the alleged performance penalty. Let’s do some benchmarking:

laurence@blog $ php -v
PHP 5.3.24 (cli) (built: Apr 10 2013 18:38:43)
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2013 Zend Technologies

laurence@blog $ cat php-at-operator-test.php
<?php
error_reporting(E_ALL ^ E_NOTICE);

$OPERATIONS = 100000;

// test using @-operator
$time_start = microtime(true);
for($i=0; $i<$OPERATIONS; $i++) {
  $error_count = @$response['STATUS']['ERRORS']['ERROR_COUNT'];
}
$duration = (microtime(true) - $time_start);

echo "With the @-operator:" . PHP_EOL;
echo "\tTotal time:\t\t" . $duration . PHP_EOL;
echo "\tTime per operation:\t" . number_format($duration / $OPERATIONS, 10) . PHP_EOL;
echo PHP_EOL;


// test using isset()
$time_start = microtime(true);
for($i=0; $i<$OPERATIONS; $i++) {
        if(isset($response['STATUS']['ERRORS']['ERROR_COUNT'])) {
             $error_count = $response['STATUS']['ERRORS']['ERROR_COUNT'];
        }
}
$duration = (microtime(true) - $time_start);

echo "Using isset():" . PHP_EOL;
echo "\tTotal time:\t\t" . $duration . PHP_EOL;
echo "\tTime per operation:\t" . number_format($duration / $OPERATIONS, 10) . PHP_EOL;
echo PHP_EOL;
laurence@blog $ php php-at-operator-test.php
With the @-operator:
        Total time:             0.19701099395752
        Time per operation:     0.0000019701

Using isset():
        Total time:             0.015001058578491
        Time per operation:     0.0000001500

For my limited testing with PHP 5.3.24 on a 6-core box, it looks like the @-operator is ~13 times slower than using isset(). That sounds like a lot, but let's look at the penalty per use, which is 0.0000018201 seconds, or ~1.82 microseconds. An application could do approximately 550 @-operator uses and it would impact the response time by just 1 millisecond. If a single page request does 550 @-operator look-ups and every millisecond counts, then you have a problem. What probably matters more is overall memory consumption, transactionality, caching, code cleanliness, ease of maintainability, logging, unit tests, having customers, etc... Still, it is good to have a solid measure when arguing the case either way. In the future, as CPUs get faster and cheaper, I expect the performance penalty to shrink.
