The Ongoing Vigil of Software Security

This guest post is by James Schorr, who has been developing software since 1999. He is the owner of an IT consulting company, Enspiren IT Consulting, LLC.  He lives with his lovely wife, Tara, and their children in Kansas City, Missouri. James spends a lot of time writing code in many languages, doing IT security audits, and has a passion for Ruby on Rails in particular. He also loves spending time with his family, playing chess, going to the shooting range, hiking, fishing, and investing. His professional profile is on LinkedIn.

The news is often filled with stories about exploits affecting large corporations and widely used software (LinkedIn, Yahoo, Windows, Linux, OS X, *BSD, Oracle, MySQL, Java, Flash, etc.). However, a tremendous number of successful hacks and exploits take place every day against lower-profile systems that we never hear about.

We keep seeing these kinds of exploits for a few reasons: the “bad guys” are much smarter and more determined than we give them credit for; we are much lazier and more ignorant than we care to admit; and security is genuinely difficult to manage properly. As we become more and more reliant upon software, it is imperative that security be taken more seriously.

What's the big deal?

Consider this somewhat over-the-top thought exercise:

Think of your systems, databases, and code as a ship floating in the middle of the Atlantic. The ship was fairly hastily constructed as the management team pushed the various craftsmen to get done in time for the journey.

It's the middle of the hurricane season. The waves are getting higher, sharks are circling your boat, and aboard are quite a few passengers. Most of the passengers are of a fairly decent ilk, but some are not. This latter group, partially due to the insufferable boredom that accompanies their long journey, has taken delight in drilling holes in the side of the boat (with the tools that were discarded during construction). Other troublemakers spend their time throwing chum overboard to the encircling sharks and even, when no one is watching, throwing each other overboard. A few of the cleverer sort spend their time impersonating the crew and using their new privileges to look for ways to take over the ship. Sadly, even some of the crew members have been persuaded into joining their mutinous ranks.

As time goes by, the remaining crew loses its ability to prevent damage to the craft and protect those on board, as a result of sheer exhaustion, the tenacity of the passengers, and the natural wear and tear of the elements.

What's the point of this mental exercise? We need to realize that unrelenting attacks abound, both from within and without the system. If not properly addressed, they only escalate over time.

Security is a word with a long, storied past. Most dictionaries define it as something like “freedom from danger”. Of course, declaring that a system, code base, or network is secure is naïve at best and dangerous at worst. Recognizing the threats is the first step toward addressing them.

Ask any IT team member who is charged with “securing” anything and you'll quickly find out that it is an extremely difficult, often thankless task. Even in a tightly controlled environment it can be pretty tough, especially during times of extreme change, turnover, or growth.

Why should we care?

We need to care because our applications, databases, and systems:

  • are regularly being threatened from the inside and the outside, often without us even being aware of it.
  • are depended upon by users who have invested some degree of their money, trust, time, or work into using them.
  • haven't “arrived”. There is always a way to circumvent the “system”.
  • typically depend upon the “happy path” scenarios (e.g. when all goes well).

What can we do?

Thankfully, there are quite a few things that can be done proactively to mitigate the risks and stave off the threats. For brevity's sake, I'm going to give a high-level overview of what can be done to help prevent exploits:

Team Security Measures

  1. Who should be in charge of our project's security? Involve the right people, taking the time to get to know their character and mindset. Not everyone is cut out to think the way security work requires. Unless someone is genuinely interested in security, trustworthy, assertive, and unafraid of conflict, they simply aren't the right person for the task.
  2. Who has need-to-know? Need-to-know is an essential principle in projects. Data leakage often inadvertently occurs by team members that probably didn't need the information to begin with. Those that realize the “big-picture” usage of the data and need access to it for their tasks typically realize the need to keep the data private.
  3. Separation of duties with each area managed by a small core team. While not always possible, it is helpful to have one main realm of responsibility per team. Also, the core team of each area/realm needs to remain just that – the core team. In other words, the more people added, the tougher it is to keep things secure.
  4. How, when and to whom do we communicate? The procedures for securely communicating need-to-know information are critical to establish. Various methods need to be implemented to allow team members to exchange information in as secure a fashion as possible. An example might be the usage of an encrypted volume in a shared drive (retaining the control of the encryption details).
  5. Knowledge Transfer: when someone leaves the team, great care should be taken to transfer the knowledge to the new member in a secure fashion. Additionally, all relevant credentials should be changed immediately, no matter how trusted that individual or group was. A simple exit checklist – that is followed – can greatly help with this.

Technological Security Measures

  1. Testing is critical: we are testing, right? In dev-speak, tested_code != secure_code but tested_code.class == SecurityMindset. In other words, it is possible to write insecure, tested code, but proper testing does seem to inherit qualities from a security mindset and to encourage more thoughtful programming. In my opinion, testing generally falls into two main types:
    1. Code-based Testing: I'll let others bore you with a long list of what's available out there but do want to point out that real-world progress can be made towards better securing code with the usage of tools/methods such as: Rspec and friends, TDD, BDD, etc.
    2. Human Testing: sometimes nothing beats enlisting the help of others to pound away on our beloved projects. You'd be surprised at how many issues are found by this approach, often leading to cries of, “But users aren't supposed to do that!”
      1. Non-technical users: enlist someone who has a hard time finding the / key. This type of person will usually do all sorts of unexpected things, and that unexpected behavior can quickly reveal hidden weaknesses in the UI, workflow, and security.
      2. Enlist the upcoming geeks: you know those kids who are always jail-breaking phones? After issuing a few half-hearted reprimands, ask them to “conquer” your app. Offering a prize can't hurt.
      3. Enlist an expert to audit your code, procedures, and projects.
  2. Logging:
    1. What to log: in general, the more information we capture about transactional details (transactional meaning any action that involves change), the better. Anything related to attempted security breaches needs to be logged. Admin alerts should also be sent out automatically; these alerts need to be designed with great care so that they transmit nothing that could harm the system if intercepted in transit.
    2. What to never, ever log:
      1. Credentials: passwords, API keys (abstract before logging: e.g. if Bob does X with an API key, put a different identifier in the log file, not the key).
      2. Credit card numbers, PINs, debit card numbers, anything banking related unless we are doing so in compliance with PCI standards.
      3. Medical information (see HIPAA – the Health Insurance Portability and Accountability Act – or your country's corresponding laws).
      4. Anything that can be used to compromise the system or its users.
    3. How to log: I personally prefer a two-pronged approach: (1) write to log files that are automatically transferred offsite; (2) keep an audit trail in a NoSQL database using a fire-and-forget approach: send the insert and keep moving. A failure to write to the audit trail should alert admins, but it should never slow down or otherwise impact the user's use of the application.
    4. When to log: as close to the event as possible, to minimize the chance of data loss.
      1. Log Alterability: think, “if I were a hacker and compromised this system, I'd want to clean up after my activities”. How do we make our logs non-alterable, even by support staff?
  3. Access Levels: these typically fall into the following:
    1. Users
      1. What can they access and why
      2. Who can change their level (e.g. can the user manage their own level via subscriptions)?
    2. Support Staff
      1. Level 1 CSR
      2. Level 2 CSR
      3. Level 3 Admins
      4. Dictators (can do anything with no recourse)… careful with these types.
  4. Crucial Elements:
    1. Account Lockouts
      1. Users are locked out for some period of time when they fail to log in after X attempts or try via different IPs, etc.
      2. Users are locked out and admins alerted when they try to get around the system (these types of lockouts do not expire with time but rather require a Support Staff person to unlock them based on their discretion).
      3. Ability for Support Staff to lock and unlock users very quickly after following a procedure to record why they're doing so. A permanent record needs to be kept as to who unlocked whom and why.
    2. Account Password Policies: password strength, requirements to change the password every X days, password history (can't reuse old passwords), etc.
    3. Other: click-limits, IP address binding, geographic-binding, usage of Oauth 2, etc.
  5. Frameworks and Software Libraries: it's fairly common to have security vulnerabilities “appear” due to the integration of code from other sources. Of course, no one has time to re-invent the wheel, so to speak; nor should they. It is a good practice to always read through the source code and reported issues of 3rd party software prior to implementation.
    1. Take the time to search for some of its common exploits and best-practice methods of usage. Have we taken the time to test what X library (framework, gem, plugin, etc.) would mean for our application's speed, stability, and security?
    2. Refrain from handling some things ourselves. A good example is credit-card processing. Why handle it yourself when a 3rd party, tested service will likely do so in a more secure manner? Look for a project that has been around for a while and has a good track-record of quickly closing vulnerabilities.
  6. Servers and Hosting: it may save some money to host on a shared host or cousin Bill's server, but will the data be secure? It's best to strive to meet all three of the CIA principles (Confidentiality, Integrity, and Availability) when choosing a host, aiming for at least a medium level on each principle.
    1. Keep the servers up-to-date.
    2. Use intrusion detection applications (e.g. psad, fwsnort) to alert admins of attempts to break in the system.
    3. Use a properly configured firewall that is easy to adjust quickly.
    4. Send the logs offsite (e.g. not on the same “box”) to a secured server on a frequent basis.
    5. Backups: ideally, these should occur nightly of the entire codebase, logs, and database dumps; these backups should be kept offsite in the same manner as logs.
    6. Imaging: frequent images of servers can be helpful for forensics in the event of an exploit and for data recovery.
    7. Server-side miscellaneous applications (Apache, Nginx, SSH, OpenSSL, etc.): disable unused modules, limit connections, use non-default ports, etc. (see Resources for more ideas).
    8. Schedule checks for rootkits and malware on a daily basis; be sure to alert admins if any is found.
  7. Database(s): Familiarity with the database(s) is key to keeping them secure. For instance, if a development team is very familiar with MySQL and decides to add in a secondary technology alongside (maybe some MongoDB databases), it would be wisest to evaluate the architecture and security implications prior to implementation.
  8. Credentials:
    1. Where and how should we store the credentials that our app needs (e.g. API keys, database credentials, etc.)? A good question to ask ourselves is, “if someone got into our server as non-root (if they got in as root, it's game over anyhow), what could they get, and who would it hurt?”
    2. Are we committing our credentials to GitHub or another hosted VCS? If so, we're blindly trusting that 3rd party to be and stay secure.
    3. Changes should be planned for and completed whenever there is a change in personnel and on a periodic basis. This can become a real hassle unless thought is given along the lines of, “How do we quickly change these credentials?”
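To make the logging guidance above concrete, here is a minimal Ruby sketch of scrubbing credentials before they ever reach a log file. The `SENSITIVE_KEYS` list and `sanitize_for_log` helper are hypothetical names, and the fingerprinting scheme is just one way to keep log entries correlatable without writing the secret itself:

```ruby
require 'digest'

# Keys whose values must never appear in a log entry (hypothetical list).
SENSITIVE_KEYS = %w[password password_confirmation api_key credit_card pin].freeze

# Replace sensitive values with a short, non-reversible fingerprint so that
# entries can still be correlated without the secret ever touching disk.
def sanitize_for_log(params)
  params.each_with_object({}) do |(key, value), clean|
    clean[key] =
      if SENSITIVE_KEYS.include?(key.to_s)
        "[REDACTED:#{Digest::SHA256.hexdigest(value.to_s)[0, 8]}]"
      else
        value
      end
  end
end

entry = sanitize_for_log("user" => "bob", "api_key" => "secret-123")
# entry["api_key"] is now something like "[REDACTED:1a2b3c4d]"
```

Because the fingerprint is derived from the value, two entries made with the same API key can still be matched up during an investigation, which is usually all a log reader legitimately needs.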
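The account-lockout rules under “Crucial Elements” might look like this in Ruby. `LoginThrottle` and its thresholds are hypothetical; a real implementation would persist the attempts and write a permanent audit record on every unlock:

```ruby
# Sketch of a lockout policy: lock after MAX_ATTEMPTS recent failures;
# support staff can unlock, which should always be audited.
class LoginThrottle
  MAX_ATTEMPTS    = 5
  LOCKOUT_SECONDS = 15 * 60

  def initialize
    @failures = Hash.new { |h, k| h[k] = [] } # username => failure timestamps
  end

  def record_failure(user, now = Time.now)
    @failures[user] << now
  end

  # Locked when MAX_ATTEMPTS or more failures fall inside the lockout window.
  def locked?(user, now = Time.now)
    @failures[user].count { |t| now - t < LOCKOUT_SECONDS } >= MAX_ATTEMPTS
  end

  # Support-staff unlock; a real system would also write a permanent audit
  # record of who unlocked whom, and why.
  def unlock!(user, staff:, reason:)
    @failures.delete(user)
  end
end
```

Making `staff:` and `reason:` required keyword arguments bakes the “record why they're doing so” rule into the API itself: there is simply no way to unlock someone without supplying both.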
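Similarly, the password policies above (strength, expiry, history) can be expressed as one small, testable object. This is a sketch with assumed thresholds, not a definitive policy:

```ruby
require 'digest'

# Sketch of the password policies above; the thresholds are assumptions.
class PasswordPolicy
  MIN_LENGTH   = 12
  MAX_AGE_DAYS = 90

  # Require a minimum length plus mixed case and a digit.
  def strong?(password)
    password.length >= MIN_LENGTH &&
      !!(password =~ /[a-z]/) && !!(password =~ /[A-Z]/) && !!(password =~ /\d/)
  end

  # Force a change every MAX_AGE_DAYS.
  def expired?(changed_at, now = Time.now)
    (now - changed_at) / 86_400.0 > MAX_AGE_DAYS
  end

  # Block reuse by comparing against digests of old passwords.
  # Store only the digests, never the old passwords themselves.
  def reused?(password, history_digests)
    history_digests.include?(Digest::SHA256.hexdigest(password))
  end
end
```

Note that the history check follows the logging rules above: the old passwords never exist anywhere in the system, only their digests do.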
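Finally, one common answer to the credentials questions above is to keep secrets out of the codebase entirely and read them from the environment at boot. The variable names here are hypothetical; the fail-fast check is what makes rotation a deployment concern rather than a code change:

```ruby
# Credentials the app expects in its environment (hypothetical names).
REQUIRED_KEYS = %w[DATABASE_PASSWORD PAYMENT_API_KEY].freeze

# Load every required credential, failing fast at boot if one is missing
# so a bad deploy is caught immediately rather than at first use.
def load_credentials(env = ENV)
  REQUIRED_KEYS.each_with_object({}) do |key, creds|
    value = env[key]
    raise "Missing credential: #{key}" if value.nil? || value.empty?
    creds[key] = value
  end
end
```

With this shape, answering “how do we quickly change these credentials?” after a personnel change is a matter of updating the server's environment and restarting the app; nothing needs to be committed or redeployed.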

I hope that this article has given you at least a few ideas of how to better improve your software project's security. If so, I'll consider it a success. Feel free to ask questions and give feedback in the comments section of this post. Thanks!


Below are some resources that may be helpful (those that I have found extremely helpful over the years are denoted with a * next to them):
