Communications to Mitigate Security Risks from Within

Security has always been a vexing problem for us. Back in earlier times, great castles and walls were built to keep out unfriendly types. Technologies and tactics such as catapults came along to reduce the effectiveness of these defenses.

What prompted this thought was an article I came across in Asian Power magazine, “The new paradigm for utility information security: assume your security system has already been breached.” The author shares:

Basically, there has been a standard practice, if you will, for many years where the “fortress” approach was the norm, or paradigm, for enterprise and energy company security. This applied to physical security and cyber security. The fortress concept included a strict perimeter, usually defined by gates, guards, and firewalls.

In this approach, the assumption was that all the attackers were outside the perimeter and that the strong perimeter would not only keep the attacker from getting inside the walls but also keep them from reaching the crown jewels (aka data), which were housed within layers of additional security barriers: more walls, more guards, more firewalls, and maybe a moat.

I ran this article by Emerson’s Bob Huba, whom you may recall from earlier cyber-security-related posts. He agreed that this threat from within is very real: rather than defeating the perimeter, an attacker can simply trick someone who is already inside it. A hacker can use social networks, Google, and other methods to discover personal details about a high-level or other person in the company. Executives are especially vulnerable because they tend to appear in the news and in annual reports, which makes their personal details easier to discover.

The hacker can then craft a very personal message to the executive, perhaps claiming a mutual friend or a shared interest, which makes the hacker seem harmless and maybe even like somebody the executive has met. Once the executive opens the message, the attack can occur, perhaps by opening an attachment or following a link to a malicious site. Traditional virus and firewall filters can prevent many, but not all, of these attacks.

It seems that the only mitigation to this (beyond deep scanning of emails for bad stuff) is to train employees about the threat. The easiest filter is to check the sender’s email address. If it is a generic account such as Gmail, Yahoo, Hotmail, or another non-business address, that should be a trigger either to permanently delete the message without opening it (or to view it only in the preview pane) or to use the email junk-filtering process to disable the possible “bad stuff” in the message while figuring out whether it is real or bogus. Junk filtering will likely already handle the spoofed-email case, where the address shown in the message differs from the actual email server that sent it. Bob notes that he deletes emails from people he does not recognize or already trust; if it is important, folks will find another way to contact him.
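
To make that first-pass check concrete, here is a minimal Python sketch of the idea. It is only an illustration, not any mail product’s actual filter: the FREE_MAIL_DOMAINS list and the is_suspicious_sender helper are hypothetical names, and a real mail gateway would also verify SPF/DKIM records rather than trusting the visible From: header alone.

    # Rough sketch of the "check the sender's address" heuristic described above.
    from email.utils import parseaddr

    # Generic webmail domains that rarely appear on legitimate business mail
    # from a known vendor or colleague (illustrative list; extend as needed).
    FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "aol.com"}

    def is_suspicious_sender(from_header, sending_domain=None):
        """Flag mail whose From: address uses a generic webmail domain, or whose
        visible From: domain does not match the domain that actually delivered it
        (a crude check for the spoofed-address case mentioned above)."""
        _, address = parseaddr(from_header)
        domain = address.rpartition("@")[2].lower()
        if domain in FREE_MAIL_DOMAINS:
            return True
        if sending_domain and domain != sending_domain.lower():
            return True  # visible address and delivering server disagree
        return False

    # Example: a "mutual friend" message from a free webmail account gets routed
    # to the junk folder for review rather than straight to the inbox.
    if is_suspicious_sender("A. Friend <ceo.helper@gmail.com>", "mail.example.com"):
        print("Move to junk; verify the sender before opening any attachment or link.")

In practice this kind of logic lives in the mail gateway or junk filter rather than on the desktop, but the rule of thumb for the reader is the same: a free-mail domain or a mismatch between the visible address and the sending server means treat the message as junk until proven otherwise.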

The bottom line is that there is a personal element to security, and continued employee communication about these threats, and how best to deal with them, is an important part of any security program. This communications element should be part of the planning process for the team in charge of overall security risk mitigation.

Update: I received an email from a reader who shared that they’ve had great success with whitelist-based security software. As Bob explained it to me, you start with a clean PC before any applications are installed, install the whitelisting software, and then add the applications, each of which is placed on the whitelist. Anything not on that list will be unable to execute. There is some overhead involved in managing the list as software installs, patches, and other changes are made to the PC, smartphone, tablet, or other communications device.
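
For readers curious what the enforcement step boils down to, here is a minimal sketch of hash-based whitelisting under the assumptions above (a baseline built from the clean PC before anything else is installed). The file names, the whitelist.json format, and the helper functions are hypothetical; commercial products enforce this at the operating-system level rather than in a script, but the idea is the same.

    # Minimal sketch of hash-based whitelisting: build a baseline from the clean
    # machine, then refuse to run anything whose hash is not in the baseline.
    import hashlib
    import json
    import subprocess
    import sys

    BASELINE_FILE = "whitelist.json"  # hypothetical baseline built from the clean PC

    def sha256_of(path):
        """Return the SHA-256 digest of the file at path."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_baseline(executables):
        """Record the hashes of the applications installed on the clean PC."""
        with open(BASELINE_FILE, "w") as f:
            json.dump({path: sha256_of(path) for path in executables}, f, indent=2)

    def run_if_whitelisted(path):
        """Execute the program only if its current hash matches the baseline."""
        with open(BASELINE_FILE) as f:
            allowed = json.load(f)
        if allowed.get(path) != sha256_of(path):
            print(f"Blocked: {path} is not on the whitelist (or has been modified).")
            return
        subprocess.run([path])

    if __name__ == "__main__":
        # e.g. python whitelist_check.py "C:/Program Files/App/app.exe"
        run_if_whitelisted(sys.argv[1])

The overhead the reader mentions shows up in build_baseline: every legitimate install or patch changes file hashes, so the baseline has to be updated through a controlled process.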

Posted Monday, July 18th, 2011 under Cyber-Security.

4 comments

  1. Jim:

    Focusing on the user is useful and necessary as long as it doesn’t take your focus away from other, more productive, ways of enhancing security. I feel perfectly confident in predicting that every human being will, at some awkward moment, fail, and if your security regime depends on human beings not failing then it, too, will fail. The ideal defense will assume that your users will do dumb things, and protect you anyway. That’s where an “enhanced whitelisting” system comes in.

    As you and probably all your readers know, whitelisting usually means denying execution rights to all unauthorized software, and software becomes authorized by being placed on a white list. Done reasonably well, that means that unauthorized software is blocked, can’t execute, and can’t do whatever it was intended to do. But that’s not the end of it.

    Whitelisting has been around a long time but for many years had trouble getting traction simply because managing the whitelist was more trouble than it was worth. Today’s whitelisting products automate, to varying degrees, whitelist maintenance, and now it’s a very workable system. Those varying degrees of automation also mean varying degrees of security of the whitelist management process, and in the Industrial Control Systems environment, more security, even at the price of a little more “friction”, is very desirable.

    The “enhanced” part of whitelisting that I believe ICS folks in particular would be interested in includes things like (a) quarantine on discovery, where unauthorized software is quarantined as soon as it hits the system rather than waiting for it to attempt to execute and then blocking it; (b) quarantine after block, so that it cannot get into an execute-block-execute loop as some malware will, with blue screen consequences; (c) robust whitelist management that goes far beyond simply accommodating patches and updates or “trusting” locations, users, and agents, so that whitelist changes (not just additions) can be fully under the control of the whitelisting system; and (d) filesystem inventory, so that a periodic snapshot can be taken of all software on each computer with a flag for any off-whitelist software found, giving proof positive of the software on the computer at the time of audit. There are others (logging insertion/departure of removable media, for example, and quarantine of USB-based software that attempts to execute), but I’ve probably already worn out my welcome. You or anyone else who wants the rest of the story can find me at Doug.Finley@naknan.com.

    I enjoy blogs like yours that have some meat in them.

  2. Bernie Pella says:

    The current cyber security philosophy is that of a counter-punch boxer: punch back after he swings at you. It works great if he doesn’t hit you first. The solution is whitelisting. Only give what is expected to run the ability to execute. Everything else, anti-virus, anti-spyware, scripts and bots, gets looked at only after the punch is made. By then the damage is done. The current philosophy only works if the hackers tell us ahead of time they are going to attack so we can update our defenses. With a good whitelist program, all the attack vectors are protected. Then we only need to trust the administrator who has the password to allow updates or changes. This topic was presented at the ICSJWG Spring 2011 conference in a session called “A Paradigm Shift in Cyber Security”.
    BPella, Invensys CISP
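
Picking up on item (d) in the first comment, the filesystem-inventory idea can be pictured as a periodic sweep that hashes the software it finds and flags anything that is not on the whitelist. The sketch below is only an illustration of that concept; the scan location, file types, and whitelist format are assumptions, not how any particular product works.

    # Illustrative audit sweep: hash every executable under a directory and flag
    # anything whose hash is not in the whitelist built earlier.
    import hashlib
    import json
    from pathlib import Path

    SCAN_ROOT = Path("C:/Program Files")    # assumed scan location
    EXECUTABLE_SUFFIXES = {".exe", ".dll"}  # assumed file types of interest

    def sha256_of(path):
        """Return the SHA-256 digest of the file at path."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def inventory(whitelist_file="whitelist.json"):
        """Return (path, hash) pairs found on disk whose hash is not whitelisted."""
        with open(whitelist_file) as f:
            allowed_hashes = set(json.load(f).values())
        findings = []
        for path in SCAN_ROOT.rglob("*"):
            if path.is_file() and path.suffix.lower() in EXECUTABLE_SUFFIXES:
                file_hash = sha256_of(path)
                if file_hash not in allowed_hashes:
                    findings.append((str(path), file_hash))
        return findings

    for found_path, found_hash in inventory():
        print(f"Off-whitelist software found: {found_path} ({found_hash[:12]}...)")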
