As an information systems consultant who has worked with educational technology and security for nearly two decades, I have seen a lot of security breaches. I have seen students change their own or other students’ grades for a price, publicly post all of the teachers’ private information including Social Security numbers, break into other students’ accounts to frame schoolyard foes, and download files that forced a complete shutdown of a large metropolitan school district’s network and systems due to the subsequent computer virus and worm activity. I’ve seen trusted WAN administrators set up unprotected personal servers on a school network and open them to the Internet, all without the knowledge of the school system, with the predictable result. I’ve even seen security controls become the attack vector, such as a compromised anti-virus update server saturating an Internet connection as a newly commissioned child pornography server.
However, I don’t believe there has ever been a more urgent time to deploy a fully meshed security infrastructure to combat malware and technology vulnerabilities than today. At the core of this technology infrastructure must be a strong set of access and content filtering controls, fully patched systems, and an active engine at the core of the network and on all network nodes detecting and preventing intrusion. Why is this?
There are two primary reasons. First, the need for a functional and increasingly capable technology infrastructure has never been greater. This means more powerful systems, vast storage capabilities, and high-speed network connections across the organization. Second, the attacks against that infrastructure have never been more complex, more successful, and perhaps most importantly, more profitable.
Why Is It a Problem?
Unfortunately, schools face an even more difficult challenge than businesses when it comes to securing their infrastructure. Compared to businesses, most schools have multiple users per computer workstation, have fewer information technology staff per computer, run more applications, and many choose to, or are required by law to, allow public use of their technology infrastructure as well. Also, in business you can require employee computer security training and dismiss individuals who don’t comply with written information security policies. It is far more challenging to take similar action with students, or even to trace an attack on these often less-controlled networks to its source.
With educational technology, the joke used to be that if you had a complicated computer problem, you needed to get a student to help you out. Today, students and even trained technology professionals can be easily misled by a skillful blend of exploited vulnerabilities, realistic-looking deceptions, and a lack of basic security at the foundations of many critical systems, including the Internet as a whole. A simple “phishing” attack can lure even experienced users to websites that quickly detect the operating system and web browser and target attacks specifically for that system. Once a system is compromised, it becomes a marketable asset in a black economy, providing services to email spammers, piracy criminals, and illicit pornography networks.
One of the biggest challenges today is that because the Internet is a global network, it makes it possible for malevolent and profiteering individuals world-wide to take advantage of anyone with a computer. As you can imagine, in poorer or higher-crime countries this can be an irresistible lure. In an article entitled “The Cybercrime Economy”, Thomas Claburn states that “an IT graduate in Romania might be able to earn $400 per month legitimately, compared with several thousand per month in the cybercrime economy. And I've spoken with security researchers who suggest the difference in pay between being a security researcher and a security exploiter differs by a factor of 10 quite often.”
In the past year or so, several prominent technology publications have released articles that state how security has changed on the desktop. These include the following headlines from respected publications: “New threats could make traditional antivirus tools ineffective”, “Is Desktop Antivirus Dead? Analysts say signature-based checking can no longer keep up with flood of new viruses”, and “Signature-based antivirus is dead: Get over it”.
The message is plain. The methods used in the past to protect us from attack have largely been rendered ineffectual by determined criminals. Attacks have become multi-faceted, malicious code has become polymorphic and metamorphic, changing itself each time it moves from system to system, organized crime has made substantial investments in their attack infrastructure, and social application attacks have become indistinguishable from legitimate emails, instant messages, and web sites.
Does this mean we should stop using anti-virus products? Not at all, but we should realize that the days when firewalls and anti-virus software were the lone effective methods of network and system security are long gone. Attempting to “blacklist” each individual incidence of malware has become all but impossible. Microsoft recently cited the case of a polymorphic worm it had monitored changing form 571 times in a single day to evade detection. The anti-virus vendors cannot propagate updates fast enough to keep up with this.
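To see why per-sample blacklisting breaks down, consider a minimal sketch of how signature checking works. The payloads and the blacklist here are entirely hypothetical; real products hash files in far more sophisticated ways, but the weakness is the same: the signature identifies one exact sequence of bytes, so a variant that changes even a single byte gets a new signature.

```python
import hashlib

def signature(sample: bytes) -> str:
    """A file 'signature' of the simplest kind: a hash of its exact bytes."""
    return hashlib.sha256(sample).hexdigest()

# Hypothetical blacklist containing the signature of one known sample.
blacklist = {signature(b"MALICIOUS PAYLOAD v1")}

def is_blocked(sample: bytes) -> bool:
    """Block a sample only if its signature is already on the blacklist."""
    return signature(sample) in blacklist

# The original sample is caught...
print(is_blocked(b"MALICIOUS PAYLOAD v1"))       # True

# ...but a trivially mutated copy of the same code slips through.
# A worm mutating itself hundreds of times a day outruns any update cycle.
print(is_blocked(b"MALICIOUS PAYLOAD v1\x90"))   # False
```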
In spite of all of this, I have recently heard many technology directors who in the past balanced their spending on technology infrastructure and related security systems now state that they are pulling back on security in the short term due to budget shortfalls. Although I applaud their efforts to be fiscally responsible, I am afraid that ill-advised cuts in the wrong places could leave them with very serious security consequences within their newly built or expanded communication and storage networks.
Solutions to the Problem
There are really only a few effective ways to stop the current crop of malware today. Obviously, keeping your server and host system patches current is critical. Unfortunately, it is all too common for patches to cause application problems, making it extremely challenging to test and deploy patches quickly.
This dilemma often leads to two schools of thought, both of which have flaws. The first school of thought is to “lock everything down and don’t patch until the next holiday break”. Unfortunately, this often leads to systems which are more vulnerable over time, users who complain about systems that are so restrictive that they impede normal activities, and finally compromised systems which misuse network resources by propagating spam or malware, or which cause users to unintentionally give away sensitive information to the attackers.
The second school of thought is “immediately patch every system when the patches come out”. The problems with this method are two-fold. First, I have witnessed first-hand a patch disabling a critical service, requiring information technology staff to walk computer-to-computer over a period of days to repair the problem. Second, without a very controlled patching process, it’s impossible to know which systems successfully deployed the patch and mitigated the vulnerability.
Finally, a challenge that no patching methodology can solve is that “zero-day” exploits are now increasingly common, meaning that malicious code is often released before the patch that would protect against the attack even exists.
One new and innovative way to block the spread of malware is to securely image the hard drives of each system, cataloging each piece of software that is allowed to run, and making a lengthy, distributed database of the signatures of each individual piece of software. This method is called “whitelisting”. This promising technology works well when you have complete control over both the infrastructure and all nodes that are allowed to operate on it. Unfortunately, most educational environments don’t have the luxury of this kind of control over the infrastructure and all systems using it.
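The whitelisting approach described above can be sketched in a few lines. The “catalog” here is a stand-in for the distributed signature database built while imaging each system; the program contents are placeholder bytes, not real software.

```python
import hashlib

def fingerprint(program: bytes) -> str:
    """Hash used to identify an individual piece of software."""
    return hashlib.sha256(program).hexdigest()

# Hypothetical catalog built while securely imaging a known-good system;
# a real deployment would hold fingerprints for every approved executable.
approved_software = [
    b"bytes of the district's approved gradebook program",
    b"bytes of the standard word processor",
]
whitelist = {fingerprint(program) for program in approved_software}

def may_execute(program: bytes) -> bool:
    """Unlike blacklisting, deny by default: run only cataloged software."""
    return fingerprint(program) in whitelist

print(may_execute(approved_software[0]))                      # True
print(may_execute(b"downloader the catalog has never seen"))  # False
```

The deny-by-default design is why whitelisting only works when you fully control the environment: any software the catalog has never seen, including a visitor’s legitimate laptop application, is refused.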
This leads us to the most commonly deployed technology in public environments that has a relatively high degree of success in detecting and stopping attacks against networks and systems. This technology is called intrusion prevention. Intrusion prevention isn’t a single technology. It’s a suite of technologies that operate on the network and on the hosts to look for anything that shouldn’t happen and stop it.
Although intrusion prevention became popular earlier this decade, its deployment has been limited mostly to relatively large enterprise businesses and large educational institutions. For those with more restricted IT staffing or budget allocations, its deployment has been limited or completely non-existent. Fortunately, the cost of deployment has dropped considerably since the technology’s debut, and the reduced complexity of the systems has made them much more approachable.
One form of intrusion prevention that has had some success is called statistical or behavioral intrusion prevention. It operates much like the immune system of the human body in that it monitors operations and builds a profile of normal activity. When system or network use strays outside of this norm, alerts are sent and traffic can be restricted. The disadvantage to this method is that it can trigger corrective measures during unusual but authorized activities.
Another form of intrusion detection and prevention harnesses the built-in abilities of servers, network appliances such as firewalls, and user workstations to track attacks, correlate security events, and report them back to the management console for either manual action or automated prevention by network control points. The greatest challenge with this method is that the initial configuration can be complex and the cost of the solutions can be high. The good news is that this area is becoming increasingly competitive and new, more cost effective solutions are coming out.
Often one of the most beneficial and cost-effective methods to control attacks is to keep a list of all known vulnerabilities and how they are exploited. This is called signature intrusion prevention. Unlike signature anti-virus systems tracking malware file signatures, these solutions track “exploit vectors” or the methods by which vulnerabilities are exploited.
Vulnerability exploits are specific sequences of network packets designed to take advantage of bugs that cause software or hardware to malfunction and grant access it normally wouldn’t allow. This lets the bad guys reach the things network administrators don’t want them to.
Although exploits and vulnerabilities are discovered often, keeping track of these exploit vectors is considerably easier than trying to keep track of signatures of all known malware in all its (polymorphic) forms. By utilizing a database of these exploits and monitoring traffic on a network or host, any time a known attack sequence is detected on the network, it can be stopped before the attack can do damage.
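The point about tracking exploit vectors rather than malware files can be illustrated with a toy inspector. The two signature patterns below are invented for illustration (an oversized login field suggesting a buffer overflow attempt, and a directory-traversal string); real systems use large, professionally maintained signature sets.

```python
import re

# Hypothetical signatures describing HOW a vulnerability is attacked
# (the "exploit vector"), not the hash of any particular malware file.
signatures = {
    "oversized-login-field": re.compile(rb"USER\s+.{256,}"),
    "directory-traversal":   re.compile(rb"\.\./\.\./"),
}

def inspect(payload: bytes) -> list[str]:
    """Return the names of any known attack patterns in a packet payload."""
    return [name for name, pattern in signatures.items()
            if pattern.search(payload)]

print(inspect(b"USER alice"))             # []
print(inspect(b"USER " + b"A" * 300))     # ['oversized-login-field']
print(inspect(b"GET /../../etc/passwd"))  # ['directory-traversal']
```

Because every polymorphic variant of a worm must still deliver the same oversized field to trigger the same bug, one exploit-vector signature can stop malware that would need thousands of file signatures to blacklist.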
Although attacks can be adjusted to bypass intrusion prevention engines, it is typically more difficult and less common to morph exploit attacks than to morph malware or create new phishing web sites.
The best intrusion prevention systems use a combination of methods on networks and host systems to maximize the number of attacks they can detect and stop, and to minimize the number of false alerts. Because of the potential for erroneously alerting on good network traffic, or for missing bad traffic, all intrusion prevention systems require a period of tuning before they can be fully effective.
The good news is that a properly implemented intrusion prevention system is both highly effective and rarely alerts on traffic that should be allowed. A well designed and implemented system also has the side effect of giving the systems administrator some breathing room to test new patches before deploying them, without opening their systems to the same level of risk they would face without intrusion prevention.
Anti-virus solution vendors have taken this new security reality to heart and most vendors now have software security suites which include basic host-based intrusion detection and prevention along with anti-virus, anti-spyware, and host-based firewalls. Network- and host-based intrusion prevention systems, when properly used in combination with network access controls, content filtering, software patching, and malware controls are an effective method to securing infrastructure and hosts from attack.
David R. Bailey, Senior Engineer & Security Consultant, is a 20-year veteran information technology professional with a broad computer security and network technology background. He regularly provides technology and information security consulting services to government and educational organizations. He conducts risk assessments and heads management policy development groups. He has developed, managed, and led the planning, deployment, training, and post-project support of major IT initiatives. David is the published author of technical and security standards documents utilized by government agencies and educational institutions. He is an expert communicator who presents complex information to both technical and non-technical audiences. David currently works for Data Networks, a network and systems integrator based in the mid-Atlantic and Southeast coastal states.
Organisation for Economic Co-Operation and Development. (2002) “University Education Produces Measurably High Returns for Students, According to OECD.” Article states that there are five students per computer in the United States.
Electronic School. (1999) “Technology's Real Costs.” Article states that schools have roughly half the computer support staff as businesses.
Constantin, Lucian. (2008) Softpedia. “Patch for the Internet Core Flaw Also Flawed”.
Milletary, Jason. (2006) CERT. “Technical Trends in Phishing Attacks”.
Claburn, Thomas. (2008) Information Week. “The Cybercrime Economy”.
Thomson, Iain. (2008) vnunet.com. “Microsoft claims success with Vista security”.