August 17, 2011
In a recent publication, researchers at Princeton University described an attack, labeled a ‘Cold Boot Attack’, against DRAM system memory. The attack upends the traditional view of DRAM volatility and shows that the contents of supposedly ‘volatile’ RAM can be recovered even after the power has been turned off.
Cold boot attacks present a serious threat to cryptographic key material that may be retained in system memory, and the recovered material can later be used for both forensic and malicious purposes.
Booting is the process of starting a computer and loading its operating system. The term “boot” comes from “bootstrapping”, which on older computers was a long and involved manual process. As electronic devices, computers are helpless without some sort of program running. The boot process we see on modern PCs and Macs is carried out by firmware called the “BIOS” together with the master boot record of an operating system stored on a disk drive, which the system looks for by rote every time it is powered up. The BIOS does things like enumerate the peripherals, set up interrupt vector handlers, and enable or disable the A20 line on some systems.
DRAM stands for Dynamic Random Access Memory. DRAM uses an array of capacitors to store information, representing each bit as a charge. The memory controller refreshes these cells every few milliseconds to retain the information. When power is removed, the charge in each capacitor leaks toward its ground state, eventually making the data unrecoverable. Depending on how a cell is wired, its ground state may read as either a zero (0) or a one (1).
A cold boot attack (also called a platform reset attack) is a type of side-channel attack in which an attacker with physical access to a running computer performs a cold reboot, restarting the machine from the off state, and then retrieves encryption keys left behind in memory.
The attacker must reach the memory within roughly 35 seconds of power-off, and in some cases within as little as 2.5 seconds. In that window he must seize the computer, open it, access the DRAM, and cool it. The owner of the PC may remain completely unaware of the attack, so the primary defense is to restrict physical access to the system.
DRAM Attack (Cold Boot Attack)
It involves the use of cooling agents that slow the decay of DRAM so that its contents can be reassembled. This data often contains encryption keys, such as the keys used by full disk encryption (FDE) products. The required speed of the attack is not dictated by any data-security mechanism, since the attacker has unrestricted access to the stolen machine; it is dictated by the time between power-off and the decay of the DRAM contents.
If the machine is stolen while powered on, there is no need to break the encryption at all, because the data can be accessed directly. A machine in standby mode is also an attractive target, since sensitive data may remain unencrypted in memory. The window available to the attacker can be as short as 2.5 seconds, or as long as 35 seconds, before memory contents are lost. By rapidly dropping the temperature of the DRAM during this window, the attacker extends retention long enough to capture the contents of memory. Newer memory technologies decay faster than older ones.
Password strings retrieved from a memory dump file
A user’s Web browsing history present in a memory dump
Launching an Attack
Step 1: Powering Off the Machine
The simplest attack is to reboot the machine and configure the BIOS to boot a memory-imaging tool. A cold boot will result in little or no decay, depending on the memory’s retention time. Restarting the system in this way denies the operating system and applications any chance to scrub memory before shutting down.
Step 2: Fetching the Contents of the RAM
Place the RAM in another machine and start it, or keep the RAM in the same machine, attach a bootable USB flash drive to a USB port, and reboot the system. The boot priority must be set to the external USB drive rather than the internal hard drive; otherwise the system will simply boot back into its native operating system. The memory-imaging tool (or ‘scraper’) on the USB drive then executes and copies the memory dump from RAM onto the USB drive.
Step 3: Making the Memory Dump Readable
Once the memory dump has been captured to the USB drive, it can be analyzed. Data can be read straight out of the dump, either by copying it to a flat file using ‘dd’ or by examining it in place.
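As a minimal illustration of that analysis step, the following Python sketch scans a raw dump file for runs of printable ASCII, the same basic technique used to pull password strings and browsing history out of a captured image. The file name, minimum run length, and whole-file read are assumptions made for brevity; a real tool would stream the image in chunks.

```python
import re
import sys

MIN_LEN = 8  # only report runs of at least 8 printable characters

def extract_strings(dump_path, min_len=MIN_LEN):
    """Yield (offset, text) for every run of printable ASCII in the dump."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    with open(dump_path, "rb") as f:
        data = f.read()  # a sketch: real images are read in chunks
    for match in pattern.finditer(data):
        yield match.start(), match.group().decode("ascii")

if __name__ == "__main__":
    # Usage: python strings_from_dump.py memdump.img
    for offset, text in extract_strings(sys.argv[1]):
        print(f"0x{offset:08x}: {text}")
```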
Use of Memory Acquisition Hardware & Software
A cold boot attack against a suspect computer system should only be carried out when no other method of acquiring the system’s memory is possible. Even when a cold boot attack is the best available method for acquiring a suspect system’s memory, several issues remain to be considered, all relating to the use and exploitation of memory-acquisition-specific hardware and software.
Defenses for Software-Based Full Disk Encryption
There are many possible defenses against this attack, but most of them fall short. A few, however, are worth discussing here.
1) Change the location of keys during runtime. The DRAM contents are effectively frozen at the moment of the attack, so a key-search algorithm can still locate the key despite its periodic movement. This can make recovery more difficult, but it does not defeat the attack in principle.
2) Use multiple keys for different parts of the disk. This prevents the entire disk from being exposed in a single attack, since recovering multiple keys requires more authentications in the same attack instance. Although it is a sensible precaution, there is no guarantee that the most sensitive data will not reside on the exposed portion. An additional layer of encryption should be used for top-secret data.
3) Fragment keys into discontiguous pieces. This increases the difficulty of recovering the key and may delay an attack, but it will not prevent one, and it slows legitimate decryption as well. Performance degradation is the number one complaint against this technique, which limits its usefulness in practice. (A minimal sketch of key fragmentation appears after this list.)
4) Use multiple keys in sequence for decryption. This additional layer of difficulty would delay an attack, but all of the keys remain accessible to the attacker. Key-search algorithms do not depend on decrypted plaintext to check the integrity of a key, so the attacker can recover every key needed to decrypt the data correctly. The process may be slower, but the attack is not prevented; the largest available key lengths should still be used.
5) Use longer encryption keys. As time passes, DRAM loses more of its data, making the searchable key space larger and the correct key harder for the attacker to reconstruct. However, degradation proceeds at the same rate regardless of key length, so a shorter key remains just as recoverable over the same period and the benefit of a longer key is limited.
6) A Trusted Platform Module (TPM) combined with full disk encryption (FDE) offers additional protection in some alternative attack scenarios, but it usually does not help here: the TPM does not itself perform the drive decryption, so the key must still be copied into memory for decryption.
7) Clear memory at boot time, before any operating system is loaded. This prevents an attacker from simply rebooting a stolen machine, but an attacker can still move the DRAM modules to another machine.
8) Block all accessible ports. This has roughly the same effect as clearing memory at boot time, and it is recommended by many software vendors.
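To make defense (3) concrete, here is a minimal Python sketch, not taken from any product, of splitting a key into XOR shares that could be stored at scattered memory locations and recombined only at the moment of use. The share count and key size are illustrative assumptions.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int = 4) -> list[bytes]:
    """Split key into n XOR shares; all n are needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    final = reduce(xor_bytes, shares, key)  # key XOR share1 XOR ... XOR share(n-1)
    return shares + [final]

def rebuild_key(shares: list[bytes]) -> bytes:
    """Recombine the shares just long enough to use the key."""
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    dek = os.urandom(32)          # stand-in for an AES-256 disk key
    pieces = split_key(dek)
    assert rebuild_key(pieces) == dek
```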
Eliminate DRAM Attacks with Hardware-Based Full Disk Encryption
Hardware-based full disk encryption is now embedded in some hard drives specifically to eliminate DRAM attacks. This approach is more robust than software-based encryption: the data encryption keys never enter the computer’s main memory, so they are not exposed to this kind of attack.
1) Location of the data encryption keys.
With hardware-based encryption, the encryption keys live in a self-contained computing environment on the drive and never enter DRAM, so the DRAM itself holds nothing worth attacking. A separate key, the Key Encryption Key (KEK), is used to access and decrypt the Data Encryption Key (DEK). The KEK itself is encrypted using a hash of the username/password or a certificate, depending on the authentication method.
Only the KEK is ever decrypted; it is used to unlock the disk drive and is not left available in DRAM.
To capture it, an attacker would have to cut power to the drive in the instant after the drive is unlocked but before the KEK is wiped from memory. That window lasts only milliseconds after software authentication, and the attack would have to happen with the computer’s owner present and willing.
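The KEK/DEK relationship described above can be sketched in a few lines of Python using the third-party ‘cryptography’ package. This is a conceptual illustration only; the parameters, password, and storage details are assumptions, not any drive vendor’s actual scheme.

```python
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_kek(password: bytes, salt: bytes) -> bytes:
    """Derive the Key Encryption Key from the user's password."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

# At provisioning time: a random DEK is generated and stored only in wrapped form.
salt = os.urandom(16)
dek = Fernet.generate_key()                       # the Data Encryption Key
kek = derive_kek(b"correct horse battery", salt)  # the Key Encryption Key
wrapped_dek = Fernet(kek).encrypt(dek)            # only this ciphertext is stored

# At unlock time: the KEK is re-derived and the DEK unwrapped; in a hardware
# design this happens inside the drive's controller and never reaches host DRAM.
unwrapped = Fernet(derive_kek(b"correct horse battery", salt)).decrypt(wrapped_dek)
assert unwrapped == dek
```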
2) Physical Challenges
The attack could be modified to target the chipset or the hard disk drive directly in search of the Data Encryption Key, but this increases the physical complexity considerably, because more time is required to reach the chipset or drive.
With chipset-based full disk encryption, the attacker would have to either remove the chipset and immerse the chips in liquid nitrogen, or keep enough liquid nitrogen on hand to submerge the whole motherboard or laptop. Both options are difficult to manage: it is impractical to detach a full-disk-encrypted drive in such a short period of time, and if the entire motherboard or drive is submerged in liquid nitrogen the hardware may be damaged beyond repair.
Storing keys in the CPU cache instead of RAM is another way to counter the cold boot attack. Unlike RAM, which is a separate device connected to the motherboard, the cache resides on the CPU die and cannot easily be extracted or read out. However, caches are difficult to control, and one must make sure the keys really stay pinned in the cache and are never written out to RAM.
DRAM retains its values for surprisingly long intervals without power or refresh, which enables a variety of attacks that can recover sensitive information, such as cryptographic keys, from memory.
There is no easy way to defeat these attacks. Software countermeasures have benefits but also drawbacks; hardware changes are possible but require time and expense. Even current trusted computing technologies cannot protect keys that are present in memory. Laptops are the most likely targets, and disk encryption on laptops does not provide perfect protection.
DRAM should therefore be treated as insecure, and sensitive data from SSL, FDE, and other security-critical applications needs to be handled with greater care. In the end, without significant architectural changes to current computers, we all remain fairly vulnerable.
There is a plethora of fuzzers available nowadays targeting everyday network protocols and file formats. These fuzzers iterate thoroughly through their target protocols and files, and they also serve as a valuable resource for stress testing.
There are two genres of fuzzers: specialized and generic (aka ‘dumb’) fuzzers. Specialized fuzzers are designed for specific targets; for example, against a range of email servers such as Microsoft Exchange, Sendmail, and qmail, a specialized SMTP fuzzer would be invaluable. Conversely, ‘dumb’ fuzzers handle arbitrary protocols and file formats and perform simple, non-protocol-aware mutations.
At times a programmer may need more customized and thorough fuzzing, for example against proprietary and untested protocols, even though dumb fuzzers can be used effectively against many common applications. It is at times like these that the significance of fuzzing frameworks becomes clear, and in this document we shall look at some of the most popular and potent fuzzing frameworks available in the public domain.
What is a Fuzzing Framework?
Whatever their language, level of abstraction, design, and orientation, fuzzing frameworks share a central goal: to provide fuzzer developers with a quick, flexible, reusable, and homogeneous development environment. A good fuzzing framework abstracts and minimizes a number of tedious tasks, such as converting captured network traffic into a framework-compatible format.
A functional framework should include automatic length calculation, e.g. for TLV (type, length, value) structures, as well as ASN.1 handling, CRC calculation, and similar algorithms. If length fields are not calculated correctly, the communication will fail before the interesting code is ever reached; if a CRC is not correctly updated, it will void all fuzzing effort.
Generating pseudo-random data, backed by a curated list of attack heuristics such as format strings and directory traversal sequences, should also be a feature of a good framework. A good framework should detect a fault as soon as the target fails to set up a connection, and a more advanced framework should allow the fuzzer to communicate directly with a debugger attached to the target.
In addition, an advanced fuzzing framework should include an interface for communicating with a metric-gathering tool. Lastly, an advanced framework maximizes code reuse by making developed components readily available for future use.
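As a toy illustration of the automatic length and checksum handling described above, here is a short Python sketch, not tied to any particular framework, that builds a TLV record and appends a CRC32 which is recomputed for every mutated value.

```python
import struct
import zlib

def build_tlv(tlv_type: int, value: bytes) -> bytes:
    """Build a type-length-value record; the length field is always recomputed."""
    return struct.pack("!BH", tlv_type, len(value)) + value

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC32 so the mutated message still passes integrity checks."""
    return payload + struct.pack("!I", zlib.crc32(payload) & 0xFFFFFFFF)

# A fuzzer mutates only the value; length and CRC are fixed up automatically.
for mutated in (b"A" * 10, b"%s%s%s%s", b"../" * 200):
    packet = frame_with_crc(build_tlv(0x01, mutated))
```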
Antiparser is a fuzz testing and fault injection API. Its goal is to provide an API that can be used to model network protocols and file formats through their component data types. Once an instance has been created, it works as a container: every data object in the container has its own properties and can therefore be saved and reused later when needed.
Although this framework is simple and handy for building simple fuzzers, it does not support complex tasks. The ratio of framework-specific code to generic code is low in antiparser, and it lacks the automated features that are important in a good framework. Its only release, version 2.0, dates from August 2005.
Dfuz was designed, actively maintained, and frequently updated to uncover a range of vulnerabilities affecting products from Microsoft, Ipswitch, and RealNetworks. Capable of running on UNIX/Linux operating systems, Dfuz exposes a custom language for developing new fuzzers. Though it is not the most advanced fuzzer, it is simple and easy to use and understand.
Dfuz comprises a handful of basic components ranging from data, functions, and lists to options, protocols, and variables. These components are used to define a set of rules that are parsed to generate and transmit data. Unlike antiparser, Dfuz is a self-contained fuzzer.
Through the custom scripting language, data can be represented in multiple ways, and multiple data definitions can be declared with a comma separator. The basic components, combined with some additional directives, make up a rule file.
Dfuz is a simple and powerful fuzzing framework with a relatively quick learning curve and fast development time. Doing fuzzer development in its own scripting language has both pros and cons: it is positive in that non-programmers can describe protocols and fuzz them, and negative in that experienced programmers cannot draw on the power and features of a mature programming language. Dfuz has good code reusability but lacks a strong set of attack heuristics.
SPIKE, the most commonly used framework, is an API for quickly and efficiently fuzzing network protocols. It has been released under a favorable license, the GNU General Public License, which enabled programmers to create SPIKEfile, a repurposed version. Spikes are basically blocks of protocol data structure, broken down into units containing both binary data and the size of the block; the framework strengthens this abstraction with automatic size calculation.
Using SPIKE, programmers can design and model arbitrarily complex protocols. SPIKE’s documentation, however, is very scattered, which has led to confusion among researchers. SPIKE is basically a Unix fuzzer, though it can run on Windows under Cygwin. Even a very simple change to the framework requires recompilation, which is a notable drawback; code reuse is a manual task, and new elements cannot be defined simply and globally across the framework.
Despite these shortcomings, SPIKE is an effective fuzzer and includes many useful utilities, such as a proxy for fuzzing communications. Its block-based technique has been adopted by a number of later frameworks, making it quite popular.
Peach, released by IOActive in 2004, is a cross-platform framework with one of the most flexible architectures and strong code reuse. Its basic components include generators, transformers, protocols, publishers, and groups, each with a particular function.
Generators produce data, from simple strings to complex binary messages; combining generators simplifies complex data types, and the abstraction allows code reuse. Transformers change data; a transformer can be combined with other transformers and bound to a generator, and once implemented it also supports code reuse. A group steps through the values produced by the one or more generators it contains. A script object, serving as an additional component, reduces code redundancy.
A drawback is that the framework is not especially intuitive: it takes longer to develop a new fuzzer than with other frameworks. The developer first focuses on the individual subcomponents and then combines them into a complete fuzzer, which pays off later through code reuse. With Python properly installed, Peach can run in almost any environment. Though Peach is conceptually advanced, its documentation is lacking.
General Purpose Fuzzer
GPF, an open-source fuzzer designed for Unix environments, is a generic fuzzer built to generate an effectively unlimited number of mutations. Its basic advantage is that it takes little time to get a fuzzer up and running. GPF’s basic modes include PureFuzz, Convert, GPF (the main mode), PatternFuzz, and SuperGPF.
PureFuzz is as easy to use as attaching a device to a socket. Convert translates libpcap capture files, as generated by Ethereal or Wireshark, into GPF files. The GPF main mode controls a number of protocol attacks. PatternFuzz is the most distinctive mode because it automatically tokenizes and fuzzes protocols. SuperGPF detects whether a socket endpoint has been targeted for fuzzing, but it fuzzes ASCII protocols only.
GPF has a significant learning curve and is complex to work with; on the other hand, it is extensible and flexible. Its automatic processing and fuzzing give it an edge over other frameworks.
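To show the kind of capture-and-mutate loop that a generic fuzzer such as GPF automates, here is a minimal Python sketch; it is not GPF’s actual code, and the target address and seed request are invented for the example.

```python
import random
import socket

TARGET = ("192.0.2.10", 8080)            # illustrative target, not a real host
SEED_REQUEST = b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"

def mutate(data: bytes, rate: float = 0.05) -> bytes:
    """Flip roughly `rate` of the bytes in the captured request."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

def fuzz_once(payload: bytes) -> None:
    with socket.create_connection(TARGET, timeout=2) as s:
        s.sendall(payload)
        try:
            s.recv(4096)                  # a hang or reset here hints at a fault
        except socket.timeout:
            print("no response -- target may have crashed or stalled")

if __name__ == "__main__":
    for _ in range(100):
        fuzz_once(mutate(SEED_REQUEST))
```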
Autodafé, the next evolution of SPIKE, uses a block-based approach to fuzz both network protocols and file formats. Its main goal is to reduce the size and complexity of the test space by focusing on the variables most likely to produce results. Once a protocol is described, the framework can iterate through the different mutations, for instance across HTTP requests.
An interesting technique, called markers, weights the importance of each fuzz variable: since there can be hundreds of fuzz variables, multiplying the number of test cases, the important test cases containing the important variables have to be sorted out first. Autodafé includes a debugger named adbg for setting breakpoints, and it is the first framework to explicitly include one. Additional tools, PDML2AD, TXT2AD, and ADC, make development quick and efficient.
Autodafé shares many of SPIKE’s pros and cons. Its debugging feature sets it apart, but its lack of Windows support and the need to recompile after even simple modifications may still count against it.
The web browser is one of the most used, and most exploited, applications. Its heavy use has made it a risky and vulnerable target in recent years, and attackers have adopted it as a primary path for carrying out their malicious aims.
Usually, different exploit scripts are used for each browser. These exploits, however, are easy to detect by matching exploit patterns with regular expressions and heuristic signature engines. Consequently, attackers have adopted numerous techniques to bypass these detection methods, and vendors of signature-based protection systems have in turn focused on detecting the obfuscated exploit variants in this never-ending cat-and-mouse game.
In response, attackers have resorted to creating exploits that morph uniquely in every instance, making it impossible for signature-based protection engines to identify this holy grail of obfuscated attacks. Techniques that alter the payload on each iteration of a request are commonly referred to as oligomorphic, polymorphic, or metamorphic manipulation.
Code morphing is one technique for protecting software from reverse engineering, analysis, modification, and cracking. It breaks the protected code into individual processor instructions and replaces them with others that produce the same result, obfuscating the code at the intermediate level.
i) Malware Morphing
The concept of malware morphing has been in use for many years. Malware authors and anti-virus researchers have long catalogued the methods used to obfuscate and hide malware code with each infection, and these techniques have been a source of inspiration for web browser exploit developers. In the early stages, however, browser exploits remained easy for traditional signature-based protection engines to handle, for the following reasons:
- There was no organizational structure or financial backing to develop obfuscation techniques that would be effective against modern security solutions.
- Patches were generally available when an exploit appeared.
- The methods used for attracting victims to malicious websites were relatively unsophisticated and static in nature.
The most common morphing classes found in malware development include the following:
Oligomorphic malware uses multiple decryption engines instead of just one, randomly building an engine from several predefined alternatives with each malware iteration. The first malware to use this technique was the Whale virus (August 1990).
Polymorphic malware uses a dynamic build process to insert noise instructions, or instructions that load unused registers with arbitrary values, and uses random keys to encrypt the constant part of the code.
Metamorphic malware carries a copy of its own source code; whenever it finds a compiler, it recompiles itself after adding or removing junk code from that source.
ii) Exploit Morphing
Usually, the more widely deployed and consistent a web browser exploit is, the earlier protection against it is developed and deployed. Morphing the exploit code bypasses these limiting factors of web browser exploitation.
Recent years have seen a dramatic rise in web browser attacks. Attackers now dynamically alter the obfuscated exploit each time a potential victim visits the compromised page, creating a unique exploit with each request. This is called x-morphic exploitation.
These principles are now being applied to commercial exploit development and incorporated into web browser attacks because of the browser’s susceptibility to content-level manipulation. With x-morphic exploitation, the code that morphs the exploit is never passed to the victim host, so there is no way to identify the exploitation by singling out the x-morphic engine. This renders useless the signature-based protection engines, designed to detect polymorphic and metamorphic generator code, that make up the antivirus market.
An ideal condition for x-morphic exploitation would include:
a) Potentially different exploit code for every user’s browser.
b) Subscription-based management services for the exploits.
c) Exploits that are impervious to signature-based anti-virus software.
A web server is considered malicious if it answers a victim’s HTTP GET or HTTP POST request with a page containing exploit code. Traditionally, a single static exploit page is served; if a signature exists that detects this material, potential victims are protected, and the longer the malicious web server keeps serving the same exploit material, the better that protection becomes. In that model, the attacker loses the ability to control and disguise the attack.
Attackers have developed a solution: behold the “x-morphic engine”, designed to serve highly obfuscated, one-of-a-kind web browser exploits, with each page uniquely rendered for a potential victim. The concept behind an x-morphic engine is simple, even if the individual techniques and technologies are not.
There are two core elements to the x-morphic engine:
- Exploit Morpher
It focuses on manipulating a stock web browser exploit: reordering it, padding it, swapping shellcode, changing script components, and altering the exploit code using oligomorphic and polymorphic principles.
The second core element consists of engines at the network layer, content delivery layer, or application content layer that take the morphed exploit code and wrap it in one or more layers of obfuscation. Each layer has its own effect and provides a metamorphic aspect. The x-morphic engine may also include additional exploits stored on the web server.
A number of obfuscation techniques are used once integrated with an automation system. They can be classified as follows:
i) Network Layer
The intent of obfuscating at the network layer is to bypass network-centric protection systems: intrusion detection systems (IDSs), intrusion prevention systems (IPSs), and filtering proxies. Packet fragmentation is the primary tool: the original packets are broken into smaller packets and the fragmented data is reordered or altered. Some common techniques include:
Simple fragmentation:
‘AT’ ‘TAC’ ‘K’ → ATTACK
Out-of-sequence packets:
‘C’ ‘T’ ‘K’ ‘A’ ‘A’ ‘T’ → ATTACK
Overlapping packets:
‘AT’ ‘TAC’ ‘ACK’ ‘K’ → ATTACK
Overwriting redundant packets:
‘AT’ ‘QWE’ ‘TAC’ ‘RTY’ ‘ACK’ ‘K’ → ATTACK
Delayed packets:
ATT (long pause) ACK → ATTACK
ii) Content Delivery Layer
HTTP is the primary delivery protocol, and an attacker may obfuscate it. To identify exploit material, defenses must properly reassemble the stream and parse whatever encoding techniques are in use, which is precisely what drives attackers to adopt techniques such as:
1) HTTPS encryption over Secure Sockets Layer (SSL) and Transport Layer Security (TLS).
2) HTTP-supported compression.
3) Multiple character set encoding.
4) Transfer encoding, such as “chunked” and “token-extension”.
5) Chaffing content with characters.
iii) Application Content Layer
This layer concerns the way the application rebuilds, compiles, or executes HTML content. Some of the most popular application content layer obfuscation techniques are:
1) Splitting up the source files and dynamically rebuilding the exploit page (a minimal sketch follows this list).
2) Execution of embedded scripts to “unpack” and execute the exploit.
3) Using file formats which have their own scripting languages and can be rendered inside the Web browser.
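As a harmless illustration of technique 1 above, the following Python sketch shows how a server-side engine might emit a benign script payload split into randomly sized fragments with a randomized variable name on every request, so that no two responses are byte-identical. The payload and helper names are invented for this example.

```python
import random
import string

PAYLOAD = 'document.title = "hello";'   # a benign stand-in for the real content

def random_name(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length))

def morph(payload: str) -> str:
    """Emit the payload split into randomly sized string fragments that the
    page reassembles and evaluates at load time; every call produces a
    different variable name and a different split, so no two responses match."""
    var = random_name()
    pieces, i = [], 0
    while i < len(payload):
        step = random.randint(1, 4)
        pieces.append(payload[i:i + step])
        i += step
    joined = " + ".join(repr(p) for p in pieces)
    return f"var {var} = {joined}; eval({var});"

if __name__ == "__main__":
    print(morph(PAYLOAD))   # a different obfuscated page body on every request
```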
How to Deliver Malicious Content?
To increase the odds of successful exploitation and malware installation, an attacker must get many users to request a page from the compromised web server. Some common methods used by attackers are:
1) Banner advertising
2) Search page-rank
3) Expired domains
4) Domain Name Server (DNS) hijacking
5) Forum posting
6) Tickers and counters
7) 404 page errors
8) Server-side user-agent checks
Personalizing the Attack
X-morphic engines further obfuscate their attacks by taking advantage of advanced personalization techniques. Personalized attacks deceive visitors by creating a more dynamic “user experience” on the site, while bypassing many security systems.
Strategies that x-morphic engine developers will likely adopt as part of their personalized attack delivery platform include the following:
1) Using the source IP address of each request, the attacker can ensure that an exploit is served to a given address only once, preventing subsequent replay-based analysis.
2) Implementing a time-based approach to limit how long the engine is exposed.
3) Using browser-type information from the request, the attacker can ensure that only exploits relevant to that browser are served, and can keep malicious content away from search engines and web crawlers.
4) Leveraging IP address information, the attacker can withhold malicious content from particular IP addresses altogether.
5) One-time URLs will likely be used to ensure that exploit code is served just once.
Web browser exploit platforms will be vital to infection success for organizations that rely on malware installation. Fortunately, legitimate security researchers and organizations have been developing better preventative means of fighting these sophisticated attacks. Recent advances in anomaly detection and intrusion prevention systems, combined with more behavior-based techniques, are helping organizations identify suspicious activity earlier. With the arrival of x-morphic exploitation, the exploit and security worlds have entered a new phase, one that has rendered the trusted signature-based antivirus model largely obsolete.
December 17, 2010
A “smart grid” refers to the traditional electric power grid updated with modern information technology equipment and know-how. It comprises digitized devices and the industrial facilities in the energy sector that such devices help operate: electrical plants, electrical substations, utility towers, relays and transformers, nuclear power plants, and oil refineries.
A smart grid pertains to all the facets of the power grid—generation at power plants, distribution and transmission along electrical lines, and delivery and consumption at the customer homes or businesses of a utility. It features intelligent monitoring of the status and amounts of the electricity flowing throughout the grid. A smart grid employs such devices as sensors, programmable logic controllers, field controllers, distributed control systems, emission controls, intelligent electronic devices, and remote terminal units.
For the consumer, a smart grid typically means, rather like with a person’s Internet provider, a two-way digital interaction between the utility and his home and home appliances. Usually this includes smart meters that allow quick and precise measuring and information sharing about the power and electrical supply. This digitized interaction is supposed to allow easy, real-time adjustment of power, heating, and cooling devices, and appliances. It also raises privacy concerns, as smart meters and other tools could provide a utility, or a malicious observer, with access to much more personal and financial data on a consumer.
A smart grid has various purposes: increase the reliability of power supplies, reduce waste of energy, cut costs, enhance consumer choice and flexibility, and permit the merging into the traditional power grid of alternative energy sources. Smart grids can continuously monitor crucial system components and keep track of energy use. They are supposed to diagnose, and to flexibly and precisely respond, to surges in power demand and other grid variables.
Regional and local utilities manage the U.S. electrical grid. The grid’s thousands of miles of transmission lines, substations, and power generation facilities make up three distinct operating networks in the Western and Eastern states, and in Texas.
Due to growing energy and environmental concerns, smart grids have become a subject of growing interest. The financial resources being invested in them are substantial. A year ago, the size of the U.S. smart grid market was about $21 billion. By 2014, it is estimated it will grow to $43 billion. World-wide, the smart grid market in 2009 was $69 billion. By 2014, fueled by large expenditures in East Asia, it should reach about $170 billion. In the U.S., a chunk of the federal stimulus spending in 2009-10, some $3.4 billion, was directed to investment in, and modernization, of smart grids.
The cyber security market for smart grids is also growing fast, about one-third a year. It is thought security-related expenditures on smart grids will reach $4 billion annually by 2013. Major corporate players in this field include General Electric, IBM, Lockheed, and Raytheon in the U.S., and Toshiba and Kyocera overseas.
Cyber security in infrastructure is also a growing concern, because smart grids have many vulnerabilities. Richard A. Clarke, the former federal National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism, has stated that a cyber attack aimed at energy infrastructure “could disable trains all over the country and it could blow up pipelines. It could cause blackouts and damage electrical power grids…It could wipe out and confuse financial records… It could do things like disrupt traffic in urban areas by knocking out control computers. It could…wipe out medical records.”
An obvious vulnerability is the physical infrastructure of electricity grids. The long stretches of overhead transmission lines could make inviting targets for terrorists. In fact, in recent years, terrorists overseas have launched many attacks against the physical infrastructure of power systems. The placement of lines underground would better protect the lines. At the same time, high construction costs render this option impractical. Video surveillance of transmission lines is expected to play a growing role in protecting these valuable assets.
A growing concern is the threat of cyber attacks on smart electrical grids. This is because smart grids by their very nature are susceptible to hacks and malware. In the past, electrical installations were essentially stand-alone operations separated from the outside world. Today, they are increasingly being hooked up to, and operated by, IT devices connected to the World Wide Web.
The connection to the Internet makes them susceptible to many of the same malicious attacks that regularly occur against computer networks outside the electrical and energy sectors. One example of vulnerability is the intelligent electronic devices that control the circuit breakers in many electrical networks. A hacker could target the sensor and equipment data that such devices receive from computer networks.
A wide range of IT systems and applications in smart grids cries out for better security. Many energy facilities operate old-school mainframe computers running “tried and true” COBOL code that dates from before the Internet. When such systems were built, cyber security was not an issue and was not incorporated into their design architecture. Therefore, security features developed with the Internet in mind have not been incorporated into many of these systems.
Modern IT applications in smart grids are often full of security defects. Web apps, such as online billing applications aimed at providing utility customers more convenience and flexibility, may provide hackers with the account and credit card information of those same customers. Remotely hosted services and applications provided by power and utility companies pose similar risks. The IT departments of such organizations may have insufficient knowledge sets and trained personnel, compared to the IT departments of organizations long accustomed to the Internet, for properly configuring and maintaining the security of server- and client-side databases and software.
Modern applications, and smart grids, thrive on vastly greater amounts of data, which poses its own risks. Smart grids employ devices called “synchrophasors,” which measure and stream voltage and other data many times faster than previous devices. And such data is now “visible” over the Internet. “We’re collecting more data at more parts of the grid, in real time. It becomes more complicated to secure,” noted a NIST security consultant. “If I’m able to see that stream and understand what’s going on,” remarked the consultant, “then I’m able to remotely monitor how my attack is performing… and see in real time how the attack is working, then optimize it.”
Another new device that poses potential risks is a recloser. A recloser is an electrical device, placed in substations or atop electrical poles, that permits the flow of electricity. Facilities are outfitting reclosers with Bluetooth to allow maintenance personnel to manipulate the reclosers from afar. But because security has not been designed into recloser architectures, attackers could use Bluetooth to access and illicitly manipulate the devices.
The two-way digital communications that technologically advanced grids provide between energy suppliers and consumers are other reasons for concern. A hacker with a basic knowledge of electronics and a few hundred dollars in hardware could interfere with, and get control over, the smart meters that are essential to managing the two-way interaction. By gaining control over the devices of a large number of consumers, a malicious attack could alter the load balance of a power grid, or shut down power to a large number of users.
The sharp expansion in the installation and use of smart meters underlines this worry. In 2009-2010, the number of smart meters in the U.S. is projected to rise from 14 million to 23 million. In California alone, from 2009 to 2012, the number of smart meters is estimated to rise from about 3 million to close to 10 million.
Theoretical concerns have become practical realities, as a number of exploits involving smart grids and power complexes have taken place. Although gaining relatively little publicity, cyber attacks have already occurred across the world: on sewage treatment plants, natural gas and petroleum pipelines, nuclear power plants, hydroelectric power facilities, and electricity transmission infrastructure.
In 2009, the Wall Street Journal reported that cyber spies from China, Russia, and other nations had used the Internet to map electrical grids in the United States. Moreover, they had left behind software apps on the grids that could be activated later to disrupt parts of the electrical infrastructure. In 2008, the CIA reported, hackers disrupted the power systems of multiple cities in several, unidentified foreign countries.
A notorious attack occurred in Maroochy, Australia in 2000. Using pilfered radio gear, a disgruntled former employee of a water treatment plant wirelessly hacked into the plant’s supervisory control and data acquisition (SCADA) system. Issuing multiple radio commands, the hacker triggered the release of 800,000 liters of untreated sewage into local rivers and parks.
In 2009, in a simulated attack, technicians from the cyber security firm IOActive, Inc. designed a computer worm that could penetrate and infect interactive, wireless meters that make up part of an extensive smart grid. The worm “spread from one meter to another,” noted an IT consultant, “and then it changed the text in the LCD screen to say ‘pwned’.” Infrastructure security specialist Joe Weiss, formerly a manager with the Electric Power Research Institute, or EPRI, has compiled a database of more than 170 infrastructure cyber incidents.
A wealth of IT security organizations, such as the Computer Emergency Response Team, or CERT, exist. However, there are few organizations that deal with cyber security in electrical and other industrial infrastructure. At the same time, there is a great deal of information readily available on public infrastructure. Terrorists could gain most of the information required to mount an attack on a smart grid from public sources such as industry journals.
“The electric grid is highly dependent on computer-based control systems,” sums up House Committee on Homeland Security chairman Bennie Thompson. “These systems are increasingly connected to open networks such as the internet, exposing them to cyber risks. Any failure of our electric grid, whether intentional or unintentional, would have a significant and potentially devastating impact on our nation.”
The cyber risks that concern observers include many vulnerabilities that lead to inadvertent mishaps unrelated to malicious hackers or malware. A classic example of this was the 1999 explosion of a pipeline in Bellingham, Washington. There the computer monitoring systems failed to detect the buildup of pressure within the fuel line. The resulting explosion killed three, and the busted line spilled an ocean of gasoline into nearby creeks, resulting in $45 million of damage. A recent example was the highly publicized disruption of suspected nuclear weapons facilities in Iran via the Stuxnet worm, which was specifically designed to penetrate the Windows operating system that runs the computer systems of the nuclear plants in question.
Many inadvertent problems stem from trying to graft traditional IT security solutions onto infrastructure systems for which such solutions weren’t designed. Penetration testing, a standard tool of white hat hackers, has been known to destroy the firmware or disrupt the control systems of infrastructure facilities. Maintenance of anti-virus software on such facilities has disrupted control devices and triggered denials of service. Installation of software patches has prevented shutting off the pumps of water utilities, while software for other infrastructure cannot be patched while the facilities are in operation. Inadvertent incidents have even forced nuclear power plants to fall back on auxiliary power.
These mishaps result in part from the lack of testing of, and experience with, cyber security tools applied to infrastructure systems. At the same time there is often a “culture gap” between the employees of IT shops and those of electrical and other infrastructure facilities. The two sets of personnel are simply not yet used to working together. Another gap exists among the infrastructure industry, the IT sector, and federal government regulators. While representatives of software and computer manufacturing firms are regularly invited to government conferences on cyber security, leaders from the infrastructure sector are usually an afterthought at best or forgotten at worst.
Fortunately, despite the exploits that have occurred, malicious or inadvertent, the cyber threat to the electrical grid and other infrastructure elements is still at its early stages. This fact hopefully will allow companies and government agencies the time to take countermeasures to minimize the threat. Most of the steps that have been proposed mirror those that have been taken to better secure the IT industry against malicious attack.
An important first step is standards. The North American Electric Reliability Corp., or NERC, is a non-profit organization of industry working groups and utilities that formulate some Critical Infrastructure Protection (CIP) standards. The Federal Energy Regulatory Commission, an independent agency that regulates transmission and transport of electricity and energy commodities, provides oversight for NERC. NERC focuses on ensuring reliability of the power system in the U.S. and Canada. Although the standards are limited, and much else remains to be done, NERC and CIP have served to raise awareness of infrastructure security issues, and have provided the context for an increase in funding to bolster infrastructure cyber security.
The development of effective policies and procedures for infrastructure security is vital. And, as with IT cyber security, risk management will play a key role. Risk management for smart grids involves threat assessment, vulnerability detection and identification, risk assessment itself, and the drawing up of countermeasures. A realistic assessment of actual risks must be made, with resources apportioned rationally to deal with the risks that are most likely and that could cause the most damage.
As a relatively new field, infrastructure cyber security must begin to embed security into its architecture as part of the design process. Testing of security applications and of grid components must become more comprehensive and more rigorous. Security software and security threats are evolving continuously, and the test regime must change constantly to keep up.
Testing would be more effective and more credible if the infrastructure sector employed independent testing experts from outside the infrastructure realm. This would be particularly true of the testing of smart meters.
As a new field, infrastructure cyber security would benefit from organizational programs to raise security awareness among employees. A natural part of that would be training programs in security.
Further, the government must strive to bring representatives of the electrical and other infrastructure sectors into its conferences on IT security, along with representatives of the IT industry. And within an organization, management must ensure that the IT and infrastructure operations shops, which often work separately and at cross purposes, collaborate in aligning their functions to bring about better security.
In all of these concerns, the role of upper-level management is key. Management must make security for the electrical grid a priority, and ensure that the various divisions of an enterprise make it their priority as well.
November 8, 2010
Two new reports–from the Center for Strategic and International Studies (CSIS), and from the consulting firm Booz Allen and the non-profit Partnership for Public Service (PPS)–highlight serious shortfalls among the federal government’s cyber security work force. Against a background of growing threats to the IT infrastructure of the U.S. military, civilian federal agencies, and major private-sector firms, the reports find common ground on short- and longer-term recommendations for grappling with this pressing concern.
The reports make clear the mounting threats to federal agencies and to major private-sector firms and vital national infrastructures. “Foreign powers, criminal groups, hackers, and terrorist organizations have launched cyber attacks on the White House, Pentagon, State Department, and New York Stock Exchange,” notes the Booz Allen/PPS report. In the past few years, millions of attempts have been made to hack into defense digital networks, and cyber criminals have penetrated the nation’s electrical grid.
For “the past six years,” the CSIS report states, “the US Department of Defense, nuclear laboratory sites and other sensitive US civilian government sites have been deeply penetrated, multiple times, by other nation-states.” In 2008, CSIS adds, “one of the nation’s largest processors of pharmacy prescriptions reported extortionists had threatened to disclose personal and medical information on millions of Americans.” Indeed, last year the Government Accountability Office (GAO) reported that 23 of 24 federal agencies had deficiencies in detecting or thwarting cyber attacks.
President Obama has declared cyber security to be “one of the most serious economic and national security challenges we face.” Defense Secretary Robert Gates has stated that the Department of Defense (DoD) is “desperately short of people who have capabilities (defensive and offensive cyber security war skills) in all the services.”
The two reports essentially agree on the deficiencies facing the federal agencies. CSIS notes the “shortage of the highly technically skilled people required to operate and support systems already deployed” and “an even more desperate shortage of people who can design secure systems, write safe computer code, and create the ever more sophisticated tools” for preventing and mitigating damage from malicious acts.
Booz Allen identified four serious conditions inhibiting the strength of the cyber security workforce:
- An inadequate pipeline of potential new talent. Just 40 percent of federal chief information officers (CIOs), chief information security officers (CISOs), and IT managers, according to those surveyed, find sufficient the quality of applicants for cyber security jobs. This leads to a disproportionate reliance on contractor personnel, such as the 83 percent of CIO staff at the Department of Homeland Security that are private contractors.
- Uncoordinated leadership and fragmented governance in the federal effort, with no one organization heading up decision making or planning for the cyber security workforce. Thus agencies sometimes work at cross-purposes. None of the people interviewed for the report could provide an official count of the actual number of government cyber security personnel.
- Recruitment and retention of cyber security talent is hampered by: the federal government’s cumbersome hiring processes, outdated job classifications, inadequate specialized training, and absence of a federal career path. One computer science job category was last updated in 1988–before the adoption of the Internet.
- Hiring managers, compared to HR managers, are dissatisfied with efforts to hire cyber security talent.
CSIS reaches similar conclusions, and provides others as well. “There is neither a broad cadre of cyber experts,” its report notes, “nor an established cyber career field to build upon.” CSIS specifically criticizes the certification process, asserting that credentials focus on showing expertise in complying with statutes, not risk reduction, thus creating “a dangerously false sense of security.”
The two reports take somewhat similar paths in their recommendations for improving the workforce. Taking the big view, Booz Allen/PPS calls for the White House cyber security coordinator, agency leaders, and OPM to formulate a government-wide blueprint for addressing workforce demands. The blueprint would include tools to gauge the health of the workforce.
Regarding certifications, Booz Allen/PPS advocates updating job classifications, while CSIS calls for the adoption of rigorous professional certifications. CSIS would accomplish the latter through creation of a governance body, to be evaluated after a two-year pilot test, which would formulate and administer certifications in new specialty areas. Members in the governance body would be drawn from key federal agencies, major private-sector organizations, and universities with important cyber education programs.
Both reports urge establishment of a career path in cyber security akin to that in civil engineering or medicine. CSIS emphasizes strengthening the technical competence of personnel through the hiring, acquisition, and training processes, while Booz Allen/PPS stresses the provision by Congress of adequate funding for such purposes as worker training and the bolstering of management expertise.
Funds would include graduate and undergraduate scholarships in cyber security such as the Scholarship for Service program. In fact, CSIS posits a number of initiatives to enhance cyber security education, including an OPM action plan on career issues, and the creation via the federal Chief Information Officers Council of a Cyber Corps alumni group.
More broadly, the reports view the dearth of cyber security talent as reflecting the nation’s woes in science and technical education and in the technological workforce generally. To address this, CSIS stresses more rigorous school curricula, while Booz Allen/PPS calls for expanding scholarship funding in cyber security and computer science. “The White House should lead,” affirms Booz Allen/PPS, “a nationwide effort to encourage Americans to develop technology, math, and science skills.”
The two reports were compiled from public reports and congressional testimony, and from interviews with and surveys of federal subject matter experts and information officers across many federal agencies.
For years IT organizations have focused on securing the computer network. Technologies such as firewalls and network access control (NAC) are designed to keep malware and unauthorized traffic from coming in. That makes sense from an operational integrity standpoint. Viruses, worms, spam, phishing attacks, etc. can bring a network to a standstill. But, while the focus has been on keeping bad traffic out, data packets have moved freely – for the most part – through and beyond the private network. After all, that’s what the network is for. It plays a supporting role to the star of the show: your data. Without data, there’s little need for a network. But therein lies the rub! Even as organizations block traffic and prevent infected or noncompliant endpoints from connecting to the network, they allow confidential, sensitive and proprietary information to flow between departments, between LAN segments, between private networks and across the Internet.
Increasingly, companies are recognizing the vulnerability this creates and the need to secure not just the network but also the data that is stored on and transmitted across it. That is where data loss prevention comes in. Data loss prevention (DLP) refers to a category of information security products that aim to prevent the unauthorized distribution or loss of sensitive information. It is a complex set of technologies designed to identify confidential information, monitor the network for the transmission of that information, and enforce policies accordingly. DLP solutions typically have three components: one at the endpoint, where activities are monitored and controlled; one on the network, where data streams are filtered; and one on storage devices, to protect data at rest.
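As a minimal sketch of the network component’s content filtering, with no reference to any particular vendor’s product, the following Python example scans an outbound message for patterns that commonly indicate sensitive data, such as U.S. Social Security numbers and 16-digit card numbers, and reports any policy violations. The rules and sample message are illustrative assumptions; real DLP products use far richer fingerprinting.

```python
import re

# Illustrative detection rules only; not a complete or production rule set.
POLICIES = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_outbound(message: str) -> list[str]:
    """Return the names of any policies the outbound message violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(message)]

if __name__ == "__main__":
    email_body = "Per your request, John's SSN is 123-45-6789."
    violations = scan_outbound(email_body)
    if violations:
        print("blocked: message violates policies", violations)  # enforce policy
    else:
        print("allowed")
```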
The Need for Data Loss Prevention
It used to be there was only one way to steal a company’s valuable assets – through the door. Not so today. Many businesses live and die based on the information they possess, be it customer data, trade secrets or other intellectual property. And that information can leave an organization any number of ways. Perhaps the most high profile means of data loss of late is through the theft or loss of mobile data-bearing devices, such as laptops, thumb drives and smartphones. The storage capacity on these types of devices continues to grow, and companies are eager to enable their users to work anytime anywhere. This means an increasing dependence on mobile devices. Sales teams have access to Web-based CRM applications. Executives email sensitive documents while on the road. While the functionality enables a more productive workforce, it also increases the vulnerability of the company’s data. Smartphones and laptops are left in taxis, at airport checkpoints, at conferences and hotel rooms – where they can be easily picked up by the next passerby. In fact, according to Ponemon Institute’s Business Risk of a Lost Laptop study, the most vulnerable time to lose a laptop is during travel. But these devices are vulnerable wherever they are used. Laptops have been stolen from office buildings, and even end users’ homes and vehicles. For example, in January 2008 a laptop was taken from a Horizon Blue Cross Blue Shield employee in Newark, New Jersey. The laptop, which was being taken to the employee’s home, held more than 300,000 member names, Social Security numbers and other personal information.
Mobile data-bearing devices are a weak point in your company’s data security, but an even larger threat to data loss is email. In its seventh annual study of outbound email and data loss prevention issues, Proofpoint Inc. found that email is the number one source of data loss risks in large enterprises. According to the study, 35% of respondents investigated a leak of confidential or proprietary information via email in the previous 12 months. Consider how many of your end users use email and have access to sensitive information. Even authorized users sending sensitive information to legitimate recipients put your data at risk if said data is transmitted in clear text. Then there’s the possibility that data is sent to the wrong recipient or perhaps the sender or recipient shouldn’t have access to the data at all. On Sept. 2, 2010 medical technology provider Kinetic Concepts Inc. announced that an attachment with sensitive employee information was accidentally emailed to company employees. With a simple click of a mouse unauthorized recipients had access to their colleagues’ Social Security numbers, addresses, dates of birth and salary information. Imagine the mess that created for HR!
And that brings us to another looming threat – the insider. Data can be lost by end users via accidental disclosure. These are folks who have access to sensitive information but don’t know how to use it safely. Again, perhaps they are emailing confidential documents to an appropriate recipient but are not encrypting them. Then there are users who intentionally disclose sensitive and confidential information to “get back” at their employer. In February 2010, ITPro.co.uk reported that a database containing contact information of 170,000 Royal Dutch Shell workers was emailed to organizations campaigning against the oil giant. The database is “thought to have been sent by a disaffected former employee of the company,” according to the report. That’s just the tip of the iceberg. According to the Privacy Rights Clearinghouse, 77 data breach incidents resulting from intentional disclosure by insiders were made public from January through October, 2010. Those 77 breaches exposed 1,268,807 records.
Malware and Web applications also pose a risk to corporate data. Users can download myriad Web apps to their smartphones that use or store data from the phone. For example, software marketed to catch cheating partners can be downloaded onto an unsuspecting user’s phone. The software then records all communications and stores the information on a server where it can be accessed by a third party. Other Web apps aren’t as seemingly malicious. They may enable smartphone users to send and receive virtual business cards or record telephone conversations for later playback. But these applications potentially expose sensitive and confidential information to third parties, especially if it is stored on the Web app providers’ (unsecure) servers.
Malware writers have also come to realize that there is money to be made in possessing sensitive data. Hackers create viruses, spyware and the like to steal data that can later be used to commit identity theft or blackmail, or be resold. Case in point: Heartland Payment Systems, the United States' fourth-largest credit card payments processor, fell victim to a malware attack in 2008. Its network became infected with malware that allowed attackers to collect unencrypted payment card data in transit, and this went on for several months.
The full article includes information on the following topics:
- The Cost of a Data Breach
- Symantec DLP Solutions
- Discover Where Confidential Data is Stored
- Monitor How Confidential Data is Being Used
- Protect and Prevent Confidential Data Loss
- Manage and Enforce Unified Data Security Policies
- Data Loss Prevention Best Practices
For the full article, visit Logical Security Resources.
Zeus, or Zbot, is a software toolkit that enables malware coders to build hard-to-detect Trojan horses, typically deployed against the bank accounts of unsuspecting owners. (A Trojan horse is malicious software, secretly embedded in a system or application, that is "turned on" at a time of the attacker's choosing.) Operated from behind command-and-control servers, the toolkit is known by various names: Zeus, Zbot, Wsnpoem, PRG, Kneber, and Gorhax.
Since 2007, illicit organizations have employed Zeus to launch damaging, highly publicized attacks targeting the login credentials and other personal data associated with millions of computers, thousands of organizations, and uncounted numbers of users and their accounts. Relatively small but sophisticated criminal bands based in various nations, particularly Eastern European countries such as Russia and Ukraine, have stolen tens of millions of dollars. Computers in 196 countries have been subject to attack, with the U.S., U.K., Saudi Arabia, Egypt, and Turkey among the countries most affected.
In a typical scenario, malicious developers write the malware, and the code can then be purchased on the cyber underground. Black-hat hackers working for criminal organizations break into and compromise computers, planting a Trojan that, when activated, pilfers the credentials of targeted persons and penetrates the targets' bank accounts. Meanwhile, the thieves' command-and-control server collects this sensitive data. The targets can be banks, ATMs, credit card companies, social networking sites, telecommunication and other firms, and private individuals.
The hackers then transfer funds from these accounts to “mules.” Networks of mules consist of developers, non-technical individuals, and other illicit organizations. Often, they are foreigners who acquire fake passports and other identification in order to enter the country whose individuals and corporations are the targets of the attack. After opening bank accounts, they “launder” the funds in the accounts to prevent tracking of the stolen funds. In addition, they transfer the funds to the organizers of the illicit scheme, in return for a percentage of the moneys procured.
For the full article, visit Logical Security Resources.
Smartphones are infiltrating businesses of all sizes. Decreasing price points and increasing functionality put enterprise-class capabilities in the palm of every Tom, Dick and Harry who connects to the corporate network. No big deal, right? BlackBerrys, iPhones and Androids, among many others, enable your users to work more efficiently. But, like every other piece of technology, smartphones come with a price to your organization. That price is risk. Let's look at some of the ways smartphones introduce risk to your environment, and then at some of the best practices for managing that risk.
Perhaps the most significant risk posed by smartphones is that of data loss. There are a number of ways data can be lost or stolen from smartphones. Most obvious is the loss or theft of the device itself. These small handheld devices can easily be forgotten in public places or picked up by casual passersby. Many users either don't password-protect their phone because of the inconvenience it poses or, if they do, use a simple four-character passcode that can easily be cracked. So all of the data, whether sensitive company data or personal data, is accessible to an unauthorized user.
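To see why a simple four-character passcode offers so little protection, consider the arithmetic of the search space. The sketch below is purely illustrative: the guess rate is an assumed figure for an offline attack against an extracted credential, and real devices add lockouts and wipe thresholds that slow online guessing dramatically.

# Rough comparison of passcode search spaces. The guesses-per-second figure
# is an assumption for an offline attack against an extracted credential;
# on-device lockouts and wipe thresholds make online guessing far slower.
GUESSES_PER_SECOND = 1_000_000  # assumed attacker capability, illustration only

def keyspace(alphabet_size, length):
    return alphabet_size ** length

def worst_case_seconds(space):
    return space / GUESSES_PER_SECOND

pin_space = keyspace(10, 4)      # 4-digit PIN: 10,000 combinations
strong_space = keyspace(62, 8)   # 8-char upper/lower/digit passcode: ~2.2e14

print("4-digit PIN keyspace:         {:,}".format(pin_space))
print("8-char alphanumeric keyspace: {:,}".format(strong_space))
print("Worst case at assumed rate, PIN:    {:.3f} seconds".format(worst_case_seconds(pin_space)))
print("Worst case at assumed rate, 8-char: {:.1f} years".format(worst_case_seconds(strong_space) / (3600 * 24 * 365)))

Under these assumptions a four-digit PIN falls in a fraction of a second, while an eight-character mixed-case alphanumeric passcode holds out for years, which is why longer passcodes, and the device encryption tied to them, matter.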
There are also occasions when users have legitimate possession of someone else's smartphone but have no business accessing the data on it. For example, it is not unusual for a user to give an old phone to a friend who has lost their own, or to donate an outdated phone to charity. Data can also be exposed if a smartphone is resold or sent to the manufacturer for repair.
But physical possession is not required to steal data off of a smartphone. Mobile applications can access the data on your users' smartphones and, in some cases, even store that information on third-party servers. For example, applications marketed as tools to catch cheating partners and protect children can be downloaded to an unsuspecting user's smartphone. The application captures emails, texts, browsing history and telephone calls, and stores that information on a server where it can be retrieved by an unauthorized individual. If any of those communications include corporate data, then it, too, is saved and can be accessed by a third party.
All of these scenarios put companies at risk of being noncompliant with laws and regulations around data privacy. If a user loses a smartphone storing unprotected corporate data or your data is stored on an unauthorized third-party server, your company is liable and can face fines.
Contrary to popular belief, smartphones are no better protected against denial-of-service attacks or malware infections than an unprotected PC. In fact, the applications that run on smartphones are subject to all of the same vulnerabilities. Consider Web applications, which have been used to spread malware, spyware, phishing attempts, etc., via PCs. Users are downloading similar applications to their smartphones, the difference being that smartphones typically do not have antivirus protection, so these infected files can propagate onto an IP network.
The smartphone's small form factor further facilitates the propagation of malware. It's more difficult to identify risky websites and suspicious emails and links on pared-down sites built specifically for a small screen. Plus, users tend to be more trusting of the data they receive on their smartphones because the devices represent a more intimate communications channel, so they are more likely to click on potentially dangerous links.
For the full article, with Ten Smartphone Security Best Practices, please visit the Smartphone Security Article at Logical Security.
Online predator Joel Garcia finally got what he deserved. The 29-year-old Texan had been communicating online for some time with a 12-year-old. He had sent the child a number of pornographic images, and in other postings he discussed having sex with the child. Finally, he and the child agreed to meet to have sex.
When Garcia arrived at the agreed-on place, however, he was met by FBI agents and Corpus Christi police. One official had masqueraded online as the child. In Garcia’s car, investigators found 14 child sex videos, and hundreds of photographs of child pornography. The arrested man was later sentenced to 14 years without parole.
The Internet is a great boon for learning, including for children. Yet children, due to their age and trusting nature, are at particular risk from its dangers. The World Wide Web poses a great many, and growing, risks to children.
Online predators trawl the Web seeking to involve youngsters in inappropriate and illegal sexual relationships. The Internet allows sexual deviants to more easily gain access to information about the youths they may be targeting. Such information can include a youth's email address, website, birth date and age, photos, family data, friends, hobbies, and individual likes and dislikes. Based on such information, predators can begin to befriend impressionable youths, perhaps gaining their trust over a long period of time through enticements such as free software or games. At the same time, predators can maintain relative anonymity about themselves, or readily post false or misleading information. Once friendship is gained, predators may seek to physically meet their targets, sometimes by sending them money, tickets, or other means to travel to a rendezvous.
Common "hunting grounds" for predators include email, blogs, and social networking sites such as Facebook and MySpace. Another is online chat rooms, which by their nature promote anonymity while encouraging children eager to converse and make friends to let down their defenses. By their very nature, children are vulnerable to predators. Emotionally immature, they crave attention. They have a natural curiosity, especially about topics that their parents may have declared off limits. And they are accustomed to obeying the requests of adults, so they are unlikely to suspect that such requests are illegitimate.
The Internet is awash with pornography sites, including child pornography sites. Predators may seek to photograph or film children and young adults for use by such sites. To gain material for such sites, or for their own illicit purposes, predators may "cyberstalk" children, constantly harassing them, or attempt to gain their trust in online "friendships" leading to destructive real-life encounters.
A great many free online resources are available for parents, children, and other concerned individuals on how to safely and effectively use Internet tools and devices.
14-year-old Phil loved his parents' new laptop and the Internet, and spent hours on the Web playing games and conversing with friends on Facebook. One week, however, Phil began receiving disturbing messages. A "friend" from middle school posted messages on Phil's Facebook "wall" using offensive language and made-up slurs. An adult stranger commented weirdly on Phil's Facebook photos while requesting Phil's personal email address. Phil was bothered by the messages and told his mother about them.
Phil's mom, a marketing manager, was herself a practiced user of social networking sites. She got on Facebook with her son and showed him how to tighten up the security and privacy of his account. Together they changed his privacy settings to allow access to his photos and profile only to actual friends and relatives. They blocked messages from the adult stranger. And Phil's mother stressed that in the future he should accept messages and friend requests only from people and organizations he knew and trusted.
One site full of information about the risks the Internet can pose to children, and how to mitigate those risks, is Web Wise Kids, located at: http://www.webwisekids.org/
Web Wise Kids, sponsored in part by the Department of Justice, is a 501(c)3 non-profit organization that offers informative and easy-to-understand programs for both children and adults on matters such as online predators and stalking, safe blogging and cell phone use, and computer fraud and piracy.
Programs include interactive games where children and teens play detective to “turn the tables” on Internet predators, by investigating and collecting evidence about their illicit use of spyware and counterfeit software. For parents, instructors, and law enforcement personnel, the Wired with Wisdom program is a user-accessible, online game that explores topics such as chat rooms, personal web sites, and email and social networking.
The federal government provides a number of such resources, in particular free publications from the Federal Trade Commission (FTC). The FTC publications include:
Net Cetera: Chatting with Kids about Being Online
Helps parents protect their kids and talk to them about living their lives online. Topics covered include: parental controls, protecting the family computer, sexting, social networking sites, and increasing the safety of mobile phones. 56 pages.
Social Networking Sites: A Parent’s Guide
Urges parents and kids to talk about the risks involved in using social networking sites, and offers tips for using such sites safely. Helps parents with issues like keeping information private, how their kids get online, avoiding sex sites, reviewing their children's friends lists, and computer privacy settings.
Social Networking Sites: Safety Tips for Tweens and Teens
Deals with such issues as limiting the posting of personal information such as photographs, street addresses, and credit card data; being wary of meeting online "friends"; and how posted information stays online "forever." 4 pages.
For the full article, including 7 Practices for Safer Computing, please visit Logical Security Resources.
October 10, 2010
Shon Harris discusses some of the threats companies face in information security today and what she and her company, Logical Security, are doing to help address them. Here is the interview with Shon Harris, Owner and President of Logical Security.
- 1. Please provide us with some background information on your organization and your industry.
I work in the information security industry, which has critical impacts on businesses, organizations and nations. Our society's dependence upon technology only increases, and properly securing that technology can mean life or death for an organization.
The information security industry is relatively new compared to other industries such as financial services, medicine, and telecommunications. It is currently going through many 'growth pains' as it moves from a chaotic, infant field to a more mature and disciplined one. My company and I have been seen as visionaries in helping some of the largest corporations and government agencies secure their most precious assets against the largest threats they face today.
Logical Security is going into its 8th year of existence, while I have been in the industry for 15 years. My company specializes in risk management consulting services and training. We build enterprise-wide risk management programs that not only allow our customers to identify their vulnerabilities and stop their adversaries, but also correlate and integrate information security issues into their overall business decisions and vision.
- 2. What are some of the primary challenges your industry faces?
The threat landscape that companies face today is not the same as the one they had to deal with even five years ago. Today's threats are not lone hackers but organized, trained, and funded groups backed by organized crime rings or nation-states.
Attackers are no longer interested in spreading benign viruses; they have very focused goals of obtaining an organization's most sensitive data, such as Social Security numbers, credit card information, medical data, and privacy and financial information. The attackers are using our own technology against us, and we are constantly being outsmarted.
Companies and government agencies are finding it difficult to keep up with a threat that can morph and adapt at the rate of speed that is currently taking place. Anti-virus products capture around 23% of the malware that is on our systems, meaning that most systems are infected and being used by an underground criminal without our knowledge.
Organizations have a false sense of security because they have anti-virus, firewalls, intrusion detection, intrusion prevention and other technologies in place. While these are necessary defenses, the enemy is circumventing them and covertly embedding themselves into the technology and devices we use day in and day out.
To view the entire article, click here.