Two factors increase the stakes of the cyber struggle.

Tactically and operationally, the increasing dependence of modern technologically advanced forces (especially U.S. forces) on networks and information systems creates new kinds of exploitable vulnerabilities. Second, as modern societies, including the militaries that mirror them, have continued to evolve, they have become ever more dependent on a series of interconnected, increasingly vulnerable “critical infrastructures” for their effective functioning.

These infrastructures not only have significantly increased the day-to-day efficiency of almost every part of our society, but they have also introduced new kinds of vulnerabilities.

- Robert A. Miller and Daniel T. Kuehl

Connectivity in the Modern World

Today, computers connect us to our finances through online banking, mutual fund management, stock trading services, and a variety of other online applications that provide access to accounts twenty-four hours a day. Beyond financial services, we have the ability to connect to a wide variety of information, including social media content such as Facebook, YouTube, and Twitter, as well as magazines, video games, and other Web 2.0 content.

The interconnectivity of such systems has not only provided individuals with access to a wide variety of data, but businesses now also have the ability to leverage the Internet as part of their day-to-day operations. Whether it is human resources management, email and coordinated calendar systems, or sales tracking, the cloud offers businesses opportunities for quicker, streamlined processes and potential cost savings. Furthermore, the government uses interconnected computer systems to manage public services such as energy systems, coordinate public transportation logistics, synchronize emergency services, run water treatment facilities, and leverage technology for a variety of other services benefitting the public. However, because of their interconnectedness, personal, business, and government computer systems are open to a variety of activities they were never intended for. Instead of a person gaining access to his own financial data, a third party could be intercepting that communication and using it to bilk him of his entire savings. Similarly, a business could be storing its trade secrets on internal file servers while a hacker downloads the information with the intent of selling it to a foreign competitor.

And with respect to government services, a state-sponsored attack could originate from a foreign country to deny certain services, steal information, or take control of and exploit command and control systems unbeknownst to leadership. Martin C. Libicki, a noted authority on information warfare at the RAND policy institute, has written Cyberdeterrence and Cyberwar (2009), a notable work covering the current and future challenges associated with the connected world. Among the concepts within his book, Libicki discusses security vulnerabilities associated with cyberspace:

. . . In theory, all computer mischief is ultimately the fault of the system’s owner—if not because of misuse or misconfiguration, then because of using a system with security bugs in the first place. In practice, all computer systems are susceptible to errors. The divergence between design and code is a consequence of the complexity of software systems and the potential for human error. The more complex the system—and they do get continually more complex—the more places there are in which errors can hide. (p. 18)

Connectedness and Vulnerability

What Libicki is referring to is a vulnerability within a system that a hacker could use to “gain access to a system or to get it to accept rogue instructions [which] is called an exploit” (p. 18). A variety of vulnerabilities occur within cyberspace because of humans, hardware, software, and the connection points that provide access to such systems. The United States Computer Emergency Readiness Team (US-CERT) has provided a “high level overview” of cyber vulnerabilities for control systems. Within this overview, US-CERT includes the following vulnerabilities: wireless access points, network access points, unsecured SQL databases, poorly configured firewalls, interconnected peer networks with weak security, and several others.

Similarly, the National Institute of Standards and Technology (NIST) has published the “Risk Management Guide for Information Technology Systems” (2002). This guide establishes a multi-step analysis that IT managers can use to assess their network vulnerabilities, measure the likelihood of each vulnerability being exploited with respect to the threat’s source, motivation, and actions, and develop recommendations and documentation to counteract the vulnerabilities found during the assessment. The NIST guide views vulnerabilities from the perspective of the potential consequences of an exploited vulnerability. Following the US-CERT overview and the NIST guide can be helpful from an IT management perspective, as both provide enterprise-level guidance on structuring network systems with respect to vulnerabilities and both apply a system-level view of analyzing vulnerability.
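
To make that assessment step concrete, the sketch below rates hypothetical findings by combining a likelihood estimate with an impact estimate, in the spirit of the NIST guide's likelihood-and-impact matrix. The findings listed, and the particular numeric scale and thresholds, are illustrative assumptions rather than values from any actual assessment.

```python
# Minimal sketch of a likelihood-times-impact risk rating, loosely in the
# spirit of NIST SP 800-30; the vulnerability entries below are hypothetical.

LIKELIHOOD = {"low": 0.1, "medium": 0.5, "high": 1.0}
IMPACT = {"low": 10, "medium": 50, "high": 100}

def risk_level(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to a qualitative risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return "High"
    if score > 10:
        return "Medium"
    return "Low"

# Hypothetical findings from a vulnerability assessment: (description, likelihood, impact).
findings = [
    ("Unsecured SQL database reachable from the DMZ", "high", "high"),
    ("Wireless access point using a default password", "medium", "high"),
    ("Out-of-date workstation browser plug-in", "medium", "low"),
]

for description, likelihood, impact in findings:
    print(f"{risk_level(likelihood, impact):6s} {description}")
```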

However, both lack specificity about how an external threat can tactically exploit a system. Prabhaker Mateti, in the chapter “TCP/IP Suite” from the Handbook of Information Security (2006), describes more than fifteen types of security exploits related to the TCP/IP suite that hackers use to attack systems, including sniffing, fingerprinting, Internet Protocol (IP) address spoofing, and buffer overflows (pp. 25-29). Stuart McClure, Joel Scambray, and George Kurtz have provided both strategy and tactics for implementing Mateti’s notable exploitations, among many others, in their seminal work Hacking Exposed, now in its sixth edition.

It is where hardware, software, and the human element meet within a system that hackers try to take control and security specialists patch vulnerabilities to deny unauthorized access; the cycle appears to be never-ending.

Sniffing, Fingerprinting & Footprinting

From the tactical viewpoint, within the pages of Hacking Exposed the authors provide recipes for exploiting vulnerabilities, as well as instructions on countering exploitations. With regard to sniffing, the text covers a variety of security weaknesses and recommends several software applications that can be used to find a network’s Achilles’ heel. Cain and KerbSniff are two tools in particular that can be used for eavesdropping on a network password exchange in the Windows environment (McClure et al., 2009, pp. 169-170). Furthermore, network sniffing can be accomplished by using applications such as tcpdump, Snort, and Wireshark, which allow anyone with the means to view traffic across a network. This can be helpful for debugging network problems, but in the wrong hands it can prove invaluable in footprinting a system (pp. 273-274).
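
As a small illustration of how little code packet capture requires, the sketch below uses the third-party Scapy library (an assumption on my part; the tools named in the text are tcpdump, Snort, and Wireshark) to print a one-line summary of each captured TCP packet. Capturing traffic generally requires administrative privileges and should only be done on a network one is authorized to monitor.

```python
# Minimal sniffing sketch using the third-party Scapy library. Capturing
# traffic requires root/administrator privileges.
from scapy.all import sniff

def show(packet):
    # Print a one-line summary (source, destination, protocol) of each
    # captured frame: the raw material of footprinting a network.
    print(packet.summary())

# Capture ten TCP packets on the default interface, then stop.
sniff(filter="tcp", prn=show, count=10)
```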

With regard to terminology, Mateti uses the term fingerprinting in his text, whereas McClure et al. refer to this technique as footprinting. Though similarities exist and some confuse the two terms, Michael Gregg provides clarity in his text Certified Ethical Hacker Exam Prep: Understanding Footprinting and Scanning (2006).

He defines footprinting as “The process of accumulating data regarding a specific network environment, usually for the purpose of finding ways to intrude into the environment” (p. 89). Fingerprinting, by contrast, can be either active or passive in nature. “Passive fingerprinting is the act of identifying systems without injecting traffic or packets into the network” and active fingerprinting is the act of using tools to “inject strangely crafted packets into the network to measure how systems respond” (Gregg, 2006, p. 89). [Note: McClure et al. use the general term scanning rather than fingerprinting (pp. 44-77).]

Essentially, both fingerprinting and footprinting are used to map accessible hardware and software services within a network. The information gleaned from such endeavors provides actionable intelligence on which hardware or services are susceptible to common hacking attempts. By determining the easiest way to gain access to and exploit a system while minimizing the risk of detection, the hacker can use a simple cost-benefit analysis to ascertain which attack vector is worth his time (Kshetri, 2006, pp. 36-38).
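
A minimal sketch of that mapping activity, in the vein of what McClure et al. call scanning, follows: it attempts a full TCP connection to a few well-known ports and reports which ones answer. The target address and port list are placeholders, and such probing should only be aimed at systems one is authorized to test.

```python
# Minimal active scanning sketch: try to complete a TCP handshake on a
# handful of well-known ports and note which ones are listening.
import socket

TARGET = "192.0.2.10"          # placeholder address (TEST-NET-1 range)
PORTS = [21, 22, 25, 80, 443]  # a few well-known services

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex() returns 0 when the three-way handshake succeeds,
        # i.e. when something is listening on the port.
        if s.connect_ex((TARGET, port)) == 0:
            print(f"{TARGET}:{port} open")
        else:
            print(f"{TARGET}:{port} closed or filtered")
```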

Microsoft provides general guidance on countering this threat in the online educational documentation of its developer network. Microsoft’s guidance includes “filter[ing] incoming packets that appear to come from an internal IP address” and “filter[ing] outgoing packets that appear to originate from an invalid local IP address” (Meier, Mackman, Dunner, Vasireddy, Escamilla, & Murukan, 2003).
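
The logic behind that guidance can be sketched in a few lines: an inbound packet claiming an internal source address, or an outbound packet claiming a source outside the local range, is treated as likely spoofed. The 192.168.1.0/24 network and the sample addresses below are assumptions for illustration, not part of Microsoft's documentation.

```python
# Minimal sketch of ingress/egress filtering logic for spoofed source
# addresses; the internal network range is an assumed example value.
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("192.168.1.0/24")

def looks_spoofed(source_ip: str, direction: str) -> bool:
    src = ip_address(source_ip)
    if direction == "inbound":
        # Traffic arriving from outside should never carry an internal source.
        return src in INTERNAL_NET
    if direction == "outbound":
        # Traffic leaving the network should always carry an internal source.
        return src not in INTERNAL_NET
    raise ValueError("direction must be 'inbound' or 'outbound'")

print(looks_spoofed("192.168.1.50", "inbound"))   # True  (drop it)
print(looks_spoofed("203.0.113.7", "outbound"))   # True  (drop it)
print(looks_spoofed("192.168.1.50", "outbound"))  # False (let it pass)
```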

IP Spoofing

With regard to the other security exploits Mateti references, he points out that “IP spoofing is an integral part of many attacks” (p. 26). Matthew Tanase provides a primer on IP spoofing at Symantec’s website, where he covers the history of the technique and how the structure of the TCP/IP protocol suite and its packet exchanges permit this particular exploitation to occur (2003). Tanase notes that there are several variations of IP spoofing, but they all have a common denominator: “an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by ‘spoofing’ the IP address of that machine.” Computer World’s Jonathan Hassell has provided an authoritative view of the common attacks that rely on IP spoofing and what can be done to patch them in his article “The top five ways to prevent IP spoofing” (2006). The common attacks described by Hassell include blind spoofing, nonblind spoofing, the denial-of-service (DoS) attack, and the man-in-the-middle attack. Blind spoofing is carried out by a hacker outside the network perimeter who is “blind to how transmissions take place on this network”, so he must obtain sequence numbers from the target device and then falsify who he is by “injecting data into the stream of packets without having to authenticate himself when the connection was first established” (Hassell, 2006). Nonblind spoofing occurs when the hacker is inside the subnet and can sniff existing transmissions and hijack sessions without being blind to the sequence numbers.
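
The mechanics are straightforward because the source field of an IP header is simply whatever the sender writes into it. The sketch below, which assumes the third-party Scapy library, crafts a TCP SYN packet whose source address claims to be a trusted host; the addresses are placeholders, and sending forged packets on networks one does not own is illegal in most jurisdictions.

```python
# Sketch of the packet-level mechanics behind IP spoofing using the
# third-party Scapy library; addresses are placeholder TEST-NET values.
from scapy.all import IP, TCP, send

TRUSTED_HOST = "192.0.2.25"   # address the attacker pretends to be
TARGET_HOST = "192.0.2.80"    # machine being attacked

# A TCP SYN whose source address claims to be the trusted machine. Any reply
# (including the sequence number a blind spoofer needs) goes to TRUSTED_HOST,
# not to the attacker, which is exactly why blind spoofing is the harder case.
packet = IP(src=TRUSTED_HOST, dst=TARGET_HOST) / TCP(dport=22, flags="S")
send(packet)  # emitting raw packets requires root privileges
```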

A denial-of-service attack occurs when “multiple hosts are sending constant streams of packet [sic] to the DoS target” (Hassell). This is essentially a flood of data that overwhelms a system to the point that it is unavailable or inoperable. Finally, the man-in-the-middle attack is an interception of packets between machines in which the packets are read by an unauthorized user and sent onward, unbeknownst to either of the communicating parties. Particularly troubling is the fact that neither the originating sender nor the intended receiver is aware that information was intercepted in transit; if secure information was gathered, no one except the eavesdropper knows that data was compromised (Hassell).
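
A conceptual sketch of that interception pattern follows: a relay accepts the victim's connection, quietly opens its own connection to the real server, and copies bytes in both directions while noting everything that passes through, so neither endpoint sees anything unusual. The listening port and server address below are placeholders for illustration only.

```python
# Conceptual man-in-the-middle relay: both ends believe they are talking
# directly to each other while every byte passes through the relay.
import socket
import threading

LISTEN_PORT = 8080                  # where the victim connects (assumed)
REAL_SERVER = ("192.0.2.80", 80)    # where traffic is quietly forwarded

def pipe(src, dst, label):
    """Copy bytes from src to dst, noting each chunk that passes through."""
    while data := src.recv(4096):
        print(f"[{label}] intercepted {len(data)} bytes")
        dst.sendall(data)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", LISTEN_PORT))
listener.listen(1)
victim, _ = listener.accept()

server = socket.create_connection(REAL_SERVER)

# Forward traffic in both directions until either side closes.
threading.Thread(target=pipe, args=(victim, server, "client->server")).start()
threading.Thread(target=pipe, args=(server, victim, "server->client")).start()
```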

Buffer Overflows

“Historically, buffer overflows have been the most common type of vulnerability. They have been popular because buffer overflow exploits can often be carried out remotely and lead to complete compromise of a target” (Chen & Walsh, 2009, pp. 54-55). Since many system services susceptible to buffer overflow run at the highest level of administrative privilege, the technique is appropriately described as the “coup de grace of hacking” (McClure et al., 2009, pp. 550-551).

Essentially, the hacker sends packets to the target service knowing that more data is being transmitted than the target expects during communication. This extra information is dealt with differently by different services: it can be ignored, it can crash the service or system, or, if the target is susceptible to this type of vulnerability and the extra packet data is constructed correctly by the hacker, the service may use that data to run administrator-level code and allow the hacker to control some or all of the target system (Mateti, 2006, p. 558). Even though the buffer overflow vulnerability was documented as a theoretical exploit in 1995 and fully substantiated in 1996, unpatched servers that are still susceptible to this weakness continue to populate the Internet (McClure et al., 2009, pp. 550-551).
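
The attacker's side of that transaction amounts to little more than sending a service far more input than its programmer expected, as the sketch below illustrates. The host, port, and buffer size are hypothetical, and the filler payload only demonstrates the oversized-input mechanism; an actual exploit would require machine code crafted for one specific vulnerable service.

```python
# Sketch of the oversized-input mechanic behind a buffer overflow probe;
# target address, port, and sizes are hypothetical placeholders.
import socket

TARGET = ("192.0.2.80", 9999)     # hypothetical vulnerable network service
EXPECTED_INPUT_SIZE = 256         # what the service's programmer assumed

# Far more data than the service expects. A vulnerable service copies it into
# a fixed-size buffer and overwrites adjacent memory; a patched one simply
# truncates, rejects, or ignores the excess.
payload = b"A" * (EXPECTED_INPUT_SIZE * 16)

with socket.create_connection(TARGET, timeout=5) as conn:
    conn.sendall(payload)
    print(f"sent {len(payload)} bytes to {TARGET[0]}:{TARGET[1]}")
```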

The Human Element

Overlooked as a security concern by Mateti in his essay on “TCP/IP Suite” vulnerabilities is the human element. It is, after all, the human that manages cyberspace and provides physical access to the terminals and systems that are interconnected. It is the human that sets up the Internet protocols used during web communications, sets the security procedures to be adhered to, codes the back-end server integration, creates the temporary passwords used to access sensitive information, holds resentment against employers, forgets to patch a known weakness in sendmail, and desires to find confidential financial information to sell to the highest bidder. It is the human element that matters, perhaps more so than any hardware, software, or network connection, when it comes to securing a system.

To many, the hacker who has taken over a system and stolen a database of financial information for monetary gain is conceptualized as a social pariah, living in his mother’s basement, staring at a monitor all day and night, sipping caffeinated beverages, maintaining poor hygiene, and exhibiting antisocial behavior. However, “A modern-day computer criminal could be a disgruntled, middle-aged, white-collar worker sitting at a nice desk on the fourteenth floor of the headquarters building of a billion-dollar software manufacturer” (Valacich & Schneider, 2012, p. 403). In Congressional testimony to the United States House of Representatives Committee on Financial Services (2003), Joseph Ansanelli, a cybersecurity expert, cited a Harris Interactive survey of workers and managers who handle sensitive customer information at work. In this report, surprisingly, “66% say their co-workers, not hackers, pose the greatest risk to consumer privacy [and] only 10% said hackers were the greatest threat” (p. 5).

According to Valacich and Schneider (2012), studies have revealed commonalities among computer criminals: they tend to be current or former employees, people with technical knowledge who use their skills illegally for personal gain, career criminals, or crackers who commit intrusions with no particular purpose beyond snooping through a system (p. 405). Ultimately, humans are susceptible to deception and can provide access to systems by disclosing sensitive information to hackers without realizing that their actions bring about terrible consequences.