Nothing to Hide

The “nothing to hide” argument has become central to the recent debate over surveillance. Nearly everything we do now leaves an electronic trace which, combined with data mining, can be used to see what we are doing and even to predict our behaviour. Under the recently passed Investigatory Powers Act 2016 in the UK (HM Government, 2016), 48 different agencies have access to communications data, bulk personal data and other information, with the stated purpose of increasing national security. The act’s main aim is to improve security against acts of terrorism, but this comes at the cost of people’s privacy.

The most common argument in favour of mass surveillance is that if you have nothing to hide you have nothing to fear, that anyone acting illegally deserves to be punished, and that surveillance exists to protect us and our country. Whereas a conventional army’s movements can be tracked using radar, satellites and drones, terrorists have no navy or air force, making these means of gathering information about an attacker irrelevant. Another potential source of information is inside knowledge, whether from an informant or from someone who manages to infiltrate the organisation. However, unlike organised crime, a terrorist attack has the potential to take many lives, and it only takes one successful attack to do so; one successful attack is one too many. Government surveillance can therefore be argued to be the best way of protecting a nation, as conversations can be followed and behaviours predicted (Doyle, 2011).

Daniel Solove has argued that, between advances in technology and government surveillance, there are similarities with George Orwell’s Nineteen Eighty-Four; although not as extreme as in the book, it is still possible to track people even if they are just walking around the corner. Through data mining of information that might seem trivial, governments attempt to predict whether a person is a terrorist based on purchase histories and other personal data. Whilst this sounds useful, what happens when someone is misidentified as a terrorist and added to a no-fly list or arrested? (Doyle, 2011). The European Union has also ruled that surveillance on this scale is illegal and that only targeted information gathering is justified, but with the UK leaving the EU this will soon cease to be a constraint, and the UK will join China and Russia as a mass surveillance state (Agerholm, 2016).

Overall, I feel that mass surveillance is not necessary and that targeted surveillance is adequate. The lack of public response when this law was first announced has shown the government that it can get away with it, and sets a precedent going forward, making me ask: “When does it stop? When will they have enough information?” As much as George Orwell’s Nineteen Eighty-Four was a work of fiction, I feel we are slowly moving in that direction, and if we get there, will there be a way back?

Asirra CAPTCHA

The Asirra CAPTCHA was proposed at ACM CCS 2007 and works by showing the user 12 pictures of cats and dogs, with the user having to select all the cat images and none of the dog images. A human can do this within 30 seconds 99.6% of the time, whereas algorithms find it much harder to differentiate between cats and dogs. Basing image classification on colour features alone gives 56.9% accuracy per image, giving an overall probability of about 0.2% of solving an Asirra CAPTCHA.

Asirra has a database of over 3,000,000 images of cats and dogs, which continues to grow. For the attack, 13,000 images were collected by a script left to run overnight, which repeatedly refreshed and downloaded the 12 images from the Asirra research page. The images were manually classified into three classes: cat, dog and other. A support vector machine (SVM) classifier was used because of its ability to extract linear combinations of features, its predictive power and its scalability. 5-fold cross-validation was used to measure the accuracy of the SVM classifier, with four partitions used for training and the last one used to validate the results.

The SVM classifier was trained on a combination of colour and texture features. Both the colour and the texture features produce an estimate of whether the picture is a cat or a dog, and a weighted average of the two is taken, with a weight of 1/3 on colour and 2/3 on texture. Images of cats were mapped to the value 1.0 and dogs to -1.0, so if the average is positive the SVM classifier predicts a cat, while a negative value results in a dog. This system produced a classifier with 82.7% accuracy per image, giving a 10.3% probability of breaking the Asirra CAPTCHA, which, despite being far lower than a human’s success rate, is more than 50 times the 0.2% predicted for machine vision attacks.
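The weighted vote and the overall success probability described above can be sketched as follows. The scores here are hypothetical stand-ins; in Golle (2008) they come from SVMs over colour and texture features.

```python
# Weighted combination of two per-image classifiers: 1/3 colour, 2/3 texture.
# Cats map to +1.0 and dogs to -1.0, so the sign of the average decides.

def combined_score(colour_score: float, texture_score: float) -> float:
    return (1 / 3) * colour_score + (2 / 3) * texture_score

def predict(colour_score: float, texture_score: float) -> str:
    return "cat" if combined_score(colour_score, texture_score) > 0 else "dog"

def solve_probability(per_image_accuracy: float, images: int = 12) -> float:
    # All 12 images must be classified correctly to solve one CAPTCHA.
    return per_image_accuracy ** images

print(predict(-0.4, 0.8))                    # texture outweighs colour -> "cat"
print(round(solve_probability(0.827), 3))    # ~0.102, close to the 10.3% above
```

Note that 0.827¹² works out to roughly 10.2%, which matches the 10.3% figure reported in the paper to within rounding.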

Using a token bucket scheme greatly improves the robustness of the Asirra CAPTCHA. Given that a human has a 99.6% success rate, allowing a total of three attempts before requiring two successful Asirra CAPTCHAs in a row greatly reduces the chance of the CAPTCHA being broken by machine vision, without reducing usability for a human too significantly (Golle, 2008).
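A minimal sketch of that token-bucket rule, assuming a per-client bucket of three attempts and a two-in-a-row penalty (the class and its parameters are illustrative, not from the paper):

```python
# Each client gets 3 attempts; once exhausted, two consecutive correct
# solutions are required before access is granted. A machine with a 10.3%
# per-CAPTCHA success rate then faces roughly a 1% (0.103^2) hurdle.

class CaptchaTokenBucket:
    def __init__(self, tokens: int = 3, penalty_successes: int = 2):
        self.tokens = tokens
        self.penalty_successes = penalty_successes
        self.consecutive_successes = 0

    def attempt(self, solved: bool) -> bool:
        """Returns True once the client has proved itself human."""
        if self.tokens > 0:
            if solved:
                return True
            self.tokens -= 1  # a failed attempt consumes a token
            return False
        # Out of tokens: demand consecutive successes.
        if solved:
            self.consecutive_successes += 1
            return self.consecutive_successes >= self.penalty_successes
        self.consecutive_successes = 0
        return False
```

A human solving 99.6% of CAPTCHAs rarely hits the penalty state, which is why usability barely suffers.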

Impact of Artificial “Gummy” Fingers on Fingerprint Systems

Biometrics are a form of single-factor authentication used to identify someone. Fingerprints are one of the most common forms of biometric authentication, being used for PCs, smart cards and portable devices. Silicone fingers, as well as inked fingerprints on paper, were tested before the experiment; all capacitive sensors and some optical sensors rejected the silicone finger, while the inked paper was accepted by one of the systems.

A fingerprint system captures data from the finger via a sensing device, extracts features from the data, encrypts them, and stores this information as a template along with some personal information. When someone then uses the system, the scan is compared against the database of stored fingerprint data and an acceptance result is output if there is a matching template. The sensor detects the difference between the ridges and valleys of the fingerprint, with optical sensors detecting the difference in reflection, whilst capacitive sensors detect the difference in capacitance.
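The enrol-and-verify flow described above can be sketched as below. The feature extraction and matching here are deliberate stand-ins (real systems compare minutiae features with a tolerance, and encrypt templates at rest); the class and function names are my own.

```python
# Toy sketch of a fingerprint system's template store: enrol extracts
# features and stores a template; verify compares a fresh scan's
# features against all stored templates.

from typing import Dict, List

def extract_features(scan: List[int]) -> List[int]:
    # Stand-in for ridge/valley feature extraction from raw sensor data.
    return sorted(scan)

class FingerprintSystem:
    def __init__(self) -> None:
        self.templates: Dict[str, List[int]] = {}

    def enrol(self, user: str, scan: List[int]) -> None:
        # A real system would encrypt the template before storing it.
        self.templates[user] = extract_features(scan)

    def verify(self, scan: List[int]) -> bool:
        """Accept if the scan's features match any stored template."""
        return extract_features(scan) in self.templates.values()
```

The exact-match comparison is the weak point this toy shares with the real systems in the paper: anything that reproduces the registered features, live finger or gummy clone, is accepted.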

There are several attacks that can be performed against a fingerprint system, including a criminal forcing the live finger to be used on the scanner, an attack against the authentication system with an unregistered finger, the use of a severed fingertip, a genetic clone of the registered finger, or an artificial clone of the finger.

To make an artificial finger, an impression must first be made of the live finger. The impression is a mirror image of the live finger, meaning it can be used as a mould for the artificial finger, which is made from gelatine. Alternatively, the image of a residual fingerprint can be captured with a digital microscope and used to make the mould. Both types of “gummy finger” were accepted by the 11 different fingerprint systems used in the experiment, with probabilities varying between 67% and 100%. The number of samples in this experiment was too small to compare the different types of fingerprint system, but there is enough evidence to show that “gummy fingers” can bypass this type of biometric security (Matsumoto, Matsumoto, Yamada and Hoshino, 2002).

The best way to improve security would be multi-factor authentication. A fingerprint falls under the “something you are” category, so adding something from the “something you have” category, such as a key fob or card, along with the “something you know” category, such as a password, means that if one of these protective measures is compromised, there are still two other factors an attacker must overcome before gaining access to the information they are trying to acquire.
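A minimal sketch of that three-factor check, assuming each factor has already been verified by its own mechanism (hashed password comparison, token validation, biometric match); the function and its parameters are hypothetical:

```python
# Multi-factor authentication: all three factors must pass, so a forged
# gummy finger alone (one compromised factor) is not enough.

def authenticate(knows_password: bool, has_token: bool,
                 fingerprint_matches: bool) -> bool:
    return knows_password and has_token and fingerprint_matches

print(authenticate(True, True, True))    # all factors pass -> True
print(authenticate(False, True, True))   # stolen fingerprint + token only -> False
```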

FREAK SSL/TLS Vulnerability

The Factoring RSA Export Keys (FREAK) vulnerability in the Secure Sockets Layer (SSL) protocol and its successor, Transport Layer Security (TLS), was discovered in March 2015 by Karthikeyan Bhargavan and his research team. TLS and SSL are protocols designed for secure data transfer between a client and a server through authentication, encryption and integrity (Muscat, 2016; Caballero, Bodden and Athanasopoulos, 2016).

The vulnerability allows a man-in-the-middle attack through an implementation error. It stems from an old US export law which restricted RSA public keys to 512 bits; SSL and TLS retained ciphersuites with these weak keys, marked as eligible for export. When a client tries to connect to a trusted server, the man-in-the-middle attacker replaces the ciphersuite in the message sent from the client to the server with an RSA_EXPORT ciphersuite. The server responds, and the attacker again replaces the ciphersuite, this time with a standard RSA ciphersuite, which is forwarded to the client. The server sends its strong, 2048-bit RSA public key, which the client receives, and then sends a ServerKeyExchange message containing a weak, 512-bit RSA public key. Because of a bug in the client’s state machine composition, this unexpected message is not rejected but silently accepted, and the server’s strong public key is replaced with the weak one. The client responds with its ClientKeyExchange, ClientCCS and ClientFinished messages, which the attacker can use to find the private key by factoring the weak key. The attacker can then decrypt the pre-master secret from the ClientKeyExchange and send a ServerFinished message to complete the handshake (Beurdouche, Bhargavan, et al., 2015).
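The downgrade above can be illustrated with a simplified simulation. The message names mirror TLS handshake messages, but this is an illustrative model of the message substitution and the state-machine bug, not a protocol implementation.

```python
# FREAK downgrade, modelled as three parties passing dictionaries.

def client_hello() -> dict:
    # The client asks for a strong RSA suite.
    return {"ciphersuites": ["TLS_RSA_WITH_AES_128_CBC_SHA"]}

def mitm_rewrite(hello: dict) -> dict:
    # The attacker swaps in an export-grade suite before forwarding.
    downgraded = dict(hello)
    downgraded["ciphersuites"] = ["TLS_RSA_EXPORT_WITH_RC4_40_MD5"]
    return downgraded

def server_respond(hello: dict) -> dict:
    # On an export suite the server sends a ServerKeyExchange carrying
    # a weak 512-bit RSA key; otherwise only its strong 2048-bit key.
    if "EXPORT" in hello["ciphersuites"][0]:
        return {"msg": "ServerKeyExchange", "rsa_bits": 512}
    return {"msg": "Certificate", "rsa_bits": 2048}

def buggy_client_accept(response: dict) -> int:
    # The client bug: the unexpected ServerKeyExchange is silently
    # accepted, so the weak key replaces the strong one.
    return response["rsa_bits"]

key_bits = buggy_client_accept(server_respond(mitm_rewrite(client_hello())))
print(key_bits)  # 512 -- weak enough for the attacker to factor offline
```

Without the man-in-the-middle rewrite, the same client would end up keyed to the 2048-bit key, which is far beyond practical factoring.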

To protect against this attack, server administrators should remove all RSA export cipher suites from their accepted cipher suite collection. For clients, installing a browser that does not support RSA export cipher suites is the best way to protect against FREAK attacks (Caballero, Bodden and Athanasopoulos, 2016).
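The server-side fix amounts to filtering the accepted suite list, which can be sketched as below (the suite names are examples; real configurations list many more):

```python
# Hardening sketch: drop every export-grade suite from the accepted list.

accepted = [
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_RSA_EXPORT_WITH_RC4_40_MD5",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
]

hardened = [suite for suite in accepted if "EXPORT" not in suite]
print(hardened)  # the export-grade suite is gone
```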

DDoS Attacks

Distributed Denial of Service (DDoS) attacks are relatively simple, but can be extremely powerful; they are the most advanced form of DoS attack. They work via packet streams from many different sources, using large numbers of coordinated hosts to overrun a critical resource of the target. Depending on the scale of the attack, and the measures in place to prevent it, the victim can suffer anything from system slowdown to total loss of service. They are so effective because they do not try to break the system they are attacking, meaning traditional security mechanisms are ineffective; instead they cause damage through sheer volume of traffic.

When performing an attack, the attacker must first recruit agent machines. These machines need to have a vulnerability that allows the attacker to gain access, and enough resources for the attack. The attacker then exploits the security weaknesses of the agent machines to plant the malicious code, and communicates with handlers to see which agents are running and when to upgrade them. Once all of this is done, the attack can commence.

There are several different types of DDoS attack. Flood attacks send large amounts of IP traffic to a victim, congesting the system’s bandwidth; depending on the volume of packets sent, the target can experience anything from reduced system speed to a full crash. Amplification attacks exploit the broadcast IP address feature found on most routers to amplify and reflect the attack: messages are sent to a broadcast IP address, and the router forwards the packets to all the IP addresses within the broadcast range. A reflector attack requires a set of predetermined reflectors, which can simply be scattered across the internet; the reflected packets are normal packets with legitimate source addresses and so cannot be filtered. Protocol exploit attacks use a specific feature or implementation bug of some protocol on the victim’s system to consume excess resources, while malformed packet attacks send incorrectly formed IP packets to the victim to crash the system.

There are two classifications of DDoS defence. The first is based on the activity deployed. Intrusion prevention aims to stop the attack from being launched in the first place; intrusion detection allows a host computer and network to guard against being a source of an attack as well as a victim; intrusion tolerance and mitigation accepts that it is not possible to fully prevent or stop DDoS attacks, so aims to minimise the damage they cause; and intrusion response identifies the attack sources and blocks their traffic. The second classification splits DDoS defence by where it is deployed. Most systems designed to fight DDoS attacks have been built for the victim network, as it suffers the greatest impact of an attack; intermediate network defences are more effective, as they can handle the attack traffic more easily and trace it back to the attackers; and source network mechanisms aim to stop attacks before they even enter the internet (Douligeris and Mitrokotsa, 2004).

Wireless networks are easier to attack than wired networks, as DoS attacks can be mounted simply by jamming the frequency bands in use. The best defence against both bandwidth depletion and resource depletion attacks is a mechanism that detects and responds to the attack. One proposed technique is a flow monitoring table at each node: by comparing each flow’s previously recorded sending rate with its current rate, a node can detect an attack when the two rates remain the same, in which case the packets from those nodes are discarded (Arunmozhi and Venkataramani, 2011).
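The per-node flow monitoring table can be sketched as follows. The detection rule here follows the description above (a flow whose rate stays identical between observations is flagged); the class structure and exact-equality threshold are illustrative.

```python
# Flow monitoring table: remember each neighbour's last sending rate and
# flag a flow as an attack when its rate does not change between
# observations, as attack floods tend to send at a constant rate.

class FlowMonitor:
    def __init__(self) -> None:
        self.previous_rates: dict = {}  # node id -> last observed rate (pkts/s)

    def observe(self, node: str, rate: float) -> bool:
        """Returns True if this node's packets should be discarded."""
        prev = self.previous_rates.get(node)
        self.previous_rates[node] = rate
        return prev is not None and prev == rate
```

Legitimate traffic rates fluctuate between observations, so normal flows keep passing while a constant-rate flood is discarded after its second observation.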

Bibliography

Agerholm, H. (2016) Snooper’s charter dealt blow after EU’s highest court rules ‘indiscriminate’ government retention of emails is illegal. Available at: http://www.independent.co.uk/news/uk/politics/snoopers-charter-eu-court-ruling-illegal-investigatory-powers-act-emails-david-davis-a7488041.html (Accessed: 22 December 2016).

Arunmozhi, S.A. and Venkataramani, Y. (2011) ‘DDoS attack and defense scheme in wireless ad hoc networks’, International Journal of Network Security & Its Applications, 3(3), pp. 182-187. doi: 10.5121/ijnsa.2011.3312.

Beurdouche, B., Bhargavan, K., Delignat-Lavaud, A., Fournet, C., Kohlweiss, M., Pironti, A., Strub, P.-Y. and Zinzindohoue, J.K. (2015) ‘A messy state of the union: Taming the composite state machines of TLS’, 2015 IEEE Symposium on Security and Privacy. doi: 10.1109/sp.2015.39.

Caballero, J., Bodden, E. and Athanasopoulos, E. (2016) Engineering secure software and systems: 8th international symposium, ESSoS .. Available at: https://books.google.co.uk/books?id=j6vWCwAAQBAJ&pg=PA125&dq=FREAK+SSL/TLS+Vulnerability&hl=en&sa=X&ved=0ahUKEwjlkuXEr8TRAhWhKMAKHQWMBjMQ6AEIOjAB#v=onepage&q=FREAK%20SSL%2FTLS%20Vulnerability&f=false (Accessed: 15 January 2017).

Douligeris, C. and Mitrokotsa, A. (2004) ‘DDoS attacks and defense mechanisms: Classification and state-of-the-art’, Computer Networks, 44(5), pp. 643-666. doi: 10.1016/j.comnet.2003.10.003.

Doyle, T. (2011) ‘Daniel J. Solove, nothing to hide: The false tradeoff between privacy and security’, The Journal of Value Inquiry, 46(1), pp. 107-112. doi: 10.1007/s10790-011-9303-z.

Golle, P. (2008) ‘Machine learning attacks against the Asirra CAPTCHA’, Proceedings of the 15th ACM Conference on Computer and Communications Security – CCS ’08. doi: 10.1145/1455770.1455838.

Investigatory Powers Act 2016, c. 25. Available at: http://www.legislation.gov.uk/ukpga/2016/25/pdfs/ukpga_20160025_en.pdf (Accessed: 22 December 2016).

Matsumoto, T., Matsumoto, H., Yamada, K. and Hoshino, S. (2002) ‘Impact of artificial “gummy” fingers on fingerprint systems’, Proceedings of SPIE 4677, Optical Security and Counterfeit Deterrence Techniques IV, pp. 275-289. doi: 10.1117/12.462719.

Muscat, I. (2016) ‘Web vulnerabilities: Identifying patterns and remedies’, Network Security, 2016(2), pp. 5-10. doi: 10.1016/s1353-4858(16)30016-2.