The research work undertaken by the Data Communication Security Laboratory can be broadly defined under the following themes:
Development of verifiably secure protocols and algorithms for open and hostile environments:
Security protocols are among the most critical elements in enabling the secure communication and processing of information, ensuring its confidentiality, integrity, authenticity, and availability. Security protocols are built on lower-level cryptographic algorithms. These protocols are vulnerable to a host of subtle attacks, so designing protocols to be impervious to such attacks has proved to be extremely challenging and error prone. Frequently, informal and intuitive techniques are used to verify such protocols. However, the absence of formal verification can leave flaws and security errors undetected [KW94], [SM95], [Rosc95], [Huim99], [Brac00], [GR02], [CDF03], [NC03]. During the design of security protocols it must be assumed that the communication network is hostile and under the complete control of an adversary. The main difficulty in verifying security protocols is modelling the many ways in which the adversary can gain information. In contrast to communications protocols, where the main issues are reachability of all legal states and avoidance of infinite loops, security protocol verification is concerned with the gain of information by an adversary. The adversary can be either passive (merely listening to communication) or active (modifying message content, reordering messages, dropping messages, etc.). Further, an adversary might use simultaneous protocol runs to gain information. One must keep in mind that an adversary will not play by any rules. Hence, a security protocol should attempt to achieve its goals under all possible circumstances. However, it can be shown that complete protocol security is undecidable [CDLMS99].
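To give a flavour of how such an adversary is commonly modelled in formal analysis, the following sketch closes an attacker's knowledge under the splitting of compound messages and decryption with known keys, in the style of a much simplified Dolev-Yao intruder. The message encoding and all names here are illustrative assumptions only, not the laboratory's actual model.

# A minimal sketch of a simplified Dolev-Yao attacker: starting from
# observed messages, the attacker closes its knowledge under splitting
# pairs and decrypting with keys it already knows.
# Messages are nested tuples: ("pair", a, b) or ("enc", payload, key);
# everything else is an atom (a nonce, key, or identity).

def close_knowledge(observed):
    """Compute the set of terms an attacker can derive from 'observed'."""
    known = set(observed)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            parts = []
            if isinstance(term, tuple) and term[0] == "pair":
                parts = [term[1], term[2]]           # split a pair
            elif isinstance(term, tuple) and term[0] == "enc" and term[2] in known:
                parts = [term[1]]                    # decrypt with a known key
            for p in parts:
                if p not in known:
                    known.add(p)
                    changed = True
    return known

# Example: the attacker observes {Na}_k and, separately, the key k leaks.
msgs = {("enc", "Na", "k"), "k"}
assert "Na" in close_knowledge(msgs)   # the nonce is compromised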
The protocol developed by Beller, Chang and Yacobi [BCY93], also known as the BCY protocol, provides a very interesting case study. Published in 1993, the BCY protocol demonstrated the feasibility of public-key cryptography in mobile communications. Subsequently, Carlsen discovered weaknesses in the protocol and published an amended protocol [Carl94]. Further weaknesses were discovered in Carlsen's amended protocol by Mu and Varadharajan, and another modified protocol was published [MV96]. Horn, Mitchell and Martin identified a weakness in the Mu and Varadharajan version of the BCY protocol [HMM02], but did not publish a corrected version. In 2003, Coffey, Dojen and Flanagan published formal verifications of the BCY protocol and its derivatives [CDF03]. A hitherto unknown flaw present in all versions of the BCY protocol was identified, and all previously identified weaknesses were also detected. A corrected protocol was proposed and formally verified. This case study highlights that the design of security protocols is a highly complex and error-prone process, and that formal verification is an imperative step in the design of security protocols.
Members of the DCSL have developed and verified cryptographic protocols for services such as authentication, key agreement, non-repudiation, fair-exchange, and hybrid and wireless communications. Protocol verifications have resulted in the discovery of a number of hitherto unknown protocol flaws and the publication of new, verifiably secure protocols. The non-repudiation scheme of Coffey and Saidha [CS96] was the first published security protocol to use an inline Trusted Third Party (TTP) server to resolve the issue of fair exchange between peer systems (an issue that was generally considered "intractable"). The CS scheme contributed to developing new mechanisms that provide the OSI non-repudiation service in a strict sense. The scheme is widely cited by peer researchers in the security domain.
Formal verification of security protocols using logic-based and state-space-based techniques.
Formal verification is an essential step in confirming the security of cryptographic protocols. Several techniques can be used to analyse the security of protocols. These include state machines and modal logics; both have been used to discover numerous flaws in security protocols, e.g. [KW94], [SM95], [Rosc95], [Huim99], [Brac00], [GR02], [CDF03], [NC03].
State machines can be used to analyse protocols using a method known as reachability analysis [West78]. Using this technique, the global state of the system is expressed for each transition. Each global state is then analysed: if a state is discovered in which an attacker gains access to information that should be secret, the protocol is flawed. An exhaustive search checks that all reachable states are safe. Reachability analyses are suitable for determining whether a protocol is correct with respect to its specification, but they do not guarantee security against an active attacker. Another serious limitation of this technique is that drastic assumptions are required in order to keep the state space small [Paul98].
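A minimal sketch of this idea, under assumed toy semantics (the state encoding, transition function and safety predicate are all placeholders), might look as follows:

# A minimal sketch of reachability analysis: states are explored
# breadth-first from the initial state, and each reachable state is
# checked against a safety predicate ("the attacker never learns the
# secret"). All names are illustrative.
from collections import deque

def reachable_unsafe(initial_state, transitions, is_unsafe):
    """Exhaustively search the state space; return an unsafe state if found."""
    seen = {initial_state}
    queue = deque([initial_state])
    while queue:
        state = queue.popleft()
        if is_unsafe(state):
            return state                 # flaw: an unsafe state is reachable
        for nxt in transitions(state):   # all protocol and adversary moves
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                          # every reachable state is safe

# Toy example: a counter "protocol" in which state 3 leaks the secret.
flaw = reachable_unsafe(0, lambda s: [s + 1] if s < 5 else [], lambda s: s == 3)
print(flaw)  # -> 3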
Some state-machine tools were not designed specifically for security protocol analysis. A problem with these tools [Kemm89] is that they cannot identify subtle flaws particular to the security domain, such as replay attacks. Tools designed especially for protocol analysis address this problem, but suffer from another limitation that affects all state machines: while methods based purely on state-space searching can determine whether a protocol contains a given flaw, they are unlikely to discover unknown types of flaws. Due to this limitation, techniques based on state machines will never replace other verification techniques, although they can be used effectively in conjunction with them.
Modal Logics
The technique of logic-based formal verification of security protocols is largely attributed to Burrows, Abadi and Needham, developers of the BAN logic [BAN89]. This work initiated intense research in the area of logic-based formal verification. Several logics [GNY90], [CS97], [ZV01] have been developed on the basis of BAN. These logics can be used to generate concise proofs and have identified a number of flaws in protocols previously considered secure. They incorporate several improvements over BAN and are applicable to a wider range of protocols.
Logic-based formal verification involves the following steps:
1. The first step in logic-based verification is to specify the protocol in the language of the logic by expressing each protocol message as a logical formula. This step is known as protocol formalisation (some authors also refer to it as idealisation). A formal description of the protocol, obtained by formalisation, does not simply list the components of each message but attempts to show the purpose of these components so as to avoid ambiguity.
2. The second step is to formally specify the initial protocol assumptions. These assumptions reflect the beliefs and possessions of the involved principals at the beginning of each protocol run.
3. In the third step, the desired protocol goals are expressed in the language of the logic. These goals are specified in terms of the beliefs and possessions of the protocol participants at the end of a successful protocol run.
4. The final step is the application of logical postulates to establish the beliefs and possessions of the protocol principals. The objective of the logical analysis is to verify whether the desired goals of the protocol can be derived from the initial assumptions and protocol steps. If such a derivation exists, the protocol is successfully verified; otherwise, the verification fails.
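As an illustration of the postulates applied in the final step, two standard rules of the BAN logic [BAN89], the message-meaning and nonce-verification rules, are sketched below in LaTeX notation:

% Message-meaning rule: if P believes K is a good key shared with Q,
% and P sees X encrypted under K, then P believes Q once said X.
\[
\frac{P \mid\equiv P \stackrel{K}{\leftrightarrow} Q, \quad P \triangleleft \{X\}_K}
     {P \mid\equiv Q \mid\sim X}
\]

% Nonce-verification rule: if P believes X is fresh and that Q once
% said X, then P believes Q currently believes X.
\[
\frac{P \mid\equiv \#(X), \quad P \mid\equiv Q \mid\sim X}
     {P \mid\equiv Q \mid\equiv X}
\]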
A successfully verified protocol can be considered secure within the scope of the logic. On the other hand, even the results of a failed verification are helpful, as these may point to missing assumptions or weaknesses in the protocol. If a weakness is discovered, the protocol should be redesigned and re-verified.
Members of the DCSL have widely used and developed formal verification techniques for cryptographic protocols. For example, the Coffey-Saidha logic (CS logic) is a formal technique for reasoning about public-key cryptographic protocols [CS97]. The CS logic combines the modal logics of knowledge and belief and can analyse the evolution of both knowledge and belief during a protocol execution. It is therefore useful in addressing issues of both security and trust. The CS logic is capable of modelling a wide range of public-key cryptographic protocols. It has been used to verify protocols such as fair-exchange, non-repudiation, key establishment, authentication, and zero-knowledge protocols. Kudo and Mathuria independently published an extension of the Coffey-Saidha logic for the verification of time-release protocols [KM99]. The Coffey-Saidha logic was also extended in 2003 by Newe and Coffey, enabling the modelling of hybrid protocols [NC03].
Automated reasoning and knowledge engineering
The work on automated reasoning focuses on the automation of verification logics, while the work on knowledge engineering concentrates on intelligent data mining techniques.
Automation of Verification Logics
Formal logics have played an important role in many research fields. In recent years they have been increasingly used in engineering to verify the correctness of theories and designs [Gabb98].
While logics used for verification purposes ("verification logics") are powerful techniques for establishing that a design meets its initial specification, their manual application often requires in-depth expertise. Further, the process is complex, tedious and prone to error. This is a serious issue, as a single mistake during any stage of the verification process can render the result of the verification useless. Automated techniques reduce the potential for human error during verification, while simultaneously removing the need for in-depth knowledge of the employed verification logic [DC05].
The application of the postulates of a verification logic is tedious and error prone. Hence, software that automatically applies the axioms of the logic offers significant benefits to the system designer [CDF03], as several possible sources of error are automatically removed. Moreover, the effort involved in protocol verification is considerably reduced, since familiarity with the axioms is no longer required, and the time taken to perform the verification drops dramatically: software can verify a system in minutes, while a comparable manual proof often requires hours or days.
Most current attempts to automate verification logics make use of generic theorem provers, such as HOL [GM93], Coq [HFWHD91], Isabelle [Paul91] and PVS [ORRSS96]. Owing to their ability to prove generic theorems, these tools require substantial resources to operate. Further, such theorem provers usually only return the results of the performed proofs, without providing any details of how these results were reached. In the case of security protocol verification, however, such details are very useful to the protocol designer. This is particularly true for a failed verification, as the details of the proof help identify the exact cause of the failure.
A new proving engine supporting the automation of verification logics, called Layered Proving Trees, has recently been developed by Dojen and Coffey [DC05]. The main advantages of the new automation scheme are its low resource requirements, its ability to trace a proof after completion, and its ability to model verification logics accurately.
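The sketch below is not the Layered Proving Trees engine itself, but a minimal illustration of the underlying idea of automated, traceable rule application: facts are closed under inference rules, and every derived fact records the rule and premises that produced it, so a completed proof can be traced afterwards. All rule and fact names are invented for the example.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (name, premises, conclusion)."""
    proof = {f: ("assumption", []) for f in facts}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in proof and all(p in proof for p in premises):
                proof[conclusion] = (name, premises)   # record derivation
                changed = True
    return proof

def trace(goal, proof, depth=0):
    """Print how 'goal' was derived, premise by premise."""
    rule, premises = proof[goal]
    print("  " * depth + f"{goal}   [{rule}]")
    for p in premises:
        trace(p, proof, depth + 1)

rules = [("nonce-verification",
          ["A believes fresh(Na)", "A believes B said Na"],
          "A believes B believes Na")]
proof = forward_chain({"A believes fresh(Na)", "A believes B said Na"}, rules)
trace("A believes B believes Na", proof)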
Intelligent Data Mining Techniques
The research on data mining concerns the discovery of interesting patterns in large amounts of data. Particular focus is given to association-rule discovery. Contributions so far include the development of an optimised association-rule discovery technique and the implementation of a flexible framework for testing and comparing different data mining algorithm implementations.
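As a point of reference, the classical Apriori approach to association-rule discovery (the textbook algorithm, not the laboratory's optimised technique) can be sketched as follows: itemsets are grown level by level, keeping only those whose support meets a threshold. The example data is invented.

def frequent_itemsets(transactions, min_support):
    """Return itemsets appearing in at least min_support transactions."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    k = 1
    while level:
        for s in level:
            frequent[s] = sum(s <= t for t in transactions)
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        # (a full Apriori implementation would also prune by subsets)
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = {c for c in candidates
                 if sum(c <= t for t in transactions) >= min_support}
        k += 1
    return frequent

baskets = [frozenset(t) for t in
           [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]]
print(frequent_itemsets(baskets, 2))   # e.g. {bread, milk} has support 2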
Cryptanalysis and investigation into the cryptographic strength of new ciphers.
This work is concerned with researching developments in recent symmetric ciphers. The research includes developments in linear and differential cryptanalysis techniques and the application of these techniques to symmetric ciphers. The focus of the work is on proving or disproving the security attributes of ciphers.
While cryptography is the art of securing information (transforming plaintext into ciphertext), its counterpart, cryptanalysis, attempts either to recover the information from the ciphertext without knowledge of the key, or to recover the key itself [Schn96], [MOV97]. In general, it is assumed that the attacker has complete access to the communication line between sender and receiver. In addition, it is assumed that the attacker has complete knowledge of the cryptographic algorithm used. Even though this is seldom the case in the real world, it is a sound assumption: if the algorithm cannot be broken with this knowledge, it certainly cannot be broken without it.
Differential cryptanalysis [BS91], [BS93] examines pairs of plaintexts with a particular difference and analyses the evolution of these differences through the enciphering process. By looking at the resulting differences in the ciphertexts, the key, or parts of the key, can be deduced. Even if the key cannot be deduced with certainty, different probabilities can be assigned to different candidate keys. Since both the plaintexts and the ciphertexts are known to the attacker and the plaintexts have to satisfy particular properties, this is a chosen-plaintext attack. By increasing the number of plaintexts, it can be converted into a known-plaintext attack, on the assumption (by statistical laws) that the known plaintexts contain sufficient pairs with the required properties.
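The first step of such an analysis is typically to tabulate how differences propagate through the cipher's S-boxes. The sketch below builds the difference distribution table (DDT) of an example 4-bit S-box; the S-box values are chosen purely for illustration.

# For every input difference dx, count how often each output difference
# dy = S(x) XOR S(x XOR dx) occurs over all 16 inputs. High-probability
# entries are the starting point for a differential attack.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):                     # input difference
        for x in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]     # resulting output difference
            table[dx][dy] += 1
    return table

table = ddt(SBOX)
# The best non-trivial differential predicts dy from dx with
# probability table[dx][dy] / 16.
best = max((table[dx][dy], dx, dy)
           for dx in range(1, 16) for dy in range(16))
print(best)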
Linear cryptanalysis [Mats93], [Mats94] uses linear approximations to describe the F-function of block ciphers. Simplified, this means that if certain plaintext bits and ciphertext bits are XORed together, the result equals the XOR of certain key bits with some probability p, for the key and any random plaintext/ciphertext pair. With this method the adversary can deduce part of the key; the remaining key bits can be obtained by brute-force search. Linear cryptanalysis is a known-plaintext attack, as the adversary only has to know, rather than choose, the plaintexts and ciphertexts.
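The corresponding computation for linear cryptanalysis searches for input/output bit masks whose XOR relation holds with probability furthest from 1/2; this deviation is the bias the attack exploits. A minimal sketch, using the same illustrative S-box as above:

# For each pair of masks (a, b), count how often the parity of the
# masked input bits equals the parity of the masked output bits over
# all 16 inputs. The deviation from 8 matches is the bias.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(x):
    return bin(x).count("1") & 1

def best_linear_approximation(sbox):
    best = (8, 0, 0)
    for a in range(1, 16):                  # input mask
        for b in range(1, 16):              # output mask
            matches = sum(parity(x & a) == parity(sbox[x] & b)
                          for x in range(16))
            if abs(matches - 8) > abs(best[0] - 8):
                best = (matches, a, b)
    return best                             # (count, input mask, output mask)

count, a, b = best_linear_approximation(SBOX)
print(count, a, b)   # bias = |count - 8| / 16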
As part of this work Dojen and Coffey developed a new technique known as "Conditional Linear Cryptanalysis", for use in the analysis of symmetric ciphers with key-dependent operations [DC04]. A successful attack on known ciphers using the new technique has also been achieved.
Multi-layer security architectures for virtual networks
The proliferation of the Internet in recent years has radically altered the landscape of network security. According to the 2004 CSI/FBI Computer Crime and Security Survey [CCSS04], 98% of the companies surveyed employed firewalls and 68% employed IDS to protect their communication networks. Despite these security measures, 53% of the companies surveyed reported unauthorised use of their computer systems, resulting in an estimated financial loss of $140 million. It is evident that despite the high utilisation of network security tools, computer networks are still not adequately protected.
Despite the level of protection offered by network security tools such as firewalls [Bren92], [Hunt98], [CBR03] and IDS [Schep98], [VR02], the persistently high rate of intrusions underlines the need for improvement. Network security tools clearly possess a number of vulnerabilities that prevent proper protection of networks against attack. These vulnerabilities can be grouped under the following headings:
Location of Deployment
Network security tools are typically either network-based or host-based, and each location has its associated limitations. Network-based limitations include the failure to protect systems located outside the Local Area Network in which the tool is deployed, the inability to defend against attacks hidden in host-to-host encrypted traffic, the tool representing a single point of failure, and the inability to prevent insider attacks. Host-based limitations include the inability to detect complex attacks spanning multiple systems and the lack of centralised management.
Complementary Limitations
Complementary limitations refer to weaknesses in the combined security offered by network security tools as a whole. Firewalls and Intrusion Detection Systems, although naturally complementary, suffer from problems due to the passive nature of Intrusion Detection Systems and the high rate of false positives.
Defence-in-Depth
Defence-in-depth is the proposition that multiple layers of security offer greater protection than any single layer. Each technique or information source used by a network security tool constitutes a separate layer of defence. Hence, network security tools using a single technique or information source fail to adequately protect the network from attack.
Members of the DCSL developed a Distributed Cooperative Multi-layer Security Architecture (DCMSA) for virtual networks [SDC05a,b]. The DCMSA comprises a number of features designed to enhance the protection afforded to the virtual network. The DCMSA employs distributed security enforcement combined with centralised management. This scheme overcomes location-of-deployment shortcomings, maximising the protection offered to the virtual network. In addition, the DCMSA facilitates cooperation among a wide range of network security tools; the synergy of tools working together allows greater protection than when they work separately. Further, the DCMSA adheres to the principle of defence-in-depth by using multiple layers of security, which increases the difficulty of successfully attacking the network.
Watermarking and information hiding for multimedia document protection.
Digital watermarking, information embedding, data hiding, and steganography have received considerable attention recently. These terms, which can for the most part be used interchangeably, refer to the process of embedding one signal, called the "embedded signal" or "digital watermark", within another signal, called the "host signal". The host signal is typically a speech, audio, image, or video signal, and the watermarking must be done in such a way that the host signal is not perceptibly degraded. At the same time, the digital watermark must be difficult to remove without causing significant damage to the host signal, and must reliably survive common signal processing manipulations such as lossy compression, additive noise, and resampling. Many emerging digital watermarking applications are a direct result of the explosive increase in the use of digital media for communications. For example, the ease with which digital signals can be copied has led to a need for digital watermarking techniques to protect copyrighted music, video, and images distributed in digital formats. To satisfy this need it has been proposed to embed within these source signals a digital watermark designed to thwart unauthorised copying. This embedded signal could be a digital "fingerprint" that uniquely identifies the original purchaser of the copyrighted work. The purpose of this work is to investigate and implement a digital watermarking technique for use with sound and video files.
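As a simple illustration of embedding (deliberately naive, and far weaker than the robust, transform-domain techniques this work targets), a watermark can be hidden in the least significant bits of the host samples:

# A minimal least-significant-bit (LSB) embedding sketch. LSB hiding is
# imperceptible but not robust: lossy compression or noise destroys it,
# which is exactly why robust watermarking research is needed.
import numpy as np

def embed_lsb(host, bits):
    """Hide 'bits' (0/1 array) in the LSBs of the first host samples."""
    marked = host.copy()
    marked[:len(bits)] = (marked[:len(bits)] & 0xFE) | bits
    return marked

def extract_lsb(marked, n):
    """Recover n hidden bits from the marked signal."""
    return marked[:n] & 1

host = np.random.randint(0, 256, 1024, dtype=np.uint8)   # stand-in samples
watermark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
marked = embed_lsb(host, watermark)
assert np.array_equal(extract_lsb(marked, len(watermark)), watermark)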
Robust power-line LAN protocols and security services systems.
Mobile agent-based middleware & virtual learning environments and reusable framework-based software development.
Salomie and Gyorodi have investigated domain problem decomposition and modelling using stationary and mobile agents and role-based strategies. In addition, new paradigms for knowledge capturing and deployment to remote consumers by means of mobile agents have been developed, along with new reusable and scalable framework solutions for designing Web and context-aware applications. Coffey, Salomie and Gyorodi have developed scalable frameworks for enforcing policy-based security levels.
Alternative, non-intrusive identity verification systems based on face recognition techniques
In the last few years the number of on-line transactions has increased tremendously, particularly Internet transactions. For such transactions, security is a major concern, and a large part of providing this security is establishing the identity of the parties involved. The identification techniques most widely used today are based on "what we know" rather than "who we are". This poses a security hole because, for example, a PIN can be lost or stolen, and even a credit card number can be used by someone else.
Biometric techniques, on the other hand, are based on measurements of personal characteristics. Many of these methods are intrusive or inconvenient; for example, fingerprinting and retinal scanning require special sensors. This research involves the development of a non-intrusive identity confirmation system based on an image of the face, which can be obtained discreetly. This constitutes an improvement in convenience and flexibility that other methods cannot achieve.
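For illustration, one classical face-recognition approach is the eigenfaces method: face images are flattened to vectors, projected onto the principal components of a training set, and a probe face is identified by its nearest neighbour in that subspace. The sketch below is illustrative only, with made-up data, and is not necessarily the technique developed in this research.

import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) array. Returns the mean face and top-k basis."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def identify(probe, faces, mean, basis):
    """Return the index of the training face closest to the probe."""
    coords = (faces - mean) @ basis.T        # gallery in eigenface space
    p = (probe - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - p, axis=1)))

gallery = np.random.rand(10, 64 * 64)        # stand-in for real face images
mean, basis = train_eigenfaces(gallery, k=5)
print(identify(gallery[3], gallery, mean, basis))   # -> 3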