DCSL Research Themes       


The research work undertaken by the Data Communication Security Laboratory can be broadly grouped under the following themes:

 


Development of verifiably secure protocols and algorithms for open and hostile environments:


Security protocols are one of the most critical elements in enabling the secure communication and processing of information, ensuring its confidentiality, integrity, authenticity, and availability. Security protocols are built on lower-level cryptographic algorithms. These protocols are vulnerable to a host of subtle attacks, so designing protocols to be impervious to such attacks has proved to be extremely challenging and error-prone. Frequently, informal and intuitive techniques are used to verify such protocols. However, the absence of formal verification of these protocols can lead to flaws and security errors remaining undetected [KW94], [SM94], [Rosc95], [Huim99], [Brack00], [GR02], [CDF03], [NC03]. During the design of security protocols it has to be assumed that the communication network is hostile and under the complete control of an adversary. The main difficulty in verifying security protocols lies in modelling the vast range of ways in which an adversary can gain information. In contrast to communications protocols, where the main issues are reachability of all legal states and avoidance of infinite loops, security protocol verification deals with the gain of information by an adversary. The adversary can be either passive (merely listening to communications) or active (modifying message content or message order, dropping messages, etc.). Further, an adversary might use simultaneous protocol runs to gain information. One has to keep in mind that an adversary will not play by any rules. Hence, a security protocol should attempt to achieve its goals under all possible circumstances. However, it can be shown that complete protocol security is undecidable [CDLMS99].
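
This modelling difficulty can be made concrete with a small sketch. The following Python fragment is an illustration only, not the DCSL's verification tooling: it models the common view of attacker knowledge as a set of observed terms closed under splitting of pairs and decryption with keys the intruder already holds (a full model would also close the set under pairing and encryption).

```python
# A minimal sketch (not the DCSL's verification tooling) of how attacker
# knowledge is commonly modelled: every term observed on the network is closed
# under splitting of pairs and decryption with keys the intruder already knows.

def pair(a, b):
    return ("pair", a, b)

def enc(message, key):
    return ("enc", message, key)

def intruder_closure(observed):
    knowledge = set(observed)
    changed = True
    while changed:
        changed = False
        for term in list(knowledge):
            derivable = []
            if isinstance(term, tuple) and term[0] == "pair":
                derivable = [term[1], term[2]]                  # split a pair
            elif isinstance(term, tuple) and term[0] == "enc" and term[2] in knowledge:
                derivable = [term[1]]                           # decrypt with a known key
            for t in derivable:
                if t not in knowledge:
                    knowledge.add(t)
                    changed = True
    return knowledge

# The intruder records {Na, Nb}_K on the network and later learns K,
# so both nonces are compromised.
observed = {enc(pair("Na", "Nb"), "K"), "K"}
print("Na" in intruder_closure(observed))                       # True
```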

The protocol developed by Beller, Chang and Yacobi [BCY93], also known as the BCY protocol, provides a very interesting case study. Published in 1993, the BCY protocol demonstrated the feasibility of public-key cryptography in mobile communications. Subsequently, Carlsen discovered weaknesses in the protocol and published an amended protocol [Carl94]. Further weaknesses were discovered in Carlsen's amended protocol by Mu and Varadharajan, and another modified protocol was published [MV96]. Horn, Martin and Mitchell identified a weakness in the Mu and Varadharajan version of the BCY protocol [HMM02], but did not publish a corrected version. In 2003 Coffey, Dojen and Flanagan published formal verifications of the BCY protocol and its derivatives [CDF03]. A hitherto unknown flaw present in all versions of the BCY protocol was identified. Further, all previously identified weaknesses were also detected. A corrected protocol was proposed and also formally verified. This case study highlights that the design of security protocols is a highly complex and error-prone process. It also shows that formal verification is an imperative step in the design of security protocols.

Members of the DCSL have developed and verified cryptographic protocols for services such as authentication, key agreement, non-repudiation, fair exchange, and hybrid and wireless communications. Protocol verifications have resulted in the discovery of a number of hitherto unknown protocol flaws and the publication of new, verifiably secure protocols. The non-repudiation scheme of Coffey and Saidha [CS96] was the first published security protocol to use an inline Trusted Third Party (TTP) server to resolve the issue of fair exchange between peer systems (an issue that was generally considered "intractable"). The CS scheme contributed to developing new mechanisms that provide the OSI non-repudiation service in a strict sense. The scheme is widely cited by peer researchers in the security domain.


References

[BCY93]

Beller, M.J., Chang, L.-F. and Yacobi, Y.: "Privacy and authentication on a portable communications system." IEEE Journal on Selected Areas in Communications, Vol. 11, No. 6, 1993, pp.821-829.

[Brack00]

Brackin, S.: "Automatically detecting most vulnerabilities in cryptographic protocols." Information Survivability Conference and Exposition, Vol. 1, Hilton Head, South Carolina, USA, January 2000, pp.222-236.

[Carl94]

Carlsen, U.: "Optimal privacy and authentication on a portable communications system." ACM Operating Systems Review, Vol. 28, No.3, 1994, pp.16-23.

[CDF03]

Coffey, T., Dojen, R. and Flanagan, T.: "Formal Verification: An Imperative Step in the Design of Security Protocols." Computer Networks Journal, Vol. 43, No.5, 2003, pp.601-618

[CDLMS99]

Cervesato, I., Durgin, N., Lincoln, P., Mitchell, J. and Scedrov, A.: "A meta-notation for protocol analysis." IEEE Computer Security Foundations Workshop, Mordano, Italy, June 1999, pp.55-71.

[CS96]

Coffey, T. and Saidha, P., "Non-Repudiation with Mandatory Proof of Receipt." ACM Computer Communication Review, Vol. 26, No. 1, Jan 1996, pp. 6-17.

[GR02]

Gürgens, S. and Rudolph, C.: "Security Analysis of (Un-)Fair Non-repudiation Protocols." Formal Aspects of Security, Lecture Notes in Computer Science 2629, 2002, pp.97-114

[HMM02]

Horn, G., Martin, K. and Mitchell, C.: "Authentication Protocols for Mobile Network Environment Value-added Services." IEEE Transactions on Vehicular Technology, Vol. 51, No. 2, 2002, pp.383-392.

[Huim99]

Huima, A.: "Efficient infinite-state analysis of security protocols." Proceedings of FLOC'99 Workshop on Formal Methods and Security Protocols, July 1999.

[KW94]

Kessler, V. and Wendel, G.: "AUTLOG – An advanced logic of authentication." Proceedings of 7th IEEE Computer Security Foundations Workshop, Menlo Park, California, USA, August 1994, pp.90-99.

[MV96]

Mu, Y. and Varadharajan, V.: "On the design of security protocols for mobile communications." Australasian Conference on Information Security and Privacy, Wollongong, Australia, June 1996, pp.134-145.

[NC03]

Newe T. and Coffey T.: "Formal Verification logic for hybrid security protocols." Journal of Computer Systems Science & Engineering, Vol. 18, No. 1, 2003, pp.17-25.

[Rosc95]

Roscoe, A.: "Modelling and verifying key-exchange protocols using CSP and FDR." Proceedings of 8th IEEE Computer Security Foundations Workshop, Kenmare, Ireland, June 1995, 98.

[SM94]

Syverson, P. and Meadows, C.: "Formal requirements for key distribution protocols." Proceedings of Advances in Cryptology - EUROCRYPT'94, Perugia, Italy, May 1994, pp.320-331.



Formal verification of security protocols using logic-based and state-space-based techniques.


Formal verification is an essential step to confirm the security of cryptographic protocols. Several techniques can be used to analyze the security of protocols. These include state machines and modal logics - both have been used to discover numerous flaws in security protocols, e.g. [KW94], [SM94], [Rosc95], [Huim99], [Brack00], [GR02], [CDF03], [NC03].

State Machines

State machines can be used to analyze protocols using a method known as the reachability analysis technique [West78]. Using this technique, the global state of the system is expressed for each transition. Each global state is then analyzed - if a state is discovered where an attacker gains access to information that should remain secret, the protocol is flawed. An exhaustive search checks that all reachable states are safe. Reachability analyses are suitable for determining if a protocol is correct with respect to its specification, but they do not guarantee security from an active attacker. Another serious limitation of this technique is that drastic assumptions are required in order to keep the state-space small [Paul98].
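
As an illustration of the idea only, the sketch below performs a breadth-first reachability search over an invented toy transition relation and reports the first reachable state in which the intruder holds the secret; the state encoding and transitions are assumptions made purely for the example.

```python
# An illustrative sketch of reachability analysis over an invented toy
# transition relation: global states are explored breadth-first from the
# initial state, and the search reports the first reachable state in which
# the intruder holds a term that should remain secret.

from collections import deque

def successors(state):
    """Toy transition relation: state = (protocol_step, intruder_knowledge)."""
    step, intruder = state
    nxt = []
    if step == 0:                                    # A sends {secret}key over the open network
        nxt.append((1, intruder | {("enc", "secret", "key")}))
    if step == 1:                                    # a careless participant reveals the key
        nxt.append((1, intruder | {"key"}))
    if step == 1 and "key" in intruder:              # the intruder decrypts the recorded message
        nxt.append((2, intruder | {"secret"}))
    return nxt

def unsafe(state):
    return "secret" in state[1]

def find_unsafe_state(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if unsafe(state):
            return state
        for succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return None                                      # no reachable unsafe state

print(find_unsafe_state((0, frozenset())))
```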

Some state-machine tools were not designed specifically for security protocol analysis. A problem with such tools [Kemm89] is that they cannot identify subtle flaws particular to the security domain, such as replay attacks. Tools designed especially for protocol analysis address this problem, but suffer from another limitation which affects all state-machine approaches. While methods based purely on state-space searching can determine if a protocol contains a given flaw, they are unlikely to discover unknown types of flaws in protocols. Due to this limitation, techniques based on state machines will never replace other verification techniques, although they can be used effectively in conjunction with these methods.

Modal Logics

The technique of logic-based formal verification of security protocols is attributed largely to Burrows, Abadi and Needham, developers of the BAN logic [BAN89]. This work initiated intense research in the area of logic-based formal verification. Several logics [GNY90], [CS97], [ZV01] have been developed on the basis of BAN. These logics can be used to generate concise proofs and have identified a number of flaws in protocols previously considered secure. They incorporate several improvements over BAN and are applicable to a wider range of protocols.

Logic-based formal verification involves the following steps:

  1. Formalisation of the protocol messages
  2. Specification of the initial assumptions
  3. Specification of the protocol goals
  4. Application of the logical postulates

The first step in logic-based verification involves specifying the protocol in the language of the logic by expressing each protocol message as a logical formula. This step is known as protocol formalisation (some authors also refer to it as idealisation). A formal description of the protocol, obtained by formalisation, does not simply list the components of each message but attempts to show the purpose of these components so as to avoid ambiguity. The second step in the verification process involves formally specifying the initial protocol assumptions. These assumptions reflect the beliefs and possessions of the involved principals at the beginning of each protocol run. In the third step, the desired protocol goals are expressed in the language of the logic. These goals are specified in terms of the beliefs and possessions of the protocol participants at the end of a successful protocol run. The final verification step concerns the application of logical postulates to establish the beliefs and possessions of protocol principals. The objective of the logical analysis is to verify whether the desired goals of the protocol can be derived from the initial assumptions and protocol steps. If such a derivation exists, the protocol is successfully verified; otherwise, the verification fails.
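
The final step can be illustrated with a small sketch. The fragment below is not the CS logic or any particular published logic; it simply shows how postulates, once encoded as inference rules, can be applied by forward chaining until the goals are derived or no new statements follow. The example rules loosely mimic BAN-style message-meaning and nonce-verification postulates, and all statement strings are hypothetical.

```python
# A simplified illustration of applying logical postulates by forward chaining;
# not the CS logic or any particular published logic.

def verify(assumptions, postulates, goals):
    derived = set(assumptions)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in postulates:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    missing = set(goals) - derived
    return not missing, missing

# Hypothetical formalisation, assumptions and goals for a one-message exchange:
assumptions = {"A believes fresh(Na)",
               "A believes key(Kab, A, B)",
               "A sees {Nb, Na}Kab"}
postulates = [
    ({"A believes key(Kab, A, B)", "A sees {Nb, Na}Kab"},
     "A believes B said (Nb, Na)"),
    ({"A believes fresh(Na)", "A believes B said (Nb, Na)"},
     "A believes B believes (Nb, Na)"),
]
goals = {"A believes B believes (Nb, Na)"}

ok, missing = verify(assumptions, postulates, goals)
print("verified" if ok else f"failed, missing: {missing}")      # verified
```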

A successfully verified protocol can be considered secure within the scope of the logic. On the other hand, even the results of a failed verification are helpful, as these may point to missing assumptions or weaknesses in the protocol. If a weakness is discovered, the protocol should be redesigned and re-verified.

Members of the DCSL have widely used and developed formal verification techniques for cryptographic protocols. For example, the Coffey-Saidha logic (CS logic) is a formal technique which can be used to reason about public-key cryptographic protocols [CS97]. The CS logic combines the modal logics of knowledge and belief and can analyse the evolution of both knowledge and belief during a protocol execution. It is therefore useful in addressing issues of both security and trust. The CS logic is capable of modelling a wide range of public-key cryptographic protocols. It has been used to verify protocols such as fair-exchange, non-repudiation, key establishment, authentication and zero-knowledge protocols. Kudo and Mathuria independently published an extension of the Coffey-Saidha logic for the verification of time-release protocols [KM99]. The Coffey-Saidha logic was also extended in 2003 by Newe and Coffey, enabling the modelling of hybrid protocols [NC03].


References

[BAN89]

Burrows, M., Abadi, M. and Needham, R.: "A logic of authentication." ACM Operating System Review, Vol. 23, No. 5, 1989, pp.1-13.

[Brack00]

Brackin, S.: "Automatically detecting most vulnerabilities in cryptographic protocols." Information Survivability Conference and Exposition, Vol. 1, Hilton Head, South Carolina, USA, January 2000, pp.222-236.

[CDF03]

Coffey, T., Dojen, R. and Flanagan, T.: "Formal Verification: An Imperative Step in the Design of Security Protocols." Computer Networks Journal, Vol. 43, No.5, 2003, pp.601-618

[CS97]

Coffey, T. and Saidha, P.: "A logic for verifying public key cryptographic protocols." IEE Proceedings - Computers and Digital Techniques, Vol. 144, No. 1, 1997, pp.28-32.

[GNY90]

Gong, L., Needham, R. and Yahalom, R.: "Reasoning about belief in cryptographic protocols." Proceedings of the 1990 IEEE Computer Society Symposium on Research in Security and Privacy, 1990, pp.234-248.

[Huim99]

Huima, A.: "Efficient infinite-state analysis of security protocols." Proceedings of FLOC'99 Workshop on Formal Methods and Security Protocols, July 1999.

[Kemm89]

Kemmerer, R.A.:"Analyzing Encryption Protocols using Formal Verification Techniques." IEEE Journal on Selected Areas in Communications, Vol. 7, No. 4, 1989, pp. 448-457.

[KM99]

Kudo, M. and Mathuria, A.: "An Extended Logic for Analyzing Timed-Release Public-Key Protocols." International Conference on Information and Communication Security, 1999, pp. 183-198.

[KW94]

Kessler, V. and Wendel, G.: "AUTLOG – An advanced logic of authentication." Proceedings of 7th IEEE Computer Security Foundations Workshop, Menlo Park, California, USA, August 1994, pp.90-99.

[NC03]

Newe T. and Coffey T.: "Formal Verification logic for hybrid security protocols." Journal of Computer Systems Science & Engineering, Vol. 18, No. 1, 2003, pp.17-25.

[Paul98]

Paulson, L. C.:"The Inductive Approach to Verifying Cryptographic Protocols." Journal of Computer Security, Vol. 6, 1998, pp. 85-128.

[Rosc95]

Roscoe, A.: "Modelling and verifying key-exchange protocols using CSP and FDR." Proceedings of 8th IEEE Computer Security Foundations Workshop, Kenmare, Ireland, June 1995, 98.

[SM94]

Syverson, P. and Meadows, C.: "Formal requirements for key distribution protocols." Proceedings of Advances in Cryptology - EUROCRYPT'94, Perugia, Italy, May 1994, pp.320-331.

[West78]

West, C. H.: "General Techniques for Communications Protocol Validation." IBM Journal of Research and Development, Vol. 22, No. 4, 1978, pp.393-404.

[ZV01]

Zhang, Y. and Varadharajan, V.: "A logic for modelling the dynamics of beliefs in cryptographic protocols." Australasian Computer Science Conference, 2001.



Automated reasoning and knowledge engineering


The work on automated reasoning focuses on the automation of verification logics, while the work on knowledge engineering concentrates on intelligent data mining techniques.

Automation of Verification Logics

Formal logics have played an important role in many research fields. In recent years they have been increasingly used in engineering to verify the correctness of theories and designs [Gabb98], for example in:

  • specification and verification of imperative programming languages.
  • distributed computing, concurrency and process control.
  • circuit design and VLSI.
  • verification of (cryptographic) protocols.

While logics for verification purposes (“verification logics”) are powerful techniques for establishing that a design meets its initial specification, the manual application of verification logics often requires in-depth expertise. Further, the process is complex, tedious and prone to error. This is a serious issue, as a single mistake during any stage of the verification process can render the result of the verification useless. Automated techniques reduce the potential for human errors during verification, while simultaneously removing the need for in-depth knowledge of the employed verification logic [DC05].

The application of the postulates of a verification logic is tedious and error prone. Hence, the use of software to automatically apply the axioms of the logic has the potential to offer significant benefits to the system designer [CDF03]. Several possible sources of error are automatically removed, as an automated system will:

  • not make implicit assumptions,
  • ensure thorough and unambiguous use of the postulates,
  • not make implicit assumptions about failed goals,
  • allow redundant assumptions to be identified easily.

Also, the effort involved in protocol verification can be considerably reduced, since familiarity with the axioms is no longer required. The time taken to perform the verification is greatly reduced as software can automatically verify a system in minutes while a similar manual proof often requires hours or days.

Most current attempts to automate verification logics make use of generic theorem provers, such as HOL [GM93], Coq [HFWHD91], Isabelle [Paul91] and PVS [ORRSS96]. Due to their ability to prove generic theorems, these tools require large resources to operate. Further, such theorem provers will usually only return the results of the performed proofs, but will not provide any details of how these results were obtained. However, in the case of security protocol verification, such details are very useful for the protocol designer. This is particularly true in the case of a failed verification, as the details of the proof are helpful in identifying the exact cause of the verification failure.
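
The value of such proof details can be illustrated with a small sketch. The fragment below is purely hypothetical and is not the interface of any actual theorem prover: it extends the forward-chaining idea shown earlier with a simple provenance record, so that a failed verification points directly at the underivable statement.

```python
# A small sketch of why proof details matter: forward chaining with a record
# of which postulate produced each derived statement. Names, rules and
# statements are hypothetical.

def verify_with_trace(assumptions, postulates, goals):
    derived = {a: "assumption" for a in assumptions}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in postulates:
            if conclusion not in derived and all(p in derived for p in premises):
                derived[conclusion] = name
                changed = True
    return {g: derived.get(g, "NOT DERIVABLE") for g in goals}

postulates = [("message-meaning",
               ["A believes key(Kab, A, B)", "A sees {Nb, Na}Kab"],
               "A believes B said (Nb, Na)")]

# The key assumption has been omitted, so the goal cannot be derived:
print(verify_with_trace(["A sees {Nb, Na}Kab"], postulates,
                        ["A believes B said (Nb, Na)"]))
# {'A believes B said (Nb, Na)': 'NOT DERIVABLE'}
```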

A new proving engine, called Layered Proving Trees, supporting the automation of verification logics has recently been developed by Dojen and Coffey [DC05]. The main advantages of the new automation scheme are its low resource requirements and its ability to trace the proof after completion and to model verification logics accurately.


References

[CDF03]

Coffey, T., Dojen, R. and Flanagan, T., "On the Automated Implementation of Modal Logics used to Verify Security Protocols", Proceedings of ISICT'03 (Invited Workshop on Network Security and Management at the International Symposium on Information and Communication Technologies), Dublin, Ireland, 24-26 September 2003, pp.322-328

[DC05]

Dojen, R. and Coffey, T., "Layered Proving Trees: A Novel Approach to the Automation of Logic-Based Security Protocol Verification", ACM Transactions on Information and System Security (TISSEC), Volume 8, Issue 3, August 2005, pp. 287-311.

[Gabb98] 

Gabbay, D., "Elementary Logic: A Procedural Perspective", Prentice Hall, 1998

[GM93]

Gordon, M. and Melham, T.F. "Introduction to HOL: A theorem proving environment for Higher Order Logic.", Cambridge University Press, 1993

[HFWHD91]

Huet, G., Felty, A., Werner, B., Herbelin, H. and Dowek, G., "Presenting the system Coq, version 5.6", in Proceedings of the Second Workshop on Logical Frameworks, Edinburgh, 1991

[ORRSS96]

Owre, S., Rajan, S., Rushby, J.M., Shankar, N. and Srivas, M.K., "PVS: Combining specification, proof checking, and model checking". Computer-Aided Verification, CAV '96, Volume 1102 of Lecture Notes in Computer Science, pages 411-414, New Brunswick, NJ, July/August 1996. Springer-Verlag

[Paul91]

Paulson, L.C., "Isabelle: The next 700 Theorem Provers." Logic and Computer Science, Piergiorgio Odifreddi (ed.), Academic Press, 1991, pp.361-386.


Intelligent Data Mining Techniques

The research on data mining concerns the discovery of interesting patterns from large amounts of data. Particular focus is given to association rules discovery. Contributions so far include the development of an optimized association rules discovery technique and the implementation of a flexible framework used for testing and comparing different data mining algorithm implementations.
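
As a rough illustration of the kind of computation involved (the optimised DCSL technique itself is not reproduced here), the sketch below finds frequent itemsets level by level in the classical Apriori style and reads association rules off their subsets; the transaction data is invented.

```python
# An Apriori-style illustration of association-rule discovery, not the
# optimised DCSL algorithm. Frequent itemsets are found level by level, then
# rules are generated from their subsets.

from itertools import combinations

def frequent_itemsets(transactions, min_support):
    level = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        level = {c for c, n in counts.items() if n / len(transactions) >= min_support}
        frequent.update({c: counts[c] for c in level})
        level = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
    return frequent

def association_rules(frequent, transactions, min_confidence):
    rules = []
    for itemset, count in frequent.items():
        for k in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, k)):
                confidence = count / sum(1 for t in transactions if lhs <= t)
                if confidence >= min_confidence:
                    rules.append((set(lhs), set(itemset - lhs), confidence))
    return rules

transactions = [frozenset(t) for t in
                ({"bread", "milk"}, {"bread", "butter"},
                 {"bread", "milk", "butter"}, {"milk"})]
frequent = frequent_itemsets(transactions, min_support=0.5)
for lhs, rhs, conf in association_rules(frequent, transactions, min_confidence=0.6):
    print(lhs, "=>", rhs, f"(confidence {conf:.2f})")
```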



Cryptanalysis and investigation into the cryptographic strength of new ciphers.


This work is concerned with researching developments in recent symmetric ciphers. The research includes developments in linear and differential cryptanalysis techniques and the application of these techniques to symmetric ciphers. The focus of the work is on proving or disproving the security attributes of ciphers.

While cryptography is the art of securing information (transforming plaintext into ciphertext), its counterpart, cryptanalysis, tries either to reveal the information from the ciphertext without knowledge of the key, or to reveal the key itself [Schn96], [MOV97]. In general, it is assumed that the attacker has complete access to the communication line between the sender and the receiver. In addition, it is assumed that the attacker has complete knowledge of the cryptographic algorithm used. Even if this is seldom the case in the real world, it is a sound assumption: if the algorithm cannot be broken with this knowledge, it certainly cannot be broken without it.

Differential cryptanalysis [BS91], [BS93] examines pairs of plaintexts with a particular difference and analyses the evolution of these differences through the encryption process. By looking at the resulting differences in the ciphertexts, the key, or parts of the key, can be deduced. Even if the key cannot be deduced with certainty, different probabilities can be assigned to different candidate keys. Since both the plaintexts and the ciphertexts are known to the attacker, and the plaintexts have to satisfy particular properties, this is a chosen-plaintext attack. By increasing the number of plaintexts, this can be changed to a known-plaintext attack, assuming (by statistical laws) that the known plaintexts contain sufficient pairs with the required properties.
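
The first step of such an analysis can be sketched concretely. The fragment below builds the difference distribution table of a toy 4-bit S-box (a common textbook example, not a cipher analysed by the DCSL) and lists, for each input difference, the most likely output difference that an attacker would chain through the rounds.

```python
# Difference distribution table of a toy 4-bit S-box (textbook example values).
# High-probability input/output difference pairs are the raw material of a
# differential attack.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def difference_distribution_table(sbox):
    size = len(sbox)
    ddt = [[0] * size for _ in range(size)]
    for x in range(size):
        for dx in range(size):
            dy = sbox[x] ^ sbox[x ^ dx]
            ddt[dx][dy] += 1
    return ddt

ddt = difference_distribution_table(SBOX)
for dx in range(1, 16):          # most likely output difference per input difference
    dy = max(range(16), key=lambda d: ddt[dx][d])
    print(f"dx={dx:x} -> dy={dy:x} with probability {ddt[dx][dy]}/16")
```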

Linear cryptanalysis [Mats93], [Mats94] uses linear approximations to describe the F-function of block ciphers. Simplified, this means that if some plaintext bits and ciphertext bits are XORed together, the result equals the XOR of some bits of the key with a probability p, for the key and any random plaintext/ciphertext pair. With this method the adversary can deduce part of the key; the remaining key bits can be obtained by a brute-force search. Linear cryptanalysis is a known-plaintext attack, as the adversary only has to know, rather than choose, the plaintexts and ciphertexts.
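
A matching sketch for linear cryptanalysis measures how far linear approximations of the same toy S-box deviate from probability 1/2 (their bias); the masks and S-box are again illustrative only, and the approximation with the largest absolute bias is the one that leaks the most key information.

```python
# Bias of linear approximations of the same toy 4-bit S-box as above.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(x):
    return bin(x).count("1") & 1

def bias(sbox, input_mask, output_mask):
    # Fraction of inputs for which the masked input parity equals the masked
    # output parity, minus 1/2.
    hits = sum(1 for x in range(len(sbox))
               if parity(x & input_mask) == parity(sbox[x] & output_mask))
    return hits / len(sbox) - 0.5

best = max(((a, b, bias(SBOX, a, b)) for a in range(1, 16) for b in range(1, 16)),
           key=lambda t: abs(t[2]))
print(f"input mask {best[0]:x}, output mask {best[1]:x}, bias {best[2]:+.3f}")
```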

As part of this work Dojen and Coffey developed a new technique, known as “Conditional Linear Cryptanalysis”, for use in the analysis of symmetric ciphers with key-dependent operations [DC04]. A successful attack on known ciphers using the new technique has also been achieved.


References

[BS91]

Biham, E. and Shamir, A., "Differential Cryptanalysis of DES-like Cryptosystems", Journal of Cryptology, Vol. 4, No. 1, 1991, pp.3-72

[BS93]

Biham, E. and Shamir, A., "Differential Cryptanalysis of the Data Encryption Standard", Springer Verlag, 1993

[DC04]

Dojen, R. and Coffey, T., "Conditional Linear Cryptanalysis - Applying Linear Cryptanalysis to Ciphers with Key-Dependant Operations". WSEAS Transactions on Computers, Vol. 3, No. 5, November 2004, pp.1425-1430.

[Mats93]

Matsui, M., "Linear Cryptanalysis Method for DES Cipher", Advances in Cryptology - EUROCRYPT'93, Springer Verlag, 1993, pp. 386-397

[Mats94]

Matsui, M., "The First Experimental Cryptanalysis of the Data Encryption Standard", Advances in Cryptology - CRYPTO'94, Springer Verlag, 1994, pp. 1-11

[MOV97]

Menezes, A.J., van Oorschot, P.C. and Vanstone, S.A., "Handbook of Applied Cryptography", CRC Press, 1997

[Schn96]

Schneier, B., "Applied Cryptography", Second Edition, Wiley 1996



Multi-layer security architectures for virtual networks


The proliferation of the Internet in recent years has radically altered the landscape of network security. According to the 2004 CSI/FBI Computer Crime and Security Survey [CCSS04], 98% of the companies surveyed employed firewalls and 68% employed intrusion detection systems (IDS) to protect their communication networks. Despite these security measures, 53% of the companies surveyed reported unauthorised use of their computer systems, resulting in an estimated financial loss of $140 million. It is evident that despite the high utilisation of network security tools, computer networks are still not adequately protected.

Despite the level of protection offered by network security tools such as firewalls [Bren92], [Hunt98], [CBR03] and IDS [Schep98], [VR02], the persistently high rate of intrusions underlines the need for improvement. Network security tools clearly possess a number of vulnerabilities that prevent proper protection of networks against attack. These vulnerabilities can be grouped under the following headings:

Location of Deployment

Network security tools are typically either network-based or host-based, each location having its associated limitations. Network-based limitations include the failure to protect systems located outside the Local Area Network in which the network security tool is deployed, the inability to defend against attacks hidden in host-to-host encrypted traffic, the fact that the tool represents a single point of failure, and the inability to prevent insider attacks. Host-based limitations include the inability to detect complex attacks on multiple systems and the lack of centralised management.

Complementary

Complementary limitations refer to weaknesses in the combined security offered by network security tools as a whole. Firewalls and Intrusion Detection Systems, although naturally complementary, suffer from problems due to the passive nature of Intrusion Detection Systems and their high rate of false positives.

Defence-in-Depth

Defence-in-depth is the proposition that multiple layers of security offer greater protection than any single layer. Each technique or information source used by the network security tool is a separate layer of defence. Hence network security tools using a single technique or information source fail to adequately protect the network from attack.

Members of the DCSL developed a Distributed Cooperative Multi-layer Security Architecture (DCMSA) for virtual networks [SDC05a,b]. The DCMSA comprises a number of features designed to enhance the protection afforded to the virtual network. The DCMSA employs distributed security enforcement combined with centralised management. This scheme overcomes location-of-deployment shortcomings, maximising the protection offered to the virtual network. In addition, the DCMSA facilitates cooperation among a wide range of network security tools; the synergy of network security tools working together allows greater protection than when they work separately. Further, the DCMSA adheres to the principle of defence-in-depth by using multiple layers of security, which increases the difficulty of successfully attacking the network.
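
The defence-in-depth principle underlying the architecture can be illustrated with a deliberately simplified sketch. This is not the DCMSA itself; the layer names, rules and event format are invented. Several independent checks inspect the same event and a central manager combines their verdicts, so bypassing any single layer does not defeat the defence.

```python
# Defence-in-depth illustration only: three independent layers plus a central
# manager that aggregates their verdicts. All names and rules are invented.

BLOCKED_PORTS = {23, 445}
KNOWN_BAD_PAYLOADS = ("<script>", "' OR 1=1")

def packet_filter(event):
    return event["port"] not in BLOCKED_PORTS

def signature_ids(event):
    return not any(sig in event["payload"] for sig in KNOWN_BAD_PAYLOADS)

def anomaly_detector(event):
    return event["bytes"] < 10_000_000

LAYERS = (packet_filter, signature_ids, anomaly_detector)

def central_manager(event):
    verdicts = {layer.__name__: layer(event) for layer in LAYERS}
    return all(verdicts.values()), verdicts

event = {"port": 80, "payload": "id=1' OR 1=1 --", "bytes": 512}
print(central_manager(event))        # blocked: the signature layer flags the payload
```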


References

[Bren92]

Chapman, D.B., "Network (In)Security Through IP Packet Filtering", Proceedings of the Third USENIX UNIX Security Symposium, Baltimore, MD, 1992

[CBR03]

Cheswick, W.R., Bellovin, S.M. and Rubin, A.D., "Firewalls and Internet Security: Repelling the Wily Hacker", Second Edition, Addison-Wesley, 2003

[CCSS04]

CSI/FBI, "2004 Computer Crime and Security Survey", Computer Security Institute, 2004, http://www.gocsi.com/

[Hunt98]

Hunt, R., "Internet/Intranet Firewall Security - Policy, Architecture and Transaction Service", Computer Communications, Vol. 21, No. 13, 1998, pp.1107-1123

[Schep98]

Schepers, F., "Network Versus Host-based Intrusion Detection", Information Security Technical Report, Volume 3, Issue No. 4, 1998, pp.32-42

[SDC05a]

O'Sullivan, S., Dojen, R. and Coffey, T., "Protecting Virtual Networks With A Distributed Cooperative Multi-layer Security Architecture", Accepted by WSEAS Transactions on Computers

[SDC05b]

O'Sullivan, S., Dojen, R. and Coffey, T., "A Distributed Cooperative Multi-layer Security Architecture For Virtual Networks", 4th WSEAS International Conference on Information, Security, Communications and Computers (ISCOCO 2005), Puerto De La Cruz, Tenerife, Canary Islands, Spain, December 16-18, 2005, Paper No. 502-708

[VR02]

Verwoerd, T. and Hunt, R., "Intrusion Detection Techniques and Approaches", Computer Communications, Elsevier, U.K., Vol. 25, No. 15, 2002, pp. 1356-1365



Watermarking and information hiding for multimedia document protection.


Digital watermarking, information embedding, data hiding, and steganography have received considerable attention recently. These terms, which can for the most part be used interchangeably, refer to the process of embedding one signal, called the "embedded signal" or "digital watermark", within another signal, called the "host signal". The host signal is typically a speech, audio, image, or video signal, and the watermarking must be done in such a way that the host signal is not perceptibly degraded. At the same time, the digital watermark must be difficult to remove without causing significant damage to the host signal and must reliably survive common signal processing manipulations such as lossy compression, additive noise, and resampling. Many emerging digital watermarking applications are a direct result of the explosive increase in the use of digital media for communications. For example, the ease with which digital signals can be copied has led to a need for digital watermarking techniques to protect copyrighted music, video, and images that are distributed in digital formats. To satisfy this need it has been proposed to embed within these source signals a digital watermark designed to thwart unauthorised copying. This embedded signal could be a digital "fingerprint" that uniquely identifies the original purchaser of the copyrighted work. The purpose of this work is to investigate and implement a digital watermarking technique for use with sound and video files.
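
As a concrete, if deliberately naive, illustration of embedding one signal within another, the sketch below overwrites the least significant bit of each host sample with one watermark bit. LSB substitution is not robust to compression or resampling and is shown only to make the host-signal/embedded-signal distinction tangible; the sample values are invented.

```python
# Least-significant-bit (LSB) watermark embedding and extraction: a toy
# illustration of the embedding idea, not a robust watermarking scheme.

def embed(host_samples, watermark_bits):
    marked = list(host_samples)
    for i, bit in enumerate(watermark_bits):
        marked[i] = (marked[i] & ~1) | bit       # replace the LSB with the watermark bit
    return marked

def extract(marked_samples, n_bits):
    return [s & 1 for s in marked_samples[:n_bits]]

host = [1000, 1003, 998, 1001, 999, 1002, 1005, 997]   # e.g. PCM audio samples
mark = [1, 0, 1, 1, 0, 0, 1, 0]

stego = embed(host, mark)
assert extract(stego, len(mark)) == mark
print(stego)        # each sample differs from the host by at most 1
```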



Robust power-line LAN protocols and security services systems.




Mobile agent-based middleware & virtual learning environments and reusable framework-based software development.


Salomie and Gyorodi have investigated domain problem decomposition and modelling using stationary and mobile agents and role-based strategies. In addition, new paradigms for knowledge capturing and deployment to remote consumers by means of mobile agents have been developed. New reusable and scalable framework solutions for designing Web and context-aware applications have been developed. Coffey, Salomie and Gyorodi have developed scalable frameworks for enforcing policy-based security levels.



Alternative, non-intrusive identity verification systems based on face recognition techniques


In recent years the number of on-line transactions, particularly Internet transactions, has increased enormously. For these kinds of transactions, security is a major concern, and a large part of providing this security lies in establishing the identity of the parties involved. The identification techniques most widely used today are based on "what we know" rather than "who we are". This creates a security weakness because, for example, a PIN can be lost or stolen, and even a credit card number can be used by someone else.

Biometric techniques, on the other hand, are based on measurements of personal characteristics. Many of these methods are intrusive or inconvenient; for example, fingerprinting and retinal scanning require special sensors. This research involves the development of a non-intrusive identity confirmation system based on an image of the face, which can be obtained discreetly. This offers a degree of convenience and flexibility that other methods cannot achieve.
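
One classical, non-intrusive approach that illustrates the general idea, though it is not necessarily the method adopted in this research, is eigenface-style recognition: enrolled face images are projected onto a small number of principal components and a probe image is matched to the nearest enrolled face. The sketch below uses random placeholder data in place of real images.

```python
# Eigenface-style identification sketch: PCA of enrolled images via SVD, then
# nearest-neighbour matching in the reduced space. Placeholder random data
# stands in for real face images.

import numpy as np

rng = np.random.default_rng(0)
gallery = rng.random((10, 64 * 64))          # 10 enrolled faces, flattened grey-level images
labels = list(range(10))

mean = gallery.mean(axis=0)
centred = gallery - mean
_, _, vt = np.linalg.svd(centred, full_matrices=False)   # principal components via SVD
eigenfaces = vt[:8]                          # keep the 8 strongest components
features = centred @ eigenfaces.T            # low-dimensional signature of each enrolled face

def identify(probe_image):
    probe = (probe_image - mean) @ eigenfaces.T
    distances = np.linalg.norm(features - probe, axis=1)
    return labels[int(np.argmin(distances))]

noisy_probe = gallery[3] + 0.01 * rng.random(64 * 64)
print(identify(noisy_probe))                 # 3
```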

Presentation by Robert Gyorodi on Alternative Non-intrusive Identification Systems
