Security protocols enable secure communication over insecure channels. Privacy protocols enable private interactions over secure channels. Security protocols set up secure channels using cryptographic primitives. Privacy protocols set up private channels using secure channels. But just like some security protocols can be broken without breaking the underlying cryptography, some privacy protocols can be broken without breaking the underlying security. Such privacy attacks have been used to leverage e-commerce for targeted advertising from the outset; but their depth and scope became apparent only with the overwhelming advent of influence campaigns in politics. The blurred boundaries between privacy protocols and privacy attacks present a new challenge for protocol analysis. Covert channels turn out to be concealed not only below overt channels, but also above them: the subversions and level-below attacks are supplemented by sublimations and level-above attacks.
The diverse views of the science of security have opened up several avenues towards applying the methods of science to security. We pursue a different kind of connection between science and security. This paper explores the idea that security is not just a suitable subject for science, but that the process of security is also similar to the process of science. This similarity arises from the fact that both science and security depend on the methods of inductive inference. Because of this dependency, a scientific theory can never be definitively proved, but can only be disproved by new evidence, and improved into a better theory. Because of the same dependency, every security claim and method has a lifetime, and always eventually needs to be improved.
In this general framework of security-as-science, we explore the ways to apply the methods of scientific induction in the process of trust. The process of trust building and updating is viewed as hypothesis testing. We propose to formulate the trust hypotheses by the methods of algorithmic learning, and to build more robust trust testing and vetting methodologies on the solid foundations of statistical inference.
Cryptography is a theory of secret functions. Category theory is a general theory of functions. Cryptography has reached a stage where its structures often take several pages to define, and its formulas sometimes run from page to page. Category theory has some complicated definitions as well, but one of its specialties is taming the flood of structure. Cryptography seems to be in need of high level methods, whereas category theory always needs concrete applications. So why is there no categorical cryptography? One reason may be that the foundations of modern cryptography are built from probabilistic polynomial-time Turing machines, and category theory does not have a good handle on such things. On the other hand, such foundational problems might be the very reason why cryptographic constructions often resemble low level machine programming. I present some preliminary explorations towards categorical cryptography. It turns out that some of the main security concepts are easily characterized through the categorical technique of diagram chasing, which was first used in Lambek's seminal 'Lecture Notes on Rings and Modules'.
– with Catherine Meadows
Although the problems of physical security emerged more than 10,000 years before the problems of computer security, no formal methods have been developed for them, and the solutions have been evolving slowly, mostly through social procedures. But as the traffic on physical and social networks is now increasingly expedited by computers, the problems of physical and social security are becoming technical problems. From various directions, many security researchers and practitioners have come to a realization that areas such as transportation security, public and private space protection, or critical infrastructure defense are in need of formalized engineering methodologies. Following this lead, we extended Protocol Derivation Logic (PDL) to Procedure Derivation Logic (still PDL). In contrast with a protocol, where some principals send and receive some messages, in a procedure they can also exchange and move some objects. For simplicity, in the present paper we focus on the security issues arising from the traffic of objects, and leave the data flows, and the phenomena emerging from the interaction of data and objects, for future work. We illustrate our approach by applying it to a flawed airport security procedure described by Schneier.
– with Wolter Pieters and Trajce Dimkov
Security policy alignment concerns the matching of security policies specified at different levels in socio-technical systems, and delegated to different agents, technical as well as human. For example, the policy that sales data should not leave an organisation is refined into policies on door locks, firewalls and employee behaviour, and this refinement should be correct with respect to the original policy. Although alignment of security policies in socio-technical systems has been discussed in literature, especially in relation to business goals, there has been no formal treatment of this topic so far in terms of consistency and completeness of policies. Where formal approaches are used in policy alignment, these are applied to well-defined technical access control scenarios instead. We therefore aim at formalising security policy alignment for complex socio-technical systems in this paper, and our formalisation is based on predicates over sequences of actions. We discuss how this formalisation provides the foundations for existing and future methods for finding security weaknesses induced by misalignment of policies in socio-technical systems.
– with Catherine Meadows
As computing and computer networks become more and more intertwined with our daily lives, the need to develop flexible and on-the-fly methods for authenticating people and their devices to each other has become increasingly pressing. Traditional methods for providing authentication have relied on very weak assumptions about communication channels, and very strong assumptions about secrecy and the availability of trusted authorities. The resulting protocols rely on infrastructures such as shared secrets and public key hierarchies that are too rigid to support the type of flexible ad-hoc communication we are growing accustomed to and beginning to rely upon.
Recently, different families of protocols allow us to weaken assumptions about trusted infrastructure by strengthening the assumptions about communication channels. Examples include proximity verification protocols, which rely, for example, on the round trip time of a challenge and response; and bootstrapping protocols that rely upon human-verifiable channels, that is, low-bandwidth communication between humans. The problem now becomes: How do we ensure that the protocols achieve their security goals? A vast amount of literature exists on the formal analysis of cryptographic protocols, and on the mathematical foundations of protocol correctness, but almost all of it relies upon the standard end-to-end channel assumptions, so its usefulness for the nonstandard channels of pervasive networks is limited. In this paper, we present some initial results of an effort towards formalizing reasoning about the security of protocols over nonstandard channels.
– with Catherine Meadows
In this paper we propose actor-networks as a formal model of computation in heterogeneous networks of computers, humans and their devices, where these new procedures run; and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks, as an extension of our previous Protocol Derivation Logic. Both formalisms are geared towards graphic reasoning. We illustrate their workings by analysing a popular form of two-factor authentication.
Man-in-the-Middle (MM) is not only a ubiquitous attack pattern in security, but also an important paradigm of network computation and economics. Recognizing ongoing MM-attacks is an important security task; modeling MM-interactions is an interesting task for semantics of computation. Traced monoidal categories are a natural framework for MM-modelling, as the trace structure provides a tool to hide what happens in the middle. An effective analysis of what has been traced out seems to require an additional property of traces, called normality. We describe a modest model of network computation, based on partially ordered multisets (pomsets), where basic network interactions arise from the monoidal trace structure, and a normal trace structure arises from an iterative, i.e. coalgebraic structure over terms and messages used in computation and communication. The correspondence is established using a convenient monadic description of normally traced monoidal categories.
Shannon sought security against the attacker with unlimited computational powers: if an information source conveys some information, then Shannon’s attacker will surely extract that information. Diffie and Hellman refined Shannon’s attacker model by taking into account the fact that the real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography.
Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that “the enemy knows the system”. This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that “there is no security by obscurity”, meaning that the algorithms cannot be kept obscured from the attacker, and that security should only rely upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that if there is an algorithm that can break the system, then the attacker will surely find that algorithm. The attacker is not viewed as an omnipotent computer any more, but he is still construed as an omnipotent programmer. The ongoing hackers’ successes seem to justify this view.
So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods, and adapting to them?
Cryptography is a theory of secret functions. Category theory is a general theory of functions. Cryptography has reached the stage where its structures often take several pages to define, and even its formulas sometimes run from page to page. Category theory has some complicated definitions as well, but one of its specialties is taming the flood of structure by diagrams and layering. Cryptography seems to be in need of high level methods, whereas category theory always needs concrete applications. So why is there no categorical cryptography? One reason may be that the foundations of modern cryptography are laid over probabilistic polynomial-time Turing machines, and category theory does not have a good handle on such things. On the other hand, these foundations might be the very reason why the details of cryptographic constructions often resemble low level machine programming. I present some preliminary results of an effort to present the basic cryptographic concepts categorically. It turns out that the standard security definitions can be characterized by simple commutative diagrams. Some security proofs become modular. The work is at an early stage, and has not yet yielded any new cryptographic results, but the approach seems natural, leads to some interesting new ideas and structures, and invites more work.
– with Catherine Meadows
As computation spreads from computers to networks of computers, and migrates into cyberspace, it ceases to be globally programmable, but it remains programmable indirectly and partially: network computations cannot be controlled, but they can be steered by imposing local constraints on network nodes. The tasks of “programming” global behaviors through local constraints belong to the area of security. The “program particles” that assure that a system of local interactions leads towards some desired global goals are called security protocols. They are the software connectors of modern, world wide software systems.
As computation spreads beyond cyberspace, into physical and social spaces, new security tasks and problems arise. As computer networks are extended by nodes with physical sensors and controllers, including the humans, and interlaced with social networks, the engineering concepts and techniques of computer security blend with the social processes of security that have evolved since the dawn of mankind. These new connectors for computational and social software require a new "discipline of programming" of global behaviors through local constraints. Since the new discipline seems to be emerging from a combination of established models of security protocols with older methods of procedural programming, we use the name procedures for these new connectors that generalize protocols.
In the present paper we propose actor-networks as a formal model of computation in heterogeneous networks of computers, humans and their devices, where these new procedures run; and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks. On the way, we survey the guiding ideas of Protocol Derivation Logic (also PDL) that evolved through our work in security over the last 10 years. Both formalisms are geared towards graphic reasoning and, ultimately, tool support. We illustrate their workings by analysing a popular form of two-factor authentication, and a multi-channel device pairing procedure, devised for this occasion.
Musings about security as an engineering problem and as a problem for science.
In a previous FAST paper, I presented a quantitative model of the process of trust building, and showed that trust is accumulated like wealth: the rich get richer. This explained the pervasive phenomenon of adverse selection of trust certificates, as well as the fragility of trust networks in general. But a simple explanation does not always suggest a simple solution. It turns out that it is impossible to alter the fragile distribution of trust without sacrificing some of its fundamental functions. A solution for the vulnerability of trust must thus be sought elsewhere, without tampering with its distribution. This observation was the starting point of the present paper. It explores a different method for securing trust: not by redistributing it, but by mining for its sources. The method used to break privacy is thus also used to secure trust. A high level view of the mining methods that connect the two is provided in terms of similarity networks, and spectral decomposition of similarity preserving maps. This view may be of independent interest, as it uncovers a common conceptual and structural foundation of mathematical classification theory on one hand, and of the spectral methods of graph clustering and data mining on the other hand.
In his 1960 essay, Eugene Wigner raised the question of “the unreasonable effectiveness of mathematics in natural sciences”. After several decades of security research, we are tempted to ask the opposite question: Are we not unreasonably ineffective? Why are we not more secure from all the security technologies? I sketch a conceptual landscape of security that may provide some answers, on the background of ever increasing dynamics and pervasiveness of software and computation.
– with Catherine Meadows
As mobile devices pervade physical space, the familiar authentication patterns are becoming insufficient: besides entity authentication, many applications require, e.g., location authentication. Many interesting protocols have been proposed and implemented to provide such strengthened forms of authentication, but there are very few proofs that such protocols satisfy the required security properties. In some cases, the proofs can be provided in the symbolic model. More often, various physical factors invalidate the perfect cryptography assumption, and the symbolic model does not apply. In such cases, the protocol cannot be secure in an absolute logical sense, but only with a high probability. But while probabilistic reasoning is thus necessary, the analysis in the full computational model may not be warranted, since the protocol security does not depend on any computational assumptions, or on attacker’s computational power, but only on some guessing chances.
We refine the Dolev-Yao algebraic method for protocol analysis by a probabilistic model of guessing, needed to analyze protocols that mix weak cryptography with physical properties of nonstandard communication channels. Applying this model, we provide a precise security proof for a proximity authentication protocol, due to Hancke and Kuhn, that uses probabilistic reasoning to achieve its goals.
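The guessing analysis behind such probabilistic proofs can be sketched numerically. In the Hancke-Kuhn fast phase, the verifier sends a random challenge bit each round; a distant attacker who pre-queried the prover with a guessed challenge answers correctly with probability 3/4 per round, so the overall success chance is (3/4)^n. The sketch below (function names are ours, not the paper's) checks the analytic value against a Monte Carlo simulation.

```python
import random

def attacker_success_probability(n_rounds: int) -> float:
    """Analytic success chance of a distant attacker: per fast round the
    pre-queried challenge matches with probability 1/2, and otherwise a
    blind guess of the response succeeds with probability 1/2,
    giving 1/2 + 1/4 = 3/4 per round."""
    return 0.75 ** n_rounds

def simulate_attack(n_rounds: int, trials: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        success = True
        for _ in range(n_rounds):
            challenge = rng.getrandbits(1)   # verifier's random challenge bit
            pre_query = rng.getrandbits(1)   # attacker's earlier guessed query
            if pre_query == challenge:
                continue                     # attacker already knows the response
            if rng.getrandbits(1):           # blind guess fails half the time
                success = False
                break
        wins += success
    return wins / trials
```

With, say, 32 fast rounds the attacker's chance drops below one in ten thousand, which is the kind of quantitative security statement the symbolic model alone cannot express.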
Trust is often conveyed through delegation, or through recommendation. This makes the trust authorities, who process and publish the trust recommendations, into an attractive target for attacks and spoofing. In some recent empirical studies, this was shown to lead to a remarkable phenomenon of adverse selection: a greater percentage of unreliable or malicious web merchants were found among those with certain types of trust certificates, than among those without. While such findings can be attributed to a lack of diligence in trust authorities, or even to conflicts of interest, our analysis of trust dynamics suggests that the public trust networks would probably remain vulnerable even if the trust authorities were perfectly diligent. The reason is that the main processes of trust building, under reasonable modeling assumptions, naturally lead to power-law distributions of trust: the rich get richer, the trusted attract more trust. The evolutionary processes governed by such distributions, ubiquitous in nature, are known to be robust with respect to random failures, but vulnerable to adaptive attacks. We recommend some ways to decrease this vulnerability, and suggest some ideas for exploration.
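The rich-get-richer dynamics can be illustrated with a toy Pólya-urn simulation (our sketch, not the paper's model): each new unit of trust goes to an agent with probability proportional to the trust it already holds, and the resulting distribution is markedly heavy-tailed.

```python
import random

def simulate_trust(n_agents: int = 100, n_endorsements: int = 10_000,
                   seed: int = 1) -> list[int]:
    """Polya-urn sketch of preferential trust attachment: every agent starts
    with one unit of trust, and each new endorsement is awarded with
    probability proportional to the trust already accumulated."""
    rng = random.Random(seed)
    trust = [1] * n_agents
    total = n_agents
    for _ in range(n_endorsements):
        r = rng.uniform(0, total)        # roulette-wheel selection by trust
        acc = 0.0
        for i, t in enumerate(trust):
            acc += t
            if r <= acc:
                trust[i] += 1
                total += 1
                break
    return sorted(trust, reverse=True)
```

The most-trusted agents typically end up with several times the average trust, which is what makes such networks robust to random failure but attractive targets for adaptive attackers.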
– with Catherine Meadows
In this paper we discuss some of the challenges and opportunities offered to authentication by pervasive computing, and describe the work we are doing in developing formal and graphical systems for reasoning about and understanding the security of protocols in pervasive computing. We provide an example of the verification of a proximity authentication protocol that uses several different types of channels to achieve its goals.
– with Catherine Meadows, Radha Poovendran, LiWu Chang and Paul Syverson
In this paper we consider the problem of securely measuring distance between two nodes in a wireless sensor network. The problem of measuring distance has fundamental applications in both localization and time synchronization, and thus would be a prime candidate for subversion by hostile attackers. We give a brief overview and history of protocols for secure distance bounding. We also give the first full-scale formal analysis of a distance bounding protocol, and we also show how this analysis helps us to reduce message and cryptographic complexity without reducing security. Finally, we address the important open problem of collusion. We analyze existing techniques for collusion prevention, and show how they are inadequate for addressing the collusion problems in sensor networks. We conclude with some suggestions for further research.
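The physical principle behind distance bounding is simply that signals cannot travel faster than light, so a measured challenge-response round-trip time yields an upper bound on the prover's distance. A minimal sketch (the function name and processing-delay parameter are ours):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_upper_bound(rtt_seconds: float,
                         processing_seconds: float = 0.0) -> float:
    """Upper bound on the prover's distance implied by a challenge-response
    round-trip time: d <= c * (rtt - processing) / 2, since the signal
    must cover the distance twice."""
    if rtt_seconds < processing_seconds:
        raise ValueError("round-trip time cannot be below the processing delay")
    return SPEED_OF_LIGHT * (rtt_seconds - processing_seconds) / 2
```

A 2-microsecond round trip, for example, bounds the prover within roughly 300 metres; this is also why the prover's processing delay must be tightly bounded, since any slack in it translates directly into distance an attacker can fake.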
– with Matthias Anlauff and Asuman Suenbuel
Enterprise Service Architectures are emerging as a promising way to compose Web-Services, as defined by the W3C consortium, into complex, enterprise level services. However, since each Web-Service composition is also a protocol composition, the composition becomes problematic if security protocol mechanisms are used for the individual Web-Services, because security properties are not preserved under composition. This paper outlines the derivational approach, which on the one hand mimics the general engineering practice when combining security features, but on the other hand avoids the problems that can arise during the composition of Web-Services, by using well-founded mathematical concepts. The Protocol Derivation Assistant, a tool that supports this approach, is also briefly described.
– with Matthias Anlauff, Richard Waldinger and Stephen Westfold
We present a formal framework for incremental reasoning about authentication protocols, supported by the Protocol Derivation Assistant (Pda). A salient feature of our derivational approach is that proofs of properties of complex protocols are factored into simpler proofs of properties of their components, combined with proofs that the relevant refinement and composition operations preserve the proven properties or transform them in the desired way.
In the present paper, we introduce an axiomatic theory of authentication suitable for the automatic proof of authentication properties. We describe a proof of the authentication property of a simple protocol, as derived in Pda, for which the proof obligations have been automatically generated and discharged. Producing the proof forced us to spell out previously unrecognized assumptions, on which the correctness of the protocol depends.
Pda has support for collaboration and tool integration. It can be freely downloaded from its web site.
– with Catherine Meadows
Secrecy and authenticity properties of protocols are mutually dependent: every authentication is based on some secrets, and every secret must be authenticated. This interdependency is a significant source of complexity in reasoning about security. We describe a method to simplify it, by encapsulating the authenticity assumptions needed in the proofs of secrecy. This complements the method for encapsulating the secrecy assumptions in proofs of authenticity, presented in a preceding paper at CSFW 2005. While logically straightforward, this idea of encapsulation in general, and the present treatment of secrecy in particular, allow formulating scalable and reusable reasoning patterns for the families of protocols of practical interest. The approach came about as a design detail in an ongoing development effort towards the Protocol Derivation Assistant (Pda), a semantically based environment and toolkit for the derivational approach to security.
– with Iliano Cervesato and Catherine Meadows
Authentication and secrecy properties are proved by very different methods: the former by local reasoning, leading to matching knowledge of all principals about the order of their actions, the latter by global reasoning towards the impossibility of knowledge of some data. Hence, proofs conceptually decompose in two parts, each encapsulating the other as an assumption. From this observation, we develop a simple logic of authentication that encapsulates secrecy requirements as assumptions. We apply it within the derivational framework to derive a large class of key distribution protocols based on the authentication properties of their components.
– with Anupam Datta, Ante Derek and John Mitchell
Many authentication and key exchange protocols are built using an accepted set of standard concepts such as Diffie-Hellman key exchange, nonces to avoid replay, certificates from an accepted authority, and encrypted or signed messages. We propose a general framework for deriving security protocols from simple components, using composition, refinements, and transformations. As a case study, we examine the structure of a family of key exchange protocols that includes Station-To-Station (STS), ISO-9798-3, Just Fast Keying (JFK), IKE and related protocols, deriving all members of the family from two basic protocols. In order to associate formal proofs with protocol derivations, we extend our previous security protocol logic with preconditions, temporal assertions, composition rules, and several other improvements. Using the logic, which we prove is sound with respect to the standard symbolic model of protocol execution and attack (the "Dolev-Yao model"), the security properties of the standard signature based Challenge-Response protocol and the Diffie-Hellman key exchange protocol are established. The ISO-9798-3 protocol is then proved correct by composing the correctness proofs of these two simple protocols. Although our current formal logic is not sufficient to modularly prove security for all of our current protocol derivations, the derivation system provides a framework for further improvements.
– with Anupam Datta, Ante Derek and John Mitchell
Protocols may be derived from initial components by composition, refinement, and transformation. Using a higher-order extension of a previous protocol logic, we present an abstraction-instantiation method for reasoning about a class of protocol refinements. The main idea is to view changes in a protocol as a combination of finding a meaningful "protocol template" that contains function variables in messages, and producing the refined protocol as an instance of the template. Using higher-order protocol logic, we can develop a single proof for all instances of a template. A template can also be instantiated to other templates, or a single protocol may be an instance of more than one template, allowing separate protocol properties to be proved modularly. These methods are illustrated using some challenge-response and key generation protocol templates that also require adding symmetric encryption and cryptographic hash to the protocol logic. We show additional uses of the methodology and suggest future directions by exploring a design space surrounding JFK (Just Fast Keying) and related protocols from the IKE (Internet Key Exchange) family. This exploration reveals some trade-offs between authentication, identity protection, and non-repudiation; shows how counter-examples may be transferred from one protocol to another using derivation steps; and produces some interesting protocols that appear not to have been previously studied in the open literature.
– with Catherine Meadows
As a part of a continued effort towards a logical framework for incremental reasoning about security, we attempted a derivational reconstruction of GDOI, the protocol proposed in IETF RFC 3547 for authenticated key agreement in group communication over IPsec. The main advantage of the derivational approach to protocols is that it tracks the way they are designed: by refining and composing basic protocol components — but this time within a formal system, thus providing formal assurance of their security. Moreover, by distinguishing the monotonic and compositional fragments of protocol designs, this approach facilitates tracking and formalizing the incremental changes and modifications that complicate the design process, increases reusability, and minimizes the need for reverification.
While deriving GDOI, we have discovered that the simple logic that we recently developed for deriving authentication can also be used to prove authentication failures, and to generate attacks. By changing the proof goal, we have discovered an attack on GDOI that had not surfaced in the previous extensive analyses by other verification techniques. At the time of writing, we are working together with the Msec Working Group to develop a solution to this problem.
After a brief overview of the logic, we outline a derivation of GDOI, which displays its valid security properties, and the derivations of two attacks on it, which display its undesired properties. Finally, we discuss some modifications that eliminate these vulnerabilities. Their derivations suggest proofs of the desired authentication property.
– with Nancy Durgin and John Mitchell
We present a logic for proving security properties of protocols that use nonces (randomly generated numbers that uniquely identify a protocol session) and public-key cryptography. The logic, designed around a process calculus with actions for each possible protocol step, consists of axioms about protocol actions and inference rules that yield assertions about protocols composed of multiple steps. Although assertions are written using only steps of the protocol, the logic is sound in a stronger sense: each provable assertion about an action or sequence of actions holds in any run of the protocol that contains the given actions and arbitrary additional actions by a malicious attacker. This approach lets us prove security properties of protocols under attack while reasoning only about the sequence of actions taken by honest parties to the protocol. The main security-specific parts of the proof system are rules for reasoning about the set of messages that could reveal secret data and an invariant rule called the honesty rule.
– with Anupam Datta, Ante Derek and John Mitchell
Security protocols are often built starting from an accepted set of standard conceptual components, such as Diffie-Hellman key exchange, nonces, certificates, and encryptions or signatures. We introduce a basic framework for deriving security protocols from such simple components. We apply it to a family of key exchange protocols that includes Station-To-Station (STS), ISO-9798-3, Just Fast Keying (JFK), IKE and related protocols, deriving all members of the family from two basic protocol patterns, using a small set of refinements and protocol transformations. As initial steps toward associating logical derivations with protocol derivations, we extend a previous security protocol logic with preconditions and temporal assertions. Using this logic, we prove the security properties of the standard signature based Challenge-Response protocol and the Diffie-Hellman key exchange protocol. The ISO-9798-3 protocol is then proved correct by composing the correctness proofs of these two simple protocols.
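The two basic components named here compose naturally: an unauthenticated Diffie-Hellman exchange supplies the key, and a signature-based challenge-response authenticates it. The sketch below uses a deliberately tiny group (p = 23, g = 5) purely to show the algebra; real protocols use standardized large groups, and the signature step is only indicated in a comment.

```python
import secrets

# Toy group parameters, for illustration only; real deployments use
# standardized large groups (e.g. the RFC 3526 MODP groups).
P, G = 23, 5

def dh_keypair() -> tuple[int, int]:
    """Return (private exponent x, public value g^x mod p)."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def dh_shared_key(own_private: int, peer_public: int) -> int:
    """Both parties compute the same g^(xy) mod p."""
    return pow(peer_public, own_private, P)

# Unauthenticated core of the exchange; in STS / ISO-9798-3 each party
# would additionally sign the pair of exponentials (the challenge-response
# component), binding the key to the signers' identities.
x, gx = dh_keypair()
y, gy = dh_keypair()
assert dh_shared_key(x, gy) == dh_shared_key(y, gx)
```

Without the signature refinement, a man-in-the-middle can run one exchange with each party, which is exactly the gap the challenge-response component is composed in to close.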
– with Anupam Datta, Ante Derek and John Mitchell
This paper continues the program of research towards a derivation system for security protocols. The general idea is that complex protocols can be formally derived, starting from basic security components, using a sequence of refinements and transformations, just like logical proofs are derived starting from axioms, using proof rules and transformations. The claim is that in practice, many protocols are already derived in such a way, but informally. Capturing this practice in a suitable formalism turns out to be a considerable task.
The present paper proposes rules for composing security protocols from given security components. In general, security protocols are, of course, not compositional: information revealed by one may interfere with the security of another. However, annotating protocol steps by pre- and post-conditions allows secure sequential composition. Establishing that protocol components satisfy each other's invariants allows more general forms of composition, ensuring that the individually secure sub-protocols will not interact insecurely in the composite protocol.
The applicability of the method is demonstrated on modular derivations of two standard protocols, together with their simple security properties.
– with Anupam Datta, John Mitchell and Frederic Mueller
We present a protocol for authenticating Mobile IPv6 connections, addressing requirements established in the relevant Internet Draft. The protocol imposes minimal computational requirements on mobile nodes, uses as few messages as possible, and may be adapted to resist denial of service attacks. Our protocol has two parts, an initialization phase and an update phase. The initialization phase can take advantage of local public-key infrastructure, a bidirectional chain of trust, or operate without prior authenticating information. Each execution of the update phase uses the shared secret established in the previous phase to maintain security of the mobile connection. We have formally verified the correctness of the protocol using the finite-state analysis tool MurPhi, which has been used previously to analyze hardware designs and security properties of several protocols.
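The update phase's reliance on the previously established secret can be sketched as a simple key ratchet. The hash-based derivation below is our own illustration of the idea, not the paper's exact construction.

```python
import hashlib
import secrets

def update_phase(shared_secret: bytes) -> tuple[bytes, bytes]:
    """Derive the next shared secret from the current one and a fresh
    nonce, so that each update phase is keyed by its predecessor.
    Returns (nonce to send along, next shared secret)."""
    nonce = secrets.token_bytes(16)
    next_secret = hashlib.sha256(shared_secret + nonce).digest()
    return nonce, next_secret

def verify_update(shared_secret: bytes, nonce: bytes, claimed: bytes) -> bool:
    """The peer recomputes the derivation to confirm the update."""
    return hashlib.sha256(shared_secret + nonce).digest() == claimed
```

Because each secret depends on the one before it, an attacker who misses the initialization phase cannot inject a valid update later, which keeps the per-update computation cheap for the mobile node.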
– with Anupam Datta and John Mitchell
We introduce a basic framework for deriving security protocols from simple components, like proofs are derived from axioms. As an initial case study, we derive the recently proposed key agreement protocol JFK (Just Fast Keying), starting from the basic Diffie-Hellman exchange, and the generic challenge-response scheme, and refining them by the other required security features, formalized in our derivation system by suitable refinement and transformation rules.
– with Nancy Durgin and John Mitchell
We present a specialized protocol logic that is built around a process language for describing the actions of a protocol. In general terms, the relation between logic and protocol is like the relation between assertions in Floyd-Hoare logic and standard imperative programs. Like Floyd-Hoare logic, our logic contains axioms and inference rules for each of the main protocol actions and proofs are protocol-directed, meaning that the outline of a proof of correctness follows the sequence of actions in the protocol. We prove that the protocol logic is sound, in a specific sense: each provable assertion about an action or sequence of actions holds in any run of the protocol, under attack, in which the given actions occur. This approach lets us prove properties of protocols that hold in all runs, while explicitly reasoning only about the sequence of actions needed to achieve this property. In particular, no explicit reasoning about the potential actions of an attacker is required.