In the last few years my main
research theme has been to develop, and practice, what I call
**practice-oriented provable security**.
This line of work bridges the gap between cryptographic theory and
cryptographic practice.
The approach falls within the provable-security tradition of modern
cryptography, as first carried
out by [Goldwasser, Micali 1982].
But there are some
differences between the way that I like to carry out provable security and the way
that it has customarily been done.
These differences stem from a shift in goals:
I am no longer interested in investigating the provable-security
relationships between different cryptographic goals (since
the most interesting questions in this domain have now been answered);
instead, I want to see reductions become powerful **tools** for the design and
analysis of practical cryptographic protocols.
This shift in the use of reductions
gives rise to a theory with a somewhat different flavor,
a theory that is
more pragmatic and, I believe, more accessible.
I'll outline a few characteristics of it.

Recurring themes in my work include:
(1) bringing provable security to bear on
practical problems like **entity authentication** and
**session-key distribution**;
(2) using **finite pseudorandom functions** (or finite pseudorandom permutations)
as a way to bring provable security to constructions based on
confusion/diffusion (i.e., DES-like) primitives;
(3) paying close attention to **concrete security analysis**, not only
to understand how good an analysis is, but also to provide guidance
in choosing among practical protocols, to lead to better
protocols, and even to lead to better or sharper notions; and
(4) using the **random oracle paradigm** when obtaining
provable security in
the standard model would involve a loss of efficiency or simplicity so great as to
render the methods undesirable in the real world.
I'm one of the few cryptographers who actually
likes working on (5) **definitions**, and I believe
that good definitions (all by themselves!) can be of lasting value.

Building on sensibilities, paradigms, and techniques that have arisen from the provable security tradition, I have worked to fashion a line of research that is simultaneously useful and foundationally strong. In large part, this line of research was developed in collaboration with Mihir Bellare, of UCSD. We have built on our experiences from MIT and IBM.

Practice-oriented provable security has steadily gained in importance, and
my own papers have been cited
about 14,000 times.
At every major conference one now sees papers
that seem to follow our approach.
Nowadays, concrete security analysis is about as popular as
asymptotic analysis;
the random-oracle model
is extensively used;
the OAEP encryption technique
has been standardized;
and
PSS and PSS-R
and DHIES (formerly named DHAES)
are, likewise, in standards and draft standards.
Just a handful of years ago many people considered
provable-security cryptography to be an esoteric
idea from the STOC/FOCS crowd;
now the term *provable security* has such a cachet
that one hears people
claim that their schemes are provably
secure even when they have no understanding of what the term means!

Among the topics I have worked on (with links going to the corresponding paragraphs on this page): symmetric encryption . message authentication . fast cryptography . asymmetric encryption . digital signatures . random oracles . authenticated key exchange . formal encryption . secure function evaluation . complexity theory

** Symmetric encryption.**
In 1982 Goldwasser and Micali introduced the provable-security approach
to cryptography by treating asymmetric (public key) encryption.
• In [se]
we give a concrete, systematic
analysis of *symmetric* encryption.
We describe four definitions, all of them equivalent from the point of view of
polynomial-time reductions, but inequivalent with respect to concrete security.
We give upper and lower bounds on the concrete complexity of reductions
among these notions.
This allows one to select the strongest definition for existence results.
We give a concrete-security analysis for two modes of
operation of a block cipher: CBC and CTR modes.
We establish tight bounds on the success of adversaries attacking these
schemes as a function of the resources they employ.
This paper introduces the idea of classifying *definitions* based on the
concrete security of their reductions.
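The CTR mode analyzed in the paper is simple enough to sketch in a few lines. This is a toy illustration, not the standardized mode: the block cipher is replaced by a hypothetical SHA-256-based PRF, and nonce handling is simplified.

```python
import hashlib

BLOCK = 16  # block length in bytes

def E(key: bytes, x: bytes) -> bytes:
    # Hypothetical stand-in for a block cipher E(K, X), built from SHA-256.
    # CTR mode only ever uses the forward direction of E.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def ctr(key: bytes, nonce: int, msg: bytes) -> bytes:
    # CTR mode: XOR the message against the keystream
    # E(K, nonce), E(K, nonce+1), E(K, nonce+2), ...
    out = bytearray()
    for i in range(0, len(msg), BLOCK):
        pad = E(key, (nonce + i // BLOCK).to_bytes(BLOCK, "big"))
        out.extend(m ^ p for m, p in zip(msg[i:i + BLOCK], pad))
    return bytes(out)

# Decryption is the identical operation: XORing the keystream twice cancels.
```

Applying `ctr` twice with the same key and nonce recovers the message; the concrete-security bounds in [se] quantify how security degrades as more blocks are enciphered, and a (key, nonce) pair must never be reused.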
• In [relations]
we compare the relative strengths of popular notions of security for
encryption schemes.
We consider the goals of indistinguishability and non-malleability,
each under chosen-plaintext attack and two kinds of
chosen-ciphertext attack.
The paper focuses on the case of asymmetric encryption,
but the results immediately lift to the symmetric setting.
• In [vil]
we consider the problem of constructing ciphers that
operate on strings of variable (and arbitrary) length.
• In [encode]
we investigate the following approach to symmetric encryption:
first *encode* the message via some keyless transform, and
then *encipher* the encoded message, meaning apply
a permutation F(K,.) based on a shared key K.
We provide conditions on the encoding functions and the cipher
which ensure that the resulting encryption scheme meets
strong privacy (e.g., semantic security) and/or authenticity goals.
This paper also introduces the notion of
an *authenticated-encryption scheme*.
• In [ocb]
we introduce OCB, a block-cipher mode of operation
that combines privacy and authenticity, and has many desirable
efficiency characteristics.
This is a slick mode of operation that
is already in a draft standard of IEEE 802.11
(Wireless LANs), and it is a proposal to NIST.
The OCB homepage gives
further information.

** Message authentication.**
Message authentication
lets communicating partners who share a secret key verify that a received
message originates with the party who claims to have sent it. The sender
computes, as a function of the message and the shared key, a "tag" (or "MAC", for
"message authentication code") which he attaches to the message. The
receiver can check the validity of the message using the tag.
• In [cbcmac] we
provide the first analysis for the most widespread technique
for message authentication, the *CBC MAC*.
This is a US and International Standard, but it had never received
any sort of provable-security treatment.
Roughly, we show that if the
underlying block cipher is secure then so is the CBC MAC of that block cipher.
Actually we do more: we present a quantitative analysis under which the security of
the CBC MAC can be measured as a function of the strength of the underlying block
cipher. This introduces a new approach to the analysis of block-cipher based
constructions, based on modeling them as finite pseudorandom
functions.
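The construction being analyzed is easy to state. Below is a minimal sketch of the basic CBC MAC, with the block cipher replaced by a hypothetical SHA-256-based PRF; as in the paper's model, messages here have length a multiple of the block size.

```python
import hashlib

BLOCK = 16

def E(key: bytes, x: bytes) -> bytes:
    # Hypothetical stand-in for the underlying block cipher / PRF.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    # Chain the blocks through E, starting from a zero IV;
    # only the final chaining value is output, as the tag.
    assert len(msg) % BLOCK == 0
    tag = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        tag = E(key, bytes(t ^ b for t, b in zip(tag, block)))
    return tag
```

Note that the basic CBC MAC is only secure when all messages have one fixed length; handling arbitrary lengths cleanly is exactly what the [3key] constructions below address.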
• In
[xormac] we introduce a new
approach to MACing with a block cipher. Unlike
the CBC MAC, an XOR MAC is fully parallelizable, so that it is
suitable, for example, for link-layer authentication on gigabit networks.
The security is analyzed assuming
the block cipher is a finite pseudorandom function.
• In
[bucket] I introduce *bucket hashing*, a fast-to-compute
family of universal hash functions.
The paper also advocates Wegman-Carter MACs for the
next generation of software-optimized MACs.
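The Wegman-Carter paradigm advocated here is simple to sketch: hash the message with a secret universal hash function, then mask the result with a pseudorandom pad derived from a nonce. Below, a polynomial-evaluation hash plays the universal-hash role and a hypothetical SHA-256-based PRF supplies the pad; this illustrates the paradigm, not bucket hashing itself.

```python
import hashlib

P = 2**127 - 1  # prime; the hash is polynomial evaluation over GF(P)

def uh(hkey: int, msg: bytes) -> int:
    # Universal hash: interpret the message as coefficients (with a byte
    # of length padding) and evaluate the polynomial at the secret point.
    h = 0
    for i in range(0, len(msg), 15):
        coeff = int.from_bytes(msg[i:i + 15] + b"\x01", "big")
        h = (h + coeff) * hkey % P
    return h

def wc_mac(hkey: int, pkey: bytes, nonce: int, msg: bytes) -> bytes:
    # Wegman-Carter: tag = UH(msg) + F(nonce); a nonce must never repeat.
    pad = int.from_bytes(
        hashlib.sha256(pkey + nonce.to_bytes(16, "big")).digest()[:16], "big")
    return ((uh(hkey, msg) + pad) % 2**128).to_bytes(16, "big")
```

The point of the paradigm is speed: the non-cryptographic hash does almost all the work per byte, and the cryptographic primitive is invoked only once per message.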
• In [umac]
we describe UMAC, which
authenticates messages (in software, on contemporary machines)
roughly an order of magnitude faster than current practice
(e.g., HMAC-SHA1). To achieve such speeds UMAC uses a variant of
the Carter-Wegman paradigm, employing a new universal hash function, NH,
that can exploit the SIMD instructions of modern processors.
The "cryptographic" work of UMAC is done using standard primitives.
The security of UMAC is proven, in the sense of giving exact and quantitatively
strong results which demonstrate an inability to forge UMAC-authenticated
messages assuming an inability to break the underlying
cryptographic primitive.
•
In [rfc4418]
we give a full specification of UMAC.
• In [3key]
we suggest some simple variants of the CBC MAC that let you
efficiently MAC messages of arbitrary lengths. Our constructions use
three keys, K1, K2, K3, to avoid unnecessary padding and MAC any binary
string M in the larger of 1 and the ceiling of |M|/n applications of the
underlying n-bit block cipher. Our favorite construction, the XCBC,
has been proposed to NIST as a block-cipher mode of operation.
We prove the security of our constructions.
Our analysis exploits new ideas which simplify proofs compared to prior work.
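The three-key idea can be sketched directly. In the spirit of XCBC (and with the block cipher again replaced by a hypothetical SHA-256-based PRF): run the ordinary CBC MAC under K1, but XOR the final block with K2 when the message fills an exact number of blocks, and pad with 10* and XOR with K3 when it does not.

```python
import hashlib

BLOCK = 16

def E(key: bytes, x: bytes) -> bytes:
    # Hypothetical stand-in for the underlying n-bit block cipher.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xcbc(k1: bytes, k2: bytes, k3: bytes, msg: bytes) -> bytes:
    if msg and len(msg) % BLOCK == 0:
        # Full final block: no padding needed; whiten it with K2.
        body, last, kw = msg[:-BLOCK], msg[-BLOCK:], k2
    else:
        # Short (or empty) final block: pad with 10...0; whiten with K3.
        r = len(msg) % BLOCK
        cut = len(msg) - r
        body = msg[:cut]
        last = msg[cut:] + b"\x80" + b"\x00" * (BLOCK - r - 1)
        kw = k3
    tag = bytes(BLOCK)
    for i in range(0, len(body), BLOCK):
        tag = E(k1, xor(tag, body[i:i + BLOCK]))
    return E(k1, xor(xor(tag, last), kw))
```

Any message, including the empty string, takes the larger of 1 and the ceiling of |M|/n cipher calls, matching the efficiency claim above.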
• In [pmac]
we describe a new block-cipher mode which is
fully parallelizable and provides the functionality of a
pseudorandom function.
It is similar in spirit to the XOR MAC, but it is more efficient.
PMAC was designed and analyzed using the provable-security paradigm.
It has been proposed as a mode of operation to NIST, and is under consideration
by an IETF working group.

** Fast cryptography.**
One of the reasons that cryptography is not used more pervasively is the
assumed-to-be-high computational cost for doing bulk software encryption
and bulk software message authentication.
• The SEAL algorithm
we proposed in '93 remains the fastest published
method for symmetric encryption.
It encrypts at a rate of less than five machine instructions
per byte.
This is my only paper not in the provable-security tradition.
But I do pick up one contribution from theoretical cryptography:
the *type* of cryptographic primitive which
SEAL is. It is
neither block cipher nor stream cipher, but a
(length-increasing) pseudorandom function family.
Such an object seems easier to use than a stream cipher,
yet more amenable than a block cipher (it seems) for
achieving extreme software speeds.
After SEAL, we knew how to encrypt messages faster than we knew
how to authenticate them.
• In
[bucket] we
describe and analyze a new universal hash-function family,
bucket hashing, and we
advocate the Carter-Wegman approach as the most promising
means for achieving software-efficient
message authentication.
• Our UMAC algorithm incorporates
a completely different universal hash-function family,
one designed to exploit SIMD instructions of modern
architectures.
With UMAC
we carried out all of the ``crypto engineering'' necessary
to turn the hash-function family into a fully specified MAC.
The final algorithm authenticates messages at roughly
1.0 cycles per byte (for 2^-60 forgery probability), which
is about *ten times faster* than any other
MAC that has been specified in the literature.
On the downside, UMAC is complex,
requires large key-setup time, and needs one to carefully select
a variety of parameters.
• The PMAC algorithm overcomes
these problems, and is fully parallelizable,
though its speed, in software, will not come close to that
of UMAC's.
PMAC has been proposed to NIST as a block-cipher mode of operation,
and it is also being considered by an IETF working group that is defining
standards for high-speed storage.

** Asymmetric encryption.**
The RSA primitive
*f*(*x*) = *x*^*e* mod *N*
is not used
(and should not be used) directly to encrypt a string;
instead, asymmetric encryption is accomplished by an (often ignored)
protocol which sits on top of the RSA primitive.
For example, RSA-based encryption is often done using
"Public Key Cryptography Standard #1" (RSA PKCS #1),
which specifies how to
format messages prior to applying the RSA primitive.
• In [oaep] we
provide a formatting approach which has come to be
known as "OAEP".
We prove, in the random-oracle model, that
OAEP achieves semantic security, as well as a weak form of
"plaintext awareness".
Earlier techniques had no such guarantees.
Recent work by others has shown that RSA-OAEP also
achieves the strongest form of chosen-ciphertext security.
OAEP is in the
IEEE P1363 standard, and in other standards/draft standards.
It is in recent versions of SET/TLS,
which implies that you are almost certainly reading
this web page using a browser that has OAEP implemented within it.
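The heart of OAEP is a two-round Feistel network over the message and a random string, using two "random oracle" generators G and H. The sketch below captures just that Feistel structure, instantiating G and H with an MGF1-style construction over SHA-256; the real scheme adds further formatting and integrity checks, so this is an illustration, not the standardized encoding.

```python
import hashlib

def mgf(seed: bytes, outlen: int) -> bytes:
    # MGF1-style mask generation from SHA-256: the generators G and H.
    out = b""
    ctr = 0
    while len(out) < outlen:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:outlen]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

R = 16      # random-string length
DATA = 48   # data-block length; total padded length is DATA + R

def oaep_pad(msg: bytes, rand: bytes) -> bytes:
    # s = (msg || 0...0) XOR G(r);  t = r XOR H(s);  output s || t.
    data = msg + b"\x00" * (DATA - len(msg))
    s = xor(data, mgf(rand, DATA))
    t = xor(rand, mgf(s, R))
    return s + t

def oaep_unpad(padded: bytes, msg_len: int) -> bytes:
    # Run the Feistel network backwards: recover r from t, then the data.
    s, t = padded[:DATA], padded[DATA:]
    rand = xor(t, mgf(s, R))
    return xor(s, mgf(rand, DATA))[:msg_len]
```

The padded block is what gets fed to the RSA primitive; the Feistel structure is what lets the random-oracle proof go through.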
• Moving on to "El Gamal-style" encryption,
our DHIES algorithm does for the
Diffie-Hellman primitive
what OAEP did for RSA: we describe a simple way to use the
primitive so as to provably achieve standard security goals
(like the strongest form of chosen-ciphertext security),
under specified hardness assumptions, within the random-oracle model.
DHIES is in standards/draft standards of ANSI, IEEE, ISO, and SEC,
and implementations are sold by companies such as Certicom.
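The DHIES recipe itself is short: send an ephemeral Diffie-Hellman value, hash the shared secret into an encryption key and a MAC key, then symmetrically encrypt and tag. The toy sketch below uses a small multiplicative group, a SHA-256 keystream as the symmetric cipher, and HMAC for the tag; all parameters are illustrative stand-ins, not the standardized choices.

```python
import hashlib, hmac, secrets

p = 2**127 - 1   # toy group modulus (real deployments use proper DH groups)
g = 3

def kdf(secret: int) -> tuple:
    # Hash the DH shared secret into an encryption key and a MAC key.
    k = hashlib.sha256(secret.to_bytes(16, "big")).digest()
    return k[:16], k[16:]

def stream(key: bytes, n: int) -> bytes:
    # Simple SHA-256 keystream, standing in for the symmetric cipher.
    out = b""
    c = 0
    while len(out) < n:
        out += hashlib.sha256(key + c.to_bytes(4, "big")).digest()
        c += 1
    return out[:n]

def encrypt(pk: int, msg: bytes):
    u = secrets.randbelow(p - 2) + 1          # ephemeral exponent
    eph = pow(g, u, p)                        # sent in the clear
    ekey, mkey = kdf(pow(pk, u, p))
    ct = bytes(m ^ s for m, s in zip(msg, stream(ekey, len(msg))))
    tag = hmac.new(mkey, ct, hashlib.sha256).digest()
    return eph, ct, tag

def decrypt(sk: int, eph: int, ct: bytes, tag: bytes) -> bytes:
    ekey, mkey = kdf(pow(eph, sk, p))
    if not hmac.compare_digest(tag, hmac.new(mkey, ct, hashlib.sha256).digest()):
        raise ValueError("invalid tag")
    return bytes(c ^ s for c, s in zip(ct, stream(ekey, len(ct))))
```

The MAC over the ciphertext is what buys chosen-ciphertext security in the analysis.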
• In [relations]
we compare the relative strengths of popular notions of security for
public-key encryption schemes.
We consider the goals of indistinguishability and non-malleability,
each under chosen-plaintext attack and two kinds of
chosen-ciphertext attack.
For each of the resulting definitions we
prove either an implication (every scheme meeting one notion
must meet the other) or a separation (there is a scheme meeting
one notion but not the other, assuming the first notion can be met at all).
In this way the paper provides a rather complete picture of the
relationship among popular notions of encryption-scheme security.

** Digital signatures.**
Just as you should not encrypt a message by directly applying the
RSA encryption primitive, so too you should not digitally sign a message by
directly subjecting the message to a mathematical function such as the inverse RSA
map,
*f*(*y*) = *y*^*d* mod *N*.
(If you really tried to sign that way, it would be possible to forge electronic
signatures.)
• In
[sig] we
describe a good way to sign using a primitive such as RSA.
What sets apart our method is that it is proven to be secure in a very
satisfying way.
As usual, we give a reduction to demonstrate that a way to break the
higher-level protocol (the signature scheme) implies a way to break the
lower-level primitive (the RSA function). But this time the reduction is
*tight*: if you can break the signature scheme with a certain amount
of computational resources, then you can break the RSA function
with essentially identical computational resources.
This seems in many ways to be the ultimate object of
practical, scientific cryptography:
to give provable-security results for practical protocols using
completely tight reductions.
(However, note that the result is, once again, in the random-oracle model.)
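The hash-then-invert shape of such schemes is easy to show. Here is a minimal full-domain-hash-style sketch with toy RSA parameters (real moduli are thousands of bits); the scheme of [sig], PSS, additionally randomizes each signature with a salt, which is what makes its reduction tight.

```python
import hashlib

# Toy RSA parameters, for illustration only.
p, q = 61, 53
n = p * q          # 3233
e = 17
d = 2753           # e*d = 1 (mod lcm(p-1, q-1))

def H(msg: bytes) -> int:
    # Hash the message into Z_n, playing the role of the random oracle.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Hash, then apply the private RSA map f^{-1}(y) = y^d mod n.
    return pow(H(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Check that applying the public map recovers the hash.
    return pow(sig, e, n) == H(msg)
```

Hashing first is essential: it destroys the multiplicative structure that makes signing by applying f^{-1} to the raw message forgeable.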

** Random oracles.**
Many cryptographic goals have been solved in the provable-security tradition,
but only by virtue of protocols too inefficient to be competitive with what
*ad hoc* techniques already provide.
Indeed this is one of the reasons that provable security has often been ignored
by security practitioners.
•
In [ro] we suggest that the *random-oracle
paradigm* can, in many cases, provide a way to bridge cryptographic theory and
cryptographic practice.
The method (which has its roots in [Fiat, Shamir 86] and a variety of folklore)
says to:
(1) Assume the existence of a public random oracle;
(2) Do provable security (both the definition and the protocol)
in this enriched model of computation;
(3) Finally, instantiate the random oracle with an object like a cryptographic hash
function.
It is the *thesis* underlying the approach that, despite the heuristic
instantiation step, substantial assurance benefits remain (far better assurance than
protocols that don't admit any sort of provable security).
We explain the random-oracle paradigm and give
many examples which make clear the power and generality of the method.
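The three steps can be made concrete in a few lines. In the proof, the oracle is a truly random function, realized by lazy sampling; in the fielded protocol, one heuristically swaps in a cryptographic hash function. This is a sketch of the paradigm's plumbing, nothing more.

```python
import hashlib, os

# Steps (1)-(2): in the model, the oracle is a truly random function,
# realized by lazy sampling: a fresh random answer for each new query,
# and the same answer whenever a query repeats.
_table = {}

def ideal_oracle(x: bytes) -> bytes:
    if x not in _table:
        _table[x] = os.urandom(32)
    return _table[x]

# Step (3): in the instantiated protocol, the random oracle is
# heuristically replaced by a cryptographic hash function.
def instantiated_oracle(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()
```

The thesis is that a protocol proven secure when all parties query `ideal_oracle` retains substantial assurance when they query `instantiated_oracle` instead.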
•
Subsequent papers of ours in the random-oracle model include
[oaep],
[pss], and some of
[dhies].
The popularity of the random-oracle model continues to grow,
despite limitations discussed by
[Canetti, Goldreich, Halevi].
The
[ro] paper is now my most cited paper, with
about 700 citations.

** Authenticated key exchange.**
The problem is how two parties can authenticate
each other across the network and/or obtain a joint
session key (the latter problem is the more useful one).
The adversarial model envisages an active adversary who can,
among other abilities, start multiple sessions of different entities.
Many protocols for these problems have been proposed, but
lots of them have turned out to be wrong. For the most part, the entire
field had been rather *ad hoc* and unscientific.
In a new approach to this subject, we brought provable
security to entity authentication and associated problems
of key distribution.
This is a large area of research in which protocols either were
given no justification
for correctness whatsoever, or were justified
in a rather weak sense, by first passing to a
non-cryptographic abstraction about the underlying cryptography
(e.g., the "logical approach").
•
In [eakd]
we cover the two-party symmetric case, where the
parties hold a long-lived shared key and either (1) want to authenticate one another, or
(2) authenticate one another and simultaneously derive a shared
session key.
Then in
[3pkd]
we treat three party key distribution (the Needham-Schroeder, or Kerberos,
setting), where each party shares a long-lived key with a trusted server, and they
want to derive a shared session key.
•
In
[dict]
we cover key exchange in a setting where long-lived keys
are user-selected passwords, and may therefore be subject
to ``dictionary attack.''
In all cases,
we provide
a model, definitions, and protocols which are proven secure.
In all cases,
the protocols
are as practical as currently implemented ones.

** Formal encryption.**
In joint work with
Martín Abadi, our paper
"Reconciling Two Views of Cryptography"
spans two very different ways to look at cryptography.
One way (Martín's world)
relies on a simple but effective formal approach.
The other way (my world) relies on a detailed computational model
that considers complexity and probability.
These views spring from different (and basically disjoint) communities, and
there has been an uncomfortable gap between
these two approaches.
Here we start to bridge the gap, by
providing a computational justification for a formal
treatment of encryption.
Our main result says that, under appropriate complexity assumptions,
formal ciphertexts which are equivalent with respect to a simple
set of rules correspond to computationally
indistinguishable ensembles.
This is a rather new direction for me (and for my field),
and it is too early to tell if it will take off.
We hope that this work engenders
much follow-on work, and eventually helps to facilitate the use of
"higher-level reasoning" as a way to get
security guarantees in the computational setting.

** Secure function evaluation.**
I have not worked in this area for a long time,
but the secure function evaluation (SFE) problem is certainly
interesting.
In this problem a group of "players", each of whom holds his own private input,
wish to evaluate some function on
these private inputs in a way that reveals only the
correctly-computed answer.
In the presence of a trusted party, each player could send his input
to the trusted party, who would compute the function and return the
answers.
We want to make communication protocols
which accomplish the same thing (in the absence of a trusted party).
This is a classical problem, first proposed by [Yao82],
and many lovely protocols have been devised, beginning with
[GMW87] and [BGW88,CCD88].
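For the special case of summation, the trusted party can be eliminated with a classical trick, additive secret sharing, which conveys the flavor of the problem. The sketch below is a semi-honest illustration of the idea, not any particular protocol from this literature.

```python
import secrets

M = 2**32  # all arithmetic is modulo M

def share(x: int, n: int) -> list:
    # Split x into n random additive shares; any n-1 shares reveal nothing.
    shares = [secrets.randbelow(M) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % M)
    return shares

def secure_sum(inputs: list) -> int:
    # Player i shares its input among all n players; each player publishes
    # only the sum of the shares it received.  The public subtotals add up
    # to the sum of the inputs, and nothing else is revealed.
    n = len(inputs)
    dealt = [share(x, n) for x in inputs]
    subtotals = [sum(dealt[i][j] for i in range(n)) % M for j in range(n)]
    return sum(subtotals) % M
```

General SFE does this for an arbitrary function rather than a sum, and against stronger adversaries.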
My work in this domain has been in two directions.
• In one direction, Micali and I
developed formal definitions for the goal
(described in a CRYPTO '91 paper, but see
[Dodis, Micali] for a better view of how our definitions evolved.)
The other direction that I followed was to look at the
efficiency of SFE protocols.
•
In
our STOC '90 paper (and
in my Ph.D. thesis),
we show how SFE can be achieved
in a constant number of rounds
(computational setting with a complete network of private channels,
broadcast, and honest majority).
To do this, we abandon the gate-by-gate approach and employ
the generally-useful paradigm of issuing a "garbled circuit".

** Complexity theory.**
I haven't been doing complexity theory of late, but perhaps
I will return to it.
•
In [qp]
we show that, under a complexity assumption,
there is no polynomial-time approximation algorithm (even with a
terrible guarantee) for the problem of QUADRATIC PROGRAMMING.
Compared to what came later, the techniques here
are simple. But the result was one of the first
non-approximability results following [FGLSS], and the particular
problem considered is of great
interest to the mathematical programming community.
•
My other complexity-theory paper
shows that everything provable (the class IP) is provable in (computational)
zero knowledge, either (1) under a complexity assumption, or
(2) in the "envelope model."
This result has a somewhat confused history, with
[Ben-Or] and [Impagliazzo, Yung 87] also deserving credit for (1).
