PKI Layer Cake: New Collision Attacks Against The Global X.509 CA Infrastructure
[Version 1.8] 5-Aug-2009
Dan Kaminsky, Director of Penetration Testing, IOActive Inc.
Len Sassaman
Meredith Patterson

Executive Summary:

Research unveiled in December of 2008 showed how MD5's long-known flaws could be actively exploited to attack the real-world Certification Authority infrastructure. In this paper, we demonstrate two new classes of collision, which unfortunately will be somewhat trickier to address than previous attacks against X.509: the applicability of MD2 preimage attacks against the primary root certificate for VeriSign, and the difficulty of validating X.509 Names contained within PKCS#10 Certificate Requests. We also call out two possibly unrecognized vectors for implementation flaws that have been problematic in the past: the ASN.1 BER decoder required to parse PKCS#10, and the potential for SQL injection from text contained within its requests. Finally, we seek to remind people that the implications of these attacks are a little larger than some have realized – first, because Client Authentication is sometimes tied to X.509, and second, because Extended Validation certificates were only intended to stop phishing attacks from names similar to trusted brands. As per the work of Adam Barth and Collin Jackson, EV does not in fact prevent an attacker who can synthesize or acquire a "low assurance" certificate for a given name from acquiring the "green bar" EV experience.

Note: This paper is still being edited; a 2.0 version may be forthcoming.

Attack Summary:

This paper contains the full details and context for this attack. For those familiar with the history, here is what is new:

1) MD2RSA Signature Transfer: VeriSign's MD2 Root Can Be Exploited By Creating A Malicious Intermediate With The Same MD2 Hash As Its Parent and Transferring The Signature From The Root To The Malicious Intermediate

VeriSign's Class 3 Root Certificate, required for validation of many if not all certificates signed by VeriSign, is self-signed with MD2. Since a preimage attack exists against MD2, it is possible to create a new, intermediate certificate with the same MD2 hash as the root, and to then transfer the self-signature from the root to this false intermediate. This attack can be run without any interaction with VeriSign's servers, yields a full CA certificate, and, since the MD2 certificate in question is central to VeriSign's operations, cannot easily be addressed without updating validation policies in clients. It is also possible that other CA's once signed certificates with MD2, using roots that are still valid today. There is some evidence that Thawte has done this as well, and may have issues too.

2) Subject Name Confusion: Inconsistent Interpretation Of The Subject X.509 Name in a PKCS#10 Request Can Cause A CA To Emit A Certificate For An Unauthorized Common Name

a) Multiple Common Names in one X.509 Name are handled differently by different API's. OpenSSL, in common use, returns only the first Common Name. Internet Explorer's CryptoAPI trusts every Common Name in the list. NSS, used by Firefox, trusts only the last Common Name. Internet Explorer also has a unique limit on wildcards, rejecting them for first- and second-level DNS labels; NSS has no such limitation. A name such as "CN=www.badguy.com/CN=www.bank.com/CN=www.bank2.com/CN=*" will thus pass validation when tested with OpenSSL, but will also authenticate www.bank.com and www.bank2.com for IE, and will authenticate all possible names in Firefox.
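To make the divergence concrete, the following is a minimal, hypothetical Python sketch – not the code of any of these libraries – that applies the three selection policies to the same parsed Subject, modeled simply as an ordered list of (OID, value) pairs:

# Hypothetical illustration of the three Common Name selection policies
# described above. The Subject is modeled as an ordered list of
# (OID, value) pairs; 2.5.4.3 is the OID for Common Name.
CN = "2.5.4.3"

subject = [
    (CN, "www.badguy.com"),
    (CN, "www.bank.com"),
    (CN, "www.bank2.com"),
    (CN, "*"),
]

def cn_first(name):
    # "First" policy (OpenSSL-style): only the first Common Name is returned.
    return [v for oid, v in name if oid == CN][:1]

def cn_all(name):
    # "All-Inclusive" policy (CryptoAPI-style): every Common Name is trusted.
    return [v for oid, v in name if oid == CN]

def cn_last(name):
    # "Last" policy (NSS-style): only the last Common Name is trusted.
    return [v for oid, v in name if oid == CN][-1:]

print("First:", cn_first(subject))  # ['www.badguy.com'] -- what a CA checking the first CN sees
print("All:  ", cn_all(subject))    # all four names -- what IE would accept
print("Last: ", cn_last(subject))   # ['*'] -- what Firefox would accept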
Similar attacks might also be possible against SANs (Subject Alternative Names), but since the SAN extension has always been intended as a multiple-name container, we expect, or at least hope, that all parsers for it follow the "All-Inclusive" rule.

b) Inefficient ASN.1 BER encodings of OIDs (Object Identifiers) can lead to some API's, but not others, recognizing the OID of Common Name. In ASN.1 BER, OID's are represented via numbers in Base 128. 2.5.4.3 is the "number" for Common Name; 2.5.4.(2^64+3) is not, and neither is 2.5.4.0003. Most API's recognize that. Internet Explorer's CryptoAPI does not, however: it accepts the leading-zero form 2.5.4.0003, and allows the arc 2^64+3 to wrap over to 3. Because most elements of an X.509 Name cannot be authenticated by a CA, CAs must ignore OIDs they don't recognize. So the CA passes through both 2.5.4.0003 == www.bank.com and 2.5.4.(2^64+3) == www.bank.com, while IE sees CN=www.bank.com.

c) Null terminators in the midst of an X.509 Name can lead to some API's seeing different values of Common Name than others. Consider the name www.bank.com\00.badguy.com. A validator for the Second Level Domain would see "badguy.com" and issue a WHOIS request for that. However, both IE's CryptoAPI and Firefox's NSS will terminate their value parsing at the null, both seeing and validating a certificate for www.bank.com. This is especially problematic for NSS, which will accept a certificate for *\00.badguy.com as being valid for all possible names, i.e. *.

d) OpenSSL's default "compat" mode for dumping X.509 Subject Names is vulnerable to injection attacks. Independent of special API's, OpenSSL has three obvious points at which a CA can acquire the X.509 Subject Name for validation: before signing, by dumping the text of the PKCS#10 Certificate Request; during signing, by analyzing the output of the signing command line; and after signing, by dumping the text of the generated certificate. All three are vulnerable to the First/All-Inclusive/Last attack described earlier. During signing, the output becomes "subject=/O=Badguy Inc/CN=www.badguy.com/OU=Hacking Division/CN=www.bank.com", spuriously implying a CN of www.badguy.com is present in the generated certificate. In fact, CN is listed as such because the value of O is "Badguy Inc/CN=www.badguy.com". Similar text/ASN.1 confusion happens before and after signing – the dumped line is actually escaped out to "O=Badguy Inc, CN=www.badguy.com, OU=Hacking Division, CN=www.bank.com". It is possible to defend against injection attacks during request or certificate dumping by using any of the (non-default) escaping nameopts, such as RFC2253, oneline, or multiline. Note that it might be worth updating to a new version of OpenSSL, since existing versions have an annoying but ultimately non-exploitable read access violation when filtering malicious multibyte strings.

3) PKCS#10-Tunneled SQL Injection: Certificate Authorities Inserting PKCS#10 Subject Names Into A Database May Not Be Employing Comprehensive String Validation, Allowing SQL Injection Attacks

As mentioned earlier, ASN.1 allows many string types, with BMPString (UTF-16, supposedly minus certain characters) and UTF8String being the most flexible, but UniversalString also being worthy of analysis. The issue here is that the encoding and attack vector are obscure, and strings from them may be getting injected into backend CA databases without sufficient validation. SQL injection into a Certification Authority's database backend would be distinctly problematic, due to the special trust this particular data store has to the rest of the Internet.
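The defense on the database side is standard. As a purely illustrative sketch (the table and column names here are hypothetical, not drawn from any real CA implementation), a subject string pulled from a PKCS#10 request should reach the database only as a bound parameter, never via string concatenation:

import sqlite3

# Hypothetical CA backend table; schema and names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE requests (id INTEGER PRIMARY KEY, subject_cn TEXT)")

# A Common Name taken verbatim from a PKCS#10 request. BMPString, UTF8String,
# and UniversalString values can carry quotes and other characters that are
# hostile to naively assembled SQL.
subject_cn = "www.bank.com'); DROP TABLE requests; --"

# Vulnerable pattern (shown for contrast, left commented out):
# db.execute("INSERT INTO requests (subject_cn) VALUES ('" + subject_cn + "')")

# Safer pattern: a bound parameter keeps the value out of the SQL grammar.
db.execute("INSERT INTO requests (subject_cn) VALUES (?)", (subject_cn,))
db.commit()

print(db.execute("SELECT subject_cn FROM requests").fetchall())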
4) PKCS#10-Tunneled ASN.1 Attacks: Certificate Authorities Exposing PKCS#10 Receivers May Be Exposing Unhardened ASN.1 BER Listeners

ASN.1 BER is tricky to parse, with many, many possibilities for consistent and predictably exploitable attack surfaces. The PROTOS project found a large number of vulnerabilities via the SNMP consumers of ASN.1, but it is possible that some of the ASN.1 BER parsers found in commercial CA implementations were not covered in the 2002 PROTOS lockdown and thus are still vulnerable.

5) Generic SSL Client Authentication Bypass: Certification Chain Compromise May Allow Client Authentication Requirements To Be Bypassed

The MD2 attacks in this paper may have larger implications in certain deployments. An attacker with the ability to directly issue certificates – rather than just the ability to get an arbitrary X.509 Subject Name past a validator – gets access to the "Client Authentication" EKU (Extended Key Usage) attribute that controls whether a certificate allows for authenticating a client to a server. Since Root CA's do not normally issue certificates with "Client Authentication" set, some systems may not test for what would happen if such a certificate arrived. This may create a generic authentication bypass in some systems. A similar bypass may be extended from Stevens and Sotirov's MD5 collisions, in situations where the Client Authentication EKU (which is not present in the root certificate they attacked) is insufficiently validated.

6) EV Hijack: "Extended Validation" Certificate Programs Offer No Defense Against An Attacker With A "Low Assurance" Certificate, As Per The Work Of Adam Barth and Collin Jackson

EV certs were apparently designed to address phishing attacks where a bank at https://www.bankoffoo.com is suffering attacks from people who have registered www.bank-of-foo.com or www.bankofoo.com. EV was specifically not designed to deal with the case where an attacker actually has a certificate, even a low assurance certificate, for www.bankoffoo.com, and the attacker has a DNS or other route manipulation attack akin to the Summer 2008 DNS Cache Poisoning attacks. Adam Barth and Collin Jackson have shown that browsers do not enforce a scripting barrier between https://www.bankoffoo.com (EV certified) and https://www.bankoffoo.com (Low Assurance certified). So all an attacker needs to do is proxy enough of an SSL session to get the main HTML of a page loaded under EV (thus producing the green bar); he can then kill the TCP session. After that, the attacker can host whatever script he wants from the Low Assurance cert, and that script will inevitably be merged with the real site with no negative impact on the EV experience.

Remediation Summary:

A future version of this paper will include a full measure of who needs to address which issues. This problem is unfortunately smeared across browser manufacturers, cryptographic API maintainers, and certificate authorities (this includes their resellers).
For a quick summary, however:

1: MD2RSA
  Browser Manufacturers: Possibly, to support Cryptographic API changes
  Cryptographic API Manufacturers: Yes, to change validation rules
  Certificate Authorities: Yes, to agree to resolution plan

2a: Multiple Common Names
  Browser Manufacturers: Possibly, to determine policy and measure exposure
  Cryptographic API Manufacturers: Yes
  Certificate Authorities: Possibly, to determine policy and measure exposure

2b: Inefficient ASN.1 Bypass
  Browser Manufacturers: Possibly, to determine policy and measure exposure
  Cryptographic API Manufacturers: Yes
  Certificate Authorities: Possibly, to determine policy and measure exposure

2c: Null Terminator Bypass
  Browser Manufacturers: Possibly, to determine policy and measure exposure
  Cryptographic API Manufacturers: Yes
  Certificate Authorities: Possibly, to determine policy and measure exposure

2d: OpenSSL "compat" Bypass
  Browser Manufacturers: No
  Cryptographic API Manufacturers: Yes, definitely for OpenSSL, possibly for others
  Certificate Authorities: Yes, to determine if commercial CA implementations have similar string parsing layers

3: PKCS#10 SQL Injection
  Browser Manufacturers: No
  Cryptographic API Manufacturers: Possibly, to add support for filtering at the API layer
  Certificate Authorities: Yes

4: PKCS#10 ASN.1 Exploitation
  Browser Manufacturers: No for the major browsers, since presumably they have already had to lock down their ASN.1 engines
  Cryptographic API Manufacturers: Possibly, to make sure that PKCS#10 is being parsed with a post-PROTOS hardened library
  Certificate Authorities: Possibly, to make sure that PKCS#10 is being parsed with a post-PROTOS hardened library

5: Client Certificate Bypass
  Browser Manufacturers: No
  Cryptographic API Manufacturers: Yes, to potentially control the list of certificates that a web server will insert into the CTL (Certificate Trust List)
  Certificate Authorities: No

6: EV Bypass
  Browser Manufacturers: Yes, to manage PR / understanding around the purpose of EV
  Cryptographic API Manufacturers: No
  Certificate Authorities: Possibly, to manage PR, and to perhaps create a "blacklist" of EV certified names that CA's will not issue a certificate for

Background:

SSL is arguably the Internet's most popular technology for encrypting reliable data flows from one endpoint to another. Any URL that begins with HTTPS, and results in a small yellow lock showing up in the browser, is using SSL to secure its link. But encryption without authentication is worthless: one can easily end up encrypting information with the key of an attacker! Authentication is managed in SSL via certificates – assertions of identity that are cryptographically signed by mutually trusted third parties known as Certificate Authorities, or CA's. VeriSign is probably the Internet's most well known CA, but there are many others – over 200, by some counts. A rough summary of how the CA system works for SSL/HTTPS is as follows:

1) Alice acquires a DNS name for a website, http://www.alice.com. She would like to receive encrypted traffic.

2) She generates a public and private key, which allows the world to encrypt traffic to her, but allows only her to decrypt it. Unfortunately, anyone could generate a keypair – she has to convince people to use her keys, and nobody else's. So she approaches a Certification Authority.

3) She sends a CA a request formatted via the PKCS#10 standard, which uses ASN.1 BER encoding to represent a request to link one RSA public key to what's known as an X.509 Name. There are many possible elements that can be part of an X.509 Name – City, State, Organization, Organizational Unit, and so on. But the only element a CA can validate is the CN, or "Common Name". The CN contains the name of the website being secured – www.alice.com – and is what the browser uses to make sure the owner of www.alice.com cannot impersonate www.bank.com.

4) The CA validates, through some procedure, that Alice is a legitimate representative for www.alice.com. This is most commonly done by looking up the administrative and technical contacts for alice.com in the WHOIS database, and sending them an email asking if it's OK to issue a certificate to this "Alice" character.
Another mechanism, however, involves creating an HTTPS link to the IP address registered in DNS, and checking for content at a given URL.

5) Once the CA is satisfied that Alice is allowed to bind her cryptographic keypair to the X.509 Name being requested, it builds a certificate asserting such, creates a hash (a sort of short summary fingerprint) of that certificate, and signs that hash with the private key from its own certificate – a CA certificate, trusted by clients to mark other certificates as valid.

6) Alice then hosts an SSL server, advertising that she can decrypt traffic encrypted to the public key in the certificate issued by the CA.

7) Bob, or any other client, resolves the IP address for www.alice.com, and sets up an SSL connection to it. Bob authenticates the SSL connection with the public key claimed by Alice's server, and authenticates that public key because it is contained within a certificate that chains back to a certificate he already has. Ultimately, there is a set of certificates that is installed with every major browser, and if it can be shown that a given cert is trusted by these CA certificates, then encryption will proceed against the public key inside and the client will emit whatever trusted user interface it is configured to.

2008 was not a good year for the Certification Authority system. Mike Zusman of Intrepidus Research was able to bypass WHOIS validation at Comodo by claiming his desired certificate was only going to be used "for internal servers only". His desired certificate was for https://www.live.com, Microsoft's search engine. A rather more embarrassing failure was disclosed by the CA StartCom, which discovered a competing CA that simply skipped Step 4 entirely – in StartCom's words, "no questions asked - no verification checks done - no control validation - no subscriber agreement presented, nothing." Unfortunately, Zusman had found a similar (though presently undisclosed) attack that worked against StartCom's systems as well, presumably through their web interface.

Beyond these implementation flaws, the basic design of CA validation via both WHOIS email and HTTPS-via-IP-in-DNS was exposed as faulty in the Summer 2008 DNS Cache Poisoning attacks discussed by Dan Kaminsky (one of the authors of this paper). If DNS is compromised at the CA, both the email and the HTTPS connection can easily be subverted. While DNS has been remediated at all known CA's, other route manipulation mechanisms such as Pilosov's BGP attacks create some continuing exposure (though the BGP stream is small enough, and logged enough, for firms such as Renesys to know immediately if such an attack took place).

Of course, the most well known attack against CA's in some time occurred in December 2008, with Stevens and Sotirov's applied work against CA's that still used MD5 as their hash algorithm in Step 5, above. MD5 had been known to be insecure since at least 1996, with a regular stream of findings against the algorithm, punctuated in particular by Xiaoyun Wang's generation of MD5 collisions in 2004 and Stevens, Lenstra, and de Weger's extension of Wang's attacks to chosen-prefix attacks in 2007. Stevens and Sotirov extended the 2007 research by applying it to the real-world Certification Authority system (very roughly) as follows:

1) They generated a certificate claiming to be not an ordinary end-entity certificate, as a CA might issue to them, but instead an Intermediate Certificate trusted in and of itself to sign other certificates.
Of course, not actually being Certificate Authorities, their generated certificate needed a signature from a trusted root certificate.

2) They found a CA, RapidSSL, that both used the MD5 signing algorithm against its trusted root certificate and generated predictable certificates, with nothing unexpected in either the Serial Number or Signing/Expiration time fields, thus ensuring that what would be MD5'd could be known in advance.

3) They gave the CA a PKCS#10 request which would force it to generate an innocent certificate that nonetheless had the same MD5 hash as the certificate they generated in Step 1. Since the hash was the same, the signature generated across that hash could be transferred from the innocent certificate, synthesized by the CA and granting no special powers, to the certificate generated by Stevens and Sotirov, which could issue certificates for https://www.bank.com.

Luckily, there were very few CAs remaining that used MD5, and the immediate risk from the new attack was mitigated by those CA's switching to SHA-1. But there are other attacks along these lines, and we will go into those attacks now.

Attack #1: VeriSign's MD2 Root Can Be Exploited By Creating A Malicious Intermediate With The Same MD2 Hash As Its Parent and Transferring The Signature From The Root To The Malicious Intermediate

As late as March of 1998, VeriSign, possibly the world's most popular Certification Authority, was still issuing certificates using a predecessor of MD5, the MD2 algorithm. According to Peter Gutmann's X.509 style guide:

"VeriSign were, as of March 1998, still issuing certificates with an MD2 hash, despite the fact that this algorithm has been deprecated for some time. This may be because they have hardware (BBN SafeKeypers) which can only generate the older type of hash."

RFC 2313 offers the following defense of VeriSign, however – historically, in the choice between MD2, MD4, and MD5, MD2 offered the highest security level at the expense of speed:

"MD2, the slowest of the three, has the most conservative design. No attacks on MD2 have been published."

Time has passed, however, and the state of the art in cryptography has advanced. One decade later, Soren S. Thomsen released a paper, "An improved preimage attack on MD2", showing that a preimage attack was possible against the MD2 algorithm. Preimage attacks allow an attacker who possesses a hash to synthesize a byte stream that sums to that hash. The work effort in Thomsen's attack is 2^73 – outside the bounds of trivial computation – but given that its predecessor attack was on the order of 2^97, this can be considered one mathematical advance away from at least a distributed-computation work effort.

The obvious question, however, is why it would matter if MD2 fell. Nobody has been signing certificates with MD2RSA for at least a decade. Ah, but there is a problem.
The primary root certificate for VeriSign – trusted by all browsers, and required to validate certificates from sites as important as https://www.amazon.com – is itself signed with MD2RSA:

$ openssl x509 -in VeriSign.cer -inform der -text
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            70:ba:e4:1d:10:d9:29:34:b6:38:ca:7b:03:cc:ba:bf
        Signature Algorithm: md2WithRSAEncryption
        Issuer: C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority
        Validity
            Not Before: Jan 29 00:00:00 1996 GMT
            Not After : Aug  1 23:59:59 2028 GMT
        Subject: C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    …
                Exponent: 65537 (0x10001)
    Signature Algorithm: md2WithRSAEncryption
    …

At first glance, this shouldn't matter. Just because a certificate is signed with MD2RSA does not mean its children will be. VeriSign has, by all accounts, been signing only SHA-1 hashes with this certificate's private key, and has been doing so for over a decade. And the browser does not trust this certificate because it is signed with MD2RSA, or even because it's signed at all. The browser trusts it for the same reason it trusts its crypto libraries: it was installed by the manufacturer. The fact that the certificate is signed using MD2RSA is thus not actually part of the chain of trust that makes the certificate valid.

But the signature is still there, and we can make use of it. The VeriSign root certificate is important – anything that it signs is fully trusted. But signatures are only across hashes, and VeriSign has signed its root certificate's own MD2 hash. This means that, if we can generate an Intermediate CA Certificate with the same MD2 hash as the VeriSign root, we can transfer the RSA signature from the root to the intermediate and the signature will still be valid. Because of Thomsen's research, we are almost to the point where this is practical. Like Stevens and Sotirov, we are transferring a signature from a valid cert to an invalid one, and using identical hashes to keep the signature valid. Unlike Stevens and Sotirov, we have a preimage attack that can be done entirely offline – there is no need to interact with CA servers and force them to sign something they really shouldn't; rather, we can compute the necessary material ourselves.

Remediating this attack is tricky. We cannot eliminate the VeriSign root certificate from our trust store, as it is required (if not directly, at least through intermediates) by a large portion of the signed certificates in the field. This stands in contrast to the Stevens and Sotirov scenario, which was happily almost entirely remediated upon elimination of the MD5 signers. Replacing the MD2 self-signed root with a SHA-1 self-signed root would actually have no impact: the problem is that the private key for the VeriSign root, whatever it may be, has issued a signature across an MD2 hash. That signature will be valid for a malicious intermediate certificate until 2028.

One possibly effective approach might be to require, during certificate chain validation, that a given hash show up only once. This would be somewhat complicated to implement, but would work for both MD2 roots and any intermediates that might still be out there.
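As a purely illustrative sketch of that policy (not the behavior of any existing validator), the check amounts to hashing each certificate in the candidate chain and refusing to proceed if any hash value repeats, since a transferred signature requires exactly such a duplicate:

import hashlib

def chain_has_duplicate_hash(chain_der, digest_name):
    # chain_der: list of DER-encoded certificates, root first.
    # digest_name: the digest to check, e.g. "md5" or "sha1". A real
    # implementation would hash each tbsCertificate with the digest named
    # in its signatureAlgorithm rather than using a single fixed digest.
    seen = set()
    for der in chain_der:
        h = hashlib.new(digest_name, der).hexdigest()
        if h in seen:
            return True   # same hash twice: possible signature transfer
        seen.add(h)
    return False

# Hypothetical usage: refuse the chain outright if any hash repeats.
# if chain_has_duplicate_hash([root_der, intermediate_der, leaf_der], "md5"):
#     raise ValueError("duplicate hash in chain; refusing to validate")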
Assuming, however, that the only MD2 certificate still in use is this VeriSign certificate, the best approach may be this suggestion from an affected party: only allow MD2 at the root, where the certificate is guaranteed to be trusted via an out-of-band mechanism or not at all. This is simple, and yet we cannot find a flaw with it.

There is another reason it may be wise to update certificate validation policies to ignore MD2 anywhere but the root: while no known Certification Authority today signs certificates with MD2, or even MD5, as of the writing of this paper, that does not mean such signatures were not issued many years ago. One might think this wouldn't matter – even if we could still find such a certificate, we wouldn't have the matching private key, the X.509 Subject Name would be something useless, and the certificate would have long since expired. The problem is that large-scale Internet scanning has in fact yielded an MD2-signed certificate, issued by the still-valid, but assuredly no longer MD2-signing, Thawte root.

# openssl x509 -in 0.cer -inform der -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 530833 (0x81991)
        Signature Algorithm: md2WithRSAEncryption
        Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Server CA/[email protected]
        Validity
            Not Before: Jul  9 20:42:27 2001 GMT
            Not After : Aug  1 08:40:37 2002 GMT
        Subject: [removed from paper]
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    …
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
    Signature Algorithm: md2WithRSAEncryption
    ...

It turns out that the private key, the X.509 Subject Name, and the expiration date are all trivial-to-change elements of the child certificate – trivial to change, of course, because it is easy to generate a new certificate with different values but the same MD2 hash, which then matches up with the RSA signature across that MD2 hash from 2002. However this problem is fixed, it is most likely to be addressed by VeriSign (who also owns Thawte) and Browser/Cryptographic API Manufacturers. Most CA's do not need to worry about this specific issue. Despite the fact that this attack is slightly out of computational reach, it should be recognized that 2^64 work efforts have been accomplished in the field, and this is only ~512x that work effort. Even without mathematical advances, the system is at definite risk.

Attack #2: Inconsistent Interpretation Of The Subject X.509 Name in a PKCS#10 Request Can Cause A CA To Emit A Certificate For An Unauthorized Common Name

As described earlier, the process of acquiring a certificate requires sending your public key, alongside your claimed identity, to a Certification Authority. This generally requires submitting a PKCS#10 request through a web interface, which once decoded may be seen as follows:

$ openssl req -in request.pem -text
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: O=Foo Inc., OU=IT Department, CN=www.ioactive.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    …
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: md5WithRSAEncryption
    …

This structure is not actually simply text – PKCS#10 requests are data structured according to the ASN.1 BER standard.
ASN.1, or Abstract Syntax Notation #1, is a mechanism by which structured data can be efficiently exchanged between nodes on a network. BER, or Basic Encoding Rules, is a particular form by which ASN.1 may be represented as bytes on a wire. ASN.1 sits somewhere between a predecessor of and a competitor to XML, and it is interesting to see exactly what's going on in this ASN.1 request.

(Note: In theory, most of the protocols described in this document should be using DER – the Distinguished Encoding Rules, which are a "best practices" subset of BER. However, as per the policy of "Be conservative in what you send and liberal in what you accept", real-world encoders seem to try to get as close as possible to the idealized patterns of DER when they generate content, but accept much looser BER-encoded bytestreams when called upon to parse. Based on real-world observation, we're simply going to describe these protocols as using BER. You are what you accept.)

There is a full schema associated with PKCS#10 requests, and that schema may be found in RFC 2986. An interesting element of ASN.1 is that the encoding reflects as little of the schema as possible, preferring to simply trust that a decoder will have the schema compiled into it. At that point, position becomes fundamentally meaningful. For example, the node two levels deep, at (/0/0), simply is the version number, while (/0/1/0) is the Subject X.509 Name, which is a sequence of sets of sequences of OID (Object Identifier) / String pairs.

The Subject X.509 Name is called out specifically because it is at the heart of the trust model in certificates. There are many possible descriptors that can live within the OID/String pairs – Country, Organization, Organizational Unit – but in the context of web browsers, the only name that matters is the Common Name, for that is the name against which the name of the website being secured is compared. By the same token, this Common Name is the one element that a Certification Authority must validate, and validate correctly, or it will issue a certificate granting rights for names that the user hasn't proven himself worthy of asserting. There are thus two classes of consumer for the same sequence of bytes. Security requires certificate authorities to see the same thing that browsers do. Do they? Not necessarily.

Attack 2A: Multiple Common Names in one X.509 Name are handled differently by different API's.

As mentioned earlier, an X.509 Name is composed of a sequence of sets of sequences of OID/String pairs. When the OID equals 2.5.4.3, the string attached to that particular sequence is interpreted as the Common Name. But what if there are multiple sequences, each of which has an OID of 2.5.4.3? Which strings will be interpreted as the Common Name? It depends. Suppose, for a moment, that the subject name in question was "/O=Badguy Inc/CN=www.badguy.com/OU=Hacking Division/CN=www.bank.com/CN=*/". Inside a PKCS#10 Certificate Request's ASN.1 encoding, we might see the corresponding OID/String pairs laid out byte by byte (0x2A is hexadecimal for "*"). Upon receiving such a request, four policies are possible:
