terminology I

The … in "X.509", by hase

some boring history

The X.400 and X.500 series of standards stem from the old telephony world.
The standardization organization was the CCITT (Comité Consultatif International Télégraphique et Téléphonique), later renamed the ITU (International Telecommunication Union), and back in the bad old days its members were government agencies: the postal and telecommunications agencies.
Today's crowd can hardly imagine how that worked, especially since the US telco companies were privatized early on, back when the likes of Deutsche Bundespost Telekom or Telefónica were still part of the government.

The CCITT/ITU defined standards used in telecommunications worldwide.
The "worldwide" part already made this a huge task, taking into account all the different situations in the different corners of the globe.
Then the "international" part made it more difficult even, as we tend to use more than one language on this planet.
But the factor that probably slowed down the standardization processes the most might have been the bureaucratic nature of the CCITT/ITU members.
Government agencies are not usually known to be fast-moving innovators in the technology sector...

The tedious, slow and frustrating processes in the ITU led to a counterproposal on the method for defining standards.
A couple of people (two, three, a handful) would write down a proposed standard and simply publish it as a Request For Comments.
And these became the de facto standards that people used.

And everybody lived happily ever after, as the evil bureaucracy had been vanquished.

Well, actually the RFC process is indeed much faster.
But it also has provided us with pearls like SMTP (read "spam"), SNMP (a fault-monitoring protocol prone to losing warning messages due to faults) or FTP (a protocol requiring a form of deep packet inspection to NAT correctly).
In general, many standards were written in the knowledge of "we can always improve and replace it later".
But that is actually hard or next to impossible.
Once something is "out there" on the Internet, it is basically forever.
Not just content, protocols, bad design decisions, quick hacks: it all stays there (nearly) forever.
Case in point: JavaScript (a proof-of-concept hacked together in 10 days by a single person) not only still exists, it is still actively being used.

Your author started his "career" in the telecommunications industry when the one customer you could have was Deutsche Bundespost, the government-run telecommunications monopoly in Germany.
And I hated the bureaucratic language, or found it amusing, and in any case unnecessarily complicated.

But the bureaucratic language has one benefit: all terms are well-defined, making communication very clear, precise and unambiguous - at least for the initiated.
And as seen above: the X.400 and X.500 series of standards stem from this root.
This gives us the benefit of fairly precise terminology - at the cost of the language used being a bit convoluted and unfamiliar.

some recap

We already came across the two types of Entities in the system: the End Entity and the Certificate Authority, which is an Entity not at the end of a certificate chain.

We had a short glance at the Public Key Cryptography involved, which uses key pairs in all transactions (one key encrypts, the other one decrypts).
And because we are using cryptography, we are importing quite a bit of terminology from this part of math.
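A toy example with textbook-sized numbers makes "one key encrypts, the other one decrypts" concrete. Real keys are thousands of bits long; the values below are the usual classroom RSA example and purely illustrative:

```python
# Toy RSA key pair - tiny, insecure numbers, purely for illustration.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi)

m = 65                     # a "message", must be smaller than n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)

# For signatures the roles swap: "sign" with d, verify with e.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

The two assertions are the whole point: whichever key of the pair is applied first, only the other one undoes it.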

We already met the certificate, albeit in a brief, fleeting encounter, and we will need to dig deeper into that territory.
- Certificates bind (public) keys to names - therefore anything that can have a name can be the subject of a certificate.
- A certificate contains (exactly) one public key.
- A certificate contains (at least) one name.
- A certificate is signed data.

This last point is a little different from signed documents (like the IOU from the Intro):
- when dealing with signed documents, we treat the signature and the document as kinda separate chunks of data
- one document may have multiple signatures (e.g. a contract signed by two or more parties)

Certificates have exactly one signature (that of the CA issuing the certificate) and the signed data plus the signature together are the certificate.
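To make that distinction concrete, here is a minimal sketch. It models only the idea "certificate = to-be-signed data plus exactly one embedded signature"; HMAC-SHA256 stands in for the CA's real asymmetric signature, and all field names are illustrative, not the real ASN.1/DER structure of X.509:

```python
import hashlib
import hmac
import json
from dataclasses import dataclass

CA_KEY = b"toy-ca-key"  # stand-in for the issuing CA's private key

@dataclass(frozen=True)
class Certificate:
    subject: str        # at least one name
    public_key: str     # exactly one public key (a hex string in this toy)
    signature: bytes    # exactly one signature, made by the issuing CA

def tbs_bytes(subject, public_key):
    """The 'to-be-signed' part, serialized deterministically."""
    return json.dumps({"subject": subject, "public_key": public_key},
                      sort_keys=True).encode()

def issue(subject, public_key):
    """The CA signs the to-be-signed data; data + signature = certificate."""
    sig = hmac.new(CA_KEY, tbs_bytes(subject, public_key), hashlib.sha256).digest()
    return Certificate(subject, public_key, sig)

def verify(cert):
    expected = hmac.new(CA_KEY, tbs_bytes(cert.subject, cert.public_key),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, cert.signature)

cert = issue("CN=example.test", "a1b2c3")
assert verify(cert)
# Changing the signed data invalidates the single embedded signature:
assert not verify(Certificate("CN=evil.test", cert.public_key, cert.signature))
```

Note that the signature lives *inside* the `Certificate` object: there is no way to hand someone the signed data without the signature, or to attach a second one.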

and some new words

Or rather: I want to define a term here that I already used.
A Public Key Infrastructure (PKI) is the sum of
- technology
- policies
- people
operating together to issue certificates.

This could be just you, your laptop and the policy "let's just sign everything presented".
That would be a PKI - just not one whose certificates anybody would trust a lot? Well, I am guessing here, I do not even know you.

The X.509 standard, like all of the X.500 series and the X.400 series they heavily lean on, was never really finished.
Unlike older ITU standards that meticulously thought through every eventuality (also a reason they are kinda hard to read, apart from the convoluted language), these were left in a somewhat unfinished state.
And they leave the question "who should run the PKI" somewhat open.

It is plausible to assume that some authors of the standards envisioned a government agency for every nation state.
Others will have envisioned private enterprises.
But the general structure is tree-shaped: a hierarchy stemming out of one Root.

This hierarchical structure has served the Domain Name System (a system standardized on the other side of the divide, in RFC-land) very well.
There is one global Root zone containing (originally a few, later more, then a lot of) domains, which are at the second level of this hierarchy and are the first level of zones that have names.
Because they are the first level of named zones ("domain" is just a named zone in the DNS; only later did it become a synonym for "website delivering a JavaScript" and other aberrations), and because they are located directly in the topmost level of the hierarchy, they are called top-level domains.
Ok, basically nobody ever used the hierarchical nature of DNS, everybody got stuck at the second level - but this paragraph is a tangent anyway, so let's leave it here.
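Tangent or not, the hierarchy is easy to show in code: a DNS name reads right to left, each label one level deeper below the (nameless) Root zone:

```python
# Split a DNS name into the chain of zones below the root.
name = "www.example.com"
labels = name.split(".")

# Zones from just below the root downward:
zones = [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]
assert zones == ["com", "example.com", "www.example.com"]
# "com" sits directly below the root zone - a top-level domain.
```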

It is easy and plausible to envision something similar for X.509: one RootCA signing certificates for second-level Certificate Authorities which in turn would sign the certificates for third-level CAs etc.
And at some level these SubCAs would no longer issue CA certificates but only certificates for End-Entities.
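A sketch of how validation would walk such a hierarchy: start at an End-Entity certificate and follow issuer names upward until a trusted, self-signed Root is reached. The field names and the lookup-by-subject store are simplifications for illustration, not real X.509 path validation (no signature checks, expiry, or revocation here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str   # subject name of the CA that signed this certificate
    is_ca: bool   # only CA certificates may issue further certificates

def chain_to_root(cert, store, roots, max_depth=8):
    """Return the chain end-entity -> ... -> root, or raise ValueError."""
    chain = [cert]
    for _ in range(max_depth):
        top = chain[-1]
        if top.subject in roots and top.subject == top.issuer:
            return chain  # reached a trusted, self-signed root
        issuer = store.get(top.issuer)
        if issuer is None or not issuer.is_ca:
            raise ValueError(f"no trusted issuer for {top.subject}")
        chain.append(issuer)
    raise ValueError("chain too long")

root = Cert("Root CA", "Root CA", True)           # self-signed Root
sub  = Cert("Sub CA", "Root CA", True)            # second-level SubCA
ee   = Cert("www.example.test", "Sub CA", False)  # End-Entity

store = {c.subject: c for c in (root, sub)}
chain = chain_to_root(ee, store, roots={"Root CA"})
assert [c.subject for c in chain] == ["www.example.test", "Sub CA", "Root CA"]
```

The `is_ca` check mirrors the point above: an End-Entity certificate must never appear in the middle of a chain.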

But that is not the structure we see today.
Instead, the need for keeping some data (say - passwords) confidential on the Internet created a need for cryptography.
And the newfangled Public Key Crypto needed certificates.
So people need certificates? Sounds like a brilliant business opportunity, right?

Indeed, it would be, if the CAs were run on principles of diligence and striving for trustworthiness.
But as always in tech: when people do not understand it, they choose the cheap option.
(Case in point: VHS won over Betamax and Video2000, and every search on any Internet shopping site nowadays is flooded with cheap crap masquerading as the real thing.)

The structure we see on the public Internet in late 2023 is a truckload of commercial CAs run for profit and struggling in a market that Let's Encrypt has thoroughly demonetized.
These are public PKIs, issuing certificates for third parties (like you or me or businesses with their company websites). Plus there is a trainload of private PKIs that businesses operate for their internal purposes, i.e. only for End-Entities that are employed by (people) or controlled by (machines) the same organization that also runs the PKI.

What we will see on the public Internet will be a mix of these two.
Currently many businesses open/publish their internal PKIs to the public, not so they can issue certificates for third parties, but so that the certificates they issue can be relied upon by third parties.
In the Business-to-Business markets this is a requirement stemming from the Cyber Resilience Act of the EU: manufacturers of equipment to be used in critical infrastructure must sign their software/firmware, and there is also a need for the Initial Device Identifier (IDevID) certificates issued by the device's manufacturer to be verifiable by the buyer.

In the consumer market we will encounter factory-issued IDevID certificates in all home-automation devices following the Matter standard, as it mandates them.

Some of these will be run by professionals with almost superhuman diligence.
I am so certain because I know some of these professionals from the PKI trainings I gave them.

Others will be run by underpaid admins on the lookout for another job, because their company and their bosses are just unbearable, have no clue what this certificate nonsense is, and would not care anyway.
In other words: business as usual.

That is my other motivation for writing down some of this stuff this holiday season: the more people understand what an X.509 certificate really is
(hint: a signed data structure binding a key pair to a name)
and how simple in structure the whole thing really is, the more likely misunderstandings will not come to hurt or haunt us.