Already as a schoolchild, I was very interested in **secret scripts**, secret **codes**, and the like, and eagerly read several books about this stuff. At the age of 12, I read one that, amongst many other things, explained that **Claude Elwood Shannon**, the father of **cryptology** (the science of encryption and decryption), **cryptography** (the practical application of cryptology to transfer messages secretly or store data securely), and **cryptanalysis** (the malicious application of cryptology to break secret messages or steal data), had postulated in a **set of theorems** that a **perfect encryption** was **impossible**: for mathematical reasons, it would require the use of a secret key that is not only totally **random**, but as **long** as any message (or as big as any data file) one wants to encrypt.
But even that young, **aged 12**, I **questioned** Shannon's set of theorems, and thought: **Yes**, he is basically right ... **but** ... **what if** one could come up with an **algorithm** that **makes any key/password** automatically **infinitely long and random**?
This idea never really left me, and some years later, I was finally able to **develop** and thoroughly **test** my algorithm that should challenge Shannon's set of theorems. I tested it not only with the regular **statistical** methods, I even **listened** to it — a perfect algorithm would sound like perfect **white noise** all across any file one has encrypted with it. But amongst the white noise I heard occasional **glitches**, **tinkling** noises, or **humming**, and thus knew my algorithm was not yet perfect.
A few years and a lot of **research and development** work later, in **1990** I finally had developed **the world's first perfect encryption algorithm**, and I have never seen any approach that is similar. As I would witness so often later in life, **all others** would use **overcomplicated**, **bloated** approaches that however produce far **inferior** results.
My **INFINOIR** (**"infinite noise roll"**) algorithm generates **perfect** white noise, perfect statistical randomness across all frequency bands, and thus with near-certainty **cannot be cracked** by cryptanalysis, **only by brute-force** trying all possible keys/passwords, which can never be avoided (which is why keeping your passwords secret is always absolutely essential for your data security). — And my algorithm is at the same time also **by far** one of the **fastest** and **most efficient**!
While the algorithm is usually implemented as a **computer** algorithm that works with **bytes** (units of data that can be represented by values in the range from 0 to 255), I'll illustrate how INFINOIR works with a **pen-and-paper example** here.
Let's say a dying Caribbean **pirate** wants to make sure his **treasure** is only dug up by his dear sons once they have grown up.
First he describes the position of the treasure:
*"The gold is in the cave under Skull Rock on Parrot Island."*
This is the **original message**.
He then has to **encrypt** it via a **secret key** (aka password, passphrase, or mantra) applied to an **encryption algorithm** to create a **secret message** that can only be turned back into the original message by using the secret key with the **inverse** of the algorithm, the **decryption algorithm**.
And then he has to **send** two trusted friends strictly **independently** from each other to his dear wife at home, one with the **secret message**, and one with the **secret key** (and the decryption algorithm if his wife doesn't know it yet), ideally so that neither of the two knows of the mission of the other. (Or, even more ideally, so that both know nothing about their own actual missions and think they are delivering just an **ordinary letter** home, which is **advanced security** and not the topic here. Just briefly: he might have established a **steganographic** protocol with his wife many years ago to use for such eventualities.)
To use a code/algorithm, he has to **assign numbers** to all **characters** that he might use in his messages. Let's say he works with the following **inventory** of 40 characters, using only letters, digits, and a few special characters, replacing all spaces with underscores:
# Inventory #
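For concreteness, here is a hypothetical Python reconstruction of such a 40-character inventory. The article only fixes "A" as the first character (code 0) and "-" as the last (code 39), so the exact choice and order of the special characters here is an assumption:

```python
# Hypothetical 40-character inventory: A-Z, 0-9, and four specials.
# Only "A" = 0 and "-" = 39 are fixed by the text; the rest is assumed.
INVENTORY = [chr(c) for c in range(ord("A"), ord("Z") + 1)] \
          + [str(d) for d in range(10)] \
          + ["_", ".", ",", "-"]

CODE = {ch: i for i, ch in enumerate(INVENTORY)}  # character -> 0..39

assert len(INVENTORY) == 40
assert CODE["A"] == 0 and CODE["-"] == 39
```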
His **message** to his wife would then be assigned the following **values**:
# CleanText #
To encrypt this, he needs a **secret key**, and chooses (for the sake of our example, while you should normally use a much longer key):
# CleanKey #
Now one **simple** form of **encrypting** would be to **add** the key to the message, character by character. But what to do if the sum of two character codes is greater than the maximum of 39? Then we just subtract 40, and the "next" character after the last (the "-") then is again the first (the "A"). This is called a **ring vector**. We just repeat the key over and over and add its character codes to the character codes of the message, and thus get the encrypted message:
# JustAdd #
To **decrypt** the secret message, one would simply **subtract** the key, adding 40 to any negative results before finding the corresponding character. As simple as that.
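The ring-vector scheme can be sketched in a few lines of Python. This uses the assumed 40-character inventory from above and a made-up key ("BLACKBEARD"), not the key from the article:

```python
# Assumed inventory: A-Z, 0-9, four specials; "A" = 0, "-" = 39.
INVENTORY = [chr(c) for c in range(65, 91)] + [str(d) for d in range(10)] \
          + ["_", ".", ",", "-"]
CODE = {ch: i for i, ch in enumerate(INVENTORY)}

def ring_encrypt(message: str, key: str) -> str:
    # Add the repeated key to the message, character by character, mod 40.
    return "".join(INVENTORY[(CODE[m] + CODE[key[i % len(key)]]) % 40]
                   for i, m in enumerate(message))

def ring_decrypt(secret: str, key: str) -> str:
    # Subtract the key; Python's % keeps results in 0..39 automatically.
    return "".join(INVENTORY[(CODE[s] - CODE[key[i % len(key)]]) % 40]
                   for i, s in enumerate(secret))
```

Note how the modulo operation implements the ring vector: "-" (39) plus "B" (1) wraps around to "A" (0).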
For our pirate story, this would likely suffice, considering how short his message is. But how strong is this encryption really? Let's look at the **statistical distribution** graph of the encrypted message, showing the **frequency** of all possible characters, **sorted** from the most frequent to the least frequent (or not occurring at all):
# JustAddStats #
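Such a sorted-frequency distribution is easy to compute yourself; here is a small Python sketch (not the tooling used for the article's graphs):

```python
from collections import Counter

def sorted_frequencies(text: str, alphabet_size: int = 40) -> list:
    # Frequency of every possible character, most frequent first;
    # characters that never occur contribute a zero entry at the end.
    freqs = sorted(Counter(text).values(), reverse=True)
    return freqs + [0] * (alphabet_size - len(freqs))
```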
Statistics work better with greater sets of data, so let's instead encrypt the same **message repeated 100 times**, and then look at the stats of that output:
# JustAdd100Stats #
Both these distributions are still miles away from the **ideal**, a perfectly **flat distribution**:
# FlatStats #
And they are just a little bit better than the **original message**, by the way:
# CleanStats #
Here are **visual renderings** of the **original message** repeated over and over (left image), the **secret key** repeated over and over (middle image), and the result of **just ring-adding** them (right image):
To get closer to Shannon's "impossible" perfect encryption, we need to do something with the key. So instead of adding the same key to the original message over and over, let's **add the message to the key**, thereby **overwriting** the key over and over, and using this as our output secret message.
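Sketched in Python, working with bytes (values 0–255) as a computer implementation would, this key-overwriting step could look like:

```python
# Sketch of the key-overwriting scheme: each output byte overwrites the
# key position it came from, so the key mutates instead of repeating.
def encrypt_overwrite(message: bytes, key: bytes) -> bytes:
    k = bytearray(key)
    out = bytearray()
    for i, m in enumerate(message):
        c = (k[i % len(k)] + m) % 256  # ring-add key byte to message byte
        k[i % len(k)] = c              # ciphertext byte overwrites the key
        out.append(c)
    return bytes(out)

def decrypt_overwrite(secret: bytes, key: bytes) -> bytes:
    k = bytearray(key)
    out = bytearray()
    for i, c in enumerate(secret):
        out.append((c - k[i % len(k)]) % 256)  # ring-subtract to recover
        k[i % len(k)] = c                      # same key mutation as above
    return bytes(out)
```

Both directions mutate the key with the same ciphertext bytes, which is why decryption stays in sync with encryption.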
Here it is in **Eas source code** (see E →Molaskes.info/Eas), working with **bytes** this time:

```
Encrypt message key: l key? @@ message o (key[@@\l]+@)\256 key[@@\l] o output;o / << output
```

And here is what that results in, which also shows how the **secret key** gets heavily **mutated** as we encrypt our message:
# Level1 #
Here is the **distribution** graph for this output:
# Level1Stats #
But for such short messages, a flat distribution is rather unlikely. So let's look at the distribution for this algorithm when we encrypt the same message **repeated 100 times**:
# Level1Stats100 #
Now *that* is already pretty close to the **ideal distribution**:
# FlatStats #
Compare the **visual renderings** of the prior step (**just ring-adding**, left image, still having much more **un-random patterns**, structures, artefacts) to the result of our new algorithm **with endless key-overwriting** (right image, looking **very random** indeed):
Here is the **decryption** algorithm for this stage:

```
Decrypt message key: l key? @@ message o (@-key[@@\l]+256)\256 key[@@\l] @ output;o / << output
```

A more thorough statistical analysis would however reveal that this algorithm still occasionally creates **artefacts**, which are **patterns** that an attacker could use for **cracking** the secret message without having the secret key.
What we will do next is **"fold" the key** over itself every time we start at its beginning. This means that its last character is added to its first, then this new first is added to its second, this new second is added to its third, and so on. Each **addition**, because we use a **ring vector**, adds one **binary bit** of **uncertainty**, which makes the code **harder and harder to crack**, **exponentially increasing** the necessary guesswork with every encrypted character.
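A Python sketch of this key folding, combined with the key-overwriting step from the previous stage (an illustration under the same byte-based assumptions as before, not the reference implementation):

```python
def _fold(key: bytearray) -> None:
    # Add the last byte into the first, then each new byte into its
    # successor, all modulo 256 (the ring vector).
    l = len(key)
    key[0] = (key[0] + key[l - 1]) % 256
    for i in range(1, l):
        key[i] = (key[i] + key[i - 1]) % 256

def encrypt_folding(message: bytes, key: bytes) -> bytes:
    k, l, out = bytearray(key), len(key), bytearray()
    for i, m in enumerate(message):
        if i % l == 0:            # fold whenever we wrap to the key's start
            _fold(k)
        c = (k[i % l] + m) % 256  # ring-add, then overwrite the key byte
        k[i % l] = c
        out.append(c)
    return bytes(out)

def decrypt_folding(secret: bytes, key: bytes) -> bytes:
    k, l, out = bytearray(key), len(key), bytearray()
    for i, c in enumerate(secret):
        if i % l == 0:
            _fold(k)
        out.append((c - k[i % l]) % 256)
        k[i % l] = c              # the ciphertext byte overwrites the key
    return bytes(out)
```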
The **new encryption algorithm** then looks as follows (and for the decryption, the same "? @@\l=0" block is inserted in the same position):

```
Encrypt message key: l key? @@ message ? @@\l=0 key[0] (key[0]+key[l-1])\256 @ 1..l-1 key[@] (key[@]+key[@-1])\256 / / o (key[@@\l]+@)\256 key[@@\l] o output;o / << output
```

With this new algorithm, the message looks like this:

# Level2 #

And its statistics look like this:

# Level2Stats #

When encrypting the message 100 times repeated, this changes to:

# Level2Stats100 #

Here is a comparison of the **visual renderings** of the **message encrypted** by the prior algorithm **without key folding** (left image, where you may still notice some **diagonal streaks or bands**) versus the new algorithm **with key folding** (right image, looking even **more random**):
This folding is particularly important when **encrypting files** that may contain **regular patterns** or even **constant lines**, up to the worst case of **flat zero lines**. Let's look at that now, encrypting our message repeated 100 times, but with every character replaced by "A" (code=0).
**Without key folding**, the statistics return this utter nightmare:
# ZeroStats1 #
But **with key folding**, the result is much more acceptable:
# ZeroStats2 #
In the **visual renderings** you can clearly see how the **key folding** turns the highly repetitive pattern of the key (left image, encrypted zeros without key folding) into an astonishing **near-random** output (middle image; the diagonal streaks/bands quickly vanish when the input message or file offers something else than just rows of zeros); and the same done with the key from the German version of this article (right image, looking already quite random):
This can be **greatly improved**, though, by simply **folding** over a **longer key**, for which we simply start with the key **repeated** a few times. Here is how the statistics improve for key folding using this little intervention:
In the **top row**, you see the original result of **single-key folding zeros** using the English key (top left image, with the key being **9 characters** long), the same with the longer German key (top middle image, with the key being **12 characters** long), and finally the **message encrypted with single-key-folding** the English key (top right image).
In the **bottom row** you see the **respective results** when starting with the secret **key multiplied** by repeating it a few times in a row (6 times the English one, resulting in a new length of 54 characters; 5 times the German one, resulting in a new length of 60 characters). Notice the further reduction of artefacts and patterns, such as diagonal streaks and bands:
As a **final step**, we will **"salt"** the message, that is, we will prepend **random noise** to it that is at least as long as our secret key. The noise uses all characters but "A" (code=0), and is followed by an "A" to signal the end of the salting and the beginning of the actual message. This is what the salted message looks like; it is what the pirate's wife will actually obtain when successfully decrypting the secret message. When she learns the decryption algorithm, she will also learn to ignore the first characters up to the first "A":
# Salted #
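In a byte-based Python sketch (where code 0 plays the role of "A"), salting and unsalting could look like this:

```python
import secrets

def salt_message(message: bytes, key_len: int) -> bytes:
    # Prepend random non-zero bytes, as many as the key is long,
    # terminated by a single zero byte marking the message start.
    salt = bytes(secrets.choice(range(1, 256)) for _ in range(key_len))
    return salt + b"\x00" + message

def unsalt_message(salted: bytes) -> bytes:
    # Skip everything up to and including the first zero byte.
    return salted[salted.index(0) + 1:]
```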
Now, encrypting this, with key folding, we get:
# SaltedX #
And its statistics look as follows:
# SaltedStats #
Finally, if we repeat our message 100 times, salt it, and encrypt it, the statistics look like this:
# SaltedStats100 #
**The larger** the encrypted file, **the closer** we get to the **ideal flat distribution**; and the **salt** makes **even brute-force attacks**, that try out all possible secret keys, much more **difficult**.
While **in theory**, **Shannon was right** — yes, you could only perfectly encrypt any message, including a file consisting of only zeros, only with a secret key that is perfectly random and is as long as the encrypted message or file — the **but** is that **in practical applications**, the **INFINOIR** algorithm is **perfect enough**, as it turns any secret key into an "infinite noise roll" (hence the name of the algorithm) random enough to generate excellent randomness across all frequency bands, and that while being **extremely efficient**.
I have **implemented** the **INFINOIR** algorithm **since 1990** in many different applications, with some variations depending on the specific use case, for instance for reliably verifying that the file has been decrypted properly by the correct secret key. And in the three decades since then, I've never come across any other published approach that comes even remotely close in elegance.
I designed it over 30 years ago (and actually started to work on it almost 40 years ago) as *the* **perfect encryption algorithm** that **in any realistic use case** (not test patterns such as only zeros or only strongly repetitive patterns) **creates enough randomness** that across all considerable frequency bands you get as close to **perfect white noise** (= utter randomness) as possible, which it has proven in many **statistical analyses** over the years/decades.
It is intended to be used in scenarios where the encrypted messages/files *and* the decryption algorithm may be well known to any attacker. Without the secret key, INFINOIR leaves only the chance of brute-force cracking/guessing the key (which cannot be prevented by any algorithm), and **statistics-based cryptanalytical approaches stand no chance anymore**.
Although thanks to its **ultra-efficiency**, it can also be used in **realtime applications**, for instance **communication devices or protocols**, I have always only implemented INFINOIR for **file and file system encryption** and (social) **website security**.
The **statistics graphs** and **visual rendering** images in this article use parameters that **enormously magnify** the effects they are meant to illustrate: a **very limited character set** of only **40 characters** (values 0–39) instead of bytes (256 characters, values 0–255), rather **short and simple secret keys** (which would be inadvisable in real use cases), and a **very short message** (repeated over and over again for the analyses), all so that you can see anything at all.
The **40 character values**, for **human-eye physiology** reasons, are depicted in the **visual renderings** as **4 levels of Red** (0%, 33%, 67%, 100%), **5 levels of Green** (0%, 25%, 50%, 75%, 100%), and **2 levels of Blue** (0%, 100%). This makes any artefacts **optimally visible**.
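One possible Python mapping of the 40 values to such RGB levels (the level counts are from the article; the ordering of the components within the value is an assumption):

```python
def value_to_rgb(v: int) -> tuple:
    # 40 = 4 (red) x 5 (green) x 2 (blue) levels; the component order
    # chosen here is an assumption, only the level counts are given.
    assert 0 <= v < 40
    r = v % 4          # 4 red levels: 0%, 33%, 67%, 100%
    g = (v // 4) % 5   # 5 green levels: 0%, 25%, 50%, 75%, 100%
    b = v // 20        # 2 blue levels: 0%, 100%
    return (round(r * 255 / 3), round(g * 255 / 4), b * 255)
```

Since 4 × 5 × 2 = 40, every value gets a distinct color.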
When working with actual **bytes**, and using actual **meaningful files** or other signals containing meaningful information, any **artefacts completely vanish**, and the **encrypted files** (when long enough) show a **perfectly random**, **flat statistical distribution**:
# FlatStats #

(published 2024-09-01)