These are only notes from Nov 2024 and may be extended later.
Exclamation marks are used to express cynicism.
Most people underestimate the complexity of security. Most website authors
simplify the topic so drastically that the outcomes are wrong and
make your security weaker. I will show you why I see it differently.
I concentrate on RSA-like asymmetric algorithms,
because that is what you need for safe worldwide communication.
Today's (2024) security is based on asymmetric encryption (including electronic signatures). The encryption algorithms are not proven safe, but they are believed to be hard to break. But there are some "small" assumptions which must be fulfilled to reach that hard-to-break state.
This has some "minor" consequences ...
Your personal computer (PC) is a bad device for encryption. If you do not
belong to the privileged group of people who spend half of their lifetime
exploring the secrets of computing, you depend completely on your
providers and their quality.
With SSH (secure shell), on the first connection the software presents you
the fingerprint of the server's public key and requires you to check its
correctness. I do not know anyone who always does this, and often neither
is the fingerprint publicised nor is the server's admin reachable to ask
directly. This method follows the Trust On First Use (TOFU) principle.
But after that, the SSH public key is saved locally and you are safe
until the algorithm is known to be broken for that key length someday in
the future, or until the server updates its OS (Operating System) without
backing up and restoring its old key pair.
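As a hedged sketch (assuming the usual OpenSSH known_hosts line format), this is roughly how that fingerprint is computed; it is the same SHA256 over the raw public key that ssh-keygen -lf prints and that you are supposed to compare against a published value:

import base64, hashlib, sys

# Pass one known_hosts line on stdin, e.g.:
#   grep example.org ~/.ssh/known_hosts | python3 fingerprint_sketch.py
host, keytype, b64key = sys.stdin.readline().split()[:3]
blob = base64.b64decode(b64key)            # the raw public key blob
digest = hashlib.sha256(blob).digest()
print(keytype, "SHA256:" + base64.b64encode(digest).decode().rstrip("="))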
For TLS (Transport Layer Security), used in the world wide web, another way
was chosen. Here we have electronic certificates which guarantee
correct public keys. The certificate is created by a verifier and is finally
an electronic signature which must be checked too. This can be done by a
higher trust instance, so you have chains of trust. Finally there is still
the problem that you have to check one of these chain members' signatures
and additionally have to trust the lower chain members to verify correctly.
Very likely you will delegate this task to your software vendor.
The problem is that the more institutions and people are involved in that
trust chain, the more errors or bad manipulations can occur.
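A minimal sketch of that delegation (assuming Python's standard ssl module and a reachable example.org as placeholder): a single call loads the vendor-shipped root certificates, and the whole chain is accepted without the user ever being asked anything:

import socket, ssl

ctx = ssl.create_default_context()      # loads the vendor's root CA bundle
with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        # Reaching this point means some chain up to one of the bundled
        # roots verified; which root that was is never shown to the user.
        print(tls.version(), tls.getpeercert()["issuer"])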
The first way to fix this problem was to add a way to revoke a signature,
but that involved a complex way to distribute revocations, with a lot of bad
side effects (later OCSP etc.; see EFAIL 2018, certs' CRL as a malicious back-channel).
Lately the browser industry tries to fix it with shorter
certificate lifetimes, which also has its problems.
Today the short lifetime mostly breaks the original way of saving the public
key of the server and comparing against it. This is because your browser
saves the certificate, which is the public key plus some attributes.
One of the attributes is the lifetime. Automatically renewed certificates have
a lifetime of only 3 months.
It tells you or your software to throw the certificate away and expect a new one.
The new one could have the
same or another key pair, and it will be accepted by your browser even before
the old certificate expires. The browser again fully trusts the certificate chain
and never presents you a warning if the public key (the inner cryptographic
core) has changed.
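You can still pin the key yourself. A hedged sketch (assuming the third-party cryptography package; the stored fingerprint is a hypothetical placeholder): hash the certificate's SubjectPublicKeyInfo instead of the whole certificate, so a renewed certificate with the same key stays quiet, while a changed key becomes visible:

import socket, ssl, hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

ctx = ssl.create_default_context()
with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        der = tls.getpeercert(binary_form=True)     # leaf certificate, DER

cert = x509.load_der_x509_certificate(der)
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo)
fingerprint = hashlib.sha256(spki).hexdigest()

pinned = "replace-with-the-fingerprint-you-saved-earlier"   # hypothetical
print("certificate valid until:", cert.not_valid_after)
print("key changed!" if fingerprint != pinned else "same key as before",
      fingerprint)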
Today (2024) the distributors ship a couple of (long-lived) root or
authority certificates together with their software.
I count 353 default authorities within the Firefox Certificate Manager, from
countries and companies all over the world. The security is as weak as the
weakest authority in that list. A weak authority is quite likely,
since such an authority may use systems managed by other vendors which
purchase services from further vendors.
One of these vendors may depend on cloud providers, and then there is another
inextricable worldwide tree you depend on.
If one of these authority companies fails, by being hacked for example,
it does not mean that it will be deleted from that list.
Too-big-to-fail companies are left in that list to minimize the worldwide
trouble such an incident would cause. So there are political decisions which
override security rules. This is why, in my opinion, TLS has proven to be
broken by design.
So it is a security vendor tree where you depend on the weakest point.
Also, your OS or software may update at any
time you are online with automatic updates enabled, and may make
one of those authorities vanish seconds after it was used in a bad way.
The TLS system is so broken that certification authorities must notify the big browser companies of every new certificate, to give them a chance for extra checks. This is mostly because their own web certs could be taken over, which has already happened. If you wonder why your newly installed server is scanned immediately after it gets a web certificate, maybe it is because of that report chain.
The jabber.ru hack of 2023, a man-in-the-middle attack with regular Let's Encrypt certs, also shows that TLS and its certificate trust world is broken.
I suggest changed browser software: throw the default root authority certificates away and let the user verify the fingerprint of the public key on first visit. Websites must publicise their fingerprints, keep their public keys as long as possible, and sign new keys with the old one. I think there is no good way to revoke leaked keys. A new website should do it like PGP, with a revocation certificate generated in advance which is sent directly to the user's browser, which stores it together with the stored public key. This is simpler than the current cert chain world and has fewer dependencies.
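A hedged sketch of the "sign the new key with the old one" part (assuming Ed25519 keys and the third-party cryptography package; the rotation message is my own illustration, not an existing protocol):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Server side (illustration): the old private key vouches for the new public key.
old_priv = ed25519.Ed25519PrivateKey.generate()
new_priv = ed25519.Ed25519PrivateKey.generate()
new_pub = new_priv.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
rotation_sig = old_priv.sign(new_pub)

# Browser side (illustration): it only pinned the old public key on first visit.
pinned_old_pub = old_priv.public_key()
try:
    pinned_old_pub.verify(rotation_sig, new_pub)
    print("new key accepted, continuity of trust is kept")
except InvalidSignature:
    print("rotation signature invalid, warn the user")

No certificate authority is involved; the only thing the browser ever has to store is the first pinned public key and the revocation certificate mentioned above.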
Firefox: ctrl-mouse-right + Q, network, reload, domain, lock symbol: fingerprints, but no public key.
When you browse a website you get a lock symbol in the address bar for that website. It is colored and you may click on it to get more information about the connection's security. But what about the inlined code from other websites (use NoScript to be informed about the fact that there are such sources)? I see no easy way. So simply distrust websites which include sources from other sites. I know the web is full of such sh*t.
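A small sketch of how little the lock symbol covers (standard library only; example.org is a placeholder): list the script sources a page pulls in from other hosts, each of which comes over its own, separately trusted connection:

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

page = "https://example.org/"           # placeholder URL

class ScriptSrcParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(urljoin(page, value))

parser = ScriptSrcParser()
parser.feed(urlopen(page).read().decode("utf-8", "replace"))
site = urlparse(page).hostname
for src in parser.sources:
    host = urlparse(src).hostname
    if host and host != site:
        print("third-party code from:", src)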
An update is a chance for volatile in-memory malware to silently become permanent!
You may know the "rule": update often, update immediately, no excuse.
But this is only a good rule within an idealized trust-your-vendor world.
Your vendor may be interested first in making money or saving money;
security interests come later or far later, or are skipped for
cost-efficiency reasons, even at companies naming their products
security products.
Security products are mostly snake oil, sometimes creating security
holes wider and more catastrophic than the original system had.
Today we have vendor attack chains and some "minor" design decisions
against security.
There is another rule: software always has (security) bugs. This is not
exactly true, but the bigger the software, the more likely bugs are in there.
Simple software can be made safe. Software which has security in mind is
therefore made after the KISS principle. That means "Keep it simple,
stupid!" and other similar phrases pointing to simplicity as the design
principle for safety in all its forms
(for aircraft, spacecraft ... and ... software).
If you want security you need KISS systems. If you do not have them, you
have to do updates as a way to fix the security holes inside your system ... and
create new holes!
Even the update itself can be the hole. Today's updates are mostly
cryptographically secured with an attached signed hash sum. The updating
software has a public key to verify the signature. So far so good. But if
you need the update for security reasons, there is a hole which was possibly
already used, and then you cannot trust the update process anymore.
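For illustration, a hedged sketch of that mechanism (assuming Ed25519 and the third-party cryptography package; the key pair and update payload are generated stand-ins, since the real vendor key would be baked into the updater): the updater hashes the downloaded update and checks the attached signature over that hash against the one key it trusts:

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-ins so the sketch is self-contained; in reality only the public
# key ships with the updater and the vendor signs on its own machines.
vendor_priv = ed25519.Ed25519PrivateKey.generate()
VENDOR_PUBKEY = vendor_priv.public_key()

update = b"...update payload..."             # stand-in for the downloaded file
digest = hashlib.sha256(update).digest()
signature = vendor_priv.sign(digest)         # what the vendor ships alongside

try:
    VENDOR_PUBKEY.verify(signature, digest)  # the updater's only trust anchor
    print("update accepted: signed hash sum matches the baked-in key")
except InvalidSignature:
    print("refuse the update")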
Do not underestimate this. The internet is faster than ever, and your system
is surely put into one of the databases built for marketing reasons, where all
accessible data is stored too, at minimum your browser and OS version, and
sold on to other money makers down to the dark side of the net.
If a security breach method becomes known to the web, an army of interested
people sends code to the bots and hunts for every accessible system using
all available databases. Everything may happen within seconds.
If you have a read-only system medium (KISS) or the trusted platform
functionality (TPM, the complex way, with its own security holes), you must reboot
before doing the update! Your system could be infected in memory, and you
have to reboot into a clean state before the update. Booting after the update is
useless from a security point of view. After that you need a safe (bug-free)
update process where you must set your system medium temporarily to
read-write mode (KISS), or trust the software to do the complex re-signing of the
boot system if that is included.
If you do not have that read-only functionality, you must reinstall
your system to a fresh state, or to the last backup taken after the last update,
before starting the update process.
You do not go this way? Congratulations! Your system may still be infected
and is not safe.
Your system is always online and attackable? Congratulations again!
You need to be offline during the reboot or reinstallation and during the update
process.
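As an illustration of these two preconditions, a hedged sketch (Linux-specific; the read-only check and the "no default route means offline" heuristic are my own simplifications):

# Sketch: refuse to start an update unless the system medium is still
# mounted read-only and the machine looks offline.
def root_is_readonly():
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options = line.split()[:4]
            if mountpoint == "/":
                return "ro" in options.split(",")
    return False

def has_default_route():
    with open("/proc/net/route") as routes:
        next(routes)                                # skip the header line
        return any(line.split()[1] == "00000000" for line in routes)

if root_is_readonly() and not has_default_route():
    print("clean read-only system and offline: remount rw and update now")
else:
    print("stop: reboot from the read-only medium and disconnect first")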
Additionally, some vendors deliver broken updates which leave your
system in an unusable state. Welcome to real life. I guess these vendors
do not follow the KISS way. Unfortunately even the big Linux distros
left the KISS way and replaced initV (KISS) with systemd (not KISS).
So if systemd updates
itself (to fix its bugs) and you do not have enough memory or disk,
you are left with an unbootable system. Well done!
If you need help, search for people who use KISS software (e.g. Linux without
systemd). You may recognize them if you get HTML-free emails (KISS)
with no attachments from them.
If you do not know which system is more KISS than another,
look at the overall system size. Smaller is more likely simpler.
If you want to build a secure device, take an old computer, add a systemd-free
Linux distribution like Devuan from www.devuan.org,
install gnupg and openssl, and take it offline,
or use a bootable memory stick with a similar system. Transfer simple
formats (ASCII text is the best) to that offline system for
encryption/decryption via a writable transfer stick.
Do not use complex file formats (like PDF) for signing; make detached
signatures (a detached signature is a signed file hash only).
You must be able to verify the text or file, or its hash, which you want to sign or
encrypt. This is against unwanted changes.
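A tiny sketch of that check (standard library only; the file name is a placeholder): compute the hash of the transferred ASCII file on both machines and compare it by eye before you sign:

import hashlib, sys

path = sys.argv[1] if len(sys.argv) > 1 else "message.txt"   # placeholder
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
# Run this on the online machine and again on the offline device; the two
# values must match before a detached signature over this file is made.
print(path, digest)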
Therefore encryption devices need to have a
human-readable display and at minimum a yes/no keyboard.
Such an extra encryption device applies the separation principle to security.
The extra device is the secured zone which does all encryption and signing.
If you do this, you go from 1% safety to 99% safety.
The often proposed two-factor authentication
is more like going from 1% safety to 2% safety. It only covers
web login safety, a small part of the overall safety.
For web traffic I do not know a practical solution. Install the NoScript plugin
and disable all default trusted website entries as a minimum. Never
execute code (JavaScript et al.) from the web.
Also disable
all unneeded web browser functionality which causes additional traffic for marketing
and pseudo-safety, which is difficult. Most web browsers are the opposite of
KISS, and websites do their own thing to require the bad functionality.
If you need those bad websites, access them from extra machines made for the
dark world (booted from read-only media).