Little Britain (24.04.2024)

As it will still take at least a decade before affordable broadband is sufficiently widespread to replace satellite TV, the latter is quite a good indicator of economic health.

And looking at the continuously diminishing number of active transponders on the Astra 2 satellite group, the economic direction of former Great Britain is clear. Go Brexit, go Little Britain!

No mercy from my side: not an EU country, so no sale. And if there are illusions of favourable economic contracts - well, there's no empire anymore and a single little island can't dictate economic rules.

The only thing I can say is "have fun going down". And if this country wants back in the EU, which actually may happen, it will have to accept all proposed rules and regulations without any discussion. Otherwise this country is allowed to simply go broke. No one will care.

In the end, maybe the EU will have a not-so-far-away place that can produce cheap low-tech stuff, replacing the cheap Chinese low-tech stuff. Or a large nearby Chinese fulfillment center. Mind you, there are no EU human rights rules anymore for the UK, so wages can be continuously reduced as long as there are sleeping bags for bridges and sewers available.

Yes, people do get what they asked for. And if people are stupid enough to believe in outright lies, they will have to pay the price for their stupidity. It's as simple as that.


Friday the 13th... (14.10.2023)

Well, today I set up an MUA on a freshly installed system. Actually, though using KDE, I'm running Evolution, as it is the only MUA that doesn't choke on five-digit mail counts in IMAP folders as Thunderbird does, and that doesn't happily start to delete complete folder trees in the background by itself, as an incarnation of KMail I tried quite a while ago did. Well, no trust in KMail anymore, never again. And as for Thunderbird, word has it that it didn't get better with "larger" amounts of mail.

Well then, Evolution works well in the KDE desktop environment (I'm using Arch). It handles all kinds of IMAP servers without problems - except for Google Account setup.

I'm using a 2FA scheme with Yubikeys for logon. And this leads, when setting up a Gmail account with Evolution, to an endlessly waiting window after having entered username and password. The 'open in external browser' (the '>' symbol) method fails utterly, too, no matter which browser is used.

The only way to work around this is to first temporarily disable Google 2FA authentication, then make sure that 'OAuth2 (Google)' is selected in the 'Receiving Email' pane of the account configuration in Evolution, and then restart the account login. Voilà, access granted, gnome-keyring stores the access information, all is good.

And then, for some nice extra work, one has to re-enable Google 2FA and painfully re-add all security tokens, one by one. After that, delete the myriad of Google security warning mails on the Google and the fallback account.

This is how security is not supposed to work. This is a three-hour horror of trial and error including lots of web research. This is a mess. And it seems that for the majority of problems Google is to blame, as Chromium is not able to handle the transferred URL to create a security token. Doh. Friday the 13th, you can tell...

Additional fun facts: disabling 2FA doesn't work on Android. Google wants you to use Chrome there, too. And if your default browser isn't Chrome, for good reason, you lose. As for security, I had to set up a browser with 2FA and kept the 'remember me' option for good measure while working on the logon stuff. After all logon changes back and forth, trying this browser resulted in: still no passphrase or 2FA required, business as usual. Now, Google, what kind of security are you using? The 'only Google stuff works' version?


The systemd Backdoor Feature (01.10.2023)

So you rely on the fact that only root can start services, and that the service with a security problem, which you stopped and whose autostart you disabled, thus stays down?

Well, what the heck, why should systemd developers implement such a hax0r's hindrance? That's what "ActivationRequest" signals are for. Just start any service with no authority check, isn't that a cute feature?
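
For illustration, a hedged sketch of what such an activation request can look like on the wire - the dbus-send syntax is real, but the unit name is a placeholder, and whether a given systemd version accepts the signal from an arbitrary sender is exactly the point in question:

dbus-send --system --dest=org.freedesktop.systemd1 --type=signal /org/freedesktop/DBus org.freedesktop.systemd1.Activator.ActivationRequest string:foo.service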


Did gnupg developers get notified... (01.10.2023)

... that select, poll, epoll and friends were invented a while ago and that networks tend to fail spuriously from time to time? Probably not, as a short wireless outage while "dirmngr" is running makes that rubbish of a so-called tool start to consume 100% CPU, and it needs to be killed before the laptop fans get their outage, too. And thanks to having to kill this waste of code, this trash happily ruins the keyring it is supposed to be working on while going down.
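
For the record, waiting on a network socket without burning CPU takes a handful of lines of C - a minimal sketch of the principle, not dirmngr's actual code:

#include <poll.h>

/* Wait up to 30s for data instead of spinning in a busy loop. */
static int wait_readable(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int r = poll(&pfd, 1, 30000);
    if (r < 0)
        return -1;                      /* poll failed */
    if (r == 0)
        return 0;                       /* timeout: report it, don't spin */
    if (pfd.revents & (POLLERR | POLLHUP))
        return -1;                      /* network or peer went away */
    return 1;                           /* data ready, go read */
}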

Yes, planning and implementation of modern and functional software that is actually usable tends not to be part of the skills required for gnupg development.


If you want an insecure system... (01.10.2023)

...be sure to run all software promoted by freedesktop.org. You will get:

  • monstrous programs like accounts-daemon that for sure can't be audited but insist on accessing /etc/shadow to offer account services, relying on the broken and virtually unmaintained Polkit via DBUS instead of tried and usually trustworthy services like PAM
     
  • programs like systemd that are designed around the 1970s concept of "root is allowed everything" and that don't even offer any mechanism to restrict access
     
  • again programs like systemd that are soooo flexible that you can't e.g. have your own unit definition and then mask the unit (which may be temporarily required), because your unit definition and the masking symlink require exactly the same pathname - nah, doesn't happen on big $$$ corp. systems, unneeded feature (see the sketch after this list)
     
  • software that is designed to be liked by managers of big $$$ corporations, as the broken design fits the monitoring plans on the sheet of printed paper lying on their desks
     
  • software that is for sure not designed with end user desktop system security in mind
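
To make the masking point concrete, a sketch with a hypothetical foo.service:

cp my-foo.service /etc/systemd/system/foo.service
systemctl mask foo.service

The second command fails, as the mask symlink to /dev/null would need exactly the pathname /etc/systemd/system/foo.service that the unit definition already occupies.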

The maintainers of freedesktop.org should rename their site to megacorpsupport.com as this fits the intention of the software hosted there. I'm just waiting for the moment when "freedesktop.org" starts to promote the "we don't need any programs anymore, just shared libraries loadable by our awesome systemd" slogan. And if they manage, a little later there will probably be the "in the future only libraries signed by our big $$$ friends will be acceptable for loading by systemd" enhancement. Well, yes, like Stephen King's "new and improved" interpretation (you should read "The Tommyknockers").

One should really be aware that freedesktop.org is not a real promoter of the open source idea; instead it promotes "embrace and extend", which is the method of a certain Redmond company.


Polkit - Insecure by Design (01.10.2023)

Polkit is broken. Completely. Not only are there usability bugs that break command line usage pending for years - it relies on an "Authentication Agent". The latter is typically provided by your desktop environment and consists of a part running as the user and a part running as root.

The brokenness is that Polkit doesn't do any actual authentication itself. It relies on the "Authentication Agent" doing "the right thing" and signalling "access granted" or "access denied".

Now desktop environments are not exactly known for providing components that are heavily security audited. And they do things their own way. Thus it is quite usual that an application requests root rights and the "Authentication Agent" of the desktop environment just asks for the user's password to signal "access granted". So just running something like "pkexec bash" and entering your account password provides for a root shell, even if the whole system is configured to require the root password for any actual root shell. Doh.

It could have been so easy if Polkit had been properly designed. Polkit is accessed via DBUS and DBUS allows for file descriptor transfer. Thus the authentication request via Polkit could still make use of an "Agent", with the "Agent" however just providing the authentication information via the file descriptor and Polkit doing the actual authentication using tried and mostly reliable methods, e.g. via PAM. And in this case Polkit could have been configured to e.g. enforce presentation of the root password for root actions by default, except where explicitly defined otherwise via a policy.
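
A sketch of what the Polkit side could then look like, assuming the agent has shipped username and password over the transferred descriptor and assuming a hypothetical PAM service name "polkit-auth" - an illustration of the PAM flow, not actual Polkit code:

#include <security/pam_appl.h>
#include <stdlib.h>
#include <string.h>

/* Conversation callback: hand the password received from the agent
   to whatever module (e.g. pam_unix) asks for it. */
static int conv(int n, const struct pam_message **msg,
                struct pam_response **resp, void *password)
{
    struct pam_response *r = calloc(n, sizeof(*r));
    if (!r)
        return PAM_CONV_ERR;
    for (int i = 0; i < n; i++)
        if (msg[i]->msg_style == PAM_PROMPT_ECHO_OFF)
            r[i].resp = strdup(password);
    *resp = r;
    return PAM_SUCCESS;
}

/* Authenticate 'user' with 'password'; returns 1 on success. Link with -lpam. */
static int authenticate(const char *user, char *password)
{
    struct pam_conv c = { conv, password };
    pam_handle_t *h;
    int ok = 0;

    if (pam_start("polkit-auth", user, &c, &h) != PAM_SUCCESS)
        return 0;
    if (pam_authenticate(h, 0) == PAM_SUCCESS &&
        pam_acct_mgmt(h, 0) == PAM_SUCCESS)
        ok = 1;
    pam_end(h, PAM_SUCCESS);
    return ok;
}

With a design like this, the decision which password to ask for would be made by PAM configuration, not by whatever agent happens to answer on the bus.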

But well, this ship sailed a long time ago, and thus Polkit is another example of short-sighted thinking with regard to security, resulting in bad security.


The sad state of systemd security (01.10.2023)

Systemd developers seem to behave like little children playing with their new toy in front of their home and not being aware that the nearby road is a constant danger to their life.

This is the only reason I can see why systemd allows for service control via DBUS without any means of restricting what can be done. I don't mind systemd having its own private control socket (except for its location, which again prevents security measures) with full service control.

So any malicious application that can pretend to be legitimate via DBUS can wreak havoc on a systemd controlled system. Systemd developers should have been aware that uid 0 is just an insufficient credential, and has been for many years. But well, little children don't know better.

The sad thing is that nobody on the systemd side seems to care about security. No wonder the head systemd developer was welcomed by this certain Redmond company that is well known for losing important private keys. Maybe the remaining developers should head there, too. And maybe then developers with a sense of security in mind can try to salvage the pieces.

At least now I do understand why Ubuntu is patching the kernel so heavily for Unix domain socket and DBUS control. And yes, from a security point of view this stuff would need to be applied to mainline urgently to get at least some control back into the hands of an administrator that really takes care when it comes to system security.

Is it necessary for some zero day exploit to cause worldwide havoc until kernel developers get it that the beloved systemd init system is fatally flawed and needs restrictions that can only be applied via additional kernel security?

Does one really need to develop eBPF code that gets attached to the proper Unix domain sockets and then filters unwanted stuff - given that this is actually possible? Why is there no systemd configuration that can deny DBUS requests as specified by the system administrator? How big must a security hole be to be acknowledged as such?

And yes, this "feature" can threaten lives. Try to make an urgent emergency call when your home PBX has been shut down by a malicious application marauding system services via the oh so cute DBUS interface of systemd. OK, no way to call for help, die a little bit earlier, thanks to the stupidity of systemd developers.


GnuPG - or: how to make easy things very difficult (19.09.2023)

Well, I just wanted to do a fresh install of Tor browser. Ahem, not possible, key expired (!?!). So I looked at the Tor Project's web site for further information. Let's try to fetch the signing keys manually according to that information:

gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org

This gives some more information, which isn't really helpful at all:

gpg: error retrieving 'torbrowser@torproject.org' via WKD: General error
gpg: error reading key: General error 

So back to the drawing board and edit ~/.gnupg/dirmngr.conf to contain:

debug-level guru
log-file /tmp/dirmngr.log

Then kill dirmngr, which otherwise stays around as an unwanted daemon in the background and thus never re-reads its configuration, and retry. As expected the above error persists, so let's look at the debug output (*** means 'obfuscated', and some wrapping has been applied):

dirmngr[813092] listening on socket '/run/user/***/gnupg/S.dirmngr'
dirmngr[813093.0] permanently loaded certificates: 145
dirmngr[813093.0]     runtime cached certificates: 0
dirmngr[813093.0]            trusted certificates: 145 (145,0,0,0)
dirmngr[813093.6] handler for fd 6 started
dirmngr[813093.6] DBG: chan_6 -> # Home: /home/***/.gnupg
dirmngr[813093.6] DBG: chan_6 -> # Config: /home/***/.gnupg/dirmngr.conf
dirmngr[813093.6] DBG: chan_6 -> OK Dirmngr 2.2.41 at your service
dirmngr[813093.6] connection from process 813090 (***:***)
dirmngr[813093.6] DBG: chan_6 <- GETINFO version
dirmngr[813093.6] DBG: chan_6 -> D 2.2.41
dirmngr[813093.6] DBG: chan_6 -> OK
dirmngr[813093.6] DBG: chan_6 <- WKD_GET -- torbrowser@torproject.org
dirmngr[813093.6] DBG: chan_6 -> S SOURCE https://openpgpkey.torproject.org
dirmngr[813093.6] number of system provided CAs: 0
dirmngr[813093.6] TLS verification of peer failed: status=0x0042
dirmngr[813093.6] TLS verification of peer failed: The certificate is NOT trusted.
        The certificate issuer is unknown. 
dirmngr[813093.6] DBG: expected hostname: openpgpkey.torproject.org
dirmngr[813093.6] DBG: BEGIN Certificate 'server[0]':
dirmngr[813093.6] DBG:      serial: 0468C1A495902B3A5BE46845EF6767A04B2E
dirmngr[813093.6] DBG:   notBefore: 2023-09-02 00:48:10
dirmngr[813093.6] DBG:    notAfter: 2023-12-01 00:48:09
dirmngr[813093.6] DBG:      issuer: CN=R3,O=Let's Encrypt,C=US
dirmngr[813093.6] DBG:     subject: CN=openpgpkey.torproject.org
dirmngr[813093.6] DBG:         aka: (8:dns-name25:openpgpkey.torproject.org)
dirmngr[813093.6] DBG:   hash algo: 1.2.840.113549.1.1.11
dirmngr[813093.6] DBG:   SHA1 fingerprint: 86A624B2EF0BDA723F2459B2C90D20D33E7EA8AD
dirmngr[813093.6] DBG: END Certificate
dirmngr[813093.6] DBG: BEGIN Certificate 'server[1]':
dirmngr[813093.6] DBG:      serial: 00912B084ACF0C18A753F6D62E25A75F5A
dirmngr[813093.6] DBG:   notBefore: 2020-09-04 00:00:00
dirmngr[813093.6] DBG:    notAfter: 2025-09-15 16:00:00
dirmngr[813093.6] DBG:      issuer: CN=ISRG Root X1,O=Internet Security Research Group,C=US
dirmngr[813093.6] DBG:     subject: CN=R3,O=Let's Encrypt,C=US
dirmngr[813093.6] DBG:   hash algo: 1.2.840.113549.1.1.11
dirmngr[813093.6] DBG:   SHA1 fingerprint: A053375BFE84E8B748782C7CEE15827A6AF5A405
dirmngr[813093.6] DBG: END Certificate
dirmngr[813093.6] DBG: BEGIN Certificate 'server[2]':
dirmngr[813093.6] DBG:      serial: 4001772137D4E942B8EE76AA3C640AB7
dirmngr[813093.6] DBG:   notBefore: 2021-01-20 19:14:03
dirmngr[813093.6] DBG:    notAfter: 2024-09-30 18:14:03
dirmngr[813093.6] DBG:      issuer: CN=DST Root CA X3,O=Digital Signature Trust Co.
dirmngr[813093.6] DBG:     subject: CN=ISRG Root X1,O=Internet Security Research Group,C=US
dirmngr[813093.6] DBG:   hash algo: 1.2.840.113549.1.1.11
dirmngr[813093.6] DBG:   SHA1 fingerprint: 933C6DDEE95C9C41A40F9F50493D82BE03AD87BF
dirmngr[813093.6] DBG: END Certificate
dirmngr[813093.6] TLS connection authentication failed: General error
dirmngr[813093.6] error connecting to 'https://openpgpkey.torproject.org/.well-known/
        openpgpkey/torproject.org/hu/kounek7zrdx745qydx6p59t9mqjpuhdf?l=torbrowser': General error
dirmngr[813093.6] command 'WKD_GET' failed: General error <Unspecified source>
dirmngr[813093.6] DBG: chan_6 -> ERR 1 General error <Unspecified source>
dirmngr[813093.6] DBG: chan_6 <- BYE
dirmngr[813093.6] DBG: chan_6 -> OK closing connection
dirmngr[813093.6] handler for fd 6 terminated
dirmngr[813093.0] running scheduled tasks (with network)
dirmngr[813093.0] running scheduled tasks
dirmngr[813093.0] running scheduled tasks

Now WTF!?! The ISRG X1 root CA is loaded as a system CA certificate, so what is going on here? Whatever one tries, one seemingly just can't get past this error. Searching the web then points to the SKS pool CA, which e.g. on Arch Linux one is supposed to include in ~/.gnupg/dirmngr.conf as:

hkp-cacert /usr/share/gnupg/sks-keyservers.netCA.pem

Now this leads to the interesting point that dirmngr refuses the above certificate as expired. And there won't be any new SKS CA for sure. So what? Another lengthy round of online research reveals that the wisdom of the GnuPG developers is without any limits - and as such they decided that a trusted root CA is for sure not a trusted HKP CA. So, to get things working again one has to find the local system's copy of the ISRG X1 CA and make ~/.gnupg/dirmngr.conf contain the following (the path below is for Arch Linux):

hkp-cacert /etc/ca-certificates/extracted/cadir/ISRG_Root_X1.pem
keyserver hkps://keys.openpgp.org

Kill dirmngr and restart all over. You will be surprised that with this well thought out and perfectly documented configuration gnupg suddenly starts to work again.
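
For reference, the restart and retry boil down to (gpgconf --kill is the official way to terminate GnuPG daemons):

gpgconf --kill dirmngr
gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org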

So something that should have worked out of the box has taken me half a day to trace down and adjust back to working order. Now, I know what I'm doing when tracing things down. Ordinary users don't. No wonder gnupg is as disliked as it is. This is just disgusting.

And, oh, try to use dirmngr while there is a short upstream network outage and see how happily this oh so well coded beast starts to consume 100% CPU, as GnuPG developers seemingly despise select, poll and similar calls; they prefer a polling loop at any cost. Maybe they're shareholders of certain CPU producing companies...


mosh - the dead shell (13.09.2023)

mosh would be beautiful to use, as it's designed for high latency links as well as for lossy networks - if only it supported TCP as an alternative communication mode.

The only place where I do encounter intolerably high latencies for interactive typing is emergency access to my home systems using onion routing. That's when either there's a DNS failure or ssh access is blocked by some proxy. And that's exactly when you can't use mosh, as it has a more than ten years old PR for TCP support that continues to get ignored.

Usually the speed of your TCP based ssh connection is sufficient and you don't need mosh. And if not, chances are that you can't use mosh anyway due to its UDP insistence. That's what I call a dead horse.


Privilege Escalation in AppArmor (07.09.2023)

There is a privilege escalation in the Linux AppArmor LSM that is present in probably all kernels since July 2019 that can lead to full system compromise. Looking at the kernel code the Tomoyo LSM is probably affected, too (not tested).

As I'm trying to be fair, I did post information about this to the AppArmor mailing list, where it has been sitting "awaiting moderator approval" for days. I'm slowly but steadily starting to assume that AppArmor mailing list moderation effectively means "send to /dev/null"...

Thus I'm trying now to get a CVE assigned to this. I will post further information about the problem and a band aid fix on my Github account at a later stage.


Security à la systemd (02.09.2023)

Edit:

Actually the problem is a rare case where Arch Linux is to blame first. Arch systemd is compiled without AppArmor support, though I don't see any reason for this. Now, if I were 20 years younger I'd recompile systemd. But age causes a little laziness.

Nevertheless, why doesn't systemd complain when a security relevant feature is used that is not compiled in? The systemd stuff is usually so noisy that it kills off an SSD every other week by means of TBW. And just for things that are really relevant there's utter silence. Doh.

Original Post:

Yes, everybody has to use systemd. And as there is a broad user base it is well audited...

Now look at the systemd.exec documentation for AppArmorProfile=. Try to use this option in good faith. Doh, the target process stays unconfined. Now, let's remove this broken fake feature line and add aa-exec -p at the start of the ExecStart= string instead, and watch your process being perfectly confined by AppArmor.
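
In unit form the workaround looks like this, with profile and binary names being placeholders:

[Service]
ExecStart=/usr/bin/aa-exec -p my_profile -- /usr/bin/mydaemon

instead of the non-working:

[Service]
AppArmorProfile=my_profile
ExecStart=/usr/bin/mydaemon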

Well, it seems that security is not one of the strong systemd features. So I do suppose the strongest features are annoyance and frustration to make users change to that "easy" Redmond OS...

And don't anybody try to tell me "well, in general it works, but it's broken in this or that release". From a security point of view, any kind of brokenness of a relevant security feature means that you cannot trust such code anymore, ever - except if you use this Redmond stuff, where even a catastrophic key loss is played down as "well, can happen".


Wifi PMKs can be harmful (19.08.2023)

If you run an environment with WLCs and a RADIUS server, make sure that you are able to view the PMKs stored on the WLC for wireless clients and that you are especially able to delete a stored PMK on the WLC.

Otherwise a blocked or deleted user will have Wifi access until the PMK expires, which may take hours!


Cisco's Ministry of Silly Walks (19.08.2023)

Ever wondered how to install your shiny and brand new server certificate on a CBS250 or CBS350 device? Given up in utter despair? Fear no more, success is just a few erratic and illogical steps away. But be sure that your certificate contains a 2048 bit RSA key (3072 bit RSA might work, too, but is untested), Cisco doesn't like these fancy new EC things.

First of all, you will need a private key file with no password; you can achieve this with:

openssl rsa -in encrypted.key.pem -out unencrypted.key.pem

Then you will need to extract the public key from your certificate; you can use:

openssl x509 -in cert.pem -pubkey -out pubkey.pem
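
Before entering the Ministry it doesn't hurt to verify that key and certificate actually belong together - the standard OpenSSL modulus check; both digests must match:

openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in unencrypted.key.pem | openssl md5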

This was very logical, right? Wait no more, log on to your switch via your browser and skip to:

Security => SSL Server => SSL Server Authentication Settings

Welcome to the Ministry of Silly Walks!

Rules:

  • Ignore the certificate that openssl appends after the end of the public key in the generated file.

  • For all operations, public and private key must be modified as follows (a conversion sketch follows after this list)
    Public Key:
    -----BEGIN PUBLIC KEY-----      =>      -----BEGIN RSA PUBLIC KEY-----
    -----END PUBLIC KEY-----        =>      -----END RSA PUBLIC KEY-----
    
    Plaintext:
    -----BEGIN PRIVATE KEY-----     =>      -----BEGIN RSA PRIVATE KEY-----
    -----END PRIVATE KEY-----       =>      -----END RSA PRIVATE KEY-----
    

  • Whenever a private key has to be entered the 'Encrypted' input field must be cleared and the private key must be placed in the 'Plaintext' field.
     
  • Unless explicitly stated, all fields into which data has to be entered have to be cleared first, i.e. the data as delivered by the switch is to be ignored.
     
  • Ignore the unhelpful "Empty value is invalid." messages and continue processing, just click again.
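
The header fiddling doesn't have to be done by hand. With OpenSSL 3.x the traditional PKCS#1 ("RSA") headers can be emitted directly (older releases output the private key in that format by default):

openssl rsa -in unencrypted.key.pem -traditional -out unencrypted.rsa.key.pem
openssl rsa -pubin -in pubkey.pem -RSAPublicKey_out -out pubkey.rsa.pem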

Now that you know the Ministry's rules, let's start the silly walk by selecting the inactive certificate slot and clicking "Import Certificate..." - never close the popup until the whole silly walk is done and, well, yes, the road to success is full of failures:

  • Try to import certificate, public key and private key. This will fail with:
    Failed to load public key

  • Try to import only the certificate, leave the "Import RSA Key-Pair" checkbox unchecked. This will then fail with:
    SSL saved private key did not match the imported certificate.
     
  • Try to import certificate and private key, do not touch the presented public key. This results in:
    Success.

Now that you have passed all the tests for your capabilities of being a proud member of the Ministry of Silly Walks you can close the popup and enjoy your shiny new certificate.


KDE and Systemd Developers sum up to: Security, what Security? (04.08.2023)

It is not so nice when developers don't have any clue about security and thus constantly shift responsibility to another project. This results in insecure systems with virtually no one responsible. Somebody who can file CVEs should look at this and get into action. A few not so nice examples:

  • When using fscrypt to secure a home directory a user expects the directory to be unlocked (done via pam_fscrypt) at login and to be locked at logout. Well, pam_fscrypt tries to lock but has to fail miserably.

    The reason for this is the default setup of systemd-logind, which does not kill user session processes at logout time. So a bunch of processes, including keyring stuff and, if started via systemd --user, an ssh-agent with valid and accessible keys, keeps hanging around (a configuration sketch follows after this list).

    As a result files in the home directory are in use and pam_fscrypt can't lock the home directory. And even if systemd-logind is configured to kill the user session processes this happens a variable amount of seconds after logout, so still no joy.

    One has to go through hoops and start a script as root that attempts to lock the home directory for multiple seconds - success not guaranteed.
     
  • As stated before, process kill and directory lock take place multiple seconds after logout. So, if a user changes settings that require re-login to be applied, the user logs out and instantly logs in again. The result is a "ploink" sound and either a spinning KDE wheel or a blank screen, with no way to go backward or forward. If the user is able to switch to a console and has the permission to restart sddm from there, the user can resurrect the system; the other options are 'call and wait for administrator' or 'unplug system mains'.
     
  • Now, if the home directory is not encrypted, ssh-agent is in use and systemd-logind is used in the default configuration, things tend to be equally bad. The friendly Mallory working at the next table spies out the login passphrase. After his victim logs out, Mallory just needs to log in to be able to use all activated ssh keys of his victim, thanks to the ignorance of KDE and systemd developers. I beg to question whether those developers use PuTTY on this OS from Redmond when they need ssh access?
     
  • Somebody should look at the keyring stuff that is left running in the default configuration. I'm too disgusted already to continue digging in this mess.
     
  • Now for the easy part: to prevent access to system control functionality like poweroff, one has to create Polkit rules - KDE developers just shift responsibility and are done with it. When it comes to "Restart", one has to find out by trial and error what has to be disabled via Polkit to make "Restart" disappear - documentation is, well, sparse. And as if this wasn't ugly enough, there's no way to hide the "Switch User" button except for editing "~/.config/kdeglobals" and adding a section which is best documented in mailing list archives. To remove unwanted buttons from the login screen one then has to edit the proper sddm theme file, there's no other way.
     
  • If one is not frustrated enough, then here we go again: KDE developers seem to know only about passwords when it comes to the screensaver. If one is smart enough to be able to use a passwordless unlock functionality, the screensaver greets the unsuspecting user with an "Unlock" button probably designed to wear out the keyboard. This is a productivity hindrance par excellence and can only be resolved by patching the LockScreenUi.qml file. It turns out that someone out of their mind has decided to pop up this nonsense if the passphrase line was not displayed at unlock time. Arrrgh...
     
  • And there's pam_systemd, which is active in distros e.g. for ssh access and which starts stuff but never terminates it on logout...
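
At least the lingering processes can be influenced; a sketch of /etc/systemd/logind.conf (this makes logind kill session processes at logout, though as said above the kill happens a variable number of seconds too late):

[Login]
KillUserProcesses=yes
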

Access with Tor Browser (01.08.2023)

For all those who depend on privacy this site is accessible with Tor Browser using the following URL:

http://2tk3jwrferu37blz6g6oq7r3gcnbxgvi7tnddxblobywpbkkp2q3fvqd.onion/


Beam me up, Scotty... (03.07.2023)

...there's no intelligent life down in Redmond. This is especially true since Poettering moved there. How the f**k can one design 'systemd and udev' trash that first renames ethernet interfaces to illegible gibberish and then have this waste of time and money rename the interface when a PCIE card (no network) is inserted or removed? I just get inspired by Edgar Allan Poe (lyrics by Alan Parsons):

By the last breath of the cold winds that blow
I'll have revenge upon Fortunato
Smile in his face I'll say "Come let us go
I've a cask of Amontillado"

Sheltered inside from the cold of the snow
Follow me now to the vaults down below
...

Replace Fortunato with Poettering and the cask of Amontillado with any torture object of your choice and you'll get the idea. But well, he made his statement by modifying a stable system environment to become more like the unstable Redmond insanity. Disgusting.


Switch Woes (31.05.2023)

Why isn't there any switch manufacturer that completely fills the huge gap between unmanaged aka dumb switches for, well, similar users and multi billion big business devices? Why are there so many "managed" switches for sale with virtually no IPv6 support? Why do some manufacturers sell switches with "Windows only" configuration tools?

There is for sure demand for semi-professional devices for home use that are not constructed of hardware that was designed 10+ years ago. Think power users, think extended home office. A 10GBit home backbone with at least some NBase-T ports and some 10GBit ports not allocated for the backbone would be nice. Switches located close together should be interconnected by SFP+ DAC cables, whereas remote (a.k.a. other room) switches should be interconnected by 10GBase-T if the existing wiring allows for it, otherwise via NBase-T. This results in the following wish list:

  • Preferably NBase-T, at least 2.5G, supporting 10/100/1000/2500(/5000), with POE+ support
  • Either 10GBase-T or SFP+, preferably combo port(s), at least two ports
  • Standard ports (10/100/1000) with POE+ support, can be substituted by 10/100/1000/2500 POE+ ports
  • Fanless, absolutely mandatory
  • Software allows port LEDs to be disabled, absolutely mandatory, no nightly light show
  • Managed with complete dual stack, no crippled IPv4 only management
  • Web GUI as an alternative to command line (if you don't configure switches on a daily base CLI configuration is cumbersome)
  • Stacking would be nice as no fan typically implies multiple units
  • Affordable, i.e. in the 3 digit € range per unit

The only switches I could find that do mostly fit the above list are from Cisco:

  • CBS250-24P-4X (24x GBit, no NBase-T, 4x SFP+, usable as a central distribution unit on a home backbone)
  • CBS350-24P-4X (24x GBit, no NBase-T, 4x SFP+, usable as a central distribution unit on a home backbone)
  • CBS350-8MGP-2X (6x Gbit, 2x NBase-T 2.5G, 2x 10G combo ports, no stacking)

As for stacking, even though Cisco advertises "Hybrid Stacking" between the CBS350-24P-4X and the CBS350-8MGP-2X online, there seems to be no way (at least in the Web GUI) to configure any kind of stacking on the CBS350-8MGP-2X.

If one then looks for POE PD powered switches without a nightly light show there is actually only a single device, again from Cisco:

  • CBS250-8T-D (10/100/1000 only, no POE passthrough)

I couldn't find any POE PD powered device with POE passthrough and an option to switch off the port LEDs. The same goes for IPv6 support (missing on most devices sold) as well as for a POE PD powered NBase-T port (missing on all devices I could find). Is this so impossible to implement?

And yes, being able to switch off the port LEDs is the actual "killer application" for home (office) use. Most people don't have a dedicated network closet in their home (though this may have changed in about 100 years). A nightly light show in bright green, yellow and orange (possibly accompanied by a stylish and bright blue or white power indicator) from inside the office, in the living room or, even better, in the bedroom is not to everybody's taste. And though one can usually tape over a power LED easily, this is typically not so simple with port LEDs - the fun then begins when one needs the port LEDs for those five minutes once a year...

For the same reasons fanless devices are required, except for those who prefer a running lawn mower at night in their bedroom...

I do really wonder how many decades it will take for switch manufacturers to accept that 10/100/1000/2500 is the new minimum standard, 5000 is a plus value and 10000 is the power user (and not a $$$ premium) version. When will switch manufacturers accept the fact that a dedicated network closet is not the rule but the exception in most facilities (a.k.a. homes)? Instead I actually see 10/100 desktop devices still being sold, which does give me the creeps. The same goes for the 10MBit requirement - but unfortunately there are still such devices out there that one may need to connect to a switch port.

And well, as for the above mentioned Cisco switches: if one does find a bug, one may keep it. No $$$ service contract, no bug report, it's as simple as that. One can only hope that a bug also affects a user with a service contract...

All that seems to suggest that the management decisions of switch manufacturers are made by people who mentally live at least 10 years in the past. I just can't see any other reason.


The sad state of HDMI transmitters (12.04.2021)

Is it only me, or are there other people out there who want to use their large TV screens as a temporary laptop monitor - including a wireless HDMI connection, for one reason or the other?

If you want to do so, 60GHz transmitters are out of the question. Absorption by oxygen is, well, great in this frequency range, and family or pets blocking the line of sight between transmitter and receiver also produce not so nice results. Not usable for concentrated work.

So the choice is using a 5GHz transmitter. On to the second problem: latency. For interactive work a latency less than or equal to 35ms is necessary, otherwise keyboard and mouse input will appear with a noticeable delay on the screen. Sadly, there is no latency information for most consumer products, and where there is, the specified latency is typically around 100ms. So the typical and average consumer products are not usable.

Professional equipment is far too expensive (starting in the 4 digit euro/dollar range), so that's out of the question, too. And a laptop typically has no SDI output - and the TV has no SDI input either.

On to semiprofessional equipment. The number of affordable devices is small and the latency of most devices is bad, too. The only product I came across is the Swit Curve500, which is advertised with a below 32ms latency (30ms typical). Pricing is acceptable, too. DFS and sufficient channels to stay away from active WIFI frequencies. The only other product I came across regarding latency is from Gefen, and at least the European version has only two frequencies (probably no DFS), which is a show stopper - and then the European version is unreasonably expensive.

In the end I will probably be trying the Swit Curve500. Though supposed to be a camcorder/camera transmitter and thus having only two audio channels, it will probably work as expected. And 1080P60 is a sufficient resolution for me for a laptop display replacement. It's more important that the connection works than maximum resolution. After all, I want to use this for concentrated work.


Is this a smartphone or a gimmick? (10.04.2021)

Recently got a Sony Xperia 1 II. Basically a nice phone, except for the unavoidable camera furuncle. But: the "Xperia Home" launcher app was somewhat revamped, very much for the worse. Actually so bad that I had to switch to the Nova Launcher.

There are two killer problems. The first is that certain widgets I use are not listed after installation by the Sony app, so they cannot be used. The second one is that suddenly the home page of the launcher is the leftmost page, and that cannot be changed. I am not willing to break my fingers having to shuffle through a pile of pages rightwards when the amount of work can be halved by a center home page. Very efficient.

And if this wasn't enough, some apps like the mail app are gone. Now, I don't want to use the Google mail app for each and every account, and I don't want to be swamped with ads just for processing mail. Well, a premium phone is probably much too cheap for Sony to include the previously existing mail app.

Tell you what: if Sony continues making such good decisions, they will find out that the market for professional photographers using a premium smartphone as the equipment of choice for their job is, well, quite small. But maybe they just want to lose enough customers to justify the end of their smartphone production. Who knows...


Android 12, Codename Warthog (10.04.2021)

Now, I don't care about smartphone camera quality beyond average - if one wants a quality picture, buy a DSLR. The sensor size alone, which must be small for a phone, physically doesn't allow for high quality pictures.

Personally, I don't like phones that have a camera furuncle. If this trend continues, the next smartphone generation will probably have a certain resemblance to a warthog.

Now, why is it that if I want to buy a phone with hardware that can actually be used for work, it is nowadays always a furuncle one? Isn't there a market for people who use the phone primarily for communication? After all, even if smart, it is a phone!


The perfect Qi wireless charger (04.04.2021)

I do have a dream about a perfect Qi wireless charger:

  1. 15W/10W/7.5W/5W output with temperature control
  2. USB-C input with QC as well as PD support
  3. Ambient brightness sensor for LED brightness control
  4. Bluetooth LE and APP support for programming, e.g.:
    - selectable LED brightness curves, including LED off
    - selectable temperature limits for charge power levels
    - selectable maximum charge power depending on time of day
  5. NFC pairing support for Bluetooth LE

Unfortunately the sad state of affairs is that only a small number of Qi wireless chargers currently support 15W charging. Even fewer have a USB-C connector, and only some of them support PD as well as QC. And as far as I know only two products of one manufacturer (stand and pad) additionally have an ambient light sensor controlling the LED brightness - the other products seem to advertise the song "Blinded by the Light". Finally, no product supports Bluetooth LE/NFC for charger programming and control.

I'd wish manufacturers would invest more resources in technical functionality than in fancy design and the quest for the world's most annoying LED color, position, flashing pattern and brightness.


GnuTLS and session tickets (13.08.2020)

No,
this is not about CVE-2020-13777, which is horrible. It is about the way GnuTLS handles session ticket lifetimes and the key to en/decrypt session tickets. One would expect that if GnuTLS is configured to issue session tickets with a lifetime of one hour, the tickets will be valid for one hour starting from the current time. A reasonable assumption. But completely wrong when GnuTLS is used as the TLS backend for a server.

First of all, GnuTLS doesn't work with the current time but divides the time since the epoch into slices of the ticket lifetime multiplied by 3. To simplify things for the current example, let's assume the configured ticket lifetime is one hour and there were never any leap seconds. So GnuTLS divides the day into 8 slices of three hours.

This means that a ticket issued at 03:00:00 and a ticket issued at 05:59:59 belong to the same slice. Now, as a ticket issued at 05:59:59 would become instantly invalid, GnuTLS uses a rollback mechanism that allows the previous slice to be valid, too. Thus a ticket issued at 03:00:00, which is expected to be valid until 03:59:59, is actually valid until 08:59:59, or 5 hours longer than configured and expected.
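
A small reconstruction of the described slice arithmetic - my reading of the behaviour, not GnuTLS source code:

#include <stdint.h>
#include <time.h>

/* Tickets carry the slice they were issued in; lifetime in seconds. */
static uint64_t slice_of(time_t t, uint64_t lifetime)
{
    return (uint64_t)t / (3 * lifetime);
}

/* A ticket is accepted if it was issued in the current or the previous
   slice - which is why a 03:00:00 ticket stays valid until 08:59:59. */
static int ticket_valid(uint64_t ticket_slice, time_t now, uint64_t lifetime)
{
    uint64_t cur = slice_of(now, lifetime);
    return ticket_slice == cur || ticket_slice + 1 == cur;
}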

Oh wait, that's not all. GnuTLS doesn't use a monotonic clock for ticket lifetime handling, it uses the standard adjustable system clock (time() or CLOCK_REALTIME) and can thus be nudged to accept tickets forever if an attacker can adjust the system time.

Moreover, if the API of GnuTLS is used as documented, one has to use a single resumption master key for all sessions which is valid for the whole server lifetime. Now, if an attacker can get hold of this key, any PFS for the whole lifetime of the server is moot, as the "current" en/decryption key for session tickets can always be derived from this master key (SHA512 of time slice number and master key). This means that an attacker can extract the key, replace the server application with a malicious one, and any client that has connected to the original server application during its lifetime will unknowingly resume successfully with the malicious server. And yes, an attacker can thus decrypt all recorded session resumption data. This should allow decrypting recorded encrypted traffic, too. To make that more understandable: an attacker records the traffic of a long running server for some months until access to the server is gained and the resumption key is extracted. Then the attacker can start decrypting all those recorded data...

There is no easy workaround for this behaviour, as it seems GnuTLS deems its users dumb and thus offers no API for user based key management. I can only advise to either never enable default session resumption with GnuTLS or to use other libraries like OpenSSL, which do have a proper key management API and thus allow for short lived ephemeral keys for session ticket management.


Browsers are Censor friendly (08.08.2020)

So, how can a browser be censor friendly? Well, by the way it behaves behind the scenes. Nearly every Linux and Android browser (with the notable exceptions of Firefox 78 Linux and Konqueror 5.0 Linux) I did test does an automatic reconnect when a server closes the connection after the browser has sent the TLS client hello message. And the user doesn't get notified in any way.

This would be no big deal except for the fact that on retry the browsers change the contents of the TLS client hello and e.g. include 3DES as an acceptable cipher. Doh, back to DES times! Now, the server does not need to select this cipher, but if the server is managed by an evil entity the server may propose 3DES usage - and whether this implementation is still well tested and safe and sound on the browser's side is another question.

To add insult to injury this has the side effect that every client application that mimics a browser's client hello must mimic the reconnect behaviour, too, or it can be easily detected by an evil censor that plays MITM and forces a client disconnect at the proper time of the handshake.

In the end this behaviour doesn't really help the user, but it does help any censoring government or institution to detect censorship avoidance attempts, resulting either in a blocked connection or in worse human rights violations.

Browser developers should disable this behaviour by default and make it selectable on a per-site basis. It is not a problem if a browser tries to reconnect, but the TLS client hello must not be modified if the user didn't explicitly agree to it. This is the only way a browser manufacturer does not become a willing censor's servant.

 


The sad state of eBPF (15.05.2019)

eBPF could be cute. Really cute. One could do smart in-kernel network packet processing. Except that there are really huge roadblocks.

First of all, there is no usable documentation. One has to collect bits and pieces from all over the internet. Then LLVM compiled stuff quite often does not fit the bill, as efficient eBPF code needs to be runtime modifiable before it is loaded into the kernel. Now how does one do this with compiler output in ELF format? Arrrghhh!

So I did write a little eBPF to C assembler (find it here). And things look great again. 11 registers, let's start developing - until the in-kernel verifier kills register usage. It loses track of r1 to r5 even on simple conditional branches. So 5 registers are so volatile that they are practically unusable. This still leaves 6 registers. Not quite so. r0 is quite often an implicit source or destination, so it is only half usable. r6 is expected to contain the context pointer you need to be able to access the packet data. And r10 is the frame pointer that one needs to be able to call helper functions. So only r7 to r9 are left that one can actually use. Compared to cBPF, which does have an accumulator and an index register (which I do count as half a register), you have 1.5 cBPF registers versus 3.5 eBPF registers. Not much improvement here.

Then you want to access packet meta data, e.g. the source address, from an eBPF program that is attached to a UDP socket. The data is there. But you are not allowed to access it. Doh.

Now you look at the helper functions and get the idea that it would be nice to move certain received packets to another socket. Could save userspace complexity. Go, try. EINVAL. Not only does one have to read through a pile of kernel code to find out that the related code requires additional kernel configuration to be compiled in, the gods of eBPF added extra insult to injury by allowing packet transfer only for TCP. Why, only why??? Aren't UDP or raw packets even simpler than TCP? And what about SCTP? Did somebody pay to prevent socket transfer for raw and UDP sockets?

Then you don't want to enable the eBPF JIT system wide but only for "proven" eBPF code, whereas code under development should be interpreted. To no avail. There's a huge slew of parameters to the bpf system call, but there is no system wide "use JIT if told so" configuration option, as well as no "I want JIT" bpf call option. Who designed that?!? Now, if JIT fails to work properly for a single eBPF program, one has to disable eBPF JIT system wide. As Stephen King wrote: "new and improved" (The Tommyknockers).

And then there's stuff like eBPF cgroup programs. Oh, not programs, a single program per cgroup. You are allowed to do more in a cgroup program. Except that cgroups are (ab)used by distros and it is not so easy to just create a cgroup here, a cgroup there, ... - without an overall concept you will create a mess of your system, and one really doesn't want an application to create cgroups on the fly, especially as there's no one there to clean up afterwards.

In the end it boils down to the fact that from an application point of view only socket filtering makes sense. And this is artificially restricted so much that writing eBPF code for this purpose is not much fun. The only reason to do so is the dreadful slowness of [e]poll.

All other network related eBPF use cases seem to be custom tailored for expensive and specialized big iron. So after all for the more common use case eBPF is sadly nothing more than a helper to workaround kernel slowness.


[e]poll bottleneck (15.05.2019)

[e]poll on Linux is slow. Very slow. To be more precise: slow as a dead horse. 45us on a system where a clock_gettime syscall (yes, the syscall, not the vDSO) takes 335ns and a read call via library takes 4us. This is on an i7-7500U CPU @ 2.70GHz (not fast, but not that slow).

So let's do some quick calculation for a process receiving a UDP packet and answering with another UDP packet: 45us epoll + 10us recvmsg + 35us sendmsg + 10us userspace data processing. So every single packet received and replied to takes 100us, and only 10% of this time is actual packet processing. 50% of the remaining 90% is consumed by a single [e]poll call. And yes, 100us means no more than 10000 packets per second and core, which in case of 1K packet size amounts to a network throughput of only 80Mbit/s for a single core. This is horrible.
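
For reference, the per-packet path being measured is just this minimal loop - a sketch for a single UDP socket with error handling trimmed:

#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>

/* One epoll_wait + one recvfrom + one sendto per packet. */
static void echo_loop(int epfd, int sock)
{
    struct epoll_event ev;
    struct sockaddr_storage peer;
    socklen_t plen;
    char buf[2048];

    for (;;) {
        if (epoll_wait(epfd, &ev, 1, -1) != 1)  /* the 45us hog */
            continue;
        plen = sizeof(peer);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &plen);
        if (n <= 0)
            continue;
        /* the ~10us of actual packet processing would happen here */
        sendto(sock, buf, n, 0, (struct sockaddr *)&peer, plen);
    }
}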

And there is no workaround: recvmmsg is more often than not unusable for an application, as usually signalled data from [e]poll means a single received packet is pending. And if only a single packet has to be sent, sendmmsg is unusable, too.

And this is why one should learn a little eBPF assembler (no, not LLVM compiled stuff, handcrafted code), especially if the application in question may be attacked. The basic data checks need to be processed in-kernel, as the epoll+recvmsg overhead is so huge.

Hopefully the responsible kernel developers will one day understand that [e]poll is a fast path and requires a race horse, not a dead one.


The sad state of smartcards on Linux (22.12.2018)

Well,
the idea is simple: use a smartcard reader or a CCID compliant USB token and get rid of passwords. Simple, isn't it? That is, until it comes to practice. First of all, many smartcard vendors are still going the vendor lock-in way, and even with smartcards or tokens one can configure under Linux there are roadblocks:

There's a stack consisting of the smartcard driver, e.g. CCID, then PCSC-Lite, then OpenSC, followed by libp11 and finally OpenSSL. All different developer groups, seemingly with no real communication, resulting more often than not in broken interworking or sudden parameter changes. Documentation is sparse at best, users are left alone in the dark, developers need to use trial and error. Furthermore, certain functionality is missing, with no easy way to push required code upstream.

Take for example the CCID driver. Suddenly smartcard names are changed. Nobody cares if this could cause problems.

OpenSC developers don't take patches in unified diff format, one shall earn a master's degree in git usage first instead, bah! I still have to investigate why 0.19 seemingly breaks libp11 and thus my applications.

As for libp11, there was once another library to be used instead, which was abandoned - with no information that libp11 was to be used in its place. Then there's missing functionality, e.g. the 'door opener' slot of a smartcard doesn't need and doesn't want a PIN, so let's patch...

OpenSSL engine documentation is, well, horrible as well as partly non-existent. Reading the sources is not the way it should be.

Having to use library pathnames to configure the stack doesn't really help portability at all.

It's bad for developers and nearly impossible to use for end users.


Snooker Commentators (02.05.2018)

Well,
this is quite subjective, but watching the Snooker World Championship with a 1.5m dish and remembering the Masters I've got my commentator favourites, positive and negative:

Most entertaining: John Virgo "Where's the cue ball going" / "there's always a gap" / "You have to remember the pockets are always in the same place"

Most boring: Clive Everton "Bzzzzs (silence) ---- Chrrrrr (silence) ----" after one to two minutes something obvious with a 10 seconds delay ...

Situation based dry British humour: Dennis Taylor "Remember the championship ends on Bank Holiday Monday" (a tip tap situation with a very delayed rerack)

Straight forward precise: Steve Davis, Stephen Hendry, Alan McManus

Non playing best: Phil Yates - sometimes overdoing it, which means you hear the professional reporter

You may have other opinions, YMMV.


Quickies (14.02.2018)

Shitter: Exhaust pipe of a moron in chief.

UK has replaced April Fool's Day. They now celebrate May Fool's Day.


Why DNSSEC is a terrible beast (28.11.2017)

Well, the idea behind DNSSEC is not bad. But the design sucks, especially for what people do most - surfing the web. The designers of DNSSEC seemingly ignored the fact that most of the larger web sites, ad networks, etc. are served via some sort of CDN. So, as a resolver has to look up the DNS records for all elements of a web page, when DNSSEC is activated all intermediate CNAME records must be checked for whether there is a DS record. The final A and AAAA records must be checked for existing DS records, too. Even ignoring the fact that the TLD keys must additionally be fetched, DNSSEC slows down resolving by a factor of 10. So the address lookup for a single web page element now requires not 30ms, but 300ms instead. Let's assume that a typical web page requires 20 resolver queries: that's then 6s instead of 600ms. Ouch. Especially as more bandwidth will not help due to the required query and answer time. Only reduced latency would help, but one has to remember that the speed of light is the limit and that a remote DNS server will take processing time, too. A shortcut redesign for DNSSEC is required for what an end user will call "browsing experience".

Well, it seems the only solution to this problem actually is TLS and DNS views. If all web servers use TLS, the certificate will validate the server sufficiently; thus for surfing the web DNSSEC is no longer required. Which in turn requires DNS views: one view with DNSSEC disabled which is used for surfing, and a DNSSEC enabled view for all other services. One can only hope that some time in the future DNSSEC is no longer required at all due to somewhat mandatory TLS (or similar cryptographic validation) usage. Die, DNSSEC, die!


HTTP/2 Support (21.07.2017)

This site now supports HTTP/2. Have fun!


Resent Big Brother (17.07.2017)

When Big Brother wants to supervise and control all your activities, you should resent that. One way to do this is to use VPNs. Unfortunately most VPNs are easy to detect and have a star topology, which means removing the central node kills off the whole VPN. How about a VPN that is difficult to detect, can with some creativity be hidden in a large variety of "harmless" communication protocols, can be used as a base for a Mesh-VPN and works via Tor? Add in state of the art cryptographic methods. Now, the first release is available (Linux and only Linux). It probably has some minor problems but works stably enough to be used for my daily work.


Availability via Tor (07.07.2017)

This site is now available also via Tor. Please visit kjsadwudhdnrkxhd.onion if you want to keep your privacy.


Brother CUPS Pico Howto (14.03.2017)

If you are using a printer like the Brother DCP-9022CDW and use CUPS, you don't need the LPD or JetDirect protocols, you can use IPP via https. Actually it is quite simple, and you don't need any proprietary drivers when the printer is network connected. The only thing you need is the proper ppd file. Get yourself the cupswrapper GPL source from Brother and extract the ppd file contained in the archive. Then install it as (adjust the file name according to your printer):

/usr/share/cups/model/Brother/brother_dcp9022cdw_printer_en.ppd 

Now you can create https based IPP printers. The required urls are very simple:

https://<printer-hostname-or-ip>/<name-of-service>

As for the <name-of-service>, open the management interface of your printer in your browser, select Network and then Service. You will see a column Service Name with all uppercase letters. <name-of-service> is any of the listed service names converted to lowercase, e.g. binary_p1.
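
Putting it together, the queue can be created in one go - a sketch assuming the printer answers at printer.example.com and the ppd path from above (-P is lpadmin's classic flag for a local PPD file):

lpadmin -p dcp9022cdw -E -v https://printer.example.com/binary_p1 -P /usr/share/cups/model/Brother/brother_dcp9022cdw_printer_en.ppd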

I forgot: for https you should create proper certificates matching your printer's host name. If you don't want any certificate hassle, it is probably sufficient for you to use http instead of https for IPP printing. All of the above applies to http, too.


How to break things the Google way... (23.02.2017)

...or Nougat at its worst. No, I won't call Android 7 names, but there are some areas with severe "as designed" breakage:

  1. User CA certificates

    Unfortunately Google has decided in its wisdom that user CA certificates will not be accepted by default anymore. But instead of allowing a system controlled setting, they pushed the task to the individual apps. Don't they know from experience that especially stock applications of device manufacturers will probably never get any such opt-in option? Couldn't they have made at least a default setting for certain system applications like the stock mail client? The only educated guess I do have here is that Alphabet (Google's parent company) must be planning their own CA and dreaming of big sales...

     
  2. Doze Mode

    OK, doze mode has existed since Android 6, happily breaking things. Well, the basic idea isn't a bad one: preventing overambitious apps from racing as to who is the fastest battery drainer. But the implementation went wrong, horribly wrong. Google decided to implement a user defined whitelist (not bad), but even apps on this list get restrictions (really bad) which more often than not cause communication breakage. And the app user doesn't care whether Google, the device manufacturer or the app developer is the culprit if connectivity is broken. Now, on Android 6 you can at least remedy this in an acceptable way, by creating an app that just calls "dumpsys deviceidle disable" and has the "android.permission.DUMP" privilege granted via adb. With such an app one can at least change the setting on the phone itself after a reboot.
    But with Nougat there's no such joy. The app would additionally need "android.permission.DEVICE_POWER" which, in short, can't be granted via adb. Thus, either never reboot or always have a trusted host handy. If you start to see an increasing number of backpackers, these are probably Nougat users. The only way I know of to prevent communication breakage for nearly all apps on Nougat after about 30 minutes of idle time is to connect the device to a host after every reboot and then issue "dumpsys deviceidle disable deep" (see the adb sketch after this list). Thanks, Google.

     
  3. FCM/GCM

    So you shall be able to receive FCM (the GCM successor) while in doze mode. No, no go. At least not when connected to a WLAN with a proxy configuration script (PAC). See for yourself and dial *#*#426#*#* on your phone. Do a tcpdump (ports 5228, 5229 and 5230) on a gateway and try to ping/connect to Google Play services (you did dial the stated number, did you?). The result is that Google Play services are completely ignorant of any proxy setting delivered via a proxy configuration script and only try a direct connection. Right so. Bring a bed to work and have some rest, you won't be disturbed...
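
For reference, the after-every-reboot ritual, sketched with a hypothetical helper app package name:

adb shell pm grant com.example.dozehelper android.permission.DUMP
adb shell dumpsys deviceidle disable deep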

There are more long-standing bugs like DHCPv6 which are simply deferred, though I do believe that Google will not implement DHCPv6 unless Samsung stops killing IPv6 when the display is switched off.


Also available as TLS version...12.01.2017

Well,
I have just enabled a TLS version of this site. Actually it is the same site, only encrypted. Though it doesn't really make sense for my content, TLS seems to be the trend of the day and sometimes I'm trendy.


SAT>IP Library for Linux13.03.2016

I did create a SAT>IP library for Linux including example applications. It's probably not quite ready yet for prime time but should already be sufficiently usable. You can get the code from Github.


QNAP Virtualization Station Woes27.12.2015

So you want to use QNAP's Virtualization Station, i.e. KVM on a QNAP. Looks good, everything works - but: all of a sudden you get an "urlopen error" when accessing the Virtualization Station.

This probably happens when you have IPv6 enabled on the QNAP, connect via IPv6 and have "Force secure connection (HTTPS) only" enabled in "System Settings" -> "General Settings".

Now, this takes a while to analyze. Luckily I had a system with such a setup available that does work, which I could use for comparison. It turns out that when changing to SSL only, the Stunnel settings are somehow not adjusted. You need to change the following in /etc/config/uLinux.conf:

[Stunnel]
Enable = 1
Port = 443

 Furthermore you need to edit /etc/config/stunnel/stunnel.conf and change the following: 

Old:

pid = /var/run/stunnel.pid
...
[https]
accept = 443

New:

pid = /stunnel.pid
...
[https]
accept = :::443

Reboot the QNAP device and Virtualization Station should be accessible again. BTW, I don't think the pid entry above causes any harm, but after a day of analysis I didn't test whether leaving it unchanged works, too.
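
To verify the fix, an IPv6 connection to port 443 should succeed again, e.g. (hostname is an example, -k accounts for a self-signed certificate):

curl -6 -k https://qnap.example.com/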


QNAP Security Vulnerability Details08.08.2015

As posted previously there is a real bad security problem affecting QNAP devices running kernel 3.12.6 and a firmware release prior to 4.1.4 build 0804.

In short:

If you use encrypted volumes on such a device, consider all such volumes to be compromised. The disk access keys are logged on unencrypted partitions in world readable files. This means that no matter what key you enter in the GUI, and however often you change it, somebody who read these logs has access to all encrypted data forever unless you take appropriate and quite time consuming measures.

More details:

If you grab the latest available GPL source which is GPL_TS-20150505-4.1.3.tar.gz you will find the following code in the file GPL_TS/src/linux-3.12.6/drivers/md/dm-table.c:

#ifdef CONFIG_MACH_QNAPTS
        printk(KERN_ALERT "dm_table_add_target start %s, start=%lu,len=%lu, param=%s, type=%lu...\n", type, start, len, params, tgt->type);
#endif

This line causes all disk access keys protected by cryptsetup to be logged on disk in world readable files. You just need to feed the log data to dmsetup to gain access to encrypted volumes. Doh. So a grep on offline copied disks is all that's needed to gain full access to all encrypted volumes. Online access through other security vulnerabilities is quite conceivable, too. And the sad thing is that QNAP doesn't find it necessary to notify customers about this. It seems they corrected this in firmware 4.1.4 build 0804 and keep very quiet about it.

They do probably keep this quiet as there is real time consuming work to be done to regain confidentiality. First you have to install the fixed firmware. Then you have to do a full backup of all data contained in the affected volumes. After that you have two possible ways to proceed as far as I can see:

  1. Use cryptsetup-reencrypt to replace the disk access key (see the sketch after this list). You have to bring your own version of this utility as well as the whole slew of required libraries as QNAP doesn't ship it.
  2. Delete all encrypted volumes. Then create new encrypted volumes. You will have to recreate all additional configuration related to these volumes.
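
A minimal sketch of option 1, assuming a LUKS volume on /dev/md0 and a self-built cryptsetup-reencrypt binary copied to the device together with the libraries it needs:

./cryptsetup-reencrypt /dev/md0

By default this regenerates the volume (master) key in place after asking for the passphrase.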

So sadly you will have to check for yourself, by calling dmesg after unlocking an encrypted volume, whether this vulnerability is fixed for you, and then proceed accordingly, taking into account that your QNAP device may be unavailable for many hours or even days. Some more details will be available on BUGTRAQ where I posted information about this security vulnerability today (mind, it can take a bit until released by the admins).
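
As a rough check, assuming the vulnerable printk shown above: unlock an encrypted volume and look for the tell-tale log line:

dmesg | grep dm_table_add_target

If this prints lines containing key material, you are affected.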


QNAP Security Vulnerability Advance Warning22.07.2015

There is a really bad security vulnerability which probably affects quite some of the current QNAP models in a certain and not so uncommon configuration. Firmware release 4.1.4 build 0522 is affected for sure, and from looking at the GPL sources all 4.1.3 versions are affected, too (I didn't check older releases). Note that this vulnerability was introduced by a QNAP modification of Open Source code.

QNAP was notified 2015/07/12 and acknowledged receipt on 2015/07/13. If there are timely responses from QNAP with regards to the release of a corrected firmware I will wait with publishing further details until the firmware is released. If there is no timely response I will post details here and on BUGTRAQ after a month, thus mid August.

Be aware that after a fix for this vulnerability is available and installed there will be quite some additional work to be done by anybody affected by this vulnerability, so adapt your plans accordingly.


Why one should never trust "OPAL" compliant self encrypting drives27.06.2015

I recently had a quick glance at the OPAL specification. Oh well. Self encrypting drives, AES - doesn't that sound good?

The only thing OPAL actually specifies related to security is that AES128 and/or AES256 must be supported. Not a word about cryptographic modes. Not a word about key storage or encryption. Not a word about tamper proof design. Actually nothing. So let's specify a drive that is "OPAL" compliant and thus "secure".

  1. Don't spend money for secure hardware and use AES in ECB mode.
  2. Spare the effort and store the master key as plaintext. The user key to "unlock" the master key is stored as plaintext, too, to enable easy coded string comparison.
  3. Make the majority of bits of the master key depend on the device serial number and use only 48 bits of randomness to get some NSA sponsoring.
  4. Implement some manufacturer specific hidden ATA commands to be always able to upload "special" firmware (naturally "only" for development) and download configuration data, logically including those itty gritty keys.

Now, we do have a "secure" and OPAL compliant drive. So let's move forward, sell and make big money.

As long as storage manufacturers use broken specifications and do not open up security relevant firmware for public review, including a trust chain that guarantees that the firmware flashed to the drive is 100% identical to the reviewed one, one should trust stuff like "OPAL" only to keep out kids less than 5 years old. The only way to go is to use open source software encryption that gets a thorough review, or at least a more thorough review than closed source code.

And the best thing for the average user is when OPAL is used with TPM: Mainboard broken, data gone. That's what I do call "proper" design. Which once more leads to "Do not use. Never!".


The Password Dilemma and pam_pivcard as a Solution01.04.2015

There's a well known and inherent security problem with computers which is called the password. Now, we all need a bunch of them. And to be honest, we all use something memorable and keyboard friendly. And we prefer a single sign on scenario with one password e.g. unlocking browser password stores, joining AD domains, and so on. Then, there's a theoretical solution called the smartcard. But: most smartcard vendors insist on proprietary card configuration tools (easy extra money and vendor lock in). Furthermore, for home and small office systems a complete PKI infrastructure is well beyond any acceptable limit. So we stick to passwords. And if too complicated passwords are enforced, we change to password stickers. And even for those PKI based systems there is a need for administrators if something goes wrong, who in turn need - to be able to access a limping system - you guessed it - a password.

Thus, a solution is required which can't be perfect from any theoretical security point of view but which is more secure than the current state of affairs. Let's see what we do require for a single or few user Linux system:

  • Access must be convenient, even if password enforcement rules are in place to prevent password stickers
  • A password actually has to be used to allow for single sign on comfort
  • The password must be complicated enough to withstand password crackers for a sufficiently long time
  • The human brain must not be overloaded with complicated and hard to remember passwords
  • The password must be in a form that it can be entered by a human to be able to do emergency administration

Now, the only convenient solution to this dilemma is to use a smartcard without PKI infrastructure. But that doesn't give us a password, right? Wrong. Here is a simple and convenient bridge that allows one to use smartcards without PKI infrastructure for convenience and still use a hopefully quite complicated password that is kept in a safe place for emergency access. We use a public key on the smartcard to encrypt the actual password and store this encrypted data on the local system. When the user wants to log in he or she activates the smartcard by means of inserting it or (flame war ahead) placing it on a contactless reader. After the smartcard is in place, hitting Return is all that's required. The system decrypts the stored password with the private key of the smartcard and injects it into the PAM authentication stack as if the user had entered it on the keyboard. Optionally, for more security relevant scenarios, the smartcard can be PIN protected as usual, in which case the user enters the smartcard PIN instead of the actual password. As the PAM stack now has an actual password to continue with, all these nifty single sign on PAM modules come into play. If the system then fails in any way one can just enter the regular password instead of using the smartcard as a fallback scenario. In a regular maintenance scenario the smartcard (or probably another one) can be used to access an administrative account.
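
Conceptually the two steps boil down to something like this (a plain file sketch with made-up names; the first command mirrors the password change step, the second the login step - pam_pivcard actually performs the private key operation through the PKCS11 engine, so the key never leaves the card):

openssl rsautl -encrypt -pubin -inkey card_pub.pem -in password.txt -out password.enc
openssl rsautl -decrypt -inkey card_priv.pem -in password.enc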

When one wants to change either the password or the actual smartcard, only the regular system password change command is required. The system authentication stores the hashed password as usual and the new password is encrypted with the currently presented smartcard. As storage is done as a single file per user, an administrator can simply delete a file to deny smartcard based access for a user.

As a result one can use complicated and hard to remember passwords, stored for emergency use in a safe place, and still have convenient system access with single sign on features. This is then so easy that there is a high probability of such a system actually being used. True, when the smartcard is used without PIN, or the PIN is entered to access another secure element of the card, malicious software could try to steal the password by means of decrypting it. This software would then have to have root access anyway. Which means that in this case you are already in greater trouble. And looking at passwords one can remember versus computing power and password crackers (you know OpenCL, do you?) the risk of the password being stolen via root access is acceptable compared to how fast and good password crackers are.

Ok, I do hear the security experts crying out loud "...but you can't store a plaintext password, never ever, encrypted or not, ...". Well, compare it to your home. If you let security experts have their way every room, including the toilet, has to have a tamper proof and resistant door, all windows have bulletproof glass and the walls are thick concrete. Furthermore every door is locked by a tamper resistant lock the key of which is stored in a key store which releases only one key at a time and only if you authenticate yourself to the key store every time. As the system is solid the keys are quite large and heavy. So if you want to enter a room you have to return the current key to the key store, which can only be done if you did lock the room you just left, then identify yourself to acquire the next key, unlock the door of the next room, and so on... - but in the real world, you live in a convenient flat with a usable external door lock and not in a prison, don't you? If you have data so sensitive that it must be protected by special means you will have proper security measures anyway (proper building, access control at various levels, separated data and system access, and so on). But I don't think that most of us need this kind of stuff. A convenient lock and a generally burglar resistant design of your home will do. If in doubt add a safe or preferably use a bank vault. If the burglar doesn't get in fast he or she will try an easier target. And if it is no burglar but special forces you can't prevent entry anyway - except maybe by exploding everything including yourself with a tactical nuclear device, which is probably not what you want.

So what's required is a smartcard that can be initialized without proprietary PKI configuration software, that leaves the user the choice between PIN and no PIN and that can be used with either no reader at all or, for comfort (but not security), through a contactless NFC reader. Well, there is such a 'card', the Yubikey NEO. To make it short, the device has deficiencies that can lead to DOS attacks against the device. Thus I wouldn't use it e.g. for hard drive encryption with an on device generated key. But it is very convenient to use for local or remote access, can be plugged into any USB port or be used through NFC. The device configuration software is open source and thus the device can be easily initialized. The PIV card applet on the device is exactly what's required, it features one PIN free and three PIN protected certificate/key pair storage slots. The device has quite some more features which are of no interest here. Sadly, there is no EMV cover available to protect the device from unauthorized access while moving around. But in this usage scenario the worst thing that can happen is that the device is rendered useless.

Then, a PAM module is required that encrypts the password at password change time with the card's public key and decrypts it with the card's private key at login time. Such a PAM module is pam_pivcard which uses:

  • OpenSSL for encryption and decryption
  • PKCS11 engine for OpenSSL to have OpenSSL use the smartcard
  • OpenSC which provides the actual PKCS11 interface and is able to access PIV cards
  • PC/SC Lite which is the middleware between the smartcard reader and the PKCS11 interface
  • The open source CCID driver which allows access to CCID compliant smartcard readers and USB tokens
  • Yubikey NEO PIV configuration software (yubico-piv-tool)

Sounds like much, but isn't in reality. All of the above is standard stuff which should be provided by any reasonable Linux distribution. Only OpenSC may require a patch, depending on how you select the smartcard if you have several different readers and tokens (hopefully this will be incorporated in a future OpenSC release). If the Yubikey configuration software isn't readily available for your distribution, all required sources are available from Github. pam_pivcard is available from the Downloads section of this server.
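
Initializing the NEO for this purpose could look roughly like this (a sketch; 9e is the PIN free PIV slot mentioned above, the file name is made up):

yubico-piv-tool -s 9e -a generate -o pubkey.pem

This generates a key pair on the device and stores the public key for later use.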


Intel AMT and Linux24.02.2014

I recently set up AMT on a few Linux systems. This resulted in quite some frustration, testing and coding until I could finally call it a success. If you want to configure AMT too you may want to read in detail what I did and you can download the required utilities. In short, AMT comes in handy for system maintenance and development. Otherwise there is no day to day use case.


Android and why the NSA does probably like it30.01.2014

A bit of a provocative title. But sadly very true. In short it all comes down to the fact that Android devices have unrestricted /dev/mem access. Thus anybody with access to /dev/mem can read the entire memory, modify the running kernel and reprogram the hardware. So you say only root can access /dev/mem and thus it is surely secure? Wrong. Count the lines of code of the kernel. See how many local privilege escalation CVEs exist for the kernel. Now make an educated guess how many "undetected" exploits exist in the kernel source. By undetected I mean undetected by the good guys. And if you are a three letter organization you may as well pressure the firmware providers to add a tiny bit of code to change /dev/mem access on demand, e.g. on reception of a specially crafted silent SMS.

Now you only need to produce an application that behaves well, not like detectable spyware, and which periodically requests data from an external server. Either the application contains some local root exploit or the permission change is granted remotely via the device firmware, and the application can instantly access all memory. This includes plaintext access to all passwords, certificates, everything. No need for cracking, you get it for free. Just upload the results to the remote server. And as Android relies on the "apps" concept you can probably easily get your victim(s) to install your application. And well, the application will do none of this if not activated by the remote server.

The neat thing about this is that it can go undetected for a very long time. Neat for the exploiting party, that is. So don't believe your Android phone manufacturer is selling you "enterprise grade security" which more often than not anyway means "lots of gaping holes". Think of Android as an ordinary suitcase. X-Rayed on many occasions, easily opened with a lock pick. And you would store all your valuables in such a suitcase instead of a vault?


A Tribute to Alan Parsons05.07.2013

Well, Parsons is just great. Maybe you're of an age that prefers people like David Guetta. But, please, try to listen to someone who made somewhat perfect music, and remember that synthesizers and samplers were either not around or quite crude back then. Sadly there is, as for so many people, no way to thank him for what he arranged. Alan Parsons together with Eric Woolfson created some of the greatest music of modern times. Quite a lot of people think that "Tales of Mystery and Imagination", the first album, was the best. I agree insofar as it is perfect for a first album. However, "Eve", a later album, is the very best one in my opinion. Careful selection of studio musicians, a perfect arrangement of what is today called a playlist and an album cover with visual features that go undetected by mostly everyone - this is one of the most perfect bits of music I've ever heard (and seen). I'm so sad that the project stopped, though I do understand a variety of reasons for this to happen. Thank you, Mr. Parsons, for all you did for us…


MythTV and USB Tuners17.04.2013

Well,
I had a bad time trying to set up MythTV (0.26 in my case, but this shouldn't matter) using USB tuners. Actually I would have preferred PCIe variants but as I use somewhat special hardware for my HTPC this is out of the question. In any case, trying to use mythtv-setup to configure my tuners ended up in a failure with "FE_GET_INFO ioctl failed". After searching for quite a while, I came across the relevant bug entry at the MythTV site. In the end, if you encounter this error, run "femon -H -a<n>" where <n> is the DVB adapter number (/dev/dvb/adapter<n>) in another shell before trying to configure the DVB adapter with mythtv-setup.
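
For adapter 0 this boils down to the following, with femon left running in one shell while mythtv-setup is started in a second one (the adapter number is an example):

femon -H -a0
mythtv-setup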


Linux iSCSI target mess14.12.2012

Well, the good news for Linux is that iSCSI initiator handling using open-iscsi works. But beware when it comes to target handling. I do have a simple usage case. I do need to export a SAS attached Ultrium 5 tape device via iSCSI. Simple passthrough for backup/restore. Speed wouldn't be a problem, all systems requiring access to this tape are attached to a 10Gbit network. But there is a problem. With the in kernel iSCSI target implementation, called LIO and using a tool named targetcli the tape device is just not configurable. Associated low level tools, e.g. tcm_node, are utterly broken and not updated for many months - they do not fit the current mainline kernel (3.7 in my case) anymore. And the "beautiful" kernel implementation just panics when manually configuring via configfs.

I can only shake my head in despair. Who except a certain kernel developer, who always preferred code beauty and presumably personal preferences over general usability, could have helped getting this broken stuff into the mainline kernel? Userspace tools not available as release packages and seemingly tied to certain kernel versions; to add to this mess, none of the possible userspace package and kernel version permutations seems to be documented anywhere. Userspace tools using outdated python, broken for many months, seemingly not maintained anymore. A kernel space implementation allowing for panics by just writing configfs values, which is even more broken than the userspace implementation in my opinion.

Folks, do me a favour. Get rid of LIO, swallow the pill and head along with SCST. Personally I refused to use SCST at first because it is an out of tree implementation requiring kernel patches; thus you depend on the SCST folks delivering timely and well tested updates for kernel releases or you need to upgrade the SCST sources yourself (arghhh...). But, even though there is no usable documentation on how to export a tape with current SCST, there was sufficient information available all over the web to create a working configuration. Yes, working, as in "it does what it is expected to do".

Maybe most people only want to export disks. Maybe LIO is working in that case. I don't care. I do need a working implementation that doesn't panic for tape access. And looking at what's going on it seems the only maintained and working implementation is SCST. Either LIO gets fixed or it should be scrapped ASAP (hch, do you listen?)...


In Kernel Bridge versus openvswitch19.10.2012

In short: if you need an L2 bridge between physical interfaces you're today better off with commodity network equipment. Bridging is nowadays mainly interesting for access to and from virtual systems. I will stick to kvm here. After doing quite some iterations of netperf I did come to a simple conclusion: scrap the old brctl based implementation and use openvswitch. At least with current kernels and openvswitch (at the time of writing I'm using kernel 3.6.2 and openvswitch 1.7.1) openvswitch outperforms the in kernel bridge implementation by 0.5 to 2GBit/s (depending on protocols used) and is considerably more stable with regards to constant throughput. These tests were performed on a Core i7 six core LGA2011 system between host and guest.

The only drawback of openvswitch is documentation. It is missing a kind of howto on how to start using openvswitch, with all the db based caveats, in comparison to the brctl based implementation. If this missing bit is filled in by a brave soul, openvswitch seems, at least to me, the way to go.
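
For a start, the basic brctl workflow maps to something like this (a sketch; interface names are examples):

ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-port br0 vnet0

The first two commands replace "brctl addbr br0" and "brctl addif br0 eth0", the third attaches a kvm tap device to the bridge.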


Intel DH77DF mess25.09.2012

Well, the DH77DF board is quite nice - as long as you can get past the BIOS and live with the BIOS mess. It is called a UEFI BIOS - but for which operating system?!? As for Linux, it seems to be impossible to configure a UEFI boot. There is no option to launch an EFI shell. Both of the two BIOS versions available fail miserably. As long as Intel sells alpha quality boards I'm going to refrain from buying any other Intel board and I advise everybody else not to buy broken UEFI stuff from Intel. I don't care about the hardware if the associated firmware is - bluntly put - utter crap. For me the BIOS of this board is as helpful as two concrete boots in the middle of the Atlantic, period.

Now for an update - I did manage to get this beast doing a Linux UEFI boot. Here's how:

  • Get yourself a UEFI Shell (the 64 bit version).
  • Update your DH77DF BIOS to at least version 0100 (the third BIOS for this board).
  • Make sure (use gdisk -l /dev/<your-device>) that your boot partition has code EF00.
  • Make sure that your boot partition is FAT32 formatted, in case of doubt reformat it with mkdosfs -F 32 /dev/<your-boot-partition>.
  • Copy the downloaded UEFI Shell to <mounted-boot-partition>/EFI/BOOT/BOOTX64.EFI (note the all uppercase pathname!).
    Now you are able to boot the UEFI Shell which you can use to start your favourite EFI boot loader (mine is elilo).
  • Copy your bootloader to your boot partition using an all uppercase 8.3 compatible pathname, e.g.
    <mounted-boot-partition>/EFI/GENTOO/ELILO.EFI and copy all files required by the bootloader to their proper locations
    (note that there is no longer any case sensitivity from here on).
  • Use the UEFI Shell to boot your boot manager and with that your EFI enabled kernel.
  • Use efibootmgr from the UEFI booted system to add your EFI boot manager to the BIOS (see the example below).
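
Such an invocation could look like this (disk, partition and label are examples matching the elilo path above):

efibootmgr -c -d /dev/sda -p 1 -L "Gentoo" -l '\EFI\GENTOO\ELILO.EFI'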

Now you should have a UEFI boot entry for your boot manager in the BIOS and it should be loaded by the BIOS as the default. It works, at least for me. YMMV.

Note: Even though the 0100 BIOS release notes contain a line about added code for UEFI PXE boot, well, let's say it seems the added code consists of exactly this line in the release notes. Just boot into a UEFI Shell and try ifconfig -l (evil grin) - and, oh, you can try the UNDI CTRL-S game, too (again evil grin)...


IPv6? It's (still) dead, Jim!13.08.2012

There's a variety of reasons why IPv6 is as dead as a horse run over by a bus. Let's have a look.

Internet Providers:

At least for me in Germany it doesn't seem that ISPs are really willing to provide IPv6 PPP. I just don't see options and I give a damn about 6in4 kludges.

Networking Equipment:

Look for affordable switches that provide MLDv1/MLDv2 snooping. It's as bad as IGMPv3 snooping. Switch developers seem to care more about some itty gritty SNMP features than about basic networking stuff. As long as home user affordable manageable switches do not support IPv6 related configuration, users will stick to IPv4.

Routers:

Let's specifically talk about Linux here. For IPv6 there is no, and probably won't ever be: masquerading. I don't give a damn about the technically legal reasoning why there is no IPv6 masquerading. I *WANT* masquerading as I don't want the outside world to know how many systems I'm running. I don't want the outside world to know which of my systems is connecting. I want to be able to control which system is able to connect to the outside world in a simple way. All of this works only if the developers skip the technical yadda yadda crap and go through the pains of implementing IPv6 masquerading.

It's not that I'm not using IPv6. I do have a local IPv6 vlan running which is used for IPTV distribution. And I did develop IPv6 software that uses IPv6 multicast. Believe me, I do know the pain. Which makes me believe that until proper infrastructure is available IPv6 is dead. For sure.


NFSv4 - Nightmare on Elm Street22.07.2012

Here we are, here we go: There's a new kid in town, its name is NFSv4. It's been around for quite a while and it should be sufficiently mature for production use. It's hyped to be faster, better, everything. But:

You do have a common root for NFSv4 exports. On first glance that's ok. But on the server you need an additional shitload of bind mounts if the exported trees are also used locally. So much for ease of life. And if you export directories for common use and directories for use by certain hosts only, everybody will get a full list of all exports, even if some of them are not usable. Cute, secure, well done...
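
The bind mount dance looks roughly like this (a minimal sketch; paths and the client network are made up, fsid=0 marks the NFSv4 pseudo root):

/etc/exports:
/export       192.168.0.0/24(ro,fsid=0)
/export/data  192.168.0.0/24(rw,nohide)

mount --bind /data /export/data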

Autofs needs an executable map. Thus the NFSv4 base directory will initially be empty, shredding the autofs browse option. Cool.

NFS root on NFSv4 is not supported by linux kernels; one has to use an initrd or initramfs. Something one would call a great step in the wrong direction...

Then, speaking of linux with a current kernel (3.4.4), the execute only permission is still broken for NFSv4 even though the developers have known about this for more than half a year. This means that NFSv4 is in practice only usable for "noexec" mount types. And it doesn't look like any of the developers would care...

After all, it seems that NFSv4 was designed with a connection to Redmond in mind. Embrace, extend, kill usability, own it... - why else would NFSv4 be as unusable as it is. I don't want to be forced to have autofs mount points based on server names or script based mappings. I don't want to be forced to have tons of bind mounts just to be able to use NFSv4. I don't want to have exports that are not available except for certain clients shown to all clients. I don't want to be forced to go through initrd/initramfs hoops just to be able to use an NFSv4 root on linux (incidentally: thanks, Trond).

Well, call me a dinosaur. But I'm not going to switch to NFSv4 as it is right now. I would need a "hide" option for /etc/exports that causes mount points not available to a client to be hidden by the server. I would need a kernel being able to boot from an NFSv4 root without add on "no-fun" stuff. I would need NFSv4 to honor the execute only semantics the way NFSv2 and NFSv3 do.

So long as these requirements are not met it is "so long" to NFSv4 for me. Probably I'm not the only one...


I have a dream02.09.2011

I'm dreaming of a unified POSIX event interface. No separate handling for streams, processes, threads, ... - just a single interface. I'm dreaming of such an interface not being based on a toolkit layer, but being a native one. Well, I guess I'm having a nightmare...


Mozilla Foundation Crap31.07.2011

Mozilla, Firefox and Thunderbird once used to be a viable alternative to closed source browsers and they were (especially Firefox) the only alternative for the open source community. It seems, however, that things have changed.

I didn't track the Seamonkey project (the Mozilla suite follow up) lately, but at least Firefox and Thunderbird always get undocumented behaviour changes. Who in the FSCKing world can guess that one needs to enable IPv6 communication on the loopback interface to prevent slow menus in Firefox and Thunderbird? And if things like this ain't enough: Why for heaven's sake was the usable "Reload" button removed and replaced by a small "awesome" field in the URL bar? Try to hit this little beast on a netbook! Usability seems to be of no concern to the Mozilla Foundation anyway. Did anybody test Thunderbird usability with more than 10 emails? I guess not. I do have a mail archive consisting of roughly 1,000,000 mails residing on cyrus imap on localhost. Now watch Thunderbird building its local cache for lots of hours...

Fortunately there are alternatives to Firefox today. What I'm missing, however, is a somewhat bug free alternative to Thunderbird. Currently I'm using Evolution which magically undeletes mails. Otherwise it is already a good and working alternative to Thunderbird.

So for me it is mostly "Good Bye" to Mozilla Foundation software. I call the whole stuff from there FOOBAR (oh, why is my little server named catch22?).


Parsons finally giving in?22.12.2010

The Alan Parsons Project

Music I like. A musician I adore. But - the rights to this music now seem to be owned by Sony. Did Parsons really go the easy way? Is there any chance to get this back to real life? It doesn't seem, at least to me, that sanity was in charge when that happened. A perfect musician and a company I won't buy anything from as far as possible, for reasons you know all too well. The sad thing is you can't contact Parsons personally. I give a damn about some tour management contact. I've been in the stage business long enough to know better. Parsons hides. Sadly this looks like another case of a turn of a friendly card (pun intended).


Firefox backspace key shortcut crazyness21.11.2010

Well,

I had to modify some form on a web page. This page consists of a lot of Javascript with an ARM based backend. Actually the backend is a home automation system, so a low power ARM based system makes sense.

OK, I had to modify some values. Hit the BACKSPACE key. Was sent straight back to the login page. Page forward? Not possible as the pages were Javascript generated. Reason? After that happened to me more than once and I lost at least half an hour of editing, I investigated. Who was the one smoking crack when defining the backspace key of Firefox as a Previous Page shortcut with no way of disabling this? I mean, nobody in their normal senses would bind a key required for editing to a shortcut, as you never know if the user has moved the focus properly (and if so, whether the focus change request is faster than the next key pressed). I'm not in the mood of filing a bug report. I'm tired of Mozilla bug reports as they tend to stay open for years without any solution.

At least there is some way to disable this f***ing key binding. There is an add on called keyconfig that is at least a cure even if the symptom (developer on crack) probably stays.


Porting libipq applications to libnetfilter_queue18.07.2009

Given that you want to use the new NFQUEUE features of netfilter but you have a legacy application using the old QUEUE target and libipq, here is an example of how to port the legacy application to NFQUEUE (using queue 0). Unfortunately this is necessary as you can't use CONFIG_IP_NF_QUEUE and CONFIG_NETFILTER_NETLINK_QUEUE at the same time. If you compile both into the kernel the legacy version wins. When you port your application you can even keep the old netfilter QUEUE target, as it maps to NFQUEUE with queue number 0.

The code below shows how to port a libipq based application (first listing) to libnetfilter_queue (second listing).

#include <stdio.h>
#include <stdlib.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <netinet/udp.h>
#include <netinet/tcp.h>

#include <linux/netfilter.h>
#include <libipq.h>


static struct ipq_handle *h=NULL;


static unsigned char bfr[32768];

static int worker(void *packet,int len,void *mac)
{
        struct iphdr *i;
        struct icmphdr *c;
        struct udphdr *u;
        struct tcphdr *t;

        i=(struct iphdr *)(packet);
        if(len<sizeof(struct iphdr))return -1;
        if(len!=ntohs(i->tot_len))return -1;

        switch(i->protocol)
        {
        case IPPROTO_TCP:
                t=(struct tcphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct tcphdr))return -1;
                break;

        case IPPROTO_UDP:
                u=(struct udphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct udphdr))return -1;
                if(len!=i->ihl*4+ntohs(u->len))return -1;
                break;

        case IPPROTO_ICMP:
                c=(struct icmphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct icmphdr))return -1;
                break;
        }

        return 0;
}

static void die(void)
{
        if(h)ipq_destroy_handle(h);
        exit(1);
}

int main(int argc,char *argv[])
{
        ipq_packet_msg_t *m;


        if((h=ipq_create_handle(0,PF_INET))==NULL)
        {
                fprintf(stderr,"No IPQ handle, aborting.\n");
                die();
        }

        if(ipq_set_mode(h,IPQ_COPY_PACKET,sizeof(bfr))<0)
        {
                fprintf(stderr,
                        "IPQ mode setup failed, aborting.\n");
                die();
        }

        while(1)
        {
                if(ipq_read(h,bfr,sizeof(bfr),0)<0)continue;
                if(ipq_message_type(bfr)!=IPQM_PACKET)continue;
                m=ipq_get_packet(bfr);
                ipq_set_verdict(h,m->packet_id,worker(m->payload,
                        m->data_len,
                        (m->hw_addrlen==6)?m->hw_addr:NULL)
                        ?NF_DROP:NF_ACCEPT,0,NULL);
        }
}

#include <stdio.h>
#include <stdlib.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <netinet/udp.h>
#include <netinet/tcp.h>

#include <linux/netfilter.h>
#include <libnetfilter_queue/libnetfilter_queue.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static struct nfq_handle *h=NULL;
static struct nfq_q_handle *qh=NULL;

static unsigned char bfr[32768];

static int worker(void *packet,int len,void *mac)
{
        struct iphdr *i;
        struct icmphdr *c;
        struct udphdr *u;
        struct tcphdr *t;

        i=(struct iphdr *)(packet);
        if(len<sizeof(struct iphdr))return -1;
        if(len!=ntohs(i->tot_len))return -1;

        switch(i->protocol)
        {
        case IPPROTO_TCP:
                t=(struct tcphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct tcphdr))return -1;
                break;

        case IPPROTO_UDP:
                u=(struct udphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct udphdr))return -1;
                if(len!=i->ihl*4+ntohs(u->len))return -1;
                break;

        case IPPROTO_ICMP:
                c=(struct icmphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct icmphdr))return -1;
                break;
        }

        return 0;
}

static int cb(struct nfq_q_handle *qh,struct nfgenmsg *nfmsg,
        struct nfq_data *nfa,void *data)
{
        int id;
        int len;
        int hlen;
        char *pkt;
        struct nfqnl_msg_packet_hdr *ph;
        struct nfqnl_msg_packet_hw *hwph;

        if((ph=nfq_get_msg_packet_hdr(nfa)))
                id=ntohl(ph->packet_id);
        else id=0;

        if((hwph=nfq_get_packet_hw(nfa)))
                hlen=ntohs(hwph->hw_addrlen);
        else hlen=0;

        if((len=nfq_get_payload(nfa,&pkt))<0)
                return nfq_set_verdict(qh,id,NF_ACCEPT,0,NULL);

        return nfq_set_verdict(qh,id,worker(pkt,len,
                (hlen==6?hwph->hw_addr:NULL))?NF_DROP:NF_ACCEPT,
                0,NULL);
}

static void die(void)
{
        if(h)nfq_close(h);
        exit(1);
}

int main(int argc,char *argv[])
{
        int fd;
        int len;

        if((h=nfq_open())==NULL)
        {
                fprintf(stderr,"No nfqueue handle, aborting.\n");
                die();
        }

        if(nfq_unbind_pf(h,AF_INET)<0)
        {
                fprintf(stderr,"No nfqueue unbind, aborting.\n");
                die();
        }

        if(nfq_bind_pf(h,AF_INET)<0)
        {
                fprintf(stderr,"No nfqueue bind, aborting.\n");
                die();
        }

        if(!(qh=nfq_create_queue(h,0,&cb,NULL)))
        {
                fprintf(stderr,"No nfqueue queue, aborting.\n");
                die();
        }

        if(nfq_set_mode(qh,NFQNL_COPY_PACKET,0xffff)<0)
        {
                fprintf(stderr,
                        "nfqueue mode setup failed, aborting.\n");
                die();
        }

        fd=nfq_fd(h);

        while(1)
        {
                if((len=recv(fd,bfr,sizeof(bfr),0))<0)continue;
                nfq_handle_packet(h,(char *)bfr,len);
        }
}

For the libipq based code you need to link with -lipq, for the nfqueue based code you need to link with -lnetfilter_queue.


Networking Priority Madness13.01.2009

The Networking Priority Madness

Did you ever wonder about stuff like EF, AF41 or CS3? Have you ever wondered how to use WMM to assert that your VoIP call isn't interrupted by your file download? If so, read on and dive into the wonderful world of different standardization groups and incompatibilities.

 

TOS, DSCP, so what?

Once, there was a byte in the IP header that was called TOS with the following structure:

TOS according to RFC 1349

Bit(s)   07 06 05     04   03   02   01   00
Field    Precedence   D    T    R    M    0

 

Precedence   Description
0            Routine
1            Priority
2            Immediate
3            Flash
4            Flash Override
5            CRITIC/ECP
6            Internetwork Control
7            Network Control

 

Field Value Description
D 0 Normal Delay
D 1 Low Delay
T 0 Normal Throughput
T 1 High Throughput
R 0 Normal Reliability
R 1 High Reliability
M 0 Normal Monetary Cost
M 1 Minimize Monetary Cost

 

Most applications didn't ever care. Today, there are only a few applications like ssh that set the D bit (for interactive shell) or the T bit (for scp). So usage of the TOS byte is changing and it now resembles the somewhat backward compatible DSCP value:

DSCP according to RFC 2474

Bit(s)   07 06 05 04 03 02   01   00
Field    DSCP (0-63)         0    0

 

WAN and Wired LAN

DSCP based on RFC 2474, RFC 2597, RFC 3246

Name Value Classification
CS0 0 Standard
CS1 8 Low Priority Data
CS2 16 High Throughput Data
CS3 24 Low Latency Data
CS4 32 Multimedia Streaming
CS5 40 Telephony
CS6 48 Network Control
CS7 56 Administration
AF11 10 High Throughput Data
AF12 12 High Throughput Data
AF13 14 High Throughput Data
AF21 18 Low Latency Data
AF22 20 Low Latency Data
AF23 22 Low Latency Data
AF31 26 Multimedia Streaming
AF32 28 Multimedia Streaming
AF33 30 Multimedia Streaming
AF41 34 Multimedia Conferencing
AF42 36 Multimedia Conferencing
AF43 38 Multimedia Conferencing
EF 46 Telephony

 

If you look at the defined values you will see that 1, 2 and 4, which resemble the TOS values of the R, T and D bits, are not contained in the table. This doesn't matter as they're treated as standard classification.

Now, if this weren't enough, there is another priority classifier contained in the additional bytes of tagged vlan packets:

802.1p (VLAN) Priority

Value Traffic Type
0 Best Effort
1 Background
2 Undefined
3 Excellent Effort
4 Controlled Load
5 Video
6 Voice
7 Network Management

 

Most managed switches today will let you do traffic prioritization based on either the physical port, the 802.1p priority value or the DSCP value, whereas these methods are usually mutually exclusive.

 

Wireless LAN

Here the WiFi Alliance defined WMM (formerly called WME), which is a subset of the 802.11e specification. This was done as the specification took too long until it was finally released. Thus we have today on the one hand 802.11e, which as far as I know is nowhere completely implemented, and on the other hand WMM with a growing amount of implementations.

WMM (a subset of 802.11e), 802.1p Priority and DSCP relation

802.1p Value DSCP Value WMM Access Category
6 and 7 48-63 VO (Voice)
4 and 5 32-47 VI (Video)
0 and 3 24-31 and 0-7 BE (Best Effort)
1 and 2 8-23 BK (Background)

 

If a tagged VLAN packet received for WLAN transport carries a nonzero 802.1p priority value, this value is used for WMM classification. Otherwise, or in case of an untagged packet, the DSCP value of the IP header is used according to the above table.

 

Bringing it all together...

According to the RFCs, Telephony packets should be assigned a DSCP value of 46 (EF); if you have a WLAN with WMM support you are better off assigning a value of 48 to 63 for top priority. Video should be assigned a value of 34 (AF41), where for a WLAN a value from 32 to 47 is ok. For SIP messages a value of 24 (CS3) should be used; for a WLAN this is either the value range from 0 to 7 or 24 to 31.

This means that on a LAN with WMM enabled WLAN APs a DSCP value of 48 should be used for audio traffic, whereas this value needs to be transformed to 46 at the LAN border routers to conform to the RFCs.
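
On a Linux border router such a remapping could be done roughly like this (a sketch; the outbound interface is an example):

iptables -t mangle -A POSTROUTING -o ppp0 -m dscp --dscp 48 -j DSCP --set-dscp 46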

Who designed that mess? Now, there are VoIP devices out there that by default use a DSCP value of 46 and only do certain stuff when this value is used. Let's quote a manufacturer we all know: "From S60 3rd Edition, FP1 onwards, the U-APSD power save scheme of WMM is also enabled with the IETF default value (46), if the feature is supported by the terminal and the WLAN access point.". Ain't that great? For maximum AP throughput we need to use 48, but to make use of powersaving features we're forced to a value of 46. Well, this will work, too. Until you start to stream video...

Well, I can only guess that either the RFC authors didn't want to spend money to buy the WMM specification or that the WMM folks don't give a damn about freely available documents like RFCs. And the IEEE should proceed faster. The time they need to define a standard quite often makes the standard look like something from a museum of modern art. As for manufacturer ideas, I'm speechless.

 


VMWare 6.5 Configuration07.01.2009

If you did upgrade to VMware 6.5 on Linux it may happen that you are lost, yes, I really do mean lost. The network configuration you were used to is gone. The way you used to rebuild the kernel modules is gone. Documentation is, well, sparse to say the least. So you will sooner or later find out that the way to rebuild your kernel modules is:

vmware-modconfig --console --install-all

The network configuration is a different beast. It is easy to find out that you must invoke vmware-netcfg to configure your network settings. But it can happen that this command silently fails. This means that your /etc/vmware/networking file does not exist or is corrupt. In this case do the following:

touch /etc/vmware/x ; vmware-networks --migrate-network-settings /etc/vmware/x ; rm /etc/vmware/x

Now you can invoke vmware-netcfg which will then work as expected.