The sad state of eBPF (15.05.2019)

eBPF could be cute. Really cute. One could do smart in-kernel network packet processing. Except that there are really huge roadblocks.

First of all, there is no usable documentation. One has to collect bits and pieces from all over the internet. Then LLVM-compiled output quite often does not fit the bill, as efficient eBPF code needs to be runtime modifiable before it is loaded into the kernel. Now how does one do this with compiler output in ELF format? Arrrghhh!

So I did write a little eBPF to C assembler (find it here). And things look great again. 11 registers, let's start developing - until the in-kernel verifier kills register usage. It loses track of r1 to r5 even on simple conditional branches, so these 5 registers are so volatile that they are nearly unusable. This still leaves 6 registers. Not quite so. r0 is quite often an implicit source or destination, so it is only half usable. r6 is expected to contain the context pointer you need to be able to access the packet data. And r10 is the frame pointer that one needs to be able to call helper functions. So only r7 to r9 are left that one can actually use. Compared to cBPF, which has an accumulator and an index register (which I do count as a half register), you have 1.5 cBPF registers versus 3.5 eBPF registers. Not much improvement here.

Then you want to access packet meta data, e.g. the source address, from an eBPF program that is attached to a UDP socket. The data is there. But you are not allowed to access it. Doh.

Then you look at the helper functions and get the idea that it would be nice to move certain received packets to another socket. Could save userspace complexity. Go, try. EINVAL. Not only does one have to read through a pile of kernel code to find out that the related code requires additional kernel configuration to be compiled in, the gods of eBPF added extra insult to injury by allowing socket transfer only for TCP. Why, only why??? Aren't UDP or raw packets even simpler than TCP? And what about SCTP? Did somebody pay to prevent socket transfer for raw and UDP sockets?

Then you don't want to enable the eBPF JIT system-wide but only for "proven" eBPF code, whereas code under development should be interpreted. To no avail. There's a huge slew of parameters to the bpf system call, but there is no system-wide "use JIT if told so" configuration option, and no "I want JIT" bpf call option either. Who designed that?!? Now, if JIT fails to work properly for a single eBPF program one has to disable eBPF JIT system-wide. As Stephen King wrote: "new and improved" (Tommyknockers).

And then there's stuff like eBPF cgroup programs. Oh, not programs - a single program per cgroup. You are allowed to do more in a cgroup program. Except that cgroups are (ab)used by distros and it is not so easy to just create a cgroup here, a cgroup there... Without an overall concept you will create a mess of your system, and one really doesn't want an application to create cgroups on the fly, especially as there's nobody there to clean up afterwards.

In the end it boils down to the fact that from an application point of view only socket filtering makes sense. And this is artificially restricted to a point where writing eBPF code for this purpose is not much fun. The only reason to do so is the dreadful slowness of [e]poll.

All other network-related eBPF use cases seem to be custom-tailored for expensive and specialized big iron. So after all, for the more common use cases, eBPF is sadly nothing more than a helper to work around kernel slowness.

[e]poll bottleneck (15.05.2019)

[e]poll on Linux is slow. Very slow. To be more precise: slow as a dead horse. 45us on a system where a clock_gettime syscall (yes, the syscall, not the vDSO) takes 335ns and a read call via library takes 4us. This is on an i7-7500U CPU @ 2.70GHz (not fast, but not that slow either).

So let's do a quick calculation for a process receiving a UDP packet and answering with another UDP packet: 45us epoll + 10us recvmsg + 35us sendmsg + 10us userspace data processing. So every single packet received and replied to takes 100us, and only 10% of this time is actual packet processing. 50% of the remaining 90% is consumed by a single [e]poll call. And, yes, 100us means no more than 10000 packets per second per core, which at 1K packet size amounts to a network throughput of only 80Mbit/s for a single core. This is horrible.

And there is no workaround: recvmmsg is more often than not unusable for an application, as signalled data from [e]poll usually means a single received packet is pending. And if only a single packet has to be sent, sendmmsg is unusable, too.

And this is why one should learn a little eBPF assembler (no, not LLVM compiled stuff, handcrafted code). Especially if the application in question may be attacked. The basic data checks need to be processed in-kernel as the epoll+recvmsg overhead is so huge.

Hopefully the responsible kernel developers will one day understand that [e]poll is a fast path and requires a race horse, not a dead one.

The sad state of smartcards on Linux (22.12.2018)

The idea is simple: use a smartcard reader or a CCID compliant USB token and get rid of passwords. Simple, isn't it? That is, until it comes to practice. First of all, many smartcard vendors are still going the vendor lock-in way, and even with smartcards or tokens one can configure under Linux there are roadblocks:

There's a stack consisting of the smartcard driver, e.g. CCID, then PCSC-Lite, then OpenSC, followed by libp11 and finally OpenSSL. All different developer groups with seemingly no real communication, resulting more often than not in broken interworking or sudden parameter changes. Documentation is sparse at best, users are left alone in the dark, and developers need to use trial and error. Furthermore certain functionality is missing, with no easy way to push required code upstream.

Take for example the CCID driver. Suddenly smartcard names are changed. Nobody cares whether this causes problems.

OpenSC developers don't take patches in unified format, one shall get a master's degree in git usage first instead, bah! I still have to investigate why 0.19 seemingly breaks libp11 and thus my applications.

As for libp11, there was once another library to be used instead which was abandoned - with no information that libp11 was to be used in its place. Then there's missing functionality, e.g. the 'door opener' slot of a smartcard doesn't need and doesn't want a PIN, so let's patch...

OpenSSL engine documentation is horrible where it exists at all. Reading the sources is not the way it should be.

Having to use library pathnames to configure the stack doesn't really help portability at all.

It's bad for developers and nearly impossible to use for end users.

Snooker Commentators (02.05.2018)

This is quite subjective, but watching the Snooker World Championship with a 1.5m dish and remembering the Masters I've got my commentator favourites, positive and negative:

Most entertaining: John Virgo "Where's the cue ball going" / "there's always a gap" / "You have to remember the pockets are always in the same place"

Most boring: Clive Everton "Bzzzzs (silence) ---- Chrrrrr (silence) ----" after one to two minutes something obvious with a 10 seconds delay ...

Situation-based dry British humour: Dennis Taylor "Remember the championship ends on Bank Holiday Monday" (a tip-tap situation with a very delayed rerack)

Straightforward, precise: Steve Davis, Stephen Hendry, Alan McManus

Non-playing best: Phil Yates - sometimes overdoing it, which means you hear the professional reporter

You may have other opinions, YMMV.


Shitter: Exhaust pipe of a moron in chief.

The UK has replaced April Fool's Day. They now celebrate May Fool's Day.

Why DNSSEC is a terrible beast (28.11.2017)

Well, the idea behind DNSSEC is not bad. But the design sucks, especially for what people do most - surfing the web. The designers of DNSSEC seemingly ignored the fact that most of the larger web sites, ad networks, etc., are served via some sort of CDN. So, as a resolver has to look up the DNS records for all elements of a web page, with DNSSEC activated all intermediate CNAME records must be checked for a DS record. The final A and AAAA records must be checked for existing DS records, too. Even ignoring the fact that the TLD keys must additionally be fetched, DNSSEC slows down resolving by a factor of 10. So the address lookup for a single web page element now requires not 30ms, but 300ms instead. Assuming that a typical web page requires 20 resolver queries, that's 6s instead of 600ms. Ouch. Especially as more bandwidth will not help due to the required query and answer time. Only reduced latency would help, but one has to remember that the speed of light is the limit and that a remote DNS server takes processing time, too. A shortcut redesign for DNSSEC is required for what an end user calls "browsing experience".

Well, it seems the only solution to this problem actually is TLS and DNS views. If all web servers use TLS the certificate will validate the server sufficiently, thus for surfing the web DNSSEC is no longer required. Which in turn requires DNS views, one view with DNSSEC disabled which is used for surfing and a DNSSEC enabled view for all other services. One can only hope that some time in the future DNSSEC is no longer required due to somewhat mandatory TLS (or similar cryptographic validation) usage. Die, DNSSEC, die!

HTTP/2 Support (21.07.2017)

This site now supports HTTP/2. Have fun!

Resent Big Brother (17.07.2017)

When Big Brother wants to supervise and control all your activities you should resent that. One way to do this is to use VPNs. Unfortunately most VPNs are easy to detect and have a star topology, which means removing the central node kills off the whole VPN. How about a VPN that is difficult to detect, can with some creativity be hidden in a large variety of "harmless" communication protocols, can be used as a base for a mesh VPN and works via Tor? Add in state-of-the-art cryptographic methods. Now the first release is available (Linux and only Linux). It probably has some minor problems but works stably enough to be used for my daily work.

Availability via Tor (07.07.2017)

This site is now available also via Tor. Please visit kjsadwudhdnrkxhd.onion if you want to keep your privacy.

Brother CUPS Pico Howto (14.03.2017)

If you are using a printer like the Brother DCP-9022CDW and use CUPS you don't need the LPD or JetDirect protocols, you can use IPP via https. Actually it is quite simple, and you don't need any proprietary drivers when the printer is network connected. The only thing you need is the proper ppd file. Get yourself the cupswrapper GPL source from Brother and extract the ppd file contained in the archive. Then install it as (adjust the file name according to your printer):


Now you can create https based IPP printers. The required urls are very simple:


As for the <name-of-service>, open the management interface of your printer in your browser, select Network and then Service. You will see a column Service Name with all uppercase letters. <name-of-service> is any of the listed service names converted to lowercase, e.g. binary_p1.

I forgot: for https you should create proper certificates matching your printer's host name. If you don't want any certificate hassle it is probably sufficient to use http instead of https for IPP printing. All of the above applies to http, too.

How to break things the Google way... (23.02.2017)

...or Nougat at its worst. No, I won't call Android 7 things, but there are some areas with severe "as designed" breakage:

  1. User CA certificates

    Unfortunately Google has decided in its wisdom that user CA certificates will not be accepted by default anymore. But instead of allowing a system-controlled setting they pushed the task to the individual apps. Don't they know from experience that especially stock applications of device manufacturers will probably never get any such opt-in option? Couldn't they have made at least a default setting for certain system applications like the stock mail client? The only educated guess I have here is that Alphabet (Google's parent company) must be planning their own CA and dreaming of big sales...

  2. Doze Mode

    OK, doze mode has existed since Android 6, happily breaking things. Well, the basic idea isn't a bad one: preventing over-ambitious apps from racing to see who is the fastest battery drainer. But the implementation went wrong, horribly wrong. Google decided to implement a user-defined whitelist (not bad) but even apps on this list get restrictions (really bad) which more often than not cause communication breakage. And the app user doesn't care whether Google, the device manufacturer or the app developer is the culprit if connectivity is broken. Now, on Android 6 you can at least remedy this in an acceptable way, by creating an app that just calls "dumpsys deviceidle disable" and has been granted the "android.permission.DUMP" privilege via adb. With such an app one can at least change the setting directly on the phone after a reboot.
    But with Nougat there's no such joy. The app would additionally need "android.permission.DEVICE_POWER" which, in short, can't be granted via adb. Thus, either never reboot or always have a trusted host handy. If you start to see an increasing number of backpackers, these are probably Nougat users. The only way I know of to prevent communication breakage for nearly all apps on Nougat after about 30 minutes of idle time is to connect the device to a host after every reboot and then issue "dumpsys deviceidle disable deep". Thanks, Google.

  3. FCM/GCM

    So you shall be able to receive FCM (the GCM successor) while in doze mode. No, no go. At least not when connected to a WLAN with a proxy configuration script (PAC). See for yourself and dial *#*#426#*#* on your phone. Do a tcpdump (ports 5228, 5229 and 5230) on a gateway and try to ping/connect to Google Play services (you did dial the stated number, did you?). The result is that Google Play services are completely ignorant of any proxy setting delivered via a proxy configuration script and only try a direct connection. Right so. Bring a bed to work and have some rest, you won't be disturbed...

There are more long-standing bugs like DHCPv6 which are simply deferred, though I do believe that Google will not implement DHCPv6 unless Samsung stops killing IPv6 when the display is switched off.

Also available as TLS version... (12.01.2017)

I have just enabled a TLS version of this site. Actually it is the same site, only encrypted. Though it doesn't really make sense for my content, TLS seems to be the trend of the day and sometimes I'm trendy.

SAT>IP Library for Linux (13.03.2016)

I did create a SAT>IP library for Linux including example applications. It's probably not quite ready yet for prime time but should already be sufficiently usable. You can get the code from Github.

QNAP Virtualization Station Woes (27.12.2015)

So you want to use QNAP's Virtualization Station, i.e. KVM on a QNAP. Looks good, everything works - but: all of a sudden you get an "urlopen error" when accessing the Virtualization Station.

Probably you have IPv6 enabled on the QNAP, connect via IPv6, and have "Force secure connection (HTTPS) only" enabled in "System Settings" -> "General Settings".

Now, this takes a while to analyze. Luckily I had a working system with such a setup available which I could use for comparison. It turns out that when changing to SSL only, the Stunnel settings are somehow not adjusted. You need to change the following in /etc/config/uLinux.conf:

Enable = 1
Port = 443

Furthermore you need to edit /etc/config/stunnel/stunnel.conf and change the following:

pid = /var/run/
accept = 443

to:

pid = /
accept = :::443

Reboot the QNAP device and Virtualization Station should be accessible again. BTW, I don't think the pid entry above causes any harm, but after a day of analysis I didn't test leaving it unchanged.

QNAP Security Vulnerability Details (08.08.2015)

As posted previously there is a real bad security problem affecting QNAP devices running kernel 3.12.6 and a firmware release prior to 4.1.4 build 0804.

In short:

If you use encrypted volumes on such a device, consider all such volumes to be compromised. The disk access keys are logged on unencrypted partitions in world-readable files. This means that no matter what key you enter in the GUI and however often you change it, somebody who read these logs has access to all encrypted data forever, unless you take appropriate and quite time-consuming measures.

More details:

If you grab the latest available GPL source which is GPL_TS-20150505-4.1.3.tar.gz you will find the following code in the file GPL_TS/src/linux-3.12.6/drivers/md/dm-table.c:

        printk(KERN_ALERT "dm_table_add_target start %s, start=%lu,len=%lu, param=%s, type=%lu...\n", type, start, len, params, tgt->type);

This line causes all disk access keys protected by cryptsetup to be logged on disk in world-readable files. You just need to feed the log data to dmsetup to gain access to encrypted volumes. Doh. So you just need to do a grep on offline-copied disks to gain full access to all encrypted volumes. Online access through other security vulnerabilities is quite thinkable, too. And the sad thing is that QNAP doesn't find it necessary to notify customers about this. It seems they corrected this in firmware 4.1.4 build 0804 and keep very quiet about it.

They probably keep this quiet as there is really time-consuming work to be done to regain confidentiality. First you have to install the fixed firmware. Then you have to do a full backup of all data contained in the affected volumes. After that you have two possible ways to proceed, as far as I can see:

  1. Use cryptsetup-reencrypt to replace the disk access key. You have to bring your own version of this utility as well as the whole slew of required libraries as QNAP doesn't ship it.
  2. Delete all encrypted volumes. Then create new encrypted volumes. You will have to recreate all additional configuration related to these volumes.

So sadly you will have to check for yourself whether this vulnerability is fixed for you by calling dmesg after unlocking an encrypted volume, and then proceed accordingly, taking into account that your QNAP device may be unavailable for many hours or even days. Some more details will be available on BUGTRAQ where I posted information about this security vulnerability today (mind, it can take a bit until released by the admins).

QNAP Security Vulnerability Advance Warning (22.07.2015)

There is a really bad security vulnerability which probably affects quite a few of the current QNAP models in a certain and not so uncommon configuration. Firmware release 4.1.4 build 0522 is affected for sure, and from looking at the GPL sources all 4.1.3 versions are affected, too (I didn't check older releases). Note that this vulnerability was introduced by a QNAP modification of Open Source code.

QNAP was notified 2015/07/12 and acknowledged receipt on 2015/07/13. If there are timely responses from QNAP regarding the release of a corrected firmware I will wait with publishing further details until the firmware is released. If there is no timely response I will post details here and on BUGTRAQ after a month, thus mid August.

Be aware that after a fix for this vulnerability is available and installed there will be quite some additional work to be done by anybody affected by this vulnerability, so adapt your plans accordingly.

Why one should never trust "OPAL" compliant self encrypting drives (27.06.2015)

I recently had a quick glance at the OPAL specification. Oh well. Self-encrypting drives, AES - doesn't that sound good?

The only thing OPAL actually specifies related to security is that AES128 and/or AES256 must be supported. Not a word about cryptographic modes. Not a word about key storage or encryption. Not a word about tamper proof design. Actually nothing. So let's specify a drive that is "OPAL" compliant and thus "secure".

  1. Don't spend money on secure hardware and use AES in ECB mode.
  2. Spare the effort and store the master key as plaintext. The user key to "unlock" the master key is stored as plaintext, too, to enable easy coded string comparison.
  3. Make the majority of bits of the master key derive from the device serial number and use only 48 bits of randomness to get some NSA sponsoring.
  4. Implement some manufacturer-specific hidden ATA commands to always be able to upload "special" firmware (naturally "only" for development) and download configuration data, logically including those nitty-gritty keys.

Now, we do have a "secure" and OPAL compliant drive. So let's move forward, sell and make big money.

As long as storage manufacturers use broken specifications and do not open up security-relevant firmware for public review - including a trust chain that guarantees the firmware flashed to the drive is 100% identical to the reviewed one - one should trust stuff like "OPAL" only to prevent access for kids less than 5 years old. The only way to go is to use open source software encryption that gets a thorough review, or at least a more thorough review than closed source code.

And the best thing for the average user is when OPAL is used with TPM: Mainboard broken, data gone. That's what I do call "proper" design. Which once more leads to "Do not use. Never!".

The Password Dilemma and pam_pivcard as a Solution (01.04.2015)

There's a well-known and constantly inherent security problem with computers which is called the password. Now, we all need a bunch of them. And to be honest, we all use something memorable and keyboard-friendly. And we prefer a single sign-on scenario with one password e.g. unlocking browser password stores, joining AD domains, and so on. Then, there's a theoretical solution called the smartcard. But: most smartcard vendors insist on proprietary card configuration tools (easy extra money and vendor lock-in). Furthermore, for home and small office systems a complete PKI infrastructure is well beyond any acceptable limit. So we stick to passwords. And if too complicated passwords are enforced, we change to password stickers. And even PKI-based systems need administrators if something goes wrong, who in turn need - you guessed it - a password to be able to access a limping system.

Thus, a solution is required which can't be perfect from any theoretical security point of view but which is more secure than the current state of affairs. Let's see what we do require for a single or few user Linux system:

  • Access must be convenient, even if password enforcement rules are in place to prevent password stickers
  • A password actually has to be used to allow for single sign-on comfort
  • The password must be complicated enough to withstand password crackers for a sufficiently long time
  • The human brain must not be overloaded with complicated and hard to remember passwords
  • The password must be in a form that it can be entered by a human to be able to do emergency administration

Now, the only convenient solution to this dilemma is to use a smartcard without a PKI infrastructure. But that doesn't give us a password, right? Wrong. Here is a simple and convenient bridge that allows one to use smartcards without PKI infrastructure for convenience and still use a hopefully quite complicated password that is kept in a safe place for emergency access. We use a public key on the smartcard to encrypt the actual password and store this encrypted data on the local system. When the user wants to log in, he or she activates the smartcard by inserting it or (flame war ahead) placing it on a contactless reader. After the smartcard is in place, hitting Return is all that's required. The system decrypts the stored password with the private key of the smartcard and injects it into the PAM authentication stack as if the user had entered it on the keyboard. Optionally, for more security-relevant scenarios, the smartcard can be PIN protected as usual, in which case the user enters the smartcard PIN instead of the actual password. As the PAM stack now has an actual password to continue with, all these nifty single sign-on PAM modules come into play. If the system then fails in any way one can just enter the regular password instead of using the smartcard as a fallback scenario. In a regular maintenance scenario the smartcard (or probably another one) can be used to access an administrative account.

When one wants to change either the password or the actual smartcard, only the regular system password change command is required. The system authentication stores the hashed password as usual and the new password is encrypted with the currently presented smartcard. As the storage is a single file per user, an administrator can simply delete that file to deny smartcard-based user access.

As a result one can use complicated and hard to remember passwords stored for emergency use in a safe place and still have convenient system access with single sign-on features. This is then so easy that there is a high probability of such a system actually being used. True, when the smartcard is used without a PIN, or the PIN is entered to access another secure element of the card, malicious software could try to steal the password by decrypting it. This software would then have to have root access anyway, which means that in this case you are already in greater trouble. And looking at passwords one can remember versus computing power and password crackers (you know OpenCL, do you?), the risk of the password being stolen via root access is acceptable compared to how fast and good password crackers are.

Ok, I do hear the security experts crying out loud "...but you can't store a plaintext password, never ever, encrypted or not, ...". Well, compare it to your home. If you let security experts have their way, every room, including the toilet, has to have a tamper-proof door, all windows get bulletproof glass and the walls are thick concrete. Furthermore every door is locked by a tamper-resistant lock, the key of which is stored in a key store which releases only one key at a time and only if you authenticate yourself to the key store every time. As the system is solid, the keys are quite large and heavy. So if you want to enter a room you have to return the current key to the key store, which can only be done if you locked the room you just left, then identify yourself to acquire the next key, unlock the door of the next room, and so on... But in the real world you live in a convenient flat with a usable external door lock and not in a prison, don't you? If you have data so sensitive that it must be protected by special means you will have proper security measures anyway (a proper building, access control at various levels, separated data and system access, and so on). But I don't think that most of us need this kind of stuff. A convenient lock and a generally burglar-resistant design of your home will do. If in doubt add a safe or preferably use a bank vault. If the burglar doesn't get in fast he or she will try an easier target. And if it is no burglar but special forces you can't prevent entry anyway - except maybe by exploding everything including yourself with a tactical nuclear device, which is probably not what you want.

So what's required is a smartcard that can be initialized without a proprietary PKI configuration software, that leaves the user the choice between PIN and no PIN and that can be used with either no reader at all or for comfort (but not security) through a contactless NFC reader. Well, there is such a 'card', the Yubikey NEO. To make it short the device has deficiencies that can lead to DOS attacks against the device. Thus I wouldn't use it e.g. for hard drive encryption with an on device generated key. But it is very convenient to use for local or remote access, can be plugged into any USB port or be used through NFC. The device configuration software is open source and thus the device can be easily initialized. The PIV card applet on the device is exactly what's required, it features one PIN free and three PIN protected certificate/key pair storage slots. The device has quite some more features which are of no interest here. Sadly, there is no EMV cover available to protect the device from unauthorized access while moving around. But in this usage scenario the worst thing that can happen is that the device is rendered useless.

Then, a PAM module is required that encrypts the password with the card's public key at password change time and decrypts it with the card's private key at login time. Such a PAM module is pam_pivcard, which uses:

  • OpenSSL for encryption and decryption
  • PKCS11 engine for OpenSSL to have OpenSSL use the smartcard
  • OpenSC which provides the actual PKCS11 interface and is able to access PIV cards
  • PC/SC Lite which is the middleware between the smartcard reader and the PKCS11 interface
  • The open source CCID driver which allows access to CCID compliant smartcard readers and USB tokens
  • Yubikey NEO PIV configuration software (yubico-piv-tool)

Sounds like a lot, but isn't in reality. All of the above is standard stuff which should be provided by any reasonable Linux distribution. Only OpenSC may require a patch depending on how you select the smartcard if you have several different readers and tokens (hopefully this will be incorporated in a future OpenSC release). If the Yubikey configuration software isn't readily available for your distribution all required sources are available from Github. pam_pivcard is available from the Downloads section of this server.

Intel AMT and Linux (24.02.2014)

I recently set up AMT on a few Linux systems. This resulted in quite some frustration, testing and coding until I could finally call it a success. If you want to configure AMT too, you may want to read in detail what I did, and you can download the required utilities. In short, AMT comes in handy for system maintenance and development. Otherwise there is no day-to-day use case.

Android and why the NSA does probably like it (30.01.2014)

A bit of a provocative title. But sadly very true. In short it all comes down to the fact that Android devices have unrestricted /dev/mem access. Thus anybody with access to /dev/mem can read the entire memory, modify the running kernel and reprogram the hardware. So you say only root can access /dev/mem and thus it is surely secure? Wrong. Count the lines of code of the kernel. See how many local privilege escalation CVEs exist for the kernel. Now make an educated guess how many "undetected" exploits exist in the kernel source. By undetected I mean undetected by the good guys. And if you are a three-letter organization you may as well pressure the firmware providers to add a tiny bit of code to change /dev/mem access on demand, e.g. on reception of a specially crafted silent SMS.

Now you only need to produce an application that behaves well, not like detectable spyware, and which periodically requires data from an external server. Either the application contains some local root exploit or the permission change is granted remotely via the device firmware, and the application can instantly access all memory. This includes plaintext access to all passwords, certificates, everything. No need for cracking, you get it for free. Just upload the results to the remote server. And as Android relies on the "apps" concept you can probably easily get your victim(s) to install your application. And well, the application will do none of this if not activated by the remote server.

The neat thing about this is that it can go undetected for a very long time. Neat for the exploiting party, that is. So don't believe your Android phone manufacturer when it sells you "enterprise grade security", which more often than not means "lots of gaping holes" anyway. Think of Android as an ordinary suitcase: X-rayed on many occasions, easily opened with a lock pick. Would you store all your valuables in such a suitcase instead of a vault?

A Tribute to Alan Parsons05.07.2013

Well, Parsons is just great. Maybe you're of an age that prefers people like David Guetta. But, please, try to listen to someone who made somewhat perfect music, and remember that synthesizers and samplers were either not around or quite crude back then. Sadly there is, as for so many people, no way to thank him for what he arranged. Alan Parsons together with Eric Woolfson created some of the greatest music of modern times. Quite a lot of people think that "Tales of Mystery and Imagination", the first album, was the best. I agree insofar as it is perfect for a first album. However, "Eve" is the very best one in my opinion. Careful selection of studio musicians, a perfect arrangement of what today would be called a playlist, and an album cover with visual features that go undetected by almost everyone - this is one of the most perfect bits of music I've ever heard (and seen). I'm so sad that the project stopped, though I do understand a variety of reasons for this to happen. Thank you, Mr. Parsons, for all you did for us…

MythTV and USB Tuners17.04.2013

I had a bad time trying to set up MythTV (0.26 in my case, but this shouldn't matter) using USB tuners. Actually I would have preferred PCIe variants, but as I use somewhat special hardware for my HTPC this is out of the question. In any case, trying to use mythtv-setup to configure my tuners ended in failure with "FE_GET_INFO ioctl failed". After searching for quite a while I came across the relevant bug entry at the MythTV site. In the end, if you encounter this error, run "femon -H -a<n>", where <n> is the DVB adapter number (/dev/dvb/adapter<n>), in another shell before trying to configure the DVB adapter with mythtv-setup.

Linux iSCSI target mess14.12.2012

Well, the good news for Linux is that iSCSI initiator handling using open-iscsi works. But beware when it comes to target handling. I have a simple use case: I need to export a SAS attached Ultrium 5 tape device via iSCSI. Simple passthrough for backup/restore. Speed wouldn't be a problem, all systems requiring access to this tape are attached to a 10Gbit network. But there is a problem. With the in-kernel iSCSI target implementation, called LIO, and its tool named targetcli, the tape device is just not configurable. Associated low level tools, e.g. tcm_node, are utterly broken and have not been updated for many months - they no longer fit the current mainline kernel (3.7 in my case). And the "beautiful" kernel implementation just panics when configuring manually via configfs.

I can only shake my head in despair. Who, except a certain kernel developer who always valued code beauty and presumably personal preference over general usability, could have helped to get this broken stuff into the mainline kernel? Userspace tools not available as release packages and seemingly tied to certain kernel versions; to add to this mess, none of the possible permutations of userspace packages and kernel versions seems to be documented anywhere. Userspace tools using outdated Python, broken for many months, seemingly not maintained anymore. A kernel space implementation that allows for panics just by writing configfs values, which is even more broken than the userspace implementation in my opinion.

Folks, do me a favour. Get rid of LIO, swallow the pill and head along with SCST. Personally I refused to use SCST at first because it is an out of tree implementation requiring kernel patches; thus you depend on the SCST folks delivering timely and well tested updates for kernel releases, or you need to update the SCST sources yourself (arghhh...). But even though there is no usable documentation on how to export a tape with current SCST, there was sufficient information available all over the web to create a working configuration. Yes, working, as in "it does what it is expected to do".

Maybe most people only want to export disks. Maybe LIO is working in that case. I don't care. I do need a working implementation that doesn't panic for tape access. And looking at what's going on it seems the only maintained and working implementation is SCST. Either LIO gets fixed or it should be scrapped ASAP (hch, do you listen?)...

In Kernel Bridge versus openvswitch19.10.2012

In short: if you need an L2 bridge between physical interfaces you're better off today with commodity network equipment. Bridging is nowadays mainly interesting for access to and from virtual systems. I will stick to kvm here. After quite a few iterations of netperf I came to a simple conclusion: scrap the old brctl based implementation and use openvswitch. At least with current kernels and openvswitch (at the time of writing I'm using kernel 3.6.2 and openvswitch 1.7.1) openvswitch outperforms the in-kernel bridge implementation by 0.5 to 2 GBit/s (depending on the protocols used) and is considerably more stable with regard to sustained throughput. These tests were performed on a Core i7 six core LGA2011 system between host and guest.

The only drawback of openvswitch is documentation. What is missing is a kind of howto on getting started with openvswitch, including all the database related caveats, in comparison to the brctl based implementation. If this missing bit is filled in by a brave soul, openvswitch seems, at least to me, the way to go.
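
To give an idea of the brctl-to-ovs-vsctl mapping, here is a minimal sketch; bridge and interface names (br0, eth0, tap0) are made-up examples, and the openvswitch daemons must already be running:

```shell
# brctl addbr br0 / brctl addif br0 eth0 becomes:
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
# attach the tap device used by a kvm guest:
ovs-vsctl add-port br0 tap0
# unlike brctl, the configuration persists in the ovs database:
ovs-vsctl show
```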

Intel DH77DF mess25.09.2012

Well, the DH77DF board is quite nice - as long as you can get past the BIOS and live with the BIOS mess. It is called a UEFI BIOS - but for which operating system?!? As for Linux, it seems to be impossible to configure a UEFI boot. There is no option to launch an EFI shell. Both of the two available BIOS versions fail miserably. As long as Intel sells alpha quality boards I'm going to refrain from buying any other Intel board, and I advise everybody else not to buy broken UEFI stuff from Intel. I don't care about the hardware if the associated firmware is - blatantly put - utter crap. For me the BIOS of this board is as helpful as two concrete boots in the middle of the Atlantic, period.

Now for an update - I did manage to get this beast doing a Linux UEFI boot. Here's how:

  • Get yourself a UEFI Shell (the 64 bit version).
  • Update your DH77DF BIOS to at least version 0100 (the third BIOS for this board).
  • Make sure (use gdisk -l /dev/<your-device>) that your boot partition has code EF00.
  • Make sure that your boot partition is FAT32 formatted; in case of doubt reformat it with mkdosfs -F 32 /dev/<your-boot-partition>.
  • Copy the downloaded UEFI Shell to <mounted-boot-partition>/EFI/BOOT/BOOTX64.EFI (note the all uppercase pathname!).
    Now you are able to boot the UEFI Shell, which you can use to start your favourite EFI boot loader (mine is elilo).
  • Copy your bootloader to your boot partition using an all uppercase 8.3 compatible pathname, e.g.
    <mounted-boot-partition>/EFI/GENTOO/ELILO.EFI, and copy all files required by the bootloader to their proper locations
    (note that there is no longer any case sensitivity from here on).
  • Use the UEFI Shell to boot your boot manager and with that your EFI enabled kernel.
  • Use efibootmgr from the UEFI booted system to add your EFI boot manager to the BIOS.
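
The steps above can be condensed into a rough command sketch; device and bootloader names are examples, so double-check everything against your own setup before running any of this:

```shell
gdisk -l /dev/sda                        # boot partition must have code EF00
mkdosfs -F 32 /dev/sda1                  # (re)format the boot partition as FAT32
mount /dev/sda1 /mnt
mkdir -p /mnt/EFI/BOOT /mnt/EFI/GENTOO
cp Shell.efi /mnt/EFI/BOOT/BOOTX64.EFI   # the downloaded 64 bit UEFI Shell
cp elilo.efi /mnt/EFI/GENTOO/ELILO.EFI   # bootloader, all uppercase 8.3 path
# after a UEFI boot of the installed system:
efibootmgr -c -d /dev/sda -p 1 -L "elilo" -l '\EFI\GENTOO\ELILO.EFI'
```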

Now you should have a UEFI boot entry for your boot manager in the BIOS and it should be loaded by the BIOS as the default. It works, at least for me. YMMV.

Note: Even though the 0100 BIOS release notes contain a line about added code for UEFI PXE boot, well, let's say it seems the added code consists of exactly this line in the release notes. Just boot into a UEFI Shell and try ifconfig -l (evil grin) - and, oh, you can try the UNDI CTRL-S game too (again evil grin)...

IPv6? It's (still) dead, Jim!13.08.2012

There's a variety of reasons why IPv6 is as dead as a horse run over by a bus. Let's have a look.

Internet Providers:

At least for me in Germany it doesn't seem that ISPs are really willing to provide IPv6 PPP. I just don't see any offers, and I don't give a damn about 6in4 kludges.

Networking Equipment:

Look for affordable switches that provide MLDv1/MLDv2 snooping. It's as bad as with IGMPv3 snooping. Switch developers seem to care more about some nitty gritty SNMP features than about basic networking stuff. As long as manageable switches affordable for home users do not support IPv6 related configuration, users will stick to IPv4.


Operating Systems:

Let's specifically talk about Linux here. For IPv6 there is no, and probably won't ever be, masquerading. I don't give a damn about the technically legal reasoning why there is no IPv6 masquerading. I *WANT* masquerading as I don't want the outside world to know how many systems I'm running. I don't want the outside world to know which of my systems is connecting. I want to be able to control which system is able to connect to the outside world in a simple way. All of this works only if the developers skip the technical yadda yadda crap and go through the pains of implementing IPv6 masquerading.

It's not that I'm not using IPv6. I do have a local IPv6 vlan running which is used for IPTV distribution. And I did develop IPv6 software that uses IPv6 multicast. Believe me, I do know the pain. Which makes me believe that until proper infrastructure is available IPv6 is dead. For sure.

NFSv4 - Nightmare on Elm Street22.07.2012

Here we are, here we go: There's a new kid in town, its name is NFSv4. It's been around for quite a while and it should be sufficiently mature for production use. It's hyped to be faster, better, everything. But:

You have a common root for NFSv4 exports. At first glance that's ok. But on the server you need an additional shitload of bind mounts if the exported trees are also used locally. So much for ease of life. And if you export directories for common use and directories for use by certain hosts only, everybody will get a full list of all exports, even if some of them are not usable. Cute, secure, well done...
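
To illustrate the bind mount dance, here is a minimal sketch; the paths and networks are made-up examples, not from my setup:

```
# /etc/fstab - bind the real trees into the NFSv4 pseudo root
/srv/data   /export/data   none  bind  0 0
/srv/homes  /export/homes  none  bind  0 0

# /etc/exports - /export is the NFSv4 root (fsid=0)
/export        192.168.0.0/24(ro,fsid=0,no_subtree_check)
/export/data   192.168.0.0/24(rw,no_subtree_check)
/export/homes  192.168.0.10(rw,no_subtree_check)
```

Note that every client mounting the root still sees both data and homes listed, even though homes is only exported to one host - which is exactly the complaint above.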

Autofs needs an executable map. Thus the NFSv4 base directory will initially be empty, disintegrating the autofs browse option to shreds. Cool.

NFS root on NFSv4 is not supported by Linux kernels, one has to use an initrd or initramfs. Something one would call a great step in the wrong direction...

Then, speaking of Linux with a current kernel (3.4.4), the execute only permission is still broken for NFSv4, even though the developers have known this for more than half a year. This means that NFSv4 is in practice only usable for "noexec" mount types. And it doesn't look like any of the developers would care...

After all, it seems that NFSv4 was designed with a connection to Redmond in mind. Embrace, extend, kill usability, own it... - why else would NFSv4 be as unusable as it is? I don't want to be forced to have autofs mount points based on server names or script based mappings. I don't want to be forced to have tons of bind mounts just to be able to use NFSv4. I don't want exports that are only available to certain clients shown to all clients. I don't want to be forced to go through initrd/initramfs hoops just to be able to use an NFSv4 root on Linux (incidentally: thanks, Trond).

Well, call me a dinosaur. But I'm not going to switch to NFSv4 as it is right now. I would need a "hide" option for /etc/exports that would cause mount points not available to a client to be hidden by the server. I would need a kernel able to boot from an NFSv4 root without add-on "no-fun" stuff. I would need NFSv4 to honor the execute only semantics the way NFSv2 and NFSv3 do.

So long as these requirements are not met it is "so long" to NFSv4 for me. Probably I'm not the only one...

I have a dream02.09.2011

 I'm dreaming of a unified POSIX event interface. No separate handling for streams, processes, threads, ... - just a single interface. I'm dreaming of such an interface not being based on a toolkit layer, but of a native interface. Well, I guess I'm having a nightmare...

Mozilla Foundation Crap31.07.2011

Mozilla, Firefox and Thunderbird once used to be a viable alternative to closed source browsers, and they were (especially Firefox) the only alternative for the open source community. It seems, however, that things have changed.

I didn't track the Seamonkey project (the Mozilla follow up) lately, but at least Firefox and Thunderbird keep getting undocumented behaviour changes. Who in the FSCKing world can guess that one needs to enable IPv6 communication on the loopback interface to prevent slow menus in Firefox and Thunderbird? And if things like this ain't enough: why for heaven's sake was the usable "Reload" button removed and replaced by a small "awesome" field in the URL bar? Try to hit this little beast on a netbook! Usability seems to be of no concern to the Mozilla Foundation anyway. Did anybody test Thunderbird usability with more than 10 emails? I guess not. I have a mail archive consisting of roughly 1,000,000 mails residing on cyrus imap on localhost. Now watch Thunderbird building its local cache for hours and hours...

Fortunately there are alternatives to Firefox today. What I'm missing, however, is a somewhat bug free alternative to Thunderbird. Currently I'm using Evolution, which magically undeletes mails. Otherwise it is already a good and working alternative to Thunderbird.

So for me it is mostly "Good Bye" to Mozilla Foundation software. I call the whole stuff from there FOOBAR (oh, why is my little server named catch22?).


Parsons finally giving in?22.12.2010

The Alan Parsons Project

Music I like. A musician I adore. But - the rights to this music now seem to be owned by Sony. Did Parsons really go the easy way? Is there any chance to get this back to real life? It doesn't seem, at least to me, that sanity had much to do with what happened there. A perfect musician and a company I won't buy anything from if I can avoid it, for reasons you know all too well. The sad thing is you can't contact Parsons personally. I don't give a damn about some tour management contact. I've been in the stage business long enough to know better. Parsons hides. Sadly this looks like another case of a turn of a friendly card (pun intended).

Firefox backspace key shortcut craziness21.11.2010


I had to modify some form on a web page. This page consists of a lot of Javascript with an ARM based backend. Actually the backend is a home automation system, so a low power ARM based system does make sense.

OK, I had to modify some values. Hit the BACKSPACE key. Was sent straight back to the login page. Page forward? Not possible as the pages were Javascript generated. Reason? After that happened to me more than once and I had lost at least half an hour of editing, I investigated. Who was the one smoking crack when defining the backspace key of Firefox as a Previous Page shortcut with no way of disabling this? I mean, anybody in their right mind wouldn't bind a key required for editing to a shortcut, as you never know if the user has moved the focus properly (and if so, whether the focus change request is faster than the next key pressed). I'm not in the mood to file a bug report. I'm tired of Mozilla bug reports as they tend to stay open for years without any solution.

At least there is some way to disable this f***ing key binding. There is an add-on called keyconfig that is at least a cure, even if the symptom (developer on crack) probably stays.

Porting libipq applications to libnetfilter_queue18.07.2009

Given that you want to use the new NFQUEUE features of netfilter but you have a legacy application using the old QUEUE target and libipq, here is an example of how to port the legacy application to NFQUEUE (using queue 0). Unfortunately this is necessary as you can't use CONFIG_IP_NF_QUEUE and CONFIG_NETFILTER_NETLINK_QUEUE at the same time. If you compile both into the kernel the legacy version wins. When you port your application you can even keep using the old netfilter QUEUE target, which maps to NFQUEUE with queue number 0.
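
For reference, the corresponding iptables rules might look like this (a sketch; chain, protocol and port are made-up examples):

```shell
# legacy userspace queueing via the QUEUE target:
iptables -A INPUT -p udp --dport 5000 -j QUEUE
# the same with the new infrastructure, explicitly naming queue 0:
iptables -A INPUT -p udp --dport 5000 -j NFQUEUE --queue-num 0
```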

The code below shows how to port a libipq based application (first listing) to libnetfilter_queue (second listing).

#include <stdio.h>
#include <stdlib.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <netinet/udp.h>
#include <netinet/tcp.h>

#include <linux/netfilter.h>
#include <libipq.h>

static struct ipq_handle *h=NULL;

static unsigned char bfr[32768];

static int worker(void *packet,int len,void *mac)
{
        struct iphdr *i;
        struct icmphdr *c;
        struct udphdr *u;
        struct tcphdr *t;

        i=(struct iphdr *)(packet);
        if(len<sizeof(struct iphdr))return -1;
        if(len!=ntohs(i->tot_len))return -1;

        switch(i->protocol)
        {
        case IPPROTO_TCP:
                t=(struct tcphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct tcphdr))return -1;
                break;

        case IPPROTO_UDP:
                u=(struct udphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct udphdr))return -1;
                if(len!=i->ihl*4+ntohs(u->len))return -1;
                break;

        case IPPROTO_ICMP:
                c=(struct icmphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct icmphdr))return -1;
                break;
        }

        return 0;
}

static void die(void)
{
        if(h)ipq_destroy_handle(h);
}

int main(int argc,char *argv[])
{
        ipq_packet_msg_t *m;

        if(!(h=ipq_create_handle(0,PF_INET)))
        {
                fprintf(stderr,"No IPQ handle, aborting.\n");
                return 1;
        }

        atexit(die);

        if(ipq_set_mode(h,IPQ_COPY_PACKET,sizeof(bfr))<0)
        {
                fprintf(stderr,
                        "IPQ mode setup failed, aborting.\n");
                return 1;
        }

        while(ipq_read(h,bfr,sizeof(bfr),0)>0)
                if(ipq_message_type(bfr)==IPQM_PACKET)
                {
                        m=ipq_get_packet(bfr);
                        ipq_set_verdict(h,m->packet_id,worker(m->payload,
                                m->data_len,m->hw_addr)?NF_DROP:NF_ACCEPT,
                                0,NULL);
                }

        return 0;
}

#include <stdio.h>
#include <stdlib.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <netinet/udp.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#include <linux/netfilter.h>
#include <libnetfilter_queue/libnetfilter_queue.h>
#include <arpa/inet.h>

static struct nfq_handle *h=NULL;
static struct nfq_q_handle *qh=NULL;

static unsigned char bfr[32768];

static int worker(void *packet,int len,void *mac)
{
        struct iphdr *i;
        struct icmphdr *c;
        struct udphdr *u;
        struct tcphdr *t;

        i=(struct iphdr *)(packet);
        if(len<sizeof(struct iphdr))return -1;
        if(len!=ntohs(i->tot_len))return -1;

        switch(i->protocol)
        {
        case IPPROTO_TCP:
                t=(struct tcphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct tcphdr))return -1;
                break;

        case IPPROTO_UDP:
                u=(struct udphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct udphdr))return -1;
                if(len!=i->ihl*4+ntohs(u->len))return -1;
                break;

        case IPPROTO_ICMP:
                c=(struct icmphdr *)(packet+i->ihl*4);
                if(i->ihl<5)return -1;
                if(len<i->ihl*4+sizeof(struct icmphdr))return -1;
                break;
        }

        return 0;
}

static int cb(struct nfq_q_handle *qh,struct nfgenmsg *nfmsg,
        struct nfq_data *nfa,void *data)
{
        int id;
        int len;
        int hlen;
        char *pkt;
        struct nfqnl_msg_packet_hdr *ph;
        struct nfqnl_msg_packet_hw *hwph;

        if((ph=nfq_get_msg_packet_hdr(nfa)))id=ntohl(ph->packet_id);
        else id=0;

        if((hwph=nfq_get_packet_hw(nfa)))hlen=ntohs(hwph->hw_addrlen);
        else hlen=0;

        if((len=nfq_get_payload(nfa,&pkt))<0)
                return nfq_set_verdict(qh,id,NF_ACCEPT,0,NULL);

        return nfq_set_verdict(qh,id,worker(pkt,len,
                hlen?hwph->hw_addr:NULL)?NF_DROP:NF_ACCEPT,0,NULL);
}

static void die(void)
{
        if(qh)nfq_destroy_queue(qh);
        if(h)nfq_close(h);
}

int main(int argc,char *argv[])
{
        int fd;
        int len;

        if(!(h=nfq_open()))
        {
                fprintf(stderr,"No nfqueue handle, aborting.\n");
                return 1;
        }

        atexit(die);

        if(nfq_unbind_pf(h,AF_INET)<0)
        {
                fprintf(stderr,"No nfqueue unbind, aborting.\n");
                return 1;
        }

        if(nfq_bind_pf(h,AF_INET)<0)
        {
                fprintf(stderr,"No nfqueue bind, aborting.\n");
                return 1;
        }

        if(!(qh=nfq_create_queue(h,0,&cb,NULL)))
        {
                fprintf(stderr,"No nfqueue queue, aborting.\n");
                return 1;
        }

        if(nfq_set_mode(qh,NFQNL_COPY_PACKET,sizeof(bfr))<0)
        {
                fprintf(stderr,
                        "nfqueue mode setup failed, aborting.\n");
                return 1;
        }

        fd=nfq_fd(h);

        while((len=recv(fd,bfr,sizeof(bfr),0))>0)
                nfq_handle_packet(h,(char *)bfr,len);

        return 0;
}




For the libipq based code you need to link with -lipq, for the nfqueue based code you need to link with -lnetfilter_queue.


Networking Priority Madness13.01.2009


Did you ever wonder about stuff like EF, AF41 or CS3? Have you ever wondered how to use WMM to make sure that your VoIP call isn't interrupted by your file download? If so, read on and dive into the wonderful world of different standardization groups and incompatibilities.


TOS, DSCP, so what?

Once, there was a byte in the IP header that was called TOS with the following structure:

TOS according to RFC 1349

 Bits:  07 06 05    04  03  02  01  00
 Field: Precedence   D   T   R   M   0


Precedence  Description
0           Routine
1           Priority
2           Immediate
3           Flash
4           Flash Override
5           CRITIC/ECP
6           Internetwork Control
7           Network Control


Field  Value  Description
D      0      Normal Delay
D      1      Low Delay
T      0      Normal Throughput
T      1      High Throughput
R      0      Normal Reliability
R      1      High Reliability
M      0      Normal Monetary Cost
M      1      Minimize Monetary Cost


Most applications never cared. Today there are only a few applications, like ssh, that set the D bit (for interactive shells) or the T bit (for scp). So usage of the TOS byte has changed, and it now resembles the somewhat backward compatible DSCP value:

DSCP according to RFC 2474

 Bits:  07 06 05 04 03 02    01 00
 Field: DSCP (0-63)           0  0


WAN and Wired LAN

DSCP based on RFC 2474, RFC 2597, RFC 3246

Name Value Classification
CS0 0 Standard
CS1 8 Low Priority Data
CS2 16 High Throughput Data
CS3 24 Low Latency Data
CS4 32 Multimedia Streaming
CS5 40 Telephony
CS6 48 Network Control
CS7 56 Administration
AF11 10 High Throughput Data
AF12 12 High Throughput Data
AF13 14 High Throughput Data
AF21 18 Low Latency Data
AF22 20 Low Latency Data
AF23 22 Low Latency Data
AF31 26 Multimedia Streaming
AF32 28 Multimedia Streaming
AF33 30 Multimedia Streaming
AF41 34 Multimedia Conferencing
AF42 36 Multimedia Conferencing
AF43 38 Multimedia Conferencing
EF 46 Telephony


If you look at the defined values you will see that 1, 2 and 4 which resemble the TOS values of the R, T and D bits are not contained in the table. This doesn't matter as they're treated as standard classification.

As if this weren't enough, there is another priority classifier contained in the additional bytes of tagged vlan packets:

802.1p (VLAN) Priority

Value Traffic Type
0 Best Effort
1 Background
2 Undefined
3 Excellent Effort
4 Controlled Load
5 Video
6 Voice
7 Network Management


Most managed switches today will let you do traffic prioritization based either on the physical port, the 802.1p priority value or the DSCP value, though these methods are usually mutually exclusive.


Wireless LAN

Here the WiFi Alliance defined WMM (formerly called WME), which is a subset of the 802.11e specification. This was done because the full specification took too long to be released. Thus today we have on the one hand 802.11e, which as far as I know is nowhere completely implemented, and on the other hand WMM with a growing number of implementations.

WMM (a subset of 802.11e), 802.1p Priority and DSCP relation

802.1p Value DSCP Value WMM Access Category
6 and 7 48-63 VO (Voice)
4 and 5 32-47 VI (Video)
0 and 3 24-31 and 0-7 BE (Best Effort)
1 and 2 8-23 BK (Background)


If a tagged VLAN packet received for WLAN transport carries a 802.1p priority value which is nonzero, this value is used for WMM classification. Otherwise or in case of an untagged packet the DSCP value of the IP header is used according to the above table.


Bringing it all together...

According to the DSCP RFCs, telephony packets should be assigned a DSCP value of 46 (EF); if you have a WLAN with WMM support you are better off assigning a value of 48 to 63 for top priority. Video should be assigned a value of 34 (AF41), where for a WLAN any value from 32 to 47 is ok. For SIP messages a value of 24 (CS3) should be used; for a WLAN this is either the value range from 0 to 7 or 24 to 31.

This means that on a LAN with WMM capable WLAN APs a DSCP value of 48 should be used for audio traffic, whereas this value needs to be transformed to 46 at the LAN border routers to conform to the RFCs.

Who designed that mess? Now, there are VoIP devices out there that use a DSCP value of 46 by default and only do certain stuff when this value is used. Let's quote a manufacturer we all know: "From S60 3rd Edition, FP1 onwards, the U-APSD power save scheme of WMM is also enabled with the IETF default value (46), if the feature is supported by the terminal and the WLAN access point." Ain't that great? For maximum AP throughput we need to use 48, but to make use of powersaving features we're forced to a value of 46. Well, this will work, too. Until you start to stream video...

Well, I can only guess that either the RFC authors didn't want to spend money on the WMM specification or that the WMM folks don't give a damn about freely available documents like RFCs. And the IEEE should proceed faster. The time they need to define a standard quite often makes the standard look like something from a museum of modern art. As for the manufacturer ideas, I'm speechless.


VMWare 6.5 Configuration07.01.2009

If you upgraded to VMware 6.5 on Linux it may happen that you are lost, yes, I really do mean lost. The network configuration you were used to is gone. The way you used to rebuild the kernel modules is gone. Documentation is, well, sparse to say the least. So you will sooner or later find out that the way to rebuild your kernel modules is:

vmware-modconfig --console --install-all

The network configuration is a different beast. It is easy to find out that you must invoke vmware-netcfg to configure your network settings. But it can happen that this command silently fails. This means that your /etc/vmware/networking file does not exist or is corrupt. In this case do the following:

touch /etc/vmware/x ; vmware-networks --migrate-network-settings /etc/vmware/x ; rm /etc/vmware/x

Now you can invoke vmware-netcfg which will then work as expected.