Tuesday, April 11, 2017

Deploying OBS

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so finally there is a relatively easy way to set up PPA-style repositories in Debian. Relative as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS will give you both repositories and build infrastructure, with a clickety web UI and a command line client (osc) to manage them. See Hector's blog for quickstart instructions.

Things learned while setting up OBS

With me coming from a Debian background and OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging

Usually web services are a tough fit for distros, with a cascade of weird dependencies and build systems where the only practical way to build an "open source" web service is by replicating the upstream CI scripts. Not so in the case of OBS. Being done by distro people shows.

OBS does automatic rebuilds of reverse dependencies

Aka automatic binNMUs when you update a library. This however means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates what packages need rebuilding and when - workers just get a list of packages to install as build-depends. This is a major divergence from Debian, where sbuild handles dependencies client side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps the way Debian does - you may have to add a specific "Prefer: foo-dev" line to the OBS project config to settle alternative choices.
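For illustration (a sketch - the project name and package are hypothetical, Prefer: is the prjconf directive mentioned above), edit the project configuration with osc:
osc meta prjconf -e home:you:yourproject
and add a line like this, so the resolver picks foo-dev whenever a build-dependency offers alternatives:
Prefer: foo-dev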

OBS server and worker do http requests in both directions

On startup, workers connect to the OBS server, open a TCP port and wait for requests coming from the server. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, no need to set up uploads via FTP here...

Signing repositories is complicated

With Debian 9.0 making signed repositories pretty much mandatory, it's unfortunate that OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround. OBS signs release files with /usr/bin/sign -d /path/to/release, so replacing the obs-signd provided sign command with your own script is easy ;)
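As a minimal sketch, such a script could look like the one below. Hedged assumptions: OBS only ever calls it as "sign -d <file>" as described above, and SIGN_KEY is a hypothetical variable naming your repository key - this is not the official obs-signd interface.
#!/bin/sh
# Stand-in for obs-signd's /usr/bin/sign: detach-sign a release file.
# gpg writes the armored signature next to the file as <file>.asc.
set -e
case "$1" in
    -d) shift
        exec gpg --batch --yes --armor --detach-sign \
            --local-user "${SIGN_KEY:?set SIGN_KEY to your key id}" "$1"
        ;;
    *)  echo "sign: unsupported option: $1" >&2
        exit 1
        ;;
esac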

Git integration is bolted-on rather than integrated

OBS provides a method to integrate with git using source services. There is no clickety UI to link to a git repo; instead you create an XML file called _service with osc, as sketched below. There is no way to keep the debian/ tree in git.
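A minimal sketch (assumptions: tar_scm is the commonly used source service for fetching from git, and the URL is a placeholder):
cat > _service <<'EOF'
<services>
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://github.com/example/yourproject.git</param>
  </service>
</services>
EOF
osc add _service
osc commit -m "fetch sources from git"
With that committed, OBS runs the service on the server to fetch the sources before building.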

The upstream community is friendly

Including the happiest thanks from an upstream I've seen recently.

Summary

All in all, I'm rather satisfied with OBS. If you have a home-grown Jenkins or similar solution for building deb/rpm packages, you should definitely consider OBS. For simpler uses there is no need to install OBS yourself - the public openSUSE OBS will happily build Debian packages for you.

*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real use cases anyway.

Monday, January 9, 2017

20 years of being a Debian maintainer

fte (0.44-1) unstable; urgency=low

  * initial Release.

 -- Riku Voipio   Wed, 25 Dec 1996 20:41:34 +0200
Welp, I seem to have spent the 1996 holidays doing my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp key, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January before I could actually upload the created package.
uid                  Riku Voipio 
sig          89A7BF01 1996-12-15  Riku Voipio 
sig          4CBA92D1 1997-02-24  Lars Wirzenius 
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from the Finnish countryside to the capital, Helsinki, to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later, an alternative process of phone-calling prospective DDs would be added.

Monday, May 9, 2016

Booting Ubuntu 16.04 cloud images on ARM64

For testing kvm/qemu, prebaked cloud images are nice. However, there are a few steps to get started. First we need a recent qemu (2.5 is good enough), an EFI firmware, and cloud-utils for customizing our VM.
sudo apt install -y qemu qemu-utils cloud-utils
wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img
Cloud images are plain - there is no user setup, no default user/pw combo - so to log in to the image, we need to customize it on first boot. The de facto tool for this is cloud-init. The simplest method of using cloud-init is passing a block device with a settings file - of course, for real cloud deployments you would use one of the fancy network based initialization protocols cloud-init supports. Enter the following into a file, say cloud.txt:
#cloud-config

users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
This minimal config will just set up a user with an ssh key. A more complex setup can install packages, write files and run arbitrary commands on first boot. In professional setups, you would most likely end up using cloud-init only to start Ansible or another configuration management tool.
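For illustration, a slightly richer cloud.txt could look like this (a sketch using the standard cloud-init packages/write_files/runcmd modules; the package names and file contents are arbitrary examples):
#cloud-config
packages:
  - htop
  - tmux
write_files:
  - path: /etc/motd
    content: |
      Welcome to the test VM
runcmd:
  - [ touch, /root/first-boot-done ]
Then pack the settings file into a seed image and boot the VM: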
cloud-localds cloud.img cloud.txt
qemu-system-aarch64 -smp 2 -m 1024 -M virt -bios QEMU_EFI.fd -nographic \
       -device virtio-blk-device,drive=image \
       -drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
       -device virtio-blk-device,drive=cloud \
       -drive if=none,id=cloud,file=cloud.img \
       -netdev user,id=user0 -device virtio-net-device,netdev=user0 -redir tcp:2222::22 \
       -enable-kvm -cpu host 
If you are on an x86 host and want to use qemu to run an aarch64 image, replace the last line with "-cpu cortex-a57". Now, since the example uses user networking with a tcp port redirect, you can ssh into the VM:
ssh -p 2222 you@localhost
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-22-generic aarch64)
....

Wednesday, February 17, 2016

Ancient Linux swag

Since I've now been using Linux for 20 years, I've dug up some artifacts from the early journey.

  1. First, the book, from late 1995. This is from before Tux, so the penguin on the cover is just a coincidence. The book came with a Slackware 3.0 CD, which was my entrance to Linux. Today, almost all of the book is outdated - Slackware and lilo install? Printing with lpr? mtools and dosemu? ftp and telnet over a SLIP dialup? Manually configuring XFree86 and fvwm? How I miss those times!* The only parts of the book still valid are the shell and vi guides. I didn't read the latter, and instead imported my favorite editor from DOS, FTE.
  2. Fast forward some years, into my first programming job. Ready to advertise the Linux revolution, I bought the mug on the right. Nobody else would have a Tux mug, so nobody would accidentally take mine from the office dishwasher. That only worked at my first workplace (a huge and nationally hated IT consultancy). At the next workplace, a mobile gaming startup (in 2001 - I was there before it was trendy!), there were already plenty of Linux mugs when I joined...
  3. While today it may be hard to imagine, in those days using Microsoft Office tools was mandatory. That leads to the third memorabilia in the picture. WordPerfect for Linux existed for a brief while, and in the box (can you imagine, software came in physical boxes?) came a Tux plush.

* Wait no, I don't miss those times at all

Monday, November 23, 2015

Using ser2net for serial access

Is your desk a mess of wires? Do you have multiple devices connected via serial and can't remember which /dev/ttyUSBX is connected to which board? Unless you are an embedded developer, you are unlikely to deal with serial much anymore - in that case you can just jump to the next post in your news feed.

Introducing ser2net

Usually people start with minicom for serial access. There are better tools - picocom, screen, etc. But to easily map multiple serial ports, use ser2net. Ser2net makes serial ports available over telnet.

Persistent usb device names and ser2net

To remember which usb-serial adapter is connected to what, we use the /dev/serial tree created by udev, in /etc/ser2net.conf:
# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT
The by-path syntax is needed if you have many identical usb-to-serial adapters. In that case a patch from the BTS is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream - a sure sign of a stagnant project is a homepage still at sourceforge.net... This patch, among other interesting features, can also be found in various ser2net forks on github.
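To find the right by-id or by-path names for your adapters, list the symlinks udev has created:
ls -l /dev/serial/by-id/ /dev/serial/by-path/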

Setting easy to remember names

Finally, unless you want to memorize the port numbers, set up TCP port-to-name mappings in /etc/services:
# Local services
arndale            7004/tcp
cubox              7005/tcp
sonic-screwdriver  7006/tcp
Now finally:
telnet localhost sonic-screwdriver
[Mandatory picture of a serial port connection in action]

Friday, September 4, 2015

Migration to Scaleway ARM server

The C1 Server

Scaleway started selling ARM based hosted servers in April. I've intended to blog about this for a while - since it was time to upgrade from wheezy to jessie anyway, why not switch from an x86 based provider to an ARM one at the same time?

In many ways the scaleway node is the opposite of what "Enterprise ARM" people are working on. Each server is based on an oldish ARMv7 quad-core Marvell Armada XP, instead of a brand new 64-bit ARMv8 cpu. There is no UEFI, ACPI or any other "industry standards" involved, just a smooth web interface and a command line tool to manage your node(s). And the node is yours - it's not shared with others through virtualization. The picture above is a single node, which is stacked with 911 other nodes into a single rack.

This week, the C1 price was dropped to a very reasonable €2.99 per month, or €0.006 per hour.

Software runs on hardware, news at 11

The performance is more than enough for my needs - shell, email and light web serving. dovecot, postfix, irssi and apache2 are just an apt-get away. Anyone who says you need x86 for Linux servers is forgetting that Linux software is open source and, if not already available, can be compiled for any architecture with little effort. Thus the migration pains came only from my choice to modernize the configuration of dovecot and friends. Details of the new setup shall be left for another post.
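Concretely, something like this gets the stack in place (package names as in Debian jessie; pick the dovecot flavour you need):
sudo apt-get install dovecot-imapd postfix irssi apache2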

Friday, June 12, 2015

Dystopia of Things

The Thing on Internet

I've now had an "Internet of Things" device for about a year. It is a Logitech Harmony Hub, a universal remote controller. It comes with a traditional remote, but the interesting part is that it allows me to use my smartphone/tablet as a remote over WiFi. With the android app it provides a rather nice user experience, yet I can see the inevitable end in anger.

Bare minimum GPL respect

Today, the GPL sources for the hub are available - at least the kernel and a patch for busybox. The proper GPL release is still only available through a written offer. The sources appeared online in April this year, while the Hub has been sold for two years already. Even if I ordered the GPL CD, it's unlikely I could build a modified system with it - too many proprietary bits. The whole GPL was invented by someone who couldn't make a printer do what he wanted. The dystopian today: I have to rewrite the whole stack running on a Linux-based system if I'm not happy with what's running there as provided by the OEM.

App only

The smartphone app is mandatory. The app is used to set up the hub. There is no HTML5 interface or any other way to control the hub - just the bundled remote and the phone apps. Fully proprietary apps, with limited customization options. And if an app store update removes a feature you have used... well, you can't get it back from anywhere anymore. The dystopian today: "Internet of Things" is actually "Smartphone App of Things".

Locked API

Maybe instead of modifying the official app you could write your own UI? Like one screen with only the buttons you ever use when watching TV? There *is* an API, with the delightful headline "Better home experiences, together". However, not together with me, the owner of the Harmony Hub: the official API is locked to selected partners. And the API is not for controlling the hub - it's for letting the hub connect to other IoT devices. Of course, for talented people, a locked API is usually just an undocumented API. People have reverse engineered how the app talks to the hub over wifi. Curiously, it is actually Jabber based, with some twists like logging credentials through Logitech servers. The dystopian today: I can't write programs to remotely control the internet connected thing I own without reverse engineering protocols.

Central Server

Did someone say Logitech servers? Ah yes, all configuration of the remote happens via the myharmony servers, where the database of remote controllers lives. There is some irony in calling the service *my* harmony when it's clearly theirs. The communication with the cloud servers leaks, at minimum, back to Logitech what hardware I control with my hub. At worst, it will become an avenue for exploits. And how long will Logitech maintain these servers? The moment they are gone, the Harmony Hub becomes a semi-brick. It will still work, but I can't change any configuration anymore. The dystopian future: the Internet of Things will stop working *when* the cloud servers get sunset.

What now

This is not just the Harmony Hub - this is a pattern many IoT products follow: Linux based gadget, smartphone app, cloud services, monetized APIs. After the gadget is bought, the vendor has little incentive to provide any updates. After all, their next chance to get money from me comes when the current gadget becomes obsolete.

I can see two ways out. The easy way is to get IoT gadgets as a monthly paid service. Then the gadget vendor has the right incentive: instead of trying to convince me to buy their next gadget, they need to keep me happily paying the monthly bill. The polar opposite is to start making open, competing IoT devices, and market to people the advantage of being in control themselves. I can see markets for both options. But halfway in between is just pure dystopia.