tag:blogger.com,1999:blog-9979205555105654522024-03-05T18:50:15.929+02:00suihkulokki ramblingRiku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.comBlogger65125tag:blogger.com,1999:blog-997920555510565452.post-9971965135810261162023-12-30T17:57:00.002+02:002023-12-30T17:57:59.177+02:00Adguard DNS, or how to reduce ads without apps/extensions<div>Looking at the options for blocking ads, people usually first look at browser extensions. Google's plan is to <a href="https://arstechnica.com/gadgets/2023/11/google-chrome-will-limit-ad-blockers-starting-june-2024/">disable adblock extensions</a> in 2024. The alternative is usually an app (on phones) or a "VPN" that does the filtering for you. All these methods are quite heavyweight and require installing software on your phone or PC. What is less known is that you can use DNS-over-TLS or DNS-over-HTTPS for ad blocking.</div>
<h3>What is DNS-over-TLS and DNS-over-HTTPS</h3>
<div> Since Android 9, Google has provided a setting called <a href="https://blog.cloudflare.com/enable-private-dns-with-1-1-1-1-on-android-9-pie">Private DNS</a>. Traditional DNS is unencrypted UDP, so anyone can monitor your requests and/or return false records. With private DNS, DNS-over-TLS or DNS-over-HTTPS is used to guarantee that the DNS request is sent to the server you configured - which Google hopes is, of course, Google's own public servers. If you do so, your ISP and hotspot providers can no longer monitor, monetize and enshittify your DNS requests - only Google can.</div>
<h3>Subverting private DNS for ad blocking</h3> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMYd6DtG9CIQTDbtXIfNNnjYNhIh7fvgnHs59Da5kVptmFDULof590kpJUj4rXCTIX8W_a7y6zNzaqG3VCl3m0NxNp7mm5j44ENKUXqZH9mPCMkLouHnCtv5EhaLXTs9OKvmr3OtYQdkPiu3yS_mgxQeRmFCXgdzXoWPbvoKJa1TpkVZg41RvJO-649S8/s1600/android-dns-03.webp" style="display: block; padding: 1em 0; text-align: center; clear: right; float: right;"><img alt="" border="0" data-original-height="281" data-original-width="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMYd6DtG9CIQTDbtXIfNNnjYNhIh7fvgnHs59Da5kVptmFDULof590kpJUj4rXCTIX8W_a7y6zNzaqG3VCl3m0NxNp7mm5j44ENKUXqZH9mPCMkLouHnCtv5EhaLXTs9OKvmr3OtYQdkPiu3yS_mgxQeRmFCXgdzXoWPbvoKJa1TpkVZg41RvJO-649S8/s1600/android-dns-03.webp"/></a></div>
<div>This is where <a href="https://adguard-dns.io/en/welcome.html">AdGuard DNS</a> comes in useful. By setting the AdGuard DNS server as your "private DNS" server, <a href="https://adguard-dns.io/en/public-dns.html">following the instructions</a>, you can start blocking right away. Note that on a PC you can also configure the AdGuard DNS server in the browser settings (Firefox -> Enable secure DNS; Chrome -> Use Secure DNS) instead of configuring a system-wide DNS server. Blocking via DNS, of course, limits effectiveness to ads served from 3rd-party servers. </div>
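<div>On a Linux PC, the same effect can be had system-wide with systemd-resolved. A minimal sketch of an /etc/systemd/resolved.conf fragment - the server addresses and the TLS hostname here are taken from AdGuard's public-dns page as of writing and should be verified there, since they may change:

<pre>
# /etc/systemd/resolved.conf (fragment)
[Resolve]
# AdGuard DNS "default" servers; the name after '#' is used for
# DNS-over-TLS certificate validation of that server.
DNS=94.140.14.14#dns.adguard-dns.com 94.140.15.15#dns.adguard-dns.com
DNSOverTLS=yes
</pre>

After a <code>systemctl restart systemd-resolved</code>, <code>resolvectl status</code> should list the AdGuard servers with DNS-over-TLS enabled.</div>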
<h3>Other uses for AdGuard DNS</h3>
<div>If you register for AdGuard DNS, you get your "own", customizable DNS server address to point to. You can, for example, create your own /etc/hosts style records that are then available to all the devices you have connected to the AdGuard DNS server - whether you are at home or not. Of course, if you choose to use the personal DNS server, your DNS query privacy is in the hands of AdGuard.</div>
<h3>Going further</h3>
<div>What else is ruining the web besides ads? Commercial social media. An article ("Ei näin! – Algoritmiähky") in the latest issue of the Finnish magazine SKROLLI (ad: if you read Finnish, <a href="https://skrolli.fi/product-category/vuositilaus/">subscribe to Skrolli</a>!) struck a chord with me. The algorithms of social media sites are designed not to serve you, but to addict you. For example, if you stop to watch a hateful meme image, the algorithm will record "The user spent time watching this, show more of the same!". Blocking or muting doesn't help - yes, that specific hate engager will be blocked, but the dozens of similar hate pages will still be shown to you. Worse, the social media sites are being overrun by <a href="https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/">AI-generated crap</a>. Unfortunately, the addictive nature of the algorithms works. You reload in vain, hoping this time the algorithmic god will show something your friends share. How do you cure addiction? By blocking yourself out: <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHS7ukVD29_KmgUvosCdx-jMcsy2vYjdkNzhIeZ07Ej5v29bNtp0_S3qpbHRVYOpXp1Gh7lzaZX89GzBpfwgiPlaXqCgJ4uK0gwuWcv8xP3xEaIfKpL_kV1YNkg2GVE0oYRChq79UUieUEXOzY3L04tE_13DQKncoBSzwsshqnZ-PkARmvGTIIKUoOAR0/s1600/user_rules.png"><img alt="" border="0" data-original-height="169" data-original-width="507" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHS7ukVD29_KmgUvosCdx-jMcsy2vYjdkNzhIeZ07Ej5v29bNtp0_S3qpbHRVYOpXp1Gh7lzaZX89GzBpfwgiPlaXqCgJ4uK0gwuWcv8xP3xEaIfKpL_kV1YNkg2GVE0oYRChq79UUieUEXOzY3L04tE_13DQKncoBSzwsshqnZ-PkARmvGTIIKUoOAR0/s1600/user_rules.png"/></a></div> </div>
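<div>For reference, AdGuard DNS user rules use adblock-style filter syntax; a hypothetical pair of rules blocking a social media site and its CDN (domains here are only examples) could look like:

<pre>
! Hypothetical user rules - block a domain and all its subdomains
||facebook.com^
||fbcdn.net^
</pre>

The <code>||domain^</code> form matches the domain itself and anything under it, so one rule covers the whole site.</div>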
<p>
<h3>Epilogue</h3>
<div>I didn't block myself out of Fediverse - yet. It's not engineered to be addictive, which is also probably why it isn't as popular as the commercial alternatives... </div>Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-67862649538329533872022-07-09T22:07:00.001+03:002022-07-09T22:07:06.320+03:00Dropping gas taxes is pointless<div data-en-clipboard="true" data-pm-slice="1 1 []"><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCKmtUu6wsdm1B08U7Wl-ujOOdB6fmcbZdH6COLxQkZ8cA-SN3ThcvpGo_XIxmTujIfWxC3CP0B9gW5sfats1isASspMiFMhFEZBwst9Dl8Gs2uMr8JdGzd0oUeYoE_tr7Pukh-Bb5GpS2EzF-nnezNahxp1D64rxbbK58Sk_DGatasTOou4LM_4gr/s658/bensis.png" imageanchor="1"><img alt="Translated to Murican, that's $10/gallon" border="0" data-original-height="370" data-original-width="658" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCKmtUu6wsdm1B08U7Wl-ujOOdB6fmcbZdH6COLxQkZ8cA-SN3ThcvpGo_XIxmTujIfWxC3CP0B9gW5sfats1isASspMiFMhFEZBwst9Dl8Gs2uMr8JdGzd0oUeYoE_tr7Pukh-Bb5GpS2EzF-nnezNahxp1D64rxbbK58Sk_DGatasTOou4LM_4gr/w400-h225/bensis.png" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div></div><div data-en-clipboard="true" data-pm-slice="1 1 []">The current high prices of gasoline have made many people demand gas tax cuts. Some governments have even responded to the call, and cut gas taxes. I do understand some people are hit hard by rising gas prices. But in Finnish we have a saying "Pakkasella pissa housuissa lämmittää vain hetken". Roughly translated: "In winter, peeing into your pants will only keep you warm for a moment".</div><div><br /></div><div>See, the reason for the current high gas prices is simple: Demand > Supply. 
As long as there is more demand than supply, prices keep rising until demand and supply match. If you want gas prices to drop, you have to either increase supply or reduce demand - usually you want both. Reducing gas tax does neither. In fact, reducing gas tax INCREASES DEMAND, as after a tax cut people can afford to drive around more... </div><div><br /></div><div>So while cutting taxes reduces prices for a short period of time, the end result is just more pressure to raise gas prices. Cutting gas taxes is a great way to get yourself re-elected, but if you really want to help the people struggling with high gas prices, you should do something else. </div><div><br /></div><div>Reality is of course more complex - with everyone expecting a recession, oil prices are already falling regardless of any tax changes. </div>Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-10777273999035218052020-03-11T22:43:00.000+02:002020-03-11T22:43:18.115+02:00This is the year not to fly
<img src="https://kos.to/plane.jpg" height="320" width="488">
<p>If you have to choose one year when you won't fly, this year, 2020, is the one to choose. Why? Because CORSIA.</p>
<h3>What the heck is CORSIA?</h3>
<p><a href="https://www.icao.int/environmental-protection/CORSIA/Pages/default.aspx">CORSIA</a> is not a novel virus, but the "Carbon Offsetting and Reduction Scheme for International Aviation". In a nutshell, the
aviation industry says it will freeze its CO2 emissions at current levels. Actually, aviation emissions are still going to grow. The airlines will just pay someone else to reduce emissions by the same amount that aviation
emissions rise - the "Offsetting" word in CORSIA. If that sounds like greenwashing, well, it pretty much is. But that was expected. Getting every country and airline aboard CORSIA would not have been possible if the scheme actually bit. So it's pretty much a joke.
</p>
<img src="https://kos.to/corsia.png" height="227" width="488">
<h3>What does it have to do with *this* Year?</h3>
<p>The first phase of CORSIA starts next year, so emissions are frozen at year 2020 levels. Due to certain recent events, lots of flights have already been cancelled - which means the reference-year aviation emissions are already
a lot less than the aviation industry was expecting. By avoiding flying this year, aviation emissions will be
frozen at an even lower level. This will increase the cost of CO2 offsetting for airlines, and the joke will be
on them.
</p>
<p>So consider skipping business travel and taking your holiday trip this year with something other than a plane.
Wouldn't recommend a cruise ship, though...</p>
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com1tag:blogger.com,1999:blog-997920555510565452.post-42026975875594382342019-03-23T18:07:00.000+02:002019-03-23T18:07:46.062+02:00On the #uploadfilter problemThe copyright holders in Europe are pushing hard to mandate upload filters for the internet. We have been here before - when they outlawed circumventing DRM. Both have roots in the same problem. The copyright holders look at computers and see bad things happening to their revenue. They come to IT companies and say "FIX IT". The IT industry comes back and says: "We can't.. <b>making data impossible to copy is like trying to make water not wet!</b>". But we fail at convincing copyright holders that a perfect DRM or upload filter is not possible. Then the copyright holders go to lawmakers and ask them in turn to fix it.
<p>
We need to turn the tables. If they want something impossible, it should be up to them to implement it.
<p>
It is simply unfair to require each online provider to implement an AI to detect copyright infringement, manage a database of copyrighted content and pay the costs of running it all... and to get slapped with a lawsuit anyway, since copyrighted content still slips through.
<p>
<b>The burden of implementing #uploadfilter should be on the copyright holder organizations</b>. Implement it as SaaS: YouTube and other web platforms call your API and pay $0.01 each time pirated content is detected. On the other side, to ensure correctness of the filter, copyright holders have to pay any lost revenue, court costs and so on for each false positive.
<p>
Filtering uploads is still problematic. But now it's the copyright holders' problem. Instead of people blaming web companies for poor filters, it's the copyright holders who have to answer to the public why their filters are rejecting content that doesn't belong to them.
<p>Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-31108180247583926722019-02-26T22:25:00.000+02:002019-02-26T23:03:30.720+02:00Linus Torvalds is wrong - PC no longer defines a platformHey, I can do these clickbait headlines too! Recently it has caught the <a href="https://www.theregister.co.uk/2019/02/23/linus_torvalds_arm_x86_servers/">media's attention</a> that Linus is <a href="https://www.realworldtech.com/forum/?threadid=183440&curpostid=183486">dismissive of ARM servers</a>. The argument is roughly "Developers use X86 PCs, cross-platform development is painful, and therefore devs will use X86 servers, unless they get ARM PCs to play with".
<p>
This ignores the reality where the majority of developers do cross-platform development every day. They develop on <a href="https://insights.stackoverflow.com/survey/2018/#technology-developers-primary-operating-systems">Mac and Windows PC's</a> and deploy on Linux servers or mobile phones. The two biggest Linux success stories, cloud and Android, are built on cross-platform development. Yes, cross-platform development sucks. But it's just one of the many things that suck in software development.
<p>
More importantly, the ship of the "local dev environment" has long since sailed. Using Linus's other great innovation, git, developers push their code to a <a href="https://github.com/">Microsoft server</a>, which triggers a Rube Goldberg machine of software build, container assembly, unit tests, deployment to a test environment and so on - all in cloud servers.
<p>
Yes, the ability to easily buy a cheap whitebox PC from CompUSA was an important factor in making X86 dominate the server space. But people get cheap servers from the cloud now, and even that is getting out of fashion. Services like <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> abstract the whole server away, and the instruction set becomes irrelevant. Which CPU and architecture will be used to run these "serverless" services is not going to depend on developers having Arm Linux desktop PC's.
<p>
Of course there are still plenty of people like me who use a Linux desktop and run things locally. But in the big picture, things are just going one way. The way where it gets easier to test things in your git-based CI loop rather than in a local development setup.
<p>
But like Linus, I still do want to see a powerful PC-like Arm NUC or laptop. One that could run a mainline Linux kernel and offer a PC-like desktop experience. Not because ARM depends on it to succeed in the server space (what it needs is out of scope for this blogpost) - but because PC's are useful in their own right. Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com4tag:blogger.com,1999:blog-997920555510565452.post-75209741968154900002018-02-13T16:33:00.000+02:002018-02-13T21:18:06.040+02:00Making sense of /proc/cpuinfo on ARMEver stared at the output of /proc/cpuinfo and wondered what the CPU is?
<pre>
...
processor : 7
BogoMIPS : 2.40
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 3
</pre>
Or maybe like:
<pre>
$ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 50.00
Features : half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
CPU implementer : 0x56
CPU architecture: 7
CPU variant : 0x2
CPU part : 0x584
CPU revision : 2
...
</pre>
The bits "CPU implementer" and "CPU part" could be mapped to human-understandable strings. But the kernel developers are heavily against the idea. Therefore, on to the next idea: parse it in userspace. It turns out there is a common tool, which almost everyone has installed, that does similar stuff: <a href="https://manpages.debian.org/unstable/util-linux/lscpu.1.en.html">lscpu(1)</a> from util-linux. So I proposed a patch to do <a href="https://github.com/karelzak/util-linux/pull/564">ID mapping</a> on arm/arm64 to util-linux, and it was accepted! So using lscpu from util-linux 2.32 (hopefully to be released soon), the above two systems look like:
<pre>
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: ARM
Model: 3
Model name: Cortex-A53
Stepping: r0p3
CPU max MHz: 1200.0000
CPU min MHz: 208.0000
BogoMIPS: 2.40
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
</pre>
And
<pre>
$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: Marvell
Model: 2
Model name: PJ4B-MP
Stepping: 0x2
CPU max MHz: 1333.0000
CPU min MHz: 666.5000
BogoMIPS: 50.00
Flags: half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
</pre>
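The mapping lscpu performs can be sketched in a few lines of shell. This is only an illustration with the two ID pairs seen in the outputs above - the real table in util-linux is far longer:
<pre>
#!/bin/sh
# Map "CPU implementer" / "CPU part" IDs from /proc/cpuinfo to names.
# Only two sample IDs are shown; util-linux carries the full table.
decode_cpu() {
  implementer=$1
  part=$2
  case "$implementer" in
    0x41) vendor=ARM ;;
    0x56) vendor=Marvell ;;
    *)    vendor=unknown ;;
  esac
  case "$implementer:$part" in
    0x41:0xd03) model=Cortex-A53 ;;
    0x56:0x584) model=PJ4B-MP ;;
    *)          model=unknown ;;
  esac
  echo "$vendor $model"
}

decode_cpu 0x41 0xd03   # ARM Cortex-A53
decode_cpu 0x56 0x584   # Marvell PJ4B-MP
</pre>
In a real script you would feed in the values with something like <code>awk -F: '/implementer/ {print $2}' /proc/cpuinfo</code>.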
As we can see, lscpu is quite versatile and can show more information than just what is available in cpuinfo. Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-42264784126604585072017-06-23T16:36:00.000+03:002017-06-24T19:03:40.744+03:00Cross-compiling with debian stretchDebian stretch comes with cross-compiler packages for selected architectures:
<pre> $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...
</pre>
Let's have a quick, exact-steps guide. But first - while you could do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:
<pre>
sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "stretch_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
</pre>
Then we set up a cross-building environment for arm64 inside the container:
<pre>
# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
</pre>
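To sanity-check what the arm64 target looks like to dpkg (useful when setting CC or --host by hand for software outside packaging), dpkg-dev's dpkg-architecture can be queried - a small sketch:
<pre>
# Ask dpkg-dev for the canonical names of the arm64 target.
# These feed into CC=aarch64-linux-gnu-gcc and multiarch paths.
dpkg-architecture -a arm64 -q DEB_HOST_GNU_TYPE    # aarch64-linux-gnu
dpkg-architecture -a arm64 -q DEB_HOST_MULTIARCH   # aarch64-linux-gnu
</pre>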
Now that we have a nice build environment,
let's choose something more complicated than the usual kernel/BusyBox to cross-build: qemu:
<pre>
# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
</pre>
Now that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests. Or some of the build-dependencies may not be multiarch-enabled. So work continues :)Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-57991628407477328052017-04-11T23:14:00.001+03:002017-04-11T23:23:27.170+03:00Deploying OBS <a href="http://openbuildservice.org/">Open Build Service</a> from SuSE is a web service for building deb/rpm packages. It has recently been added to <a href="https://packages.qa.debian.org/o/open-build-service.html">Debian</a>, so finally there is a relatively easy way to set up PPA-style repositories in Debian. "Relatively" as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS will give you both repositories and build infrastructure, with a clickety web UI and a command line client (<a href="https://packages.qa.debian.org/o/osc.html">osc</a>) to manage them. See <a href="http://nibbles.halon.org.uk/2016/10/build-a-debian-package-against-debian-8-0-using-download-on-demand-dod-service/">Hector's blog</a> for quickstart instructions.
<p>
<h2>Things learned while setting up OBS</h2>
Coming from a Debian background, with OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.
<p>
<h3>Well done packaging</h3>
Usually web services are a tough fit for distros: a cascade of weird dependencies and build systems, where the only practical way to build an "open source" web service is by replicating the upstream CI scripts. Not in the case of OBS. Being done by distro people shows.
<p>
<h3>OBS does automatic rebuilds of reverse dependencies</h3>
Aka automatic binNMUs when you update a library. This however means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates what packages need rebuilding and when - workers just get a list of packages to install as build-depends. This is a major divergence from Debian, where sbuild handles dependencies client-side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps like Debian does - you may have to add a specific "Prefer: foo-dev" to the OBS project config to resolve alternative choices.
<p>
<h3>OBS server and worker do http requests in both directions</h3>
On startup, workers connect to the OBS server, open a TCP port and wait for requests coming from the OBS server. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, no need to set up uploads via FTP here..
<p>
<h3>Signing repositories is complicated</h3>
With Debian 9.0 making signed repositories pretty much mandatory, OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround. OBS signs release files with /usr/bin/sign -d /path/to/release, and replacing the obs-signd provided sign command with your own script is easy ;)
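A minimal sketch of such a replacement sign script - the gpg flags and key selection here are illustrative assumptions, not what obs-signd itself does; adapt them to whatever key the OBS user has:
<pre>
# Hypothetical stand-in for /usr/bin/sign as OBS invokes it:
# "sign -d /path/to/Release" should produce a detached signature.
cat > sign <<'EOF'
#!/bin/sh
case "$1" in
  -d) shift
      # Illustrative: sign with the default gpg key of the user OBS runs as
      exec gpg --batch --armor --detach-sign --output "$1.asc" "$1" ;;
  *)  echo "unsupported sign invocation: $*" >&2
      exit 1 ;;
esac
EOF
chmod +x sign
</pre>
Any invocation other than -d is rejected loudly, so you notice quickly if OBS starts calling sign in a way the wrapper doesn't handle.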
<p>
<h3>Git integration is rather bolted-on than integrated</h3>
OBS provides a method to integrate with <a href="http://openbuildservice.org/2016/04/08/new_git_in_27/">git using services</a>. There is no clickety UI to link to a git repo; instead you make an xml file called _service with osc. There is no way to have a debian/ tree in git.
<p>
<h3>The upstream community is friendly</h3>
Including the <a href="https://github.com/openSUSE/open-build-service/pull/2713#issuecomment-283925825">happiest thanks</a> from an upstream I've seen recently.
<p>
<h3>Summary</h3>
All in all, I'm rather satisfied with OBS. If you have a home-grown jenkins etc. based solution for building DEB/RPM packages, you should definitely consider OBS. For simpler uses, there is no need to install OBS yourself - the <a href="https://build.opensuse.org/">openSUSE public OBS</a> will happily build Debian packages for you.
<p>
*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real usecases anyways.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-70671286416175438762017-01-09T10:01:00.000+02:002017-01-09T10:01:42.092+02:0020 years of being a debian maintainer<pre>
fte (0.44-1) unstable; urgency=low
* initial Release.
-- Riku Voipio <riku.voipio@sci.fi> Wed, 25 Dec 1996 20:41:34 +0200
</pre>
Welp, I seem to have spent the holidays of 1996 doing my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January, when I could actually upload the created package.
<pre>
uid Riku Voipio <riku.voipio@sci.fi>
sig 89A7BF01 1996-12-15 Riku Voipio <riku.voipio@iki.fi>
sig 4CBA92D1 1997-02-24 Lars Wirzenius <liw@iki.fi>
</pre>
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;)
Much later, an alternative process of phone-calling prospective DD's would be added.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-67670831900083523402016-05-09T15:32:00.002+03:002016-05-09T15:32:58.945+03:00Booting ubuntu 16.04 cloud images on Arm64For testing kvm/qemu, prebaked cloud images are nice. However, there are a few steps to get started. First we need a recent Qemu (2.5 is good enough). A UEFI firmware is needed, as well as cloud-utils for customizing our VM.
<pre>
sudo apt install -y qemu qemu-utils cloud-utils
wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img
</pre>
Cloud images are plain - there is no user setup and no default user/pw combo, so to log in to the image, we need to customize it on first boot. The de facto tool for this is <a href="http://cloudinit.readthedocs.io/en/latest/">cloud-init</a>. The simplest method for using cloud-init is passing a block device with a settings file - of course, for real cloud deployment, you would use one of the fancy network-based initialization protocols cloud-init supports. Enter the following into a file, say cloud.txt:
<pre>
#cloud-config
users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
</pre>
This minimal config will just set up a user with an ssh key. A more complex setup can install packages, write files and run arbitrary commands on first boot. In professional setups, you would most likely end up using cloud-init only to start <a href="https://www.ansible.com/">Ansible</a> or another configuration management tool.
<pre>
cloud-localds cloud.img cloud.txt
qemu-system-aarch64 -smp 2 -m 1024 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -redir tcp:2222::22 \
-enable-kvm -cpu host
</pre>
If you are on an X86 host and want to use qemu to run an aarch64 image, replace the last line with "-cpu cortex-a57". Now, since the example uses user networking with tcp port redirect, you can ssh into the VM:
<pre>
ssh -p 2222 you@localhost
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-22-generic aarch64)
....
</pre>
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-9346192789709770292016-02-17T22:19:00.000+02:002016-02-17T22:19:08.371+02:00Ancient Linux swag<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjla5M-Y5zfuTd6lQitFcD_ud3i69tTOBZmcIQRvqIHHoBgXfvdewUGTzlb3ROUSa2fQpuWy0ciu_RW2QkgvlPp9_AtfzNYIL8dE950-JWxhvhER9DvqwTuZgvyxWrFZO5_rIcE6Xqxd_M/s1600/IMG_20160212_135349.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjla5M-Y5zfuTd6lQitFcD_ud3i69tTOBZmcIQRvqIHHoBgXfvdewUGTzlb3ROUSa2fQpuWy0ciu_RW2QkgvlPp9_AtfzNYIL8dE950-JWxhvhER9DvqwTuZgvyxWrFZO5_rIcE6Xqxd_M/s640/IMG_20160212_135349.jpg" /></a>
<p>Since I've now been using Linux for 20 years, I've dug up some artifacts from the early journey.
<ol>
<li>First, the book, from late 1995. This is from before <a href="https://en.wikipedia.org/wiki/Tux">Tux</a>, so the penguin on the cover is just a coincidence. The book came with a Slackware 3.0 CD, which was my entrance to Linux. Today, almost all of the book is outdated - slackware and lilo install? printing with lpr? mtools and dosemu? ftp, telnet with SLIP dialup? Manually configuring XFree86 and fvwm? How I miss those times!* The only parts of the book still valid are the shell and vi guides. I didn't read the latter, and instead imported my favorite editor from DOS, <a href=https://packages.qa.debian.org/f/fte.html>FTE</a>.
<li>Fast forward some years, into my first programming job. Ready to advertise the Linux revolution, I bought the mug on right. Nobody else would have a Tux mug, so nobody would accidentally take my mug from the office dishwasher. That only worked for my first work place (a huge and nationally hated IT consultant house). The next workplace, a mobile gaming startup (in 2001, I was there before it was trendy!) - and there was already plenty of Linux mugs when I joined...
<li>While today it may be hard to imagine, in those days using Microsoft Office tools was mandatory. That leads to the third memorabilia item in the picture. <a href="https://youtu.be/TnEbaaZEEEE">Wordperfect for Linux</a> existed for a brief while, and in the box (can you imagine, software came in physical boxes?) came a Tux plush.
</ol>
<p>* Wait no, I don't miss those times at allRiku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-50758857999786933442015-11-23T21:55:00.000+02:002015-11-23T21:55:20.516+02:00Using ser2net for serial access.Is your table a mess of wires? Do you have multiple devices connected via serial and can't remember which /dev/ttyUSBX is connected to which board? Unless you are an embedded developer, you are unlikely to deal with serial much anymore - in that case you can just jump to the next post in your news feed.
<h3>Introducing ser2net</h3>
Usually people start with minicom for serial access. There are better tools - picocom, screen, etc. But to easily map multiple serial ports, use <a href="http://sourceforge.net/projects/ser2net/">ser2net</a>. Ser2net makes serial ports available over telnet.
<h3>Persistent usb device names and ser2net</h3>
To remember which usb-serial adapter is connected to what, we use the /dev/serial tree created by udev, in /etc/ser2net.conf:
<pre>
# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT
</pre>
The by-path syntax is needed if you have many identical usb-to-serial adapters. In that case, a
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=803031">patch from the BTS</a> is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream - a sure sign that a project is stagnant is a homepage still at sourceforge.net... This patch, among other interesting features, can also be found in various ser2net forks on github.
<h3>Setting easy to remember names</h3>
Finally, unless you want to memorize the port numbers, set TCP port to name mappings in /etc/services:
<pre>
# Local services
arndale 7004/tcp
cubox 7005/tcp
sonic-screwdriver 7006/tcp
</pre>
Now finally:
<pre>telnet localhost sonic-screwdriver</pre>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyPouECLCWFkpxo1MRg4Ai-lDbM45GOXr_9iTdliL7-QrGjVQEbHE7HxNAm1jfRpBCZTmQOUrPTVSf6KISGSqYlbJ8ncN9mZDNDBu08sePLETWyZ6SNRrfbDVETMd2hamhpQwAp4bQ2qQ/w1066-h797-no/" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyPouECLCWFkpxo1MRg4Ai-lDbM45GOXr_9iTdliL7-QrGjVQEbHE7HxNAm1jfRpBCZTmQOUrPTVSf6KISGSqYlbJ8ncN9mZDNDBu08sePLETWyZ6SNRrfbDVETMd2hamhpQwAp4bQ2qQ/w1066-h797-no/" /></a>
^Mandatory picture of serial port connection in actionRiku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-6321542401147949492015-09-04T22:25:00.000+03:002015-09-04T22:25:40.428+03:00Migration to Scaleway ARM server<h2>The C1 Server</h2><p/><img src="https://www.scaleway.com/img/c1.fdf7.jpg"/><p/>
<a href="https://www.scaleway.com/">Scaleway</a> started selling ARM-based hosted servers in April. I've intended to blog about this for a while: since the upgrade from wheezy to jessie was timely anyway, why not switch from an X86-based provider to an ARM-based one at the same time? <p> In many ways the Scaleway node is the opposite of what "Enterprise ARM" people are working on. Each server is based on an oldish ARMv7 Quad-Core <a href="http://www.marvell.com/embedded-processors/armada-xp/">Marvell Armada XP</a>, instead of a brand new 64-bit ARMv8 cpu. There is no UEFI, ACPI or any other "industry standards" involved, just a smooth web interface and <a href="https://github.com/scaleway/scaleway-cli">a command line tool</a> to manage your node(s). And the node is yours; it's not shared with others via virtualization. The picture above is a single node, which is stacked with 911 other nodes into a single rack.
<p/>This week, the C1 price was dropped to a very reasonable <a href="https://blog.scaleway.com/2015/09/02/we-are-slashing-the-c1-price-by-70-percent/">€2.99 per month</a>, or €0.006 per hour.
<h2>Software runs on hardware, news at 11</h2>
<p/>The performance is more than enough for my needs - shell, email and light web serving. dovecot, postfix, irssi and apache2 are just an apt-get away. Anyone who says you need x86 for Linux servers is forgetting that Linux software is open source and, if not already available, can be compiled for any architecture with little effort. Thus the migration pains came only from my choice to modernize the configuration of dovecot and friends. Details of the new setup shall be left for another post.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-53113911037637395732015-06-12T21:42:00.000+03:002015-06-12T21:42:11.562+03:00Dystopia of Things<h3>The Thing on Internet</h3>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgU16agpsRrM1mrUOQTN781q4Iek_IuPvZYc_b8S6BjBTDpljMDXpgRopnXUyjg8l5LpTS9ePEOt21u9_aVT5Km2PM92O5L72WBv7D9ntHOWpltb_cH0OH2EXvr4Tn9YkwXljRxlXT68MM/s1600/harmony.jpeg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgU16agpsRrM1mrUOQTN781q4Iek_IuPvZYc_b8S6BjBTDpljMDXpgRopnXUyjg8l5LpTS9ePEOt21u9_aVT5Km2PM92O5L72WBv7D9ntHOWpltb_cH0OH2EXvr4Tn9YkwXljRxlXT68MM/s320/harmony.jpeg" /></a></div>
I've now had an "Internet of Things" device for about a year. It is the <a href="http://www.logitech.com/en-us/product/harmony-home-hub">Logitech Harmony HUB</a>, a universal remote controller. It comes with a traditional remote, but the interesting part is that it allows me to use my smartphone/tablet as a remote over WiFi. With the <a href="https://play.google.com/store/apps/details?id=com.logitech.harmonyhub">android app</a> it provides a rather nice user experience, yet I can see the inevitable end approaching.
<p>
<h3>Bare minimum GPL respect</h3>
Today, the <a href="https://opensource.logitech.com/opensource/index.php/Logitech_Harmony_Hubs">GPL sources</a> for the hub are available - at least the kernel and a patch for busybox. The proper GPL release is still only available through a written offer. The sources appeared online in April this year, while the Hub has been sold for two years already. Even if I ordered the GPL CD, it's unlikely I could build a modified system from it - too many proprietary bits. The whole GPL was invented by someone who couldn't <a href="http://www.oreilly.com/openbook/freedom/ch01.html">make a printer do what he wanted</a>. <i>The dystopian today where I have to rewrite the whole stack running on a Linux-based system if I'm not happy with what's running there as provided by the OEM.</i>
<p>
<h3>App only</h3>
The smartphone app is mandatory. The app is used to set up the hub. There is no HTML5 interface or any other way to control the hub - just the bundled remote and the phone apps. Fully proprietary apps, with limited customization options. And if an app store update removes a feature you have used... well, you can't get it back from anywhere anymore. <i>The dystopian today where "Internet of Things" is actually "Smartphone App of Things".</i>
<p>
<h3>Locked API</h3>
Maybe instead of modifying the official app you could write your own UI? Like one screen with only the buttons you ever use when watching TV? There *is* an <a href="http://myharmony.com/discover/harmony-api/">API</a>, with the delightful headline "Better home experiences, together". However, not together with me, the owner of the Harmony hub. The official API is <a href="https://forums.logitech.com/t5/Harmony-Home-Control-Experience/Harmony-Hub-API/td-p/1348995">locked to selected partners</a>. And the API is not for controlling the hub - it's for letting the hub connect to other IoT devices. Of course, for talented people, a locked API is usually just an undocumented API. People have <a href="https://github.com/jterrace/pyharmony/blob/master/PROTOCOL.md">reverse engineered</a> how the app talks to the hub over wifi. Curiously, it is actually Jabber based, with some twists like logging credentials through Logitech servers.
<i>The dystopian today where I can't write programs to remotely control the internet connected thing I own without reverse engineering protocols.</i>
<p>
<h3>Central Server</h3>
Did someone say Logitech servers? Ah yes, all configuration of the remote happens via the <a href="http://www.myharmony.com/">myharmony servers</a>, where the database of remote controllers lives. There is some irony in calling the service *my* harmony when it's clearly theirs. The communication with the cloud servers leaks, at minimum, back to Logitech what hardware I control with my hub. At worst, it will become an avenue for exploits. And how long will Logitech maintain these servers? The moment they are gone, the Harmony hub becomes a semi-brick. It will still work, but I can't change any configuration anymore. <i>The dystopian future where the Internet of Things will stop working *when* cloud servers get sunset.</i>
<p>
<h3>What now</h3>
This is not just the Harmony hub - this is a pattern that many IoT products follow: Linux-based gadget, smartphone app, cloud services, monetized APIs. After the gadget is bought, the vendor has little incentive to provide any updates. After all, their next chance of getting money from me is when the current gadget becomes obsolete.
<p>
I can see two ways out. The easy way is to get IoT gadgets as a monthly paid service. Now the gadget vendor has the right incentive - instead of trying to convince me to buy their next gadget, their incentive is to keep me happily paying the monthly bill. The polar opposite is to start making open, competing IoT devices, and market to people the advantage of being in control yourself. I can see markets for both options. But half-way between is just pure dystopia.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-64492432972968818052015-04-22T15:51:00.000+03:002015-04-22T15:51:00.615+03:00Fastest way to change running dtbTollef posted about using <a href="http://err.no/personal/blog/tech/2015-04-22-09-32_1-wire_monitoring_with_a_BBB.html"> BeagleBone Black for temperature monitoring</a>. There was a passage about patching the DTB (device tree) file:
<blockquote>... This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then running make dtbs.</blockquote>
There are easier ways. For example, you can get the current device tree file generated from /proc:
<pre>
apt-get install device-tree-compiler
dtc -I fs -O dts -o current.dts /proc/device-tree/
</pre>
(Why /proc and not /sys? Because the device tree interface predates /sys.) Now you can just modify the source and build the dtb again, then install it back to wherever the bootloader reads the dtb from:
<pre>
vim current.dts
dtc -I dts -O dtb -o new.dtb current.dts
</pre>
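As an illustration, a typical edit is flipping a node's status property. The node name below is hypothetical - look up the actual node in your own current.dts:
<pre>
/* In current.dts, find the node and change its status property.
   serial@48024000 is an illustrative node name - yours will differ. */
serial@48024000 {
        status = "okay";    /* was "disabled" */
};
</pre>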
An alternative, of course, is to build a brand new mainline kernel and use the <a href="http://events.linuxfoundation.org/sites/events/files/slides/dynamic-dt-elce14.pdf">dynamic Device tree</a> code now available.
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-6857236792255625852014-12-31T23:28:00.000+02:002014-12-31T23:29:59.008+02:00Crowdfunding better GCompris graphics<div xmlns="http://www.w3.org/1999/xhtml">
<a href="http://gcompris.net/index-en.html">GCompris</a> is the most established open source kids' educational game. Here we practice mouse use with an <a href="http://genesi.company/products/smartbook">Efika smartbook</a>. In this subgame, the mouse is moved around to uncover an image behind it.<br />
<img border="0" src="https://lh6.googleusercontent.com/-S961mEJ_VxQ/VKOlM4IvLWI/AAAAAAAACTA/la8LC8PJPNE/s400/1420010802564.jpeg" style=" display: block; margin: 0px auto 10px; text-align: center;" />
<br /> While GCompris is nice, it badly needs nicer graphics. Now the GCompris authors are running an <a href="https://www.indiegogo.com/projects/new-unified-graphics-for-gcompris">indiegogo crowdfund</a> for exactly that - to get new unified graphics.<br /><br />
Why should you fund? Apart from the "I want to be nice for any oss project", I see a couple of reasons specific for this crowdfund.<br /> <br />
First, to show kids that apps can be changed! Instead of just consuming existing iPad apps, GCompris allows you to show kids how games are built and modified. With the new graphics, more kids will play longer, and eventually some will ask if something can be changed/added..<br />
<br />
Second, GCompris has recently become Qt/QML based, making it more portable than before. Wouldn't you like to see it on your <a href="https://www.indiegogo.com/projects/jolla-tablet-world-s-first-crowdsourced-tablet">Jolla tablet</a> or a future Ubuntu phone? The crowdfund doesn't promise new ports, but if you are eager to show your friends nice looking apps on your platform, this is probably one of the easiest ways to help them happen.<br />
<br />
Finally, as a nice way to say happy new year 2015 :)
</div>Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-6715293116531411852014-11-06T22:28:00.000+02:002014-11-07T11:03:28.571+02:00Adventures in setting up local lava serviceLinaro uses LAVA as a tool to test a variety of devices. So far I had not installed it myself, mostly because I assumed it to be enormously complex to set up. But thanks to <a href="http://linux.codehelp.co.uk/?page_id=32">Neil Williams</a>' work on packaging, installation has become a lot easier. Follow the <a href="https://validation.linaro.org/static/docs/installation.html">Official Install Doc</a> and the <a href="https://validation.linaro.org/static/docs/installing_on_debian.html#">Official install to Debian Doc</a>; roughly it looks like this:
<p>
1. Install Jessie into kvm
<pre>
kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso
</pre>
2. Install lava-server
<pre>
apt-get update; apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf # make sure LAVA_SERVER_IP is right
</pre>
That's the generic setup. Now you can point your browser to the IP address of the kvm machine, and log in with the default user and the password you made. <p>
3 ... 1000. Each LAVA instance is site-customized for the boards, network, serial ports, etc. In this example, I now add a single arndale board.
<pre>
cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001
</pre>
This generates an almost usable config for the arndale. For the site specifics: I use USB-to-serial adapters for console access. Outside kvm, I provide access to the serial ports using the following ser2net config:
<pre>
7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT
</pre>
TODO: make ser2net not run as root, and ensure USB-to-serial devices always get the same name..
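For the persistent-name part of that TODO, a udev rule can pin a symlink to a specific adapter. This is only a sketch - the vendor ID and serial number below are hypothetical; check the real values with <code>udevadm info -a -n /dev/ttyUSB0</code>:
<pre>
# /etc/udev/rules.d/99-usbserial.rules (example values, adjust to your adapters)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{serial}=="A7004ARN", SYMLINK+="ttyUSB-arndale"
</pre>
The ser2net config can then refer to the stable /dev/ttyUSB-arndale name instead of /dev/ttyUSB0.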
<p>
For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer.. I prefer the software side ;). I discussed with Hector, who hinted at prebuilt relay boxes. I chose one from Ebay, a <a href="http://sigma-shop.com/product/8/-usb-eight-channel-relay-controller-rs232-serial-controlled-12v.html">kmtronic 8-port USB Relay</a>.
So now I have this cute boxed nonsense hack.
<img src="http://kos.to/lavalab.jpeg"></img>
<p>
The USB relay is driven with a short script, hard-reset-1
<pre>
#!/bin/bash
# Power-cycle the board on relay channel 1 (kmtronic protocol: FF <channel> <state>)
stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0  # relay 1 off
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0  # relay 1 on
</pre>
Sidenote: If you don't have or want an automated power relay for lava, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3".
<p>
Both the serial port and reset script are on server with dns name <b>aimless</b>. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like:
<pre>
device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1
</pre>
Since in my case I'm only going to test with tftp/nfs boot, the arndale board needs only to be setup to have a u-boot bootloader ready on power-on.
<p>
Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the easiest httpd available by default in Debian.. Python!
<pre>
cd out/
python -m SimpleHTTPServer
</pre>
Go to the lava web server, select api->tokens and create a new token. Next we add the token and use it to submit a job
<pre>
$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ <a href="http://kos.to/lava_test.json">lava_test.json</a>
submitted as job id: 1
$
</pre>
The first job should now be visible in the lava web frontend, in the scheduler -> jobs part. If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-48944907321641761872014-11-01T12:21:00.000+02:002014-11-01T12:21:08.410+02:00Using networkd for kvm tap networkingSetting up basic systemd-networkd was recently described by <a href="http://www.joachim-breitner.de/blog/664-Switching_to_systemd-networkd">Joachim</a>, and his post inspired me to try it as well. The twist is that in my case I need a bridge for my KVM guests with a <a href="https://wiki.linaro.org/LAVA">Lava server</a> and arm/aarch64 qemu system emulators...<p>
For background, qemu/kvm supports a few ways to provide networking to guests. The default is <a href="http://wiki.qemu.org/Documentation/Networking#User_Networking_.28SLIRP.29">user networking</a>, which requires no privileges, but is slow and based on ancient SLIRP code. The other common option is <a href="http://wiki.qemu.org/Documentation/Networking#Tap">tap</a> networking, which is fast, but complicated to set up. It turns out that with networkd and the qemu bridge helper, tap is easy to set up.
<pre>
$ for file in /etc/systemd/network/*; do echo $file; cat $file; done
/etc/systemd/network/eth.network
[Match]
Name=eth1
[Network]
Bridge=br0
/etc/systemd/network/kvm.netdev
[NetDev]
Name=br0
Kind=bridge
/etc/systemd/network/kvm.network
[Match]
Name=br0
[Network]
DHCP=yes
</pre>
Diverging from Joachim's simple example, we replaced "DHCP=yes" with "Bridge=br0" in the ethernet interface's .network file. Then we proceed to define the bridge (in kvm.netdev) and give it an IP via DHCP in kvm.network. On the kvm side, if you haven't used the bridge helper before, you need to give the helper permission (setuid root or cap_net_admin) to create a tap device to attach to the bridge. The helper also needs a configuration file telling it which bridge it may meddle with.
<pre>
# cat > /etc/qemu/bridge.conf <<__END__
allow br0
__END__
# setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper
</pre>
Now we can start kvm with bridge networking as easily as with user networking:
<pre>
$ kvm -m 2048 -drive file=jessie.img,if=virtio -net bridge -net nic,model=virtio -serial stdio
</pre>
The manpages
<a href="http://www.freedesktop.org/software/systemd/man/systemd.network.html">systemd.network(5)</a> and
<a href="http://www.freedesktop.org/software/systemd/man/systemd.netdev.html">systemd.netdev(5)</a> do a great job explaining the files. Qemu/kvm networking docs are unfortunately not as detailed.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-18187834154792371282014-08-13T17:36:00.002+03:002014-08-13T17:36:33.583+03:00Booting Linaro ARMv8 OE images with QemuA quick update - <a href="http://releases.linaro.org/14.07/openembedded/aarch64/">Linaro ARMv8 OpenEmbedded images</a> work just fine with qemu 2.1 as well:
<pre>
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#
</pre>
Quick benchmarking with age-old <a href="http://www.tux.org/~mayer/linux/bmark.html">ByteMark nbench</a>:
<table>
<tr>
<th>Index</th>
<th>Qemu</th>
<th>Foundation</th>
<th>Host</th>
</tr>
<tr>
<td>Memory</td>
<td>4.294</td>
<td>0.712</td>
<td>44.534</td>
</tr>
<tr>
<td>Integer</td>
<td>6.270</td>
<td>0.686</td>
<td>41.983</td>
</tr>
<tr>
<td>Float</td>
<td>1.463</td>
<td>1.065</td>
<td>59.528</td>
</tr>
<tr><td colspan="4">Baseline (LINUX) : AMD K6/233*
</td></tr>
</table>
Qemu is up to 9x faster than the Foundation model on integers, but only about 40% faster on floating point. Meanwhile, the host PC is roughly 7-40x faster executing native instructions than emulating ARMv8.
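The ratios quoted here can be recomputed directly from the index table; a quick awk check (values copied from the table above):

```shell
# Recompute speedup ratios from the nbench index table.
awk 'BEGIN {
  printf "integer qemu/foundation: %.1f\n", 6.270 / 0.686
  printf "float   qemu/foundation: %.2f\n", 1.463 / 1.065
  printf "integer host/qemu:       %.1f\n", 41.983 / 6.270
  printf "float   host/qemu:       %.1f\n", 59.528 / 1.463
}'
```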
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-19095593920582287192014-08-05T22:45:00.001+03:002014-08-05T22:45:55.825+03:00Testing qemu 2.1 arm64 supportQemu 2.1 was just released a few days ago, and is now available on Debian/unstable.
Trying out a (virtual) arm64 machine is now just a few steps away for unstable users:
<pre>
$ sudo apt-get install qemu-system-arm
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-arm64-disk1.img
$ wget https://cloud-images.ubuntu.com/trusty/current/unpacked/trusty-server-cloudimg-arm64-vmlinuz-generic
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
-append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
-drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic 3.13.11.4)
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
-snip-
...
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor : AArch64 Processor rev 0 (aarch64)
processor : 0
Features : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant : 0x1
CPU part : 0xd07
CPU revision : 0
Hardware : linux,dummy-virt
ubuntu@ubuntu:~$
</pre>
The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" part is ubuntu cloud-init magic that will set the ubuntu user's password to "randomstring" - don't use "randomstring" literally there if you are connected to the internets...
<p/>
For more detailed writeup of using qemu-system-aarch64, check the <a href="http://www.bennee.com/~alex/blog/2014/05/09/running-linux-in-qemus-aarch64-system-emulation-mode/">excellent writeup from Alex Bennee</a>.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-42581489105994172002014-05-08T21:49:00.000+03:002014-05-08T22:14:14.427+03:00Arm builder updatesDebian has recently received a donation of 8 build machines from Marvell. The new machines come with
<a href="http://www.marvell.com/embedded-processors/armada-xp/">Quad-core MV78460 Armada XP</a> CPUs, a DDR3 DIMM slot so we can plug in more memory, and speedy SATA ports. They replace the well-served Marvell MV78200 based builders - ones that have been building debian armel since 2009. We are planning a more detailed announcement, but I'll provide a quick summary:
<p/>
The speed increase provided by the MV78460 can be seen by comparing build times on selected builds since early April:
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDdjPJ42YkHh6kNqIef1ZNeZ4jhD8mHx2r-bdJ64ABrsuq5Zh9VMd3SCtb1EQ0klN-mc8wv8avh6WAPB8ke4lNL3WeupByEdTzTaSsHnFHzrm3RswZkIshggfwiOgah47lgHDfdxQNIbQ/s1600/buildtimes.png" /><p/>
<a href="https://buildd.debian.org/status/logs.php?pkg=qemu&arch=armel&suite=sid">Qemu build times</a>.<p/>
We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, and impressive kit from Marvell!
But not all packages gain this amount of speedup:<p/>
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtABEj_gSVqOM5Gxt9seF1KeDHGaFCJ0rVkiXQIM_JoWsCKof4up_e4kLKhdJuwfYQGBy8XrtYUH75dt8XqWt1UoC7jomws-fGI3MHSS7aSOP8Livuc6ZfnhhmaMCEJQbsdfAYHqybGSk/s1600/buildtimes2.png" /><p/>
<a href="https://buildd.debian.org/status/logs.php?pkg=webkitgtk&arch=armel" >webkitgtk build times</a>.<p/>
This example, webkitgtk, builds barely 3x faster. The explanation is found in webkitgtk's debian/rules:
<pre>
# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# MAKEARGUMENTS += -j$(NUMJOBS)
# endif
</pre>
The old builders are single-core[1], so regardless of parallel building, you can easily max out the CPU. The new builders will use only 1 of their 4 cores without parallel build support in debian/rules.
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiSWadNDy4e1jfAwvQxQzKgrCLjlfvoR7X-dyHaLvf-8WqECXmzpzRxE0gyZAmXJcTjcb3Vdny0MgBgGEkHoOGHai4qWjDygKbKwX6dUf2ZU8I-DUYbxR_7Hnfu2vpMyc5gwpAkYBtqxM/s1600/henze-cpu-day.png" /><p/>
In this buildd CPU usage graph, we see that most of the time only one CPU is in use. So for fast package build times, make sure your package supports parallel building.
<p/>
For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go.
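For reference, a minimal sketch of honouring parallel=N in a hand-written debian/rules - the same pattern webkitgtk has commented out above, here using the standard MAKEFLAGS variable (modern debhelper can do this for you):
<pre>
# debian/rules fragment: pass parallel=N from DEB_BUILD_OPTIONS to make
ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
MAKEFLAGS += -j$(NUMJOBS)
endif
</pre>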
<p/>
Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen.
<p/>
Meanwhile, we have unrelated trouble - a bunch of disks have broken within a few days of each other. I guess the warranty just ran out...
<p/>
[1] Only from Linux's point of view - the mv78200 actually has 2 cores, just not SMP or coherent. You could run an RTOS on one core while running Linux on the other.
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-61040040014722864042014-02-21T15:32:00.001+02:002014-02-21T15:32:28.514+02:00Where the armel buildd time wentWanna-build, wanna-build, which packages spent the most time on armel buildd's since the beginning of 2013?
<pre>
package | sum(build_time)
-------------------------+--------------
libreoffice | 114 09:16:34
linux | 113 02:58:50
gcc-4.8 | 064 01:21:09
webkitgtk | 059 19:09:27
acl2 | 043 16:40:50
gcc-4.7 | 028 14:03:53
iceweasel | 026 19:02:13
gcc-snapshot | 026 01:31:21
openjdk-7 | 020 02:41:53
php5 | 019 16:13:22
llvm-toolchain-3.3 | 017 19:05:38
qt4-x11 | 017 02:57:09
espresso | 016 03:50:37
pypy | 015 07:07:25
icedove | 014 18:57:08
insighttoolkit4 | 014 17:16:43
qtbase-opensource-src | 014 12:39:09
llvm-toolchain-3.4 | 012 03:06:15
mono | 011 22:30:13
atlas | 011 20:40:54
qemu | 011 17:11:09
calligra | 011 16:05:55
gnuradio | 011 15:19:35
resiprocate | 011 10:14:56
llvm-toolchain-snapshot | 011 02:04:44
libav | 010 13:52:03
python2.7 | 009 18:58:33
ghc | 009 18:28:48
gnat-4.8 | 009 13:59:57
axiom | 009 12:40:24
cython | 009 00:47:04
openjdk-6 | 008 16:38:14
oce | 008 10:29:20
eglibc | 008 06:04:26
ppl | 007 20:48:45
root-system | 007 17:32:16
openturns | 007 10:12:53
gcl | 007 08:02:42
gcc-4.6 | 007 02:50:48
k3d | 007 00:36:11
python3.3 | 007 00:25:42
llvm-toolchain-3.2 | 007 00:17:59
vtk | 006 17:53:28
samba | 006 17:17:27
mysql-workbench | 006 14:36:46
kde-workspace | 006 07:31:12
gmsh | 006 04:32:42
psi-plus | 006 04:30:08
octave | 006 04:17:22
paraview | 006 04:13:25
</pre>
The time format is "days HH:MM:SS". Our ridiculously stable mv78x00 buildd's have served well, but the time has come to let them rest. Now, to find out how many of these top time-consuming packages can build with parallel make and are not doing so already.
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-33991112893795890152013-12-20T22:41:00.002+02:002013-12-20T22:41:42.495+02:00Replicant on Galaxy S3I recently got myself a <a href="http://www.amazon.co.uk/gp/product/B0080DJ6C2/">Galaxy S3</a> for testing out <a href="http://replicant.us/">Replicant</a>, an android image made out of only open source components.
<h3>Why Galaxy S3?</h3> It is <a href="http://redmine.replicant.us/projects/replicant/wiki/GalaxyS3">well supported in Replicant</a>; almost every driver is already open source. The hardware specs are acceptable: 1.4GHz quad core, 1GB RAM, microsd, and all the peripheral chips one expects in a phone. The Galaxy S3 has sold insanely well (<a href="http://www.androidauthority.com/galaxy-s4-demand-galaxy-s3-sales-197892/">50 million units</a> supposedly), meaning I won't run out of accessories and aftermarket spare parts any time soon. The massive installed base also means a huge potential user community. The S3 is still available new, with two years of warranty.
<h3>Why not</h3>
While the S3 is still available new, it is safe to assume production is ending already - a 1.5-year-old product is ancient history in the mobile world! It remains to be seen how much the massive user base will defend against obsolescence. Upstream kernel support for the "old" CPU is an open question; Replicant still bases its kernel on the vendor kernel. The bootloader is unlocked, but it can't be changed due to trusted^Wtreacherous computing, preventing things like booting from an SD card. Finally, not everything is open source: the GPU (Mali) driver, while being reverse engineered, is taking its time - and the <a href="http://redmine.replicant.us/projects/replicant/wiki/BCM4751">GPS hasn't been reversed yet</a>.
<h3>Installing replicant</h3>
Before installing, you might want to take a copy of the firmware files from the original installation (since Replicant won't provide them). Enable developer mode on the S3 and:
<pre>sudo apt-get install android-tools
mkdir firmware
adb pull /system/vendor/firmware/
adb pull /system/etc/wifi
</pre>
After then, just follow official <a href="http://redmine.replicant.us/projects/replicant/wiki/GalaxyS3Installation">replicant install guide</a> for S3. If you don't mind closed source firmwares, post-install you need to push the firmware files back:
<pre>
adb shell mount -o remount,rw /system
adb push . /system/vendor/firmware
</pre>
Here was my first catch: the wifi firmware files from the Jelly Bean based image were not compatible with the older ICS based Replicant.
<h3>Using replicant</h3>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjS7mFyxOIN6g_h44Al1gWY4hpzq15ez943wXulQLkx8lutX1JD1bEvGqr3dNXPfHdGvRNjmUJSnzxEb_MgJPf_Cu5IjI81CjwI7xLbFssnA0wSwhuPidKA2Ihyphenhyphenjxvr-xWSPReUm1bYnCM/s1600/lockscreen.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjS7mFyxOIN6g_h44Al1gWY4hpzq15ez943wXulQLkx8lutX1JD1bEvGqr3dNXPfHdGvRNjmUJSnzxEb_MgJPf_Cu5IjI81CjwI7xLbFssnA0wSwhuPidKA2Ihyphenhyphenjxvr-xWSPReUm1bYnCM/s320/lockscreen.png" /></a></div>
Booting into Replicant is fast, a few seconds to the pin screen.
You are treated to the standard android lockscreen; the usual slide/pin/pattern options are available. Basic functions like phone, sms and web browsing have icons on the homescreen and work without a hitch. Likewise the camera seems to work; really, the only smartphone feature missing is GPS.
<p/>
Sidenote - this image looks a LOT better on the S3 than on my thinkpad. No wonder people are flocking to phones and tablets when laptop makers use such crappy components.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-tR5v-iQl0EN1i-RYns4FHEJxefs5MAbPrZ5sIsD5Fn_Lirhyi6h5gPh8ZqSXE7yK2Y0YNd-nR074HUOPNF1eusqAPxYJkmZnxKrGHsKLi34HqLUOZBTrg-uqGWvXP3XtCbrrq7B0p-c/s1600/menu.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-tR5v-iQl0EN1i-RYns4FHEJxefs5MAbPrZ5sIsD5Fn_Lirhyi6h5gPh8ZqSXE7yK2Y0YNd-nR074HUOPNF1eusqAPxYJkmZnxKrGHsKLi34HqLUOZBTrg-uqGWvXP3XtCbrrq7B0p-c/s320/menu.png" /></a></div>
The grid menu has the standard android AOSP open source applications in the ICS style menu, with the addition of the <a href="http://f-droid.org">f-droid</a> icon - the installer for open source applications. F-droid is its own project that complements the Replicant project by maintaining a catalog of Free Software.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqd7HzXNbfvUioUINdcvjhMgusQZ6YKM_TbKiK3RYAKjptbWzlwcHP9KxmZEDmuvGyA-ZmK8XpZGBSS0a39Vzmvs3n3BrHBTsUIMkMfYEwaA8KVrDJHkB6U-ooGj6WzEascjsVebK8pxM/s1600/installer.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqd7HzXNbfvUioUINdcvjhMgusQZ6YKM_TbKiK3RYAKjptbWzlwcHP9KxmZEDmuvGyA-ZmK8XpZGBSS0a39Vzmvs3n3BrHBTsUIMkMfYEwaA8KVrDJHkB6U-ooGj6WzEascjsVebK8pxM/s320/installer.png" /></a></div>
F-droid brings hundreds of open source applications not only to Replicant users, but to any other android users, including platforms with android compatibility, such as <a href="http://jolla.com/">Jolla's</a> Sailfish OS. Of course the f-droid client is <a href="https://gitorious.org/f-droid/fdroidclient/">open source</a>, like the f-droid server (in <a href="http://packages.qa.debian.org/f/fdroidserver.html">Debian</a> too). The f-droid server is not just repository management; it can take care of building and deploying android apps.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwmTltD6SvKDH0Je-DQwhxecdxV_wP_tN-yT_Zs47ywUC3-Rg95BoYsbAvupxcrjVQPDRC9IcWv4ycM7MpADsdzVij6bG6xxvv6HzWJphm9wGrg7zPQgF6LxufGufuqlhZ6yMCu_EyzUc/s1600/browser.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwmTltD6SvKDH0Je-DQwhxecdxV_wP_tN-yT_Zs47ywUC3-Rg95BoYsbAvupxcrjVQPDRC9IcWv4ycM7MpADsdzVij6bG6xxvv6HzWJphm9wGrg7zPQgF6LxufGufuqlhZ6yMCu_EyzUc/s320/browser.png" /></a></div>
The WebKit based android browser renders web sites without issues, and if you are not happy with it, you can download <a href="https://f-droid.org/repository/browse/?fdfilter=firefox&fdid=org.mozilla.firefox">Firefox</a> from f-droid. Many websites notice you are on mobile and serve mobile versions, which is sometimes good and sometimes annoying. Worse, some pages detect you are on android and only offer to load their closed android app for viewing the page. On the other hand, I am already viewing their closed source website, so using a closed source app to view it isn't much worse.
<p/>
The keyboard is again the android standard one, but most unixy people will probably want the <a href="https://f-droid.org/repository/browse/?fdid=org.pocketworkstation.pckeyboard">hacker's keyboard</a>, with its arrow keys and ctrl/alt.
<h3>Closing thoughts</h3>
While using replicant has been very smooth, the lack of GPS is becoming a deal-breaker. I could just copy the gpsd from cyanogen, like <a href="http://blog.josefsson.org/2013/11/11/using-replicant-on-samsung-s3/">some have done</a>, but that rather defeats the purpose of having replicant on the phone. So it might be that I move back to cyanogen, unless I find time to help reverse engineer the <a href="http://redmine.replicant.us/projects/replicant/wiki/BCM4751">BCM4751 GPS</a>. Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-87065982668119629712013-07-22T11:14:00.000+03:002013-07-22T11:14:14.414+03:00ACPI on ARM storm in teacupA recent <a href="https://plus.google.com/106265217227408958782/posts/MhDfDSp8i8a">google+ post by Jon Masters</a> caused some <a href="https://plus.google.com/111104121194250082892/posts/Vvfc7J6Cwnj">stormy</a> and some <a href="https://plus.google.com/106977132481886848688/posts/2ADZFk48AAa">less stormy</a> responses.
<p>
A lot of BIOS/UEFI/ACPI hate comes from X86, where ACPI is used for everything from suspending devices to reading buttons and setting leds. So when an X86 kernel suspends, it makes magic calls into ACPI and prays that the firmware vendor did not screw it up. Vendors do screw up, hence lots of cursing and ugly workarounds in the kernel follow. My Lenovo has a firmware bug where the FN-buttons and fan stop working if the laptop stays attached to the AC adapter for too long. The fan is probably a simple i2c device the kernel could control directly, without jumping through ACPI hiding-layer hoops. But the X86 people hold the view that it is better to trust the firmware engineer to control devices instead of having kernel folk write device drivers to ... control devices!
<p>
Now on ARM(64) the idea of using ACPI is to have none of that.
<p>
Instead, the idea is to use ACPI only to provide tables enumerating what devices are available on the platform - just like device tree does. Now if this is the same as device tree, why bother?
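To make the "tables instead of control" point concrete, here is how the same PL011 UART might be described in each world. Both snippets are illustrative sketches - the addresses, labels and interrupt numbers are invented, not taken from any real board:
<pre>
/* Device tree node: pure hardware description */
uart0: serial@1c020000 {
        compatible = "arm,pl011";
        reg = &lt;0x1c020000 0x1000&gt;;
        interrupts = &lt;0 5 4&gt;;   /* GIC SPI 5, level high */
};

/* Roughly equivalent ACPI DSDT entry (ASL) - still just enumeration,
   no AML methods poking the hardware behind the kernel's back */
Device (COM0) {
    Name (_HID, "ARMH0011")     // PL011-compatible UART
    Name (_CRS, ResourceTemplate () {
        Memory32Fixed (ReadWrite, 0x1C020000, 0x1000)
        Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive) { 37 }  // GSIV = 32 + SPI 5
    })
}
</pre>
In both cases the kernel's own pl011 driver does the actual device control; the firmware only states where the device lives and which interrupt it uses.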
<p>
The main reason is to allow the distribution installer to behave the same on X86 / ARM / ARM64. This is crucial for distributions like fedora and RHEL, where a cabal holds the view that X86 distribution development must not be constrained by ARM support. But it is also important for everyone that the method of installing your favorite distribution on an ARM64 server is standard and works the same for any server from any vendor. While UEFI and ACPI are definitely not my preferred solutions, I can accept them as a necessary evil for having a more standard platform.Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com0tag:blogger.com,1999:blog-997920555510565452.post-64313061306757486492013-02-04T15:36:00.002+02:002013-02-04T15:36:59.193+02:00On behalf of aarch64 porters<h3>Public service announcement</h3>
<p>When porting GNU/Linux applications to a new architecture, such as 64-Bit ARM, one gets familiar with the following error message:</p>
<pre>
checking build system type... x86_64-pc-linux-gnu
checking host system type... Invalid configuration `aarch64-oe-linux': machine `aarch64-oe' not recognized
configure: error: /bin/sh config.sub aarch64-oe-linux failed
</pre>
<p>This in itself is trivial to fix - run autoreconf, or just copy in new versions of config.sub and config.guess. However, when bootstrapping a distribution of 12000+ packages, this quickly becomes tiresome. Thus we have a small request:</p>
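<p>A cheap way to spot tarballs shipping helper scripts that predate aarch64 is to grep config.sub for the CPU name - a config.sub new enough to accept aarch64-*-linux mentions it. A minimal sketch (the <code>needs_refresh</code> helper and the /tmp stand-in files are mine, invented for the demo, not part of any tool):</p>
<pre>
#!/bin/sh
# Sketch: detect config.sub/config.guess copies that predate aarch64.
# A rough but effective staleness check: does the script mention the CPU?
needs_refresh() {
    ! grep -q aarch64 "$1"
}

# Self-contained demo with fabricated stand-ins for old and new copies:
printf 'arm | armbe | armle)\n' > /tmp/old-config.sub
printf 'aarch64 | arm | armbe)\n' > /tmp/new-config.sub

needs_refresh /tmp/old-config.sub && echo "stale copy: run autoreconf -fi"
needs_refresh /tmp/new-config.sub || echo "aarch64 recognized"
</pre>
<p>In a real source tree you would run the check against ./config.sub, and refresh stale copies with <code>autoreconf -fi</code> (or, on Debian, copy the current scripts shipped by autotools-dev in /usr/share/misc/).</p>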
<p align="center"><b>If you are an upstream of a software that uses autoconf - Please run autoreconf against autotools-dev 20120210.1 or later, and make a release of your software.</b></p>
<p>Aarch64 porters will be grateful as updated software trickles down to distributions.</p>
<p>This was the most discussed point during my FOSDEM talk "<a href="http://people.linaro.org/~rikuvoipio/aarch64-talk/">Porting applications to 64-Bit ARM</a>".</p>
Riku Voipiohttp://www.blogger.com/profile/11009374403959477488noreply@blogger.com1