Snap interfaces from an Android app developer’s perspective

If you are already building Snap packages, you probably know what an “interface” is in the world of Snapcraft. However, the terminology may not be very clear to someone who doesn’t know anything about Snap (my developer friends and colleagues, for instance). So I wanted to write this out clearly for people coming from an Android app development background. I like to think that “permissions” in Android and “interfaces” in the Snap world are pretty much interchangeable. I have done a fair amount of Android app development (professionally and privately) over the past six years and have been involved in the Snapcraft ecosystem almost since its inception.

Before going into further details, I think it’d make sense to clear up some of the Snapcraft terminology as well.

  • Snapcraft is the name of the wider project, which encompasses the build tools, the public store and the daemon that runs on your computer to manage and update snap packages. However, snapcraft is also the name of the command-line tool that is used to actually build snap packages. Confusing? Yeah, a bit!
  • Snapd is the software/daemon that runs on your computer to install/remove/update snap packages, whether from the Snap Store or locally built ones. The accompanying CLI tool is called snap.

Android App permissions

In Android, if an app wants to access geolocation, it has to “request” that permission from the OS, which then pops up a dialog for the user; in that dialog, the user may choose to grant the requesting app the permission to use geolocation, or deny it.

It is also pertinent to mention that some permissions, like INTERNET, only need to be added to the AndroidManifest.xml file; the user is not asked whether the app should be allowed to access the internet.

Snap Permissions

In classic Linux packaging, like deb and rpm, installation is mostly the extraction of the relevant software into the rootfs, plus some scripts that get run (as root!) during the installation process. After that, the software has unrestricted access to the system: it can access different hardware devices, read the whole filesystem and even change it.

Things are quite different for a Snap package. A snap package’s build configuration is a simple YAML file that defines which system resources the snap is expected to access, like the network, a USB camera, OpenGL or the sound server. Specifically, that gets defined under the plugs stanza, similar to how permissions are declared in AndroidManifest.xml.
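For illustration, a hypothetical snap that needs network, camera and audio access might declare its plugs like this in snapcraft.yaml (the app name and interface selection here are just an example; network, camera and audio-playback are real snapd interface names):

```yaml
name: hello-recorder
# base, parts, etc. omitted for brevity
apps:
  hello-recorder:
    command: bin/hello-recorder
    plugs:
      - network          # like Android's INTERNET permission
      - camera           # access to USB/built-in cameras
      - audio-playback   # talk to the sound server
```

Compare that with listing uses-permission entries in AndroidManifest.xml: the snap declares what it wants, and snapd decides (with the user or the store) whether the plug actually gets connected.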

Just like the INTERNET permission in Android, there are multiple such interfaces in the Snap world that are pre-granted, or auto-“connected”, as you would say in Snap terms.

For interfaces that are deemed too sensitive for auto-connection, there is a process by which an app developer can request that the Snap Store admins grant their snap permission to automatically connect a specific interface on installation. That process is documented here. The developers mostly need to justify why their app needs access to a certain system resource.

One thing that is currently missing is a way for software to ask the system to prompt the user to grant permission for an interface. I think something like that could remove the need to ask the Snap Store admins. Hopefully we can have that feature some day as well.

That mostly concludes this article. We have been using Snap packages to build a commercial product, running them on a Yocto-based system, and I will be writing quite a bit more about that journey in the coming days and months.

Network-based IPC using WAMP protocol

Most Linux-based distributions come pre-installed with DBus, which is a language-independent way of doing IPC on such systems. DBus is great and has been used extensively for a long time. It was, however, written largely to be used on a single computer, where apps running locally are able to talk to each other. It can be used over TCP, but that may not be suitable for the reasons I state below.

In modern times, and especially with the advent of smartphones, many new app-communication paradigms have appeared. With IoT being the new cool kid in town, it’s becoming more and more of a requirement for different apps running on a premises to be able to “talk” to each other. The DBus daemon can be accessed over TCP, but a client running in a web browser cannot talk to it, because browsers no longer provide direct access to TCP sockets, so writing a browser-side DBus client library isn’t possible. For Android and iOS, talking to a DBus daemon running on a PC would need new implementations.

Much of the above effort could be avoided if we used a more general-purpose protocol that supports PubSub and RPC, is secure (supports end-to-end encryption), is cross-platform and has an ever-growing ecosystem of client libraries. WAMP is one such protocol: it can run over WebSocket, allowing “free” browser support, and it also runs over RawSocket (custom framing atop TCP). In principle, WAMP can run over any bi-directional, reliable transport, so the future prospects of the protocol look quite good.
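To make that concrete, here is what a WAMP call looks like on the wire with the JSON serializer. This is a sketch based on the WAMP basic profile; the procedure URI and request ID are made-up examples:

```python
import json

# Message type code for CALL, from the WAMP basic profile
CALL = 48

def build_call(request_id, procedure, args=None):
    """Serialize a WAMP CALL message, which is a plain JSON array:
    [CALL, Request|id, Options|dict, Procedure|uri, Arguments|list]
    """
    return json.dumps([CALL, request_id, {}, procedure, args or []])

msg = build_call(1, "com.example.add", [2, 3])
print(msg)  # [48, 1, {}, "com.example.add", [2, 3]]
```

Any transport that can carry such frames bi-directionally and reliably (WebSocket, RawSocket, even a pipe) can in principle carry WAMP, which is what makes browser and mobile support comparatively cheap.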

To that end, I have been working on a pet project for the last couple of months, called DeskConn. It uses Crossbar as the WAMP router (the equivalent of the DBus daemon) and couples it with an authentication scheme and service discovery based on python-zeroconf, allowing the daemon running on a desktop/RPi to be discoverable by clients on the local network (WiFi, LAN or other interfaces).

With the network layer figured out, writing applications on top of it is pretty straightforward and can be done with very little code. I’ll come up with some example code in different programming languages in a later blog post. For the curious, the umbrella deskconn project has quite a few sub-projects to be run in different environments: https://github.com/deskconn/

Note: I am a Core developer at Crossbar.io GmbH, the company that funds the development of Crossbar (the router) and a few WAMP client library implementations in Java, Python, JS/Node and C++, under the Autobahn project. I am the maintainer of autobahn-java and autobahn-js. DeskConn is a personal project that I have been working on in my free time.

A wider list of implementations, mostly done by the community, can be seen here: https://crossbar.io/about/Supported-Languages

My first-ever FOSDEM; it was awesome

I came back from FOSDEM on Tuesday but got busy with my day job at Crossbar.io. Finally, today, when I got down to writing something, I found my Blogspot-based web page really uncomfortable to navigate and manage, so I spent the last few hours moving my blog over to WordPress. I also had to update the Planet Ubuntu bzr repository for my new blog to show up on Planet Ubuntu.

Having been part of the Ubuntu community, I have had the chance to travel to different software events, mostly Ubuntu-specific ones. While at Canonical, I travelled for the Ubuntu Developer Summit and for internal Canonical sprints. After the Canonical layoffs in 2017, I didn’t really travel much for conferences, though last year, while visiting Crossbar.io GmbH’s HQ in Erlangen, Germany, I used the opportunity to plan my trip so that it coincided with UbuCon Europe in Sintra. That was a great event and I got to meet really great people; the social part of that event was on par with, or even better than, the talks/workshops.

So when FOSDEM’s dates were announced, I was yet again excited to travel to a community event. Since it’s known as the biggest FOSS conference in Europe, and lots of super-intelligent people from the wider open-source community attend it every year, I knew I had to be there. To that end, I applied to the Ubuntu community donation fund and, guess what, I got the nod. The rest is just details.

Talks were great

I attended lots of great talks (lightning talks as well). One of the great, “must watch” talks was from James Bottomley of IBM, titled “The Selfish Contributor Explained”. According to him, to unleash the true potential of an employee, companies should make an effort to figure out what interests that employee; if a developer is working on something they enjoy, they will likely go out of their way to make things work better.

Something that affects us all, from a future perspective, is how the web will transform in the coming years; on that topic, Daniel Stenberg (the creator of curl) gave an informative talk about HTTP/3 and the problems it solves. Of course, much of the “heavy lifting” was done by the new underlying transport, QUIC (thanks, Google, for the earlier work).

Behold HTTP/3 is coming

I gave a talk

DeskConn is a project that I have been working on in my free time for a while, and I wanted to introduce it to a wider audience, so I gave a brief talk on what could potentially be done with it. The DeskConn project enables network-based IPC, allowing different apps, written in different languages, to communicate with each other. Since the technology is based around WebSocket/WAMP/Zeroconf, a client could be written in any programming language that has a WAMP library.

For simplicity’s sake: it’s a technology that could enable the creation of projects like KDE Connect, but something that runs on all platforms: Windows, macOS and Linux.

My talk about the DeskConn project

Met old colleagues and friends

FOSDEM gave me the opportunity to meet lots of great people that I truly admire in the Ubuntu community, people I hadn’t seen or talked to in more than three years.

I met quite a few people from the Ubuntu desktop team and it was refreshing to hear how hard they are working on making Ubuntu 20.04 a success. Olivier Tilloy and I had a short discussion about the browser maintenance he does to ensure we have the latest and greatest versions of our two favourite browsers (Firefox and Chromium). Jibel told me about the ZFS installation feature work that he and Didier have been doing; I hope we’ll be able to use that technology in “production” soon.

from left to right: Martin Pitt (from Red Hat), Iain Lane, Jean-Baptiste Lallement and me

Conclusion

My first FOSDEM was a great learning experience; navigating around the ULB campus is also a challenge of sorts, but it was all worth it. I’d definitely go back to FOSDEM given the chance, maybe next year 😉

Using Your Ubuntu Server As Telegram Proxy (MTProxy Snap)

Telegram is great, especially because it helps one stay away from the distractions that WhatsApp brings with it. It’s unfortunately blocked in Pakistan, for unknown reasons, though likely not related to censorship, given that WhatsApp, Signal and every other messaging app work just fine.

The good news is that Telegram upstream has its own proxy protocol and an implementation (https://github.com/TelegramMessenger/MTProxy), which seems to work well. I published MTProxy as a snap (https://snapcraft.io/mtproxy) yesterday, so I thought it would make sense to share how others can set up their own proxy. This guide will, of course, serve as a future reference for me as well.

So let’s get started by installing MTProxy:

snap install mtproxy

For security reasons, mtproxy drops privileges (if run as root) by calling setuid(), which is something a strictly confined snap does not allow. My workaround was to run mtproxy under a newly created user on the server, so that it does not try to drop privileges.

So let’s set up a new user and download the proxy configuration from Telegram’s servers (more details: https://github.com/TelegramMessenger/MTProxy#running):

useradd mtproxy -m
su - mtproxy
mkdir proxyconfig
curl -s https://core.telegram.org/getProxySecret -o proxyconfig/proxy-secret
curl -s https://core.telegram.org/getProxyConfig -o proxyconfig/proxy-multi.conf

Now let’s exit the mtproxy user’s shell and create a secret to be used later by the Telegram client apps:

exit
head -c 16 /dev/urandom | xxd -ps

Now we create a systemd service so that our proxy runs in the background and starts automatically whenever the server is restarted. Open the file below for editing using nano (or the editor of your choice) and paste in the configuration below.
Note: you must replace “my_secret” in the config below with the random string that was generated in the previous step.

sudo nano /etc/systemd/system/mtproxy.service
[Unit]
Description=MTProxy
After=network.target

[Service]
Type=simple
User=mtproxy
WorkingDirectory=/home/mtproxy/proxyconfig
ExecStart=/snap/bin/mtproxy -u mtproxy -p 8888 -H 8000 -S my_secret --aes-pwd proxy-secret proxy-multi.conf -M 1
Restart=on-failure

[Install]
WantedBy=multi-user.target

Let’s now enable and start the service:

sudo systemctl enable mtproxy
sudo systemctl start mtproxy

That’s it, we are done: you now have your Telegram proxy set up and (hopefully) working.

NOTE: This was only tested on a DigitalOcean droplet, so your mileage may vary.

Control GPIO pins on a RaspberryPi 3 running Ubuntu Core 18, remotely (part 1/4)

Ubuntu Core 18 is out, and one of the features it packs is a set of snapd interfaces to access the GPIO pins of a Raspberry Pi 2/3 from a fully confined snap. This enables one to just flash Ubuntu Core 18 onto a micro SD card, boot, install a snap (which I author), connect a few interfaces and start controlling relays attached to the Pi.

If you don’t have Ubuntu Core 18 installed already, you can see the install instructions here.

To get started (assuming you have Ubuntu Core 18 installed and working ssh access to the Pi), you need to install a snap that exposes the said functionality over the (local) network:

snap install pigpio

The above command installs the pigpio server, which automatically starts in the background. The server can take as much as 30 seconds to start; you have been warned.

We also need to allow the newly installed snap to access a few GPIO pins:

snap connect pigpio:gpio pi:bcm-gpio-4
snap connect pigpio:gpio pi:bcm-gpio-5
snap connect pigpio:gpio pi:bcm-gpio-6
snap connect pigpio:gpio pi:bcm-gpio-12
snap connect pigpio:gpio pi:bcm-gpio-13
snap connect pigpio:gpio pi:bcm-gpio-17
snap connect pigpio:gpio pi:bcm-gpio-18
snap connect pigpio:gpio pi:bcm-gpio-19
snap connect pigpio:gpio pi:bcm-gpio-20
snap connect pigpio:gpio pi:bcm-gpio-21
snap connect pigpio:gpio pi:bcm-gpio-22
snap connect pigpio:gpio pi:bcm-gpio-23
snap connect pigpio:gpio pi:bcm-gpio-24
snap connect pigpio:gpio pi:bcm-gpio-26

The above pin numbers might look strange, but if you read a bit about the Raspberry Pi 3’s GPIO pinout, you will realize I only selected the “basic” pins. You are, however, free to connect all the GPIO pin interfaces.

The pigpio snap that we installed above exposes the GPIO functionality over the WAMP protocol and HTTP. The HTTP implementation is very basic: it allows one to “turn on” and “turn off” a GPIO pin and to get the current state(s) of the pins.

Note: the commands below assume you have httpie installed (snap install http).

To get the state of all pins

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.get_states

If we only want the state of a specific pin

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.get_state args:='[4]'

To “turn on” a pin

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.turn_on args:='[4]'

To “turn off”

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.turn_off args:='[4]'
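The same calls can be made from any language with an HTTP client. As a sketch, here is a small Python helper (standard library only) that mirrors the httpie payloads above; the Pi address is a placeholder, as before:

```python
import json
import urllib.request

def build_payload(procedure, args=None):
    """Build the JSON body the pigpio HTTP bridge expects:
    a procedure URI, plus optional positional arguments."""
    body = {"procedure": procedure}
    if args is not None:
        body["args"] = args
    return body

def call(host, procedure, args=None, port=5021):
    """POST a procedure call to the /call endpoint and
    return the decoded JSON response."""
    req = urllib.request.Request(
        f"http://{host}:{port}/call",
        data=json.dumps(build_payload(procedure, args)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. call("raspberry_pi_ip", "io.crossbar.pigpio-wamp.turn_on", [4])
```

This is just the httpie invocations translated one-to-one; the procedure URIs are the same ones shown above.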

I am skipping the WAMP-based API here to keep this blog post short. I must add, though, that the WAMP implementation is much more powerful than the HTTP one, especially because it has “event publishing”: imagine multiple people controlling a single GPIO pin from different clients; we publish an event that can be subscribed to, ensuring all client apps stay in sync. I’ll talk about this in a different blog post. In a later post, I will also talk about making the GPIO pins accessible over the internet.

For me personally, I have a few projects for home and one for my co-working space that I plan to accomplish using this.

The code lives on GitHub.

Introducing PySide2 (Qt for Python) Snap Runtime

Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things for that deadline. We also create a snap package for the project; our previous approach was to ship the whole PySide2 runtime (170 MB+) with the snap. That worked, but it was a slow process, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely resulted in an overall size reduction of our snap, but it also opens up a lot of different opportunities for app development on the Linux desktop.

I created a ‘Hello World’ snap that is just 8 KB in size, since it doesn’t bundle any dependencies; they are all provided by the pyside2 snap. I am currently working on a very simple “sound recorder” app using PySide2 and will publish it to the Snap Store.

With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone who is developing an app on their computer.

Software security over convenience

Recently I got inspired (paranoid?) by my boss, who cares a lot about software security. Previously, I had almost the same password on all the websites I used, and I had them synced to Google’s servers (I was a Chrome user). Once I started taking software security seriously, I knew the biggest mistake I was making was having a single password everywhere, so I went a step further and set randomly generated passwords on all my online accounts and stored them in a keystore.

I then enabled 2FA on some important services (GMail, GitHub, Twitter, DO) and adopted the policy of never logging into my browser’s sync features. Doing that, I realized that the browser is just a commodity: it doesn’t matter which browser I use, as long as I can log into my online accounts and, of course, the browser actually works.

I am pretty sure there are many things I could still improve about my computing patterns, and I will over time.

Motto: software security over convenience.

Let’s Snap The World

I am a long-time Ubuntu user and community contributor. I love how open-source communities generally work; sure, there are hiccups, like companies mandating decisions that aren’t popular amongst the community. But the idea of being able to fix an issue and get that fix released to hundreds of thousands of people is just priceless to me.

For a long time, I have identified some issues with Linux on the desktop that I want fixed. The biggest is always having the latest version of the software I use. Think of Android, for example: you always get the latest version of an app, directly from the developers, with no package maintainer in between. That’s the ideal scenario, but for us on Linux it may not currently be possible in all cases because of the fragmentation we have.

Snaps, I believe, try to solve that.

Whenever I find new software that I want to install these days, the first thing I do is search the snap store (snap find my_query). I have found some unexpected snaps while doing that, but at other times I have faced disappointment. On a personal level, I have slowly started to fix that: I published Android Studio as a snap, Sublime Text is a work in progress and I am looking into snapping Keybase.

The other apps that are absolutely important for me are already available, like PyCharm and Slack.

I have also discovered that MySQL and, to some extent, Firefox have their snaps there, which is super awesome.

The great thing today is that most new open-source projects do their development on GitHub, so we can just go and contribute snap support to a project and quickly get automatic builds on build.snapcraft.io. I hope more people who care about Ubuntu, and Linux in general, get behind this effort and make application delivery on Linux the best amongst all desktop OSes.

It’s time to put our egos aside and work for a larger cause.

I am a QA Engineer and Python developer and I need a job

Yesterday I was laid off by Canonical after working six years with them as a QA Engineer. I really loved my job and learned quite a lot, but now I must find a new job to survive. I have been involved with Ubuntu for close to eight years; I started as a contributor in the Ubuntu community and was later offered a role at Canonical to work on Ubuntu.

I am very passionate about software. When I am not working, I write code, and when I am working, I write code. I have never stopped learning technologies from different domains over the last few years; apart from my full-time job, I taught myself Android app development and Django for writing RESTful APIs (not a full-stack developer yet), and to some extent I am also a DevOps person, having managed a lot of my own deployments.

As a QA Engineer, I can help you set up test plans, find coverage gaps, automate your tests and enable them to run as part of CI in Jenkins. Apart from automation, I have extensive experience with manual testing as well, so I can really break your product (for good).

My Linux skills are quite competitive: having used Ubuntu exclusively for eight years, I am very comfortable with the command line and with remote debugging over ssh. I am experienced with both git and bzr. I am also very passionate about embedded devices and have experimented with very cool things on the Raspberry Pi.

I live in Pakistan (GMT+5) but I am pretty flexible with work hours, so if an opportunity arrives I can work in a different timezone. I don’t have a specific preference regarding company size, so I am very willing to work for companies of all sizes. I am also open to freelance opportunities; if you don’t have a full-time role, I can work as a freelancer/consultant.

my linkedin: https://www.linkedin.com/in/omer-akram-44830248/
my github: https://www.github.com/om26er
my launchpad: https://www.launchpad.net/~om26er
email: om26er@gmail.com

Exciting times ahead.