Android rocks in Stockholm

Mikael Kindborg


In this blogpost I would like to share some personal reflections and highlights from two developer events that took place this week: the Stockholm Android Meetup and the Stockholm GDG Meetup. Both were impressive, fun, and friendly events, so read on to get a glimpse of the state of the Android community and learn more about what people are currently talking about.

Stockholm has a vibrant Android community with several meetup groups and the high-profile DroidCon Stockholm Conference is coming up in October, featuring 15+ international speakers from the Android scene.

Stockholm Android Meetup (Sept 3, 2014)

Android Wear

Henrik Sandström presented Android Wear, a hot topic in the Android world, with lots of watches coming up from the major device manufacturers.

Personally, I would like to see a self-contained Wear device that operates stand-alone. Today you still need to bring your mobile phone along, and as Henrik said, “Without the phone, it is just like any watch”.

It would be cool to see new wearable form factors (I have not worn a wrist watch for over 25 years). Why not a tiny mobile phone you can wear as a necklace, or a device you can attach to your clothes, like the Star Trek combadge? ;) Then we would have come full circle, back to small phones being in fashion again! (Such devices would also be perfect for the Veridict speech recognition and AI technology presented at the GDG meetup, see below.)

Google IO report

Erik Hellman (author of “Android Programming: Pushing the Limits”) did a recap of Google IO 2014. Here are my reflections on some of the items covered.

Material Design. A big new thing for the Android UI is Material Design. I understand this as a paper-inspired design language, influenced by responsive web design. Check out Erik’s slide deck to see examples of layouts and animations.

In my view, web design and hybrid app web UIs are in some ways ahead of native mobile UIs. Web design feels more flexible, and if you can learn not to totally hate CSS, you can develop very cool UIs. Native UIs feel more restricted in terms of expressive freedom, as they impose a standard to follow.

It will be interesting to see if and when we will reach the same tipping point that happened on desktop operating systems, where the web browser took over for many applications.

BLE (Bluetooth). BLE now supports the peripheral role (advertising) and scanning for peripherals by service ID. This can make Android a more attractive platform for BLE applications. The question is how long it will take for device manufacturers to upgrade their software/hardware to fully support BLE. Many of the low-cost Android devices sold in e.g. Asia do not have BLE support, which may slow down the adoption of BLE in markets where older Android versions dominate.
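To make the “scanning by service ID” idea concrete, here is a toy simulation in plain Python (on Android 5.0 this would be done with `ScanFilter` and `BluetoothLeScanner`, which need a real device; the device names and advertisement data below are made up for illustration):

```python
# Toy simulation of BLE scanning filtered by service UUID.
# This is NOT the Android API -- just an illustration of the filtering idea.

# Standard Bluetooth SIG base UUIDs for some well-known services.
HEART_RATE_SERVICE = "0000180d-0000-1000-8000-00805f9b34fb"
THERMOMETER_SERVICE = "00001809-0000-1000-8000-00805f9b34fb"
BATTERY_SERVICE = "0000180f-0000-1000-8000-00805f9b34fb"

# Each "advertisement" lists the service UUIDs the peripheral offers.
advertisements = [
    {"device": "Chest strap", "services": [HEART_RATE_SERVICE]},
    {"device": "Thermometer", "services": [THERMOMETER_SERVICE]},
    {"device": "Watch", "services": [HEART_RATE_SERVICE, BATTERY_SERVICE]},
]

def scan_for_service(adverts, service_uuid):
    """Return the names of devices advertising the given service UUID."""
    return [ad["device"] for ad in adverts if service_uuid in ad["services"]]

print(scan_for_service(advertisements, HEART_RATE_SERVICE))
# → ['Chest strap', 'Watch']
```

The point is that the filter runs over advertisement packets, so an app can wake up only for the kind of peripheral it cares about instead of sifting through every nearby device.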

Android Multi-networking. An interesting remark was that WiFi Direct has not become widely used because you cannot be on multiple WiFi networks at once: if you connect to a peer over WiFi Direct, you lose your Internet WiFi connection. It was mentioned that Google is working on more robust/flexible networking, called Android Multi-networking.

Cloud Save. Google Cloud Save was mentioned as a great option for a quick online data solution, e.g. for startups.

ART (Android Runtime). The new Android Runtime and compiler look very promising. ART enhances performance and solves problems like the pauses caused by Dalvik’s garbage collector. It was also mentioned that there is a new compiler toolchain called “Jack & Jill” that replaces javac.

What would be cool is support for other languages besides Java, specifically dynamically typed languages, with support for on-device development. Compiling and running native programs right on the device – just plug in a keyboard and type ahead – would be very cool indeed! Something like “LuaJIT meets ART” would be sweet :)

Stockholm Google Developer Group (Sept 4, 2014)

Already the next evening, there was another meetup, with Android and Google developers attending.

The Nova AI assistant

The first presentation was made by Alexander Seward, from Veridict.

To put this presentation into context, I am going to start by listing some of the most significant “aha” and “wow” moments I remember having experienced in computer science:

  • First time using the FINE editor in 1981 (an Emacs clone – “FINE Is Not Emacs”).
  • Hands on with the Apple Lisa and the Macintosh in 1984/1985.
  • First time using a Xerox Lisp machine in 1986.
  • First time using a Tektronix Smalltalk machine in 1986.
  • Alan Kay presenting the Vivarium project at a conference in 1986.
  • Randy Smith giving a demo of the Alternative Reality Kit at Xerox PARC in 1986.
  • Watching a video demo of the “Put That There” system sometime in the mid-1980s.
  • Hands on preview of the Xperia X10 Mini at an Android meetup in 2010 (it was so incredibly small and cool).
  • Veridict presenting a demo of the Nova AI assistant in 2014.

What you can do with Nova is ask verbal questions. Alexander did a demo of an Android app scheduled to launch this fall. You simply ask questions, like:

  • “Where is the closest restaurant that serves Italian pizza?”
  • “When is the next bus to the hospital?”
  • “I want to listen to channel three.”
  • “What is 12% of 99.90?”

Answers were presented using voice (a spoken reply), sound (e.g. music or a radio channel), and visual data (e.g. a map). In most cases the answer appeared the instant you finished the question. This is possible because the system starts to analyse what you say as soon as you start speaking, and begins to search for information as soon as some context framing the question has been picked up. Recall that asking a question takes a couple of seconds, which is time the system can use to search for information.
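The idea of answering before the speaker has finished can be sketched with a toy incremental matcher. This is my own illustration, not Veridict’s system: a hypothetical recognizer feeds words in one at a time and fires as soon as it has enough context to answer a percentage question (like the “12% of 99.90” example above):

```python
import re

# Toy incremental "intent" matcher: inspect the partial transcript after
# every word, and answer a "what is X% of Y" question as soon as both
# numbers have been heard -- possibly before the speaker finishes.

PERCENT_QUESTION = re.compile(r"what is ([\d.]+)% of (\d+(?:\.\d+)?)")

def feed_words(words):
    """Feed words one at a time. Return (words_consumed, answer) at the
    earliest point the question can be answered, or (len(words), None)."""
    transcript = ""
    for i, word in enumerate(words, start=1):
        transcript = (transcript + " " + word).strip()
        match = PERCENT_QUESTION.search(transcript.lower())
        if match:
            pct, base = float(match.group(1)), float(match.group(2))
            return i, round(pct / 100 * base, 3)
    return len(words), None

# The answer is ready after 5 of the 8 words have been spoken.
print(feed_words("What is 12% of 99.90 , please ?".split()))
# → (5, 11.988)
```

A real system does vastly more (streaming speech recognition, language understanding, parallel searches), but the principle is the same: partial input is often enough to start, and sometimes to finish, the work.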

I should mention that I have always disliked speaking to voice services (I usually say “I want to speak to a human”, which is never understood!). But the Nova assistant was something entirely different. It gave a sense of control, rather than a sense of being directed. It was simply one of the most amazing demos I have ever seen in computer science. The team at Veridict has done a massively impressive job of interdisciplinary research and development to build a working system.

Interestingly, asking questions almost always beats a GUI-based system for querying. Even a professional ticket sales person using a specialised GUI is slower than the Nova assistant, simply because the system is much faster at doing the searches required to answer a question. And stating a question is a very powerful single-point-of-entry interaction technique: no GUI hierarchies to traverse, no information to search, just ask and receive.

Mobile Games

The next speaker was John Turesson, who gave a great overview of mobile game development using hybrid web technologies. John showed some impressive demos: how CSS3 can be used to achieve fancy graphical effects, how to create animated characters in Blender and import the models into JavaScript, and much more.

It was pointed out that hybrid solutions to mobile app development have many advantages: your code works cross-platform, performance is great with WebGL and CSS3, JavaScript is a high-level language, and many libraries use high-level constructs that greatly reduce the amount of code you need to write.

Two alternative hybrid app engines are PhoneGap/Cordova and Cocoon.js. Cocoon.js boosts performance, but does not allow use of HTML, which can make overlay UI elements in games difficult to implement. Hopefully, we will see a performance increase in the Android WebView widget. If I recall correctly, John mentioned that the latest version of mobile Chrome outperformed Cocoon.js (this might have been for 2D canvas graphics).

I also want to take the opportunity to mention that Evothings Studio is great for developing hybrid apps! (Check out a recent blogpost “Hybrid app development made fast”.)