Android and Material Design

The new Android version, so far referred to as 'L' and most likely to be named Lollipop by the time it is released, comes with a new visual style called Material Design.

For Android this change of visual style means that the code base also needs to serve both old API levels (v20 and lower) as well as the new ones (v21 and upwards). In effect this means you have to create res/layout-v21 and res/values-v21 directories to customize the layout and adjust the styles for the new API.

In your module's build.gradle you have to change the compileSdkVersion to 'android-L' and the targetSdkVersion to 'L'. If you have any dependencies on support-v4 or appcompat-v7 you need to switch those to v21.+ in order to pick up future updates, such as newer release candidates, up to the final release.
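
The relevant parts of such a module's build.gradle might then look like the following sketch (other settings are omitted and the exact values depend on your project):

android {
    compileSdkVersion 'android-L'

    defaultConfig {
        targetSdkVersion 'L'
    }
}

dependencies {
    compile 'com.android.support:support-v4:21.+'
    compile 'com.android.support:appcompat-v7:21.+'
}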

Revisiting Android and support libraries (support-v4 and appcompat-v7)

I have previously written on this subject, but now I am using IntelliJ IDEA 13 with the latest Android SDK as of this writing (September 2014), and when you create a project you might be greeted by an error message like the following:

Error:Gradle: A problem occurred configuring project ':projectname'.
> Could not resolve all dependencies for configuration ':projectname:_debugCompile'.
   > Could not find any version that matches com.android.support:support-v4:0.0.+.
     Required by:
         Projectname:projectname:unspecified
   > Could not find any version that matches com.android.support:appcompat-v7:19.+.
     Required by:
         Projectname:projectname:unspecified

The Android SDK has switched over to Gradle since I last wrote about it. The default setup already searches the local libs directory under Projectname/projectname for any jars to compile into the build of the application. But if you follow the instructions from my previous post, chances are high that you will keep running into this problem. Aside from installing the Android Support Library, you also need to install the Android Support Repository in order to make dependency resolution work. Do verify that your Projectname/local.properties contains a set property for sdk.dir that points to the root of your locally installed Android SDK.

Now, you might still run into problems. The thing is that in your Projectname/projectname/build.gradle you generally want the compile lines for support-v4 and appcompat-v7 to match your targetSdkVersion. So this might become:

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:support-v4:20.0.0'
    compile 'com.android.support:appcompat-v7:20.0.0'
}

These version numbers can be found in the SDK under extras/android/m2repository/com/android/support, under the respective entries for appcompat-v7 and support-v4. If you use + for the version identifier, you run the chance of picking up the latest version, including release candidates, and this might break your build. So in this case being explicit is better than depending on it implicitly.

Edit: On second thought, it might be better to use 20.+ or 20.0.+ for the version identifier in order to automatically pick up bugfix releases down the line. Looking at the release notes of the support library it seems that Google is quite strict in sticking to semantic versioning.
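
With that approach the dependencies block from above would become (with the same caveat as before about checking the m2repository directory for the versions actually present):

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:support-v4:20.0.+'
    compile 'com.android.support:appcompat-v7:20.0.+'
}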

Detecting keyboard layout used on Windows

#include <windows.h>

HKL layout = GetKeyboardLayout(0);

// The low word contains a language identifier for the input language
// and the high word contains a device handle to the physical layout
// of the keyboard.
DWORD localeId = (DWORD)(UINT_PTR)layout;
localeId &= 0xffff;    // mask off the high word to get the language
                       // identifier

switch (localeId) {
    case 0x040c:    // French (France)
    case 0x080c:    // French (Belgium)
    case 0x1009:
        useAzertyMapping();
        break;
    default:
        useQwertyMapping();
        break;
}

Adding android.support.v4 to your Android application in IntelliJ IDEA

You can enable support for certain features from newer Android versions on older ones via the android.support namespace. In order to accomplish this you need to start the Android SDK Manager and make sure that, under the Extras heading, you select and install the Android Support package.

Once done, go into the directory extras/android/support/v4 and copy the android-support-v4.jar to your own project's libs directory. Next, within IntelliJ IDEA, go to File » Project Structure and under Project Settings go to Modules. Make sure your application is selected in the middle pane, then on the right side select the Dependencies tab. In the window below, click the plus icon and select Jars or directories. From the resulting window browse to your libs directory, select the android-support-v4.jar, and press OK to close the window and add the jar file to the dependencies of the project. Since you are now using certain constants from a newer version of Android, the Module SDK needs to be changed to the Android 4.0.3 Platform as well. Press Apply and close the Project Settings by pressing the OK button.

PyCharm and external lint tools

PyCharm already incorporates a number of features found in various tools to lint/check your source code with, but it also offers a way to hook up external tools. Under File > Settings there is a section called IDE Settings, and one of the headings there is called External Tools. Select this heading and then press the Add... button on the right-hand pane to configure a new external tool.

In the Edit Tool window that now appears, fill in a name, e.g. PEP8, a group name Lint, and a description. Next, point Program to the location of the pep8.exe executable, e.g. C:\Python27\Scripts\pep8.exe. For Parameters you need to use $FilePath$, and Working directory should be filled in by default. Once done, you can close the window by pressing the OK button.

Now, pyflakes has no .exe or .bat file to accompany it. You will need to add a pyflakes.bat in your Scripts directory inside Python with the following contents:

@echo off
rem Use python to execute the python script having the same name as this batch
rem file, but without any extension, located in the same directory as this
rem batch file
python "%~dpn0" %*

Within PyCharm you follow largely the same steps as for pep8, but make sure to point Program to the batch file of pyflakes. Close the external tools configuration windows by clicking OK twice. Under the menu heading Tools you should now see a submenu heading Lint which, in turn, should contain two menu items: PEP8 and Pyflakes.

Now open a Python file, go to Tools > Lint > PEP8 and you should get output like the following in your Run (4) window:

D:\Python26\Scripts\pep8.exe D:\pprojects\babel\babel\tests\__init__.py
D:\pprojects\babel\babel\tests\__init__.py:16:1: E302 expected 2 blank lines, found 1

Process finished with exit code 1

On the topic of sensible date and temperature defaults in applications and websites

Something that can always get me a bit frustrated is the choice of defaults used in applications.

Dates: Belize, Canada, the Federated States of Micronesia, Palau, the Philippines, and the United States are the only countries using a date format where the month is the first entry, followed by day, and lastly year (mm/dd/yyyy). To put numbers to that: roughly 436 million people use this format versus 6.35 billion who don't (a ratio of about 14:1). Of those 6.35 billion, about 3.8 billion use a date format where the day is first, followed by month, and lastly year (dd/mm/yyyy — a ratio of about 9:1 to the month-first users). About 1.81 billion use a form where the year is first, followed by month, and lastly day (yyyy/mm/dd, roughly equivalent to ISO 8601 — a ratio of about 4:1 to the month-first users). (Note: these 1.81 billion overlap slightly with the 3.8 billion, since some countries have two date formatting forms in use, or two or more distinct scripts with different date formatting styles.) So using a format where the month is first only confuses the majority of the world's population. If you need a default date format, use ISO 8601 — not only is it less ambiguous, it also allows for much better chronological sorting.
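
For illustration, producing an ISO 8601 date in Python is a one-liner, and the resulting strings sort chronologically even as plain text:

>>> import datetime
>>> datetime.date(2014, 9, 5).isoformat()
'2014-09-05'
>>> sorted(['2014-09-05', '2013-12-31', '2014-01-15'])
['2013-12-31', '2014-01-15', '2014-09-05']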

Temperature: Aside from Belize and the United States (as far as I have managed to find), the worldwide standard for temperature is Celsius, not Fahrenheit. If you default to Fahrenheit you are putting 6.48 billion people at a disadvantage for the benefit of something like 313 million people. That's a ratio of about 22:1, meaning you put 22 people at a disadvantage for every one person you are trying to please.

Disclaimer: do note that this only makes sense if you are appealing to an international audience. If you are targeting a specific country you will of course default to whatever that country uses. On the other hand, properly fixing your code to be i18n-ready is the way to go anyway.
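
As a sketch of what i18n-ready date formatting can look like, a library such as Babel lets you format according to the user's locale instead of hardcoding a single style (the exact output may vary slightly with the CLDR data shipped with your Babel version):

>>> from datetime import date
>>> from babel.dates import format_date
>>> format_date(date(2014, 9, 5), locale='en_US')
u'Sep 5, 2014'
>>> format_date(date(2014, 9, 5), locale='nl_NL')
u'5 sep. 2014'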

Predefined macros

So with the GNU compiler you can use the preprocessor to get a list of the predefined macros:

$ cpp -dM /dev/null

or if you prefer to invoke the preprocessor via gcc itself:

$ gcc -dM -E - < /dev/null

This should give you a list similar to this:

#define __DBL_MIN_EXP__ (-1021)
#define __FLT_MIN__ 1.17549435e-38F
#define __DEC64_DEN__ 0.000000000000001E-383DD
#define __CHAR_BIT__ 8
#define __WCHAR_MAX__ 2147483647

For Microsoft's Visual C++ compiler I have only found documentation pages listing the predefined macros.

For Intel's C++ compiler I found a similar page with its predefined macros.

And I found an interesting page covering a lot of different compilers and the predefined macros you can use to identify them and their versions, if any.

Edit: I also found how to do this with Clang:

$ clang -dD -E - < /dev/null
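
Putting these macros to use, here is a minimal sketch in C of compile-time compiler detection (note that Clang also defines __GNUC__ for compatibility, so it needs to be checked first):

#include <stdio.h>

int main(void)
{
#if defined(__clang__)
    /* Clang masquerades as GCC, so check for it before __GNUC__. */
    printf("Clang %d.%d\n", __clang_major__, __clang_minor__);
#elif defined(__GNUC__)
    printf("GCC %d.%d\n", __GNUC__, __GNUC_MINOR__);
#elif defined(_MSC_VER)
    printf("Visual C++ (_MSC_VER %d)\n", _MSC_VER);
#else
    printf("Unknown compiler\n");
#endif
    return 0;
}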

Clustering and relevant algorithms

Disclaimer: I'm mainland European; we tend to use the , to separate the decimal digits from the whole number.

Clustering is quite a common approach to aggregate coordinates that are relatively close together. The problem lies in the choice of algorithm to use. This choice is highly dependent on the space in which the coordinates are laid out. Quite often you can just use the basic Euclidean distance which, for a 2-dimensional space, simply takes the square root of the sum of the squared differences of the respective coordinates of each point. So if you have a point p with coordinates (33, 52) and a point q with coordinates (82, 19), the distance between p and q would be:

>>> import math
>>> math.sqrt(pow(33 - 82, 2) + pow(52 - 19, 2))
59.076221950967721

And based on that distance you can start to cluster together points that all lie within a certain distance of a given point, say 59,1. The fun part is that this distance is the radius of a circle: if you would plot every possible coordinate at exactly that distance, you would see a circle emerge.
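
As a minimal sketch (the extra points are made up for illustration, and the 59,1 threshold from above is written 59.1 in Python), gathering all points within that radius of a chosen center could look like this:

>>> import math
>>> def euclidean(p, q):
...     return math.sqrt(pow(p[0] - q[0], 2) + pow(p[1] - q[1], 2))
...
>>> center = (33, 52)
>>> points = [(82, 19), (40, 60), (30, 45), (90, 90)]
>>> [p for p in points if euclidean(center, p) <= 59.1]
[(82, 19), (40, 60), (30, 45)]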

In looking at clustering algorithms I also encountered something called the Manhattan distance, but this metric only makes sense if you are working in a grid where movement is restricted to the grid lines. Normally the shortest distance from A to B would be a straight line, as the Euclidean distance shows. However, if the movement from coordinate to coordinate is restricted to horizontal and vertical lines, say the grid layout of a lot of North American cities, then the Euclidean distance does not apply. This is the same problem a taxi faces when trying to find the shortest route to drive from A to B, and as such the metric is also known as the taxicab distance or taxicab geometry. It takes the sum of the absolute differences of the respective coordinates of each point. So if you take points p and q again, the distance would in this case be:

>>> abs(33 - 82) + abs(52 - 19)
82

Now, if you would plot all possible coordinates at that distance you will see a circle emerge again. Keep in mind that a circle is nothing more than the set of points at a fixed distance (the radius) from a center; in this case our geometry just uses a differently defined distance. If you plot this out with a finer and finer grid, the circle shape that emerges is a square rotated 45° so that it rests on its point.
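
You can see this for yourself with a quick sketch that lists all integer points at taxicab distance 2 from the origin; they trace exactly such a diamond:

>>> r = 2
>>> [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1) if abs(x) + abs(y) == r]
[(-2, 0), (-1, -1), (-1, 1), (0, -2), (0, 2), (1, -1), (1, 1), (2, 0)]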

JSONP with Werkzeug

So I had implemented a simple JSON data server with Werkzeug for a classroom experiment. Unfortunately, in my haste to get everything up and running, I totally forgot about the fact that, since we cannot allow the various custom-made webpages to be uploaded to this server, every call made with jQuery's $.ajax() from those pages just fails, as it is then a cross-site request.

So, normally you would do something like the following in order to return JSON data:

return json.dumps(data)

Which would be used with the $.ajax() call in a way like the following:

$.ajax({
  type: "POST",
  url: "http://example.com/json/something",
  data: "parameter=value",
  dataType: "json",
  error: function(XMLHttpRequest, textStatus, errorThrown){},
  success: function(data, msg){}
});

Which is perfectly fine for scripts getting and using the data on the same host/domain. But, as said before, this will fail with warnings similar to: "Access to restricted URI denied" code: "1012" nsresult: "0xdeadc0de (NS_ERROR_DOM_BAD_URI)".

One way out of this is using JSONP. jQuery has a $.getJSON() function, which loads JSON data using an HTTP GET request. Now, the simplistic way to convert your code would be to change it as follows:

$.getJSON("http://example.com/json/something",
  function(data){}
);

But this causes another issue. Since $.getJSON() does not run the retrieved JSON data through eval(), but instead pulls the result in via script tags, it causes, on Firefox at least, an invalid label error. In order to fix this you need to set up the JSON data server to properly support a callback argument, so you can use $.getJSON() the way it is meant to be used:

$.getJSON("http://example.com/json/something?jsoncallback=?",
  function(data){}
);

In the code above the additional parameter jsoncallback will, thanks to jQuery, get its question mark replaced by an alphanumeric string (typically in the form of jsonp followed by a timestamp). The resulting JSON data should be wrapped in a call to this value. This means you would have to change the initial Python code to something like this:

return request.args.get('jsoncallback') + '(' + json.dumps(data) + ')'

Of course this causes problems when you want to reuse the code both for AJAX use on the same host/domain and for use from outside. So in order to make both work you can test whether the callback parameter is present and return the appropriate data. I came up with this little snippet for that:

def jsonwrapper(self, request, data):
    callback = request.args.get('jsoncallback')

    if callback:
        return callback + '(' + json.dumps(data) + ')'
    else:
        return json.dumps(data)
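
For completeness, a sketch of how this could be wired into a Werkzeug view (the view method and the data below are made up for illustration):

from werkzeug.wrappers import Response

def on_something(self, request):
    data = {'parameter': 'value'}
    # jsonwrapper() returns either plain JSON or the callback-wrapped
    # JSONP variant, depending on the request.
    return Response(self.jsonwrapper(request, data),
                    mimetype='application/json')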

Python's sys.stdout loses encoding

When you use Python with sys.stdout you might run into a problem where sys.stdout.encoding suddenly becomes None. This happens because, when the output is piped or redirected, at least under Unix, Python no longer knows anything about the target and cannot determine its encoding. In order to work around this you can add a fallback to locale.getpreferredencoding(). So if you use encode() on a string you can do something like:

import sys
from locale import getpreferredencoding

text = u"Something special"

print text.encode(sys.stdout.encoding or getpreferredencoding() or 'ascii', 'replace')

This is how we currently use it within Babel as well for printing the locale list.