Selenium, Chromedriver2, and SSL pages

If you are doing Selenium testing with Chromedriver2 0.8 and are having problems with self-signed SSL certificates: this is a known problem and will be fixed in a subsequent release. In the meantime I found that Chromedriver 26.0.1383.0 still worked without problems for Chrome 27 and does not have this SSL certificate regression.
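If you pin the older driver, Selenium's Python bindings let you point at a specific chromedriver binary explicitly. A minimal sketch, assuming you have unpacked the 26.0.1383.0 build under /opt (the path and URL are just examples, adjust for your setup):

```python
from selenium import webdriver

# Hypothetical install location for the pinned Chromedriver 26.0.1383.0 binary.
driver = webdriver.Chrome(executable_path='/opt/chromedriver-26.0.1383.0/chromedriver')
try:
    # A page with a self-signed certificate, which this driver build handles fine.
    driver.get('https://selfsigned.example.com/')
finally:
    driver.quit()
```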

Firefox 3.6 and the million proxy password popups

I needed to authenticate with a proxy today, and with Firefox 3.6 I constantly got user/password dialog boxes. Of course, this is annoying. Some searching led me to an article on the Mozilla support site which mentions a setting (which you can reach via about:config) named network.auth.force-generic-ntlm which, when changed from false to true, gets rid of most of these popups.

MathML and SVG in HTML 5 with Firefox

I’ve been using MathML for a while now for some of my documentation work on 3D graphics. Unfortunately, at the moment the only way to include either or both of MathML and SVG is to use the XHTML 1.1 modular doctype. In HTML 5 both have become embedded content, part of the specification itself. So, for example, using MathML would be as simple as:

[html]<!DOCTYPE HTML>
<html>
  <head>
    <meta charset="utf-8">
    <title>MathML test</title>
  </head>
  <body>
    <math>
      <mrow>
        <mi>y</mi>
        <mo>=</mo>
        <msup>
          <mi>x</mi>
          <mn>2</mn>
        </msup>
      </mrow>
    </math>
  </body>
</html>[/html]

Unfortunately the only browser to support either MathML or (parts of) HTML 5 at this moment is Firefox 3.5. However, the MathML or SVG embedded content did not render under 3.5. After reading John Resig’s post about a new HTML parsing engine in Mozilla’s Gecko engine, I set out to test this engine’s support by downloading the latest nightly and setting html5.enable to true in about:config, and lo and behold, it renders as expected.

JSONP with Werkzeug

So I had implemented a simple JSON data server with Werkzeug for a classroom experiment. Unfortunately, in my haste to get everything up and running, I totally forgot that, since we cannot allow various custom-made webpages to be uploaded to this server, everything simply fails with jQuery’s $.ajax(): the browser treats it as a cross-site request and blocks it under the same-origin policy.

So, normally you would do something like the following in order to return JSON data:

return json.dumps(data)

Which would be used with the $.ajax() call in a way like the following:

$.ajax({
  type: "POST",
  url: "http://example.com/json/something",
  data: "parameter=value",
  dataType: "json",
  error: function(XMLHttpRequest, textStatus, errorThrown){},
  success: function(data, msg){}
});

Which is perfectly fine for scripts getting and using the data on the same host/domain. But, as said before, this will fail with warnings similar to: "Access to restricted URI denied" code: "1012" nsresult: "0xdeadc0de (NS_ERROR_DOM_BAD_URI)".

One way out of this is using JSONP. jQuery has a $.getJSON() function, which loads JSON data using an HTTP GET request. Now, the simplistic way to convert your code would be to change it as such:

$.getJSON("http://example.com/json/something",
  function(data){}
);

But this causes another issue. Since $.getJSON() GETs the JSON data but, rather than using eval() on it, pulls the result in via a script tag, it causes an "invalid label" error, on Firefox at least. In order to fix this you need to set up the JSON data server to properly support a callback argument, so you can use $.getJSON() the way it is meant to be used:

$.getJSON("http://example.com/json/something?jsoncallback=?",
  function(data){}
);

In the code above, the question mark in the additional jsoncallback parameter will, thanks to jQuery, be replaced by an alphanumeric string (typically in the form of jsonp followed by a timestamp). The server should wrap the resulting JSON data in this value. This means you would have to change the initial Python code to something like this:

return request.args.get('jsoncallback') + '(' + json.dumps(data) + ')'

Of course this causes problems when you want to reuse the code both for AJAX use on the same host/domain and for use from outside. So in order to make both work you can test whether the callback parameter is present and return the appropriate data. I came up with this little snippet for that:

def jsonwrapper(self, request, data):
    # request is the Werkzeug Request object; data is any JSON-serialisable object
    callback = request.args.get('jsoncallback')

    if callback:
        return callback + '(' + json.dumps(data) + ')'
    else:
        return json.dumps(data)
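Since Werkzeug’s request.args behaves like a dictionary for lookups, the two branches of this helper can be exercised without a running server. In the standalone sketch below, a plain dict stands in for request.args (Werkzeug’s MultiDict offers the same .get() interface):

```python
import json

def jsonwrapper(args, data):
    """Wrap JSON data in a JSONP callback when one was requested.

    args is any mapping with a .get() method, such as Werkzeug's
    request.args; data is any JSON-serialisable object.
    """
    callback = args.get('jsoncallback')
    if callback:
        return callback + '(' + json.dumps(data) + ')'
    return json.dumps(data)

# Cross-site request: jQuery fills in a callback name such as jsonp1234.
print(jsonwrapper({'jsoncallback': 'jsonp1234'}, {'answer': 42}))
# Same-origin request: no callback parameter, so plain JSON comes back.
print(jsonwrapper({}, {'answer': 42}))
```

The first call returns jsonp1234({"answer": 42}), which the injected script tag executes as a function call, while the second returns plain {"answer": 42} for $.ajax() on the same host.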

sendfile() mishandling

As I said in http://www.in-nomine.org/2009/02/22/tinymce-in-wordpress-271-not-working-on-freebsd/ I had problems making TinyMCE work.

After some digging I discovered that the tiny_mce.js file was being oddly truncated and repeated when fetched. Subsequent tests showed that the MD5 hash of the fetched file differed from that of the original file every single time.

During these fetch tests my kernel crashed twice in the network stack. Since my lighttpd configuration uses kqueue and the like, I suspected that perhaps something was not right in lighttpd 1.4.21, which had only been released a couple of days earlier. So I downgraded to 1.4.20. Lo and behold, the problems disappeared. To be absolutely sure, I recompiled lighttpd 1.4.21 as well. At first it seemed to work, but then the corruption kicked in again. After talking about it on the #lighttpd channel on FreeNode I found out it was a problem with lighttpd 1.4.21's handling of sendfile() on FreeBSD.

The fix is now also in the lighttpd port of FreeBSD, so other people should not encounter this problem.

TinyMCE in WordPress 2.7.1 not working on FreeBSD?

Discovered today that with both Firefox 3.0 and Opera 9.63 on FreeBSD, TinyMCE within WordPress 2.7.1 is not letting me use the visual editing mode. I tried TinyMCE’s own example page and it works without problems. Based on this, and the fact that it works on Windows, there must be something weird in either WordPress or its bundled version of TinyMCE on FreeBSD. I logged a post over at the WordPress forums.

Zotero and PDF indexing on FreeBSD

For a while now I have been using Zotero on Firefox to handle researching topics. It also allows PDF indexing, but for this you need to set some things up first. Start by installing xpdf from ports; it is located under graphics/xpdf. This will install pdfinfo and pdftotext, among others. Next go to your Zotero data directory, by default the zotero directory under your profile directory in $HOME/.mozilla/firefox, and create two symbolic links, pdfinfo-`uname -s`-`uname -m` and pdftotext-`uname -s`-`uname -m`, pointing to /usr/local/bin/pdfinfo and /usr/local/bin/pdftotext, respectively.

Now, when you restart Firefox, Zotero should be able to pick up the files. Check by going into Zotero’s preferences and navigating to the Search tab. It should state something to the effect of “pdftotext version UNKNOWN is installed”.

Character encoding in mailcap for mutt and w3m

I use mutt on my FreeBSD system to read my mail. To read HTML mail I simply use a .mailcap file with an entry such as

text/html;      w3m -dump %s; nametemplate=%s.html; copiousoutput

This in effect dumps the HTML with w3m to a text file in order to safely display it. The problem I had is that some emails I receive come from a Japanese translators list and are encoded in Shift_JIS. When dumping, w3m does not properly detect the Shift_JIS encoding, so the resulting output becomes garbled.

When I looked at the attachments in the mail with mutt’s ‘v’ command I saw that mutt at least knows the encoding of the attachment, so I figured there should be a way of using this information in my mailcap. It turns out there is: the charset variable. The mailcap format is actually specified in a full RFC, RFC 1524 to be exact. Mutt furthermore uses the Content-Type header to pull any specific settings into mailcap variables. So a Content-Type: text/html; charset=shift_jis means that %{charset} in the mailcap file will be expanded to shift_jis. We can use this with w3m’s -I flag to set a proper input encoding prior to dumping.

text/html;      w3m -I %{charset} -dump %s; nametemplate=%s.html; copiousoutput

That way you can be relatively sure that the dumped text will be in the appropriate encoding. Of course it depends on a properly set Content-Type header, but if you cannot depend on that, you need to dig out the recovery tools anyway.