Monthly Archives: February 2009

sendfile() mishandling

As I said in my previous post, I had problems making TinyMCE work.

After some digging I discovered that the tiny_mce.js file was being oddly truncated and repeated when fetched. Subsequent tests showed that the MD5 hash of the fetched file differed from that of the original file every single time.

During these fetch tests my kernel crashed twice in the network stack. Since my lighttpd configuration uses kqueue and the like, I suspected that perhaps something was not right in lighttpd 1.4.21, which had been released only a couple of days earlier. So I downgraded to 1.4.20 and, lo and behold, the problems disappeared. To be absolutely sure, I recompiled lighttpd 1.4.21 as well. At first it seemed to work, but then the corruption kicked in again. After discussing it on the #lighttpd channel on Freenode I found out it was a problem with lighttpd 1.4.21’s handling of sendfile() on FreeBSD.
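In the meantime, one possible workaround is to steer lighttpd away from sendfile() entirely. This is only a sketch, assuming lighttpd 1.4.x’s server.network-backend option; check your version’s documentation for the exact backend names:

```
# Hypothetical workaround in lighttpd.conf: use the writev backend
# instead of sendfile() so the broken code path is never exercised.
server.network-backend = "writev"
```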

The fix is now also in the lighttpd port of FreeBSD, so other people should not encounter this problem.

TinyMCE in WordPress 2.7.1 not working on FreeBSD?

Discovered today that with both Firefox 3.0 and Opera 9.63 on FreeBSD, TinyMCE within WordPress 2.7.1 does not allow me to use the visual editing mode. I tried TinyMCE’s own example page and it works without problems. Based on this, and the fact that it works on Windows, there must be something weird about either WordPress or its bundled version of TinyMCE on FreeBSD. I logged a post over at the WordPress forums.

Zotero and PDF indexing on FreeBSD

For a while now I have been using Zotero in Firefox to handle researching topics. It also allows PDF indexing, but for this you need to set some things up first. Start by installing xpdf from ports; it is located under graphics/xpdf and installs pdfinfo and pdftotext, among others. Next, go to your Zotero data directory (by default this is the zotero directory under your profile directory in $HOME/.mozilla/firefox) and create two symbolic links, pdfinfo-`uname -s`-`uname -m` and pdftotext-`uname -s`-`uname -m`, pointing to /usr/local/bin/pdfinfo and /usr/local/bin/pdftotext, respectively.
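The symlink step can be sketched as a small shell snippet. The profile directory name here is an assumption, so point ZOTERO_DIR at the zotero directory under your actual Firefox profile:

```shell
#!/bin/sh
# Create the platform-suffixed symlinks Zotero looks for.
# ZOTERO_DIR is an assumption: substitute your real profile path.
ZOTERO_DIR="${ZOTERO_DIR:-$HOME/.mozilla/firefox/default/zotero}"
PLATFORM="$(uname -s)-$(uname -m)"   # e.g. FreeBSD-i386
mkdir -p "$ZOTERO_DIR"
ln -sf /usr/local/bin/pdfinfo   "$ZOTERO_DIR/pdfinfo-$PLATFORM"
ln -sf /usr/local/bin/pdftotext "$ZOTERO_DIR/pdftotext-$PLATFORM"
```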

Now, when you restart Firefox, Zotero should be able to pick up the files. Check by going into Zotero’s preferences and navigating to the Search tab. It should state something to the effect of “pdftotext version UNKNOWN is installed”.

Defense of the Ancients and Hamachi

So I tried to play Defense of the Ancients (DotA, a Warcraft 3 mod) over Hamachi with some people, only to find out that I couldn’t see the LAN game in the lobby. If you have this issue, first check whether you can telnet to port 6112 of the hoster’s IP address. If that works (you will get disconnected after a few presses of Return), check whether everyone’s Warcraft 3 version is the same. That was the issue I was running into: I had missed the latest patch (1.21b to 1.22a) from back in July 2008.
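The connectivity check can also be done with nc(1) instead of telnet, since it exits cleanly on its own. A minimal sketch, where the host address is a placeholder for the hoster’s Hamachi IP:

```shell
#!/bin/sh
# Probe a TCP port the way you'd check the hoster's Warcraft 3 port.
# HOST is a placeholder: use the hoster's Hamachi IP address instead.
HOST=127.0.0.1
PORT=6112
if nc -z -w 3 "$HOST" "$PORT" 2>/dev/null; then
    RESULT="open"
else
    RESULT="closed or filtered"
fi
echo "port $PORT is $RESULT"
```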

Character encoding in mailcap for mutt and w3m

I use mutt on my FreeBSD system to read my mail. To read HTML mail I simply use a .mailcap file with an entry such as

text/html;      w3m -dump %s; nametemplate=%s.html; copiousoutput

This in effect dumps the HTML to a text file using w3m in order to display it safely. The problem I had is that some emails I receive, being from a Japanese translators list, are in Shift_JIS. When dumping, w3m doesn’t properly detect the Shift_JIS encoding, so the resulting output becomes garbled.

When I looked at the attachments in the mail with mutt’s ‘v’ command, I saw that mutt at least knows the encoding of the attachment, so I figured there should be a way to use this information in my mailcap. It turns out there is, namely the charset variable: the mailcap format is specified in a full RFC, RFC 1524 to be exact. Mutt furthermore maps parameters from the Content-Type header onto mailcap variables, so a Content-Type: text/html; charset=shift_jis header means that %{charset} in the mailcap file will be expanded to shift_jis. We can pass this to w3m’s -I flag to set the proper input encoding before dumping.

text/html;      w3m -I %{charset} -dump %s; nametemplate=%s.html; copiousoutput
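To illustrate what mutt ends up running: the substitution itself is done by mutt internally, but its effect can be sketched like this (mail.html stands in for the temporary file mutt writes):

```shell
#!/bin/sh
# Simulate mutt's mailcap expansion for a part with
# Content-Type: text/html; charset=shift_jis.
ENTRY='w3m -I %{charset} -dump %s'
CHARSET=shift_jis
FILE=mail.html   # placeholder for mutt's temporary file
CMD=$(printf '%s' "$ENTRY" | sed -e "s/%{charset}/$CHARSET/" -e "s/%s/$FILE/")
echo "$CMD"      # w3m -I shift_jis -dump mail.html
```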

This way you can be relatively sure that the dumped text will be in the appropriate encoding. Of course, it depends on a properly set Content-Type header, but if you cannot even depend on that, it is time to dig out the recovery tools anyway.