Monthly Archives: April 2008

The stars in your eyes

Your smile goes so deep,
it makes me happy and I wonder
if I can ever find a star
that matches the brightness of your eyes.
The curve of your lips hides nothing,
everything is there in plain sight.
It is all so simple, yet a beauty beyond compare.
How can I protect that smile of yours?
When I lay the moon and stars at your feet,
will you keep smiling for me?
Or is it better to leave them in the sky above,
so I can catch their reflection in your eyes
as you move ever closer and closer and closer…

Being an author

There are no clouds of boredom in my living sky,
no lack of purpose as time passes by.
I grasp the moment with all my soul,
firm, yet gentle, of my own destiny I take control.
Shaping, like a potter, form from naught,
organically grown, the shape at end matters not
for it is the experience itself, the path I took
that writes the chapters of life in my book.

Help flu research in BE, IT, NL, and PT

For a few years now there’s been a website in the Netherlands and Belgium that asks participants to report their cold and influenza symptoms on a weekly basis.

After that there was a Portuguese site doing the same thing.

And now there is an Italian site as well.

There is still not much known about the migratory patterns and occurrences of flu around the world. These websites will help create more understanding, so please help them out. It takes at most 5 minutes per week, and the information is very useful for scientists (virologists).

Microsoft IME 2007 on Windows x64

So I was updating my input method editors (IMEs) from the default in Windows x64 (IME 2002) to the ones provided by Office 2007’s language packs. As explained in a previous post of mine, you can install the proofing tools and input methods by passing LAUNCHEDBYSETUPEXE=1 to the execution of the MSI. On my Windows x64 machine I installed the IME by running IME64.MSI with this added variable. The weird thing was that some applications worked flawlessly, yet others showed me the wrong number of icons or no icons at all! It turns out that these are 32-bit applications and need the 32-bit IME installed as well. So next to installing the IME64.MSI of the language you want, you also have to install IME32.MSI. Only after doing this will all applications behave as expected.
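Assuming the language pack MSIs follow the usual Windows Installer conventions, the property can be passed on the command line with msiexec (file names as above; the exact paths depend on where your language pack extracted them):

```bat
rem Install both the 64-bit and the 32-bit IME, passing the required property
msiexec /i IME64.MSI LAUNCHEDBYSETUPEXE=1
msiexec /i IME32.MSI LAUNCHEDBYSETUPEXE=1
```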

Thinking back on it, it makes perfect sense, but while you are in the middle of working with it you keep wondering: “why?”

OpenSSH ControlMaster and Subversion

OpenSSH has a fantastic feature called ControlMaster. Basically, this option allows you to create a socket that shares your already opened SSH session to the same host. To enable this option for all hosts, put the following snippet in your $HOME/.ssh/config after creating something like $HOME/.ssh/sockets:

Host *
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%r@%h:%p
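With the snippet above in place, the first login to a host becomes the control master and subsequent connections reuse its socket. You can ask the master whether it is still alive with OpenSSH’s -O option (user and host here are placeholders):

```shell
ssh user@example.com            # first connection creates the control socket
ssh -O check user@example.com   # queries the running master via the socket
```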

For every username@host:port combination it will create a socket in $HOME/.ssh/sockets. The only problem is that the current Subversion (1.4.6 on my FreeBSD box) does not work well with control sockets when using the svn+ssh:// URI scheme. To work around this problem you can add an entry for the specific host before the wildcard entry, for example (with svn.example.com standing in for your repository host):

Host svn.example.com
  ControlMaster no

Host *
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%r@%h:%p

Of course, doing it like this is a bit tedious for every Subversion repository you use in this manner. Thankfully there is another way: in $HOME/.subversion/config there is a section called [tunnels]. If you add the following entry to that section it will disable the ControlMaster for svn+ssh connections:

ssh = ssh -o ControlMaster=no
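In context, the relevant part of $HOME/.subversion/config then looks like this:

```
[tunnels]
ssh = ssh -o ControlMaster=no
```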

Python 2.6 compiler options results

So after yesterday’s post about compiler results with Python 2.6, I wanted to show how some of GCC’s architecture-specific compiler flags affect the execution of pybench. As I explained in the comments, I think most people will never even touch the flags passed to Python’s build. Nonetheless, some people asked if I had tuned it in any way. Pádraig Brady asked me if I had used the optimal GCC architecture flags. On my FreeBSD 7.0-STABLE machine at home (AMD Athlon(tm) 64 X2 Dual Core Processor 4600+ (2411.13-MHz K8-class CPU)) his script said I should pass “-m32 -march=k8 -mfpmath=sse”. My machine is fully 64-bit, so I left out the -m32 (it will not link anyway) and used “-march=k8 -mfpmath=sse”. (Using -march=native instead of k8 was 0,1 seconds faster, and -mtune=native -march=native was 0,1 – 0,2 seconds faster.)
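A build along those lines would look roughly like this (a sketch: the configure flags are the ones from my earlier post, and pybench ships in the Python source tree under Tools/pybench):

```shell
# Sketch: CFLAGS carries the architecture-specific flags
export CFLAGS="-march=k8 -mfpmath=sse"
./configure --with-threads --enable-unicode=ucs4 --enable-ipv6
make
# run the benchmark for 10 rounds with the freshly built interpreter
./python Tools/pybench/pybench.py -n 10
```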

The default option flags on my system are: -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes.

Considering some other comments about how I did not use a 0-origin for my y-axis, I have to point out two things: firstly, given the sometimes close results, zooming out too much can hide detailed information (of course you have to be careful not to zoom in too much either); secondly, I like to make sure the graph itself is appropriately centered so whitespace does not skew the resulting image. As a follower of the Edward Tufte school of graphical display, I think I did reasonably well. The graphs were made with a tool called Ploticus.

GCC 4.2.1 default and architecture options

I was curious how the optimization level influenced the resulting program, so I removed the -O3 option from the compiler flags. As is evident from the graph, you are looking at a bit more than a doubling of the execution time (an average of 14,2 seconds versus the previous 6,6 and 6,5 seconds).

GCC 4.2.1 with no O3

So, given the huge performance hit from merely leaving out -O3, I was interested in how the other optimization levels worked out. Holger Hoffstätte asked me to use -O2 -fomit-frame-pointer instead of -O3. Basically the results of -O3 (average of 6,5 seconds) and -O2 -fomit-frame-pointer (average of 6,5 seconds) were equal. The result of using -O1 was quite interesting (I could not discern much of a speed difference by adding -fomit-frame-pointer; in the -O2 case it was still an average of 6,5 seconds). It already improves execution by ~86%. From -O1 to -O2/-O3 we are looking at another increase of ~16%, and from the unoptimized case to -O2/-O3 execution improves by ~118%.
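The percentages follow from the quoted averages; a quick sanity check (times in seconds, as quoted above):

```python
no_opt = 14.2   # average without optimization
o2_o3 = 6.5     # average with -O2 or -O3

# no optimization -> -O2/-O3: how much faster, as a percentage
print(round((no_opt / o2_o3 - 1) * 100))   # 118

# the ~86% improvement of -O1 over no optimization implies an -O1 average of roughly:
print(round(no_opt / 1.86, 1))             # 7.6 seconds
```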

GCC 4.2.1 with various options

I tried a profile-guided optimization build, but I have some issues on my FreeBSD 7.0-STABLE with libgcov. Apparently only a libgcov.a is provided and linking gives me a relocation warning. Thankfully I also had a GCC 4.2.4 snapshot from March installed and did a PGO build with that, but I only managed to shave off about 0,2 seconds on the average time.
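For reference, a generic two-step PGO build with GCC uses the standard -fprofile-generate and -fprofile-use flags (prog.c here is just a placeholder program, not Python itself):

```shell
gcc -O3 -fprofile-generate prog.c -o prog   # instrumented build
./prog                                      # training run writes .gcda profile data
gcc -O3 -fprofile-use prog.c -o prog        # rebuild guided by the collected profile
```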

Python 2.6a2 execution times with various compilers

Due to recent concerns with memory use and execution speed I was curious how Python would behave with different compilers. I took Python 2.6a2 r62288 from the Subversion repository and compiled it with the flags --with-threads --enable-unicode=ucs4 --enable-ipv6. The machine is an HP dc7700p with 1GB memory and an Intel Core2 6300 @ 1.86GHz running Ubuntu 7.10. I installed GCC 3.3.6, 3.4.6, 4.1.3, and 4.2.1 from the Gutsy repository, and Intel CC 10.1.015. The MS Visual Studio 2008 Python was the MSI snapshot of 2008-04-10 from the main Python site; I ran it through Wine 0.9.46 after installing the VC2008 runtime.

First various GCC versions: 3.3.6, 3.4.6, 4.1.3, 4.2.1:

Python 2.6a2 compiled with GCC

It is good to see that the 3.4 series is faster than the 3.3 series and the 4.2 series is faster than the 4.1 series. I am a bit worried about the 4.1 series’ drop in performance compared to the 3.x series, though.

Next we have Python compiled with GCC 3.4.6, 4.2.1, Intel CC 10.1.015, MSC from Visual Studio 2008:

Python 2.6a2 compiled with GCC, ICC, MSC

It is nice to see that the Microsoft Visual Studio 2008 compiler produces a binary that, even when run through Wine, still performs quite well compared to GCC. I am not quite sure whether Wine incurs a performance penalty or not. What is quite impressive is the performance of the Intel CC compiled Python. If we take the fastest GCC build, which at the moment is 4.2.1 with an average of 6,574 seconds over the 10 rounds of execution, and compare that to the ICC average of 5,412 seconds, we see that ICC is about 21% faster. Against the slowest, GCC 4.1.3 with an average of 7,002 seconds, ICC is about 29% faster.
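The speedup figures can be checked directly from the quoted averages (seconds):

```python
gcc_fastest = 6.574   # GCC 4.2.1 average over 10 rounds
gcc_slowest = 7.002   # GCC 4.1.3 average
icc = 5.412           # Intel CC 10.1.015 average

print(round((gcc_fastest / icc - 1) * 100))  # 21
print(round((gcc_slowest / icc - 1) * 100))  # 29
```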

So it seems that for people who want to get the full performance out of Python, compiling with ICC might be quite beneficial. I want to check how ICC progressed from version 8 to version 10 performance-wise.

The raw data can be found at

Lightning 0.8 released

For those of you who use Thunderbird and want a calendaring option inside Thunderbird that communicates properly with Outlook or Lotus Notes users who send you meeting invitations and the like: Mozilla’s Lightning is now at version 0.8. Lightning is an add-on for Thunderbird based on Sunbird.

If you then also use the Provider for Google Calendar you can synchronise your Google Calendar with your Lightning setup.