After a recent update of VirtualBox from 5.0.20 to 5.0.22 I found that a Windows 7 image of mine suddenly no longer worked. Worse yet, all its settings were empty or reset to default values.
You can most likely recover from this by following these steps:
- Close VirtualBox (although you might not want to do that until you have safeguarded the vbox-prev file, in case it turns out you do not have a versioned backup file)
- Go to the directory where you keep your VM files
- In there look for your faulty image’s directory and cd into it
- Look for a vbox file with the same name as your image, most likely also containing a version number since the settings/metadata file got upgraded. DO NOT lose this file.
- Run
vboxmanage list vms
- Copy the UUID listed and run
vboxmanage unregistervm <UUID>
- Copy the backup vbox file over the existing, wrong one
- Run
vboxmanage registervm /path/to/file.vbox
- Most likely VBoxManage will error out; in my case it complained about a conflict between DVD images with different UUIDs
- Edit the vbox file and remove the offending line (easy in the case of the DVD image, possibly more difficult for other error cases)
- Run the register command again:
vboxmanage registervm /path/to/file.vbox
- VBoxManage should now not error out
- Start VirtualBox and your VM should be OK again
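The command-line part of the steps above can be sketched as a single sequence. The VM name, file names, and UUID below are made-up placeholders; the sed expression just pulls the UUID out of a line in the format that vboxmanage list vms prints:

```shell
# A sample line in the format `vboxmanage list vms` prints
# (VM name and UUID are made up):
sample='"Windows 7" {12345678-abcd-ef01-2345-6789abcdef01}'

# Extract the UUID between the curly braces
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*{\(.*\)}.*/\1/p')
echo "$uuid"   # → 12345678-abcd-ef01-2345-6789abcdef01

# With the UUID in hand the remaining steps would be:
#   vboxmanage unregistervm "$uuid"
#   cp "Windows 7.vbox-prev" "Windows 7.vbox"    # restore the backup copy
#   vboxmanage registervm "/path/to/Windows 7.vbox"
```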
Of course VirtualBox’s behaviour here is downright dangerous. If there is such a conflict or error it should NEVER mess with your metadata file and thereby corrupt it. This is one of the biggest sins in software engineering: only after an application has started successfully are you allowed to write out any updates to settings files and the like.
I like Meld as a visual diff/merge tool. You can also use it as the default in TortoiseHg. Open TortoiseHg’s Workbench, go to
Settings and make sure the
global settings tab is active. Click
Edit File; if it does not yet exist, create a section called
[extdiff] and under it add
cmd.meld = /path/to/meld. On Windows this would be something like
cmd.meld = C:\Program Files (x86)\Meld\meld.exe. From the command line you should then be able to use hg meld to get your diff shown in Meld.
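Put together, the section in the configuration file opened via Edit File might look like this. The extensions line is my assumption for making hg meld work from the command line, and the Windows path is just an example:

```ini
[extensions]
extdiff =

[extdiff]
cmd.meld = C:\Program Files (x86)\Meld\meld.exe
```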
So today I was trying to fix a problem with the deployment of some software with Ansible. I am using
get_url in combination with an
http_proxy environment setting in order to pull in a file from an HTTP URL. However, when I ran the playbook I was greeted with an
[Errno 111] Connection refused error message. After fixing the proxy to have the netblock properly configured I tested again and was greeted by the same error. The problem became more confounding when a test with curl on the command line, using the same proxy parameters, actually worked; the proxy was running as it should. After much testing and trying to figure out what was going wrong, I replaced the
get_url with a
command: curl task to test whether it might be Ansible itself. The output of curl was enlightening: it turned out the HTTP URL was 301 redirecting to another HTTP URL, which in turn was 302 redirecting to an HTTPS URL! And since I wanted to be explicit I had not added the https_proxy environment variable.
The question now is how to fix this. Is it documentation? An Ansible code fix? A Python code fix?
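On the playbook side the fix can be sketched like this: declare both proxy variables so a redirect from HTTP to HTTPS keeps going through the proxy. The URL and proxy host below are placeholders:

```yaml
- name: Fetch a file through the proxy
  get_url:
    url: http://example.com/some/file.tar.gz
    dest: /tmp/file.tar.gz
  environment:
    http_proxy: http://proxy.example.com:3128
    https_proxy: http://proxy.example.com:3128
```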
The new Android version, so far called ‘L’ and most likely ending up being called Lollipop when it is released, has a new visual style called Material Design.
For Android this change of visual style means that the code base also needs to serve both old API deployments (lower than v20) as well as the new ones (v21 and upwards). In effect this means you have to create
res/values-v21 directories to customize the layout and modify the styles for the new API.
In your module’s
build.gradle you have to change the compileSdkVersion to
'android-L' and the minSdkVersion/targetSdkVersion to
'L'. If you have any dependencies on support-v4 or appcompat-v7 you need to switch those to v21.+ to pick up future updates, such as the different release candidates up to the released version.
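As a sketch, the relevant build.gradle bits for the L preview might look like this; the exact values depend on your project:

```groovy
android {
    compileSdkVersion 'android-L'

    defaultConfig {
        minSdkVersion 'L'
        targetSdkVersion 'L'
    }
}

dependencies {
    // v21.+ picks up future updates of the new support libraries
    compile 'com.android.support:support-v4:21.+'
    compile 'com.android.support:appcompat-v7:21.+'
}
```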
I have previously written on this subject, but now I am using IntelliJ IDEA 13 with the latest Android SDK as of this writing (September 2014), and when you create a project you might be greeted by an error message like the following:
Error:Gradle: A problem occurred configuring project ':projectname'.
> Could not resolve all dependencies for configuration ':projectname:_debugCompile'.
> Could not find any version that matches com.android.support:support-v4:0.0.+.
> Could not find any version that matches com.android.support:appcompat-v7:19.+.
The Android SDK has switched over to Gradle since I last wrote about it. In this case the default setup already searches the local
libs directory under
Projectname/projectname for any jars to compile into the build of the application. But if you follow the instructions from my previous post, chances are high that you keep running into this problem. Aside from the installation of the
Android Support Library, you will also need to install the
Android Support Repository in order to make dependency resolution work. Do verify that your
Projectname/local.properties contains a set property for
sdk.dir that points to the root of your locally installed Android SDK.
Now, you might still run into problems. The thing is that in your
Projectname/projectname/build.gradle you generally want to have the compile lines for
appcompat-v7 match the version of your
targetSdkVersion. So this might become:
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:support-v4:19.1.0'
compile 'com.android.support:appcompat-v7:19.1.0'
These numbers can be found in the SDK under
extras/android/m2repository/com/android/support under the respective entries for
support-v4. If you use
+ for the version identifier, you run the chance of picking up the latest version, including release candidates, and this might break your build. So in this case being explicit is better than depending on it implicitly.
Edit: On second thought, it might be better to use 20.+ or 20.0.+ for the version identifier in order to automatically pick up bugfix releases down the line. Looking at the release notes of the support library it seems that Google is quite strict in sticking to semantic versioning.
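For example, pinning to the 20.x series while still picking up bugfix releases might look like this:

```groovy
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    // 20.+ follows any 20.x bugfix release, but never jumps to 21.x
    compile 'com.android.support:support-v4:20.+'
    compile 'com.android.support:appcompat-v7:20.+'
}
```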
In an earlier post I documented how to set up an encrypted file store for your keyring. With recent versions of Python keyring (at least 3 and up) the
CryptedFileKeyring backend got removed and replaced by
EncryptedKeyring. So in your
$HOME/.local/share/python_keyring/keyringrc.cfg you now need to have the following:
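Presumably something along these lines; the exact module path of EncryptedKeyring depends on your keyring release, so verify it against the documentation of the version you have installed:

```ini
[backend]
default-keyring=keyring.backends.file.EncryptedKeyring
```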
If PyCharm complains that it
Can't start Mercurial: /usr/bin/hg Probably the path to hg executable is not valid, then check whether running hg from the command line triggers a problem with a certain extension. In my case I had versions of Mercurial and
mercurial_keyring that did not play nice with each other. After upgrading these to 3.0.5 and 0.6.0 respectively, the problem went away. I guess PyCharm tests a run of the hg binary and if the shell return code (
echo $?) is something other than 0 it will show this warning.
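That check can be reproduced by hand. A small hypothetical helper shows the idea: run a command, throw away its output, and report the return code:

```shell
# Run the given command, discard its output, print the return code
# (the same value `echo $?` would show).
check_cmd() {
  rc=0
  "$@" >/dev/null 2>&1 || rc=$?
  echo "$rc"
}

check_cmd true    # → 0
check_cmd false   # → 1
```

Running something like check_cmd hg version would print a non-zero code when a broken extension makes hg bail out, which is presumably the situation PyCharm reacts to.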
If you are doing Selenium testing using Chromedriver2 0.8 and are having problems with self-signed SSL certificates: this is a known problem and will be fixed in a subsequent release. In the meantime I found that Chromedriver 26.0.1383.0 still worked without problems with Chrome 27 and also did not have this SSL certificate regression in it.
Mercurial allows for tying in keyring configuration for those of us who do not want to store passwords in plain-text in our
.hgrc files or constantly using SSH.
First install the Python keyring library by running
pip install keyring. After that is installed, check out https://bitbucket.org/Mekk/mercurial_keyring/ and add to
$HOME/.hgrc the following:
[extensions]
mercurial_keyring = ~/path/to/mercurial_keyring/mercurial_keyring.py
Next up, configure your repositories, e.g. in the case of Bitbucket I use:
[auth]
bitbucket.prefix = bitbucket.org/asmodai
bitbucket.username = asmodai
bitbucket.schemes = https
Mercurial keyring will automatically decide on the best keyring backend to use. On a FreeBSD system without Gnome or other systems providing a keyring, if you do not specify a backend, it will use the file
~/.local/share/python_keyring/keyring_pass.cfg. This file stores the passwords Base64-encoded, in plain text, which is not quite what you would want from a security point of view. You can configure which backend store to use by editing
$HOME/.local/share/python_keyring/keyringrc.cfg. To get a plain-text file with encrypted keys use the following configuration:
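For the keyring versions of that era this presumably was (the backend module path changed in later keyring releases):

```ini
[backend]
default-keyring=keyring.backend.CryptedFileKeyring
```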
This will create the file
~/.local/share/python_keyring/crypted_pass.cfg after initializing the backend store with a password. Look at the documentation for keyring to see what other configuration options are available.
Note: make sure the PyCrypto dependency is installed with the
_fastmath module. This in turn depends on the GMP library being available on your system.
If you have a Subversion repository set up with multiple top-level projects and their typical branches/tags/trunk layout and want to migrate these to individual Mercurial (Hg) repositories, you can do this with the convert extension.
First you need to enable convert in your
.hgrc by adding a section like the following:
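Enabling a bundled extension only takes an empty assignment, so the section looks like this:

```ini
[extensions]
convert =
```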
Next, if needed, create a plain-text file, e.g. author-map.txt, containing SVN username to Hg author mappings, e.g.
asmodai=Jeroen Ruigrok van der Werven <email@example.com>.
Next run Hg as follows:
hg convert --authors author-map.txt --config convert.svn.branches=project/branches --config convert.svn.tags=project/tags --config convert.svn.trunk=project/trunk path/to/svn/repository path/to/destination/hg/repository
This will start an SVN to Hg conversion, picking up only the changes and commit messages applicable to the various paths you gave for the branches, tags, and trunk, effectively splitting this project off from the main SVN tree into its own Hg repository.
Do note that for large SVN repositories this might not be the most efficient way forward. In that case converting once from SVN to Hg and then splitting that Hg repository into many smaller Hg repositories might be faster. I will adjust this post when I write that up.
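If you have many top-level projects, the conversion can be scripted. Here is a dry-run sketch that only prints the commands it would run; the project names and paths are placeholders:

```shell
SVN_REPO=/path/to/svn/repository
HG_DEST=/path/to/destination

# Print (rather than run) one hg convert invocation per project;
# drop the echo to actually perform the conversions.
for project in projectA projectB; do
  echo hg convert --authors author-map.txt \
    --config convert.svn.branches="$project/branches" \
    --config convert.svn.tags="$project/tags" \
    --config convert.svn.trunk="$project/trunk" \
    "$SVN_REPO" "$HG_DEST/$project"
done
```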