If you try to use the PS Vita's Near functionality over WiFi and get "Location data cannot be obtained" constantly, try to turn off the Vita completely (hold the power button at the top for a few seconds) and then turn it on again. This at least solved it for my PS Vita.
Found some information in two of Intel's chipset datasheets about this issue:
The board utilizes 4 GB of addressable system memory. Typically the address space that is allocated for PCI Conventional bus add-in cards, PCI Express configuration space, BIOS (firmware hub), and chipset overhead resides above the top of DRAM (total system memory). On a system that has 4 GB of system memory installed, it is not possible to use all of the installed memory due to system address space being allocated for other system critical functions. These functions include the following:

- BIOS/firmware hub (2 MB)
- Local APIC (19 MB)
- Digital Media Interface (40 MB)
- Front side bus interrupts (17 MB)
- PCI Express configuration space (256 MB)
- MCH base address registers, internal graphics ranges, PCI Express ports (up to 512 MB)
- Memory-mapped I/O that is dynamically allocated for PCI Conventional and PCI Express add-in cards
And the other note:
Memory between 4GB and 4GB minus 512MB will not be accessible for use by the operating system and may be lost to the user, because this area is reserved for BIOS, APIC configuration space, PCI adapter interface, and virtual video memory space. This means that if 4GB of memory is installed, 3.5GB of this memory is usable. The chipset should allow the remapping of unused memory above the 4GB address, but this memory may not be accessible to an operating system that has a 4GB memory limit.
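To make the numbers in the first datasheet quote concrete, here is a small illustrative sketch (my own, not from the datasheets) that sums the fixed reservations listed there and subtracts them from the 4 GB address space. Note that the dynamically allocated MMIO for add-in cards is not included, and the 512 MB MCH figure is an "up to" value, so the real usable amount varies per system.

```python
MB = 1
GB = 1024 * MB

# Fixed reservations as listed in the datasheet quote above.
reserved_mb = {
    "BIOS/firmware hub": 2,
    "Local APIC": 19,
    "Digital Media Interface": 40,
    "Front side bus interrupts": 17,
    "PCI Express configuration space": 256,
    "MCH BARs / internal graphics / PCIe ports": 512,  # "up to" value
}

total_reserved = sum(reserved_mb.values())
usable = 4 * GB - total_reserved

print(f"Fixed reservations: {total_reserved} MB")
print(f"RAM still addressable below 4 GB: {usable} MB (~{usable / GB:.2f} GB)")
```

With these worst-case figures the fixed reservations come to 846 MB, leaving roughly 3.2 GB of the installed 4 GB addressable, which is in the same ballpark as the "3.5 GB usable" mentioned in the second note (which only counts a 512 MB reservation).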
An item over at the Inquirer said that at E3 they were unofficially informed that Intel is working on getting back into the games graphics market. This was in May.
In July we have the following item over at the Inquirer in which Intel reveals its G965 graphics solution, which uses technology licensed from PowerVR, back then codenamed Eurasia and now known as GSX (Graphics Shader Accelerator). Furthermore, the article tells us that Intel will also start work on using Muse (Media Unified Shading Engine) for its graphics offering. Simplistically, compare Muse to what Cell did: cheap cores in a parallel setup, enabling improved calculations. In this case, with multiple GSX cores (thanks to the Universal Scalable Shader Engine, I think, which combines the pixel and vertex shaders into one design) it might even build a multi-pipeline graphics solution. The only funny thing is that Wikipedia's PowerVR article lists Muse as a mobile solution according to a company presentation. And indeed, when one checks another article at the Inquirer, you see that the roadmap introduces both Muse and Athena, the first aimed at portable computing, the latter aimed at desktops (introducing programmable shaders). Of course, if the GSX is an SoC design, the lower energy consumption and die size (and gate count) would be advantages in themselves.
Funnily enough, on the Intel site right now there are two job openings (posted end of July and early August) in the United States for a Senior Graphics Software Engineer and a Software Engineer 3D Graphics. The first seems to focus on identifying, from the end-user (including game developer) point of view, where the bottlenecks in the hardware design lie. The latter job seems to focus on driver improvements and driver support for OpenGL 2.0 and DirectX 10. Coincidence?
According to a somewhat older article at LaptopLogic, NVIDIA has an idea to make the system use its integrated, simpler GPU for handling day-to-day desktop graphics, while switching to the stronger and more featureful (and often more power-hungry and hotter-running) GPU when needed for 3D work or gaming.
Interesting idea, but with SoC designs coming from Imagination Technologies, Falanx, and other designers that reduce a GPU's power consumption and heat build-up, one can wonder if such a design is interesting enough to work out.
According to a statement by AMD/ATI:
We've always supported open source, and for relevant markets such as servers, we release open source drivers so that companies such as Red Hat can include them in their distros. [...] However, for other markets, such as workstation and consumer, performance and feature differentiation are key metrics. Proprietary, patented optimizations are part of the value we provide to our customers and we have no plans to release these drivers to open source. [...] In addition, multimedia elements such as content protection must not, by their very nature, be allowed to go open source.
This makes one wonder. AMD, like Intel, has always published the specifications and programming manuals for its processors and chipsets.
In related news, Intel asked to be allowed to serve a subpoena (a legal writ that calls you to attend and act as a witness in a judicial proceeding, under penalty in case of disobedience) on ATI in the anti-trust case versus AMD.
With the recent news of AMD acquiring ATI there was some incorrect reporting that AMD would be dropping the ATI brand in favour of tacking Radeon onto its own AMD brand name. According to Ars Technica the name drop was a miscommunication that spread like wildfire through the blogosphere as well as online news. This is both the advantage and disadvantage of the quick turn-around time of online media. One would think that serious technical journalists, however, would verify this with AMD and/or ATI themselves before reporting.
In related news, Intel revealed a website where it is offering source code for its 965 IGP. And it becomes even more interesting when put into perspective with an article from InfoWorld which says that AMD is strongly considering open-sourcing at least part of the ATI graphics drivers. One can only wonder how NVIDIA will react to these developments.