It’s too bad that they don’t actually tell you about this when you are, I don’t know, actually purchasing your computer!
I picked up a new Dell Inspiron 530 computer back in early October of 2007 and saw that, in spite of having paid for 4 Gigabytes of RAM, the system reported only 3.2 Gigabytes.
In looking around I saw various folks talking about BIOS issues and the like, but I’m pretty sure that this MS article explains the situation best.
To paraphrase: your chip set can only address up to 4 Gigabytes of memory, and addressing for other devices (video card memory is given as an example) must also come out of that. So if you have 2 or 3 Gigabytes of RAM installed, the other pieces can still be addressed by the operating system and you don’t notice anything. But install 4 Gigabytes of RAM and the addresses Vista needs for those devices have nowhere else to come from, so they eat into the addressing available for your RAM.
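To make the arithmetic concrete, here’s a little sketch. The device reservation sizes below are my own made-up placeholders, not Dell’s actual memory map; the point is only that RAM and device addressing compete for the same 4 Gigabyte space:

```python
# Back-of-the-envelope sketch of a 32-bit address-space ceiling.
# The reservation sizes are illustrative placeholders, not real numbers.

GB = 1024 ** 3
MB = 1024 ** 2

ADDRESS_SPACE = 4 * GB  # 2**32 bytes: all a 32-bit chip set can address

# Hypothetical device reservations carved out of that same space:
DEVICE_RESERVATIONS = {
    "video card memory": 256 * MB,
    "PCI devices / chip set / BIOS": 512 * MB,
}

def usable_ram(installed_ram: int) -> int:
    """RAM the OS can actually reach once devices take their share."""
    reserved = sum(DEVICE_RESERVATIONS.values())
    return min(installed_ram, ADDRESS_SPACE - reserved)

for gigs in (2, 3, 4):
    print(f"{gigs} GB installed -> {usable_ram(gigs * GB) / GB:.2f} GB usable")

# 2 GB installed -> 2.00 GB usable  (fits under the ceiling, no loss)
# 3 GB installed -> 3.00 GB usable
# 4 GB installed -> 3.25 GB usable  (roughly the 3.2 GB my system reports)
```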
The proposed workaround is to ensure that you use a chip set that supports at least 8 Gigabytes of address space.
Ah, well, as a consumer I suppose I should have known that. Silly me.
This series of exchanges in “James Hayes’ Blog” indicates that there is still an advantage to going with the 4 Gigabytes of RAM if you want more than 2 Gigabytes (check the comments section and look for the postings by “DellCA”). I can’t vouch for what is being expressed, but I can say that they knew about the issue nearly a year before I bought my system and made no effort either to inform me or to correct my purchase options.
Of course I cannot see how much the wasted 0.75 Gigabytes of memory has cost me; Dell’s pretty savvy in how they report the computer options on the receipt: one lump-sum price. But I think that one of the posters in the blog comments is right that what Dell is liable for is 3/8 of the cost of the hefty 2 Gigabyte upgrade price. Not so much because the memory isn’t usable, but because they knew it to be unusable and blithely offered the option anyway.

Let’s face it: were I a *real* computer hardware expert, I would be piecing together my own system, not buying from Dell anyway. We buy from Dell because we know enough to want to customize our systems for a known need (I know my computer habits mean that I need more memory than average) but do not want to spend all of our waking hours troubleshooting those systems. Dell’s biggest value to me is that they will ensure that the pieces I have chosen all work together properly and then deliver the result to me so I can just get on with using it.
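If I’m reading that poster right, the arithmetic behind the 3/8 figure is simply:

```python
wasted_gb = 0.75   # RAM the chip set can't address (4 GB bought, ~3.25 GB usable)
upgrade_gb = 2.0   # size of the 2 GB RAM upgrade Dell sold me
print(wasted_gb / upgrade_gb)  # 0.375, i.e. 3/8 of the upgrade is unusable
```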
Dell has fallen down on the job here.
I was going to say try running it under VMware, but if that ‘solved’ the problem, James’ blog would likely have noted that.
I was also going to say “why would you want more than 640K”, but even then, drivers took out the top 5-10%.
I guess for all practical purposes, one might as well go for 3 gig of memory, but hey, what’s 100 bucks?
VMware is a curious beastie to me. If I’m running it under Vista, wouldn’t the best I could hope for be that the VM reflects 100% of what Vista reports? I’d expect it would be somewhat less, as the Vista-consumed resources would also be unavailable to any VMs.
Or does VMware do something fancy to emulate the base-OS-consumed resources and make the VM think it’s got the whole hardware box to itself?
Happy New Year BTW.
I haven’t looked at VMware myself yet, but from what people have told me, at boot time it loads first and creates ‘virtual machines’ on your hardware. Then you go to a virtual machine and start an operating system on it (e.g. Vista). Then you can go to a second machine and start another operating system (e.g. Linux, or another instance of Vista).
With OS/2, the DOS boxes actually had 640K of memory usable, so I thought VMware might perform similar ‘magic’.
At work, there is a big push to move off real servers to virtual servers. I’m not sure what ‘virtualizing’ software they have been using, but they are moving to VMware, as currently a virtual server has a limit of just over 3 gig of memory and a single CPU image. With VMware, we can get virtual servers with more than 3 gig of memory and multiple CPU images. (I don’t know what the real hardware requirements are for this.) I would think you could create a virtual image with more memory than is really on the hardware (though it might perform poorly).
Ah, that sounds like a “hypervisor”. I was listening to a podcast about that a couple of months ago.
There seem to be a couple of flavors. The hypervisor is what you describe above; the other one that I’ve read about on VMware’s site is where you just run some kind of executable from within your OS and you get your VM perched on top of everything else.
I think the latter is largely intended to be used as an “appliance”, where something like Oracle or another specialized app can be distributed on a customized, properly configured platform rather than relying on the end user to set things up correctly. Also, I suppose, different software packages may have conflicting requirements, and this would be a cool way to get around that issue.
I was actually looking at VMs for home use so I could have a virtualized version of my laptop and/or desktop that I could back up nightly; then, in the event of a problem, I could just restore an older copy. Also, I could run my laptop VM on my desktop for configuration and consistency purposes but take it with me when I traveled.
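As a sketch of what I mean, assuming a VMware Workstation-style setup where a powered-off VM is just a folder of files (a .vmx config plus .vmdk disks) — and note the paths and names here are hypothetical placeholders:

```python
# Rough sketch of the nightly VM backup idea. Assumes the VM is powered
# off and lives in one folder (VMware-style .vmx config plus .vmdk disks).
# All paths are hypothetical placeholders.
import shutil
from datetime import date
from pathlib import Path

VM_DIR = Path("C:/VMs/laptop-vm")    # hypothetical folder holding the VM
BACKUP_ROOT = Path("D:/vm-backups")  # hypothetical backup destination

def backup_vm(vm_dir: Path, backup_root: Path) -> Path:
    """Copy the whole VM folder to a date-stamped backup directory."""
    dest = backup_root / f"{vm_dir.name}-{date.today().isoformat()}"
    shutil.copytree(vm_dir, dest)    # fails if today's backup already exists
    return dest

if __name__ == "__main__":
    print("Backed up to", backup_vm(VM_DIR, BACKUP_ROOT))
```

Restoring an older copy would then just be a matter of copying a dated folder back into place and pointing VMware at the .vmx inside it.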
A lot of potential there, but I really couldn’t figure out the licensing for the OS(es) or which product(s) would be needed from VMware. I figure it will take about 3-5 years before they make the software / pricing / offerings useful to folks like me.
I *do* rather like the idea of taking my computer around with me on a USB key or my iPod and just plugging it into any computer, working as if I was at home and then unplugging without leaving a trace on either machine.