





Myths




Updated: 06.07.2017






This page contains especially important articles written by me (most of them exposing or debunking various PC-related myths). As with the content on the other pages of this website, most of it is personal (things that I have discovered myself), while some of it is based on material I found while browsing the Internet (those are the articles written by others). Note, however, that even the non-personal ones are not just copied and pasted: I rewrote the text in my own words, though the general point remains the same.

For example, all the articles on this "myths.html" page are written by me. On the other pages under the "articles" section (i.e. "articles1.html" and "articles2.html") it varies: for the "COMPUTER-IDENTIFICATION ON THE NET" article I got the general idea from an article on some other website, but then added a few things, removed others that I considered non-relevant, and changed the rest. It is quite similar with "RUNNING A COMPUTER NON-STOP OR NOT" (the part about "MTBF ratings" is not solely mine) and with "THE MEMORY-FREEING PROGRAMS MYTH" (the first part is modified text from the linked articles); all the others are written entirely by myself. In any case, to write an article one needs to get that knowledge (and/or an idea) somewhere, so in my opinion the "originality" of something is a rather relative thing.

To summarize at the very beginning, in two short sentences, what is described at length in the two articles below (i.e. "THE REGISTRY-CLEANING SOFTWARE MYTH" and "THE MEMORY-FREEING PROGRAMS MYTH"): I have never seen an application or system crash because of so-called "low memory conditions". Further, all the claims you hear about "registry cleaning" (even if it worked that way) would only be true if you were running at 100% of RAM used most of the time (which is, by the way, a good thing, not a bad one), so "slowdowns" caused by too many applications running (again, as long as they don't fill up the entire RAM) are a completely non-existent thing.






DEFRAGMENTING XP'S PAGEFILE MYTH


The fact is that the pagefile really can't get fragmented enough to cause noticeable performance issues, except maybe in "pathological" cases. The pagefile is accessed randomly, no more than 64 KB at a time, and the chances that the next access will be to a region adjacent to the previous one are just about nil, so the next access will need a head movement anyway. If you're paging to the pagefile a lot, chances are you are paging in from code files and paging for the file cache as well; those I/Os will be interleaved with the pagefile accesses. So it really doesn't matter much whether the pagefile is fragmented or not: you're going to be moving the heads around a lot between successive pagefile accesses anyway. Pagefile fragmentation simply doesn't affect the system's performance, since typically what's being read and written are individual memory pages (4 KB, i.e. the same as the cluster size on most NTFS-formatted drives). It's not as if the whole file were being read contiguously. And there is no "constant resizing", as you may have been led to believe.
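To make the access pattern concrete, here is a minimal Python sketch of page-sized reads at random offsets against a toy "pagefile" (the file, page count, and read count are made up for illustration). The point is only that each access lands at an effectively unpredictable offset, so contiguity between neighbouring pages on disk buys almost nothing:

```python
import os
import random
import tempfile

PAGE = 4096  # one memory page; also the common NTFS cluster size

def random_page_reads(path, n_pages, n_reads, seed=0):
    """Read pages in a random order, the way pagefile I/O behaves:
    every access seeks to an unpredictable page-aligned offset."""
    rng = random.Random(seed)
    out = []
    with open(path, "rb") as f:
        for _ in range(n_reads):
            page_no = rng.randrange(n_pages)
            f.seek(page_no * PAGE)
            out.append((page_no, f.read(PAGE)))
    return out

# Build a toy "pagefile" where each page is stamped with its own number,
# so we can verify that each random read returned the right page.
tmp = tempfile.NamedTemporaryFile(delete=False)
N = 64
for i in range(N):
    tmp.write(i.to_bytes(2, "little") * (PAGE // 2))
tmp.close()

reads = random_page_reads(tmp.name, N, 16)
for page_no, data in reads:
    assert data == page_no.to_bytes(2, "little") * (PAGE // 2)
os.unlink(tmp.name)
```

Whether page 17 happens to sit next to page 18 on the platter is irrelevant here; the seek to the next randomly chosen page dominates either way.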

If you set the pagefile to "system managed size", what you'll get is a default size of 1.5x RAM (or 1x RAM if RAM > 512 MB) and a maximum size of twice the default. There is no more "constant resizing" than if you set the parameters to these values yourself. Yes, if the initial allocation turns out to be too small, the OS will extend the pagefile; that's a good thing, and preferable to having apps crash with "out of virtual memory" errors. The file will return to its original size (and fragmentation state) upon the next reboot at the latest, or sooner if nobody is using the extended area. Neither the addition nor the deletion of the extra space takes significant time. If the OS has to extend the pagefile, the new piece will be discontiguous with the old; this is not a problem except in pathological cases, for the reasons described above, and even if it were a problem it would be preferable to an app simply failing. Also, if you are paging enough for pagefile performance to matter, you are very likely paging to many other files as well: .exes, .dlls, other pageable code files, and all the mapped files that are being handled by the file cache, so moving one single file to a hard-disk of its own just won't make a big difference at all. Oh, and please check out the various related posts by the user with the nick DriverGuru (http://episteme.arstechnica.com/eve/personal?x_myspace_page=profile&u=3880942621) on the "Ars OpenForum" forum; he has explained all this numerous times in lengthy forum posts, and much better than me.
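The sizing rule above can be expressed as a tiny helper; note this is only a sketch of the rule as stated here (sizes in MB), not of any actual Windows code:

```python
def system_managed_pagefile_mb(ram_mb):
    """Default ("system managed") pagefile sizing as described above:
    initial size is 1.5x RAM (or 1x RAM when RAM exceeds 512 MB),
    and the maximum is twice the initial size."""
    initial = ram_mb if ram_mb > 512 else int(ram_mb * 1.5)
    return initial, initial * 2

# A 512 MB machine gets a 768 MB initial / 1536 MB maximum pagefile;
# a 1024 MB machine gets 1024 MB initial / 2048 MB maximum.
```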






THE CACHING OF THE PAGEFILE MYTH


I've read various "getting rid of the pagefile" related threads so far; for instance, here is the latest one, titled Getting Rid of pagefile.sys: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/992004887731 on the forum on the "Ars Technica" website (and there are numerous others dealing with this), and additionally I've also read the Caching pagefile increases performance 25-50% no joke thread: http://forums.2cpu.com/showthread.php?t=10014 on the 2CPU.com discussion forums. This article/entry deals with "caching of the pagefile" (with the help of software such as SuperCache, for instance) and also with "placing the pagefile on a RAM-drive/disk". You see, the main question is: why go through the extra steps of having pages moved in and out of your working set or the modified-list cache just to be moved to a pagefile held in memory? Why not rather use that extra memory for larger working sets and modified lists? If there is memory enough to host the pagefile in core, that memory is better served as directly accessible pages. Putting a pagefile in your cache simply adds another level of indirection, which can be avoided by making that physical memory accessible to the OS for program/data storage rather than to the cache. The simple fact is that placing the paging space in memory, whether by using a RAM-disk or such a cache, is wrong; it obviously doesn't make any logical sense. Caching the pagefile does exactly what the OS is already doing with the modified and standby page lists.
Windows actually does it much better: putting a page on the modified page list just involves unlinking it from one list, linking it to another, and updating maybe 12 bytes of other "bookkeeping" data, while "writing" it to a cached pagefile (or to a pagefile on a RAM-drive) takes a memory-to-memory copy, with all the bad effects on L1/L2 cache contents that that implies. Now you've got TWO copies of the page in the L1 and L2 caches, with the second copy replacing 4 KB of other stuff that would have been better left in the cache. There's no real penalty from growing the pagefile or having it be fragmented: I/O to the pagefile is random anyway, and most of the pageable code in the OS tends not to get paged out once it's paged in. Demand-paged virtual memory and the merged VM/buffer-cache mechanism are designed to efficiently satisfy applications with large appetites for memory.
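A rough sketch of the difference, with the two page lists modelled as simple Python queues (the page numbers are invented for illustration): moving a page between lists is a constant-time relink, with no copy of the 4 KB payload at all.

```python
from collections import deque

# Two of the kernel's page lists, modelled as queues of page numbers.
working_set = deque([10, 11, 12, 13])
modified_list = deque()

def trim_page(page_no):
    """What Windows does when it trims a dirty page from a working set:
    unlink it from one list, link it onto another, and touch a few
    bytes of bookkeeping.  The page's 4 KB of data stays where it is,
    so there is no memory-to-memory copy and no extra cache pollution."""
    working_set.remove(page_no)
    modified_list.append(page_no)

trim_page(11)
# The page changed lists, but its contents were never copied.
```

A cached pagefile, by contrast, would have to copy those 4 KB into the cache's own buffer, displacing other data from the CPU caches in the process.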

So remember this: putting a pagefile on a RAM-drive/disk is ridiculous even in theory. You might think that the additional page faults will at least go faster (since they're resolved solely in RAM), but it's still better for them not to occur in the first place. You will also be increasing the page faults that have to be resolved to .exes and .dlls, and a pagefile on a RAM-drive/disk doesn't speed those up; you will simply have more of them because the pagefile is occupying RAM. Also note that the system already caches pages in memory. Namely, pages dropped from working sets are not written out to disk immediately (or at all, if they weren't modified), and even after being written out to disk they are not reassigned to another process immediately; they are kept on the modified and standby page lists. Most programs tend to access the same sets of pages over time, so if a process accesses a page that it recently lost from its working set, the odds are that its contents are still in memory (on one of the mentioned lists) and the OS doesn't need to go to disk for it. Moreover, you can't put a pagefile on a RAM-drive/disk unless you have plenty of RAM; but if you have plenty of RAM, the OS isn't hitting your pagefile very often in the first place, and conversely, if you don't have plenty of RAM, dedicating some of it to a RAM-drive/disk will only increase the page-fault rate. Remember also that the cache itself takes RAM, and memory allocated to the cache reduces the RAM available for process working sets. If anything, a simple RAM-drive would be faster than a cached pagefile, since stuff written to the cached pagefile will still be written to the hard drive eventually, whereas the RAM-drive would leave it in the RAM allocated to it. Either way, it's still better to leave the RAM for the process working sets and not incur the additional page faults at all. Caching only makes the I/Os go faster; surely it is better not to have to do the I/Os in the first place.
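The standby-list behaviour described above can be sketched as a toy model (the class and method names are mine, purely illustrative): a page trimmed from a working set stays resident, so touching it again is a cheap "soft fault" that never goes to disk.

```python
class ToyVM:
    """Minimal sketch of the standby-list behaviour: pages trimmed from
    a working set remain in RAM until the frame is actually repurposed,
    so re-touching them is a soft fault (no disk I/O)."""
    def __init__(self):
        self.working_set = set()
        self.standby = []        # trimmed but still resident, oldest first
        self.hard_faults = 0

    def trim(self, page):
        self.working_set.discard(page)
        self.standby.append(page)

    def touch(self, page):
        if page in self.working_set:
            return "hit"
        if page in self.standby:             # soft fault: resolved in RAM
            self.standby.remove(page)
            self.working_set.add(page)
            return "soft fault"
        self.hard_faults += 1                # hard fault: read from disk
        self.working_set.add(page)
        return "hard fault"

vm = ToyVM()
vm.touch(1)          # first access: a hard fault, page comes from disk
vm.trim(1)           # trimmed from the working set, but still in RAM
assert vm.touch(1) == "soft fault" and vm.hard_faults == 1
```

In other words, the OS already gives you an in-RAM "cache" of recently paged-out data for free; a cached pagefile or RAM-drive just duplicates this while shrinking the RAM available for it.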






THE PREFETCH-FOLDER CLEANING MYTH


First of all, you should really check Ed Bott's (http://www.edbott.com) article titled Windows Expertise: One more time: do not clean out your Prefetch folder!: http://www.edbott.com/weblog/archives/000743.html (note my own comments below it under the name "Ivan Tadej") and the Popular Technology article CCleaner Cripples Application Load Times: http://poptech.blogspot.com/2005/10/ccleaner-cripples-application-load.html. As you can read in Ed Bott's article linked above (and I guess it's also stated somewhere on Microsoft's website), Windows cleans the old/obsolete files in the Prefetch folder by itself, once 128 files have been created; so why bother at all with doing the OS's job, i.e. with deleting these files manually (or with a 3rd-party program)? The only thing that I do (or rather, used to do) regarding the Prefetch folder, even though it's completely unnecessary (see the paragraph below), is delete the various "setup.exe-hash.pf" files that are leftovers of various installation programs.

The other files in the Prefetch folder that I also used to delete, besides the mentioned "setup.exe-hash.pf" files, were the .pf files of the few of my programs that get updated frequently (so that a different executable with the same name is used each time, i.e. on each installation/update), and the .pf files of various temporary processes, although those are rarely created/run on my system. Finally, I also used to delete the orphaned .pf files of executables that I had moved after a new .pf file (with the new location) was already created. I knew that the OS would delete them by itself in time, but I am a "maintenance maniac" and so did it myself in certain cases. I now know that this task too was completely unnecessary, since Windows deletes the oldest files itself once there are 128 prefetch files in the Prefetch directory. So why on earth make programs launch slower, even if only once?!

As for the data these files contain, I guess it's quite obvious that they don't "pre-load" anything: they just contain a list of directories and OS libraries that the executable loads (or maps/hooks; I'm not sure which term is appropriate) when executed, plus the other non-OS libraries that are called, or rather dynamically/delay-loaded, during run-time by the executable in question (I assume this because .pf files are created after the respective process is closed, not on or right after execution), together with the device (i.e. the hard-disk volume) on which they reside; that's where the "Layout.ini" file comes into action. So you see, a prefetch file is only a kind of map, containing references to the files which the respective executable loads on launch.

A few lines from the AntiVir-related "AVGUARD.EXE-17927959.pf" file:

AVGUARD.EXE

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\NTDLL.DLL

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\KERNEL32.DLL

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\UNICODE.NLS

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\LOCALE.NLS

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\SORTTBLS.NLS

\DEVICE\HARDDISKVOLUME2\PROGRAMS\AVPERSONAL\AVGUARD.EXE

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\WS2_32.DLL

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\MSVCRT.DLL

\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\WS2HELP.DLL
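For illustration, here is a small parser for a text dump like the one above. Note that real .pf files are a binary format; this only handles the readable listing shown here, to make the point that the "map" is just an executable name plus a list of file references:

```python
def parse_pf_dump(lines):
    """Parse a text dump like the listing above into
    (executable_name, [referenced_paths])."""
    lines = [ln.strip() for ln in lines if ln.strip()]
    exe = lines[0]
    paths = [ln for ln in lines[1:] if ln.startswith("\\DEVICE\\")]
    return exe, paths

# A shortened version of the AVGUARD.EXE listing above.
dump = """AVGUARD.EXE
\\DEVICE\\HARDDISKVOLUME2\\WINDOWS\\SYSTEM32\\NTDLL.DLL
\\DEVICE\\HARDDISKVOLUME2\\WINDOWS\\SYSTEM32\\KERNEL32.DLL
\\DEVICE\\HARDDISKVOLUME2\\PROGRAMS\\AVPERSONAL\\AVGUARD.EXE
""".splitlines()

exe, paths = parse_pf_dump(dump)
```

Nothing here is "pre-loaded data"; it is purely a list of names, which the OS then uses together with "Layout.ini" to schedule reads efficiently.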

Also note: if prefetching doesn't seem to work for you (i.e. prefetch files are not being created at all), the reason might be that you have disabled the "Task Scheduler" service; to enable prefetching again, just set it to the Automatic startup type. The other possibility is that you've disabled prefetching itself. Open Regedit and check the value of the "EnablePrefetcher" entry under the "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" registry key.

Here are the descriptions of these values (all four possibilities):

0 = Disabled

1 = Application launch prefetching enabled

2 = Boot prefetching enabled

3 = Application launch and boot prefetching enabled (default and optimal setting)
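The four values above are really a two-bit mask, which a small helper can decode (the function name is mine, for illustration; on Windows you would read the actual value with Regedit or the winreg module):

```python
def decode_enable_prefetcher(value):
    """Decode the EnablePrefetcher value listed above: bit 0 enables
    application-launch prefetching, bit 1 enables boot prefetching."""
    if value == 0:
        return "Disabled"
    parts = []
    if value & 1:
        parts.append("Application launch prefetching")
    if value & 2:
        parts.append("Boot prefetching")
    return " + ".join(parts)
```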

Oh, and one more note: this particular article is more or less copied from the comment that I posted in the "comments" section of Ed Bott's article linked above. I just modified it a bit to tell the whole truth; namely, I changed "I do" to "I used to do", since I don't delete any of the .pf files anymore.






THE REGISTRY-CLEANING SOFTWARE MYTH


As far as registry "cleaning" programs being useless (and the things they do to your registry) is concerned, I must say that I always thought this was not the case. However, I must note at the beginning that there are actually two "major sorts" of registry cleaners. First, there are the ones that only clean (or should I rather say delete) any orphaned entries found during the registry-checking procedure; in other words, they are capable only of finding and then deleting such orphaned entries. Second, there are the others that, besides finding these entries/values, also search the hard-disk's drives for corrections to them; these are mostly paths stored as values (i.e. files and folders), paths in the values of other similar entries, or the names of the entries themselves if they contain path references. Anyway, the important thing here is that the registry is basically just a rather huge monolithic flat-file database which Windows loads into main memory (i.e. RAM) every time the computer is booted (and the operating system loaded), and keeps in memory while Windows is running. Lots of programs leave various "leftovers" (useless values) behind after you uninstall them. My opinion was (please read on) that it is somehow clever to remove them, sooner rather than later: they just take up additional space, and every additional bit means a bigger registry file and more time for the system to find a particular entry/value, making registry operations slower. Yes, slower by a factor of 0.0000001, but slower anyway, especially with leftovers from 20 or more uninstalled programs. And to end this intro paragraph, here's a link to a blog entry titled Registry Junk: A Windows Fact of Life: http://blogs.technet.com/markrussinovich/archive/2005/10/02/registry-junk-a-windows-fact-of-life.aspx, written by Mark Russinovich of Sysinternals fame.

Well, then I was told on the "Ars Technica" forum (thanks go to the member with the nick DriverGuru) that registry cleaning is rather useless. Yes, it surely might save some disk space, but only a very small amount. Most importantly, remember that, contrary to what the various sites offering these products say, the impact of so-called "registry cleaning" on any aspect of a computer's performance is minor at best (or rather, there is no impact at all), mainly because registry queries (reads and writes) are not linear, and because the registry is a demand-paged database (or to be more precise, a "memory-mapped file") of a few dozen MB, so even tens or hundreds of MB of utter junk has no impact whatsoever on registry access. In other words, registry cleaning doesn't affect the speed of registry operations (and therefore overall computer speed), no matter how many entries were left behind after some program was uninstalled, or how "deep" the respective key/entry/value resides in the registry structure. And removing unused/orphaned entries will certainly not prevent operating-system crashes or so-called "registry or application conflicts" (because an application in most cases simply overwrites old registry data, i.e. a key, entry, or value that's already there). On some websites they even state that "cleaning" the registry will prevent BSODs; well, that is simply a big pile of bullshit.

See the various registry-cleaning related threads: first the Registry cleaner thread: http://www.winforums.com/showthread.php?p=33859 on the Winforums forum (my nick there is satyr), then the Suggestions For a Good Registry Cleaner thread: http://www.wilderssecurity.com/showthread.php?p=260057 on the Wilders Security forums (my nick there is stalker), and finally the threads posted on the "Ars Technica" forum (my nick there is shirker): removing unused registry entries?: http://episteme.arstechnica.com/eve/ubb.x?a=tpc&s=50009562&f=99609816&m=397004907631, Keeping the registry tidy - How DO you do it ?!: http://episteme.arstechnica.com/eve/ubb.x?a=tpc&s=50009562&f=12009443&m=464007507631&r=123002707631, and Registry Mechanic - Yay or Nay...: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/348008086731. As a special hint regarding the threads on the "Ars Technica" forum: look for DriverGuru's posts; the guy really knows what he is talking about. Finally, I especially recommend checking out the Registry junk windows fact of life article: http://www.sysinternals.com/blog/2005/10/registry-junk-windows-fact-of-life.html on the Sysinternals website.

Of course, you don't need special/additional software for that; one can simply delete such entries with Regedit, but it is much more comfortable this way. If you just *must* use one of these "registry cleaning" programs, see the first two threads above for the programs used to do the job. One very good and powerful one is called Registry First Aid (or, for short, Reg 1 Aid): http://www.registry-first-aid.com, http://www.RoseCitySoftware.com/Reg1Aid, from KsL Software and published by RoseCitySoftware: http://www.RoseCitySoftware.com. I would rather call this program a registry "maintainer" (an "editor", in a way) than a registry cleanup application. The best part is that it doesn't "clean" the registry automatically: it simply scans the registry for invalid/orphaned data, after that scans the hard-disk for possible solutions, and then offers the best one; you still need to decide for yourself. Sadly it is a SHAREWARE program, but it's so useful that it's worth paying for.

After a completed scan it offers these options:

"Fix entry" (to the suggested value or the one that you choose manually)

"Leave entry without change"

"Delete entry"

"Cut Invalid Substring" (for more complicated values, like those with more than one path etc.)

There is also another one that is pretty widely used. It is a FREEWARE program called RegCleaner (or, shortened, RegCleanr, because of Microsoft's similar (or identical, I forget) name for some built-in application on 9x systems). It was developed by Macecraft Software (Macecraft Inc.); their main website is http://www.jv16.org, while you can get RegCleaner here: http://www.worldstart.com/weekly-download/archives/reg-cleaner4.3.htm.

The Registry First Aid application is without any doubt crucial for me, as a devoted user of "non-setup" applications and a devoted explorer in search of optimal folder structures, and therefore someone who moves things around a lot. For example, when I move some "non-setup" programs (or a program "group", like players or Internet apps), it would be such a waste of time to change their paths in the registry manually, or to delete them in the few cases of applications which do not overwrite an entry value on the next execution, like ATM - Another Task Manager on my Win98, or old versions of the Soulseek p2p client, etc. There are other cases too: for example, when I renamed my Program Files folder to just Programs, all the paths of various Microsoft dlls, executables, data/config files etc. were suddenly wrong (and therefore Outlook stopped working, etc.), and it was a matter of a few clicks to fix them all in one pass. I'd rather not think about what it would have been like to fix them all manually. In my opinion it is the word "cleanup" that is somehow wrong for this kind of program; I would rather call it "registry maintaining" or "registry fixing" software, but only for the software capable of such operations.

/UPDATE: I also need to refer you to the discussion in the Registry Cleaners thread: http://episteme.arstechnica.com/eve/forums/a/tpc/f/99609816/m/488004770831 (see this post in particular: http://episteme.arstechnica.com/eve/forums/a/tpc/f/99609816/m/853006300931) on the "Ars OpenForum" forum (my own nick there is "shirker"), where the user with the nick "PeterB" wrote about a little program he made which creates a bunch of registry entries in various configurations and then times some of the operations on those entries (i.e. opening keys, opening non-existent keys, querying values, and querying non-existent values). Basically, it gives a feel for how the performance of the registry is influenced by registry size, and clearly, the registry does not suffer any significant performance impact from being large. By the way, I also posted an entry about this on my Slovenian Sopca blog, titled Nesmiselnost uporabe registry cleaner-jev ("The pointlessness of using registry cleaners"): http://tadej.sopca.com/2008/02/12/nesmiselnost-uporabe-registry-cleaner-jev.
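Here is a portable sketch in the same spirit as PeterB's experiment, using a Python dict as a stand-in for an indexed hive (the key names are invented; this does not measure the real Windows registry, it only illustrates that lookups through an index do not degrade as the store grows):

```python
import timeit

def build_hive(n):
    """A stand-in for a registry hive: an indexed key -> value store.
    Lookups go through the index, not a linear scan over all entries,
    which is why the size of the hive barely matters."""
    return {f"Software\\Vendor{i}\\Setting": i for i in range(n)}

small = build_hive(1_000)
large = build_hive(100_000)   # 100x the "junk", same lookup path

t_small = timeit.timeit(lambda: small["Software\\Vendor500\\Setting"],
                        number=50_000)
t_large = timeit.timeit(lambda: large["Software\\Vendor500\\Setting"],
                        number=50_000)
# Both lookups cost essentially the same per query, regardless of size.
```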






THE MEMORY-FREEING PROGRAMS MYTH


This article tries to persuade everyone who reads it not to be taken in by sites offering memory freeing/boosting/optimizing/defragmenting, ehm, even washing. First, I urge you to check the article at the Winnetmag site, written by Mark Russinovich (the co-author of the Winternals/Sysinternals utilities) and titled The Memory-Optimization Hoax: http://www.winnetmag.com/Windows/Article/ArticleID/41095/41095.html; second, the article written by Jeremy Collake (of Bitsum Technologies, a.k.a. Collake Software) called The Truth About Windows Memory Optimizers: http://www.bitsum.com/winmemboost.htm; third, Fred Langa's article at InformationWeek titled The Explorer: Resource Leaks, Part Two > June 5, 2000: http://www.informationweek.com/story/showArticle.jhtml?articleID=17200583; then the entry titled RAM Optimizers/Defragmenters on the Mywebpages-SupportCD website's "XP Myths" page: http://mywebpages.comcast.net/SupportCD/XPMyths.html; and finally the article at the Aumha website by Alex Nichol titled Virtual Memory in Windows XP: http://aumha.org/win5/a/xpvm.htm. You see, the general bottom line is that you want your RAM to be at full load more or less all of the time (i.e. the RAM "space" to be as used as possible, in terms of allocated addresses holding "real" data), so remember: free RAM is wasted RAM. One example of an application whose description gets the principle of how the OS manages memory totally wrong is SuperRam: http://www.pgware.com/products/superram from the "PGWARE" website. Here is the program's description: "SuperRam 6 increases computer performance by freeing wasted memory back to your computer. By optimizing memory utilization your computer will operate at stable speeds and never run out of memory." That is, of course, totally false.

And now a few words on how Windows manages virtual memory. In modern computing, the worst thing one can do for a computer's performance is to touch the hard drive, or in fact any non-memory storage. The fastest hard drives on earth are still slow compared to the computer's main memory (i.e. RAM), and even with "solid state" drives, in order to access the drive one has to jump into system code and drivers, which will push your own program's code out of the CPU's L2 cache (this is, by the way, called a "locality loss"). There are two typical reasons one has to touch the disk. The first is when an application requests it explicitly (Word asks Windows to load the file "somefile.doc" into main memory); the other is a so-called "hard fault", which occurs when an application tries to use memory that has been paged out to disk via "virtual memory" and needs to be paged back in. The principle is quite simple: Windows tries to keep commonly used pages of data in RAM and less commonly used ones in the pagefile. If at a given moment there is no RAM available, then pages not currently in use (i.e. not actively used, though they might be used in the near future) are moved to the pagefile. For this task it uses various lists: a so-called "standby" list and a "recently used" list. The process of moving a page of memory from RAM to the pagefile is called "paging out". Because these "optimizer" programs force the available-memory counter up, the data that was part of the processes' working sets before the "optimization" (i.e. present in physical memory) must be re-read from the hard-disk when one continues to edit a document or switches back to an already running process. Thus it only slows down overall performance and responsiveness.

At this point, I urge you to read the five DriverGuru posts (posted one after another) on Ars Technica. The topic title is Where'd my free memory go?, and here is a link pointing to the first of the five posts: http://episteme.arstechnica.com/eve/ubb.x/a/tpc/f/99609816/m/2590999945/r/2870972055#2870972055. Conversely, the process of bringing a page of memory from the pagefile back into RAM is called "paging in". Virtual memory is limited only by the size of the pagefile plus the capacities of all the RAM sticks, i.e. the system can use gigabytes of memory even if the RAM is only a few hundred megabytes. These programs all do basically the same thing: they free up physical RAM (that would otherwise be put to use), simply by forcing as many allocated pages as possible out of physical RAM into the pagefile (or by allocating the memory to themselves). The amount of free RAM is thereby increased (why would anyone in the world want that??), but the amount of virtual memory in use is not affected; there is no increase in free memory, only an increase in free RAM. When the applications whose memory was pushed into the pagefile become active again, the pages of memory they use must be loaded back into RAM, incurring substantial overhead and causing performance degradation.
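The effect described above can be sketched with a toy model (the class and method names are invented for illustration): an "optimizer" run makes the free-RAM counter look great, and every subsequent access then pays for it with a hard fault.

```python
class ToyMemory:
    """Sketch of why 'freeing' RAM hurts: pages pushed out to the
    pagefile must be read back in (hard faults) the moment the
    processes that own them run again."""
    def __init__(self, resident):
        self.resident = set(resident)   # pages currently in RAM
        self.pagefile = set()
        self.hard_faults = 0

    def optimize(self):
        """What a 'memory optimizer' does: force everything out so
        the free-RAM counter goes up."""
        self.pagefile |= self.resident
        self.resident.clear()

    def touch(self, page):
        if page not in self.resident:
            self.hard_faults += 1       # page must come back from disk
            self.pagefile.discard(page)
            self.resident.add(page)

mem = ToyMemory({1, 2, 3, 4})
for p in (1, 2, 3, 4):
    mem.touch(p)                 # everything resident: no faults
assert mem.hard_faults == 0
mem.optimize()                   # "free RAM" goes up...
for p in (1, 2, 3, 4):
    mem.touch(p)                 # ...and every next access faults
assert mem.hard_faults == 4
```

The total amount of virtual memory in use never changed; the "optimization" only converted cheap RAM hits into expensive disk reads.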


Strictly my opinion, things I've discovered:

I must say that I agree 100% with the articles linked above. You see, I tried various programs of the mentioned kind (at the beginning of my "geek" era), and actually used some of them in the past on both my Windows 98/SE and Windows XP setups. I tried to use the ones that were not bloated with additional useless features and generally looked more or less "promising", and soon discovered most of the things that Mark mentions in his article. Freeing up memory that would otherwise be put to use (paging it out to the pagefile; by the way, do not confuse the pagefile concept with memory-mapped files) can only lead to performance degradation.

1. Why "free" RAM, as many of these programs offer (what actually happens is that the data is paged out to the hard-disk), when more than 10, 20, or 30% of RAM is still available and not allocated? Doing so makes the system noticeably slower for quite some time, until all the data that was meant to be in RAM is paged back in.

2. Why run an additional process to do exactly the same job that Windows memory management already does? In other words, why run an additional process if Windows itself manages paging data in and out perfectly well, especially when the RAM is almost full (and of course in other cases too)? The "free RAM when only 5% is free" feature offered by some of those programs is completely and 100% useless.

3. When it comes to memory management, there is no such thing as: "This software will prevent crashes, freezes, lockups, BSODs (hehe), and generally improve the stability and performance of your computer". These programs actually cause problems (crashes, freezes, or at least delays and sluggishness) during the very procedure of "freeing the RAM".

4. I suppose there is no such thing as "freeing/clearing RAM" in the sense that many of these programs' home pages imply: that some amount of data is in RAM and after the "optimizing process" this data is freed, not in the sense of being paged out from RAM to the pagefile, but "freed" in the sense of being cleared (the portion of RAM that was previously allocated to programs but is not used anymore), so that it is no longer allocated in either RAM or the pagefile, i.e. it simply vanishes. Well, I imagine this might actually happen in certain situations, for instance in cases of so-called "memory leaks", caused by programs that don't do a good job of clearing their old data from RAM (garbage collecting).

On the other hand, I could partially agree with forcing data to be paged out to disk, but only on 9x systems. I assume this could be useful for gamers, or for users of memory-hungry music and graphics programs, etc.; for example, after being on the Internet (with lots of processes and IE instances open), and after disconnecting and closing all those processes, additionally attending to "cleaning" the so-called leaks (see above), because I suppose the Windows 9x platforms have/had completely different memory management compared to the Windows NT systems. However, the main question that remains for the Windows 9x platforms is: is all RAM really freed after the user closes the process that allocated it on execution (or later, while working with it), the same as on NT systems, or could there be some section/area that could be called "wasted" and therefore would need to be manually freed? And I mean freed, not paged out to the pagefile, since, uhm, the program that was using this portion of RAM was closed, so there is no other program that would use this memory instead (we are not talking about shared dlls and MMFs in this case).












Copyright © Tadej Persic. Some Rights Reserved.


Disclaimer: The opinions expressed on my website and in my files are mine, or belong to other individuals/entities where so specified. Each product or service is a trademark of their respective company. All the registered copyrights and trademarks (© and ™) referred in this site retain the property of their respective owners. All information is provided as opinions only. Please, also see the more complete version of it on "disclaimer.html" and "policy.html" pages.

All the pages on this website are labeled with the ICRA label.
The website is maintained solely by its author and is best viewed with a standards-compliant browser.







