






Articles 1




Updated: 06.07.2017






Let me emphasize again that my involvement with computers started roughly 7 or 8 years ago, when I was still studying architecture. In fact, it was because of those architecture studies that I got my second PC (the first one was a Sinclair Spectrum +, a decade earlier), and that PC turned out to be the main reason I gave up studying. As you have probably guessed, I have been obsessed with computing ever since, and with pretty much everything related to it, but especially with knowing how to use things more reasonably, efficiently and safely.

To be more precise about my involvement with computers (in short, computing is my current hobby), I am simply interested in computing in general; I am a "self-taught" computer geek and a "self-proclaimed" computing expert. That means, for instance, being interested in various computing concepts (i.e. how things work, for example by observing the OS's behaviour with the Process Explorer application, in particular its handle and DLL views/panes), in customizing the OS and finding its limits and capabilities, in basic programming/scripting, and in anything else related to a desktop computer. It also includes knowing how to cope with various errors (including problems such as the computer being slow without any obvious reason, spontaneous restarts etc.), how to change various settings (undocumented/optional parameters under various headers in the OS's or programs' .ini files, the various uses of Environment Variables, but mainly numerous registry hacks), and so forth. Most importantly, I know how to increase the speed and stability of a computer. Since I already mentioned Environment Variables, here are two links to screenshots of my own settings (both hosted at ImageShack): first the System applet's Advanced tab, systemadvancedzb4.png, and second the actual Environment Variables sub-window, environmentvariablesdf9.png; for more about all this, please visit the "config1.html" page.

I must not forget to mention that during this "process" of using my PC, learning about it and changing its settings and configuration, I have also learned a little bit about programming basics, particularly the HTML and JS languages. I then also learned the ABC programming language, which is totally neat; its Windows/DOS package contains the interpreter and environment, and heh, no installation is required at all. Finally there is the Python scripting language, which I am still learning very slowly, plus a tiny bit of C++ and Intel's native assembler (a.k.a. assembly). But of all these languages, except perhaps Python, I have learned only less than the "raw" basics; for example, I have programmed a few "Hello World" windows and the like, mostly based on templates from the tutorials I was using at the time.

Finally, I am adding three links that I personally think are worth checking and that somehow suit this page and this introductory section: first the Ge0ph - XP Tweaks page, then the very similar Optimizing Virtual Memory in Windows XP page, and finally the somewhat unrelated Google It, You Moron website. Anyway, I have summarized all the important things in the articles below, and of course in the articles on the "articles2.html" page, which is a continuation of this one.




NAVIGATE:  next --> articles2.html  next --> myths.html


PROCESSES AND THEIR BASE-PRIORITIES


Windows is a multitasking operating system, which means that various applications run simultaneously at any given time. The process priority class is therefore a parameter that tells the system which task has priority over the other task(s); for instance, if two programs are running at the same time with the same priority, they will get equal shares of the CPU's time. But if you set a higher priority for one of them, the program with the higher priority will use all the free processor time, while the one with the lower priority will use only what is left of it. I also recommend the related "blog-entry" on my Senserely blog titled 16. Regarding process base-priorities (28.08.2006): http://www.senserely.com/tayiper-16_regarding_process_base_priorities_28_08_2006.php, which deals with essentially the same question. An example of a so-called "snake oil" program that gets the principle of priorities completely wrong is PCBoost: http://www.pgware.com/products/pcboost from the "PGWARE" website. Here's what they say: "PCBoost 3 increases computer performance by allocating higher portions of CPU power to active applications and games. PCBoost is a revolutionary product which enhances processor intensive software to run at even faster speeds." That claim is absolutely not true.
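Here is a minimal sketch of how the same thing can be done programmatically instead of through Task Manager's right-click menu. It uses Python with the third-party "psutil" module (my own choice of tool, not something the programs mentioned above use), and the priority-class constants it refers to exist only in the Windows build of psutil:

import psutil

def set_priority(pid, priority=psutil.BELOW_NORMAL_PRIORITY_CLASS):
    """Change the base-priority class of the process with the given PID."""
    proc = psutil.Process(pid)
    print("Before:", proc.nice())   # current priority class
    proc.nice(priority)             # set the new priority class
    print("After: ", proc.nice())

if __name__ == "__main__":
    # Example: lower the priority of this very script's own process.
    set_priority(psutil.Process().pid)

Note that this only changes the base-priority class; as explained above, it will not make anything run faster unless some other process is actually competing for the CPU.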

And contrary to what you may have been led to believe (by some person or website selling a "snake oil" program built around the CPU-priority principle), a higher process priority doesn't make things run faster (i.e. it doesn't make a process or several processes run faster); what matters is always a particular process's priority compared to the priorities of the other processes, plus the main question: is there spare CPU time? A higher priority also has nothing to do with how fast something "comes into action", again as long as there is spare CPU time. If you're using a program that is not responsive because another one is hogging the CPU, you would be better off lowering the priority of the one you aren't using. Also note that there are many things that can really bog down your system that have nothing to do with CPU utilization. If your system is busy processing disk I/O, there will be little CPU activity, since the disk doesn't need much CPU attention, but the system will be very sluggish to user input. So for example, if your CD/DVD burning program consumes, let's say, 80% of the CPU while burning a CD, setting it to "Above Normal" (10) or even to "High" (13) priority will not speed up the burning process if there is no other program consuming the remaining 20% of CPU. It would only make it a bit more "stable" compared to other processes, and even that only if those other processes (or better, a single process) started consuming an enormous amount of CPU; in this particular case of the burning program using 80% of the CPU, that would mean the single hogging process consuming (or starting to consume) more than 20% of CPU. I highly recommend reading through this thread on the "Ars Technica" forum, titled A generic rule on process prioritizing (about process-priorities); here is a link pointing to the first of its five posts: http://episteme.arstechnica.com/groupee/forums?a=tpc&s=50009562&f=99609816&m=468005824731&r=468005824731. But also note that the foreground task, i.e. the one that currently has keyboard focus, gets slightly higher thread priorities anyway (this is because of the "threads queueing"); see the last paragraph of this entry below.

For instance, start Regedit, go to the main menu "Edit - Find... (Ctrl+F)" and start searching for some most likely non-existent string (such as "ab_ab"; optionally also check the "Match whole string only" check-box), or alternatively launch your anti-adware program or on-demand antivirus scanner and let either of them scan your hard-disk. Since the default process priority is Normal and the process is using 90-100% of the CPU, the system becomes sluggish. Now change the priority in Task Manager to Below Normal, and what do you notice? The process is still using 90-100% of the CPU, however, the system is suddenly NOT sluggish anymore. I hope you get the principle of priorities by now. So high priorities should be reserved only for things (i.e. programs/processes) that need to respond quickly to requests to run, but which don't need much CPU time when they do run. Low priorities are meant for compute-bound operations, in other words for "CPU hogging" processes, and have no effect on I/O-bound ones. Raising the priority of a process should be done only when you know that you need that particular process to run ahead of all the others at the current priority, or when you are sure that it won't hog the CPU unnecessarily itself. In fact, in some cases it may even help to lower the priority to make/keep things working right (i.e. an application running as it should); see the HELP: I can't normally play most of the games anymore thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/639002616731 that I opened on the "Ars Technica" forum when I had problems with the mouse being "delayed", or as I call it, the mouse "moving in steps", when trying to play the Star Wars - Knights of the Old Republic game from LucasArts.
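If you would rather not abuse Regedit or your antivirus scanner for this experiment, here is a rough Python sketch of the same idea (again assuming the "psutil" module is installed; the constant used exists only in psutil's Windows build). It deliberately burns CPU at Normal priority first and then at Below Normal, so you can feel the difference in the system's responsiveness yourself:

import time
import psutil

def cpu_hog(seconds):
    """Busy-loop for the given number of seconds, eating one core."""
    end = time.time() + seconds
    while time.time() < end:
        pass  # burn CPU cycles on purpose

if __name__ == "__main__":
    me = psutil.Process()
    print("Hogging the CPU at Normal priority...")
    cpu_hog(15)                                  # the system feels sluggish
    me.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)  # same as Task Manager's menu
    print("Hogging the CPU at Below Normal priority...")
    cpu_hog(15)                                  # still ~100% of one core,
                                                 # but the system stays snappy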

However, it is a bit different with "thread priorities" and their queueing. You see, if a newly ready thread is at the same priority as the one currently running, it has to sit on the "ready queue" for that priority until the currently running thread has used up its timeslice (along with everything else that was already on the ready queue); but if the newly ready thread is of higher priority than the currently running thread, it preempts the current thread immediately, regardless of how much of its timeslice the current thread has used. Note also that threads have "dynamic" priorities besides the "base" ones. Finally, regarding "spare CPU time": the CPU being less than 100% busy does not necessarily mean that a thread that wants to run can run right away. On an instantaneous basis the CPU is never anything but 0% or 100% in use; it is either running a real thread or the idle thread. The "% busy" statistic we see in places such as Task Manager and Perfmon is an average over the display interval, which is normally at least one second. A low "% busy" figure may therefore be hiding busy periods that last significant fractions of a second.
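A small sketch of that last point about the "% busy" figure (assuming psutil again): sampling the CPU over many short intervals will often reveal bursts of 100% that a single one-second average completely hides:

import psutil

# Twenty quick samples, 50 ms apart (about one second in total) -- the values
# will often jump between ~0 and ~100 while some process does short bursts.
quick = [psutil.cpu_percent(interval=0.05) for _ in range(20)]

# A single one-second average taken right afterwards -- the bursts get
# averaged away into one seemingly "calm" number.
slow = psutil.cpu_percent(interval=1)

print("50 ms samples:", quick)
print("1 s average:  ", slow)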

/UPDATE: In the How do I set specific Win Processes to always High/Low Priority? thread on Ars Technica: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/365007825731/inc/-1 I wrote to "consider it as a joke". Although that could be read as meaning my own post, I actually meant the Priority Master program: http://prioritymaster.com (written by Ted Waldron III, a guy who reminds all of us at Ars Technica of Alexander Peter Kowalski, a.k.a. AlecStaar/APK, who defends his "RAM optimizer" and his other crappy programs to the point of frothing at the mouth; see this particular post in the Diabolical and SexyBiyatch wedding pics thread on Ars Technica with a collection of links to APK's posts: http://episteme.arstechnica.com/eve/ubb.x?a=tpc&s=50009562&f=34709834&m=8510980933&r=3650926043#3650926043); here is also a link to its dedicated page at "download.com": http://www.download.com/Priority-Master-2006/3000-2094_4-10498003.html. A few days later I got an e-mail message from the author of the Priority Master 2006 program. Basically, he said that I needed to explain myself for "stating that the program is a joke publicly on the web". The result of this is the Dear Arsians, I really need your opinion on this one ... thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/738001397731, which is "destined for greatness", as one of the Arsians wrote in it. Read it for its entertainment value. Oh, and by the way, he is also very similar to a certain Andrew K/Mastertech guy. You see, his main problem is that he failed to answer all the "hard" questions asked, for instance why he keeps spamming on Ars and on various other sites (i.e. reposting his crappy website numerous times to Digg), while on the other hand he also failed to provide any links to the supposed origins of these so-called myths, not to mention that he failed to explain why he keeps misquoting people and so on; see the Firefox Myths thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/558005957731 (the debate that ensued was hilarious, and the thread is no less than 12 pages long), and the Firefox fanatics decide to make money by punishing users "blog-entry": http://www.edbott.com/weblog/?p=1307 on Ed Bott's "Windows Expertise" blog/website (my comment is under number 102 and signed as "Ivan Tadej") for a bit of fun.






RUNNING A COMPUTER NON-STOP OR NOT


The question whether it is better to leave a computer running 24/7 or to shut it down has been debated since the beginning of the "computer era". The answer has more to do with the type of computer, the user's usage patterns and a concern for power bills. As a general rule, I was told on the "Ars Technica" forums that once an electrical device such as a computer is powered up, it seems best to leave it running non-stop. The power on/off cycles are damaging to a computer, i.e. they damage almost all of the PC's crucial components, including the hard-disk, CPU, graphics card, buses, mobo chipsets, various "inner circuits", probably also RAM etc., and they shorten a particular device's lifetime. The microcircuits are subjected to flexing and fatigue due to changes in temperature; over time this could lead to a break in the circuitry and result in system failure. On the other hand, leaving the computer on all the time puts excess wear on the mechanical components, i.e. the hard drive spindle motor, cooling fans etc.

That said, thermal cycling also occurs at the digital semiconductor level as the state changes from 0 to 1 and from 1 to 0, and this is in fact a contributor to the early failure of semiconductors. The metallic leads are welded to the silicon, and any welding process carries a risk of hydrogen embrittlement, which causes a rapid loss of strength and ductility at the point of the weld. For this reason the standard method of producing more reliable devices is to place them (after manufacture) in a circuit and operate them for 48 hours; throw away the failures, and the remaining devices are more reliable than the whole lot was prior to this "burn-in". For an even more reliable lot, the devices can be vibrated while burning in: more initial failures, but the remaining devices will be more reliable than the as-manufactured lot. This, then, is the thermal cycling of semiconductor devices, and the start-up/shut-down temperature changes are largely irrelevant to what is happening at the chip level.

I also suggest you see the "20.9.2005" entry on the "events3.html" page, or check the Theoretical question regarding DC-projects and 100% CPU usage thread: http://episteme.arstechnica.com/groupee/forums?a=tpc&s=50009562&f=122097561&m=309005425731&r=309005425731 for further info, as well as the Turn Off PC? article: http://www.techiwarehouse.com/cms/engine.php?page_id=ca18facc on the "TechiWarehouse" website. However, it is different in the case of reboots/restarts, since the machine and its parts don't get cold; see the Does restarting desktop each day have any adverse impact? thread/entry on TechRepublic: http://techrepublic.com.com/5208-6230-0.html?forumID=5&threadID=200958&start=0 (here is also an alternative link: http://techrepublic.com.com/5208-11183-0.html?forumID=5&threadID=200958&start=0), where my own comments are posted under "Yeah in fact it does have an impact", "Re: Redress", and "Re: Commendation" (the latter two are replies to the user with the nick "acsmith"), all in the same branch. In the first thread above (i.e. the one on the "Ars Technica" forum) we discussed what heat actually does to the processor and other hardware components (see especially Rarian's posts); in one sentence, the problems that arise from heat, particularly temperature cycling, lead to metal fatigue and to faster chemical reactions, so the bottom line is that running your computer at a constant high temperature is better than running it at an oscillating (and relatively high) temperature. Also, a computer, like any mechanical device, sees most of the potentially "damaging" stress during power on/off cycling. So yes, by maintaining your CPU at full load (provided, of course, that you have adequate cooling and are not running at an excessively high voltage for an overclock), you will reduce the thermal cycling and increase the life of your CPU; in other words, turning the computer on and off (and to some extent putting it under load and taking it off load) causes cycling of the CPU's temperature and metal fatigue. Actually, I am planning to write a full article about this in the near future. Another interesting related thread is the Is there a limit on/in (not sure which) a number of "page faults" for a process ?? thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/607003096731 that I also opened on Ars Technica back then. It deals with the endlessly increasing number of page faults for the "svchost.exe" process (which is, by the way, a so-called "carrier process" for various native NT services), particularly the instance launched with the "-k netsvcs" switch, which in my case hosts no fewer than 16 NT services. I also recommend the related blog-entry on my Senserely blog entitled 15. Running computer non-stop or not (28.08.2006): http://www.senserely.com/tayiper-15_running_computer_non_stop_or_not_28_08_2006.php.

I've read that many manufacturers of specific computer components (such as hard drives and power supplies) have used MTBF ratings (which, by the way, stands for "mean time between failures") to express the life cycle of their products. This is an estimated frequency of mechanical failure based on stress testing. Note that "mean" means that 50% fail before that point and 50% fail after it, i.e. it is neither a prediction of minimum life nor a prediction of estimated life. Power supplies have published ratings such as 50,000 hours (a bit under 6 years) and hard drive ratings have been 300,000 hours or even higher (a bit over 34 years); note, however, that many computers are running 24 hours a day, 7 days a week, 365 days a year. For example, every network server must be running constantly, and servers generally use the same basic components as the average user's machine. But just because we know that the components are capable of running all of the time (be it at "full load" or not) does not necessarily mean that they should. Laptops in particular have a higher chance of heat-related problems (because they have very limited ventilation systems), so in addition to the obvious battery power savings, shutting them down when they are not being used will allow them to run cooler and generally more efficiently. So if you use your computer only to check your e-mail once in a while and such (or even if you use it constantly throughout the day), leaving it on during the day and turning it off at night makes perfect sense to a normal user (in a "turn it on when you need it and turn it off when you don't" manner), but if you take into account everything mentioned above, you will see that this is not necessarily the best practice. Also, if you hate waiting for the Microsoft Windows operating system to boot, leaving your computer on all the time will probably increase your "quality of life". If saving electricity is your concern, then the monitor is your biggest enemy. The display is the biggest single power consumer, so you can simply turn it off whenever you are not using the computer, but leave the computer itself on so you don't have to wait as long when you want to use it.
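Just as a quick sanity check of the figures quoted above, here is the trivial conversion of rated MTBF hours into years of continuous (24/7) operation, written as a small Python snippet:

HOURS_PER_YEAR = 24 * 365  # 8760

for label, mtbf_hours in [("power supply", 50_000), ("hard drive", 300_000)]:
    years = mtbf_hours / HOURS_PER_YEAR
    print(f"{label}: {mtbf_hours:,} h MTBF ~= {years:.1f} years of non-stop running")

# power supply: 50,000 h MTBF ~= 5.7 years of non-stop running
# hard drive: 300,000 h MTBF ~= 34.2 years of non-stop running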

A monitoring of new computers at Iowa State University found that the average computer running "all the time" costs only about $65 per year. If you were to shut your monitor off at nights and on weekends but leave the computer running, the cost would drop to about $40 per year. If you turn everything off at night and on weekends, the cost would drop to about $21 per year. Power-saving features are now a part of almost every computer/operating system and will put your computer and monitor into "sleep mode", which saves electricity. So there is no single answer to this question, but there are a few absolutes for those who plan to keep their computers running all the time. The first general recommendation is to invest in a good surge protector with a UL 1449 rating (or a UPS), since the likelihood of a power-related issue increases with the length of time your computer is running. The second is to always shut down and unplug your computer during an electrical storm (or a circuit cutout or power outage/failure). There is no way for your computer to get hit if it is not plugged in, and it is a cheap way of protecting it. Of course, this does not apply so much to reboots (compared to shut-downs), because when rebooting the computer is started again right away, so there is no time for the components to cool down. But I would personally still try to avoid rebooting the PC too often, since the computer has to go through the power-up/down cycles anyway.
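The dollar figures above depend entirely on the actual power draw and on the local electricity price, so here is only a sketch of how such a yearly estimate is computed; the 130 W draw and the $0.08/kWh rate are my own placeholder assumptions, not values taken from the Iowa State study:

def yearly_cost_usd(watts, hours_per_day, days_per_year=365, usd_per_kwh=0.08):
    """Estimate the yearly electricity cost of a device."""
    kwh = watts * hours_per_day * days_per_year / 1000.0
    return kwh * usd_per_kwh

print("PC + monitor on 24/7:    $%.0f/year" % yearly_cost_usd(130, 24))
print("10 h/day, workdays only: $%.0f/year" % yearly_cost_usd(130, 10, days_per_year=250))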






IS DEFRAGMENTING REALLY THAT NECESSARY


In short, yes and no; it all depends on many things. In some ways it certainly is necessary, since, as we all know, the hard-disk has to seek much more to access (and then read) all the fragments of a heavily fragmented file. However, in some cases fragmentation can actually even help by spreading read/write operations across the disk, thus somewhat aiding performance. An example of such a scenario would be a busy file server whose disk is fairly full (and most of its files heavily fragmented), although that kind of state of course slows the system down overall. But above all, remember that the so-called "seek time" is far more important than the sustained transfer rate, i.e. you can't have two I/O-intensive applications accessing the same disk without serializing access to it and therefore slowing down both of them. As usual, please check out the A good and free disk defragmenter thread on the "Ars OpenForum" for further info: http://episteme.arstechnica.com/eve/forums/a/tpc/f/99609816/m/972003161831.

But let's continue with some basic information on what hard-disk fragmentation actually is. Hard-disk fragmentation occurs because of the way information is stored on the disk. On a new, clean hard-disk, when you save a file it is stored in contiguous sections called clusters. If you delete a file that takes up, for example, five clusters, and then save a new file that takes eight clusters, the first five clusters' worth of data will be saved in the empty space left by the deletion and the remaining three will be saved in the next empty spaces. That makes the file fragmented, or divided. To access that file, the disk's read heads won't find all the parts of the file together, but must go to different locations on the disk to retrieve it all. That makes it slower to access. If the file is part of a program, the program will run more slowly, and in general a badly fragmented disk will slow down to a crawl. Basically, fragmentation happens when files are deleted and other files are written to the disk. The file system writes the new files into the gaps, but if some free-space gap is not big enough, the remainder of the file has to be written into another gap elsewhere on the disk, possibly several times. The file is correctly written; the only problem is that, when reading it, the disk heads have to jump from one place to the next, which takes a little time. When files are fragmented into many small pieces, these delays can become noticeable. It also wears the hard disk more.
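To illustrate the five-cluster/eight-cluster example above, here is a toy Python simulation with a naive first-fit allocator; it is only a sketch of the idea, of course, since real file systems such as NTFS use far smarter allocation strategies:

# Clusters 0-4 hold file A, clusters 5-14 hold file B.
disk = ["A"] * 5 + ["B"] * 10

def delete(name):
    for i, owner in enumerate(disk):
        if owner == name:
            disk[i] = None            # just mark the clusters as free

def write(name, clusters):
    """Naive first-fit: drop the file into the first free clusters found."""
    placed = []
    for i, owner in enumerate(disk):
        if owner is None and clusters > 0:
            disk[i] = name
            placed.append(i)
            clusters -= 1
    disk.extend([name] * clusters)    # whatever is left goes to the end
    placed.extend(range(len(disk) - clusters, len(disk)))
    return placed

delete("A")
print("Clusters of the new file C:", write("C", 8))
# -> [0, 1, 2, 3, 4, 15, 16, 17]  (two separate runs, i.e. a fragmented file)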

Today the number of files stored on volumes is much greater than in times past. This increased number of files not only requires more storage but, due to inherent fragmentation problems, also puts a burden on file systems to keep files stored contiguously. File systems need to be able to place files such that they have space to grow in a contiguous fashion. When files are created and deleted, unused space gets fragmented and pieces of free space are spread across the disk. These fragmented unused spaces encourage new files to be created in places where they cannot grow contiguously. They also encourage the file system to put fragments of larger files into these small free-space gaps. As a general rule, more files equals more fragmentation problems. Another fragmentation issue is the increasing size of files. The typical Word or PowerPoint document is bigger than ever. Additionally, the use of video and graphics files has become commonplace, and these files have grown to massive proportions. Bigger files have an obvious connection to increased file fragmentation. Another general rule: bigger files equal more fragmentation problems. With the exponential growth of storage, managing one's backup window also becomes a major challenge when designing storage architectures and setting backup practices; handling disk fragmentation is vital to managing backup windows when file-level backups are performed, and studies have shown that defragmenting before backups can decrease backup times. Some other factors are also worth mentioning, for instance the so-called "hard-disk command queuing" (NCQ or Native Command Queuing) that newer disks and controllers offer, which reduces the hard-disk's head movements and so somewhat reduces the impact of fragmentation, even if only slightly.


The performance hit happens when:
  1. you have large files
  2. you have files that are split into many small fragments
  3. you have files that are physically far apart from each other (on distant cylinders)
  4. you have files that are frequently used
  5. you have files that don't fit into the disk cache (this depends on the amount of cache and system memory; if you have a lot of it, fragmentation will not matter as much)
  6. you have files that must be read quickly, with the computer waiting for the file data (if a file is read slowly, in small pieces, while the computer does other things, fragmentation doesn't matter much)
  7. the computer doesn't have anything else to do that could usefully fill the disk-wait time.

Also note that when a file is deleted, it is just marked as deleted in the MFT (the data still physically exists until it is overwritten by something else; you don't have any real control over this unless you use some special tool), so the space that the file was occupying looks "available" to the operating system; since there is no longer any reference to it in the lookup table, the system considers it free space, ready to be written to. A hard drive has a table of contents (the MFT area) so that the system knows where files physically reside on the disk, and when you delete a file the operating system simply erases the file's entry from that table of contents, leaving the file's 1s and 0s where they were, but marked as free space for new files. You need a dedicated program to actually erase the file when you want to; otherwise it might survive for quite a while before a new file overwrites it. To access the hard drive, the computer uses this lookup table of where each file is on disk (see above): if you go to open, let's say, the "fooo.txt" file, a signal is sent to the system that you want that file; on a lower level the system looks at the table, seeks the file on the physical disk, and hands the data over to the program with which you attempted to open it.
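A minimal Python sketch of what such a dedicated "file eraser" does at its simplest: one zero-fill pass over the file's contents before removing the directory entry. Real wiping tools are more thorough, and on SSDs or journalling/compressed file systems even this pass is not a guarantee that the old bytes are really gone:

import os

def zero_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:     # open for in-place binary rewrite
        f.write(b"\x00" * size)      # overwrite the contents with zeros
        f.flush()
        os.fsync(f.fileno())         # push the zeros out to the disk itself
    os.remove(path)                  # only now remove the directory entry

# zero_and_delete("fooo.txt")        # the example file name used above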

Further, the best place for often-used files is near the middle of the disk (and near the other often-used files), and this is the main goal of XP's built-in file-placement optimization routine, i.e. the one you can invoke manually with the "rundll32.exe advapi32.dll,ProcessIdleTasks" command. But you have to realize that files aren't read in one large chunk; instead they are demand-paged as necessary to save memory (with some read-ahead for performance). So when you open a program, the primary binary is paged in just enough for Windows to get its dependency information and start it, and then it starts loading the dependent libraries as necessary, jumping from one to the other as new functions are needed from them. So if those libraries were heavily fragmented but close together, that would be better than if they were 100% contiguous but on separate sides of the disk. Even when a file gets accessed, it will often only be loaded in chunks, and many chunks may never need to be accessed, especially in .exes and .dlls. Because of this, clever fragmentation can even be a big benefit (see also the Theoretical question about disk fragmentation (future writes) thread that I opened back then about this exact question: http://episteme.arstechnica.com/eve/forums/a/tpc/f/99609816/m/253008692731), i.e. stick all the chunks of random files that get accessed frequently together and shunt the infrequently used chunks farther away. This is also why so many people say defragging is useless, namely that these chunks are accessed at random, so what benefit would having them neat and tidy be? Guess what: if Windows does resize the page file (which it will only do if the initial allocation, normally 1.5x the RAM size, or 1x the RAM size if RAM is >= 512 MB, is too small), it goes back to its previous size, and its previous fragmentation state, on your next reboot at the latest.
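For what it's worth, the very same idle-tasks command quoted above can of course also be shelled out to from a script; this trivial Python sketch does nothing more than run it:

import subprocess

# Triggers XP's idle-time tasks, including the file-placement optimization.
subprocess.run(["rundll32.exe", "advapi32.dll,ProcessIdleTasks"], check=True)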

The speed at which your hard drive transfers data is very important, especially if you need to copy a 20 GB file, like I did. My laptop was initially copying the file at 2 MB/s because the drive was stuck in PIO-only mode, which would have taken almost three hours. Not only is PIO terribly slow, it also consumes lots of CPU power: while copying that 20 GB file, the CPU usage stayed at 100%. Therefore, I tried to figure out the best way to increase the transfer rate. I changed the transfer mode to UltraDMA-6, a six-fold speed-up to 12 MB/s, and the 20-gigabyte file copied in roughly half an hour; CPU usage was also only about 20-30%. So how did the drive get dropped from UltraDMA to PIO-only mode in the first place? Well, because Windows has a particularly dumb way of handling transfer modes for storage devices: after six cumulative (all-time total) errors while reading or writing a storage device, Windows will automatically lower its transfer mode. Worse, it never goes back up unless you reinstall the device. This is bad if you put in a scratched CD, causing those six-in-a-lifetime errors to happen all at once. Even your hard drive will experience an occasional hiccup, so eventually its transfer mode is not safe either.
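The copy times in that story follow straight from size divided by rate; here is a quick check of the arithmetic in Python (treating 20 GB as 20,000 MB for simplicity):

SIZE_MB = 20_000

for label, rate_mb_s in [("PIO mode", 2), ("UltraDMA-6", 12)]:
    minutes = SIZE_MB / rate_mb_s / 60
    print(f"{label}: {minutes:.0f} minutes ({minutes / 60:.1f} hours)")

# PIO mode: 167 minutes (2.8 hours)   -> "almost three hours"
# UltraDMA-6: 28 minutes (0.5 hours)  -> roughly half an hour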



NAVIGATE:  next --> articles2.html  next --> myths.html









Copyright © Tadej Persic. Some Rights Reserved.


Disclaimer: The opinions expressed on my website and in my files are mine, or belong to other individuals/entities where so specified. Each product or service is a trademark of its respective company. All registered copyrights and trademarks (© and ™) referred to on this site remain the property of their respective owners. All information is provided as opinions only. Please also see the more complete version of this disclaimer on the "disclaimer.html" and "policy.html" pages.

All the pages on this website are labeled with the ICRA label.
The website is maintained solely by its author and is best viewed with a standards-compliant browser.







