> Certain browsers allow YouTube to set the internal resolution of the timer to a lower value
It's wild to me that browsers expose this kind of control over my system to third party developers. I think making the browser an "application platform" was overall a mistake. Call me crazy, but I just want a browser that fetches and displays web sites.
I don't quite get your objection here. Is it to the fact that browsers are commonly used to show video, or the fact that smoothly showing video requires changing certain power saving settings?
My objection is that merely visiting a website can invoke all kinds of unexpected things happening on my computer. As a web browser user, I don’t expect it to be modifying how timers work on my system, or accessing peripherals and radios, or programming my GPU or the memory of other processes, or my location, or writing to my filesystem, or basically anything else other than what is needed to draw text and images onto a browser window.
The browser isn't exposing it to websites. It's simply that playing media causes it to lower the minimum timer resolution on Windows. In the past it would also do this when just scrolling, among other things, if I remember correctly; I'm not sure if it still does.
Firefox uses a different method that doesn't require lowering the minimum timer resolution.
Either way, the global behavior is no longer true on modern Windows 10/11 machines (as of Windows 10 version 2004), since each process must now call timeBeginPeriod itself if it wants increased timer resolution:
https://randomascii.wordpress.com/2020/10/04/windows-timer-r...
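If you want to see the effect on your own machine, here's a rough sketch (mine, not from the linked post) that just times a burst of Sleep(1) calls. With the default ~15.6 ms resolution each one typically takes around 15 ms; if something has raised the timer resolution (or, on Windows 10 2004+, if this process has requested it itself), they drop to roughly 1-2 ms:

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    for (int i = 0; i < 100; ++i)
        Sleep(1);   // asks for 1 ms, but is rounded up to the current timer tick
    QueryPerformanceCounter(&t1);

    double avgMs = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                   / (double)freq.QuadPart / 100.0;
    printf("average Sleep(1): %.2f ms\n", avgMs);
    return 0;
}
```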
I was recently astonished to find that using speech recognition software on my computer finally made it silent. When I use speech recognition, my fan just stops. I investigated, and it doesn't actually stop, but the speech recognition software seems to slow the fan to its minimum speed, even though the CPU cores are getting hotter and hotter. You never know these days what programs do.
I noticed in the thread that someone mentioned using `Sleep(16, 1)` gives a stable 60 fps, but I always like to drop a link to https://gafferongames.com/post/fix_your_timestep/ and suggest decoupling your game movement from your fps. It's a bit more math, but in my experience it's usually pretty smooth.
That link isn't quite right either, since it sets `now` to the current time instead of the time the frame will actually be displayed. This means things get rendered at positions other than where they should be.
As for achieving smoothness, you'll need proper frame pacing, and the article doesn't explain how to do that properly.
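For anyone who hasn't read it, the core pattern the article describes is roughly this (a minimal sketch with illustrative names; it shows the fixed-step accumulator plus state interpolation and deliberately ignores the frame-pacing side):

```cpp
#include <chrono>

struct State { double x = 0, v = 120; };              // position (px), velocity (px/s)

State Integrate(State s, double dt) { s.x += s.v * dt; return s; }
void  Render(double x) { /* draw the object at interpolated position x */ }

int main()
{
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 120.0;                    // fixed simulation step
    double accumulator = 0.0;
    State prev, curr;
    auto last = clock::now();

    for (int frame = 0; frame < 1000; ++frame) {      // stand-in for the real game loop
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        while (accumulator >= dt) {                   // advance the simulation in fixed steps
            prev = curr;
            curr = Integrate(curr, dt);
            accumulator -= dt;
        }

        double alpha = accumulator / dt;              // how far we are between sim states
        Render(prev.x * (1.0 - alpha) + curr.x * alpha);
    }
}
```

Note that `now` here is when the frame starts being built, not the time the frame will actually be displayed, so positions end up slightly behind where they should be; a more careful renderer would interpolate toward the predicted display time instead.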
The same is true when running Winamp. When I was dabbling with FreeBASIC many years ago, my games performed better when I was listening to music. Same reason!
Another example of Windows' technical debt sitting there, low-hanging-fruit-wise, to be cashed in by performance-oriented developers. It's interesting that YouTube changing the timer resolution propagates to other processes .. does this hark back to darker days in the MS-DOS era? YouTube, another Turbo button?
I was told a story by some hackers in the old multi-user mainframe days. They said that a good speed booster was to have the program open a terminal because it made the mainframe OS think it was a real-time user interactive program and give it more resources.
.. because there was, once, a day when the terminal buffer available for this pipe was bigger than available memory offered to a process by default, meaning the thing would unpack faster if I did that versus:
$ tar zxvf somefile.tar.gz
Admittedly, this discrepancy of available memory was usually because the BOFH hadn't realized there was also an allocation for pipes-per-user, so it was a neat trick to get around the hard limits that BOFH had imposed on some of my processes in terms of heap allocation ..
Is this related to the thing where, when you're scrolling and selecting within a document and you wiggle the mouse, it scrolls faster? I always thought it was just a nice UI optimisation, but I could believe it's actually some accidental side effect at play.
(Like, make a 20-page Word doc and start selecting from the first page and drag through - it will go faster if you jiggle. Same in Excel and nearly every Windows app, even Windows Explorer.)
No, it has to do with the fact that every time you move the mouse over a window, a hover event is sent to the application, which runs its main event loop. Either the installer only updated its progress bar when an event happened (in which case it would only appear to be going faster, because the progress bar would move more smoothly), or there was some really terribly written code that literally only made progress when an (unrelated) event happened. My guess is the former.
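In sketch form, the suspected anti-pattern looks something like this (purely illustrative Win32 code, not from any real installer):

```cpp
#include <windows.h>
#include <commctrl.h>                   // PBM_SETPOS; link against comctl32.lib

static HWND g_progressBar;              // hypothetical progress bar control
static volatile LONG g_percentDone;     // updated by a worker thread (not shown)

LRESULT CALLBACK InstallerWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_MOUSEMOVE:
        // Anti-pattern: the bar is only refreshed when *some* message arrives,
        // so wiggling the mouse floods the queue with WM_MOUSEMOVE and the bar
        // repaints far more often -- progress merely *looks* faster.
        SendMessage(g_progressBar, PBM_SETPOS, (WPARAM)g_percentDone, 0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```

The obvious fix is to refresh on a SetTimer/WM_TIMER tick, or whenever the worker actually reports progress, independent of unrelated input events.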
There must be so many subtle features like these that people use subconsciously, and when they try to move to another operating system, they try it, nothing happens and they get frustrated.
A performance issue related to this is more likely a shortcoming in the software experiencing it.
The setting in question is the minimum timer resolution. Changing this will only have an impact on applications that depend heavily on that resolution, i.e. it's not some sort of turbo button for general execution speed. In fact according to the docs, a higher resolution can "reduce overall system performance, because the thread scheduler switches tasks more often."
An application whose performance depends on the timer resolution should be setting that resolution itself, using the Win32 API function mentioned in the thread, timeBeginPeriod, which includes the following in its documentation:
> For processes which call this function, Windows uses the lowest value (that is, highest resolution) requested by any process. For processes which have not called this function, Windows does not guarantee a higher resolution than the default system resolution.
> Starting with Windows 11, if a window-owning process becomes fully occluded, minimized, or otherwise invisible or inaudible to the end user, Windows does not guarantee a higher resolution than the default system resolution. See SetProcessInformation for more information on this behavior.
> Setting a higher resolution can improve the accuracy of time-out intervals in wait functions. However, it can also reduce overall system performance, because the thread scheduler switches tasks more often. High resolutions can also prevent the CPU power management system from entering power-saving modes.
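In practice that just means bracketing the part of your program that genuinely needs fine-grained sleeps, something like this (a minimal sketch; the loop body and numbers are placeholders):

```cpp
#include <windows.h>
#include <mmsystem.h>                    // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

void RunFramePacedLoop()
{
    const UINT kPeriodMs = 1;
    // Request 1 ms timer resolution for this process. On Windows 10 2004+ this
    // no longer changes the global setting for everyone else.
    if (timeBeginPeriod(kPeriodMs) != TIMERR_NOERROR)
        return;                          // request refused; degrade gracefully

    for (int frame = 0; frame < 600; ++frame) {
        // ... update and render one frame ...
        Sleep(15);                       // now sleeps close to the 15 ms asked for
    }

    timeEndPeriod(kPeriodMs);            // always pair the call with the same value
}
```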
Think of it this way: the global timer resolution of the system is minOf(allProcessesTimerResolution). If no process needs higher-accuracy timing, then there is nothing hindering the system from sleeping for longer periods to save power and/or have less interrupt overhead (a feature, I'd say).
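(If you're curious what the kernel has actually settled on, the undocumented ntdll call NtQueryTimerResolution reports it; a quick sketch below, with values in 100 ns units, so 156250 = 15.625 ms and 10000 = 1 ms.)

```cpp
#include <windows.h>
#include <cstdio>

// Undocumented ntdll export, resolved dynamically; all values are in 100 ns units.
typedef LONG (NTAPI *NtQueryTimerResolutionFn)(PULONG MinimumResolution,
                                               PULONG MaximumResolution,
                                               PULONG CurrentResolution);

int main()
{
    auto fn = (NtQueryTimerResolutionFn)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtQueryTimerResolution");
    if (!fn) return 1;

    ULONG coarsest = 0, finest = 0, current = 0;    // "minimum" here means coarsest
    fn(&coarsest, &finest, &current);
    printf("coarsest %.3f ms, finest %.3f ms, current %.3f ms\n",
           coarsest / 10000.0, finest / 10000.0, current / 10000.0);
    return 0;
}
```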
These APIs are from the early 90s, and back then having a global system interrupt firing 1000 times per second could very well have cost a percent or two (or more) of overall CPU performance (people already complained about the "overhead" of having a "real OS").
On the other hand, writing audio players on DOS you had the luxury of receiving your own interrupt within a few samples' worth of audio, which meant you could have very tight audio buffers with less latency and quicker response to user triggers.
Not having that timing fidelity available would have made Windows a no-go platform for audio software, so giving developers the freedom to enable it when needed was necessary. Removing it in the next 10 years would probably have risked bad regressions.
Like a sibling comment noted, they finally removed the global behavior during Windows 10's lifespan; with modern CPUs _AND_ multicore, they probably felt safe enough with their performance margins to move high-accuracy threads/processes onto separate cores, let the other cores sleep more, and actually win more battery life out of it.
It might not be "perfect engineering", but considering the number of schedulers written for Linux over the years to address desktop (audio) vs server loads, it was a fairly practical and usable design.
DOS was basically bare-metal programming with a few hardware and software calls thrown in. With 50-cent ARM processors these days having the power of an 80s mainframe, bare-metal on a $5 dev board is still my preferred way to go for simple projects that boot instantly and never need updates. I'm currently listening to music on a DOS MP3 player on a throwaway industrial x86 motherboard I built into an amplifier case 23 years ago.
> There should at least be mention that changing this resolution can affect other processes.
That sorta is what it’s saying. If you don’t set it yourself, you won’t get any better than the “default system resolution”. But if the default system resolution changes (say, by entering a sort of “performance mode” when playing games or watching videos), then presumably it will affect all processes that are using the default, right?
Sorta, on Windows prior to 10, version 2004. From the same Microsoft page:
“Prior to Windows 10, version 2004, this function affects a global Windows setting. For all processes Windows uses the lowest value (that is, highest resolution) requested by any process. Starting with Windows 10, version 2004, this function no longer affects global timer resolution.”
I mean, sure, it implies things. But we all know that devs have a hard time reading between the lines when the compiler is boiling away.
You get it, I get it, but I guarantee you there are a thousand developers for each one of us who won't get it and wonder why the heck things change now and then, without realizing they also need to test their timer-dependent code under less than hygienic conditions in order to validate the results ..
I think this is technically a distasteful situation, and whoever wrote those technical docs kind of wanted to avoid having to admit the painful truth and just outright state that changing the timer resolution will have a system-wide impact, because .. really .. why should it? There is no good reason for it. Only bad reasons, imho. Ancient, technical-debt'ish kinda reasons.
Windows, man. While Linux is cursed in many ways, not being able to just know your PC's performance profile seems so backwards to me. It's one of those things (lack of control) I don't miss.
So much that sucks about today’s world comes from people who push Pro Skub vs. Anti Skub attitudes, while being so ignorantly confident that their side is the right one.
Good question. I'm not even sure what I was looking up when I found that thread; I think I was curious about something else entirely. But as someone who learned to code with VB6, the link intrigued me. I mean, come on, that thread title is just great.
I wanted FreeBASIC to have a RAD IDE back when I was still clinging to VB6 as it was being replaced by VB.NET. I hope someday Microsoft open sources bits and pieces of VB6.
Connections like this are fun and interesting, but they highlight what a complete junk pile our (extractive, spying, slow, bloated, eating-power-for-no-reason) stack is. We need a rewrite, starting from the boot loader, of almost every OS in use in the world.
No, the new OS can run the same applications with about 80% less electricity. Apps are exactly the point, that's correct, which is why the bloat-monster stack is ridiculous: it's not needed.