
Transferring CPU load to the GPU!


Old 23rd Apr 2023, 11:54
#1
Ant
even ants need some lovin'
Thread Starter
 
Join Date: Jan 2001
Location: Kent, UK.
Posts: 157
Likes: 0
Received 0 Likes on 0 Posts
Transferring CPU load to the GPU!

I'm a regular user of Accuweather.com's very useful rain radar animation facility.
Right now there is extensive rainfall activity over northern Europe and Britain, so the radar animation, which shows rainfall over the last two hours plus projected rainfall for the coming hours, is pretty busy!

I noticed long ago that when Accuweather animations are running, the CPU fan speeds up and Task Manager shows CPU utilisation at 100% but GPU at only 2%. Isn't that strange, given that it is surely the GPU that should be doing the animation processing?

Also, is there some way of balancing the processing load more evenly, so that the computer isn't slowing down under the weight of the CPU load?
We are running an ASRock AB350M motherboard with an integrated Radeon Vega 8 GPU. A search of the ASRock support website gives no help, and delving into the BIOS/UEFI doesn't help either.
A generic internet search on the subject has not helped; perhaps I need to phrase the search differently!
Would value any input!
Ant is offline  
Old 23rd Apr 2023, 14:47
#2
 
Join Date: Jul 2013
Location: NV (LAS)
Age: 76
Posts: 215
Received 18 Likes on 9 Posts
The app has to be written to use the GPU.
IBMJunkman is offline  
Old 28th Apr 2023, 16:22
#3
 
Join Date: Aug 2009
Location: Bozeman, MT
Age: 64
Posts: 43
Likes: 0
Received 0 Likes on 0 Posts
A small chance of success...

This link might help. https://www.howtogeek.com/756935/how...in-windows-11/

If you are accessing the site from Chrome/Edge, you might check that 'Use hardware acceleration when available' is turned on. https://www.howtogeek.com/412738/how...off-in-chrome/ (Typing chrome://gpu into Chrome's address bar will also show what the browser is actually accelerating.)
skylimey is offline  
Old 29th Apr 2023, 06:57
#4
 
Join Date: Oct 2019
Location: USA
Posts: 864
Received 214 Likes on 118 Posts
GPUs are generally massively parallel processors - you put in some data that needs to be acted upon in, well, parallel, and a serial stream of results comes out.

For example - you can load them with the vertices of triangles, along with the related colors and texture-map pointers, and the individual processors in the GPU then consult a view transformation (rotation, scaling, perspective) matrix and apply it to all that data on a per-pixel basis. The bulk of the animation is making small changes to the data or to the transformation matrix, so the amount of data flowing in per frame is small.
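
As a rough illustration of that parallelism, here's a sketch in Python/numpy - not anything a GPU actually executes, just the shape of the work: one matrix multiply that transforms every vertex, where each row is independent and could go to its own GPU core.
[code]
# Toy sketch of a GPU vertex transform (illustrative only).
import numpy as np

angle = np.radians(30.0)
# 4x4 view transform in homogeneous coordinates: rotation about the Z axis
view = np.array([
    [np.cos(angle), -np.sin(angle), 0.0, 0.0],
    [np.sin(angle),  np.cos(angle), 0.0, 0.0],
    [0.0,            0.0,           1.0, 0.0],
    [0.0,            0.0,           0.0, 1.0],
])

# Three triangle vertices as homogeneous coordinates (x, y, z, w)
vertices = np.array([
    [ 0.0,  1.0, 0.0, 1.0],
    [-1.0, -1.0, 0.0, 1.0],
    [ 1.0, -1.0, 0.0, 1.0],
])

# One multiply transforms all vertices at once; per-frame animation only
# changes 'view', not the vertex data, so little new data flows in per frame.
transformed = vertices @ view.T
print(transformed)
[/code]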

In 2D animations it is possible that most of the data is changing. Encoded into the data is what color each pixel is; there may be some 2D scaling, a pretty simple process, but the bulk of the processing is decompressing the incoming image data, an operation that isn't easy to parallelize. Typical compression stores each pixel as a delta from the previous one - it's tough to start in the middle of the chain of deltas, and it saves no time to break the task up and schedule the pieces. Imagine trying to knit a scarf by handing individual stitches or small pieces to different knitters and then putting them all together.
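
A toy sketch (Python; real codecs are vastly more complex) of why a chain of deltas has to be decoded serially:
[code]
# Delta coding: each stored value is the difference from its predecessor.
def delta_encode(pixels):
    out = [pixels[0]]
    for i in range(1, len(pixels)):
        out.append(pixels[i] - pixels[i - 1])  # store only the difference
    return out

def delta_decode(deltas):
    pixels = [deltas[0]]
    for d in deltas[1:]:
        pixels.append(pixels[-1] + d)  # needs the previous result: serial
    return pixels

row = [10, 12, 15, 15, 14, 20]
assert delta_decode(delta_encode(row)) == row
[/code]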

However, if you ever see a TV using HDMI suddenly break into big blocks of random junk, you can tell that the input stream has restart points. Those points add overhead, making the stream larger than it would be without them. You may also notice that in highly dynamic scenes the image gets blockier and fine detail disappears - data is discarded to keep the deltas, and therefore the amount of data required in the stream, smaller. It may be tempting to think of these as blocks that can be worked on independently, but the data arrives serially and has to be decoded as fast as it comes in.
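
Continuing the toy sketch above, restart points might look like this - an absolute value dropped in every few samples so a decoder can resync mid-stream, at the cost of a larger stream (illustrative only; real video keyframes cover whole frames, not single pixels):
[code]
KEY_INTERVAL = 4  # store an absolute 'key' value every N samples

def encode_with_keys(pixels):
    out = []
    for i, p in enumerate(pixels):
        if i % KEY_INTERVAL == 0:
            out.append(("key", p))                  # restart point: absolute
        else:
            out.append(("delta", p - pixels[i - 1]))  # otherwise: difference
    return out

def decode_from(stream, start):
    # Decoding can begin at any 'key' entry, not only at the very start.
    assert stream[start][0] == "key"
    pixels = [stream[start][1]]
    for kind, v in stream[start + 1:]:
        pixels.append(v if kind == "key" else pixels[-1] + v)
    return pixels

row = [10, 12, 15, 15, 14, 20, 21, 19, 18]
stream = encode_with_keys(row)
assert decode_from(stream, 4) == row[4:]  # resync mid-stream at a key
[/code]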

The decoding part - after the decryption stage that many video sources impose - can be handled by a chip that sells for less than $10 in quantity, though even a dedicated chip may not be necessary.

Many CPUs include specific instructions to decode video streams, because it is far more likely that a computer user won't have a massively parallel GPU but is certain to have a CPU - cell phones, tablets, low-end laptops - so I can see why the video stream would be as compressed as possible and targeted at the widest audience. See https://www.intel.com/content/www/us...and-newer.html for Intel Core Gen 7 processor capabilities, and https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video, which goes back to 2011.
MechEngr is offline  
Old 6th May 2023, 15:08
#5
Ant
even ants need some lovin'
Thread Starter
 
Join Date: Jan 2001
Location: Kent, UK.
Posts: 157
Likes: 0
Received 0 Likes on 0 Posts
Thanks for your contributions guys. Very interesting.
A few days back, I read with anticipation skylimey's howtogeek link above on the subject of 'hardware accelerated GPU scheduling'. However, I tried both methods shown in the tutorial, first the Windows Settings method and then the regedit one. Neither method gave access to any scheduling option, so I am guessing it is a feature available only on some GPUs and not others.
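
For anyone who wants to check from the other direction, here is a small Windows-only Python sketch (standard-library winreg) that looks for the registry value I understand the tutorial's regedit method toggles - treat the HwSchMode name as an assumption taken from that tutorial:
[code]
# Check whether the hardware-accelerated GPU scheduling value exists at all.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        value, _ = winreg.QueryValueEx(k, "HwSchMode")
        print(f"HwSchMode = {value} (2 = on, 1 = off)")
except FileNotFoundError:
    # Value (or key) absent: the GPU/driver likely doesn't expose the feature.
    print("HwSchMode not present - feature likely unsupported here")
[/code]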

I also experimented with browser-controlled hardware acceleration (I'm using CCleaner's version of Chrome). Various experimental runs gave no consistent results. Just to prove the point, I am right now running the Accuweather rain radar (lots of rain in southern England right now) while simultaneously running 2 live railside feeds from railcam.uk. That is a lot of processing, yet most surprisingly, CPU load right now is only around 20%. My take on this is that miscellaneous background processes, such as antivirus and malware scans amongst others, run silently at various times throughout the day, causing the CPU load variations experienced in my top post and thus making it rather difficult to judge the effectiveness of any experimental 'tweaks' to balance the CPU/GPU load.
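
For what it's worth, a quick way to watch for those background spikes is to log the CPU load over time and compare quiet and busy periods - a small Python sketch, assuming the third-party psutil package is installed:
[code]
# Log overall CPU load once per second for a minute, then summarise, to help
# separate the steady animation load from transient background spikes.
import psutil

samples = []
for _ in range(60):
    samples.append(psutil.cpu_percent(interval=1.0))  # blocks 1s per sample

print(f"min {min(samples)}%  max {max(samples)}%  "
      f"avg {sum(samples) / len(samples):.1f}%")
[/code]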

I mentioned railcam.uk. Here is their take on hardware acceleration as mentioned in their troubleshooting section:
Hardware Acceleration - Turn it off!

This seems to be at the root of many playback problems. Fortunately, many browsers allow this to be turned off.
So there you are.
And thank you also, MechEngr, for your really fascinating glimpse into the mysterious world of the GPU. Like most people, I guess it's something I seldom bothered thinking about as long as a picture presented itself on the monitor. But when I stop and ponder what's actually happening automatically 'under the hood', it all seems to be a minor miracle which for so long I took for granted. Things have come such a long way in home computing since my brother's ZX Spectrum and my Commodore VIC-20.
Ant is offline  
