After the earlier post this week about power-consuming programs, I set out to discover a couple of facts for myself as a gamer.

We all know there are different tiers of CPUs that seem to cater to entirely different crowds.
Most simple facts seem to indicate that the i9, for example, or its AMD counterpart are best served when you want something really, really fast: downloading at super-high speeds, for example, or, let's say, gaming at very high FPS rates (I'll keep the examples gaming oriented).

With this knowledge in hand, I decided to check out some games in windowed mode, with XTU running next to them to provide some monitoring of the CPU's behavior.

The games I checked out are Mass Effect: Andromeda, Ashes of the Singularity and Total War: Warhammer 2.

Warhammer 2 is just a fine, normal title. It runs most of the time on 1 core, though lengthy battles can increase that to 2 or even 3 cores used.

(My system, btw, is a 10600K, not OC'd, a 2060 Super and 32 gigs of RAM.)

Ashes of the Singularity goes all the way in the benchmark, taking all cores, but it never used more than about 70 W of power.

Andromeda is, all in all, the most CPU-consuming title, using almost 100% load during the boot of the game world and falling to about 30 to 50% load afterwards, where most of the time it uses all 6 cores.

If the CPU uses all cores, could it be a safe assumption to say "gee, if a game is this eager to use all mah cores, it could benefit from a core or 2 more?"
Zimerius: *snipsnapsnop*
What you're describing is partial bottlenecking. If a CPU isn't used at 100%, chances are that your GPU is at 100%, and vice versa. If neither your CPU nor your GPU is loaded at 100%, then it's the game engine that dictates this.

A CPU isn't really responsible for download speeds. Your internet connection and your hard drive are a little more important than your CPU ;). While a CPU is important for every task in one way or another, there are a lot of cases where less can mean more with a more gaming-focused CPU with fewer cores. This is mostly because most games are designed with fewer cores in mind.

For most video games, what matters most is the GPU (your graphics card). A faster graphics card usually means better FPS, while a faster HDD and RAM can mean better loading times regardless of how new your CPU is. GPUs are also true beasts when it comes to brute-forcing stuff. They crunch data like no other. As a result, they're not as accurate as CPUs in terms of how they process data, due to how they are designed. They're designed this way for very specific reasons.

On the other hand, a lot of games don't fully utilize the cores in your CPU because of how the game itself is designed. Programmers have to direct the game's engine so it understands what it has to do in order to distribute the work it needs done across different cores. More cores = more simultaneous throughput. Most video game programmers focus more on the GPU side of things than the CPU. The CPU usually handles stuff like polygonal coordination (placing the dots that form the edges of a polygon), AI, and physics computation (if the GPU cannot handle physics, that is). Your CPU also processes and compiles data while the game loads, which is why you sometimes see CPU loads at 100% during loading times. It depends on the game engine, your RAM and your hard drive as well.

Ashes is a special game because it automatically distributes minor workloads across many different CPU cores. This is why it doesn't work itself above about 65% CPU load in your case: it doesn't need more power in order to work as intended.
Post edited February 06, 2021 by Dray2k
Zimerius: *snipsnapsnop*
Dray2k: ........
Oke oke, well, regarding the CPU and downloading, I guess I left a whole bunch of things out of the equation, but yea, thank you for this response.... Ever since my Ryzen 5 2600 debacle I'm a bit on edge regarding which specific hardware I need. Before that, the i7 920 I had fulfilled its duty with pride, and the Xeon x5056 performed even better, but then that thing came along and really, it felt worse than everything before it. This new i5 I'm running performs pretty much exemplarily, but I guess I just need to check the other parts out for myself. Some things I just can't wrap my head around, like how at 2K my whole system seems to perform better than at 1080p, but maybe that is because developers aim more at 2K and 4K than at 1080p?
Dray2k: ........
Zimerius: Oke oke, well, regarding the CPU and downloading... *snip*
This may happen because the game is programmed to take advantage of more modern aspects of the driver.
Sometimes certain things in certain GPU drivers are badly programmed; it depends on your driver and graphics card. Some of these things also depend on how clean your OS is. If many of your files are scattered all over the place, some effects can be funny, like things looking different at a certain resolution. These problems are almost impossible to really pin down, because many things could cause them.

It also helps your understanding of computers if you think of a game as nothing but a factory that produces data, which your hardware must read and process properly. Some CPUs can process that data in a more effective way.
Dray2k: GPUs are also true beasts when it comes to brute forcing stuff. They crunch on data like no other. As a result, they're not as accurate as CPUs in terms of how they process data due to how it is designed. They're designed this way for very specific reasons.
GPUs are designed for the specific case where you have to do the exact same calculations on a large amount of data. This works great for many tasks, like computing the Mandelbrot set or matrix multiplication. However, they are not good for situations where different actions need to be taken depending on the data; for example, they're actually not that great at prime factorization (which matters because some encryption algorithms, like RSA, can be broken if you're able to factor the public modulus, which is the product of two large primes from which the private key is derived).
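To make that concrete, here's a tiny C++ sketch of the two workload shapes (my own toy illustration, not from any real engine or GPU API): the Mandelbrot loop runs the same short instruction sequence for every pixel with a fixed iteration cap, which maps well onto a GPU's lockstep lanes, while trial division runs for a completely data-dependent number of steps, which is the shape GPUs handle poorly.

    #include <complex>
    #include <cstdint>
    #include <vector>

    // GPU-friendly shape: every pixel executes the same instruction
    // sequence, capped at max_iters, over a big regular grid of data.
    int mandelbrot_iters(std::complex<double> c, int max_iters = 256) {
        std::complex<double> z = 0;
        int i = 0;
        while (i < max_iters && std::norm(z) <= 4.0) {
            z = z * z + c;
            ++i;
        }
        return i;
    }

    // GPU-unfriendly shape: the loop length depends entirely on n,
    // so parallel "lanes" would finish at wildly different times.
    std::uint64_t smallest_factor(std::uint64_t n) {
        for (std::uint64_t d = 2; d * d <= n; ++d)
            if (n % d == 0) return d;
        return n;  // n is prime
    }

    int main() {
        std::vector<int> image(64 * 64);
        for (int y = 0; y < 64; ++y)          // same work per element
            for (int x = 0; x < 64; ++x)
                image[y * 64 + x] = mandelbrot_iters({x / 32.0 - 1.5, y / 32.0 - 1.0});
        volatile std::uint64_t f = smallest_factor(1000003);  // data-dependent runtime
        (void)f;
        return image[0] >= 0 ? 0 : 1;
    }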
Dray2k: The CPU usually handles stuff like polygonal coordination (plaing the dots that form the edges of a polygon), AI and physics computation (if the GPU cannot handle physics, that is). Your CPU also processes and compiles data while it loads, which is sometimes you may see CPU loads at 100% during loading times. It depends on the game engine, your RAM and your hard drive as well.
Do you mean "placing" rather than "plaing"?

One important task that is done by the GPU is transforming the 3D coordinates into 2D space (this is done by the vertex shader; the old fixed-function pipeline actually did a matrix multiply to do this in hardware).
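As a rough illustration (a hand-rolled sketch, not any particular engine's or API's math library), that transform boils down to one 4x4 matrix multiply per vertex followed by a perspective divide; the projection matrix values below are an arbitrary example with near = 1 and far = 10:

    #include <array>
    #include <cstdio>

    using Vec4 = std::array<float, 4>;
    using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major

    // What a vertex shader (or the old fixed-function pipeline)
    // does to every vertex: multiply by a 4x4 matrix.
    Vec4 transform(const Mat4& m, const Vec4& v) {
        Vec4 out{};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                out[row] += m[row][col] * v[col];
        return out;
    }

    int main() {
        // Toy perspective projection (90-degree FOV, near = 1, far = 10).
        Mat4 proj = {{{1, 0, 0, 0},
                      {0, 1, 0, 0},
                      {0, 0, -11.0f / 9.0f, -20.0f / 9.0f},
                      {0, 0, -1, 0}}};
        Vec4 world = {1.0f, 2.0f, -5.0f, 1.0f};  // 5 units in front of the camera
        Vec4 clip = transform(proj, world);
        // Perspective divide: clip space -> normalized device coordinates.
        std::printf("ndc = (%.2f, %.2f)\n", clip[0] / clip[3], clip[1] / clip[3]);
    }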

Physics can be done with a compute shader, at least until you have to check for collisions and handle them.

If the game is shader heavy (in other words, if the time it takes to compile the shaders is noticeable), the game should cache its shaders and only re-compile them if there's a change in the GPU, the graphics driver, or the shader itself (for a release build, the shader should only change when the game is updated; developer builds go through a lot more shader churn).
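The cache key can literally be "GPU + driver + shader source". Here's a minimal sketch of that idea; compile_program is a stand-in for whatever your graphics API actually provides (e.g. retrieving a binary with glGetProgramBinary on OpenGL), and the hashing/file-naming scheme is my own invention, not any engine's real format:

    #include <cstddef>
    #include <filesystem>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>
    #include <vector>

    // Stand-in for the real (slow) compile step of a graphics API.
    std::vector<char> compile_program(const std::string& source) {
        return {source.begin(), source.end()};  // fake "binary" for this sketch
    }

    std::vector<char> load_cached_or_compile(const std::string& source,
                                             const std::string& gpu_name,
                                             const std::string& driver_version) {
        // Key on everything that should invalidate the cache:
        // the GPU model, the driver version, and the shader text itself.
        std::size_t key = std::hash<std::string>{}(
            gpu_name + '\0' + driver_version + '\0' + source);
        std::ostringstream path;
        path << "shader_cache/" << std::hex << key << ".bin";
        std::filesystem::create_directories("shader_cache");

        if (std::ifstream in{path.str(), std::ios::binary}) {
            // Cache hit: no recompile needed.
            return {std::istreambuf_iterator<char>(in), {}};
        }
        std::vector<char> bin = compile_program(source);  // cache miss: compile once
        std::ofstream(path.str(), std::ios::binary)
            .write(bin.data(), static_cast<std::streamsize>(bin.size()));
        return bin;
    }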

Also, if you have enough RAM to put the entire game on a ramdisk and still have it run well, and you're using an OS with a proper disk cache (in other words, not Windows XP), then disk access speed should only matter the first time you start the game after booting, or if you've since used enough RAM and/or disk traffic for the game to be evicted from the cache. (Note that rarely-used game data might still force some loads, like if you enter an area with lots of unique textures for the first time since the last boot.) (Note that you do not actually need to put the game on a ramdisk to see this benefit.)
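If you want to see the disk cache in action, a throwaway benchmark like this (my own toy, nothing rigorous) usually shows the second read finishing in a fraction of the time of the first, provided the file wasn't already cached:

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <vector>

    // Time a full sequential read of one file.
    double read_seconds(const char* path) {
        auto t0 = std::chrono::steady_clock::now();
        std::ifstream in(path, std::ios::binary);
        std::vector<char> buf(1 << 20);  // read in 1 MiB chunks
        while (in.read(buf.data(), static_cast<std::streamsize>(buf.size())) ||
               in.gcount() > 0) {}
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: " << argv[0] << " <big-file>\n"; return 1; }
        std::cout << "1st read: " << read_seconds(argv[1]) << " s\n";  // from disk
        std::cout << "2nd read: " << read_seconds(argv[1]) << " s\n";  // from page cache
    }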
dtgreene: Do you mean "placing" rather than "plaing"?

*snipsnaptherest*
Yes and thanks for the elaboration!
To Dray2k's point: adding more CPU cores will only show a tangible benefit in situations where all existing CPU cores are near full utilization and when the current workload has enough threads to be further distributed across additional CPU cores.

If your existing cores aren't running at close to 100% utilization, then adding more cores won't be of much benefit since your existing cores still have more capacity. Instead of running 6 cores at 65% utilization, you might run 8 cores at 50% or 10 cores at 40%, etc. But you're not actually doing more work. You're just spreading the same amount of butter across more pieces of toast, so to speak. :-)
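To put numbers on the butter analogy: the total work is what's fixed. 6 cores at 65% is 6 × 0.65 ≈ 3.9 cores' worth of work; spread that same 3.9 over 8 cores and you get 3.9 / 8 ≈ 49%, or over 10 cores 3.9 / 10 = 39%, which is where the ~50% and ~40% figures above come from.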

Also, adding more cores won't help if your software isn't capable of putting those extra cores to use. Even with multi-threaded programs, there are often hard limits to how many threads the program can run. For example, let's say a game uses four threads: a primary general purpose thread, and then dedicated threads for audio, AI and networking. Each thread can be assigned to a separate CPU core - but a single thread cannot be run on multiple cores simultaneously. So once you go beyond 4 cores the game has no further threads to assign to those additional cores. Sure, a few more cores would be helpful to handle the operating system's various background processes, but eventually you're going to hit the point where you can't break your workload down any further. For gaming, that point is typically around 4 to 6 cores, give or take a couple depending on the game.
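A bare-bones C++ sketch of that fixed-thread-count design (the loop bodies and names are invented, purely for illustration): four threads total, so on a 4-core chip each can get its own core, while on a 16-core chip the remaining cores have nothing of this game to run.

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> running{true};

    // One loop per subsystem. A single thread can only occupy one core
    // at a time, so beyond 4 cores this game has nothing left to hand out.
    void audio_loop() { while (running) std::this_thread::yield(); /* mix audio    */ }
    void ai_loop()    { while (running) std::this_thread::yield(); /* run AI       */ }
    void net_loop()   { while (running) std::this_thread::yield(); /* move packets */ }

    int main() {
        std::thread audio(audio_loop), ai(ai_loop), net(net_loop);
        // The main thread plays the "primary general purpose thread" role.
        auto quit_at = std::chrono::steady_clock::now() + std::chrono::seconds(1);
        while (std::chrono::steady_clock::now() < quit_at) { /* game logic + rendering */ }
        running = false;  // pretend the player quit
        audio.join(); ai.join(); net.join();
    }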

I'll also take a slight exception to your statement that "the i9 for example or its AMD counterpart are best served when you want something really really fast". Well... not always. The Intel Core i9 and AMD Ryzen 9 are best served when you're running a heavily threaded workload. That CAN result in things getting done faster. But if you're just looking for raw core speed, you would be better served using a chip with a lower core count and overclocking that chip to a higher frequency. That is much easier to do with a lower core count because it requires less voltage and produces less heat.
If your primary use case is something like 3D modeling, video editing, sound design, etc. then more cores can help. But if your primary use case is gaming or other low thread-count applications then you're best served with a 6 to 8 core CPU and overclocking the heck out of it.
Dray2k: What you're describing is partial bottlenecking.
Indeed.
Andromeda isn't all that demanding on my i7-6700, because I play at 1440p. Actually, I didn't find it a very GPU-intensive game either. It played very smoothly on relatively high graphics settings on my GTX 970! (Note: I don't care that much about frame rate, as long as it's stable.)

In general: CPU speed is important for (some) non-gaming software, multi-tasking (running multiple programs simultaneously), and (some) strategy games.
What else? Some games demand a lot from your CPU for non-obvious reasons (bad optimisation?)

I don't play many (modern) strategy games these days, so I expect my quad-core CPU to be fine for the next couple of years (gaming + other, general purposes)... Maybe the latest generation of consoles will translate to more CPU-demanding PC games, but that's still an unknown factor - and no worry for me, since I'm always a few years behind (or playing really old games).
Ryan333: If your primary use case is something like 3D modeling, video editing, sound design, etc. then more cores can help. But if your primary use case is gaming or other low thread-count applications then you're best served with a 6 to 8 core CPU and overclocking the heck out of it.
In my experience, by the time overclocking is actually needed/useful, the CPU is already due for replacement. (Maybe there are use cases where that isn't so; I just can't think of any.)
Post edited February 07, 2021 by teceem
More cores usually only help when it comes to two things: multitasking and video editing/encoding.

For example, I have a Ryzen 5 3600 and can encode a normal 720p video of about 30 minutes' length in 15-20 minutes. If I were to upgrade to a Ryzen 5 5600X, that time would drop to somewhere in the neighborhood of 10 minutes, and the savings would scale further on longer videos or higher-quality recordings like 1080p, HEVC, or HDR10.

In gaming, you do not typically need more than 4 cores for any given game. Some games will use all your cores, but if the most demanding game you tested is Andromeda and under load it uses ~50% of your CPU, that is not bad; it shows that you have a good CPU for your workload.

The 2060 is not a top-of-the-line card, and I don't know a ton about Intel chips, but it's possible the 2060 might bottleneck your CPU in certain situations, too. I find that happens with my 5700 XT, which is about on par with an Nvidia 2070.

I don't think you need to upgrade your CPU at this point. If you are going to upgrade anything, upgrade that 2060. But really, unless you're planning on regular video editing for YouTube or on streaming on Twitch several times a week, you're completely fine, and you would be even if you were doing those things.

Don't waste your money on upgrades right now. Scalpers have a big grip on the market, at least here in the States. Wait it out, my friend.
Zimerius: *snipsnapsnop*
Dray2k: What you're describing is partial bottlenecking. If a CPU isn't used at 100%, chances are that your GPU is at 100%, and vice versa... *snip*
This is true; however, most modern games are developed to be "bottlenecked" by GPU power or V-sync. No modern game I know of is built to load the CPU to 100%, and if this happens, the game will probably stutter, freeze or crash.
If a game were set up to run with 100% CPU load most of the time, well, it could give nice numbers, but no smoothness.
Unlike on consoles (well, this is only partially true), the computer's operating system needs quite some CPU time as well.

Also, let me add that RAM speed, latency and sometimes the amount are very important for getting high (and stable) frame rates, depending on the game. In a recent quick test I did, Cities Skylines went from 72 fps using single-channel 2133 MT/s to 76 fps using dual-channel 3000 MT/s (using JEDEC auto speeds), while Grid Autosport went from 175 fps to 235 fps. No other change was made.
Timbroski: More cores usually only help when it comes to two things: multitasking and video editing/encoding.
More cores also help with software development, as you quite often need to run a tool that takes input of some sort and generates output; this is especially true in compiled languages like C++, which are commonly used for games. With more cores, you can run more instances of the tool at a time, and as a result it will take less time for the software to build (provided you have enough RAM to handle all the instances).

This definitely matters if, say, you want to run something like Gentoo Linux. Gentoo is a source-based distribution, which means that any package you install or update will be compiled from source; as a result, it can take a while to update a Gentoo system. Getting a CPU with more cores will reduce the time these updates take, as the CPU can multi-task when building software, sometimes building multiple packages at once or having multiple cores work on a single troublesome package that takes what seems like forever to compile (I'm thinking of chromium and libreoffice here).
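For the Gentoo case specifically, the parallelism knobs live in /etc/portage/make.conf; the numbers below are just an example sized for a 6-core/12-thread CPU, not a recommendation:

    # /etc/portage/make.conf (excerpt)
    MAKEOPTS="-j12"                                    # parallel compile jobs within one package
    EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=12"   # build up to two packages at once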

This also matters if you are writing games. So, while you may not need all those cores to play games, you might want them if you are going to be making your own.

Another specialized situation where multiple cores help: if some task has real-time constraints, it can be assigned its own core and chunk of RAM, allowing it to respond instantly should the need arise, bypassing the scheduler and virtual memory (which can cause unpredictable delays). The Jailhouse hypervisor can be used for this purpose, for example. This can also be useful on microcontrollers, when you don't have an OS (or you have a simple RTOS) and have some task that needs to be done in real time. (In the microcontroller case, sometimes these cores may have different architectures, like having an ESP8266 co-processor handle all the Wi-Fi stuff while the main CPU does what it needs to do, or the Raspberry Pi Pico's PIO state machines.)

(Note that the Raspberry Pi Pico is not like the other Raspberry Pis; it's an entirely different class of device, and you can't just swap out one for another and expect things to have a chance of working. The Pico doesn't even run Linux, for example.)
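On an ordinary Linux box you can get a weaker cousin of that idea, without a hypervisor, by pinning a thread to one core so the scheduler never migrates it. A minimal sketch (Linux-specific; the core number is arbitrary, and unlike Jailhouse this doesn't reserve the core exclusively):

    // Compile with: g++ -pthread pin.cpp
    #define _GNU_SOURCE 1
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>
    #include <thread>

    void realtime_task() {
        // ... poll a device, feed an audio buffer, etc. ...
    }

    int main() {
        std::thread worker(realtime_task);

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);  // core 3, chosen arbitrarily for this example
        int err = pthread_setaffinity_np(worker.native_handle(),
                                         sizeof(set), &set);
        if (err != 0) std::fprintf(stderr, "setaffinity failed: %d\n", err);

        worker.join();
    }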

Dark_art_: No modern game I know of is built to load the CPU to 100%, and if this happens, the game will probably stutter, freeze or crash.
Well, there are some situations where a game will legitimately use 100% of the CPU (or at least 100% of some number of CPU cores):
* Compiling shaders
* Generating a world, in games with procedurally generated worlds that are generated entirely at the start
* AI in turn-based games that have fancy AI and a lot of AI controlled units (I've mentioned Battle for Wesnoth's Survival Extreme mods as a pathological example, but it could happen in more vanilla-like maps, and it's an issue in games like the Civilization games, especially late game on a large map).
Post edited February 07, 2021 by dtgreene
dtgreene: snippity snip
Since the OP was discussing testing cpu and gpu usage while playing games, I assumed the workload was in playing games.
Ryan333: To Dray2k's point: adding more CPU cores will only show a tangible benefit in situations where all existing CPU cores are near full utilization... *snip*
Oke oke, somehow I'm left with this distinct notion that there is more to the whole topic than anyone would ever agree to tell out in the open, a bit like there's a conspiracy going on about how i9s, for example, enhance your gaming life in more ways than one. It is time to get rid of that notion then. Thank you!
Dray2k: What you're describing is partial bottlenecking.
teceem: Indeed. Andromeda isn't all that demanding on my i7-6700, because I play at 1440p... *snip*
It remains weird, but yea, the biggest load in Andromeda is on the CPU. I'm also playing at 1440p, though not in the example of course, which is a window that appears if I alt-tab Andromeda (which I can resolve with alt-enter....), and GPU loads are hovering from the 20s (%) during idle and in some quiet places, to about 45%.

Another silly thing is that with Andromeda I'm now playing on ultra, while at 1080p I actually only found high feasible, with higher loads on the GPU.

I did cap it at 60, I believe... with the new monitor and all (144 Hz), it's quite ridiculous to let it run vsynced.
Post edited February 07, 2021 by Zimerius
Zimerius: Most simple facts seem to indicate that the i9, for example, or its AMD counterpart are best served when you want something really, really fast: downloading at super-high speeds, for example... *snip*
See the underlined text for the first mistake; that one absolutely broke my heart.
*snipsnaptherest*
The trick with the cores is that, well, threading is not so simple. Normally, multi-tasking in computers is a myth. You have several "threads" that each get so much time allocated to them before the OS says "OK, now this thread over here needs a turn." Basically, this is manifested as some kind of "interrupt" that is triggered by a timer. The OS jumps to a section of code that holds a return address pointing to whatever was executing when the timer ticked down to 0. The OS then preserves everything, restores another preserved thread's data into the registers and such, and executes that instead. Some people call this the "round robin" approach.

Now, imagine a program has 2 threads: they share the same memory (since memory is shared across the whole program), so if one thread gets interrupted in favor of the other, what happens if a particular calculation is only halfway done when it's interrupted?

Now, the multiprocessor specification (your cores count as separate CPUs as far as the kernel is concerned) makes this more complicated, because of cache misses (causing wait states while the cache refills from RAM) as well as actual simultaneous execution. Something like multi-threading the AI is a bad idea, obviously, because of the "memory corruption" that results. Now, separate AI and drawing threads actually make sense; however, you'll either need to "lock" with mutexes (making a thread stop and wait), or maybe even take time creating a buffer of all game data (so you can continue doing calculations between frames). There are a lot of tasks which are easy to thread (like compiling code, for example, since each file gets compiled to an object before the objects are slapped together to make the executable), but games are not among those tasks. Does that stop gamers from trying to get a 16-core CPU? No, but odds are they're only going to need 3 or 4 of them at a time (if we're including background processes, like voice chat, an external music/podcast player, etc.). GPUs will thread a bit, too, and often have a lot of cores. In theory, you could increase graphics processing quite a bit by having some CPU cores turn into GPU cores as needed (I remember Intel was working on this idea some time ago, but I don't know whatever came of it).
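To make the halfway-done-calculation problem and the mutex fix concrete, here's a toy of my own (not from any game): two threads bump a shared counter a million times each. Without a lock, the read-modify-write increments interleave and updates get lost; with the mutex, the result is exact, but the threads spend time waiting, which is the trade-off described above.

    #include <iostream>
    #include <mutex>
    #include <thread>

    long long counter = 0;       // shared state, like game data
    std::mutex counter_lock;

    void add_unsafe() {          // data race: ++ is read-modify-write
        for (int i = 0; i < 1000000; ++i) ++counter;
    }

    void add_safe() {            // mutex serializes access: correct but slower
        for (int i = 0; i < 1000000; ++i) {
            std::lock_guard<std::mutex> hold(counter_lock);
            ++counter;
        }
    }

    int main() {
        std::thread a(add_unsafe), b(add_unsafe);
        a.join(); b.join();
        std::cout << "unsafe: " << counter << " (usually < 2000000)\n";

        counter = 0;
        std::thread c(add_safe), d(add_safe);
        c.join(); d.join();
        std::cout << "safe:   " << counter << " (always 2000000)\n";
    }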