Optimizing ForceLowPowerMode

Jerome Glisse glisse at freedesktop.org
Fri Aug 28 02:45:42 PDT 2009


On Thu, 2009-08-27 at 10:57 +0200, Markus Stockhausen wrote:
> Sorry for the wrong initial post ...
> 
> I hope this is the right address to place my thoughts. In the last few
> days I experimented with the radeon driver setting ForceLowPowerMode to
> reduce the heat generated by my Mobility Radeon X1600. If I got it
> right, the switch forces the following settings:
> 
> - reduce the GPU core clock to 1/2
> - reduce PCIe lanes to 4
> - leave the memory clock as it is (although changing it is supported by
>   an AtomBIOS function)
> 
> This does not help very much, as the fan in my laptop still runs at
> high speed very often. So I decided to experiment with the settings.
> Going down to 1/3 core clock, 1/3 memory clock and 2 PCIe lanes makes
> my machine totally silent (and it works quite well).
> 
> And now to my question. Would it be possible to change the behaviour of
> the ruleset as follows?
> 
> - Rename the parameter to ForceStaticPowerMode
> - Add Parameter ForceCoreClockSpeed 
> - Add Parameter ForceMemoryClockSpeed 
> - Add Parameter ForceActivePCIeLanes
> 
> In this way users could change the settings themselves. Of course the
> driver should enforce lower and upper bounds on the accepted ranges;
> a sketch of how the options might look follows below.
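> 
> For illustration, the new options might look like this in xorg.conf
> (the values are purely made up for the example):
> 
>     Section "Device"
>         Identifier "Radeon"
>         Driver     "radeon"
>         Option     "ForceStaticPowerMode"  "on"
>         Option     "ForceCoreClockSpeed"   "157"  # MHz, illustrative
>         Option     "ForceMemoryClockSpeed" "135"  # MHz, illustrative
>         Option     "ForceActivePCIeLanes"  "2"
>     EndSection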
> 
> If desired I could implement the changes myself and submit them so
> that they could be merged into Git. But as I'm no expert in the radeon
> specs, and this is my first post to this group, I just want to ask
> politely how something like this could be implemented. Of course an
> expert should explain some of the technical details so that the code
> does not break anything.
> 
> Thanks in advance.
> 
> Markus

Patches welcome to add such options. I guess we are all delaying power
saving code to KMS, as we would like to be able to change these
parameters while the GPU is running, but that is only doable with KMS
because you need to block GPU access while changing them. It is safe to
do this at DDX startup in the non-KMS world, though, as the GPU should
be idle at that point.
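
To make that concrete, here is a minimal sketch of startup-time option
parsing as it could look in the DDX. The option tokens, defaults and
bounds are all made up for illustration; only xf86GetOptValInteger() is
the real server API:

    #include "xf86.h"
    #include "xf86Opt.h"

    /* Hypothetical option tokens; the real radeon driver would add
     * these to its existing OptionInfoRec table. */
    typedef enum {
        OPTION_FORCE_CORE_CLOCK,
        OPTION_FORCE_MEM_CLOCK,
        OPTION_FORCE_PCIE_LANES
    } RADEONForcedOpts;

    static int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Runs once at DDX startup, while the GPU is still idle, so no
     * synchronization against a running GPU is needed (non-KMS case). */
    static void RADEONParseForcedPower(OptionInfoPtr opts,
                                       int def_sclk, int def_mclk,
                                       int *sclk, int *mclk, int *lanes)
    {
        *sclk  = def_sclk;
        *mclk  = def_mclk;
        *lanes = 16;

        if (xf86GetOptValInteger(opts, OPTION_FORCE_CORE_CLOCK, sclk))
            *sclk = clamp(*sclk, def_sclk / 4, def_sclk);  /* assumed bounds */
        if (xf86GetOptValInteger(opts, OPTION_FORCE_MEM_CLOCK, mclk))
            *mclk = clamp(*mclk, def_mclk / 4, def_mclk);  /* assumed bounds */
        if (xf86GetOptValInteger(opts, OPTION_FORCE_PCIE_LANES, lanes))
            *lanes = clamp(*lanes, 1, 16);
    }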

Ranges of valid values are trickier. For VRAM I think it mostly depends
on the chips used by the manufacturer, and the GPU frequency then needs
to follow some equation that takes the VRAM frequency into account. So
far it seems that dividing the GPU and VRAM clocks by the same integer
is a safe choice. Note that AtomBIOS should provide various valid &
tested GPU/VRAM clock pairs, but it is mostly up to the manufacturer,
and many of them are lazy and don't bother testing many different
values.
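
As a sketch of that same-divisor heuristic (the function and parameter
names are made up; the default clocks would come from AtomBIOS):

    /* Scale engine and memory clock by a common integer divisor so
     * their ratio stays at what the manufacturer validated. */
    static void radeon_scale_clocks(int default_sclk, int default_mclk,
                                    int divisor,
                                    int *forced_sclk, int *forced_mclk)
    {
        *forced_sclk = default_sclk / divisor;
        *forced_mclk = default_mclk / divisor;
    }

With divisor = 3 this gives exactly the 1/3 core, 1/3 memory setting you
found to be silent.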

Cheers,
Jerome


