But it's not perfect, it has some issues, and I'm really scratching my head over how to fix it.
Here is an example as it's currently written... this doesn't work properly, by the way...
Code: Select all
long FadeToBlack(uint32_t ledArray, int fadeRate)
{
    // An attempt to centralise the fade effect used in various areas of the program.
    long newcol = ledArray;
    uint8_t LEDRdecrease = (newcol >> 16);  // red channel
    uint8_t LEDGdecrease = (newcol >> 8);   // green channel
    uint8_t LEDBdecrease = newcol;          // blue channel
    // Knock a 1/fadeRate fraction off each channel, intending to clamp at zero.
    LEDRdecrease = (LEDRdecrease < (LEDRdecrease - (LEDRdecrease / fadeRate))) ? 0 : LEDRdecrease - (LEDRdecrease / fadeRate);
    LEDGdecrease = (LEDGdecrease < (LEDGdecrease - (LEDGdecrease / fadeRate))) ? 0 : LEDGdecrease - (LEDGdecrease / fadeRate);
    LEDBdecrease = (LEDBdecrease < (LEDBdecrease - (LEDBdecrease / fadeRate))) ? 0 : LEDBdecrease - (LEDBdecrease / fadeRate);
    // Repack the three channels into one 24-bit colour value.
    return 65536L * LEDRdecrease + 256L * LEDGdecrease + LEDBdecrease;
}
So I have three byte variables ranging from 0-255, each representing a colour: R, G, and B. I need them to reduce to zero on a linear curve with each loop iteration, so they keep the correct colour hue as the whole thing fades.
If I simply decrease each number by a fixed amount, the dominant colour quickly shows through as the hue alters; subtracting 10 per step from (200, 100, 50), for example, kills the blue entirely while the red is still at 150.
But if I use a divided decrease, the hue stays the same, yet the numbers never reach zero, because integer division rounds the decrement down to nothing once a channel drops below fadeRate.
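To show what I mean, here's that divided decrease pulled out into a quick standalone test (plain desktop C just for the demonstration; fadeRate = 8 is an arbitrary value I picked for illustration):
Code: Select all
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t channel = 20;   // a small starting value, to show the stall quickly
    int fadeRate = 8;       // arbitrary illustration value
    for (int i = 0; i < 12; i++) {
        channel = channel - channel / fadeRate;  // same maths as in FadeToBlack
        printf("%d ", channel);
    }
    // Prints: 18 16 14 13 12 11 10 9 8 7 7 7 ... and 7 forever after,
    // because 7 / 8 == 0 in integer maths, so the decrement vanishes.
    return 0;
}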
I can't hold a global variable, because this function is called from multiple other functions...
So to simplify: I want a fixed, straight, linear curve down to zero, so the numbers reach zero at the same time and keep the same colour hue as they fade with each loop call...
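To illustrate, something along these lines is what I'm picturing, with the brightest channel walked straight down and the other two scaled to match (a rough, untested sketch; fadeRate here is assumed to mean the number of counts the brightest channel loses per call):
Code: Select all
long FadeToBlackLinear(uint32_t ledArray, int fadeRate)
{
    uint8_t r = ledArray >> 16;
    uint8_t g = ledArray >> 8;
    uint8_t b = ledArray;

    // The brightest channel sets the pace for the whole colour.
    uint8_t peak = r;
    if (g > peak) peak = g;
    if (b > peak) peak = b;
    if (peak == 0) return 0;  // already black

    // Step the peak down by a fixed amount per call: a straight line to zero.
    uint8_t newPeak = (peak > fadeRate) ? peak - fadeRate : 0;

    // Rescale every channel by newPeak/peak so the R:G:B ratios (the hue)
    // stay constant, give or take rounding, and all three hit zero together.
    r = ((uint16_t)r * newPeak) / peak;
    g = ((uint16_t)g * newPeak) / peak;
    b = ((uint16_t)b * newPeak) / peak;

    return 65536L * r + 256L * g + b;
}
Because the peak channel rescales to exactly newPeak on every call, the fade stays linear with no stored state; the only imperfection should be integer rounding on the two smaller channels. At least, that's the theory...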