We recently had a gearbox where the iron content and WPC reached over 1000 ppm and 486 ug/ml respectively. What bothered me is that the machine condition was rated marginal in September '03 at 486 ppm of iron and a WPC of 156 ug/ml, and critical in December at 1080 ppm of iron and 479 ug/ml. Yet one month later, in January, the machine condition was rated normal at 1059 ppm of iron and 211 ug/ml. How can the iron content and WPC for a normal sample exceed those of a marginal sample?
I am assuming this is due to statistical alarm limits based on the standard deviation of the last so many samples. Is this how most oil analysis companies set their alarm limits or condition ratings? Do you feel this is the best way to set alarm levels, or should the statistical alarms be built only from samples taken at what one considers normal running conditions to establish the baseline? Should the analysis company consult the end user before adjusting alarm limits, to get input on the machine condition from other technologies and resources?
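To illustrate what I suspect is happening, here is a rough sketch (Python, with made-up earlier readings and a guessed mean-plus-two-standard-deviations rule; I have no idea what window size or multiplier the lab actually uses) of how one abnormal sample can inflate a rolling statistical limit so the next high reading comes back "normal":

```python
# Sketch of my assumption: the alarm limit is recalculated as the mean plus
# two standard deviations of the last few samples, so an abnormal spike in
# the trend widens the limit for the samples that follow it.
from statistics import mean, stdev

def alarm_limit(history, multiplier=2.0):
    """Return mean + multiplier * standard deviation of the recent samples."""
    return mean(history) + multiplier * stdev(history)

# Hypothetical earlier iron readings (ppm) plus the December spike.
iron_history = [350, 410, 486, 1080]

print(f"Limit before the Dec spike: {alarm_limit(iron_history[:-1]):.0f} ppm")
print(f"Limit after the Dec spike:  {alarm_limit(iron_history):.0f} ppm")
# The January reading of 1059 ppm falls under the post-spike limit, so it
# could be flagged "normal" even though it is more than double the 486 ppm
# that was flagged marginal back in September.
```

If the limits are recalculated that way, the abnormal December sample itself becomes part of the baseline, which is exactly why I wonder whether the statistics should only be built from samples taken at known-normal running conditions.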
We knew this particular unit had a condition and were waiting for a scheduled outage to address it. However, in the hands of the untrained, the lube analysis reports could have led operations and/or maintenance to assume the gearbox was now OK.
Thanks in advance for any input.