Tuesday, November 17, 2009

At long last... labeled data!

By incredibly popular demand, we are providing a labeled data set. These data are very similar to those used in the competition, although we are reserving the actual competition data for future use. This new data set may be used to develop and evaluate novel algorithms.

The data set is 480 MB, and can be downloaded from the public data set archive of the PHM Society:
https://www.phmsociety.org/references/datasets
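For convenience, here is a minimal download sketch in Python using only the standard library. Only the archive page URL above comes from this post; the file name below is a placeholder, so check the archive page for the actual download link before running.

    # Minimal sketch: fetch the labeled data set with the standard library.
    # The archive page URL is from the post above; the file name
    # "phm09_labeled_data.zip" is a placeholder, not the real link.
    import urllib.request

    ARCHIVE_PAGE = "https://www.phmsociety.org/references/datasets"
    FILE_URL = ARCHIVE_PAGE + "/phm09_labeled_data.zip"  # hypothetical file name
    LOCAL_PATH = "phm09_labeled_data.zip"

    def download(url: str, dest: str) -> None:
        """Stream the remote file to disk in 1 MB chunks (the archive is ~480 MB)."""
        with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
            while True:
                chunk = response.read(1024 * 1024)
                if not chunk:
                    break
                out.write(chunk)

    if __name__ == "__main__":
        download(FILE_URL, LOCAL_PATH)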

Thursday, October 8, 2009

And the winners are...

The winners of the 2009 PHM Society Data Challenge are posted here. Congratulations to both teams! And thanks to everyone who participated...

Sunday, July 12, 2009

FAQ #12: Scoring

Q: What score will be used to rank competitors? Best ever or last submitted?

A: Good question! There are arguments to be made either way (and in fact we have made them!). You will be ranked on the basis of your best score ever.

Thursday, July 9, 2009

FAQ #11: Closing Time

Q: When are the final submissions due? Will the closing date be extended?

A: Entries submitted after 13 July 2009 23:59 Eastern Daylight Time (14 July 2009 03:59 Greenwich Mean Time) are not eligible for the competition. This closing date is firm.
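If you want to double-check the deadline in your own time zone, a short Python sketch (using the standard zoneinfo module, Python 3.9+) confirms that 23:59 Eastern Daylight Time on 13 July is 03:59 UTC on 14 July:

    # Verify the stated conversion: EDT is UTC-4 in July.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    deadline_edt = datetime(2009, 7, 13, 23, 59, tzinfo=ZoneInfo("America/New_York"))
    deadline_utc = deadline_edt.astimezone(ZoneInfo("UTC"))
    print(deadline_utc)  # 2009-07-14 03:59:00+00:00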

Thursday, June 25, 2009

FAQ #10: Bad Key Fault

Q: Is the bad key fault equivalent to a "no load" condition?

A: No, there was partial loading. The shaft and the brake turned at different rates due to slippage between the output shaft and the brake.

Monday, May 25, 2009

FAQ #9: Invited Papers

Q: Your web site states that the "top scoring teams will be invited to give presentations at the special session, and submit papers to IJPHM". Does this mean that only the first two (one from each category) will be invited?

A: No. We expect that several competitors from each category will be invited to present, depending on how they perform.

FAQ #8: Releasing the Answers

Q: Are you going to post the solutions of the Data Challenge, e.g., after the competition?

A: Probably not all of them.

As with last year, we aren't releasing the full data set. Instead, we are holding on to it for use as a "blind standard" for comparing algorithms.