Without understanding the higher-level science goal of a deployment, a networked sensing system appears to be nothing more than an ad hoc wireless network forwarding packets to a common sink. From this limited viewpoint all failures seem equally detrimental to the science application. However, all faults are not equal. Because faults are common in sensing systems, the ability to prioritize them by their impact on the science application is important. Here, we present work on Vigilance, a system that incorporates a scientific model of the sensed phenomenon to enable a system administrator to quantify the impact of failures on the science application.
Faults impact scientific applications because they produce unusable or missing data. Vigilance incorporates statistical techniques to fill in (i.e., impute) faulty and missing data and to quantify the resulting uncertainty in the model output. Since the imputation model is trained from historical data, it has a limited lifetime. Vigilance therefore also predicts when the imputation model will expire, and runs online to determine when the model actually does expire. Using real soil data collected from an ongoing deployment at James Reserve, we present preliminary results on sensor data imputation, as well as the resulting impact on scientific model certainty.
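The abstract does not specify Vigilance's imputation model. As a minimal illustrative sketch (not the system's actual method), a faulty sensor's missing reading could be imputed by regressing its historical values on a correlated co-located sensor, with the residual standard error serving as the uncertainty estimate that propagates into the science model. All sensor names and numbers below are hypothetical.

```python
import math

def fit_linear(xs, ys):
    """Least-squares fit ys ~ a + b*xs from historical co-located readings.

    Returns the intercept, slope, and residual standard error; the last
    quantifies how uncertain an imputed value will be.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return a, b, se

def impute(a, b, se, neighbor_reading):
    """Impute a missing reading from a correlated neighbor.

    Returns the point estimate and an approximate 95% half-width, a
    simple stand-in for the model-output uncertainty Vigilance tracks.
    """
    est = a + b * neighbor_reading
    return est, 1.96 * se

# Hypothetical historical soil-moisture readings from two co-located sensors.
hist_neighbor = [10.0, 12.0, 14.0, 16.0, 18.0]
hist_target = [20.1, 24.0, 27.9, 32.2, 35.8]

a, b, se = fit_linear(hist_neighbor, hist_target)
est, halfwidth = impute(a, b, se, 15.0)  # neighbor reads 15.0; target is faulty
```

Because the fit is trained on historical data, it decays as conditions drift; monitoring when the residual error of recent readings exceeds the training-time error is one simple way to detect that the imputation model has expired.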