Ocean Sensor Manifesto
Problem:
- When we calibrate a sensor, we delete its history; it is born again with no knowledge of its own strengths and weaknesses.
- A typical ocean science sensor has no idea where it is in the world. Its location is treated as unimportant.
- A sensor might not know, or be able to detect and respond to, whether it is in water or on land.
- A sensor collects data alone. Whatever difficulties it encounters, it must deal with them alone.
- A sensor has no idea about the internet. To get data off of it, a human has to install custom software on a proprietary operating system and plug the sensor into a computer via a custom cable, then repeat the process for every sensor.
- When a human deploys a sensor, they are responsible for choosing all of its settings, which often depend on location and depth. That means every user of the sensor must be an expert.
- When plugged in, a sensor is likely to have a custom communication protocol and custom mechanical interfacing.
Solution:
Ocean science sensors should have enough memory to log an entire lifetime of data. They should know when they are being calibrated and should measure their own offsets, building up both long- and short-term metrics of their performance. A sensor should know and keep track of what it is good at and what it is bad at, to optimize its refurbishment and performance.
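As a sketch of what that could look like, here is a minimal lifetime calibration log in Python; the record fields and the five-calibration window are illustrative assumptions, not a spec:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CalibrationRecord:
    timestamp: float   # Unix time of the calibration
    reference: float   # known value from the calibration standard
    measured: float    # what the sensor actually reported

    @property
    def offset(self) -> float:
        return self.measured - self.reference

class CalibrationHistory:
    """Lifetime log of calibrations; nothing is ever deleted."""
    def __init__(self):
        self.records: list[CalibrationRecord] = []

    def add(self, record: CalibrationRecord) -> None:
        self.records.append(record)

    def lifetime_offset(self) -> float:
        # Long-term metric: mean offset across the sensor's whole life.
        return mean(r.offset for r in self.records)

    def recent_offset(self, n: int = 5) -> float:
        # Short-term metric: mean offset over the last n calibrations.
        return mean(r.offset for r in self.records[-n:])
```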
Every sensor should have a default suite of additional sensors and telemetry built into itself: GPS, power usage, motion, humidity, Wi-Fi, pressure (external and internal), and temperature. These should be ‘always on’, even in low-power mode while in transit. A sensor should know how many times it has been dropped or has experienced sudden movements while deployed.
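A drop counter could be as simple as watching the always-on accelerometer for spikes. A minimal sketch; the 3 g threshold and the sample format are assumptions for illustration:

```python
import math

GRAVITY = 9.81        # m/s^2
DROP_THRESHOLD = 3.0  # assumed: accelerations above 3 g count as a shock

def count_shocks(samples: list[tuple[float, float, float]]) -> int:
    """Count impact events in a stream of (x, y, z) accelerometer
    readings, in m/s^2, from the always-on motion sensor."""
    shocks = 0
    in_shock = False
    for x, y, z in samples:
        g_force = math.sqrt(x*x + y*y + z*z) / GRAVITY
        if g_force > DROP_THRESHOLD and not in_shock:
            shocks += 1  # rising edge: a new drop/impact event
            in_shock = True
        elif g_force <= DROP_THRESHOLD:
            in_shock = False
    return shocks
```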
When a sensor’s Wi-Fi can see the internet, it should upload all of its data to a central server and download the latest ‘usage database’ for its model. This ‘usage database’ is a compilation of the metrics from every sensor of the same type. The point is to give the sensor a chance to compare its performance metrics against its entire family: if its pump is in the bottom 20% of performance, we know it should be replaced.
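The fleet comparison itself is just a percentile rank against the downloaded database. A minimal sketch, with made-up pump flow numbers standing in for the usage database:

```python
from bisect import bisect_left

def fleet_percentile(my_value: float, fleet_values: list[float]) -> float:
    """Return this sensor's percentile rank for one metric within the fleet."""
    ranked = sorted(fleet_values)
    return 100.0 * bisect_left(ranked, my_value) / len(ranked)

# Example: compare our pump flow rate against the downloaded usage database.
fleet_pump_flow = [4.1, 4.4, 4.6, 4.8, 5.0, 5.1, 5.3, 5.5, 5.6, 5.9]  # mL/s
ours = 4.3
if fleet_percentile(ours, fleet_pump_flow) < 20.0:
    print("Pump is in the bottom 20% of its family: schedule replacement.")
```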
Once a sensor knows where it is in the world, it can reference an internal (and external) database of the settings other sensors of the same model were given. Discussions can be had and settings can be agreed upon for each goal and each geographical location, so deployment is as simple as putting the sensor in the water: it will use its GPS and pressure sensor to determine the optimal settings (which can be overridden for customized tests).
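A rough sketch of such a settings lookup, assuming a hypothetical community database keyed by region name and depth band (mapping GPS coordinates to a region name is left out for brevity):

```python
# Region names, depth bands, and settings fields are all illustrative.
SETTINGS_DB = {
    ("monterey_bay", "0-50m"):   {"sample_rate_hz": 1.0, "pump_speed": "high"},
    ("monterey_bay", "50-200m"): {"sample_rate_hz": 0.5, "pump_speed": "low"},
}

def depth_band(pressure_dbar: float) -> str:
    # Roughly 1 dbar of pressure per metre of seawater depth.
    return "0-50m" if pressure_dbar < 50 else "50-200m"

def lookup_settings(region: str, pressure_dbar: float,
                    override: dict | None = None) -> dict:
    """Pick the community-agreed settings for this location and depth,
    allowing a user override for customized tests."""
    settings = dict(SETTINGS_DB[(region, depth_band(pressure_dbar))])
    if override:
        settings.update(override)
    return settings
```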
A sensor should sit on a network and be able to communicate with all the other sensors on that network. If it finds a similar sensor that is older, it can compare itself to that older sensor. If it finds a sensor with a better clock score, it can use that sensor’s clock. A network of sensors can corroborate each sensor’s data against the others’.
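Picking the best time source on the network could look like the sketch below; the ‘clock score’ field is assumed to be one of the metrics carried in the shared usage database:

```python
from dataclasses import dataclass

@dataclass
class Peer:
    sensor_id: str
    clock_score: float  # assumed fleet metric: higher = more stable clock
    unix_time: float    # that peer's current notion of the time

def choose_time_source(me: Peer, peers: list[Peer]) -> Peer:
    """Adopt the time of whichever sensor on the network has the best
    clock score, falling back to our own clock if no peer beats it."""
    best = max(peers, default=me, key=lambda p: p.clock_score)
    return best if best.clock_score > me.clock_score else me
```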
There should be an “Ocean Science Sensor Operating System” that handles all of this for people building new sensors. It could run on the processor running the Bristlemouth stack. The open-source ArduPilot software could be a good starting point: it has serial logging and storage, expansion over CAN (and thus Bristlemouth), Lua scripting, location- and motion-based logging, and can act as a ROS node. It has a protocol for communicating with higher-level systems called MAVLink, which could be adapted to the Bristlemouth standard. Just as ArduPilot has customized versions for planes, rovers, and boats, we could make a customized version for sensors.
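To make the idea concrete, here is a hypothetical MAVLink-style heartbeat message such a sensor OS might broadcast; the message ID and field layout are invented for illustration and are not part of MAVLink or Bristlemouth:

```python
import struct
import time

# Hypothetical heartbeat frame: msg id, Unix time, battery voltage,
# internal humidity, status flags. Little-endian, packed.
SENSOR_HEARTBEAT_ID = 0x42
HEARTBEAT_FMT = "<BIffB"

def pack_heartbeat(battery_v: float, humidity_pct: float, flags: int) -> bytes:
    return struct.pack(HEARTBEAT_FMT, SENSOR_HEARTBEAT_ID,
                       int(time.time()), battery_v, humidity_pct, flags)

def unpack_heartbeat(frame: bytes) -> dict:
    msg_id, t, battery_v, humidity, flags = struct.unpack(HEARTBEAT_FMT, frame)
    assert msg_id == SENSOR_HEARTBEAT_ID
    return {"time": t, "battery_v": battery_v,
            "humidity_pct": humidity, "flags": flags}
```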