Sensors and Systems

… Integrating Sensors into the Ubiquitous Computing Stack

“Smart dust”, tiny leaf sensors, wearable computing — these and a host of other sensors that make measurements and communicate without requiring human intervention can now be readily integrated into dispersed systems to provide ambient intelligence, situational awareness, and the capability for adaptive behaviors or intelligent process automation.

Whether the sensor’s output is used to control the opening and closing of relays or thermostats, or to automatically raise alerts — the integration of sensors into systems is at the heart of the promise of ubiquitous computing. And with the ability to place hundreds of embedded sensors within a given coverage area, each wirelessly streaming information, the possibility of self-organizing sensor networks is increasingly becoming a reality.

This article takes a look at the sensor layer of a basic ubiquitous computing stack.

Integrating Sensors into Systems

In Knowledge Engineering, I broadly outlined where I believe the emerging technologies of the next decade are going and the disciplines that are fueling these developments. Whether this area is called Knowledge Engineering, Ubiquitous Computing, or Ambient Intelligence — the fundamentals are the same: sensors, tiny processors and controllers, situational awareness, wireless communication, and robust networking capabilities.

Here, I’ll take a look at the sensors layer of a basic ubiquitous computing stack (Figure 1).

Figure 1.  The Basic Computing Stack (Sensor to Communications) in Ubiquitous Computing Designs.  The top layer is optional and is typically an application layer geared for end-user interaction.


Our interest is in devices that are capable of making physical measurements (“sensing”), filtering or computing on those measurements in real time (“processing”), and reporting the results (“communications”) in a form that allows a machine, unassisted by a human, to read and potentially act on the data. Communications can be via a visual display, logged to a file, or via packets of information transmitted wirelessly using one of a variety of communication protocols.
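The sense–process–communicate loop can be sketched in a few lines of Python. This is a toy illustration: the sensor, its readings, and the `temp-01` identifier are all invented here, and a real device would read an ADC or a digital bus rather than a random-number generator.

```python
import json
import random

def sense():
    """Simulated physical measurement: a noisy temperature reading (°C).
    A real system would sample an ADC or a digital bus here."""
    return 21.0 + random.uniform(-0.5, 0.5)

def process(samples, window=5):
    """Filter in real time: average the most recent `window` raw samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def communicate(value):
    """Report the result as a machine-readable packet (here, JSON)."""
    return json.dumps({"sensor": "temp-01", "celsius": round(value, 2)})

random.seed(0)          # fixed seed so the sketch is repeatable
samples = []
for _ in range(10):     # ten sense -> process -> communicate cycles
    samples.append(sense())
    packet = communicate(process(samples))
print(packet)           # last packet of the run
```

The point is only the shape of the loop: each cycle makes a measurement, filters it, and emits something a downstream machine can parse without human help.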

Theory vs. Practice: Direct and Surrogate Measurement

In principle, sensors can be designed to measure any physical property of interest. But direct measurement is often difficult, costly, or otherwise infeasible for a variety of technical reasons. So, in practice, a sensor often measures a surrogate quantity whose relationship to the quantity of interest is known and, ideally, varies linearly over a useful range.
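As a sketch of how such a surrogate relationship might be established, the following fits a line to a handful of (surrogate, reference) calibration pairs by least squares. The numbers are invented for illustration (a liquid column height standing in for temperature), not data from any real instrument.

```python
# Least-squares fit of a surrogate quantity (liquid column height, mm)
# against a reference instrument's readings (°C). Data are illustrative.
heights = [10.0, 20.0, 30.0, 40.0, 50.0]   # surrogate readings
temps   = [0.0, 12.5, 25.0, 37.5, 50.0]    # reference temperatures

n = len(heights)
mean_h = sum(heights) / n
mean_t = sum(temps) / n
slope = (sum((h - mean_h) * (t - mean_t) for h, t in zip(heights, temps))
         / sum((h - mean_h) ** 2 for h in heights))
offset = mean_t - slope * mean_h

def surrogate_to_temperature(height_mm):
    """Convert a surrogate reading into the quantity of interest.
    Valid only inside the calibrated range (10-50 mm here)."""
    return slope * height_mm + offset

print(surrogate_to_temperature(24.0))   # → 17.5
```

Outside the calibrated range the fitted line says nothing: the linearity of the surrogate has only been established where reference data exist.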

For example, to measure temperature by directly measuring the heat energy present, i.e. the kinetic energy of molecular motion, is difficult, but measuring the volume expansion of a fluid is comparatively easy. Thus, the discovery by Galileo, at the end of the 1500s, that water volume varied stably with apparent change in temperature led directly to his invention of the water thermometer (sensor). Similarly, the discovery by Gabriel Fahrenheit, almost a century later in the early 1700s, that mercury has a more stable relationship between volume and temperature change, and maintains this relationship across a wider temperature range, led directly to his invention of the mercury thermometer and the Fahrenheit scale.

Time is another example: hard to measure directly, but with steadily improving precision as new surrogates are used for indirect measurement — shadow clocks, sand clocks, water clocks, pendula, mechanical clocks, crystal oscillators, and now atomic clocks. Similarly, devices that measure position and direction typically exploit surrogate quantities in their computation of position, range, distance, or direction: the magnetic compass, the sextant, the astrolabe, the surveyor’s level, laser range finding, the total station (optical-mechanical), radar (RF waves), sonar (pressure waves), and satellite navigation (GPS). Most electric measurements are also made indirectly, exploiting changes induced in an associated magnetic field to exert a force that leads, for example, to a mechanical deflection that can then be read off a scale or sensed by a strain gauge or displacement meter.

Thus, it is the investigation, discovery and calibration of surrogate measurements that makes sensor design and development a blend of applied science and creative engineering.

Active Elements, Scales and Calibration

Sensors can be usefully organized into two groups according to their primary function:

  1. sensors that indicate the presence or absence of the phenomenon of interest, e.g. detectors and alarms, and
  2. sensors that can quantify variation in a measured quantity, e.g. gauges or meters.

Four components are typically at the heart of a sensor’s design: an indicator element, a transducing element, a calibration standard, and a measurement scale.

An indicator element is something that reacts in a known manner to the phenomenon of interest. Understanding the nature and behavior of the indicator element typically emerges from pure science, be it chemistry (for material indicators), physics (for electromagnetic or optical indicators), or mechanics (for mechanical deflections). Integrating the indicator element into a sensor design requires familiarity with both the science underlying the sensor’s mechanism and the engineering technologies that allow the sensor to function appropriately.

A transducer element is something that transforms energy of one kind into another. Some transducers are one-way, others are two-way. For example, piezoelectric ceramic elements are well-known two-way transducers that transform pressure into voltage and vice versa. One can view the human eye as a one-way electro-optical transducer, transforming radiated energy (the energy from photons, or light) into electrical impulses traveling along the nervous system into the brain. Similarly, the human skin can be viewed as a one-way electro-mechanical transducer: transforming the pressure from contact into electrical impulses.

But without a scale and calibration standard, you have at most a correspondence indicator and possibly a coarse detector or alarm. For example, a windmill’s motion corresponds to wind speed, but until the windmill’s motion is calibrated and a scale presented, one cannot determine wind speed with any more precision than a fuzzy relative sense of “slower” or “faster”. This level of imprecision might be acceptable if the goal is simply to detect the presence (or absence) of wind, or to trigger an alarm when wind speed passes the threshold needed to begin turning the blades. But more would be needed if the desire is the ability to compare wind speeds at different times, or to quantitatively characterize variations in wind speed over time.

Having a sensor that can be used for reliable (precise and accurate) quantification will typically require active elements (indicator or transducer), knowledge of how the chosen surrogate quantity behaves across the desired variation range of the phenomenon of interest, and, most importantly, a scale and a standardized method of calibration.
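A standardized calibration can be as simple as a two-point procedure against known references. The sketch below derives a linear scale from two reference points; the raw counts and the thermometer scenario are hypothetical, chosen only to make the arithmetic concrete.

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Derive a linear scale from two known reference points
    (e.g. ice point and boiling point for a thermometer)."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)

    def to_units(raw):
        # Map a raw reading onto the calibrated scale.
        return ref_lo + gain * (raw - raw_lo)

    return to_units

# Hypothetical uncalibrated sensor: 512 counts at 0 °C, 896 counts at 100 °C.
to_celsius = two_point_calibration(512, 896, 0.0, 100.0)
print(to_celsius(704))   # → 50.0
```

Two points suffice only because the surrogate is assumed linear over the range; a sensor with known nonlinearity would need more calibration points and a different fit.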

A Cornucopia of Commercial Off-the-Shelf (COTS) Sensors

There are an enormous variety of sensor types and costs: mechanical sensors, electro-magnetic, optical, chemical, biological, environmental, proximity, position, kinematic, acoustic/pressure, and various combinations of these.

Perhaps the most common categories of sensors are as follows:

  • sound, vibration, pressure sensors: these convert forced displacement (force, pressure) into voltage. E.g. geophone for measuring earth tremors, hydrophone for measuring pressure (sound) in the water, microphone for measuring pressure (sound) in air.
  • kinematic sensors: these convert the sensor’s motion characteristics into voltage. E.g. accelerometer, speedometer (pit log, airspeed indicator)
  • position and proximity sensors: these measure position, deviation from position, or closeness. E.g. crank sensor, GPS, optical or ranging sensors, parking sensors, ultrasonic sensors, Hall effect sensor, variable reluctance sensor, water level sensor
  • motion detecting sensors: these measure the presence or characteristics of movement in the field of view of the sensor. E.g. burglar alarms, radar guns (that indicate the velocity of the motion), tachometer (detecting rotation speed)
  • chemical sensors: these measure chemical composition. E.g. oxygen sensor, water sensor, breathalyzers, CO and CO₂ detectors, catalytic bead sensor (measuring gas levels), various chemical transistors, gas sensors, electronic nose, H₂ sensors, optrodes (for optically detecting the presence of various chemical substances), ion/acid sensor, smoke detector
  • optical and radiation sensors: these use a variety of radiation techniques. E.g. radar, LIDAR, infra-red sensors, laser range finders, pyranometer for detecting solar radiation levels.
  • electro-magnetic sensors: these exploit the relationship between electricity and magnetism, typically using conductor coils and magnets. E.g. current sensors, Hall effect sensors, charge detect sensors, magnetic anomaly detectors, metal detectors, radio direction finder, Wheatstone bridge.
  • environmental sensors: these use a variety of techniques to measure environmental conditions. E.g. rain gauge, snow gauge, soil moisture probe, rain sensor, dewcheck (condensation/relative humidity), temperature sensor (mercury, conductivity, etc.), tide gauge.

Evaluating and Comparing Competing Sensors

For just about anything that one wishes to measure, there will be a variety of off-the-shelf sensor technologies providing that capability, each employing different methods for measuring the quantity of interest, and each exploiting different physical relationships to make and condition these measurements.

The obvious considerations when choosing a sensor are, of course, cost, size, weight, and performance range. But there are a host of deeper issues that should be considered carefully, since the engineering time and cost of integrating a sensor into a system is not small. Selecting between two otherwise similar sensors often boils down to evaluating the differences between them in calibration, accuracy, precision, reliability, vulnerability to disruptions, and communication protocol.

What follows, then, is a selection of important issues not to overlook in your sensor selection:

Issues related to all sensors

  1. output type (analog or digital): sensors with analog outputs are typically less costly, but analog signals may require conditioning or filtering, cannot travel as far without loss, are more susceptible to noise, and must be converted to digital before the data can be recorded or transmitted; digital outputs typically come at a higher cost, but are easier to integrate since the engineering involved in getting the signal path from raw analog output to conditioned digital output has been done for you.
  2. calibration: is the sensor calibrated (providing an absolute measurement) or uncalibrated (providing a relative measurement)?
  3. type of calibration: manual or automatic?
  4. sensitivity: how fine is the ability to discriminate between similar strength inputs?
  5. accuracy: how closely does the measurement of a known quantity match its known value?
  6. precision: how repeatable are its measurements of the same quantity?
  7. linearity range: what is the variation range over which the sensor performs linearly?
  8. sources of bias: these can be materials, known interference sources, internal noise in the instrument (self-noise), among others.
  9. smoothing / volatility: the presence and type of averaging algorithms that are used to reduce measurement volatility and the susceptibility to noise.
  10. performance degradation: this can be due to aging, shock/vibration, and environmental change (temperature, humidity).
  11. drift rate: the rate at which a sensor’s calibration or performance changes over time.
  12. longevity: the expected life of the sensor.
  13. reliability: the extent to which the sensor continues to perform within its stated characteristics throughout its expected life.
  14. robustness: the continued and proper functioning of the sensor in the face of environmental or performance stresses.
  15. vulnerability to disruption: causes of failure, malfunction, or worse — the unannounced introduction of bias.
  16. mean time between failures (MTBF): a statistically determined estimate of the expected operating time between failures.
  17. communication protocol: the means by which the sensor communicates or logs its data.
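As one illustration of the smoothing issue (item 9 above), an exponential moving average is a common way to damp measurement volatility. This is a generic sketch with invented readings, not tied to any particular sensor or vendor algorithm.

```python
def ema(samples, alpha=0.2):
    """Exponential moving average: each output leans `alpha` toward the
    newest sample, damping measurement volatility and noise. Larger
    alpha tracks the signal faster; smaller alpha smooths harder."""
    smoothed = []
    current = samples[0]
    for s in samples:
        current = alpha * s + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

noisy = [20.0, 25.0, 15.0, 22.0, 18.0, 21.0]
print(ema(noisy))   # spikes are pulled in toward the running level
```

The trade-off is the classic smoothing vs. responsiveness one: heavier smoothing suppresses noise but also delays the sensor’s response to a genuine step change.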

The Promise of Sensored Systems

Sensors that can make measurements and communicate without requiring human intervention can be readily integrated into systems, providing ambient intelligence, situational awareness, and the capability for adaptive behaviors or intelligent process automation.

So a mercury thermometer with only a visual scale etched on the glass housing is an example of a sensor that would require a second sensor in order to take a reading automatically, whereas a digital thermometer that transmits digital information or logs the measured data is one that we can use as part of an integrated automated system.

Whether the sensor’s output is used to control the opening and closing of relays or thermostats, or to automatically raise alerts — the integration of sensors into systems is at the heart of the promise of ubiquitous computing.

Indeed, continually streaming information from deployed sensors is creating a knowledge engineering industry built around providing real-time analysis, interpretation, and automated decision-making capabilities to applications as varied as environmental monitoring, marine renewables, high-tech agriculture and aquaculture, industrial automation, high frequency trading (financial systems), oil & gas infrastructure, and homeland defense.

Miniaturized Sensors

The constituent elements of sensors (power, integrated circuits, antennae, etc.) can now be made so small that widely dispersed deployment is rapidly becoming a reality. There is smart dust, used by the military; leaf sensors used in agriculture; wearable computing; and sensors in everything from car steering wheels, elevators, and cameras to washing machines, indoor lights, and even shoes.

In each of the various applications where sensored systems have become a key part of the landscape, there is a similar pattern: miniaturized sensors provide real-time signals; these are digitized and fed into a computer whose software produces reports; the reports are analyzed as time-series that guide decisions and influence outcomes; and this, in turn, optimizes scarce resources, saving money, increasing productivity and effectiveness, and, at least in theory, boosting profits.
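The report-to-decision step of such a pattern can be as simple as flagging readings that stray from the running mean, as an automated monitor might. A toy sketch, with invented readings and an arbitrary tolerance:

```python
def decisions(series, tolerance=3.0):
    """Flag (index, value) pairs that deviate from the running mean
    by more than `tolerance` -- a minimal automated-decision step."""
    alerts = []
    total = 0.0
    for i, value in enumerate(series):
        total += value
        mean = total / (i + 1)          # running mean including this sample
        if abs(value - mean) > tolerance:
            alerts.append((i, value))
    return alerts

readings = [20.1, 20.3, 19.9, 27.5, 20.2]
print(decisions(readings))   # → [(3, 27.5)]  (the spike is flagged)
```

Real deployments replace the running mean with properly characterized baselines and the print with an actuation or alerting channel, but the shape of the step is the same.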

This pattern is touching every industry, from automated check-out stations in supermarkets, to RFID applications in warehousing (think Amazon), to large scale sensored ocean observing systems, missile guidance systems, marine renewables, aquaculture, smart cars, and everything imaginable in between.

Successful Commercialization and the Challenging Economics of Sensor Integration and Deployment

But sensors are not free. In most cases they are highly engineered devices, and the integrated systems and software of which they are a part represent further engineering development — all of which comes at a cost that can be a significant deterrent to their uptake.

Thus, although prototype systems technology and COTS chip and sensor sub-components are increasingly available, and although the upsides to adoption are often impressive, there are three crucial challenges that must be overcome for a sensored product, system or service to enjoy commercial success:

  1. having a compelling economic model rooted in an accurate cost-benefit analysis for the intended customer,
  2. having a practical deployment strategy that reduces risk to early adopters, and
  3. being able to demonstrate that the sensored system can deliver the hoped-for results reliably, repeatedly, and throughout its expected life.

These three challenges should therefore be kept closely in mind throughout the system design, sensor selection, and sensor integration phases, and should guide the strategy for developing new sensor applications. For those that can deliver the goods (Apple’s iPhone and iRobot’s Roomba, for example), successful commercialization and widespread adoption are a sweet reward!
