New IoT perspective


As the Internet of Things (IoT) consolidated its position as one of the major themes of 2015, more system architects began to examine its basic concepts carefully.

What they found, these experts noted, was a deceptively simple picture of IoT structure: clouds of sensors and actuators connect to simple, low-power wireless hubs, which in turn connect through the Internet to large cloud data centers.

But almost every stage of this description has problems. The first question the experts raise: is a mass of simple sensors really the right way to measure the state of a system?


An obvious way to measure the state of a system is to identify its state variables, place a sensor at each one, and gather the sensor data together in a hub. But this is not necessarily the best way: installing and connecting all of those sensors can make the approach expensive and surprisingly unreliable.

Another way is to choose a few key variables that can be sensed remotely and then used to estimate the state of the entire system. The estimation may be obvious, or it may involve some serious mathematics and state estimators such as the Kalman filter. A more intuitive example involves security cameras, traffic, parking, and the smart city.
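
To make the estimation idea concrete, here is a minimal sketch of a scalar Kalman filter in Python. It assumes a trivially simple model (one slowly drifting quantity observed through a single noisy remote measurement); the model constants, noise levels, and the traffic-density framing are illustrative assumptions, not details from the text.

```python
import numpy as np

# A minimal 1-D Kalman filter sketch: estimating a slowly varying quantity
# (say, traffic density on one road segment) from one noisy remote
# measurement instead of many in-place sensors. All constants below are
# illustrative assumptions.

F = 1.0      # state transition: the quantity persists between steps
H = 1.0      # measurement model: we observe the state directly, plus noise
Q = 0.01     # process-noise variance (how fast the true state drifts)
R = 1.0      # measurement-noise variance (how noisy the remote sensor is)

x, P = 0.0, 1.0            # initial state estimate and its variance

def kalman_step(x, P, z):
    """One predict/update cycle for a scalar Kalman filter."""
    # Predict: propagate the estimate and grow its uncertainty.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the new measurement z.
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Feed in noisy measurements of a true value of 5.0.
rng = np.random.default_rng(0)
for _ in range(50):
    z = 5.0 + rng.normal(scale=np.sqrt(R))
    x, P = kalman_step(x, P, z)
print(f"estimate = {x:.2f}, variance = {P:.3f}")  # converges toward 5.0
```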

A typical smart-city solution may involve lighting management, parking management, traffic control, and security. The traditional IoT approach puts a light sensor on each streetlight, a camera over each traffic lane near each intersection, and an occupancy sensor embedded in the pavement at each parking space. Each of these sensors is wired to a local hub, which in turn maintains a wireless link to an Internet access point; the streetlight sensors would instead use a wireless link from a hub in the lamp post.

There is another way. A clever observer watching the video from a single security camera can see when the traffic signals change, which parking spaces are occupied, and what the traffic is doing, and make judgments accordingly. Software that extracts the same judgments from the video stream can replace most of the simple sensors. The result is not only a significant cost savings, but also increased reliability and additional security capability (Figure 1).

Figure 1. A camera can collect more data
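
As a hedged sketch of what that "clever observer" might look like in software, the snippet below uses OpenCV to judge parking-space occupancy from one camera feed. The video file name, the space coordinates, and the difference threshold are all hypothetical placeholders that a real deployment would calibrate.

```python
import cv2

# Judge parking occupancy from a single camera, standing in for
# per-space pavement sensors. Coordinates and threshold are assumed.
SPACES = [(40, 200, 120, 260), (130, 200, 210, 260)]  # (x1, y1, x2, y2)

cap = cv2.VideoCapture("intersection.mp4")   # assumed video file
ok, empty_frame = cap.read()                 # assume frame 1 shows empty spaces
if not ok:
    raise SystemExit("could not read video")
empty_gray = cv2.cvtColor(empty_frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, empty_gray)     # change vs. empty reference
    for i, (x1, y1, x2, y2) in enumerate(SPACES):
        # A space whose region differs strongly from the empty reference
        # is judged occupied; the threshold is an illustrative guess.
        occupied = diff[y1:y2, x1:x2].mean() > 25
        print(f"space {i}: {'occupied' if occupied else 'free'}")

cap.release()
```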

This concept works in other types of systems as well. Given a computable mathematical model of the system, software can determine the position of a motor shaft by estimating it from the motor's winding currents and voltages, or observe the state of a chemical reaction from the outside. In general, the growing trend favors a small number of remote sensors backed by computing resources over a large swarm of simple sensors, with their attendant connectivity, reliability, and security problems.
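
As a sketch of the motor example, the following snippet simulates a small brushed DC motor and recovers its shaft speed, and by integration its relative position, from winding voltage and current alone, using a simple back-EMF observer. All motor parameters are assumed for illustration; practical sensorless drives use more sophisticated observers.

```python
import numpy as np

# "Sensorless" motor-state estimation: no encoder on the shaft, only the
# applied voltage and measured winding current. Parameters are assumed
# values for a small brushed DC motor.
R, L_w = 1.0, 0.5e-3       # winding resistance (ohm) and inductance (H)
Ke, Kt = 0.05, 0.05        # back-EMF and torque constants
J, b = 1e-4, 1e-5          # rotor inertia and viscous friction
dt = 1e-4                  # time step (s)

def simulate_motor(u, x):
    """Ground-truth plant: x = [theta, omega, i], driven by voltage u."""
    theta, omega, i = x
    di = (u - R * i - Ke * omega) / L_w
    domega = (Kt * i - b * omega) / J
    return np.array([theta + omega * dt,
                     omega + domega * dt,
                     i + di * dt])

x = np.zeros(3)            # true state
theta_hat = omega_hat = 0.0
for step in range(20000):
    u = 12.0               # constant drive voltage
    x = simulate_motor(u, x)
    i_meas = x[2]          # the only measured quantity
    # Back-EMF observer: voltage not dropped across R must be back-EMF,
    # which is proportional to speed (the L*di/dt term is ignored once
    # the current settles).
    omega_hat = (u - R * i_meas) / Ke
    theta_hat += omega_hat * dt   # integrate speed for relative position

print(f"true omega = {x[1]:.1f} rad/s, estimated = {omega_hat:.1f} rad/s")
print(f"true theta = {x[0]:.1f} rad, estimated = {theta_hat:.1f} rad")
```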

Change everything

The idea of replacing swarms of simple sensors with computation-heavy algorithms, such as convolutional neural networks or Kalman filters, has significant advantages. But it also brings a problem, and designers appear to face a dilemma. Do they move the raw data, conceivably multiple streams of 4K video, to the cloud for computational analysis? Or do they design substantial computing power in close to the sensors? Both approaches are viable, and both have their challenges.

Cloud computing has obvious advantages for this kind of computation. You can have about as much computing power as you can imagine, and if you want to experiment with big-data algorithms, nearly unlimited storage; you pay only for what you actually use. But there are three kinds of challenges: security, latency, and bandwidth.

If an algorithm cannot tolerate any delay, you have no choice but to rely on local computing. But if some delay between sensor input and system response is tolerable, the question becomes how much. For example, some in-loop control algorithms can accommodate significant delay, but only if the delay stays nearly constant. Clearly, when the amount of data moved to the cloud is small, the transfer goes unnoticed and has little impact on timing. But if the system design requires moving real-time 4K video from multiple cameras to the cloud, the limitations of the Internet become an issue.
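
A quick back-of-the-envelope calculation shows why. The per-camera bitrate, camera count, and uplink capacity below are assumptions chosen for illustration (a compressed 4K stream is commonly quoted in the 15 to 25 Mbit/s range), not figures from the text.

```python
# Does the raw video even fit on the uplink? Assumed numbers throughout.
BITRATE_4K_MBPS = 20        # assumed compressed bitrate per 4K camera
NUM_CAMERAS = 8             # assumed cameras at one intersection hub
UPLINK_MBPS = 100           # assumed uplink from the hub to the Internet

total = BITRATE_4K_MBPS * NUM_CAMERAS
print(f"aggregate video traffic: {total} Mbit/s "
      f"({total / UPLINK_MBPS:.0%} of a {UPLINK_MBPS} Mbit/s uplink)")
# -> 160 Mbit/s, 160% of the uplink: the raw streams simply do not fit,
#    before counting any other traffic. Local processing sidesteps this.
```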

Virtualization and its discontents

Cloud-centric system requirements reach through the network into the data center and force profound changes and extensions there. As more computationally intensive, event-triggered applications land in the data center, server and storage virtualization become almost mandatory: the data center must be able to run an application on whatever resources are available while still meeting the service-level requirements of the external system.

One difficulty remains. Some algorithms cannot be spread across multiple cores on multiple servers. They depend on a single thread, and the only way to make them faster is to run them on faster hardware.
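
Amdahl's law quantifies why adding cores cannot rescue such algorithms. A short sketch, with illustrative serial fractions:

```python
# Amdahl's law: if serial_fraction of the work cannot parallelize, the
# speedup from extra cores is bounded. The fractions below are assumed.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Maximum speedup when serial_fraction of the work stays serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.05, 0.5, 1.0):
    for cores in (8, 64, 1024):
        print(f"serial={serial:.0%}, cores={cores}: "
              f"speedup {amdahl_speedup(serial, cores):.1f}x")
# A fully serial algorithm (serial=100%) gets a 1.0x speedup no matter
# how many cores are thrown at it; only a faster core helps.
```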

Ultimately the idea is for the cloud data center to appear application-specific to each user while operating in a fully virtualized way: it provides users with processing, accelerator, and storage resources configured around their algorithms. To the operator, the data center becomes a sea of software-defined resources.

Fog

We have been discussing IoT applications as if all of the computing were done in the cloud. But we have also seen the reasons (security, bandwidth, latency, and determinism) why some applications cannot work that way. These applications require substantial local computing and storage resources, whether in a vision-processing surveillance camera, in a hub, or in a network switch.

Today these resources are designed into dedicated sensors and hubs: purely application-specific hardware, typically a lightweight processor supported by accelerators. Now imagine virtualization penetrating beyond the walls of the data center and spreading across all of these different computing, storage, and connectivity resources. Any piece of an application could then run wherever it fits: in the cloud, in smart hubs or smart sensors, and ultimately even in the network fabric itself (Figure 2). It would move based on performance metrics and available resources. Such a system would be powerful, flexible, and constantly moving toward optimal use of its resources.

Figure 2. The Internet fog
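
To make the migration idea concrete, here is a toy placement function that picks a tier for a task from its latency tolerance and input data rate. The tiers, thresholds, and task parameters are hypothetical, purely to illustrate "moving based on performance metrics and available resources"; a real orchestrator would weigh far more factors.

```python
# Toy fog placement: choose where a task runs. All numbers are assumed.
TIERS = [
    # (name, round-trip latency in ms, bandwidth toward it in Mbit/s)
    ("smart sensor",  1,  10_000),   # the data is already here
    ("smart hub",     5,   1_000),
    ("cloud",        80,     100),
]

def place(max_latency_ms: float, data_rate_mbps: float) -> str:
    """Pick the most centralized tier that still meets the task's needs."""
    for name, latency, bandwidth in reversed(TIERS):  # prefer the cloud
        if latency <= max_latency_ms and data_rate_mbps <= bandwidth:
            return name
    return "infeasible"

for task, latency_ms, rate in [
    ("monthly traffic report",   10_000,   1),
    ("parking-space detection",     500, 160),
    ("traffic-signal interlock",      2,   5),
]:
    print(f"{task}: runs on {place(latency_ms, rate)}")
# -> cloud, smart hub, smart sensor
```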

Many steps must be taken to achieve this vision. Applications must live in portable containers, such as the Java Virtual Machine or the Open Computing Language (OpenCL™) platform, that allow them to execute on different hardware platforms without changes. The concepts of software-defined application networking must extend beyond the data center, so that quality of service can follow applications' own connections, and ultimately their computing tasks, across nodes throughout the Internet. And all of this must be absolutely secure.
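
As one concrete taste of that portability, the sketch below uses pyopencl to run a trivial OpenCL kernel on whatever device the runtime happens to expose; the same kernel source runs unchanged on a CPU, a GPU, or another conformant accelerator. The vector-add kernel is chosen only for brevity.

```python
import numpy as np
import pyopencl as cl

# The same kernel source compiles at run time for whatever OpenCL device
# is available, which is the portability property the text describes.
ctx = cl.create_some_context()          # picks any available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
""").build()                            # compiled for this device at run time

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)          # same result on any conformant device
```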
