The Internet of Things is exposing some important cloud-related issues. For this reason the Edge Computing architecture is growing faster than expected, and it is helping to solve the aforementioned problems.
With the adoption of Edge Computing, privacy problems are expected to decrease: security can be managed centrally while every edge device is hardened as much as possible, so that transferring data remains safe.
Regarding time, it is evident that moving large quantities of data across the internet to cloud servers for processing, and then sending the results back, costs both time and bandwidth. Processing the data locally instead of kilometres away is considerably faster. Moreover, exchanging less information between the cloud and the edge makes data transfer easier even when the connection is poor.
Another issue concerns the volume of data that needs to be saved on the cloud and how that information is filtered to keep only what is actually needed. Edge computing gives each device the ability to decide which data to store locally and which to send, so devices transmit far less data to the cloud.
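A minimal sketch of this idea, using a hypothetical temperature sensor: the device keeps the full raw stream locally and forwards to the cloud only the readings that matter, here those above an illustrative threshold. The function name and threshold are assumptions, not part of any real device API.

```python
# Edge-side filtering sketch (hypothetical names and threshold):
# the device stores every raw reading locally and uploads only
# the readings that cross a threshold, shrinking cloud traffic.

def filter_readings(readings, threshold=30.0):
    """Split raw sensor readings into what to keep locally and what to upload."""
    to_store = list(readings)                         # full stream stays on the device
    to_send = [r for r in readings if r > threshold]  # only anomalies go to the cloud
    return to_store, to_send

raw = [21.5, 22.0, 35.2, 21.8, 40.1]
stored, sent = filter_readings(raw)
# only 2 of 5 readings cross the network; the rest never leave the device
```

In a real deployment the filtering rule would be application-specific (thresholds, deltas, or downsampling), but the principle is the same: the decision about what is worth transmitting is made on the device itself.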
Edge computing therefore brings a substantial advantage in both consumer and industrial IoT use cases. It helps reduce connectivity costs, since devices can send only the information that matters instead of raw streams of sensor data, which is especially important for devices that connect via LTE/mobile networks.
Furthermore, avoiding device-to-cloud data round trips is vital for applications using computer vision or machine learning, such as a drone tracking and filming its owner or an object. Doing machine learning directly on-device can enhance natural language interfaces as well: by interpreting voice instructions locally, smart speakers can react more quickly, run basic commands such as turning lights on and off, or adjust thermostat settings even if internet connectivity fails.
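The fallback behaviour described above can be sketched as follows. This is an illustrative model, not a real smart-speaker API: the command table and function names are assumptions. Simple, known commands are resolved on-device, and the cloud is consulted only for utterances the device cannot interpret itself.

```python
# Sketch of local-first voice command handling with a cloud fallback.
# LOCAL_COMMANDS and handle_utterance are hypothetical names for illustration.

LOCAL_COMMANDS = {
    "turn lights on": "lights:on",
    "turn lights off": "lights:off",
    "set thermostat to 20": "thermostat:20",
}

def handle_utterance(text, cloud_available):
    """Interpret a voice command locally; use the cloud only as a fallback."""
    action = LOCAL_COMMANDS.get(text.lower().strip())
    if action is not None:
        return f"local:{action}"   # basic commands work even with no connectivity
    if cloud_available:
        return "cloud:forwarded"   # complex queries still go upstream
    return "error:offline"         # graceful failure instead of a hard dependency

print(handle_utterance("Turn lights on", cloud_available=False))
```

The key property is that the local path never touches the network, so turning the lights on keeps working during an internet outage, exactly the resilience argument made above.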
For this reason, the spread of machine learning for IoT is an important driver of progress in the field. Devices need to run complex deep learning algorithms quickly while consuming as little power as possible, since many of them run on battery. This is prompting the adoption of heterogeneous computing architectures that integrate diverse engines such as CPUs, GPUs and DSPs: no single type of engine can handle every kind of workload while maintaining good speed and processing efficiency. For this particular reason, the DSP industry is also experiencing significant growth in market size.
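The heterogeneous approach can be pictured as a simple dispatch table that routes each workload class to the engine best suited for it. The workload names and the engine mapping below are illustrative assumptions, not a description of any specific chipset.

```python
# Illustrative sketch of heterogeneous dispatch: each workload class is
# routed to the engine best suited for it. Names are assumptions.

ENGINE_FOR = {
    "control_logic": "CPU",    # branchy, latency-sensitive general code
    "image_inference": "GPU",  # massively parallel matrix arithmetic
    "audio_dsp": "DSP",        # low-power, always-on signal processing
}

def dispatch(workload_type):
    """Pick the execution engine for a workload, defaulting to the CPU."""
    return ENGINE_FOR.get(workload_type, "CPU")
```

The point is exactly the one made above: keeping always-on audio processing on a DSP rather than a GPU is what makes the power budget of a battery-powered device workable.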