Think of a fog computing node as a physical server that resides between the edge devices (thermostats, robots, in-vehicle computers) and the back-end systems, typically hosted on public clouds. Fog nodes respond to an architectural problem: too much latency to pass requests all the way back to public cloud-based services, and not enough horsepower to process the data on the edge device itself.
This three-tier system adds another compute platform that can do some, if not most, of the back-end processing. This addresses the concern that cheaper, lower-powered edge devices lack the processing and storage capacity to handle incoming data natively. Now data can be sent to the fog node for processing, without the latency penalty of going all the way back to the remote cloud services.
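The tiering logic described above can be sketched in code. The following is a minimal, hypothetical example (the thresholds, latency figures, and names are illustrative assumptions, not from any real deployment): a reading is processed on the edge device if it is small enough, offloaded to the fog node when the cloud's round-trip latency would blow the deadline, and sent to the cloud only when the latency budget allows it.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    payload_kb: int      # size of the sensor payload
    deadline_ms: int     # how quickly a response is needed

# Illustrative capacity and latency figures (assumptions for the sketch).
EDGE_MAX_KB = 4          # the edge device can only crunch tiny payloads itself
FOG_LATENCY_MS = 20      # round trip to the nearby fog node
CLOUD_LATENCY_MS = 150   # round trip to the remote public cloud

def route(reading: Reading) -> str:
    """Pick the nearest tier that can still meet the reading's deadline."""
    if reading.payload_kb <= EDGE_MAX_KB:
        return "edge"    # small enough to process on the device itself
    if reading.deadline_ms < CLOUD_LATENCY_MS:
        return "fog"     # cloud round trip is too slow; use the fog node
    return "cloud"       # latency budget allows the full trip to the cloud

# A thermostat reading stays on the device; a robot's vision frame with a
# tight deadline goes to the fog node; a bulky, non-urgent upload can wait
# for the cloud.
print(route(Reading("thermostat-1", payload_kb=1, deadline_ms=500)))   # edge
print(route(Reading("robot-7", payload_kb=512, deadline_ms=50)))       # fog
print(route(Reading("vehicle-3", payload_kb=2048, deadline_ms=1000)))  # cloud
```

The point of the sketch is the decision order: the edge is tried first because it has zero network latency, the fog node second because it is close, and the cloud last because it has the most horsepower but the longest round trip.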