
A monitoring-controller tool for dockerized services, inspired by control theory and used at the network edge. Code for the paper "Where there is fire there is smoke: a scalable edge computing framework for early fire detection", https://www.mdpi.com/1424-8220/19/3/639/pdf


Edgy Controller: Control-Theory-Based Docker Container Scaling at the Network Edge

This repository describes the experimental design of a Network Edge architecture. At each given moment, the Central Controller component, developed in Django (Python), orchestrates the containerized services and allocates the server's resources among them. The Central Controller contains the load-balancing mechanism, which reconciles the competing goals of performance and resource utilization by distributing the total requests of the deployed applications among the active containers. Since edge servers' resources are not abundant, the optimization objective of our approach is to minimize the total allocated resources, in terms of active edge servers, under the constraint of meeting the total workload demands. This indirectly reduces the consumed energy and allocates the server-side resources optimally.
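The server-minimization objective above can be illustrated with a simple greedy activation policy. This is only a sketch: the function name and the idea of a fixed per-server request capacity are assumptions for illustration, not the paper's actual optimizer.

```python
def min_active_servers(predicted_requests, server_capacities):
    """Greedy sketch: activate the fewest servers whose combined
    (hypothetical) request capacity covers the predicted workload.
    Returns the number of servers switched on."""
    active = 0
    remaining = predicted_requests
    # Prefer the largest servers first so fewer need to be powered on.
    for capacity in sorted(server_capacities, reverse=True):
        if remaining <= 0:
            break
        active += 1
        remaining -= capacity
    return active
```

A real deployment would also weigh container placement and operating states per server, as described above; this sketch only captures the "fewest active servers that meet demand" objective.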

Specifically, the offloaded traffic generated by the Mobile Users is directed to the Central Controller through a local Wireless Access Point (WiFi). There lies the upper-level control process of our mechanism, as depicted in the figure below; this component selects an appropriate Container topology to deploy on each Edge Server directly connected to it and distributes the incoming workload accordingly. This decision defines the number of active servers, alongside the number and operating state of the Containers to be placed on them. The upper-level process is performed in an online and proactive manner, through an internal prediction mechanism, the Workload Predictor. The essential input for this estimation is provided by the Monitoring Service component, which is responsible for collecting data on both the network traffic (e.g. offloading requests issued, end-to-end response times) and the servers' resource utilization (e.g. CPU usage) at each given time.
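To illustrate the Workload Predictor's role, here is a minimal sketch of an exponentially weighted moving-average forecaster fed by per-window request counts. The class name and smoothing factor are assumptions; the paper's actual prediction model may differ.

```python
class EwmaPredictor:
    """Sketch of a per-time-window workload forecaster (illustrative only)."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha      # weight given to the most recent observation
        self.estimate = None    # no forecast until the first window is seen

    def observe(self, requests):
        """Fold the requests counted in the latest window into the estimate."""
        if self.estimate is None:
            self.estimate = float(requests)
        else:
            self.estimate = self.alpha * requests + (1 - self.alpha) * self.estimate
        return self.estimate

    def predict(self):
        """Forecast for the next window, used to size the container topology."""
        return self.estimate
```

In the architecture above, the observations would come from the Monitoring Service and the forecast would drive the upper-level topology decision.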

Hence, depending on the aforementioned decision and taking into account the predicted workload for each time window, the Global Controller is able to create, run, scale and stop application-specific Containers. Additionally, the lower-level control process is implemented in this component: it moderately scales the Containers vertically, providing the required resources based on data coming from the Monitoring Service. In this way, it ensures that the Containers remain within the selected operating state, thus guaranteeing minimal and stable application response times.

At the lower level, each container of an edge server is equipped with a Local ("Edgy") Controller, responsible for calculating the request statistics (average response time, requests submitted) needed by the Monitoring Service and for absorbing small fluctuations of the incoming workload, according to the predicted number of requests for each time window, in order to meet QoS requirements. The Central Controller's REST API communicates with each container's Local Controller through the Docker Platform in order to scale the containers vertically.

(Figure: architecture overview of the edge framework)
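The vertical-scaling call from the controller into the Docker Platform can be sketched with the Docker SDK for Python (the `docker` package). The helper names, container name and parameter values below are illustrative assumptions, not the repository's actual code.

```python
# Sketch: vertical scaling of a running container via the Docker SDK
# for Python. Requires a running Docker daemon to actually apply.

CPU_PERIOD = 100_000  # default CFS scheduling period, in microseconds

def cpu_quota_for(cores, period=CPU_PERIOD):
    """Translate a fractional core allocation into a CFS cpu_quota value."""
    return int(cores * period)

def scale_container(client, name, cores, mem_limit):
    """Resize a running container in place (illustrative helper)."""
    container = client.containers.get(name)
    container.update(cpu_period=CPU_PERIOD,
                     cpu_quota=cpu_quota_for(cores),
                     mem_limit=mem_limit)

# Example usage (needs a Docker daemon; container name is hypothetical):
# import docker
# scale_container(docker.from_env(), "classifier-1", cores=1.5, mem_limit="512m")
```

`cpu_quota`/`cpu_period` express CPU shares to the kernel's CFS scheduler, so e.g. 1.5 cores becomes a quota of 150000 µs per 100000 µs period.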

Installation

First of all, make sure you have Docker installed. If not, head to https://docs.docker.com/install/

Then, install pip and virtualenv:

sudo apt-get install python-pip python-dev python-virtualenv # for Python 2.7
sudo apt-get install python3-pip python3-dev python-virtualenv # for Python 3.n

After that, install redis:

wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
sudo make install

Clone the existing repo:

git clone https://github.com/maravger/edgy-controller.git

Create a Virtualenv environment by issuing one of the following commands:

virtualenv --system-site-packages . # for Python 2.7
virtualenv --system-site-packages -p python3 . # for Python 3.n

Activate the Virtualenv environment by issuing the following command:

source bin/activate

...and install the requirements (no sudo needed inside the activated virtualenv):

pip install -r requirements.txt

Make sure Celery is installed:

pip install celery

Finally, run the following all-inclusive script, after changing the permissions:

chmod +x run_controller.sh
./run_controller.sh

You're good to go. You can test the controller by spawning containers from the following repo: https://github.com/maravger/ca-tf-image-classifier
