This project implements a Kubernetes-managed, Django-based architecture designed for task offloading and scheduling across Cloud, Edge, Fog, and IoT clusters. The system is structured to manage IoT devices (acting as pods) and process the tasks they generate, whether they are periodic or dependent (in Directed Acyclic Graphs, DAGs).
IoT Cluster:
- Contains multiple pods representing IoT devices such as sensors.
- Each device generates various tasks, which may be periodic or have dependencies (DAG-based).
- A controller unit on each device selects the appropriate node (Edge, Fog, or Cloud) to perform the tasks.
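The controller's choice can be pictured with a small sketch; the thresholds and the function name below are illustrative assumptions, not taken from the repository's actual controller.

```python
# Hypothetical node-selection rule for the per-device controller.
# Thresholds and names are illustrative, not from the repository.
def pick_target(task_size_mb, latency_ms):
    """Return which layer should execute a task of a given size."""
    # Small tasks with a responsive edge node stay at the edge.
    if task_size_mb < 10 and latency_ms["edge"] < 20:
        return "edge"
    # Medium tasks can be offloaded to a nearby fog node.
    if task_size_mb < 100 and latency_ms["fog"] < 50:
        return "fog"
    # Large tasks (or poor local links) go to the cloud.
    return "cloud"
```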
Edge Cluster:
- Comprises several edge devices, each containing:
  - Task Handler: Handles task input/output and execution.
  - Scheduler: Manages task scheduling.
  - Offload Control Unit: Manages the offloading of tasks to other nodes.
  - Mobility Control Unit: Handles the mobility of devices, managing changes in their location.
Fog Cluster:
- Contains fog devices equipped with:
  - Task Handler: Responsible for task execution.
  - Offload Control Unit: Manages task offloading.
  - Reinforcement Learning (RL) Scheduler: Optimizes task execution and management based on historical performance.
Cloud Cluster:
- Includes a pod with:
  - Task Handler: Executes and manages tasks.
  - RL Scheduler: Coordinates resource management and task scheduling.
  - Device Registry Unit: Registers and manages both edge and fog devices.
Monitoring Unit:
- The system is equipped with a monitoring unit that tracks key metrics such as:
  - Latency
  - Energy consumption
  - Resource utilization
- All data and task metadata are stored in a central database.
This architecture allows dynamic task allocation, mobility handling, and efficient resource usage across a heterogeneous distributed network of IoT, Edge, Fog, and Cloud layers.
This repository contains the scripts and configuration required to run task-graph scheduling experiments on a distributed Cloud–Fog–Edge environment.
Task and graph datasets are located in `Wrapper/tasks`.

Currently used configurations:
- `load-6`
- `load-7`

The `load-*` naming does not indicate heavier or lighter workloads; it is only a folder organization scheme. Each load folder contains specific workflow graphs (e.g., Alibaba), for example `Wrapper/tasks/load-7/alibaba`.
If you want to run experiments with another workflow (e.g., Montage), you must:
- change the task folder
- update the graph configuration
Otherwise the system will produce errors.
The main entry point is `Wrapper/main.py`. At the beginning of this file there are three main configuration sections:

1. **Algorithms**: the list of scheduling algorithms to run.
2. **Dataset**: defines which load folder will be executed (e.g., `load-7`).
3. **Graphs**: defines which workflow graphs should run (e.g., `Alibaba`).

If the dataset folder contains only Alibaba, selecting other graphs such as Montage will cause errors.
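As a sketch, the three sections might look like this; the variable names and algorithm identifiers here are assumptions, so match them to what `Wrapper/main.py` actually defines:

```python
# Illustrative shape of the three configuration sections at the top of
# Wrapper/main.py; the real variable names and values may differ.
ALGORITHMS = ["fifo", "rl", "heft", "random"]  # scheduling algorithms to run
LOAD = "load-7"                                # dataset folder under Wrapper/tasks
GRAPHS = ["alibaba"]                           # workflow graphs inside that folder
```

Keep the graph list limited to graphs that actually exist under the selected load folder; listing a missing one fails at runtime.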
Running Wrapper/main.py requires several system components to already be active:
- Cloud nodes
- Fog nodes
- Edge nodes
To simplify this process, a helper script is provided:
bash.sh
This script:
- starts the required components
- launches the experiment automatically
Device configuration is defined inside:
bash.sh
Current setup:
- 4 Fog devices
- 20 Edge devices
You can modify these values depending on your experiment.
Each run takes approximately 15 minutes.

Example experiment: 4 algorithms × 2 task types = 8 experiment combinations, for an estimated total runtime of about 2 hours.
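The estimate follows directly from multiplying out the combinations:

```python
# Back-of-the-envelope batch runtime, using the numbers quoted above.
algorithms = 4
task_types = 2
minutes_per_run = 15

combinations = algorithms * task_types           # 8 runs
total_minutes = combinations * minutes_per_run   # 120 minutes, about 2 hours
```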
To speed up experiments, runs were executed in parallel on:
- a personal laptop
- a university machine
Typical workflow:

git clone <repository>
cd project
bash bash.sh

This will:
- start the required components
- execute `Wrapper/main.py`
- generate raw results
After the execution finishes, raw logs are generated in:
Cloud/Results
Fog/Results
Edge/Results
These files contain low-level execution logs, such as:
- which task was executed
- execution frequency
- timestamps
Example log content:
Task X executed at frequency Y
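If you need to post-process these logs yourself, a line of the form shown above can be split with a small helper; the exact log format may differ per layer, so treat this regex as an assumption:

```python
import re

# Hypothetical parser for raw log lines like "Task X executed at frequency Y".
LOG_RE = re.compile(r"Task (?P<task>\S+) executed at frequency (?P<freq>\S+)")

def parse_line(line):
    """Return (task, frequency) for a matching line, else None."""
    m = LOG_RE.match(line)
    return (m.group("task"), m.group("freq")) if m else None
```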
Raw results must be manually copied to the Wrapper results directory.
Example destination:
Wrapper/Results/config-10
Copy structure:
Cloud/Results → Wrapper/Results/config-10/Cloud
Fog/Results → Wrapper/Results/config-10/Fog
Edge/Results → Wrapper/Results/config-10/Edge
Example:
Wrapper/Results/config-9
Wrapper/Results/config-10
Each config-* folder represents one experiment batch.
Currently this process is manual. Automation was not implemented due to time constraints.
There are two kinds of results in the system.
Execution logs such as:
- task execution
- CPU frequency
- timestamps
Stored in:
Cloud/Results
Fog/Results
Edge/Results
Derived metrics such as:
- energy consumption
- performance metrics
These are exported into Excel files.
Use:
Wrapper/export.py
This script converts raw logs into structured Excel reports.
The first ~20 lines of export.py contain configuration parameters.
You must ensure they match:
- number of devices
- task types
- raw results folder (`config-*`)
Example:
config-10
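These parameters might look roughly like the sketch below; the names are assumptions, so align them with what `export.py` actually declares:

```python
# Illustrative parameters mirroring the first ~20 lines of Wrapper/export.py;
# the real variable names may differ.
NUM_FOG_DEVICES = 4
NUM_EDGE_DEVICES = 20
TASK_TYPES = ["Alibaba-2000", "Alibaba-3000"]
RESULTS_FOLDER = "Results/config-10"  # the config-* batch to export
```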
Before running export.py, you must create an empty Excel file in:
Wrapper/
Example files already created:
Alibaba-2000.xlsx
Alibaba-3000.xlsx
These prevent runtime errors when exporting results.
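One way to create such a placeholder file is with openpyxl (an assumption; any tool that writes a valid .xlsx works):

```python
from pathlib import Path
from openpyxl import Workbook  # assumption: openpyxl is installed

# Create an empty workbook so export.py has a file to write into.
target = Path("Wrapper") / "Alibaba-2000.xlsx"
target.parent.mkdir(parents=True, exist_ok=True)
Workbook().save(target)
```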
Full experiment pipeline:
1️⃣ Run experiments
bash bash.sh
2️⃣ Copy raw results
Cloud/Results → Wrapper/Results/config-X/Cloud
Fog/Results → Wrapper/Results/config-X/Fog
Edge/Results → Wrapper/Results/config-X/Edge
3️⃣ Generate Excel
python Wrapper/export.py
Make sure to update the config folder inside export.py.
- Raw results are archived in config folders
- These folders are never overwritten by the code
- New experiment results must be manually copied
Example archive structure:
Wrapper/Results/
├── config-7
├── config-8
├── config-9
└── config-10
## Experiments
The results of our experiments are available in the `Results` directory. Each experiment was conducted with different configurations, showcasing the performance under various setups.
- **Experiment Configurations**: Configurations are found under:
- `Results/config-1/`
- `Results/config-2/`
- `Results/config-3/`
- `Results/config-4/`
- `Results/config-5/`
To see detailed results, refer to the corresponding PDFs in each folder.
## License
This project is licensed under the MIT License. See the [LICENSE](./LICENSE) file for details.