# MOSAIC: Mobility-Oriented Scheduling and Intelligent Resource Allocation for IoT

## Overview

This project implements a Kubernetes-managed, Django-based architecture for task offloading and scheduling across Cloud, Fog, Edge, and IoT clusters. The system manages IoT devices (running as pods) and processes the tasks they generate, whether periodic or dependency-structured as Directed Acyclic Graphs (DAGs).

*Figure: MEES framework overview*

### Key Components

1. **IoT Cluster**
   - Contains multiple pods representing IoT devices such as sensors.
   - Each device generates tasks that may be periodic or have DAG-based dependencies.
   - A controller unit on each device selects the appropriate node (Edge, Fog, or Cloud) to execute the tasks.
2. **Edge Cluster**
   - Comprises several edge devices, each containing:
     - **Task Handler**: handles task input/output and execution.
     - **Scheduler**: manages task scheduling.
     - **Offload Control Unit**: manages the offloading of tasks to other nodes.
     - **Mobility Control Unit**: handles device mobility, tracking changes in location.
3. **Fog Cluster**
   - Contains fog devices equipped with:
     - **Task Handler**: responsible for task execution.
     - **Offload Control Unit**: manages task offloading.
     - **Reinforcement Learning (RL) Scheduler**: optimizes task execution and management based on historical performance.
4. **Cloud Cluster**
   - Includes a pod with:
     - **Task Handler**: executes and manages tasks.
     - **RL Scheduler**: coordinates resource management and task scheduling.
     - **Device Registry Unit**: registers and manages both edge and fog devices.
5. **Monitoring Unit**
   - Tracks key metrics such as:
     - Latency
     - Energy consumption
     - Resource utilization
   - All data and task metadata are stored in a central database.

This architecture allows dynamic task allocation, mobility handling, and efficient resource usage across a heterogeneous distributed network of IoT, Edge, Fog, and Cloud layers.
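The four-layer topology above can be sketched as plain data. This is an illustrative model only: the class, field, and unit names below are assumptions for exposition, not identifiers from the MOSAIC codebase.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One device in the topology (names here are illustrative)."""
    name: str
    layer: str                      # "iot", "edge", "fog", or "cloud"
    units: list = field(default_factory=list)

def build_topology():
    """One example node per layer, carrying the units listed above."""
    return [
        Node("sensor-0", "iot", ["controller"]),
        Node("edge-0", "edge",
             ["task_handler", "scheduler", "offload_control", "mobility_control"]),
        Node("fog-0", "fog",
             ["task_handler", "offload_control", "rl_scheduler"]),
        Node("cloud-0", "cloud",
             ["task_handler", "rl_scheduler", "device_registry"]),
    ]
```

Note how the RL Scheduler appears only at the Fog and Cloud layers, while the Mobility Control Unit lives only at the Edge, mirroring the component list above.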


## Quick Start

### MOSAIC Experiment Runner

This repository contains the scripts and configuration required to run task-graph scheduling experiments on a distributed Cloud–Fog–Edge environment.


### 1. Task and Graph Data

Task and graph datasets are located in `Wrapper/tasks`.

Currently used configurations:

- `load-6`
- `load-7`

> ⚠️ **Important**: the `load-*` naming does not indicate heavier or lighter workloads; it is only a folder organization scheme.

Each load folder contains specific workflow graphs (e.g., Alibaba). Example:

    Wrapper/tasks/load-7/alibaba

To run experiments with another workflow (e.g., Montage), you must:

- change the task folder
- update the graph configuration

Otherwise the system will produce errors.
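A quick way to avoid such errors is to check which graph folders actually exist before selecting one. The helper below is a sketch, assuming only the `Wrapper/tasks/<load>/<graph>` layout described above; it is not part of the repository.

```python
import tempfile
from pathlib import Path

def available_graphs(task_root, load):
    """List the workflow-graph folders under one load folder.

    Assumes the Wrapper/tasks/<load>/<graph> layout described above.
    """
    load_dir = Path(task_root) / load
    return sorted(p.name for p in load_dir.iterdir() if p.is_dir())

# Demo against a throwaway copy of the expected layout.
demo_root = Path(tempfile.mkdtemp())
(demo_root / "load-7" / "alibaba").mkdir(parents=True)
graphs = available_graphs(demo_root, "load-7")   # ["alibaba"]
```

Selecting a graph that is not in this list (e.g., `montage` when only `alibaba` is present) is exactly the mismatch that makes the system fail.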


### 2. Main Execution Script

The main entry point is `Wrapper/main.py`.

At the beginning of this file there are three main configuration sections:

**1️⃣ Algorithms**

The list of scheduling algorithms to run.

> ⚠️ Some algorithms were missing on this machine and will be added later.

**2️⃣ Task folders**

Defines which dataset will be executed (e.g., `load-7`).

**3️⃣ Graph types**

Defines which workflow graphs should run (e.g., Alibaba).

> ⚠️ **Important**: if the dataset folder contains only Alibaba, selecting other graphs such as Montage will cause errors.
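The three sections combine multiplicatively: every algorithm runs against every task folder and graph type. A hypothetical mirror of that configuration, with variable names and values that are assumptions rather than the actual contents of `Wrapper/main.py`:

```python
# Hypothetical mirror of the three configuration sections at the top of
# Wrapper/main.py; variable names and values are assumptions.
ALGORITHMS = ["mosaic", "baseline"]     # 1️⃣ scheduling algorithms to run
TASK_FOLDERS = ["load-7"]               # 2️⃣ dataset folders under Wrapper/tasks
GRAPH_TYPES = ["alibaba"]               # 3️⃣ workflow graphs to execute

def experiment_plan():
    """Every (algorithm, task folder, graph) combination that will run."""
    return [(a, f, g)
            for a in ALGORITHMS
            for f in TASK_FOLDERS
            for g in GRAPH_TYPES]
```

With two algorithms, one task folder, and one graph, this yields two experiment runs.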


### 3. Running Experiments

Running `Wrapper/main.py` requires several system components to already be active:

- Cloud nodes
- Fog nodes
- Edge nodes

To simplify this process, a helper script is provided: `bash.sh`.

This script:

- starts the required components
- launches the experiment automatically

### 4. Device Configuration

Device configuration is defined inside `bash.sh`.

Current setup:

- 4 Fog devices
- 20 Edge devices

You can modify these values depending on your experiment.


### 5. Execution Time

Each run takes approximately 15 minutes.

Example experiment:

- 4 algorithms
- 2 task types
- = 8 experiment combinations

Estimated total runtime: ~2 hours.

To speed up experiments, runs were executed in parallel on:

- a personal laptop
- a university machine
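The runtime estimate above is simple arithmetic, shown here so it is easy to re-derive for other experiment sizes:

```python
# Back-of-envelope runtime estimate from the numbers above.
MINUTES_PER_RUN = 15
combinations = 4 * 2                    # 4 algorithms x 2 task types
total_minutes = combinations * MINUTES_PER_RUN
total_hours = total_minutes / 60        # 8 runs x 15 min = 120 min = 2 hours
```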

### 6. Running the Experiments

Typical workflow:

    git clone <repository>
    cd project
    bash bash.sh

This will:

1. Start the required components
2. Execute `Wrapper/main.py`
3. Generate raw results

### 7. Raw Results

After the execution finishes, raw logs are generated in:

    Cloud/Results
    Fog/Results
    Edge/Results

These files contain low-level execution logs, such as:

- which task was executed
- the execution frequency
- timestamps

Example log content:

    Task X executed at frequency Y
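Log lines of that shape are easy to parse programmatically. The sketch below assumes the exact format shown in the example line above; the real logs may carry additional fields.

```python
import re

# The exact log format is assumed from the example line above.
LOG_PATTERN = re.compile(r"Task (\S+) executed at frequency (\S+)")

def parse_log_line(line):
    """Return (task, frequency) for a matching line, else None."""
    m = LOG_PATTERN.match(line)
    return (m.group(1), m.group(2)) if m else None

record = parse_log_line("Task t42 executed at frequency 1.8GHz")
# record == ("t42", "1.8GHz")
```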

### 8. Collecting Results

Raw results must be manually copied to the Wrapper results directory.

Example destination: `Wrapper/Results/config-10`

Copy structure:

    Cloud/Results → Wrapper/Results/config-10/Cloud
    Fog/Results   → Wrapper/Results/config-10/Fog
    Edge/Results  → Wrapper/Results/config-10/Edge

Each `config-*` folder represents one experiment batch, for example:

    Wrapper/Results/config-9
    Wrapper/Results/config-10

> ⚠️ **Note**: currently this process is manual; automation was not implemented due to time constraints.
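The manual copy step could be scripted along these lines. This is a sketch of the copy structure described above, not code that ships with the repository; only the `Cloud/Results → Wrapper/Results/<config>/Cloud` path convention is taken from the README.

```python
import shutil
import tempfile
from pathlib import Path

def collect_results(repo_root, config_name):
    """Copy Cloud/Fog/Edge raw results into Wrapper/Results/<config_name>."""
    repo = Path(repo_root)
    dest_root = repo / "Wrapper" / "Results" / config_name
    for layer in ("Cloud", "Fog", "Edge"):
        src = repo / layer / "Results"
        if src.is_dir():
            # Mirror each layer's Results folder under the config folder.
            shutil.copytree(src, dest_root / layer, dirs_exist_ok=True)
    return dest_root

# Demo on a throwaway repository layout.
repo = Path(tempfile.mkdtemp())
(repo / "Cloud" / "Results").mkdir(parents=True)
(repo / "Cloud" / "Results" / "run.log").write_text("Task X executed at frequency Y")
dest = collect_results(repo, "config-10")
```

`dirs_exist_ok=True` lets the copy be re-run into an existing `config-*` folder without failing.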


### 9. Result Types

There are two kinds of results in the system.

**1️⃣ Raw Results**

Execution logs such as:

- task execution
- CPU frequency
- timestamps

Stored in:

    Cloud/Results
    Fog/Results
    Edge/Results

**2️⃣ Processed Results**

Derived metrics such as:

- energy consumption
- performance metrics

These are exported into Excel files.


### 10. Generating Excel Results

Use `Wrapper/export.py`. This script converts raw logs into structured Excel reports.

**Configuration**

The first ~20 lines of `export.py` contain configuration parameters. You must ensure they match:

- the number of devices
- the task types
- the raw results folder (`config-*`), e.g. `config-10`

**Excel File Requirement**

Before running `export.py`, you must create an empty Excel file in `Wrapper/`.

Example files already created:

    Alibaba-2000.xlsx
    Alibaba-3000.xlsx

These prevent runtime errors when exporting results.
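A small helper can enumerate the workbook files to pre-create. The `<Graph>-<count>.xlsx` naming pattern is inferred from the two example filenames above and is an assumption, as is the helper itself; the empty files can then be created by hand or, for instance, with openpyxl's `Workbook().save(path)`.

```python
from pathlib import Path

def expected_workbooks(wrapper_dir, graph, task_counts):
    """Paths of the empty .xlsx files export.py expects in Wrapper/.

    The <Graph>-<count>.xlsx naming pattern is inferred from the
    examples above (Alibaba-2000.xlsx, Alibaba-3000.xlsx).
    """
    return [Path(wrapper_dir) / f"{graph}-{n}.xlsx" for n in task_counts]

names = [p.name for p in expected_workbooks("Wrapper", "Alibaba", [2000, 3000])]
# names == ["Alibaba-2000.xlsx", "Alibaba-3000.xlsx"]
```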


### 11. Complete Workflow

Full experiment pipeline:

1️⃣ Run the experiments:

    bash bash.sh

2️⃣ Copy the raw results:

    Cloud/Results → Wrapper/Results/config-X/Cloud
    Fog/Results   → Wrapper/Results/config-X/Fog
    Edge/Results  → Wrapper/Results/config-X/Edge

3️⃣ Generate the Excel reports:

    python Wrapper/export.py

Make sure to update the config folder inside `export.py`.


### 12. Notes

- Raw results are archived in config folders
- These folders are never overwritten by the code
- New experiment results must be manually copied

Example archive structure:

    Wrapper/Results/
     ├── config-7
     ├── config-8
     ├── config-9
     └── config-10


## Experiments

The results of our experiments are available in the `Results` directory. Each experiment was conducted with different configurations, showcasing the performance under various setups.

- **Experiment Configurations**: found under:
  - `Results/config-1/`
  - `Results/config-2/`
  - `Results/config-3/`
  - `Results/config-4/`
  - `Results/config-5/`

To see detailed results, refer to the corresponding PDFs in each folder.

## License

This project is licensed under the MIT License. See the [LICENSE](./LICENSE) file for details.
